GotAI.NET
Forum: Problems of Artificial Intelligence
Page 18 of 39
Author
Topic: Re: Ai Drew :: IJCAI 09 :: International AI conference, late summer 2009 - In Brief
Capt.Drew
Posts: 4179
Re: Ai Drew :: IJCAI 09 :: International AI conference, late summer 2009 - In Brief
Posted: 21 Aug 09 7:18
PART-4: MACHINE LEARNING:
=210=> 1249p
Maintaining Predictions Over Time Without a Model
,
Erik Talvitie, Satinder Singh,
http://ijcai.org/papers09/Abstracts/210.html
A common approach to the control problem in partially observable environments is to perform a direct search in policy space, as defined over some set of features of history. In this paper we consider predictive features, whose values are conditional probabilities of future events, given history. Since predictive features provide direct information about the agent's future, they have a number of advantages for control. However, unlike more typical features defined directly over past observations, it is not clear how to maintain the values of predictive features over time. A model could be used, since a model can make any prediction about the future, but in many cases learning a model is infeasible. In this paper we demonstrate that in some cases it is possible to learn to maintain the values of a set of predictive features even when learning a model is infeasible, and that natural predictive features can be useful for policy-search methods. text:
http://ijcai.org/papers09/Papers/IJCAI09-210.pdf
Capt.Drew
Posts: 4179
Posted: 21 Aug 09 7:19
PART-4: MACHINE LEARNING:
=211=> 1255p
On Multiple Kernel Learning with Multiple Labels
,
Lei Tang, Jianhui Chen, Jieping Ye,
http://ijcai.org/papers09/Abstracts/211.html
For classification with multiple labels, a common approach is to learn a classifier for each label. With a kernel-based classifier, there are two options for setting up kernels: select a specific kernel for each label, or use the same kernel for all labels. In this work, we present a unified framework for multi-label multiple kernel learning, in which the above two approaches can be considered as two extreme cases. Moreover, our framework allows kernels to be partially shared among multiple labels, enabling flexible degrees of label commonality. We systematically study how sharing kernels among multiple labels affects performance, based on extensive experiments on various benchmark data including images and microarray data. Interesting findings concerning efficacy and efficiency are reported. text:
http://ijcai.org/papers09/Papers/IJCAI09-211.pdf
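The two extremes the abstract contrasts (one kernel combination shared by all labels vs. an independent combination per label) can be sketched in a few lines. This is a hypothetical illustration, not the paper's formulation: it substitutes kernel ridge regression for the paper's large-margin learner and fixes the kernel weights by hand instead of learning them.

```python
import numpy as np

def rbf_kernel(X, gamma):
    # Pairwise squared distances -> Gaussian (RBF) kernel matrix.
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def kernel_ridge_fit(K, y, lam=1e-2):
    # Dual coefficients of kernel ridge regression: (K + lam*I)^-1 y.
    return np.linalg.solve(K + lam * np.eye(len(K)), y)

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))
Y = (X[:, :2] > 0).astype(float)               # two toy binary labels

base = [rbf_kernel(X, g) for g in (0.1, 1.0)]  # two base kernels

# Extreme 1: one kernel combination shared by all labels.
w_shared = np.array([0.5, 0.5])
K_shared = sum(w * K for w, K in zip(w_shared, base))
coefs_shared = [kernel_ridge_fit(K_shared, Y[:, t]) for t in range(2)]

# Extreme 2: an independent kernel combination per label.
w_per_label = [np.array([0.9, 0.1]), np.array([0.1, 0.9])]
coefs_indep = [kernel_ridge_fit(sum(w * K for w, K in zip(wt, base)), Y[:, t])
               for t, wt in enumerate(w_per_label)]
```

The paper's framework sits between these two cases by letting the weight vectors be partially shared across labels.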
Capt.Drew
Posts: 4179
Posted: 21 Aug 09 7:19
PART-4: MACHINE LEARNING:
=212=> 1261p
Toward Unsupervised Activity Discovery
Using Multi Dimensional Motif Detection in Time Series
,
Alireza Vahdatpour, Navid Amini, Majid Sarrafzadeh,
http://ijcai.org/papers09/Abstracts/212.html
This paper addresses the problem of activity and event discovery in multi-dimensional time series data by proposing a novel method for locating multi-dimensional motifs in time series. While recent work has been done on finding single-dimensional and multi-dimensional motifs in time series, we address motifs in the general case, where the elements of multi-dimensional motifs have temporal, length, and frequency variations. The proposed method is validated on synthetic data, and empirical evaluation has been performed on several wearable systems used by real subjects. text:
http://ijcai.org/papers09/Papers/IJCAI09-212.pdf
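The single-dimensional building block that multi-dimensional motif detection generalizes can be illustrated with a brute-force search for the closest pair of non-overlapping subsequences. This is a minimal sketch, not the paper's algorithm, and the O(n²m) scan is only workable for short series.

```python
import numpy as np

def find_motif(series, m):
    """Brute-force 1-D motif: the closest pair of non-overlapping
    length-m subsequences under z-normalized Euclidean distance."""
    def znorm(s):
        sd = s.std()
        return (s - s.mean()) / sd if sd > 0 else s - s.mean()
    n = len(series) - m + 1
    subs = [znorm(series[i:i + m]) for i in range(n)]
    best, pair = np.inf, (0, 0)
    for i in range(n):
        for j in range(i + m, n):          # enforce no overlap
            d = np.linalg.norm(subs[i] - subs[j])
            if d < best:
                best, pair = d, (i, j)
    return pair, best

# A sine "event" planted twice in noise should be recovered as the motif.
rng = np.random.default_rng(1)
x = rng.normal(scale=0.1, size=300)
pattern = np.sin(np.linspace(0, 2 * np.pi, 30))
x[40:70] += pattern
x[200:230] += pattern
(i, j), dist = find_motif(x, 30)
```

The variations the paper targets (temporal, length, and frequency) are exactly what this fixed-length, exhaustive sketch does not handle.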
Capt.Drew
Posts: 4179
Posted: 21 Aug 09 7:19
PART-4: MACHINE LEARNING:
=213=> 1267p
Multi-Class Classifiers and their Underlying Shared Structure
,
Volkan Vural, Glenn Fung, Romer Rosales, Jennifer G. Dy,
http://ijcai.org/papers09/Abstracts/213.html
Multi-class problems have a richer structure than binary classification problems. Thus, they can potentially improve their performance by exploiting the relationship among class labels. While for the purposes of providing an automated classification result this class structure does not need to be explicitly unveiled, for human-level analysis or interpretation it is valuable. We develop a multi-class large margin classifier that extracts and takes advantage of class relationships. We provide a bi-convex formulation that explicitly learns a matrix capturing these class relationships, de-coupled from the feature weights. Our representation can take advantage of the class structure to compress the model by reducing the number of classifiers employed, maintaining high accuracy even with large compression. In addition, we present a formulation that is efficient in both speed and memory. text:
http://ijcai.org/papers09/Papers/IJCAI09-213.pdf
Capt.Drew
Posts: 4179
Posted: 21 Aug 09 7:19
PART-4: MACHINE LEARNING:
=214=> 1273p
Manifold Alignment without Correspondence
,
Chang Wang, Sridhar Mahadevan,
http://ijcai.org/papers09/Abstracts/214.html
Manifold alignment has been found to be useful in many areas of machine learning and data mining. In this paper we introduce a novel manifold alignment approach, which differs from semi-supervised alignment and Procrustes alignment in that it does not require predetermining correspondences. Our approach learns a projection that maps data instances (from two different spaces) to a lower dimensional space simultaneously matching the local geometry and preserving the neighborhood relationship within each set. This approach also builds connections between spaces defined by different features and makes direct knowledge transfer possible. The performance of our algorithm is demonstrated and validated in a series of carefully designed experiments in information retrieval and bioinformatics. text:
http://ijcai.org/papers09/Papers/IJCAI09-214.pdf
=======================
PART-4: MACHINE LEARNING:
=215=> 1279p
Generalized Cluster Aggregation
,
Fei Wang, Xin Wang, Tao Li,
http://ijcai.org/papers09/Abstracts/215.html
Clustering aggregation has emerged as an important extension of the classical clustering problem. It refers to the situation in which a number of different (input) clusterings have been obtained for a particular data set and it is desired to aggregate those clustering results into a better clustering solution. In this paper, we propose a unified framework for solving the clustering aggregation problem, where the aggregated clustering result is obtained by minimizing the (weighted) sum of the Bregman divergences between it and all the input clusterings. Moreover, within our algorithmic framework, we also propose a novel cluster aggregation problem in which some must-link and cannot-link constraints are given in addition to the input clusterings. Finally, experimental results on several real-world data sets are presented to show the effectiveness of our method.
text:
http://ijcai.org/papers09/Papers/IJCAI09-215.pdf
=======================
PART-4: MACHINE LEARNING:
=216=> 1285p
Preference Learning with Extreme Examples
,
Fei Wang, Bin Zhang, Ta-Hsin Li, Wenjun Yin, Jin Dong, Tao Li,
http://ijcai.org/papers09/Abstracts/216.html
In this paper, we consider a general semi-supervised preference learning problem: given information about the extreme cases and some ordering constraints, the goal is to learn the unknown preferences of the remaining places. Taking potential housing-site selection as an example, we have many candidate places together with their associated information (e.g., position, environment); we know some extreme examples (i.e., several places that are perfect for building a house, and several that are the worst, where no house could be built) and some partial-order constraints (i.e., for two places, which one is better). How, then, can we judge the preference of a candidate place whose preference is unknown beforehand? We propose a Bayesian framework based on Gaussian processes to tackle this problem, from which we solve not only for the unknown preferences but also for the hyperparameters contained in our model.
text:
http://ijcai.org/papers09/Papers/IJCAI09-216.pdf
=======================
PART-4: MACHINE LEARNING:
=217=> 1291p
Knowledge Transfer on Hybrid Graph
,
Zheng Wang, Yangqiu Song, Changshui Zhang,
http://ijcai.org/papers09/Abstracts/217.html
In machine learning problems, labeled data are often in short supply. One feasible solution to this problem is transfer learning, which can make use of labeled data from another domain to discriminate unlabeled data in the target domain. In this paper, we propose a transfer learning framework based on similarity matrix approximation to tackle such problems. Two practical algorithms are proposed: label propagation and similarity propagation. In these methods, we build a hybrid graph based on all available data, and information is transferred across domains by alternately constructing the similarity matrix for different parts of the graph. Among all related methods, the similarity propagation approach makes maximum use of the available similarity information across domains, which leads to more efficient transfer and better learning results. Experiments on real-world text mining applications demonstrate the promise and effectiveness of our algorithms. text:
http://ijcai.org/papers09/Papers/IJCAI09-217.pdf
Capt.Drew
Posts: 4179
Posted: 21 Aug 09 7:19
PART-4: MACHINE LEARNING:
=218=> 1297p
Early Prediction on Time Series: A Nearest Neighbor Approach
,
Zhengzheng Xing, Jian Pei, Philip S. Yu,
http://ijcai.org/papers09/Abstracts/218.html
In this paper, we formulate the problem of early classification of time series data, which is important in time-sensitive applications such as health informatics. We introduce the novel concept of MPL (Minimum Prediction Length) and develop ECTS (Early Classification on Time Series), an effective 1-nearest-neighbor classification method. ECTS makes early predictions while retaining accuracy comparable to that of a 1NN classifier using the full-length time series. Our empirical study on benchmark time series data sets shows that ECTS works well on real data sets where 1NN classification is effective.
text:
http://ijcai.org/papers09/Papers/IJCAI09-218.pdf
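The core idea (classify from a growing prefix, and commit once the 1NN answer stops changing) can be sketched directly. This is a simplified stand-in, not ECTS: the paper computes MPLs from the training set, whereas the toy rule below just waits for the nearest-neighbor label to stay stable for a few consecutive prefix lengths.

```python
import numpy as np

def early_1nn(train_X, train_y, query, step=5, stable_needed=3):
    """Predict from a growing prefix of `query`: emit the 1NN label
    once it has stayed the same for `stable_needed` consecutive
    prefix lengths (a toy proxy for ECTS's Minimum Prediction Length)."""
    full = train_X.shape[1]
    streak, last = 0, None
    for L in range(step, full + 1, step):
        d = np.linalg.norm(train_X[:, :L] - query[:L], axis=1)
        lab = train_y[np.argmin(d)]
        streak = streak + 1 if lab == last else 1
        last = lab
        if streak >= stable_needed:
            return last, L          # early prediction + prefix length used
    return last, full

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 60)
# Two classes: upward vs. downward trend, plus noise.
X = np.array([s * t + rng.normal(scale=0.1, size=60)
              for s in ([1] * 20 + [-1] * 20)])
y = np.array([1] * 20 + [0] * 20)
q = t + rng.normal(scale=0.1, size=60)    # a new upward-trending series
label, used = early_1nn(X, y, q)
```

On this toy data the prediction is typically made well before the full length of 60 is observed.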
=======================
PART-4: MACHINE LEARNING:
=219=> 1303p
Discriminative Semi-Supervised Feature Selection via Manifold Regularization
,
Zenglin Xu, Rong Jin,
Michael R. Lyu, Irwin King
,
http://ijcai.org/papers09/Abstracts/219.html
We consider the problem of semi-supervised feature selection, where we are given a small amount of labeled examples and a large amount of unlabeled examples. Since a small number of labeled samples is usually insufficient for identifying the relevant features, the critical problem arising in semi-supervised feature selection is how to take advantage of the information underneath the unlabeled data. To address this problem, we propose a novel discriminative semi-supervised feature selection method based on the idea of manifold regularization. The proposed method selects features through maximizing the classification margin between different classes while simultaneously exploiting the geometry of the probability distribution that generates both labeled and unlabeled data. We formulate the proposed feature selection method as a convex-concave optimization problem, where the saddle point corresponds to the optimal solution. To find the optimal solution, the level method, a fairly recent optimization method, is employed. We also present a theoretical proof of the convergence rate for the application of the level method to our problem. Empirical evaluation on several benchmark data sets demonstrates the effectiveness of the proposed semi-supervised feature selection method. text:
http://ijcai.org/papers09/Papers/IJCAI09-219.pdf
=======================
PART-4: MACHINE LEARNING:
=220=> 1309p
Multi-Relational Learning with Gaussian Processes
,
Zhao Xu,
Kristian Kersting, Volker Tresp
,
http://ijcai.org/papers09/Abstracts/220.html
Due to their flexible nonparametric nature, Gaussian process models are very effective at solving hard machine learning problems. While existing Gaussian process models focus on modeling one single relation, we present a generalized GP model, named multi-relational Gaussian process model, that is able to deal with an arbitrary number of relations in a domain of interest. The proposed model is analyzed in the context of bipartite, directed, and undirected univariate relations. Experimental results on real-world datasets show that exploiting the correlations among different entity types and relations can indeed improve prediction performance.
text:
http://ijcai.org/papers09/Papers/IJCAI09-220.pdf
Capt.Drew
Posts: 4179
Posted: 21 Aug 09 7:20
PART-4: MACHINE LEARNING:
=221=> 1315p
Transfer Learning using Task-Level Features with Application to Information Retrieval
,
Rong Yan, Jian Zhang,
http://ijcai.org/papers09/Abstracts/221.html
We propose a probabilistic transfer learning model that uses task-level features to control the task mixture selection in a hierarchical Bayesian model. These task-level features, although rarely used in existing approaches, can provide additional information for modeling complex task distributions and allow effective transfer to new tasks, especially when only a limited amount of data is available. To estimate the model parameters, we develop an empirical Bayes method based on variational approximation techniques. Our experiments on information retrieval show that the proposed model achieves significantly better performance compared with other transfer learning methods. text:
http://ijcai.org/papers09/Papers/IJCAI09-221.pdf
========================
PART-4: MACHINE LEARNING:
=222=> 1321p
Spatio-Temporal Event Detection Using Dynamic Conditional Random Fields
,
Jie Yin, Derek Hao Hu, Qiang Yang,
http://ijcai.org/papers09/Abstracts/222.html
Event detection is a critical task in sensor networks for a variety of real-world applications. Many real-world events exhibit complex spatio-temporal patterns, manifesting themselves through observations that are close in time and space. Such spatio-temporal events are not handled well by many previous approaches. In this paper, we propose a new Spatio-Temporal Event Detection (STED) algorithm for sensor networks based on a dynamic conditional random field (DCRF) model. Our STED method handles the uncertainty of sensor data explicitly and permits neighborhood interactions in both observations and event labels. Experiments on both real and synthetic data demonstrate that our STED method can provide accurate event detection in near real time, even for large-scale sensor networks.
text:
http://ijcai.org/papers09/Papers/IJCAI09-222.pdf
Capt.Drew
Posts: 4179
Posted: 21 Aug 09 7:20
PART-4: MACHINE LEARNING:
=223=> 1327p
Robust Distance Metric Learning with Auxiliary Knowledge
,
Zheng-Jun Zha, T.Mei, Meng Wang, Zengfu Wang, XS.Hua,
http://ijcai.org/papers09/Abstracts/223.html
Most of the existing metric learning methods work by exploiting pairwise constraints over the labeled data and frequently suffer from an insufficiency of training examples. To learn a robust distance metric from few labeled examples, prior knowledge from unlabeled examples, as well as metrics previously derived from auxiliary data sets, can be useful. In this paper, we propose to leverage such auxiliary knowledge to assist distance metric learning, formulated following the regularized loss minimization principle. Two algorithms are derived on the basis of manifold regularization and log-determinant divergence regularization, respectively, which can simultaneously exploit label information (i.e., the pairwise constraints over labeled data), unlabeled examples, and the metrics derived from auxiliary data sets. The proposed methods directly manipulate the auxiliary metrics and require no raw examples from the auxiliary data sets, which makes them efficient and flexible. We conduct extensive evaluations comparing our approaches with a number of competing approaches on a face recognition task. The experimental results show that our approaches can derive reliable distance metrics from limited training examples and are thus superior in terms of accuracy and labeling effort. text:
http://ijcai.org/papers09/Papers/IJCAI09-223.pdf
Capt.Drew
Posts: 4179
Posted: 21 Aug 09 7:20
PART-4: MACHINE LEARNING:
=224=> 1333p
Fast Active Tabu Search and its Application to Image Retrieval
,
Chao Zhang, Hongyu Li, Qiyong Guo, Jinyuan Jia, I-Fan Shen,
http://ijcai.org/papers09/Abstracts/224.html
This paper proposes a novel framework for image retrieval. The retrieval is treated as searching for an ordered cycle in an image database. The optimal cycle can be found by minimizing the geometric manifold entropy of images. The minimization is solved by the proposed method, fast active tabu search. Experimental results demonstrate the framework for image retrieval is feasible and quite promising.
text:
http://ijcai.org/papers09/Papers/IJCAI09-224.pdf
=======================
PART-4: MACHINE LEARNING:
=225=> 1339p
M3IC: Maximum Margin Multiple Instance Clustering
,
Dan Zhang, Fei Wang, Luo Si, Tao Li,
http://ijcai.org/papers09/Abstracts/225.html
Clustering, classification, and regression are three major research topics in machine learning. Much work has been done on multiple-instance classification and multiple-instance regression, where supervised training patterns are given as bags and each bag consists of some instances, but research on unsupervised multiple-instance clustering is still limited. This paper formulates a novel Maximum Margin Multiple Instance Clustering (M3IC) problem for the multiple-instance clustering task. To avoid solving a non-convex optimization problem directly, M3IC is further relaxed, which enables an efficient optimization solution based on a combination of the Constrained Concave-Convex Procedure (CCCP) and the cutting plane method. Furthermore, this paper analyzes some important properties of the proposed method and its relationship to other related methods. An extensive set of empirical results demonstrates the advantages of the proposed method over existing approaches in both effectiveness and efficiency. text:
http://ijcai.org/papers09/Papers/IJCAI09-225.pdf
=======================
PART-4: MACHINE LEARNING:
=226=> 1345p
An Efficient Nonnegative Matrix Factorization Approach in Flexible Kernel Space
,
Daoqiang Zhang, Wanquan Liu,
http://ijcai.org/papers09/Abstracts/226.html
In this paper, we propose a general formulation for kernel nonnegative matrix factorization with flexible kernels. Specifically, we propose the Gaussian nonnegative matrix factorization (GNMF) algorithm by using the Gaussian kernel in the framework. Different from the recently developed polynomial NMF (PNMF), GNMF finds basis vectors in the kernel-induced feature space, and its computational cost is independent of the input dimensions. Furthermore, we prove the convergence and the nonnegativity of the decomposition for our method. Extensive experiments on several face databases, in comparison with PNMF and other NMF algorithms, validate the effectiveness of the proposed method.
http://ijcai.org/papers09/Papers/IJCAI09-226.pdf
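For orientation, the plain (non-kernel) NMF that GNMF and PNMF generalize is easy to sketch via the classic Lee-Seung multiplicative updates. This baseline is not the paper's kernel method; it factorizes the data matrix itself, V ≈ WH, under the Frobenius loss.

```python
import numpy as np

def nmf(V, r, iters=500, seed=0):
    """Lee-Seung multiplicative updates for V ~ W @ H with W, H >= 0
    (Frobenius loss). The updates preserve nonnegativity because they
    only multiply by nonnegative ratios."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + 0.1
    H = rng.random((r, m)) + 0.1
    eps = 1e-9                     # avoid division by zero
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

rng = np.random.default_rng(4)
V = rng.random((20, 3)) @ rng.random((3, 15))   # exact rank-3 nonnegative data
W, H = nmf(V, 3)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

The kernelized variants replace the inner products of raw data in these updates with kernel evaluations, which is what makes GNMF's cost independent of the input dimension.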
=======================
PART-4: MACHINE LEARNING:
=227=> 1351p
Smart PCA
, Yi Zhang,
http://ijcai.org/papers09/Abstracts/227.html
PCA can be made smarter, producing more sensible projections. In this paper, we propose smart PCA, an extension of standard PCA that regularizes model estimation and incorporates external knowledge. Based on the probabilistic interpretation of PCA, the inverse-Wishart distribution can be used as the informative conjugate prior for the population covariance, with useful knowledge carried by the prior hyperparameters. We design the hyperparameters to smoothly combine information from both the domain knowledge and the data itself. The Bayesian point estimate of the principal components is available in closed form. In empirical studies, smart PCA shows clear improvement on three different criteria: image reconstruction error, the perceptual quality of the reconstructed images, and pattern recognition performance.
text:
http://ijcai.org/papers09/Papers/IJCAI09-227.pdf
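The closed-form point estimate can be illustrated with the standard conjugate update: under an inverse-Wishart prior IW(Λ0, ν0), the posterior over the covariance is IW(Λ0 + S, ν0 + n), whose mean blends the sample scatter S with the prior scale Λ0. The sketch below uses an identity prior as a placeholder; the paper's contribution is designing informative hyperparameters from domain knowledge, which is not reproduced here.

```python
import numpy as np

def smart_pca_components(X, Lambda0, nu0, k):
    """Bayesian covariance point estimate under an inverse-Wishart
    prior IW(Lambda0, nu0): posterior mean (Lambda0 + S)/(nu0 + n - p - 1),
    followed by the top-k eigenvectors as principal components."""
    n, p = X.shape
    Xc = X - X.mean(0)
    S = Xc.T @ Xc                          # sample scatter matrix
    Sigma = (Lambda0 + S) / (nu0 + n - p - 1)
    vals, vecs = np.linalg.eigh(Sigma)     # ascending eigenvalues
    order = np.argsort(vals)[::-1][:k]
    return vecs[:, order], vals[order]

rng = np.random.default_rng(3)
# Few samples in 10-D: plain PCA overfits; the prior shrinks toward I.
X = rng.normal(size=(8, 10)) @ np.diag([3, 2] + [0.5] * 8)
W, lam = smart_pca_components(X, Lambda0=np.eye(10), nu0=12.0, k=2)
```

With Λ0 = I this reduces to shrinking the covariance toward the identity, a common regularizer when n < p.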
=======================
PART-4: MACHINE LEARNING:
=228=> 1357p
Non-Metric Label Propagation
,
Yin Zhang, ZH Zhou,
http://ijcai.org/papers09/Abstracts/228.html
In many applications non-metric distances are better than metric distances at reflecting the perceptual distances of human beings. Previous studies on non-metric distances mainly focused on the supervised setting and did not consider the usefulness of unlabeled data. In this paper, we present probably the first study of label propagation on graphs induced from non-metric distances. The challenge here lies in the fact that the triangle inequality does not hold for non-metric distances and, therefore, a direct application of existing label propagation methods will lead to inconsistency and conflict. We show that by applying a spectrum transformation, non-metric distances can be converted into metric ones, so that label propagation can be executed. Such methods, however, suffer from a change in the original semantic relations. As a main result of this paper, we prove that any non-metric distance matrix can be decomposed into two metric distance matrices containing different information about the data. Based on this result, our proposed NMLP method derives two graphs from the original non-metric distance and performs a joint label propagation on the joint graph. Experiments validate the effectiveness of the proposed NMLP method. text:
http://ijcai.org/papers09/Papers/IJCAI09-228.pdf
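The label propagation step that NMLP builds on is standard and easy to sketch: iterate F ← αSF + (1-α)Y with S the symmetrically normalized affinity matrix (the Zhou et al. formulation). This baseline assumes an ordinary affinity graph; the paper's contribution, decomposing a non-metric distance into two metric parts and propagating jointly, is not shown here.

```python
import numpy as np

def label_propagation(W, y, alpha=0.9, iters=200):
    """Graph label propagation: F <- alpha * S @ F + (1 - alpha) * Y,
    with S the symmetrically normalized affinity matrix.
    y: -1 marks unlabeled points, otherwise a class id."""
    d = W.sum(1)
    S = W / np.sqrt(np.outer(d, d))
    classes = np.unique(y[y >= 0])
    Y = np.zeros((len(y), len(classes)))
    for j, c in enumerate(classes):
        Y[y == c, j] = 1.0
    F = Y.copy()
    for _ in range(iters):
        F = alpha * S @ F + (1 - alpha) * Y
    return classes[F.argmax(1)]

# Two chain-connected groups, one labeled seed in each.
W = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (3, 4), (4, 5)]:
    W[i, j] = W[j, i] = 1.0
W += 0.01          # weak background affinity (incl. self-loops); keeps graph connected
y = np.array([0, -1, -1, 1, -1, -1])
pred = label_propagation(W, y)
```

The two seed labels spread along the chains, labeling each group consistently.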
Capt.Drew
Posts: 4179
Posted: 21 Aug 09 7:20
PART-4: MACHINE LEARNING:
=229=> 1363p
Multiclass Probabilistic Kernel Discriminant Analysis
,
Zheng Zhao, Liang Sun, Shipeng Yu, Huan Liu, Jieping Ye,
http://ijcai.org/papers09/Abstracts/229.html
Kernel discriminant analysis (KDA) is an effective approach for supervised nonlinear dimensionality reduction. Probabilistic models can be used with KDA to improve its robustness. However, state-of-the-art models of this kind can only handle binary-class problems, which limits their applicability to many real-world problems. To overcome this limitation, we propose a novel nonparametric probabilistic model based on Gaussian processes for KDA that handles multiclass problems. The model provides a novel Bayesian interpretation of KDA, which allows its parameters to be automatically tuned through optimization of the marginal log-likelihood of the data. An empirical study demonstrates the efficacy of the proposed model.
text:
http://ijcai.org/papers09/Papers/IJCAI09-229.pdf
=======================
PART-4: MACHINE LEARNING:
=230=> 1369p
Multiple Information Sources Cooperative Learning
,
Xingquan Zhu, Ruoming Jin,
http://ijcai.org/papers09/Abstracts/230.html
Many applications face the problem of learning from an objective dataset when information from other auxiliary sources may be beneficial but cannot be integrated into the objective dataset for learning. In this paper, we propose an omni-view learning approach to enable learning from multiple data collections. The theme is to organize heterogeneous data sources into a unified table with a global data view. To achieve the omni-view learning goal, we assume that the objective dataset and the auxiliary datasets share some instance-level dependency structures. We then propose a relational k-means to cluster instances in each auxiliary dataset, such that the clusters can help build new features that capture correlations between the objective and auxiliary datasets. Experimental results demonstrate that omni-view learning helps build models that outperform those learned from the objective dataset alone. Comparisons with the co-training algorithm further assert that omni-view learning provides an alternative, yet effective, way of semi-supervised learning.
text:
http://ijcai.org/papers09/Papers/IJCAI09-230.pdf
Copyright © 2001-2022, www.gotai.net