id (string, 9-16 chars) | submitter (string, 3-64 chars, nullable) | authors (string, 5-6.63k chars) | title (string, 7-245 chars) | comments (string, 1-482 chars, nullable) | journal-ref (string, 4-382 chars, nullable) | doi (string, 9-151 chars, nullable) | report-no (string, 984 classes) | categories (string, 5-108 chars) | license (string, 9 classes) | abstract (string, 83-3.41k chars) | versions (list, 1-20 items) | update_date (timestamp[s], 2007-05-23 to 2025-04-11) | authors_parsed (sequence, 1-427 items) | prompt (string, 166-3.49k chars) | label (string, 2 classes) | prob (float64, 0.5-0.98)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1206.4611 | Pratik Jawanpuria | Pratik Jawanpuria (IIT Bombay), J. Saketha Nath (IIT Bombay) | A Convex Feature Learning Formulation for Latent Task Structure
Discovery | ICML2012 | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper considers the multi-task learning problem in the setting where
some relevant features could be shared across a few related tasks. Most of the
existing methods assume the extent to which the given tasks are related or
share a common feature space to be known a priori. In real-world applications,
however, it is desirable to automatically discover the groups of related tasks
that share a feature space. In this paper we aim at searching the exponentially
large space of all possible groups of tasks that may share a feature space. The
main contribution is a convex formulation that employs a graph-based
regularizer and simultaneously discovers a few groups of related tasks, having
close-by task parameters, as well as the feature space shared within each
group. The regularizer encodes an important structure among the groups of tasks
leading to an efficient algorithm for solving it: if there is no feature space
under which a group of tasks has close-by task parameters, then there does not
exist such a feature space for any of its supersets. An efficient active set
algorithm that exploits this simplification and performs a clever search in the
exponentially large space is presented. The algorithm is guaranteed to solve
the proposed formulation (within some precision) in a time polynomial in the
number of groups of related tasks discovered. Empirical results on benchmark
datasets show that the proposed formulation achieves good generalization and
outperforms state-of-the-art multi-task learning algorithms in some cases.
| [
{
"version": "v1",
"created": "Mon, 18 Jun 2012 15:00:07 GMT"
}
] | 2012-06-22T00:00:00 | [
[
"Jawanpuria",
"Pratik",
"",
"IIT Bombay"
],
[
"Nath",
"J. Saketha",
"",
"IIT Bombay"
]
] | TITLE: A Convex Feature Learning Formulation for Latent Task Structure
Discovery
ABSTRACT: This paper considers the multi-task learning problem in the setting where
some relevant features could be shared across a few related tasks. Most of the
existing methods assume the extent to which the given tasks are related or
share a common feature space to be known a priori. In real-world applications,
however, it is desirable to automatically discover the groups of related tasks
that share a feature space. In this paper we aim at searching the exponentially
large space of all possible groups of tasks that may share a feature space. The
main contribution is a convex formulation that employs a graph-based
regularizer and simultaneously discovers a few groups of related tasks, having
close-by task parameters, as well as the feature space shared within each
group. The regularizer encodes an important structure among the groups of tasks
leading to an efficient algorithm for solving it: if there is no feature space
under which a group of tasks has close-by task parameters, then there does not
exist such a feature space for any of its supersets. An efficient active set
algorithm that exploits this simplification and performs a clever search in the
exponentially large space is presented. The algorithm is guaranteed to solve
the proposed formulation (within some precision) in a time polynomial in the
number of groups of related tasks discovered. Empirical results on benchmark
datasets show that the proposed formulation achieves good generalization and
outperforms state-of-the-art multi-task learning algorithms in some cases.
| no_new_dataset | 0.939692 |
1206.4616 | Drausin Wulsin | Drausin Wulsin (University of Pennsylvania), Shane Jensen (University
of Pennsylvania), Brian Litt (University of Pennsylvania) | A Hierarchical Dirichlet Process Model with Multiple Levels of
Clustering for Human EEG Seizure Modeling | ICML2012 | null | null | null | stat.AP cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Driven by the multi-level structure of human intracranial
electroencephalogram (iEEG) recordings of epileptic seizures, we introduce a
new variant of a hierarchical Dirichlet Process---the multi-level clustering
hierarchical Dirichlet Process (MLC-HDP)---that simultaneously clusters
datasets on multiple levels. Our seizure dataset contains brain activity
recorded in typically more than a hundred individual channels for each seizure
of each patient. The MLC-HDP model clusters over channel types, seizure types,
and patient types simultaneously. We describe this model and its implementation
in detail. We also present the results of a simulation study comparing the
MLC-HDP to a similar model, the Nested Dirichlet Process and finally
demonstrate the MLC-HDP's use in modeling seizures across multiple patients. We
find the MLC-HDP's clustering to be comparable to independent human physician
clusterings. To our knowledge, the MLC-HDP model is the first in the epilepsy
literature capable of clustering seizures within and between patients.
| [
{
"version": "v1",
"created": "Mon, 18 Jun 2012 15:02:12 GMT"
}
] | 2012-06-22T00:00:00 | [
[
"Wulsin",
"Drausin",
"",
"University of Pennsylvania"
],
[
"Jensen",
"Shane",
"",
"University\n of Pennsylvania"
],
[
"Litt",
"Brian",
"",
"University of Pennsylvania"
]
] | TITLE: A Hierarchical Dirichlet Process Model with Multiple Levels of
Clustering for Human EEG Seizure Modeling
ABSTRACT: Driven by the multi-level structure of human intracranial
electroencephalogram (iEEG) recordings of epileptic seizures, we introduce a
new variant of a hierarchical Dirichlet Process---the multi-level clustering
hierarchical Dirichlet Process (MLC-HDP)---that simultaneously clusters
datasets on multiple levels. Our seizure dataset contains brain activity
recorded in typically more than a hundred individual channels for each seizure
of each patient. The MLC-HDP model clusters over channel types, seizure types,
and patient types simultaneously. We describe this model and its implementation
in detail. We also present the results of a simulation study comparing the
MLC-HDP to a similar model, the Nested Dirichlet Process and finally
demonstrate the MLC-HDP's use in modeling seizures across multiple patients. We
find the MLC-HDP's clustering to be comparable to independent human physician
clusterings. To our knowledge, the MLC-HDP model is the first in the epilepsy
literature capable of clustering seizures within and between patients.
| new_dataset | 0.890485 |
1206.4618 | Wei Liu | Wei Liu (Columbia University), Jun Wang (IBM T. J. Watson Research
Center), Yadong Mu (Columbia University), Sanjiv Kumar (Google), Shih-Fu
Chang (Columbia University) | Compact Hyperplane Hashing with Bilinear Functions | ICML2012 | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hyperplane hashing aims at rapidly searching nearest points to a hyperplane,
and has shown practical impact in scaling up active learning with SVMs.
Unfortunately, the existing randomized methods need long hash codes to achieve
reasonable search accuracy and thus suffer from reduced search speed and large
memory overhead. To this end, this paper proposes a novel hyperplane hashing
technique which yields compact hash codes. The key idea is the bilinear form of
the proposed hash functions, which leads to higher collision probability than
the existing hyperplane hash functions when using random projections. To
further increase the performance, we propose a learning based framework in
which the bilinear functions are directly learned from the data. This results
in short yet discriminative codes, and also boosts the search performance over
the random projection based solutions. Large-scale active learning experiments
carried out on two datasets with up to one million samples demonstrate the
overall superiority of the proposed approach.
| [
{
"version": "v1",
"created": "Mon, 18 Jun 2012 15:03:10 GMT"
}
] | 2012-06-22T00:00:00 | [
[
"Liu",
"Wei",
"",
"Columbia University"
],
[
"Wang",
"Jun",
"",
"IBM T. J. Watson Research\n Center"
],
[
"Mu",
"Yadong",
"",
"Columbia University"
],
[
"Kumar",
"Sanjiv",
"",
"Google"
],
[
"Chang",
"Shih-Fu",
"",
"Columbia University"
]
] | TITLE: Compact Hyperplane Hashing with Bilinear Functions
ABSTRACT: Hyperplane hashing aims at rapidly searching nearest points to a hyperplane,
and has shown practical impact in scaling up active learning with SVMs.
Unfortunately, the existing randomized methods need long hash codes to achieve
reasonable search accuracy and thus suffer from reduced search speed and large
memory overhead. To this end, this paper proposes a novel hyperplane hashing
technique which yields compact hash codes. The key idea is the bilinear form of
the proposed hash functions, which leads to higher collision probability than
the existing hyperplane hash functions when using random projections. To
further increase the performance, we propose a learning based framework in
which the bilinear functions are directly learned from the data. This results
in short yet discriminative codes, and also boosts the search performance over
the random projection based solutions. Large-scale active learning experiments
carried out on two datasets with up to one million samples demonstrate the
overall superiority of the proposed approach.
| no_new_dataset | 0.95222 |
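A minimal sketch of the bilinear hash family from the record above, assuming random (data-independent) projections; the learned bilinear variant and the paper's exact query-side coding are omitted, and the negated-code query rule plus all dimensions are illustrative assumptions.

```python
import numpy as np

def bilinear_hash(X, U, V):
    """K-bit bilinear codes: bit_k(x) = sign((u_k . x) * (v_k . x))."""
    # X: (n, d) points; U, V: (K, d) random projection directions
    return np.where((X @ U.T) * (X @ V.T) >= 0, 1, -1)   # (n, K) codes in {-1, +1}

rng = np.random.default_rng(0)
d, K = 64, 16
U, V = rng.normal(size=(K, d)), rng.normal(size=(K, d))
X = rng.normal(size=(1000, d))
codes = bilinear_hash(X, U, V)

# Candidate retrieval for a hyperplane query with normal w: rank database points
# by Hamming distance to the negated code of w (a simplification of the paper's rule).
w = rng.normal(size=(1, d))
query_code = -bilinear_hash(w, U, V)
hamming = np.sum(codes != query_code, axis=1)
candidates = np.argsort(hamming)[:20]
```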
1206.4622 | Aaron Defazio | Aaron Defazio (ANU), Tiberio Caetano (NICTA and Australian National
University) | A Graphical Model Formulation of Collaborative Filtering Neighbourhood
Methods with Fast Maximum Entropy Training | ICML2012 | null | null | null | cs.LG cs.IR stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Item neighbourhood methods for collaborative filtering learn a weighted graph
over the set of items, where each item is connected to those it is most similar
to. The prediction of a user's rating on an item is then given by that rating
of neighbouring items, weighted by their similarity. This paper presents a new
neighbourhood approach which we call item fields, whereby an undirected
graphical model is formed over the item graph. The resulting prediction rule is
a simple generalization of the classical approaches, which takes into account
non-local information in the graph, allowing its best results to be obtained
when using drastically fewer edges than other neighbourhood approaches. A fast
approximate maximum entropy training method based on the Bethe approximation is
presented, which uses a simple gradient ascent procedure. When using
precomputed sufficient statistics on the Movielens datasets, our method is
faster than maximum likelihood approaches by two orders of magnitude.
| [
{
"version": "v1",
"created": "Mon, 18 Jun 2012 15:05:52 GMT"
}
] | 2012-06-22T00:00:00 | [
[
"Defazio",
"Aaron",
"",
"ANU"
],
[
"Caetano",
"Tiberio",
"",
"NICTA and Australian National\n University"
]
] | TITLE: A Graphical Model Formulation of Collaborative Filtering Neighbourhood
Methods with Fast Maximum Entropy Training
ABSTRACT: Item neighbourhood methods for collaborative filtering learn a weighted graph
over the set of items, where each item is connected to those it is most similar
to. The prediction of a user's rating on an item is then given by that rating
of neighbouring items, weighted by their similarity. This paper presents a new
neighbourhood approach which we call item fields, whereby an undirected
graphical model is formed over the item graph. The resulting prediction rule is
a simple generalization of the classical approaches, which takes into account
non-local information in the graph, allowing its best results to be obtained
when using drastically fewer edges than other neighbourhood approaches. A fast
approximate maximum entropy training method based on the Bethe approximation is
presented, which uses a simple gradient ascent procedure. When using
precomputed sufficient statistics on the Movielens datasets, our method is
faster than maximum likelihood approaches by two orders of magnitude.
| no_new_dataset | 0.951953 |
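For context on the classical item-neighbourhood rule that the item-fields model above generalizes, a minimal sketch follows: a user's predicted rating on an item is the similarity-weighted average of that user's ratings on the item's graph neighbours. The dict-of-dicts item graph and per-user rating dict are illustrative assumptions, not the paper's implementation.

```python
def neighbourhood_predict(user_ratings, item_graph, item):
    """Classical weighted-neighbour prediction (the baseline the item-fields
    model generalizes with non-local graph information).
    user_ratings: {item_id: rating}; item_graph: {item_id: {neighbour_id: weight}}."""
    neighbours = item_graph.get(item, {})
    num = sum(w * user_ratings[j] for j, w in neighbours.items() if j in user_ratings)
    den = sum(abs(w) for j, w in neighbours.items() if j in user_ratings)
    return num / den if den > 0 else 0.0

# Example: predict a user's rating on item 3 from its two weighted neighbours.
graph = {3: {1: 0.8, 2: 0.4}}
ratings_u = {1: 4.0, 2: 2.0}
print(neighbourhood_predict(ratings_u, graph, 3))   # (0.8*4 + 0.4*2) / 1.2 = 3.33
```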
1206.4625 | Ye Nan | Ye Nan (NUS), Kian Ming Chai (DSO National Laboratories), Wee Sun Lee
(NUS), Hai Leong Chieu (DSO National Laboratories) | Optimizing F-measure: A Tale of Two Approaches | ICML2012 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | F-measures are popular performance metrics, particularly for tasks with
imbalanced data sets. Algorithms for learning to maximize F-measures follow two
approaches: the empirical utility maximization (EUM) approach learns a
classifier having optimal performance on training data, while the
decision-theoretic approach learns a probabilistic model and then predicts
labels with maximum expected F-measure. In this paper, we investigate the
theoretical justifications and connections for these two approaches, and we
study the conditions under which one approach is preferable to the other using
synthetic and real datasets. Given accurate models, our results suggest that
the two approaches are asymptotically equivalent given large training and test
sets. Nevertheless, empirically, the EUM approach appears to be more robust
against model misspecification, and given a good model, the decision-theoretic
approach appears to be better for handling rare classes and a common domain
adaptation scenario.
| [
{
"version": "v1",
"created": "Mon, 18 Jun 2012 15:07:04 GMT"
}
] | 2012-06-22T00:00:00 | [
[
"Nan",
"Ye",
"",
"NUS"
],
[
"Chai",
"Kian Ming",
"",
"DSO National Laboratories"
],
[
"Lee",
"Wee Sun",
"",
"NUS"
],
[
"Chieu",
"Hai Leong",
"",
"DSO National Laboratories"
]
] | TITLE: Optimizing F-measure: A Tale of Two Approaches
ABSTRACT: F-measures are popular performance metrics, particularly for tasks with
imbalanced data sets. Algorithms for learning to maximize F-measures follow two
approaches: the empirical utility maximization (EUM) approach learns a
classifier having optimal performance on training data, while the
decision-theoretic approach learns a probabilistic model and then predicts
labels with maximum expected F-measure. In this paper, we investigate the
theoretical justifications and connections for these two approaches, and we
study the conditions under which one approach is preferable to the other using
synthetic and real datasets. Given accurate models, our results suggest that
the two approaches are asymptotically equivalent given large training and test
sets. Nevertheless, empirically, the EUM approach appears to be more robust
against model misspecification, and given a good model, the decision-theoretic
approach appears to be better for handling rare classes and a common domain
adaptation scenario.
| no_new_dataset | 0.948106 |
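A concrete view of the EUM side of the comparison above: learn scores as usual, then pick the decision threshold that maximizes empirical F1 on training (or held-out) data. The decision-theoretic side, which predicts labels to maximize expected F-measure under a probabilistic model, is not shown; this is a generic sketch, not the authors' code.

```python
import numpy as np

def f1_score(y_true, y_pred):
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom > 0 else 0.0

def eum_threshold(scores, y_true):
    """Empirical utility maximization: tune the threshold for best training F1."""
    best_t, best_f = 0.5, -1.0
    for t in np.unique(scores):
        f = f1_score(y_true, (scores >= t).astype(int))
        if f > best_f:
            best_t, best_f = t, f
    return best_t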
1206.4626 | Steven C.H. Hoi | Bin Li (NTU), Steven C.H. Hoi (NTU) | On-Line Portfolio Selection with Moving Average Reversion | ICML2012 | null | null | null | cs.CE cs.LG q-fin.PM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | On-line portfolio selection has attracted increasing interest in the machine
learning and AI communities recently. Empirical evidence shows that a stock's
high and low prices are temporary and stock price relatives are likely to
follow the mean reversion phenomenon. While the existing mean reversion
strategies are shown to achieve good empirical performance on many real
datasets, they often make the single-period mean reversion assumption, which is
not always satisfied in some real datasets, leading to poor performance when
the assumption does not hold. To overcome the limitation, this article proposes
a multiple-period mean reversion, or so-called Moving Average Reversion (MAR),
and a new on-line portfolio selection strategy named "On-Line Moving Average
Reversion" (OLMAR), which exploits MAR by applying powerful online learning
techniques. From our empirical results, we found that OLMAR can overcome the
drawback of existing mean reversion algorithms and achieve significantly better
results, especially on the datasets where the existing mean reversion
algorithms failed. In addition to superior trading performance, OLMAR also runs
extremely fast, further supporting its practical applicability to a wide range
of applications.
| [
{
"version": "v1",
"created": "Mon, 18 Jun 2012 15:07:23 GMT"
}
] | 2012-06-22T00:00:00 | [
[
"Li",
"Bin",
"",
"NTU"
],
[
"Hoi",
"Steven C. H.",
"",
"NTU"
]
] | TITLE: On-Line Portfolio Selection with Moving Average Reversion
ABSTRACT: On-line portfolio selection has attracted increasing interest in the machine
learning and AI communities recently. Empirical evidence shows that a stock's
high and low prices are temporary and stock price relatives are likely to
follow the mean reversion phenomenon. While the existing mean reversion
strategies are shown to achieve good empirical performance on many real
datasets, they often make the single-period mean reversion assumption, which is
not always satisfied in some real datasets, leading to poor performance when
the assumption does not hold. To overcome the limitation, this article proposes
a multiple-period mean reversion, or so-called Moving Average Reversion (MAR),
and a new on-line portfolio selection strategy named "On-Line Moving Average
Reversion" (OLMAR), which exploits MAR by applying powerful online learning
techniques. From our empirical results, we found that OLMAR can overcome the
drawback of existing mean reversion algorithms and achieve significantly better
results, especially on the datasets where the existing mean reversion
algorithms failed. In addition to superior trading performance, OLMAR also runs
extremely fast, further supporting its practical applicability to a wide range
of applications.
| no_new_dataset | 0.953013 |
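The OLMAR update sketched from the abstract above: predict the next price relatives with a moving average, take a passive-aggressive style step toward satisfying a reversion threshold, and project back onto the simplex. The window handling and the epsilon default are assumptions; treat this as an illustrative sketch rather than a faithful reimplementation.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - 1)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def olmar_step(b, price_window, eps=10.0):
    """One OLMAR rebalancing step.
    b: current portfolio weights (sum to 1); price_window: (w, m) recent prices
    for m assets, last row = today's prices."""
    x_pred = price_window.mean(axis=0) / price_window[-1]   # moving-average reversion forecast
    x_bar = x_pred.mean()
    denom = np.linalg.norm(x_pred - x_bar) ** 2
    lam = 0.0 if denom == 0 else max(0.0, (eps - b @ x_pred) / denom)
    return project_simplex(b + lam * (x_pred - x_bar))
```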
1206.4633 | Steven C.H. Hoi | Peilin Zhao (NTU), Jialei Wang (NTU), Pengcheng Wu (NTU), Rong Jin
(MSU), Steven C.H. Hoi (NTU) | Fast Bounded Online Gradient Descent Algorithms for Scalable
Kernel-Based Online Learning | ICML2012 | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Kernel-based online learning has often shown state-of-the-art performance for
many online learning tasks. It, however, suffers from a major shortcoming, that
is, the unbounded number of support vectors, making it non-scalable and
unsuitable for applications with large-scale datasets. In this work, we study
the problem of bounded kernel-based online learning that aims to constrain the
number of support vectors by a predefined budget. Although several algorithms
have been proposed in the literature, they are neither computationally efficient
due to their intensive budget maintenance strategy nor effective due to the use
of the simple Perceptron algorithm. To overcome these limitations, we propose a
framework for bounded kernel-based online learning based on an online gradient
descent approach. We propose two efficient algorithms of bounded online
gradient descent (BOGD) for scalable kernel-based online learning: (i) BOGD by
maintaining support vectors using uniform sampling, and (ii) BOGD++ by
maintaining support vectors using non-uniform sampling. We present theoretical
analysis of regret bound for both algorithms, and found promising empirical
performance in terms of both efficacy and efficiency by comparing them to
several well-known algorithms for bounded kernel-based online learning on
large-scale datasets.
| [
{
"version": "v1",
"created": "Mon, 18 Jun 2012 15:13:13 GMT"
}
] | 2012-06-22T00:00:00 | [
[
"Zhao",
"Peilin",
"",
"NTU"
],
[
"Wang",
"Jialei",
"",
"NTU"
],
[
"Wu",
"Pengcheng",
"",
"NTU"
],
[
"Jin",
"Rong",
"",
"MSU"
],
[
"Hoi",
"Steven C. H.",
"",
"NTU"
]
] | TITLE: Fast Bounded Online Gradient Descent Algorithms for Scalable
Kernel-Based Online Learning
ABSTRACT: Kernel-based online learning has often shown state-of-the-art performance for
many online learning tasks. It, however, suffers from a major shortcoming, that
is, the unbounded number of support vectors, making it non-scalable and
unsuitable for applications with large-scale datasets. In this work, we study
the problem of bounded kernel-based online learning that aims to constrain the
number of support vectors by a predefined budget. Although several algorithms
have been proposed in the literature, they are neither computationally efficient
due to their intensive budget maintenance strategy nor effective due to the use
of the simple Perceptron algorithm. To overcome these limitations, we propose a
framework for bounded kernel-based online learning based on an online gradient
descent approach. We propose two efficient algorithms of bounded online
gradient descent (BOGD) for scalable kernel-based online learning: (i) BOGD by
maintaining support vectors using uniform sampling, and (ii) BOGD++ by
maintaining support vectors using non-uniform sampling. We present theoretical
analysis of regret bound for both algorithms, and found promising empirical
performance in terms of both efficacy and efficiency by comparing them to
several well-known algorithms for bounded kernel-based online learning on
large-scale datasets.
| no_new_dataset | 0.951188 |
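A rough sketch of the uniform-sampling idea in the BOGD record above: run kernelized online gradient descent on the hinge loss, and when the support-vector budget is exceeded, evict one support vector chosen uniformly at random. The coefficient rescaling factor and other update details below are assumptions, not taken from the paper.

```python
import numpy as np

def rbf(x, z, gamma=0.5):
    return np.exp(-gamma * np.sum((x - z) ** 2))

def bogd_train(X, y, budget=50, eta=0.2, lam=1e-3, gamma=0.5, seed=0):
    """Bounded kernel-based online gradient descent with uniform SV discarding (sketch)."""
    rng = np.random.default_rng(seed)
    sv_x, sv_a = [], []                                   # support vectors, coefficients
    for x, label in zip(X, y):
        f = sum(a * rbf(x, z, gamma) for a, z in zip(sv_a, sv_x))
        sv_a = [(1.0 - eta * lam) * a for a in sv_a]      # regularization shrinkage
        if label * f < 1.0:                               # hinge-loss margin violation
            if len(sv_x) >= budget:                       # budget full: evict one SV
                drop = int(rng.integers(len(sv_x)))       # uniform sampling
                sv_x.pop(drop); sv_a.pop(drop)
                sv_a = [a * budget / (budget - 1.0) for a in sv_a]  # assumed rescaling
            sv_x.append(x); sv_a.append(eta * label)
    return sv_x, sv_a
```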
1206.4635 | Yichuan Tang | Yichuan Tang (University of Toronto), Ruslan Salakhutdinov (University
of Toronto), Geoffrey Hinton (University of Toronto) | Deep Mixtures of Factor Analysers | ICML2012 | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An efficient way to learn deep density models that have many layers of latent
variables is to learn one layer at a time using a model that has only one layer
of latent variables. After learning each layer, samples from the posterior
distributions for that layer are used as training data for learning the next
layer. This approach is commonly used with Restricted Boltzmann Machines, which
are undirected graphical models with a single hidden layer, but it can also be
used with Mixtures of Factor Analysers (MFAs) which are directed graphical
models. In this paper, we present a greedy layer-wise learning algorithm for
Deep Mixtures of Factor Analysers (DMFAs). Even though a DMFA can be converted
to an equivalent shallow MFA by multiplying together the factor loading
matrices at different levels, learning and inference are much more efficient in
a DMFA and the sharing of each lower-level factor loading matrix by many
different higher level MFAs prevents overfitting. We demonstrate empirically
that DMFAs learn better density models than both MFAs and two types of
Restricted Boltzmann Machine on a wide variety of datasets.
| [
{
"version": "v1",
"created": "Mon, 18 Jun 2012 15:14:57 GMT"
}
] | 2012-06-22T00:00:00 | [
[
"Tang",
"Yichuan",
"",
"University of Toronto"
],
[
"Salakhutdinov",
"Ruslan",
"",
"University\n of Toronto"
],
[
"Hinton",
"Geoffrey",
"",
"University of Toronto"
]
] | TITLE: Deep Mixtures of Factor Analysers
ABSTRACT: An efficient way to learn deep density models that have many layers of latent
variables is to learn one layer at a time using a model that has only one layer
of latent variables. After learning each layer, samples from the posterior
distributions for that layer are used as training data for learning the next
layer. This approach is commonly used with Restricted Boltzmann Machines, which
are undirected graphical models with a single hidden layer, but it can also be
used with Mixtures of Factor Analysers (MFAs) which are directed graphical
models. In this paper, we present a greedy layer-wise learning algorithm for
Deep Mixtures of Factor Analysers (DMFAs). Even though a DMFA can be converted
to an equivalent shallow MFA by multiplying together the factor loading
matrices at different levels, learning and inference are much more efficient in
a DMFA and the sharing of each lower-level factor loading matrix by many
different higher level MFAs prevents overfitting. We demonstrate empirically
that DMFAs learn better density models than both MFAs and two types of
Restricted Boltzmann Machine on a wide variety of datasets.
| no_new_dataset | 0.948298 |
1206.4636 | M. Pawan Kumar | M. Pawan Kumar (Ecole Centrale Paris), Ben Packer (Stanford
University), Daphne Koller (Stanford University) | Modeling Latent Variable Uncertainty for Loss-based Learning | ICML2012 | null | null | null | cs.LG cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of parameter estimation using weakly supervised
datasets, where a training sample consists of the input and a partially
specified annotation, which we refer to as the output. The missing information
in the annotation is modeled using latent variables. Previous methods
overburden a single distribution with two separate tasks: (i) modeling the
uncertainty in the latent variables during training; and (ii) making accurate
predictions for the output and the latent variables during testing. We propose
a novel framework that separates the demands of the two tasks using two
distributions: (i) a conditional distribution to model the uncertainty of the
latent variables for a given input-output pair; and (ii) a delta distribution
to predict the output and the latent variables for a given input. During
learning, we encourage agreement between the two distributions by minimizing a
loss-based dissimilarity coefficient. Our approach generalizes latent SVM in
two important ways: (i) it models the uncertainty over latent variables instead
of relying on a pointwise estimate; and (ii) it allows the use of loss
functions that depend on latent variables, which greatly increases its
applicability. We demonstrate the efficacy of our approach on two challenging
problems---object detection and action detection---using publicly available
datasets.
| [
{
"version": "v1",
"created": "Mon, 18 Jun 2012 15:15:13 GMT"
}
] | 2012-06-22T00:00:00 | [
[
"Kumar",
"M. Pawan",
"",
"Ecole Centrale Paris"
],
[
"Packer",
"Ben",
"",
"Stanford\n University"
],
[
"Koller",
"Daphne",
"",
"Stanford University"
]
] | TITLE: Modeling Latent Variable Uncertainty for Loss-based Learning
ABSTRACT: We consider the problem of parameter estimation using weakly supervised
datasets, where a training sample consists of the input and a partially
specified annotation, which we refer to as the output. The missing information
in the annotation is modeled using latent variables. Previous methods
overburden a single distribution with two separate tasks: (i) modeling the
uncertainty in the latent variables during training; and (ii) making accurate
predictions for the output and the latent variables during testing. We propose
a novel framework that separates the demands of the two tasks using two
distributions: (i) a conditional distribution to model the uncertainty of the
latent variables for a given input-output pair; and (ii) a delta distribution
to predict the output and the latent variables for a given input. During
learning, we encourage agreement between the two distributions by minimizing a
loss-based dissimilarity coefficient. Our approach generalizes latent SVM in
two important ways: (i) it models the uncertainty over latent variables instead
of relying on a pointwise estimate; and (ii) it allows the use of loss
functions that depend on latent variables, which greatly increases its
applicability. We demonstrate the efficacy of our approach on two challenging
problems---object detection and action detection---using publicly available
datasets.
| no_new_dataset | 0.946448 |
1206.4644 | Ruijiang Li | Ruijiang Li (Fudan University), Bin Li (University of Technology,
Sydney), Ke Zhang (Fudan Univ.), Cheng Jin (Fudan University), Xiangyang Xue
(Fudan University) | Groupwise Constrained Reconstruction for Subspace Clustering | ICML2012 | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reconstruction based subspace clustering methods compute a self
reconstruction matrix over the samples and use it for spectral clustering to
obtain the final clustering result. Their success largely relies on the
assumption that the underlying subspaces are independent, which, however, does
not always hold in the applications with increasing number of subspaces. In
this paper, we propose a novel reconstruction based subspace clustering model
without making the subspace independence assumption. In our model, certain
properties of the reconstruction matrix are explicitly characterized using the
latent cluster indicators, and the affinity matrix used for spectral clustering
can be directly built from the posterior of the latent cluster indicators
instead of the reconstruction matrix. Experimental results on both synthetic
and real-world datasets show that the proposed model can outperform the
state-of-the-art methods.
| [
{
"version": "v1",
"created": "Mon, 18 Jun 2012 15:19:22 GMT"
}
] | 2012-06-22T00:00:00 | [
[
"Li",
"Ruijiang",
"",
"Fudan University"
],
[
"Li",
"Bin",
"",
"University of Technology,\n Sydney"
],
[
"Zhang",
"Ke",
"",
"Fudan Univ."
],
[
"Jin",
"Cheng",
"",
"Fudan University"
],
[
"Xue",
"Xiangyang",
"",
"Fudan University"
]
] | TITLE: Groupwise Constrained Reconstruction for Subspace Clustering
ABSTRACT: Reconstruction based subspace clustering methods compute a self
reconstruction matrix over the samples and use it for spectral clustering to
obtain the final clustering result. Their success largely relies on the
assumption that the underlying subspaces are independent, which, however, does
not always hold in the applications with increasing number of subspaces. In
this paper, we propose a novel reconstruction based subspace clustering model
without making the subspace independence assumption. In our model, certain
properties of the reconstruction matrix are explicitly characterized using the
latent cluster indicators, and the affinity matrix used for spectral clustering
can be directly built from the posterior of the latent cluster indicators
instead of the reconstruction matrix. Experimental results on both synthetic
and real-world datasets show that the proposed model can outperform the
state-of-the-art methods.
| no_new_dataset | 0.953405 |
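The record above builds on the generic reconstruction-plus-spectral-clustering pipeline it improves upon; a minimal ridge-regularized version of that baseline is sketched below. This is the standard pipeline, not the paper's Bayesian groupwise model, and the regularizer value is an arbitrary assumption.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def reconstruction_subspace_clustering(X, n_clusters, reg=0.1):
    """Baseline pipeline: self-reconstruction matrix -> affinity -> spectral clustering.
    X: (d, n) with one sample per column."""
    n = X.shape[1]
    G = X.T @ X
    C = np.linalg.solve(G + reg * np.eye(n), G)   # argmin ||X - XC||_F^2 + reg*||C||_F^2
    np.fill_diagonal(C, 0.0)                      # discourage trivial self-representation
    A = 0.5 * (np.abs(C) + np.abs(C).T)           # symmetric affinity matrix
    return SpectralClustering(n_clusters=n_clusters,
                              affinity="precomputed").fit_predict(A)
```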
1206.4653 | Maya Gupta | Nathan Parrish (University of Washington), Maya Gupta (University of
Washington) | Dimensionality Reduction by Local Discriminative Gaussians | ICML2012 | null | null | null | cs.LG cs.CV stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present local discriminative Gaussian (LDG) dimensionality reduction, a
supervised dimensionality reduction technique for classification. The LDG
objective function is an approximation to the leave-one-out training error of a
local quadratic discriminant analysis classifier, and thus acts locally to each
training point in order to find a mapping where similar data can be
discriminated from dissimilar data. While other state-of-the-art linear
dimensionality reduction methods require gradient descent or iterative solution
approaches, LDG is solved with a single eigen-decomposition. Thus, it scales
better for datasets with a large number of feature dimensions or training
examples. We also adapt LDG to the transfer learning setting, and show that it
achieves good performance when the test data distribution differs from that of
the training data.
| [
{
"version": "v1",
"created": "Mon, 18 Jun 2012 15:24:49 GMT"
}
] | 2012-06-22T00:00:00 | [
[
"Parrish",
"Nathan",
"",
"University of Washington"
],
[
"Gupta",
"Maya",
"",
"University of\n Washington"
]
] | TITLE: Dimensionality Reduction by Local Discriminative Gaussians
ABSTRACT: We present local discriminative Gaussian (LDG) dimensionality reduction, a
supervised dimensionality reduction technique for classification. The LDG
objective function is an approximation to the leave-one-out training error of a
local quadratic discriminant analysis classifier, and thus acts locally to each
training point in order to find a mapping where similar data can be
discriminated from dissimilar data. While other state-of-the-art linear
dimensionality reduction methods require gradient descent or iterative solution
approaches, LDG is solved with a single eigen-decomposition. Thus, it scales
better for datasets with a large number of feature dimensions or training
examples. We also adapt LDG to the transfer learning setting, and show that it
achieves good performance when the test data distribution differs from that of
the training data.
| no_new_dataset | 0.946001 |
1206.4657 | Elad Hazan | Elad Hazan (Technion), Satyen Kale (IBM T.J. Watson Research Center) | Projection-free Online Learning | ICML2012 | null | null | null | cs.LG cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The computational bottleneck in applying online learning to massive data sets
is usually the projection step. We present efficient online learning algorithms
that eschew projections in favor of much more efficient linear optimization
steps using the Frank-Wolfe technique. We obtain a range of regret bounds for
online convex optimization, with better bounds for specific cases such as
stochastic online smooth convex optimization.
Besides the computational advantage, other desirable features of our
algorithms are that they are parameter-free in the stochastic case and produce
sparse decisions. We apply our algorithms to computationally intensive
applications of collaborative filtering, and show the theoretical improvements
to be clearly visible on standard datasets.
| [
{
"version": "v1",
"created": "Mon, 18 Jun 2012 15:26:34 GMT"
}
] | 2012-06-22T00:00:00 | [
[
"Hazan",
"Elad",
"",
"Technion"
],
[
"Kale",
"Satyen",
"",
"IBM T.J. Watson Research Center"
]
] | TITLE: Projection-free Online Learning
ABSTRACT: The computational bottleneck in applying online learning to massive data sets
is usually the projection step. We present efficient online learning algorithms
that eschew projections in favor of much more efficient linear optimization
steps using the Frank-Wolfe technique. We obtain a range of regret bounds for
online convex optimization, with better bounds for specific cases such as
stochastic online smooth convex optimization.
Besides the computational advantage, other desirable features of our
algorithms are that they are parameter-free in the stochastic case and produce
sparse decisions. We apply our algorithms to computationally intensive
applications of collaborative filtering, and show the theoretical improvements
to be clearly visible on standard datasets.
| no_new_dataset | 0.948728 |
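The projection-free idea in the record above swaps the projection for a linear optimization step. Over a trace-norm ball (the collaborative-filtering case), that step reduces to a top singular vector pair, as in the hedged sketch below; the 2/(t+2) step-size schedule in the usage comment is the usual Frank-Wolfe choice, assumed here rather than taken from the paper.

```python
import numpy as np

def frank_wolfe_step(W, grad, radius, step):
    """One projection-free (Frank-Wolfe) update over the trace-norm ball."""
    u, s, vt = np.linalg.svd(-grad, full_matrices=False)
    S = radius * np.outer(u[:, 0], vt[0])      # argmin_{||S||_* <= radius} <grad, S>
    return (1.0 - step) * W + step * S

# Usage sketch: online loop with the standard 2/(t+2) step size (an assumption).
# for t, grad_t in enumerate(gradients):
#     W = frank_wolfe_step(W, grad_t, radius=10.0, step=2.0 / (t + 2.0))
```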
1206.4659 | Jun Zhu | Jun Zhu (Tsinghua University) | Max-Margin Nonparametric Latent Feature Models for Link Prediction | ICML2012 | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a max-margin nonparametric latent feature model, which unites the
ideas of max-margin learning and Bayesian nonparametrics to discover
discriminative latent features for link prediction and automatically infer the
unknown latent social dimension. By minimizing a hinge-loss using the linear
expectation operator, we can perform posterior inference efficiently without
dealing with a highly nonlinear link likelihood function; by using a
fully-Bayesian formulation, we can avoid tuning regularization constants.
Experimental results on real datasets appear to demonstrate the benefits
inherited from max-margin learning and fully-Bayesian nonparametric inference.
| [
{
"version": "v1",
"created": "Mon, 18 Jun 2012 15:27:56 GMT"
}
] | 2012-06-22T00:00:00 | [
[
"Zhu",
"Jun",
"",
"Tsinghua University"
]
] | TITLE: Max-Margin Nonparametric Latent Feature Models for Link Prediction
ABSTRACT: We present a max-margin nonparametric latent feature model, which unites the
ideas of max-margin learning and Bayesian nonparametrics to discover
discriminative latent features for link prediction and automatically infer the
unknown latent social dimension. By minimizing a hinge-loss using the linear
expectation operator, we can perform posterior inference efficiently without
dealing with a highly nonlinear link likelihood function; by using a
fully-Bayesian formulation, we can avoid tuning regularization constants.
Experimental results on real datasets appear to demonstrate the benefits
inherited from max-margin learning and fully-Bayesian nonparametric inference.
| no_new_dataset | 0.946498 |
1206.4660 | Lixin Duan | Lixin Duan (Nanyang Technological University), Dong Xu (Nanyang
Technological University), Ivor Tsang (Nanyang Technological University) | Learning with Augmented Features for Heterogeneous Domain Adaptation | ICML2012 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a new learning method for heterogeneous domain adaptation (HDA),
in which the data from the source domain and the target domain are represented
by heterogeneous features with different dimensions. Using two different
projection matrices, we first transform the data from two domains into a common
subspace in order to measure the similarity between the data from two domains.
We then propose two new feature mapping functions to augment the transformed
data with their original features and zeros. The existing learning methods
(e.g., SVM and SVR) can be readily incorporated with our newly proposed
augmented feature representations to effectively utilize the data from both
domains for HDA. Using the hinge loss function in SVM as an example, we
introduce the detailed objective function in our method called Heterogeneous
Feature Augmentation (HFA) for a linear case and also describe its
kernelization in order to efficiently cope with the data with very high
dimensions. Moreover, we also develop an alternating optimization algorithm to
effectively solve the nontrivial optimization problem in our HFA method.
Comprehensive experiments on two benchmark datasets clearly demonstrate that
HFA outperforms the existing HDA methods.
| [
{
"version": "v1",
"created": "Mon, 18 Jun 2012 15:28:12 GMT"
}
] | 2012-06-22T00:00:00 | [
[
"Duan",
"Lixin",
"",
"Nanyang Technological University"
],
[
"Xu",
"Dong",
"",
"Nanyang\n Technological University"
],
[
"Tsang",
"Ivor",
"",
"Nanyang Technological University"
]
] | TITLE: Learning with Augmented Features for Heterogeneous Domain Adaptation
ABSTRACT: We propose a new learning method for heterogeneous domain adaptation (HDA),
in which the data from the source domain and the target domain are represented
by heterogeneous features with different dimensions. Using two different
projection matrices, we first transform the data from two domains into a common
subspace in order to measure the similarity between the data from two domains.
We then propose two new feature mapping functions to augment the transformed
data with their original features and zeros. The existing learning methods
(e.g., SVM and SVR) can be readily incorporated with our newly proposed
augmented feature representations to effectively utilize the data from both
domains for HDA. Using the hinge loss function in SVM as an example, we
introduce the detailed objective function in our method called Heterogeneous
Feature Augmentation (HFA) for a linear case and also describe its
kernelization in order to efficiently cope with the data with very high
dimensions. Moreover, we also develop an alternating optimization algorithm to
effectively solve the nontrivial optimization problem in our HFA method.
Comprehensive experiments on two benchmark datasets clearly demonstrate that
HFA outperforms the existing HDA methods.
| no_new_dataset | 0.945801 |
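The augmented feature maps described in the HFA record above are easy to write down once the projections P and Q are given; in HFA they are learned by the alternating optimization, whereas here they are fixed inputs, which is the main simplification. A standard SVM can then be trained on the stacked augmented source and target data.

```python
import numpy as np

def augment_source(Xs, P, d_t):
    """Source map: x_s -> [P x_s ; x_s ; 0_{d_t}].  Xs: (n, d_s), P: (d_c, d_s)."""
    return np.hstack([Xs @ P.T, Xs, np.zeros((Xs.shape[0], d_t))])

def augment_target(Xt, Q, d_s):
    """Target map: x_t -> [Q x_t ; 0_{d_s} ; x_t].  Xt: (n, d_t), Q: (d_c, d_t)."""
    return np.hstack([Xt @ Q.T, np.zeros((Xt.shape[0], d_s)), Xt])

# Usage sketch (P, Q fixed for illustration; HFA learns them):
# Z = np.vstack([augment_source(Xs, P, Xt.shape[1]), augment_target(Xt, Q, Xs.shape[1])])
# labels = np.concatenate([ys, yt]); then fit any linear SVM on (Z, labels).
```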
1206.4672 | Akshay Krishnamurthy | Akshay Krishnamurthy (Carnegie Mellon University), Sivaraman
Balakrishnan (Carnegie Mellon University), Min Xu (Carnegie Mellon
University), Aarti Singh (Carnegie Mellon University) | Efficient Active Algorithms for Hierarchical Clustering | ICML2012 | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Advances in sensing technologies and the growth of the internet have resulted
in an explosion in the size of modern datasets, while storage and processing
power continue to lag behind. This motivates the need for algorithms that are
efficient, both in terms of the number of measurements needed and running time.
To combat the challenges associated with large datasets, we propose a general
framework for active hierarchical clustering that repeatedly runs an
off-the-shelf clustering algorithm on small subsets of the data and comes with
guarantees on performance, measurement complexity and runtime complexity. We
instantiate this framework with a simple spectral clustering algorithm and
provide concrete results on its performance, showing that, under some
assumptions, this algorithm recovers all clusters of size Ω(log n) using O(n
log^2 n) similarities and runs in O(n log^3 n) time for a dataset of n objects.
Through extensive experimentation we also demonstrate that this framework is
practically alluring.
| [
{
"version": "v1",
"created": "Mon, 18 Jun 2012 15:35:20 GMT"
}
] | 2012-06-22T00:00:00 | [
[
"Krishnamurthy",
"Akshay",
"",
"Carnegie Mellon University"
],
[
"Balakrishnan",
"Sivaraman",
"",
"Carnegie Mellon University"
],
[
"Xu",
"Min",
"",
"Carnegie Mellon\n University"
],
[
"Singh",
"Aarti",
"",
"Carnegie Mellon University"
]
] | TITLE: Efficient Active Algorithms for Hierarchical Clustering
ABSTRACT: Advances in sensing technologies and the growth of the internet have resulted
in an explosion in the size of modern datasets, while storage and processing
power continue to lag behind. This motivates the need for algorithms that are
efficient, both in terms of the number of measurements needed and running time.
To combat the challenges associated with large datasets, we propose a general
framework for active hierarchical clustering that repeatedly runs an
off-the-shelf clustering algorithm on small subsets of the data and comes with
guarantees on performance, measurement complexity and runtime complexity. We
instantiate this framework with a simple spectral clustering algorithm and
provide concrete results on its performance, showing that, under some
assumptions, this algorithm recovers all clusters of size Ω(log n) using O(n
log^2 n) similarities and runs in O(n log^3 n) time for a dataset of n objects.
Through extensive experimentation we also demonstrate that this framework is
practically alluring.
| no_new_dataset | 0.955026 |
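A toy illustration of the active framework in the record above: cluster a small random subsample with an off-the-shelf spectral method, then assign every remaining object to the subsample cluster it is most similar to, and recurse on each side to grow the hierarchy. The subsample size and the mean-similarity assignment rule are assumptions; this is not the paper's exact procedure or analysis.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def active_split(similarity, rng, m=64):
    """One split: spectral clustering on a random subsample, then similarity-based
    assignment of all objects (sketch of the general framework only)."""
    n = similarity.shape[0]
    sub = rng.choice(n, size=min(m, n), replace=False)
    sub_labels = SpectralClustering(n_clusters=2, affinity="precomputed").fit_predict(
        similarity[np.ix_(sub, sub)])
    labels = np.empty(n, dtype=int)
    for i in range(n):
        scores = [similarity[i, sub[sub_labels == c]].mean() for c in (0, 1)]
        labels[i] = int(np.argmax(scores))
    return labels   # recurse on each side to build the hierarchy
```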
1206.4673 | Junming Yin | Junming Yin (Carnegie Mellon University), Xi Chen (Carnegie Mellon
University), Eric Xing (Carnegie Mellon University) | Group Sparse Additive Models | ICML2012 | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of sparse variable selection in nonparametric
additive models, with the prior knowledge of the structure among the covariates
to encourage those variables within a group to be selected jointly. Previous
works either study the group sparsity in the parametric setting (e.g., group
lasso), or address the problem in the non-parametric setting without exploiting
the structural information (e.g., sparse additive models). In this paper, we
present a new method, called group sparse additive models (GroupSpAM), which
can handle group sparsity in additive models. We generalize the l1/l2 norm to
Hilbert spaces as the sparsity-inducing penalty in GroupSpAM. Moreover, we
derive a novel thresholding condition for identifying the functional sparsity
at the group level, and propose an efficient block coordinate descent algorithm
for constructing the estimate. We demonstrate by simulation that GroupSpAM
substantially outperforms the competing methods in terms of support recovery
and prediction accuracy in additive models, and also conduct a comparative
experiment on a real breast cancer dataset.
| [
{
"version": "v1",
"created": "Mon, 18 Jun 2012 15:35:38 GMT"
}
] | 2012-06-22T00:00:00 | [
[
"Yin",
"Junming",
"",
"Carnegie Mellon University"
],
[
"Chen",
"Xi",
"",
"Carnegie Mellon\n University"
],
[
"Xing",
"Eric",
"",
"Carnegie Mellon University"
]
] | TITLE: Group Sparse Additive Models
ABSTRACT: We consider the problem of sparse variable selection in nonparametric
additive models, with the prior knowledge of the structure among the covariates
to encourage those variables within a group to be selected jointly. Previous
works either study the group sparsity in the parametric setting (e.g., group
lasso), or address the problem in the non-parametric setting without exploiting
the structural information (e.g., sparse additive models). In this paper, we
present a new method, called group sparse additive models (GroupSpAM), which
can handle group sparsity in additive models. We generalize the l1/l2 norm to
Hilbert spaces as the sparsity-inducing penalty in GroupSpAM. Moreover, we
derive a novel thresholding condition for identifying the functional sparsity
at the group level, and propose an efficient block coordinate descent algorithm
for constructing the estimate. We demonstrate by simulation that GroupSpAM
substantially outperforms the competing methods in terms of support recovery
and prediction accuracy in additive models, and also conduct a comparative
experiment on a real breast cancer dataset.
| no_new_dataset | 0.947088 |
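GroupSpAM's block coordinate descent operates on smooth functions; its finite-dimensional analogue, the ordinary group lasso, makes the group soft-thresholding step easy to see. The sketch below assumes roughly normalized, near-orthonormal group blocks (otherwise the closed-form block update is only approximate) and is not the paper's functional algorithm.

```python
import numpy as np

def group_lasso_bcd(X, y, groups, lam, n_iter=200):
    """Block coordinate descent for the parametric group lasso:
    min_w 1/(2n) ||y - Xw||^2 + lam * sum_g ||w_g||_2."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        for g in groups:                            # g: array of column indices
            r = y - X @ w + X[:, g] @ w[g]          # partial residual without group g
            z = X[:, g].T @ r / n
            norm_z = np.linalg.norm(z)
            # group soft-thresholding (exact when X_g^T X_g = n*I, else approximate)
            w[g] = 0.0 if norm_z <= lam else (1.0 - lam / norm_z) * z
    return w
```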
1206.4674 | Stratis Ioannidis | Amin Karbasi (EPFL), Stratis Ioannidis (Technicolor), Laurent
Massoulie (Technicolor) | Comparison-Based Learning with Rank Nets | ICML2012 | null | null | null | cs.LG cs.DS stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of search through comparisons, where a user is
presented with two candidate objects and reveals which is closer to her
intended target. We study adaptive strategies for finding the target, that
require knowledge of rank relationships but not actual distances between
objects. We propose a new strategy based on rank nets, and show that for target
distributions with a bounded doubling constant, it finds the target in a number
of comparisons close to the entropy of the target distribution and, hence, of
the optimum. We extend these results to the case of noisy oracles, and compare
this strategy to prior art over multiple datasets.
| [
{
"version": "v1",
"created": "Mon, 18 Jun 2012 15:36:16 GMT"
}
] | 2012-06-22T00:00:00 | [
[
"Karbasi",
"Amin",
"",
"EPFL"
],
[
"Ioannidis",
"Stratis",
"",
"Technicolor"
],
[
"Massoulie",
"laurent",
"",
"Technicolor"
]
] | TITLE: Comparison-Based Learning with Rank Nets
ABSTRACT: We consider the problem of search through comparisons, where a user is
presented with two candidate objects and reveals which is closer to her
intended target. We study adaptive strategies for finding the target, that
require knowledge of rank relationships but not actual distances between
objects. We propose a new strategy based on rank nets, and show that for target
distributions with a bounded doubling constant, it finds the target in a number
of comparisons close to the entropy of the target distribution and, hence, of
the optimum. We extend these results to the case of noisy oracles, and compare
this strategy to prior art over multiple datasets.
| no_new_dataset | 0.950915 |
1206.4676 | Zhirong Yang | Zhirong Yang (Aalto University), Erkki Oja (Aalto University) | Clustering by Low-Rank Doubly Stochastic Matrix Decomposition | ICML2012 | null | null | null | cs.LG cs.CV cs.NA stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Clustering analysis by nonnegative low-rank approximations has achieved
remarkable progress in the past decade. However, most approximation approaches
in this direction are still restricted to matrix factorization. We propose a
new low-rank learning method to improve the clustering performance, which is
beyond matrix factorization. The approximation is based on a two-step bipartite
random walk through virtual cluster nodes, where the approximation is formed by
only cluster assigning probabilities. Minimizing the approximation error
measured by Kullback-Leibler divergence is equivalent to maximizing the
likelihood of a discriminative model, which endows our method with a solid
probabilistic interpretation. The optimization is implemented by a relaxed
Majorization-Minimization algorithm that is advantageous in finding good local
minima. Furthermore, we point out that the regularized algorithm with Dirichlet
prior only serves as initialization. Experimental results show that the new
method has strong performance in clustering purity for various datasets,
especially for large-scale manifold data.
| [
{
"version": "v1",
"created": "Mon, 18 Jun 2012 15:36:49 GMT"
}
] | 2012-06-22T00:00:00 | [
[
"Yang",
"Zhirong",
"",
"Aalto University"
],
[
"Oja",
"Erkki",
"",
"Aalto University"
]
] | TITLE: Clustering by Low-Rank Doubly Stochastic Matrix Decomposition
ABSTRACT: Clustering analysis by nonnegative low-rank approximations has achieved
remarkable progress in the past decade. However, most approximation approaches
in this direction are still restricted to matrix factorization. We propose a
new low-rank learning method to improve the clustering performance, which is
beyond matrix factorization. The approximation is based on a two-step bipartite
random walk through virtual cluster nodes, where the approximation is formed by
only cluster assigning probabilities. Minimizing the approximation error
measured by Kullback-Leibler divergence is equivalent to maximizing the
likelihood of a discriminative model, which endows our method with a solid
probabilistic interpretation. The optimization is implemented by a relaxed
Majorization-Minimization algorithm that is advantageous in finding good local
minima. Furthermore, we point out that the regularized algorithm with Dirichlet
prior only serves as initialization. Experimental results show that the new
method has strong performance in clustering purity for various datasets,
especially for large-scale manifold data.
| no_new_dataset | 0.9455 |
1206.4677 | Marthinus Du Plessis | Marthinus Du Plessis (Tokyo Institute of Technology), Masashi Sugiyama
(Tokyo Institute of Technology) | Semi-Supervised Learning of Class Balance under Class-Prior Change by
Distribution Matching | ICML2012 | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In real-world classification problems, the class balance in the training
dataset does not necessarily reflect that of the test dataset, which can cause
significant estimation bias. If the class ratio of the test dataset is known,
instance re-weighting or resampling allows systematic bias correction.
However, learning the class ratio of the test dataset is challenging when no
labeled data is available from the test domain. In this paper, we propose to
estimate the class ratio in the test dataset by matching probability
distributions of training and test input data. We demonstrate the utility of
the proposed approach through experiments.
| [
{
"version": "v1",
"created": "Mon, 18 Jun 2012 15:37:07 GMT"
}
] | 2012-06-22T00:00:00 | [
[
"Plessis",
"Marthinus Du",
"",
"Tokyo Institute of Technology"
],
[
"Sugiyama",
"Masashi",
"",
"Tokyo Institute of Technology"
]
] | TITLE: Semi-Supervised Learning of Class Balance under Class-Prior Change by
Distribution Matching
ABSTRACT: In real-world classification problems, the class balance in the training
dataset does not necessarily reflect that of the test dataset, which can cause
significant estimation bias. If the class ratio of the test dataset is known,
instance re-weighting or resampling allows systematic bias correction.
However, learning the class ratio of the test dataset is challenging when no
labeled data is available from the test domain. In this paper, we propose to
estimate the class ratio in the test dataset by matching probability
distributions of training and test input data. We demonstrate the utility of
the proposed approach through experiments.
| no_new_dataset | 0.951051 |
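A hedged sketch of the class-prior estimation idea in the record above: model each class-conditional density on the labeled training data, then choose the positive-class prior whose mixture best matches the unlabeled test inputs. The matching criterion below (test log-likelihood under class-conditional KDEs over a grid of priors) is a simpler stand-in for the paper's distribution-matching objective, and the bandwidth is an arbitrary assumption.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

def estimate_test_class_ratio(X_tr, y_tr, X_te, bandwidth=0.5):
    """Grid-search the test-set positive-class prior pi by matching
    pi*p(x|+) + (1-pi)*p(x|-) to the unlabeled test distribution."""
    kde_pos = KernelDensity(bandwidth=bandwidth).fit(X_tr[y_tr == 1])
    kde_neg = KernelDensity(bandwidth=bandwidth).fit(X_tr[y_tr == 0])
    p_pos = np.exp(kde_pos.score_samples(X_te))
    p_neg = np.exp(kde_neg.score_samples(X_te))
    grid = np.linspace(0.01, 0.99, 99)
    ll = [np.sum(np.log(pi * p_pos + (1 - pi) * p_neg + 1e-300)) for pi in grid]
    return grid[int(np.argmax(ll))]
```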
1206.4680 | Mikhail Bilenko | Hoyt Koepke (University of Washington), Mikhail Bilenko (Microsoft
Research) | Fast Prediction of New Feature Utility | ICML2012 | null | null | null | cs.LG math.ST stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the new feature utility prediction problem: statistically testing
whether adding a new feature to the data representation can improve predictive
accuracy on a supervised learning task. In many applications, identifying new
informative features is the primary pathway for improving performance. However,
evaluating every potential feature by re-training the predictor with it can be
costly. The paper describes an efficient, learner-independent technique for
estimating new feature utility without re-training based on the current
predictor's outputs. The method is obtained by deriving a connection between
loss reduction potential and the new feature's correlation with the loss
gradient of the current predictor. This leads to a simple yet powerful
hypothesis testing procedure, for which we prove consistency. Our theoretical
analysis is accompanied by empirical evaluation on standard benchmarks and a
large-scale industrial dataset.
| [
{
"version": "v1",
"created": "Mon, 18 Jun 2012 15:38:18 GMT"
}
] | 2012-06-22T00:00:00 | [
[
"Koepke",
"Hoyt",
"",
"University of Washington"
],
[
"Bilenko",
"Mikhail",
"",
"Microsoft\n Research"
]
] | TITLE: Fast Prediction of New Feature Utility
ABSTRACT: We study the new feature utility prediction problem: statistically testing
whether adding a new feature to the data representation can improve predictive
accuracy on a supervised learning task. In many applications, identifying new
informative features is the primary pathway for improving performance. However,
evaluating every potential feature by re-training the predictor with it can be
costly. The paper describes an efficient, learner-independent technique for
estimating new feature utility without re-training based on the current
predictor's outputs. The method is obtained by deriving a connection between
loss reduction potential and the new feature's correlation with the loss
gradient of the current predictor. This leads to a simple yet powerful
hypothesis testing procedure, for which we prove consistency. Our theoretical
analysis is accompanied by empirical evaluation on standard benchmarks and a
large-scale industrial dataset.
| no_new_dataset | 0.946001 |
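The core test in the record above can be approximated for squared loss: gauge a candidate feature's utility by its correlation with the current model's residuals (the negative loss gradient), with a significance test attached. The snippet below is that squared-loss special case, not the general learner-independent procedure from the paper.

```python
import numpy as np
from scipy import stats

def new_feature_utility(candidate, y_true, y_pred):
    """Score a candidate feature by correlating it with the current predictor's
    residuals (the loss gradient for squared loss); returns |r| and its p-value."""
    residuals = y_true - y_pred
    r, p_value = stats.pearsonr(candidate, residuals)
    return abs(r), p_value

# A small p-value with a non-trivial |r| suggests retraining with the feature may help.
```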
1206.4684 | Sanjay Purushotham | Sanjay Purushotham (Univ. of Southern California), Yan Liu (Univ. of
Southern California), C.-C. Jay Kuo (Univ. of Southern California) | Collaborative Topic Regression with Social Matrix Factorization for
Recommendation Systems | ICML2012 | null | null | null | cs.IR cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Social network websites, such as Facebook, YouTube, Lastfm etc, have become a
popular platform for users to connect with each other and share content or
opinions. They provide rich information for us to study the influence of user's
social circle in their decision process. In this paper, we are interested in
examining the effectiveness of social network information to predict the user's
ratings of items. We propose a novel hierarchical Bayesian model which jointly
incorporates topic modeling and probabilistic matrix factorization of social
networks. A major advantage of our model is to automatically infer useful
latent topics and social information as well as their importance to
collaborative filtering from the training data. Empirical experiments on two
large-scale datasets show that our algorithm provides a more effective
recommendation system than the state-of-the-art approaches. Our results reveal
interesting insight that the social circles have more influence on people's
decisions about the usefulness of information (e.g., bookmarking preference on
Delicious) than personal taste (e.g., music preference on Lastfm). We also
examine and discuss solutions on potential information leak in many
recommendation systems that utilize social information.
| [
{
"version": "v1",
"created": "Mon, 18 Jun 2012 15:41:06 GMT"
}
] | 2012-06-22T00:00:00 | [
[
"Purushotham",
"Sanjay",
"",
"Univ. of Southern California"
],
[
"Liu",
"Yan",
"",
"Univ. of\n Southern California"
],
[
"Kuo",
"C. -C. Jay",
"",
"Univ. of Southern California"
]
] | TITLE: Collaborative Topic Regression with Social Matrix Factorization for
Recommendation Systems
ABSTRACT: Social network websites, such as Facebook, YouTube, Lastfm etc, have become a
popular platform for users to connect with each other and share content or
opinions. They provide rich information for us to study the influence of user's
social circle in their decision process. In this paper, we are interested in
examining the effectiveness of social network information to predict the user's
ratings of items. We propose a novel hierarchical Bayesian model which jointly
incorporates topic modeling and probabilistic matrix factorization of social
networks. A major advantage of our model is to automatically infer useful
latent topics and social information as well as their importance to
collaborative filtering from the training data. Empirical experiments on two
large-scale datasets show that our algorithm provides a more effective
recommendation system than the state-of-the-art approaches. Our results reveal
the interesting insight that social circles have more influence on people's
decisions about the usefulness of information (e.g., bookmarking preference on
Delicious) than personal taste (e.g., music preference on Lastfm). We also
examine and discuss solutions to potential information leaks in many
recommendation systems that utilize social information.
| no_new_dataset | 0.945951 |
1206.4685 | Yan Liu | Yan Liu (USC), Taha Bahadori (USC), Hongfei Li (IBM T.J. Watson
Research Center) | Sparse-GEV: Sparse Latent Space Model for Multivariate Extreme Value
Time Series Modeling | ICML2012 | null | null | null | stat.ME cs.LG stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In many applications of time series models, such as climate analysis and
social media analysis, we are often interested in extreme events, such as
heatwave, wind gust, and burst of topics. These time series data usually
exhibit a heavy-tailed distribution rather than a Gaussian distribution. This
poses great challenges to existing approaches due to the significantly
different assumptions on the data distributions and the lack of sufficient past
data on extreme events. In this paper, we propose the Sparse-GEV model, a
latent state model based on the theory of extreme value modeling to
automatically learn sparse temporal dependence and make predictions. Our model
is theoretically significant because it is among the first models to learn
sparse temporal dependencies among multivariate extreme value time series. We
demonstrate the superior performance of our algorithm over state-of-the-art
methods, including Granger causality, copula approach, and transfer entropy, on
one synthetic dataset, one climate dataset and two Twitter datasets.
| [
{
"version": "v1",
"created": "Mon, 18 Jun 2012 15:42:15 GMT"
}
] | 2012-06-22T00:00:00 | [
[
"Liu",
"Yan",
"",
"USC"
],
[
"Bahadori",
"Taha",
"",
"USC"
],
[
"Li",
"Hongfei",
"",
"IBM T.J. Watson\n Research Center"
]
] | TITLE: Sparse-GEV: Sparse Latent Space Model for Multivariate Extreme Value
Time Series Modeling
ABSTRACT: In many applications of time series models, such as climate analysis and
social media analysis, we are often interested in extreme events, such as
heatwave, wind gust, and burst of topics. These time series data usually
exhibit a heavy-tailed distribution rather than a Gaussian distribution. This
poses great challenges to existing approaches due to the significantly
different assumptions on the data distributions and the lack of sufficient past
data on extreme events. In this paper, we propose the Sparse-GEV model, a
latent state model based on the theory of extreme value modeling to
automatically learn sparse temporal dependence and make predictions. Our model
is theoretically significant because it is among the first models to learn
sparse temporal dependencies among multivariate extreme value time series. We
demonstrate the superior performance of our algorithm over state-of-the-art
methods, including Granger causality, copula approach, and transfer entropy, on
one synthetic dataset, one climate dataset and two Twitter datasets.
| no_new_dataset | 0.951953 |
1206.4952 | Nesreen Ahmed | Nesreen K. Ahmed, Jennifer Neville, Ramana Kompella | Space-Efficient Sampling from Social Activity Streams | BigMine 2012 | null | null | null | cs.SI cs.DB physics.soc-ph stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In order to efficiently study the characteristics of network domains and
support development of network systems (e.g. algorithms, protocols that operate
on networks), it is often necessary to sample a representative subgraph from a
large complex network. Although recent subgraph sampling methods have been
shown to work well, they focus on sampling from memory-resident graphs and
assume that the sampling algorithm can access the entire graph in order to
decide which nodes/edges to select. Many large-scale network datasets, however,
are too large and/or dynamic to be processed using main memory (e.g., email,
tweets, wall posts). In this work, we formulate the problem of sampling from
large graph streams. We propose a streaming graph sampling algorithm that
dynamically maintains a representative sample in a reservoir-based setting. We
evaluate the efficacy of our proposed methods empirically using several
real-world data sets. Across all datasets, we found that our method produces
samples that better preserve the original graph distributions.
| [
{
"version": "v1",
"created": "Wed, 20 Jun 2012 04:55:20 GMT"
}
] | 2012-06-22T00:00:00 | [
[
"Ahmed",
"Nesreen K.",
""
],
[
"Neville",
"Jennifer",
""
],
[
"Kompella",
"Ramana",
""
]
] | TITLE: Space-Efficient Sampling from Social Activity Streams
ABSTRACT: In order to efficiently study the characteristics of network domains and
support development of network systems (e.g. algorithms, protocols that operate
on networks), it is often necessary to sample a representative subgraph from a
large complex network. Although recent subgraph sampling methods have been
shown to work well, they focus on sampling from memory-resident graphs and
assume that the sampling algorithm can access the entire graph in order to
decide which nodes/edges to select. Many large-scale network datasets, however,
are too large and/or dynamic to be processed using main memory (e.g., email,
tweets, wall posts). In this work, we formulate the problem of sampling from
large graph streams. We propose a streaming graph sampling algorithm that
dynamically maintains a representative sample in a reservoir-based setting. We
evaluate the efficacy of our proposed methods empirically using several
real-world data sets. Across all datasets, we found that our method produces
samples that better preserve the original graph distributions.
| no_new_dataset | 0.948822 |
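
The abstract above describes maintaining a representative sample of a graph stream in a reservoir-based setting. The sketch below shows plain reservoir sampling over an edge stream as an assumed simplification; the paper's method additionally aims to preserve graph-level distributions, which this toy version does not attempt.

```python
import random

def reservoir_edge_sample(edge_stream, k, seed=42):
    """Keep a uniform sample of k edges from a (possibly unbounded) edge stream."""
    rng = random.Random(seed)
    reservoir = []
    for i, edge in enumerate(edge_stream):
        if i < k:
            reservoir.append(edge)
        else:
            j = rng.randint(0, i)      # keep the new edge with probability k/(i+1)
            if j < k:
                reservoir[j] = edge
    return reservoir

edges = ((u, u + 1) for u in range(10_000))   # toy edge stream (a generator)
sample = reservoir_edge_sample(edges, k=100)
print(len(sample), sample[:3])
```
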
1206.4110 | Duc Son Pham | Truyen T. Tran and Duc Son Pham | ConeRANK: Ranking as Learning Generalized Inequalities | null | null | null | null | cs.LG cs.IR | http://creativecommons.org/licenses/by/3.0/ | We propose a new data mining approach in ranking documents based on the
concept of cone-based generalized inequalities between vectors. A partial
ordering between two vectors is made with respect to a proper cone and thus
learning the preferences is formulated as learning proper cones. A pairwise
learning-to-rank algorithm (ConeRank) is proposed to learn a non-negative
subspace, formulated as a polyhedral cone, over document-pair differences. The
algorithm is regularized by controlling the `volume' of the cone. The
experimental studies on the latest and largest ranking dataset LETOR 4.0 show
that ConeRank is competitive against other recent ranking approaches.
| [
{
"version": "v1",
"created": "Tue, 19 Jun 2012 02:24:55 GMT"
}
] | 2012-06-21T00:00:00 | [
[
"Tran",
"Truyen T.",
""
],
[
"Pham",
"Duc Son",
""
]
] | TITLE: ConeRANK: Ranking as Learning Generalized Inequalities
ABSTRACT: We propose a new data mining approach in ranking documents based on the
concept of cone-based generalized inequalities between vectors. A partial
ordering between two vectors is made with respect to a proper cone and thus
learning the preferences is formulated as learning proper cones. A pairwise
learning-to-rank algorithm (ConeRank) is proposed to learn a non-negative
subspace, formulated as a polyhedral cone, over document-pair differences. The
algorithm is regularized by controlling the `volume' of the cone. The
experimental studies on the latest and largest ranking dataset LETOR 4.0 show
that ConeRank is competitive against other recent ranking approaches.
| no_new_dataset | 0.943556 |
1206.4329 | Sudarshan Nandy | Sudarshan Nandy, Partha Pratim Sarkar and Achintya Das | An Improved Gauss-Newtons Method based Back-propagation Algorithm for
Fast Convergence | 7 pages, 6 figures,2 tables, Published with International Journal of
Computer Applications (IJCA) | International Journal of Computer Applications 39(8):1-7, February
2012. Published by Foundation of Computer Science, New York, USA | 10.5120/4837-7097 | null | cs.AI cs.NA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The present work deals with an improved back-propagation algorithm based on
Gauss-Newton numerical optimization method for fast convergence. The steepest
descent method is used for the back-propagation. The algorithm is tested using
various datasets and compared with the steepest descent back-propagation
algorithm. In the system, optimization is carried out using multilayer neural
network. The efficacy of the proposed method is observed during the training
period as it converges quickly for the dataset used in test. The requirement of
memory for computing the steps of algorithm is also analyzed.
| [
{
"version": "v1",
"created": "Tue, 19 Jun 2012 20:20:56 GMT"
}
] | 2012-06-21T00:00:00 | [
[
"Nandy",
"Sudarshan",
""
],
[
"Sarkar",
"Partha Pratim",
""
],
[
"Das",
"Achintya",
""
]
] | TITLE: An Improved Gauss-Newtons Method based Back-propagation Algorithm for
Fast Convergence
ABSTRACT: The present work deals with an improved back-propagation algorithm based on
Gauss-Newton numerical optimization method for fast convergence. The steepest
descent method is used for the back-propagation. The algorithm is tested using
various datasets and compared with the steepest descent back-propagation
algorithm. In the system, optimization is carried out using multilayer neural
network. The efficacy of the proposed method is observed during the training
period as it converges quickly for the dataset used in test. The requirement of
memory for computing the steps of algorithm is also analyzed.
| no_new_dataset | 0.948442 |
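
The abstract above builds on the Gauss-Newton step for least-squares objectives. As a reference point, here is a minimal Gauss-Newton iteration on a toy curve-fitting problem; the damping term is an added assumption for numerical stability and is not taken from the paper.

```python
import numpy as np

def gauss_newton_step(w, residual_fn, jacobian_fn, damping=1e-6):
    """Return w - (J^T J + damping * I)^{-1} J^T r for residuals r and Jacobian J."""
    r = residual_fn(w)
    J = jacobian_fn(w)
    H = J.T @ J + damping * np.eye(len(w))
    return w - np.linalg.solve(H, J.T @ r)

# Toy usage: fit y = a*x + b; the linear case converges essentially in one step.
x = np.linspace(0.0, 1.0, 50)
y = 3.0 * x + 1.0
residual = lambda w: w[0] * x + w[1] - y
jacobian = lambda w: np.stack([x, np.ones_like(x)], axis=1)
w = np.zeros(2)
for _ in range(3):
    w = gauss_newton_step(w, residual, jacobian)
print(np.round(w, 4))   # approximately [3. 1.]
```
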
1206.4116 | Makoto Yamada | Makoto Yamada, Leonid Sigal, Michalis Raptis, Masashi Sugiyama | Dependence Maximizing Temporal Alignment via Squared-Loss Mutual
Information | 11 pages | null | null | null | stat.ML cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The goal of temporal alignment is to establish time correspondence between
two sequences, which has many applications in a variety of areas such as speech
processing, bioinformatics, computer vision, and computer graphics. In this
paper, we propose a novel temporal alignment method called least-squares
dynamic time warping (LSDTW). LSDTW finds an alignment that maximizes
statistical dependency between sequences, measured by a squared-loss variant of
mutual information. The benefit of this novel information-theoretic formulation
is that LSDTW can align sequences with different lengths, different
dimensionality, high non-linearity, and non-Gaussianity in a computationally
efficient manner. In addition, model parameters such as an initial alignment
matrix can be systematically optimized by cross-validation. We demonstrate the
usefulness of LSDTW through experiments on synthetic and real-world Kinect
action recognition datasets.
| [
{
"version": "v1",
"created": "Tue, 19 Jun 2012 03:35:52 GMT"
}
] | 2012-06-20T00:00:00 | [
[
"Yamada",
"Makoto",
""
],
[
"Sigal",
"Leonid",
""
],
[
"Raptis",
"Michalis",
""
],
[
"Sugiyama",
"Masashi",
""
]
] | TITLE: Dependence Maximizing Temporal Alignment via Squared-Loss Mutual
Information
ABSTRACT: The goal of temporal alignment is to establish time correspondence between
two sequences, which has many applications in a variety of areas such as speech
processing, bioinformatics, computer vision, and computer graphics. In this
paper, we propose a novel temporal alignment method called least-squares
dynamic time warping (LSDTW). LSDTW finds an alignment that maximizes
statistical dependency between sequences, measured by a squared-loss variant of
mutual information. The benefit of this novel information-theoretic formulation
is that LSDTW can align sequences with different lengths, different
dimensionality, high non-linearity, and non-Gaussianity in a computationally
efficient manner. In addition, model parameters such as an initial alignment
matrix can be systematically optimized by cross-validation. We demonstrate the
usefulness of LSDTW through experiments on synthetic and real-world Kinect
action recognition datasets.
| no_new_dataset | 0.952838 |
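
LSDTW, as described above, replaces the distance objective of dynamic time warping with a squared-loss mutual information criterion. The sketch below implements only the classic DTW baseline the method departs from, to make the alignment recursion concrete; the SMI-based objective itself is not reproduced here.

```python
import numpy as np

def dtw(a, b):
    """DTW cost between 1-D sequences a and b with absolute-difference ground cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Two sequences of different lengths tracing the same curve align cheaply.
print(round(dtw(np.sin(np.linspace(0, 6, 60)), np.sin(np.linspace(0, 6, 80))), 3))
```
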
1205.4378 | Yu Zheng | Yin Zhu, Yu Zheng, Liuhang Zhang, Darshan Santani, Xing Xie, Qiang
Yang | Inferring Taxi Status Using GPS Trajectories | null | null | null | null | cs.AI cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we infer the statuses of a taxi, consisting of occupied,
non-occupied and parked, in terms of its GPS trajectory. The status information
can enable urban computing for improving a city's transportation systems and
land use planning. In our solution, we first identify and extract a set of
effective features incorporating the knowledge of a single trajectory,
historical trajectories and geographic data like road network. Second, a
parking status detection algorithm is devised to find parking places (from a
given trajectory), dividing a trajectory into segments (i.e.,
sub-trajectories). Third, we propose a two-phase inference model to learn the
status (occupied or non-occupied) of each point from a taxi segment. This model
first uses the identified features to train a local probabilistic classifier
and then carries out a Hidden Semi-Markov Model (HSMM) for globally considering
long term travel patterns. We evaluated our method with a large-scale
real-world trajectory dataset generated by 600 taxis, showing the advantages of
our method over baselines.
| [
{
"version": "v1",
"created": "Sun, 20 May 2012 03:24:25 GMT"
},
{
"version": "v2",
"created": "Mon, 18 Jun 2012 08:15:27 GMT"
}
] | 2012-06-19T00:00:00 | [
[
"Zhu",
"Yin",
""
],
[
"Zheng",
"Yu",
""
],
[
"Zhang",
"Liuhang",
""
],
[
"Santani",
"Darshan",
""
],
[
"Xie",
"Xing",
""
],
[
"Yang",
"Qiang",
""
]
] | TITLE: Inferring Taxi Status Using GPS Trajectories
ABSTRACT: In this paper, we infer the statuses of a taxi, consisting of occupied,
non-occupied and parked, in terms of its GPS trajectory. The status information
can enable urban computing for improving a city's transportation systems and
land use planning. In our solution, we first identify and extract a set of
effective features incorporating the knowledge of a single trajectory,
historical trajectories and geographic data like road network. Second, a
parking status detection algorithm is devised to find parking places (from a
given trajectory), dividing a trajectory into segments (i.e.,
sub-trajectories). Third, we propose a two-phase inference model to learn the
status (occupied or non-occupied) of each point from a taxi segment. This model
first uses the identified features to train a local probabilistic classifier
and then carries out a Hidden Semi-Markov Model (HSMM) for globally considering
long term travel patterns. We evaluated our method with a large-scale
real-world trajectory dataset generated by 600 taxis, showing the advantages of
our method over baselines.
| no_new_dataset | 0.940898 |
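
The abstract above combines a local per-point classifier with a sequence-level model. The sketch below illustrates that two-phase idea with a plain two-state HMM Viterbi smoothing pass standing in for the paper's Hidden Semi-Markov Model; the probabilities and transition matrix are toy values, and only two of the three statuses are modelled.

```python
import numpy as np

def viterbi_smooth(local_probs, trans, prior):
    """local_probs: (T, 2) per-point state probabilities from a local classifier;
    returns the most likely smoothed state sequence."""
    T = len(local_probs)
    logd = np.log(prior) + np.log(local_probs[0])
    back = np.zeros((T, 2), dtype=int)
    for t in range(1, T):
        scores = logd[:, None] + np.log(trans)    # scores[prev, cur]
        back[t] = scores.argmax(axis=0)
        logd = scores.max(axis=0) + np.log(local_probs[t])
    states = np.zeros(T, dtype=int)
    states[-1] = logd.argmax()
    for t in range(T - 1, 0, -1):
        states[t - 1] = back[t, states[t]]
    return states

probs = np.array([[0.9, 0.1], [0.6, 0.4], [0.4, 0.6], [0.45, 0.55], [0.1, 0.9]])
trans = np.array([[0.9, 0.1], [0.1, 0.9]])        # sticky occupied/non-occupied states
print(viterbi_smooth(probs, trans, prior=np.array([0.5, 0.5])))
```
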
1206.3717 | Qingji Zheng | Qingji Zheng and Xinwen Zhang | Multiparty Cloud Computation | null | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the increasing popularity of the cloud, clients outsource their data to
clouds in order to take advantage of unlimited virtualized storage space and
the low management cost. This trend prompts privately outsourced
computation, called \emph{multiparty cloud computation} (\MCC): Given $k$
clients storing their data in the cloud, how can they perform the joint
functionality by contributing their private data as inputs, and making use of
the cloud's powerful computation capability? Namely, the clients wish to outsource
computation to the cloud together with their private data stored in the cloud,
which naturally happens when the computation is involved with large datasets,
e.g., to analyze malicious URLs. We note that the \MCC\ problem is different
from widely considered concepts, e.g., secure multiparty computation and
multiparty computation with server aid.
To address this problem, we introduce the notion of \emph{homomorphic
threshold proxy re-encryption} schemes, which are encryption schemes that enjoy
three promising properties: proxy re-encryption -- transforming encrypted data
of one user to encrypted data of target user, threshold decryption --
decrypting encrypted data by combining secret key shares obtained by a set of
users, and homomorphic computation -- evaluating functions on the encrypted
data. To demonstrate the feasibility of the proposed approach, we present an
encryption scheme which allows anyone to compute arbitrarily many additions and
at most one multiplication.
| [
{
"version": "v1",
"created": "Sun, 17 Jun 2012 02:33:22 GMT"
}
] | 2012-06-19T00:00:00 | [
[
"Zheng",
"Qingji",
""
],
[
"Zhang",
"Xinwen",
""
]
] | TITLE: Multiparty Cloud Computation
ABSTRACT: With the increasing popularity of the cloud, clients outsource their data to
clouds in order to take advantage of unlimited virtualized storage space and
the low management cost. This trend prompts privately outsourced
computation, called \emph{multiparty cloud computation} (\MCC): Given $k$
clients storing their data in the cloud, how can they perform the joint
functionality by contributing their private data as inputs, and making use of
the cloud's powerful computation capability? Namely, the clients wish to outsource
computation to the cloud together with their private data stored in the cloud,
which naturally happens when the computation is involved with large datasets,
e.g., to analyze malicious URLs. We note that the \MCC\ problem is different
from widely considered concepts, e.g., secure multiparty computation and
multiparty computation with server aid.
To address this problem, we introduce the notion of \emph{homomorphic
threshold proxy re-encryption} schemes, which are encryption schemes that enjoy
three promising properties: proxy re-encryption -- transforming encrypted data
of one user to encrypted data of target user, threshold decryption --
decrypting encrypted data by combining secret key shares obtained by a set of
users, and homomorphic computation -- evaluating functions on the encrypted
data. To demonstrate the feasibility of the proposed approach, we present an
encryption scheme which allows anyone to compute arbitrarily many additions and
at most one multiplication.
| no_new_dataset | 0.945197 |
1206.3881 | Alessandro Rozza | Claudio Ceruti and Simone Bassis and Alessandro Rozza and Gabriele
Lombardi and Elena Casiraghi and Paola Campadelli | DANCo: Dimensionality from Angle and Norm Concentration | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the last decades the estimation of the intrinsic dimensionality of a
dataset has gained considerable importance. Despite the great deal of research
work devoted to this task, most of the proposed solutions prove to be
unreliable when the intrinsic dimensionality of the input dataset is high and
the manifold where the points lie is nonlinearly embedded in a higher
dimensional space. In this paper we propose a novel robust intrinsic
dimensionality estimator that exploits the twofold complementary information
conveyed both by the normalized nearest neighbor distances and by the angles
computed on couples of neighboring points, providing also closed-forms for the
Kullback-Leibler divergences of the respective distributions. Experiments
performed on both synthetic and real datasets highlight the robustness and the
effectiveness of the proposed algorithm when compared to state-of-the-art
methodologies.
| [
{
"version": "v1",
"created": "Mon, 18 Jun 2012 10:33:29 GMT"
}
] | 2012-06-19T00:00:00 | [
[
"Ceruti",
"Claudio",
""
],
[
"Bassis",
"Simone",
""
],
[
"Rozza",
"Alessandro",
""
],
[
"Lombardi",
"Gabriele",
""
],
[
"Casiraghi",
"Elena",
""
],
[
"Campadelli",
"Paola",
""
]
] | TITLE: DANCo: Dimensionality from Angle and Norm Concentration
ABSTRACT: In the last decades the estimation of the intrinsic dimensionality of a
dataset has gained considerable importance. Despite the great deal of research
work devoted to this task, most of the proposed solutions prove to be
unreliable when the intrinsic dimensionality of the input dataset is high and
the manifold where the points lie is nonlinearly embedded in a higher
dimensional space. In this paper we propose a novel robust intrinsic
dimensionality estimator that exploits the twofold complementary information
conveyed both by the normalized nearest neighbor distances and by the angles
computed on couples of neighboring points, providing also closed-forms for the
Kullback-Leibler divergences of the respective distributions. Experiments
performed on both synthetic and real datasets highlight the robustness and the
effectiveness of the proposed algorithm when compared to state-of-the-art
methodologies.
| no_new_dataset | 0.944944 |
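
DANCo, as described above, builds on statistics of normalized nearest-neighbour distances (plus angles). For orientation only, here is a well-known nearest-neighbour-distance estimator of intrinsic dimension (a Levina-Bickel-style MLE); it is a related baseline, not the DANCo estimator itself.

```python
import numpy as np
from scipy.spatial import cKDTree

def mle_intrinsic_dim(X, k=10):
    """Levina-Bickel-style MLE of intrinsic dimension from k-NN distances."""
    dists, _ = cKDTree(X).query(X, k=k + 1)   # first column is the point itself
    dists = dists[:, 1:]
    log_ratios = np.log(dists[:, -1][:, None] / dists[:, :-1])
    inv_dims = log_ratios.sum(axis=1) / (k - 1)
    return 1.0 / inv_dims.mean()

# A 3-D Gaussian cloud linearly embedded in 10 dimensions: estimate should be near 3.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3)) @ rng.normal(size=(3, 10))
print(round(mle_intrinsic_dim(X), 2))
```
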
1206.3204 | Pranjal Awasthi | Pranjal Awasthi, Or Sheffet | Improved Spectral-Norm Bounds for Clustering | null | null | null | null | cs.LG cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Aiming to unify known results about clustering mixtures of distributions
under separation conditions, Kumar and Kannan[2010] introduced a deterministic
condition for clustering datasets. They showed that this single deterministic
condition encompasses many previously studied clustering assumptions. More
specifically, their proximity condition requires that in the target
$k$-clustering, the projection of a point $x$ onto the line joining its cluster
center $\mu$ and some other center $\mu'$, is a large additive factor closer to
$\mu$ than to $\mu'$. This additive factor can be roughly described as $k$
times the spectral norm of the matrix representing the differences between the
given (known) dataset and the means of the (unknown) target clustering.
Clearly, the proximity condition implies center separation -- the distance
between any two centers must be as large as the above mentioned bound.
In this paper we improve upon the work of Kumar and Kannan along several
axes. First, we weaken the center separation bound by a factor of $\sqrt{k}$,
and secondly we weaken the proximity condition by a factor of $k$. Using these
weaker bounds we still achieve the same guarantees when all points satisfy the
proximity condition. We also achieve better guarantees when only
$(1-\epsilon)$-fraction of the points satisfy the weaker proximity condition.
The bulk of our analysis relies only on center separation under which one can
produce a clustering which (i) has low error, (ii) has low $k$-means cost, and
(iii) has centers very close to the target centers.
Our improved separation condition allows us to match the results of the
Planted Partition Model of McSherry[2001], improve upon the results of
Ostrovsky et al[2006], and improve separation results for mixture of Gaussian
models in a particular setting.
| [
{
"version": "v1",
"created": "Thu, 14 Jun 2012 18:23:46 GMT"
},
{
"version": "v2",
"created": "Fri, 15 Jun 2012 18:11:27 GMT"
}
] | 2012-06-18T00:00:00 | [
[
"Awasthi",
"Pranjal",
""
],
[
"Sheffet",
"Or",
""
]
] | TITLE: Improved Spectral-Norm Bounds for Clustering
ABSTRACT: Aiming to unify known results about clustering mixtures of distributions
under separation conditions, Kumar and Kannan[2010] introduced a deterministic
condition for clustering datasets. They showed that this single deterministic
condition encompasses many previously studied clustering assumptions. More
specifically, their proximity condition requires that in the target
$k$-clustering, the projection of a point $x$ onto the line joining its cluster
center $\mu$ and some other center $\mu'$, is a large additive factor closer to
$\mu$ than to $\mu'$. This additive factor can be roughly described as $k$
times the spectral norm of the matrix representing the differences between the
given (known) dataset and the means of the (unknown) target clustering.
Clearly, the proximity condition implies center separation -- the distance
between any two centers must be as large as the above mentioned bound.
In this paper we improve upon the work of Kumar and Kannan along several
axes. First, we weaken the center separation bound by a factor of $\sqrt{k}$,
and secondly we weaken the proximity condition by a factor of $k$. Using these
weaker bounds we still achieve the same guarantees when all points satisfy the
proximity condition. We also achieve better guarantees when only
$(1-\epsilon)$-fraction of the points satisfy the weaker proximity condition.
The bulk of our analysis relies only on center separation under which one can
produce a clustering which (i) has low error, (ii) has low $k$-means cost, and
(iii) has centers very close to the target centers.
Our improved separation condition allows us to match the results of the
Planted Partition Model of McSherry[2001], improve upon the results of
Ostrovsky et al[2006], and improve separation results for mixture of Gaussian
models in a particular setting.
| no_new_dataset | 0.94366 |
1206.3236 | Vincent Auvray | Vincent Auvray, Louis Wehenkel | Learning Inclusion-Optimal Chordal Graphs | Appears in Proceedings of the Twenty-Fourth Conference on Uncertainty
in Artificial Intelligence (UAI2008) | null | null | UAI-P-2008-PG-18-25 | cs.LG cs.DS stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Chordal graphs can be used to encode dependency models that are representable
by both directed acyclic and undirected graphs. This paper discusses a very
simple and efficient algorithm to learn the chordal structure of a
probabilistic model from data. The algorithm is a greedy hill-climbing search
algorithm that uses the inclusion boundary neighborhood over chordal graphs. In
the limit of a large sample size and under appropriate hypotheses on the
scoring criterion, we prove that the algorithm will find a structure that is
inclusion-optimal when the dependency model of the data-generating distribution
can be represented exactly by an undirected graph. The algorithm is evaluated
on simulated datasets.
| [
{
"version": "v1",
"created": "Wed, 13 Jun 2012 14:17:24 GMT"
}
] | 2012-06-18T00:00:00 | [
[
"Auvray",
"Vincent",
""
],
[
"Wehenkel",
"Louis",
""
]
] | TITLE: Learning Inclusion-Optimal Chordal Graphs
ABSTRACT: Chordal graphs can be used to encode dependency models that are representable
by both directed acyclic and undirected graphs. This paper discusses a very
simple and efficient algorithm to learn the chordal structure of a
probabilistic model from data. The algorithm is a greedy hill-climbing search
algorithm that uses the inclusion boundary neighborhood over chordal graphs. In
the limit of a large sample size and under appropriate hypotheses on the
scoring criterion, we prove that the algorithm will find a structure that is
inclusion-optimal when the dependency model of the data-generating distribution
can be represented exactly by an undirected graph. The algorithm is evaluated
on simulated datasets.
| no_new_dataset | 0.948917 |
1206.3238 | Liefeng Bo | Liefeng Bo, Cristian Sminchisescu | Greedy Block Coordinate Descent for Large Scale Gaussian Process
Regression | Appears in Proceedings of the Twenty-Fourth Conference on Uncertainty
in Artificial Intelligence (UAI2008) | null | null | UAI-P-2008-PG-43-52 | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a variable decomposition algorithm -greedy block coordinate
descent (GBCD)- in order to make dense Gaussian process regression practical
for large scale problems. GBCD breaks a large scale optimization into a series
of small sub-problems. The challenge in variable decomposition algorithms is
the identification of a subproblem (the active set of variables) that yields
the largest improvement. We analyze the limitations of existing methods and
cast the active set selection into a zero-norm constrained optimization problem
that we solve using greedy methods. By directly estimating the decrease in the
objective function, we obtain not only efficient approximate solutions for
GBCD, but we are also able to demonstrate that the method is globally
convergent. Empirical comparisons against competing dense methods like
Conjugate Gradient or SMO show that GBCD is an order of magnitude faster.
Comparisons against sparse GP methods show that GBCD is both accurate and
capable of handling datasets of 100,000 samples or more.
| [
{
"version": "v1",
"created": "Wed, 13 Jun 2012 14:18:22 GMT"
}
] | 2012-06-18T00:00:00 | [
[
"Bo",
"Liefeng",
""
],
[
"Sminchisescu",
"Cristian",
""
]
] | TITLE: Greedy Block Coordinate Descent for Large Scale Gaussian Process
Regression
ABSTRACT: We propose a variable decomposition algorithm -greedy block coordinate
descent (GBCD)- in order to make dense Gaussian process regression practical
for large scale problems. GBCD breaks a large scale optimization into a series
of small sub-problems. The challenge in variable decomposition algorithms is
the identification of a subproblem (the active set of variables) that yields
the largest improvement. We analyze the limitations of existing methods and
cast the active set selection into a zero-norm constrained optimization problem
that we solve using greedy methods. By directly estimating the decrease in the
objective function, we obtain not only efficient approximate solutions for
GBCD, but we are also able to demonstrate that the method is globally
convergent. Empirical comparisons against competing dense methods like
Conjugate Gradient or SMO show that GBCD is an order of magnitude faster.
Comparisons against sparse GP methods show that GBCD is both accurate and
capable of handling datasets of 100,000 samples or more.
| no_new_dataset | 0.942454 |
1206.3244 | James Cussens | James Cussens | Bayesian network learning by compiling to weighted MAX-SAT | Appears in Proceedings of the Twenty-Fourth Conference on Uncertainty
in Artificial Intelligence (UAI2008) | null | null | UAI-P-2008-PG-105-112 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The problem of learning discrete Bayesian networks from data is encoded as a
weighted MAX-SAT problem and the MaxWalkSat local search algorithm is used to
address it. For each dataset, the per-variable summands of the (BDeu) marginal
likelihood for different choices of parents ('family scores') are computed
prior to applying MaxWalkSat. Each permissible choice of parents for each
variable is encoded as a distinct propositional atom and the associated family
score encoded as a 'soft' weighted single-literal clause. Two approaches to
enforcing acyclicity are considered: either by encoding the ancestor relation
or by attaching a total order to each graph and encoding that. The latter
approach gives better results. Learning experiments have been conducted on 21
synthetic datasets sampled from 7 BNs. The largest dataset has 10,000
datapoints and 60 variables producing (for the 'ancestor' encoding) a weighted
CNF input file with 19,932 atoms and 269,367 clauses. For most datasets,
MaxWalkSat quickly finds BNs with higher BDeu score than the 'true' BN. The
effect of adding prior information is assessed. It is further shown that
Bayesian model averaging can be effected by collecting BNs generated during the
search.
| [
{
"version": "v1",
"created": "Wed, 13 Jun 2012 15:06:22 GMT"
}
] | 2012-06-18T00:00:00 | [
[
"Cussens",
"James",
""
]
] | TITLE: Bayesian network learning by compiling to weighted MAX-SAT
ABSTRACT: The problem of learning discrete Bayesian networks from data is encoded as a
weighted MAX-SAT problem and the MaxWalkSat local search algorithm is used to
address it. For each dataset, the per-variable summands of the (BDeu) marginal
likelihood for different choices of parents ('family scores') are computed
prior to applying MaxWalkSat. Each permissible choice of parents for each
variable is encoded as a distinct propositional atom and the associated family
score encoded as a 'soft' weighted single-literal clause. Two approaches to
enforcing acyclicity are considered: either by encoding the ancestor relation
or by attaching a total order to each graph and encoding that. The latter
approach gives better results. Learning experiments have been conducted on 21
synthetic datasets sampled from 7 BNs. The largest dataset has 10,000
datapoints and 60 variables producing (for the 'ancestor' encoding) a weighted
CNF input file with 19,932 atoms and 269,367 clauses. For most datasets,
MaxWalkSat quickly finds BNs with higher BDeu score than the 'true' BN. The
effect of adding prior information is assessed. It is further shown that
Bayesian model averaging can be effected by collecting BNs generated during the
search.
| no_new_dataset | 0.949716 |
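
The abstract above encodes each permissible (variable, parent set) choice as a propositional atom whose family score becomes a soft single-literal clause. The sketch below shows only that encoding step with toy scores; the atom numbering and weight offset are illustrative choices, and the hard acyclicity and "exactly one family per variable" constraints used in the paper are omitted.

```python
# Toy family scores: (child, parent tuple) -> local log score.
family_scores = {
    ("A", ()): -12.3, ("A", ("B",)): -10.1,
    ("B", ()): -15.0, ("B", ("A",)): -14.2,
}

atom_id = {fam: i + 1 for i, fam in enumerate(sorted(family_scores))}
offset = min(family_scores.values())            # shift so every weight is positive
soft_clauses = [(score - offset + 1.0, [atom_id[fam]])
                for fam, score in family_scores.items()]
for weight, clause in sorted(soft_clauses, reverse=True):
    print(f"w={weight:.1f}  clause={clause}")   # higher-scoring families get larger weights
```
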
1206.3259 | Jim Huang | Jim Huang, Brendan J. Frey | Cumulative distribution networks and the derivative-sum-product
algorithm | Appears in Proceedings of the Twenty-Fourth Conference on Uncertainty
in Artificial Intelligence (UAI2008) | null | null | UAI-P-2008-PG-290-297 | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a new type of graphical model called a "cumulative distribution
network" (CDN), which expresses a joint cumulative distribution as a product of
local functions. Each local function can be viewed as providing evidence about
possible orderings, or rankings, of variables. Interestingly, we find that the
conditional independence properties of CDNs are quite different from other
graphical models. We also describe a message-passing algorithm that efficiently
computes conditional cumulative distributions. Due to the unique independence
properties of the CDN, these messages do not in general have a one-to-one
correspondence with messages exchanged in standard algorithms, such as belief
propagation. We demonstrate the application of CDNs for structured ranking
learning using a previously-studied multi-player gaming dataset.
| [
{
"version": "v1",
"created": "Wed, 13 Jun 2012 15:33:06 GMT"
}
] | 2012-06-18T00:00:00 | [
[
"Huang",
"Jim",
""
],
[
"Frey",
"Brendan J.",
""
]
] | TITLE: Cumulative distribution networks and the derivative-sum-product
algorithm
ABSTRACT: We introduce a new type of graphical model called a "cumulative distribution
network" (CDN), which expresses a joint cumulative distribution as a product of
local functions. Each local function can be viewed as providing evidence about
possible orderings, or rankings, of variables. Interestingly, we find that the
conditional independence properties of CDNs are quite different from other
graphical models. We also describe a message-passing algorithm that efficiently
computes conditional cumulative distributions. Due to the unique independence
properties of the CDN, these messages do not in general have a one-to-one
correspondence with messages exchanged in standard algorithms, such as belief
propagation. We demonstrate the application of CDNs for structured ranking
learning using a previously-studied multi-player gaming dataset.
| no_new_dataset | 0.858363 |
1206.3269 | Tony S. Jebara | Tony S. Jebara | Bayesian Out-Trees | Appears in Proceedings of the Twenty-Fourth Conference on Uncertainty
in Artificial Intelligence (UAI2008) | null | null | UAI-P-2008-PG-315-324 | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A Bayesian treatment of latent directed graph structure for non-iid data is
provided where each child datum is sampled with a directed conditional
dependence on a single unknown parent datum. The latent graph structure is
assumed to lie in the family of directed out-tree graphs which leads to
efficient Bayesian inference. The latent likelihood of the data and its
gradients are computable in closed form via Tutte's directed matrix tree
theorem using determinants and inverses of the out-Laplacian. This novel
likelihood subsumes iid likelihood, is exchangeable and yields efficient
unsupervised and semi-supervised learning algorithms. In addition to handling
taxonomy and phylogenetic datasets, the out-tree assumption performs
surprisingly well as a semi-parametric density estimator on standard iid
datasets. Experiments with unsupervised and semi-supervised learning are shown
on various UCI and taxonomy datasets.
| [
{
"version": "v1",
"created": "Wed, 13 Jun 2012 15:37:30 GMT"
}
] | 2012-06-18T00:00:00 | [
[
"Jebara",
"Tony S.",
""
]
] | TITLE: Bayesian Out-Trees
ABSTRACT: A Bayesian treatment of latent directed graph structure for non-iid data is
provided where each child datum is sampled with a directed conditional
dependence on a single unknown parent datum. The latent graph structure is
assumed to lie in the family of directed out-tree graphs which leads to
efficient Bayesian inference. The latent likelihood of the data and its
gradients are computable in closed form via Tutte's directed matrix tree
theorem using determinants and inverses of the out-Laplacian. This novel
likelihood subsumes iid likelihood, is exchangeable and yields efficient
unsupervised and semi-supervised learning algorithms. In addition to handling
taxonomy and phylogenetic datasets, the out-tree assumption performs
surprisingly well as a semi-parametric density estimator on standard iid
datasets. Experiments with unsupervised and semi-supervised learning are shown
on various UCI and taxonomy datasets.
| no_new_dataset | 0.95297 |
1206.3320 | Zi-Ke Zhang Mr. | Jinhu Liu, Chengcheng Yang, Zi-Ke Zhang | A two-step Recommendation Algorithm via Iterative Local Least Squares | null | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recommender systems can change our life a lot and help us select suitable and
favorite items much more conveniently and easily. As a consequence, various
kinds of algorithms have been proposed in the last few years to improve their
performance. However, all of them face one critical problem: data sparsity. In
this paper, we propose a two-step recommendation algorithm via iterative local
least squares (ILLS). Firstly, we obtain the ratings matrix, which is
constructed from users' behavioral records and is normally very sparse.
Secondly, we preprocess the "ratings" matrix through ProbS, which can convert
the sparse data to a dense one. Then we use ILLS to estimate the missing
values. Finally, the recommendation list is generated. Experimental results on
three datasets (MovieLens, Netflix, RYM) suggest that the proposed method can
enhance algorithmic accuracy as measured by AUC. In particular, it performs
much better on dense datasets. Furthermore, since this method estimates
missing values more accurately via iteration, it may shed light on inactive
users' purchasing intentions and eventually help solve the cold-start problem.
| [
{
"version": "v1",
"created": "Thu, 14 Jun 2012 20:23:24 GMT"
}
] | 2012-06-18T00:00:00 | [
[
"Liu",
"Jinhu",
""
],
[
"Yang",
"Chengcheng",
""
],
[
"Zhang",
"Zi-Ke",
""
]
] | TITLE: A two-step Recommendation Algorithm via Iterative Local Least Squares
ABSTRACT: Recommender systems can change our life a lot and help us select suitable and
favorite items much more conveniently and easily. As a consequence, various
kinds of algorithms have been proposed in the last few years to improve their
performance. However, all of them face one critical problem: data sparsity. In
this paper, we propose a two-step recommendation algorithm via iterative local
least squares (ILLS). Firstly, we obtain the ratings matrix, which is
constructed from users' behavioral records and is normally very sparse.
Secondly, we preprocess the "ratings" matrix through ProbS, which can convert
the sparse data to a dense one. Then we use ILLS to estimate the missing
values. Finally, the recommendation list is generated. Experimental results on
three datasets (MovieLens, Netflix, RYM) suggest that the proposed method can
enhance algorithmic accuracy as measured by AUC. In particular, it performs
much better on dense datasets. Furthermore, since this method estimates
missing values more accurately via iteration, it may shed light on inactive
users' purchasing intentions and eventually help solve the cold-start problem.
| no_new_dataset | 0.949248 |
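
The abstract above densifies the ratings matrix with ProbS before the iterative local least squares step. The sketch below shows a standard mass-diffusion (ProbS-style) scoring pass on a binary user-item matrix; the subsequent ILLS refinement is not reproduced, and the toy matrix is illustrative.

```python
import numpy as np

def probs_scores(A):
    """A: (n_users, n_items) binary matrix with no all-zero rows or columns.
    Returns dense mass-diffusion scores for every user-item pair."""
    k_user = A.sum(axis=1, keepdims=True)
    k_item = A.sum(axis=0, keepdims=True)
    # W[i, j] = (1 / k_item[j]) * sum_u A[u, i] * A[u, j] / k_user[u]
    W = (A / k_user).T @ A / k_item
    return A @ W.T                               # diffuse each user's resources

A = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]], dtype=float)
print(np.round(probs_scores(A), 2))
```
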
1206.3334 | Pranjal Awasthi | Pranjal Awasthi, Avrim Blum, Jamie Morgenstern, Or Sheffet | Additive Approximation for Near-Perfect Phylogeny Construction | null | null | null | null | cs.DS cs.CE q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the problem of constructing phylogenetic trees for a given set of
species. The problem is formulated as that of finding a minimum Steiner tree on
$n$ points over the Boolean hypercube of dimension $d$. It is known that an
optimal tree can be found in linear time if the given dataset has a perfect
phylogeny, i.e. cost of the optimal phylogeny is exactly $d$. Moreover, if the
data has a near-perfect phylogeny, i.e. the cost of the optimal Steiner tree is
$d+q$, it is known that an exact solution can be found in running time which is
polynomial in the number of species and $d$, yet exponential in $q$. In this
work, we give a polynomial-time algorithm (in both $d$ and $q$) that finds a
phylogenetic tree of cost $d+O(q^2)$. This provides the best guarantees known -
namely, a $(1+o(1))$-approximation - for the case $\log(d) \ll q \ll \sqrt{d}$,
broadening the range of settings for which near-optimal solutions can be
efficiently found. We also discuss the motivation and reasoning for studying
such additive approximations.
| [
{
"version": "v1",
"created": "Thu, 14 Jun 2012 21:38:01 GMT"
}
] | 2012-06-18T00:00:00 | [
[
"Awasthi",
"Pranjal",
""
],
[
"Blum",
"Avrim",
""
],
[
"Morgenstern",
"Jamie",
""
],
[
"Sheffet",
"Or",
""
]
] | TITLE: Additive Approximation for Near-Perfect Phylogeny Construction
ABSTRACT: We study the problem of constructing phylogenetic trees for a given set of
species. The problem is formulated as that of finding a minimum Steiner tree on
$n$ points over the Boolean hypercube of dimension $d$. It is known that an
optimal tree can be found in linear time if the given dataset has a perfect
phylogeny, i.e. cost of the optimal phylogeny is exactly $d$. Moreover, if the
data has a near-perfect phylogeny, i.e. the cost of the optimal Steiner tree is
$d+q$, it is known that an exact solution can be found in running time which is
polynomial in the number of species and $d$, yet exponential in $q$. In this
work, we give a polynomial-time algorithm (in both $d$ and $q$) that finds a
phylogenetic tree of cost $d+O(q^2)$. This provides the best guarantees known -
namely, a $(1+o(1))$-approximation - for the case $\log(d) \ll q \ll \sqrt{d}$,
broadening the range of settings for which near-optimal solutions can be
efficiently found. We also discuss the motivation and reasoning for studying
such additive approximations.
| no_new_dataset | 0.941439 |
1206.3055 | {\O}yvind Breivik PhD | {\O}yvind Breivik, Yvonne Gusdal, Birgitte R. Furevik, Ole Johan
Aarnes and Magnar Reistad | Nearshore wave forecasting and hindcasting by dynamical and statistical
downscaling | 20 pages, 7 figures and 2 tables, MREA07 special issue on Marine
rapid environmental assessment | J Marine Syst, 78 (2009) pp S235-S243 | 10.1016/j.jmarsys.2009.01.025 | null | physics.ao-ph physics.geo-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A high-resolution nested WAM/SWAN wave model suite aimed at rapidly
establishing nearshore wave forecasts as well as a climatology and return
values of the local wave conditions with Rapid Environmental Assessment (REA) in
mind is described. The system is targeted at regions where local wave growth
and partial exposure to complex open-ocean wave conditions makes diagnostic
wave modelling difficult.
SWAN is set up on 500 m resolution and is nested in a 10 km version of WAM. A
model integration of more than one year is carried out to map the spatial
distribution of the wave field. The model correlates well with wave buoy
observations (0.96) but overestimates the wave height somewhat (18%, bias 0.29
m).
To estimate wave height return values a much longer time series is required
and running SWAN for such a period is unrealistic in a REA setting. Instead we
establish a direction-dependent transfer function between an already existing
coarse open-ocean hindcast dataset and the high-resolution nested SWAN model.
Return values are estimated using ensemble estimates of two different
extreme-value distributions based on the full 52 years of statistically
downscaled hindcast data. We find good agreement between downscaled wave height
and wave buoy observations. The cost of generating the statistically downscaled
hindcast time series is negligible and can be redone for arbitrary locations
within the SWAN domain, although the sectors must be carefully chosen for each
new location.
The method is found to be well suited to rapidly providing detailed wave
forecasts as well as hindcasts and return values estimates of partly sheltered
coastal regions.
| [
{
"version": "v1",
"created": "Thu, 14 Jun 2012 09:45:51 GMT"
}
] | 2012-06-15T00:00:00 | [
[
"Breivik",
"Øyvind",
""
],
[
"Gusdal",
"Yvonne",
""
],
[
"Furevik",
"Birgitte R.",
""
],
[
"Aarnes",
"Ole Johan",
""
],
[
"Reistad",
"Magnar",
""
]
] | TITLE: Nearshore wave forecasting and hindcasting by dynamical and statistical
downscaling
ABSTRACT: A high-resolution nested WAM/SWAN wave model suite aimed at rapidly
establishing nearshore wave forecasts as well as a climatology and return
values of the local wave conditions with Rapid Environmental Assessment (REA) in
mind is described. The system is targeted at regions where local wave growth
and partial exposure to complex open-ocean wave conditions makes diagnostic
wave modelling difficult.
SWAN is set up on 500 m resolution and is nested in a 10 km version of WAM. A
model integration of more than one year is carried out to map the spatial
distribution of the wave field. The model correlates well with wave buoy
observations (0.96) but overestimates the wave height somewhat (18%, bias 0.29
m).
To estimate wave height return values a much longer time series is required
and running SWAN for such a period is unrealistic in a REA setting. Instead we
establish a direction-dependent transfer function between an already existing
coarse open-ocean hindcast dataset and the high-resolution nested SWAN model.
Return values are estimated using ensemble estimates of two different
extreme-value distributions based on the full 52 years of statistically
downscaled hindcast data. We find good agreement between downscaled wave height
and wave buoy observations. The cost of generating the statistically downscaled
hindcast time series is negligible and can be redone for arbitrary locations
within the SWAN domain, although the sectors must be carefully chosen for each
new location.
The method is found to be well suited to rapidly providing detailed wave
forecasts as well as hindcasts and return values estimates of partly sheltered
coastal regions.
| no_new_dataset | 0.948298 |
1206.1891 | Donghyuk Shin | Donghyuk Shin, Si Si, Inderjit S. Dhillon | Multi-Scale Link Prediction | 20 pages, 10 figures | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The automated analysis of social networks has become an important problem due
to the proliferation of social networks, such as LiveJournal, Flickr and
Facebook. The scale of these social networks is massive and continues to grow
rapidly. An important problem in social network analysis is proximity
estimation that infers the closeness of different users. Link prediction, in
turn, is an important application of proximity estimation. However, many
methods for computing proximity measures have high computational complexity and
are thus prohibitive for large-scale link prediction problems. One way to
address this problem is to estimate proximity measures via low-rank
approximation. However, a single low-rank approximation may not be sufficient
to represent the behavior of the entire network. In this paper, we propose
Multi-Scale Link Prediction (MSLP), a framework for link prediction, which can
handle massive networks. The basic idea of MSLP is to construct low-rank
approximations of the network at multiple scales in an efficient manner. Based
on this approach, MSLP combines predictions at multiple scales to make robust
and accurate predictions. Experimental results on real-life datasets with more
than a million nodes show the superior performance and scalability of our
method.
| [
{
"version": "v1",
"created": "Fri, 8 Jun 2012 23:49:13 GMT"
}
] | 2012-06-12T00:00:00 | [
[
"Shin",
"Donghyuk",
""
],
[
"Si",
"Si",
""
],
[
"Dhillon",
"Inderjit S.",
""
]
] | TITLE: Multi-Scale Link Prediction
ABSTRACT: The automated analysis of social networks has become an important problem due
to the proliferation of social networks, such as LiveJournal, Flickr and
Facebook. The scale of these social networks is massive and continues to grow
rapidly. An important problem in social network analysis is proximity
estimation that infers the closeness of different users. Link prediction, in
turn, is an important application of proximity estimation. However, many
methods for computing proximity measures have high computational complexity and
are thus prohibitive for large-scale link prediction problems. One way to
address this problem is to estimate proximity measures via low-rank
approximation. However, a single low-rank approximation may not be sufficient
to represent the behavior of the entire network. In this paper, we propose
Multi-Scale Link Prediction (MSLP), a framework for link prediction, which can
handle massive networks. The basic idea of MSLP is to construct low-rank
approximations of the network at multiple scales in an efficient manner. Based
on this approach, MSLP combines predictions at multiple scales to make robust
and accurate predictions. Experimental results on real-life datasets with more
than a million nodes show the superior performance and scalability of our
method.
| no_new_dataset | 0.945551 |
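
MSLP, as described above, builds and combines low-rank approximations of the network at multiple scales. The sketch below computes a single-scale low-rank proximity score via truncated SVD of the adjacency matrix, as an assumed single-scale baseline rather than the multi-scale method itself; the toy graph is random.

```python
import numpy as np

def low_rank_scores(A, rank=16):
    """Rank-r reconstruction of adjacency matrix A used as link scores."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

rng = np.random.default_rng(0)
A = (rng.random((200, 200)) < 0.05).astype(float)
A = np.triu(A, 1); A = A + A.T                   # symmetric toy graph, no self-loops
S = low_rank_scores(A, rank=16)
mask = (A == 0) & ~np.eye(len(A), dtype=bool)    # only score currently absent edges
u, v = np.unravel_index(np.argmax(np.where(mask, S, -np.inf)), S.shape)
print("top non-edge candidate:", (u, v))
```
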
1206.2320 | YenFu Ou | Yen-Fu Ou, Yuanyi Xue, Yao Wang | Q-STAR:A Perceptual Video Quality Model Considering Impact of Spatial,
Temporal, and Amplitude Resolutions | 13 pages | null | null | null | cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we investigate the impact of spatial, temporal and amplitude
resolution (STAR) on the perceptual quality of a compressed video. Subjective
quality tests were carried out on a mobile device. Seven source sequences are
included in the tests and for each source sequence we have 27 test
configurations generated by JSVM encoder (3 QP levels, 3 spatial resolutions,
and 3 temporal resolutions), resulting in a total of 189 processed video sequences
(PVSs). Videos coded at different spatial resolutions are displayed at the full
screen size of the mobile platform. Subjective data reveal that the impact of
spatial resolution (SR), temporal resolution (TR) and quantization stepsize
(QS) can each be captured by a function with a single content-dependent
parameter. The joint impact of SR, TR and QS can be accurately modeled by the
product of these three functions with only three parameters. We further find
that the quality decay rates with SR and QS, respectively are independent of
TR, and likewise, the decay rate with TR is independent of SR and QS,
respectively. However, there is a significant interaction between the effects
of SR and QS. The overall quality model is further validated on five other
datasets with very high accuracy. The complete model correlates well with the
subjective ratings with a Pearson Correlation Coefficient (PCC) of 0.991.
| [
{
"version": "v1",
"created": "Mon, 11 Jun 2012 19:06:07 GMT"
}
] | 2012-06-12T00:00:00 | [
[
"Ou",
"Yen-Fu",
""
],
[
"Xue",
"Yuanyi",
""
],
[
"Wang",
"Yao",
""
]
] | TITLE: Q-STAR:A Perceptual Video Quality Model Considering Impact of Spatial,
Temporal, and Amplitude Resolutions
ABSTRACT: In this paper, we investigate the impact of spatial, temporal and amplitude
resolution (STAR) on the perceptual quality of a compressed video. Subjective
quality tests were carried out on a mobile device. Seven source sequences are
included in the tests and for each source sequence we have 27 test
configurations generated by JSVM encoder (3 QP levels, 3 spatial resolutions,
and 3 temporal resolutions), resulting in a total of 189 processed video sequences
(PVSs). Videos coded at different spatial resolutions are displayed at the full
screen size of the mobile platform. Subjective data reveal that the impact of
spatial resolution (SR), temporal resolution (TR) and quantization stepsize
(QS) can each be captured by a function with a single content-dependent
parameter. The joint impact of SR, TR and QS can be accurately modeled by the
product of these three functions with only three parameters. We further find
that the quality decay rates with SR and QS, respectively are independent of
TR, and likewise, the decay rate with TR is independent of SR and QS,
respectively. However, there is a significant interaction between the effects
of SR and QS. The overall quality model is further validated on five other
datasets with very high accuracy. The complete model correlates well with the
subjective ratings with a Pearson Correlation Coefficient (PCC) of 0.991.
| no_new_dataset | 0.942348 |
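
The abstract above models quality as the product of three single-parameter functions of spatial resolution, temporal resolution and quantization stepsize. The sketch below writes down one such product form; the individual factor shapes and their parameters are placeholders, not the functions fitted in the paper.

```python
import numpy as np

def q_star(sr, tr, qs, sr_max, tr_max, qs_min, a=1.0, b=1.0, c=1.0):
    """Quality in [0, 1] as a product of three one-parameter factors."""
    f_sr = (sr / sr_max) ** a                                  # spatial factor
    f_tr = (1 - np.exp(-b * tr / tr_max)) / (1 - np.exp(-b))   # temporal factor
    f_qs = np.exp(-c * (qs / qs_min - 1))                      # quantization factor
    return f_sr * f_tr * f_qs

# Full spatial/temporal resolution at the finest quantization gives quality 1.0.
print(q_star(sr=720 * 480, tr=30, qs=16, sr_max=720 * 480, tr_max=30, qs_min=16))
```
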
1202.0224 | James Bagrow | James P. Bagrow and Yu-Ru Lin | Mesoscopic structure and social aspects of human mobility | 7 pages, 5 figures (main text); 11 pages, 9 figures, 1 table
(supporting information) | PLoS ONE 7(5): e37676, 2012 | 10.1371/journal.pone.0037676 | null | physics.soc-ph cond-mat.stat-mech cs.SI physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The individual movements of large numbers of people are important in many
contexts, from urban planning to disease spreading. Datasets that capture human
mobility are now available and many interesting features have been discovered,
including the ultra-slow spatial growth of individual mobility. However, the
detailed substructures and spatiotemporal flows of mobility - the sets and
sequences of visited locations - have not been well studied. We show that
individual mobility is dominated by small groups of frequently visited,
dynamically close locations, forming primary "habitats" capturing typical daily
activity, along with subsidiary habitats representing additional travel. These
habitats do not correspond to typical contexts such as home or work. The
temporal evolution of mobility within habitats, which constitutes most motion,
is universal across habitats and exhibits scaling patterns both distinct from
all previous observations and unpredicted by current models. The delay to enter
subsidiary habitats is a primary factor in the spatiotemporal growth of human
travel. Interestingly, habitats correlate with non-mobility dynamics such as
communication activity, implying that habitats may influence processes such as
information spreading and revealing new connections between human mobility and
social networks.
| [
{
"version": "v1",
"created": "Wed, 1 Feb 2012 17:18:02 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Jun 2012 20:00:04 GMT"
}
] | 2012-06-11T00:00:00 | [
[
"Bagrow",
"James P.",
""
],
[
"Lin",
"Yu-Ru",
""
]
] | TITLE: Mesoscopic structure and social aspects of human mobility
ABSTRACT: The individual movements of large numbers of people are important in many
contexts, from urban planning to disease spreading. Datasets that capture human
mobility are now available and many interesting features have been discovered,
including the ultra-slow spatial growth of individual mobility. However, the
detailed substructures and spatiotemporal flows of mobility - the sets and
sequences of visited locations - have not been well studied. We show that
individual mobility is dominated by small groups of frequently visited,
dynamically close locations, forming primary "habitats" capturing typical daily
activity, along with subsidiary habitats representing additional travel. These
habitats do not correspond to typical contexts such as home or work. The
temporal evolution of mobility within habitats, which constitutes most motion,
is universal across habitats and exhibits scaling patterns both distinct from
all previous observations and unpredicted by current models. The delay to enter
subsidiary habitats is a primary factor in the spatiotemporal growth of human
travel. Interestingly, habitats correlate with non-mobility dynamics such as
communication activity, implying that habitats may influence processes such as
information spreading and revealing new connections between human mobility and
social networks.
| no_new_dataset | 0.933188 |
1206.1458 | Shervan Fekri ershad | Shervan Fekri Ershad and Sattar Hashemi | Dispelling Classes Gradually to Improve Quality of Feature Reduction
Approaches | 11 Pages, 5 Figure, 7 Tables; Advanced Computing: An International
Journal (ACIJ), Vol.3, No.3, May 2012 | null | 10.5121/acij.2012.3310 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Feature reduction is an important concept which is used for reducing
dimensions, thereby decreasing the computational complexity and time of
classification. Many approaches have been proposed for this problem, but
almost all of them produce a fixed output for each input dataset, and some of
these outputs are not satisfactory for classification. In this paper we
propose an approach that preprocesses the input dataset to increase the
accuracy of feature reduction methods. First, a new concept called dispelling
classes gradually (DCG) is proposed to increase the separability of classes
based on their labels. Next, this method is used to preprocess the input
dataset of feature reduction approaches, decreasing the misclassification
error rate of their outputs compared with the outputs obtained without any
preprocessing. In addition, our method copes well with noise because it
adapts the dataset to the feature reduction approaches. In the results, the
two conditions (with and without preprocessing) are compared on several UCI
datasets to support our idea.
| [
{
"version": "v1",
"created": "Thu, 7 Jun 2012 11:52:21 GMT"
}
] | 2012-06-08T00:00:00 | [
[
"Ershad",
"Shervan Fekri",
""
],
[
"Hashemi",
"Sattar",
""
]
] | TITLE: Dispelling Classes Gradually to Improve Quality of Feature Reduction
Approaches
ABSTRACT: Feature reduction is an important concept which is used for reducing
dimensions, thereby decreasing the computational complexity and time of
classification. Many approaches have been proposed for this problem, but
almost all of them produce a fixed output for each input dataset, and some of
these outputs are not satisfactory for classification. In this paper we
propose an approach that preprocesses the input dataset to increase the
accuracy of feature reduction methods. First, a new concept called dispelling
classes gradually (DCG) is proposed to increase the separability of classes
based on their labels. Next, this method is used to preprocess the input
dataset of feature reduction approaches, decreasing the misclassification
error rate of their outputs compared with the outputs obtained without any
preprocessing. In addition, our method copes well with noise because it
adapts the dataset to the feature reduction approaches. In the results, the
two conditions (with and without preprocessing) are compared on several UCI
datasets to support our idea.
| no_new_dataset | 0.943243 |
1206.1557 | Jay Gholap | Jay Gholap, Anurag Ingole, Jayesh Gohil, Shailesh Gargade and Vahida
Attar | Soil Data Analysis Using Classification Techniques and Soil Attribute
Prediction | 4 pages, published in International Journal of Computer Science
Issues, Volume 9, Issue 3 | null | null | null | cs.AI stat.AP stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Agricultural research has profited from technical advances such as
automation and data mining. Today, data mining is used in a vast range of
areas, and many off-the-shelf data mining system products and domain-specific
data mining application software packages are available, but data mining on
agricultural soil datasets is a relatively young research field. The large
amounts of data that are nowadays virtually harvested along with the crops
have to be analyzed and should be used to their full extent. This research
aims at the analysis of a soil dataset using data mining techniques. It
focuses on the classification of soil using various available algorithms.
Another important purpose is to predict untested attributes using regression
techniques, and to implement automated soil sample classification.
| [
{
"version": "v1",
"created": "Thu, 7 Jun 2012 17:28:20 GMT"
}
] | 2012-06-08T00:00:00 | [
[
"Gholap",
"Jay",
""
],
[
"Ingole",
"Anurag",
""
],
[
"Gohil",
"Jayesh",
""
],
[
"Gargade",
"Shailesh",
""
],
[
"Attar",
"Vahida",
""
]
] | TITLE: Soil Data Analysis Using Classification Techniques and Soil Attribute
Prediction
ABSTRACT: Agricultural research has profited from technical advances such as
automation and data mining. Today, data mining is used in a vast range of
areas, and many off-the-shelf data mining system products and domain-specific
data mining application software packages are available, but data mining on
agricultural soil datasets is a relatively young research field. The large
amounts of data that are nowadays virtually harvested along with the crops
have to be analyzed and should be used to their full extent. This research
aims at the analysis of a soil dataset using data mining techniques. It
focuses on the classification of soil using various available algorithms.
Another important purpose is to predict untested attributes using regression
techniques, and to implement automated soil sample classification.
| no_new_dataset | 0.944842 |
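A hedged sketch of the kind of analysis the soil-data record describes: classifying soil samples and predicting an untested attribute by regression with off-the-shelf tools. The synthetic columns (ph, organic_carbon, ...) stand in for a real soil table, and the decision tree and linear regression are only examples of the "various algorithms available", not the ones reported in the paper.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LinearRegression

# Tiny synthetic stand-in for a real soil dataset (in practice, load one with
# e.g. pd.read_csv; all column names here are hypothetical).
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "ph": rng.uniform(4.5, 8.5, n),
    "organic_carbon": rng.uniform(0.2, 2.5, n),
    "nitrogen": rng.uniform(0.05, 0.6, n),
})
df["soil_type"] = np.where(df["ph"] > 6.5, "alkaline-loam", "acidic-clay")
df["phosphorus"] = 5 + 3 * df["organic_carbon"] + rng.normal(0, 0.3, n)

features = ["ph", "organic_carbon", "nitrogen"]

# 1) Automated soil sample classification.
X_tr, X_te, y_tr, y_te = train_test_split(df[features], df["soil_type"],
                                          test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
print("classification accuracy:", clf.score(X_te, y_te))

# 2) Predicting an untested attribute (here phosphorus) by regression.
reg = LinearRegression().fit(df[features], df["phosphorus"])
print("R^2 for attribute prediction:", reg.score(df[features], df["phosphorus"]))
```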
1109.1396 | R\'obert Orm\'andi | R\'obert Orm\'andi, Istv\'an Heged\"us, M\'ark Jelasity | Gossip Learning with Linear Models on Fully Distributed Data | The paper was published in the journal Concurrency and Computation:
Practice and Experience
http://onlinelibrary.wiley.com/journal/10.1002/%28ISSN%291532-0634 (DOI:
http://dx.doi.org/10.1002/cpe.2858). The modifications are based on the
suggestions from the reviewers | null | 10.1002/cpe.2858 | null | cs.LG cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Machine learning over fully distributed data poses an important problem in
peer-to-peer (P2P) applications. In this model we have one data record at each
network node, but without the possibility to move raw data due to privacy
considerations. For example, user profiles, ratings, history, or sensor
readings can represent this case. This problem is difficult, because there is
no possibility to learn local models, the system model offers almost no
guarantees for reliability, yet the communication cost needs to be kept low.
Here we propose gossip learning, a generic approach that is based on multiple
models taking random walks over the network in parallel, while applying an
online learning algorithm to improve themselves, and getting combined via
ensemble learning methods. We present an instantiation of this approach for the
case of classification with linear models. Our main contribution is an ensemble
learning method which---through the continuous combination of the models in the
network---implements a virtual weighted voting mechanism over an exponential
number of models at practically no extra cost as compared to independent random
walks. We prove the convergence of the method theoretically, and perform
extensive experiments on benchmark datasets. Our experimental analysis
demonstrates the performance and robustness of the proposed approach.
| [
{
"version": "v1",
"created": "Wed, 7 Sep 2011 09:16:37 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Jun 2012 09:55:07 GMT"
},
{
"version": "v3",
"created": "Wed, 6 Jun 2012 09:26:30 GMT"
}
] | 2012-06-07T00:00:00 | [
[
"Ormándi",
"Róbert",
""
],
[
"Hegedüs",
"István",
""
],
[
"Jelasity",
"Márk",
""
]
] | TITLE: Gossip Learning with Linear Models on Fully Distributed Data
ABSTRACT: Machine learning over fully distributed data poses an important problem in
peer-to-peer (P2P) applications. In this model we have one data record at each
network node, but without the possibility to move raw data due to privacy
considerations. For example, user profiles, ratings, history, or sensor
readings can represent this case. This problem is difficult, because there is
no possibility to learn local models, the system model offers almost no
guarantees for reliability, yet the communication cost needs to be kept low.
Here we propose gossip learning, a generic approach that is based on multiple
models taking random walks over the network in parallel, while applying an
online learning algorithm to improve themselves, and getting combined via
ensemble learning methods. We present an instantiation of this approach for the
case of classification with linear models. Our main contribution is an ensemble
learning method which---through the continuous combination of the models in the
network---implements a virtual weighted voting mechanism over an exponential
number of models at practically no extra cost as compared to independent random
walks. We prove the convergence of the method theoretically, and perform
extensive experiments on benchmark datasets. Our experimental analysis
demonstrates the performance and robustness of the proposed approach.
| no_new_dataset | 0.944177 |
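A minimal simulation sketch of the gossip learning scheme described in the record above: each node holds one labeled record and a linear model; models hop to random peers (standing in for random walks), are merged with the model already at the receiving node (a plain average standing in for the paper's ensemble/virtual-voting step), and are then updated online on the local example. The logistic-loss SGD update and uniform peer sampling are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim, rounds = 100, 5, 200

# One labeled record per node (fully distributed data), labels in {-1, +1}.
true_w = rng.normal(size=dim)
X = rng.normal(size=(n_nodes, dim))
y = np.sign(X @ true_w)

models = np.zeros((n_nodes, dim))   # linear model currently held at each node
lr = 0.5

for _ in range(rounds):
    for sender in range(n_nodes):
        peer = rng.integers(n_nodes)                     # random-walk step
        merged = 0.5 * (models[sender] + models[peer])   # ensemble-style merge
        # Online update (logistic-loss SGD) on the peer's single local record.
        margin = y[peer] * (X[peer] @ merged)
        grad = -y[peer] * X[peer] / (1.0 + np.exp(margin))
        models[peer] = merged - lr * grad

preds = np.sign(X @ models.T)        # column j = predictions of node j's model
print("mean local-model accuracy:", (preds == y[:, None]).mean())
```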
1206.1134 | Rachit Agarwal | Rachit Agarwal, Matthew Caesar, P. Brighten Godfrey, Ben Y. Zhao | Shortest Paths in Less Than a Millisecond | 6 pages; to appear in SIGCOMM WOSN 2012 | null | null | null | cs.SI cs.DB physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of answering point-to-point shortest path queries on
massive social networks. The goal is to answer queries within tens of
milliseconds while minimizing the memory requirements. We present a technique
that achieves this goal for an extremely large fraction of path queries by
exploiting the structure of the social networks.
Using evaluations on real-world datasets, we argue that our technique offers
a unique trade-off between latency, memory and accuracy. For instance, for the
LiveJournal social network (roughly 5 million nodes and 69 million edges), our
technique can answer 99.9% of the queries in less than a millisecond. In
comparison to storing all-pairs shortest paths, our technique requires at least
550x less memory; the average query time is roughly 365 microseconds --- 430x
faster than the state-of-the-art shortest path algorithm. Furthermore, the
relative performance of our technique improves with the size (and density) of
the network. For the Orkut social network (3 million nodes and 220 million
edges), for instance, our technique is roughly 2588x faster than the
state-of-the-art algorithm for computing shortest paths.
| [
{
"version": "v1",
"created": "Wed, 6 Jun 2012 07:13:37 GMT"
}
] | 2012-06-07T00:00:00 | [
[
"Agarwal",
"Rachit",
""
],
[
"Caesar",
"Matthew",
""
],
[
"Godfrey",
"P. Brighten",
""
],
[
"Zhao",
"Ben Y.",
""
]
] | TITLE: Shortest Paths in Less Than a Millisecond
ABSTRACT: We consider the problem of answering point-to-point shortest path queries on
massive social networks. The goal is to answer queries within tens of
milliseconds while minimizing the memory requirements. We present a technique
that achieves this goal for an extremely large fraction of path queries by
exploiting the structure of the social networks.
Using evaluations on real-world datasets, we argue that our technique offers
a unique trade-off between latency, memory and accuracy. For instance, for the
LiveJournal social network (roughly 5 million nodes and 69 million edges), our
technique can answer 99.9% of the queries in less than a millisecond. In
comparison to storing all-pairs shortest paths, our technique requires at least
550x less memory; the average query time is roughly 365 microseconds --- 430x
faster than the state-of-the-art shortest path algorithm. Furthermore, the
relative performance of our technique improves with the size (and density) of
the network. For the Orkut social network (3 million nodes and 220 million
edges), for instance, our technique is roughly 2588x faster than the
state-of-the-art algorithm for computing shortest paths.
| no_new_dataset | 0.946399 |
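The abstract above does not spell out the indexing technique, so the sketch below shows one common way to exploit social-network structure for fast point-to-point distance queries: precompute BFS distances from a few high-degree landmark nodes and answer queries with the bound min over landmarks of d(s,l)+d(l,t). Treat it as an assumption-laden baseline, not the paper's actual index.

```python
from collections import deque

def bfs_distances(adj, source):
    """Unweighted single-source shortest-path distances."""
    dist = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def build_index(adj, n_landmarks=2):
    # Pick the highest-degree nodes as landmarks (a common heuristic).
    landmarks = sorted(adj, key=lambda u: len(adj[u]), reverse=True)[:n_landmarks]
    return {l: bfs_distances(adj, l) for l in landmarks}

def query(index, s, t):
    # Upper bound on d(s, t) via each landmark; exact whenever some shortest
    # path passes through a landmark.
    est = [d[s] + d[t] for d in index.values() if s in d and t in d]
    return min(est) if est else None

adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 4], 3: [1, 4], 4: [2, 3]}
idx = build_index(adj)
print(query(idx, 0, 3))   # estimated hop distance between nodes 0 and 3
```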
1206.0335 | Nima Hatami | Nima Hatami, Camelia Chira and Giuliano Armano | A Route Confidence Evaluation Method for Reliable Hierarchical Text
Categorization | null | null | null | null | cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hierarchical Text Categorization (HTC) is becoming increasingly important
with the rapidly growing amount of text data available in the World Wide Web.
Among the different strategies proposed to cope with HTC, the Local Classifier
per Node (LCN) approach attains good performance by mirroring the underlying
class hierarchy while enforcing a top-down strategy in the testing step.
However, the problem of embedding hierarchical information (parent-child
relationship) to improve the performance of HTC systems still remains open. A
confidence evaluation method for a selected route in the hierarchy is proposed
to evaluate the reliability of the final candidate labels in an HTC system. In
order to take into account the information embedded in the hierarchy, weight
factors are used to take into account the importance of each level. An
acceptance/rejection strategy in the top-down decision making process is
proposed, which improves the overall categorization accuracy by rejecting a few
percentage of samples, i.e., those with low reliability score. Experimental
results on the Reuters benchmark dataset (RCV1- v2) confirm the effectiveness
of the proposed method, compared to other state-of-the-art HTC methods.
| [
{
"version": "v1",
"created": "Sat, 2 Jun 2012 01:37:22 GMT"
}
] | 2012-06-05T00:00:00 | [
[
"Hatami",
"Nima",
""
],
[
"Chira",
"Camelia",
""
],
[
"Armano",
"Giuliano",
""
]
] | TITLE: A Route Confidence Evaluation Method for Reliable Hierarchical Text
Categorization
ABSTRACT: Hierarchical Text Categorization (HTC) is becoming increasingly important
with the rapidly growing amount of text data available in the World Wide Web.
Among the different strategies proposed to cope with HTC, the Local Classifier
per Node (LCN) approach attains good performance by mirroring the underlying
class hierarchy while enforcing a top-down strategy in the testing step.
However, the problem of embedding hierarchical information (parent-child
relationship) to improve the performance of HTC systems still remains open. A
confidence evaluation method for a selected route in the hierarchy is proposed
to evaluate the reliability of the final candidate labels in an HTC system. In
order to take into account the information embedded in the hierarchy, weight
factors are used to take into account the importance of each level. An
acceptance/rejection strategy in the top-down decision making process is
proposed, which improves the overall categorization accuracy by rejecting a few
percentage of samples, i.e., those with low reliability score. Experimental
results on the Reuters benchmark dataset (RCV1- v2) confirm the effectiveness
of the proposed method, compared to other state-of-the-art HTC methods.
| no_new_dataset | 0.958538 |
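A small sketch of the route-confidence idea in the record above: the confidence of a selected root-to-leaf route is a level-weighted combination of the local classifiers' scores, and a sample is rejected when that confidence falls below a threshold. The weighted geometric mean and the particular level weights are assumptions; the abstract does not give the paper's exact weighting scheme.

```python
import math

def route_confidence(level_scores, level_weights):
    """Weighted geometric mean of per-level local-classifier confidences.

    level_scores  : probability assigned by the classifier chosen at each
                    level of the hierarchy along the selected route.
    level_weights : importance of each level (e.g. higher near the root).
    """
    num = sum(w * math.log(max(p, 1e-12))
              for p, w in zip(level_scores, level_weights))
    return math.exp(num / sum(level_weights))

def classify_or_reject(level_scores, level_weights, threshold=0.6):
    conf = route_confidence(level_scores, level_weights)
    return ("accept", conf) if conf >= threshold else ("reject", conf)

# Example: a 3-level route with decreasing level weights.
print(classify_or_reject([0.9, 0.8, 0.55], level_weights=[3, 2, 1]))
```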
1206.0377 | Zoltan Szabo | Balazs Pinter, Gyula Voros, Zoltan Szabo, Andras Lorincz | Automated Word Puzzle Generation via Topic Dictionaries | 4 pages | International Conference on Machine Learning (ICML-2012) -
Sparsity, Dictionaries and Projections in Machine Learning and Signal
Processing Workshop, Edinburgh, Scotland, 30 June 2012 | null | null | cs.CL math.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a general method for automated word puzzle generation. Contrary to
previous approaches in this novel field, the presented method does not rely on
highly structured datasets obtained with serious human annotation effort: it
only needs an unstructured and unannotated corpus (i.e., document collection)
as input. The method builds upon two additional pillars: (i) a topic model,
which induces a topic dictionary from the input corpus (examples include e.g.,
latent semantic analysis, group-structured dictionaries or latent Dirichlet
allocation), and (ii) a semantic similarity measure of word pairs. Our method
can (i) generate automatically a large number of proper word puzzles of
different types, including the odd one out, choose the related word and
separate the topics puzzle. (ii) It can easily create domain-specific puzzles
by replacing the corpus component. (iii) It is also capable of automatically
generating puzzles with parameterizable levels of difficulty suitable for,
e.g., beginners or intermediate learners.
| [
{
"version": "v1",
"created": "Sat, 2 Jun 2012 13:11:17 GMT"
}
] | 2012-06-05T00:00:00 | [
[
"Pinter",
"Balazs",
""
],
[
"Voros",
"Gyula",
""
],
[
"Szabo",
"Zoltan",
""
],
[
"Lorincz",
"Andras",
""
]
] | TITLE: Automated Word Puzzle Generation via Topic Dictionaries
ABSTRACT: We propose a general method for automated word puzzle generation. Contrary to
previous approaches in this novel field, the presented method does not rely on
highly structured datasets obtained with serious human annotation effort: it
only needs an unstructured and unannotated corpus (i.e., document collection)
as input. The method builds upon two additional pillars: (i) a topic model,
which induces a topic dictionary from the input corpus (examples include e.g.,
latent semantic analysis, group-structured dictionaries or latent Dirichlet
allocation), and (ii) a semantic similarity measure of word pairs. Our method
can (i) generate automatically a large number of proper word puzzles of
different types, including the odd one out, choose the related word and
separate the topics puzzle. (ii) It can easily create domain-specific puzzles
by replacing the corpus component. (iii) It is also capable of automatically
generating puzzles with parameterizable levels of difficulty suitable for,
e.g., beginners or intermediate learners.
| no_new_dataset | 0.946892 |
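A toy sketch of the odd-one-out generator described in the record above: given a topic dictionary (topic -> words) and a word-pair similarity measure, pick several words from one topic and an intruder from another, preferring an intruder that is dissimilar to the in-topic words. The tiny hand-written dictionary and the character-overlap similarity stand in for the corpus-induced dictionary and semantic similarity measure of the paper.

```python
import random

# Toy topic dictionary (in the paper this is induced from a corpus by LSA,
# LDA, group-structured dictionaries, etc.).
topics = {
    "astronomy": ["planet", "orbit", "telescope", "galaxy", "comet"],
    "cooking":   ["oven", "recipe", "flour", "simmer", "whisk"],
}

def similarity(w1, w2):
    # Placeholder semantic similarity: character-bigram Jaccard overlap.
    b1 = {w1[i:i + 2] for i in range(len(w1) - 1)}
    b2 = {w2[i:i + 2] for i in range(len(w2) - 1)}
    return len(b1 & b2) / len(b1 | b2)

def odd_one_out(topics, n_in_topic=4, rng=random.Random(0)):
    topic, other = rng.sample(list(topics), 2)
    in_words = rng.sample(topics[topic], n_in_topic)
    # Choose the intruder least similar to the in-topic words (a harder
    # puzzle would instead pick a *more* similar intruder).
    intruder = min(topics[other],
                   key=lambda w: sum(similarity(w, v) for v in in_words))
    puzzle = in_words + [intruder]
    rng.shuffle(puzzle)
    return puzzle, intruder

puzzle, answer = odd_one_out(topics)
print("Which word does not belong?", puzzle, "->", answer)
```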
1205.5159 | Nicolas Dobigeon | Nicolas Dobigeon and Nathalie Brun | Spectral mixture analysis of EELS spectrum-images | Manuscript accepted for publication in Ultramicroscopy | null | null | null | cond-mat.mtrl-sci physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advances in detectors and computer science have enabled the
acquisition and the processing of multidimensional datasets, in particular in
the field of spectral imaging. Benefiting from these new developments, earth
scientists try to recover the reflectance spectra of macroscopic materials
(e.g., water, grass, mineral types...) present in an observed scene and to
estimate their respective proportions in each mixed pixel of the acquired
image. This task is usually referred to as spectral mixture analysis or
spectral unmixing (SU). SU aims at decomposing the measured pixel spectrum into
a collection of constituent spectra, called endmembers, and a set of
corresponding fractions (abundances) that indicate the proportion of each
endmember present in the pixel. Similarly, when processing spectrum-images,
microscopists usually try to map elemental, physical and chemical state
information of a given material. This paper reports how a SU algorithm
dedicated to remote sensing hyperspectral images can be successfully applied to
analyze spectrum-image resulting from electron energy-loss spectroscopy (EELS).
SU generally overcomes standard limitations inherent to other multivariate
statistical analysis methods, such as principal component analysis (PCA) or
independent component analysis (ICA), that have been previously used to analyze
EELS maps. Indeed, ICA and PCA may perform poorly for linear spectral mixture
analysis due to the strong dependence between the abundances of the different
materials. One example is presented here to demonstrate the potential of this
technique for EELS analysis.
| [
{
"version": "v1",
"created": "Wed, 23 May 2012 11:56:33 GMT"
},
{
"version": "v2",
"created": "Fri, 1 Jun 2012 12:43:37 GMT"
}
] | 2012-06-04T00:00:00 | [
[
"Dobigeon",
"Nicolas",
""
],
[
"Brun",
"Nathalie",
""
]
] | TITLE: Spectral mixture analysis of EELS spectrum-images
ABSTRACT: Recent advances in detectors and computer science have enabled the
acquisition and the processing of multidimensional datasets, in particular in
the field of spectral imaging. Benefiting from these new developments, earth
scientists try to recover the reflectance spectra of macroscopic materials
(e.g., water, grass, mineral types...) present in an observed scene and to
estimate their respective proportions in each mixed pixel of the acquired
image. This task is usually referred to as spectral mixture analysis or
spectral unmixing (SU). SU aims at decomposing the measured pixel spectrum into
a collection of constituent spectra, called endmembers, and a set of
corresponding fractions (abundances) that indicate the proportion of each
endmember present in the pixel. Similarly, when processing spectrum-images,
microscopists usually try to map elemental, physical and chemical state
information of a given material. This paper reports how a SU algorithm
dedicated to remote sensing hyperspectral images can be successfully applied to
analyze spectrum-image resulting from electron energy-loss spectroscopy (EELS).
SU generally overcomes standard limitations inherent to other multivariate
statistical analysis methods, such as principal component analysis (PCA) or
independent component analysis (ICA), that have been previously used to analyze
EELS maps. Indeed, ICA and PCA may perform poorly for linear spectral mixture
analysis due to the strong dependence between the abundances of the different
materials. One example is presented here to demonstrate the potential of this
technique for EELS analysis.
| no_new_dataset | 0.949106 |
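A compact sketch of linear spectral unmixing as applied to a spectrum-image: each pixel spectrum is modeled as a nonnegative combination of endmember spectra, and abundances are recovered per pixel with nonnegative least squares. The synthetic endmembers and the plain NNLS solver are stand-ins; the paper's specific unmixing algorithm (and any sum-to-one constraint) is not reproduced here.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
n_channels, n_endmembers, n_pixels = 200, 3, 50

# Synthetic endmember spectra (columns of E) and ground-truth abundances.
E = np.abs(rng.normal(size=(n_channels, n_endmembers)))
A_true = rng.dirichlet(np.ones(n_endmembers), size=n_pixels)       # rows sum to 1
Y = A_true @ E.T + 0.01 * rng.normal(size=(n_pixels, n_channels))  # noisy pixels

# Unmix each pixel spectrum: minimize ||E a - y||_2 subject to a >= 0.
A_est = np.array([nnls(E, y)[0] for y in Y])

print("mean abundance error:", np.abs(A_est - A_true).mean())
```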
1205.6523 | Jana Gevertz | Chamont Wang, Jana Gevertz, Chaur-Chin Chen, Leonardo Auslender | Finding Important Genes from High-Dimensional Data: An Appraisal of
Statistical Tests and Machine-Learning Approaches | 36 pages, 9 figures | null | null | null | stat.ML cs.LG q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Over the past decades, statisticians and machine-learning researchers have
developed literally thousands of new tools for the reduction of
high-dimensional data in order to identify the variables most responsible for a
particular trait. These tools have applications in a plethora of settings,
including data analysis in the fields of business, education, forensics, and
biology (such as microarray, proteomics, brain imaging), to name a few.
In the present work, we focus our investigation on the limitations and
potential misuses of certain tools in the analysis of the benchmark colon
cancer data (2,000 variables; Alon et al., 1999) and the prostate cancer data
(6,033 variables; Efron, 2010, 2008). Our analysis demonstrates that models
that produce 100% accuracy measures often select different sets of genes and
cannot stand the scrutiny of parameter estimates and model stability.
Furthermore, we created a host of simulation datasets and "artificial
diseases" to evaluate the reliability of commonly used statistical and data
mining tools. We found that certain widely used models can classify the data
with 100% accuracy without using any of the variables responsible for the
disease. With moderate sample size and suitable pre-screening, stochastic
gradient boosting will be shown to be a superior model for gene selection and
variable screening from high-dimensional datasets.
| [
{
"version": "v1",
"created": "Wed, 30 May 2012 01:23:01 GMT"
}
] | 2012-05-31T00:00:00 | [
[
"Wang",
"Chamont",
""
],
[
"Gevertz",
"Jana",
""
],
[
"Chen",
"Chaur-Chin",
""
],
[
"Auslender",
"Leonardo",
""
]
] | TITLE: Finding Important Genes from High-Dimensional Data: An Appraisal of
Statistical Tests and Machine-Learning Approaches
ABSTRACT: Over the past decades, statisticians and machine-learning researchers have
developed literally thousands of new tools for the reduction of
high-dimensional data in order to identify the variables most responsible for a
particular trait. These tools have applications in a plethora of settings,
including data analysis in the fields of business, education, forensics, and
biology (such as microarray, proteomics, brain imaging), to name a few.
In the present work, we focus our investigation on the limitations and
potential misuses of certain tools in the analysis of the benchmark colon
cancer data (2,000 variables; Alon et al., 1999) and the prostate cancer data
(6,033 variables; Efron, 2010, 2008). Our analysis demonstrates that models
that produce 100% accuracy measures often select different sets of genes and
cannot stand the scrutiny of parameter estimates and model stability.
Furthermore, we created a host of simulation datasets and "artificial
diseases" to evaluate the reliability of commonly used statistical and data
mining tools. We found that certain widely used models can classify the data
with 100% accuracy without using any of the variables responsible for the
disease. With moderate sample size and suitable pre-screening, stochastic
gradient boosting will be shown to be a superior model for gene selection and
variable screening from high-dimensional datasets.
| no_new_dataset | 0.925095 |
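A hedged sketch of the pipeline favored in the record's conclusion: univariate pre-screening of a high-dimensional (here synthetic) gene matrix, followed by stochastic gradient boosting (subsample < 1), with variable importance used for gene selection. The simulated data and all parameter values are illustrative, not the study's settings.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n_samples, n_genes = 100, 2000
X = rng.normal(size=(n_samples, n_genes))
# An "artificial disease" driven by the first three genes only.
y = (X[:, 0] + X[:, 1] - X[:, 2] + 0.5 * rng.normal(size=n_samples) > 0).astype(int)

# Pre-screening: keep the 100 genes with the strongest univariate signal.
screen = SelectKBest(f_classif, k=100).fit(X, y)
X_small = screen.transform(X)

# Stochastic gradient boosting (subsample < 1 is what makes it "stochastic").
gbm = GradientBoostingClassifier(n_estimators=200, subsample=0.5,
                                 max_depth=2, random_state=0).fit(X_small, y)

# Rank the screened genes by boosting importance.
kept = np.flatnonzero(screen.get_support())
order = np.argsort(gbm.feature_importances_)[::-1][:10]
print("top genes (original indices):", kept[order])
```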
1205.6605 | Jan Egger | Jan Egger, Bernd Freisleben, Christopher Nimsky, Tina Kapur | Template-Cut: A Pattern-Based Segmentation Paradigm | 8 pages, 6 figures, 3 tables, 6 equations, 51 references | J. Egger, B. Freisleben, C. Nimsky, T. Kapur. Template-Cut: A
Pattern-Based Segmentation Paradigm. Nature - Scientific Reports, Nature
Publishing Group (NPG), 2(420), 2012 | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/3.0/ | We present a scale-invariant, template-based segmentation paradigm that sets
up a graph and performs a graph cut to separate an object from the background.
Typically graph-based schemes distribute the nodes of the graph uniformly and
equidistantly on the image, and use a regularizer to bias the cut towards a
particular shape. The strategy of uniform and equidistant nodes does not allow
the cut to prefer more complex structures, especially when areas of the object
are indistinguishable from the background. We propose a solution by introducing
the concept of a "template shape" of the target object in which the nodes are
sampled non-uniformly and non-equidistantly on the image. We evaluate it on
2D-images where the object's textures and backgrounds are similar, and large
areas of the object have the same gray level appearance as the background. We
also evaluate it in 3D on 60 brain tumor datasets for neurosurgical planning
purposes.
| [
{
"version": "v1",
"created": "Wed, 30 May 2012 09:44:43 GMT"
}
] | 2012-05-31T00:00:00 | [
[
"Egger",
"Jan",
""
],
[
"Freisleben",
"Bernd",
""
],
[
"Nimsky",
"Christopher",
""
],
[
"Kapur",
"Tina",
""
]
] | TITLE: Template-Cut: A Pattern-Based Segmentation Paradigm
ABSTRACT: We present a scale-invariant, template-based segmentation paradigm that sets
up a graph and performs a graph cut to separate an object from the background.
Typically graph-based schemes distribute the nodes of the graph uniformly and
equidistantly on the image, and use a regularizer to bias the cut towards a
particular shape. The strategy of uniform and equidistant nodes does not allow
the cut to prefer more complex structures, especially when areas of the object
are indistinguishable from the background. We propose a solution by introducing
the concept of a "template shape" of the target object in which the nodes are
sampled non-uniformly and non-equidistantly on the image. We evaluate it on
2D-images where the object's textures and backgrounds are similar, and large
areas of the object have the same gray level appearance as the background. We
also evaluate it in 3D on 60 brain tumor datasets for neurosurgical planning
purposes.
| no_new_dataset | 0.958148 |
1205.6693 | Jia Wang | Jia Wang, James Cheng | Truss Decomposition in Massive Networks | VLDB2012 | Proceedings of the VLDB Endowment (PVLDB), Vol. 5, No. 9, pp.
812-823 (2012) | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The k-truss is a type of cohesive subgraphs proposed recently for the study
of networks. While the problem of computing most cohesive subgraphs is NP-hard,
there exists a polynomial time algorithm for computing k-truss. Compared with
k-core which is also efficient to compute, k-truss represents the "core" of a
k-core that keeps the key information of, while filtering out less important
information from, the k-core. However, existing algorithms for computing
k-truss are inefficient for handling today's massive networks. We first improve
the existing in-memory algorithm for computing k-truss in networks of moderate
size. Then, we propose two I/O-efficient algorithms to handle massive networks
that cannot fit in main memory. Our experiments on real datasets verify the
efficiency of our algorithms and the value of k-truss.
| [
{
"version": "v1",
"created": "Wed, 30 May 2012 14:32:46 GMT"
}
] | 2012-05-31T00:00:00 | [
[
"Wang",
"Jia",
""
],
[
"Cheng",
"James",
""
]
] | TITLE: Truss Decomposition in Massive Networks
ABSTRACT: The k-truss is a type of cohesive subgraphs proposed recently for the study
of networks. While the problem of computing most cohesive subgraphs is NP-hard,
there exists a polynomial time algorithm for computing k-truss. Compared with
k-core which is also efficient to compute, k-truss represents the "core" of a
k-core that keeps the key information of, while filtering out less important
information from, the k-core. However, existing algorithms for computing
k-truss are inefficient for handling today's massive networks. We first improve
the existing in-memory algorithm for computing k-truss in networks of moderate
size. Then, we propose two I/O-efficient algorithms to handle massive networks
that cannot fit in main memory. Our experiments on real datasets verify the
efficiency of our algorithms and the value of k-truss.
| no_new_dataset | 0.948058 |
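A minimal in-memory sketch of k-truss computation as described in the record above (the textbook baseline, before the paper's improvements and I/O-efficient variants): count the triangle support of every edge, then repeatedly peel edges whose support drops below k-2.

```python
from collections import defaultdict

def k_truss(edges, k):
    """Return the edges of the k-truss: every remaining edge lies in at
    least k-2 triangles of the remaining subgraph."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    E = {frozenset(e) for e in edges}

    def support(e):
        u, v = tuple(e)
        return len(adj[u] & adj[v])          # common neighbours = triangles

    changed = True
    while changed:
        changed = False
        for e in list(E):
            if support(e) < k - 2:
                u, v = tuple(e)
                adj[u].discard(v)
                adj[v].discard(u)
                E.remove(e)
                changed = True
    return E

edges = [(1, 2), (1, 3), (2, 3), (3, 4), (2, 4), (1, 4), (4, 5)]
print(k_truss(edges, k=4))   # keeps the 4-clique {1,2,3,4}; edge (4,5) is peeled
```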
1205.6694 | Ju Fan | Ju Fan, Guoliang Li, Lizhu Zhou, Shanshan Chen, Jun Hu | SEAL: Spatio-Textual Similarity Search | VLDB2012 | Proceedings of the VLDB Endowment (PVLDB), Vol. 5, No. 9, pp.
824-835 (2012) | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Location-based services (LBS) have become more and more ubiquitous recently.
Existing methods focus on finding relevant points-of-interest (POIs) based on
users' locations and query keywords. Nowadays, modern LBS applications generate
a new kind of spatio-textual data, regions-of-interest (ROIs), containing
region-based spatial information and textual description, e.g., mobile user
profiles with active regions and interest tags. To satisfy search requirements
on ROIs, we study a new research problem, called spatio-textual similarity
search: Given a set of ROIs and a query ROI, we find the similar ROIs by
considering spatial overlap and textual similarity. Spatio-textual similarity
search has many important applications, e.g., social marketing in
location-aware social networks. It calls for an efficient search method to
support large scales of spatio-textual data in LBS systems. To this end, we
introduce a filter-and-verification framework to compute the answers. In the
filter step, we generate signatures for the ROIs and the query, and utilize the
signatures to generate candidates whose signatures are similar to that of the
query. In the verification step, we verify the candidates and identify the
final answers. To achieve high performance, we generate effective high-quality
signatures, and devise efficient filtering algorithms as well as pruning
techniques. Experimental results on real and synthetic datasets show that our
method achieves high performance.
| [
{
"version": "v1",
"created": "Wed, 30 May 2012 14:32:51 GMT"
}
] | 2012-05-31T00:00:00 | [
[
"Fan",
"Ju",
""
],
[
"Li",
"Guoliang",
""
],
[
"Zhou",
"Lizhu",
""
],
[
"Chen",
"Shanshan",
""
],
[
"Hu",
"Jun",
""
]
] | TITLE: SEAL: Spatio-Textual Similarity Search
ABSTRACT: Location-based services (LBS) have become more and more ubiquitous recently.
Existing methods focus on finding relevant points-of-interest (POIs) based on
users' locations and query keywords. Nowadays, modern LBS applications generate
a new kind of spatio-textual data, regions-of-interest (ROIs), containing
region-based spatial information and textual description, e.g., mobile user
profiles with active regions and interest tags. To satisfy search requirements
on ROIs, we study a new research problem, called spatio-textual similarity
search: Given a set of ROIs and a query ROI, we find the similar ROIs by
considering spatial overlap and textual similarity. Spatio-textual similarity
search has many important applications, e.g., social marketing in
location-aware social networks. It calls for an efficient search method to
support large scales of spatio-textual data in LBS systems. To this end, we
introduce a filter-and-verification framework to compute the answers. In the
filter step, we generate signatures for the ROIs and the query, and utilize the
signatures to generate candidates whose signatures are similar to that of the
query. In the verification step, we verify the candidates and identify the
final answers. To achieve high performance, we generate effective high-quality
signatures, and devise efficient filtering algorithms as well as pruning
techniques. Experimental results on real and synthetic datasets show that our
method achieves high performance.
| no_new_dataset | 0.946843 |
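A toy sketch of the filter-and-verification flow for spatio-textual similarity search over ROIs described above: each ROI is an axis-aligned rectangle plus a tag set; the score combines spatial overlap (rectangle Jaccard) and textual Jaccard; a cheap signature (here, simply sharing a tag) filters candidates before exact verification. The rectangular regions, the 50/50 weighting, and the tag-overlap signature are all simplifying assumptions relative to the paper's signatures.

```python
def rect_jaccard(a, b):
    # a, b = (x1, y1, x2, y2) axis-aligned regions.
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def text_jaccard(t1, t2):
    return len(t1 & t2) / len(t1 | t2) if t1 | t2 else 0.0

def similarity(roi1, roi2, alpha=0.5):
    return (alpha * rect_jaccard(roi1[0], roi2[0])
            + (1 - alpha) * text_jaccard(roi1[1], roi2[1]))

def search(rois, query, threshold=0.3):
    # Filter: candidates must share at least one tag with the query (signature).
    candidates = [r for r in rois if r[1] & query[1]]
    # Verify: compute the exact spatio-textual score on candidates only.
    return [(r, similarity(r, query)) for r in candidates
            if similarity(r, query) >= threshold]

rois = [((0, 0, 2, 2), {"coffee", "wifi"}), ((5, 5, 7, 7), {"park"}),
        ((1, 1, 3, 3), {"coffee", "books"})]
query = ((0.5, 0.5, 2.5, 2.5), {"coffee"})
print(search(rois, query))
```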
1205.6695 | Theodoros Lappas | Theodoros Lappas, Marcos R. Vieira, Dimitrios Gunopulos, Vassilis J.
Tsotras | On The Spatiotemporal Burstiness of Terms | VLDB2012 | Proceedings of the VLDB Endowment (PVLDB), Vol. 5, No. 9, pp.
836-847 (2012) | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Thousands of documents are made available to the users via the web on a daily
basis. One of the most extensively studied problems in the context of such
document streams is burst identification. Given a term t, a burst is generally
exhibited when an unusually high frequency is observed for t. While spatial and
temporal burstiness have been studied individually in the past, our work is the
first to simultaneously track and measure spatiotemporal term burstiness. In
addition, we use the mined burstiness information toward an efficient
document-search engine: given a user's query of terms, our engine returns a
ranked list of documents discussing influential events with a strong
spatiotemporal impact. We demonstrate the efficiency of our methods with an
extensive experimental evaluation on real and synthetic datasets.
| [
{
"version": "v1",
"created": "Wed, 30 May 2012 14:32:56 GMT"
}
] | 2012-05-31T00:00:00 | [
[
"Lappas",
"Theodoros",
""
],
[
"Vieira",
"Marcos R.",
""
],
[
"Gunopulos",
"Dimitrios",
""
],
[
"Tsotras",
"Vassilis J.",
""
]
] | TITLE: On The Spatiotemporal Burstiness of Terms
ABSTRACT: Thousands of documents are made available to the users via the web on a daily
basis. One of the most extensively studied problems in the context of such
document streams is burst identification. Given a term t, a burst is generally
exhibited when an unusually high frequency is observed for t. While spatial and
temporal burstiness have been studied individually in the past, our work is the
first to simultaneously track and measure spatiotemporal term burstiness. In
addition, we use the mined burstiness information toward an efficient
document-search engine: given a user's query of terms, our engine returns a
ranked list of documents discussing influential events with a strong
spatiotemporal impact. We demonstrate the efficiency of our methods with an
extensive experimental evaluation on real and synthetic datasets.
| no_new_dataset | 0.943504 |
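A small sketch of one way to score spatiotemporal burstiness of a term, in the spirit of the record above: bucket its occurrences into (spatial cell, time window) bins and flag bins whose count is unusually high relative to that cell's mean and standard deviation (a z-score). The grid binning and the z-score threshold are generic assumptions, not the paper's measure.

```python
import math
from collections import defaultdict

def burst_scores(occurrences, cell_size=1.0, window=1.0):
    """occurrences: list of (x, y, t) points where the term appeared."""
    counts = defaultdict(lambda: defaultdict(int))     # cell -> window -> count
    for x, y, t in occurrences:
        cell = (int(x // cell_size), int(y // cell_size))
        counts[cell][int(t // window)] += 1

    scores = {}
    for cell, series in counts.items():
        vals = list(series.values())
        mean = sum(vals) / len(vals)
        std = math.sqrt(sum((v - mean) ** 2 for v in vals) / len(vals))
        for w, c in series.items():
            scores[(cell, w)] = (c - mean) / std if std > 0 else 0.0
    return scores

occ = [(0.2, 0.3, t) for t in (0.1, 0.2, 0.3, 0.4, 1.5, 2.5)]  # burst in window 0
bursty = {k: s for k, s in burst_scores(occ).items() if s > 1.0}
print(bursty)
```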
1205.6696 | Houtan Shirani-Mehr | Houtan Shirani-Mehr, Farnoush Banaei Kashani, Cyrus Shahabi | Efficient Reachability Query Evaluation in Large Spatiotemporal Contact
Datasets | VLDB2012 | Proceedings of the VLDB Endowment (PVLDB), Vol. 5, No. 9, pp.
848-859 (2012) | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the advent of reliable positioning technologies and prevalence of
location-based services, it is now feasible to accurately study the propagation
of items such as infectious viruses, sensitive information pieces, and malwares
through a population of moving objects, e.g., individuals, mobile devices, and
vehicles. In such application scenarios, an item passes between two objects
when the objects are sufficiently close (i.e., when they are, so-called, in
contact), and hence once an item is initiated, it can penetrate the object
population through the evolving network of contacts among objects, termed
contact network. In this paper, for the first time we define and study
reachability queries in large (i.e., disk-resident) contact datasets which
record the movement of a (potentially large) set of objects moving in a spatial
environment over an extended time period. A reachability query verifies whether
two objects are "reachable" through the evolving contact network represented by
such contact datasets. We propose two contact-dataset indexes that enable
efficient evaluation of such queries despite the potentially humongous size of
the contact datasets. With the first index, termed ReachGrid, at the query time
only a small necessary portion of the contact network which is required for
reachability evaluation is constructed and traversed. With the second approach,
termed ReachGraph, we precompute reachability at different scales and leverage
these precalculations at the query time for efficient query processing. We
optimize the placement of both indexes on disk to enable efficient index
traversal during query processing. We study the pros and cons of our proposed
approaches by performing extensive experiments with both real and synthetic
data. Based on our experimental results, our proposed approaches outperform
existing reachability query processing techniques in contact n...[truncated].
| [
{
"version": "v1",
"created": "Wed, 30 May 2012 14:33:01 GMT"
}
] | 2012-05-31T00:00:00 | [
[
"Shirani-Mehr",
"Houtan",
""
],
[
"Kashani",
"Farnoush Banaei",
""
],
[
"Shahabi",
"Cyrus",
""
]
] | TITLE: Efficient Reachability Query Evaluation in Large Spatiotemporal Contact
Datasets
ABSTRACT: With the advent of reliable positioning technologies and prevalence of
location-based services, it is now feasible to accurately study the propagation
of items such as infectious viruses, sensitive information pieces, and malwares
through a population of moving objects, e.g., individuals, mobile devices, and
vehicles. In such application scenarios, an item passes between two objects
when the objects are sufficiently close (i.e., when they are, so-called, in
contact), and hence once an item is initiated, it can penetrate the object
population through the evolving network of contacts among objects, termed
contact network. In this paper, for the first time we define and study
reachability queries in large (i.e., disk-resident) contact datasets which
record the movement of a (potentially large) set of objects moving in a spatial
environment over an extended time period. A reachability query verifies whether
two objects are "reachable" through the evolving contact network represented by
such contact datasets. We propose two contact-dataset indexes that enable
efficient evaluation of such queries despite the potentially humongous size of
the contact datasets. With the first index, termed ReachGrid, at the query time
only a small necessary portion of the contact network which is required for
reachability evaluation is constructed and traversed. With the second approach,
termed ReachGraph, we precompute reachability at different scales and leverage
these precalculations at the query time for efficient query processing. We
optimize the placement of both indexes on disk to enable efficient index
traversal during query processing. We study the pros and cons of our proposed
approaches by performing extensive experiments with both real and synthetic
data. Based on our experimental results, our proposed approaches outperform
existing reachability query processing techniques in contact n...[truncated].
| no_new_dataset | 0.939748 |
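A baseline sketch of the reachability semantics this record works with: given time-stamped contacts (u, v, t), object B is reachable from object A if an item initiated at A can be passed along a time-respecting chain of contacts. The brute-force scan below only defines the query; ReachGrid and ReachGraph are disk-based indexes that answer it without this full pass and are not reproduced here.

```python
def reachable(contacts, source, target, t_start=0.0):
    """contacts: iterable of (u, v, t), undirected, scanned in time order."""
    infected = {source}                       # objects currently carrying the item
    for u, v, t in sorted(contacts, key=lambda c: c[2]):
        if t < t_start:
            continue
        if u in infected or v in infected:    # the contact passes the item on
            infected.update((u, v))
            if target in infected:
                return True
    return target in infected

contacts = [("a", "b", 1.0), ("b", "c", 2.0), ("c", "d", 1.5)]
print(reachable(contacts, "a", "c"))   # True: a->b at t=1, then b->c at t=2
print(reachable(contacts, "a", "d"))   # False: c-d happens before b-c
```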
1205.6700 | Hongzhi Yin | Hongzhi Yin, Bin Cui, Jing Li, Junjie Yao, Chen Chen | Challenging the Long Tail Recommendation | VLDB2012 | Proceedings of the VLDB Endowment (PVLDB), Vol. 5, No. 9, pp.
896-907 (2012) | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The success of "infinite-inventory" retailers such as Amazon.com and Netflix
has been largely attributed to a "long tail" phenomenon. Although the majority
of their inventory is not in high demand, these niche products, unavailable at
limited-inventory competitors, generate a significant fraction of total revenue
in aggregate. In addition, tail product availability can boost head sales by
offering consumers the convenience of "one-stop shopping" for both their
mainstream and niche tastes. However, most existing recommender systems,
especially collaborative filtering based methods, cannot recommend tail
products due to the data sparsity issue. It has been widely acknowledged that
recommending popular products is easier yet more trivial, while recommending
long tail products adds more novelty yet is also a more challenging task. In this
paper, we propose a novel suite of graph-based algorithms for the long tail
recommendation. We first represent user-item information with undirected
edge-weighted graph and investigate the theoretical foundation of applying
Hitting Time algorithm for long tail item recommendation. To improve
recommendation diversity and accuracy, we extend Hitting Time and propose
efficient Absorbing Time algorithm to help users find their favorite long tail
items. Finally, we refine the Absorbing Time algorithm and propose two
entropy-biased Absorbing Cost algorithms to distinguish the variation on
different user-item rating pairs, which further enhances the effectiveness of
long tail recommendation. Empirical experiments on two real life datasets show
that our proposed algorithms are effective to recommend long tail items and
outperform state-of-the-art recommendation techniques.
| [
{
"version": "v1",
"created": "Wed, 30 May 2012 14:33:56 GMT"
}
] | 2012-05-31T00:00:00 | [
[
"Yin",
"Hongzhi",
""
],
[
"Cui",
"Bin",
""
],
[
"Li",
"Jing",
""
],
[
"Yao",
"Junjie",
""
],
[
"Chen",
"Chen",
""
]
] | TITLE: Challenging the Long Tail Recommendation
ABSTRACT: The success of "infinite-inventory" retailers such as Amazon.com and Netflix
has been largely attributed to a "long tail" phenomenon. Although the majority
of their inventory is not in high demand, these niche products, unavailable at
limited-inventory competitors, generate a significant fraction of total revenue
in aggregate. In addition, tail product availability can boost head sales by
offering consumers the convenience of "one-stop shopping" for both their
mainstream and niche tastes. However, most existing recommender systems,
especially collaborative filtering based methods, cannot recommend tail
products due to the data sparsity issue. It has been widely acknowledged that
recommending popular products is easier yet more trivial, while recommending
long tail products adds more novelty yet is also a more challenging task. In this
paper, we propose a novel suite of graph-based algorithms for the long tail
recommendation. We first represent user-item information with undirected
edge-weighted graph and investigate the theoretical foundation of applying
Hitting Time algorithm for long tail item recommendation. To improve
recommendation diversity and accuracy, we extend Hitting Time and propose
efficient Absorbing Time algorithm to help users find their favorite long tail
items. Finally, we refine the Absorbing Time algorithm and propose two
entropy-biased Absorbing Cost algorithms to distinguish the variation on
different user-item rating pairs, which further enhances the effectiveness of
long tail recommendation. Empirical experiments on two real life datasets show
that our proposed algorithms are effective to recommend long tail items and
outperform state-of-the-art recommendation techniques.
| no_new_dataset | 0.947186 |
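A toy sketch of the hitting-time idea behind the recommenders in the record above: build the undirected, edge-weighted user-item graph from ratings, simulate random walks from the target user, and rank unrated items by how quickly the walk first hits them (shorter expected hitting time = stronger recommendation). Monte Carlo estimation is used here for brevity; the paper's Absorbing Time and entropy-biased Absorbing Cost variants are not reproduced.

```python
import random
from collections import defaultdict

ratings = {("u1", "i1"): 5, ("u1", "i2"): 3, ("u2", "i2"): 4,
           ("u2", "i3"): 5, ("u3", "i3"): 2, ("u3", "i4"): 4}

graph = defaultdict(list)                    # weighted bipartite user-item graph
for (u, i), w in ratings.items():
    graph[u].append((i, w))
    graph[i].append((u, w))

def step(node, rng):
    # One weighted random-walk step from `node`.
    nbrs = graph[node]
    r = rng.uniform(0, sum(w for _, w in nbrs))
    for nxt, w in nbrs:
        r -= w
        if r <= 0:
            return nxt
    return nbrs[-1][0]

def hitting_time(start, target, n_walks=2000, max_len=50, rng=random.Random(0)):
    times = []
    for _ in range(n_walks):
        node = start
        for t in range(1, max_len + 1):
            node = step(node, rng)
            if node == target:
                times.append(t)
                break
        else:
            times.append(max_len)            # truncated walk
    return sum(times) / len(times)

unrated = [i for i in ("i1", "i2", "i3", "i4") if ("u1", i) not in ratings]
print(sorted(unrated, key=lambda i: hitting_time("u1", i)))  # best first
```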
1205.6278 | Bosiljka Tadic | Milovan \v{S}uvakov, David Garcia, Frank Schweitzer, Bosiljka Tadi\'c | Agent-based simulations of emotion spreading in online social networks | 21 pages, 13 figures | null | null | IJS-F1 preprint 12/08 | physics.soc-ph cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Quantitative analysis of empirical data from online social networks reveals
group dynamics in which emotions are involved (\v{S}uvakov et al). Full
understanding of the underlying mechanisms, however, remains a challenging
task. Using agent-based computer simulations, in this paper we study dynamics
of emotional communications in online social networks. The rules that guide how
the agents interact are motivated, and the realistic network structure and some
important parameters are inferred from the empirical dataset of
\texttt{MySpace} social network. Agent's emotional state is characterized by
two variables representing psychological arousal---reactivity to stimuli, and
valence---attractiveness or aversiveness, by which common emotions can be
defined. Agent's action is triggered by increased arousal. High-resolution
dynamics is implemented where each message carrying agent's emotion along the
network link is identified and its effect on the recipient agent is considered
as continuously aging in time. Our results demonstrate that (i) aggregated
group behaviors may arise from individual emotional actions of agents; (ii)
collective states characterized by temporal correlations and dominant positive
emotions emerge, similar to the empirical system; (iii) nature of the driving
signal---rate of user's stepping into online world, has profound effects on
building the coherent behaviors, which are observed for users in online social
networks. Further, our simulations suggest that spreading patterns differ for
the emotions, e.g., "enthusiastic" and "ashamed", which have entirely different
emotional content. {\bf {All data used in this study are fully anonymized.}}
| [
{
"version": "v1",
"created": "Tue, 29 May 2012 07:10:15 GMT"
}
] | 2012-05-30T00:00:00 | [
[
"Šuvakov",
"Milovan",
""
],
[
"Garcia",
"David",
""
],
[
"Schweitzer",
"Frank",
""
],
[
"Tadić",
"Bosiljka",
""
]
] | TITLE: Agent-based simulations of emotion spreading in online social networks
ABSTRACT: Quantitative analysis of empirical data from online social networks reveals
group dynamics in which emotions are involved (\v{S}uvakov et al). Full
understanding of the underlying mechanisms, however, remains a challenging
task. Using agent-based computer simulations, in this paper we study dynamics
of emotional communications in online social networks. The rules that guide how
the agents interact are motivated, and the realistic network structure and some
important parameters are inferred from the empirical dataset of
\texttt{MySpace} social network. Agent's emotional state is characterized by
two variables representing psychological arousal---reactivity to stimuli, and
valence---attractiveness or aversiveness, by which common emotions can be
defined. Agent's action is triggered by increased arousal. High-resolution
dynamics is implemented where each message carrying agent's emotion along the
network link is identified and its effect on the recipient agent is considered
as continuously aging in time. Our results demonstrate that (i) aggregated
group behaviors may arise from individual emotional actions of agents; (ii)
collective states characterized by temporal correlations and dominant positive
emotions emerge, similar to the empirical system; (iii) nature of the driving
signal---rate of user's stepping into online world, has profound effects on
building the coherent behaviors, which are observed for users in online social
networks. Further, our simulations suggest that spreading patterns differ for
the emotions, e.g., "enthusiastic" and "ashamed", which have entirely different
emotional content. {\bf {All data used in this study are fully anonymized.}}
| no_new_dataset | 0.947527 |
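A deliberately stripped-down sketch of the kind of agent-based model the record above simulates: agents carry arousal and valence, incoming messages raise arousal and pull valence, an agent posts a message when its arousal crosses a threshold, and message influence decays (ages) over time. Every update rule, constant, and the random friendship network below are illustrative assumptions; the paper's rules and parameters are calibrated to MySpace data.

```python
import random

rng = random.Random(0)
N, STEPS = 50, 100
friends = {i: rng.sample([j for j in range(N) if j != i], 4) for i in range(N)}
arousal = [rng.random() * 0.3 for _ in range(N)]
valence = [rng.uniform(-1, 1) for _ in range(N)]
inbox = {i: [] for i in range(N)}            # (message valence, age) pairs

posted = 0
for t in range(STEPS):
    # Messages influence their recipients, with influence decaying by age.
    for i in range(N):
        for v_msg, age in inbox[i]:
            w = 0.5 ** age                   # assumed exponential aging
            arousal[i] = min(1.0, arousal[i] + 0.1 * w)
            valence[i] += 0.1 * w * (v_msg - valence[i])
        inbox[i] = [(v, a + 1) for v, a in inbox[i] if a < 5]
    # High-arousal agents act: post their valence to friends, then relax.
    for i in range(N):
        arousal[i] += 0.05 * rng.random()    # external driving signal
        if arousal[i] > 0.8:
            for f in friends[i]:
                inbox[f].append((valence[i], 0))
            arousal[i] = 0.2
            posted += 1

print("messages posted:", posted, "| mean valence:", sum(valence) / N)
```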
1205.6373 | Gerard Burnside | Gerard Burnside, Dohy Hong, Son Nguyen-Kim and Liang Liu | Publication Induced Research Analysis (PIRA) - Experiments on Real Data | null | null | null | null | cs.DL cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper describes the first results obtained by implementing a novel
approach to rank vertices in a heterogeneous graph, based on the PageRank
family of algorithms and applied here to the bipartite graph of papers and
authors as a first evaluation of its relevance on real data samples. With this
approach to evaluate research activities, the ranking of a paper/author depends
on that of the papers/authors citing it/him or her. We compare the results
against existing ranking methods (including methods which simply apply PageRank
to the graph of papers or the graph of authors) through the analysis of simple
scenarios based on a real dataset built from DBLP and CiteseerX. The results
show that in all examined cases our method yields the most pertinent result,
which leads us to orient our future work toward optimizing the execution of
this algorithm.
| [
{
"version": "v1",
"created": "Tue, 29 May 2012 14:28:10 GMT"
}
] | 2012-05-30T00:00:00 | [
[
"Burnside",
"Gerard",
""
],
[
"Hong",
"Dohy",
""
],
[
"Nguyen-Kim",
"Son",
""
],
[
"Liu",
"Liang",
""
]
] | TITLE: Publication Induced Research Analysis (PIRA) - Experiments on Real Data
ABSTRACT: This paper describes the first results obtained by implementing a novel
approach to rank vertices in a heterogeneous graph, based on the PageRank
family of algorithms and applied here to the bipartite graph of papers and
authors as a first evaluation of its relevance on real data samples. With this
approach to evaluate research activities, the ranking of a paper/author depends
on that of the papers/authors citing it/him or her. We compare the results
against existing ranking methods (including methods which simply apply PageRank
to the graph of papers or the graph of authors) through the analysis of simple
scenarios based on a real dataset built from DBLP and CiteseerX. The results
show that in all examined cases our method yields the most pertinent result,
which leads us to orient our future work toward optimizing the execution of
this algorithm.
| no_new_dataset | 0.945851 |
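A compact sketch of ranking papers and authors jointly with a PageRank-style power iteration over one bipartite-plus-citation graph, in the spirit of the record above: a paper passes score to the papers it cites and to its authors, and an author passes score back to his or her papers. The edge construction and damping factor are generic choices; the paper's specific propagation rules differ.

```python
import numpy as np

papers = ["p1", "p2", "p3"]
authors = ["a1", "a2"]
cites = [("p2", "p1"), ("p3", "p1"), ("p3", "p2")]     # citing -> cited
wrote = [("a1", "p1"), ("a1", "p2"), ("a2", "p3")]

nodes = papers + authors
idx = {m: i for i, m in enumerate(nodes)}
n = len(nodes)

# Out-links: a paper points to the papers it cites and to its authors;
# an author points to his or her papers, so credit flows both ways.
out = {m: [] for m in nodes}
for citing, cited in cites:
    out[citing].append(cited)
for a, p in wrote:
    out[p].append(a)
    out[a].append(p)

d, r = 0.85, np.full(n, 1.0 / n)
for _ in range(100):
    new = np.full(n, (1 - d) / n)
    for m in nodes:
        if out[m]:
            share = d * r[idx[m]] / len(out[m])
            for target in out[m]:
                new[idx[target]] += share
        else:                                # dangling node: spread uniformly
            new += d * r[idx[m]] / n
    r = new

print({m: round(float(r[idx[m]]), 3) for m in nodes})
```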
1205.5353 | Ravindra Jain | Ravindra Jain | A hybrid clustering algorithm for data mining | null | null | null | null | cs.DB cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Data clustering is a process of arranging similar data into groups. A
clustering algorithm partitions a data set into several groups such that the
similarity within a group is better than among groups. In this paper a hybrid
clustering algorithm based on K-means and K-harmonic means (KHM) is described.
The proposed algorithm is tested on five different datasets. The research is
focused on fast and accurate clustering. Its performance is compared with the
traditional K-means and KHM algorithms. The result obtained from the proposed
hybrid algorithm is much better than that of the traditional K-means and KHM
algorithms.
| [
{
"version": "v1",
"created": "Thu, 24 May 2012 07:37:28 GMT"
}
] | 2012-05-25T00:00:00 | [
[
"Jain",
"Ravindra",
""
]
] | TITLE: A hybrid clustering algorithm for data mining
ABSTRACT: Data clustering is a process of arranging similar data into groups. A
clustering algorithm partitions a data set into several groups such that the
similarity within a group is better than among groups. In this paper a hybrid
clustering algorithm based on K-means and K-harmonic means (KHM) is described.
The proposed algorithm is tested on five different datasets. The research is
focused on fast and accurate clustering. Its performance is compared with the
traditional K-means and KHM algorithms. The result obtained from the proposed
hybrid algorithm is much better than that of the traditional K-means and KHM
algorithms.
| no_new_dataset | 0.950641 |
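A small numpy sketch of a hybrid in the spirit of this record: run a few K-harmonic means (KHM) center updates, whose soft harmonic-mean objective is less sensitive to initialization, then hand the centers to K-means for fast final refinement. The p=3.5 exponent and the hand-off point are assumptions; the abstract does not describe the paper's exact hybridization.

```python
import numpy as np
from sklearn.cluster import KMeans

def khm_iterations(X, centers, p=3.5, iters=10, eps=1e-8):
    """A few K-harmonic means center updates (Zhang's KHM_p formulas)."""
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + eps
        q = d ** (-p - 2)
        weights = q / (d ** (-p)).sum(axis=1, keepdims=True) ** 2
        centers = (weights[:, :, None] * X[:, None, :]).sum(0) / weights.sum(0)[:, None]
    return centers

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.3, size=(100, 2)) for m in ((0, 0), (3, 0), (0, 3))])

init = X[rng.choice(len(X), 3, replace=False)] + 0.05 * rng.normal(size=(3, 2))
centers = khm_iterations(X, init)                          # KHM stage
km = KMeans(n_clusters=3, init=centers, n_init=1).fit(X)   # K-means refinement
print("final centers:\n", km.cluster_centers_)
```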
1205.5024 | A.K. Mishra Dr. | A.K. Mishra and H. Chandrasekharan | Analytical Study of Hexapod miRNAs using Phylogenetic Methods | null | null | null | null | cs.CE q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | MicroRNAs (miRNAs) are a class of non-coding RNAs that regulate gene
expression. Identification of total number of miRNAs even in completely
sequenced organisms is still an open problem. However, researchers have been
using techniques that can predict a limited number of miRNAs in an organism.
In this paper, we have used a homology-based approach for the comparative
analysis of miRNAs of the hexapoda group. We have used the Apis mellifera,
Bombyx mori, Anopheles gambiae and Drosophila melanogaster miRNA datasets from
the miRBase repository. We have done pairwise as well as multiple alignments
for the available miRNAs in
the repository to identify and analyse conserved regions among related species.
Unfortunately, to the best of our knowledge, miRNA related literature does not
provide in depth analysis of hexapods. We have made an attempt to derive the
commonality among the miRNAs and to identify the conserved regions which are
still not available in miRNA repositories. The results are a good approximation
with a small number of mismatches. However, they are encouraging and may
facilitate miRNA biogenesis for
| [
{
"version": "v1",
"created": "Tue, 22 May 2012 10:28:29 GMT"
}
] | 2012-05-24T00:00:00 | [
[
"Mishra",
"A. K.",
""
],
[
"Chandrasekharan",
"H.",
""
]
] | TITLE: Analytical Study of Hexapod miRNAs using Phylogenetic Methods
ABSTRACT: MicroRNAs (miRNAs) are a class of non-coding RNAs that regulate gene
expression. Identification of total number of miRNAs even in completely
sequenced organisms is still an open problem. However, researchers have been
using techniques that can predict a limited number of miRNAs in an organism.
In this paper, we have used a homology-based approach for the comparative
analysis of miRNAs of the hexapoda group. We have used the Apis mellifera,
Bombyx mori, Anopheles gambiae and Drosophila melanogaster miRNA datasets from
the miRBase repository. We have done pairwise as well as multiple alignments
for the available miRNAs in
the repository to identify and analyse conserved regions among related species.
Unfortunately, to the best of our knowledge, miRNA related literature does not
provide in depth analysis of hexapods. We have made an attempt to derive the
commonality among the miRNAs and to identify the conserved regions which are
still not available in miRNA repositories. The results are a good approximation
with a small number of mismatches. However, they are encouraging and may
facilitate miRNA biogenesis for
| no_new_dataset | 0.944536 |
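A self-contained sketch of the kind of pairwise comparison the record above performs on miRNA sequences: a simple Needleman-Wunsch global alignment with match/mismatch/gap scores, from which conserved stretches and the number of mismatches can be read off. The two short sequences and the scoring values are illustrative only; the study works on full miRBase datasets and also uses multiple alignment.

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
    n, m = len(a), len(b)
    # DP table of best alignment scores for prefixes a[:i], b[:j].
    S = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        S[i][0] = i * gap
    for j in range(1, m + 1):
        S[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = S[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            S[i][j] = max(diag, S[i-1][j] + gap, S[i][j-1] + gap)
    # Traceback to recover the aligned strings.
    out_a, out_b, i, j = [], [], n, m
    while i > 0 or j > 0:
        if (i > 0 and j > 0 and
                S[i][j] == S[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)):
            out_a.append(a[i-1]); out_b.append(b[j-1]); i -= 1; j -= 1
        elif i > 0 and S[i][j] == S[i-1][j] + gap:
            out_a.append(a[i-1]); out_b.append("-"); i -= 1
        else:
            out_a.append("-"); out_b.append(b[j-1]); j -= 1
    return "".join(reversed(out_a)), "".join(reversed(out_b)), S[n][m]

# Two toy miRNA-like sequences (not real miRBase entries).
x, y, score = needleman_wunsch("UGAGGUAGUAGGUU", "UGAGGUAGUAGAUU")
mismatches = sum(1 for c1, c2 in zip(x, y) if c1 != c2)
print(x, y, "score:", score, "mismatches:", mismatches, sep="\n")
```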
1205.5204 | Bruno Jobard | Bruno Jobard, Nicolas Ray and Dmitry Sokolov | Visualizing 2D Flows with Animated Arrow Plots | null | null | null | null | cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Flow fields are often represented by a set of static arrows to illustrate
scientific vulgarization, documentary film, meteorology, etc. This simple
schematic representation lets an observer intuitively interpret the main
properties of a flow: its orientation and velocity magnitude. We propose to
generate dynamic versions of such representations for 2D unsteady flow fields.
Our algorithm smoothly animates arrows along the flow while controlling their
density in the domain over time. Several strategies have been combined to lower
the unavoidable popping artifacts arising when arrows appear and disappear and
to achieve visually pleasing animations. Disturbing arrow rotations in low
velocity regions are also handled by continuously morphing arrow glyphs to
semi-transparent discs. To substantiate our method, we provide results for
synthetic and real velocity field datasets.
| [
{
"version": "v1",
"created": "Wed, 23 May 2012 15:29:16 GMT"
}
] | 2012-05-24T00:00:00 | [
[
"Jobard",
"Bruno",
""
],
[
"Ray",
"Nicolas",
""
],
[
"Sokolov",
"Dmitry",
""
]
] | TITLE: Visualizing 2D Flows with Animated Arrow Plots
ABSTRACT: Flow fields are often represented by a set of static arrows to illustrate
flows in science popularization, documentary films, meteorology, etc. This simple
schematic representation lets an observer intuitively interpret the main
properties of a flow: its orientation and velocity magnitude. We propose to
generate dynamic versions of such representations for 2D unsteady flow fields.
Our algorithm smoothly animates arrows along the flow while controlling their
density in the domain over time. Several strategies have been combined to lower
the unavoidable popping artifacts arising when arrows appear and disappear and
to achieve visually pleasing animations. Disturbing arrow rotations in low
velocity regions are also handled by continuously morphing arrow glyphs to
semi-transparent discs. To substantiate our method, we provide results for
synthetic and real velocity field datasets.
| no_new_dataset | 0.94743 |
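
As a rough illustration of the idea in the record above (arrows advected along an unsteady 2D flow), here is a minimal matplotlib sketch; the analytic velocity field, seed count, time step and the wrap-around density control are all assumptions for the toy example, not the paper's popping-suppression or glyph-morphing machinery.

```python
# Sketch: advect a fixed set of arrow seeds along an analytic unsteady 2D flow
# and redraw them with matplotlib. The flow and all constants are placeholders.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

def velocity(x, y, t):
    u = -np.sin(np.pi * x) * np.cos(np.pi * y)
    v = np.cos(np.pi * x) * np.sin(np.pi * y) * np.cos(0.5 * t)
    return u, v

rng = np.random.default_rng(0)
pos = rng.uniform(-1, 1, size=(200, 2))              # arrow anchor points
fig, ax = plt.subplots()
u, v = velocity(pos[:, 0], pos[:, 1], 0.0)
quiv = ax.quiver(pos[:, 0], pos[:, 1], u, v, angles="xy")
ax.set_xlim(-1, 1)
ax.set_ylim(-1, 1)

def update(frame, dt=0.05):
    global pos
    t = frame * dt
    u, v = velocity(pos[:, 0], pos[:, 1], t)
    pos = pos + dt * np.column_stack([u, v])          # Euler advection step
    pos = (pos + 1.0) % 2.0 - 1.0                     # crude density control: wrap around
    quiv.set_offsets(pos)
    quiv.set_UVC(*velocity(pos[:, 0], pos[:, 1], t))
    return (quiv,)

anim = FuncAnimation(fig, update, frames=200, interval=40, blit=False)
plt.show()
```
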
1205.4546 | Myunghwan Kim | Myunghwan Kim and Jure Leskovec | Latent Multi-group Membership Graph Model | 10 pages, 4 figures, 4 tables | null | null | null | cs.SI physics.soc-ph stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We develop the Latent Multi-group Membership Graph (LMMG) model, a model of
networks with rich node feature structure. In the LMMG model, each node belongs
to multiple groups and each latent group models the occurrence of links as well
as the node feature structure. The LMMG can be used to summarize the network
structure, to predict links between the nodes, and to predict missing features
of a node. We derive efficient inference and learning algorithms and evaluate
the predictive performance of the LMMG on several social and document network
datasets.
| [
{
"version": "v1",
"created": "Mon, 21 May 2012 09:56:10 GMT"
}
] | 2012-05-22T00:00:00 | [
[
"Kim",
"Myunghwan",
""
],
[
"Leskovec",
"Jure",
""
]
] | TITLE: Latent Multi-group Membership Graph Model
ABSTRACT: We develop the Latent Multi-group Membership Graph (LMMG) model, a model of
networks with rich node feature structure. In the LMMG model, each node belongs
to multiple groups and each latent group models the occurrence of links as well
as the node feature structure. The LMMG can be used to summarize the network
structure, to predict links between the nodes, and to predict missing features
of a node. We derive efficient inference and learning algorithms and evaluate
the predictive performance of the LMMG on several social and document network
datasets.
| no_new_dataset | 0.952264 |
1205.4013 | Xiaohan Zhao | Xiaohan Zhao, Alessandra Sala, Christo Wilson, Xiao Wang, Sabrina
Gaito, Haitao Zheng, Ben Y. Zhao | Multi-scale Dynamics in a Massive Online Social Network | null | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Data confidentiality policies at major social network providers have severely
limited researchers' access to large-scale datasets. The biggest impact has
been on the study of network dynamics, where researchers have studied citation
graphs and content-sharing networks, but few have analyzed detailed dynamics in
the massive social networks that dominate the web today. In this paper, we
present results of analyzing detailed dynamics in the Renren social network,
covering a period of 2 years when the network grew from 1 user to 19 million
users and 199 million edges. Rather than validate a single model of network
dynamics, we analyze dynamics at different granularities (user-, community- and
network-wide) to determine how much, if any, users are influenced by dynamics
processes at different scales. We observe independent predictable processes
at each level, and find that while the growth of communities has moderate and
sustained impact on users, significant events such as network merge events have
a strong but short-lived impact that is quickly dominated by the continuous
arrival of new users.
| [
{
"version": "v1",
"created": "Thu, 17 May 2012 19:21:56 GMT"
}
] | 2012-05-18T00:00:00 | [
[
"Zhao",
"Xiaohan",
""
],
[
"Sala",
"Alessandra",
""
],
[
"Wilson",
"Christo",
""
],
[
"Wang",
"Xiao",
""
],
[
"Gaito",
"Sabrina",
""
],
[
"Zheng",
"Haitao",
""
],
[
"Zhao",
"Ben Y.",
""
]
] | TITLE: Multi-scale Dynamics in a Massive Online Social Network
ABSTRACT: Data confidentiality policies at major social network providers have severely
limited researchers' access to large-scale datasets. The biggest impact has
been on the study of network dynamics, where researchers have studied citation
graphs and content-sharing networks, but few have analyzed detailed dynamics in
the massive social networks that dominate the web today. In this paper, we
present results of analyzing detailed dynamics in the Renren social network,
covering a period of 2 years when the network grew from 1 user to 19 million
users and 199 million edges. Rather than validate a single model of network
dynamics, we analyze dynamics at different granularities (user-, community- and
network-wide) to determine how much, if any, users are influenced by dynamics
processes at different scales. We observe independent predictable processes
at each level, and find that while the growth of communities has moderate and
sustained impact on users, significant events such as network merge events have
a strong but short-lived impact that is quickly dominated by the continuous
arrival of new users.
| no_new_dataset | 0.949482 |
1010.2198 | Akram Aldroubi | Akram Aldroubi and Ali Sekmen | Nearness to Local Subspace Algorithm for Subspace and Motion
Segmentation | null | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There is a growing interest in computer science, engineering, and mathematics
for modeling signals in terms of union of subspaces and manifolds. Subspace
segmentation and clustering of high dimensional data drawn from a union of
subspaces are especially important with many practical applications in computer
vision, image and signal processing, communications, and information theory.
This paper presents a clustering algorithm for high dimensional data that comes
from a union of lower dimensional subspaces of equal and known dimensions. Such
cases occur in many data clustering problems, such as motion segmentation and
face recognition. The algorithm is reliable in the presence of noise, and
applied to the Hopkins 155 Dataset, it generates the best results to date for
motion segmentation. The two motion, three motion, and overall segmentation
rates for the video sequences are 99.43%, 98.69%, and 99.24%, respectively.
| [
{
"version": "v1",
"created": "Mon, 11 Oct 2010 19:47:41 GMT"
},
{
"version": "v2",
"created": "Mon, 14 May 2012 22:57:33 GMT"
}
] | 2012-05-16T00:00:00 | [
[
"Aldroubi",
"Akram",
""
],
[
"Sekmen",
"Ali",
""
]
] | TITLE: Nearness to Local Subspace Algorithm for Subspace and Motion
Segmentation
ABSTRACT: There is a growing interest in computer science, engineering, and mathematics
for modeling signals in terms of union of subspaces and manifolds. Subspace
segmentation and clustering of high dimensional data drawn from a union of
subspaces are especially important with many practical applications in computer
vision, image and signal processing, communications, and information theory.
This paper presents a clustering algorithm for high dimensional data that comes
from a union of lower dimensional subspaces of equal and known dimensions. Such
cases occur in many data clustering problems, such as motion segmentation and
face recognition. The algorithm is reliable in the presence of noise, and
applied to the Hopkins 155 Dataset, it generates the best results to date for
motion segmentation. The two motion, three motion, and overall segmentation
rates for the video sequences are 99.43%, 98.69%, and 99.24%, respectively.
| no_new_dataset | 0.951504 |
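
The record above relies on measuring how near points are to a locally fitted subspace. The sketch below shows only that primitive (SVD fit of a local affine subspace plus point-to-subspace distance) on synthetic lines in R^3; the neighbourhood size and toy data are assumptions, and the full segmentation algorithm is not reproduced.

```python
# Simplified sketch of the core primitive: fit a local subspace (via SVD) to a
# point's nearest neighbours and measure how near other points are to it.
# This is not the authors' full segmentation pipeline.
import numpy as np

def local_subspace(points, d):
    """Return (mean, orthonormal basis) of the best-fit d-dim affine subspace."""
    mu = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - mu, full_matrices=False)
    return mu, vt[:d]                       # rows of vt span the subspace

def distance_to_subspace(x, mu, basis):
    r = x - mu
    return np.linalg.norm(r - basis.T @ (basis @ r))

# Toy data: two 1-D subspaces (lines) in R^3 with noise.
rng = np.random.default_rng(1)
t = rng.uniform(-1, 1, 100)
line1 = np.outer(t, [1.0, 0.0, 0.0]) + 0.01 * rng.standard_normal((100, 3))
line2 = np.outer(t, [0.0, 1.0, 1.0]) + 0.01 * rng.standard_normal((100, 3))
X = np.vstack([line1, line2])

# Fit a local subspace around point 0 using its 10 nearest neighbours.
i = 0
nbrs = np.argsort(np.linalg.norm(X - X[i], axis=1))[:10]
mu, B = local_subspace(X[nbrs], d=1)
dists = np.array([distance_to_subspace(x, mu, B) for x in X])
print("mean distance, same line :", dists[:100].mean())
print("mean distance, other line:", dists[100:].mean())
```
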
1205.3441 | Romain Giot | Romain Giot (GREYC), Christophe Rosenberger (GREYC) | Genetic Programming for Multibiometrics | null | Expert Systems with Applications 39, 2 1837-1847 (2012) | 10.1016/j.eswa.2011.08.066 | null | cs.NE cs.CR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Biometric systems suffer from some drawbacks: a biometric system can provide
in general good performances except with some individuals as its performance
depends highly on the quality of the capture. One solution to solve some of
these problems is to use multibiometrics where different biometric systems are
combined together (multiple captures of the same biometric modality, multiple
feature extraction algorithms, multiple biometric modalities...). In this
paper, we are interested in the application of score-level fusion functions (i.e., we
use a multibiometric authentication scheme which accepts or denies the claimant
access to an application). In the state of the art, the weighted sum of scores
(which is a linear classifier) and the use of an SVM (which is a non-linear
classifier) provided by different biometric systems provide one of the best
performances. We present a new method based on the use of genetic programming
giving similar or better performances (depending on the complexity of the
database). We derive a score fusion function by assembling some classical
primitive functions (+, *, -, ...). We have validated the proposed method on
three significant biometric benchmark datasets from the state of the art.
| [
{
"version": "v1",
"created": "Mon, 20 Feb 2012 10:25:16 GMT"
}
] | 2012-05-16T00:00:00 | [
[
"Giot",
"Romain",
"",
"GREYC"
],
[
"Rosenberger",
"Christophe",
"",
"GREYC"
]
] | TITLE: Genetic Programming for Multibiometrics
ABSTRACT: Biometric systems suffer from some drawbacks: a biometric system can provide
in general good performances except with some individuals as its performance
depends highly on the quality of the capture. One solution to solve some of
these problems is to use multibiometrics where different biometric systems are
combined together (multiple captures of the same biometric modality, multiple
feature extraction algorithms, multiple biometric modalities...). In this
paper, we are interested in the application of score-level fusion functions (i.e., we
use a multibiometric authentication scheme which accepts or denies the claimant
access to an application). In the state of the art, the weighted sum of scores
(which is a linear classifier) and the use of an SVM (which is a non-linear
classifier) provided by different biometric systems provide one of the best
performances. We present a new method based on the use of genetic programming
giving similar or better performances (depending on the complexity of the
database). We derive a score fusion function by assembling some classical
primitive functions (+, *, -, ...). We have validated the proposed method on
three significant biometric benchmark datasets from the state of the art.
| no_new_dataset | 0.954095 |
1205.2726 | David Leoni | David Leoni | Non-Interactive Differential Privacy: a Survey | Presented at the First International Workshop On Open Data, WOD-2012
(http://arxiv.org/abs/1204.3726) | null | null | WOD/2012/NANTES/12 | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | OpenData movement around the globe is demanding more access to information
which lies locked in public or private servers. As recently reported by a
McKinsey publication, this data has significant economic value, yet its release
has the potential to conflict blatantly with people's privacy. Recent UK government
inquiries have shown concern from various parties about the publication of
anonymized databases, as there is a concrete possibility of user identification
by means of linkage attacks. Differential privacy stands out as a model that
provides strong formal guarantees about the anonymity of the participants in a
sanitized database. Only recent results demonstrated its applicability on
real-life datasets, though. This paper covers such breakthrough discoveries, by
reviewing applications of differential privacy for non-interactive publication
of anonymized real-life datasets. Theory, utility and a data-aware comparison
are discussed on a variety of principles and concrete applications.
| [
{
"version": "v1",
"created": "Fri, 11 May 2012 21:38:16 GMT"
}
] | 2012-05-15T00:00:00 | [
[
"Leoni",
"David",
""
]
] | TITLE: Non-Interactive Differential Privacy: a Survey
ABSTRACT: OpenData movement around the globe is demanding more access to information
which lies locked in public or private servers. As recently reported by a
McKinsey publication, this data has significant economic value, yet its release
has the potential to conflict blatantly with people's privacy. Recent UK government
inquiries have shown concern from various parties about the publication of
anonymized databases, as there is a concrete possibility of user identification
by means of linkage attacks. Differential privacy stands out as a model that
provides strong formal guarantees about the anonymity of the participants in a
sanitized database. Only recent results demonstrated its applicability on
real-life datasets, though. This paper covers such breakthrough discoveries, by
reviewing applications of differential privacy for non-interactive publication
of anonymized real-life datasets. Theory, utility and a data-aware comparison
are discussed on a variety of principles and concrete applications.
| no_new_dataset | 0.944944 |
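
One of the classic non-interactive releases such surveys discuss is a Laplace-noised histogram; a minimal sketch follows, assuming the standard add/remove-one-record neighbouring definition (L1 sensitivity 1) and a synthetic attribute. It is illustrative only and not tied to any specific mechanism reviewed in the paper.

```python
# Minimal sketch of a non-interactive differentially private release:
# publish a histogram with Laplace noise calibrated to epsilon.
import numpy as np

def dp_histogram(values, bins, epsilon, rng=None):
    rng = rng or np.random.default_rng()
    counts, edges = np.histogram(values, bins=bins)
    # Adding/removing one record changes exactly one bin by 1 => L1 sensitivity 1.
    noisy = counts + rng.laplace(loc=0.0, scale=1.0 / epsilon, size=counts.shape)
    return np.clip(np.round(noisy), 0, None), edges

rng = np.random.default_rng(42)
ages = rng.integers(18, 90, size=10_000)          # synthetic "sensitive" attribute
released, edges = dp_histogram(ages, bins=10, epsilon=0.5, rng=rng)
print(released)
```
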
1205.2821 | Odemir Bruno PhD | J. B. Florindo and O. M. Bruno | Texture Analysis And Characterization Using Probability Fractal
Descriptors | 6 pages, 5 figures | null | null | null | physics.data-an cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A set of gray-level image texture descriptors based on fractal dimension estimation
is proposed in this work. The proposed method estimates the fractal dimension
using the probability (Voss) method. The descriptors are computed by applying a
multiscale transform to the fractal dimension curves of the texture image. The
proposed texture descriptor method is evaluated in a classification task of
well known benchmark texture datasets. The results show the great performance
of the proposed method as a tool for texture images analysis and
characterization.
| [
{
"version": "v1",
"created": "Sun, 13 May 2012 02:20:52 GMT"
}
] | 2012-05-15T00:00:00 | [
[
"Florindo",
"J. B.",
""
],
[
"Bruno",
"O. M.",
""
]
] | TITLE: Texture Analysis And Characterization Using Probability Fractal
Descriptors
ABSTRACT: A set of gray-level image texture descriptors based on fractal dimension estimation
is proposed in this work. The proposed method estimates the fractal dimension
using the probability (Voss) method. The descriptors are computed by applying a
multiscale transform to the fractal dimension curves of the texture image. The
proposed texture descriptor method is evaluated in a classification task of
well known benchmark texture datasets. The results show the great performance
of the proposed method as a tool for texture images analysis and
characterization.
| no_new_dataset | 0.952042 |
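
For orientation, the sketch below estimates the fractal dimension of a binary image by plain box counting; note this is a common estimator but not the probability (Voss) method the paper actually uses, and the test image is a trivial filled square.

```python
# Rough sketch of a fractal-dimension estimate for a binary image via box
# counting (the paper's probability/Voss method differs in detail).
import numpy as np

def box_count_dimension(img, sizes=(2, 4, 8, 16, 32)):
    counts = []
    for s in sizes:
        h = (img.shape[0] // s) * s
        w = (img.shape[1] // s) * s
        blocks = img[:h, :w].reshape(h // s, s, w // s, s)
        occupied = blocks.any(axis=(1, 3)).sum()   # boxes containing foreground
        counts.append(occupied)
    # Slope of log(count) vs log(1/size) estimates the fractal dimension.
    coeffs = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return coeffs[0]

# Toy example: a filled square (dimension ~2) just to exercise the code.
img = np.zeros((128, 128), dtype=bool)
img[32:96, 32:96] = True
print("estimated dimension:", box_count_dimension(img))
```
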
1205.2958 | Ping Li | Ping Li and Anshumali Shrivastava and Arnd Christian Konig | b-Bit Minwise Hashing in Practice: Large-Scale Batch and Online Learning
and Using GPUs for Fast Preprocessing with Simple Hash Functions | null | null | null | null | cs.IR cs.DB cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we study several critical issues which must be tackled before
one can apply b-bit minwise hashing to the volumes of data often used in
industrial applications, especially in the context of search.
1. (b-bit) Minwise hashing requires an expensive preprocessing step that
computes k (e.g., 500) minimal values after applying the corresponding
permutations for each data vector. We developed a parallelization scheme using
GPUs and observed that the preprocessing time can be reduced by a factor of
20-80 and becomes substantially smaller than the data loading time.
2. One major advantage of b-bit minwise hashing is that it can substantially
reduce the amount of memory required for batch learning. However, as online
algorithms become increasingly popular for large-scale learning in the context
of search, it is not clear if b-bit minwise hashing yields significant improvements for
them. This paper demonstrates that $b$-bit minwise hashing provides an
effective data size/dimension reduction scheme and hence it can dramatically
reduce the data loading time for each epoch of the online training process.
This is significant because online learning often requires many (e.g., 10 to
100) epochs to reach a sufficient accuracy.
3. Another critical issue is that for very large data sets it becomes
impossible to store a (fully) random permutation matrix, due to its space
requirements. Our paper is the first study to demonstrate that $b$-bit minwise
hashing implemented using simple hash functions, e.g., the 2-universal (2U) and
4-universal (4U) hash families, can produce very similar learning results as
using fully random permutations. Experiments on datasets of up to 200GB are
presented.
| [
{
"version": "v1",
"created": "Mon, 14 May 2012 08:28:10 GMT"
}
] | 2012-05-15T00:00:00 | [
[
"Li",
"Ping",
""
],
[
"Shrivastava",
"Anshumali",
""
],
[
"Konig",
"Arnd Christian",
""
]
] | TITLE: b-Bit Minwise Hashing in Practice: Large-Scale Batch and Online Learning
and Using GPUs for Fast Preprocessing with Simple Hash Functions
ABSTRACT: In this paper, we study several critical issues which must be tackled before
one can apply b-bit minwise hashing to the volumes of data often used in
industrial applications, especially in the context of search.
1. (b-bit) Minwise hashing requires an expensive preprocessing step that
computes k (e.g., 500) minimal values after applying the corresponding
permutations for each data vector. We developed a parallelization scheme using
GPUs and observed that the preprocessing time can be reduced by a factor of
20-80 and becomes substantially smaller than the data loading time.
2. One major advantage of b-bit minwise hashing is that it can substantially
reduce the amount of memory required for batch learning. However, as online
algorithms become increasingly popular for large-scale learning in the context
of search, it is not clear if b-bit minwise hashing yields significant improvements for
them. This paper demonstrates that $b$-bit minwise hashing provides an
effective data size/dimension reduction scheme and hence it can dramatically
reduce the data loading time for each epoch of the online training process.
This is significant because online learning often requires many (e.g., 10 to
100) epochs to reach a sufficient accuracy.
3. Another critical issue is that for very large data sets it becomes
impossible to store a (fully) random permutation matrix, due to its space
requirements. Our paper is the first study to demonstrate that $b$-bit minwise
hashing implemented using simple hash functions, e.g., the 2-universal (2U) and
4-universal (4U) hash families, can produce very similar learning results as
using fully random permutations. Experiments on datasets of up to 200GB are
presented.
| no_new_dataset | 0.942135 |
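
A compact sketch of the core trick discussed above, b-bit minwise hashing with simple 2-universal hash functions, is given below; the prime, k=500, the toy feature sets and the omission of the small-b collision correction are all simplifications for illustration.

```python
# Sketch of b-bit minwise hashing with simple 2-universal hash functions:
# keep only the lowest b bits of each of k minimum hash values.
import random

P = (1 << 61) - 1           # a large Mersenne prime for 2U hashing

def make_2u_hashes(k, seed=0):
    rnd = random.Random(seed)
    return [(rnd.randrange(1, P), rnd.randrange(0, P)) for _ in range(k)]

def b_bit_minhash(feature_ids, hashes, b):
    mask = (1 << b) - 1
    sketch = []
    for a, c in hashes:
        m = min(((a * f + c) % P) for f in feature_ids)
        sketch.append(m & mask)             # store only the lowest b bits
    return sketch

def estimated_resemblance(s1, s2):
    # Raw collision rate of the b-bit values (ignoring the small-b correction
    # term derived in the b-bit minwise hashing papers).
    return sum(x == y for x, y in zip(s1, s2)) / len(s1)

hashes = make_2u_hashes(k=500)
doc1 = {1, 2, 3, 5, 8, 13, 21, 34}
doc2 = {1, 2, 3, 5, 8, 13, 40, 55}
print(estimated_resemblance(b_bit_minhash(doc1, hashes, b=1),
                            b_bit_minhash(doc2, hashes, b=1)))
```
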
1205.3012 | Xavier Calbet | Xavier Calbet | Determination of the best optimal estimation parameters for validation
of infrared hyperspectral sounding retrievals | 38 pages, 14 figures, 1 table | null | null | null | physics.ao-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The availability of hyperspectral infrared remote sensing instruments, like
AIRS and IASI, on board Earth observing satellites, opens the possibility of
obtaining high vertical resolution atmospheric profiles. We present an
objective and simple technique to derive the parameters used in the optimal
estimation method that retrieve atmospheric states from the spectra. The
retrievals obtained in this way are optimal in the sense of providing the best
possible validation statistics obtained from the difference between retrievals
and a chosen calibration/validation dataset of atmospheric states. This is
demonstrated analytically. To illustrate this result several real world
examples using IASI retrievals fine tuned to ECMWF analyses are shown. The
analytical equations obtained give further insight into the various
contributions to the biases and errors of the retrievals and the consequences
of using other types of fine tuning. Retrievals using IASI show an error of 0.9
to 1.9 K in temperature and below 6.5 K in humidity dew point temperature in
the troposphere on the vertical radiative transfer model pressure grid
(RTIASI-4.1), which has a vertical spacing between 300 and 400 m. The more
accurately the calibration dataset represents the true state of the atmosphere,
the better the retrievals will be when compared to the true states.
| [
{
"version": "v1",
"created": "Mon, 14 May 2012 13:19:31 GMT"
}
] | 2012-05-15T00:00:00 | [
[
"Calbet",
"Xavier",
""
]
] | TITLE: Determination of the best optimal estimation parameters for validation
of infrared hyperspectral sounding retrievals
ABSTRACT: The availability of hyperspectral infrared remote sensing instruments, like
AIRS and IASI, on board Earth observing satellites, opens the possibility of
obtaining high vertical resolution atmospheric profiles. We present an
objective and simple technique to derive the parameters used in the optimal
estimation method that retrieve atmospheric states from the spectra. The
retrievals obtained in this way are optimal in the sense of providing the best
possible validation statistics obtained from the difference between retrievals
and a chosen calibration/validation dataset of atmospheric states. This is
demonstrated analytically. To illustrate this result several real world
examples using IASI retrievals fine tuned to ECMWF analyses are shown. The
analytical equations obtained give further insight into the various
contributions to the biases and errors of the retrievals and the consequences
of using other types of fine tuning. Retrievals using IASI show an error of 0.9
to 1.9 K in temperature and below 6.5 K in humidity dew point temperature in
the troposphere on the vertical radiative transfer model pressure grid
(RTIASI-4.1), which has a vertical spacing between 300 and 400 m. The more
accurately the calibration dataset represents the true state of the atmosphere,
the better the retrievals will be when compared to the true states.
| no_new_dataset | 0.937096 |
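
The retrieval step referred to above is the standard linear optimal-estimation (maximum a posteriori) update; the numpy sketch below writes it out with toy Jacobian and covariance matrices (all placeholders), and does not attempt the paper's fine-tuning of those parameters.

```python
# The standard linear optimal-estimation (maximum a posteriori) retrieval step,
# written out with numpy; shapes and covariances here are toy placeholders.
import numpy as np

def oe_retrieval(y, F_xa, K, x_a, S_a, S_e):
    """x_hat = x_a + (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1 (y - F(x_a))."""
    Se_inv = np.linalg.inv(S_e)
    Sa_inv = np.linalg.inv(S_a)
    gain = np.linalg.solve(K.T @ Se_inv @ K + Sa_inv, K.T @ Se_inv)
    return x_a + gain @ (y - F_xa)

n_state, n_obs = 4, 6
rng = np.random.default_rng(0)
K = rng.standard_normal((n_obs, n_state))       # Jacobian of the forward model
x_true = rng.standard_normal(n_state)
x_a = np.zeros(n_state)                          # a priori state
S_a = np.eye(n_state)
S_e = 0.01 * np.eye(n_obs)
y = K @ x_true + 0.1 * rng.standard_normal(n_obs)
print(oe_retrieval(y, K @ x_a, K, x_a, S_a, S_e))
```
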
1205.2424 | Ping Zhou | Ping Zhou and Yongfeng Zhong | The citation-based indicator and combined impact indicator - New options
for measuring impact | null | null | null | null | cs.DL physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Metrics based on percentile ranks (PRs) for measuring scholarly impact
involve complex treatment because of various defects such as overvaluing or
devaluing an object caused by percentile ranking schemes, ignoring precise
citation variation among those ranked next to each other, and inconsistency
caused by additional papers or citations. These defects are especially obvious
in a small-sized dataset. To avoid the complicated treatment of PRs based
metrics, we propose two new indicators - the citation-based indicator (CBI) and
the combined impact indicator (CII). Document types of publications are taken
into account. With the two indicators, one would no longer be bothered by complex
issues encountered by PRs based indicators. For a small-sized dataset with fewer
than 100 papers, special calculation is no longer needed. The CBI is based solely
on citation counts and the CII measures the integrated contributions of
publications and citations. Both virtual and empirical data are used so as to
compare the effect of related indicators. The CII and the PRs based indicator
I3 are highly correlated but the former reflects citation impact more and the
latter relates more to publications.
| [
{
"version": "v1",
"created": "Fri, 11 May 2012 03:49:35 GMT"
}
] | 2012-05-14T00:00:00 | [
[
"Zhou",
"Ping",
""
],
[
"Zhong",
"Yongfeng",
""
]
] | TITLE: The citation-based indicator and combined impact indicator - New options
for measuring impact
ABSTRACT: Metrics based on percentile ranks (PRs) for measuring scholarly impact
involve complex treatment because of various defects such as overvaluing or
devaluing an object caused by percentile ranking schemes, ignoring precise
citation variation among those ranked next to each other, and inconsistency
caused by additional papers or citations. These defects are especially obvious
in a small-sized dataset. To avoid the complicated treatment of PRs based
metrics, we propose two new indicators - the citation-based indicator (CBI) and
the combined impact indicator (CII). Document types of publications are taken
into account. With the two indicators, one would no longer be bothered by complex
issues encountered by PRs based indicators. For a small-sized dataset with fewer
than 100 papers, special calculation is no longer needed. The CBI is based solely
on citation counts and the CII measures the integrated contributions of
publications and citations. Both virtual and empirical data are used so as to
compare the effect of related indicators. The CII and the PRs based indicator
I3 are highly correlated but the former reflects citation impact more and the
latter relates more to publications.
| no_new_dataset | 0.949763 |
1205.2470 | Hideaki Aoyama | Hideaki Aoyama, Hiroshi Iyetomi, and Hiroshi Yoshikawa | Equilibrium Distribution of Labor Productivity: A Theoretical Model | 11pages, 5 figures, and 1 table | null | null | KUNS-2400 | q-fin.ST physics.data-an physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We construct a theoretical model for equilibrium distribution of workers
across sectors with different labor productivity, assuming that a sector can
accommodate a limited number of workers which depends only on its productivity.
A general formula for such distribution of productivity is obtained, using the
detail-balance condition necessary for equilibrium in the Ehrenfest-Brillouin
model. We also carry out an empirical analysis on the average number of workers
in given productivity sectors on the basis of an exhaustive dataset in Japan.
The theoretical formula succeeds in explaining the two distinctive
observational facts in a unified way, that is, a Boltzmann distribution with
negative temperature on low-to-medium productivity side and a decreasing part
in a power-law form on high productivity side.
| [
{
"version": "v1",
"created": "Fri, 11 May 2012 09:53:17 GMT"
}
] | 2012-05-14T00:00:00 | [
[
"Aoyama",
"Hideaki",
""
],
[
"Iyetomi",
"Hiroshi",
""
],
[
"Yoshikawa",
"Hiroshi",
""
]
] | TITLE: Equilibrium Distribution of Labor Productivity: A Theoretical Model
ABSTRACT: We construct a theoretical model for equilibrium distribution of workers
across sectors with different labor productivity, assuming that a sector can
accommodate a limited number of workers which depends only on its productivity.
A general formula for such distribution of productivity is obtained, using the
detail-balance condition necessary for equilibrium in the Ehrenfest-Brillouin
model. We also carry out an empirical analysis on the average number of workers
in given productivity sectors on the basis of an exhaustive dataset in Japan.
The theoretical formula succeeds in explaining the two distinctive
observational facts in a unified way, that is, a Boltzmann distribution with
negative temperature on low-to-medium productivity side and a decreasing part
in a power-law form on high productivity side.
| no_new_dataset | 0.948489 |
1205.2650 | Finale Doshi-Velez | Finale Doshi-Velez, Zoubin Ghahramani | Correlated Non-Parametric Latent Feature Models | Appears in Proceedings of the Twenty-Fifth Conference on Uncertainty
in Artificial Intelligence (UAI2009) | null | null | UAI-P-2009-PG-143-150 | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We are often interested in explaining data through a set of hidden factors or
features. When the number of hidden features is unknown, the Indian Buffet
Process (IBP) is a nonparametric latent feature model that does not bound the
number of active features in dataset. However, the IBP assumes that all latent
features are uncorrelated, making it inadequate for many realworld problems. We
introduce a framework for correlated nonparametric feature models, generalising
the IBP. We use this framework to generate several specific models and
demonstrate applications on realworld datasets.
| [
{
"version": "v1",
"created": "Wed, 9 May 2012 15:09:51 GMT"
}
] | 2012-05-14T00:00:00 | [
[
"Doshi-Velez",
"Finale",
""
],
[
"Ghahramani",
"Zoubin",
""
]
] | TITLE: Correlated Non-Parametric Latent Feature Models
ABSTRACT: We are often interested in explaining data through a set of hidden factors or
features. When the number of hidden features is unknown, the Indian Buffet
Process (IBP) is a nonparametric latent feature model that does not bound the
number of active features in a dataset. However, the IBP assumes that all latent
features are uncorrelated, making it inadequate for many real-world problems. We
introduce a framework for correlated nonparametric feature models, generalising
the IBP. We use this framework to generate several specific models and
demonstrate applications on real-world datasets.
| no_new_dataset | 0.948585 |
1205.2292 | George Papastefanatos Dr. | Yannis Stavrakas, George Papastefanatos, Theodore Dalamagas, Vassilis
Christophides | Diachronic Linked Data: Towards Long-Term Preservation of Structured
Interrelated Information | Presented at the First International Workshop On Open Data, WOD-2012
(http://arxiv.org/abs/1204.3726) | null | null | WOD/2012/NANTES/10 | cs.DB cs.DL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Linked Data Paradigm is one of the most promising technologies for
publishing, sharing, and connecting data on the Web, and offers a new way for
data integration and interoperability. However, the proliferation of
distributed, inter-connected sources of information and services on the Web
poses significant new challenges for managing consistently a huge number of
large datasets and their interdependencies. In this paper we focus on the key
problem of preserving evolving structured interlinked data. We argue that a
number of issues that hinder applications and users are related to the temporal
aspect that is intrinsic in linked data. We present a number of real use cases
to motivate our approach, we discuss the problems that occur, and propose a
direction for a solution.
| [
{
"version": "v1",
"created": "Thu, 10 May 2012 15:28:30 GMT"
}
] | 2012-05-11T00:00:00 | [
[
"Stavrakas",
"Yannis",
""
],
[
"Papastefanatos",
"George",
""
],
[
"Dalamagas",
"Theodore",
""
],
[
"Christophides",
"Vassilis",
""
]
] | TITLE: Diachronic Linked Data: Towards Long-Term Preservation of Structured
Interrelated Information
ABSTRACT: The Linked Data Paradigm is one of the most promising technologies for
publishing, sharing, and connecting data on the Web, and offers a new way for
data integration and interoperability. However, the proliferation of
distributed, inter-connected sources of information and services on the Web
poses significant new challenges for managing consistently a huge number of
large datasets and their interdependencies. In this paper we focus on the key
problem of preserving evolving structured interlinked data. We argue that a
number of issues that hinder applications and users are related to the temporal
aspect that is intrinsic in linked data. We present a number of real use cases
to motivate our approach, we discuss the problems that occur, and propose a
direction for a solution.
| no_new_dataset | 0.947332 |
1205.2345 | Salah A. Aly | Hossam Zawbaa and Salah A. Aly | Hajj and Umrah Event Recognition Datasets | 4 pages, 18 figures with 33 images | null | null | null | cs.CV cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this note, new Hajj and Umrah Event Recognition datasets (HUER) are
presented. The demonstrated datasets are based on videos and images taken
during 2011-2012 Hajj and Umrah seasons. HUER is the first collection of
datasets covering the six types of Hajj and Umrah ritual events (rotating in
Tawaf around Kabaa, performing Sa'y between Safa and Marwa, standing on the
mount of Arafat, staying overnight in Muzdalifah, staying two or three days in
Mina, and throwing Jamarat). The HUER datasets also contain video and image
databases for nine types of human actions during Hajj and Umrah (walking,
drinking from Zamzam water, sleeping, smiling, eating, praying, sitting,
shaving hairs and ablutions, reading the holy Quran and making duaa). The
spatial resolutions are 1280 x 720 pixels for images and 640 x 480 pixels for
videos, and the videos are 20 seconds long on average at a rate of 30 frames
per second.
| [
{
"version": "v1",
"created": "Thu, 10 May 2012 19:10:18 GMT"
}
] | 2012-05-11T00:00:00 | [
[
"Zawbaa",
"Hossam",
""
],
[
"Aly",
"Salah A.",
""
]
] | TITLE: Hajj and Umrah Event Recognition Datasets
ABSTRACT: In this note, new Hajj and Umrah Event Recognition datasets (HUER) are
presented. The demonstrated datasets are based on videos and images taken
during 2011-2012 Hajj and Umrah seasons. HUER is the first collection of
datasets covering the six types of Hajj and Umrah ritual events (rotating in
Tawaf around Kabaa, performing Sa'y between Safa and Marwa, standing on the
mount of Arafat, staying overnight in Muzdalifah, staying two or three days in
Mina, and throwing Jamarat). The HUER datasets also contain video and image
databases for nine types of human actions during Hajj and Umrah (walking,
drinking from Zamzam water, sleeping, smiling, eating, praying, sitting,
shaving hairs and ablutions, reading the holy Quran and making duaa). The
spatial resolutions are 1280 x 720 pixels for images and 640 x 480 pixels for
videos, and the videos are 20 seconds long on average at a rate of 30 frames
per second.
| new_dataset | 0.973292 |
1205.2031 | Sreejini Ks | K. S. Sreejini, A. Lijiya and V. K. Govindan | M-FISH Karyotyping - A New Approach Based on Watershed Transform | 13 pages,7 figures | null | 10.5121/ijcseit.2012.2210 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Karyotyping is a process in which chromosomes in a dividing cell are properly
stained, identified and displayed in a standard format, which helps geneticists
to study and diagnose genetic factors behind various genetic diseases and for
studying cancer. M-FISH (Multiplex Fluorescent In-Situ Hybridization) provides
color karyotyping. In this paper, an automated method for M-FISH chromosome
segmentation based on watershed transform followed by naive Bayes
classification of each region using the features, mean and standard deviation,
is presented. Also, a post processing step is added to re-classify the small
chromosome segments to the neighboring larger segment for reducing the chances
of misclassification. The approach provided improved accuracy when compared to
the pixel-by-pixel approach. The approach was tested on 40 images from the
dataset and achieved an accuracy of 84.21 %.
| [
{
"version": "v1",
"created": "Wed, 9 May 2012 16:52:23 GMT"
}
] | 2012-05-10T00:00:00 | [
[
"Sreejini",
"K. S.",
""
],
[
"Lijiya",
"A.",
""
],
[
"Govindan",
"V. K.",
""
]
] | TITLE: M-FISH Karyotyping - A New Approach Based on Watershed Transform
ABSTRACT: Karyotyping is a process in which chromosomes in a dividing cell are properly
stained, identified and displayed in a standard format, which helps geneticists
to study and diagnose genetic factors behind various genetic diseases and for
studying cancer. M-FISH (Multiplex Fluorescent In-Situ Hybridization) provides
color karyotyping. In this paper, an automated method for M-FISH chromosome
segmentation based on watershed transform followed by naive Bayes
classification of each region using the features, mean and standard deviation,
is presented. Also, a post processing step is added to re-classify the small
chromosome segments to the neighboring larger segment for reducing the chances
of misclassification. The approach provided improved accuracy when compared to
the pixel-by-pixel approach. The approach was tested on 40 images from the
dataset and achieved an accuracy of 84.21 %.
| no_new_dataset | 0.956675 |
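
A pipeline in the same spirit (watershed regions followed by a naive Bayes classifier on per-region mean/std features) can be sketched with scikit-image and scikit-learn as below; the synthetic grayscale image, the markers from connected components and the two-class toy training set are assumptions, not the authors' M-FISH setup.

```python
# Illustrative watershed + naive Bayes pipeline on a synthetic grayscale image;
# not the authors' code and not real M-FISH data.
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
img = rng.normal(0.2, 0.05, (128, 128))
img[20:60, 20:60] += 0.6          # two bright "objects"
img[70:110, 70:110] += 0.9

mask = img > 0.5
markers, _ = ndi.label(mask)                 # seed one marker per bright blob
labels = watershed(-img, markers, mask=mask)

# Per-region features: mean and standard deviation of intensity.
feats, region_ids = [], []
for r in np.unique(labels)[1:]:
    pix = img[labels == r]
    feats.append([pix.mean(), pix.std()])
    region_ids.append(int(r))

# Toy training set (in the paper, classes are the individual chromosomes).
X_train = [[0.8, 0.05], [1.1, 0.05]]
y_train = ["class_A", "class_B"]
clf = GaussianNB().fit(X_train, y_train)
print(dict(zip(region_ids, clf.predict(feats))))
```
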
1205.1645 | Fran\c{c}ois Scharffe | Julien Plu and Fran\c{c}ois Scharffe | Publishing and linking transport data on the Web | Presented at the First International Workshop On Open Data, WOD-2012
(http://arxiv.org/abs/1204.3726) | null | null | WOD/2012/NANTES/13 | cs.AI | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Without Linked Data, transport data is limited to applications exclusively
around transport. In this paper, we present a workflow for publishing and
linking transport data on the Web. So we will be able to develop transport
applications and to add other features which will be created from other
datasets. This will be possible because transport data will be linked to these
datasets. We apply this workflow to two datasets: NEPTUNE, a French standard
describing a transport line, and Passim, a directory containing relevant
information on transport services, in every French city.
| [
{
"version": "v1",
"created": "Tue, 8 May 2012 09:50:35 GMT"
}
] | 2012-05-09T00:00:00 | [
[
"Plu",
"Julien",
""
],
[
"Scharffe",
"François",
""
]
] | TITLE: Publishing and linking transport data on the Web
ABSTRACT: Without Linked Data, transport data is limited to applications exclusively
around transport. In this paper, we present a workflow for publishing and
linking transport data on the Web. So we will be able to develop transport
applications and to add other features which will be created from other
datasets. This will be possible because transport data will be linked to these
datasets. We apply this workflow to two datasets: NEPTUNE, a French standard
describing a transport line, and Passim, a directory containing relevant
information on transport services, in every French city.
| no_new_dataset | 0.949012 |
1103.2950 | Wentian Li | Wentian Li and Pedro Miramontes | Fitting Ranked English and Spanish Letter Frequency Distribution in U.S.
and Mexican Presidential Speeches | 7 figures | Journal of Quantitative Linguistics, 18(4):359-380 (2011) | 10.1080/09296174.2011.608606 | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The limited range in its abscissa of ranked letter frequency distributions
causes multiple functions to fit the observed distribution reasonably well. In
order to critically compare various functions, we apply the statistical model
selections on ten functions, using the texts of U.S. and Mexican presidential
speeches in the last 1-2 centuries. Dispite minor switching of ranking order of
certain letters during the temporal evolution for both datasets, the letter
usage is generally stable. The best fitting function, judged by either
least-square-error or by AIC/BIC model selection, is the Cocho/Beta function.
We also use a novel method to discover clusters of letters by their
observed-over-expected frequency ratios.
| [
{
"version": "v1",
"created": "Tue, 15 Mar 2011 16:21:24 GMT"
}
] | 2012-05-07T00:00:00 | [
[
"Li",
"Wentian",
""
],
[
"Miramontes",
"Pedro",
""
]
] | TITLE: Fitting Ranked English and Spanish Letter Frequency Distribution in U.S.
and Mexican Presidential Speeches
ABSTRACT: The limited range in its abscissa of ranked letter frequency distributions
causes multiple functions to fit the observed distribution reasonably well. In
order to critically compare various functions, we apply the statistical model
selections on ten functions, using the texts of U.S. and Mexican presidential
speeches in the last 1-2 centuries. Despite minor switching of the ranking order of
certain letters during the temporal evolution for both datasets, the letter
usage is generally stable. The best fitting function, judged by either
least-square-error or by AIC/BIC model selection, is the Cocho/Beta function.
We also use a novel method to discover clusters of letters by their
observed-over-expected frequency ratios.
| no_new_dataset | 0.952353 |
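
The best-fitting Cocho/Beta (beta-like) function mentioned above, f(r) = A (N+1-r)^b / r^a, can be fitted with scipy as sketched below; the ranked frequencies are synthesized placeholders rather than the presidential-speech data.

```python
# Sketch of fitting the Cocho/Beta (beta-like) rank-frequency function
# f(r) = A * (N + 1 - r)^b / r^a  with scipy; frequencies are placeholders.
import numpy as np
from scipy.optimize import curve_fit

N = 26                                   # number of ranked letters
ranks = np.arange(1, N + 1, dtype=float)

def cocho_beta(r, A, a, b):
    return A * (N + 1 - r) ** b / r ** a

# Placeholder "observed" ranked frequencies (synthesised, then perturbed).
true = cocho_beta(ranks, A=0.08, a=0.55, b=0.30)
obs = true * (1 + 0.02 * np.random.default_rng(0).standard_normal(N))

params, _ = curve_fit(cocho_beta, ranks, obs, p0=(0.1, 0.5, 0.3))
print("A, a, b =", params)
```
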
1205.0837 | Sean Chester | Sean Chester, Alex Thomo, S. Venkatesh, Sue Whitesides | Indexing Reverse Top-k Queries | null | null | null | null | cs.DB cs.CG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the recently introduced monochromatic reverse top-k queries which
ask for, given a new tuple q and a dataset D, all possible top-k queries on D
union {q} for which q is in the result. Towards this problem, we focus on
designing indexes in two dimensions for repeated (or batch) querying, a novel
but practical consideration. We present the insight that by representing the
dataset as an arrangement of lines, a critical k-polygon can be identified and
used exclusively to respond to reverse top-k queries. We construct an index
based on this observation which has guaranteed worst-case query cost that is
logarithmic in the size of the k-polygon.
We implement our work and compare it to related approaches, demonstrating
that our index is fast in practice. Furthermore, we demonstrate through our
experiments that a k-polygon is comprised of a small proportion of the original
data, so our index structure consumes little disk space.
| [
{
"version": "v1",
"created": "Fri, 4 May 2012 00:03:18 GMT"
}
] | 2012-05-07T00:00:00 | [
[
"Chester",
"Sean",
""
],
[
"Thomo",
"Alex",
""
],
[
"Venkatesh",
"S.",
""
],
[
"Whitesides",
"Sue",
""
]
] | TITLE: Indexing Reverse Top-k Queries
ABSTRACT: We consider the recently introduced monochromatic reverse top-k queries which
ask for, given a new tuple q and a dataset D, all possible top-k queries on D
union {q} for which q is in the result. Towards this problem, we focus on
designing indexes in two dimensions for repeated (or batch) querying, a novel
but practical consideration. We present the insight that by representing the
dataset as an arrangement of lines, a critical k-polygon can be identified and
used exclusively to respond to reverse top-k queries. We construct an index
based on this observation which has guaranteed worst-case query cost that is
logarithmic in the size of the k-polygon.
We implement our work and compare it to related approaches, demonstrating
that our index is fast in practice. Furthermore, we demonstrate through our
experiments that a k-polygon is comprised of a small proportion of the original
data, so our index structure consumes little disk space.
| no_new_dataset | 0.931836 |
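
As a brute-force reference for the monochromatic reverse top-k question in 2D (which weight vectors put q in the top-k of D ∪ {q}), one can simply sweep sampled weights as below; the data, k and sample count are toy values, and the paper's k-polygon index is precisely what avoids this sweep.

```python
# Brute-force reference for the monochromatic reverse top-k question in 2D.
import numpy as np

def reverse_topk_weights(D, q, k, n_samples=1000):
    """Sampled weights w for which q ranks in the top-k of D ∪ {q} under
    score(p) = w*p[0] + (1-w)*p[1] (higher is better)."""
    D = np.asarray(D, dtype=float)
    hits = []
    for w in np.linspace(0.0, 1.0, n_samples):
        weights = np.array([w, 1.0 - w])
        better = np.sum(D @ weights > q @ weights)   # tuples strictly ahead of q
        if better < k:
            hits.append(w)
    return hits

D = [[0.9, 0.1], [0.2, 0.8], [0.6, 0.6], [0.4, 0.3]]
q = np.array([0.7, 0.5])
ws = reverse_topk_weights(D, q, k=2)
# In general the answer is a union of weight intervals; here we report extremes.
if ws:
    print(f"{len(ws)}/1000 sampled weights, from w={ws[0]:.3f} to w={ws[-1]:.3f}")
else:
    print("q is never in the top-2")
```
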
1205.0917 | Omri Mohamed Nazih | Radhouane Boughamoura, Lobna Hlaoua and Mohamed Nazih Omri | VIQI: A New Approach for Visual Interpretation of Deep Web Query
Interfaces | 8th NCM: 2012 International Conference on Networked Computing and
Advanced Information Management | null | null | null | cs.IR cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep Web databases contain more than 90% of pertinent information of the Web.
Despite their importance, users do not profit from this treasure. Many deep web
services are offering competitive services in terms of prices, quality of
service, and facilities. As the number of services is growing rapidly, users
have difficulty querying many web services at the same time. In this paper, we
imagine a system where users have the possibility to formulate one query using
one query interface and then the system translates the query to the rest of the
query interfaces. However, interfaces are created by designers in order to be
interpreted visually by users; machines cannot interpret a query from a given
interface. We propose a new approach which emulates the interpretation capacity
of users and extracts the query from deep web query interfaces. Our approach has
shown good performance on two standard datasets.
| [
{
"version": "v1",
"created": "Fri, 4 May 2012 11:01:42 GMT"
}
] | 2012-05-07T00:00:00 | [
[
"Boughamoura",
"Radhouane",
""
],
[
"Hlaoua",
"Lobna",
""
],
[
"Omri",
"Mohamed Nazih",
""
]
] | TITLE: VIQI: A New Approach for Visual Interpretation of Deep Web Query
Interfaces
ABSTRACT: Deep Web databases contain more than 90% of pertinent information of the Web.
Despite their importance, users do not profit from this treasure. Many deep web
services are offering competitive services in terms of prices, quality of
service, and facilities. As the number of services is growing rapidly, users
have difficulty querying many web services at the same time. In this paper, we
imagine a system where users have the possibility to formulate one query using
one query interface and then the system translates the query to the rest of the
query interfaces. However, interfaces are created by designers in order to be
interpreted visually by users; machines cannot interpret a query from a given
interface. We propose a new approach which emulates the interpretation capacity
of users and extracts the query from deep web query interfaces. Our approach has
shown good performance on two standard datasets.
| no_new_dataset | 0.950549 |
1205.0919 | Omri Mohamed Nazih | Radhouane Boughammoura Lobna Hlaoua and Mohamed Nazih Omri | ViQIE: A New Approach for Visual Query Interpretation and Extraction | ICITES 2012 - 2nd International Conference on Information Technology
and e-Services | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Web services are accessed via query interfaces which hide databases
containing thousands of relevant pieces of information. On the user's side, a distant database is
a black box which accepts a query and returns results; there is no way to access
the database schema, which reflects data and query meanings. Hence, web services are
very autonomous. Users view this autonomy as a major drawback because they often
need to combine the query capabilities of many web services at the same time. In
this work, we will present a new approach which allows users to benefit from the
query capabilities of many web services while respecting the autonomy of each
service. This solution is a new contribution to Information Retrieval research
and has shown good performance on two standard datasets.
| [
{
"version": "v1",
"created": "Fri, 4 May 2012 11:08:31 GMT"
}
] | 2012-05-07T00:00:00 | [
[
"Hlaoua",
"Radhouane Boughammoura Lobna",
""
],
[
"Omri",
"Mohamed Nazih",
""
]
] | TITLE: ViQIE: A New Approach for Visual Query Interpretation and Extraction
ABSTRACT: Web services are accessed via query interfaces which hide databases
containing thousands of relevant pieces of information. On the user's side, a distant database is
a black box which accepts a query and returns results; there is no way to access
the database schema, which reflects data and query meanings. Hence, web services are
very autonomous. Users view this autonomy as a major drawback because they often
need to combine the query capabilities of many web services at the same time. In
this work, we will present a new approach which allows users to benefit from the
query capabilities of many web services while respecting the autonomy of each
service. This solution is a new contribution to Information Retrieval research
and has shown good performance on two standard datasets.
| no_new_dataset | 0.945751 |
1205.0610 | Gang Chen | Gang Chen and Jason Corso | Greedy Multiple Instance Learning via Codebook Learning and Nearest
Neighbor Voting | 12 pages | null | null | null | cs.LG | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Multiple instance learning (MIL) has attracted great attention recently in
machine learning community. However, most MIL algorithms are very slow and
cannot be applied to large datasets. In this paper, we propose a greedy
strategy to speed up the multiple instance learning process. Our contribution
is two fold. First, we propose a density ratio model, and show that maximizing
a density ratio function is the low bound of the DD model under certain
conditions. Secondly, we make use of a histogram ratio between positive bags
and negative bags to represent the density ratio function and find codebooks
separately for positive bags and negative bags by a greedy strategy. For
testing, we make use of a nearest neighbor strategy to classify new bags. We
test our method on both small benchmark datasets and the large TRECVID MED11
dataset. The experimental results show that our method yields comparable
accuracy to the current state of the art, while being up to at least one order
of magnitude faster.
| [
{
"version": "v1",
"created": "Thu, 3 May 2012 04:09:19 GMT"
}
] | 2012-05-04T00:00:00 | [
[
"Chen",
"Gang",
""
],
[
"Corso",
"Jason",
""
]
] | TITLE: Greedy Multiple Instance Learning via Codebook Learning and Nearest
Neighbor Voting
ABSTRACT: Multiple instance learning (MIL) has attracted great attention recently in
machine learning community. However, most MIL algorithms are very slow and
cannot be applied to large datasets. In this paper, we propose a greedy
strategy to speed up the multiple instance learning process. Our contribution
is twofold. First, we propose a density ratio model, and show that maximizing
a density ratio function is a lower bound of the DD model under certain
conditions. Secondly, we make use of a histogram ratio between positive bags
and negative bags to represent the density ratio function and find codebooks
separately for positive bags and negative bags by a greedy strategy. For
testing, we make use of a nearest neighbor strategy to classify new bags. We
test our method on both small benchmark datasets and the large TRECVID MED11
dataset. The experimental results show that our method yields comparable
accuracy to the current state of the art, while being up to at least one order
of magnitude faster.
| no_new_dataset | 0.951953 |
1204.6385 | Yankui Sun | Yankui Sun, Tian Zhang | A 3D Segmentation Method for Retinal Optical Coherence Tomography Volume
Data | 4 pages, 9 figures | China Patent Application (201110247341.5), 2011 | null | null | cs.CV physics.optics | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the introduction of spectral-domain optical coherence tomography (OCT),
much larger image datasets are routinely acquired compared to what was possible
using the previous generation of time-domain OCT. Thus, the need for 3-D
segmentation methods for processing such data is becoming increasingly
important. We present a new 3D segmentation method for retinal OCT volume data,
which generates an enhanced volume by simultaneously using pixel intensity,
boundary position information, and intensity changes on both sides of the
border. Preliminary discrete boundary points are found from all A-scans, and
then the smoothed boundary surface can be obtained after removing a small
number of error points. Our experiments show that this method is
efficient, accurate and robust.
| [
{
"version": "v1",
"created": "Sat, 28 Apr 2012 09:05:56 GMT"
}
] | 2012-05-03T00:00:00 | [
[
"Sun",
"Yankui",
""
],
[
"Zhang",
"Tian",
""
]
] | TITLE: A 3D Segmentation Method for Retinal Optical Coherence Tomography Volume
Data
ABSTRACT: With the introduction of spectral-domain optical coherence tomography (OCT),
much larger image datasets are routinely acquired compared to what was possible
using the previous generation of time-domain OCT. Thus, the need for 3-D
segmentation methods for processing such data is becoming increasingly
important. We present a new 3D segmentation method for retinal OCT volume data,
which generates an enhanced volume by simultaneously using pixel intensity,
boundary position information, and intensity changes on both sides of the
border. Preliminary discrete boundary points are found from all A-scans, and
then the smoothed boundary surface can be obtained after removing a small
number of error points. Our experiments show that this method is
efficient, accurate and robust.
| no_new_dataset | 0.956877 |
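
A toy version of the per-A-scan idea (pick a preliminary boundary point per column from the axial intensity change, then clean and smooth the curve) is sketched below on synthetic data; the thresholds, kernel width and outlier rule are assumptions, not the authors' implementation.

```python
# Toy per-A-scan boundary detection on a synthetic B-scan slice.
import numpy as np

depth, n_ascans = 200, 64
rng = np.random.default_rng(0)
true_boundary = 80 + (10 * np.sin(np.linspace(0, np.pi, n_ascans))).astype(int)

volume_slice = rng.normal(0.1, 0.02, (depth, n_ascans))
for j, b in enumerate(true_boundary):
    volume_slice[b:, j] += 0.5           # brighter layer below the boundary

grad = np.diff(volume_slice, axis=0)      # axial intensity change per A-scan
raw_pts = grad.argmax(axis=0)             # preliminary boundary point per column

# Remove outliers and smooth with a short moving average.
med = np.median(raw_pts)
clean = np.where(np.abs(raw_pts - med) > 30, med, raw_pts)
kernel = np.ones(5) / 5
boundary = np.convolve(clean, kernel, mode="same")
print("max error (pixels):", np.max(np.abs(boundary[2:-2] - true_boundary[2:-2])))
```
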
1204.6563 | Prabhu Kaliamoorthi Mr | Prabhu Kaliamoorthi and Ramakrishna Kakarala | Parametric annealing: a stochastic search method for human pose tracking | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Model based methods to marker-free motion capture have a very high
computational overhead that make them unattractive. In this paper we describe a
method that improves on existing global optimization techniques to tracking
articulated objects. Our method improves on the state-of-the-art Annealed
Particle Filter (APF) by reusing samples across annealing layers and by using
an adaptive parametric density for diffusion. We compare the proposed method
with APF on a scalable problem and study how the two methods scale with the
dimensionality, multi-modality and the range of search. Then we perform
sensitivity analysis on the parameters of our algorithm and show that it
tolerates a wide range of parameter settings. We also show results on tracking
human pose from the widely-used Human Eva I dataset. Our results show that the
proposed method reduces the tracking error despite using less than 50% of the
computational resources as APF. The tracked output also shows a significant
qualitative improvement over APF as demonstrated through image and video
results.
| [
{
"version": "v1",
"created": "Mon, 30 Apr 2012 07:04:08 GMT"
},
{
"version": "v2",
"created": "Wed, 2 May 2012 04:37:03 GMT"
}
] | 2012-05-03T00:00:00 | [
[
"Kaliamoorthi",
"Prabhu",
""
],
[
"Kakarala",
"Ramakrishna",
""
]
] | TITLE: Parametric annealing: a stochastic search method for human pose tracking
ABSTRACT: Model based methods to marker-free motion capture have a very high
computational overhead that make them unattractive. In this paper we describe a
method that improves on existing global optimization techniques to tracking
articulated objects. Our method improves on the state-of-the-art Annealed
Particle Filter (APF) by reusing samples across annealing layers and by using
an adaptive parametric density for diffusion. We compare the proposed method
with APF on a scalable problem and study how the two methods scale with the
dimensionality, multi-modality and the range of search. Then we perform
sensitivity analysis on the parameters of our algorithm and show that it
tolerates a wide range of parameter settings. We also show results on tracking
human pose from the widely-used Human Eva I dataset. Our results show that the
proposed method reduces the tracking error despite using less than 50% of the
computational resources as APF. The tracked output also shows a significant
qualitative improvement over APF as demonstrated through image and video
results.
| no_new_dataset | 0.9462 |
1205.0038 | Fergal Reid | Fergal Reid, Aaron McDaid, Neil Hurley | Percolation Computation in Complex Networks | 12 pages, 8 figures. Supporting source code available:
http://sites.google.com/site/cliqueperccomp | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | K-clique percolation is an overlapping community finding algorithm which
extracts particular structures, comprised of overlapping cliques, from complex
networks. While it is conceptually straightforward, and can be elegantly
expressed using clique graphs, certain aspects of k-clique percolation are
computationally challenging in practice. In this paper we investigate aspects
of empirical social networks, such as the large numbers of overlapping maximal
cliques contained within them, that make clique percolation, and clique graph
representations, computationally expensive. We motivate a simple algorithm to
conduct clique percolation, and investigate its performance compared to current
best-in-class algorithms. We present improvements to this algorithm, which
allow us to perform k-clique percolation on much larger empirical datasets. Our
approaches perform much better than existing algorithms on networks exhibiting
pervasively overlapping community structure, especially for higher values of k.
However, clique percolation remains a hard computational problem; current
algorithms still scale worse than some other overlapping community finding
algorithms.
| [
{
"version": "v1",
"created": "Mon, 30 Apr 2012 21:40:37 GMT"
}
] | 2012-05-02T00:00:00 | [
[
"Reid",
"Fergal",
""
],
[
"McDaid",
"Aaron",
""
],
[
"Hurley",
"Neil",
""
]
] | TITLE: Percolation Computation in Complex Networks
ABSTRACT: K-clique percolation is an overlapping community finding algorithm which
extracts particular structures, comprised of overlapping cliques, from complex
networks. While it is conceptually straightforward, and can be elegantly
expressed using clique graphs, certain aspects of k-clique percolation are
computationally challenging in practice. In this paper we investigate aspects
of empirical social networks, such as the large numbers of overlapping maximal
cliques contained within them, that make clique percolation, and clique graph
representations, computationally expensive. We motivate a simple algorithm to
conduct clique percolation, and investigate its performance compared to current
best-in-class algorithms. We present improvements to this algorithm, which
allow us to perform k-clique percolation on much larger empirical datasets. Our
approaches perform much better than existing algorithms on networks exhibiting
pervasively overlapping community structure, especially for higher values of k.
However, clique percolation remains a hard computational problem; current
algorithms still scale worse than some other overlapping community finding
algorithms.
| no_new_dataset | 0.950869 |
1204.6396 | Roheet Bhatnagar | Roheet Bhatnagar and Mrinal Kanti Ghose | Comparing Soft Computing Techniques For Early Stage Software Development
Effort Estimations | 09 PAGES | International Journal of Software Engineering & Applications
(IJSEA), Vol.3, No.2, March 2012 | null | null | cs.SE | http://creativecommons.org/licenses/publicdomain/ | Accurately estimating the software size, cost, effort and schedule is
probably the biggest challenge facing software developers today. It has major
implications for the management of software development because both
overestimates and underestimates directly damage software companies. Many models
have been proposed over the years by various researchers for carrying out effort
estimations, and several studies also highlight the importance of early stage
estimations. New paradigms offer alternatives for estimating the software
development effort, in particular Computational Intelligence (CI), which
exploits mechanisms of interaction between humans and processes domain knowledge
with the intention of building intelligent systems (IS). Among IS, Artificial
Neural Networks and Fuzzy Logic are the two most popular soft computing
techniques for software development effort estimation. In this paper, neural
network models and a Mamdani FIS model have been used to predict early stage
effort estimations using the student dataset. It has been found that the Mamdani
FIS was able to predict the early stage efforts more efficiently than the neural
network based models.
| [
{
"version": "v1",
"created": "Sat, 28 Apr 2012 10:48:19 GMT"
}
] | 2012-05-01T00:00:00 | [
[
"Bhatnagar",
"Roheet",
""
],
[
"Ghose",
"Mrinal Kanti",
""
]
] | TITLE: Comparing Soft Computing Techniques For Early Stage Software Development
Effort Estimations
ABSTRACT: Accurately estimating the software size, cost, effort and schedule is
probably the biggest challenge facing software developers today. It has major
implications for the management of software development because both
overestimates and underestimates directly damage software companies. Many models
have been proposed over the years by various researchers for carrying out effort
estimations, and several studies also highlight the importance of early stage
estimations. New paradigms offer alternatives for estimating the software
development effort, in particular Computational Intelligence (CI), which
exploits mechanisms of interaction between humans and processes domain knowledge
with the intention of building intelligent systems (IS). Among IS, Artificial
Neural Networks and Fuzzy Logic are the two most popular soft computing
techniques for software development effort estimation. In this paper, neural
network models and a Mamdani FIS model have been used to predict early stage
effort estimations using the student dataset. It has been found that the Mamdani
FIS was able to predict the early stage efforts more efficiently than the neural
network based models.
| no_new_dataset | 0.946794 |
1204.6077 | Ahmed Metwally | Ahmed Metwally, Christos Faloutsos | V-SMART-Join: A Scalable MapReduce Framework for All-Pair Similarity
Joins of Multisets and Vectors | VLDB2012 | Proceedings of the VLDB Endowment (PVLDB), Vol. 5, No. 8, pp.
704-715 (2012) | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work proposes V-SMART-Join, a scalable MapReduce-based framework for
discovering all pairs of similar entities. The V-SMART-Join framework is
applicable to sets, multisets, and vectors. V-SMART-Join is motivated by the
observed skew in the underlying distributions of Internet traffic, and is a
family of 2-stage algorithms, where the first stage computes and joins the
partial results, and the second stage computes the similarity exactly for all
candidate pairs. The V-SMART-Join algorithms are very efficient and scalable in
the number of entities, as well as their cardinalities. They were up to 30
times faster than the state-of-the-art algorithm, VCL, when compared on a small
real dataset. We also established the scalability of the proposed algorithms by
running them on a dataset of realistic size, on which VCL never managed to
finish. Experiments were run using real datasets of IPs and
cookies, where each IP is represented as a multiset of cookies, and the goal is
to discover similar IPs to identify Internet proxies.
| [
{
"version": "v1",
"created": "Thu, 26 Apr 2012 23:25:14 GMT"
}
] | 2012-04-30T00:00:00 | [
[
"Metwally",
"Ahmed",
""
],
[
"Faloutsos",
"Christos",
""
]
] | TITLE: V-SMART-Join: A Scalable MapReduce Framework for All-Pair Similarity
Joins of Multisets and Vectors
ABSTRACT: This work proposes V-SMART-Join, a scalable MapReduce-based framework for
discovering all pairs of similar entities. The V-SMART-Join framework is
applicable to sets, multisets, and vectors. V-SMART-Join is motivated by the
observed skew in the underlying distributions of Internet traffic, and is a
family of 2-stage algorithms, where the first stage computes and joins the
partial results, and the second stage computes the similarity exactly for all
candidate pairs. The V-SMART-Join algorithms are very efficient and scalable in
the number of entities, as well as their cardinalities. They were up to 30
times faster than the state-of-the-art algorithm, VCL, when compared on a small
real dataset. We also established the scalability of the proposed algorithms by
running them on a dataset of realistic size, on which VCL never managed to
finish. Experiments were run using real datasets of IPs and
cookies, where each IP is represented as a multiset of cookies, and the goal is
to discover similar IPs to identify Internet proxies.
| no_new_dataset | 0.946349 |
1201.5722 | Vasyl Palchykov | Vasyl Palchykov, Kimmo Kaski, J\'anos Kert\'esz, Albert-L\'aszl\'o
Barab\'asi and Robin I. M. Dunbar | Sex differences in intimate relationships | 5 pages, 3 figures, contains electronic supplementary material | Sci. Rep. 2, 370 (2012) | 10.1038/srep00370 | null | physics.soc-ph cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Social networks have turned out to be of fundamental importance both for our
understanding of human sociality and for the design of digital communication
technology. However, social networks are themselves based on dyadic
relationships and we have little understanding of the dynamics of close
relationships and how these change over time. Evolutionary theory suggests
that, even in monogamous mating systems, the pattern of investment in close
relationships should vary across the lifespan when post-weaning investment
plays an important role in maximising fitness. Mobile phone data sets provide
us with a unique window into the structure of relationships and the way these
change across the lifespan. We here use data from a large national mobile phone
dataset to demonstrate striking sex differences in the pattern in the
gender-bias of preferred relationships that reflect the way the reproductive
investment strategies of the two sexes change across the lifespan: these
differences mainly reflect women's shifting patterns of investment in
reproduction and parental care. These results suggest that human social
strategies may have more complex dynamics than we have tended to assume and a
life-history perspective may be crucial for understanding them.
| [
{
"version": "v1",
"created": "Fri, 27 Jan 2012 08:42:10 GMT"
},
{
"version": "v2",
"created": "Wed, 25 Apr 2012 10:48:20 GMT"
}
] | 2012-04-26T00:00:00 | [
[
"Palchykov",
"Vasyl",
""
],
[
"Kaski",
"Kimmo",
""
],
[
"Kertész",
"János",
""
],
[
"Barabási",
"Albert-László",
""
],
[
"Dunbar",
"Robin I. M.",
""
]
] | TITLE: Sex differences in intimate relationships
ABSTRACT: Social networks have turned out to be of fundamental importance both for our
understanding of human sociality and for the design of digital communication
technology. However, social networks are themselves based on dyadic
relationships and we have little understanding of the dynamics of close
relationships and how these change over time. Evolutionary theory suggests
that, even in monogamous mating systems, the pattern of investment in close
relationships should vary across the lifespan when post-weaning investment
plays an important role in maximising fitness. Mobile phone data sets provide
us with a unique window into the structure of relationships and the way these
change across the lifespan. We here use data from a large national mobile phone
dataset to demonstrate striking sex differences in the pattern in the
gender-bias of preferred relationships that reflect the way the reproductive
investment strategies of the two sexes change across the lifespan: these
differences mainly reflect women's shifting patterns of investment in
reproduction and parental care. These results suggest that human social
strategies may have more complex dynamics than we have tended to assume and a
life-history perspective may be crucial for understanding them.
| no_new_dataset | 0.90878 |
1204.5592 | Dr Brij Gupta | B. B. Gupta, R. C. Joshi, Manoj Misra | Dynamic and Auto Responsive Solution for Distributed Denial-of-Service
Attacks Detection in ISP Network | arXiv admin note: substantial text overlap with arXiv:1203.2400 | International Journal of Computer Theory and Engineering, Vol. 1,
No. 1, April 2009 1793-821X | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Denial of service (DoS) attacks and more particularly the distributed ones
(DDoS) are among the latest threats and pose a grave danger to users,
organizations and infrastructures of the Internet. Several schemes have been
proposed on how to detect some of these attacks, but they suffer from a range
of problems, some of them being impractical and others not being effective
against these attacks. This paper reports the design principles and evaluation
results of our proposed framework that autonomously detects and accurately
characterizes a wide range of flooding DDoS attacks in an ISP network. Attacks
are detected by constantly monitoring the propagation of abrupt traffic changes
inside the ISP network. For this, a newly designed flow-volume based approach
(FVBA) is used to construct a profile of the traffic normally seen in the
network, and to identify anomalies whenever traffic goes out of profile.
Consideration of varying tolerance factors makes the proposed detection system
scalable to varying network conditions and attack loads in real time. The
six-sigma method is used to identify threshold values accurately for
characterizing malicious flows. FVBA has been extensively evaluated in a
controlled test-bed environment. Detection thresholds and efficiency are
justified using the receiver operating characteristic (ROC) curve. For
validation, KDD 99, a publicly available benchmark dataset, is used. The results
show that our proposed system gives a drastic improvement in terms of detection
and false alarm rates.
| [
{
"version": "v1",
"created": "Wed, 25 Apr 2012 08:56:12 GMT"
}
] | 2012-04-26T00:00:00 | [
[
"Gupta",
"B. B.",
""
],
[
"Joshi",
"R. C.",
""
],
[
"Misra",
"Manoj",
""
]
] | TITLE: Dynamic and Auto Responsive Solution for Distributed Denial-of-Service
Attacks Detection in ISP Network
ABSTRACT: Denial of service (DoS) attacks and more particularly the distributed ones
(DDoS) are among the latest threats and pose a grave danger to users,
organizations and infrastructures of the Internet. Several schemes have been
proposed on how to detect some of these attacks, but they suffer from a range
of problems, some of them being impractical and others not being effective
against these attacks. This paper reports the design principles and evaluation
results of our proposed framework that autonomously detects and accurately
characterizes a wide range of flooding DDoS attacks in an ISP network. Attacks
are detected by constantly monitoring the propagation of abrupt traffic changes
inside the ISP network. For this, a newly designed flow-volume based approach
(FVBA) is used to construct a profile of the traffic normally seen in the
network, and to identify anomalies whenever traffic goes out of profile.
Consideration of varying tolerance factors makes the proposed detection system
scalable to varying network conditions and attack loads in real time. The
six-sigma method is used to identify threshold values accurately for
characterizing malicious flows. FVBA has been extensively evaluated in a
controlled test-bed environment. Detection thresholds and efficiency are
justified using the receiver operating characteristic (ROC) curve. For
validation, KDD 99, a publicly available benchmark dataset, is used. The results
show that our proposed system gives a drastic improvement in terms of detection
and false alarm rates.
| no_new_dataset | 0.947332 |
1204.5086 | Christoph Lange | Christoph Lange and Patrick Ion and Anastasia Dimou and Charalampos
Bratsas and Joseph Corneli and Wolfram Sperber and Michael Kohlhase and
Ioannis Antoniou | Reimplementing the Mathematics Subject Classification (MSC) as a Linked
Open Dataset | Conference on Intelligent Computer Mathematics, July 9-14, Bremen,
Germany. Published as number 7362 in Lecture Notes in Artificial
Intelligence, Springer | null | null | null | cs.DL cs.MS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Mathematics Subject Classification (MSC) is a widely used scheme for
classifying documents in mathematics by subject. Its traditional, idiosyncratic
conceptualization and representation makes the scheme hard to maintain and
requires custom implementations of search, query and annotation support. This
limits uptake e.g. in semantic web technologies in general and the creation and
exploration of connections between mathematics and related domains (e.g.
science) in particular.
This paper presents the new official implementation of the MSC2010 as a
Linked Open Dataset, building on SKOS (Simple Knowledge Organization System).
We provide a brief overview of the dataset's structure, its available
implementations, and first applications.
| [
{
"version": "v1",
"created": "Mon, 23 Apr 2012 15:29:30 GMT"
}
] | 2012-04-24T00:00:00 | [
[
"Lange",
"Christoph",
""
],
[
"Ion",
"Patrick",
""
],
[
"Dimou",
"Anastasia",
""
],
[
"Bratsas",
"Charalampos",
""
],
[
"Corneli",
"Joseph",
""
],
[
"Sperber",
"Wolfram",
""
],
[
"Kohlhase",
"Michael",
""
],
[
"Antoniou",
"Ioannis",
""
]
] | TITLE: Reimplementing the Mathematics Subject Classification (MSC) as a Linked
Open Dataset
ABSTRACT: The Mathematics Subject Classification (MSC) is a widely used scheme for
classifying documents in mathematics by subject. Its traditional, idiosyncratic
conceptualization and representation makes the scheme hard to maintain and
requires custom implementations of search, query and annotation support. This
limits uptake e.g. in semantic web technologies in general and the creation and
exploration of connections between mathematics and related domains (e.g.
science) in particular.
This paper presents the new official implementation of the MSC2010 as a
Linked Open Dataset, building on SKOS (Simple Knowledge Organization System).
We provide a brief overview of the dataset's structure, its available
implementations, and first applications.
| no_new_dataset | 0.735784 |
1006.5235 | Matteo Riondato | Andrea Pietracaprina, Matteo Riondato, Eli Upfal, Fabio Vandin | Mining Top-K Frequent Itemsets Through Progressive Sampling | 16 pages, 2 figures, accepted for presentation at ECML PKDD 2010 and
publication in the ECML PKDD 2010 special issue of the Data Mining and
Knowledge Discovery journal | null | 10.1007/s10618-010-0185-7 | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the use of sampling for efficiently mining the top-K frequent
itemsets of cardinality at most w. To this purpose, we define an approximation
to the top-K frequent itemsets to be a family of itemsets which includes
(resp., excludes) all very frequent (resp., very infrequent) itemsets, together
with an estimate of these itemsets' frequencies with a bounded error. Our first
result is an upper bound on the sample size which guarantees that the top-K
frequent itemsets mined from a random sample of that size approximate the
actual top-K frequent itemsets, with probability larger than a specified value.
We show that the upper bound is asymptotically tight when w is constant. Our
main algorithmic contribution is a progressive sampling approach, combined with
suitable stopping conditions, which on appropriate inputs is able to extract
approximate top-K frequent itemsets from samples whose sizes are smaller than
the general upper bound. In order to test the stopping conditions, this
approach maintains the frequency of all itemsets encountered, which is
practical only for small w. However, we show how this problem can be mitigated
by using a variation of Bloom filters. A number of experiments conducted on
both synthetic and real benchmark datasets show that using samples
substantially smaller than the original dataset (i.e., of the size defined by
the upper bound or reached through the progressive sampling approach) makes it
possible to approximate the actual top-K frequent itemsets with accuracy much
higher than what is analytically proved.
| [
{
"version": "v1",
"created": "Sun, 27 Jun 2010 20:38:39 GMT"
}
] | 2012-04-23T00:00:00 | [
[
"Pietracaprina",
"Andrea",
""
],
[
"Riondato",
"Matteo",
""
],
[
"Upfal",
"Eli",
""
],
[
"Vandin",
"Fabio",
""
]
] | TITLE: Mining Top-K Frequent Itemsets Through Progressive Sampling
ABSTRACT: We study the use of sampling for efficiently mining the top-K frequent
itemsets of cardinality at most w. To this purpose, we define an approximation
to the top-K frequent itemsets to be a family of itemsets which includes
(resp., excludes) all very frequent (resp., very infrequent) itemsets, together
with an estimate of these itemsets' frequencies with a bounded error. Our first
result is an upper bound on the sample size which guarantees that the top-K
frequent itemsets mined from a random sample of that size approximate the
actual top-K frequent itemsets, with probability larger than a specified value.
We show that the upper bound is asymptotically tight when w is constant. Our
main algorithmic contribution is a progressive sampling approach, combined with
suitable stopping conditions, which on appropriate inputs is able to extract
approximate top-K frequent itemsets from samples whose sizes are smaller than
the general upper bound. In order to test the stopping conditions, this
approach maintains the frequency of all itemsets encountered, which is
practical only for small w. However, we show how this problem can be mitigated
by using a variation of Bloom filters. A number of experiments conducted on
both synthetic and real benchmark datasets show that using samples
substantially smaller than the original dataset (i.e., of the size defined by
the upper bound or reached through the progressive sampling approach) makes it
possible to approximate the actual top-K frequent itemsets with accuracy much
higher than what is analytically proved.
| no_new_dataset | 0.948298 |
1204.4541 | Patrick Taillandier | Patrick Taillandier (UMMISCO), Julien Gaffuri (COGIT) | Automatic Sampling of Geographic objects | null | GIScience, Zurich : Switzerland (2010) | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Today, one has at one's disposal large datasets composed of thousands of geographic
objects. However, for many processes, which require the appraisal of an expert
or much computational time, only a small part of these objects can be taken
into account. In this context, robust sampling methods become necessary. In
this paper, we propose a sampling method based on clustering techniques. Our
method consists in dividing the objects into clusters and then selecting, in
each cluster, the most representative objects. A case-study in the context of a
process dedicated to knowledge revision for geographic data generalisation is
presented. This case-study shows that our method makes it possible to select relevant
samples of objects.
| [
{
"version": "v1",
"created": "Fri, 20 Apr 2012 06:35:41 GMT"
}
] | 2012-04-23T00:00:00 | [
[
"Taillandier",
"Patrick",
"",
"UMMISCO"
],
[
"Gaffuri",
"Julien",
"",
"COGIT"
]
] | TITLE: Automatic Sampling of Geographic objects
ABSTRACT: Today, one has at one's disposal large datasets composed of thousands of geographic
objects. However, for many processes, which require the appraisal of an expert
or much computational time, only a small part of these objects can be taken
into account. In this context, robust sampling methods become necessary. In
this paper, we propose a sampling method based on clustering techniques. Our
method consists in dividing the objects into clusters and then selecting, in
each cluster, the most representative objects. A case-study in the context of a
process dedicated to knowledge revision for geographic data generalisation is
presented. This case-study shows that our method makes it possible to select relevant
samples of objects.
| no_new_dataset | 0.94545 |
1105.2470 | Bertrand Georgeot | Bertrand Georgeot and Olivier Giraud | The game of go as a complex network | 6 pages, 9 figures, final version | Europhysics Letters 97, 68002 (2012) | 10.1209/0295-5075/97/68002 | null | cs.GT cond-mat.stat-mech cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the game of go from a complex network perspective. We construct a
directed network using a suitable definition of tactical moves including local
patterns, and study this network for different datasets of professional
tournaments and amateur games. The move distribution follows Zipf's law and the
network is scale free, with statistical peculiarities different from other real
directed networks such as the World Wide Web. These specificities are reflected
in the outcome of ranking algorithms applied to it. The fine study of
the eigenvalues and eigenvectors of matrices used by the ranking algorithms
singles out certain strategic situations. Our results should pave the way to a
better modelization of board games and other types of human strategic scheming.
| [
{
"version": "v1",
"created": "Thu, 12 May 2011 13:36:09 GMT"
},
{
"version": "v2",
"created": "Wed, 18 Apr 2012 11:03:07 GMT"
}
] | 2012-04-20T00:00:00 | [
[
"Georgeot",
"Bertrand",
""
],
[
"Giraud",
"Olivier",
""
]
] | TITLE: The game of go as a complex network
ABSTRACT: We study the game of go from a complex network perspective. We construct a
directed network using a suitable definition of tactical moves including local
patterns, and study this network for different datasets of professional
tournaments and amateur games. The move distribution follows Zipf's law and the
network is scale free, with statistical peculiarities different from other real
directed networks such as the World Wide Web. These specificities are reflected
in the outcome of ranking algorithms applied to it. The fine study of
the eigenvalues and eigenvectors of matrices used by the ranking algorithms
singles out certain strategic situations. Our results should pave the way to a
better modelization of board games and other types of human strategic scheming.
| no_new_dataset | 0.946399 |
1204.3921 | Javier Esteban Zarza | Javier Esteban, Antonio Ortega, Sean McPherson and Maheswaran
Sathiamoorthy | Analysis of Twitter Traffic based on Renewal Densities | null | null | null | null | cs.CY cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we propose a novel approach for Twitter traffic analysis based
on renewal theory. Even though Twitter datasets are of increasing interest to
researchers, extracting information from message timing remains somewhat
unexplored. Our approach, extending our prior work on anomaly detection, makes
it possible to characterize levels of correlation within a message stream, thus
assessing how much interaction there is between those posting messages.
Moreover, our method enables us to detect the presence of periodic traffic,
which is useful to determine whether there is spam in the message stream.
Because our proposed techniques only make use of timing information and are
amenable to downsampling, they can be used as low complexity tools for data
analysis.
| [
{
"version": "v1",
"created": "Tue, 17 Apr 2012 21:26:19 GMT"
}
] | 2012-04-19T00:00:00 | [
[
"Esteban",
"Javier",
""
],
[
"Ortega",
"Antonio",
""
],
[
"McPherson",
"Sean",
""
],
[
"Sathiamoorthy",
"Maheswaran",
""
]
] | TITLE: Analysis of Twitter Traffic based on Renewal Densities
ABSTRACT: In this paper we propose a novel approach for Twitter traffic analysis based
on renewal theory. Even though Twitter datasets are of increasing interest to
researchers, extracting information from message timing remains somewhat
unexplored. Our approach, extending our prior work on anomaly detection, makes
it possible to characterize levels of correlation within a message stream, thus
assessing how much interaction there is between those posting messages.
Moreover, our method enables us to detect the presence of periodic traffic,
which is useful to determine whether there is spam in the message stream.
Because our proposed techniques only make use of timing information and are
amenable to downsampling, they can be used as low complexity tools for data
analysis.
| no_new_dataset | 0.949809 |
1204.3968 | Pierre Sermanet | Pierre Sermanet, Soumith Chintala, Yann LeCun | Convolutional Neural Networks Applied to House Numbers Digit
Classification | 4 pages, 6 figures, 2 tables | null | null | null | cs.CV cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We classify digits of real-world house numbers using convolutional neural
networks (ConvNets). ConvNets are hierarchical feature learning neural networks
whose structure is biologically inspired. Unlike many popular vision approaches
that are hand-designed, ConvNets can automatically learn a unique set of
features optimized for a given task. We augmented the traditional ConvNet
architecture by learning multi-stage features and by using Lp pooling and
establish a new state-of-the-art of 94.85% accuracy on the SVHN dataset (45.2%
error improvement). Furthermore, we analyze the benefits of different pooling
methods and multi-stage features in ConvNets. The source code and a tutorial
are available at eblearn.sf.net.
| [
{
"version": "v1",
"created": "Wed, 18 Apr 2012 03:48:38 GMT"
}
] | 2012-04-19T00:00:00 | [
[
"Sermanet",
"Pierre",
""
],
[
"Chintala",
"Soumith",
""
],
[
"LeCun",
"Yann",
""
]
] | TITLE: Convolutional Neural Networks Applied to House Numbers Digit
Classification
ABSTRACT: We classify digits of real-world house numbers using convolutional neural
networks (ConvNets). ConvNets are hierarchical feature learning neural networks
whose structure is biologically inspired. Unlike many popular vision approaches
that are hand-designed, ConvNets can automatically learn a unique set of
features optimized for a given task. We augmented the traditional ConvNet
architecture by learning multi-stage features and by using Lp pooling and
establish a new state-of-the-art of 94.85% accuracy on the SVHN dataset (45.2%
error improvement). Furthermore, we analyze the benefits of different pooling
methods and multi-stage features in ConvNets. The source code and a tutorial
are available at eblearn.sf.net.
| no_new_dataset | 0.950457 |
1204.3498 | Vahed Qazvinian | Vahed Qazvinian and Dragomir R. Radev | A Computational Analysis of Collective Discourse | Presented at Collective Intelligence conference, 2012
(arXiv:1204.2991) | null | null | CollectiveIntelligence/2012/59 | cs.SI cs.CL physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper is focused on the computational analysis of collective discourse,
a collective behavior seen in non-expert content contributions in online social
media. We collect and analyze a wide range of real-world collective discourse
datasets from movie user reviews to microblogs and news headlines to scientific
citations. We show that all these datasets exhibit diversity of perspective, a
property seen in other collective systems and a criterion in wise crowds. Our
experiments also confirm that the network of different perspective
co-occurrences exhibits the small-world property with high clustering of
different perspectives. Finally, we show that non-expert contributions in
collective discourse can be used to answer simple questions that are otherwise
hard to answer.
| [
{
"version": "v1",
"created": "Mon, 16 Apr 2012 14:27:39 GMT"
},
{
"version": "v2",
"created": "Tue, 17 Apr 2012 17:17:28 GMT"
}
] | 2012-04-18T00:00:00 | [
[
"Qazvinian",
"Vahed",
""
],
[
"Radev",
"Dragomir R.",
""
]
] | TITLE: A Computational Analysis of Collective Discourse
ABSTRACT: This paper is focused on the computational analysis of collective discourse,
a collective behavior seen in non-expert content contributions in online social
media. We collect and analyze a wide range of real-world collective discourse
datasets from movie user reviews to microblogs and news headlines to scientific
citations. We show that all these datasets exhibit diversity of perspective, a
property seen in other collective systems and a criterion in wise crowds. Our
experiments also confirm that the network of different perspective
co-occurrences exhibits the small-world property with high clustering of
different perspectives. Finally, we show that non-expert contributions in
collective discourse can be used to answer simple questions that are otherwise
hard to answer.
| no_new_dataset | 0.947381 |
1204.3200 | Andrea Scharnhorst | Andrea Scharnhorst, Olav ten Bosch, Peter Doorn | Looking at a digital research data archive - Visual interfaces to EASY | Submitted to the TPDL 2012 | null | null | null | cs.DL physics.soc-ph | http://creativecommons.org/licenses/by/3.0/ | In this paper we explore visually the structure of the collection of a
digital research data archive in terms of metadata for deposited datasets. We
look into the distribution of datasets over different scientific fields; the
role of main depositors (persons and institutions) in different fields, and
main access choices for the deposited datasets. We argue that visual analytics
of metadata of collections can be used in multiple ways: to inform the archive
about structure and growth of its collection; to foster collections strategies;
and to check metadata consistency. We combine visual analytics and visual
enhanced browsing introducing a set of web-based, interactive visual interfaces
to the archive's collection. We discuss how text based search combined with
visual enhanced browsing enhances data access, navigation, and reuse.
| [
{
"version": "v1",
"created": "Sat, 14 Apr 2012 19:49:02 GMT"
}
] | 2012-04-17T00:00:00 | [
[
"Scharnhorst",
"Andrea",
""
],
[
"Bosch",
"Olav ten",
""
],
[
"Doorn",
"Peter",
""
]
] | TITLE: Looking at a digital research data archive - Visual interfaces to EASY
ABSTRACT: In this paper we explore visually the structure of the collection of a
digital research data archive in terms of metadata for deposited datasets. We
look into the distribution of datasets over different scientific fields; the
role of main depositors (persons and institutions) in different fields, and
main access choices for the deposited datasets. We argue that visual analytics
of metadata of collections can be used in multiple ways: to inform the archive
about structure and growth of its collection; to foster collections strategies;
and to check metadata consistency. We combine visual analytics and visual
enhanced browsing introducing a set of web-based, interactive visual interfaces
to the archive's collection. We discuss how text based search combined with
visual enhanced browsing enhances data access, navigation, and reuse.
| no_new_dataset | 0.950869 |
1204.3511 | Nicol\'as Della Penna | Nicol\'as Della Penna, Mark D. Reid | Crowd & Prejudice: An Impossibility Theorem for Crowd Labelling without
a Gold Standard | Presented at Collective Intelligence conference, 2012
(arXiv:1204.2991) | null | null | CollectiveIntelligence/2012/33 | cs.SI cs.GT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A common use of crowd sourcing is to obtain labels for a dataset. Several
algorithms have been proposed to identify uninformative members of the crowd so
that their labels can be disregarded and the cost of paying them avoided. One
common motivation of these algorithms is to try and do without any initial set
of trusted labeled data. We analyse this class of algorithms as mechanisms in a
game-theoretic setting to understand the incentives they create for workers. We
find an impossibility result that without any ground truth, and when workers
have access to commonly shared 'prejudices' upon which they agree but are not
informative of true labels, there are always equilibria in which all agents
report the prejudice. A small amount of gold standard data is found to be
sufficient to rule out these equilibria.
| [
{
"version": "v1",
"created": "Mon, 16 Apr 2012 15:07:56 GMT"
}
] | 2012-04-17T00:00:00 | [
[
"Della Penna",
"Nicolás",
""
],
[
"Reid",
"Mark D.",
""
]
] | TITLE: Crowd & Prejudice: An Impossibility Theorem for Crowd Labelling without
a Gold Standard
ABSTRACT: A common use of crowd sourcing is to obtain labels for a dataset. Several
algorithms have been proposed to identify uninformative members of the crowd so
that their labels can be disregarded and the cost of paying them avoided. One
common motivation of these algorithms is to try and do without any initial set
of trusted labeled data. We analyse this class of algorithms as mechanisms in a
game-theoretic setting to understand the incentives they create for workers. We
find an impossibility result that without any ground truth, and when workers
have access to commonly shared 'prejudices' upon which they agree but are not
informative of true labels, there are always equilibria in which all agents
report the prejudice. A small amount of gold standard data is found to be
sufficient to rule out these equilibria.
| no_new_dataset | 0.950457 |
1109.3841 | Han-I Su | Han-I Su and Abbas El Gamal | Limits on the Benefits of Energy Storage for Renewable Integration | 45 pages, 17 figures | null | null | null | math.OC cs.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The high variability of renewable energy resources presents significant
challenges to the operation of the electric power grid. Conventional generators
can be used to mitigate this variability but are costly to operate and produce
carbon emissions. Energy storage provides a more environmentally friendly
alternative, but is costly to deploy in large amounts. This paper studies the
limits on the benefits of energy storage to renewable energy: How effective is
storage at mitigating the adverse effects of renewable energy variability? How
much storage is needed? What are the optimal control policies for operating
storage? To provide answers to these questions, we first formulate the power
flow in a single-bus power system with storage as an infinite horizon
stochastic program. We find the optimal policies for arbitrary net renewable
generation process when the cost function is the average conventional
generation (environmental cost) and when it is the average loss of load
probability (reliability cost). We obtain more refined results by considering
the multi-timescale operation of the power system. We view the power flow in
each timescale as the superposition of a predicted (deterministic) component
and a prediction error (residual) component and formulate the residual power
flow problem as an infinite horizon dynamic program. Assuming that the net
generation prediction error is an IID process, we quantify the asymptotic
benefits of storage. With the additional assumption of Laplace distributed
prediction error, we obtain closed form expressions for the stationary
distribution of storage and conventional generation. Finally, we propose a
two-threshold policy that trades off conventional generation saving with loss
of load probability. We illustrate our results and corroborate the IID and
Laplace assumptions numerically using datasets from CAISO and NREL.
| [
{
"version": "v1",
"created": "Sun, 18 Sep 2011 04:12:04 GMT"
},
{
"version": "v2",
"created": "Thu, 12 Apr 2012 17:32:27 GMT"
}
] | 2012-04-13T00:00:00 | [
[
"Su",
"Han-I",
""
],
[
"Gamal",
"Abbas El",
""
]
] | TITLE: Limits on the Benefits of Energy Storage for Renewable Integration
ABSTRACT: The high variability of renewable energy resources presents significant
challenges to the operation of the electric power grid. Conventional generators
can be used to mitigate this variability but are costly to operate and produce
carbon emissions. Energy storage provides a more environmentally friendly
alternative, but is costly to deploy in large amounts. This paper studies the
limits on the benefits of energy storage to renewable energy: How effective is
storage at mitigating the adverse effects of renewable energy variability? How
much storage is needed? What are the optimal control policies for operating
storage? To provide answers to these questions, we first formulate the power
flow in a single-bus power system with storage as an infinite horizon
stochastic program. We find the optimal policies for arbitrary net renewable
generation process when the cost function is the average conventional
generation (environmental cost) and when it is the average loss of load
probability (reliability cost). We obtain more refined results by considering
the multi-timescale operation of the power system. We view the power flow in
each timescale as the superposition of a predicted (deterministic) component
and a prediction error (residual) component and formulate the residual power
flow problem as an infinite horizon dynamic program. Assuming that the net
generation prediction error is an IID process, we quantify the asymptotic
benefits of storage. With the additional assumption of Laplace distributed
prediction error, we obtain closed form expressions for the stationary
distribution of storage and conventional generation. Finally, we propose a
two-threshold policy that trades off conventional generation saving with loss
of load probability. We illustrate our results and corroborate the IID and
Laplace assumptions numerically using datasets from CAISO and NREL.
| no_new_dataset | 0.949763 |
1204.2581 | Sheng Gao | Sheng Gao and Ludovic Denoyer and Patrick Gallinari | Modeling Relational Data via Latent Factor Blockmodel | 10 pages, 12 figures | null | null | null | cs.DS cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we address the problem of modeling relational data, which
appear in many applications such as social network analysis, recommender
systems and bioinformatics. Previous studies either consider latent feature
based models while disregarding local structure in the network, or focus
exclusively on capturing local structure of objects based on latent blockmodels
without coupling with latent characteristics of objects. To combine the
benefits of the previous work, we propose a novel model that can simultaneously
incorporate the effect of latent features and covariates if any, as well as the
effect of latent structure that may exist in the data. To achieve this, we
model the relation graph as a function of both latent feature factors and
latent cluster memberships of objects to collectively discover globally
predictive intrinsic properties of objects and capture latent block structure
in the network to improve prediction performance. We also develop an
optimization transfer algorithm based on the generalized EM-style strategy to
learn the latent factors. We prove the efficacy of our proposed model through
the link prediction task and cluster analysis task, and extensive experiments
on the synthetic data and several real world datasets suggest that our proposed
LFBM model outperforms the other state of the art approaches in the evaluated
tasks.
| [
{
"version": "v1",
"created": "Wed, 11 Apr 2012 22:14:05 GMT"
}
] | 2012-04-13T00:00:00 | [
[
"Gao",
"Sheng",
""
],
[
"Denoyer",
"Ludovic",
""
],
[
"Gallinari",
"Patrick",
""
]
] | TITLE: Modeling Relational Data via Latent Factor Blockmodel
ABSTRACT: In this paper we address the problem of modeling relational data, which
appear in many applications such as social network analysis, recommender
systems and bioinformatics. Previous studies either consider latent feature
based models while disregarding local structure in the network, or focus
exclusively on capturing local structure of objects based on latent blockmodels
without coupling with latent characteristics of objects. To combine the
benefits of the previous work, we propose a novel model that can simultaneously
incorporate the effect of latent features and covariates if any, as well as the
effect of latent structure that may exist in the data. To achieve this, we
model the relation graph as a function of both latent feature factors and
latent cluster memberships of objects to collectively discover globally
predictive intrinsic properties of objects and capture latent block structure
in the network to improve prediction performance. We also develop an
optimization transfer algorithm based on the generalized EM-style strategy to
learn the latent factors. We prove the efficacy of our proposed model through
the link prediction task and cluster analysis task, and extensive experiments
on the synthetic data and several real world datasets suggest that our proposed
LFBM model outperforms the other state of the art approaches in the evaluated
tasks.
| no_new_dataset | 0.948155 |
1204.2588 | Sheng Gao | Sheng Gao and Ludovic Denoyer and Patrick Gallinari | Probabilistic Latent Tensor Factorization Model for Link Pattern
Prediction in Multi-relational Networks | 19 pages, 5 figures | null | null | null | cs.SI cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper addresses the problem of link pattern prediction in collections of
objects connected by multiple relation types, where each type may play a
distinct role. While common link analysis models are limited to single-type
link prediction, we attempt here to capture the correlations among different
relation types and reveal the impact of various relation types on performance
quality. For that, we define the overall relations between object pairs as a
\textit{link pattern}, which consists of the interaction pattern and connection
structure in the network, and then use tensor formalization to jointly model
and predict the link patterns, which we refer to as \textit{Link Pattern
Prediction} (LPP) problem. To address the issue, we propose a Probabilistic
Latent Tensor Factorization (PLTF) model by introducing another latent factor
for multiple relation types, and provide a hierarchical Bayesian treatment of
the proposed probabilistic model to avoid overfitting for solving the LPP
problem. To learn the proposed model we develop an efficient Markov Chain Monte
Carlo sampling method. Extensive experiments are conducted on several real
world datasets and demonstrate significant improvements over several existing
state-of-the-art methods.
| [
{
"version": "v1",
"created": "Wed, 11 Apr 2012 22:58:46 GMT"
}
] | 2012-04-13T00:00:00 | [
[
"Gao",
"Sheng",
""
],
[
"Denoyer",
"Ludovic",
""
],
[
"Gallinari",
"Patrick",
""
]
] | TITLE: Probabilistic Latent Tensor Factorization Model for Link Pattern
Prediction in Multi-relational Networks
ABSTRACT: This paper addresses the problem of link pattern prediction in collections of
objects connected by multiple relation types, where each type may play a
distinct role. While common link analysis models are limited to single-type
link prediction, we attempt here to capture the correlations among different
relation types and reveal the impact of various relation types on performance
quality. For that, we define the overall relations between object pairs as a
\textit{link pattern}, which consists of the interaction pattern and connection
structure in the network, and then use tensor formalization to jointly model
and predict the link patterns, which we refer to as \textit{Link Pattern
Prediction} (LPP) problem. To address the issue, we propose a Probabilistic
Latent Tensor Factorization (PLTF) model by introducing another latent factor
for multiple relation types, and provide a hierarchical Bayesian treatment of
the proposed probabilistic model to avoid overfitting for solving the LPP
problem. To learn the proposed model we develop an efficient Markov Chain Monte
Carlo sampling method. Extensive experiments are conducted on several real
world datasets and demonstrate significant improvements over several existing
state-of-the-art methods.
| no_new_dataset | 0.948106 |
1204.2715 | David Vallet | Magnus Knuth, Johannes Hercher and Harald Sack | Collaboratively Patching Linked Data | 2nd International Workshop on Usage Analysis and the Web of Data
(USEWOD2012) in the 21st International World Wide Web Conference (WWW2012),
Lyon, France, April 17th, 2012 | null | null | WWW2012USEWOD/2012/knhesa | cs.IR cs.DL cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Today's Web of Data is noisy. Linked Data often needs extensive preprocessing
to enable efficient use of heterogeneous resources. While consistent and valid
data provides the key to efficient data processing and aggregation we are
facing two main challenges: (1st) Identification of erroneous facts and
tracking their origins in dynamically connected datasets is a difficult task,
and (2nd) efforts in the curation of deficient facts in Linked Data are
exchanged rather rarely. Since erroneous data often is duplicated and
(re-)distributed by mashup applications, it is not only the responsibility of a
few original publishers to keep their data tidy, but becomes a mission for all
distributors and consumers of Linked Data too. We present a new
approach to expose and to reuse patches on erroneous data to enhance and to add
quality information to the Web of Data. The feasibility of our approach is
demonstrated by example of a collaborative game that patches statements in
DBpedia data and provides notifications for relevant changes.
| [
{
"version": "v1",
"created": "Thu, 12 Apr 2012 13:27:08 GMT"
}
] | 2012-04-13T00:00:00 | [
[
"Knuth",
"Magnus",
""
],
[
"Hercher",
"Johannes",
""
],
[
"Sack",
"Harald",
""
]
] | TITLE: Collaboratively Patching Linked Data
ABSTRACT: Today's Web of Data is noisy. Linked Data often needs extensive preprocessing
to enable efficient use of heterogeneous resources. While consistent and valid
data provides the key to efficient data processing and aggregation we are
facing two main challenges: (1st) Identification of erroneous facts and
tracking their origins in dynamically connected datasets is a difficult task,
and (2nd) efforts in the curation of deficient facts in Linked Data are
exchanged rather rarely. Since erroneous data often is duplicated and
(re-)distributed by mashup applications it is not only the responsibility of a
few original publishers to keep their data tidy, but progresses to be a mission
for all distributers and consumers of Linked Data too. We present a new
approach to expose and to reuse patches on erroneous data to enhance and to add
quality information to the Web of Data. The feasibility of our approach is
demonstrated by example of a collaborative game that patches statements in
DBpedia data and provides notifications for relevant changes.
| no_new_dataset | 0.947769 |