id (string, 9-16 chars) | submitter (string, 3-64 chars, nullable) | authors (string, 5-6.63k chars) | title (string, 7-245 chars) | comments (string, 1-482 chars, nullable) | journal-ref (string, 4-382 chars, nullable) | doi (string, 9-151 chars, nullable) | report-no (string, 984 classes) | categories (string, 5-108 chars) | license (string, 9 classes) | abstract (string, 83-3.41k chars) | versions (list, length 1-20) | update_date (timestamp[s], 2007-05-23 to 2025-04-11) | authors_parsed (sequence, length 1-427) | prompt (string, 166-3.49k chars) | label (string, 2 classes) | prob (float64, 0.5-0.98)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1204.2718 | David Vallet David Vallet | Andreas Thalhammer, Ioan Toma, Antonio Roa-Valverde and Dieter Fensel | Leveraging Usage Data for Linked Data Movie Entity Summarization | 2nd International Workshop on Usage Analysis and the Web of Data
(USEWOD2012) in the 21st International World Wide Web Conference (WWW2012),
Lyon, France, April 17th, 2012 | null | null | WWW2012USEWOD/2012/thtorofe | cs.AI cs.HC cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Novel research in the field of Linked Data focuses on the problem of entity
summarization. This field addresses the problem of ranking features according
to their importance for the task of identifying a particular entity. Next to a
more human friendly presentation, these summarizations can play a central role
for semantic search engines and semantic recommender systems. In current
approaches, it has been tried to apply entity summarization based on patterns
that are inherent to the regarded data.
The proposed approach of this paper focuses on the movie domain. It utilizes
usage data in order to support measuring the similarity between movie entities.
Using this similarity it is possible to determine the k-nearest neighbors of an
entity. This leads to the idea that features that entities share with their
nearest neighbors can be considered as significant or important for these
entities. Additionally, we introduce a downgrading factor (similar to TF-IDF)
in order to overcome the high number of commonly occurring features. We
exemplify the approach based on a movie-ratings dataset that has been linked to
Freebase entities.
| [
{
"version": "v1",
"created": "Thu, 12 Apr 2012 13:31:52 GMT"
}
] | 2012-04-13T00:00:00 | [
[
"Thalhammer",
"Andreas",
""
],
[
"Toma",
"Ioan",
""
],
[
"Roa-Valverde",
"Antonio",
""
],
[
"Fensel",
"Dieter",
""
]
] | TITLE: Leveraging Usage Data for Linked Data Movie Entity Summarization
ABSTRACT: Novel research in the field of Linked Data focuses on the problem of entity
summarization. This field addresses the problem of ranking features according
to their importance for the task of identifying a particular entity. Next to a
more human friendly presentation, these summarizations can play a central role
for semantic search engines and semantic recommender systems. In current
approaches, it has been tried to apply entity summarization based on patterns
that are inherent to the regarded data.
The proposed approach of this paper focuses on the movie domain. It utilizes
usage data in order to support measuring the similarity between movie entities.
Using this similarity it is possible to determine the k-nearest neighbors of an
entity. This leads to the idea that features that entities share with their
nearest neighbors can be considered as significant or important for these
entities. Additionally, we introduce a downgrading factor (similar to TF-IDF)
in order to overcome the high number of commonly occurring features. We
exemplify the approach based on a movie-ratings dataset that has been linked to
Freebase entities.
| no_new_dataset | 0.941601 |
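
Each row in this dump pairs the arXiv metadata with a `prompt` (TITLE plus ABSTRACT), a two-class `label` (`new_dataset` vs. `no_new_dataset`), and a classifier confidence `prob`. A minimal sketch of loading and filtering such a dump with the Hugging Face `datasets` library is shown below; the repository name is a placeholder assumption, not the actual dataset path.

```python
from datasets import load_dataset

# Placeholder path -- substitute the actual dataset repository or local files.
ds = load_dataset("username/arxiv-new-dataset-labels", split="train")

# Inspect the schema and one record.
print(ds.features)
print(ds[0]["title"], ds[0]["label"], ds[0]["prob"])

# Keep only high-confidence examples that announce a new dataset.
new_data_papers = ds.filter(
    lambda row: row["label"] == "new_dataset" and row["prob"] >= 0.9
)
print(len(new_data_papers))
```
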
1204.2404 | Sanaa Elyassami | Sanaa Elyassami and Ali Idri | Investigating Effort Prediction of Software Projects on the ISBSG
Dataset | International Journal of Artificial Intelligence & Applications
(IJAIA), Vol.3, No.2, March 2012 | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many cost estimation models have been proposed over the last three decades.
In this study, we investigate fuzzy ID3 decision tree as a method for software
effort estimation. Fuzzy ID software effort estimation model is designed by
incorporating the principles of ID3 decision tree and the concepts of the fuzzy
settheoretic; permitting the model to handle uncertain and imprecise data when
presenting the software projects. MMRE (Mean Magnitude of Relative Error) and
Pred(l) (Prediction at level l) are used, as measures of prediction accuracy,
for this study. A series of experiments is reported using ISBSG software
projects dataset. Fuzzy trees are grown using different fuzziness control
thresholds. Results showed that optimizing the fuzzy ID3 parameters can improve
greatly the accuracy of the generated software cost estimate.
| [
{
"version": "v1",
"created": "Wed, 11 Apr 2012 10:36:12 GMT"
}
] | 2012-04-12T00:00:00 | [
[
"Elyassami",
"Sanaa",
""
],
[
"Idri",
"Ali",
""
]
] | TITLE: Investigating Effort Prediction of Software Projects on the ISBSG
Dataset
ABSTRACT: Many cost estimation models have been proposed over the last three decades.
In this study, we investigate fuzzy ID3 decision tree as a method for software
effort estimation. Fuzzy ID software effort estimation model is designed by
incorporating the principles of ID3 decision tree and the concepts of the fuzzy
settheoretic; permitting the model to handle uncertain and imprecise data when
presenting the software projects. MMRE (Mean Magnitude of Relative Error) and
Pred(l) (Prediction at level l) are used, as measures of prediction accuracy,
for this study. A series of experiments is reported using ISBSG software
projects dataset. Fuzzy trees are grown using different fuzziness control
thresholds. Results showed that optimizing the fuzzy ID3 parameters can improve
greatly the accuracy of the generated software cost estimate.
| no_new_dataset | 0.951188 |
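
The record above reports accuracy with MMRE and Pred(l), both of which have standard definitions. The sketch below computes them on made-up effort figures, purely for illustration (the numbers are not from the ISBSG dataset).

```python
import numpy as np

def mmre(actual, predicted):
    """Mean Magnitude of Relative Error: mean of |actual - predicted| / actual."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return np.mean(np.abs(actual - predicted) / actual)

def pred(actual, predicted, level=25):
    """Pred(l): fraction of estimates whose relative error is at most l percent."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    mre = np.abs(actual - predicted) / actual
    return np.mean(mre <= level / 100.0)

# Toy person-hour figures for illustration only.
actual = [120, 340, 95, 800]
predicted = [110, 400, 100, 760]
print(f"MMRE = {mmre(actual, predicted):.3f}, Pred(25) = {pred(actual, predicted):.2f}")
```
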
1204.2114 | Yong Haur Tay | Jun Yee Ng and Yong Haur Tay | Image-based Vehicle Classification System | The 11th Asia-Pacific ITS Forum and Exhibition (ITS-AP 2011),
Kaoshiung, Taiwan. June 8-11, 2011 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Electronic toll collection (ETC) system has been a common trend used for toll
collection on toll road nowadays. The implementation of electronic toll
collection allows vehicles to travel at low or full speed during the toll
payment, which help to avoid the traffic delay at toll road. One of the major
components of an electronic toll collection is the automatic vehicle detection
and classification (AVDC) system which is important to classify the vehicle so
that the toll is charged according to the vehicle classes. Vision-based vehicle
classification system is one type of vehicle classification system which adopt
camera as the input sensing device for the system. This type of system has
advantage over the rest for it is cost efficient as low cost camera is used.
The implementation of vision-based vehicle classification system requires lower
initial investment cost and very suitable for the toll collection trend
migration in Malaysia from single ETC system to full-scale multi-lane free flow
(MLFF). This project includes the development of an image-based vehicle
classification system as an effort to seek for a robust vision-based vehicle
classification system. The techniques used in the system include
scale-invariant feature transform (SIFT) technique, Canny's edge detector,
K-means clustering as well as Euclidean distance matching. In this project, a
unique way to image description as matching medium is proposed. This
distinctiveness of method is analogous to the human DNA concept which is highly
unique. The system is evaluated on open datasets and return promising results.
| [
{
"version": "v1",
"created": "Tue, 10 Apr 2012 11:59:10 GMT"
}
] | 2012-04-11T00:00:00 | [
[
"Ng",
"Jun Yee",
""
],
[
"Tay",
"Yong Haur",
""
]
] | TITLE: Image-based Vehicle Classification System
ABSTRACT: Electronic toll collection (ETC) system has been a common trend used for toll
collection on toll road nowadays. The implementation of electronic toll
collection allows vehicles to travel at low or full speed during the toll
payment, which help to avoid the traffic delay at toll road. One of the major
components of an electronic toll collection is the automatic vehicle detection
and classification (AVDC) system which is important to classify the vehicle so
that the toll is charged according to the vehicle classes. Vision-based vehicle
classification system is one type of vehicle classification system which adopt
camera as the input sensing device for the system. This type of system has
advantage over the rest for it is cost efficient as low cost camera is used.
The implementation of vision-based vehicle classification system requires lower
initial investment cost and very suitable for the toll collection trend
migration in Malaysia from single ETC system to full-scale multi-lane free flow
(MLFF). This project includes the development of an image-based vehicle
classification system as an effort to seek for a robust vision-based vehicle
classification system. The techniques used in the system include
scale-invariant feature transform (SIFT) technique, Canny's edge detector,
K-means clustering as well as Euclidean distance matching. In this project, a
unique way to image description as matching medium is proposed. This
distinctiveness of method is analogous to the human DNA concept which is highly
unique. The system is evaluated on open datasets and return promising results.
| no_new_dataset | 0.945147 |
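
The abstract above names SIFT descriptors, K-means clustering, and Euclidean distance matching as building blocks. The sketch below wires those generic pieces into a toy bag-of-visual-words classifier with OpenCV (4.4+) and scikit-learn; the image paths and class names are placeholders, and the Canny edge step and the DNA-like description proposed in the paper are not reproduced.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

sift = cv2.SIFT_create()

def descriptors(path):
    """128-d SIFT descriptors of a grayscale image."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, desc = sift.detectAndCompute(img, None)
    return desc

# Placeholder training images grouped by (placeholder) vehicle class.
train = {"car": ["car1.jpg", "car2.jpg"], "truck": ["truck1.jpg", "truck2.jpg"]}
all_desc = np.vstack([descriptors(p) for paths in train.values() for p in paths])

# Visual vocabulary: K-means cluster centres over all descriptors.
kmeans = KMeans(n_clusters=64, n_init=10, random_state=0).fit(all_desc)

def bow_histogram(path):
    """Histogram of visual-word assignments for one image."""
    words = kmeans.predict(descriptors(path))
    return np.bincount(words, minlength=kmeans.n_clusters).astype(float)

# One prototype histogram per class; a query image is assigned to the class
# whose prototype is closest in Euclidean distance.
prototypes = {c: np.mean([bow_histogram(p) for p in paths], axis=0)
              for c, paths in train.items()}
query = bow_histogram("unknown.jpg")
print(min(prototypes, key=lambda c: np.linalg.norm(query - prototypes[c])))
```
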
1203.3586 | Mohsen Pourvali | Mohsen Pourvali and Mohammad Saniee Abadeh | Automated Text Summarization Base on Lexicales Chain and graph Using of
WordNet and Wikipedia Knowledge Base | null | IJCSI International Journal of Computer Science Issues, Vol. 9,
Issue 1, No 3, January 2012 | null | null | cs.IR cs.CL | http://creativecommons.org/licenses/by/3.0/ | The technology of automatic document summarization is maturing and may
provide a solution to the information overload problem. Nowadays, document
summarization plays an important role in information retrieval. With a large
volume of documents, presenting the user with a summary of each document
greatly facilitates the task of finding the desired documents. Document
summarization is a process of automatically creating a compressed version of a
given document that provides useful information to users, and multi-document
summarization is to produce a summary delivering the majority of information
content from a set of documents about an explicit or implicit main topic. The
lexical cohesion structure of the text can be exploited to determine the
importance of a sentence/phrase. Lexical chains are useful tools to analyze the
lexical cohesion structure in a text .In this paper we consider the effect of
the use of lexical cohesion features in Summarization, And presenting a
algorithm base on the knowledge base. Ours algorithm at first find the correct
sense of any word, Then constructs the lexical chains, remove Lexical chains
that less score than other, detects topics roughly from lexical chains,
segments the text with respect to the topics and selects the most important
sentences. The experimental results on an open benchmark datasets from DUC01
and DUC02 show that our proposed approach can improve the performance compared
to sate-of-the-art summarization approaches.
| [
{
"version": "v1",
"created": "Thu, 15 Mar 2012 22:56:29 GMT"
}
] | 2012-04-10T00:00:00 | [
[
"Pourvali",
"Mohsen",
""
],
[
"Abadeh",
"Mohammad Saniee",
""
]
] | TITLE: Automated Text Summarization Base on Lexicales Chain and graph Using of
WordNet and Wikipedia Knowledge Base
ABSTRACT: The technology of automatic document summarization is maturing and may
provide a solution to the information overload problem. Nowadays, document
summarization plays an important role in information retrieval. With a large
volume of documents, presenting the user with a summary of each document
greatly facilitates the task of finding the desired documents. Document
summarization is a process of automatically creating a compressed version of a
given document that provides useful information to users, and multi-document
summarization is to produce a summary delivering the majority of information
content from a set of documents about an explicit or implicit main topic. The
lexical cohesion structure of the text can be exploited to determine the
importance of a sentence/phrase. Lexical chains are useful tools to analyze the
lexical cohesion structure in a text .In this paper we consider the effect of
the use of lexical cohesion features in Summarization, And presenting a
algorithm base on the knowledge base. Ours algorithm at first find the correct
sense of any word, Then constructs the lexical chains, remove Lexical chains
that less score than other, detects topics roughly from lexical chains,
segments the text with respect to the topics and selects the most important
sentences. The experimental results on an open benchmark datasets from DUC01
and DUC02 show that our proposed approach can improve the performance compared
to sate-of-the-art summarization approaches.
| no_new_dataset | 0.949342 |
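
As a rough illustration of the lexical-chain idea in the abstract above (not the authors' algorithm, which also performs word-sense disambiguation, chain pruning, and topic segmentation), the sketch below groups noun occurrences that share a WordNet synset into chains, scores chains by size, and keeps the sentences covering the strongest chain. NLTK with the WordNet corpus is assumed to be installed.

```python
import re
from collections import defaultdict
from nltk.corpus import wordnet as wn   # requires: import nltk; nltk.download("wordnet")

def summarize(sentences, keep=2):
    # Naive lexical chains: every occurrence of a word is appended to the chains of
    # its noun synsets, so repeated or related nouns accumulate in the same chain.
    chains = defaultdict(list)
    for sent in sentences:
        for word in re.findall(r"[a-z]+", sent.lower()):
            for syn in wn.synsets(word, pos=wn.NOUN):
                chains[syn.name()].append(word)
    if not chains:
        return sentences[:keep]
    strongest = set(max(chains.values(), key=len))   # chain with the most occurrences
    score = lambda s: sum(w in strongest for w in re.findall(r"[a-z]+", s.lower()))
    return sorted(sentences, key=score, reverse=True)[:keep]

doc = ["The library added several new books to its collection.",
       "Visitors may borrow books from the library for two weeks.",
       "The weather stayed sunny all afternoon."]
print(summarize(doc))
```
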
1204.1611 | Choon Boon Ng | Choon Boon Ng, Yong Haur Tay, Bok Min Goi | Vision-based Human Gender Recognition: A Survey | 30 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Gender is an important demographic attribute of people. This paper provides a
survey of human gender recognition in computer vision. A review of approaches
exploiting information from face and whole body (either from a still image or
gait sequence) is presented. We highlight the challenges faced and survey the
representative methods of these approaches. Based on the results, good
performance have been achieved for datasets captured under controlled
environments, but there is still much work that can be done to improve the
robustness of gender recognition under real-life environments.
| [
{
"version": "v1",
"created": "Sat, 7 Apr 2012 08:17:40 GMT"
}
] | 2012-04-10T00:00:00 | [
[
"Ng",
"Choon Boon",
""
],
[
"Tay",
"Yong Haur",
""
],
[
"Goi",
"Bok Min",
""
]
] | TITLE: Vision-based Human Gender Recognition: A Survey
ABSTRACT: Gender is an important demographic attribute of people. This paper provides a
survey of human gender recognition in computer vision. A review of approaches
exploiting information from face and whole body (either from a still image or
gait sequence) is presented. We highlight the challenges faced and survey the
representative methods of these approaches. Based on the results, good
performance have been achieved for datasets captured under controlled
environments, but there is still much work that can be done to improve the
robustness of gender recognition under real-life environments.
| no_new_dataset | 0.946843 |
1204.1949 | Zi-Ke Zhang Mr. | Xiao Hu, Chuibo Chen, Xiaolong Chen, Zi-Ke Zhang | Social Recommender Systems Based on Coupling Network Structure Analysis | null | null | null | null | cs.IR cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The past few years has witnessed the great success of recommender systems,
which can significantly help users find relevant and interesting items for them
in the information era. However, a vast class of researches in this area mainly
focus on predicting missing links in bipartite user-item networks (represented
as behavioral networks). Comparatively, the social impact, especially the
network structure based properties, is relatively lack of study. In this paper,
we firstly obtain five corresponding network-based features, including user
activity, average neighbors' degree, clustering coefficient, assortative
coefficient and discrimination, from social and behavioral networks,
respectively. A hybrid algorithm is proposed to integrate those features from
two respective networks. Subsequently, we employ a machine learning process to
use those features to provide recommendation results in a binary classifier
method. Experimental results on a real dataset, Flixster, suggest that the
proposed method can significantly enhance the algorithmic accuracy. In
addition, as network-based properties consider not only the social activities,
but also take into account user preferences in the behavioral networks,
therefore, it performs much better than that from either social or behavioral
networks. Furthermore, since the features based on the behavioral network
contain more diverse and meaningfully structural information, they play a vital
role in uncovering users' potential preference, which, might show light in
deeply understanding the structure and function of the social and behavioral
networks.
| [
{
"version": "v1",
"created": "Mon, 9 Apr 2012 18:46:53 GMT"
}
] | 2012-04-10T00:00:00 | [
[
"Hu",
"Xiao",
""
],
[
"Chen",
"Chuibo",
""
],
[
"Chen",
"Xiaolong",
""
],
[
"Zhang",
"Zi-Ke",
""
]
] | TITLE: Social Recommender Systems Based on Coupling Network Structure Analysis
ABSTRACT: The past few years has witnessed the great success of recommender systems,
which can significantly help users find relevant and interesting items for them
in the information era. However, a vast class of researches in this area mainly
focus on predicting missing links in bipartite user-item networks (represented
as behavioral networks). Comparatively, the social impact, especially the
network structure based properties, is relatively lack of study. In this paper,
we firstly obtain five corresponding network-based features, including user
activity, average neighbors' degree, clustering coefficient, assortative
coefficient and discrimination, from social and behavioral networks,
respectively. A hybrid algorithm is proposed to integrate those features from
two respective networks. Subsequently, we employ a machine learning process to
use those features to provide recommendation results in a binary classifier
method. Experimental results on a real dataset, Flixster, suggest that the
proposed method can significantly enhance the algorithmic accuracy. In
addition, as network-based properties consider not only the social activities,
but also take into account user preferences in the behavioral networks,
therefore, it performs much better than that from either social or behavioral
networks. Furthermore, since the features based on the behavioral network
contain more diverse and meaningfully structural information, they play a vital
role in uncovering users' potential preference, which, might show light in
deeply understanding the structure and function of the social and behavioral
networks.
| no_new_dataset | 0.944944 |
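
Four of the five network features listed above (user activity as node degree, average neighbors' degree, clustering coefficient, assortative coefficient) are standard graph statistics; "discrimination" is left out here because its exact definition is paper-specific. A short NetworkX sketch on a toy undirected graph:

```python
import networkx as nx

# Toy undirected graph; in the paper these features come from both the social
# and the behavioral (user-item) networks.
G = nx.Graph([("a", "b"), ("a", "c"), ("b", "c"), ("c", "d"), ("d", "e")])

activity = dict(G.degree())                              # user activity = node degree
avg_nbr_degree = nx.average_neighbor_degree(G)           # average degree of a node's neighbors
clustering = nx.clustering(G)                            # local clustering coefficient
assortativity = nx.degree_assortativity_coefficient(G)   # one scalar for the whole graph

print(activity, avg_nbr_degree, clustering, assortativity)
```
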
1204.1336 | Md. Abu Naser Bikas | Mohammad Sazzadul Hoque, Md. Abdul Mukit and Md. Abu Naser Bikas | An Implementation of Intrusion Detection System Using Genetic Algorithm | null | International Journal of Network Security & Its Applications,
Volume 4, Number 2, pages 109 - 120, March 2012 | 10.5121/ijnsa.2012.4208 | null | cs.CR cs.NE cs.NI | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Nowadays it is very important to maintain a high level security to ensure
safe and trusted communication of information between various organizations.
But secured data communication over internet and any other network is always
under threat of intrusions and misuses. So Intrusion Detection Systems have
become a needful component in terms of computer and network security. There are
various approaches being utilized in intrusion detections, but unfortunately
any of the systems so far is not completely flawless. So, the quest of
betterment continues. In this progression, here we present an Intrusion
Detection System (IDS), by applying genetic algorithm (GA) to efficiently
detect various types of network intrusions. Parameters and evolution processes
for GA are discussed in details and implemented. This approach uses evolution
theory to information evolution in order to filter the traffic data and thus
reduce the complexity. To implement and measure the performance of our system
we used the KDD99 benchmark dataset and obtained reasonable detection rate.
| [
{
"version": "v1",
"created": "Thu, 5 Apr 2012 11:40:21 GMT"
}
] | 2012-04-09T00:00:00 | [
[
"Hoque",
"Mohammad Sazzadul",
""
],
[
"Mukit",
"Md. Abdul",
""
],
[
"Bikas",
"Md. Abu Naser",
""
]
] | TITLE: An Implementation of Intrusion Detection System Using Genetic Algorithm
ABSTRACT: Nowadays it is very important to maintain a high level security to ensure
safe and trusted communication of information between various organizations.
But secured data communication over internet and any other network is always
under threat of intrusions and misuses. So Intrusion Detection Systems have
become a needful component in terms of computer and network security. There are
various approaches being utilized in intrusion detections, but unfortunately
any of the systems so far is not completely flawless. So, the quest of
betterment continues. In this progression, here we present an Intrusion
Detection System (IDS), by applying genetic algorithm (GA) to efficiently
detect various types of network intrusions. Parameters and evolution processes
for GA are discussed in details and implemented. This approach uses evolution
theory to information evolution in order to filter the traffic data and thus
reduce the complexity. To implement and measure the performance of our system
we used the KDD99 benchmark dataset and obtained reasonable detection rate.
| no_new_dataset | 0.940898 |
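
For orientation only, the sketch below shows a bare-bones genetic algorithm (selection, crossover, mutation) evolving a two-threshold detection rule on synthetic connection records; the chromosome encoding, fitness function, and KDD99 features used in the paper are not reproduced.

```python
import random

random.seed(0)

# Toy "connections": (duration, bytes) pairs, flagged as intrusions when both values
# are high, with 10% label noise. These stand in for real KDD99 features.
def record():
    d, b = random.random(), random.random()
    return (d, b), (d > 0.6 and b > 0.5) != (random.random() < 0.1)

data = [record() for _ in range(300)]

def fitness(rule):
    """Accuracy of the rule 'intrusion if duration > t1 and bytes > t2' on the toy data."""
    t1, t2 = rule
    return sum((d > t1 and b > t2) == label for (d, b), label in data) / len(data)

def evolve(pop_size=30, generations=40, mutation_rate=0.2):
    population = [(random.random(), random.random()) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]              # selection: keep the fitter half
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = (a[0], b[1])                           # one-point crossover
            if random.random() < mutation_rate:            # mutation: reset one gene
                child = (random.random(), child[1]) if random.random() < 0.5 else (child[0], random.random())
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print(best, fitness(best))
```
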
1204.1393 | Raquel Urtasun | Koichiro Yamaguchi and Tamir Hazan and David McAllester and Raquel
Urtasun | Continuous Markov Random Fields for Robust Stereo Estimation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we present a novel slanted-plane MRF model which reasons
jointly about occlusion boundaries as well as depth. We formulate the problem
as the one of inference in a hybrid MRF composed of both continuous (i.e.,
slanted 3D planes) and discrete (i.e., occlusion boundaries) random variables.
This allows us to define potentials encoding the ownership of the pixels that
compose the boundary between segments, as well as potentials encoding which
junctions are physically possible. Our approach outperforms the
state-of-the-art on Middlebury high resolution imagery as well as in the more
challenging KITTI dataset, while being more efficient than existing slanted
plane MRF-based methods, taking on average 2 minutes to perform inference on
high resolution imagery.
| [
{
"version": "v1",
"created": "Fri, 6 Apr 2012 01:40:21 GMT"
}
] | 2012-04-09T00:00:00 | [
[
"Yamaguchi",
"Koichiro",
""
],
[
"Hazan",
"Tamir",
""
],
[
"McAllester",
"David",
""
],
[
"Urtasun",
"Raquel",
""
]
] | TITLE: Continuous Markov Random Fields for Robust Stereo Estimation
ABSTRACT: In this paper we present a novel slanted-plane MRF model which reasons
jointly about occlusion boundaries as well as depth. We formulate the problem
as the one of inference in a hybrid MRF composed of both continuous (i.e.,
slanted 3D planes) and discrete (i.e., occlusion boundaries) random variables.
This allows us to define potentials encoding the ownership of the pixels that
compose the boundary between segments, as well as potentials encoding which
junctions are physically possible. Our approach outperforms the
state-of-the-art on Middlebury high resolution imagery as well as in the more
challenging KITTI dataset, while being more efficient than existing slanted
plane MRF-based methods, taking on average 2 minutes to perform inference on
high resolution imagery.
| no_new_dataset | 0.953232 |
1204.1528 | Thomas Sandholm | Leandro Balby Marinho, Cl\'audio de Souza Baptista, Thomas Sandholm,
Iury Nunes, Caio N\'obrega, Jord\~ao Ara\'ujo | Extracting Geospatial Preferences Using Relational Neighbors | null | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the increasing popularity of location-based social media applications
and devices that automatically tag generated content with locations, large
repositories of collaborative geo-referenced data are appearing on-line.
Efficiently extracting user preferences from these data to determine what
information to recommend is challenging because of the sheer volume of data as
well as the frequency of updates. Traditional recommender systems focus on the
interplay between users and items, but ignore contextual parameters such as
location. In this paper we take a geospatial approach to determine locational
preferences and similarities between users. We propose to capture the
geographic context of user preferences for items using a relational graph,
through which we are able to derive many new and state-of-the-art
recommendation algorithms, including combinations of them, requiring changes
only in the definition of the edge weights. Furthermore, we discuss several
solutions for cold-start scenarios. Finally, we conduct experiments using two
real-world datasets and provide empirical evidence that many of the proposed
algorithms outperform existing location-aware recommender algorithms.
| [
{
"version": "v1",
"created": "Fri, 6 Apr 2012 18:15:55 GMT"
}
] | 2012-04-09T00:00:00 | [
[
"Marinho",
"Leandro Balby",
""
],
[
"Baptista",
"Cláudio de Souza",
""
],
[
"Sandholm",
"Thomas",
""
],
[
"Nunes",
"Iury",
""
],
[
"Nóbrega",
"Caio",
""
],
[
"Araújo",
"Jordão",
""
]
] | TITLE: Extracting Geospatial Preferences Using Relational Neighbors
ABSTRACT: With the increasing popularity of location-based social media applications
and devices that automatically tag generated content with locations, large
repositories of collaborative geo-referenced data are appearing on-line.
Efficiently extracting user preferences from these data to determine what
information to recommend is challenging because of the sheer volume of data as
well as the frequency of updates. Traditional recommender systems focus on the
interplay between users and items, but ignore contextual parameters such as
location. In this paper we take a geospatial approach to determine locational
preferences and similarities between users. We propose to capture the
geographic context of user preferences for items using a relational graph,
through which we are able to derive many new and state-of-the-art
recommendation algorithms, including combinations of them, requiring changes
only in the definition of the edge weights. Furthermore, we discuss several
solutions for cold-start scenarios. Finally, we conduct experiments using two
real-world datasets and provide empirical evidence that many of the proposed
algorithms outperform existing location-aware recommender algorithms.
| no_new_dataset | 0.948537 |
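
One way to read the abstract above is that geography enters the recommender purely through the edge weights of a user-item graph. The sketch below is a hypothetical, minimal version of that idea: check-in edges are weighted by inverse haversine distance and unseen items are scored over weighted two-hop paths. All place names, coordinates, and the scoring rule are illustrative assumptions, not the paper's algorithms.

```python
import math
import networkx as nx

def haversine_km(p, q):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(a))

# Hypothetical check-ins: (user, item, user home position, item position).
checkins = [("u1", "cafe",   (48.85, 2.35), (48.86, 2.34)),
            ("u2", "cafe",   (48.84, 2.36), (48.86, 2.34)),
            ("u2", "museum", (48.84, 2.36), (48.86, 2.33))]

# Bipartite user-item graph; nearby check-ins get heavier edges.
G = nx.Graph()
for user, item, upos, ipos in checkins:
    G.add_edge(user, item, weight=1.0 / (1.0 + haversine_km(upos, ipos)))

def recommend(user):
    """Score unseen items by weighted two-hop paths: user -> item -> other user -> item."""
    seen, scores = set(G[user]), {}
    for item in G[user]:
        for other in G[item]:
            if other == user:
                continue
            for candidate in G[other]:
                if candidate not in seen:
                    scores[candidate] = (scores.get(candidate, 0.0)
                                         + G[user][item]["weight"] * G[other][candidate]["weight"])
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("u1"))   # -> ['museum']
```
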
1201.3382 | Ian Goodfellow | Ian J. Goodfellow and Aaron Courville and Yoshua Bengio | Spike-and-Slab Sparse Coding for Unsupervised Feature Discovery | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of using a factor model we call {\em spike-and-slab
sparse coding} (S3C) to learn features for a classification task. The S3C model
resembles both the spike-and-slab RBM and sparse coding. Since exact inference
in this model is intractable, we derive a structured variational inference
procedure and employ a variational EM training algorithm. Prior work on
approximate inference for this model has not prioritized the ability to exploit
parallel architectures and scale to enormous problem sizes. We present an
inference procedure appropriate for use with GPUs which allows us to
dramatically increase both the training set size and the amount of latent
factors.
We demonstrate that this approach improves upon the supervised learning
capabilities of both sparse coding and the ssRBM on the CIFAR-10 dataset. We
evaluate our approach's potential for semi-supervised learning on subsets of
CIFAR-10. We demonstrate state-of-the art self-taught learning performance on
the STL-10 dataset and use our method to win the NIPS 2011 Workshop on
Challenges In Learning Hierarchical Models' Transfer Learning Challenge.
| [
{
"version": "v1",
"created": "Mon, 16 Jan 2012 22:00:07 GMT"
},
{
"version": "v2",
"created": "Tue, 3 Apr 2012 22:48:52 GMT"
}
] | 2012-04-05T00:00:00 | [
[
"Goodfellow",
"Ian J.",
""
],
[
"Courville",
"Aaron",
""
],
[
"Bengio",
"Yoshua",
""
]
] | TITLE: Spike-and-Slab Sparse Coding for Unsupervised Feature Discovery
ABSTRACT: We consider the problem of using a factor model we call {\em spike-and-slab
sparse coding} (S3C) to learn features for a classification task. The S3C model
resembles both the spike-and-slab RBM and sparse coding. Since exact inference
in this model is intractable, we derive a structured variational inference
procedure and employ a variational EM training algorithm. Prior work on
approximate inference for this model has not prioritized the ability to exploit
parallel architectures and scale to enormous problem sizes. We present an
inference procedure appropriate for use with GPUs which allows us to
dramatically increase both the training set size and the amount of latent
factors.
We demonstrate that this approach improves upon the supervised learning
capabilities of both sparse coding and the ssRBM on the CIFAR-10 dataset. We
evaluate our approach's potential for semi-supervised learning on subsets of
CIFAR-10. We demonstrate state-of-the art self-taught learning performance on
the STL-10 dataset and use our method to win the NIPS 2011 Workshop on
Challenges In Learning Hierarchical Models' Transfer Learning Challenge.
| no_new_dataset | 0.946051 |
1110.2096 | Philipp Herrmann | Philipp N. Herrmann, Dennis O. Kundisch, Mohammad S. Rahman | Beating Irrationality: Does Delegating to IT Alleviate the Sunk Cost
Effect? | null | null | null | null | cs.HC cs.CY cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this research, we investigate the impact of delegating decision making to
information technology (IT) on an important human decision bias - the sunk cost
effect. To address our research question, we use a unique and very rich dataset
containing actual market transaction data for approximately 7,000 pay-per-bid
auctions. Thus, unlike previous studies that are primarily laboratory
experiments, we investigate the effects of using IT on the proneness to a
decision bias in real market transactions. We identify and analyze irrational
decision scenarios of auction participants. We find that participants with a
higher monetary investment have an increased likelihood of violating the
assumption of rationality, due to the sunk cost effect. Interestingly, after
controlling for monetary investments, participants who delegate their decision
making to IT and, consequently, have comparably lower behavioral investments
(e.g., emotional attachment, effort, time) are less prone to the sunk cost
effect. In particular, delegation to IT reduces the impact of overall
investments on the sunk cost effect by approximately 50%.
| [
{
"version": "v1",
"created": "Mon, 10 Oct 2011 16:23:18 GMT"
},
{
"version": "v2",
"created": "Tue, 3 Apr 2012 15:34:53 GMT"
}
] | 2012-04-04T00:00:00 | [
[
"Herrmann",
"Philipp N.",
""
],
[
"Kundisch",
"Dennis O.",
""
],
[
"Rahman",
"Mohammad S.",
""
]
] | TITLE: Beating Irrationality: Does Delegating to IT Alleviate the Sunk Cost
Effect?
ABSTRACT: In this research, we investigate the impact of delegating decision making to
information technology (IT) on an important human decision bias - the sunk cost
effect. To address our research question, we use a unique and very rich dataset
containing actual market transaction data for approximately 7,000 pay-per-bid
auctions. Thus, unlike previous studies that are primarily laboratory
experiments, we investigate the effects of using IT on the proneness to a
decision bias in real market transactions. We identify and analyze irrational
decision scenarios of auction participants. We find that participants with a
higher monetary investment have an increased likelihood of violating the
assumption of rationality, due to the sunk cost effect. Interestingly, after
controlling for monetary investments, participants who delegate their decision
making to IT and, consequently, have comparably lower behavioral investments
(e.g., emotional attachment, effort, time) are less prone to the sunk cost
effect. In particular, delegation to IT reduces the impact of overall
investments on the sunk cost effect by approximately 50%.
| new_dataset | 0.960584 |
1204.0033 | Ryan Rossi | Ryan A. Rossi, Luke K. McDowell, David W. Aha and Jennifer Neville | Transforming Graph Representations for Statistical Relational Learning | null | null | null | null | stat.ML cs.AI cs.LG cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Relational data representations have become an increasingly important topic
due to the recent proliferation of network datasets (e.g., social, biological,
information networks) and a corresponding increase in the application of
statistical relational learning (SRL) algorithms to these domains. In this
article, we examine a range of representation issues for graph-based relational
data. Since the choice of relational data representation for the nodes, links,
and features can dramatically affect the capabilities of SRL algorithms, we
survey approaches and opportunities for relational representation
transformation designed to improve the performance of these algorithms. This
leads us to introduce an intuitive taxonomy for data representation
transformations in relational domains that incorporates link transformation and
node transformation as symmetric representation tasks. In particular, the
transformation tasks for both nodes and links include (i) predicting their
existence, (ii) predicting their label or type, (iii) estimating their weight
or importance, and (iv) systematically constructing their relevant features. We
motivate our taxonomy through detailed examples and use it to survey and
compare competing approaches for each of these tasks. We also discuss general
conditions for transforming links, nodes, and features. Finally, we highlight
challenges that remain to be addressed.
| [
{
"version": "v1",
"created": "Fri, 30 Mar 2012 21:38:52 GMT"
}
] | 2012-04-03T00:00:00 | [
[
"Rossi",
"Ryan A.",
""
],
[
"McDowell",
"Luke K.",
""
],
[
"Aha",
"David W.",
""
],
[
"Neville",
"Jennifer",
""
]
] | TITLE: Transforming Graph Representations for Statistical Relational Learning
ABSTRACT: Relational data representations have become an increasingly important topic
due to the recent proliferation of network datasets (e.g., social, biological,
information networks) and a corresponding increase in the application of
statistical relational learning (SRL) algorithms to these domains. In this
article, we examine a range of representation issues for graph-based relational
data. Since the choice of relational data representation for the nodes, links,
and features can dramatically affect the capabilities of SRL algorithms, we
survey approaches and opportunities for relational representation
transformation designed to improve the performance of these algorithms. This
leads us to introduce an intuitive taxonomy for data representation
transformations in relational domains that incorporates link transformation and
node transformation as symmetric representation tasks. In particular, the
transformation tasks for both nodes and links include (i) predicting their
existence, (ii) predicting their label or type, (iii) estimating their weight
or importance, and (iv) systematically constructing their relevant features. We
motivate our taxonomy through detailed examples and use it to survey and
compare competing approaches for each of these tasks. We also discuss general
conditions for transforming links, nodes, and features. Finally, we highlight
challenges that remain to be addressed.
| no_new_dataset | 0.94801 |
1204.0184 | Youssef Bassil | Youssef Bassil | Parallel Spell-Checking Algorithm Based on Yahoo! N-Grams Dataset | LACSC - Lebanese Association for Computational Sciences,
http://www.lacsc.org/; International Journal of Research and Reviews in
Computer Science (IJRRCS), Vol. 3, No. 1, February 2012 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Spell-checking is the process of detecting and sometimes providing
suggestions for incorrectly spelled words in a text. Basically, the larger the
dictionary of a spell-checker is, the higher is the error detection rate;
otherwise, misspellings would pass undetected. Unfortunately, traditional
dictionaries suffer from out-of-vocabulary and data sparseness problems as they
do not encompass large vocabulary of words indispensable to cover proper names,
domain-specific terms, technical jargons, special acronyms, and terminologies.
As a result, spell-checkers will incur low error detection and correction rate
and will fail to flag all errors in the text. This paper proposes a new
parallel shared-memory spell-checking algorithm that uses rich real-world word
statistics from Yahoo! N-Grams Dataset to correct non-word and real-word errors
in computer text. Essentially, the proposed algorithm can be divided into three
sub-algorithms that run in a parallel fashion: The error detection algorithm
that detects misspellings, the candidates generation algorithm that generates
correction suggestions, and the error correction algorithm that performs
contextual error correction. Experiments conducted on a set of text articles
containing misspellings, showed a remarkable spelling error correction rate
that resulted in a radical reduction of both non-word and real-word errors in
electronic text. In a further study, the proposed algorithm is to be optimized
for message-passing systems so as to become more flexible and less costly to
scale over distributed machines.
| [
{
"version": "v1",
"created": "Sun, 1 Apr 2012 09:28:20 GMT"
}
] | 2012-04-03T00:00:00 | [
[
"Bassil",
"Youssef",
""
]
] | TITLE: Parallel Spell-Checking Algorithm Based on Yahoo! N-Grams Dataset
ABSTRACT: Spell-checking is the process of detecting and sometimes providing
suggestions for incorrectly spelled words in a text. Basically, the larger the
dictionary of a spell-checker is, the higher is the error detection rate;
otherwise, misspellings would pass undetected. Unfortunately, traditional
dictionaries suffer from out-of-vocabulary and data sparseness problems as they
do not encompass large vocabulary of words indispensable to cover proper names,
domain-specific terms, technical jargons, special acronyms, and terminologies.
As a result, spell-checkers will incur low error detection and correction rate
and will fail to flag all errors in the text. This paper proposes a new
parallel shared-memory spell-checking algorithm that uses rich real-world word
statistics from Yahoo! N-Grams Dataset to correct non-word and real-word errors
in computer text. Essentially, the proposed algorithm can be divided into three
sub-algorithms that run in a parallel fashion: The error detection algorithm
that detects misspellings, the candidates generation algorithm that generates
correction suggestions, and the error correction algorithm that performs
contextual error correction. Experiments conducted on a set of text articles
containing misspellings, showed a remarkable spelling error correction rate
that resulted in a radical reduction of both non-word and real-word errors in
electronic text. In a further study, the proposed algorithm is to be optimized
for message-passing systems so as to become more flexible and less costly to
scale over distributed machines.
| no_new_dataset | 0.942135 |
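
The three stages described above (error detection, candidate generation, error correction) can be illustrated with a standard edit-distance-1 generator ranked by word frequency. The tiny frequency table below is a stand-in, not the Yahoo! N-Grams Dataset, and the code is sequential rather than the parallel shared-memory algorithm of the paper.

```python
import string

# Stand-in unigram counts; the paper draws such statistics from Yahoo! N-Grams.
FREQ = {"spelling": 120_000, "spell": 340_000, "checking": 210_000, "process": 500_000}

def edits1(word):
    """All strings at edit distance 1: deletes, transposes, replaces, inserts."""
    letters = string.ascii_lowercase
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [L + R[1:] for L, R in splits if R]
    transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]
    replaces = [L + c + R[1:] for L, R in splits if R for c in letters]
    inserts = [L + c + R for L, R in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

def correct(word):
    if word in FREQ:                                        # detection: known words pass through
        return word
    candidates = [w for w in edits1(word) if w in FREQ]     # candidate generation
    return max(candidates, key=FREQ.get, default=word)      # correction by corpus frequency

print(correct("speling"))   # -> "spelling"
```
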
1204.0233 | O. Paul Isikaku-Ironkwe | O. Paul Isikaku-Ironkwe | Transition Temperatures of Superconductors estimated from Periodic Table
Properties | 28 pages,10 Tables, 5 figures | null | null | null | physics.gen-ph cond-mat.supr-con | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Predicting the transition temperature, Tc, of a superconductor from Periodic
Table normal state properties is regarded as one of the grand challenges of
superconductivity. By studying the correlations of Periodic Table properties
with known superconductors, it is possible to estimate their transition
temperatures. Starting from the isotope effect and correlations of
superconductivity with electronegativity (\Chi), valence electron count per
atom (Ne), atomic number(Z) and formula weight (Fw), we derive an empirical
formula for estimating Tc that includes an unknown parameter,(Ko). With average
values of \Chi, Ne and Z, we develop a material specific characterization
dataset (MSCD) model of a superconductor that is quantitatively useful for
characterizing and comparing superconductors. We show that for most
superconductors, Ko correlates with Fw/Z, Ne, Z, number of atoms (An) in the
formula, number of elements (En) and with Tc. We study some superconductor
families and use the discovered correlations to predict similar and novel
superconductors and also estimate their Tcs. Thus the material specific
equations derived in this paper, the material specific characterization dataset
(MSCD) system developed here and the discovered correlation between Tc and
Fw/Z, En and An, provide the building blocks for the analysis, design and
search of potential novel high temperature superconductors with specific
estimated Tcs.
| [
{
"version": "v1",
"created": "Sun, 25 Mar 2012 06:39:25 GMT"
}
] | 2012-04-03T00:00:00 | [
[
"Isikaku-Ironkwe",
"O. Paul",
""
]
] | TITLE: Transition Temperatures of Superconductors estimated from Periodic Table
Properties
ABSTRACT: Predicting the transition temperature, Tc, of a superconductor from Periodic
Table normal state properties is regarded as one of the grand challenges of
superconductivity. By studying the correlations of Periodic Table properties
with known superconductors, it is possible to estimate their transition
temperatures. Starting from the isotope effect and correlations of
superconductivity with electronegativity (\Chi), valence electron count per
atom (Ne), atomic number(Z) and formula weight (Fw), we derive an empirical
formula for estimating Tc that includes an unknown parameter,(Ko). With average
values of \Chi, Ne and Z, we develop a material specific characterization
dataset (MSCD) model of a superconductor that is quantitatively useful for
characterizing and comparing superconductors. We show that for most
superconductors, Ko correlates with Fw/Z, Ne, Z, number of atoms (An) in the
formula, number of elements (En) and with Tc. We study some superconductor
families and use the discovered correlations to predict similar and novel
superconductors and also estimate their Tcs. Thus the material specific
equations derived in this paper, the material specific characterization dataset
(MSCD) system developed here and the discovered correlation between Tc and
Fw/Z, En and An, provide the building blocks for the analysis, design and
search of potential novel high temperature superconductors with specific
estimated Tcs.
| no_new_dataset | 0.84966 |
1110.1328 | Venu Satuluri | Venu Satuluri and Srinivasan Parthasarathy | Bayesian Locality Sensitive Hashing for Fast Similarity Search | 13 pages, 5 Tables, 21 figures. Added acknowledgments in v3. A
slightly shorter version of this paper without the appendix has been
published in the PVLDB journal, 5(5):430-441, 2012.
http://vldb.org/pvldb/vol5/p430_venusatuluri_vldb2012.pdf | PVLDB 5(5):430-441, 2012 | null | null | cs.DB cs.AI cs.DS cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Given a collection of objects and an associated similarity measure, the
all-pairs similarity search problem asks us to find all pairs of objects with
similarity greater than a certain user-specified threshold. Locality-sensitive
hashing (LSH) based methods have become a very popular approach for this
problem. However, most such methods only use LSH for the first phase of
similarity search - i.e. efficient indexing for candidate generation. In this
paper, we present BayesLSH, a principled Bayesian algorithm for the subsequent
phase of similarity search - performing candidate pruning and similarity
estimation using LSH. A simpler variant, BayesLSH-Lite, which calculates
similarities exactly, is also presented. BayesLSH is able to quickly prune away
a large majority of the false positive candidate pairs, leading to significant
speedups over baseline approaches. For BayesLSH, we also provide probabilistic
guarantees on the quality of the output, both in terms of accuracy and recall.
Finally, the quality of BayesLSH's output can be easily tuned and does not
require any manual setting of the number of hashes to use for similarity
estimation, unlike standard approaches. For two state-of-the-art candidate
generation algorithms, AllPairs and LSH, BayesLSH enables significant speedups,
typically in the range 2x-20x for a wide variety of datasets.
| [
{
"version": "v1",
"created": "Thu, 6 Oct 2011 17:13:48 GMT"
},
{
"version": "v2",
"created": "Sun, 11 Dec 2011 17:46:46 GMT"
},
{
"version": "v3",
"created": "Wed, 28 Mar 2012 19:34:39 GMT"
}
] | 2012-03-29T00:00:00 | [
[
"Satuluri",
"Venu",
""
],
[
"Parthasarathy",
"Srinivasan",
""
]
] | TITLE: Bayesian Locality Sensitive Hashing for Fast Similarity Search
ABSTRACT: Given a collection of objects and an associated similarity measure, the
all-pairs similarity search problem asks us to find all pairs of objects with
similarity greater than a certain user-specified threshold. Locality-sensitive
hashing (LSH) based methods have become a very popular approach for this
problem. However, most such methods only use LSH for the first phase of
similarity search - i.e. efficient indexing for candidate generation. In this
paper, we present BayesLSH, a principled Bayesian algorithm for the subsequent
phase of similarity search - performing candidate pruning and similarity
estimation using LSH. A simpler variant, BayesLSH-Lite, which calculates
similarities exactly, is also presented. BayesLSH is able to quickly prune away
a large majority of the false positive candidate pairs, leading to significant
speedups over baseline approaches. For BayesLSH, we also provide probabilistic
guarantees on the quality of the output, both in terms of accuracy and recall.
Finally, the quality of BayesLSH's output can be easily tuned and does not
require any manual setting of the number of hashes to use for similarity
estimation, unlike standard approaches. For two state-of-the-art candidate
generation algorithms, AllPairs and LSH, BayesLSH enables significant speedups,
typically in the range 2x-20x for a wide variety of datasets.
| no_new_dataset | 0.947332 |
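
For context on the candidate-generation phase discussed above, the sketch below implements plain random-hyperplane (cosine) LSH with banding; it only illustrates how signature collisions yield candidate pairs and does not include BayesLSH's Bayesian pruning or its accuracy and recall guarantees.

```python
import numpy as np
from collections import defaultdict
from itertools import combinations

rng = np.random.default_rng(0)

def signatures(X, n_bits=16):
    """Random-hyperplane signatures: bit i is the sign of the projection on hyperplane i."""
    planes = rng.standard_normal((X.shape[1], n_bits))
    return (X @ planes > 0).astype(np.uint8)

def candidate_pairs(X, n_bits=16, bands=4):
    """Items whose signature agrees on an entire band fall into the same bucket."""
    sig = signatures(X, n_bits)
    rows = n_bits // bands
    pairs = set()
    for b in range(bands):
        buckets = defaultdict(list)
        for i, s in enumerate(sig[:, b * rows:(b + 1) * rows]):
            buckets[s.tobytes()].append(i)
        for bucket in buckets.values():
            pairs.update(combinations(bucket, 2))
    return pairs

X = rng.standard_normal((100, 50))
X[1] = X[0] + 0.01 * rng.standard_normal(50)   # a near-duplicate pair that should collide
print((0, 1) in candidate_pairs(X))
```
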
1203.5474 | Yanhua Li | Yanhua Li, Zhi-Li Zhang, Jie Bao | Mutual or Unrequited Love: Identifying Stable Clusters in Social
Networks with Uni- and Bi-directional Links | 10pages. A short version appears in 9th Workshop on Algorithms and
Models for the Web Graph (WAW 2012) | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many social networks, e.g., Slashdot and Twitter, can be represented as
directed graphs (digraphs) with two types of links between entities: mutual
(bi-directional) and one-way (uni-directional) connections. Social science
theories reveal that mutual connections are more stable than one-way
connections, and one-way connections exhibit various tendencies to become
mutual connections. It is therefore important to take such tendencies into
account when performing clustering of social networks with both mutual and
one-way connections.
In this paper, we utilize the dyadic methods to analyze social networks, and
develop a generalized mutuality tendency theory to capture the tendencies of
those node pairs which tend to establish mutual connections more frequently
than those occur by chance. Using these results, we develop a
mutuality-tendency-aware spectral clustering algorithm to identify more stable
clusters by maximizing the within-cluster mutuality tendency and minimizing the
cross-cluster mutuality tendency. Extensive simulation results on synthetic
datasets as well as real online social network datasets such as Slashdot,
demonstrate that our proposed mutuality-tendency-aware spectral clustering
algorithm extracts more stable social community structures than traditional
spectral clustering methods.
| [
{
"version": "v1",
"created": "Sun, 25 Mar 2012 07:22:14 GMT"
}
] | 2012-03-27T00:00:00 | [
[
"Li",
"Yanhua",
""
],
[
"Zhang",
"Zhi-Li",
""
],
[
"Bao",
"Jie",
""
]
] | TITLE: Mutual or Unrequited Love: Identifying Stable Clusters in Social
Networks with Uni- and Bi-directional Links
ABSTRACT: Many social networks, e.g., Slashdot and Twitter, can be represented as
directed graphs (digraphs) with two types of links between entities: mutual
(bi-directional) and one-way (uni-directional) connections. Social science
theories reveal that mutual connections are more stable than one-way
connections, and one-way connections exhibit various tendencies to become
mutual connections. It is therefore important to take such tendencies into
account when performing clustering of social networks with both mutual and
one-way connections.
In this paper, we utilize the dyadic methods to analyze social networks, and
develop a generalized mutuality tendency theory to capture the tendencies of
those node pairs which tend to establish mutual connections more frequently
than those occur by chance. Using these results, we develop a
mutuality-tendency-aware spectral clustering algorithm to identify more stable
clusters by maximizing the within-cluster mutuality tendency and minimizing the
cross-cluster mutuality tendency. Extensive simulation results on synthetic
datasets as well as real online social network datasets such as Slashdot,
demonstrate that our proposed mutuality-tendency-aware spectral clustering
algorithm extracts more stable social community structures than traditional
spectral clustering methods.
| no_new_dataset | 0.951729 |
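
As a baseline for comparison with the mutuality-tendency-aware method above, the sketch below measures the share of mutual links in a toy digraph with NetworkX and runs ordinary spectral clustering on the symmetrized adjacency; the tendency-based edge weighting proposed in the paper is not reproduced.

```python
import networkx as nx
import numpy as np
from sklearn.cluster import SpectralClustering

# Toy digraph with mutual (both directions) and one-way links.
D = nx.DiGraph([("a", "b"), ("b", "a"), ("a", "c"), ("c", "d"), ("d", "c"), ("b", "d")])

print("overall reciprocity:", nx.reciprocity(D))   # fraction of links that are mutual

# Baseline: symmetrize and apply ordinary spectral clustering on the affinity matrix.
A = nx.to_numpy_array(D)
affinity = np.maximum(A, A.T)
labels = SpectralClustering(n_clusters=2, affinity="precomputed", random_state=0).fit_predict(affinity)
print(dict(zip(D.nodes(), labels)))
```
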
1111.0680 | Daniele Marinazzo | Daniele Marinazzo, Mario Pellicoro, Sebastiano Stramaglia | Causal information approach to partial conditioning in multivariate data
sets | accepted for publication in Computational and Mathematical Methods in
Medicine, special issue on "Methodological Advances in Brain Connectivity" | null | null | null | physics.data-an cond-mat.dis-nn | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | When evaluating causal influence from one time series to another in a
multivariate dataset it is necessary to take into account the conditioning
effect of the other variables. In the presence of many variables, and possibly
of a reduced number of samples, full conditioning can lead to computational and
numerical problems. In this paper we address the problem of partial
conditioning to a limited subset of variables, in the framework of information
theory. The proposed approach is tested on simulated datasets and on an example
of intracranial EEG recording from an epileptic subject. We show that, in many
instances, conditioning on a small number of variables, chosen as the most
informative ones for the driver node, leads to results very close to those
obtained with a fully multivariate analysis, and even better in the presence of
a small number of samples. This is particularly relevant when the pattern of
causalities is sparse.
| [
{
"version": "v1",
"created": "Wed, 2 Nov 2011 22:17:43 GMT"
},
{
"version": "v2",
"created": "Fri, 23 Mar 2012 16:11:35 GMT"
}
] | 2012-03-26T00:00:00 | [
[
"Marinazzo",
"Daniele",
""
],
[
"Pellicoro",
"Mario",
""
],
[
"Stramaglia",
"Sebastiano",
""
]
] | TITLE: Causal information approach to partial conditioning in multivariate data
sets
ABSTRACT: When evaluating causal influence from one time series to another in a
multivariate dataset it is necessary to take into account the conditioning
effect of the other variables. In the presence of many variables, and possibly
of a reduced number of samples, full conditioning can lead to computational and
numerical problems. In this paper we address the problem of partial
conditioning to a limited subset of variables, in the framework of information
theory. The proposed approach is tested on simulated datasets and on an example
of intracranial EEG recording from an epileptic subject. We show that, in many
instances, conditioning on a small number of variables, chosen as the most
informative ones for the driver node, leads to results very close to those
obtained with a fully multivariate analysis, and even better in the presence of
a small number of samples. This is particularly relevant when the pattern of
causalities is sparse.
| no_new_dataset | 0.953101 |
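
The abstract above works in an information-theoretic framework; as a simpler linear stand-in that still shows what "conditioning on a chosen subset of variables" means, the sketch below computes a Granger-style influence measure from the residual variances of two least-squares fits on synthetic series.

```python
import numpy as np

rng = np.random.default_rng(1)

def residual_var(target, predictors):
    """Variance of the residual of an ordinary least-squares fit."""
    X = np.column_stack(predictors + [np.ones(len(target))])
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    return np.var(target - X @ beta)

def conditional_gc(y, x, Z):
    """Granger-style influence x -> y at lag 1, conditioning only on the series in Z."""
    target = y[1:]
    base = [y[:-1]] + [z[:-1] for z in Z]
    reduced = residual_var(target, base)
    full = residual_var(target, base + [x[:-1]])
    return np.log(reduced / full)          # > 0 when x's past helps predict y

# Toy system: x drives y with one step of delay; w is an irrelevant extra series.
T = 500
x = rng.standard_normal(T)
w = rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.standard_normal()

print(conditional_gc(y, x, Z=[w]))   # clearly positive
print(conditional_gc(x, y, Z=[w]))   # close to zero
```
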
1203.5124 | Liang Zhang | Rajiv Khanna, Liang Zhang, Deepak Agarwal, Beechung Chen | Parallel Matrix Factorization for Binary Response | null | null | null | null | cs.LG stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Predicting user affinity to items is an important problem in applications
like content optimization, computational advertising, and many more. While
bilinear random effect models (matrix factorization) provide state-of-the-art
performance when minimizing RMSE through a Gaussian response model on explicit
ratings data, applying it to imbalanced binary response data presents
additional challenges that we carefully study in this paper. Data in many
applications usually consist of users' implicit response that are often binary
-- clicking an item or not; the goal is to predict click rates, which is often
combined with other measures to calculate utilities to rank items at runtime of
the recommender systems. Because of the implicit nature, such data are usually
much larger than explicit rating data and often have an imbalanced distribution
with a small fraction of click events, making accurate click rate prediction
difficult. In this paper, we address two problems. First, we show previous
techniques to estimate bilinear random effect models with binary data are less
accurate compared to our new approach based on adaptive rejection sampling,
especially for imbalanced response. Second, we develop a parallel bilinear
random effect model fitting framework using Map-Reduce paradigm that scales to
massive datasets. Our parallel algorithm is based on a "divide and conquer"
strategy coupled with an ensemble approach. Through experiments on the
benchmark MovieLens data, a small Yahoo! Front Page data set, and a large
Yahoo! Front Page data set that contains 8M users and 1B binary observations,
we show that careful handling of binary response as well as identifiability
issues are needed to achieve good performance for click rate prediction, and
that the proposed adaptive rejection sampler and the partitioning as well as
ensemble techniques significantly improve model performance.
| [
{
"version": "v1",
"created": "Thu, 22 Mar 2012 20:54:53 GMT"
}
] | 2012-03-26T00:00:00 | [
[
"Khanna",
"Rajiv",
""
],
[
"Zhang",
"Liang",
""
],
[
"Agarwal",
"Deepak",
""
],
[
"Chen",
"Beechung",
""
]
] | TITLE: Parallel Matrix Factorization for Binary Response
ABSTRACT: Predicting user affinity to items is an important problem in applications
like content optimization, computational advertising, and many more. While
bilinear random effect models (matrix factorization) provide state-of-the-art
performance when minimizing RMSE through a Gaussian response model on explicit
ratings data, applying it to imbalanced binary response data presents
additional challenges that we carefully study in this paper. Data in many
applications usually consist of users' implicit response that are often binary
-- clicking an item or not; the goal is to predict click rates, which is often
combined with other measures to calculate utilities to rank items at runtime of
the recommender systems. Because of the implicit nature, such data are usually
much larger than explicit rating data and often have an imbalanced distribution
with a small fraction of click events, making accurate click rate prediction
difficult. In this paper, we address two problems. First, we show previous
techniques to estimate bilinear random effect models with binary data are less
accurate compared to our new approach based on adaptive rejection sampling,
especially for imbalanced response. Second, we develop a parallel bilinear
random effect model fitting framework using Map-Reduce paradigm that scales to
massive datasets. Our parallel algorithm is based on a "divide and conquer"
strategy coupled with an ensemble approach. Through experiments on the
benchmark MovieLens data, a small Yahoo! Front Page data set, and a large
Yahoo! Front Page data set that contains 8M users and 1B binary observations,
we show that careful handling of binary response as well as identifiability
issues are needed to achieve good performance for click rate prediction, and
that the proposed adaptive rejection sampler and the partitioning as well as
ensemble techniques significantly improve model performance.
| no_new_dataset | 0.948537 |
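The preceding record describes fitting bilinear random effect (matrix factorization) models to imbalanced binary click data with a divide-and-conquer, ensemble-style parallel fit. The sketch below is only a rough illustration of that idea: a plain SGD logistic matrix factorization fit per data partition, with click probabilities averaged across partitions at prediction time. The paper's adaptive rejection sampling, identifiability handling, and Map-Reduce implementation are not reproduced, and every function name and hyperparameter here is an assumption made for illustration.

```python
import numpy as np

def fit_logistic_mf(rows, cols, clicks, n_users, n_items, k=8,
                    lr=0.05, reg=0.01, epochs=20, seed=0):
    """Plain SGD logistic matrix factorization on one data partition (illustrative only)."""
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((n_users, k))
    V = 0.1 * rng.standard_normal((n_items, k))
    for _ in range(epochs):
        for i, j, y in zip(rows, cols, clicks):
            p = 1.0 / (1.0 + np.exp(-U[i] @ V[j]))   # predicted click probability
            g = p - y                                # gradient of the logistic loss w.r.t. the score
            U[i], V[j] = U[i] - lr * (g * V[j] + reg * U[i]), V[j] - lr * (g * U[i] + reg * V[j])
    return U, V

def ensemble_click_rate(models, i, j):
    """'Divide and conquer': average click probabilities from models fit on disjoint partitions."""
    probs = [1.0 / (1.0 + np.exp(-U[i] @ V[j])) for U, V in models]
    return float(np.mean(probs))
```

A real run would partition (user, item, click) triples, call `fit_logistic_mf` on each partition, and score new pairs with `ensemble_click_rate`.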
1203.5262 | Youssef Bassil | Youssef Bassil, Paul Semaan | ASR Context-Sensitive Error Correction Based on Microsoft N-Gram Dataset | LACSC - Lebanese Association for Computational Sciences -
http://www.lacsc.org | Journal of Computing, Vol.4, No.1, January 2012 | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | At the present time, computers are employed to solve complex tasks and
problems ranging from simple calculations to intensive digital image processing
and intricate algorithmic optimization problems to computationally-demanding
weather forecasting problems. ASR short for Automatic Speech Recognition is yet
another type of computational problem whose purpose is to recognize human
spoken speech and convert it into text that can be processed by a computer.
Although ASR has many versatile and pervasive real-world applications, it is
still relatively erroneous and not perfectly solved as it is prone to produce
spelling errors in the recognized text, especially if the ASR system is
operating in a noisy environment, its vocabulary size is limited, and its input
speech is of bad or low quality. This paper proposes a post-editing ASR error
correction method based on the Microsoft N-Gram dataset for detecting and correcting
spelling errors generated by ASR systems. The proposed method comprises an
error detection algorithm for detecting word errors; a candidate corrections
generation algorithm for generating correction suggestions for the detected
word errors; and a context-sensitive error correction algorithm for selecting
the best candidate for correction. The virtue of using the Microsoft N-Gram
dataset is that it contains real-world data and word sequences extracted from
the web which can mimic a comprehensive dictionary of words having a large and
all-inclusive vocabulary. Experiments conducted on numerous speeches, performed
by different speakers, showed a remarkable reduction in ASR errors. Future
research can improve upon the proposed algorithm so much so that it can be
parallelized to take advantage of multiprocessor and distributed systems.
| [
{
"version": "v1",
"created": "Fri, 23 Mar 2012 14:51:05 GMT"
}
] | 2012-03-26T00:00:00 | [
[
"Bassil",
"Youssef",
""
],
[
"Semaan",
"Paul",
""
]
] | TITLE: ASR Context-Sensitive Error Correction Based on Microsoft N-Gram Dataset
ABSTRACT: At the present time, computers are employed to solve complex tasks and
problems ranging from simple calculations to intensive digital image processing
and intricate algorithmic optimization problems to computationally-demanding
weather forecasting problems. ASR short for Automatic Speech Recognition is yet
another type of computational problem whose purpose is to recognize human
spoken speech and convert it into text that can be processed by a computer.
Although ASR has many versatile and pervasive real-world applications, it is
still relatively erroneous and not perfectly solved as it is prone to produce
spelling errors in the recognized text, especially if the ASR system is
operating in a noisy environment, its vocabulary size is limited, and its input
speech is of bad or low quality. This paper proposes a post-editing ASR error
correction method based on the Microsoft N-Gram dataset for detecting and correcting
spelling errors generated by ASR systems. The proposed method comprises an
error detection algorithm for detecting word errors; a candidate corrections
generation algorithm for generating correction suggestions for the detected
word errors; and a context-sensitive error correction algorithm for selecting
the best candidate for correction. The virtue of using the Microsoft N-Gram
dataset is that it contains real-world data and word sequences extracted from
the web which can mimic a comprehensive dictionary of words having a large and
all-inclusive vocabulary. Experiments conducted on numerous speeches, performed
by different speakers, showed a remarkable reduction in ASR errors. Future
research can improve upon the proposed algorithm so much so that it can be
parallelized to take advantage of multiprocessor and distributed systems.
| no_new_dataset | 0.846451 |
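Record 1203.5262 above describes three stages: error detection, candidate-correction generation, and context-sensitive selection against web n-gram statistics. The sketch below is a minimal stand-in under stated assumptions: `vocabulary` is any word set, `ngram_logprob` is a hypothetical callable standing in for a lookup against a web n-gram service (the actual Microsoft Web N-Gram API is not modeled), and only edit-distance-1 candidates are generated.

```python
import string

def edits1(word):
    """All strings at edit distance one from `word` (deletes, transposes, replaces, inserts)."""
    letters = string.ascii_lowercase
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [l + r[1:] for l, r in splits if r]
    transposes = [l + r[1] + r[0] + r[2:] for l, r in splits if len(r) > 1]
    replaces = [l + c + r[1:] for l, r in splits if r for c in letters]
    inserts = [l + c + r for l, r in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

def correct_sentence(words, vocabulary, ngram_logprob, context=2):
    """Flag out-of-vocabulary words and keep the candidate with the best n-gram context score."""
    corrected = list(words)
    for idx, word in enumerate(words):
        if word in vocabulary:
            continue                                    # detection: only OOV words are treated as errors
        candidates = [c for c in edits1(word) if c in vocabulary] or [word]
        left_context = corrected[max(0, idx - context):idx]
        # Context-sensitive selection: score each candidate together with its left context.
        corrected[idx] = max(candidates, key=lambda c: ngram_logprob(left_context + [c]))
    return corrected
```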
1203.4135 | Eric Seidel | Eric L. Seidel | Metadata Management in Scientific Computing | 8 pages, 5 figures | null | null | null | cs.DL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Complex scientific codes and the datasets they generate are in need of a
sophisticated categorization environment that allows the community to store,
search, and enhance metadata in an open, dynamic system. Currently, data is
often presented in a read-only format, distilled and curated by a select group
of researchers. We envision a more open and dynamic system, where authors can
publish their data in a writeable format, allowing users to annotate the
datasets with their own comments and data. This would enable the scientific
community to collaborate on a higher level than before, where researchers could
for example annotate a published dataset with their citations.
Such a system would require a complete set of permissions to ensure that any
individual's data cannot be altered by others unless they specifically allow
it. For this reason datasets and codes are generally presented read-only, to
protect the author's data; however, this also prevents the type of social
revolutions that the private sector has seen with Facebook and Twitter.
In this paper, we present an alternative method of publishing codes and
datasets, based on Fluidinfo, which is an openly writeable and social metadata
engine. We will use the specific example of the Einstein Toolkit, a shared
scientific code built using the Cactus Framework, to illustrate how the code's
metadata may be published in writeable form via Fluidinfo.
| [
{
"version": "v1",
"created": "Mon, 19 Mar 2012 15:35:36 GMT"
}
] | 2012-03-20T00:00:00 | [
[
"Seidel",
"Eric L.",
""
]
] | TITLE: Metadata Management in Scientific Computing
ABSTRACT: Complex scientific codes and the datasets they generate are in need of a
sophisticated categorization environment that allows the community to store,
search, and enhance metadata in an open, dynamic system. Currently, data is
often presented in a read-only format, distilled and curated by a select group
of researchers. We envision a more open and dynamic system, where authors can
publish their data in a writeable format, allowing users to annotate the
datasets with their own comments and data. This would enable the scientific
community to collaborate on a higher level than before, where researchers could
for example annotate a published dataset with their citations.
Such a system would require a complete set of permissions to ensure that any
individual's data cannot be altered by others unless they specifically allow
it. For this reason datasets and codes are generally presented read-only, to
protect the author's data; however, this also prevents the type of social
revolutions that the private sector has seen with Facebook and Twitter.
In this paper, we present an alternative method of publishing codes and
datasets, based on Fluidinfo, which is an openly writeable and social metadata
engine. We will use the specific example of the Einstein Toolkit, a shared
scientific code built using the Cactus Framework, to illustrate how the code's
metadata may be published in writeable form via Fluidinfo.
| no_new_dataset | 0.94428 |
1203.3463 | Amr Ahmed | Amr Ahmed, Eric P. Xing | Timeline: A Dynamic Hierarchical Dirichlet Process Model for Recovering
Birth/Death and Evolution of Topics in Text Stream | Appears in Proceedings of the Twenty-Sixth Conference on Uncertainty
in Artificial Intelligence (UAI2010) | null | null | UAI-P-2010-PG-20-29 | cs.IR cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Topic models have proven to be a useful tool for discovering latent
structures in document collections. However, most document collections often
come as temporal streams and thus several aspects of the latent structure such
as the number of topics, the topics' distribution and popularity are
time-evolving. Several models exist that model the evolution of some but not
all of the above aspects. In this paper we introduce infinite dynamic topic
models, iDTM, that can accommodate the evolution of all the aforementioned
aspects. Our model assumes that documents are organized into epochs, where the
documents within each epoch are exchangeable but the order between the
documents is maintained across epochs. iDTM allows for an unbounded number of
topics: topics can die or be born at any epoch, and the representation of each
topic can evolve according to a Markovian dynamics. We use iDTM to analyze the
birth and evolution of topics in the NIPS community and evaluated the efficacy
of our model on both simulated and real datasets with favorable outcome.
| [
{
"version": "v1",
"created": "Thu, 15 Mar 2012 11:17:56 GMT"
}
] | 2012-03-19T00:00:00 | [
[
"Ahmed",
"Amr",
""
],
[
"Xing",
"Eric P.",
""
]
] | TITLE: Timeline: A Dynamic Hierarchical Dirichlet Process Model for Recovering
Birth/Death and Evolution of Topics in Text Stream
ABSTRACT: Topic models have proven to be a useful tool for discovering latent
structures in document collections. However, most document collections often
come as temporal streams and thus several aspects of the latent structure such
as the number of topics, the topics' distribution and popularity are
time-evolving. Several models exist that model the evolution of some but not
all of the above aspects. In this paper we introduce infinite dynamic topic
models, iDTM, that can accommodate the evolution of all the aforementioned
aspects. Our model assumes that documents are organized into epochs, where the
documents within each epoch are exchangeable but the order between the
documents is maintained across epochs. iDTM allows for an unbounded number of
topics: topics can die or be born at any epoch, and the representation of each
topic can evolve according to a Markovian dynamics. We use iDTM to analyze the
birth and evolution of topics in the NIPS community and evaluated the efficacy
of our model on both simulated and real datasets with favorable outcome.
| no_new_dataset | 0.950915 |
1203.3483 | Mithun Das Gupta | Mithun Das Gupta, Thomas S. Huang | Regularized Maximum Likelihood for Intrinsic Dimension Estimation | Appears in Proceedings of the Twenty-Sixth Conference on Uncertainty
in Artificial Intelligence (UAI2010) | null | null | UAI-P-2010-PG-220-227 | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a new method for estimating the intrinsic dimension of a dataset
by applying the principle of regularized maximum likelihood to the distances
between close neighbors. We propose a regularization scheme which is motivated
by divergence minimization principles. We derive the estimator by a Poisson
process approximation, argue about its convergence properties and apply it to a
number of simulated and real datasets. We also show it has the best overall
performance compared with two other intrinsic dimension estimators.
| [
{
"version": "v1",
"created": "Thu, 15 Mar 2012 11:17:56 GMT"
}
] | 2012-03-19T00:00:00 | [
[
"Gupta",
"Mithun Das",
""
],
[
"Huang",
"Thomas S.",
""
]
] | TITLE: Regularized Maximum Likelihood for Intrinsic Dimension Estimation
ABSTRACT: We propose a new method for estimating the intrinsic dimension of a dataset
by applying the principle of regularized maximum likelihood to the distances
between close neighbors. We propose a regularization scheme which is motivated
by divergence minimization principles. We derive the estimator by a Poisson
process approximation, argue about its convergence properties and apply it to a
number of simulated and real datasets. We also show it has the best overall
performance compared with two other intrinsic dimension estimators.
| no_new_dataset | 0.946941 |
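Record 1203.3483 above applies regularized maximum likelihood to nearest-neighbor distances. For orientation only, here is the unregularized Levina-Bickel-style MLE baseline that such estimators build on; the paper's divergence-motivated regularization and Poisson-process derivation are not reproduced, the neighborhood size `k` is an arbitrary choice, and the points are assumed distinct.

```python
import numpy as np

def mle_intrinsic_dimension(X, k=10):
    """Average of per-point MLE dimension estimates from the k nearest-neighbor distances."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)   # pairwise distances, shape (n, n)
    D_sorted = np.sort(D, axis=1)[:, 1:k + 1]                    # drop self-distance, keep T_1..T_k
    logs = np.log(D_sorted[:, -1:] / D_sorted[:, :-1])           # log(T_k / T_j) for j = 1..k-1
    m_hat = (k - 1) / logs.sum(axis=1)                           # per-point estimates
    return float(m_hat.mean())

# Example: points on a 2-D plane embedded in 5-D should give an estimate near 2.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 2)) @ rng.standard_normal((2, 5))
print(mle_intrinsic_dimension(X, k=10))
```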
1203.3486 | Berk Kapicioglu | Berk Kapicioglu, Robert E. Schapire, Martin Wikelski, Tamara Broderick | Combining Spatial and Telemetric Features for Learning Animal Movement
Models | Appears in Proceedings of the Twenty-Sixth Conference on Uncertainty
in Artificial Intelligence (UAI2010) | null | null | UAI-P-2010-PG-260-267 | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a new graphical model for tracking radio-tagged animals and
learning their movement patterns. The model provides a principled way to
combine radio telemetry data with an arbitrary set of user-defined spatial
features. We describe an efficient stochastic gradient algorithm for fitting
model parameters to data and demonstrate its effectiveness via asymptotic
analysis and synthetic experiments. We also apply our model to real datasets,
and show that it outperforms the most popular radio telemetry software package
used in ecology. We conclude that integration of different data sources under a
single statistical framework, coupled with appropriate parameter and state
estimation procedures, produces both accurate location estimates and an
interpretable statistical model of animal movement.
| [
{
"version": "v1",
"created": "Thu, 15 Mar 2012 11:17:56 GMT"
}
] | 2012-03-19T00:00:00 | [
[
"Kapicioglu",
"Berk",
""
],
[
"Schapire",
"Robert E.",
""
],
[
"Wikelski",
"Martin",
""
],
[
"Broderick",
"Tamara",
""
]
] | TITLE: Combining Spatial and Telemetric Features for Learning Animal Movement
Models
ABSTRACT: We introduce a new graphical model for tracking radio-tagged animals and
learning their movement patterns. The model provides a principled way to
combine radio telemetry data with an arbitrary set of user-defined spatial
features. We describe an efficient stochastic gradient algorithm for fitting
model parameters to data and demonstrate its effectiveness via asymptotic
analysis and synthetic experiments. We also apply our model to real datasets,
and show that it outperforms the most popular radio telemetry software package
used in ecology. We conclude that integration of different data sources under a
single statistical framework, coupled with appropriate parameter and state
estimation procedures, produces both accurate location estimates and an
interpretable statistical model of animal movement.
| no_new_dataset | 0.950778 |
1203.3495 | Qi Mao | Qi Mao, Ivor W. Tsang | Parameter-Free Spectral Kernel Learning | Appears in Proceedings of the Twenty-Sixth Conference on Uncertainty
in Artificial Intelligence (UAI2010) | null | null | UAI-P-2010-PG-350-357 | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Due to the growing ubiquity of unlabeled data, learning with unlabeled data
is attracting increasing attention in machine learning. In this paper, we
propose a novel semi-supervised kernel learning method which can seamlessly
combine manifold structure of unlabeled data and Regularized Least-Squares
(RLS) to learn a new kernel. Interestingly, the new kernel matrix can be
obtained analytically with the use of spectral decomposition of graph Laplacian
matrix. Hence, the proposed algorithm does not require any numerical
optimization solvers. Moreover, by maximizing kernel target alignment on
labeled data, we can also learn model parameters automatically with a
closed-form solution. For a given graph Laplacian matrix, our proposed method
does not need to tune any model parameter including the tradeoff parameter in
RLS and the balance parameter for unlabeled data. Extensive experiments on ten
benchmark datasets show that our proposed two-stage parameter-free spectral
kernel learning algorithm can obtain comparable performance with fine-tuned
manifold regularization methods in the transductive setting, and outperform
multiple kernel learning in the supervised setting.
| [
{
"version": "v1",
"created": "Thu, 15 Mar 2012 11:17:56 GMT"
}
] | 2012-03-19T00:00:00 | [
[
"Mao",
"Qi",
""
],
[
"Tsang",
"Ivor W.",
""
]
] | TITLE: Parameter-Free Spectral Kernel Learning
ABSTRACT: Due to the growing ubiquity of unlabeled data, learning with unlabeled data
is attracting increasing attention in machine learning. In this paper, we
propose a novel semi-supervised kernel learning method which can seamlessly
combine manifold structure of unlabeled data and Regularized Least-Squares
(RLS) to learn a new kernel. Interestingly, the new kernel matrix can be
obtained analytically with the use of spectral decomposition of graph Laplacian
matrix. Hence, the proposed algorithm does not require any numerical
optimization solvers. Moreover, by maximizing kernel target alignment on
labeled data, we can also learn model parameters automatically with a
closed-form solution. For a given graph Laplacian matrix, our proposed method
does not need to tune any model parameter including the tradeoff parameter in
RLS and the balance parameter for unlabeled data. Extensive experiments on ten
benchmark datasets show that our proposed two-stage parameter-free spectral
kernel learning algorithm can obtain comparable performance with fine-tuned
manifold regularization methods in the transductive setting, and outperform
multiple kernel learning in the supervised setting.
| no_new_dataset | 0.946498 |
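Record 1203.3495 above sets model parameters by maximizing kernel target alignment on labeled data. The snippet below only illustrates the alignment criterion itself for labels in {-1, +1}; the graph-Laplacian spectral construction and the paper's closed-form solution are not reproduced.

```python
import numpy as np

def kernel_target_alignment(K, y):
    """Alignment between kernel matrix K and the ideal target kernel y y^T, for y in {-1, +1}^n."""
    Y = np.outer(y, y)
    return float(np.sum(K * Y) / (np.linalg.norm(K) * np.linalg.norm(Y)))

# Toy comparison: a perfectly class-aligned kernel versus an uninformative one.
y = np.array([1, 1, -1, -1])
K_aligned = np.outer(y, y).astype(float)
K_flat = np.ones((4, 4))
print(kernel_target_alignment(K_aligned, y), kernel_target_alignment(K_flat, y))  # 1.0 and 0.0
```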
1203.3496 | Marina Meila | Marina Meila, Harr Chen | Dirichlet Process Mixtures of Generalized Mallows Models | Appears in Proceedings of the Twenty-Sixth Conference on Uncertainty
in Artificial Intelligence (UAI2010) | null | null | UAI-P-2010-PG-358-367 | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a Dirichlet process mixture model over discrete incomplete
rankings and study two Gibbs sampling inference techniques for estimating
posterior clusterings. The first approach uses a slice sampling subcomponent
for estimating cluster parameters. The second approach marginalizes out several
cluster parameters by taking advantage of approximations to the conditional
posteriors. We empirically demonstrate (1) the effectiveness of this
approximation for improving convergence, (2) the benefits of the Dirichlet
process model over alternative clustering techniques for ranked data, and (3)
the applicability of the approach to exploring large real-world ranking
datasets.
| [
{
"version": "v1",
"created": "Thu, 15 Mar 2012 11:17:56 GMT"
}
] | 2012-03-19T00:00:00 | [
[
"Meila",
"Marina",
""
],
[
"Chen",
"Harr",
""
]
] | TITLE: Dirichlet Process Mixtures of Generalized Mallows Models
ABSTRACT: We present a Dirichlet process mixture model over discrete incomplete
rankings and study two Gibbs sampling inference techniques for estimating
posterior clusterings. The first approach uses a slice sampling subcomponent
for estimating cluster parameters. The second approach marginalizes out several
cluster parameters by taking advantage of approximations to the conditional
posteriors. We empirically demonstrate (1) the effectiveness of this
approximation for improving convergence, (2) the benefits of the Dirichlet
process model over alternative clustering techniques for ranked data, and (3)
the applicability of the approach to exploring large real-world ranking
datasets.
| no_new_dataset | 0.95418 |
1203.3507 | Yuan (Alan) Qi | Yuan (Alan) Qi, Ahmed H. Abdel-Gawad, Thomas P. Minka | Sparse-posterior Gaussian Processes for general likelihoods | Appears in Proceedings of the Twenty-Sixth Conference on Uncertainty
in Artificial Intelligence (UAI2010) | null | null | UAI-P-2010-PG-450-457 | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Gaussian processes (GPs) provide a probabilistic nonparametric representation
of functions in regression, classification, and other problems. Unfortunately,
exact learning with GPs is intractable for large datasets. A variety of
approximate GP methods have been proposed that essentially map the large
dataset into a small set of basis points. Among them, two state-of-the-art
methods are sparse pseudo-input Gaussian process (SPGP) (Snelson and
Ghahramani, 2006) and variable-sigma GP (VSGP) (Walder et al., 2008), which
generalizes SPGP and allows each basis point to have its own length scale.
However, VSGP was only derived for regression. In this paper, we propose a new
sparse GP framework that uses expectation propagation to directly approximate
general GP likelihoods using a sparse and smooth basis. It includes both SPGP
and VSGP for regression as special cases. Moreover, as an EP algorithm, it inherits
the ability to process data online. As a particular choice of approximating
family, we blur each basis point with a Gaussian distribution that has a full
covariance matrix representing the data distribution around that basis point;
as a result, we can summarize local data manifold information with a small set
of basis points. Our experiments demonstrate that this framework outperforms
previous GP classification methods on benchmark datasets in terms of minimizing
divergence to the non-sparse GP solution as well as lower misclassification
rate.
| [
{
"version": "v1",
"created": "Thu, 15 Mar 2012 11:17:56 GMT"
}
] | 2012-03-19T00:00:00 | [
[
"Yuan",
"",
"",
"Alan"
],
[
"Qi",
"",
""
],
[
"Abdel-Gawad",
"Ahmed H.",
""
],
[
"Minka",
"Thomas P.",
""
]
] | TITLE: Sparse-posterior Gaussian Processes for general likelihoods
ABSTRACT: Gaussian processes (GPs) provide a probabilistic nonparametric representation
of functions in regression, classification, and other problems. Unfortunately,
exact learning with GPs is intractable for large datasets. A variety of
approximate GP methods have been proposed that essentially map the large
dataset into a small set of basis points. Among them, two state-of-the-art
methods are sparse pseudo-input Gaussian process (SPGP) (Snelson and
Ghahramani, 2006) and variable-sigma GP (VSGP) (Walder et al., 2008), which
generalizes SPGP and allows each basis point to have its own length scale.
However, VSGP was only derived for regression. In this paper, we propose a new
sparse GP framework that uses expectation propagation to directly approximate
general GP likelihoods using a sparse and smooth basis. It includes both SPGP
and VSGP for regression as special cases. Moreover, as an EP algorithm, it inherits
the ability to process data online. As a particular choice of approximating
family, we blur each basis point with a Gaussian distribution that has a full
covariance matrix representing the data distribution around that basis point;
as a result, we can summarize local data manifold information with a small set
of basis points. Our experiments demonstrate that this framework outperforms
previous GP classification methods on benchmark datasets in terms of minimizing
divergence to the non-sparse GP solution as well as lower misclassification
rate.
| no_new_dataset | 0.945045 |
1203.3516 | Aleksandr Simma | Aleksandr Simma, Michael I. Jordan | Modeling Events with Cascades of Poisson Processes | Appears in Proceedings of the Twenty-Sixth Conference on Uncertainty
in Artificial Intelligence (UAI2010) | null | null | UAI-P-2010-PG-546-555 | cs.LG cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a probabilistic model of events in continuous time in which each
event triggers a Poisson process of successor events. The ensemble of observed
events is thereby modeled as a superposition of Poisson processes. Efficient
inference is feasible under this model with an EM algorithm. Moreover, the EM
algorithm can be implemented as a distributed algorithm, permitting the model
to be applied to very large datasets. We apply these techniques to the modeling
of Twitter messages and the revision history of Wikipedia.
| [
{
"version": "v1",
"created": "Thu, 15 Mar 2012 11:17:56 GMT"
}
] | 2012-03-19T00:00:00 | [
[
"Simma",
"Aleksandr",
""
],
[
"Jordan",
"Michael I.",
""
]
] | TITLE: Modeling Events with Cascades of Poisson Processes
ABSTRACT: We present a probabilistic model of events in continuous time in which each
event triggers a Poisson process of successor events. The ensemble of observed
events is thereby modeled as a superposition of Poisson processes. Efficient
inference is feasible under this model with an EM algorithm. Moreover, the EM
algorithm can be implemented as a distributed algorithm, permitting the model
to be applied to very large datasets. We apply these techniques to the modeling
of Twitter messages and the revision history of Wikipedia.
| no_new_dataset | 0.949949 |
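Record 1203.3516 above models an event stream as a superposition of Poisson processes in which every event triggers its own Poisson process of successors. The forward simulation below is a minimal sketch of that generative picture with an assumed exponential delay kernel and made-up parameters; the paper's (distributed) EM fitting of such models is not shown.

```python
import numpy as np

def simulate_cascade(base_rate=1.0, branching=0.5, delay_scale=1.0, T=100.0, seed=0):
    """Simulate immigrant events plus cascades of triggered successor events on [0, T]."""
    rng = np.random.default_rng(seed)
    n0 = rng.poisson(base_rate * T)                    # immigrant events: homogeneous Poisson process
    pending = list(rng.uniform(0.0, T, size=n0))
    events = []
    while pending:
        t = pending.pop()
        events.append(t)
        # Each event triggers Poisson(branching) successors with exponential delays.
        for delay in rng.exponential(delay_scale, size=rng.poisson(branching)):
            if t + delay < T:
                pending.append(t + delay)
    return np.sort(np.array(events))

print(len(simulate_cascade()))   # roughly base_rate * T / (1 - branching) events on average
```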
1203.3584 | Tarek El-Shishtawy Ahmed | Tarek El-Shishtawy and Fatma El-Ghannam | An Accurate Arabic Root-Based Lemmatizer for Information Retrieval
Purposes | 9 pages | IJCSI International Journal of Computer Science Issues, Vol. 9,
Issue 1, No 3, January 2012 ISSN (Online): 1694-0814 | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In spite of its robust syntax, semantic cohesion, and less ambiguity, lemma
level analysis and generation has not yet been a focus of the Arabic NLP literature.
In the current research, we propose the first non-statistical accurate Arabic
lemmatizer algorithm that is suitable for information retrieval (IR) systems.
The proposed lemmatizer makes use of different Arabic language knowledge
resources to generate accurate lemma form and its relevant features that
support IR purposes. As a POS tagger, the experimental results show that the
proposed algorithm achieves a maximum accuracy of 94.8%. For first-seen
documents, an accuracy of 89.15% is achieved, compared to 76.7% for the up-to-date
Stanford accurate Arabic model on the same dataset.
| [
{
"version": "v1",
"created": "Thu, 15 Mar 2012 22:49:20 GMT"
}
] | 2012-03-19T00:00:00 | [
[
"El-Shishtawy",
"Tarek",
""
],
[
"El-Ghannam",
"Fatma",
""
]
] | TITLE: An Accurate Arabic Root-Based Lemmatizer for Information Retrieval
Purposes
ABSTRACT: In spite of its robust syntax, semantic cohesion, and less ambiguity, lemma
level analysis and generation has not yet been a focus of the Arabic NLP literature.
In the current research, we propose the first non-statistical accurate Arabic
lemmatizer algorithm that is suitable for information retrieval (IR) systems.
The proposed lemmatizer makes use of different Arabic language knowledge
resources to generate accurate lemma form and its relevant features that
support IR purposes. As a POS tagger, the experimental results show that the
proposed algorithm achieves a maximum accuracy of 94.8%. For first-seen
documents, an accuracy of 89.15% is achieved, compared to 76.7% for the up-to-date
Stanford accurate Arabic model on the same dataset.
| no_new_dataset | 0.950227 |
1203.3092 | Riccardo Murri | S\'ebastien Moretti, Riccardo Murri, Sergio Maffioletti, Arnold
Kuzniar, Bris\'e\"is Castella, Nicolas Salamin, Marc Robinson-Rechavi, and
Heinz Stockinger | gcodeml: A Grid-enabled Tool for Detecting Positive Selection in
Biological Evolution | 10 pages, 4 figures. To appear in the HealthGrid 2012 conf | null | null | null | cs.DC cs.CE q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the important questions in biological evolution is to know if certain
changes along protein coding genes have contributed to the adaptation of
species. This problem is known to be biologically complex and computationally
very expensive. It, therefore, requires efficient Grid or cluster solutions to
overcome the computational challenge. We have developed a Grid-enabled tool
(gcodeml) that relies on the PAML (codeml) package to help analyse large
phylogenetic datasets on both Grids and computational clusters. Although we
report on results for gcodeml, our approach is applicable and customisable to
related problems in biology or other scientific domains.
| [
{
"version": "v1",
"created": "Wed, 14 Mar 2012 14:08:12 GMT"
}
] | 2012-03-15T00:00:00 | [
[
"Moretti",
"Sébastien",
""
],
[
"Murri",
"Riccardo",
""
],
[
"Maffioletti",
"Sergio",
""
],
[
"Kuzniar",
"Arnold",
""
],
[
"Castella",
"Briséïs",
""
],
[
"Salamin",
"Nicolas",
""
],
[
"Robinson-Rechavi",
"Marc",
""
],
[
"Stockinger",
"Heinz",
""
]
] | TITLE: gcodeml: A Grid-enabled Tool for Detecting Positive Selection in
Biological Evolution
ABSTRACT: One of the important questions in biological evolution is to know if certain
changes along protein coding genes have contributed to the adaptation of
species. This problem is known to be biologically complex and computationally
very expensive. It, therefore, requires efficient Grid or cluster solutions to
overcome the computational challenge. We have developed a Grid-enabled tool
(gcodeml) that relies on the PAML (codeml) package to help analyse large
phylogenetic datasets on both Grids and computational clusters. Although we
report on results for gcodeml, our approach is applicable and customisable to
related problems in biology or other scientific domains.
| no_new_dataset | 0.943086 |
1203.3170 | Shampa Sengupta | Shampa Sengupta and Asit Kr. Das | Single Reduct Generation Based on Relative Indiscernibility of Rough Set
Theory | 13 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In real world everything is an object which represents particular classes.
Every object can be fully described by its attributes. Any real-world dataset
contains a large number of attributes and objects. Classifiers give poor
performance when such huge datasets are given to them as input for proper
classification, so the most useful attributes, those that contribute the most
to the decision, need to be extracted from such datasets. In this paper, the
attribute set is reduced by generating reducts using the indiscernibility relation of
Rough Set Theory (RST). The method measures similarity among the attributes
using the relative indiscernibility relation and computes an attribute similarity set.
Then the set is minimized and an attribute similarity table is constructed, from
which the attribute similar to the maximum number of attributes is selected so that
the resultant minimum set of selected attributes (called a reduct) covers all
attributes of the attribute similarity table. The method has been applied to the
glass dataset collected from the UCI repository, and the classification accuracy
is calculated by various classifiers. The result shows the efficiency of the
proposed method.
| [
{
"version": "v1",
"created": "Wed, 14 Mar 2012 18:34:05 GMT"
}
] | 2012-03-15T00:00:00 | [
[
"Sengupta",
"Shampa",
""
],
[
"Das",
"Asit Kr.",
""
]
] | TITLE: Single Reduct Generation Based on Relative Indiscernibility of Rough Set
Theory
ABSTRACT: In real world everything is an object which represents particular classes.
Every object can be fully described by its attributes. Any real-world dataset
contains a large number of attributes and objects. Classifiers give poor
performance when such huge datasets are given to them as input for proper
classification, so the most useful attributes, those that contribute the most
to the decision, need to be extracted from such datasets. In this paper, the
attribute set is reduced by generating reducts using the indiscernibility relation of
Rough Set Theory (RST). The method measures similarity among the attributes
using the relative indiscernibility relation and computes an attribute similarity set.
Then the set is minimized and an attribute similarity table is constructed, from
which the attribute similar to the maximum number of attributes is selected so that
the resultant minimum set of selected attributes (called a reduct) covers all
attributes of the attribute similarity table. The method has been applied to the
glass dataset collected from the UCI repository, and the classification accuracy
is calculated by various classifiers. The result shows the efficiency of the
proposed method.
| no_new_dataset | 0.950869 |
1109.5235 | James Fowler | Nicholas A. Christakis, James H. Fowler | Social Contagion Theory: Examining Dynamic Social Networks and Human
Behavior | null | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Here, we review the research we have done on social contagion. We describe
the methods we have employed (and the assumptions they have entailed) in order
to examine several datasets with complementary strengths and weaknesses,
including the Framingham Heart Study, the National Longitudinal Study of
Adolescent Health, and other observational and experimental datasets that we
and others have collected. We describe the regularities that led us to propose
that human social networks may exhibit a "three degrees of influence" property,
and we review statistical approaches we have used to characterize
inter-personal influence with respect to phenomena as diverse as obesity,
smoking, cooperation, and happiness. We do not claim that this work is the
final word, but we do believe that it provides some novel, informative, and
stimulating evidence regarding social contagion in longitudinally followed
networks. Along with other scholars, we are working to develop new methods for
identifying causal effects using social network data, and we believe that this
area is ripe for statistical development as current methods have known and
often unavoidable limitations.
| [
{
"version": "v1",
"created": "Sat, 24 Sep 2011 06:19:43 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Mar 2012 14:03:00 GMT"
}
] | 2012-03-14T00:00:00 | [
[
"Christakis",
"Nicholas A.",
""
],
[
"Fowler",
"James H.",
""
]
] | TITLE: Social Contagion Theory: Examining Dynamic Social Networks and Human
Behavior
ABSTRACT: Here, we review the research we have done on social contagion. We describe
the methods we have employed (and the assumptions they have entailed) in order
to examine several datasets with complementary strengths and weaknesses,
including the Framingham Heart Study, the National Longitudinal Study of
Adolescent Health, and other observational and experimental datasets that we
and others have collected. We describe the regularities that led us to propose
that human social networks may exhibit a "three degrees of influence" property,
and we review statistical approaches we have used to characterize
inter-personal influence with respect to phenomena as diverse as obesity,
smoking, cooperation, and happiness. We do not claim that this work is the
final word, but we do believe that it provides some novel, informative, and
stimulating evidence regarding social contagion in longitudinally followed
networks. Along with other scholars, we are working to develop new methods for
identifying causal effects using social network data, and we believe that this
area is ripe for statistical development as current methods have known and
often unavoidable limitations.
| no_new_dataset | 0.946941 |
1201.2925 | Geetha Manjunath | Geetha Manjunatha, M Narasimha Murty, Dinkar Sitaram | Combining Heterogeneous Classifiers for Relational Databases | Withdrawn - as that was a trial upload only. Non public information | null | null | null | cs.LG cs.DB | http://creativecommons.org/licenses/by/3.0/ | Most enterprise data is distributed in multiple relational databases with
expert-designed schema. Using traditional single-table machine learning
techniques over such data not only incurs a computational penalty for converting
to a 'flat' form (mega-join), but also loses the human-specified semantic information
present in the relations. In this paper, we present a practical,
two-phase hierarchical meta-classification algorithm for relational databases
with a semantic divide and conquer approach. We propose a recursive, prediction
aggregation technique over heterogeneous classifiers applied on individual
database tables. The proposed algorithm was evaluated on three diverse
datasets, namely TPCH, PKDD and UCI benchmarks and showed considerable
reduction in classification time without any loss of prediction accuracy.
| [
{
"version": "v1",
"created": "Fri, 13 Jan 2012 19:54:27 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Mar 2012 20:23:24 GMT"
}
] | 2012-03-14T00:00:00 | [
[
"Manjunatha",
"Geetha",
""
],
[
"Murty",
"M Narasimha",
""
],
[
"Sitaram",
"Dinkar",
""
]
] | TITLE: Combining Heterogeneous Classifiers for Relational Databases
ABSTRACT: Most enterprise data is distributed in multiple relational databases with
expert-designed schema. Using traditional single-table machine learning
techniques over such data not only incurs a computational penalty for converting
to a 'flat' form (mega-join), but also loses the human-specified semantic information
present in the relations. In this paper, we present a practical,
two-phase hierarchical meta-classification algorithm for relational databases
with a semantic divide and conquer approach. We propose a recursive, prediction
aggregation technique over heterogeneous classifiers applied on individual
database tables. The proposed algorithm was evaluated on three diverse
datasets, namely TPCH, PKDD and UCI benchmarks and showed considerable
reduction in classification time without any loss of prediction accuracy.
| no_new_dataset | 0.94801 |
1203.2675 | Yaoyun Shi | Yaoyun Shi | Quantum Simpsons Paradox and High Order Bell-Tsirelson Inequalities | null | null | null | null | quant-ph cs.IT math-ph math.IT math.MP math.ST stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The well-known Simpson's Paradox, or Yule-Simpson Effect, in statistics is
often illustrated by the following thought experiment: A drug may be found in a
trial to increase the survival rate for both men and women, but decrease the
rate for all the subjects as a whole. This paradoxical reversal effect has been
found in numerous datasets across many disciplines, and is now included in most
introductory statistics textbooks. In the language of the drug trial, the
effect is impossible, however, if both treatment groups' survival rates are
higher than both control groups'. Here we show that for quantum probabilities,
such a reversal remains possible. In particular, a "quantum drug", so to speak,
could be life-saving for both men and women yet deadly for the whole
population. We further identify a simple inequality on conditional
probabilities that must hold classically but is violated by our quantum
scenarios, and completely characterize the maximum quantum violation. As
polynomial inequalities on entries of the density operator, our inequalities
are of degree 6.
| [
{
"version": "v1",
"created": "Mon, 12 Mar 2012 23:36:44 GMT"
}
] | 2012-03-14T00:00:00 | [
[
"Shi",
"Yaoyun",
""
]
] | TITLE: Quantum Simpsons Paradox and High Order Bell-Tsirelson Inequalities
ABSTRACT: The well-known Simpson's Paradox, or Yule-Simpson Effect, in statistics is
often illustrated by the following thought experiment: A drug may be found in a
trial to increase the survival rate for both men and women, but decrease the
rate for all the subjects as a whole. This paradoxical reversal effect has been
found in numerous datasets across many disciplines, and is now included in most
introductory statistics textbooks. In the language of the drug trial, the
effect is impossible, however, if both treatment groups' survival rates are
higher than both control groups'. Here we show that for quantum probabilities,
such a reversal remains possible. In particular, a "quantum drug", so to speak,
could be life-saving for both men and women yet deadly for the whole
population. We further identify a simple inequality on conditional
probabilities that must hold classically but is violated by our quantum
scenarios, and completely characterize the maximum quantum violation. As
polynomial inequalities on entries of the density operator, our inequalities
are of degree 6.
| no_new_dataset | 0.951006 |
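Record 1203.2675 above studies a quantum analogue of Simpson's paradox. For readers unfamiliar with the classical reversal it generalizes, the short check below reproduces it with the standard textbook kidney-stone counts (successes, totals); it says nothing about the paper's quantum inequalities, and the numbers are just the usual illustrative ones.

```python
# Treatment A wins inside each subgroup yet loses overall (classical Simpson reversal).
groups = {
    "small stones": {"A": (81, 87),  "B": (234, 270)},
    "large stones": {"A": (192, 263), "B": (55, 80)},
}

totals = {"A": [0, 0], "B": [0, 0]}
for name, arms in groups.items():
    for arm, (successes, n) in arms.items():
        totals[arm][0] += successes
        totals[arm][1] += n
    print(name, {arm: round(s / n, 3) for arm, (s, n) in arms.items()})

print("overall", {arm: round(s / n, 3) for arm, (s, n) in totals.items()})
# Subgroups: A beats B (0.931 vs 0.867 and 0.73 vs 0.688); overall: A loses (0.78 vs 0.826).
```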
1203.2839 | Jan Egger | Jan Egger, Tina Kapur, Thomas Dukatz, Malgorzata Kolodziej, Dzenan
Zukic, Bernd Freisleben, Christopher Nimsky | Square-Cut: A Segmentation Algorithm on the Basis of a Rectangle Shape | 13 pages, 17 figures, 2 tables, 3 equations, 42 references | Egger J, Kapur T, Dukatz T, Kolodziej M, Zukic D, et al. (2012)
Square-Cut: A Segmentation Algorithm on the Basis of a Rectangle Shape. PLoS
ONE 7(2): e31064 | 10.1371/journal.pone.0031064 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a rectangle-based segmentation algorithm that sets up a graph and
performs a graph cut to separate an object from the background. However,
graph-based algorithms distribute the graph's nodes uniformly and equidistantly
on the image. Then, a smoothness term is added to force the cut to prefer a
particular shape. This strategy does not allow the cut to prefer a certain
structure, especially when areas of the object are indistinguishable from the
background. We solve this problem by referring to a rectangle shape of the
object when sampling the graph nodes, i.e., the nodes are distributed
nonuniformly and non-equidistantly on the image. This strategy can be useful,
when areas of the object are indistinguishable from the background. For
evaluation, we focus on vertebrae images from Magnetic Resonance Imaging (MRI)
datasets to support the time-consuming manual slice-by-slice segmentation
performed by physicians. The ground truth of the vertebrae boundaries were
manually extracted by two clinical experts (neurological surgeons) with several
years of experience in spine surgery and afterwards compared with the automatic
segmentation results of the proposed scheme yielding an average Dice Similarity
Coefficient (DSC) of 90.97\pm62.2%.
| [
{
"version": "v1",
"created": "Tue, 13 Mar 2012 15:41:14 GMT"
}
] | 2012-03-14T00:00:00 | [
[
"Egger",
"Jan",
""
],
[
"Kapur",
"Tina",
""
],
[
"Dukatz",
"Thomas",
""
],
[
"Kolodziej",
"Malgorzata",
""
],
[
"Zukic",
"Dzenan",
""
],
[
"Freisleben",
"Bernd",
""
],
[
"Nimsky",
"Christopher",
""
]
] | TITLE: Square-Cut: A Segmentation Algorithm on the Basis of a Rectangle Shape
ABSTRACT: We present a rectangle-based segmentation algorithm that sets up a graph and
performs a graph cut to separate an object from the background. However,
graph-based algorithms distribute the graph's nodes uniformly and equidistantly
on the image. Then, a smoothness term is added to force the cut to prefer a
particular shape. This strategy does not allow the cut to prefer a certain
structure, especially when areas of the object are indistinguishable from the
background. We solve this problem by referring to a rectangle shape of the
object when sampling the graph nodes, i.e., the nodes are distributed
nonuniformly and non-equidistantly on the image. This strategy can be useful,
when areas of the object are indistinguishable from the background. For
evaluation, we focus on vertebrae images from Magnetic Resonance Imaging (MRI)
datasets to support the time-consuming manual slice-by-slice segmentation
performed by physicians. The ground truth of the vertebrae boundaries were
manually extracted by two clinical experts (neurological surgeons) with several
years of experience in spine surgery and afterwards compared with the automatic
segmentation results of the proposed scheme yielding an average Dice Similarity
Coefficient (DSC) of 90.97\pm62.2%.
| no_new_dataset | 0.953923 |
1203.2886 | Medha Atre | Medha Atre, Vineet Chaoji, Mohammed J. Zaki | BitPath -- Label Order Constrained Reachability Queries over Large
Graphs | null | null | null | RPI-CS 12-02 | cs.DB cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we focus on the following constrained reachability problem over
edge-labeled graphs like RDF -- "given source node x, destination node y, and a
sequence of edge labels (a, b, c, d), is there a path between the two nodes
such that the edge labels on the path satisfy a regular expression
"*a.*b.*c.*d.*". A "*" before "a" allows any other edge label to appear on the
path before edge "a". "a.*" forces at least one edge with label "a". ".*" after
"a" allows zero or more edge labels after "a" and before "b". Our query
processing algorithm uses simple divide-and-conquer and greedy pruning
procedures to limit the search space. However, our graph indexing technique --
based on "compressed bit-vectors" -- allows indexing large graphs which
otherwise would have been infeasible. We have evaluated our approach on graphs
with more than 22 million edges and 6 million nodes -- much larger compared to
the datasets used in the contemporary work on path queries.
| [
{
"version": "v1",
"created": "Tue, 13 Mar 2012 18:11:55 GMT"
}
] | 2012-03-14T00:00:00 | [
[
"Atre",
"Medha",
""
],
[
"Chaoji",
"Vineet",
""
],
[
"Zaki",
"Mohammed J.",
""
]
] | TITLE: BitPath -- Label Order Constrained Reachability Queries over Large
Graphs
ABSTRACT: In this paper we focus on the following constrained reachability problem over
edge-labeled graphs like RDF -- "given source node x, destination node y, and a
sequence of edge labels (a, b, c, d), is there a path between the two nodes
such that the edge labels on the path satisfy a regular expression
"*a.*b.*c.*d.*". A "*" before "a" allows any other edge label to appear on the
path before edge "a". "a.*" forces at least one edge with label "a". ".*" after
"a" allows zero or more edge labels after "a" and before "b". Our query
processing algorithm uses simple divide-and-conquer and greedy pruning
procedures to limit the search space. However, our graph indexing technique --
based on "compressed bit-vectors" -- allows indexing large graphs which
otherwise would have been infeasible. We have evaluated our approach on graphs
with more than 22 million edges and 6 million nodes -- much larger compared to
the datasets used in the contemporary work on path queries.
| no_new_dataset | 0.9455 |
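Record 1203.2886 above answers label-order-constrained reachability queries of the form ".*a.*b.*c.*d.*". Setting the paper's compressed bit-vector index aside entirely, a naive baseline is a BFS over (node, matched-prefix-length) states, sketched below; the adjacency-list format is an assumption made for illustration.

```python
from collections import deque

def label_order_reachable(adj, x, y, seq):
    """adj: {node: [(label, neighbor), ...]}. True iff some x->y path's labels contain seq in order."""
    start = (x, 0)
    seen = {start}
    queue = deque([start])
    while queue:
        node, matched = queue.popleft()
        if node == y and matched == len(seq):
            return True
        for label, nxt in adj.get(node, ()):
            # Greedily advance the required-label prefix whenever the edge label matches.
            nxt_matched = matched + 1 if matched < len(seq) and label == seq[matched] else matched
            state = (nxt, nxt_matched)
            if state not in seen:
                seen.add(state)
                queue.append(state)
    return False

# Example: is there a path 1 -> 4 whose edge labels contain "a" and then "b"?
adj = {1: [("a", 2), ("c", 3)], 2: [("b", 4)], 3: [("a", 4)]}
print(label_order_reachable(adj, 1, 4, ["a", "b"]))   # True, via 1 -a-> 2 -b-> 4
```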
1203.1985 | Zhaowen Wang | Zhaowen Wang, Jinjun Wang, Jing Xiao, Kai-Hsiang Lin, Thomas Huang | Substructure and Boundary Modeling for Continuous Action Recognition | Detailed version of the CVPR 2012 paper. 15 pages, 6 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces a probabilistic graphical model for continuous action
recognition with two novel components: substructure transition model and
discriminative boundary model. The first component encodes the sparse and
global temporal transition prior between action primitives in a state-space model
to handle the large spatial-temporal variations within an action class. The
second component enforces the action duration constraint in a discriminative
way to locate the transition boundaries between actions more accurately. The
two components are integrated into a unified graphical structure to enable
effective training and inference. Our comprehensive experimental results on
both public and in-house datasets show that, with the capability to incorporate
additional information that had not been explicitly or efficiently modeled by
previous methods, our proposed algorithm achieved significantly improved
performance for continuous action recognition.
| [
{
"version": "v1",
"created": "Fri, 9 Mar 2012 04:16:33 GMT"
}
] | 2012-03-12T00:00:00 | [
[
"Wang",
"Zhaowen",
""
],
[
"Wang",
"Jinjun",
""
],
[
"Xiao",
"Jing",
""
],
[
"Lin",
"Kai-Hsiang",
""
],
[
"Huang",
"Thomas",
""
]
] | TITLE: Substructure and Boundary Modeling for Continuous Action Recognition
ABSTRACT: This paper introduces a probabilistic graphical model for continuous action
recognition with two novel components: substructure transition model and
discriminative boundary model. The first component encodes the sparse and
global temporal transition prior between action primitives in a state-space model
to handle the large spatial-temporal variations within an action class. The
second component enforces the action duration constraint in a discriminative
way to locate the transition boundaries between actions more accurately. The
two components are integrated into a unified graphical structure to enable
effective training and inference. Our comprehensive experimental results on
both public and in-house datasets show that, with the capability to incorporate
additional information that had not been explicitly or efficiently modeled by
previous methods, our proposed algorithm achieved significantly improved
performance for continuous action recognition.
| no_new_dataset | 0.950319 |
1203.2021 | Sylvain Lespinats | Sylvain Lespinats, Anke Meyer-Baese, Michael Aupetit | A new supervised non-linear mapping | 2 pages | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Supervised mapping methods project multi-dimensional labeled data onto a
2-dimensional space attempting to preserve both data similarities and topology
of classes. Supervised mappings are expected to help the user to understand the
underlying original class structure and to classify new data visually. Several
methods have been designed to achieve supervised mapping, but many of them
modify original distances prior to the mapping so that original data
similarities are corrupted and even overlapping classes tend to be separated
onto the map ignoring their original topology. We propose ClassiMap, an
alternative method for supervised mapping. Mappings come with distortions which
can be split between tears (close points mapped far apart) and false
neighborhoods (points far apart mapped as neighbors). Some mapping methods
favor the former while others favor the latter. ClassiMap switches between such
mapping methods so that tears tend to appear between classes and false
neighborhoods within classes, better preserving classes' topology. We also
propose two new objective criteria instead of the usual subjective visual
inspection to perform fair comparisons of supervised mapping methods. ClassiMap
appears to be the best supervised mapping method according to these criteria in
our experiments on synthetic and real datasets.
| [
{
"version": "v1",
"created": "Fri, 9 Mar 2012 09:15:43 GMT"
}
] | 2012-03-12T00:00:00 | [
[
"Lespinats",
"Sylvain",
""
],
[
"Meyer-Baese",
"Anke",
""
],
[
"Aupetit",
"Michael",
""
]
] | TITLE: A new supervised non-linear mapping
ABSTRACT: Supervised mapping methods project multi-dimensional labeled data onto a
2-dimensional space attempting to preserve both data similarities and topology
of classes. Supervised mappings are expected to help the user to understand the
underlying original class structure and to classify new data visually. Several
methods have been designed to achieve supervised mapping, but many of them
modify original distances prior to the mapping so that original data
similarities are corrupted and even overlapping classes tend to be separated
onto the map ignoring their original topology. We propose ClassiMap, an
alternative method for supervised mapping. Mappings come with distortions which
can be split between tears (close points mapped far apart) and false
neighborhoods (points far apart mapped as neighbors). Some mapping methods
favor the former while others favor the latter. ClassiMap switches between such
mapping methods so that tears tend to appear between classes and false
neighborhoods within classes, better preserving classes' topology. We also
propose two new objective criteria instead of the usual subjective visual
inspection to perform fair comparisons of supervised mapping methods. ClassiMap
appears to be the best supervised mapping method according to these criteria in
our experiments on synthetic and real datasets.
| no_new_dataset | 0.956145 |
1203.1483 | Eduard Gabriel B\u{a}z\u{a}van | Eduard Gabriel B\u{a}z\u{a}van, Fuxin Li and Cristian Sminchisescu | Learning Random Kernel Approximations for Object Recognition | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Approximations based on random Fourier features have recently emerged as an
efficient and formally consistent methodology to design large-scale kernel
machines. By expressing the kernel as a Fourier expansion, features are
generated based on a finite set of random basis projections, sampled from the
Fourier transform of the kernel, with inner products that are Monte Carlo
approximations of the original kernel. Based on the observation that different
kernel-induced Fourier sampling distributions correspond to different kernel
parameters, we show that an optimization process in the Fourier domain can be
used to identify the different frequency bands that are useful for prediction
on training data. Moreover, the application of group Lasso to random feature
vectors corresponding to a linear combination of multiple kernels, leads to
efficient and scalable reformulations of the standard multiple kernel learning
model \cite{Varma09}. In this paper we develop the linear Fourier approximation
methodology for both single and multiple gradient-based kernel learning and
show that it produces fast and accurate predictors on a complex dataset such as
the Visual Object Challenge 2011 (VOC2011).
| [
{
"version": "v1",
"created": "Wed, 7 Mar 2012 14:33:26 GMT"
}
] | 2012-03-08T00:00:00 | [
[
"Băzăvan",
"Eduard Gabriel",
""
],
[
"Li",
"Fuxin",
""
],
[
"Sminchisescu",
"Cristian",
""
]
] | TITLE: Learning Random Kernel Approximations for Object Recognition
ABSTRACT: Approximations based on random Fourier features have recently emerged as an
efficient and formally consistent methodology to design large-scale kernel
machines. By expressing the kernel as a Fourier expansion, features are
generated based on a finite set of random basis projections, sampled from the
Fourier transform of the kernel, with inner products that are Monte Carlo
approximations of the original kernel. Based on the observation that different
kernel-induced Fourier sampling distributions correspond to different kernel
parameters, we show that an optimization process in the Fourier domain can be
used to identify the different frequency bands that are useful for prediction
on training data. Moreover, the application of group Lasso to random feature
vectors corresponding to a linear combination of multiple kernels, leads to
efficient and scalable reformulations of the standard multiple kernel learning
model \cite{Varma09}. In this paper we develop the linear Fourier approximation
methodology for both single and multiple gradient-based kernel learning and
show that it produces fast and accurate predictors on a complex dataset such as
the Visual Object Challenge 2011 (VOC2011).
| no_new_dataset | 0.949342 |
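Record 1203.1483 above builds on random Fourier features, where a kernel is approximated by cosines of random projections sampled from its Fourier transform. The sketch below shows only the standard construction for a Gaussian kernel; the paper's gradient-based learning of the sampling distribution and its group-Lasso multiple-kernel variant are not included, and the feature count D and bandwidth sigma are arbitrary.

```python
import numpy as np

def rff_map(X, D=500, sigma=1.0, seed=0):
    """Random Fourier features z(x) with z(x).z(y) ~ exp(-||x - y||^2 / (2 sigma^2))."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(0.0, 1.0 / sigma, size=(d, D))    # frequencies drawn from the kernel's Fourier transform
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)        # random phases
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

# A linear predictor trained on rff_map(X) then approximates the corresponding kernel machine.
X = np.random.default_rng(1).standard_normal((5, 3))
Z = rff_map(X, D=2000)
approx = Z @ Z.T
exact = np.exp(-((X[:, None] - X[None, :]) ** 2).sum(-1) / 2.0)
print(np.abs(approx - exact).max())                  # small Monte Carlo approximation error
```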
1203.1502 | Romain Giot | Romain Giot (GREYC), Christophe Rosenberger (GREYC), Bernadette
Dorizzi (SAMOVAR) | Performance Evaluation of Biometric Template Update | International Biometric Performance Testing Conference 2012,
Gaithersburg, MD, USA : United States (2012) | null | null | null | cs.OH cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Template update allows to modify the biometric reference of a user while he
uses the biometric system. With such a mechanism, we expect the biometric
system to always use an up-to-date representation of the user, by capturing his
intra-class (temporary or permanent) variability. Although several studies
exist in the literature, there is no commonly adopted evaluation scheme. This
does not ease the comparison of the different systems in the literature. In
this paper, we show that using different evaluation procedures can lead to
different, and contradictory, interpretations of the results. We use a
keystroke dynamics (a modality that suffers from template ageing quickly)
template update system on a dataset consisting of eight different sessions to
illustrate this point. Even though we do not resolve this issue, our results show
that it is necessary to normalize the template update evaluation procedures.
| [
{
"version": "v1",
"created": "Mon, 27 Feb 2012 16:07:47 GMT"
}
] | 2012-03-08T00:00:00 | [
[
"Giot",
"Romain",
"",
"GREYC"
],
[
"Rosenberger",
"Christophe",
"",
"GREYC"
],
[
"Dorizzi",
"Bernadette",
"",
"SAMOVAR"
]
] | TITLE: Performance Evaluation of Biometric Template Update
ABSTRACT: Template update allows to modify the biometric reference of a user while he
uses the biometric system. With such a mechanism, we expect the biometric
system to always use an up-to-date representation of the user, by capturing his
intra-class (temporary or permanent) variability. Although several studies
exist in the literature, there is no commonly adopted evaluation scheme. This
does not ease the comparison of the different systems in the literature. In
this paper, we show that using different evaluation procedures can lead to
different, and contradictory, interpretations of the results. We use a
keystroke dynamics (a modality that suffers from template ageing quickly)
template update system on a dataset consisting of eight different sessions to
illustrate this point. Even though we do not resolve this issue, our results show
that it is necessary to normalize the template update evaluation procedures.
| no_new_dataset | 0.941061 |
1203.1105 | Xiao-Ke Xu | Xiao-Ke Xu, Jian-Bo Wang, Ye Wu, Michael Small | Pairwise interaction pattern in the weighted communication network | 7 pages, 9 figures | null | null | null | physics.soc-ph cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although recent studies show that both topological structures and human
dynamics can strongly affect information spreading on social networks, the
complicated interplay of the two significant factors has not yet been clearly
described. In this work, we find a strong pairwise interaction based on
analyzing the weighted network generated by the short message communication
dataset within a Chinese tele-communication provider. The pairwise interaction
bridges the network topological structure and human interaction dynamics, which
can promote local information spreading between pairs of communication partners
and in contrast can also suppress global information (e.g., rumor) cascade and
spreading. In addition, the pairwise interaction is the basic pattern of group
conversations and it can greatly reduce the waiting time of communication
events between a pair of intimate friends. Our findings are also helpful for
communication operators to design novel tariff strategies and optimize their
communication services.
| [
{
"version": "v1",
"created": "Tue, 6 Mar 2012 05:55:24 GMT"
}
] | 2012-03-07T00:00:00 | [
[
"Xu",
"Xiao-Ke",
""
],
[
"Wang",
"Jian-Bo",
""
],
[
"Wu",
"Ye",
""
],
[
"Small",
"Michael",
""
]
] | TITLE: Pairwise interaction pattern in the weighted communication network
ABSTRACT: Although recent studies show that both topological structures and human
dynamics can strongly affect information spreading on social networks, the
complicated interplay of the two significant factors has not yet been clearly
described. In this work, we find a strong pairwise interaction based on
analyzing the weighted network generated by the short message communication
dataset within a Chinese tele-communication provider. The pairwise interaction
bridges the network topological structure and human interaction dynamics, which
can promote local information spreading between pairs of communication partners
and in contrast can also suppress global information (e.g., rumor) cascade and
spreading. In addition, the pairwise interaction is the basic pattern of group
conversations and it can greatly reduce the waiting time of communication
events between a pair of intimate friends. Our findings are also helpful for
communication operators to design novel tariff strategies and optimize their
communication services.
| no_new_dataset | 0.947527 |
1202.6078 | Avishek Saha | Hal Daume III, Jeff M. Phillips, Avishek Saha, Suresh
Venkatasubramanian | Protocols for Learning Classifiers on Distributed Data | 19 pages, 12 figures, accepted at AISTATS 2012 | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of learning classifiers for labeled data that has
been distributed across several nodes. Our goal is to find a single classifier,
with small approximation error, across all datasets while minimizing the
communication between nodes. This setting models real-world communication
bottlenecks in the processing of massive distributed datasets. We present
several very general sampling-based solutions as well as some two-way protocols
which have a provable exponential speed-up over any one-way protocol. We focus
on core problems for noiseless data distributed across two or more nodes. The
techniques we introduce are reminiscent of active learning, but rather than
actively probing labels, nodes actively communicate with each other, each node
simultaneously learning the important data from another node.
| [
{
"version": "v1",
"created": "Mon, 27 Feb 2012 21:33:32 GMT"
}
] | 2012-03-06T00:00:00 | [
[
"Daume",
"Hal",
"III"
],
[
"Phillips",
"Jeff M.",
""
],
[
"Saha",
"Avishek",
""
],
[
"Venkatasubramanian",
"Suresh",
""
]
] | TITLE: Protocols for Learning Classifiers on Distributed Data
ABSTRACT: We consider the problem of learning classifiers for labeled data that has
been distributed across several nodes. Our goal is to find a single classifier,
with small approximation error, across all datasets while minimizing the
communication between nodes. This setting models real-world communication
bottlenecks in the processing of massive distributed datasets. We present
several very general sampling-based solutions as well as some two-way protocols
which have a provable exponential speed-up over any one-way protocol. We focus
on core problems for noiseless data distributed across two or more nodes. The
techniques we introduce are reminiscent of active learning, but rather than
actively probing labels, nodes actively communicate with each other, each node
simultaneously learning the important data from another node.
| no_new_dataset | 0.953101 |
1203.0488 | Shu Kong | Shu Kong and Donghui Wang | Multi-Level Feature Descriptor for Robust Texture Classification via
Locality-Constrained Collaborative Strategy | null | null | null | null | cs.CV cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces a simple but highly efficient ensemble for robust
texture classification, which can effectively deal with translation, scale and
significant viewpoint changes. The proposed method first inherits
the spirit of spatial pyramid matching model (SPM), which is popular for
encoding spatial distribution of local features, but in a flexible way,
partitioning the original image into different levels and incorporating
different overlapping patterns of each level. This flexible setup helps capture
the informative features and produces sufficient local feature codes by some
well-chosen aggregation statistics or pooling operations within each
partitioned region, even when only a few sample images are available for
training. Then each texture image is represented by several orderless feature
codes and thereby all the training data form a reliable feature pond. Finally,
to take full advantage of this feature pond, we develop a collaborative
representation-based strategy with locality constraint (LC-CRC) for the final
classification, and experimental results on three well-known public texture
datasets demonstrate the proposed approach is very competitive and even
outperforms several state-of-the-art methods. Particularly, when only a few
samples of each category are available for training, our approach still
achieves very high classification performance.
| [
{
"version": "v1",
"created": "Fri, 2 Mar 2012 15:15:50 GMT"
}
] | 2012-03-06T00:00:00 | [
[
"Kong",
"Shu",
""
],
[
"Wang",
"Donghui",
""
]
] | TITLE: Multi-Level Feature Descriptor for Robust Texture Classification via
Locality-Constrained Collaborative Strategy
ABSTRACT: This paper introduces a simple but highly efficient ensemble for robust
texture classification, which can effectively deal with translation, scale and
significant viewpoint changes. The proposed method first inherits
the spirit of spatial pyramid matching model (SPM), which is popular for
encoding spatial distribution of local features, but in a flexible way,
partitioning the original image into different levels and incorporating
different overlapping patterns of each level. This flexible setup helps capture
the informative features and produces sufficient local feature codes by some
well-chosen aggregation statistics or pooling operations within each
partitioned region, even when only a few sample images are available for
training. Then each texture image is represented by several orderless feature
codes and thereby all the training data form a reliable feature pond. Finally,
to take full advantage of this feature pond, we develop a collaborative
representation-based strategy with locality constraint (LC-CRC) for the final
classification, and experimental results on three well-known public texture
datasets demonstrate the proposed approach is very competitive and even
outperforms several state-of-the-art methods. Particularly, when only a few
samples of each category are available for training, our approach still
achieves very high classification performance.
| no_new_dataset | 0.949106 |
1003.0146 | Lihong Li | Lihong Li, Wei Chu, John Langford, Robert E. Schapire | A Contextual-Bandit Approach to Personalized News Article Recommendation | 10 pages, 5 figures | Presented at the Nineteenth International Conference on World Wide
Web (WWW 2010), Raleigh, NC, USA, 2010 | 10.1145/1772690.1772758 | null | cs.LG cs.AI cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Personalized web services strive to adapt their services (advertisements,
news articles, etc) to individual users by making use of both content and user
information. Despite a few recent advances, this problem remains challenging
for at least two reasons. First, web services feature dynamically
changing pools of content, rendering traditional collaborative filtering
methods inapplicable. Second, the scale of most web services of practical
interest calls for solutions that are both fast in learning and computation.
In this work, we model personalized recommendation of news articles as a
contextual bandit problem, a principled approach in which a learning algorithm
sequentially selects articles to serve users based on contextual information
about the users and articles, while simultaneously adapting its
article-selection strategy based on user-click feedback to maximize total user
clicks.
The contributions of this work are three-fold. First, we propose a new,
general contextual bandit algorithm that is computationally efficient and well
motivated from learning theory. Second, we argue that any bandit algorithm can
be reliably evaluated offline using previously recorded random traffic.
Finally, using this offline evaluation method, we successfully applied our new
algorithm to a Yahoo! Front Page Today Module dataset containing over 33
million events. Results showed a 12.5% click lift compared to a standard
context-free bandit algorithm, and the advantage becomes even greater when data
gets more scarce.
| [
{
"version": "v1",
"created": "Sun, 28 Feb 2010 02:18:59 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Mar 2012 23:49:42 GMT"
}
] | 2012-03-05T00:00:00 | [
[
"Li",
"Lihong",
""
],
[
"Chu",
"Wei",
""
],
[
"Langford",
"John",
""
],
[
"Schapire",
"Robert E.",
""
]
] | TITLE: A Contextual-Bandit Approach to Personalized News Article Recommendation
ABSTRACT: Personalized web services strive to adapt their services (advertisements,
news articles, etc) to individual users by making use of both content and user
information. Despite a few recent advances, this problem remains challenging
for at least two reasons. First, web services feature dynamically
changing pools of content, rendering traditional collaborative filtering
methods inapplicable. Second, the scale of most web services of practical
interest calls for solutions that are both fast in learning and computation.
In this work, we model personalized recommendation of news articles as a
contextual bandit problem, a principled approach in which a learning algorithm
sequentially selects articles to serve users based on contextual information
about the users and articles, while simultaneously adapting its
article-selection strategy based on user-click feedback to maximize total user
clicks.
The contributions of this work are three-fold. First, we propose a new,
general contextual bandit algorithm that is computationally efficient and well
motivated from learning theory. Second, we argue that any bandit algorithm can
be reliably evaluated offline using previously recorded random traffic.
Finally, using this offline evaluation method, we successfully applied our new
algorithm to a Yahoo! Front Page Today Module dataset containing over 33
million events. Results showed a 12.5% click lift compared to a standard
context-free bandit algorithm, and the advantage becomes even greater when data
gets more scarce.
| no_new_dataset | 0.949716 |
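The record above proposes a contextual bandit algorithm evaluated offline on the Yahoo! Front Page Today Module data; the algorithm described in this paper is widely known as LinUCB. The sketch below is a generic disjoint-arm LinUCB written from the standard description rather than the authors' code; the toy reward model, the feature dimension and the exploration parameter alpha are assumptions for illustration.

```python
import numpy as np

class LinUCBArm:
    """One arm of a disjoint LinUCB model: a ridge-regression reward estimate
    plus an upper-confidence bonus on the prediction."""

    def __init__(self, dim, alpha=1.0):
        self.alpha = alpha
        self.A = np.eye(dim)    # regularized design matrix X^T X + I
        self.b = np.zeros(dim)  # feature/reward correlations X^T r

    def ucb(self, x):
        A_inv = np.linalg.inv(self.A)
        theta = A_inv @ self.b
        return theta @ x + self.alpha * np.sqrt(x @ A_inv @ x)

    def update(self, x, reward):
        self.A += np.outer(x, x)
        self.b += reward * x

def choose(arms, contexts):
    """Pick the arm with the highest upper confidence bound for its context."""
    return int(np.argmax([arm.ucb(x) for arm, x in zip(arms, contexts)]))

# Toy usage with 3 arms and 4-dimensional contexts (illustrative only).
rng = np.random.default_rng(0)
arms = [LinUCBArm(dim=4, alpha=0.5) for _ in range(3)]
true_theta = rng.normal(size=(3, 4))
for t in range(1000):
    contexts = rng.normal(size=(3, 4))
    a = choose(arms, contexts)
    reward = true_theta[a] @ contexts[a] + 0.1 * rng.normal()
    arms[a].update(contexts[a], reward)
```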
1203.0058 | Bo Zhao | Bo Zhao, Benjamin I. P. Rubinstein, Jim Gemmell, Jiawei Han | A Bayesian Approach to Discovering Truth from Conflicting Sources for
Data Integration | VLDB2012 | Proceedings of the VLDB Endowment (PVLDB), Vol. 5, No. 6, pp.
550-561 (2012) | null | null | cs.DB cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In practical data integration systems, it is common for the data sources
being integrated to provide conflicting information about the same entity.
Consequently, a major challenge for data integration is to derive the most
complete and accurate integrated records from diverse and sometimes conflicting
sources. We term this challenge the truth finding problem. We observe that some
sources are generally more reliable than others, and therefore a good model of
source quality is the key to solving the truth finding problem. In this work,
we propose a probabilistic graphical model that can automatically infer true
records and source quality without any supervision. In contrast to previous
methods, our principled approach leverages a generative process of two types of
errors (false positive and false negative) by modeling two different aspects of
source quality. In so doing, ours is also the first approach designed to merge
multi-valued attribute types. Our method is scalable, due to an efficient
sampling-based inference algorithm that needs very few iterations in practice
and enjoys linear time complexity, with an even faster incremental variant.
Experiments on two real world datasets show that our new method outperforms
existing state-of-the-art approaches to the truth finding problem.
| [
{
"version": "v1",
"created": "Thu, 1 Mar 2012 00:17:31 GMT"
}
] | 2012-03-05T00:00:00 | [
[
"Zhao",
"Bo",
""
],
[
"Rubinstein",
"Benjamin I. P.",
""
],
[
"Gemmell",
"Jim",
""
],
[
"Han",
"Jiawei",
""
]
] | TITLE: A Bayesian Approach to Discovering Truth from Conflicting Sources for
Data Integration
ABSTRACT: In practical data integration systems, it is common for the data sources
being integrated to provide conflicting information about the same entity.
Consequently, a major challenge for data integration is to derive the most
complete and accurate integrated records from diverse and sometimes conflicting
sources. We term this challenge the truth finding problem. We observe that some
sources are generally more reliable than others, and therefore a good model of
source quality is the key to solving the truth finding problem. In this work,
we propose a probabilistic graphical model that can automatically infer true
records and source quality without any supervision. In contrast to previous
methods, our principled approach leverages a generative process of two types of
errors (false positive and false negative) by modeling two different aspects of
source quality. In so doing, ours is also the first approach designed to merge
multi-valued attribute types. Our method is scalable, due to an efficient
sampling-based inference algorithm that needs very few iterations in practice
and enjoys linear time complexity, with an even faster incremental variant.
Experiments on two real world datasets show that our new method outperforms
existing state-of-the-art approaches to the truth finding problem.
| no_new_dataset | 0.945399 |
1108.5668 | Gabriel Dulac-Arnold | Gabriel Dulac-Arnold, Ludovic Denoyer, Philippe Preux and Patrick
Gallinari | Datum-Wise Classification: A Sequential Approach to Sparsity | ECML2011 | Lecture Notes in Computer Science, 2011, Volume 6911/2011, 375-390 | 10.1007/978-3-642-23780-5_34 | null | cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel classification technique whose aim is to select an
appropriate representation for each datapoint, in contrast to the usual
approach of selecting a representation encompassing the whole dataset. This
datum-wise representation is found by using a sparsity inducing empirical risk,
which is a relaxation of the standard $L_0$ regularized risk. The classification
problem is modeled as a sequential decision process that sequentially chooses,
for each datapoint, which features to use before classifying. Datum-Wise
Classification extends naturally to multi-class tasks, and we describe a
specific case where our inference has equivalent complexity to a traditional
linear classifier, while still using a variable number of features. We compare
our classifier to classical $L_1$ regularized linear models ($L_1$-SVM and LARS) on
a set of common binary and multi-class datasets and show that for an equal
average number of features used we can get improved performance using our
method.
| [
{
"version": "v1",
"created": "Mon, 29 Aug 2011 17:46:08 GMT"
}
] | 2012-03-02T00:00:00 | [
[
"Dulac-Arnold",
"Gabriel",
""
],
[
"Denoyer",
"Ludovic",
""
],
[
"Preux",
"Philippe",
""
],
[
"Gallinari",
"Patrick",
""
]
] | TITLE: Datum-Wise Classification: A Sequential Approach to Sparsity
ABSTRACT: We propose a novel classification technique whose aim is to select an
appropriate representation for each datapoint, in contrast to the usual
approach of selecting a representation encompassing the whole dataset. This
datum-wise representation is found by using a sparsity inducing empirical risk,
which is a relaxation of the standard $L_0$ regularized risk. The classification
problem is modeled as a sequential decision process that sequentially chooses,
for each datapoint, which features to use before classifying. Datum-Wise
Classification extends naturally to multi-class tasks, and we describe a
specific case where our inference has equivalent complexity to a traditional
linear classifier, while still using a variable number of features. We compare
our classifier to classical $L_1$ regularized linear models ($L_1$-SVM and LARS) on
a set of common binary and multi-class datasets and show that for an equal
average number of features used we can get improved performance using our
method.
| no_new_dataset | 0.951233 |
1203.0060 | Albert Angel | Albert Angel, Nick Koudas, Nikos Sarkas, Divesh Srivastava | Dense Subgraph Maintenance under Streaming Edge Weight Updates for
Real-time Story Identification | VLDB2012 | Proceedings of the VLDB Endowment (PVLDB), Vol. 5, No. 6, pp.
574-585 (2012) | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent years have witnessed an unprecedented proliferation of social media.
People around the globe author, every day, millions of blog posts, social
network status updates, etc. This rich stream of information can be used to
identify, on an ongoing basis, emerging stories, and events that capture
popular attention. Stories can be identified via groups of tightly-coupled
real-world entities, namely the people, locations, products, etc., that are
involved in the story. The sheer scale, and rapid evolution of the data
involved necessitate highly efficient techniques for identifying important
stories at every point of time. The main challenge in real-time story
identification is the maintenance of dense subgraphs (corresponding to groups
of tightly-coupled entities) under streaming edge weight updates (resulting
from a stream of user-generated content). This is the first work to study the
efficient maintenance of dense subgraphs under such streaming edge weight
updates. For a wide range of definitions of density, we derive theoretical
results regarding the magnitude of change that a single edge weight update can
cause. Based on these, we propose a novel algorithm, DYNDENS, which outperforms
adaptations of existing techniques to this setting, and yields meaningful
results. Our approach is validated by a thorough experimental evaluation on
large-scale real and synthetic datasets.
| [
{
"version": "v1",
"created": "Thu, 1 Mar 2012 00:17:48 GMT"
}
] | 2012-03-02T00:00:00 | [
[
"Angel",
"Albert",
""
],
[
"Koudas",
"Nick",
""
],
[
"Sarkas",
"Nikos",
""
],
[
"Srivastava",
"Divesh",
""
]
] | TITLE: Dense Subgraph Maintenance under Streaming Edge Weight Updates for
Real-time Story Identification
ABSTRACT: Recent years have witnessed an unprecedented proliferation of social media.
People around the globe author, every day, millions of blog posts, social
network status updates, etc. This rich stream of information can be used to
identify, on an ongoing basis, emerging stories, and events that capture
popular attention. Stories can be identified via groups of tightly-coupled
real-world entities, namely the people, locations, products, etc., that are
involved in the story. The sheer scale, and rapid evolution of the data
involved necessitate highly efficient techniques for identifying important
stories at every point of time. The main challenge in real-time story
identification is the maintenance of dense subgraphs (corresponding to groups
of tightly-coupled entities) under streaming edge weight updates (resulting
from a stream of user-generated content). This is the first work to study the
efficient maintenance of dense subgraphs under such streaming edge weight
updates. For a wide range of definitions of density, we derive theoretical
results regarding the magnitude of change that a single edge weight update can
cause. Based on these, we propose a novel algorithm, DYNDENS, which outperforms
adaptations of existing techniques to this setting, and yields meaningful
results. Our approach is validated by a thorough experimental evaluation on
large-scale real and synthetic datasets.
| no_new_dataset | 0.945751 |
1202.6136 | Dohy Hong | Dohy Hong | D-iteration: evaluation of the update algorithm | 5 pages | null | null | null | cs.DM math.NA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The aim of this paper is to analyse the gain of the update algorithm
associated with the recently proposed D-iteration: the D-iteration is a new
iterative method based on fluid diffusion. It exploits a simple, intuitive
decomposition of the matrix-vector product as elementary operations of fluid
diffusion (forward scheme) associated with a new algebraic representation. We
show through experiments on real datasets how much this approach can improve
the computation efficiency in the presence of graph evolution.
| [
{
"version": "v1",
"created": "Tue, 28 Feb 2012 07:04:11 GMT"
}
] | 2012-02-29T00:00:00 | [
[
"Hong",
"Dohy",
""
]
] | TITLE: D-iteration: evaluation of the update algorithm
ABSTRACT: The aim of this paper is to analyse the gain of the update algorithm
associated with the recently proposed D-iteration: the D-iteration is a new
iterative method based on fluid diffusion. It exploits a simple, intuitive
decomposition of the matrix-vector product as elementary operations of fluid
diffusion (forward scheme) associated with a new algebraic representation. We
show through experiments on real datasets how much this approach can improve
the computation efficiency in the presence of graph evolution.
| no_new_dataset | 0.943452 |
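The preceding record (and the related D-iteration record that follows) describe the matrix-vector product as elementary fluid-diffusion operations, but only at a high level. The sketch below illustrates the general forward (push) scheme for solving x = d + P x by diffusing residual fluid node by node; the papers' handling of graph updates, scheduling and distribution are not reproduced, and the damped PageRank-style example is an assumption.

```python
import numpy as np

def diffusion_solve(P, d, tol=1e-10, max_sweeps=1000):
    """Solve x = d + P x by pushing residual 'fluid' node by node (forward scheme).

    P should have spectral radius < 1 (e.g. a damped transition matrix). The
    fluid F starts at d; each push moves F[i] onto x[i] and spreads P[:, i] * F[i]
    back into the fluid vector, preserving the invariant x - P x + F = d.
    """
    n = len(d)
    x = np.zeros(n)
    F = d.astype(float)
    for _ in range(max_sweeps):
        if np.abs(F).sum() < tol:
            break
        for i in range(n):
            f = F[i]
            if f == 0.0:
                continue
            x[i] += f            # retain the fluid at node i
            F[i] = 0.0
            F += P[:, i] * f     # diffuse it to the neighbours of i
    return x

# Toy example: damped PageRank-style system x = d + alpha * A x.
alpha = 0.85
A = np.array([[0, 0.5, 0.5],
              [1.0, 0, 0.5],
              [0, 0.5, 0]])      # column-stochastic link matrix
d = (1 - alpha) / 3 * np.ones(3)
x = diffusion_solve(alpha * A, d)
print(x, np.allclose(x, d + alpha * A @ x))
```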
1202.6168 | Dohy Hong | Dohy Hong | D-iteration: Evaluation of the Asynchronous Distributed Computation | 8 pages | null | null | null | math.NA cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The aim of this paper is to present a first evaluation of the potential of an
asynchronous distributed computation associated with the recently proposed
approach, D-iteration: the D-iteration is an iterative method based on fluid
diffusion, which has the advantage of being natively distributed. It exploits a
simple, intuitive decomposition of the matrix-vector product as elementary
operations of fluid diffusion associated with a new algebraic representation. We
show through experiments on real datasets how much this approach can improve
the computation efficiency when the parallelism is applied: with the proposed
solution, when the computation is distributed over $K$ virtual machines (PIDs),
the memory size to be handled by each virtual machine decreases linearly with
$K$ and the computation speed increases almost linearly with $K$ with a slope
becoming closer to one when the number $N$ of linear equations to be solved
increases.
| [
{
"version": "v1",
"created": "Tue, 28 Feb 2012 10:27:46 GMT"
}
] | 2012-02-29T00:00:00 | [
[
"Hong",
"Dohy",
""
]
] | TITLE: D-iteration: Evaluation of the Asynchronous Distributed Computation
ABSTRACT: The aim of this paper is to present a first evaluation of the potential of an
asynchronous distributed computation associated with the recently proposed
approach, D-iteration: the D-iteration is an iterative method based on fluid
diffusion, which has the advantage of being natively distributed. It exploits a
simple, intuitive decomposition of the matrix-vector product as elementary
operations of fluid diffusion associated with a new algebraic representation. We
show through experiments on real datasets how much this approach can improve
the computation efficiency when the parallelism is applied: with the proposed
solution, when the computation is distributed over $K$ virtual machines (PIDs),
the memory size to be handled by each virtual machine decreases linearly with
$K$ and the computation speed increases almost linearly with $K$ with a slope
becoming closer to one when the number $N$ of linear equations to be solved
increases.
| no_new_dataset | 0.939081 |
1202.5713 | Michalis Potamias | Michalis Potamias | The warm-start bias of Yelp ratings | 5 pages, 5 figures | null | null | null | cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Yelp ratings are often viewed as a reputation metric for local businesses. In
this paper we study how Yelp ratings evolve over time. Our main finding is that
on average the first ratings that businesses receive overestimate their
eventual reputation. In particular, the first review that a business receives
in our dataset averages 4.1 stars, while the 20th review averages just 3.69
stars. This significant warm-start bias, which may be attributed to the limited
exposure of a business in its first steps, may mask analyses performed on
ratings and their reputational ramifications. Therefore, we study techniques to
identify and correct for this bias. Further, we perform a case study to explore
the effect of a Groupon deal on the merchant's subsequent ratings and show both
that previous research has overestimated Groupon's effect to merchants'
reputation and that average ratings anticorrelate with the number of reviews
received. Our analysis points to the importance of identifying and removing
biases from Yelp reviews.
| [
{
"version": "v1",
"created": "Sun, 26 Feb 2012 01:42:57 GMT"
}
] | 2012-02-28T00:00:00 | [
[
"Potamias",
"Michalis",
""
]
] | TITLE: The warm-start bias of Yelp ratings
ABSTRACT: Yelp ratings are often viewed as a reputation metric for local businesses. In
this paper we study how Yelp ratings evolve over time. Our main finding is that
on average the first ratings that businesses receive overestimate their
eventual reputation. In particular, the first review that a business receives
in our dataset averages 4.1 stars, while the 20th review averages just 3.69
stars. This significant warm-start bias, which may be attributed to the limited
exposure of a business in its first steps, may mask analyses performed on
ratings and their reputational ramifications. Therefore, we study techniques to
identify and correct for this bias. Further, we perform a case study to explore
the effect of a Groupon deal on the merchant's subsequent ratings and show both
that previous research has overestimated Groupon's effect to merchants'
reputation and that average ratings anticorrelate with the number of reviews
received. Our analysis points to the importance of identifying and removing
biases from Yelp reviews.
| no_new_dataset | 0.875681 |
1002.1880 | Florian Sikora | Sylvain Guillemot, Florian Sikora | Finding and counting vertex-colored subtrees | Conference version in International Symposium on Mathematical
Foundations of Computer Science (MFCS), Brno : Czech Republic (2010) Journal
Version in Algorithmica | null | 10.1007/s00453-011-9600-8 | null | cs.CC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The problems studied in this article originate from the Graph Motif problem
introduced by Lacroix et al. in the context of biological networks. The problem
is to decide if a vertex-colored graph has a connected subgraph whose colors
equal a given multiset of colors $M$. It is a graph pattern-matching problem
variant, where the structure of the occurrence of the pattern is not of
interest but the only requirement is the connectedness. Using an algebraic
framework recently introduced by Koutis et al., we obtain new FPT algorithms
for Graph Motif and variants, with improved running times. We also obtain
results on the counting versions of this problem, proving that the counting
problem is FPT if M is a set, but becomes W[1]-hard if M is a multiset with two
colors. Finally, we present an experimental evaluation of this approach on real
datasets, showing that its performance compares favorably with existing
software.
| [
{
"version": "v1",
"created": "Tue, 9 Feb 2010 15:19:54 GMT"
},
{
"version": "v2",
"created": "Mon, 10 May 2010 12:18:20 GMT"
},
{
"version": "v3",
"created": "Tue, 15 Jun 2010 07:42:54 GMT"
},
{
"version": "v4",
"created": "Fri, 24 Feb 2012 15:35:28 GMT"
}
] | 2012-02-27T00:00:00 | [
[
"Guillemot",
"Sylvain",
""
],
[
"Sikora",
"Florian",
""
]
] | TITLE: Finding and counting vertex-colored subtrees
ABSTRACT: The problems studied in this article originate from the Graph Motif problem
introduced by Lacroix et al. in the context of biological networks. The problem
is to decide if a vertex-colored graph has a connected subgraph whose colors
equal a given multiset of colors $M$. It is a graph pattern-matching problem
variant, where the structure of the occurrence of the pattern is not of
interest but the only requirement is the connectedness. Using an algebraic
framework recently introduced by Koutis et al., we obtain new FPT algorithms
for Graph Motif and variants, with improved running times. We also obtain
results on the counting versions of this problem, proving that the counting
problem is FPT if M is a set, but becomes W[1]-hard if M is a multiset with two
colors. Finally, we present an experimental evaluation of this approach on real
datasets, showing that its performance compares favorably with existing
software.
| no_new_dataset | 0.942665 |
1202.5477 | Arkaitz Zubiaga | Arkaitz Zubiaga and Raquel Mart\'inez and V\'ictor Fresno | Analyzing Tag Distributions in Folksonomies for Resource Classification | null | KSEM 2011, 5th International Conference on Knowledge Science,
Engineering and Management | null | null | cs.DL cs.IR | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Recent research has shown the usefulness of social tags as a data source to
feed resource classification. Little is known about the effect of settings on
folksonomies created on social tagging systems. In this work, we consider the
settings of social tagging systems to further understand tag distributions in
folksonomies. We analyze in depth the tag distributions on three large-scale
social tagging datasets, and analyze the effect on a resource classification
task. To this end, we study the appropriateness of applying weighting schemes
based on the well-known TF-IDF for resource classification. We show the great
importance of settings in altering tag distributions. Among those settings,
tag suggestions produce very different folksonomies, which condition the
success of the employed weighting schemes. Our findings and analyses are
relevant for researchers studying tag-based resource classification, user
behavior in social networks, the structure of folksonomies and tag
distributions, as well as for developers of social tagging systems in search of
an appropriate setting.
| [
{
"version": "v1",
"created": "Thu, 23 Feb 2012 18:36:06 GMT"
}
] | 2012-02-27T00:00:00 | [
[
"Zubiaga",
"Arkaitz",
""
],
[
"Martínez",
"Raquel",
""
],
[
"Fresno",
"Víctor",
""
]
] | TITLE: Analyzing Tag Distributions in Folksonomies for Resource Classification
ABSTRACT: Recent research has shown the usefulness of social tags as a data source to
feed resource classification. Little is known about the effect of settings on
folksonomies created on social tagging systems. In this work, we consider the
settings of social tagging systems to further understand tag distributions in
folksonomies. We analyze in depth the tag distributions on three large-scale
social tagging datasets, and analyze the effect on a resource classification
task. To this end, we study the appropriateness of applying weighting schemes
based on the well-known TF-IDF for resource classification. We show the great
importance of settings in altering tag distributions. Among those settings,
tag suggestions produce very different folksonomies, which condition the
success of the employed weighting schemes. Our findings and analyses are
relevant for researchers studying tag-based resource classification, user
behavior in social networks, the structure of folksonomies and tag
distributions, as well as for developers of social tagging systems in search of
an appropriate setting.
| no_new_dataset | 0.954265 |
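The record above studies TF-IDF-style weighting schemes over social tags. As a concrete illustration only, the sketch below computes a plain TF-IDF weight for each tag of each resource in a toy folksonomy; the specific weighting variants compared in the paper and its datasets are not reproduced.

```python
import math
from collections import Counter

def tag_tfidf(resources):
    """Compute TF-IDF weights for the tags of each resource in a folksonomy.

    `resources` maps a resource id to the list of tags users assigned to it
    (repetitions count); TF is the within-resource tag frequency and IDF
    down-weights tags that appear on many different resources.
    """
    n = len(resources)
    df = Counter()
    for tags in resources.values():
        df.update(set(tags))
    weights = {}
    for rid, tags in resources.items():
        tf = Counter(tags)
        total = sum(tf.values())
        weights[rid] = {t: (c / total) * math.log(n / df[t]) for t, c in tf.items()}
    return weights

# Toy folksonomy: three bookmarked pages and their (repeated) tags.
resources = {
    "page1": ["python", "tutorial", "python", "programming"],
    "page2": ["python", "web", "django"],
    "page3": ["cooking", "recipe", "web"],
}
print(tag_tfidf(resources)["page1"])
```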
1202.4805 | Joseph Pfeiffer III | Joseph J. Pfeiffer III, Timothy La Fond, Sebastian Moreno, Jennifer
Neville | Fast Generation of Large Scale Social Networks with Clustering | 11 pages | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A key challenge within the social network literature is the problem of
network generation - that is, how can we create synthetic networks that match
characteristics traditionally found in most real world networks? Important
characteristics that are present in social networks include a power law degree
distribution, small diameter and large amounts of clustering; however, most
current network generators, such as the Chung Lu and Kronecker models, largely
ignore the clustering present in a graph and choose to focus on preserving
other network statistics, such as the power law distribution. Models such as
the exponential random graph model have a transitivity parameter, but are
computationally difficult to learn, making scaling to large real world networks
intractable. In this work, we propose an extension to the Chung Lu random
graph model, the Transitive Chung Lu (TCL) model, which incorporates the notion
of a random transitive edge. That is, with some probability it will choose to
connect to a node exactly two hops away, having been introduced to a 'friend of
a friend'. In all other cases it will follow the standard Chung Lu model,
selecting a 'random surfer' from anywhere in the graph according to the given
invariant distribution. We prove TCL's expected degree distribution is equal to
the degree distribution of the original graph, while being able to capture the
clustering present in the network. The single parameter required by our model
can be learned in seconds on graphs with millions of edges, while networks can
be generated in time that is linear in the number of edges. We demonstrate the
performance of TCL on four real-world social networks, including an email dataset
with hundreds of thousands of nodes and millions of edges, showing TCL
generates graphs that match the degree distribution, clustering coefficients
and hop plots of the original networks.
| [
{
"version": "v1",
"created": "Wed, 22 Feb 2012 01:35:16 GMT"
}
] | 2012-02-23T00:00:00 | [
[
"Pfeiffer",
"Joseph J.",
"III"
],
[
"La Fond",
"Timothy",
""
],
[
"Moreno",
"Sebastian",
""
],
[
"Neville",
"Jennifer",
""
]
] | TITLE: Fast Generation of Large Scale Social Networks with Clustering
ABSTRACT: A key challenge within the social network literature is the problem of
network generation - that is, how can we create synthetic networks that match
characteristics traditionally found in most real world networks? Important
characteristics that are present in social networks include a power law degree
distribution, small diameter and large amounts of clustering; however, most
current network generators, such as the Chung Lu and Kronecker models, largely
ignore the clustering present in a graph and choose to focus on preserving
other network statistics, such as the power law distribution. Models such as
the exponential random graph model have a transitivity parameter, but are
computationally difficult to learn, making scaling to large real world networks
intractable. In this work, we propose an extension to the Chung Lu random
graph model, the Transitive Chung Lu (TCL) model, which incorporates the notion
of a random transitive edge. That is, with some probability it will choose to
connect to a node exactly two hops away, having been introduced to a 'friend of
a friend'. In all other cases it will follow the standard Chung Lu model,
selecting a 'random surfer' from anywhere in the graph according to the given
invariant distribution. We prove TCL's expected degree distribution is equal to
the degree distribution of the original graph, while being able to capture the
clustering present in the network. The single parameter required by our model
can be learned in seconds on graphs with millions of edges, while networks can
be generated in time that is linear in the number of edges. We demonstrate the
performance of TCL on four real-world social networks, including an email dataset
with hundreds of thousands of nodes and millions of edges, showing TCL
generates graphs that match the degree distribution, clustering coefficients
and hop plots of the original networks.
| no_new_dataset | 0.947769 |
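The record above defines the Transitive Chung Lu (TCL) model informally. The sketch below mimics its generative step as described in the abstract: with probability rho a new neighbour is a 'friend of a friend', otherwise a degree-proportional Chung Lu 'random surfer'. How rho is learned, duplicate-edge handling and the exact sampling order are simplified assumptions here, not the paper's algorithm.

```python
import random
from collections import defaultdict

def transitive_chung_lu(degrees, rho, n_edges, rng=None):
    """Rough sketch of the Transitive Chung Lu generative step.

    With probability rho a new neighbour is chosen two hops away (a 'friend of
    a friend'); otherwise it is a Chung Lu 'random surfer', i.e. a node drawn
    with probability proportional to its target degree.
    """
    rng = rng or random.Random(0)
    nodes = list(degrees)
    weights = [degrees[v] for v in nodes]
    adj = defaultdict(set)

    def random_surfer():
        return rng.choices(nodes, weights=weights, k=1)[0]

    for _ in range(n_edges):
        u = random_surfer()
        v = None
        if rng.random() < rho and adj[u]:
            mid = rng.choice(sorted(adj[u]))         # a friend of u
            if adj[mid]:
                v = rng.choice(sorted(adj[mid]))     # a friend of that friend
        if v is None or v == u:
            v = random_surfer()
        if v != u:
            adj[u].add(v)
            adj[v].add(u)
    return adj

# Toy target degree sequence following a rough power law.
degrees = {i: max(1, int(50 / (i + 1))) for i in range(100)}
graph = transitive_chung_lu(degrees, rho=0.3, n_edges=500)
```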
1201.3133 | Odemir Bruno PhD | Jo\~ao Batista Florindo, Odemir Martinez Bruno | Fractal Descriptors in the Fourier Domain Applied to Color Texture
Analysis | Chaos, Volume 21, Issue 4, 2011 | null | 10.1063/1.3650233 | null | physics.data-an cs.CV math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The present work proposes the development of a novel method to provide
descriptors for colored texture images. The method consists of two steps. In
the first, we apply a linear transform in the color space of the image aiming
at highlighting spatial structuring relations among the colors of pixels. In
the second, we apply a multiscale approach to the calculation of the fractal
dimension based on the Fourier transform. From this multiscale operation, we
extract the descriptors used to discriminate the texture represented in digital
images. The accuracy of the method is verified in the classification of two
color texture datasets, by comparing the performance of the proposed technique
to other classical and state-of-the-art methods for color texture analysis. The
results showed an advantage of almost 3% of the proposed technique over the
second best approach.
| [
{
"version": "v1",
"created": "Sun, 15 Jan 2012 22:33:43 GMT"
},
{
"version": "v2",
"created": "Mon, 20 Feb 2012 01:20:34 GMT"
}
] | 2012-02-21T00:00:00 | [
[
"Florindo",
"João Batista",
""
],
[
"Bruno",
"Odemir Martinez",
""
]
] | TITLE: Fractal Descriptors in the Fourier Domain Applied to Color Texture
Analysis
ABSTRACT: The present work proposes the development of a novel method to provide
descriptors for colored texture images. The method consists of two steps. In
the first, we apply a linear transform in the color space of the image aiming
at highlighting spatial structuring relations among the colors of pixels. In
the second, we apply a multiscale approach to the calculation of the fractal
dimension based on the Fourier transform. From this multiscale operation, we
extract the descriptors used to discriminate the texture represented in digital
images. The accuracy of the method is verified in the classification of two
color texture datasets, by comparing the performance of the proposed technique
to other classical and state-of-the-art methods for color texture analysis. The
results showed an advantage of almost 3% of the proposed technique over the
second best approach.
| no_new_dataset | 0.952175 |
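The record above derives texture descriptors from a Fourier-based fractal dimension. The snippet below shows one common spectral estimator of fractal dimension (the slope of the radially averaged power spectrum, using the fractional-Brownian-surface relation D = (8 - beta) / 2); it is only a reference point, since the paper's color transform and multiscale descriptor construction are not reproduced, and the toy random input is just for demonstration.

```python
import numpy as np

def fourier_fractal_dimension(img):
    """Estimate the fractal dimension of a 2-D image from the decay of its
    radially averaged Fourier power spectrum P(f) ~ f^(-beta), D = (8 - beta) / 2."""
    f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    power = np.abs(f) ** 2
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2).astype(int)
    # Radial average of the power spectrum.
    counts = np.maximum(np.bincount(r.ravel()), 1)
    radial = np.bincount(r.ravel(), power.ravel()) / counts
    freqs = np.arange(1, min(h, w) // 2)
    slope, _ = np.polyfit(np.log(freqs), np.log(radial[freqs] + 1e-12), 1)
    return (8 + slope) / 2  # slope = -beta

# Toy call on a random image; real use would pass a (transformed) texture image.
print(fourier_fractal_dimension(np.random.default_rng(0).random((128, 128))))
```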
1201.4292 | John Whitbeck | John Whitbeck, Yoann Lopez, Jeremie Leguay, Vania Conan, Marcelo Dias
de Amorim | Push-and-Track: Saving Infrastructure Bandwidth Through Opportunistic
Forwarding | Accepted for publication in the Pervasive and Mobile Computing
journal | null | 10.1016/j.pmcj.2012.02.001 | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Major wireless operators are nowadays facing network capacity issues in
striving to meet the growing demands of mobile users. At the same time,
3G-enabled devices increasingly benefit from ad hoc radio connectivity (e.g.,
Wi-Fi). In this context of hybrid connectivity, we propose Push-and-track, a
content dissemination framework that harnesses ad hoc communication
opportunities to minimize the load on the wireless infrastructure while
guaranteeing tight delivery delays. It achieves this through a control loop
that collects user-sent acknowledgements to determine if new copies need to be
reinjected into the network through the 3G interface. Push-and-Track is
flexible and can be applied to a variety of scenarios, including periodic
message flooding and floating data. For the former, this paper examines
multiple strategies to determine how many copies of the content should be
injected, when, and to whom; for the latter, it examines the achievable offload
ratio depending on the freshness constraints. The short delay-tolerance of
common content, such as news or road traffic updates, makes it suitable for
such a system. Use cases with a long delay-tolerance, such as software updates,
are an even better fit. Based on a realistic large-scale vehicular dataset from
the city of Bologna composed of more than 10,000 vehicles, we demonstrate that
Push-and-Track consistently meets its delivery objectives while reducing the
use of the 3G network by about 90%.
| [
{
"version": "v1",
"created": "Fri, 20 Jan 2012 13:53:37 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Feb 2012 10:26:19 GMT"
},
{
"version": "v3",
"created": "Sat, 18 Feb 2012 14:15:51 GMT"
}
] | 2012-02-21T00:00:00 | [
[
"Whitbeck",
"John",
""
],
[
"Lopez",
"Yoann",
""
],
[
"Leguay",
"Jeremie",
""
],
[
"Conan",
"Vania",
""
],
[
"de Amorim",
"Marcelo Dias",
""
]
] | TITLE: Push-and-Track: Saving Infrastructure Bandwidth Through Opportunistic
Forwarding
ABSTRACT: Major wireless operators are nowadays facing network capacity issues in
striving to meet the growing demands of mobile users. At the same time,
3G-enabled devices increasingly benefit from ad hoc radio connectivity (e.g.,
Wi-Fi). In this context of hybrid connectivity, we propose Push-and-track, a
content dissemination framework that harnesses ad hoc communication
opportunities to minimize the load on the wireless infrastructure while
guaranteeing tight delivery delays. It achieves this through a control loop
that collects user-sent acknowledgements to determine if new copies need to be
reinjected into the network through the 3G interface. Push-and-Track is
flexible and can be applied to a variety of scenarios, including periodic
message flooding and floating data. For the former, this paper examines
multiple strategies to determine how many copies of the content should be
injected, when, and to whom; for the latter, it examines the achievable offload
ratio depending on the freshness constraints. The short delay-tolerance of
common content, such as news or road traffic updates, makes it suitable for
such a system. Use cases with a long delay-tolerance, such as software updates,
are an even better fit. Based on a realistic large-scale vehicular dataset from
the city of Bologna composed of more than 10,000 vehicles, we demonstrate that
Push-and-Track consistently meets its delivery objectives while reducing the
use of the 3G network by about 90%.
| no_new_dataset | 0.943452 |
1202.3702 | Avleen S. Bijral | Avleen S. Bijral, Nathan Ratliff, Nathan Srebro | Semi-supervised Learning with Density Based Distances | null | null | null | UAI-P-2011-PG-43-50 | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a simple, yet effective, approach to Semi-Supervised Learning. Our
approach is based on estimating density-based distances (DBD) using a shortest
path calculation on a graph. These Graph-DBD estimates can then be used in any
distance-based supervised learning method, such as Nearest Neighbor methods and
SVMs with RBF kernels. In order to apply the method to very large data sets, we
also present a novel algorithm which integrates nearest neighbor computations
into the shortest path search and can find exact shortest paths even in
extremely large dense graphs. Significant runtime improvement over the commonly
used Laplacian regularization method is then shown on a large scale dataset.
| [
{
"version": "v1",
"created": "Tue, 14 Feb 2012 16:41:17 GMT"
}
] | 2012-02-20T00:00:00 | [
[
"Bijral",
"Avleen S.",
""
],
[
"Ratliff",
"Nathan",
""
],
[
"Srebro",
"Nathan",
""
]
] | TITLE: Semi-supervised Learning with Density Based Distances
ABSTRACT: We present a simple, yet effective, approach to Semi-Supervised Learning. Our
approach is based on estimating density-based distances (DBD) using a shortest
path calculation on a graph. These Graph-DBD estimates can then be used in any
distance-based supervised learning method, such as Nearest Neighbor methods and
SVMs with RBF kernels. In order to apply the method to very large data sets, we
also present a novel algorithm which integrates nearest neighbor computations
into the shortest path search and can find exact shortest paths even in
extremely large dense graphs. Significant runtime improvement over the commonly
used Laplacian regularization method is then shown on a large scale dataset.
| no_new_dataset | 0.950915 |
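The record above estimates density-based distances via shortest paths on a graph. The sketch below builds a k-nearest-neighbour graph whose edge lengths are Euclidean distances raised to a power p (so paths through dense regions are cheaper) and runs Dijkstra from every node; the paper's exact edge weighting and its integrated nearest-neighbour/shortest-path search are not reproduced, and k and p are illustrative choices.

```python
import heapq
import numpy as np

def graph_dbd(X, k=10, p=2.0):
    """Rough sketch of graph-based density-based distances (Graph-DBD)."""
    n = len(X)
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    neighbours = np.argsort(D, axis=1)[:, 1:k + 1]   # skip self at index 0
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in neighbours[i]:
            w = D[i, j] ** p                         # penalize long (sparse-region) hops
            adj[i].append((j, w))
            adj[j].append((i, w))

    def dijkstra(src):
        dist = np.full(n, np.inf)
        dist[src] = 0.0
        heap = [(0.0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist[u]:
                continue
            for v, w in adj[u]:
                if d + w < dist[v]:
                    dist[v] = d + w
                    heapq.heappush(heap, (d + w, v))
        return dist

    return np.vstack([dijkstra(s) for s in range(n)])

# The resulting distance matrix can feed any distance-based learner (k-NN, RBF-SVM).
X = np.random.default_rng(0).normal(size=(200, 2))
dbd = graph_dbd(X, k=10, p=2.0)
```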
1202.3722 | Inmar Givoni | Inmar Givoni, Clement Chung, Brendan J. Frey | Hierarchical Affinity Propagation | null | null | null | UAI-P-2011-PG-238-246 | cs.LG cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Affinity propagation is an exemplar-based clustering algorithm that finds a
set of data-points that best exemplify the data, and associates each datapoint
with one exemplar. We extend affinity propagation in a principled way to solve
the hierarchical clustering problem, which arises in a variety of domains
including biology, sensor networks and decision making in operational research.
We derive an inference algorithm that operates by propagating information up
and down the hierarchy, and is efficient despite the high-order potentials
required for the graphical model formulation. We demonstrate that our method
outperforms greedy techniques that cluster one layer at a time. We show that on
an artificial dataset designed to mimic the HIV-strain mutation dynamics, our
method outperforms related methods. For real HIV sequences, where the ground
truth is not available, we show our method achieves better results, in terms of
the underlying objective function, and show the results correspond meaningfully
to geographical location and strain subtypes. Finally we report results on
using the method for the analysis of mass spectra, showing it performs
favorably compared to state-of-the-art methods.
| [
{
"version": "v1",
"created": "Tue, 14 Feb 2012 16:41:17 GMT"
}
] | 2012-02-20T00:00:00 | [
[
"Givoni",
"Inmar",
""
],
[
"Chung",
"Clement",
""
],
[
"Frey",
"Brendan J.",
""
]
] | TITLE: Hierarchical Affinity Propagation
ABSTRACT: Affinity propagation is an exemplar-based clustering algorithm that finds a
set of data-points that best exemplify the data, and associates each datapoint
with one exemplar. We extend affinity propagation in a principled way to solve
the hierarchical clustering problem, which arises in a variety of domains
including biology, sensor networks and decision making in operational research.
We derive an inference algorithm that operates by propagating information up
and down the hierarchy, and is efficient despite the high-order potentials
required for the graphical model formulation. We demonstrate that our method
outperforms greedy techniques that cluster one layer at a time. We show that on
an artificial dataset designed to mimic the HIV-strain mutation dynamics, our
method outperforms related methods. For real HIV sequences, where the ground
truth is not available, we show our method achieves better results, in terms of
the underlying objective function, and show the results correspond meaningfully
to geographical location and strain subtypes. Finally we report results on
using the method for the analysis of mass spectra, showing it performs
favorably compared to state-of-the-art methods.
| no_new_dataset | 0.917117 |
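The record above extends affinity propagation to hierarchies. For orientation, the snippet below runs the flat (single-level) affinity propagation that the hierarchical model generalizes, using scikit-learn's implementation on toy 2-D data; it does not implement the paper's layered message passing, and a reasonably recent scikit-learn (with the random_state parameter) is assumed.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

# Flat affinity propagation on three toy Gaussian blobs; the hierarchical
# variant of the paper stacks several such layers with joint message passing.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(50, 2))
               for c in ([0, 0], [3, 3], [0, 4])])
ap = AffinityPropagation(random_state=0).fit(X)
print("exemplar indices:", ap.cluster_centers_indices_)
print("number of clusters:", len(ap.cluster_centers_indices_))
```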
1202.3769 | Feng Yan | Feng Yan, Zenglin Xu, Yuan (Alan) Qi | Sparse matrix-variate Gaussian process blockmodels for network modeling | null | null | null | UAI-P-2011-PG-745-752 | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We face network data from various sources, such as protein interactions and
online social networks. A critical problem is to model network interactions and
identify latent groups of network nodes. This problem is challenging due to
many reasons. For example, the network nodes are interdependent instead of
independent of each other, and the data are known to be very noisy (e.g.,
missing edges). To address these challenges, we propose a new relational model
for network data, Sparse Matrix-variate Gaussian process Blockmodel (SMGB). Our
model generalizes popular bilinear generative models and captures nonlinear
network interactions using a matrix-variate Gaussian process with latent
membership variables. We also assign sparse prior distributions on the latent
membership variables to learn sparse group assignments for individual network
nodes. To estimate the latent variables efficiently from data, we develop an
efficient variational expectation maximization method. We compared our
approaches with several state-of-the-art network models on both synthetic and
real-world network datasets. Experimental results demonstrate SMGBs outperform
the alternative approaches in terms of discovering latent classes or predicting
unknown interactions.
| [
{
"version": "v1",
"created": "Tue, 14 Feb 2012 16:41:17 GMT"
}
] | 2012-02-20T00:00:00 | [
[
"Yan",
"Feng",
"",
"Alan"
],
[
"Xu",
"Zenglin",
"",
"Alan"
],
[
"Yuan",
"",
"",
"Alan"
],
[
"Qi",
"",
""
]
] | TITLE: Sparse matrix-variate Gaussian process blockmodels for network modeling
ABSTRACT: We face network data from various sources, such as protein interactions and
online social networks. A critical problem is to model network interactions and
identify latent groups of network nodes. This problem is challenging due to
many reasons. For example, the network nodes are interdependent instead of
independent of each other, and the data are known to be very noisy (e.g.,
missing edges). To address these challenges, we propose a new relational model
for network data, Sparse Matrix-variate Gaussian process Blockmodel (SMGB). Our
model generalizes popular bilinear generative models and captures nonlinear
network interactions using a matrix-variate Gaussian process with latent
membership variables. We also assign sparse prior distributions on the latent
membership variables to learn sparse group assignments for individual network
nodes. To estimate the latent variables efficiently from data, we develop an
efficient variational expectation maximization method. We compared our
approaches with several state-of-the-art network models on both synthetic and
real-world network datasets. Experimental results demonstrate SMGBs outperform
the alternative approaches in terms of discovering latent classes or predicting
unknown interactions.
| no_new_dataset | 0.950915 |
1202.3770 | Jian-Bo Yang | Jian-Bo Yang, Ivor W. Tsang | Hierarchical Maximum Margin Learning for Multi-Class Classification | null | null | null | UAI-P-2011-PG-753-760 | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Due to myriads of classes, designing accurate and efficient classifiers
becomes very challenging for multi-class classification. Recent research has
shown that class structure learning can greatly facilitate multi-class
learning. In this paper, we propose a novel method to learn the class structure
for multi-class classification problems. The class structure is assumed to be a
binary hierarchical tree. To learn such a tree, we propose a maximum separating
margin method to determine the child nodes of any internal node. The proposed
method ensures that two class groups represented by any two sibling nodes are
most separable. In the experiments, we evaluate the accuracy and efficiency of
the proposed method over other multi-class classification methods on real world
large-scale problems. The results show that the proposed method outperforms
benchmark methods in terms of accuracy for most datasets and performs
comparably with other class structure learning methods in terms of efficiency
for all datasets.
| [
{
"version": "v1",
"created": "Tue, 14 Feb 2012 16:41:17 GMT"
}
] | 2012-02-20T00:00:00 | [
[
"Yang",
"Jian-Bo",
""
],
[
"Tsang",
"Ivor W.",
""
]
] | TITLE: Hierarchical Maximum Margin Learning for Multi-Class Classification
ABSTRACT: Due to myriads of classes, designing accurate and efficient classifiers
becomes very challenging for multi-class classification. Recent research has
shown that class structure learning can greatly facilitate multi-class
learning. In this paper, we propose a novel method to learn the class structure
for multi-class classification problems. The class structure is assumed to be a
binary hierarchical tree. To learn such a tree, we propose a maximum separating
margin method to determine the child nodes of any internal node. The proposed
method ensures that two class groups represented by any two sibling nodes are
most separable. In the experiments, we evaluate the accuracy and efficiency of
the proposed method over other multi-class classification methods on real world
large-scale problems. The results show that the proposed method outperforms
benchmark methods in terms of accuracy for most datasets and performs
comparably with other class structure learning methods in terms of efficiency
for all datasets.
| no_new_dataset | 0.949435 |
1202.3776 | Xinhua Zhang | Xinhua Zhang, Ankan Saha, S. V.N. Vishwanatan | Smoothing Multivariate Performance Measures | null | null | null | UAI-P-2011-PG-814-821 | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A Support Vector Method for multivariate performance measures was recently
introduced by Joachims (2005). The underlying optimization problem is currently
solved using cutting plane methods such as SVM-Perf and BMRM. One can show that
these algorithms converge to an $\eta$ accurate solution in $O(1/(\lambda\eta))$
iterations, where $\lambda$ is the trade-off parameter between the regularizer and
the loss function. We present a smoothing strategy for multivariate performance
scores, in particular precision/recall break-even point and ROCArea. When
combined with Nesterov's accelerated gradient algorithm, our smoothing strategy
yields an optimization algorithm which converges to an $\eta$ accurate solution in
$O(\min\{1/\eta, 1/\sqrt{\lambda\eta}\})$ iterations. Furthermore, the cost per iteration of
our scheme is the same as that of SVM-Perf and BMRM. Empirical evaluation on a
number of publicly available datasets shows that our method converges
significantly faster than cutting plane methods without sacrificing
generalization ability.
| [
{
"version": "v1",
"created": "Tue, 14 Feb 2012 16:41:17 GMT"
}
] | 2012-02-20T00:00:00 | [
[
"Zhang",
"Xinhua",
""
],
[
"Saha",
"Ankan",
""
],
[
"Vishwanatan",
"S. V. N.",
""
]
] | TITLE: Smoothing Multivariate Performance Measures
ABSTRACT: A Support Vector Method for multivariate performance measures was recently
introduced by Joachims (2005). The underlying optimization problem is currently
solved using cutting plane methods such as SVM-Perf and BMRM. One can show that
these algorithms converge to an $\eta$ accurate solution in $O(1/(\lambda\eta))$
iterations, where $\lambda$ is the trade-off parameter between the regularizer and
the loss function. We present a smoothing strategy for multivariate performance
scores, in particular precision/recall break-even point and ROCArea. When
combined with Nesterov's accelerated gradient algorithm, our smoothing strategy
yields an optimization algorithm which converges to an $\eta$ accurate solution in
$O(\min\{1/\eta, 1/\sqrt{\lambda\eta}\})$ iterations. Furthermore, the cost per iteration of
our scheme is the same as that of SVM-Perf and BMRM. Empirical evaluation on a
number of publicly available datasets shows that our method converges
significantly faster than cutting plane methods without sacrificing
generalization ability.
| no_new_dataset | 0.95275 |
1104.0729 | Afshin Rostamizadeh | Afshin Rostamizadeh, Alekh Agarwal, Peter Bartlett | Online and Batch Learning Algorithms for Data with Missing Features | null | 27th Conference on Uncertainty in Artificial Intelligence (UAI
2011) | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce new online and batch algorithms that are robust to data with
missing features, a situation that arises in many practical applications. In
the online setup, we allow for the comparison hypothesis to change as a
function of the subset of features that is observed on any given round,
extending the standard setting where the comparison hypothesis is fixed
throughout. In the batch setup, we present a convex relaxation of a non-convex
problem to jointly estimate an imputation function, used to fill in the values
of missing features, along with the classification hypothesis. We prove regret
bounds in the online setting and Rademacher complexity bounds for the batch
i.i.d. setting. The algorithms are tested on several UCI datasets, showing
superior performance over baselines.
| [
{
"version": "v1",
"created": "Tue, 5 Apr 2011 04:28:51 GMT"
},
{
"version": "v2",
"created": "Mon, 2 May 2011 05:24:55 GMT"
},
{
"version": "v3",
"created": "Sat, 21 May 2011 18:17:23 GMT"
},
{
"version": "v4",
"created": "Thu, 16 Jun 2011 15:40:28 GMT"
}
] | 2012-02-19T00:00:00 | [
[
"Rostamizadeh",
"Afshin",
""
],
[
"Agarwal",
"Alekh",
""
],
[
"Bartlett",
"Peter",
""
]
] | TITLE: Online and Batch Learning Algorithms for Data with Missing Features
ABSTRACT: We introduce new online and batch algorithms that are robust to data with
missing features, a situation that arises in many practical applications. In
the online setup, we allow for the comparison hypothesis to change as a
function of the subset of features that is observed on any given round,
extending the standard setting where the comparison hypothesis is fixed
throughout. In the batch setup, we present a convex relaxation of a non-convex
problem to jointly estimate an imputation function, used to fill in the values
of missing features, along with the classification hypothesis. We prove regret
bounds in the online setting and Rademacher complexity bounds for the batch
i.i.d. setting. The algorithms are tested on several UCI datasets, showing
superior performance over baselines.
| no_new_dataset | 0.946794 |
1202.3619 | Yamir Moreno Vega | J. Sanz, E.Cozzo, J. Borge-Holthoefer, and Y. Moreno | Topological effects of data incompleteness of gene regulatory networks | Supplementary Material is available on request | null | null | null | physics.bio-ph physics.soc-ph q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The topological analysis of biological networks has been a prolific topic in
network science during the last decade. A persistent problem with this approach
is the inherent uncertainty and noisy nature of the data. One of the cases in
which this situation is more marked is that of transcriptional regulatory
networks (TRNs) in bacteria. The datasets are incomplete because regulatory
pathways associated to a relevant fraction of bacterial genes remain unknown.
Furthermore, direction, strengths and signs of the links are sometimes unknown
or simply overlooked. Finally, the experimental approaches to infer the
regulations are highly heterogeneous, in a way that induces the appearance of
systematic experimental-topological correlations. And yet, the quality of the
available data increases constantly. In this work we capitalize on these
advances to point out the influence of data (in)completeness and quality on
some classical results on topological analysis of TRNs, especially regarding
modularity at different levels. In doing so, we identify the most relevant
factors affecting the validity of previous findings, highlighting important
caveats to future prokaryotic TRNs topological analysis.
| [
{
"version": "v1",
"created": "Thu, 16 Feb 2012 15:30:32 GMT"
}
] | 2012-02-17T00:00:00 | [
[
"Sanz",
"J.",
""
],
[
"Cozzo",
"E.",
""
],
[
"Borge-Holthoefer",
"J.",
""
],
[
"Moreno",
"Y.",
""
]
] | TITLE: Topological effects of data incompleteness of gene regulatory networks
ABSTRACT: The topological analysis of biological networks has been a prolific topic in
network science during the last decade. A persistent problem with this approach
is the inherent uncertainty and noisy nature of the data. One of the cases in
which this situation is more marked is that of transcriptional regulatory
networks (TRNs) in bacteria. The datasets are incomplete because regulatory
pathways associated to a relevant fraction of bacterial genes remain unknown.
Furthermore, direction, strengths and signs of the links are sometimes unknown
or simply overlooked. Finally, the experimental approaches to infer the
regulations are highly heterogeneous, in a way that induces the appearance of
systematic experimental-topological correlations. And yet, the quality of the
available data increases constantly. In this work we capitalize on these
advances to point out the influence of data (in)completeness and quality on
some classical results on topological analysis of TRNs, especially regarding
modularity at different levels. In doing so, we identify the most relevant
factors affecting the validity of previous findings, highlighting important
caveats to future prokaryotic TRNs topological analysis.
| no_new_dataset | 0.950041 |
1202.2368 | Afzal Godil | Sarah Tang and Afzal Godil | An evaluation of local shape descriptors for 3D shape retrieval | IS&T/SPIE Electronic Imaging 2012, Proceedings Vol. 8290
Three-Dimensional Image Processing (3DIP) and Applications II, Atilla M.
Baskurt; Robert Sitnik, Editors, 82900N Dates: Tuesday-Thursday 24 - 26
January 2012, Paper 8290-22 | null | 10.1117/12.912153 | Paper 8290-22, Proceedings Vol. 8290 | cs.CV cs.CG cs.DL cs.IR cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As the usage of 3D models increases, so does the importance of developing
accurate 3D shape retrieval algorithms. A common approach is to calculate a
shape descriptor for each object, which can then be compared to determine two
objects' similarity. However, these descriptors are often evaluated
independently and on different datasets, making them difficult to compare.
Using the SHREC 2011 Shape Retrieval Contest of Non-rigid 3D Watertight Meshes
dataset, we systematically evaluate a collection of local shape descriptors. We
apply each descriptor to the bag-of-words paradigm and assess the effects of
varying the dictionary's size and the number of sample points. In addition,
several salient point detection methods are used to choose sample points; these
methods are compared to each other and to random selection. Finally,
information from two local descriptors is combined in two ways and changes in
performance are investigated. This paper presents the results of these experiments.
| [
{
"version": "v1",
"created": "Fri, 10 Feb 2012 21:02:39 GMT"
}
] | 2012-02-14T00:00:00 | [
[
"Tang",
"Sarah",
""
],
[
"Godil",
"Afzal",
""
]
] | TITLE: An evaluation of local shape descriptors for 3D shape retrieval
ABSTRACT: As the usage of 3D models increases, so does the importance of developing
accurate 3D shape retrieval algorithms. A common approach is to calculate a
shape descriptor for each object, which can then be compared to determine two
objects' similarity. However, these descriptors are often evaluated
independently and on different datasets, making them difficult to compare.
Using the SHREC 2011 Shape Retrieval Contest of Non-rigid 3D Watertight Meshes
dataset, we systematically evaluate a collection of local shape descriptors. We
apply each descriptor to the bag-of-words paradigm and assess the effects of
varying the dictionary's size and the number of sample points. In addition,
several salient point detection methods are used to choose sample points; these
methods are compared to each other and to random selection. Finally,
information from two local descriptors is combined in two ways and changes in
performance are investigated. This paper presents the results of these experiments.
| no_new_dataset | 0.948106 |
1202.2449 | Salah A. Aly | Moataz M. Abdelwahab, Salah A. Aly, Islam Yousry | Efficient Web-based Facial Recognition System Employing 2DHOG | null | null | null | null | cs.CV cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, a system for facial recognition to identify missing and found
people in Hajj and Umrah is described as a web portal. Explicitly, we present a
novel algorithm for the recognition and classification of facial images based on
applying 2DPCA to a 2D representation of the Histogram of oriented gradients
(2D-HOG) which maintains the spatial relation between pixels of the input
images. This algorithm allows a compact representation of the images which
reduces the computational complexity and the storage requirements, while
maintaining the highest reported recognition accuracy. This makes the method
well suited to very large datasets. A large dataset was collected for
people in Hajj. Experimental results employing the ORL, UMIST, JAFFE, and HAJJ
datasets confirm these excellent properties.
| [
{
"version": "v1",
"created": "Sat, 11 Feb 2012 15:24:18 GMT"
}
] | 2012-02-14T00:00:00 | [
[
"Abdelwahab",
"Moataz M.",
""
],
[
"Aly",
"Salah A.",
""
],
[
"Yousry",
"Islam",
""
]
] | TITLE: Efficient Web-based Facial Recognition System Employing 2DHOG
ABSTRACT: In this paper, a system for facial recognition to identify missing and found
people in Hajj and Umrah is described as a web portal. Explicitly, we present a
novel algorithm for the recognition and classification of facial images based on
applying 2DPCA to a 2D representation of the Histogram of oriented gradients
(2D-HOG) which maintains the spatial relation between pixels of the input
images. This algorithm allows a compact representation of the images which
reduces the computational complexity and the storage requirements, while
maintaining the highest reported recognition accuracy. This makes the method
well suited to very large datasets. A large dataset was collected for
people in Hajj. Experimental results employing the ORL, UMIST, JAFFE, and HAJJ
datasets confirm these excellent properties.
| no_new_dataset | 0.944434 |
1112.2459 | Alireza Abbasi | Alireza Abbasi, Liaquat Hossain | Hybrid Centrality Measures for Binary and Weighted Networks | a short version accepted in the 3rd workshop on Complex Network [Full
Paper submitted to JASIST in April 2011] | null | null | null | physics.soc-ph cs.DL cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing centrality measures for social network analysis indicate the
importance of an actor and give consideration to the actor's structural
position in a network. Each of these existing measures captures a specific attribute of an
actor (i.e., popularity, accessibility, or brokerage behavior). In this study,
we propose new hybrid centrality measures (i.e., Degree-Degree,
Degree-Closeness and Degree-Betweenness) by combining existing measures (i.e.,
degree, closeness and betweenness), with the aim of better understanding the
importance of actors in a given network. A generalized set of measures is also
proposed for weighted networks. Our analysis of a co-authorship network dataset
suggests that the proposed centrality measures (especially the weighted ones)
correlate more strongly with the performance of the scholars than traditional
centrality measures do. Thus, they are useful measures which can be used
instead of traditional measures to show the prominence of actors in a network.
| [
{
"version": "v1",
"created": "Mon, 12 Dec 2011 07:19:26 GMT"
},
{
"version": "v2",
"created": "Mon, 16 Jan 2012 15:22:11 GMT"
},
{
"version": "v3",
"created": "Sat, 21 Jan 2012 04:53:33 GMT"
},
{
"version": "v4",
"created": "Fri, 10 Feb 2012 04:29:14 GMT"
}
] | 2012-02-13T00:00:00 | [
[
"Abbasi",
"Alireza",
""
],
[
"Hossain",
"Liaquat",
""
]
] | TITLE: Hybrid Centrality Measures for Binary and Weighted Networks
ABSTRACT: Existing centrality measures for social network analysis indicate the
importance of an actor and give consideration to the actor's structural
position in a network. Each of these existing measures captures a specific attribute of an
actor (i.e., popularity, accessibility, or brokerage behavior). In this study,
we propose new hybrid centrality measures (i.e., Degree-Degree,
Degree-Closeness and Degree-Betweenness) by combining existing measures (i.e.,
degree, closeness and betweenness), with the aim of better understanding the
importance of actors in a given network. A generalized set of measures is also
proposed for weighted networks. Our analysis of a co-authorship network dataset
suggests that the proposed centrality measures (especially the weighted ones)
correlate more strongly with the performance of the scholars than traditional
centrality measures do. Thus, they are useful measures which can be used
instead of traditional measures to show the prominence of actors in a network.
| no_new_dataset | 0.955693 |
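The preceding record combines degree with closeness and betweenness into hybrid centralities. The exact formulas are given in the paper; the sketch below only illustrates one plausible combination (summing a base centrality over a node's neighbours) on a stand-in graph, so every definition here should be read as an assumption rather than the authors' measure.

```python
# Assumed illustration of degree-based hybrid centralities: a node's hybrid
# score is the sum of a base centrality over its neighbours.
import networkx as nx

def hybrid_centrality(G, base):
    return {v: sum(base[u] for u in G.neighbors(v)) for v in G}

G = nx.karate_club_graph()                      # stand-in for a co-authorship network
degree = nx.degree_centrality(G)
closeness = nx.closeness_centrality(G)
betweenness = nx.betweenness_centrality(G)

degree_degree = hybrid_centrality(G, degree)            # "Degree-Degree"
degree_closeness = hybrid_centrality(G, closeness)      # "Degree-Closeness"
degree_betweenness = hybrid_centrality(G, betweenness)  # "Degree-Betweenness"

top = sorted(degree_closeness, key=degree_closeness.get, reverse=True)[:5]
print("Top-5 nodes by Degree-Closeness:", top)
```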
1201.5871 | Patrick Perry | Patrick O. Perry, Patrick J. Wolfe | Null models for network data | 12 pages, 2 figures; submitted for publication | null | null | null | math.ST cs.SI stat.ME stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The analysis of datasets taking the form of simple, undirected graphs
continues to gain in importance across a variety of disciplines. Two choices of
null model, the logistic-linear model and the implicit log-linear model, have
come into common use for analyzing such network data, in part because each
accounts for the heterogeneity of network node degrees typically observed in
practice. Here we show how these both may be viewed as instances of a broader
class of null models, with the property that all members of this class give
rise to essentially the same likelihood-based estimates of link probabilities
in sparse graph regimes. This facilitates likelihood-based computation and
inference, and enables practitioners to choose the most appropriate null model
from this family based on application context. Comparative model fits for a
variety of network datasets demonstrate the practical implications of our
results.
| [
{
"version": "v1",
"created": "Fri, 27 Jan 2012 19:30:46 GMT"
}
] | 2012-02-13T00:00:00 | [
[
"Perry",
"Patrick O.",
""
],
[
"Wolfe",
"Patrick J.",
""
]
] | TITLE: Null models for network data
ABSTRACT: The analysis of datasets taking the form of simple, undirected graphs
continues to gain in importance across a variety of disciplines. Two choices of
null model, the logistic-linear model and the implicit log-linear model, have
come into common use for analyzing such network data, in part because each
accounts for the heterogeneity of network node degrees typically observed in
practice. Here we show how these both may be viewed as instances of a broader
class of null models, with the property that all members of this class give
rise to essentially the same likelihood-based estimates of link probabilities
in sparse graph regimes. This facilitates likelihood-based computation and
inference, and enables practitioners to choose the most appropriate null model
from this family based on application context. Comparative model fits for a
variety of network datasets demonstrate the practical implications of our
results.
| no_new_dataset | 0.952442 |
1202.2153 | Everthon Valadao | Everthon Valadao, Dorgival Guedes, Ricardo Duarte | Caracteriza\c{c}\~ao de tempos de ida-e-volta na Internet | null | Revista Brasileira de Redes de Computadores e Sistemas
Distribu\'idos, v. 3, p. 21-34, 2010 | null | null | cs.NI cs.DC | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Round-trip times (RTTs) are an important metric for the operation of many
applications in the Internet. For instance, they are taken into account when
choosing servers or peers in streaming systems, and they impact the operation
of fault detectors and congestion control algorithms. Therefore, detailed
knowledge about RTTs is important for application and protocol developers. In
this work we present results on measuring RTTs between 81 PlanetLab nodes every
ten seconds, for ten days. The resulting dataset has over 550 million
measurements. Our analysis gives us a profile of delays in the network and
identifies a Gamma distribution as the model that best fits our data. The
average times observed are below 500 ms in more than 99% of the pairs, but
there is significant variation, not only when we compare different pairs of
hosts during the experiment, but also considering any given pair of hosts over
time. By using a clustering technique, we observe that links can be divided in
five distinct groups based on the distribution of RTTs over time and the losses
observed, ranging from groups of near, well-connected pairs, to groups of
distant hosts, with lower quality links between them.
| [
{
"version": "v1",
"created": "Thu, 9 Feb 2012 23:50:42 GMT"
}
] | 2012-02-13T00:00:00 | [
[
"Valadao",
"Everthon",
""
],
[
"Guedes",
"Dorgival",
""
],
[
"Duarte",
"Ricardo",
""
]
] | TITLE: Caracteriza\c{c}\~ao de tempos de ida-e-volta na Internet
ABSTRACT: Round-trip times (RTTs) are an important metric for the operation of many
applications in the Internet. For instance, they are taken into account when
choosing servers or peers in streaming systems, and they impact the operation
of fault detectors and congestion control algorithms. Therefore, detailed
knowledge about RTTs is important for application and protocol developers. In
this work we present results on measuring RTTs between 81 PlanetLab nodes every
ten seconds, for ten days. The resulting dataset has over 550 million
measurements. Our analysis gives us a profile of delays in the network and
identifies a Gamma distribution as the model that best fits our data. The
average times observed are below 500 ms in more than 99% of the pairs, but
there is significant variation, not only when we compare different pairs of
hosts during the experiment, but also considering any given pair of hosts over
time. By using a clustering technique, we observe that links can be divided in
five distinct groups based on the distribution of RTTs over time and the losses
observed, ranging from groups of near, well-connected pairs, to groups of
distant hosts, with lower quality links between them.
| no_new_dataset | 0.804098 |
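The preceding record reports that a Gamma distribution best models the measured RTTs. Below is a minimal sketch of such a fit; the RTT samples are synthetic placeholders rather than the PlanetLab measurements, and the shape and scale values are arbitrary.

```python
# Minimal sketch: fit a Gamma distribution to round-trip-time samples and check
# the share of RTTs below 500 ms. The data here are synthetic, not the paper's.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rtt_ms = rng.gamma(shape=2.5, scale=40.0, size=10_000)   # fake RTTs in milliseconds

shape, loc, scale = stats.gamma.fit(rtt_ms, floc=0)      # maximum-likelihood fit
ks = stats.kstest(rtt_ms, "gamma", args=(shape, loc, scale))

print(f"fitted shape={shape:.2f}, scale={scale:.1f} ms, KS p-value={ks.pvalue:.3f}")
print(f"share of RTTs below 500 ms: {(rtt_ms < 500).mean():.3%}")
```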
1202.1990 | Tapobrata Lahiri | Upendra Kumar, Tapobrata Lahiri and Manoj Kumar Pal | Non-parametric convolution based image-segmentation of ill-posed objects
applying context window approach | 10 pages, 7 figures, 4 tables, not published anywhere | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Context-dependence in the human cognition process is a well-established fact.
Following this, we introduced an image segmentation method that uses
context to classify a pixel on the basis of its membership to a particular
object-class of the image concerned. In broad methodological steps, each
pixel was defined by the context window (CW) surrounding it, the size of which
was fixed heuristically. The CW texture, defined by the intensities of its pixels,
was convolved with weights optimized through a non-parametric function
supported by a backpropagation network, and the result of the convolution was used to
classify the pixel. The training data points (i.e., pixels) were carefully chosen to
include all varieties of context: i) points within the object, ii)
points near the edge but inside the objects, iii) points at the border of the
objects, iv) points near the edge but outside the objects, and v) points near or at
the edge of the image frame. Moreover, the training data points were selected
from all the images within the image dataset. CW texture information was captured for 1000
pixels from the face and background areas of the images, of which
700 CWs were used as training input data and the remaining 300 for testing. Our
work gives, for the first time, a foundation for the quantitative evaluation of
image-segmentation efficiency, and it is extendable to segmenting more than two objects
within an image.
| [
{
"version": "v1",
"created": "Thu, 9 Feb 2012 14:02:26 GMT"
}
] | 2012-02-10T00:00:00 | [
[
"Kumar",
"Upendra",
""
],
[
"Lahiri",
"Tapobrata",
""
],
[
"Pal",
"Manoj Kumar",
""
]
] | TITLE: Non-parametric convolution based image-segmentation of ill-posed objects
applying context window approach
ABSTRACT: Context-dependence in the human cognition process is a well-established fact.
Following this, we introduced an image segmentation method that uses
context to classify a pixel on the basis of its membership to a particular
object-class of the image concerned. In broad methodological steps, each
pixel was defined by the context window (CW) surrounding it, the size of which
was fixed heuristically. The CW texture, defined by the intensities of its pixels,
was convolved with weights optimized through a non-parametric function
supported by a backpropagation network, and the result of the convolution was used to
classify the pixel. The training data points (i.e., pixels) were carefully chosen to
include all varieties of context: i) points within the object, ii)
points near the edge but inside the objects, iii) points at the border of the
objects, iv) points near the edge but outside the objects, and v) points near or at
the edge of the image frame. Moreover, the training data points were selected
from all the images within the image dataset. CW texture information was captured for 1000
pixels from the face and background areas of the images, of which
700 CWs were used as training input data and the remaining 300 for testing. Our
work gives, for the first time, a foundation for the quantitative evaluation of
image-segmentation efficiency, and it is extendable to segmenting more than two objects
within an image.
| no_new_dataset | 0.952662 |
1202.1587 | Karteeka Pavan Kanadam | K. Karteeka Pavan, Allam Appa Rao, A. V. Dattatreya Rao | Automatic Clustering with Single Optimal Solution | 13 pages,4 Tables, 3 figures | Computer Engineering and Intelligent Systems, 2011, vol no.2 no.4
pp149-161 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Determining the optimal number of clusters in a dataset is a challenging task.
Though some methods are available, there is no algorithm that produces a unique
clustering solution. The paper proposes Automatic Merging for Single Optimal
Solution (AMSOS), which aims to generate unique and nearly optimal clusters for
the given datasets automatically. AMSOS iteratively merges the closest
clusters, validating each merge with a cluster validity measure, to find a
single, nearly optimal clustering for the given data set. Experiments on both
synthetic and real data have shown that the proposed algorithm finds a single,
nearly optimal clustering structure in terms of the number of clusters,
compactness and separation.
| [
{
"version": "v1",
"created": "Wed, 8 Feb 2012 03:26:01 GMT"
}
] | 2012-02-09T00:00:00 | [
[
"Pavan",
"K. Karteeka",
""
],
[
"Rao",
"Allam Appa",
""
],
[
"Rao",
"A. V. Dattatreya",
""
]
] | TITLE: Automatic Clustering with Single Optimal Solution
ABSTRACT: Determining the optimal number of clusters in a dataset is a challenging task.
Though some methods are available, there is no algorithm that produces a unique
clustering solution. The paper proposes Automatic Merging for Single Optimal
Solution (AMSOS), which aims to generate unique and nearly optimal clusters for
the given datasets automatically. AMSOS iteratively merges the closest
clusters, validating each merge with a cluster validity measure, to find a
single, nearly optimal clustering for the given data set. Experiments on both
synthetic and real data have shown that the proposed algorithm finds a single,
nearly optimal clustering structure in terms of the number of clusters,
compactness and separation.
| no_new_dataset | 0.954223 |
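The preceding record merges the closest clusters iteratively under the control of a cluster validity measure. The loop below is a rough, assumed reconstruction of that idea: it over-clusters with k-means, repeatedly merges the two closest centroids, and keeps the partition with the best silhouette score; the paper's actual validity measure and merging rule may differ.

```python
# Assumed sketch of validity-guided automatic merging (not the AMSOS algorithm).
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)

labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(X)  # deliberate over-clustering
best_labels, best_score = labels, silhouette_score(X, labels)

while len(np.unique(labels)) > 2:
    ids = np.unique(labels)
    cents = np.array([X[labels == i].mean(axis=0) for i in ids])
    d = np.linalg.norm(cents[:, None] - cents[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    a, b = np.unravel_index(d.argmin(), d.shape)
    labels = np.where(labels == ids[b], ids[a], labels)   # merge the two closest clusters
    score = silhouette_score(X, labels)
    if score > best_score:
        best_labels, best_score = labels, score

print("clusters kept:", len(np.unique(best_labels)), "silhouette:", round(best_score, 3))
```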
1202.1656 | Holger Kienle | Holger M. Kienle | Open Data: Reverse Engineering and Maintenance Perspective | 7 pages, 6 figures | null | null | null | cs.SE cs.DL cs.IR | http://creativecommons.org/licenses/by/3.0/ | Open data is an emerging paradigm to share large and diverse datasets --
primarily from governmental agencies, but also from other organizations -- with
the goal to enable the exploitation of the data for societal, academic, and
commercial gains. There are now already many datasets available with diverse
characteristics in terms of size, encoding and structure. These datasets are
often created and maintained in an ad-hoc manner. Thus, open data poses many
challenges and there is a need for effective tools and techniques to manage and
maintain it. In this paper we argue that software maintenance and reverse
engineering have an opportunity to contribute to open data and to shape its
future development. From the perspective of reverse engineering research, open
data is a new artifact that serves as input for reverse engineering techniques
and processes. Specific challenges of open data are document scraping, image
processing, and structure/schema recognition. From the perspective of
maintenance research, maintenance has to accommodate changes of open data
sources by third-party providers, traceability of data transformation
pipelines, and quality assurance of data and transformations. We believe that
the increasing importance of open data and the research challenges that it
brings with it may possibly lead to the emergence of new research streams for
reverse engineering as well as for maintenance.
| [
{
"version": "v1",
"created": "Wed, 8 Feb 2012 11:08:37 GMT"
}
] | 2012-02-09T00:00:00 | [
[
"Kienle",
"Holger M.",
""
]
] | TITLE: Open Data: Reverse Engineering and Maintenance Perspective
ABSTRACT: Open data is an emerging paradigm to share large and diverse datasets --
primarily from governmental agencies, but also from other organizations -- with
the goal to enable the exploitation of the data for societal, academic, and
commercial gains. There are now already many datasets available with diverse
characteristics in terms of size, encoding and structure. These datasets are
often created and maintained in an ad-hoc manner. Thus, open data poses many
challenges and there is a need for effective tools and techniques to manage and
maintain it. In this paper we argue that software maintenance and reverse
engineering have an opportunity to contribute to open data and to shape its
future development. From the perspective of reverse engineering research, open
data is a new artifact that serves as input for reverse engineering techniques
and processes. Specific challenges of open data are document scraping, image
processing, and structure/schema recognition. From the perspective of
maintenance research, maintenance has to accommodate changes of open data
sources by third-party providers, traceability of data transformation
pipelines, and quality assurance of data and transformations. We believe that
the increasing importance of open data and the research challenges that it
brings with it may possibly lead to the emergence of new research streams for
reverse engineering as well as for maintenance.
| no_new_dataset | 0.949201 |
1202.0940 | Alex James Dr | Alex Pappachen James and Akshay Maan | Improving feature selection algorithms using normalised feature
histograms | null | Electronics Letters,47, 8, 490-491, 2011 | 10.1049/el.2010.3672 | null | cs.AI cs.CV | http://creativecommons.org/licenses/by-nc-sa/3.0/ | The proposed feature selection method builds a histogram of the most stable
features from random subsets of a training set and ranks the features based on
a classifier based cross-validation. This approach reduces the instability of
features obtained by conventional feature selection methods that occur with
variation in training data and selection criteria. Classification results on
four microarray and three image datasets using three major feature selection
criteria and a naive Bayes classifier show considerable improvement over
benchmark results.
| [
{
"version": "v1",
"created": "Sun, 5 Feb 2012 04:37:40 GMT"
}
] | 2012-02-07T00:00:00 | [
[
"James",
"Alex Pappachen",
""
],
[
"Maan",
"Akshay",
""
]
] | TITLE: Improving feature selection algorithms using normalised feature
histograms
ABSTRACT: The proposed feature selection method builds a histogram of the most stable
features from random subsets of a training set and ranks the features based on
classifier-based cross-validation. This approach reduces the instability of
features obtained by conventional feature selection methods that occur with
variation in training data and selection criteria. Classification results on
four microarray and three image datasets using three major feature selection
criteria and a naive Bayes classifier show considerable improvement over
benchmark results.
| no_new_dataset | 0.948298 |
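The preceding record builds a histogram of the most stable features over random subsets of the training set. The sketch below illustrates that idea under assumed choices (ANOVA F-score as the selection criterion, half-sized subsets, a public dataset); it is not the paper's exact procedure.

```python
# Sketch of a "stability histogram" for feature selection: count how often each
# feature is selected on random subsets, then normalise. Criterion and sizes are
# illustrative assumptions.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, f_classif

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(0)
hist = np.zeros(X.shape[1])

for _ in range(100):                                   # random subsets of the training set
    idx = rng.choice(len(X), size=len(X) // 2, replace=False)
    sel = SelectKBest(f_classif, k=10).fit(X[idx], y[idx])
    hist[sel.get_support()] += 1

hist /= hist.sum()                                     # normalised feature histogram
stable = np.argsort(hist)[::-1][:10]
print("10 most stable features:", stable)
```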
1201.6569 | Robert Fink | Robert Fink, Larisa Han, Dan Olteanu | Aggregation in Probabilistic Databases via Knowledge Compilation | VLDB2012 | Proceedings of the VLDB Endowment (PVLDB), Vol. 5, No. 5, pp.
490-501 (2012) | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a query evaluation technique for positive relational
algebra queries with aggregates on a representation system for probabilistic
data based on the algebraic structures of semiring and semimodule. The core of
our evaluation technique is a procedure that compiles semimodule and semiring
expressions into so-called decomposition trees, for which the computation of
the probability distribution can be done in time linear in the product of the
sizes of the probability distributions represented by its nodes. We give
syntactic characterisations of tractable queries with aggregates by exploiting
the connection between query tractability and polynomial-time decomposition
trees. A prototype of the technique is incorporated in the probabilistic
database engine SPROUT. We report on performance experiments with custom
datasets and TPC-H data.
| [
{
"version": "v1",
"created": "Tue, 31 Jan 2012 15:10:34 GMT"
}
] | 2012-02-01T00:00:00 | [
[
"Fink",
"Robert",
""
],
[
"Han",
"Larisa",
""
],
[
"Olteanu",
"Dan",
""
]
] | TITLE: Aggregation in Probabilistic Databases via Knowledge Compilation
ABSTRACT: This paper presents a query evaluation technique for positive relational
algebra queries with aggregates on a representation system for probabilistic
data based on the algebraic structures of semiring and semimodule. The core of
our evaluation technique is a procedure that compiles semimodule and semiring
expressions into so-called decomposition trees, for which the computation of
the probability distribution can be done in time linear in the product of the
sizes of the probability distributions represented by its nodes. We give
syntactic characterisations of tractable queries with aggregates by exploiting
the connection between query tractability and polynomial-time decomposition
trees. A prototype of the technique is incorporated in the probabilistic
database engine SPROUT. We report on performance experiments with custom
datasets and TPC-H data.
| new_dataset | 0.955361 |
1104.0186 | Diego Garlaschelli | Luca Valori, Francesco Picciolo, Agnes Allansdottir, Diego
Garlaschelli | Reconciling long-term cultural diversity and short-term collective
social behavior | null | PNAS vol. 109, no. 4, pp. 1068-1073 (2012) | 10.1073/pnas.1109514109 | null | physics.soc-ph cs.SI physics.comp-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An outstanding open problem is whether collective social phenomena occurring
over short timescales can systematically reduce cultural heterogeneity in the
long run, and whether offline and online human interactions contribute
differently to the process. Theoretical models suggest that short-term
collective behavior and long-term cultural diversity are mutually excluding,
since they require very different levels of social influence. The latter
jointly depends on two factors: the topology of the underlying social network
and the overlap between individuals in multidimensional cultural space.
However, while the empirical properties of social networks are well understood,
little is known about the large-scale organization of real societies in
cultural space, so that random input specifications are necessarily used in
models. Here we use a large dataset to perform a high-dimensional analysis of
the scientific beliefs of thousands of Europeans. We find that inter-opinion
correlations determine a nontrivial ultrametric hierarchy of individuals in
cultural space, a result inaccessible to one-dimensional analyses and in
striking contrast with random assumptions. When empirical data are used as
inputs in models, we find that ultrametricity has strong and counterintuitive
effects, especially in the extreme case of long-range online-like interactions
bypassing social ties. On short time-scales, it strongly facilitates a
symmetry-breaking phase transition triggering coordinated social behavior. On
long time-scales, it severely suppresses cultural convergence by restricting it
within disjoint groups. We therefore find that, remarkably, the empirical
distribution of individuals in cultural space appears to optimize the
coexistence of short-term collective behavior and long-term cultural diversity,
which can be realized simultaneously for the same moderate level of mutual
influence.
| [
{
"version": "v1",
"created": "Fri, 1 Apr 2011 14:35:27 GMT"
}
] | 2012-01-31T00:00:00 | [
[
"Valori",
"Luca",
""
],
[
"Picciolo",
"Francesco",
""
],
[
"Allansdottir",
"Agnes",
""
],
[
"Garlaschelli",
"Diego",
""
]
] | TITLE: Reconciling long-term cultural diversity and short-term collective
social behavior
ABSTRACT: An outstanding open problem is whether collective social phenomena occurring
over short timescales can systematically reduce cultural heterogeneity in the
long run, and whether offline and online human interactions contribute
differently to the process. Theoretical models suggest that short-term
collective behavior and long-term cultural diversity are mutually excluding,
since they require very different levels of social influence. The latter
jointly depends on two factors: the topology of the underlying social network
and the overlap between individuals in multidimensional cultural space.
However, while the empirical properties of social networks are well understood,
little is known about the large-scale organization of real societies in
cultural space, so that random input specifications are necessarily used in
models. Here we use a large dataset to perform a high-dimensional analysis of
the scientific beliefs of thousands of Europeans. We find that inter-opinion
correlations determine a nontrivial ultrametric hierarchy of individuals in
cultural space, a result inaccessible to one-dimensional analyses and in
striking contrast with random assumptions. When empirical data are used as
inputs in models, we find that ultrametricity has strong and counterintuitive
effects, especially in the extreme case of long-range online-like interactions
bypassing social ties. On short time-scales, it strongly facilitates a
symmetry-breaking phase transition triggering coordinated social behavior. On
long time-scales, it severely suppresses cultural convergence by restricting it
within disjoint groups. We therefore find that, remarkably, the empirical
distribution of individuals in cultural space appears to optimize the
coexistence of short-term collective behavior and long-term cultural diversity,
which can be realized simultaneously for the same moderate level of mutual
influence.
| no_new_dataset | 0.935935 |
1108.2820 | Marina Sapir | Marina Sapir | Ensemble Risk Modeling Method for Robust Learning on Scarce Data | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In medical risk modeling, typical data are "scarce": they have a relatively
small number of training instances (N), censoring, and high dimensionality (M).
We show that the problem may be effectively simplified by reducing it to
bipartite ranking, and introduce new bipartite ranking algorithm, Smooth Rank,
for robust learning on scarce data. The algorithm is based on ensemble learning
with unsupervised aggregation of predictors. The advantage of our approach is
confirmed in comparison with two "gold standard" risk modeling methods on 10
real life survival analysis datasets, where the new approach has the best
results on all but two datasets with the largest ratio N/M. For systematic
study of the effects of data scarcity on modeling by all three methods, we
conducted two types of computational experiments: on real life data with
randomly drawn training sets of different sizes, and on artificial data with
increasing number of features. Both experiments demonstrated that Smooth Rank
has critical advantage over the popular methods on the scarce data; it does not
suffer from overfitting where other methods do.
| [
{
"version": "v1",
"created": "Sat, 13 Aug 2011 20:47:30 GMT"
},
{
"version": "v2",
"created": "Sat, 28 Jan 2012 07:51:50 GMT"
}
] | 2012-01-31T00:00:00 | [
[
"Sapir",
"Marina",
""
]
] | TITLE: Ensemble Risk Modeling Method for Robust Learning on Scarce Data
ABSTRACT: In medical risk modeling, typical data are "scarce": they have a relatively
small number of training instances (N), censoring, and high dimensionality (M).
We show that the problem may be effectively simplified by reducing it to
bipartite ranking, and introduce new bipartite ranking algorithm, Smooth Rank,
for robust learning on scarce data. The algorithm is based on ensemble learning
with unsupervised aggregation of predictors. The advantage of our approach is
confirmed in comparison with two "gold standard" risk modeling methods on 10
real life survival analysis datasets, where the new approach has the best
results on all but two datasets with the largest ratio N/M. For systematic
study of the effects of data scarcity on modeling by all three methods, we
conducted two types of computational experiments: on real life data with
randomly drawn training sets of different sizes, and on artificial data with
increasing number of features. Both experiments demonstrated that Smooth Rank
has critical advantage over the popular methods on the scarce data; it does not
suffer from overfitting where other methods do.
| no_new_dataset | 0.953794 |
1201.4597 | Odemir Bruno PhD | Jo\~ao Batista Florindo, Odemir Martinez Bruno | Fractal Descriptors Based on Fourier Spectrum Applied to Texture
Analysis | null | null | null | null | physics.data-an cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work proposes the development and study of a novel technique for the
generation of fractal descriptors used in texture analysis. The novel
descriptors are obtained from a multiscale transform applied to the Fourier
technique of fractal dimension calculus. The power spectrum of the Fourier
transform of the image is plotted against the frequency in a log- log scale and
a multiscale transform is applied to this curve. The obtained values are taken
as the fractal descriptors of the image. The validation of the propose is
performed by the use of the descriptors for the classification of a dataset of
texture images whose real classes are previously known. The classification
precision is compared to other fractal descriptors known in the literature. The
results confirm the efficiency of the proposed method.
| [
{
"version": "v1",
"created": "Sun, 22 Jan 2012 20:43:50 GMT"
}
] | 2012-01-24T00:00:00 | [
[
"Florindo",
"João Batista",
""
],
[
"Bruno",
"Odemir Martinez",
""
]
] | TITLE: Fractal Descriptors Based on Fourier Spectrum Applied to Texture
Analysis
ABSTRACT: This work proposes the development and study of a novel technique for the
generation of fractal descriptors used in texture analysis. The novel
descriptors are obtained from a multiscale transform applied to the Fourier
technique of fractal dimension calculus. The power spectrum of the Fourier
transform of the image is plotted against the frequency in a log- log scale and
a multiscale transform is applied to this curve. The obtained values are taken
as the fractal descriptors of the image. The validation of the propose is
performed by the use of the descriptors for the classification of a dataset of
texture images whose real classes are previously known. The classification
precision is compared to other fractal descriptors known in the literature. The
results confirm the efficiency of the proposed method.
| no_new_dataset | 0.954265 |
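The preceding record derives descriptors from the log-log curve of the Fourier power spectrum. The function below is a minimal sketch of a radially averaged log-power curve on a random placeholder image; the authors' multiscale transform and exact binning are not reproduced here.

```python
# Minimal sketch of a Fourier power-spectrum texture descriptor (assumed form).
import numpy as np

def fourier_spectrum_descriptor(img, n_bins=32):
    power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    cy, cx = img.shape[0] // 2, img.shape[1] // 2
    yy, xx = np.indices(img.shape)
    r = np.hypot(yy - cy, xx - cx)                      # radial frequency of each pixel
    edges = np.linspace(1.0, r.max(), n_bins + 1)
    radial = []
    for lo, hi in zip(edges[:-1], edges[1:]):           # radially average the spectrum
        mask = (r >= lo) & (r < hi)
        radial.append(power[mask].mean() if mask.any() else 0.0)
    return np.log(np.asarray(radial) + 1e-12)           # the log-power curve is the descriptor

texture = np.random.default_rng(0).random((128, 128))   # placeholder texture image
print(fourier_spectrum_descriptor(texture)[:5])
```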
1201.4714 | Huyen Do | Huyen Do, Alexandros Kalousis, Jun Wang and Adam Woznica | A metric learning perspective of SVM: on the relation of SVM and LMNN | To appear in AISTATS 2012 | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Support Vector Machines, SVMs, and the Large Margin Nearest Neighbor
algorithm, LMNN, are two very popular learning algorithms with quite different
learning biases. In this paper we bring them into a unified view and show that
they have a much stronger relation than what is commonly thought. We analyze
SVMs from a metric learning perspective and cast them as a metric learning
problem, a view which helps us uncover the relations of the two algorithms. We
show that LMNN can be seen as learning a set of local SVM-like models in a
quadratic space. Along the way and inspired by the metric-based interpretation
of SVMs, we derive a novel variant of SVMs, epsilon-SVM, to which LMNN is even
more similar. We give a unified view of LMNN and the different SVM variants.
Finally, we provide some preliminary experiments on a number of benchmark
datasets which show that epsilon-SVM compares favorably with respect to both
LMNN and SVM.
| [
{
"version": "v1",
"created": "Mon, 23 Jan 2012 13:48:33 GMT"
}
] | 2012-01-24T00:00:00 | [
[
"Do",
"Huyen",
""
],
[
"Kalousis",
"Alexandros",
""
],
[
"Wang",
"Jun",
""
],
[
"Woznica",
"Adam",
""
]
] | TITLE: A metric learning perspective of SVM: on the relation of SVM and LMNN
ABSTRACT: Support Vector Machines, SVMs, and the Large Margin Nearest Neighbor
algorithm, LMNN, are two very popular learning algorithms with quite different
learning biases. In this paper we bring them into a unified view and show that
they have a much stronger relation than what is commonly thought. We analyze
SVMs from a metric learning perspective and cast them as a metric learning
problem, a view which helps us uncover the relations of the two algorithms. We
show that LMNN can be seen as learning a set of local SVM-like models in a
quadratic space. Along the way and inspired by the metric-based interpretation
of SVMs, we derive a novel variant of SVMs, epsilon-SVM, to which LMNN is even
more similar. We give a unified view of LMNN and the different SVM variants.
Finally, we provide some preliminary experiments on a number of benchmark
datasets which show that epsilon-SVM compares favorably with respect to both
LMNN and SVM.
| no_new_dataset | 0.952353 |
1201.4301 | Chitra Kiran N | Chitra Kiran N., G. Narendra Kumar | A Robust Client Verification in cloud enabled m-Commerce using Gaining
Protocol | null | IJCSI International Journal of Computer Science Issues, Vol. 8,
Issue 6, No 2, November 2011 ISSN (Online): 1694-0814 | null | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The proposed system highlights a novel approach: an exclusive verification
process using a gaining protocol to ensure security for both parties
(client and service provider) in an m-commerce application with a cloud-enabled service.
The proposed system is based on the ability to verify clients with a
trusted handheld device, depending on the set of frequent events and actions to
be carried out. The framework of the proposed work is designed after collecting
real-time data sets from an Android-enabled handset, which, when subjected to
the gaining protocol, results in the detection of malicious behavior by illegitimate
clients in the network. The real-time experiment is performed with the applicable
datasets gathered, which show the best results for identifying threats from the last 2
months of collected data.
| [
{
"version": "v1",
"created": "Fri, 20 Jan 2012 14:21:15 GMT"
}
] | 2012-01-23T00:00:00 | [
[
"N.",
"Chitra Kiran",
""
],
[
"Kumar",
"G. Narendra",
""
]
] | TITLE: A Robust Client Verification in cloud enabled m-Commerce using Gaining
Protocol
ABSTRACT: The proposed system highlights a novel approach: an exclusive verification
process using a gaining protocol to ensure security for both parties
(client and service provider) in an m-commerce application with a cloud-enabled service.
The proposed system is based on the ability to verify clients with a
trusted handheld device, depending on the set of frequent events and actions to
be carried out. The framework of the proposed work is designed after collecting
real-time data sets from an Android-enabled handset, which, when subjected to
the gaining protocol, results in the detection of malicious behavior by illegitimate
clients in the network. The real-time experiment is performed with the applicable
datasets gathered, which show the best results for identifying threats from the last 2
months of collected data.
| no_new_dataset | 0.913213 |
1201.4139 | Odemir Bruno PhD | Bruno Brandoli Machado, Wesley Nunes Gon\c{c}alves, Odemir Martinez
Bruno | Image decomposition with anisotropic diffusion applied to leaf-texture
analysis | Annals of Workshop of Computer Vision 2011 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Texture analysis is an important field of investigation that has received a
great deal of interest from computer vision community. In this paper, we
propose a novel approach for texture modeling based on partial differential
equation (PDE). Each image $f$ is decomposed into a family of derived
sub-images. $f$ is split into the $u$ component, obtained with anisotropic
diffusion, and the $v$ component which is calculated by the difference between
the original image and the $u$ component. After enhancing the texture attribute
$v$ of the image, Gabor features are computed as descriptors. We validate the
proposed approach on two texture datasets with high variability. We also
evaluate our approach on an important real-world application: leaf-texture
analysis. Experimental results indicate that our approach can be used to
produce higher classification rates and can be successfully employed for
different texture applications.
| [
{
"version": "v1",
"created": "Thu, 19 Jan 2012 18:39:41 GMT"
}
] | 2012-01-20T00:00:00 | [
[
"Machado",
"Bruno Brandoli",
""
],
[
"Gonçalves",
"Wesley Nunes",
""
],
[
"Bruno",
"Odemir Martinez",
""
]
] | TITLE: Image decomposition with anisotropic diffusion applied to leaf-texture
analysis
ABSTRACT: Texture analysis is an important field of investigation that has received a
great deal of interest from the computer vision community. In this paper, we
propose a novel approach for texture modeling based on partial differential
equation (PDE). Each image $f$ is decomposed into a family of derived
sub-images. $f$ is split into the $u$ component, obtained with anisotropic
diffusion, and the $v$ component which is calculated by the difference between
the original image and the $u$ component. After enhancing the texture attribute
$v$ of the image, Gabor features are computed as descriptors. We validate the
proposed approach on two texture datasets with high variability. We also
evaluate our approach on an important real-world application: leaf-texture
analysis. Experimental results indicate that our approach can be used to
produce higher classification rates and can be successfully employed for
different texture applications.
| no_new_dataset | 0.947478 |
1201.3900 | Massimiliano Dal Mas | Massimiliano Dal Mas | Elasticity on Ontology Matching of Folksodriven Structure Network | *** This paper has been accepted to the 4th Asian Conference on
Intelligent Information and Database Systems (ACIIDS 2012) - Kaohsiung Taiwan
R.O.C., 19-21 March 2012 *** 9 pages, 4 figures; for details see:
http://www.maxdalmas.com | null | null | null | cs.DL cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Nowadays folksonomy tags are used not just for personal organization, but for
communication and sharing between people with common local interests. In
this paper, a new concept structure called "Folksodriven" is considered to
represent folksonomies. The Folksodriven Structure Network (FSN) was conceived as
a source of folksonomy tag suggestions for the user, based on a dataset built from chosen websites
using Natural Language Processing (NLP). Morphological changes, such as
changes in the folksonomy tags chosen, have a direct impact on the network connectivity
(structural plasticity) of the folksonomy tags considered. The goal of this
paper is to define a basis for an FSN plasticity theory. To achieve this
goal, a systematic mathematical analysis of deformation and
fracture is necessary for ontology matching on the FSN. The advantages of this approach
could be exploited in a new method to be employed by a knowledge
management system.
| [
{
"version": "v1",
"created": "Wed, 18 Jan 2012 20:24:35 GMT"
}
] | 2012-01-19T00:00:00 | [
[
"Mas",
"Massimiliano Dal",
""
]
] | TITLE: Elasticity on Ontology Matching of Folksodriven Structure Network
ABSTRACT: Nowadays folksonomy tags are used not just for personal organization, but for
communication and sharing between people with common local interests. In
this paper, a new concept structure called "Folksodriven" is considered to
represent folksonomies. The Folksodriven Structure Network (FSN) was conceived as
a source of folksonomy tag suggestions for the user, based on a dataset built from chosen websites
using Natural Language Processing (NLP). Morphological changes, such as
changes in the folksonomy tags chosen, have a direct impact on the network connectivity
(structural plasticity) of the folksonomy tags considered. The goal of this
paper is to define a basis for an FSN plasticity theory. To achieve this
goal, a systematic mathematical analysis of deformation and
fracture is necessary for ontology matching on the FSN. The advantages of this approach
could be exploited in a new method to be employed by a knowledge
management system.
| new_dataset | 0.860545 |
1201.3458 | Jeffrey Yu | Di Wu, Yiping Ke, Jeffrey Xu Yu, Zheng Liu | Detecting Priming News Events | null | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study a problem of detecting priming events based on a time series index
and an evolving document stream. We define a priming event as an event which
triggers abnormal movements of the time series index, e.g., the Iraq war with
respect to the presidential approval index of President Bush. Existing solutions
either focus on organizing coherent keywords from a document stream into events
or on identifying correlated movements between keyword frequency trajectories and
the time series index. In this paper, we tackle the problem in two major steps.
(1) We identify the elements that form a priming event. Each identified element,
called an influential topic, consists of a set of coherent keywords, and
we extract them by looking at the correlation between keyword trajectories and
the time series index of interest at a global level. (2) We extract priming
events by detecting and organizing the bursty influential topics at a micro
level. We evaluate our algorithms on a real-world dataset and the result
confirms that our method is able to discover the priming events effectively.
| [
{
"version": "v1",
"created": "Tue, 17 Jan 2012 08:59:57 GMT"
}
] | 2012-01-18T00:00:00 | [
[
"Wu",
"Di",
""
],
[
"Ke",
"Yiping",
""
],
[
"Yu",
"Jeffrey Xu",
""
],
[
"Liu",
"Zheng",
""
]
] | TITLE: Detecting Priming News Events
ABSTRACT: We study a problem of detecting priming events based on a time series index
and an evolving document stream. We define a priming event as an event which
triggers abnormal movements of the time series index, e.g., the Iraq war with
respect to the presidential approval index of President Bush. Existing solutions
either focus on organizing coherent keywords from a document stream into events
or on identifying correlated movements between keyword frequency trajectories and
the time series index. In this paper, we tackle the problem in two major steps.
(1) We identify the elements that form a priming event. Each identified element,
called an influential topic, consists of a set of coherent keywords, and
we extract them by looking at the correlation between keyword trajectories and
the time series index of interest at a global level. (2) We extract priming
events by detecting and organizing the bursty influential topics at a micro
level. We evaluate our algorithms on a real-world dataset and the result
confirms that our method is able to discover the priming events effectively.
| no_new_dataset | 0.950549 |
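Step (1) of the preceding record scores keywords by the correlation between their frequency trajectories and the time series index. The toy sketch below illustrates that correlation filter on synthetic data; the keywords, trajectories and threshold are invented for illustration.

```python
# Toy illustration: rank keywords by the absolute correlation between their
# daily frequency trajectory and a time-series index. All data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
days = 200
index = np.cumsum(rng.normal(size=days))                 # e.g. an approval index
keyword_freq = {
    "war": 0.5 * index + rng.normal(size=days),          # correlated keyword
    "weather": rng.normal(size=days),                    # unrelated keyword
}

scores = {w: abs(np.corrcoef(f, index)[0, 1]) for w, f in keyword_freq.items()}
influential = [w for w, s in sorted(scores.items(), key=lambda kv: -kv[1]) if s > 0.3]
print(scores, influential)
```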
1108.6296 | Feng Yan | Zenglin Xu, Feng Yan, Yuan (Alan) Qi | Infinite Tucker Decomposition: Nonparametric Bayesian Models for
Multiway Data Analysis | null | null | null | null | cs.LG cs.NA | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Tensor decomposition is a powerful computational tool for multiway data
analysis. Many popular tensor decomposition approaches---such as the Tucker
decomposition and CANDECOMP/PARAFAC (CP)---amount to multi-linear
factorization. They are insufficient to model (i) complex interactions between
data entities, (ii) various data types (e.g. missing data and binary data), and
(iii) noisy observations and outliers. To address these issues, we propose
tensor-variate latent nonparametric Bayesian models, coupled with efficient
inference methods, for multiway data analysis. We name these models InfTucker.
Using these InfTucker, we conduct Tucker decomposition in an infinite feature
space. Unlike classical tensor decomposition models, our new approaches handle
both continuous and binary data in a probabilistic framework. Unlike previous
Bayesian models on matrices and tensors, our models are based on latent
Gaussian or $t$ processes with nonlinear covariance functions. To efficiently
learn the InfTucker from data, we develop a variational inference technique on
tensors. Compared with classical implementation, the new technique reduces both
time and space complexities by several orders of magnitude. Our experimental
results on chemometrics and social network datasets demonstrate that our new
models achieved significantly higher prediction accuracy than state-of-the-art
tensor decomposition methods.
| [
{
"version": "v1",
"created": "Wed, 31 Aug 2011 17:36:26 GMT"
},
{
"version": "v2",
"created": "Sat, 14 Jan 2012 16:11:56 GMT"
}
] | 2012-01-17T00:00:00 | [
[
"Xu",
"Zenglin",
"",
"Alan"
],
[
"Yan",
"Feng",
"",
"Alan"
],
[
"Yuan",
"",
"",
"Alan"
],
[
"Qi",
"",
""
]
] | TITLE: Infinite Tucker Decomposition: Nonparametric Bayesian Models for
Multiway Data Analysis
ABSTRACT: Tensor decomposition is a powerful computational tool for multiway data
analysis. Many popular tensor decomposition approaches---such as the Tucker
decomposition and CANDECOMP/PARAFAC (CP)---amount to multi-linear
factorization. They are insufficient to model (i) complex interactions between
data entities, (ii) various data types (e.g. missing data and binary data), and
(iii) noisy observations and outliers. To address these issues, we propose
tensor-variate latent nonparametric Bayesian models, coupled with efficient
inference methods, for multiway data analysis. We name these models InfTucker.
Using these InfTucker, we conduct Tucker decomposition in an infinite feature
space. Unlike classical tensor decomposition models, our new approaches handle
both continuous and binary data in a probabilistic framework. Unlike previous
Bayesian models on matrices and tensors, our models are based on latent
Gaussian or $t$ processes with nonlinear covariance functions. To efficiently
learn the InfTucker from data, we develop a variational inference technique on
tensors. Compared with classical implementation, the new technique reduces both
time and space complexities by several orders of magnitude. Our experimental
results on chemometrics and social network datasets demonstrate that our new
models achieved significantly higher prediction accuracy than state-of-the-art
tensor decomposition methods.
| no_new_dataset | 0.951594 |
1201.3116 | Odemir Bruno PhD | Jo\~ao Batista Florindo, M\'ario de Castro, Odemir Martinez Bruno | Enhancing Volumetric Bouligand-Minkowski Fractal Descriptors by using
Functional Data Analysis | null | International Journal of Modern Physics C, Volume: 22, Issue:
9(2011) pp. 929-952 | 10.1142/S0129183111016701 | null | cs.CV physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work proposes and studies the concept of the Functional Data Analysis
transform, applying it to improving the performance of volumetric
Bouligand-Minkowski fractal descriptors. The proposed transform consists
essentially in changing the descriptors originally defined in the space of the
calculus of fractal dimension into the space of coefficients used in the
functional data representation of these descriptors. The transformed descriptors
are used here in texture classification problems. The enhancement provided by
the FDA transform is measured by comparing the transformed to the original
descriptors in terms of the correctness rate in the classification of well
known datasets.
| [
{
"version": "v1",
"created": "Sun, 15 Jan 2012 19:38:48 GMT"
}
] | 2012-01-17T00:00:00 | [
[
"Florindo",
"João Batista",
""
],
[
"de Castro",
"Mário",
""
],
[
"Bruno",
"Odemir Martinez",
""
]
] | TITLE: Enhancing Volumetric Bouligand-Minkowski Fractal Descriptors by using
Functional Data Analysis
ABSTRACT: This work proposes and studies the concept of the Functional Data Analysis
transform, applying it to improving the performance of volumetric
Bouligand-Minkowski fractal descriptors. The proposed transform consists
essentially in changing the descriptors originally defined in the space of the
calculus of fractal dimension into the space of coefficients used in the
functional data representation of these descriptors. The transformed descriptors
are used here in texture classification problems. The enhancement provided by
the FDA transform is measured by comparing the transformed to the original
descriptors in terms of the correctness rate in the classification of well
known datasets.
| no_new_dataset | 0.955569 |
1201.3292 | M\'arton Karsai | Kun Zhao, M\'arton Karsai and Ginestra Bianconi | Entropy of dynamical social networks | null | PLoS ONE 6(12): e28116 (2011) | 10.1371/journal.pone.0028116 | null | physics.soc-ph cond-mat.stat-mech cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human dynamical social networks encode information and are highly adaptive.
To characterize the information encoded in the fast dynamics of social
interactions, here we introduce the entropy of dynamical social networks. By
analysing a large dataset of phone-call interactions we show evidence that the
dynamical social network has an entropy that depends on the time of the day in
a typical week-day. Moreover we show evidence for adaptability of human social
behavior showing data on duration of phone-call interactions that significantly
deviates from the statistics of duration of face-to-face interactions. This
adaptability of behavior corresponds to a different information content of the
dynamics of social human interactions. We quantify this information by the use
of the entropy of dynamical networks on realistic models of social
interactions.
| [
{
"version": "v1",
"created": "Mon, 16 Jan 2012 15:50:39 GMT"
}
] | 2012-01-17T00:00:00 | [
[
"Zhao",
"Kun",
""
],
[
"Karsai",
"Márton",
""
],
[
"Bianconi",
"Ginestra",
""
]
] | TITLE: Entropy of dynamical social networks
ABSTRACT: Human dynamical social networks encode information and are highly adaptive.
To characterize the information encoded in the fast dynamics of social
interactions, here we introduce the entropy of dynamical social networks. By
analysing a large dataset of phone-call interactions we show evidence that the
dynamical social network has an entropy that depends on the time of the day in
a typical week-day. Moreover we show evidence for adaptability of human social
behavior showing data on duration of phone-call interactions that significantly
deviates from the statistics of duration of face-to-face interactions. This
adaptability of behavior corresponds to a different information content of the
dynamics of social human interactions. We quantify this information by the use
of the entropy of dynamical networks on realistic models of social
interactions.
| no_new_dataset | 0.864253 |
1201.2416 | Pierre Machart | Pierre Machart (LIF), Thomas Peel (LIF, LATP), Liva Ralaivola (LIF),
Sandrine Anthoine (LATP), Herv\'e Glotin (LSIS) | Stochastic Low-Rank Kernel Learning for Regression | International Conference on Machine Learning (ICML'11), Bellevue
(Washington) : United States (2011) | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a novel approach to learn a kernel-based regression function. It
is based on the use of conical combinations of data-based parameterized kernels
and on a new stochastic convex optimization procedure of which we establish
convergence guarantees. The overall learning procedure has the nice properties
that a) the learned conical combination is automatically designed to perform
the regression task at hand and b) the updates implicated by the optimization
procedure are quite inexpensive. In order to shed light on the appositeness of
our learning strategy, we present empirical results from experiments conducted
on various benchmark datasets.
| [
{
"version": "v1",
"created": "Wed, 11 Jan 2012 21:03:55 GMT"
}
] | 2012-01-13T00:00:00 | [
[
"Machart",
"Pierre",
"",
"LIF"
],
[
"Peel",
"Thomas",
"",
"LIF, LATP"
],
[
"Ralaivola",
"Liva",
"",
"LIF"
],
[
"Anthoine",
"Sandrine",
"",
"LATP"
],
[
"Glotin",
"Hervé",
"",
"LSIS"
]
] | TITLE: Stochastic Low-Rank Kernel Learning for Regression
ABSTRACT: We present a novel approach to learn a kernel-based regression function. It
is based on the use of conical combinations of data-based parameterized kernels
and on a new stochastic convex optimization procedure of which we establish
convergence guarantees. The overall learning procedure has the nice properties
that a) the learned conical combination is automatically designed to perform
the regression task at hand and b) the updates implicated by the optimization
procedure are quite inexpensive. In order to shed light on the appositeness of
our learning strategy, we present empirical results from experiments conducted
on various benchmark datasets.
| no_new_dataset | 0.948155 |
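The conical combination of data-based parameterized kernels described in the record above can be illustrated by plugging a fixed nonnegative mixture of RBF kernels into kernel ridge regression. This sketch does not implement the paper's stochastic convex optimization: the weights `mu`, the bandwidths, and the toy regression data are all assumptions made for the example.
```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
X_tr, y_tr, X_te, y_te = X[:150], y[:150], X[150:], y[150:]

# Conical (nonnegative) combination of RBF base kernels with different bandwidths.
gammas = [0.1, 1.0, 10.0]
mu = np.array([0.2, 0.5, 0.3])   # fixed weights; the paper learns them stochastically

def combined_kernel(A, B):
    return sum(m * rbf_kernel(A, B, gamma=g) for m, g in zip(mu, gammas))

model = KernelRidge(alpha=0.1, kernel="precomputed")
model.fit(combined_kernel(X_tr, X_tr), y_tr)
pred = model.predict(combined_kernel(X_te, X_tr))
print("test MSE:", np.mean((pred - y_te) ** 2))
```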
1201.2173 | Gholamreza Bahmanyar | Davar Giveki, Hamid Salimi, GholamReza Bahmanyar, Younes Khademian | Automatic Detection of Diabetes Diagnosis using Feature Weighted Support
Vector Machines based on Mutual Information and Modified Cuckoo Search | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Diabetes is a major health problem in both developing and developed countries
and its incidence is rising dramatically. In this study, we investigate a novel
automatic approach to diagnose Diabetes disease based on Feature Weighted
Support Vector Machines (FW-SVMs) and Modified Cuckoo Search (MCS). The
proposed model consists of three stages: Firstly, PCA is applied to select an
optimal subset of features out of set of all the features. Secondly, Mutual
Information is employed to construct the FWSVM by weighting different features
based on their degree of importance. Finally, since parameter selection plays a
vital role in classification accuracy of SVMs, MCS is applied to select the
best parameter values. The proposed MI-MCS-FWSVM method obtains 93.58% accuracy
on UCI dataset. The experimental results demonstrate that our method
outperforms the previous methods by not only giving more accurate results but
also significantly speeding up the classification procedure.
| [
{
"version": "v1",
"created": "Tue, 10 Jan 2012 11:03:42 GMT"
}
] | 2012-01-12T00:00:00 | [
[
"Giveki",
"Davar",
""
],
[
"Salimi",
"Hamid",
""
],
[
"Bahmanyar",
"GholamReza",
""
],
[
"Khademian",
"Younes",
""
]
] | TITLE: Automatic Detection of Diabetes Diagnosis using Feature Weighted Support
Vector Machines based on Mutual Information and Modified Cuckoo Search
ABSTRACT: Diabetes is a major health problem in both developing and developed countries
and its incidence is rising dramatically. In this study, we investigate a novel
automatic approach to diagnose Diabetes disease based on Feature Weighted
Support Vector Machines (FW-SVMs) and Modified Cuckoo Search (MCS). The
proposed model consists of three stages: Firstly, PCA is applied to select an
optimal subset of features out of set of all the features. Secondly, Mutual
Information is employed to construct the FWSVM by weighting different features
based on their degree of importance. Finally, since parameter selection plays a
vital role in classification accuracy of SVMs, MCS is applied to select the
best parameter values. The proposed MI-MCS-FWSVM method obtains 93.58% accuracy
on UCI dataset. The experimental results demonstrate that our method
outperforms the previous methods by not only giving more accurate results but
also significantly speeding up the classification procedure.
| no_new_dataset | 0.950869 |
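A rough sketch of the pipeline in the record above, with several substitutions that should be read as assumptions: scikit-learn's breast-cancer data stands in for the UCI Pima diabetes set, a plain grid search stands in for the Modified Cuckoo Search, and the PCA feature-selection step is omitted for brevity.
```python
import numpy as np
from sklearn.datasets import load_breast_cancer      # stand-in for the UCI diabetes data
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

# Feature weighting: scale each feature by its mutual information with the class
# label, mimicking the FW-SVM idea of emphasising informative features.
w = mutual_info_classif(X_tr, y_tr, random_state=0)
w = w / (w.max() + 1e-12)
X_tr_w, X_te_w = X_tr * w, X_te * w

# Plain grid search stands in for the Modified Cuckoo Search used to tune (C, gamma).
grid = GridSearchCV(SVC(kernel="rbf"),
                    {"C": [0.1, 1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1, 1]},
                    cv=5)
grid.fit(X_tr_w, y_tr)
print("best parameters:", grid.best_params_)
print("test accuracy:", grid.score(X_te_w, y_te))
```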
1201.2025 | Mohsen Zare Baghbidi | Mohsen Zare Baghbidi, Kamal Jamshidi, Ahmad Reza Naghsh Nilchi and
Saeid Homayouni | Improvement of Anomaly Detection Algorithms in Hyperspectral Images
using Discrete Wavelet Transform | 13 pages, 9 figures, printed in Signal & Image Processing : An
International Journal (SIPIJ) | Signal & Image Processing : An International Journal (SIPIJ), Vol.
2, No. 4, 2011,13-25 | 10.5121/sipij.2011.2402 | null | cs.OH | http://creativecommons.org/licenses/by/3.0/ | Recently anomaly detection (AD) has become an important application for
target detection in hyperspectral remotely sensed images. In many applications,
in addition to high accuracy of detection we need a fast and reliable algorithm
as well. This paper presents a novel method to improve the performance of
current AD algorithms. The proposed method first calculates Discrete Wavelet
Transform (DWT) of every pixel vector of image using Daubechies4 wavelet. Then,
AD algorithm performs on four bands of "Wavelet transform" matrix which are the
approximation of main image. In this research some benchmark AD algorithms
including Local RX, DWRX and DWEST have been implemented on Airborne
Visible/Infrared Imaging Spectrometer (AVIRIS) hyperspectral datasets.
Experimental results demonstrate significant improvement of runtime in proposed
method. In addition, this method improves the accuracy of AD algorithms because
of DWT's power in extracting approximation coefficients of signal, which
contain the main behaviour of signal, and abandon the redundant information in
hyperspectral image data.
| [
{
"version": "v1",
"created": "Tue, 10 Jan 2012 11:29:02 GMT"
}
] | 2012-01-11T00:00:00 | [
[
"Baghbidi",
"Mohsen Zare",
""
],
[
"Jamshidi",
"Kamal",
""
],
[
"Nilchi",
"Ahmad Reza Naghsh",
""
],
[
"Homayouni",
"Saeid",
""
]
] | TITLE: Improvement of Anomaly Detection Algorithms in Hyperspectral Images
using Discrete Wavelet Transform
ABSTRACT: Recently anomaly detection (AD) has become an important application for
target detection in hyperspectral remotely sensed images. In many applications,
in addition to high accuracy of detection we need a fast and reliable algorithm
as well. This paper presents a novel method to improve the performance of
current AD algorithms. The proposed method first calculates the Discrete Wavelet
Transform (DWT) of every pixel vector of the image using the Daubechies4 wavelet. Then,
the AD algorithm is applied to four bands of the wavelet-transform matrix, which are the
approximation of the main image. In this research, some benchmark AD algorithms
including Local RX, DWRX and DWEST have been implemented on Airborne
Visible/Infrared Imaging Spectrometer (AVIRIS) hyperspectral datasets.
Experimental results demonstrate significant improvement of runtime in proposed
method. In addition, this method improves the accuracy of AD algorithms because
of DWT's power in extracting approximation coefficients of signal, which
contain the main behaviour of signal, and abandon the redundant information in
hyperspectral image data.
| no_new_dataset | 0.945951 |
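The preprocessing idea in the record above, a Daubechies-4 DWT of every pixel spectrum followed by anomaly detection on the approximation coefficients, can be sketched as below. A simple global RX detector on synthetic data stands in for the Local RX / DWRX / DWEST detectors and the AVIRIS cubes used in the paper.
```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(0)
# Synthetic stand-in for a hyperspectral cube flattened to (num_pixels, num_bands);
# a few pixels are perturbed to play the role of anomalies.
X = rng.normal(size=(1000, 64))
X[:5] += 3.0

# Daubechies-4 DWT of every pixel spectrum; keep only the approximation coefficients,
# which carry the coarse spectral behaviour and roughly halve the dimensionality.
cA, _ = pywt.dwt(X, "db4", axis=1)

# Global RX anomaly detector: Mahalanobis distance to the background statistics.
mu = cA.mean(axis=0)
cov = np.cov(cA, rowvar=False)
cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))
d = cA - mu
rx_score = np.einsum("ij,jk,ik->i", d, cov_inv, d)

print("top-5 RX scores at pixel indices:", np.argsort(rx_score)[-5:])
```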
1201.2073 | Rafi Muhammad | Mehwish Aziz, Muhammad Rafi | Pbm: A new dataset for blog mining | 6; Internet and Web Engineering from: International Conference on
Computer Engineering and Technology, 3rd (ICCET 2011) | null | null | null | cs.AI cs.CL cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Text mining is becoming vital as Web 2.0 offers collaborative content
creation and sharing. Researchers now have a growing interest in text mining
methods for discovering knowledge. Text mining researchers come from a variety of
areas such as Natural Language Processing, Computational Linguistics, Machine
Learning, and Statistics. A typical text mining application involves
preprocessing of text, stemming and lemmatization, tagging and annotation,
deriving knowledge patterns, evaluating and interpreting the results. There are
numerous approaches for performing text mining tasks, like: clustering,
categorization, sentimental analysis, and summarization. There is a growing
need to standardize the evaluation of these tasks. One major component of
establishing standardization is to provide standard datasets for these tasks.
Although various standard datasets are available for traditional text
mining tasks, there are very few, and often expensive, datasets for the blog-mining
task. Blogs, a new genre in Web 2.0, are the digital diaries of web users; they have
chronological entries and contain a lot of useful knowledge, and thus offer a lot
of challenges and opportunities for text mining. In this paper, we report a new
indigenous dataset for Pakistani Political Blogosphere. The paper describes the
process of data collection, organization, and standardization. We have used
this dataset for carrying out various text mining tasks for blogosphere, like:
blog-search, political sentiments analysis and tracking, identification of
influential blogger, and clustering of the blog-posts. We wish to offer this
dataset free for others who aspire to pursue further in this domain.
| [
{
"version": "v1",
"created": "Tue, 10 Jan 2012 15:18:38 GMT"
}
] | 2012-01-11T00:00:00 | [
[
"Aziz",
"Mehwish",
""
],
[
"Rafi",
"Muhammad",
""
]
] | TITLE: Pbm: A new dataset for blog mining
ABSTRACT: Text mining is becoming vital as Web 2.0 offers collaborative content
creation and sharing. Researchers now have a growing interest in text mining
methods for discovering knowledge. Text mining researchers come from a variety of
areas such as Natural Language Processing, Computational Linguistics, Machine
Learning, and Statistics. A typical text mining application involves
preprocessing of text, stemming and lemmatization, tagging and annotation,
deriving knowledge patterns, evaluating and interpreting the results. There are
numerous approaches for performing text mining tasks, like: clustering,
categorization, sentimental analysis, and summarization. There is a growing
need to standardize the evaluation of these tasks. One major component of
establishing standardization is to provide standard datasets for these tasks.
Although various standard datasets are available for traditional text
mining tasks, there are very few, and often expensive, datasets for the blog-mining
task. Blogs, a new genre in Web 2.0, are the digital diaries of web users; they have
chronological entries and contain a lot of useful knowledge, and thus offer a lot
of challenges and opportunities for text mining. In this paper, we report a new
indigenous dataset for Pakistani Political Blogosphere. The paper describes the
process of data collection, organization, and standardization. We have used
this dataset for carrying out various text mining tasks for blogosphere, like:
blog-search, political sentiments analysis and tracking, identification of
influential blogger, and clustering of the blog-posts. We wish to offer this
dataset free for others who aspire to pursue further in this domain.
| new_dataset | 0.963916 |
1201.1512 | Jim Ferry | James P. Ferry and J. Oren Bumgarner | Community detection and tracking on networks from a data fusion
perspective | 40 pages, 11 figures | null | null | null | cs.SI math.PR physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Community structure in networks has been investigated from many viewpoints,
usually with the same end result: a community detection algorithm of some kind.
Recent research offers methods for combining the results of such algorithms
into timelines of community evolution. This paper investigates community
detection and tracking from the data fusion perspective. We avoid the kind of
hard calls made by traditional community detection algorithms in favor of
retaining as much uncertainty information as possible. This results in a method
for directly estimating the probabilities that pairs of nodes are in the same
community. We demonstrate that this method is accurate using the LFR testbed,
that it is fast on a number of standard network datasets, and that it has a
variety of uses that complement those of standard, hard-call methods. Retaining
uncertainty information allows us to develop a Bayesian filter for tracking
communities. We derive equations for the full filter, and marginalize it to
produce a potentially practical version. Finally, we discuss closures for the
marginalized filter and the work that remains to develop this into a
principled, efficient method for tracking time-evolving communities on
time-evolving networks.
| [
{
"version": "v1",
"created": "Fri, 6 Jan 2012 22:08:32 GMT"
}
] | 2012-01-10T00:00:00 | [
[
"Ferry",
"James P.",
""
],
[
"Bumgarner",
"J. Oren",
""
]
] | TITLE: Community detection and tracking on networks from a data fusion
perspective
ABSTRACT: Community structure in networks has been investigated from many viewpoints,
usually with the same end result: a community detection algorithm of some kind.
Recent research offers methods for combining the results of such algorithms
into timelines of community evolution. This paper investigates community
detection and tracking from the data fusion perspective. We avoid the kind of
hard calls made by traditional community detection algorithms in favor of
retaining as much uncertainty information as possible. This results in a method
for directly estimating the probabilities that pairs of nodes are in the same
community. We demonstrate that this method is accurate using the LFR testbed,
that it is fast on a number of standard network datasets, and that it has a
variety of uses that complement those of standard, hard-call methods. Retaining
uncertainty information allows us to develop a Bayesian filter for tracking
communities. We derive equations for the full filter, and marginalize it to
produce a potentially practical version. Finally, we discuss closures for the
marginalized filter and the work that remains to develop this into a
principled, efficient method for tracking time-evolving communities on
time-evolving networks.
| no_new_dataset | 0.947332 |
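The paper's central output, soft estimates of the probability that two nodes share a community, can be approximated far more crudely by averaging co-membership over repeated runs of a randomized community detector. The sketch below is only that crude approximation, not the Bayesian treatment of the record above; the choice of label propagation and of the karate-club graph are assumptions for illustration.
```python
import itertools
import networkx as nx
import numpy as np
from networkx.algorithms.community import asyn_lpa_communities

G = nx.karate_club_graph()
n = G.number_of_nodes()
P = np.zeros((n, n))
runs = 200

# Average co-membership over many runs of a randomized community detector to get a
# soft estimate of the probability that two nodes share a community.
for seed in range(runs):
    for community in asyn_lpa_communities(G, seed=seed):
        for u, v in itertools.combinations(community, 2):
            P[u, v] += 1
            P[v, u] += 1
P /= runs
np.fill_diagonal(P, 1.0)
print("P(0, 33) =", P[0, 33], " P(0, 1) =", P[0, 1])
```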
1201.1450 | Casey Bennett | Casey Bennett | The Interaction of Entropy-Based Discretization and Sample Size: An
Empirical Study | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An empirical investigation of the interaction of sample size and
discretization - in this case the entropy-based method CAIM (Class-Attribute
Interdependence Maximization) - was undertaken to evaluate the impact and
potential bias introduced into data mining performance metrics due to variation
in sample size as it impacts the discretization process. Of particular interest
was the effect of discretizing within cross-validation folds averse to outside
discretization folds. Previous publications have suggested that discretizing
externally can bias performance results; however, a thorough review of the
literature found no empirical evidence to support such an assertion. This
investigation involved construction of over 117,000 models on seven distinct
datasets from the UCI (University of California-Irvine) Machine Learning
Library and multiple modeling methods across a variety of configurations of
sample size and discretization, with each unique "setup" being independently
replicated ten times. The analysis revealed a significant optimistic bias as
sample sizes decreased and discretization was employed. The study also revealed
that there may be a relationship between the interaction that produces such
bias and the numbers and types of predictor attributes, extending the "curse of
dimensionality" concept from feature selection into the discretization realm.
Directions for further exploration are laid out, as well some general
guidelines about the proper application of discretization in light of these
results.
| [
{
"version": "v1",
"created": "Fri, 6 Jan 2012 16:45:57 GMT"
}
] | 2012-01-09T00:00:00 | [
[
"Bennett",
"Casey",
""
]
] | TITLE: The Interaction of Entropy-Based Discretization and Sample Size: An
Empirical Study
ABSTRACT: An empirical investigation of the interaction of sample size and
discretization - in this case the entropy-based method CAIM (Class-Attribute
Interdependence Maximization) - was undertaken to evaluate the impact and
potential bias introduced into data mining performance metrics due to variation
in sample size as it impacts the discretization process. Of particular interest
was the effect of discretizing within cross-validation folds as opposed to
discretizing outside the folds. Previous publications have suggested that discretizing
externally can bias performance results; however, a thorough review of the
literature found no empirical evidence to support such an assertion. This
investigation involved construction of over 117,000 models on seven distinct
datasets from the UCI (University of California-Irvine) Machine Learning
Library and multiple modeling methods across a variety of configurations of
sample size and discretization, with each unique "setup" being independently
replicated ten times. The analysis revealed a significant optimistic bias as
sample sizes decreased and discretization was employed. The study also revealed
that there may be a relationship between the interaction that produces such
bias and the numbers and types of predictor attributes, extending the "curse of
dimensionality" concept from feature selection into the discretization realm.
Directions for further exploration are laid out, as well some general
guidelines about the proper application of discretization in light of these
results.
| no_new_dataset | 0.939692 |
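The methodological point of the record above, that discretization fitted outside the cross-validation folds leaks information from the test folds, can be demonstrated with scikit-learn. KBinsDiscretizer is an unsupervised stand-in for the supervised CAIM method, so the leakage effect here is much weaker than in the study; the dataset and bin count are arbitrary choices.
```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import KBinsDiscretizer

X, y = load_breast_cancer(return_X_y=True)

# "External" discretization: bins are estimated on the full dataset before
# cross-validation, so every training fold has already seen its test fold.
disc = KBinsDiscretizer(n_bins=5, encode="ordinal", strategy="quantile")
X_ext = disc.fit_transform(X)
ext_scores = cross_val_score(MultinomialNB(), X_ext, y, cv=10)

# "Internal" discretization: the discretizer is refit inside every training fold.
pipe = make_pipeline(KBinsDiscretizer(n_bins=5, encode="ordinal", strategy="quantile"),
                     MultinomialNB())
int_scores = cross_val_score(pipe, X, y, cv=10)

print("external discretization accuracy:", ext_scores.mean())
print("internal discretization accuracy:", int_scores.mean())
```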
0907.5155 | Ching-an Hsiao | C. A. Hsiao | On Classification from Outlier View | Conclusion renewed; IAENG International Journal of Computer Science,
Volume 37, Issue 4, Nov, 2010 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Classification is the basis of cognition. Unlike other solutions, this study
approaches it from the view of outliers. We present an expanding algorithm to
detect outliers in univariate datasets, together with the underlying
foundation. The expanding algorithm runs in a holistic way, making it a rather
robust solution. Synthetic and real data experiments show its power.
Furthermore, an application for multi-class problems leads to the introduction
of the oscillator algorithm. The corresponding result implies the potential
wide use of the expanding algorithm.
| [
{
"version": "v1",
"created": "Wed, 29 Jul 2009 15:47:33 GMT"
},
{
"version": "v2",
"created": "Fri, 31 Jul 2009 14:17:30 GMT"
},
{
"version": "v3",
"created": "Fri, 24 Jun 2011 13:53:49 GMT"
},
{
"version": "v4",
"created": "Mon, 2 Jan 2012 15:19:41 GMT"
}
] | 2012-01-04T00:00:00 | [
[
"Hsiao",
"C. A.",
""
]
] | TITLE: On Classification from Outlier View
ABSTRACT: Classification is the basis of cognition. Unlike other solutions, this study
approaches it from the view of outliers. We present an expanding algorithm to
detect outliers in univariate datasets, together with the underlying
foundation. The expanding algorithm runs in a holistic way, making it a rather
robust solution. Synthetic and real data experiments show its power.
Furthermore, an application for multi-class problems leads to the introduction
of the oscillator algorithm. The corresponding result implies the potential
wide use of the expanding algorithm.
| no_new_dataset | 0.948251 |
1108.1170 | Martin Jaggi | Martin Jaggi | Convex Optimization without Projection Steps | null | null | null | null | math.OC cs.AI cs.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For the general problem of minimizing a convex function over a compact convex
domain, we will investigate a simple iterative approximation algorithm based on
the method by Frank & Wolfe 1956, that does not need projection steps in order
to stay inside the optimization domain. Instead of a projection step, the
linearized problem defined by a current subgradient is solved, which gives a
step direction that will naturally stay in the domain. Our framework
generalizes the sparse greedy algorithm of Frank & Wolfe and its primal-dual
analysis by Clarkson 2010 (and the low-rank SDP approach by Hazan 2008) to
arbitrary convex domains. We give a convergence proof guaranteeing
{\epsilon}-small duality gap after O(1/{\epsilon}) iterations.
The method allows us to understand the sparsity of approximate solutions for
any l1-regularized convex optimization problem (and for optimization over the
simplex), expressed as a function of the approximation quality. We obtain
matching upper and lower bounds of {\Theta}(1/{\epsilon}) for the sparsity for
l1-problems. The same bounds apply to low-rank semidefinite optimization with
bounded trace, showing that rank O(1/{\epsilon}) is best possible here as well.
As another application, we obtain sparse matrices of O(1/{\epsilon}) non-zero
entries as {\epsilon}-approximate solutions when optimizing any convex function
over a class of diagonally dominant symmetric matrices.
We show that our proposed first-order method also applies to nuclear norm and
max-norm matrix optimization problems. For nuclear norm regularized
optimization, such as matrix completion and low-rank recovery, we demonstrate
the practical efficiency and scalability of our algorithm for large matrix
problems, as e.g. the Netflix dataset. For general convex optimization over
bounded matrix max-norm, our algorithm is the first with a convergence
guarantee, to the best of our knowledge.
| [
{
"version": "v1",
"created": "Thu, 4 Aug 2011 19:15:04 GMT"
},
{
"version": "v2",
"created": "Tue, 16 Aug 2011 22:11:51 GMT"
},
{
"version": "v3",
"created": "Wed, 7 Sep 2011 22:56:49 GMT"
},
{
"version": "v4",
"created": "Mon, 19 Sep 2011 16:42:01 GMT"
},
{
"version": "v5",
"created": "Wed, 23 Nov 2011 15:38:13 GMT"
},
{
"version": "v6",
"created": "Tue, 27 Dec 2011 17:45:39 GMT"
}
] | 2011-12-30T00:00:00 | [
[
"Jaggi",
"Martin",
""
]
] | TITLE: Convex Optimization without Projection Steps
ABSTRACT: For the general problem of minimizing a convex function over a compact convex
domain, we will investigate a simple iterative approximation algorithm based on
the method by Frank & Wolfe 1956, that does not need projection steps in order
to stay inside the optimization domain. Instead of a projection step, the
linearized problem defined by a current subgradient is solved, which gives a
step direction that will naturally stay in the domain. Our framework
generalizes the sparse greedy algorithm of Frank & Wolfe and its primal-dual
analysis by Clarkson 2010 (and the low-rank SDP approach by Hazan 2008) to
arbitrary convex domains. We give a convergence proof guaranteeing
{\epsilon}-small duality gap after O(1/{\epsilon}) iterations.
The method allows us to understand the sparsity of approximate solutions for
any l1-regularized convex optimization problem (and for optimization over the
simplex), expressed as a function of the approximation quality. We obtain
matching upper and lower bounds of {\Theta}(1/{\epsilon}) for the sparsity for
l1-problems. The same bounds apply to low-rank semidefinite optimization with
bounded trace, showing that rank O(1/{\epsilon}) is best possible here as well.
As another application, we obtain sparse matrices of O(1/{\epsilon}) non-zero
entries as {\epsilon}-approximate solutions when optimizing any convex function
over a class of diagonally dominant symmetric matrices.
We show that our proposed first-order method also applies to nuclear norm and
max-norm matrix optimization problems. For nuclear norm regularized
optimization, such as matrix completion and low-rank recovery, we demonstrate
the practical efficiency and scalability of our algorithm for large matrix
problems, as e.g. the Netflix dataset. For general convex optimization over
bounded matrix max-norm, our algorithm is the first with a convergence
guarantee, to the best of our knowledge.
| no_new_dataset | 0.945349 |
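A minimal sketch of the projection-free Frank-Wolfe scheme discussed in the record above, applied to least squares over the probability simplex: the linearized subproblem is solved by a single simplex vertex, so the iterate never leaves the feasible set. The toy problem and the classical 2/(k+2) step size are illustrative choices, not any particular experiment from the paper.
```python
import numpy as np

def frank_wolfe_simplex(grad_f, x0, iters=500, tol=1e-8):
    # Frank-Wolfe over the probability simplex: the linearized subproblem
    # min_{s in simplex} <grad, s> is solved by one vertex, so no projection is needed.
    x = x0.copy()
    gap = np.inf
    for k in range(iters):
        g = grad_f(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0          # best vertex of the simplex
        gap = g @ (x - s)              # Frank-Wolfe duality gap
        if gap < tol:
            break
        gamma = 2.0 / (k + 2.0)        # classical step size
        x = (1.0 - gamma) * x + gamma * s
    return x, gap

# Toy problem: least squares restricted to the simplex.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[[2, 7, 11]] = [0.5, 0.3, 0.2]
b = A @ x_true
grad = lambda x: A.T @ (A @ x - b)

x_hat, gap = frank_wolfe_simplex(grad, np.full(20, 1.0 / 20))
print("duality gap:", gap, "support size:", int(np.sum(x_hat > 1e-3)))
```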
1112.6219 | Rafi Muhammad | Muhammad Rafi, M. Shahid Shaikh, Amir Farooq | Document Clustering based on Topic Maps | null | International Journal of Computer Applications 12(1):32-36,
December 2010 | 10.5120/1640-2204 | null | cs.IR cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Importance of document clustering is now widely acknowledged by researchers
for better management, smart navigation, efficient filtering, and concise
summarization of large collection of documents like World Wide Web (WWW). The
next challenge lies in semantically performing clustering based on the semantic
contents of the document. The problem of document clustering has two main
components: (1) to represent the document in such a form that inherently
captures semantics of the text. This may also help to reduce dimensionality of
the document, and (2) to define a similarity measure based on the semantic
representation such that it assigns higher numerical values to document pairs
which have higher semantic relationship. Feature space of the documents can be
very challenging for document clustering. A document may contain multiple
topics, it may contain a large set of class-independent general-words, and a
handful class-specific core-words. With these features in mind, traditional
agglomerative clustering algorithms, which are based on either Document Vector
model (DVM) or Suffix Tree model (STC), are less efficient in producing results
with high cluster quality. This paper introduces a new approach for document
clustering based on the Topic Map representation of the documents. The document
is being transformed into a compact form. A similarity measure is proposed
based upon the inferred information through topic maps data and structures. The
suggested method is implemented using agglomerative hierarchical clustering and
tested on standard Information retrieval (IR) datasets. The comparative
experiment reveals that the proposed approach is effective in improving the
cluster quality.
| [
{
"version": "v1",
"created": "Thu, 29 Dec 2011 04:15:48 GMT"
}
] | 2011-12-30T00:00:00 | [
[
"Rafi",
"Muhammad",
""
],
[
"Shaikh",
"M. Shahid",
""
],
[
"Farooq",
"Amir",
""
]
] | TITLE: Document Clustering based on Topic Maps
ABSTRACT: Importance of document clustering is now widely acknowledged by researchers
for better management, smart navigation, efficient filtering, and concise
summarization of large collection of documents like World Wide Web (WWW). The
next challenge lies in semantically performing clustering based on the semantic
contents of the document. The problem of document clustering has two main
components: (1) to represent the document in such a form that inherently
captures semantics of the text. This may also help to reduce dimensionality of
the document, and (2) to define a similarity measure based on the semantic
representation such that it assigns higher numerical values to document pairs
which have higher semantic relationship. Feature space of the documents can be
very challenging for document clustering. A document may contain multiple
topics, it may contain a large set of class-independent general-words, and a
handful class-specific core-words. With these features in mind, traditional
agglomerative clustering algorithms, which are based on either Document Vector
model (DVM) or Suffix Tree model (STC), are less efficient in producing results
with high cluster quality. This paper introduces a new approach for document
clustering based on the Topic Map representation of the documents. The document
is being transformed into a compact form. A similarity measure is proposed
based upon the inferred information through topic maps data and structures. The
suggested method is implemented using agglomerative hierarchical clustering and
tested on standard Information retrieval (IR) datasets. The comparative
experiment reveals that the proposed approach is effective in improving the
cluster quality.
| no_new_dataset | 0.951323 |
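A toy version of the clustering step in the record above: documents represented by sets of topic identifiers (a crude stand-in for the inferred topic-map structures), a Jaccard topic similarity, and average-linkage agglomerative clustering on the resulting distance matrix. The in-line documents and the choice of Jaccard similarity are assumptions made for the example.
```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

# Toy documents represented by the sets of topic-map concepts they mention.
docs = [
    {"cricket", "team", "match"},
    {"match", "score", "cricket"},
    {"election", "party", "vote"},
    {"vote", "parliament", "party"},
    {"cricket", "vote"},
]

def jaccard_sim(a, b):
    return len(a & b) / len(a | b)

n = len(docs)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = 1.0 - jaccard_sim(docs[i], docs[j])

# Average-linkage agglomerative clustering on the topic-based distance matrix.
Z = linkage(squareform(D), method="average")
labels = fcluster(Z, t=2, criterion="maxclust")
print("cluster labels:", labels)
```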
physics/0512147 | Domenico Patella | Paolo Mauriello and Domenico Patella | Introduction to tensorial resistivity probability tomography | 8 pages, 7 figures | Progress In Electromagnetics Research B, vol. 8, 129-146, 2008 | 10.2528/PIERB08051604 | null | physics.geo-ph physics.data-an | null | The probability tomography approach developed for the scalar resistivity
method is here extended to the 2D tensorial apparent resistivity acquisition
mode. The rotational invariant derived from the trace of the apparent
resistivity tensor is considered, since it gives on the datum plane anomalies
confined above the buried objects. Firstly, a departure function is introduced
as the difference between the tensorial invariant measured over the real
structure and that computed for a reference uniform structure. Secondly, a
resistivity anomaly occurrence probability (RAOP) function is defined as a
normalised crosscorrelation involving the experimental departure function and a
scanning function derived analytically using the Frechet derivative of the
electric potential for the reference uniform structure. The RAOP function can
be calculated in each cell of a 3D grid filling the investigated volume, and
the resulting values can then be contoured in order to obtain the 3D
tomographic image. Each non-vanishing value of the RAOP function is interpreted
as the probability that a resistivity departure from the reference resistivity
occurs in a cell and is responsible for the observed tensorial apparent resistivity
dataset on the datum plane. A synthetic case shows that the highest RAOP values
correctly indicate the position of the buried objects and that a very high spatial
resolution can be obtained even for adjacent objects with opposite resistivity
contrasts with respect to the resistivity of the hosting matrix. Finally, an
experimental field case dedicated to an archaeological application of the
resistivity tensor method is presented as a proof of the high resolution power
of the probability tomography imaging, even when the data are collected in
noisy open field conditions.
| [
{
"version": "v1",
"created": "Thu, 15 Dec 2005 23:59:23 GMT"
}
] | 2011-12-30T00:00:00 | [
[
"Mauriello",
"Paolo",
""
],
[
"Patella",
"Domenico",
""
]
] | TITLE: Introduction to tensorial resistivity probability tomography
ABSTRACT: The probability tomography approach developed for the scalar resistivity
method is here extended to the 2D tensorial apparent resistivity acquisition
mode. The rotational invariant derived from the trace of the apparent
resistivity tensor is considered, since it gives on the datum plane anomalies
confined above the buried objects. Firstly, a departure function is introduced
as the difference between the tensorial invariant measured over the real
structure and that computed for a reference uniform structure. Secondly, a
resistivity anomaly occurrence probability (RAOP) function is defined as a
normalised crosscorrelation involving the experimental departure function and a
scanning function derived analytically using the Frechet derivative of the
electric potential for the reference uniform structure. The RAOP function can
be calculated in each cell of a 3D grid filling the investigated volume, and
the resulting values can then be contoured in order to obtain the 3D
tomographic image. Each non-vanishing value of the RAOP function is interpreted
as the probability that a resistivity departure from the reference resistivity
occurs in a cell and is responsible for the observed tensorial apparent resistivity
dataset on the datum plane. A synthetic case shows that the highest RAOP values
correctly indicate the position of the buried objects and that a very high spatial
resolution can be obtained even for adjacent objects with opposite resistivity
contrasts with respect to the resistivity of the hosting matrix. Finally, an
experimental field case dedicated to an archaeological application of the
resistivity tensor method is presented as a proof of the high resolution power
of the probability tomography imaging, even when the data are collected in
noisy open field conditions.
| no_new_dataset | 0.955569 |
physics/0602056 | Domenico Patella | Paolo Mauriello and Domenico Patella | Imaging polar and dipolar sources of geophysical anomalies by
probability tomography. Part I: theory and synthetic examples | 6 pages, 3 figures | Progress In Electromagnetics Research, vol. 87, 63-88, 2008 | 10.2528/PIER08092201 | null | physics.geo-ph physics.data-an | null | We develop the theory of a generalized probability tomography method to image
source poles and dipoles of a geophysical vector or scalar field dataset. The
purpose of the new generalized method is to improve the resolution power of
buried geophysical targets, using probability as a suitable paradigm allowing
all possible equivalent solutions to be included in a unique 3D tomography
image. The new method is described by first assuming that any geophysical field
dataset can be hypothesized to be caused by a discrete number of source poles
and dipoles. Then, the theoretical derivation of the source pole occurrence
probability (SPOP) tomography, previously published in detail for single
geophysical methods, is symbolically restated in the most general way. Finally,
the theoretical derivation of the source dipole occurrence probability (SDOP)
tomography is given following a formal development similar to that of the SPOP
tomography. The discussion of a few examples allows us to demonstrate that the
combined application of the SPOP and SDOP tomographies can provide the best
core-and-boundary resolution of the most probable buried sources of the
anomalies detected within a datum domain.
| [
{
"version": "v1",
"created": "Wed, 8 Feb 2006 23:52:31 GMT"
}
] | 2011-12-30T00:00:00 | [
[
"Mauriello",
"Paolo",
""
],
[
"Patella",
"Domenico",
""
]
] | TITLE: Imaging polar and dipolar sources of geophysical anomalies by
probability tomography. Part I: theory and synthetic examples
ABSTRACT: We develop the theory of a generalized probability tomography method to image
source poles and dipoles of a geophysical vector or scalar field dataset. The
purpose of the new generalized method is to improve the resolution power of
buried geophysical targets, using probability as a suitable paradigm allowing
all possible equivalent solutions to be included in a unique 3D tomography
image. The new method is described by first assuming that any geophysical field
dataset can be hypothesized to be caused by a discrete number of source poles
and dipoles. Then, the theoretical derivation of the source pole occurrence
probability (SPOP) tomography, previously published in detail for single
geophysical methods, is symbolically restated in the most general way. Finally,
the theoretical derivation of the source dipole occurrence probability (SDOP)
tomography is given following a formal development similar to that of the SPOP
tomography. The discussion of a few examples allows us to demonstrate that the
combined application of the SPOP and SDOP tomographies can provide the best
core-and-boundary resolution of the most probable buried sources of the
anomalies detected within a datum domain.
| no_new_dataset | 0.94868 |
physics/0602057 | Domenico Patella | Paolo Mauriello and Domenico Patella | Imaging polar and dipolar sources of geophysical anomalies by
probability tomography. Part II: Application to the Vesuvius volcanic area | 7 pages, 10 figures | Progress In Electromagnetics Research, vol. 87, 63-88, 2008 | 10.2528/PIER08092201 | null | physics.geo-ph | null | In the previous part I, we have developed the generalized theory of the
probability tomography method to image polar and dipolar sources of a vector or
scalar geophysical anomaly field. The purpose of the new method was to improve
the core-and-boundary resolution of the most probable buried sources of the
anomalies detected in a datum domain. In this paper, which constitutes the part
II of the same study, an application of the new approach to the Vesuvius
volcano (Naples, Italy) is illustrated in detail by analyzing geoelectrical,
self-potential and gravity datasets collected over the whole volcanic area. The
purpose is to get new insights into the shallow structure and hydrothermal
system of Vesuvius, and the deep geometry of the tectonic depression within
which the volcano grew.
| [
{
"version": "v1",
"created": "Thu, 9 Feb 2006 01:01:52 GMT"
}
] | 2011-12-30T00:00:00 | [
[
"Mauriello",
"Paolo",
""
],
[
"Patella",
"Domenico",
""
]
] | TITLE: Imaging polar and dipolar sources of geophysical anomalies by
probability tomography. Part II: Application to the Vesuvius volcanic area
ABSTRACT: In the previous part I, we have developed the generalized theory of the
probability tomography method to image polar and dipolar sources of a vector or
scalar geophysical anomaly field. The purpose of the new method was to improve
the core-and-boundary resolution of the most probable buried sources of the
anomalies detected in a datum domain. In this paper, which constitutes the part
II of the same study, an application of the new approach to the Vesuvius
volcano (Naples, Italy) is illustrated in detail by analyzing geoelectrical,
self-potential and gravity datasets collected over the whole volcanic area. The
purpose is to get new insights into the shallow structure and hydrothermal
system of Vesuvius, and the deep geometry of the tectonic depression within
which the volcano grew.
| no_new_dataset | 0.951278 |
1112.5215 | Dacheng Tao | Tianyi Zhou and Dacheng Tao | Bilateral Random Projections | 17 pages, 3 figures, technical report | null | null | null | stat.ML cs.DS | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Low-rank structures have been profoundly studied in data mining and machine
learning. In this paper, we show that a dense matrix $X$'s low-rank approximation
can be rapidly built from its left and right random projections $Y_1=XA_1$ and
$Y_2=X^TA_2$, or bilateral random projection (BRP). We then show that a power scheme
can further improve the precision. The deterministic, average and deviation
bounds of the proposed method and its power scheme modification are proved
theoretically. The effectiveness and the efficiency of BRP based low-rank
approximation is empirically verified on both artificial and real datasets.
| [
{
"version": "v1",
"created": "Thu, 22 Dec 2011 01:16:20 GMT"
}
] | 2011-12-23T00:00:00 | [
[
"Zhou",
"Tianyi",
""
],
[
"Tao",
"Dacheng",
""
]
] | TITLE: Bilateral Random Projections
ABSTRACT: Low-rank structures have been profoundly studied in data mining and machine
learning. In this paper, we show that a dense matrix $X$'s low-rank approximation
can be rapidly built from its left and right random projections $Y_1=XA_1$ and
$Y_2=X^TA_2$, or bilateral random projection (BRP). We then show that a power scheme
can further improve the precision. The deterministic, average and deviation
bounds of the proposed method and its power scheme modification are proved
theoretically. The effectiveness and the efficiency of BRP based low-rank
approximation is empirically verified on both artificial and real datasets.
| no_new_dataset | 0.951051 |
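A sketch of the bilateral random projection construction summarized above, under the assumption that the rank-r approximation is formed as Y1 (A2^T Y1)^{-1} Y2^T from the two sketches Y1 = X A1 and Y2 = X^T A2; the power-scheme refinement is omitted and the low-rank test matrix is synthetic.
```python
import numpy as np

def brp_lowrank(X, r, rng):
    # Bilateral random projections: Y1 = X A1, Y2 = X^T A2,
    # rank-r approximation L = Y1 (A2^T Y1)^{-1} Y2^T.
    m, n = X.shape
    A1 = rng.standard_normal((n, r))
    A2 = rng.standard_normal((m, r))
    Y1 = X @ A1
    Y2 = X.T @ A2
    return Y1 @ np.linalg.solve(A2.T @ Y1, Y2.T)

rng = np.random.default_rng(0)
# Low-rank test matrix plus a little noise.
U = rng.standard_normal((500, 10))
V = rng.standard_normal((10, 300))
X = U @ V + 0.01 * rng.standard_normal((500, 300))

L = brp_lowrank(X, r=10, rng=rng)
print("relative approximation error:", np.linalg.norm(X - L) / np.linalg.norm(X))
```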
1112.5238 | Vinita Suyal | Vinita Suyal, Awadhesh Prasad, Harinder P. Singh | Symbolic analysis of slow solar wind data using rank order statistics | 10 pages, 7 figures, 1 table | Planetary and Space Science, S.N0. 0032-0633, 2011 | 10.1016/j.pss.2011.12.007 | null | astro-ph.SR physics.space-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We analyze time series data of the fluctuations of slow solar wind velocity
using rank order statistics. We selected a total of 18 datasets measured by the
Helios spacecraft at a distance of 0.32 AU from the sun in the inner
heliosphere. The datasets correspond to the years 1975-1982 and cover the end
of the solar activity cycle 20 to the middle of the activity cycle 21. We first
apply rank order statistics to time series from known nonlinear systems and
then extend the analysis to the solar wind data. We find that the underlying
dynamics governing the solar wind velocity remains almost unchanged during an
activity cycle. However, during a solar activity cycle the fluctuations in the
slow solar wind time series increase just before the maximum of the activity
cycle.
| [
{
"version": "v1",
"created": "Thu, 22 Dec 2011 06:05:45 GMT"
}
] | 2011-12-23T00:00:00 | [
[
"Suyal",
"Vinita",
""
],
[
"Prasad",
"Awadhesh",
""
],
[
"Singh",
"Harinder P.",
""
]
] | TITLE: Symbolic analysis of slow solar wind data using rank order statistics
ABSTRACT: We analyze time series data of the fluctuations of slow solar wind velocity
using rank order statistics. We selected a total of 18 datasets measured by the
Helios spacecraft at a distance of 0.32 AU from the sun in the inner
heliosphere. The datasets correspond to the years 1975-1982 and cover the end
of the solar activity cycle 20 to the middle of the activity cycle 21. We first
apply rank order statistics to time series from known nonlinear systems and
then extend the analysis to the solar wind data. We find that the underlying
dynamics governing the solar wind velocity remains almost unchanged during an
activity cycle. However, during a solar activity cycle the fluctuations in the
slow solar wind time series increase just before the maximum of the activity
cycle.
| no_new_dataset | 0.952397 |
1103.2756 | Xinmei Tian | Xinmei Tian and Dacheng Tao and Yong Rui | Sparse Transfer Learning for Interactive Video Search Reranking | 17 pages | null | 10.1145/0000000.0000000 | null | cs.IR cs.CV cs.MM stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visual reranking is effective to improve the performance of the text-based
video search. However, existing reranking algorithms can only achieve limited
improvement because of the well-known semantic gap between low level visual
features and high level semantic concepts. In this paper, we adopt interactive
video search reranking to bridge the semantic gap by introducing user's
labeling effort. We propose a novel dimension reduction tool, termed sparse
transfer learning (STL), to effectively and efficiently encode user's labeling
information. STL is particularly designed for interactive video search
reranking. Technically, it a) considers the pair-wise discriminative
information to maximally separate labeled query relevant samples from labeled
query irrelevant ones, b) achieves a sparse representation for the subspace to
encodes user's intention by applying the elastic net penalty, and c) propagates
user's labeling information from labeled samples to unlabeled samples by using
the data distribution knowledge. We conducted extensive experiments on the
TRECVID 2005, 2006 and 2007 benchmark datasets and compared STL with popular
dimension reduction algorithms. We report superior performance by using the
proposed STL based interactive video search reranking.
| [
{
"version": "v1",
"created": "Mon, 14 Mar 2011 19:48:20 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Mar 2011 03:49:33 GMT"
},
{
"version": "v3",
"created": "Wed, 21 Dec 2011 00:12:42 GMT"
}
] | 2011-12-22T00:00:00 | [
[
"Tian",
"Xinmei",
""
],
[
"Tao",
"Dacheng",
""
],
[
"Rui",
"Yong",
""
]
] | TITLE: Sparse Transfer Learning for Interactive Video Search Reranking
ABSTRACT: Visual reranking is effective to improve the performance of the text-based
video search. However, existing reranking algorithms can only achieve limited
improvement because of the well-known semantic gap between low level visual
features and high level semantic concepts. In this paper, we adopt interactive
video search reranking to bridge the semantic gap by introducing user's
labeling effort. We propose a novel dimension reduction tool, termed sparse
transfer learning (STL), to effectively and efficiently encode user's labeling
information. STL is particularly designed for interactive video search
reranking. Technically, it a) considers the pair-wise discriminative
information to maximally separate labeled query relevant samples from labeled
query irrelevant ones, b) achieves a sparse representation for the subspace to
encode the user's intention by applying the elastic net penalty, and c) propagates
user's labeling information from labeled samples to unlabeled samples by using
the data distribution knowledge. We conducted extensive experiments on the
TRECVID 2005, 2006 and 2007 benchmark datasets and compared STL with popular
dimension reduction algorithms. We report superior performance by using the
proposed STL based interactive video search reranking.
| no_new_dataset | 0.950227 |
1109.1852 | Bernardo Huberman | Chunyan Wang and Bernardo A. Huberman | Long Trend Dynamics in Social Media | null | null | null | null | physics.soc-ph cs.CY cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A main characteristic of social media is that its diverse content, copiously
generated by both standard outlets and general users, constantly competes for
the scarce attention of large audiences. Out of this flood of information some
topics manage to get enough attention to become the most popular ones and thus
to be prominently displayed as trends. Equally important, some of these trends
persist long enough so as to shape part of the social agenda. How this happens
is the focus of this paper. By introducing a stochastic dynamical model that
takes into account the user's repeated involvement with given topics, we can
predict the distribution of trend durations as well as the thresholds in
popularity that lead to their emergence within social media. Detailed
measurements of datasets from Twitter confirm the validity of the model and its
predictions.
| [
{
"version": "v1",
"created": "Thu, 8 Sep 2011 22:15:08 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Dec 2011 19:37:24 GMT"
}
] | 2011-12-21T00:00:00 | [
[
"Wang",
"Chunyan",
""
],
[
"Huberman",
"Bernardo A.",
""
]
] | TITLE: Long Trend Dynamics in Social Media
ABSTRACT: A main characteristic of social media is that its diverse content, copiously
generated by both standard outlets and general users, constantly competes for
the scarce attention of large audiences. Out of this flood of information some
topics manage to get enough attention to become the most popular ones and thus
to be prominently displayed as trends. Equally important, some of these trends
persist long enough so as to shape part of the social agenda. How this happens
is the focus of this paper. By introducing a stochastic dynamical model that
takes into account the user's repeated involvement with given topics, we can
predict the distribution of trend durations as well as the thresholds in
popularity that lead to their emergence within social media. Detailed
measurements of datasets from Twitter confirm the validity of the model and its
predictions.
| no_new_dataset | 0.949529 |
1112.4607 | Arash Afkanpour | Arash Afkanpour and Csaba Szepesvari and Michael Bowling | Alignment Based Kernel Learning with a Continuous Set of Base Kernels | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The success of kernel-based learning methods depends on the choice of kernel.
Recently, kernel learning methods have been proposed that use data to select
the most appropriate kernel, usually by combining a set of base kernels. We
introduce a new algorithm for kernel learning that combines a {\em continuous
set of base kernels}, without the common step of discretizing the space of base
kernels. We demonstrate that our new method achieves state-of-the-art
performance across a variety of real-world datasets. Furthermore, we explicitly
demonstrate the importance of combining the right dictionary of kernels, which
is problematic for methods based on a finite set of base kernels chosen a
priori. Our method is not the first approach to work with continuously
parameterized kernels. However, we show that our method requires substantially
less computation than previous such approaches, and so is more amenable to
multiple dimensional parameterizations of base kernels, which we demonstrate.
| [
{
"version": "v1",
"created": "Tue, 20 Dec 2011 08:52:56 GMT"
}
] | 2011-12-21T00:00:00 | [
[
"Afkanpour",
"Arash",
""
],
[
"Szepesvari",
"Csaba",
""
],
[
"Bowling",
"Michael",
""
]
] | TITLE: Alignment Based Kernel Learning with a Continuous Set of Base Kernels
ABSTRACT: The success of kernel-based learning methods depends on the choice of kernel.
Recently, kernel learning methods have been proposed that use data to select
the most appropriate kernel, usually by combining a set of base kernels. We
introduce a new algorithm for kernel learning that combines a {\em continuous
set of base kernels}, without the common step of discretizing the space of base
kernels. We demonstrate that our new method achieves state-of-the-art
performance across a variety of real-world datasets. Furthermore, we explicitly
demonstrate the importance of combining the right dictionary of kernels, which
is problematic for methods based on a finite set of base kernels chosen a
priori. Our method is not the first approach to work with continuously
parameterized kernels. However, we show that our method requires substantially
less computation than previous such approaches, and so is more amenable to
multiple dimensional parameterizations of base kernels, which we demonstrate.
| no_new_dataset | 0.949248 |
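A one-parameter toy version of alignment-based kernel selection over a continuous family: the RBF bandwidth is treated as a continuous parameter and chosen by maximizing centered kernel-target alignment with a bounded scalar search. This only illustrates the alignment criterion over a continuum of base kernels, not the paper's algorithm; the dataset and search bounds are arbitrary assumptions.
```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.spatial.distance import cdist
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True)
X = (X - X.mean(0)) / X.std(0)
y = np.where(y == 1, 1.0, -1.0)
n = len(y)
H = np.eye(n) - np.ones((n, n)) / n        # centering matrix
Yc = H @ np.outer(y, y) @ H                # centered target kernel
D2 = cdist(X, X, "sqeuclidean")

def neg_alignment(log_gamma):
    # Negative centered kernel-target alignment of an RBF kernel whose bandwidth
    # is parameterized continuously by log_gamma.
    K = np.exp(-np.exp(log_gamma) * D2)
    Kc = H @ K @ H
    return -np.sum(Kc * Yc) / (np.linalg.norm(Kc) * np.linalg.norm(Yc))

res = minimize_scalar(neg_alignment, bounds=(-10, 2), method="bounded")
print("best gamma:", np.exp(res.x), "alignment:", -res.fun)
```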
1112.4020 | Andri Mirzal | Andri Mirzal | Clustering and Latent Semantic Indexing Aspects of the Nonnegative
Matrix Factorization | 28 pages, 5 figures | null | null | null | cs.LG | http://creativecommons.org/licenses/by-nc-sa/3.0/ | This paper provides theoretical support for the clustering aspect of the
nonnegative matrix factorization (NMF). By utilizing the Karush-Kuhn-Tucker
optimality conditions, we show that the NMF objective is equivalent to a graph
clustering objective, so the clustering aspect of the NMF has a solid
justification. Different from previous approaches which usually discard the
nonnegativity constraints, our approach guarantees the stationary point being
used in deriving the equivalence is located on the feasible region in the
nonnegative orthant. Additionally, since clustering capability of a matrix
decomposition technique can sometimes imply its latent semantic indexing (LSI)
aspect, we will also evaluate LSI aspect of the NMF by showing its capability
in solving the synonymy and polysemy problems in synthetic datasets. And more
extensive evaluation will be conducted by comparing LSI performances of the NMF
and the singular value decomposition (SVD), the standard LSI method, using some
standard datasets.
| [
{
"version": "v1",
"created": "Sat, 17 Dec 2011 03:57:06 GMT"
}
] | 2011-12-20T00:00:00 | [
[
"Mirzal",
"Andri",
""
]
] | TITLE: Clustering and Latent Semantic Indexing Aspects of the Nonnegative
Matrix Factorization
ABSTRACT: This paper provides theoretical support for the clustering aspect of the
nonnegative matrix factorization (NMF). By utilizing the Karush-Kuhn-Tucker
optimality conditions, we show that the NMF objective is equivalent to a graph
clustering objective, so the clustering aspect of the NMF has a solid
justification. Different from previous approaches which usually discard the
nonnegativity constraints, our approach guarantees the stationary point being
used in deriving the equivalence is located on the feasible region in the
nonnegative orthant. Additionally, since clustering capability of a matrix
decomposition technique can sometimes imply its latent semantic indexing (LSI)
aspect, we will also evaluate LSI aspect of the NMF by showing its capability
in solving the synonymy and polysemy problems in synthetic datasets. And more
extensive evaluation will be conducted by comparing LSI performances of the NMF
and the singular value decomposition (SVD), the standard LSI method, using some
standard datasets.
| no_new_dataset | 0.943608 |
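The clustering reading of NMF established in the record above can be illustrated in a few lines: factor a nonnegative TF-IDF matrix and read each document's cluster off its dominant factor. The tiny in-line corpus and the choice of two components are assumptions for the example; the paper's contribution is the KKT-based equivalence argument, not this procedure.
```python
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "the match was won by the cricket team",
    "the cricket team scored well in the match",
    "the party won the vote in parliament",
    "parliament will vote on the new party bill",
]

X = TfidfVectorizer(stop_words="english").fit_transform(docs)   # nonnegative data
model = NMF(n_components=2, init="nndsvd", random_state=0, max_iter=500)
W = model.fit_transform(X)          # document-factor weights
labels = W.argmax(axis=1)           # cluster = dominant factor, the clustering reading of NMF
print("cluster labels:", labels)
```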