Column schema: id: string (9-16 chars); submitter: string (3-64 chars, nullable); authors: string (5-6.63k chars); title: string (7-245 chars); comments: string (1-482 chars, nullable); journal-ref: string (4-382 chars, nullable); doi: string (9-151 chars, nullable); report-no: string (984 classes); categories: string (5-108 chars); license: string (9 classes); abstract: string (83-3.41k chars); versions: list (1-20 items); update_date: timestamp[s] (2007-05-23 to 2025-04-11); authors_parsed: sequence (1-427 items); prompt: string (166-3.49k chars); label: string (2 classes); prob: float64 (0.5-0.98)

id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prompt | label | prob |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0903.0682 | Raymond Chi-Wing Wong | Raymond Chi-Wing Wong, Ada Wai-Chee Fu, Jia Liu, Ke Wang and Yabo Xu | Preserving Individual Privacy in Serial Data Publishing | null | null | null | null | cs.DB cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While previous works on privacy-preserving serial data publishing consider
the scenario where sensitive values may persist over multiple data releases, we
find that no previous work provides sufficient protection for sensitive
values that can change over time, which should be the more common case. In this
work we propose to study the privacy guarantee for such transient sensitive
values, which we call the global guarantee. We formally define the problem for
achieving this guarantee and derive some theoretical properties for this
problem. We show that the anonymized group sizes used in the data anonymization
are a key factor in protecting individual privacy in serial publication. We
propose two strategies for anonymization aimed at minimizing the average
group size and the maximum group size. Finally, we conduct experiments on a
medical dataset to show that our method is highly efficient and also produces
published data of very high utility.
| [
{
"version": "v1",
"created": "Wed, 4 Mar 2009 09:36:29 GMT"
}
] | 2009-03-05T00:00:00 | [
[
"Wong",
"Raymond Chi-Wing",
""
],
[
"Fu",
"Ada Wai-Chee",
""
],
[
"Liu",
"Jia",
""
],
[
"Wang",
"Ke",
""
],
[
"Xu",
"Yabo",
""
]
] | TITLE: Preserving Individual Privacy in Serial Data Publishing
ABSTRACT: While previous works on privacy-preserving serial data publishing consider
the scenario where sensitive values may persist over multiple data releases, we
find that no previous work provides sufficient protection for sensitive
values that can change over time, which should be the more common case. In this
work we propose to study the privacy guarantee for such transient sensitive
values, which we call the global guarantee. We formally define the problem for
achieving this guarantee and derive some theoretical properties for this
problem. We show that the anonymized group sizes used in the data anonymization
are a key factor in protecting individual privacy in serial publication. We
propose two strategies for anonymization aimed at minimizing the average
group size and the maximum group size. Finally, we conduct experiments on a
medical dataset to show that our method is highly efficient and also produces
published data of very high utility.
| no_new_dataset | 0.946547 |
0903.0041 | Vit Niennattrakul | Vit Niennattrakul and Chotirat Ann Ratanamahatana | Learning DTW Global Constraint for Time Series Classification | The first runner up of Workshop and Challenge on Time Series
Classification held in conjunction with SIGKDD 2007. 8 pages, 5 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | 1-Nearest Neighbor with the Dynamic Time Warping (DTW) distance is one of the
most effective classifiers in the time series domain. Since the global constraint
was introduced in the speech community, many global constraint models have
been proposed including Sakoe-Chiba (S-C) band, Itakura Parallelogram, and
Ratanamahatana-Keogh (R-K) band. The R-K band is a general global constraint
model that can represent any global constraints with arbitrary shape and size
effectively. However, we need a good learning algorithm to discover the most
suitable set of R-K bands, and the current R-K band learning algorithm still
suffers from an 'overfitting' phenomenon. In this paper, we propose two new
learning algorithms, i.e., band boundary extraction algorithm and iterative
learning algorithm. The band boundary extraction is calculated from the bound
of all possible warping paths in each class, and the iterative learning is
adjusted from the original R-K band learning. We also use a Silhouette index, a
well-known clustering validation technique, as a heuristic function, and the
lower bound function, LB_Keogh, to enhance the prediction speed. Twenty
datasets, from the Workshop and Challenge on Time Series Classification, held
in conjunction with SIGKDD 2007, are used to evaluate our approach.
| [
{
"version": "v1",
"created": "Sat, 28 Feb 2009 05:46:31 GMT"
}
] | 2009-03-03T00:00:00 | [
[
"Niennattrakul",
"Vit",
""
],
[
"Ratanamahatana",
"Chotirat Ann",
""
]
] | TITLE: Learning DTW Global Constraint for Time Series Classification
ABSTRACT: 1-Nearest Neighbor with the Dynamic Time Warping (DTW) distance is one of the
most effective classifiers in the time series domain. Since the global constraint
was introduced in the speech community, many global constraint models have
been proposed including Sakoe-Chiba (S-C) band, Itakura Parallelogram, and
Ratanamahatana-Keogh (R-K) band. The R-K band is a general global constraint
model that can represent any global constraints with arbitrary shape and size
effectively. However, we need a good learning algorithm to discover the most
suitable set of R-K bands, and the current R-K band learning algorithm still
suffers from an 'overfitting' phenomenon. In this paper, we propose two new
learning algorithms, i.e., band boundary extraction algorithm and iterative
learning algorithm. The band boundary extraction is calculated from the bound
of all possible warping paths in each class, and the iterative learning is
adjusted from the original R-K band learning. We also use a Silhouette index, a
well-known clustering validation technique, as a heuristic function, and the
lower bound function, LB_Keogh, to enhance the prediction speed. Twenty
datasets, from the Workshop and Challenge on Time Series Classification, held
in conjunction with SIGKDD 2007, are used to evaluate our approach.
| no_new_dataset | 0.951414 |
0801.2405 | Katrin Heitmann | Steve Haroz, Kwan-Liu Ma, Katrin Heitmann | Multiple Uncertainties in Time-Variant Cosmological Particle Data | 8 pages, 8 figures, published in Pacific Vis 2008, project website at
http://steveharoz.com/research/cosmology/ | Haroz, S; Ma, K-L; Heitmann, K, "Multiple Uncertainties in
Time-Variant Cosmological Particle Data" IEEE PacificVIS '08, pp.207-214, 5-7
March 2008 | 10.1109/PACIFICVIS.2008.4475478 | LAUR-08-0052 | astro-ph cs.GR cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Though the mediums for visualization are limited, the potential dimensions of
a dataset are not. In many areas of scientific study, understanding the
correlations between those dimensions and their uncertainties is pivotal to
mining useful information from a dataset. Obtaining this insight can
necessitate visualizing the many relationships among temporal, spatial, and
other dimensionalities of data and its uncertainties. We utilize multiple views
for interactive dataset exploration and selection of important features, and we
apply those techniques to the unique challenges of cosmological particle
datasets. We show how interactivity and incorporation of multiple visualization
techniques help overcome the problem of limited visualization dimensions and
allow many types of uncertainty to be seen in correlation with other variables.
| [
{
"version": "v1",
"created": "Tue, 15 Jan 2008 22:57:41 GMT"
},
{
"version": "v2",
"created": "Wed, 25 Feb 2009 08:09:24 GMT"
}
] | 2009-02-25T00:00:00 | [
[
"Haroz",
"Steve",
""
],
[
"Ma",
"Kwan-Liu",
""
],
[
"Heitmann",
"Katrin",
""
]
] | TITLE: Multiple Uncertainties in Time-Variant Cosmological Particle Data
ABSTRACT: Though the mediums for visualization are limited, the potential dimensions of
a dataset are not. In many areas of scientific study, understanding the
correlations between those dimensions and their uncertainties is pivotal to
mining useful information from a dataset. Obtaining this insight can
necessitate visualizing the many relationships among temporal, spatial, and
other dimensionalities of data and its uncertainties. We utilize multiple views
for interactive dataset exploration and selection of important features, and we
apply those techniques to the unique challenges of cosmological particle
datasets. We show how interactivity and incorporation of multiple visualization
techniques help overcome the problem of limited visualization dimensions and
allow many types of uncertainty to be seen in correlation with other variables.
| no_new_dataset | 0.946151 |
0902.4228 | Vamsi Potluru | Vamsi K. Potluru, Sergey M. Plis, Morten Morup, Vince D. Calhoun,
Terran Lane | Multiplicative updates For Non-Negative Kernel SVM | 4 pages, 1 figure, 1 table | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present multiplicative updates for solving hard and soft margin support
vector machines (SVM) with non-negative kernels. They follow as a natural
extension of the updates for non-negative matrix factorization. No additional
parameter setting, such as choosing a learning rate, is required. Experiments
demonstrate rapid convergence to good classifiers. We analyze the rates of
asymptotic convergence of the updates and establish tight bounds. We test the
performance on several datasets using various non-negative kernels and report
equivalent generalization errors to that of a standard SVM.
| [
{
"version": "v1",
"created": "Tue, 24 Feb 2009 20:38:32 GMT"
}
] | 2009-02-25T00:00:00 | [
[
"Potluru",
"Vamsi K.",
""
],
[
"Plis",
"Sergey M.",
""
],
[
"Morup",
"Morten",
""
],
[
"Calhoun",
"Vince D.",
""
],
[
"Lane",
"Terran",
""
]
] | TITLE: Multiplicative updates For Non-Negative Kernel SVM
ABSTRACT: We present multiplicative updates for solving hard and soft margin support
vector machines (SVM) with non-negative kernels. They follow as a natural
extension of the updates for non-negative matrix factorization. No additional
parameter setting, such as choosing a learning rate, is required. Experiments
demonstrate rapid convergence to good classifiers. We analyze the rates of
asymptotic convergence of the updates and establish tight bounds. We test the
performance on several datasets using various non-negative kernels and report
equivalent generalization errors to that of a standard SVM.
| no_new_dataset | 0.947332 |
0812.2318 | Fabrice Ardhuin | Fabrice Collard, Fabrice Ardhuin (SHOM), Bertrand Chapron (LOS) | Routine monitoring and analysis of ocean swell fields using a spaceborne
SAR | 14 pages. Submitted to Journal of Geophysical Research (revised) | null | null | null | physics.ao-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Satellite Synthetic Aperture Radar (SAR) observations can provide a global
view of ocean swell fields when using a specific "wave mode" sampling. A
methodology is presented to routinely derive integral properties of the longer
wavelength (swell) portion of the wave spectrum from SAR Level 2 products, and
both monitor and predict their evolution across ocean basins. SAR-derived
estimates of swell height, and energy-weighted peak period and direction, are
validated against buoy observations, and the peak directions are used to
project the peak periods in one dimension along the corresponding great circle
route, both forward and back in time, using the peak period group velocity. The
resulting real time dataset of great circle-projected peak periods produces
two-dimensional maps that can be used to monitor and predict the spatial
extent, and temporal evolution, of individual ocean swell fields as they
propagate from their source region to distant coastlines. The methodology is
found to be consistent with the dispersive arrival of peak swell periods at a
mid-ocean buoy. The simple great circle propagation method cannot project the
swell heights in space like the peak periods, because energy evolution along a
great circle is a function of the source storm characteristics and the unknown
swell dissipation rate. A more general geometric optics model is thus proposed
for the far field of the storms. This model is applied here to determine the
attenuation over long distances. For one of the largest recorded storms,
observations of 15 s period swells are consistent with a constant dissipation
rate that corresponds to a 3300 km e-folding scale for the energy. In this
case, swell dissipation is a significant term in the wave energy balance at
global scales.
| [
{
"version": "v1",
"created": "Fri, 12 Dec 2008 08:55:00 GMT"
},
{
"version": "v2",
"created": "Fri, 9 Jan 2009 09:25:02 GMT"
},
{
"version": "v3",
"created": "Mon, 23 Feb 2009 15:13:16 GMT"
}
] | 2009-02-23T00:00:00 | [
[
"Collard",
"Fabrice",
"",
"SHOM"
],
[
"Ardhuin",
"Fabrice",
"",
"SHOM"
],
[
"Chapron",
"Bertrand",
"",
"LOS"
]
] | TITLE: Routine monitoring and analysis of ocean swell fields using a spaceborne
SAR
ABSTRACT: Satellite Synthetic Aperture Radar (SAR) observations can provide a global
view of ocean swell fields when using a specific "wave mode" sampling. A
methodology is presented to routinely derive integral properties of the longer
wavelength (swell) portion of the wave spectrum from SAR Level 2 products, and
both monitor and predict their evolution across ocean basins. SAR-derived
estimates of swell height, and energy-weighted peak period and direction, are
validated against buoy observations, and the peak directions are used to
project the peak periods in one dimension along the corresponding great circle
route, both forward and back in time, using the peak period group velocity. The
resulting real time dataset of great circle-projected peak periods produces
two-dimensional maps that can be used to monitor and predict the spatial
extent, and temporal evolution, of individual ocean swell fields as they
propagate from their source region to distant coastlines. The methodology is
found to be consistent with the dispersive arrival of peak swell periods at a
mid-ocean buoy. The simple great circle propagation method cannot project the
swell heights in space like the peak periods, because energy evolution along a
great circle is a function of the source storm characteristics and the unknown
swell dissipation rate. A more general geometric optics model is thus proposed
for the far field of the storms. This model is applied here to determine the
attenuation over long distances. For one of the largest recorded storms,
observations of 15 s period swells are consistent with a constant dissipation
rate that corresponds to a 3300 km e-folding scale for the energy. In this
case, swell dissipation is a significant term in the wave energy balance at
global scales.
| no_new_dataset | 0.950134 |
0804.1441 | Ratthachat Chatpatanasiri | Ratthachat Chatpatanasiri, Teesid Korsrilabutr, Pasakorn
Tangchanachaianan and Boonserm Kijsirikul | On Kernelization of Supervised Mahalanobis Distance Learners | 23 pages, 5 figures. There is a seriously wrong formula in derivation
of a gradient formula of the "kernel NCA" in the two previous versions. In
this new version, a new theoretical result is provided to properly account
kernel NCA | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper focuses on the problem of kernelizing an existing supervised
Mahalanobis distance learner. The following features are included in the paper.
Firstly, three popular learners, namely, "neighborhood component analysis",
"large margin nearest neighbors" and "discriminant neighborhood embedding",
which do not have kernel versions are kernelized in order to improve their
classification performances. Secondly, an alternative kernelization framework
called "KPCA trick" is presented. Implementing a learner in the new framework
gains several advantages over the standard framework, e.g. no mathematical
formulas and no reprogramming are required for a kernel implementation, the
framework avoids troublesome problems such as singularity, etc. Thirdly, while
the truths of representer theorems are just assumptions in previous papers
related to ours, here, representer theorems are formally proven. The proofs
validate both the kernel trick and the KPCA trick in the context of Mahalanobis
distance learning. Fourthly, unlike previous works which always apply brute
force methods to select a kernel, we investigate two approaches which can be
efficiently adopted to construct an appropriate kernel for a given dataset.
Finally, numerical results on various real-world datasets are presented.
| [
{
"version": "v1",
"created": "Wed, 9 Apr 2008 09:40:51 GMT"
},
{
"version": "v2",
"created": "Sat, 20 Dec 2008 09:51:46 GMT"
},
{
"version": "v3",
"created": "Fri, 30 Jan 2009 02:19:27 GMT"
}
] | 2009-01-30T00:00:00 | [
[
"Chatpatanasiri",
"Ratthachat",
""
],
[
"Korsrilabutr",
"Teesid",
""
],
[
"Tangchanachaianan",
"Pasakorn",
""
],
[
"Kijsirikul",
"Boonserm",
""
]
] | TITLE: On Kernelization of Supervised Mahalanobis Distance Learners
ABSTRACT: This paper focuses on the problem of kernelizing an existing supervised
Mahalanobis distance learner. The following features are included in the paper.
Firstly, three popular learners, namely, "neighborhood component analysis",
"large margin nearest neighbors" and "discriminant neighborhood embedding",
which do not have kernel versions are kernelized in order to improve their
classification performances. Secondly, an alternative kernelization framework
called "KPCA trick" is presented. Implementing a learner in the new framework
gains several advantages over the standard framework, e.g. no mathematical
formulas and no reprogramming are required for a kernel implementation, the
framework avoids troublesome problems such as singularity, etc. Thirdly, while
the truths of representer theorems are just assumptions in previous papers
related to ours, here, representer theorems are formally proven. The proofs
validate both the kernel trick and the KPCA trick in the context of Mahalanobis
distance learning. Fourthly, unlike previous works which always apply brute
force methods to select a kernel, we investigate two approaches which can be
efficiently adopted to construct an appropriate kernel for a given dataset.
Finally, numerical results on various real-world datasets are presented.
| no_new_dataset | 0.948155 |
0809.1181 | Robert Grossman | Yunhong Gu and Robert L Grossman | Sector and Sphere: Towards Simplified Storage and Processing of Large
Scale Distributed Data | null | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cloud computing has demonstrated that processing very large datasets over
commodity clusters can be done simply given the right programming model and
infrastructure. In this paper, we describe the design and implementation of the
Sector storage cloud and the Sphere compute cloud. In contrast to existing
storage and compute clouds, Sector can manage data not only within a data
center, but also across geographically distributed data centers. Similarly, the
Sphere compute cloud supports User Defined Functions (UDF) over data both
within a data center and across data centers. As a special case, MapReduce
style programming can be implemented in Sphere by using a Map UDF followed by a
Reduce UDF. We describe some experimental studies comparing Sector/Sphere and
Hadoop using the Terasort Benchmark. In these studies, Sector is about twice as
fast as Hadoop. Sector/Sphere is open source.
| [
{
"version": "v1",
"created": "Sat, 6 Sep 2008 18:37:51 GMT"
},
{
"version": "v2",
"created": "Sat, 17 Jan 2009 00:34:47 GMT"
}
] | 2009-01-17T00:00:00 | [
[
"Gu",
"Yunhong",
""
],
[
"Grossman",
"Robert L",
""
]
] | TITLE: Sector and Sphere: Towards Simplified Storage and Processing of Large
Scale Distributed Data
ABSTRACT: Cloud computing has demonstrated that processing very large datasets over
commodity clusters can be done simply given the right programming model and
infrastructure. In this paper, we describe the design and implementation of the
Sector storage cloud and the Sphere compute cloud. In contrast to existing
storage and compute clouds, Sector can manage data not only within a data
center, but also across geographically distributed data centers. Similarly, the
Sphere compute cloud supports User Defined Functions (UDF) over data both
within a data center and across data centers. As a special case, MapReduce
style programming can be implemented in Sphere by using a Map UDF followed by a
Reduce UDF. We describe some experimental studies comparing Sector/Sphere and
Hadoop using the Terasort Benchmark. In these studies, Sector is about twice as
fast as Hadoop. Sector/Sphere is open source.
| no_new_dataset | 0.947478 |
0901.0489 | Pascal Pernot | Pascal Pernot (LCPO) | Scaling factors for ab initio vibrational frequencies: comparison of
uncertainty models for quantified prediction | null | null | null | null | physics.data-an physics.chem-ph physics.class-ph stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bayesian Model Calibration is used to revisit the problem of scaling factor
calibration for semi-empirical correction of ab initio calculations.
Particular attention is devoted to uncertainty evaluation for scaling factors,
and to their effect on prediction of observables involving scaled properties.
We argue that linear models used for calibration of scaling factors are
generally not statistically valid, in the sense that they are not able to fit
calibration data within their uncertainty limits. Uncertainty evaluation and
uncertainty propagation by statistical methods from such invalid models are
doomed to failure. To relieve this problem, a stochastic function is included
in the model to account for model inadequacy, according to the Bayesian Model
Calibration approach. In this framework, we demonstrate that standard
calibration summary statistics, as optimal scaling factor and root mean square,
can be safely used for uncertainty propagation only when large calibration sets
of precise data are used. For small datasets containing a few dozens of data, a
more accurate formula is provided which involves scaling factor calibration
uncertainty. For measurement uncertainties larger than model inadequacy, the
problem can be reduced to a weighted least squares analysis. For intermediate
cases, no analytical estimators were found, and numerical Bayesian estimation
of parameters has to be used.
| [
{
"version": "v1",
"created": "Mon, 5 Jan 2009 14:35:30 GMT"
}
] | 2009-01-12T00:00:00 | [
[
"Pernot",
"Pascal",
"",
"LCPO"
]
] | TITLE: Scaling factors for ab initio vibrational frequencies: comparison of
uncertainty models for quantified prediction
ABSTRACT: Bayesian Model Calibration is used to revisit the problem of scaling factor
calibration for semi-empirical correction of ab initio calculations.
Particular attention is devoted to uncertainty evaluation for scaling factors,
and to their effect on prediction of observables involving scaled properties.
We argue that linear models used for calibration of scaling factors are
generally not statistically valid, in the sense that they are not able to fit
calibration data within their uncertainty limits. Uncertainty evaluation and
uncertainty propagation by statistical methods from such invalid models are
doomed to failure. To relieve this problem, a stochastic function is included
in the model to account for model inadequacy, according to the Bayesian Model
Calibration approach. In this framework, we demonstrate that standard
calibration summary statistics, as optimal scaling factor and root mean square,
can be safely used for uncertainty propagation only when large calibration sets
of precise data are used. For small datasets containing a few dozens of data, a
more accurate formula is provided which involves scaling factor calibration
uncertainty. For measurement uncertainties larger than model inadequacy, the
problem can be reduced to a weighted least squares analysis. For intermediate
cases, no analytical estimators were found, and numerical Bayesian estimation
of parameters has to be used.
| no_new_dataset | 0.94743 |
0901.0537 | Ian Ross | Ian Ross (University of Bristol) | Nonlinear Dimensionality Reduction Methods in Climate Data Analysis | 273 pages, 76 figures; University of Bristol Ph.D. thesis; version
with high-resolution figures available from
http://www.skybluetrades.net/thesis/ian-ross-thesis.pdf (52Mb download) | null | null | null | physics.ao-ph physics.data-an | http://creativecommons.org/licenses/by/3.0/ | Linear dimensionality reduction techniques, notably principal component
analysis, are widely used in climate data analysis as a means to aid in the
interpretation of datasets of high dimensionality. These linear methods may not
be appropriate for the analysis of data arising from nonlinear processes
occurring in the climate system. Numerous techniques for nonlinear
dimensionality reduction have been developed recently that may provide a
potentially useful tool for the identification of low-dimensional manifolds in
climate data sets arising from nonlinear dynamics. In this thesis I apply three
such techniques to the study of El Nino/Southern Oscillation variability in
tropical Pacific sea surface temperatures and thermocline depth, comparing
observational data with simulations from coupled atmosphere-ocean general
circulation models from the CMIP3 multi-model ensemble.
The three methods used here are a nonlinear principal component analysis
(NLPCA) approach based on neural networks, the Isomap isometric mapping
algorithm, and Hessian locally linear embedding. I use these three methods to
examine El Nino variability in the different data sets and assess the
suitability of these nonlinear dimensionality reduction approaches for climate
data analysis.
I conclude that although, for the application presented here, analysis using
NLPCA, Isomap and Hessian locally linear embedding does not provide additional
information beyond that already provided by principal component analysis, these
methods are effective tools for exploratory data analysis.
| [
{
"version": "v1",
"created": "Fri, 2 Jan 2009 16:33:30 GMT"
}
] | 2009-01-06T00:00:00 | [
[
"Ross",
"Ian",
"",
"University of Bristol"
]
] | TITLE: Nonlinear Dimensionality Reduction Methods in Climate Data Analysis
ABSTRACT: Linear dimensionality reduction techniques, notably principal component
analysis, are widely used in climate data analysis as a means to aid in the
interpretation of datasets of high dimensionality. These linear methods may not
be appropriate for the analysis of data arising from nonlinear processes
occurring in the climate system. Numerous techniques for nonlinear
dimensionality reduction have been developed recently that may provide a
potentially useful tool for the identification of low-dimensional manifolds in
climate data sets arising from nonlinear dynamics. In this thesis I apply three
such techniques to the study of El Nino/Southern Oscillation variability in
tropical Pacific sea surface temperatures and thermocline depth, comparing
observational data with simulations from coupled atmosphere-ocean general
circulation models from the CMIP3 multi-model ensemble.
The three methods used here are a nonlinear principal component analysis
(NLPCA) approach based on neural networks, the Isomap isometric mapping
algorithm, and Hessian locally linear embedding. I use these three methods to
examine El Nino variability in the different data sets and assess the
suitability of these nonlinear dimensionality reduction approaches for climate
data analysis.
I conclude that although, for the application presented here, analysis using
NLPCA, Isomap and Hessian locally linear embedding does not provide additional
information beyond that already provided by principal component analysis, these
methods are effective tools for exploratory data analysis.
| no_new_dataset | 0.951953 |
0812.5032 | Qiang Li | Qiang Li, Yan He, Jing-ping Jiang | A New Clustering Algorithm Based Upon Flocking On Complex Network | 18 pages, 4 figures, 3 tables | null | null | null | cs.LG cs.AI cs.CV physics.soc-ph | http://creativecommons.org/licenses/by-nc-sa/3.0/ | We have proposed a model based upon flocking on a complex network, and then
developed two clustering algorithms on the basis of it. In the algorithms,
firstly a \textit{k}-nearest neighbor (knn) graph, as a weighted and directed
graph, is produced among all data points in a dataset, each of which is regarded
as an agent that can move in space, and then a time-varying complex network is
created by adding long-range links for each data point. Furthermore, each data
point is acted on not only by its \textit{k} nearest neighbors but also by \textit{r}
long-range neighbors, through fields they establish together in space, so
it will take a step along the direction of the vector sum of all fields. More
importantly, these long-range links provide some hidden information for
each data point as it moves and at the same time accelerate its
convergence to a center. As the data points move in space according to the proposed
model, those that belong to the same class gradually converge to the same
position, whereas those that belong to different classes move away from one
another. Consequently, the experimental results have demonstrated that data
points in datasets are clustered reasonably and efficiently, and the
clustering algorithms converge quickly. Moreover, the comparison
with other algorithms also provides an indication of the effectiveness of the
proposed approach.
| [
{
"version": "v1",
"created": "Tue, 30 Dec 2008 08:30:27 GMT"
}
] | 2008-12-31T00:00:00 | [
[
"Li",
"Qiang",
""
],
[
"He",
"Yan",
""
],
[
"Jiang",
"Jing-ping",
""
]
] | TITLE: A New Clustering Algorithm Based Upon Flocking On Complex Network
ABSTRACT: We have proposed a model based upon flocking on a complex network, and then
developed two clustering algorithms on the basis of it. In the algorithms,
firstly a \textit{k}-nearest neighbor (knn) graph, as a weighted and directed
graph, is produced among all data points in a dataset, each of which is regarded
as an agent that can move in space, and then a time-varying complex network is
created by adding long-range links for each data point. Furthermore, each data
point is acted on not only by its \textit{k} nearest neighbors but also by \textit{r}
long-range neighbors, through fields they establish together in space, so
it will take a step along the direction of the vector sum of all fields. More
importantly, these long-range links provide some hidden information for
each data point as it moves and at the same time accelerate its
convergence to a center. As the data points move in space according to the proposed
model, those that belong to the same class gradually converge to the same
position, whereas those that belong to different classes move away from one
another. Consequently, the experimental results have demonstrated that data
points in datasets are clustered reasonably and efficiently, and the
clustering algorithms converge quickly. Moreover, the comparison
with other algorithms also provides an indication of the effectiveness of the
proposed approach.
| no_new_dataset | 0.953535 |
0812.4460 | Ernesto Diaz-Aviles | Ernesto Diaz-Aviles, Lars Schmidt-Thieme and Cai-Nicolas Ziegler | Emergence of Spontaneous Order Through Neighborhood Formation in
Peer-to-Peer Recommender Systems | WWW '05 International Workshop on Innovations in Web Infrastructure
(IWI '05) May 10, 2005, Chiba, Japan | WWW '05 International Workshop on Innovations in Web
Infrastructure (IWI '05) May 10, 2005, Chiba, Japan | null | null | cs.AI cs.IR cs.MA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The advent of the Semantic Web necessitates paradigm shifts away from
centralized client/server architectures towards decentralization and
peer-to-peer computation, making the existence of central authorities
superfluous and even impossible. At the same time, recommender systems are
gaining considerable impact in e-commerce, providing people with
recommendations that are personalized and tailored to their very needs. These
recommender systems have traditionally been deployed with stark centralized
scenarios in mind, operating in closed communities detached from their host
network's outer perimeter. We aim at marrying these two worlds, i.e.,
decentralized peer-to-peer computing and recommender systems, in one
agent-based framework. Our architecture features an epidemic-style protocol
maintaining neighborhoods of like-minded peers in a robust, self-organizing
fashion. In order to demonstrate our architecture's ability to retain
scalability, robustness and to allow for convergence towards high-quality
recommendations, we conduct offline experiments on top of the popular MovieLens
dataset.
| [
{
"version": "v1",
"created": "Tue, 23 Dec 2008 23:26:27 GMT"
}
] | 2008-12-25T00:00:00 | [
[
"Diaz-Aviles",
"Ernesto",
""
],
[
"Schmidt-Thieme",
"Lars",
""
],
[
"Ziegler",
"Cai-Nicolas",
""
]
] | TITLE: Emergence of Spontaneous Order Through Neighborhood Formation in
Peer-to-Peer Recommender Systems
ABSTRACT: The advent of the Semantic Web necessitates paradigm shifts away from
centralized client/server architectures towards decentralization and
peer-to-peer computation, making the existence of central authorities
superfluous and even impossible. At the same time, recommender systems are
gaining considerable impact in e-commerce, providing people with
recommendations that are personalized and tailored to their very needs. These
recommender systems have traditionally been deployed with stark centralized
scenarios in mind, operating in closed communities detached from their host
network's outer perimeter. We aim at marrying these two worlds, i.e.,
decentralized peer-to-peer computing and recommender systems, in one
agent-based framework. Our architecture features an epidemic-style protocol
maintaining neighborhoods of like-minded peers in a robust, self-organizing
fashion. In order to demonstrate our architecture's ability to retain
scalability, robustness and to allow for convergence towards high-quality
recommendations, we conduct offline experiments on top of the popular MovieLens
dataset.
| no_new_dataset | 0.943764 |
0802.1430 | Francis Bach | Jacob Abernethy, Francis Bach (INRIA Rocquencourt), Theodoros
Evgeniou, Jean-Philippe Vert (CB) | A New Approach to Collaborative Filtering: Operator Estimation with
Spectral Regularization | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a general approach for collaborative filtering (CF) using spectral
regularization to learn linear operators from "users" to the "objects" they
rate. Recent low-rank type matrix completion approaches to CF are shown to be
special cases. However, unlike existing regularization based CF methods, our
approach can be used to also incorporate information such as attributes of the
users or the objects -- a limitation of existing regularization based CF
methods. We then provide novel representer theorems that we use to develop new
estimation methods. We provide learning algorithms based on low-rank
decompositions, and test them on a standard CF dataset. The experiments
indicate the advantages of generalizing the existing regularization based CF
methods to incorporate related information about users and objects. Finally, we
show that certain multi-task learning methods can be also seen as special cases
of our proposed approach.
| [
{
"version": "v1",
"created": "Mon, 11 Feb 2008 12:55:34 GMT"
},
{
"version": "v2",
"created": "Fri, 19 Dec 2008 14:05:14 GMT"
}
] | 2008-12-19T00:00:00 | [
[
"Abernethy",
"Jacob",
"",
"INRIA Rocquencourt"
],
[
"Bach",
"Francis",
"",
"INRIA Rocquencourt"
],
[
"Evgeniou",
"Theodoros",
"",
"CB"
],
[
"Vert",
"Jean-Philippe",
"",
"CB"
]
] | TITLE: A New Approach to Collaborative Filtering: Operator Estimation with
Spectral Regularization
ABSTRACT: We present a general approach for collaborative filtering (CF) using spectral
regularization to learn linear operators from "users" to the "objects" they
rate. Recent low-rank type matrix completion approaches to CF are shown to be
special cases. However, unlike existing regularization based CF methods, our
approach can be used to also incorporate information such as attributes of the
users or the objects -- a limitation of existing regularization based CF
methods. We then provide novel representer theorems that we use to develop new
estimation methods. We provide learning algorithms based on low-rank
decompositions, and test them on a standard CF dataset. The experiments
indicate the advantages of generalizing the existing regularization based CF
methods to incorporate related information about users and objects. Finally, we
show that certain multi-task learning methods can be also seen as special cases
of our proposed approach.
| no_new_dataset | 0.943295 |
0804.1302 | Francis Bach | Francis Bach (INRIA Rocquencourt) | Bolasso: model consistent Lasso estimation through the bootstrap | null | null | null | null | cs.LG math.ST stat.ML stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the least-square linear regression problem with regularization by
the l1-norm, a problem usually referred to as the Lasso. In this paper, we
present a detailed asymptotic analysis of model consistency of the Lasso. For
various decays of the regularization parameter, we compute asymptotic
equivalents of the probability of correct model selection (i.e., variable
selection). For a specific rate decay, we show that the Lasso selects all the
variables that should enter the model with probability tending to one
exponentially fast, while it selects all other variables with strictly positive
probability. We show that this property implies that if we run the Lasso for
several bootstrapped replications of a given sample, then intersecting the
supports of the Lasso bootstrap estimates leads to consistent model selection.
This novel variable selection algorithm, referred to as the Bolasso, is
compared favorably to other linear regression methods on synthetic data and
datasets from the UCI machine learning repository.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2008 15:40:03 GMT"
}
] | 2008-12-18T00:00:00 | [
[
"Bach",
"Francis",
"",
"INRIA Rocquencourt"
]
] | TITLE: Bolasso: model consistent Lasso estimation through the bootstrap
ABSTRACT: We consider the least-square linear regression problem with regularization by
the l1-norm, a problem usually referred to as the Lasso. In this paper, we
present a detailed asymptotic analysis of model consistency of the Lasso. For
various decays of the regularization parameter, we compute asymptotic
equivalents of the probability of correct model selection (i.e., variable
selection). For a specific rate decay, we show that the Lasso selects all the
variables that should enter the model with probability tending to one
exponentially fast, while it selects all other variables with strictly positive
probability. We show that this property implies that if we run the Lasso for
several bootstrapped replications of a given sample, then intersecting the
supports of the Lasso bootstrap estimates leads to consistent model selection.
This novel variable selection algorithm, referred to as the Bolasso, is
compared favorably to other linear regression methods on synthetic data and
datasets from the UCI machine learning repository.
| no_new_dataset | 0.950227 |
0806.3708 | Jocelyne Troccaz | S\'ebastien Martin (TIMC), Vincent Daanen (TIMC), Jocelyne Troccaz
(TIMC) | Atlas-Based Prostate Segmentation Using an Hybrid Registration | International Journal of Computer Assisted Radiology and Surgery
(2008) 000-999 | null | null | null | cs.OH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Purpose: This paper presents the preliminary results of a semi-automatic
method for prostate segmentation of Magnetic Resonance Images (MRI) which aims
to be incorporated in a navigation system for prostate brachytherapy. Methods:
The method is based on the registration of an anatomical atlas computed from a
population of 18 MRI exams onto a patient image. A hybrid registration
framework which couples an intensity-based registration with a robust
point-matching algorithm is used for both atlas building and atlas
registration. Results: The method has been validated on the same dataset as
the one used to construct the atlas using the "leave-one-out method". Results
give a mean error of 3.39 mm and a standard deviation of 1.95 mm with respect
to expert segmentations. Conclusions: We think that this segmentation tool may
be a very valuable help to the clinician for routine quantitative image
exploitation.
| [
{
"version": "v1",
"created": "Mon, 23 Jun 2008 15:43:28 GMT"
}
] | 2008-12-18T00:00:00 | [
[
"Martin",
"Sébastien",
"",
"TIMC"
],
[
"Daanen",
"Vincent",
"",
"TIMC"
],
[
"Troccaz",
"Jocelyne",
"",
"TIMC"
]
] | TITLE: Atlas-Based Prostate Segmentation Using an Hybrid Registration
ABSTRACT: Purpose: This paper presents the preliminary results of a semi-automatic
method for prostate segmentation of Magnetic Resonance Images (MRI) which aims
to be incorporated in a navigation system for prostate brachytherapy. Methods:
The method is based on the registration of an anatomical atlas computed from a
population of 18 MRI exams onto a patient image. A hybrid registration
framework which couples an intensity-based registration with a robust
point-matching algorithm is used for both atlas building and atlas
registration. Results: The method has been validated on the same dataset as
the one used to construct the atlas using the "leave-one-out method". Results
give a mean error of 3.39 mm and a standard deviation of 1.95 mm with respect
to expert segmentations. Conclusions: We think that this segmentation tool may
be a very valuable help to the clinician for routine quantitative image
exploitation.
| no_new_dataset | 0.946448 |
0812.1357 | Qiang Li | Qiang Li, Yan He, Jing-ping Jiang | A Novel Clustering Algorithm Based on Quantum Random Walk | 14 pages, 6 figures, 3 tables | null | null | null | cs.LG | http://creativecommons.org/licenses/by-nc-sa/3.0/ | The enormous successes have been made by quantum algorithms during the last
decade. In this paper, we combine the quantum random walk (QRW) with the
problem of data clustering, and develop two clustering algorithms based on the
one dimensional QRW. Then, the probability distributions on the positions
induced by QRW in these algorithms are investigated, which also indicates the
possibility of obtaining better results. Consequently, the experimental results
have demonstrated that data points in datasets are clustered reasonably and
efficiently, and the clustering algorithms converge quickly.
Moreover, the comparison with other algorithms also provides an indication of
the effectiveness of the proposed approach.
| [
{
"version": "v1",
"created": "Sun, 7 Dec 2008 15:22:27 GMT"
}
] | 2008-12-09T00:00:00 | [
[
"Li",
"Qiang",
""
],
[
"He",
"Yan",
""
],
[
"Jiang",
"Jing-ping",
""
]
] | TITLE: A Novel Clustering Algorithm Based on Quantum Random Walk
ABSTRACT: The enormous successes have been made by quantum algorithms during the last
decade. In this paper, we combine the quantum random walk (QRW) with the
problem of data clustering, and develop two clustering algorithms based on the
one dimensional QRW. Then, the probability distributions on the positions
induced by QRW in these algorithms are investigated, which also indicates the
possibility of obtaining better results. Consequently, the experimental results
have demonstrated that data points in datasets are clustered reasonably and
efficiently, and the clustering algorithms converge quickly.
Moreover, the comparison with other algorithms also provides an indication of
the effectiveness of the proposed approach.
| no_new_dataset | 0.954308 |
0801.3263 | Rosane Freire Riera | A.A.G. Cortines, R. Riera, C. Anteneodo | From short to fat tails in financial markets: A unified description | 11 pages, 5 figures | European Journal of Physics B, volume 60, p. 385, 2007 | null | null | q-fin.ST cond-mat.stat-mech physics.soc-ph | null | In complex systems such as turbulent flows and financial markets, the
dynamics in long and short time-lags, signaled by Gaussian and fat-tailed
statistics, respectively, calls for a unified description. To address this
issue we analyze a real dataset, namely, price fluctuations, in a wide range of
temporal scales to embrace both regimes. By means of Kramers-Moyal (KM)
coefficients evaluated from empirical time series, we obtain the evolution
equation for the probability density function (PDF) of price returns. We also
present consistent asymptotic solutions for the timescale dependent equation
that emerges from the empirical analysis. From these solutions, new
relationships connecting PDF characteristics, such as tail exponents, to
parameters of KM coefficients arise. The results reveal a dynamical path that
leads from Gaussian to fat-tailed statistics, furnishing insights on other
complex systems where akin crossover is observed.
| [
{
"version": "v1",
"created": "Mon, 21 Jan 2008 19:30:12 GMT"
}
] | 2008-12-02T00:00:00 | [
[
"Cortines",
"A. A. G.",
""
],
[
"Riera",
"R.",
""
],
[
"Anteneodo",
"C.",
""
]
] | TITLE: From short to fat tails in financial markets: A unified description
ABSTRACT: In complex systems such as turbulent flows and financial markets, the
dynamics in long and short time-lags, signaled by Gaussian and fat-tailed
statistics, respectively, calls for a unified description. To address this
issue we analyze a real dataset, namely, price fluctuations, in a wide range of
temporal scales to embrace both regimes. By means of Kramers-Moyal (KM)
coefficients evaluated from empirical time series, we obtain the evolution
equation for the probability density function (PDF) of price returns. We also
present consistent asymptotic solutions for the timescale dependent equation
that emerges from the empirical analysis. From these solutions, new
relationships connecting PDF characteristics, such as tail exponents, to
parameters of KM coefficients arise. The results reveal a dynamical path that
leads from Gaussian to fat-tailed statistics, furnishing insights on other
complex systems where akin crossover is observed.
| no_new_dataset | 0.951549 |
physics/0511101 | Fengzhong Wang | Fengzhong Wang, Kazuko Yamasaki, Shlomo Havlin and H. Eugene Stanley | Scaling and memory of intraday volatility return intervals in stock
market | 19 pages, 8 figures | Phys. Rev. E 73, 026117 (2006) | 10.1103/PhysRevE.73.026117 | null | physics.soc-ph q-fin.ST | null | We study the return interval $\tau$ between price volatilities that are above
a certain threshold $q$ for 31 intraday datasets, including the Standard &
Poor's 500 index and the 30 stocks that form the Dow Jones Industrial index.
For different threshold $q$, the probability density function $P_q(\tau)$
scales with the mean interval $\bar{\tau}$ as
$P_q(\tau)={\bar{\tau}}^{-1}f(\tau/\bar{\tau})$, similar to that found in daily
volatilities. Since the intraday records have significantly more data points
compared to the daily records, we could probe for much higher thresholds $q$
and still obtain good statistics. We find that the scaling function $f(x)$ is
consistent for all 31 intraday datasets in various time resolutions, and the
function is well approximated by the stretched exponential, $f(x)\sim e^{-a
x^\gamma}$, with $\gamma=0.38\pm 0.05$ and $a=3.9\pm 0.5$, which indicates the
existence of correlations. We analyze the conditional probability distribution
$P_q(\tau|\tau_0)$ for $\tau$ following a certain interval $\tau_0$, and find
$P_q(\tau|\tau_0)$ depends on $\tau_0$, which demonstrates memory in intraday
return intervals. Also, we find that the mean conditional interval
$<\tau|\tau_0>$ increases with $\tau_0$, consistent with the memory found for
$P_q(\tau|\tau_0)$. Moreover, we find that return interval records have long
term correlations with correlation exponents similar to that of volatility
records.
| [
{
"version": "v1",
"created": "Fri, 11 Nov 2005 15:56:02 GMT"
}
] | 2008-12-02T00:00:00 | [
[
"Wang",
"Fengzhong",
""
],
[
"Yamasaki",
"Kazuko",
""
],
[
"Havlin",
"Shlomo",
""
],
[
"Stanley",
"H. Eugene",
""
]
] | TITLE: Scaling and memory of intraday volatility return intervals in stock
market
ABSTRACT: We study the return interval $\tau$ between price volatilities that are above
a certain threshold $q$ for 31 intraday datasets, including the Standard &
Poor's 500 index and the 30 stocks that form the Dow Jones Industrial index.
For different threshold $q$, the probability density function $P_q(\tau)$
scales with the mean interval $\bar{\tau}$ as
$P_q(\tau)={\bar{\tau}}^{-1}f(\tau/\bar{\tau})$, similar to that found in daily
volatilities. Since the intraday records have significantly more data points
compared to the daily records, we could probe for much higher thresholds $q$
and still obtain good statistics. We find that the scaling function $f(x)$ is
consistent for all 31 intraday datasets in various time resolutions, and the
function is well approximated by the stretched exponential, $f(x)\sim e^{-a
x^\gamma}$, with $\gamma=0.38\pm 0.05$ and $a=3.9\pm 0.5$, which indicates the
existence of correlations. We analyze the conditional probability distribution
$P_q(\tau|\tau_0)$ for $\tau$ following a certain interval $\tau_0$, and find
$P_q(\tau|\tau_0)$ depends on $\tau_0$, which demonstrates memory in intraday
return intervals. Also, we find that the mean conditional interval
$<\tau|\tau_0>$ increases with $\tau_0$, consistent with the memory found for
$P_q(\tau|\tau_0)$. Moreover, we find that return interval records have long
term correlations with correlation exponents similar to that of volatility
records.
| no_new_dataset | 0.934574 |
astro-ph/0012539 | John Webb | J.K. Webb, M.T. Murphy, V.V. Flambaum, V.A. Dzuba, J.D. Barrow, C.W.
Churchill, J.X. Prochaska, A.M. Wolfe | Further Evidence for Cosmological Evolution of the Fine Structure
Constant | 5 pages, 1 figure. Published in Phys. Rev. Lett. Small changes to
discussion, added an acknowledgement and a reference | Phys.Rev.Lett.87:091301,2001 | 10.1103/PhysRevLett.87.091301 | null | astro-ph gr-qc hep-ph hep-th physics.atom-ph | null | We describe the results of a search for time variability of the fine
structure constant, alpha, using absorption systems in the spectra of distant
quasars. Three large optical datasets and two 21cm/mm absorption systems
provide four independent samples, spanning 23% to 87% of the age of the
universe. Each sample yields a smaller alpha in the past and the optical sample
shows a 4-sigma deviation: da/a = -0.72 +/- 0.18 x 10^{-5} over the redshift
range 0.5 < z < 3.5. We find no systematic effects which can explain our
results. The only potentially significant systematic effects push da/a towards
positive values, i.e. our results would become more significant were we to
correct for them.
| [
{
"version": "v1",
"created": "Fri, 29 Dec 2000 16:22:11 GMT"
},
{
"version": "v2",
"created": "Fri, 19 Jan 2001 02:17:52 GMT"
},
{
"version": "v3",
"created": "Tue, 4 Sep 2001 05:50:36 GMT"
}
] | 2008-11-26T00:00:00 | [
[
"Webb",
"J. K.",
""
],
[
"Murphy",
"M. T.",
""
],
[
"Flambaum",
"V. V.",
""
],
[
"Dzuba",
"V. A.",
""
],
[
"Barrow",
"J. D.",
""
],
[
"Churchill",
"C. W.",
""
],
[
"Prochaska",
"J. X.",
""
],
[
"Wolfe",
"A. M.",
""
]
] | TITLE: Further Evidence for Cosmological Evolution of the Fine Structure
Constant
ABSTRACT: We describe the results of a search for time variability of the fine
structure constant, alpha, using absorption systems in the spectra of distant
quasars. Three large optical datasets and two 21cm/mm absorption systems
provide four independent samples, spanning 23% to 87% of the age of the
universe. Each sample yields a smaller alpha in the past and the optical sample
shows a 4-sigma deviation: da/a = -0.72 +/- 0.18 x 10^{-5} over the redshift
range 0.5 < z < 3.5. We find no systematic effects which can explain our
results. The only potentially significant systematic effects push da/a towards
positive values, i.e. our results would become more significant were we to
correct for them.
| no_new_dataset | 0.948822 |
0811.2055 | Tamas Szalay | Tamas Szalay, Volker Springel, Gerard Lemson | GPU-Based Interactive Visualization of Billion Point Cosmological
Simulations | 2008 Microsoft eScience conference | null | null | null | cs.GR astro-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite the recent advances in graphics hardware capabilities, a brute force
approach is incapable of interactively displaying terabytes of data. We have
implemented a system that uses hierarchical level-of-detailing for the results
of cosmological simulations, in order to display visually accurate results
without loading in the full dataset (containing over 10 billion points). The
guiding principle of the program is that the user should not be able to
distinguish what they are seeing from a full rendering of the original data.
Furthermore, by using a tree-based system for levels of detail, the size of the
underlying data is limited only by the capacity of the IO system containing it.
| [
{
"version": "v1",
"created": "Thu, 13 Nov 2008 09:34:42 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Nov 2008 20:31:15 GMT"
}
] | 2008-11-18T00:00:00 | [
[
"Szalay",
"Tamas",
""
],
[
"Springel",
"Volker",
""
],
[
"Lemson",
"Gerard",
""
]
] | TITLE: GPU-Based Interactive Visualization of Billion Point Cosmological
Simulations
ABSTRACT: Despite the recent advances in graphics hardware capabilities, a brute force
approach is incapable of interactively displaying terabytes of data. We have
implemented a system that uses hierarchical level-of-detailing for the results
of cosmological simulations, in order to display visually accurate results
without loading in the full dataset (containing over 10 billion points). The
guiding principle of the program is that the user should not be able to
distinguish what they are seeing from a full rendering of the original data.
Furthermore, by using a tree-based system for levels of detail, the size of the
underlying data is limited only by the capacity of the IO system containing it.
| no_new_dataset | 0.939913 |
0810.1648 | Danny Bickson | Danny Bickson, Elad Yom-Tov and Danny Dolev | A Gaussian Belief Propagation Solver for Large Scale Support Vector
Machines | 12 pages, 1 figure, appeared in the 5th European Complex Systems
Conference, Jerusalem, Sept. 2008 | The 5th European Complex Systems Conference (ECCS 2008),
Jerusalem, Sept. 2008 | null | null | cs.LG cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Support vector machines (SVMs) are an extremely successful type of
classification and regression algorithms. Building an SVM entails solving a
constrained convex quadratic programming problem, which is quadratic in the
number of training samples. We introduce an efficient parallel implementation
of a support vector regression solver, based on the Gaussian Belief
Propagation algorithm (GaBP).
In this paper, we demonstrate that methods from the complex system domain
could be utilized for performing efficient distributed computation. We compare
the proposed algorithm to previously proposed distributed and single-node SVM
solvers. Our comparison shows that the proposed algorithm is just as accurate
as these solvers, while being significantly faster, especially for large
datasets. We demonstrate scalability of the proposed algorithm to up to 1,024
computing nodes and hundreds of thousands of data points using an IBM Blue Gene
supercomputer. As far as we know, our work is the largest parallel
implementation of belief propagation ever done, demonstrating the applicability
of this algorithm for large scale distributed computing systems.
| [
{
"version": "v1",
"created": "Thu, 9 Oct 2008 12:56:43 GMT"
}
] | 2008-11-15T00:00:00 | [
[
"Bickson",
"Danny",
""
],
[
"Yom-Tov",
"Elad",
""
],
[
"Dolev",
"Danny",
""
]
] | TITLE: A Gaussian Belief Propagation Solver for Large Scale Support Vector
Machines
ABSTRACT: Support vector machines (SVMs) are an extremely successful type of
classification and regression algorithms. Building an SVM entails solving a
constrained convex quadratic programming problem, which is quadratic in the
number of training samples. We introduce an efficient parallel implementation
of a support vector regression solver, based on the Gaussian Belief
Propagation algorithm (GaBP).
In this paper, we demonstrate that methods from the complex system domain
could be utilized for performing efficient distributed computation. We compare
the proposed algorithm to previously proposed distributed and single-node SVM
solvers. Our comparison shows that the proposed algorithm is just as accurate
as these solvers, while being significantly faster, especially for large
datasets. We demonstrate scalability of the proposed algorithm to up to 1,024
computing nodes and hundreds of thousands of data points using an IBM Blue Gene
supercomputer. As far as we know, our work is the largest parallel
implementation of belief propagation ever done, demonstrating the applicability
of this algorithm for large scale distributed computing systems.
| no_new_dataset | 0.949201 |
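The record above describes solving the SVR quadratic program with Gaussian Belief Propagation (GaBP). As a rough illustration (not the paper's parallel Blue Gene implementation), the sketch below applies scalar GaBP to a small symmetric, diagonally dominant system A x = b, the kind of inner linear solve such a kernel-based solver relies on; the test matrix, iteration count and function name gabp_solve are illustrative assumptions.

import numpy as np

def gabp_solve(A, b, iters=50):
    # Scalar Gaussian Belief Propagation for A x = b, with A symmetric and
    # diagonally dominant (a sufficient condition for convergence).
    n = len(b)
    P = np.zeros((n, n))    # precision message P[i, j]: from node i to node j
    mu = np.zeros((n, n))   # mean message mu[i, j]
    nbrs = [[k for k in range(n) if k != i and A[i, k] != 0] for i in range(n)]
    for _ in range(iters):
        P_new, mu_new = P.copy(), mu.copy()
        for i in range(n):
            for j in nbrs[i]:
                others = [k for k in nbrs[i] if k != j]
                Pi = A[i, i] + sum(P[k, i] for k in others)
                mi = (b[i] + sum(P[k, i] * mu[k, i] for k in others)) / Pi
                P_new[i, j] = -A[i, j] ** 2 / Pi
                mu_new[i, j] = Pi * mi / A[i, j]
        P, mu = P_new, mu_new
    # marginal means are the solution estimate
    x = np.empty(n)
    for i in range(n):
        Pi = A[i, i] + sum(P[k, i] for k in nbrs[i])
        x[i] = (b[i] + sum(P[k, i] * mu[k, i] for k in nbrs[i])) / Pi
    return x

A = np.array([[4.0, 1.0, 0.5], [1.0, 3.0, 1.0], [0.5, 1.0, 5.0]])
b = np.array([1.0, 2.0, 3.0])
print(gabp_solve(A, b), np.linalg.solve(A, b))   # the two should agree closely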
0811.1711 | Tshilidzi Marwala | Sarah Wright and Tshilidzi Marwala | Artificial Intelligence Techniques for Steam Generator Modelling | 23 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper investigates the use of different Artificial Intelligence methods
to predict the values of several continuous variables from a Steam Generator.
The objective was to determine how the different artificial intelligence
methods performed in making predictions on the given dataset. The artificial
intelligence methods evaluated were Neural Networks, Support Vector Machines,
and Adaptive Neuro-Fuzzy Inference Systems. The types of neural networks
investigated were Multi-Layer Perceptrons, and Radial Basis Function. Bayesian
and committee techniques were applied to these neural networks. Each of the AI
methods considered was simulated in Matlab. The results of the simulations
showed that all the AI methods were capable of predicting the Steam Generator
data reasonably accurately. However, the Adaptive Neuro-Fuzzy Inference system
out performed the other methods in terms of accuracy and ease of
implementation, while still achieving a fast execution time as well as a
reasonable training time.
| [
{
"version": "v1",
"created": "Tue, 11 Nov 2008 14:09:36 GMT"
}
] | 2008-11-12T00:00:00 | [
[
"Wright",
"Sarah",
""
],
[
"Marwala",
"Tshilidzi",
""
]
] | TITLE: Artificial Intelligence Techniques for Steam Generator Modelling
ABSTRACT: This paper investigates the use of different Artificial Intelligence methods
to predict the values of several continuous variables from a Steam Generator.
The objective was to determine how the different artificial intelligence
methods performed in making predictions on the given dataset. The artificial
intelligence methods evaluated were Neural Networks, Support Vector Machines,
and Adaptive Neuro-Fuzzy Inference Systems. The types of neural networks
investigated were Multi-Layer Perceptrons, and Radial Basis Function. Bayesian
and committee techniques were applied to these neural networks. Each of the AI
methods considered was simulated in Matlab. The results of the simulations
showed that all the AI methods were capable of predicting the Steam Generator
data reasonably accurately. However, the Adaptive Neuro-Fuzzy Inference system
outperformed the other methods in terms of accuracy and ease of
implementation, while still achieving a fast execution time as well as a
reasonable training time.
| no_new_dataset | 0.953232 |
0810.5582 | Shubha Nabar | Rajeev Motwani, Shubha U. Nabar | Anonymizing Unstructured Data | 9 pages, 1 figure | null | null | null | cs.DB cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we consider the problem of anonymizing datasets in which each
individual is associated with a set of items that constitute private
information about the individual. Illustrative datasets include market-basket
datasets and search engine query logs. We formalize the notion of k-anonymity
for set-valued data as a variant of the k-anonymity model for traditional
relational datasets. We define an optimization problem that arises from this
definition of anonymity and provide O(klogk) and O(1)-approximation algorithms
for the same. We demonstrate applicability of our algorithms to the America
Online query log dataset.
| [
{
"version": "v1",
"created": "Fri, 31 Oct 2008 19:25:02 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Nov 2008 23:33:20 GMT"
}
] | 2008-11-04T00:00:00 | [
[
"Motwani",
"Rajeev",
""
],
[
"Nabar",
"Shubha U.",
""
]
] | TITLE: Anonymizing Unstructured Data
ABSTRACT: In this paper we consider the problem of anonymizing datasets in which each
individual is associated with a set of items that constitute private
information about the individual. Illustrative datasets include market-basket
datasets and search engine query logs. We formalize the notion of k-anonymity
for set-valued data as a variant of the k-anonymity model for traditional
relational datasets. We define an optimization problem that arises from this
definition of anonymity and provide O(klogk) and O(1)-approximation algorithms
for the same. We demonstrate applicability of our algorithms to the America
Online query log dataset.
| no_new_dataset | 0.942348 |
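As a loose illustration of the set-valued k-anonymity idea described above (a naive greedy baseline, not the paper's O(k log k)- or O(1)-approximation algorithms), the sketch below partitions market-basket style records into groups of at least k and publishes the union of each group; the helper name and the toy records are hypothetical.

def greedy_k_anonymize(records, k):
    # Naive greedy baseline: group set-valued records so that every published
    # group has size >= k, then publish the union of each group's item sets,
    # making records inside a group indistinguishable.
    remaining = list(range(len(records)))
    groups = []
    while len(remaining) >= k:
        seed = remaining.pop(0)
        # take the k-1 records whose item sets enlarge the published union least
        chosen = sorted(remaining,
                        key=lambda j: len(records[seed] | records[j]))[:k - 1]
        for j in chosen:
            remaining.remove(j)
        groups.append([seed] + chosen)
    if remaining and groups:                 # fold leftovers into the last group
        groups[-1].extend(remaining)
    return [(i, set().union(*(records[j] for j in g)))
            for g in groups for i in g]

records = [{"milk", "bread"}, {"milk", "beer"}, {"bread", "eggs"},
           {"beer", "wine"}, {"milk", "bread", "eggs"}, {"wine"}]
for rid, published in greedy_k_anonymize(records, k=2):
    print(rid, sorted(published))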
0810.5758 | Renat Nuriyev | Renat Nuriyev | Non procedural language for parallel programs | 20 pages, will be printed in "Programming" magazine of RAS | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Probably building non procedural languages is the most prospective way for
parallel programming, precisely because non-procedural means no fixed order of
execution. The article consists of three parts. In the first part we consider
formal systems for defining named datasets and study the expressive power of
different subclasses. In the second part we consider the complexity of
algorithms that build sets from these definitions. In the third part we
consider the fullness and flexibility of the class of program-based dataset
definitions.
| [
{
"version": "v1",
"created": "Fri, 31 Oct 2008 18:44:38 GMT"
}
] | 2008-11-03T00:00:00 | [
[
"Nuriyev",
"Renat",
""
]
] | TITLE: Non procedural language for parallel programs
ABSTRACT: Probably building non procedural languages is the most prospective way for
parallel programming, precisely because non-procedural means no fixed order of
execution. The article consists of three parts. In the first part we consider
formal systems for defining named datasets and study the expressive power of
different subclasses. In the second part we consider the complexity of
algorithms that build sets from these definitions. In the third part we
consider the fullness and flexibility of the class of program-based dataset
definitions.
| no_new_dataset | 0.941061 |
0810.5407 | Aleksandar Stojmirovi\'c | Aleksandar Stojmirovic | Quasi-metrics, Similarities and Searches: aspects of geometry of protein
datasets | 299 pages, 44 figures, 10 tables, 9 algorithms. PhD thesis in
mathematics defended in May 2005 at the Victoria University of Wellington,
Wellington, New Zealand (supervisors: Prof. Vladimir Pestov and Dr. Bill
Jordan) | null | null | null | cs.IR math.GN q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A quasi-metric is a distance function which satisfies the triangle inequality
but is not symmetric: it can be thought of as an asymmetric metric. The central
result of this thesis, developed in Chapter 3, is that a natural correspondence
exists between similarity measures between biological (nucleotide or protein)
sequences and quasi-metrics.
Chapter 2 presents basic concepts of the theory of quasi-metric spaces and
introduces new examples of them: the universal countable rational
quasi-metric space and its bicompletion, the universal bicomplete separable
quasi-metric space. Chapter 4 is dedicated to development of a notion of the
quasi-metric space with Borel probability measure, or pq-space. The main result
of this chapter indicates that `a high dimensional quasi-metric space is close
to being a metric space'.
Chapter 5 investigates the geometric aspects of the theory of database
similarity search in the context of quasi-metrics. The results about
$pq$-spaces are used to produce novel theoretical bounds on performance of
indexing schemes.
Finally, the thesis presents some biological applications. Chapter 6
introduces FSIndex, an indexing scheme that significantly accelerates
similarity searches of short protein fragment datasets. Chapter 7 presents the
prototype of the system for discovery of short functional protein motifs called
PFMFind, which relies on FSIndex for similarity searches.
| [
{
"version": "v1",
"created": "Thu, 30 Oct 2008 03:14:17 GMT"
}
] | 2008-10-31T00:00:00 | [
[
"Stojmirovic",
"Aleksandar",
""
]
] | TITLE: Quasi-metrics, Similarities and Searches: aspects of geometry of protein
datasets
ABSTRACT: A quasi-metric is a distance function which satisfies the triangle inequality
but is not symmetric: it can be thought of as an asymmetric metric. The central
result of this thesis, developed in Chapter 3, is that a natural correspondence
exists between similarity measures between biological (nucleotide or protein)
sequences and quasi-metrics.
Chapter 2 presents basic concepts of the theory of quasi-metric spaces and
introduces new examples of them: the universal countable rational
quasi-metric space and its bicompletion, the universal bicomplete separable
quasi-metric space. Chapter 4 is dedicated to development of a notion of the
quasi-metric space with Borel probability measure, or pq-space. The main result
of this chapter indicates that `a high dimensional quasi-metric space is close
to being a metric space'.
Chapter 5 investigates the geometric aspects of the theory of database
similarity search in the context of quasi-metrics. The results about
$pq$-spaces are used to produce novel theoretical bounds on performance of
indexing schemes.
Finally, the thesis presents some biological applications. Chapter 6
introduces FSIndex, an indexing scheme that significantly accelerates
similarity searches of short protein fragment datasets. Chapter 7 presents the
prototype of the system for discovery of short functional protein motifs called
PFMFind, which relies on FSIndex for similarity searches.
| no_new_dataset | 0.942718 |
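To make the asymmetry point above concrete, the sketch below builds a candidate quasi-metric from a crude similarity score between equal-length strings via d(x, y) = s(x, x) - s(x, y) (one common construction, used here as an assumption rather than the thesis's exact definition) and numerically checks asymmetry and the triangle inequality on a toy set; the self-score table is made up.

from itertools import product

# Hypothetical per-letter self-scores standing in for a substitution matrix.
SELF = {"A": 4, "C": 9, "G": 6, "W": 11}

def sim(x, y):
    # crude ungapped similarity between equal-length strings
    return sum(SELF[a] if a == b else -1 for a, b in zip(x, y))

def qdist(x, y):
    # candidate quasi-metric: similarity "lost" when going from x to y
    return sim(x, x) - sim(x, y)

seqs = ["AACW", "AACG", "WWCG", "GGGG"]
asym = sum(1 for x, y in product(seqs, seqs) if qdist(x, y) != qdist(y, x))
tri = all(qdist(x, z) <= qdist(x, y) + qdist(y, z)
          for x, y, z in product(seqs, repeat=3))
print("asymmetric ordered pairs:", asym, "| triangle inequality holds:", tri)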
0810.5484 | Qiang Li | Qiang Li, Yan He, Jing-ping Jiang | A Novel Clustering Algorithm Based on a Modified Model of Random Walk | 21 pages, 13 figures | null | null | null | cs.LG cs.AI cs.MA | http://creativecommons.org/licenses/by-nc-sa/3.0/ | We introduce a modified model of random walk, and then develop two novel
clustering algorithms based on it. In the algorithms, each data point in a
dataset is considered as a particle which can move at random in space according
to the preset rules in the modified model. Further, this data point may be also
viewed as a local control subsystem, in which the controller adjusts its
transition probability vector in terms of the feedbacks of all data points, and
then its transition direction is identified by an event-generating function.
Finally, the positions of all data points are updated. As they move in space,
data points collect gradually and some separating parts emerge among them
automatically. As a consequence, data points that belong to the same class are
located at a same position, whereas those that belong to different classes are
away from one another. Moreover, the experimental results have demonstrated
that data points in the test datasets are clustered reasonably and efficiently,
and the comparison with other algorithms also provides an indication of the
effectiveness of the proposed algorithms.
| [
{
"version": "v1",
"created": "Thu, 30 Oct 2008 13:26:31 GMT"
}
] | 2008-10-31T00:00:00 | [
[
"Li",
"Qiang",
""
],
[
"He",
"Yan",
""
],
[
"Jiang",
"Jing-ping",
""
]
] | TITLE: A Novel Clustering Algorithm Based on a Modified Model of Random Walk
ABSTRACT: We introduce a modified model of random walk, and then develop two novel
clustering algorithms based on it. In the algorithms, each data point in a
dataset is considered as a particle which can move at random in space according
to the preset rules in the modified model. Further, this data point may be also
viewed as a local control subsystem, in which the controller adjusts its
transition probability vector in terms of the feedbacks of all data points, and
then its transition direction is identified by an event-generating function.
Finally, the positions of all data points are updated. As they move in space,
data points collect gradually and some separating parts emerge among them
automatically. As a consequence, data points that belong to the same class are
located at a same position, whereas those that belong to different classes are
away from one another. Moreover, the experimental results have demonstrated
that data points in the test datasets are clustered reasonably and efficiently,
and the comparison with other algorithms also provides an indication of the
effectiveness of the proposed algorithms.
| no_new_dataset | 0.954605 |
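The following sketch is only loosely in the spirit of the abstract above and is not the authors' transition-probability model: points repeatedly move toward the mean of their neighbours with a little random jitter, so points of the same group collapse onto a common location and clusters can be read off afterwards; all parameter values are arbitrary.

import numpy as np

rng = np.random.default_rng(0)

def aggregate_cluster(points, radius=1.0, step=0.3, iters=60, noise=0.02):
    # Toy aggregation clustering: each point drifts toward the mean of the
    # points within `radius`, plus small random jitter, until groups collapse.
    X = points.copy()
    for _ in range(iters):
        for i in range(len(X)):
            d = np.linalg.norm(X - X[i], axis=1)
            nbrs = X[d < radius]
            X[i] += step * (nbrs.mean(axis=0) - X[i]) + noise * rng.normal(size=2)
    # points that ended up (almost) on top of each other share a label
    labels, centers = [], []
    for x in X:
        for c, ctr in enumerate(centers):
            if np.linalg.norm(x - ctr) < radius / 2:
                labels.append(c)
                break
        else:
            centers.append(x)
            labels.append(len(centers) - 1)
    return np.array(labels)

pts = np.vstack([rng.normal([0, 0], 0.3, (20, 2)),
                 rng.normal([5, 5], 0.3, (20, 2))])
print(aggregate_cluster(pts))   # roughly two blocks of matching labels expected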
0801.3654 | Mikhail Zaslavskiy | Mikhail Zaslavskiy, Francis Bach, and Jean-Philippe Vert | A path following algorithm for the graph matching problem | 23 pages, 13 figures,typo correction, new results in sections 4,5,6 | null | null | null | cs.CV cs.DM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a convex-concave programming approach for the labeled weighted
graph matching problem. The convex-concave programming formulation is obtained
by rewriting the weighted graph matching problem as a least-square problem on
the set of permutation matrices and relaxing it to two different optimization
problems: a quadratic convex and a quadratic concave optimization problem on
the set of doubly stochastic matrices. The concave relaxation has the same
global minimum as the initial graph matching problem, but the search for its
global minimum is also a hard combinatorial problem. We therefore construct an
approximation of the concave problem solution by following a solution path of a
convex-concave problem obtained by linear interpolation of the convex and
concave formulations, starting from the convex relaxation. This method allows
to easily integrate the information on graph label similarities into the
optimization problem, and therefore to perform labeled weighted graph matching.
The algorithm is compared with some of the best performing graph matching
methods on four datasets: simulated graphs, QAPLib, retina vessel images and
handwritten Chinese characters. In all cases, the results are competitive with
the state-of-the-art.
| [
{
"version": "v1",
"created": "Wed, 23 Jan 2008 20:20:32 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Oct 2008 14:16:01 GMT"
}
] | 2008-10-27T00:00:00 | [
[
"Zaslavskiy",
"Mikhail",
""
],
[
"Bach",
"Francis",
""
],
[
"Vert",
"Jean-Philippe",
""
]
] | TITLE: A path following algorithm for the graph matching problem
ABSTRACT: We propose a convex-concave programming approach for the labeled weighted
graph matching problem. The convex-concave programming formulation is obtained
by rewriting the weighted graph matching problem as a least-square problem on
the set of permutation matrices and relaxing it to two different optimization
problems: a quadratic convex and a quadratic concave optimization problem on
the set of doubly stochastic matrices. The concave relaxation has the same
global minimum as the initial graph matching problem, but the search for its
global minimum is also a hard combinatorial problem. We therefore construct an
approximation of the concave problem solution by following a solution path of a
convex-concave problem obtained by linear interpolation of the convex and
concave formulations, starting from the convex relaxation. This method allows
to easily integrate the information on graph label similarities into the
optimization problem, and therefore to perform labeled weighted graph matching.
The algorithm is compared with some of the best performing graph matching
methods on four datasets: simulated graphs, QAPLib, retina vessel images and
handwritten Chinese characters. In all cases, the results are competitive with
the state-of-the-art.
| no_new_dataset | 0.94545 |
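A reduced sketch of the relaxation machinery described above: Frank-Wolfe minimisation of the convex relaxation ||A P - P B||_F^2 over doubly stochastic matrices followed by Hungarian rounding. It covers only the convex end-point of the paper's convex-concave path (no concave relaxation and no path following), and the test problem is a synthetic shuffled weighted graph.

import numpy as np
from scipy.optimize import linear_sum_assignment

def match_convex(A, B, iters=300):
    # Frank-Wolfe on ||A P - P B||_F^2 over the Birkhoff polytope, then
    # rounding the fractional solution to a permutation matrix.
    n = len(A)
    P = np.full((n, n), 1.0 / n)              # barycenter of the polytope
    for t in range(iters):
        R = A @ P - P @ B
        grad = 2 * (A.T @ R - R @ B.T)
        r, c = linear_sum_assignment(grad)    # linear subproblem: best permutation
        Q = np.zeros((n, n)); Q[r, c] = 1.0
        P += 2.0 / (t + 2) * (Q - P)          # standard Frank-Wolfe step size
    r, c = linear_sum_assignment(-P)          # round to the nearest permutation
    perm = np.zeros((n, n)); perm[r, c] = 1.0
    return perm

rng = np.random.default_rng(1)
A = rng.random((6, 6)); A = (A + A.T) / 2     # random weighted graph
sigma = rng.permutation(6)
B = A[np.ix_(sigma, sigma)]                   # the same graph with nodes shuffled
P = match_convex(A, B)
print("max residual:", float(np.abs(A - P @ B @ P.T).max()))  # ~0 if recovered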
0810.2764 | Nir Ailon | Nir Ailon | A Simple Linear Ranking Algorithm Using Query Dependent Intercept
Variables | 5 pages | null | null | null | cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The LETOR website contains three information retrieval datasets used as a
benchmark for testing machine learning ideas for ranking. Algorithms
participating in the challenge are required to assign score values to search
results for a collection of queries, and are measured using standard IR ranking
measures (NDCG, precision, MAP) that depend only the relative score-induced
order of the results. Similarly to many of the ideas proposed in the
participating algorithms, we train a linear classifier. In contrast with other
participating algorithms, we define an additional free variable (intercept, or
benchmark) for each query. This allows expressing the fact that results for
different queries are incomparable for the purpose of determining relevance.
The cost of this idea is the addition of relatively few nuisance parameters.
Our approach is simple, and we used a standard logistic regression library to
test it. The results beat the reported participating algorithms. Hence, it
seems promising to combine our approach with other more complex ideas.
| [
{
"version": "v1",
"created": "Wed, 15 Oct 2008 19:03:10 GMT"
}
] | 2008-10-16T00:00:00 | [
[
"Ailon",
"Nir",
""
]
] | TITLE: A Simple Linear Ranking Algorithm Using Query Dependent Intercept
Variables
ABSTRACT: The LETOR website contains three information retrieval datasets used as a
benchmark for testing machine learning ideas for ranking. Algorithms
participating in the challenge are required to assign score values to search
results for a collection of queries, and are measured using standard IR ranking
measures (NDCG, precision, MAP) that depend only on the relative score-induced
order of the results. Similarly to many of the ideas proposed in the
participating algorithms, we train a linear classifier. In contrast with other
participating algorithms, we define an additional free variable (intercept, or
benchmark) for each query. This allows expressing the fact that results for
different queries are incomparable for the purpose of determining relevance.
The cost of this idea is the addition of relatively few nuisance parameters.
Our approach is simple, and we used a standard logistic regression library to
test it. The results beat the reported participating algorithms. Hence, it
seems promising to combine our approach with other more complex ideas.
| no_new_dataset | 0.945399 |
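A small sketch of the query-dependent-intercept idea above: append one-hot query indicator columns so that the logistic model learns a free intercept per query, then rank within each query using only the shared feature weights (the per-query intercepts are constant inside a query and cannot affect its ordering). The synthetic data, scikit-learn settings and variable names are assumptions, not the authors' LETOR setup.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_queries, docs_per_q, n_feats = 20, 10, 5
q_id = np.repeat(np.arange(n_queries), docs_per_q)
X = rng.normal(size=(n_queries * docs_per_q, n_feats))
w_true = rng.normal(size=n_feats)
# each query gets its own difficulty offset, irrelevant for within-query ranking
offset = rng.normal(size=n_queries)[q_id]
y = (X @ w_true + offset + 0.3 * rng.normal(size=len(q_id)) > 0).astype(int)

# one free intercept per query, appended as one-hot nuisance columns
Q = np.eye(n_queries)[q_id]
X_aug = np.hstack([X, Q])
clf = LogisticRegression(fit_intercept=False, C=1.0, max_iter=1000).fit(X_aug, y)
w = clf.coef_.ravel()[:n_feats]               # only the shared part is used to rank
scores = X @ w                                # intercepts cancel within a query
print("rank-score / true-score correlation:",
      round(float(np.corrcoef(scores, X @ w_true)[0, 1]), 3))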
0810.1355 | Michael Mahoney | Jure Leskovec, Kevin J. Lang, Anirban Dasgupta, and Michael W. Mahoney | Community Structure in Large Networks: Natural Cluster Sizes and the
Absence of Large Well-Defined Clusters | 66 pages, a much expanded version of our WWW 2008 paper | null | null | null | cs.DS physics.data-an physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A large body of work has been devoted to defining and identifying clusters or
communities in social and information networks. We explore from a novel
perspective several questions related to identifying meaningful communities in
large social and information networks, and we come to several striking
conclusions. We employ approximation algorithms for the graph partitioning
problem to characterize as a function of size the statistical and structural
properties of partitions of graphs that could plausibly be interpreted as
communities. In particular, we define the network community profile plot, which
characterizes the "best" possible community--according to the conductance
measure--over a wide range of size scales. We study over 100 large real-world
social and information networks. Our results suggest a significantly more
refined picture of community structure in large networks than has been
appreciated previously. In particular, we observe tight communities that are
barely connected to the rest of the network at very small size scales; and
communities of larger size scales gradually "blend into" the expander-like core
of the network and thus become less "community-like." This behavior is not
explained, even at a qualitative level, by any of the commonly-used network
generation models. Moreover, it is exactly the opposite of what one would
expect based on intuition from expander graphs, low-dimensional or
manifold-like graphs, and from small social networks that have served as
testbeds of community detection algorithms. We have found that a generative
graph model, in which new edges are added via an iterative "forest fire"
burning process, is able to produce graphs exhibiting a network community
profile plot similar to what we observe in our network datasets.
| [
{
"version": "v1",
"created": "Wed, 8 Oct 2008 05:42:43 GMT"
}
] | 2008-10-13T00:00:00 | [
[
"Leskovec",
"Jure",
""
],
[
"Lang",
"Kevin J.",
""
],
[
"Dasgupta",
"Anirban",
""
],
[
"Mahoney",
"Michael W.",
""
]
] | TITLE: Community Structure in Large Networks: Natural Cluster Sizes and the
Absence of Large Well-Defined Clusters
ABSTRACT: A large body of work has been devoted to defining and identifying clusters or
communities in social and information networks. We explore from a novel
perspective several questions related to identifying meaningful communities in
large social and information networks, and we come to several striking
conclusions. We employ approximation algorithms for the graph partitioning
problem to characterize as a function of size the statistical and structural
properties of partitions of graphs that could plausibly be interpreted as
communities. In particular, we define the network community profile plot, which
characterizes the "best" possible community--according to the conductance
measure--over a wide range of size scales. We study over 100 large real-world
social and information networks. Our results suggest a significantly more
refined picture of community structure in large networks than has been
appreciated previously. In particular, we observe tight communities that are
barely connected to the rest of the network at very small size scales; and
communities of larger size scales gradually "blend into" the expander-like core
of the network and thus become less "community-like." This behavior is not
explained, even at a qualitative level, by any of the commonly-used network
generation models. Moreover, it is exactly the opposite of what one would
expect based on intuition from expander graphs, low-dimensional or
manifold-like graphs, and from small social networks that have served as
testbeds of community detection algorithms. We have found that a generative
graph model, in which new edges are added via an iterative "forest fire"
burning process, is able to produce graphs exhibiting a network community
profile plot similar to what we observe in our network datasets.
| no_new_dataset | 0.947914 |
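A toy version of the network community profile idea described above: for each candidate community size, record the best conductance found, here by the weakest possible heuristic (growing BFS balls from every node) rather than the spectral and flow-based approximation algorithms the paper uses; the caveman test graph is an assumption.

import networkx as nx

def conductance(G, S):
    # phi(S) = cut(S, rest) / min(vol(S), vol(rest))
    S = set(S)
    cut = sum(1 for u, v in G.edges() if (u in S) != (v in S))
    vol_S = sum(d for _, d in G.degree(S))
    vol_rest = 2 * G.number_of_edges() - vol_S
    return cut / max(1, min(vol_S, vol_rest))

def crude_ncp(G, max_size=40):
    # Toy network community profile: best conductance per community size,
    # searched only over BFS balls grown from every seed node.
    best = {}
    for seed in G.nodes():
        order = [seed] + [v for u, v in nx.bfs_edges(G, seed)]
        for k in range(2, min(max_size, len(order)) + 1):
            phi = conductance(G, order[:k])
            if phi < best.get(k, float("inf")):
                best[k] = phi
    return best

G = nx.connected_caveman_graph(8, 6)          # 8 cliques of 6, loosely connected
for k, phi in sorted(crude_ncp(G, 12).items()):
    print(k, round(phi, 3))                   # the dip at k = 6 marks one "cave"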
0810.1426 | Matthew Wallace | Matthew L. Wallace, Vincent Larivi\`ere, Yves Gingras | Modeling a Century of Citation Distributions | 20 pages, 5 figures | null | null | null | physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Changes in citation distributions over 100 years can reveal much about the
evolution of the scientific communities or disciplines. The prevalence of
uncited papers or of highly-cited papers, with respect to the bulk of
publications, provides important clues as to the dynamics of scientific
research. Using 25 million papers and 600 million references from the Web of
Science over the 1900-2006 period, this paper proposes a simple model based on
a random selection process to explain the "uncitedness" phenomenon and its
decline in recent years. We show that the proportion of uncited papers is a
function of 1) the number of articles published in a given year (the competing
papers) and 2) the number of articles subsequently published (the citing
papers) and the number of references they contain. Using uncitedness as a
departure point, we demonstrate the utility of the stretched-exponential
function and a form of the Tsallis function to fit complete citation
distributions over the 20th century. As opposed to simple power-law fits, for
instance, both these approaches are shown to be empirically well-grounded and
robust enough to better understand citation dynamics at the aggregate level.
Based on an expansion of these models, on our new understanding of uncitedness
and on our large dataset, we are able to provide clear quantitative evidence and
provisional explanations for an important shift in citation practices around
1960, unmatched in the 20th century. We also propose a revision of the
"citation classic" category as a set of articles which is clearly
distinguishable from the rest of the field.
| [
{
"version": "v1",
"created": "Wed, 8 Oct 2008 13:14:22 GMT"
}
] | 2008-10-09T00:00:00 | [
[
"Wallace",
"Matthew L.",
""
],
[
"Larivière",
"Vincent",
""
],
[
"Gingras",
"Yves",
""
]
] | TITLE: Modeling a Century of Citation Distributions
ABSTRACT: Changes in citation distributions over 100 years can reveal much about the
evolution of the scientific communities or disciplines. The prevalence of
uncited papers or of highly-cited papers, with respect to the bulk of
publications, provides important clues as to the dynamics of scientific
research. Using 25 million papers and 600 million references from the Web of
Science over the 1900-2006 period, this paper proposes a simple model based on
a random selection process to explain the "uncitedness" phenomenon and its
decline in recent years. We show that the proportion of uncited papers is a
function of 1) the number of articles published in a given year (the competing
papers) and 2) the number of articles subsequently published (the citing
papers) and the number of references they contain. Using uncitedness as a
departure point, we demonstrate the utility of the stretched-exponential
function and a form of the Tsallis function to fit complete citation
distributions over the 20th century. As opposed to simple power-law fits, for
instance, both these approaches are shown to be empirically well-grounded and
robust enough to better understand citation dynamics at the aggregate level.
Based on an expansion of these models, on our new understanding of uncitedness
and on our large dataset, we are able to provide clear quantitative evidence and
provisional explanations for an important shift in citation practices around
1960, unmatched in the 20th century. We also propose a revision of the
"citation classic" category as a set of articles which is clearly
distinguishable from the rest of the field.
| no_new_dataset | 0.699614 |
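To illustrate the curve-fitting side of the abstract above, the sketch below fits a stretched exponential a*exp(-(c/c0)^beta) to the citation distribution of a synthetic sample (a stand-in for a single Web of Science year, not the paper's data) and reports the uncited fraction separately.

import numpy as np
from scipy.optimize import curve_fit

# synthetic "citations per paper" sample standing in for a real publication year
rng = np.random.default_rng(2)
cites = rng.negative_binomial(1, 0.08, size=50_000)
print("uncited fraction in the sample:", float((cites == 0).mean()))

# empirical distribution P(c) for c >= 1; uncited papers are treated separately
values, counts = np.unique(cites[cites > 0], return_counts=True)
p = counts / counts.sum()

def stretched_exp(c, a, c0, beta):
    return a * np.exp(-(c / c0) ** beta)

params, _ = curve_fit(stretched_exp, values, p, p0=(0.1, 10.0, 1.0),
                      bounds=([0.0, 0.1, 0.1], [1.0, 100.0, 3.0]))
a, c0, beta = params
print(f"fitted a={a:.3f}, c0={c0:.2f}, beta={beta:.2f}")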
0809.3618 | Julian McAuley | Julian J. McAuley, Tiberio S. Caetano, Alexander J. Smola | Robust Near-Isometric Matching via Structured Learning of Graphical
Models | 11 pages, 9 figures | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Models for near-rigid shape matching are typically based on distance-related
features, in order to infer matches that are consistent with the isometric
assumption. However, real shapes from image datasets, even when expected to be
related by "almost isometric" transformations, are actually subject not only to
noise but also, to some limited degree, to variations in appearance and scale.
In this paper, we introduce a graphical model that parameterises appearance,
distance, and angle features and we learn all of the involved parameters via
structured prediction. The outcome is a model for near-rigid shape matching
which is robust in the sense that it is able to capture the possibly limited
but still important scale and appearance variations. Our experimental results
reveal substantial improvements upon recent successful models, while
maintaining similar running times.
| [
{
"version": "v1",
"created": "Sun, 21 Sep 2008 23:23:26 GMT"
}
] | 2008-09-23T00:00:00 | [
[
"McAuley",
"Julian J.",
""
],
[
"Caetano",
"Tiberio S.",
""
],
[
"Smola",
"Alexander J.",
""
]
] | TITLE: Robust Near-Isometric Matching via Structured Learning of Graphical
Models
ABSTRACT: Models for near-rigid shape matching are typically based on distance-related
features, in order to infer matches that are consistent with the isometric
assumption. However, real shapes from image datasets, even when expected to be
related by "almost isometric" transformations, are actually subject not only to
noise but also, to some limited degree, to variations in appearance and scale.
In this paper, we introduce a graphical model that parameterises appearance,
distance, and angle features and we learn all of the involved parameters via
structured prediction. The outcome is a model for near-rigid shape matching
which is robust in the sense that it is able to capture the possibly limited
but still important scale and appearance variations. Our experimental results
reveal substantial improvements upon recent successful models, while
maintaining similar running times.
| no_new_dataset | 0.95418 |
0809.3415 | Cl\'emence Magnien | Frederic Aidouni, Matthieu Latapy and Clemence Magnien | Ten weeks in the life of an eDonkey server | null | null | null | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a capture of the queries managed by an eDonkey server
during almost 10 weeks, leading to the observation of almost 9 billion messages
involving almost 90 million users and more than 275 million distinct files.
Acquisition and management of such data raises several challenges, which we
discuss as well as the solutions we developed. We obtain a very rich dataset,
orders of magnitude larger than previously available ones, which we provide for
public use. We finally present basic analysis of the obtained data, which
already gives evidence of non-trivial features.
| [
{
"version": "v1",
"created": "Fri, 19 Sep 2008 16:45:26 GMT"
}
] | 2008-09-22T00:00:00 | [
[
"Aidouni",
"Frederic",
""
],
[
"Latapy",
"Matthieu",
""
],
[
"Magnien",
"Clemence",
""
]
] | TITLE: Ten weeks in the life of an eDonkey server
ABSTRACT: This paper presents a capture of the queries managed by an eDonkey server
during almost 10 weeks, leading to the observation of almost 9 billion messages
involving almost 90 million users and more than 275 million distinct files.
Acquisition and management of such data raises several challenges, which we
discuss as well as the solutions we developed. We obtain a very rich dataset,
orders of magnitude larger than previously available ones, which we provide for
public use. We finally present basic analysis of the obtained data, which
already gives evidence of non-trivial features.
| new_dataset | 0.628874 |
0809.2085 | Laurent Jacob | Laurent Jacob, Francis Bach (INRIA Rocquencourt), Jean-Philippe Vert | Clustered Multi-Task Learning: A Convex Formulation | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In multi-task learning several related tasks are considered simultaneously,
with the hope that by an appropriate sharing of information across tasks, each
task may benefit from the others. In the context of learning linear functions
for supervised classification or regression, this can be achieved by including
a priori information about the weight vectors associated with the tasks, and
how they are expected to be related to each other. In this paper, we assume
that tasks are clustered into groups, which are unknown beforehand, and that
tasks within a group have similar weight vectors. We design a new spectral norm
that encodes this a priori assumption, without the prior knowledge of the
partition of tasks into groups, resulting in a new convex optimization
formulation for multi-task learning. We show in simulations on synthetic
examples and on the IEDB MHC-I binding dataset, that our approach outperforms
well-known convex methods for multi-task learning, as well as related non
convex methods dedicated to the same problem.
| [
{
"version": "v1",
"created": "Thu, 11 Sep 2008 19:01:39 GMT"
}
] | 2008-09-12T00:00:00 | [
[
"Jacob",
"Laurent",
"",
"INRIA Rocquencourt"
],
[
"Bach",
"Francis",
"",
"INRIA Rocquencourt"
],
[
"Vert",
"Jean-Philippe",
""
]
] | TITLE: Clustered Multi-Task Learning: A Convex Formulation
ABSTRACT: In multi-task learning several related tasks are considered simultaneously,
with the hope that by an appropriate sharing of information across tasks, each
task may benefit from the others. In the context of learning linear functions
for supervised classification or regression, this can be achieved by including
a priori information about the weight vectors associated with the tasks, and
how they are expected to be related to each other. In this paper, we assume
that tasks are clustered into groups, which are unknown beforehand, and that
tasks within a group have similar weight vectors. We design a new spectral norm
that encodes this a priori assumption, without the prior knowledge of the
partition of tasks into groups, resulting in a new convex optimization
formulation for multi-task learning. We show in simulations on synthetic
examples and on the IEDB MHC-I binding dataset, that our approach outperforms
well-known convex methods for multi-task learning, as well as related non
convex methods dedicated to the same problem.
| no_new_dataset | 0.942876 |
0809.1493 | Francis Bach | Francis Bach (INRIA Rocquencourt) | Exploring Large Feature Spaces with Hierarchical Multiple Kernel
Learning | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For supervised and unsupervised learning, positive definite kernels allow to
use large and potentially infinite dimensional feature spaces with a
computational cost that only depends on the number of observations. This is
usually done through the penalization of predictor functions by Euclidean or
Hilbertian norms. In this paper, we explore penalizing by sparsity-inducing
norms such as the l1-norm or the block l1-norm. We assume that the kernel
decomposes into a large sum of individual basis kernels which can be embedded
in a directed acyclic graph; we show that it is then possible to perform kernel
selection through a hierarchical multiple kernel learning framework, in
polynomial time in the number of selected kernels. This framework is naturally
applied to non linear variable selection; our extensive simulations on
synthetic datasets and datasets from the UCI repository show that efficiently
exploring the large feature space through sparsity-inducing norms leads to
state-of-the-art predictive performance.
| [
{
"version": "v1",
"created": "Tue, 9 Sep 2008 06:48:10 GMT"
}
] | 2008-09-10T00:00:00 | [
[
"Bach",
"Francis",
"",
"INRIA Rocquencourt"
]
] | TITLE: Exploring Large Feature Spaces with Hierarchical Multiple Kernel
Learning
ABSTRACT: For supervised and unsupervised learning, positive definite kernels allow to
use large and potentially infinite dimensional feature spaces with a
computational cost that only depends on the number of observations. This is
usually done through the penalization of predictor functions by Euclidean or
Hilbertian norms. In this paper, we explore penalizing by sparsity-inducing
norms such as the l1-norm or the block l1-norm. We assume that the kernel
decomposes into a large sum of individual basis kernels which can be embedded
in a directed acyclic graph; we show that it is then possible to perform kernel
selection through a hierarchical multiple kernel learning framework, in
polynomial time in the number of selected kernels. This framework is naturally
applied to non linear variable selection; our extensive simulations on
synthetic datasets and datasets from the UCI repository show that efficiently
exploring the large feature space through sparsity-inducing norms leads to
state-of-the-art predictive performance.
| no_new_dataset | 0.947527 |
0808.3535 | Ioan Raicu | Ioan Raicu, Yong Zhao, Ian Foster, Alex Szalay | Data Diffusion: Dynamic Resource Provision and Data-Aware Scheduling for
Data Intensive Applications | 16 pages, 15 figures | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Data intensive applications often involve the analysis of large datasets that
require large amounts of compute and storage resources. While dedicated compute
and/or storage farms offer good task/data throughput, they suffer from a low
resource utilization problem under varying workload conditions. If we instead move such
data to distributed computing resources, then we incur expensive data transfer
cost. In this paper, we propose a data diffusion approach that combines dynamic
resource provisioning, on-demand data replication and caching, and data
locality-aware scheduling to achieve improved resource efficiency under varying
workloads. We define an abstract "data diffusion model" that takes into
consideration the workload characteristics, data accessing cost, application
throughput and resource utilization; we validate the model using a real-world
large-scale astronomy application. Our results show that data diffusion can
increase the performance index by as much as 34X, and improve application
response time by over 506X, while achieving near-optimal throughputs and
execution times.
| [
{
"version": "v1",
"created": "Tue, 26 Aug 2008 15:19:44 GMT"
}
] | 2008-08-27T00:00:00 | [
[
"Raicu",
"Ioan",
""
],
[
"Zhao",
"Yong",
""
],
[
"Foster",
"Ian",
""
],
[
"Szalay",
"Alex",
""
]
] | TITLE: Data Diffusion: Dynamic Resource Provision and Data-Aware Scheduling for
Data Intensive Applications
ABSTRACT: Data intensive applications often involve the analysis of large datasets that
require large amounts of compute and storage resources. While dedicated compute
and/or storage farms offer good task/data throughput, they suffer from a low
resource utilization problem under varying workload conditions. If we instead move such
data to distributed computing resources, then we incur expensive data transfer
cost. In this paper, we propose a data diffusion approach that combines dynamic
resource provisioning, on-demand data replication and caching, and data
locality-aware scheduling to achieve improved resource efficiency under varying
workloads. We define an abstract "data diffusion model" that takes into
consideration the workload characteristics, data accessing cost, application
throughput and resource utilization; we validate the model using a real-world
large-scale astronomy application. Our results show that data diffusion can
increase the performance index by as much as 34X, and improve application
response time by over 506X, while achieving near-optimal throughputs and
execution times.
| no_new_dataset | 0.947866 |
0807.3755 | Martin Klein | Martin Klein, Michael L. Nelson | Approximating Document Frequency with Term Count Values | 11 pages, 6 figures, 4 tables | null | null | null | cs.IR cs.DL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For bounded datasets such as the TREC Web Track (WT10g) the computation of
term frequency (TF) and inverse document frequency (IDF) is not difficult.
However, when the corpus is the entire web, direct IDF calculation is
impossible and values must instead be estimated. Most available datasets
provide values for term count (TC) meaning the number of times a certain term
occurs in the entire corpus. Intuitively this value is different from document
frequency (DF), the number of documents (e.g., web pages) a certain term occurs
in. We conduct a comparison study between TC and DF values within the Web as
Corpus (WaC). We found a very strong correlation with Spearman's rho >0.8
(p<0.005) which makes us confident in claiming that for such recently created
corpora the TC and DF values can be used interchangeably to compute IDF values.
These results are useful for the generation of accurate lexical signatures
based on the TF-IDF scheme.
| [
{
"version": "v1",
"created": "Wed, 23 Jul 2008 21:44:46 GMT"
}
] | 2008-07-25T00:00:00 | [
[
"Klein",
"Martin",
""
],
[
"Nelson",
"Michael L.",
""
]
] | TITLE: Approximating Document Frequency with Term Count Values
ABSTRACT: For bounded datasets such as the TREC Web Track (WT10g) the computation of
term frequency (TF) and inverse document frequency (IDF) is not difficult.
However, when the corpus is the entire web, direct IDF calculation is
impossible and values must instead be estimated. Most available datasets
provide values for term count (TC) meaning the number of times a certain term
occurs in the entire corpus. Intuitively this value is different from document
frequency (DF), the number of documents (e.g., web pages) a certain term occurs
in. We conduct a comparison study between TC and DF values within the Web as
Corpus (WaC). We found a very strong correlation with Spearman's rho >0.8
(p<0.005) which makes us confident in claiming that for such recently created
corpora the TC and DF values can be used interchangeably to compute IDF values.
These results are useful for the generation of accurate lexical signatures
based on the TF-IDF scheme.
| no_new_dataset | 0.944893 |
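The comparison above reduces to computing term counts and document frequencies over a corpus and correlating them; a minimal sketch on a toy corpus (not the Web-as-Corpus data) follows, with the IDF then derived from DF in the usual way.

from collections import Counter
import math
from scipy.stats import spearmanr

docs = ["the cat sat on the mat", "the dog ate the bone",
        "a cat and a dog", "the mat was red", "dogs chase cats"]
tokens = [d.split() for d in docs]

tc = Counter(w for doc in tokens for w in doc)            # term count
df = Counter(w for doc in tokens for w in set(doc))       # document frequency
terms = sorted(tc)
rho, pval = spearmanr([tc[t] for t in terms], [df[t] for t in terms])
print(f"Spearman rho = {rho:.3f} (p = {pval:.3f})")

# IDF computed from DF, which the abstract argues TC can stand in for
N = len(docs)
idf = {t: math.log(N / df[t]) for t in terms}
print(sorted(idf.items(), key=lambda kv: kv[1])[:3])      # most common terms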
0806.4703 | Feng Li | Feng Li and Shuigeng Zhou | Challenging More Updates: Towards Anonymous Re-publication of Fully
Dynamic Datasets | null | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most existing anonymization work has been done on static datasets, which have
no update and need only one-time publication. Recent studies consider
anonymizing dynamic datasets with external updates: the datasets are updated
with record insertions and/or deletions. This paper addresses a new problem:
anonymous re-publication of datasets with internal updates, where the attribute
values of each record are dynamically updated. This is an important and
challenging problem for attribute values of records are updating frequently in
practice and existing methods are unable to deal with such a situation.
We initiate a formal study of anonymous re-publication of dynamic datasets
with internal updates, and show the invalidation of existing methods. We
introduce theoretical definition and analysis of dynamic datasets, and present
a general privacy disclosure framework that is applicable to all anonymous
re-publication problems. We propose a new counterfeited generalization
principle called m-Distinct to effectively anonymize datasets with both external
updates and internal updates. We also develop an algorithm to generalize
datasets to meet m-Distinct. The experiments conducted on real-world data
demonstrate the effectiveness of the proposed solution.
| [
{
"version": "v1",
"created": "Sat, 28 Jun 2008 16:24:03 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Jul 2008 08:24:57 GMT"
}
] | 2008-07-24T00:00:00 | [
[
"Li",
"Feng",
""
],
[
"Zhou",
"Shuigeng",
""
]
] | TITLE: Challenging More Updates: Towards Anonymous Re-publication of Fully
Dynamic Datasets
ABSTRACT: Most existing anonymization work has been done on static datasets, which have
no update and need only one-time publication. Recent studies consider
anonymizing dynamic datasets with external updates: the datasets are updated
with record insertions and/or deletions. This paper addresses a new problem:
anonymous re-publication of datasets with internal updates, where the attribute
values of each record are dynamically updated. This is an important and
challenging problem because attribute values of records are updated frequently in
practice, and existing methods are unable to deal with such a situation.
We initiate a formal study of anonymous re-publication of dynamic datasets
with internal updates, and show the invalidation of existing methods. We
introduce theoretical definition and analysis of dynamic datasets, and present
a general privacy disclosure framework that is applicable to all anonymous
re-publication problems. We propose a new counterfeited generalization
principle called m-Distinct to effectively anonymize datasets with both external
updates and internal updates. We also develop an algorithm to generalize
datasets to meet m-Distinct. The experiments conducted on real-world data
demonstrate the effectiveness of the proposed solution.
| no_new_dataset | 0.943712 |
0807.2097 | Seung Ki Baek | Seung Ki Baek, Tae Young Kim, Beom Jun Kim | Testing a priority-based queue model with Linux command histories | 17 pages, 17 figures | Physica A 387, 3660 (2008) | 10.1016/j.physa.2008.02.021 | null | physics.data-an physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study human dynamics by analyzing Linux history files. The goodness-of-fit
test shows that most of the collected datasets belong to the universality class
suggested in the literature by a variable-length queueing process based on
priority. In order to check the validity of this model, we design two tests
based on mutual information between time intervals and a mathematical
relationship known as the arcsine law. Since the previously suggested queueing
process fails to pass these tests, the result suggests that the modelling of
human dynamics should properly consider the statistical dependency in the
temporal dimension.
| [
{
"version": "v1",
"created": "Mon, 14 Jul 2008 07:26:54 GMT"
}
] | 2008-07-15T00:00:00 | [
[
"Baek",
"Seung Ki",
""
],
[
"Kim",
"Tae Young",
""
],
[
"Kim",
"Beom Jun",
""
]
] | TITLE: Testing a priority-based queue model with Linux command histories
ABSTRACT: We study human dynamics by analyzing Linux history files. The goodness-of-fit
test shows that most of the collected datasets belong to the universality class
suggested in the literature by a variable-length queueing process based on
priority. In order to check the validity of this model, we design two tests
based on mutual information between time intervals and a mathematical
relationship known as the arcsine law. Since the previously suggested queueing
process fails to pass these tests, the result suggests that the modelling of
human dynamics should properly consider the statistical dependency in the
temporal dimension.
| no_new_dataset | 0.94545 |
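For readers unfamiliar with the queueing models being tested above, the sketch below simulates the classic fixed-length priority queue (used here only as a stand-in for the variable-length model discussed in the abstract) and prints a heavy-tail indicator plus the correlation between consecutive waiting times, a crude cousin of the paper's mutual-information test; all parameters are arbitrary.

import numpy as np

rng = np.random.default_rng(3)

def priority_queue(steps=200_000, L=2, p=0.9999):
    # Fixed-length priority queue: at every step the highest-priority task is
    # executed with probability p (otherwise a random one), then replaced.
    prio = rng.random(L)
    added = np.zeros(L, dtype=int)        # time step at which each task entered
    waits = []
    for t in range(steps):
        i = int(np.argmax(prio)) if rng.random() < p else int(rng.integers(L))
        waits.append(t - added[i])
        prio[i] = rng.random()            # a fresh task takes the executed slot
        added[i] = t
    return np.array(waits)

w = priority_queue()
w = w[w > 0]
print("fraction of waiting times above 100 steps:", float((w > 100).mean()))
print("correlation of consecutive waiting times:",
      float(np.corrcoef(w[:-1], w[1:])[0, 1]))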
0806.4686 | Tong Zhang | John Langford, Lihong Li, Tong Zhang | Sparse Online Learning via Truncated Gradient | null | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a general method called truncated gradient to induce sparsity in
the weights of online learning algorithms with convex loss functions. This
method has several essential properties: The degree of sparsity is continuous
-- a parameter controls the rate of sparsification from no sparsification to
total sparsification. The approach is theoretically motivated, and an instance
of it can be regarded as an online counterpart of the popular
$L_1$-regularization method in the batch setting. We prove that small rates of
sparsification result in only small additional regret with respect to typical
online learning guarantees. The approach works well empirically. We apply the
approach to several datasets and find that for datasets with large numbers of
features, substantial sparsity is discoverable.
| [
{
"version": "v1",
"created": "Sat, 28 Jun 2008 14:19:50 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Jul 2008 01:58:32 GMT"
}
] | 2008-07-04T00:00:00 | [
[
"Langford",
"John",
""
],
[
"Li",
"Lihong",
""
],
[
"Zhang",
"Tong",
""
]
] | TITLE: Sparse Online Learning via Truncated Gradient
ABSTRACT: We propose a general method called truncated gradient to induce sparsity in
the weights of online learning algorithms with convex loss functions. This
method has several essential properties: The degree of sparsity is continuous
-- a parameter controls the rate of sparsification from no sparsification to
total sparsification. The approach is theoretically motivated, and an instance
of it can be regarded as an online counterpart of the popular
$L_1$-regularization method in the batch setting. We prove that small rates of
sparsification result in only small additional regret with respect to typical
online learning guarantees. The approach works well empirically. We apply the
approach to several datasets and find that for datasets with large numbers of
features, substantial sparsity is discoverable.
| no_new_dataset | 0.948728 |
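A compact sketch of the truncated-gradient update described above, applied to online squared-loss regression on synthetic sparse data: every K plain gradient steps, small-magnitude coordinates are pulled toward zero by an accumulated "gravity" of eta*g*K but never pushed past zero. Hyper-parameters and the data generator are assumptions; the learned vector typically ends up with far fewer nonzero coordinates than the dimension.

import numpy as np

def truncate(w, alpha, theta):
    # coordinate-wise truncation: coordinates with |w_j| <= theta are pulled
    # toward zero by alpha, but never pushed past zero; larger ones are kept.
    out = w.copy()
    pos = (w >= 0) & (w <= theta)
    neg = (w < 0) & (w >= -theta)
    out[pos] = np.maximum(0.0, w[pos] - alpha)
    out[neg] = np.minimum(0.0, w[neg] + alpha)
    return out

def truncated_gradient(X, y, eta=0.05, g=0.1, K=10, theta=np.inf):
    # online squared-loss learning with a truncation step every K updates
    w = np.zeros(X.shape[1])
    for t, (x, target) in enumerate(zip(X, y), start=1):
        w -= eta * (w @ x - target) * x          # plain stochastic gradient step
        if t % K == 0:
            w = truncate(w, eta * g * K, theta)  # gravity accumulated over K steps
    return w

rng = np.random.default_rng(4)
n, d = 4000, 100
w_true = np.zeros(d); w_true[:5] = 3.0 * rng.normal(size=5)
X = rng.normal(size=(n, d))
y = X @ w_true + 0.1 * rng.normal(size=n)
w = truncated_gradient(X, y)
print("nonzero coordinates:", int((np.abs(w) > 0).sum()), "of", d)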
0806.2833 | Robert Cameron | R. Cameron M. Sch\"ussler | A robust correlation between growth rate and amplitude of solar cycles:
consequences for prediction methods | ApJ accepted | null | null | null | astro-ph physics.space-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the statistical relationship between the growth rate of activity
in the early phase of a solar cycle and its subsequent amplitude, on the basis
of four datasets of global activity indices (Wolf sunspot number, group sunspot
number, sunspot area, and 10.7-cm radio flux). In all cases, a significant
correlation is found: stronger cycles tend to rise faster. Owing to the
overlapping of sunspot cycles, this correlation leads to an amplitude-dependent
shift of the solar minimum epoch. We show that this effect explains the
correlations underlying various so-called precursor methods for the prediction
of solar cycle amplitudes and also affects the prediction tool of Dikpati et
al. (2006) based upon a dynamo model. Inferences as to the nature of the solar
dynamo mechanism resulting from predictive schemes which (directly or
indirectly) use the timing of solar minima should therefore be treated with
caution.
| [
{
"version": "v1",
"created": "Tue, 17 Jun 2008 16:25:40 GMT"
}
] | 2008-06-18T00:00:00 | [
[
"Schüssler",
"R. Cameron M.",
""
]
] | TITLE: A robust correlation between growth rate and amplitude of solar cycles:
consequences for prediction methods
ABSTRACT: We consider the statistical relationship between the growth rate of activity
in the early phase of a solar cycle and its subsequent amplitude, on the basis
of four datasets of global activity indices (Wolf sunspot number, group sunspot
number, sunspot area, and 10.7-cm radio flux). In all cases, a significant
correlation is found: stronger cycles tend to rise faster. Owing to the
overlapping of sunspot cycles, this correlation leads to an amplitude-dependent
shift of the solar minimum epoch. We show that this effect explains the
correlations underlying various so-called precursor methods for the prediction
of solar cycle amplitudes and also affects the prediction tool of Dikpati et
al. (2006) based upon a dynamo model. Inferences as to the nature of the solar
dynamo mechanism resulting from predictive schemes which (directly or
indirectly) use the timing of solar minima should therefore be treated with
caution.
| no_new_dataset | 0.94474 |
0805.4508 | Hong Tang | Hong Tang, Nozha Boujemma, Yunhao Chen | Modeling Loosely Annotated Images with Imagined Annotations | 10 pages | null | null | null | cs.IR cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present an approach to learning latent semantic analysis
models from loosely annotated images for automatic image annotation and
indexing. The given annotation in training images is loose due to: (1)
ambiguous correspondences between visual features and annotated keywords; (2)
incomplete lists of annotated keywords. The second reason motivates us to
enrich the incomplete annotation in a simple way before learning topic models.
In particular, some imagined keywords are poured into the incomplete annotation
through measuring similarity between keywords. Then, both given and imagined
annotations are used to learn probabilistic topic models for automatically
annotating new images. We conduct experiments on a typical Corel dataset of
images and loose annotations, and compare the proposed method with
state-of-the-art discrete annotation methods (using a set of discrete blobs to
represent an image). The proposed method improves word-driven probability
Latent Semantic Analysis (PLSA-words) up to a comparable performance with the
best discrete annotation method, while a merit of PLSA-words is still kept,
i.e., a wider semantic range.
| [
{
"version": "v1",
"created": "Thu, 29 May 2008 10:35:29 GMT"
}
] | 2008-05-30T00:00:00 | [
[
"Tang",
"Hong",
""
],
[
"Boujemma",
"Nozha",
""
],
[
"Chen",
"Yunhao",
""
]
] | TITLE: Modeling Loosely Annotated Images with Imagined Annotations
ABSTRACT: In this paper, we present an approach to learning latent semantic analysis
models from loosely annotated images for automatic image annotation and
indexing. The given annotation in training images is loose due to: (1)
ambiguous correspondences between visual features and annotated keywords; (2)
incomplete lists of annotated keywords. The second reason motivates us to
enrich the incomplete annotation in a simple way before learning topic models.
In particular, some imagined keywords are poured into the incomplete annotation
through measuring similarity between keywords. Then, both given and imagined
annotations are used to learn probabilistic topic models for automatically
annotating new images. We conduct experiments on a typical Corel dataset of
images and loose annotations, and compare the proposed method with
state-of-the-art discrete annotation methods (using a set of discrete blobs to
represent an image). The proposed method improves word-driven probability
Latent Semantic Analysis (PLSA-words) up to a comparable performance with the
best discrete annotation method, while a merit of PLSA-words is still kept,
i.e., a wider semantic range.
| no_new_dataset | 0.951818 |
0803.0034 | Leonid Andreev V | Leonid Andreev | From a set of parts to an indivisible whole. Part I: Operations in a
closed mode | 28 pages, 10 figures; typos in equations (4) and (5) corrected | null | null | null | cs.OH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper provides a description of a new method for information processing
based on holistic approach wherein analysis is a direct product of synthesis.
The core of the method is iterative averaging of all the elements of a system
according to all the parameters describing the elements. Contrary to common
logic, the iterative averaging of a system's elements does not result in
homogenization of the system; instead, it causes an obligatory subdivision of
the system into two alternative subgroups, leaving no outliers. Within each of
the formed subgroups, similarity coefficients between the elements reach the
value of 1, whereas similarity coefficients between the elements of different
subgroups equal a certain constant value greater than 0 but lower than 1. When
subjected to iterative averaging, any system consisting of three or more
elements, of which at least two are not completely identical, undergoes
such a process of bifurcation that occurs non-linearly. Successive iterative
averaging of each of the forming subgroups eventually provides a hierarchical
system that reflects relationships between the elements of an input system
under analysis. We propose a definition of a natural hierarchy that can exist
only in conditions of closeness of a system and can be discovered upon
providing such an effect on a system which allows its elements to interact with
each other based on the principle of self-organization. Self-organization can
be achieved through an overall and total cross-averaging of a system's
elements. We demonstrate the application potentials of the proposed technology
on a number of examples, including a system of scattered points, randomized
datasets, as well as meteorological and demographical datasets.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2008 01:58:59 GMT"
},
{
"version": "v2",
"created": "Wed, 28 May 2008 04:43:14 GMT"
}
] | 2008-05-28T00:00:00 | [
[
"Andreev",
"Leonid",
""
]
] | TITLE: From a set of parts to an indivisible whole. Part I: Operations in a
closed mode
ABSTRACT: This paper provides a description of a new method for information processing
based on holistic approach wherein analysis is a direct product of synthesis.
The core of the method is iterative averaging of all the elements of a system
according to all the parameters describing the elements. Contrary to common
logic, the iterative averaging of a system's elements does not result in
homogenization of the system; instead, it causes an obligatory subdivision of
the system into two alternative subgroups, leaving no outliers. Within each of
the formed subgroups, similarity coefficients between the elements reach the
value of 1, whereas similarity coefficients between the elements of different
subgroups equal a certain constant value greater than 0 but lower than 1. When
subjected to iterative averaging, any system consisting of three or more
elements of which at least two elements are not completely identical undergoes
such a process of bifurcation that occurs non-linearly. Successive iterative
averaging of each of the forming subgroups eventually provides a hierarchical
system that reflects relationships between the elements of an input system
under analysis. We propose a definition of a natural hierarchy that can exist
only in conditions of closeness of a system and can be discovered upon
providing such an effect onto a system which allows its elements to interact with
each other based on the principle of self-organization. Self-organization can
be achieved through an overall and total cross-averaging of a system's
elements. We demonstrate the application potentials of the proposed technology
on a number of examples, including a system of scattered points, randomized
datasets, as well as meteorological and demographical datasets.
| no_new_dataset | 0.943191 |
0805.2045 | Ciro Cattuto | Ciro Cattuto, Dominik Benz, Andreas Hotho, Gerd Stumme | Semantic Analysis of Tag Similarity Measures in Collaborative Tagging
Systems | 5 pages, 2 figures | null | null | null | cs.DL cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Social bookmarking systems allow users to organise collections of resources
on the Web in a collaborative fashion. The increasing popularity of these
systems as well as first insights into their emergent semantics have made them
relevant to disciplines like knowledge extraction and ontology learning. The
problem of devising methods to measure the semantic relatedness between tags
and characterizing it semantically is still largely open. Here we analyze three
measures of tag relatedness: tag co-occurrence, cosine similarity of
co-occurrence distributions, and FolkRank, an adaptation of the PageRank
algorithm to folksonomies. Each measure is computed on tags from a large-scale
dataset crawled from the social bookmarking system del.icio.us. To provide a
semantic grounding of our findings, a connection to WordNet (a semantic lexicon
for the English language) is established by mapping tags into synonym sets of
WordNet, and applying there well-known metrics of semantic similarity. Our
results clearly expose different characteristics of the selected measures of
relatedness, making them applicable to different subtasks of knowledge
extraction such as synonym detection or discovery of concept hierarchies.
| [
{
"version": "v1",
"created": "Wed, 14 May 2008 14:10:02 GMT"
}
] | 2008-05-15T00:00:00 | [
[
"Cattuto",
"Ciro",
""
],
[
"Benz",
"Dominik",
""
],
[
"Hotho",
"Andreas",
""
],
[
"Stumme",
"Gerd",
""
]
] | TITLE: Semantic Analysis of Tag Similarity Measures in Collaborative Tagging
Systems
ABSTRACT: Social bookmarking systems allow users to organise collections of resources
on the Web in a collaborative fashion. The increasing popularity of these
systems as well as first insights into their emergent semantics have made them
relevant to disciplines like knowledge extraction and ontology learning. The
problem of devising methods to measure the semantic relatedness between tags
and characterizing it semantically is still largely open. Here we analyze three
measures of tag relatedness: tag co-occurrence, cosine similarity of
co-occurrence distributions, and FolkRank, an adaptation of the PageRank
algorithm to folksonomies. Each measure is computed on tags from a large-scale
dataset crawled from the social bookmarking system del.icio.us. To provide a
semantic grounding of our findings, a connection to WordNet (a semantic lexicon
for the English language) is established by mapping tags into synonym sets of
WordNet, and applying there well-known metrics of semantic similarity. Our
results clearly expose different characteristics of the selected measures of
relatedness, making them applicable to different subtasks of knowledge
extraction such as synonym detection or discovery of concept hierarchies.
| no_new_dataset | 0.944587 |
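A minimal sketch of the first two relatedness measures named in the abstract above, on a toy folksonomy of (user, resource, tags) posts; the posts and function names are illustrative and not taken from the del.icio.us data:

```python
from collections import defaultdict
from math import sqrt

# Toy folksonomy: (user, resource, tags) posts. Illustrative data only.
posts = [
    ("u1", "r1", {"python", "programming", "tutorial"}),
    ("u2", "r1", {"python", "code"}),
    ("u1", "r2", {"programming", "code", "software"}),
    ("u3", "r3", {"python", "software"}),
]

# Tag co-occurrence counts: how often two tags appear on the same post.
cooc = defaultdict(lambda: defaultdict(int))
for _, _, tags in posts:
    for a in tags:
        for b in tags:
            if a != b:
                cooc[a][b] += 1

def cosine(tag_a, tag_b):
    """Cosine similarity of the two tags' co-occurrence distributions."""
    va, vb = cooc[tag_a], cooc[tag_b]
    dot = sum(va[t] * vb[t] for t in set(va) & set(vb))
    na = sqrt(sum(v * v for v in va.values()))
    nb = sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

print(cooc["python"]["code"])           # raw co-occurrence count
print(cosine("python", "programming"))  # similarity of co-occurrence profiles
```

FolkRank, the third measure, additionally needs the full user-resource-tag graph and a PageRank-style iteration, so it is not reproduced here.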
0805.0120 | Stephen Vavasis | Michael Biggs, Ali Ghodsi, Stephen Vavasis | Nonnegative Matrix Factorization via Rank-One Downdate | null | null | null | null | cs.IR cs.NA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Nonnegative matrix factorization (NMF) was popularized as a tool for data
mining by Lee and Seung in 1999. NMF attempts to approximate a matrix with
nonnegative entries by a product of two low-rank matrices, also with
nonnegative entries. We propose an algorithm called rank-one downdate (R1D) for
computing a NMF that is partly motivated by singular value decomposition. This
algorithm computes the dominant singular values and vectors of adaptively
determined submatrices of a matrix. On each iteration, R1D extracts a rank-one
submatrix from the dataset according to an objective function. We establish a
theoretical result that maximizing this objective function corresponds to
correctly classifying articles in a nearly separable corpus. We also provide
computational experiments showing the success of this method in identifying
features in realistic datasets.
| [
{
"version": "v1",
"created": "Thu, 1 May 2008 17:59:44 GMT"
}
] | 2008-05-02T00:00:00 | [
[
"Biggs",
"Michael",
""
],
[
"Ghodsi",
"Ali",
""
],
[
"Vavasis",
"Stephen",
""
]
] | TITLE: Nonnegative Matrix Factorization via Rank-One Downdate
ABSTRACT: Nonnegative matrix factorization (NMF) was popularized as a tool for data
mining by Lee and Seung in 1999. NMF attempts to approximate a matrix with
nonnegative entries by a product of two low-rank matrices, also with
nonnegative entries. We propose an algorithm called rank-one downdate (R1D) for
computing a NMF that is partly motivated by singular value decomposition. This
algorithm computes the dominant singular values and vectors of adaptively
determined submatrices of a matrix. On each iteration, R1D extracts a rank-one
submatrix from the dataset according to an objective function. We establish a
theoretical result that maximizing this objective function corresponds to
correctly classifying articles in a nearly separable corpus. We also provide
computational experiments showing the success of this method in identifying
features in realistic datasets.
| no_new_dataset | 0.944893 |
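A loose sketch of the rank-one extraction loop described above, assuming a dense nonnegative numpy matrix; the magnitude threshold used to pick the submatrix is a stand-in for the paper's objective function, so this illustrates the general idea rather than R1D itself:

```python
import numpy as np

def rank_one_sketch(A, k, thresh=0.1):
    """Greedy rank-one extraction on a nonnegative matrix A.
    Returns k (u, v, rows, cols) tuples. The row/column selection is a
    crude magnitude threshold, not the objective function of R1D."""
    A = A.astype(float).copy()
    factors = []
    for _ in range(k):
        # Dominant singular pair of the current residual.
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        u, v = U[:, 0] * s[0], Vt[0, :]
        # Adaptively keep the rows/columns where the rank-one signal is strong.
        rows = np.where(np.abs(u) > thresh * np.abs(u).max())[0]
        cols = np.where(np.abs(v) > thresh * np.abs(v).max())[0]
        factors.append((u, v, rows, cols))
        # Downdate: subtract the extracted rank-one piece from that submatrix.
        A[np.ix_(rows, cols)] -= np.outer(u[rows], v[cols])
        A[A < 0] = 0.0   # keep the residual nonnegative
    return factors

rng = np.random.default_rng(0)
X = rng.random((20, 12))
print([(len(r), len(c)) for _, _, r, c in rank_one_sketch(X, k=3)])
```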
0804.3417 | Nicholas M. Ball | Nicholas M. Ball (1), Robert J. Brunner (1 and 2), Adam D. Myers (1)
((1) Department of Astronomy, University of Illinois at Urbana-Champaign, (2)
National Center for Supercomputing Applications, Urbana-Champaign) | Robust Machine Learning Applied to Terascale Astronomical Datasets | 11 pages, 2 figures, uses llncs.cls. To appear in the 9th LCI
International Conference on High-Performance Clustered Computing | null | null | Not arXiv:0710.4482 | astro-ph cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present recent results from the LCDM (Laboratory for Cosmological Data
Mining; http://lcdm.astro.uiuc.edu) collaboration between UIUC Astronomy and
NCSA to deploy supercomputing cluster resources and machine learning algorithms
for the mining of terascale astronomical datasets. This is a novel application
in the field of astronomy, because we are using such resources for data mining,
and not just performing simulations. Via a modified implementation of the NCSA
cyberenvironment Data-to-Knowledge, we are able to provide improved
classifications for over 100 million stars and galaxies in the Sloan Digital
Sky Survey, improved distance measures, and a full exploitation of the simple
but powerful k-nearest neighbor algorithm. A driving principle of this work is
that our methods should be extensible from current terascale datasets to
upcoming petascale datasets and beyond. We discuss issues encountered to-date,
and further issues for the transition to petascale. In particular, disk I/O
will become a major limiting factor unless the necessary infrastructure is
implemented.
| [
{
"version": "v1",
"created": "Mon, 21 Apr 2008 21:58:18 GMT"
}
] | 2008-04-29T00:00:00 | [
[
"Ball",
"Nicholas M.",
"",
"1 and 2"
],
[
"Brunner",
"Robert J.",
"",
"1 and 2"
],
[
"Myers",
"Adam D.",
""
]
] | TITLE: Robust Machine Learning Applied to Terascale Astronomical Datasets
ABSTRACT: We present recent results from the LCDM (Laboratory for Cosmological Data
Mining; http://lcdm.astro.uiuc.edu) collaboration between UIUC Astronomy and
NCSA to deploy supercomputing cluster resources and machine learning algorithms
for the mining of terascale astronomical datasets. This is a novel application
in the field of astronomy, because we are using such resources for data mining,
and not just performing simulations. Via a modified implementation of the NCSA
cyberenvironment Data-to-Knowledge, we are able to provide improved
classifications for over 100 million stars and galaxies in the Sloan Digital
Sky Survey, improved distance measures, and a full exploitation of the simple
but powerful k-nearest neighbor algorithm. A driving principle of this work is
that our methods should be extensible from current terascale datasets to
upcoming petascale datasets and beyond. We discuss issues encountered to-date,
and further issues for the transition to petascale. In particular, disk I/O
will become a major limiting factor unless the necessary infrastructure is
implemented.
| no_new_dataset | 0.95253 |
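The workhorse classifier named above is k-nearest neighbours; a small in-memory version on synthetic two-class "colour" features is sketched below (the data and labels are made up, not SDSS measurements, and the real pipeline runs this at a vastly larger scale):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(42)

# Hypothetical photometric features for two object classes (toy data only).
stars    = rng.normal(loc=[0.3, 0.1, 0.0], scale=0.1, size=(500, 3))
galaxies = rng.normal(loc=[0.8, 0.5, 0.3], scale=0.2, size=(500, 3))
X = np.vstack([stars, galaxies])
y = np.array([0] * 500 + [1] * 500)   # 0 = star, 1 = galaxy

knn = KNeighborsClassifier(n_neighbors=15)
knn.fit(X, y)

query = np.array([[0.75, 0.45, 0.25]])
print(knn.predict(query), knn.predict_proba(query))
```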
cs/0512095 | Dmitri Krioukov | Priya Mahadevan, Dmitri Krioukov, Marina Fomenkov, Bradley Huffaker,
Xenofontas Dimitropoulos, kc claffy, Amin Vahdat | The Internet AS-Level Topology: Three Data Sources and One Definitive
Metric | This paper is a revised journal version of cs.NI/0508033 | ACM SIGCOMM Computer Communication Review (CCR), v.36, n.1,
p.17-26, 2006 | 10.1145/1111322.1111328 | null | cs.NI physics.soc-ph | null | We calculate an extensive set of characteristics for Internet AS topologies
extracted from the three data sources most frequently used by the research
community: traceroutes, BGP, and WHOIS. We discover that traceroute and BGP
topologies are similar to one another but differ substantially from the WHOIS
topology. Among the widely considered metrics, we find that the joint degree
distribution appears to fundamentally characterize Internet AS topologies as
well as narrowly define values for other important metrics. We discuss the
interplay between the specifics of the three data collection mechanisms and the
resulting topology views. In particular, we show how the data collection
peculiarities explain differences in the resulting joint degree distributions
of the respective topologies. Finally, we release to the community the input
topology datasets, along with the scripts and output of our calculations. This
supplement should enable researchers to validate their models against real data
and to make more informed selection of topology data sources for their specific
needs.
| [
{
"version": "v1",
"created": "Sat, 24 Dec 2005 03:19:24 GMT"
}
] | 2008-04-16T00:00:00 | [
[
"Mahadevan",
"Priya",
""
],
[
"Krioukov",
"Dmitri",
""
],
[
"Fomenkov",
"Marina",
""
],
[
"Huffaker",
"Bradley",
""
],
[
"Dimitropoulos",
"Xenofontas",
""
],
[
"claffy",
"kc",
""
],
[
"Vahdat",
"Amin",
""
]
] | TITLE: The Internet AS-Level Topology: Three Data Sources and One Definitive
Metric
ABSTRACT: We calculate an extensive set of characteristics for Internet AS topologies
extracted from the three data sources most frequently used by the research
community: traceroutes, BGP, and WHOIS. We discover that traceroute and BGP
topologies are similar to one another but differ substantially from the WHOIS
topology. Among the widely considered metrics, we find that the joint degree
distribution appears to fundamentally characterize Internet AS topologies as
well as narrowly define values for other important metrics. We discuss the
interplay between the specifics of the three data collection mechanisms and the
resulting topology views. In particular, we show how the data collection
peculiarities explain differences in the resulting joint degree distributions
of the respective topologies. Finally, we release to the community the input
topology datasets, along with the scripts and output of our calculations. This
supplement should enable researchers to validate their models against real data
and to make more informed selection of topology data sources for their specific
needs.
| no_new_dataset | 0.946843 |
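The joint degree distribution singled out above is easy to compute from any undirected edge list; a sketch on a toy graph (not the released AS datasets):

```python
from collections import Counter, defaultdict

# Toy undirected edge list standing in for an AS-level topology.
edges = [(1, 2), (1, 3), (1, 4), (2, 3), (3, 4), (4, 5)]

degree = defaultdict(int)
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

# Joint degree distribution: the probability that a randomly chosen edge
# joins a node of degree k1 to a node of degree k2 (with k1 <= k2).
jdd = Counter()
for a, b in edges:
    k1, k2 = sorted((degree[a], degree[b]))
    jdd[(k1, k2)] += 1

total = sum(jdd.values())
for (k1, k2), n in sorted(jdd.items()):
    print(f"P({k1},{k2}) = {n / total:.3f}")
```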
0803.1417 | Alessandra Retico | P. Delogu, M.E. Fantacci, P. Kasae, A. Retico | Characterization of mammographic masses using a gradient-based
segmentation algorithm and a neural classifier | 18 pages, 7 figures | Comput Biol Med. 2007 Oct;37(10):1479-91. Epub 2007 Mar 26 | 10.1016/j.compbiomed.2007.01.009 | null | physics.med-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The computer-aided diagnosis system we developed for the mass
characterization is mainly based on a segmentation algorithm and on the neural
classification of several features computed on the segmented mass. Mass
segmentation plays a key role in most computerized systems. Our technique is a
gradient-based one, showing the main characteristic that no free parameters
have been evaluated on the dataset used in this analysis, thus it can directly
be applied to datasets acquired in different conditions without any ad-hoc
modification. A dataset of 226 masses (109 malignant and 117 benign) has been
used in this study. The segmentation algorithm works with a comparable
efficiency both on malignant and benign masses. Sixteen features based on
shape, size and intensity of the segmented masses are analyzed by a
multi-layered perceptron neural network. A feature selection procedure has been
carried out on the basis of the feature discriminating power and of the linear
correlations interplaying among them. The comparison of the areas under the ROC
curves obtained by varying the number of features to be classified has shown
that 12 selected features out of the 16 computed ones are powerful enough to
achieve the best classifier performances. The radiologist assigned the
segmented masses to three different categories: correctly-, acceptably- and
non-acceptably-segmented masses. We initially estimated the area under ROC
curve only on the first category of segmented masses (the 88.5% of the
dataset), then extending the dataset to the second sub-class (reaching the
97.8% of the dataset) and finally to the whole dataset, obtaining Az =
0.805+-0.030, 0.787+-0.024 and 0.780+-0.023, respectively.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2008 13:46:28 GMT"
}
] | 2008-03-11T00:00:00 | [
[
"Delogu",
"P.",
""
],
[
"Fantacci",
"M. E.",
""
],
[
"Kasae",
"P.",
""
],
[
"Retico",
"A.",
""
]
] | TITLE: Characterization of mammographic masses using a gradient-based
segmentation algorithm and a neural classifier
ABSTRACT: The computer-aided diagnosis system we developed for the mass
characterization is mainly based on a segmentation algorithm and on the neural
classification of several features computed on the segmented mass. Mass
segmentation plays a key role in most computerized systems. Our technique is a
gradient-based one, showing the main characteristic that no free parameters
have been evaluated on the dataset used in this analysis, thus it can directly
be applied to datasets acquired in different conditions without any ad-hoc
modification. A dataset of 226 masses (109 malignant and 117 benign) has been
used in this study. The segmentation algorithm works with a comparable
efficiency both on malignant and benign masses. Sixteen features based on
shape, size and intensity of the segmented masses are analyzed by a
multi-layered perceptron neural network. A feature selection procedure has been
carried out on the basis of the feature discriminating power and of the linear
correlations interplaying among them. The comparison of the areas under the ROC
curves obtained by varying the number of features to be classified has shown
that 12 selected features out of the 16 computed ones are powerful enough to
achieve the best classifier performances. The radiologist assigned the
segmented masses to three different categories: correctly-, acceptably- and
non-acceptably-segmented masses. We initially estimated the area under ROC
curve only on the first category of segmented masses (the 88.5% of the
dataset), then extending the dataset to the second sub-class (reaching the
97.8% of the dataset) and finally to the whole dataset, obtaining Az =
0.805+-0.030, 0.787+-0.024 and 0.780+-0.023, respectively.
| no_new_dataset | 0.946597 |
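The classification stage described above (a multi-layered perceptron over a dozen-odd features, evaluated by the area under the ROC curve) can be sketched on synthetic features; the data below is generated, not the 226-mass set, and the network size is an arbitrary choice:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for the 16 shape/size/intensity features.
X, y = make_classification(n_samples=226, n_features=16, n_informative=12,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

mlp = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
mlp.fit(X_tr, y_tr)
scores = mlp.predict_proba(X_te)[:, 1]
print("Az (area under the ROC curve):", round(roc_auc_score(y_te, scores), 3))
```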
0803.0939 | Jure Leskovec | Jure Leskovec, Eric Horvitz | Planetary-Scale Views on an Instant-Messaging Network | null | null | null | null | physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a study of anonymized data capturing a month of high-level
communication activities within the whole of the Microsoft Messenger
instant-messaging system. We examine characteristics and patterns that emerge
from the collective dynamics of large numbers of people, rather than the
actions and characteristics of individuals. The dataset contains summary
properties of 30 billion conversations among 240 million people. From the data,
we construct a communication graph with 180 million nodes and 1.3 billion
undirected edges, creating the largest social network constructed and analyzed
to date. We report on multiple aspects of the dataset and synthesized graph. We
find that the graph is well-connected and robust to node removal. We
investigate on a planetary-scale the oft-cited report that people are separated
by ``six degrees of separation'' and find that the average path length among
Messenger users is 6.6. We also find that people tend to communicate more with
each other when they have similar age, language, and location, and that
cross-gender conversations are both more frequent and of longer duration than
conversations with the same gender.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2008 18:40:37 GMT"
}
] | 2008-03-07T00:00:00 | [
[
"Leskovec",
"Jure",
""
],
[
"Horvitz",
"Eric",
""
]
] | TITLE: Planetary-Scale Views on an Instant-Messaging Network
ABSTRACT: We present a study of anonymized data capturing a month of high-level
communication activities within the whole of the Microsoft Messenger
instant-messaging system. We examine characteristics and patterns that emerge
from the collective dynamics of large numbers of people, rather than the
actions and characteristics of individuals. The dataset contains summary
properties of 30 billion conversations among 240 million people. From the data,
we construct a communication graph with 180 million nodes and 1.3 billion
undirected edges, creating the largest social network constructed and analyzed
to date. We report on multiple aspects of the dataset and synthesized graph. We
find that the graph is well-connected and robust to node removal. We
investigate on a planetary-scale the oft-cited report that people are separated
by ``six degrees of separation'' and find that the average path length among
Messenger users is 6.6. We also find that people tend to communicate more with
each other when they have similar age, language, and location, and that
cross-gender conversations are both more frequent and of longer duration than
conversations with the same gender.
| new_dataset | 0.865906 |
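Path-length statistics like the 6.6 figure above are usually estimated by breadth-first search from a sample of source nodes; a sketch of that estimator on a toy graph (the adjacency list and sample size are illustrative only):

```python
import random
from collections import deque, defaultdict

def bfs_distances(adj, src):
    """Hop distances from src to every reachable node."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def estimate_avg_path_length(adj, n_sources=100, seed=0):
    """Average shortest-path length, estimated from sampled BFS sources."""
    rng = random.Random(seed)
    nodes = list(adj)
    total = count = 0
    for src in rng.sample(nodes, min(n_sources, len(nodes))):
        for v, d in bfs_distances(adj, src).items():
            if v != src:
                total += d
                count += 1
    return total / count if count else float("inf")

adj = defaultdict(set)
for a, b in [(1, 2), (2, 3), (3, 4), (4, 5), (5, 1), (2, 5)]:
    adj[a].add(b)
    adj[b].add(a)
print(estimate_avg_path_length(adj, n_sources=5))
```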
0802.4126 | Alexei Botchkarev | Peter Andru, Alexei Botchkarev | Hospital Case Cost Estimates Modelling - Algorithm Comparison | null | null | null | null | cs.CE cs.DB | http://creativecommons.org/licenses/publicdomain/ | Ontario (Canada) Health System stakeholders support the idea and necessity of
the integrated source of data that would include both clinical (e.g. diagnosis,
intervention, length of stay, case mix group) and financial (e.g. cost per
weighted case, cost per diem) characteristics of the Ontario healthcare system
activities at the patient-specific level. At present, the actual patient-level
case costs in the explicit form are not available in the financial databases
for all hospitals. The goal of this research effort is to develop financial
models that will assign each clinical case in the patient-specific data
warehouse a dollar value, representing the cost incurred by the Ontario health
care facility which treated the patient. Five mathematical models have been
developed and verified using a real dataset. All models can be classified into
two groups based on their underlying method: 1. Models based on using relative
intensity weights of the cases, and 2. Models based on using cost per diem.
| [
{
"version": "v1",
"created": "Thu, 28 Feb 2008 04:56:48 GMT"
}
] | 2008-02-29T00:00:00 | [
[
"Andru",
"Peter",
""
],
[
"Botchkarev",
"Alexei",
""
]
] | TITLE: Hospital Case Cost Estimates Modelling - Algorithm Comparison
ABSTRACT: Ontario (Canada) Health System stakeholders support the idea and necessity of
the integrated source of data that would include both clinical (e.g. diagnosis,
intervention, length of stay, case mix group) and financial (e.g. cost per
weighted case, cost per diem) characteristics of the Ontario healthcare system
activities at the patient-specific level. At present, the actual patient-level
case costs in the explicit form are not available in the financial databases
for all hospitals. The goal of this research effort is to develop financial
models that will assign each clinical case in the patient-specific data
warehouse a dollar value, representing the cost incurred by the Ontario health
care facility which treated the patient. Five mathematical models have been
developed and verified using a real dataset. All models can be classified into
two groups based on their underlying method: 1. Models based on using relative
intensity weights of the cases, and 2. Models based on using cost per diem.
| no_new_dataset | 0.953144 |
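Both model families named above reduce to simple per-case arithmetic; a sketch with placeholder rates (the dollar figures are invented, not Ontario costing data):

```python
# Placeholder rates, not actual Ontario figures.
COST_PER_WEIGHTED_CASE = 5000.0   # dollars per weighted case (RIW-based models)
COST_PER_DIEM = 900.0             # dollars per inpatient day (per-diem models)

def cost_from_riw(relative_intensity_weight):
    """Group 1: estimate from the case's relative intensity weight."""
    return relative_intensity_weight * COST_PER_WEIGHTED_CASE

def cost_from_per_diem(length_of_stay_days):
    """Group 2: estimate from the case's length of stay."""
    return length_of_stay_days * COST_PER_DIEM

case = {"riw": 1.35, "los": 6}
print(cost_from_riw(case["riw"]), cost_from_per_diem(case["los"]))
```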
0802.1026 | Benjamin Sach Mr | Benjamin Sach and Rapha\"el Clifford | An Empirical Study of Cache-Oblivious Priority Queues and their
Application to the Shortest Path Problem | null | null | null | null | cs.DS cs.SE | null | In recent years the Cache-Oblivious model of external memory computation has
provided an attractive theoretical basis for the analysis of algorithms on
massive datasets. Much progress has been made in discovering algorithms that
are asymptotically optimal or near optimal. However, to date there are still
relatively few successful experimental studies. In this paper we compare two
different Cache-Oblivious priority queues based on the Funnel and Bucket Heap
and apply them to the single source shortest path problem on graphs with
positive edge weights. Our results show that when RAM is limited and data is
swapping to external storage, the Cache-Oblivious priority queues achieve
orders of magnitude speedups over standard internal memory techniques. However,
for the single source shortest path problem both on simulated and real world
graph data, these speedups are markedly lower due to the time required to
access the graph adjacency list itself.
| [
{
"version": "v1",
"created": "Thu, 7 Feb 2008 18:02:11 GMT"
}
] | 2008-02-08T00:00:00 | [
[
"Sach",
"Benjamin",
""
],
[
"Clifford",
"Raphaël",
""
]
] | TITLE: An Empirical Study of Cache-Oblivious Priority Queues and their
Application to the Shortest Path Problem
ABSTRACT: In recent years the Cache-Oblivious model of external memory computation has
provided an attractive theoretical basis for the analysis of algorithms on
massive datasets. Much progress has been made in discovering algorithms that
are asymptotically optimal or near optimal. However, to date there are still
relatively few successful experimental studies. In this paper we compare two
different Cache-Oblivious priority queues based on the Funnel and Bucket Heap
and apply them to the single source shortest path problem on graphs with
positive edge weights. Our results show that when RAM is limited and data is
swapping to external storage, the Cache-Oblivious priority queues achieve
orders of magnitude speedups over standard internal memory techniques. However,
for the single source shortest path problem both on simulated and real world
graph data, these speedups are markedly lower due to the time required to
access the graph adjacency list itself.
| no_new_dataset | 0.945399 |
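For reference, the standard internal-memory baseline in this comparison is Dijkstra's algorithm driven by a binary-heap priority queue; a compact version on a toy weighted graph (the cache-oblivious queues themselves are far more involved and are not reproduced here):

```python
import heapq

def dijkstra(adj, src):
    """Single-source shortest paths with a binary-heap priority queue.
    adj maps node -> list of (neighbour, nonnegative weight)."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

adj = {
    "a": [("b", 2), ("c", 5)],
    "b": [("c", 1), ("d", 4)],
    "c": [("d", 1)],
    "d": [],
}
print(dijkstra(adj, "a"))   # {'a': 0, 'b': 2, 'c': 3, 'd': 4}
```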
bayes-an/9510001 | Hugh Chipman | Hugh Chipman (University of Chicago Graduate School of Business) | Bayesian Variable Selection with Related Predictors | uuencoded, gzipped postscript file, 24 pages including graphics and
tables. Revised version includes new example and improved plot. Paper also
available at http://gsbhac.uchicago.edu/techreports/ Author has web page at
http://www-gsb.uchicago.edu/ | null | null | STAT-94-13 (University of Waterloo) | bayes-an physics.data-an | null | In data sets with many predictors, algorithms for identifying a good subset
of predictors are often used. Most such algorithms do not account for any
relationships between predictors. For example, stepwise regression might select
a model containing an interaction AB but neither main effect A or B. This paper
develops mathematical representations of this and other relations between
predictors, which may then be incorporated in a model selection procedure. A
Bayesian approach that goes beyond the standard independence prior for variable
selection is adopted, and preference for certain models is interpreted as prior
information. Priors relevant to arbitrary interactions and polynomials, dummy
variables for categorical factors, competing predictors, and restrictions on
the size of the models are developed. Since the relations developed are for
priors, they may be incorporated in any Bayesian variable selection algorithm
for any type of linear model. The application of the methods here is
illustrated via the Stochastic Search Variable Selection algorithm of George
and McCulloch (1993), which is modified to utilize the new priors. The
performance of the approach is illustrated with two constructed examples and a
computer performance dataset. Keywords: Model Selection, Prior Distributions,
Interaction, Dummy Variable
| [
{
"version": "v1",
"created": "Mon, 30 Oct 1995 18:32:07 GMT"
},
{
"version": "v2",
"created": "Tue, 31 Oct 1995 17:37:16 GMT"
}
] | 2008-02-03T00:00:00 | [
[
"Chipman",
"Hugh",
"",
"University of Chicago Graduate School of Business"
]
] | TITLE: Bayesian Variable Selection with Related Predictors
ABSTRACT: In data sets with many predictors, algorithms for identifying a good subset
of predictors are often used. Most such algorithms do not account for any
relationships between predictors. For example, stepwise regression might select
a model containing an interaction AB but neither main effect A nor B. This paper
develops mathematical representations of this and other relations between
predictors, which may then be incorporated in a model selection procedure. A
Bayesian approach that goes beyond the standard independence prior for variable
selection is adopted, and preference for certain models is interpreted as prior
information. Priors relevant to arbitrary interactions and polynomials, dummy
variables for categorical factors, competing predictors, and restrictions on
the size of the models are developed. Since the relations developed are for
priors, they may be incorporated in any Bayesian variable selection algorithm
for any type of linear model. The application of the methods here is
illustrated via the Stochastic Search Variable Selection algorithm of George
and McCulloch (1993), which is modified to utilize the new priors. The
performance of the approach is illustrated with two constructed examples and a
computer performance dataset. Keywords: Model Selection, Prior Distributions,
Interaction, Dummy Variable
| no_new_dataset | 0.54306 |
cmp-lg/9607027 | Ilyas Cicekli | Ilyas Cicekli and H. Altay Guvenir | Learning Translation Rules From A Bilingual Corpus | 8 pages, Latex, uses nemlap.sty | Published in Proceedings of NEMLAP-2 | null | null | cmp-lg cs.CL | null | This paper proposes a mechanism for learning pattern correspondences between
two languages from a corpus of translated sentence pairs. The proposed
mechanism uses analogical reasoning between two translations. Given a pair of
translations, the similar parts of the sentences in the source language must
correspond the similar parts of the sentences in the target language.
Similarly, the different parts should correspond to the respective parts in the
translated sentences. The correspondences between the similarities, and also
differences are learned in the form of translation rules. The system is tested
on a small training dataset and produced promising results for further
investigation.
| [
{
"version": "v1",
"created": "Fri, 26 Jul 1996 10:36:59 GMT"
}
] | 2008-02-03T00:00:00 | [
[
"Cicekli",
"Ilyas",
""
],
[
"Guvenir",
"H. Altay",
""
]
] | TITLE: Learning Translation Rules From A Bilingual Corpus
ABSTRACT: This paper proposes a mechanism for learning pattern correspondences between
two languages from a corpus of translated sentence pairs. The proposed
mechanism uses analogical reasoning between two translations. Given a pair of
translations, the similar parts of the sentences in the source language must
correspond the similar parts of the sentences in the target language.
Similarly, the different parts should correspond to the respective parts in the
translated sentences. The correspondences between the similarities, and also
differences are learned in the form of translation rules. The system is tested
on a small training dataset and produced promising results for further
investigation.
| no_new_dataset | 0.938857 |
physics/9701026 | Radford Neal | Radford M. Neal (Dept. of Statistics, University of Toronto) | Monte Carlo Implementation of Gaussian Process Models for Bayesian
Regression and Classification | null | null | null | 9702 | physics.data-an | null | Gaussian processes are a natural way of defining prior distributions over
functions of one or more input variables. In a simple nonparametric regression
problem, where such a function gives the mean of a Gaussian distribution for an
observed response, a Gaussian process model can easily be implemented using
matrix computations that are feasible for datasets of up to about a thousand
cases. Hyperparameters that define the covariance function of the Gaussian
process can be sampled using Markov chain methods. Regression models where the
noise has a t distribution and logistic or probit models for classification
applications can be implemented by sampling as well for latent values
underlying the observations. Software is now available that implements these
methods using covariance functions with hierarchical parameterizations. Models
defined in this way can discover high-level properties of the data, such as
which inputs are relevant to predicting the response.
| [
{
"version": "v1",
"created": "Tue, 28 Jan 1997 00:59:11 GMT"
},
{
"version": "v2",
"created": "Tue, 28 Jan 1997 01:14:50 GMT"
}
] | 2008-02-03T00:00:00 | [
[
"Neal",
"Radford M.",
"",
"Dept. of Statistics, University of Toronto"
]
] | TITLE: Monte Carlo Implementation of Gaussian Process Models for Bayesian
Regression and Classification
ABSTRACT: Gaussian processes are a natural way of defining prior distributions over
functions of one or more input variables. In a simple nonparametric regression
problem, where such a function gives the mean of a Gaussian distribution for an
observed response, a Gaussian process model can easily be implemented using
matrix computations that are feasible for datasets of up to about a thousand
cases. Hyperparameters that define the covariance function of the Gaussian
process can be sampled using Markov chain methods. Regression models where the
noise has a t distribution and logistic or probit models for classification
applications can be implemented by sampling as well for latent values
underlying the observations. Software is now available that implements these
methods using covariance functions with hierarchical parameterizations. Models
defined in this way can discover high-level properties of the data, such as
which inputs are relevant to predicting the response.
| no_new_dataset | 0.944689 |
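The matrix computations referred to above are the usual Gaussian-process posterior formulas; a bare numpy version with a squared-exponential covariance, with the hyperparameters fixed by hand rather than sampled by Markov chain methods as in the paper:

```python
import numpy as np

def sq_exp_kernel(a, b, length_scale=1.0, signal_var=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return signal_var * np.exp(-0.5 * d2 / length_scale ** 2)

def gp_regression(x_train, y_train, x_test, noise_var=0.01):
    """Posterior mean and variance of a GP regression at x_test."""
    K = sq_exp_kernel(x_train, x_train) + noise_var * np.eye(len(x_train))
    K_s = sq_exp_kernel(x_test, x_train)
    K_ss = sq_exp_kernel(x_test, x_test)
    alpha = np.linalg.solve(K, y_train)
    mean = K_s @ alpha
    cov = K_ss - K_s @ np.linalg.solve(K, K_s.T)
    return mean, np.diag(cov)

x = np.linspace(0, 5, 30)
y = np.sin(x) + 0.1 * np.random.default_rng(1).normal(size=x.shape)
mean, var = gp_regression(x, y, np.array([1.5, 2.5, 6.0]))
print(mean, var)
```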
0801.2349 | Cristian Marchioli Dr. | C. Marchioli, A. Soldati, J.G.M. Kuerten, B. Arcen, A. Taniere, G.
Goldensoph, K.D. Squires, M.F. Cargnelutti and L.M. Portela | Statistics of particle dispersion in Direct Numerical Simulations of
wall-bounded turbulence: results of an international collaborative benchmark
test | null | null | null | null | physics.flu-dyn | null | In this paper, the results of an international collaborative test case
relative to the production of a Direct Numerical Simulation and Lagrangian
Particle Tracking database for turbulent particle dispersion in channel flow at
low Reynolds number are presented. The objective of this test case is to
establish a homogeneous source of data relevant to the general problem of
particle dispersion in wall-bounded turbulence. Different numerical approaches
and computational codes have been used to simulate the particle-laden flow and
calculations have been carried on long enough to achieve a statistically-steady
condition for particle distribution. In such stationary regime, a comprehensive
database including both post-processed statistics and raw data for the fluid
and for the particles has been obtained. The complete datasets can be
downloaded from the web at http://cfd.cineca.it/cfd/repository/. In this paper,
the most relevant velocity statistics (for both phases) and particle
distribution statistics are discussed and benchmarked by direct comparison
between the different numerical predictions.
| [
{
"version": "v1",
"created": "Tue, 15 Jan 2008 18:11:21 GMT"
}
] | 2008-01-16T00:00:00 | [
[
"Marchioli",
"C.",
""
],
[
"Soldati",
"A.",
""
],
[
"Kuerten",
"J. G. M.",
""
],
[
"Arcen",
"B.",
""
],
[
"Taniere",
"A.",
""
],
[
"Goldensoph",
"G.",
""
],
[
"Squires",
"K. D.",
""
],
[
"Cargnelutti",
"M. F.",
""
],
[
"Portela",
"L. M.",
""
]
] | TITLE: Statistics of particle dispersion in Direct Numerical Simulations of
wall-bounded turbulence: results of an international collaborative benchmark
test
ABSTRACT: In this paper, the results of an international collaborative test case
relative to the production of a Direct Numerical Simulation and Lagrangian
Particle Tracking database for turbulent particle dispersion in channel flow at
low Reynolds number are presented. The objective of this test case is to
establish a homogeneous source of data relevant to the general problem of
particle dispersion in wall-bounded turbulence. Different numerical approaches
and computational codes have been used to simulate the particle-laden flow and
calculations have been carried on long enough to achieve a statistically-steady
condition for particle distribution. In such stationary regime, a comprehensive
database including both post-processed statistics and raw data for the fluid
and for the particles has been obtained. The complete datasets can be
downloaded from the web at http://cfd.cineca.it/cfd/repository/. In this paper,
the most relevant velocity statistics (for both phases) and particle
distribution statistics are discussed and benchmarked by direct comparison
between the different numerical predictions.
| no_new_dataset | 0.56135 |
0712.4126 | Chandan Reddy | Chandan K. Reddy | TRUST-TECH based Methods for Optimization and Learning | PHD Thesis | Chandan K. Reddy, TRUST-TECH based Methods for Optimization and
Learning, PHD Thesis, Cornell University, February 2007 | null | null | cs.AI cs.CE cs.MS cs.NA cs.NE | null | Many problems that arise in machine learning domain deal with nonlinearity
and quite often demand users to obtain global optimal solutions rather than
local optimal ones. Optimization problems are inherent in machine learning
algorithms and hence many methods in machine learning were inherited from the
optimization literature. Popularly known as the initialization problem, the
ideal set of parameters required will significantly depend on the given
initialization values. The recently developed TRUST-TECH (TRansformation Under
STability-reTaining Equilibria CHaracterization) methodology systematically
explores the subspace of the parameters to obtain a complete set of local
optimal solutions. In this thesis work, we propose TRUST-TECH based methods for
solving several optimization and machine learning problems. Two stages namely,
the local stage and the neighborhood-search stage, are repeated alternatively
in the solution space to achieve improvements in the quality of the solutions.
Our methods were tested on both synthetic and real datasets and the advantages
of using this novel framework are clearly manifested. This framework not only
reduces the sensitivity to initialization, but also allows the flexibility for
the practitioners to use various global and local methods that work well for a
particular problem of interest. Other hierarchical stochastic algorithms like
evolutionary algorithms and smoothing algorithms are also studied and
frameworks for combining these methods with TRUST-TECH have been proposed and
evaluated on several test systems.
| [
{
"version": "v1",
"created": "Tue, 25 Dec 2007 03:14:32 GMT"
}
] | 2007-12-27T00:00:00 | [
[
"Reddy",
"Chandan K.",
""
]
] | TITLE: TRUST-TECH based Methods for Optimization and Learning
ABSTRACT: Many problems that arise in machine learning domain deal with nonlinearity
and quite often demand users to obtain global optimal solutions rather than
local optimal ones. Optimization problems are inherent in machine learning
algorithms and hence many methods in machine learning were inherited from the
optimization literature. Popularly known as the initialization problem, the
ideal set of parameters required will significantly depend on the given
initialization values. The recently developed TRUST-TECH (TRansformation Under
STability-reTaining Equilibria CHaracterization) methodology systematically
explores the subspace of the parameters to obtain a complete set of local
optimal solutions. In this thesis work, we propose TRUST-TECH based methods for
solving several optimization and machine learning problems. Two stages namely,
the local stage and the neighborhood-search stage, are repeated alternatively
in the solution space to achieve improvements in the quality of the solutions.
Our methods were tested on both synthetic and real datasets and the advantages
of using this novel framework are clearly manifested. This framework not only
reduces the sensitivity to initialization, but also allows the flexibility for
the practitioners to use various global and local methods that work well for a
particular problem of interest. Other hierarchical stochastic algorithms like
evolutionary algorithms and smoothing algorithms are also studied and
frameworks for combining these methods with TRUST-TECH have been proposed and
evaluated on several test systems.
| no_new_dataset | 0.949435 |
0712.2262 | Ian T Foster | David Bernholdt, Shishir Bharathi, David Brown, Kasidit Chanchio,
Meili Chen, Ann Chervenak, Luca Cinquini, Bob Drach, Ian Foster, Peter Fox,
Jose Garcia, Carl Kesselman, Rob Markel, Don Middleton, Veronika Nefedova,
Line Pouchard, Arie Shoshani, Alex Sim, Gary Strand, Dean Williams | The Earth System Grid: Supporting the Next Generation of Climate
Modeling Research | null | null | null | null | cs.CE cs.DC cs.NI | null | Understanding the earth's climate system and how it might be changing is a
preeminent scientific challenge. Global climate models are used to simulate
past, present, and future climates, and experiments are executed continuously
on an array of distributed supercomputers. The resulting data archive, spread
over several sites, currently contains upwards of 100 TB of simulation data and
is growing rapidly. Looking toward mid-decade and beyond, we must anticipate
and prepare for distributed climate research data holdings of many petabytes.
The Earth System Grid (ESG) is a collaborative interdisciplinary project aimed
at addressing the challenge of enabling management, discovery, access, and
analysis of these critically important datasets in a distributed and
heterogeneous computational environment. The problem is fundamentally a Grid
problem. Building upon the Globus toolkit and a variety of other technologies,
ESG is developing an environment that addresses authentication, authorization
for data access, large-scale data transport and management, services and
abstractions for high-performance remote data access, mechanisms for scalable
data replication, cataloging with rich semantic and syntactic information, data
discovery, distributed monitoring, and Web-based portals for using the system.
| [
{
"version": "v1",
"created": "Thu, 13 Dec 2007 23:39:04 GMT"
}
] | 2007-12-17T00:00:00 | [
[
"Bernholdt",
"David",
""
],
[
"Bharathi",
"Shishir",
""
],
[
"Brown",
"David",
""
],
[
"Chanchio",
"Kasidit",
""
],
[
"Chen",
"Meili",
""
],
[
"Chervenak",
"Ann",
""
],
[
"Cinquini",
"Luca",
""
],
[
"Drach",
"Bob",
""
],
[
"Foster",
"Ian",
""
],
[
"Fox",
"Peter",
""
],
[
"Garcia",
"Jose",
""
],
[
"Kesselman",
"Carl",
""
],
[
"Markel",
"Rob",
""
],
[
"Middleton",
"Don",
""
],
[
"Nefedova",
"Veronika",
""
],
[
"Pouchard",
"Line",
""
],
[
"Shoshani",
"Arie",
""
],
[
"Sim",
"Alex",
""
],
[
"Strand",
"Gary",
""
],
[
"Williams",
"Dean",
""
]
] | TITLE: The Earth System Grid: Supporting the Next Generation of Climate
Modeling Research
ABSTRACT: Understanding the earth's climate system and how it might be changing is a
preeminent scientific challenge. Global climate models are used to simulate
past, present, and future climates, and experiments are executed continuously
on an array of distributed supercomputers. The resulting data archive, spread
over several sites, currently contains upwards of 100 TB of simulation data and
is growing rapidly. Looking toward mid-decade and beyond, we must anticipate
and prepare for distributed climate research data holdings of many petabytes.
The Earth System Grid (ESG) is a collaborative interdisciplinary project aimed
at addressing the challenge of enabling management, discovery, access, and
analysis of these critically important datasets in a distributed and
heterogeneous computational environment. The problem is fundamentally a Grid
problem. Building upon the Globus toolkit and a variety of other technologies,
ESG is developing an environment that addresses authentication, authorization
for data access, large-scale data transport and management, services and
abstractions for high-performance remote data access, mechanisms for scalable
data replication, cataloging with rich semantic and syntactic information, data
discovery, distributed monitoring, and Web-based portals for using the system.
| no_new_dataset | 0.931774 |
cs/0610105 | Vitaly Shmatikov | Arvind Narayanan and Vitaly Shmatikov | How To Break Anonymity of the Netflix Prize Dataset | null | null | null | null | cs.CR cs.DB | null | We present a new class of statistical de-anonymization attacks against
high-dimensional micro-data, such as individual preferences, recommendations,
transaction records and so on. Our techniques are robust to perturbation in the
data and tolerate some mistakes in the adversary's background knowledge.
We apply our de-anonymization methodology to the Netflix Prize dataset, which
contains anonymous movie ratings of 500,000 subscribers of Netflix, the world's
largest online movie rental service. We demonstrate that an adversary who knows
only a little bit about an individual subscriber can easily identify this
subscriber's record in the dataset. Using the Internet Movie Database as the
source of background knowledge, we successfully identified the Netflix records
of known users, uncovering their apparent political preferences and other
potentially sensitive information.
| [
{
"version": "v1",
"created": "Wed, 18 Oct 2006 06:03:41 GMT"
},
{
"version": "v2",
"created": "Thu, 22 Nov 2007 05:13:06 GMT"
}
] | 2007-11-22T00:00:00 | [
[
"Narayanan",
"Arvind",
""
],
[
"Shmatikov",
"Vitaly",
""
]
] | TITLE: How To Break Anonymity of the Netflix Prize Dataset
ABSTRACT: We present a new class of statistical de-anonymization attacks against
high-dimensional micro-data, such as individual preferences, recommendations,
transaction records and so on. Our techniques are robust to perturbation in the
data and tolerate some mistakes in the adversary's background knowledge.
We apply our de-anonymization methodology to the Netflix Prize dataset, which
contains anonymous movie ratings of 500,000 subscribers of Netflix, the world's
largest online movie rental service. We demonstrate that an adversary who knows
only a little bit about an individual subscriber can easily identify this
subscriber's record in the dataset. Using the Internet Movie Database as the
source of background knowledge, we successfully identified the Netflix records
of known users, uncovering their apparent political preferences and other
potentially sensitive information.
| no_new_dataset | 0.941385 |
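At its core the attack scores every record against the adversary's auxiliary knowledge, weighting matches on rare items more heavily; a toy scoring sketch (the records, weighting, and matching rule are invented for illustration and are far simpler than the paper's statistical scoring):

```python
# Toy rating records standing in for the anonymized dataset.
records = {
    "subscriber_A": {"MovieX": 5, "MovieY": 3, "MovieZ": 1},
    "subscriber_B": {"MovieX": 4, "MovieQ": 2},
    "subscriber_C": {"MovieY": 3, "MovieQ": 5},
}

# How many records rated each movie; rarer movies are more identifying.
support = {}
for recs in records.values():
    for movie in recs:
        support[movie] = support.get(movie, 0) + 1

def score(aux, record):
    """Score a candidate record against the adversary's auxiliary ratings."""
    s = 0.0
    for movie, rating in aux.items():
        if movie in record and abs(record[movie] - rating) <= 1:
            s += 1.0 / support[movie]
    return s

aux_info = {"MovieY": 3, "MovieZ": 1}   # what the adversary happens to know
print(max(records, key=lambda name: score(aux_info, records[name])))
```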
0711.2914 | Tshilidzi Marwala | Gidudu Anthony, Hulley Gregg and Marwala Tshilidzi | Image Classification Using SVMs: One-against-One Vs One-against-All | Proccedings of the 28th Asian Conference on Remote Sensing, 2007 | null | null | null | cs.LG cs.AI cs.CV | null | Support Vector Machines (SVMs) are a relatively new supervised classification
technique to the land cover mapping community. They have their roots in
Statistical Learning Theory and have gained prominence because they are robust,
accurate and are effective even when using a small training sample. By their
nature SVMs are essentially binary classifiers, however, they can be adopted to
handle the multiple classification tasks common in remote sensing studies. The
two approaches commonly used are the One-Against-One (1A1) and One-Against-All
(1AA) techniques. In this paper, these approaches are evaluated in as far as
their impact and implication for land cover mapping. The main finding from this
research is that whereas the 1AA technique is more predisposed to yielding
unclassified and mixed pixels, the resulting classification accuracy is not
significantly different from the 1A1 approach. It is the authors' conclusion
therefore that ultimately the choice of technique adopted boils down to
personal preference and the uniqueness of the dataset at hand.
| [
{
"version": "v1",
"created": "Mon, 19 Nov 2007 12:25:00 GMT"
}
] | 2007-11-20T00:00:00 | [
[
"Anthony",
"Gidudu",
""
],
[
"Gregg",
"Hulley",
""
],
[
"Tshilidzi",
"Marwala",
""
]
] | TITLE: Image Classification Using SVMs: One-against-One Vs One-against-All
ABSTRACT: Support Vector Machines (SVMs) are a relatively new supervised classification
technique to the land cover mapping community. They have their roots in
Statistical Learning Theory and have gained prominence because they are robust,
accurate and are effective even when using a small training sample. By their
nature SVMs are essentially binary classifiers, however, they can be adopted to
handle the multiple classification tasks common in remote sensing studies. The
two approaches commonly used are the One-Against-One (1A1) and One-Against-All
(1AA) techniques. In this paper, these approaches are evaluated in as far as
their impact and implication for land cover mapping. The main finding from this
research is that whereas the 1AA technique is more predisposed to yielding
unclassified and mixed pixels, the resulting classification accuracy is not
significantly different from the 1A1 approach. It is the authors' conclusion
therefore that ultimately the choice of technique adopted boils down to
personal preference and the uniqueness of the dataset at hand.
| no_new_dataset | 0.950411 |
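The two multi-class strategies compared above map directly onto scikit-learn's wrappers; a sketch on a standard toy dataset (Iris here, not the remote-sensing imagery evaluated in the paper):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

strategies = {
    "1A1 (one-against-one)": OneVsOneClassifier(SVC(kernel="rbf", gamma="scale")),
    "1AA (one-against-all)": OneVsRestClassifier(SVC(kernel="rbf", gamma="scale")),
}
for name, clf in strategies.items():
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", round(clf.score(X_te, y_te), 3))
```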
physics/0609252 | Chih-Yuan Tseng | Chien-chih Chen, Chih-Yuan Tseng and Jia-Jyun Dong | Variable selection based on entropic criterion and its application to
the debris-flow triggering | 9 pages and 4 tables | Engineering Geology 94, 19 (2007) | 10.1016/j.enggeo.2007.06.004 | null | physics.data-an physics.geo-ph | null | We propose a new data analyzing scheme, the method of minimum entropy
analysis (MEA), in this paper. New MEA provides a quantitative criterion to
select relevant variables for modeling the physical system interested. Such
method can be easily extended to various geophysical/geological data analysis,
where many relevant or irrelevant available measurements may obscure the
understanding of the highly complicated physical system like the triggering of
debris-flows. After demonstrating and testing the MEA method, we apply this
method to a dataset of debris-flow occurrences in Taiwan and successfully find
out three relevant variables, i.e. the hydrological form factor, numbers and
areas of landslides, to the triggering of observed debris-flow events due to
the 1996 Typhoon Herb.
| [
{
"version": "v1",
"created": "Fri, 29 Sep 2006 05:19:27 GMT"
}
] | 2007-11-20T00:00:00 | [
[
"Chen",
"Chien-chih",
""
],
[
"Tseng",
"Chih-Yuan",
""
],
[
"Dong",
"Jia-Jyun",
""
]
] | TITLE: Variable selection based on entropic criterion and its application to
the debris-flow triggering
ABSTRACT: We propose a new data analyzing scheme, the method of minimum entropy
analysis (MEA), in this paper. New MEA provides a quantitative criterion to
select relevant variables for modeling the physical system of interest. Such
method can be easily extended to various geophysical/geological data analysis,
where many relevant or irrelevant available measurements may obscure the
understanding of the highly complicated physical system like the triggering of
debris-flows. After demonstrating and testing the MEA method, we apply this
method to a dataset of debris-flow occurrences in Taiwan and successfully find
out three relevant variables, i.e. the hydrological form factor, numbers and
areas of landslides, to the triggering of observed debris-flow events due to
the 1996 Typhoon Herb.
| no_new_dataset | 0.953405 |
0708.1242 | Christos Dimitrakakis | Christos Dimitrakakis and Christian Savu-Krohn | Cost-minimising strategies for data labelling : optimal stopping and
active learning | 17 pages, 4 figures. Corrected some errors and changed the flow of
the text | null | null | null | cs.LG | null | Supervised learning deals with the inference of a distribution over an output
or label space $\mathcal{Y}$ conditioned on points in an observation space $\mathcal{X}$, given
a training dataset $D$ of pairs in $\mathcal{X} \times \mathcal{Y}$. However, in a lot of
applications of interest, acquisition of large amounts of observations is easy,
while the process of generating labels is time-consuming or costly. One way to
deal with this problem is {\em active} learning, where points to be labelled
are selected with the aim of creating a model with better performance than that
of a model trained on an equal number of randomly sampled points. In this
paper, we instead propose to deal with the labelling cost directly: The
learning goal is defined as the minimisation of a cost which is a function of
the expected model performance and the total cost of the labels used. This
allows the development of general strategies and specific algorithms for (a)
optimal stopping, where the expected cost dictates whether label acquisition
should continue (b) empirical evaluation, where the cost is used as a
performance metric for a given combination of inference, stopping and sampling
methods. Though the main focus of the paper is optimal stopping, we also aim to
provide the background for further developments and discussion in the related
field of active learning.
| [
{
"version": "v1",
"created": "Thu, 9 Aug 2007 10:21:34 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Aug 2007 22:05:57 GMT"
},
{
"version": "v3",
"created": "Thu, 15 Nov 2007 16:37:51 GMT"
}
] | 2007-11-15T00:00:00 | [
[
"Dimitrakakis",
"Christos",
""
],
[
"Savu-Krohn",
"Christian",
""
]
] | TITLE: Cost-minimising strategies for data labelling : optimal stopping and
active learning
ABSTRACT: Supervised learning deals with the inference of a distribution over an output
or label space $\mathcal{Y}$ conditioned on points in an observation space $\mathcal{X}$, given
a training dataset $D$ of pairs in $\mathcal{X} \times \mathcal{Y}$. However, in a lot of
applications of interest, acquisition of large amounts of observations is easy,
while the process of generating labels is time-consuming or costly. One way to
deal with this problem is {\em active} learning, where points to be labelled
are selected with the aim of creating a model with better performance than that
of a model trained on an equal number of randomly sampled points. In this
paper, we instead propose to deal with the labelling cost directly: The
learning goal is defined as the minimisation of a cost which is a function of
the expected model performance and the total cost of the labels used. This
allows the development of general strategies and specific algorithms for (a)
optimal stopping, where the expected cost dictates whether label acquisition
should continue (b) empirical evaluation, where the cost is used as a
performance metric for a given combination of inference, stopping and sampling
methods. Though the main focus of the paper is optimal stopping, we also aim to
provide the background for further developments and discussion in the related
field of active learning.
| no_new_dataset | 0.941654 |
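The cost definition above (expected model performance plus total labelling cost) can be made concrete with a hypothetical learning curve; everything below, the curve, the error-to-cost weight and the per-label cost, is assumed for illustration:

```python
import numpy as np

def expected_error(n_labels):
    """Hypothetical learning curve: error decays as labels are added."""
    return 0.05 + 0.45 * np.exp(-n_labels / 40.0)

GAMMA = 1000.0     # assumed cost of one unit of expected error
LABEL_COST = 1.0   # assumed cost of acquiring one label

n = np.arange(1, 501)
total_cost = GAMMA * expected_error(n) + LABEL_COST * n
n_star = int(n[np.argmin(total_cost)])
print("stop after", n_star, "labels; total cost",
      round(float(total_cost.min()), 2))
```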
0709.3640 | Fabrice Rossi | Damien Fran\c{c}ois (CESAME), Fabrice Rossi (INRIA Rocquencourt /
INRIA Sophia Antipolis), Vincent Wertz (CESAME), Michel Verleysen (DICE -
MLG) | Resampling methods for parameter-free and robust feature selection with
mutual information | null | Neurocomputing 70, 7-9 (2007) 1276-1288 | 10.1016/j.neucom.2006.11.019 | null | cs.LG stat.AP | null | Combining the mutual information criterion with a forward feature selection
strategy offers a good trade-off between optimality of the selected feature
subset and computation time. However, it requires to set the parameter(s) of
the mutual information estimator and to determine when to halt the forward
procedure. These two choices are difficult to make because, as the
dimensionality of the subset increases, the estimation of the mutual
information becomes less and less reliable. This paper proposes to use
resampling methods, a K-fold cross-validation and the permutation test, to
address both issues. The resampling methods bring information about the
variance of the estimator, information which can then be used to automatically
set the parameter and to calculate a threshold to stop the forward procedure.
The procedure is illustrated on a synthetic dataset as well as on real-world
examples.
| [
{
"version": "v1",
"created": "Sun, 23 Sep 2007 14:09:28 GMT"
}
] | 2007-09-26T00:00:00 | [
[
"François",
"Damien",
"",
"CESAME"
],
[
"Rossi",
"Fabrice",
"",
"INRIA Rocquencourt /\n INRIA Sophia Antipolis"
],
[
"Wertz",
"Vincent",
"",
"CESAME"
],
[
"Verleysen",
"Michel",
"",
"DICE -\n MLG"
]
] | TITLE: Resampling methods for parameter-free and robust feature selection with
mutual information
ABSTRACT: Combining the mutual information criterion with a forward feature selection
strategy offers a good trade-off between optimality of the selected feature
subset and computation time. However, it requires to set the parameter(s) of
the mutual information estimator and to determine when to halt the forward
procedure. These two choices are difficult to make because, as the
dimensionality of the subset increases, the estimation of the mutual
information becomes less and less reliable. This paper proposes to use
resampling methods, a K-fold cross-validation and the permutation test, to
address both issues. The resampling methods bring information about the
variance of the estimator, information which can then be used to automatically
set the parameter and to calculate a threshold to stop the forward procedure.
The procedure is illustrated on a synthetic dataset as well as on real-world
examples.
| no_new_dataset | 0.947381 |
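A plain version of the forward procedure above, using scikit-learn's mutual information estimator; candidates are ranked by univariate mutual information and each addition is validated by K-fold cross-validation, and the stopping rule below is a simple stand-in for the paper's permutation test:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=300, n_features=10, n_informative=3,
                           n_redundant=2, random_state=0)

selected, remaining = [], list(range(X.shape[1]))
best_score = -np.inf
while remaining:
    # Rank remaining candidates by mutual information with the target.
    mi = mutual_info_classif(X[:, remaining], y, random_state=0)
    candidate = remaining[int(np.argmax(mi))]
    trial = selected + [candidate]
    score = cross_val_score(KNeighborsClassifier(), X[:, trial], y, cv=5).mean()
    if score <= best_score:   # halt when the validated score stops improving
        break
    best_score = score
    selected.append(candidate)
    remaining.remove(candidate)

print("selected features:", selected, "cv accuracy:", round(best_score, 3))
```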
0709.3965 | Tshilidzi Marwala | Greg Hulley and Tshilidzi Marwala | Evolving Classifiers: Methods for Incremental Learning | 14 pages | null | null | null | cs.LG cs.AI cs.NE | null | The ability of a classifier to take on new information and classes by
evolving the classifier without it having to be fully retrained is known as
incremental learning. Incremental learning has been successfully applied to
many classification problems, where the data is changing and is not all
available at once. In this paper there is a comparison between Learn++, which
is one of the most recent incremental learning algorithms, and the newly proposed
method of Incremental Learning Using Genetic Algorithm (ILUGA). Learn++ has
shown good incremental learning capabilities on benchmark datasets on which the
new ILUGA method has been tested. ILUGA has also shown good incremental
learning ability using only a few classifiers and does not suffer from
catastrophic forgetting. The results obtained for ILUGA on the Optical
Character Recognition (OCR) and Wine datasets are good, with an overall
accuracy of 93% and 94% respectively showing a 4% improvement over Learn++.MT
for the difficult multi-class OCR dataset.
| [
{
"version": "v1",
"created": "Tue, 25 Sep 2007 14:28:32 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Sep 2007 10:37:00 GMT"
}
] | 2007-09-26T00:00:00 | [
[
"Hulley",
"Greg",
""
],
[
"Marwala",
"Tshilidzi",
""
]
] | TITLE: Evolving Classifiers: Methods for Incremental Learning
ABSTRACT: The ability of a classifier to take on new information and classes by
evolving the classifier without it having to be fully retrained is known as
incremental learning. Incremental learning has been successfully applied to
many classification problems, where the data is changing and is not all
available at once. In this paper there is a comparison between Learn++, which
is one of the most recent incremental learning algorithms, and the newly proposed
method of Incremental Learning Using Genetic Algorithm (ILUGA). Learn++ has
shown good incremental learning capabilities on benchmark datasets on which the
new ILUGA method has been tested. ILUGA has also shown good incremental
learning ability using only a few classifiers and does not suffer from
catastrophic forgetting. The results obtained for ILUGA on the Optical
Character Recognition (OCR) and Wine datasets are good, with an overall
accuracy of 93% and 94% respectively showing a 4% improvement over Learn++.MT
for the difficult multi-class OCR dataset.
| no_new_dataset | 0.953362 |
0709.3967 | Tshilidzi Marwala | Gidudu Anthony, Hulley Greg and Marwala Tshilidzi | Classification of Images Using Support Vector Machines | 6 pages | null | null | null | cs.LG cs.AI | null | Support Vector Machines (SVMs) are a relatively new supervised classification
technique to the land cover mapping community. They have their roots in
Statistical Learning Theory and have gained prominence because they are robust,
accurate and are effective even when using a small training sample. By their
nature SVMs are essentially binary classifiers, however, they can be adopted to
handle the multiple classification tasks common in remote sensing studies. The
two approaches commonly used are the One-Against-One (1A1) and One-Against-All
(1AA) techniques. In this paper, these approaches are evaluated in as far as
their impact and implication for land cover mapping. The main finding from this
research is that whereas the 1AA technique is more predisposed to yielding
unclassified and mixed pixels, the resulting classification accuracy is not
significantly different from the 1A1 approach. It is the authors' conclusion that
ultimately the choice of technique adopted boils down to personal preference
and the uniqueness of the dataset at hand.
| [
{
"version": "v1",
"created": "Tue, 25 Sep 2007 14:37:40 GMT"
}
] | 2007-09-26T00:00:00 | [
[
"Anthony",
"Gidudu",
""
],
[
"Greg",
"Hulley",
""
],
[
"Tshilidzi",
"Marwala",
""
]
] | TITLE: Classification of Images Using Support Vector Machines
ABSTRACT: Support Vector Machines (SVMs) are a relatively new supervised classification
technique to the land cover mapping community. They have their roots in
Statistical Learning Theory and have gained prominence because they are robust,
accurate and are effective even when using a small training sample. By their
nature SVMs are essentially binary classifiers, however, they can be adopted to
handle the multiple classification tasks common in remote sensing studies. The
two approaches commonly used are the One-Against-One (1A1) and One-Against-All
(1AA) techniques. In this paper, these approaches are evaluated in as far as
their impact and implication for land cover mapping. The main finding from this
research is that whereas the 1AA technique is more predisposed to yielding
unclassified and mixed pixels, the resulting classification accuracy is not
significantly different from the 1A1 approach. It is the authors' conclusion that
ultimately the choice of technique adopted boils down to personal preference
and the uniqueness of the dataset at hand.
| no_new_dataset | 0.952309 |
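A minimal Python sketch of the two multi-class strategies the abstract above compares, using scikit-learn's one-vs-one and one-vs-rest wrappers around an RBF-kernel SVM. The synthetic data merely stands in for labelled land-cover pixels; the class count, kernel settings and split are arbitrary choices for the illustration.

from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.svm import SVC

# Synthetic 4-class problem standing in for labelled land-cover pixels.
X, y = make_classification(n_samples=600, n_features=6, n_informative=4,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

strategies = {
    "1A1 (one-against-one)": OneVsOneClassifier(SVC(kernel="rbf", gamma="scale")),
    "1AA (one-against-all)": OneVsRestClassifier(SVC(kernel="rbf", gamma="scale")),
}
for name, clf in strategies.items():
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", round(accuracy_score(y_te, clf.predict(X_te)), 3))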
cs/0309005 | Aleksandar Stojmirovic | Aleksandar Stojmirovic and Vladimir Pestov | Indexing Schemes for Similarity Search In Datasets of Short Protein
Fragments | 34 pages, 12 figures, 4 tables - Timings for experiments added upon
referees' request, and a number of less substantial modifications made | Information Systems 32 (2007), 1145-1165 | null | null | cs.DS q-bio.BM | null | We propose a family of very efficient hierarchical indexing schemes for
ungapped, score matrix-based similarity search in large datasets of short (4-12
amino acid) protein fragments. This type of similarity search has importance in
both providing a building block to more complex algorithms and for possible use
in direct biological investigations where datasets are of the order of 60
million objects. Our scheme is based on the internal geometry of the amino acid
alphabet and performs exceptionally well, for example outputting 100 nearest
neighbours to any possible fragment of length 10 after scanning on average less
than one per cent of the entire dataset.
| [
{
"version": "v1",
"created": "Fri, 5 Sep 2003 22:59:40 GMT"
},
{
"version": "v2",
"created": "Sat, 14 Jan 2006 00:45:57 GMT"
},
{
"version": "v3",
"created": "Tue, 11 Jul 2006 17:30:29 GMT"
},
{
"version": "v4",
"created": "Fri, 9 Feb 2007 02:33:56 GMT"
}
] | 2007-09-04T00:00:00 | [
[
"Stojmirovic",
"Aleksandar",
""
],
[
"Pestov",
"Vladimir",
""
]
] | TITLE: Indexing Schemes for Similarity Search In Datasets of Short Protein
Fragments
ABSTRACT: We propose a family of very efficient hierarchical indexing schemes for
ungapped, score matrix-based similarity search in large datasets of short (4-12
amino acid) protein fragments. This type of similarity search has importance in
both providing a building block to more complex algorithms and for possible use
in direct biological investigations where datasets are of the order of 60
million objects. Our scheme is based on the internal geometry of the amino acid
alphabet and performs exceptionally well, for example outputting 100 nearest
neighbours to any possible fragment of length 10 after scanning on average less
than one per cent of the entire dataset.
| no_new_dataset | 0.94743 |
0707.3670 | Xu Cheng | Xu Cheng and Cameron Dale and Jiangchuan Liu | Understanding the Characteristics of Internet Short Video Sharing:
YouTube as a Case Study | IEEE format, 9 pages, 16 figures | null | null | null | cs.NI cs.MM | null | Established in 2005, YouTube has become the most successful Internet site
providing a new generation of short video sharing service. Today, YouTube alone
comprises approximately 20% of all HTTP traffic, or nearly 10% of all traffic
on the Internet. Understanding the features of YouTube and similar video
sharing sites is thus crucial to their sustainable development and to network
traffic engineering. In this paper, using traces crawled in a 3-month period,
we present an in-depth and systematic measurement study on the characteristics
of YouTube videos. We find that YouTube videos have noticeably different
statistics compared to traditional streaming videos, ranging from length and
access pattern, to their active life span, ratings, and comments. The series of
datasets also allows us to identify the growth trend of this fast evolving
Internet site in various aspects, which has seldom been explored before. We
also look closely at the social networking aspect of YouTube, as this is a key
driving force toward its success. In particular, we find that the links to
related videos generated by uploaders' choices form a small-world network. This
suggests that the videos have strong correlations with each other, and creates
opportunities for developing novel caching or peer-to-peer distribution schemes
to efficiently deliver videos to end users.
| [
{
"version": "v1",
"created": "Wed, 25 Jul 2007 05:39:44 GMT"
}
] | 2007-07-26T00:00:00 | [
[
"Cheng",
"Xu",
""
],
[
"Dale",
"Cameron",
""
],
[
"Liu",
"Jiangchuan",
""
]
] | TITLE: Understanding the Characteristics of Internet Short Video Sharing:
YouTube as a Case Study
ABSTRACT: Established in 2005, YouTube has become the most successful Internet site
providing a new generation of short video sharing service. Today, YouTube alone
comprises approximately 20% of all HTTP traffic, or nearly 10% of all traffic
on the Internet. Understanding the features of YouTube and similar video
sharing sites is thus crucial to their sustainable development and to network
traffic engineering. In this paper, using traces crawled in a 3-month period,
we present an in-depth and systematic measurement study on the characteristics
of YouTube videos. We find that YouTube videos have noticeably different
statistics compared to traditional streaming videos, ranging from length and
access pattern, to their active life span, ratings, and comments. The series of
datasets also allows us to identify the growth trend of this fast evolving
Internet site in various aspects, which has seldom been explored before. We
also look closely at the social networking aspect of YouTube, as this is a key
driving force toward its success. In particular, we find that the links to
related videos generated by uploaders' choices form a small-world network. This
suggests that the videos have strong correlations with each other, and creates
opportunities for developing novel caching or peer-to-peer distribution schemes
to efficiently deliver videos to end users.
| no_new_dataset | 0.921922 |
physics/0405044 | Harald St\"ogbauer | Harald St\"ogbauer, Alexander Kraskov, Sergey A. Astakhov, and Peter
Grassberger | Least Dependent Component Analysis Based on Mutual Information | 18 pages, 20 figures, Phys. Rev. E (in press) | Phys. Rev. E 70, 066123 (2004) | 10.1103/PhysRevE.70.066123 | null | physics.comp-ph cs.IT math.IT physics.data-an q-bio.QM | null | We propose to use precise estimators of mutual information (MI) to find least
dependent components in a linearly mixed signal. On the one hand this seems to
lead to better blind source separation than with any other presently available
algorithm. On the other hand it has the advantage, compared to other
implementations of `independent' component analysis (ICA) some of which are
based on crude approximations for MI, that the numerical values of the MI can
be used for:
(i) estimating residual dependencies between the output components;
(ii) estimating the reliability of the output, by comparing the pairwise MIs
with those of re-mixed components;
(iii) clustering the output according to the residual interdependencies.
For the MI estimator we use a recently proposed k-nearest neighbor based
algorithm. For time sequences we combine this with delay embedding, in order to
take into account non-trivial time correlations. After several tests with
artificial data, we apply the resulting MILCA (Mutual Information based Least
dependent Component Analysis) algorithm to a real-world dataset, the ECG of a
pregnant woman.
The software implementation of the MILCA algorithm is freely available at
http://www.fz-juelich.de/nic/cs/software
| [
{
"version": "v1",
"created": "Mon, 10 May 2004 14:58:17 GMT"
},
{
"version": "v2",
"created": "Tue, 28 Sep 2004 14:05:32 GMT"
}
] | 2007-07-16T00:00:00 | [
[
"Stögbauer",
"Harald",
""
],
[
"Kraskov",
"Alexander",
""
],
[
"Astakhov",
"Sergey A.",
""
],
[
"Grassberger",
"Peter",
""
]
] | TITLE: Least Dependent Component Analysis Based on Mutual Information
ABSTRACT: We propose to use precise estimators of mutual information (MI) to find least
dependent components in a linearly mixed signal. On the one hand this seems to
lead to better blind source separation than with any other presently available
algorithm. On the other hand it has the advantage, compared to other
implementations of `independent' component analysis (ICA) some of which are
based on crude approximations for MI, that the numerical values of the MI can
be used for:
(i) estimating residual dependencies between the output components;
(ii) estimating the reliability of the output, by comparing the pairwise MIs
with those of re-mixed components;
(iii) clustering the output according to the residual interdependencies.
For the MI estimator we use a recently proposed k-nearest neighbor based
algorithm. For time sequences we combine this with delay embedding, in order to
take into account non-trivial time correlations. After several tests with
artificial data, we apply the resulting MILCA (Mutual Information based Least
dependent Component Analysis) algorithm to a real-world dataset, the ECG of a
pregnant woman.
The software implementation of the MILCA algorithm is freely available at
http://www.fz-juelich.de/nic/cs/software
| no_new_dataset | 0.940681 |
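A Python sketch of the k-nearest-neighbour MI estimator the abstract refers to (Kraskov et al., "algorithm 1"), written with brute-force distance matrices for clarity rather than taken from the MILCA implementation; the function names, the default k, and the pairwise-MI helper for checking residual dependencies between recovered components are choices made for this illustration.

import numpy as np
from scipy.special import digamma

def knn_mutual_information(x, y, k=4):
    # Kraskov-style k-NN estimate of I(X;Y) for two 1-D signals (max-norm in
    # the joint space). Add a tiny amount of noise first if the data contain ties.
    x = np.asarray(x, dtype=float).ravel()
    y = np.asarray(y, dtype=float).ravel()
    n = len(x)
    dx = np.abs(x[:, None] - x[None, :])
    dy = np.abs(y[:, None] - y[None, :])
    dz = np.maximum(dx, dy)
    np.fill_diagonal(dz, np.inf)
    eps = np.sort(dz, axis=1)[:, k - 1]              # distance to the k-th joint neighbour
    nx = np.sum(dx < eps[:, None], axis=1) - 1       # marginal neighbours strictly inside eps
    ny = np.sum(dy < eps[:, None], axis=1) - 1
    return digamma(k) + digamma(n) - np.mean(digamma(nx + 1) + digamma(ny + 1))

def pairwise_mi(components, k=4):
    # Residual-dependency matrix between recovered components (one signal per row).
    m = len(components)
    mi = np.zeros((m, m))
    for i in range(m):
        for j in range(i + 1, m):
            mi[i, j] = mi[j, i] = knn_mutual_information(components[i], components[j], k)
    return mi

# Quick check: dependent signals should score well above independent ones.
rng = np.random.default_rng(0)
s = rng.normal(size=1000)
print(knn_mutual_information(s, s + 0.3 * rng.normal(size=1000)))   # clearly positive
print(knn_mutual_information(s, rng.normal(size=1000)))             # close to zero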
0707.1618 | Per Ola Kristensson | Per Ola Kristensson, Nils Dahlback, Daniel Anundi, Marius Bjornstad,
Hanna Gillberg, Jonas Haraldsson, Ingrid Martensson, Matttias Nordvall,
Josefin Stahl | The Trade-offs with Space Time Cube Representation of Spatiotemporal
Patterns | null | null | null | null | cs.HC cs.GR | null | Space time cube representation is an information visualization technique
where spatiotemporal data points are mapped into a cube. Fast and correct
analysis of such information is important in for instance geospatial and social
visualization applications. Information visualization researchers have
previously argued that space time cube representation is beneficial in
revealing complex spatiotemporal patterns in a dataset to users. The argument
is based on the fact that both time and spatial information are displayed
simultaneously to users, an effect difficult to achieve in other
representations. However, to our knowledge the actual usefulness of space time
cube representation in conveying complex spatiotemporal patterns to users has
not been empirically validated. To fill this gap we report on a
between-subjects experiment comparing novice users' error rates and response
times when answering a set of questions using either space time cube or a
baseline 2D representation. For some simple questions the error rates were
lower when using the baseline representation. For complex questions where the
participants needed an overall understanding of the spatiotemporal structure of
the dataset, the space time cube representation resulted in on average twice as
fast response times with no difference in error rates compared to the baseline.
These results provide an empirical foundation for the hypothesis that space
time cube representation benefits users when analyzing complex spatiotemporal
patterns.
| [
{
"version": "v1",
"created": "Wed, 11 Jul 2007 13:39:34 GMT"
}
] | 2007-07-12T00:00:00 | [
[
"Kristensson",
"Per Ola",
""
],
[
"Dahlback",
"Nils",
""
],
[
"Anundi",
"Daniel",
""
],
[
"Bjornstad",
"Marius",
""
],
[
"Gillberg",
"Hanna",
""
],
[
"Haraldsson",
"Jonas",
""
],
[
"Martensson",
"Ingrid",
""
],
[
"Nordvall",
"Matttias",
""
],
[
"Stahl",
"Josefin",
""
]
] | TITLE: The Trade-offs with Space Time Cube Representation of Spatiotemporal
Patterns
ABSTRACT: Space time cube representation is an information visualization technique
where spatiotemporal data points are mapped into a cube. Fast and correct
analysis of such information is important in for instance geospatial and social
visualization applications. Information visualization researchers have
previously argued that space time cube representation is beneficial in
revealing complex spatiotemporal patterns in a dataset to users. The argument
is based on the fact that both time and spatial information are displayed
simultaneously to users, an effect difficult to achieve in other
representations. However, to our knowledge the actual usefulness of space time
cube representation in conveying complex spatiotemporal patterns to users has
not been empirically validated. To fill this gap we report on a
between-subjects experiment comparing novice users' error rates and response
times when answering a set of questions using either space time cube or a
baseline 2D representation. For some simple questions the error rates were
lower when using the baseline representation. For complex questions where the
participants needed an overall understanding of the spatiotemporal structure of
the dataset, the space time cube representation resulted in on average twice as
fast response times with no difference in error rates compared to the baseline.
These results provide an empirical foundation for the hypothesis that space
time cube representation benefits users when analyzing complex spatiotemporal
patterns.
| no_new_dataset | 0.954647 |
0706.1842 | Dietrich Stauffer | Soeren Wichmann, Dietrich Stauffer, Christian Schulze, Eric W. Holman | Do language change rates depend on population size? | 20 pages including all figures for a linguistic journal | null | null | null | physics.soc-ph | null | An earlier study (Nettle 1999b) concluded, based on computer simulations and
some inferences from empirical data, that languages will change the more slowly
the larger the population gets. We replicate this study using a more complete
language model for simulations (the Schulze model combined with a
Barabasi-Albert network) and a richer empirical dataset (the World Atlas of
Language Structures edited by Haspelmath et al. 2005). Our simulations show
either a weak or stronger dependence of language change on population sizes
depending on the parameter settings, and empirical data, like some of the
simulations, show a weak dependence.
| [
{
"version": "v1",
"created": "Wed, 13 Jun 2007 07:53:34 GMT"
}
] | 2007-06-14T00:00:00 | [
[
"Wichmann",
"Soeren",
""
],
[
"Stauffer",
"Dietrich",
""
],
[
"Schulze",
"Christian",
""
],
[
"Holman",
"Eric W.",
""
]
] | TITLE: Do language change rates depend on population size?
ABSTRACT: An earlier study (Nettle 1999b) concluded, based on computer simulations and
some inferences from empirical data, that languages will change the more slowly
the larger the population gets. We replicate this study using a more complete
language model for simulations (the Schulze model combined with a
Barabasi-Albert network) and a richer empirical dataset (the World Atlas of
Language Structures edited by Haspelmath et al. 2005). Our simulations show
either a weak or stronger dependence of language change on population sizes
depending on the parameter settings, and empirical data, like some of the
simulations, show a weak dependence.
| no_new_dataset | 0.945248 |
cs/0601001 | Jens Oehlschl\"agel | Jens Oehlschl\"agel | Truecluster: robust scalable clustering with model selection | Article (10 figures). Changes in 2nd version: dropped supplements in
favor of better integrated presentation, better literature coverage, put into
proper English. Author's website available via http://www.truecluster.com | null | null | null | cs.AI | null | Data-based classification is fundamental to most branches of science. While
recent years have brought enormous progress in various areas of statistical
computing and clustering, some general challenges in clustering remain: model
selection, robustness, and scalability to large datasets. We consider the
important problem of deciding on the optimal number of clusters, given an
arbitrary definition of space and clusteriness. We show how to construct a
cluster information criterion that allows objective model selection. Differing
from other approaches, our truecluster method does not require specific
assumptions about underlying distributions, dissimilarity definitions or
cluster models. Truecluster puts arbitrary clustering algorithms into a generic
unified (sampling-based) statistical framework. It is scalable to big datasets
and provides robust cluster assignments and case-wise diagnostics. Truecluster
will make clustering more objective, allows for automation, and will save time
and costs. Free R software is available.
| [
{
"version": "v1",
"created": "Mon, 2 Jan 2006 13:17:09 GMT"
},
{
"version": "v2",
"created": "Mon, 28 May 2007 17:18:09 GMT"
}
] | 2007-06-13T00:00:00 | [
[
"Oehlschlägel",
"Jens",
""
]
] | TITLE: Truecluster: robust scalable clustering with model selection
ABSTRACT: Data-based classification is fundamental to most branches of science. While
recent years have brought enormous progress in various areas of statistical
computing and clustering, some general challenges in clustering remain: model
selection, robustness, and scalability to large datasets. We consider the
important problem of deciding on the optimal number of clusters, given an
arbitrary definition of space and clusteriness. We show how to construct a
cluster information criterion that allows objective model selection. Differing
from other approaches, our truecluster method does not require specific
assumptions about underlying distributions, dissimilarity definitions or
cluster models. Truecluster puts arbitrary clustering algorithms into a generic
unified (sampling-based) statistical framework. It is scalable to big datasets
and provides robust cluster assignments and case-wise diagnostics. Truecluster
will make clustering more objective, allows for automation, and will save time
and costs. Free R software is available.
| no_new_dataset | 0.94743 |
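Truecluster's own cluster information criterion is not reproduced here; as a loosely related, generic illustration of sampling-based model selection for the number of clusters, the Python sketch below scores each candidate k by how stably k-means labels agree across random subsamples (adjusted Rand index on the shared points). All names, the subsample fraction and the use of k-means are assumptions of this stand-in, not part of the truecluster method.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

def stability_for_k(X, k, n_rounds=20, frac=0.7, seed=0):
    # Cluster two random subsamples and compare their labels on the points the
    # subsamples share; average the agreement over several rounds.
    rng = np.random.default_rng(seed)
    n, scores = len(X), []
    for _ in range(n_rounds):
        a = rng.choice(n, int(frac * n), replace=False)
        b = rng.choice(n, int(frac * n), replace=False)
        shared = np.intersect1d(a, b)
        la = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X[a])
        lb = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X[b])
        pos_a = {idx: i for i, idx in enumerate(a)}
        pos_b = {idx: i for i, idx in enumerate(b)}
        scores.append(adjusted_rand_score([la[pos_a[s]] for s in shared],
                                          [lb[pos_b[s]] for s in shared]))
    return float(np.mean(scores))

def choose_k(X, k_range=range(2, 8)):
    # Pick the number of clusters whose assignments are most reproducible.
    return max(k_range, key=lambda k: stability_for_k(X, k))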
cs/0610031 | Simeon Warner | Simeon Warner, Jeroen Bekaert, Carl Lagoze, Xiaoming Liu, Sandy
Payette, Herbert Van de Sompel | Pathways: Augmenting interoperability across scholarly repositories | 18 pages. Accepted for International Journal on Digital Libraries
special issue on Digital Libraries and eScience | null | 10.1007/s00799-007-0016-7 | null | cs.DL | null | In the emerging eScience environment, repositories of papers, datasets,
software, etc., should be the foundation of a global and natively-digital
scholarly communications system. The current infrastructure falls far short of
this goal. Cross-repository interoperability must be augmented to support the
many workflows and value-chains involved in scholarly communication. This will
not be achieved through the promotion of a single repository architecture or
content representation, but instead requires an interoperability framework to
connect the many heterogeneous systems that will exist.
We present a simple data model and service architecture that augments
repository interoperability to enable scholarly value-chains to be implemented.
We describe an experiment that demonstrates how the proposed infrastructure can
be deployed to implement the workflow involved in the creation of an overlay
journal over several different repository systems (Fedora, aDORe, DSpace and
arXiv).
| [
{
"version": "v1",
"created": "Thu, 5 Oct 2006 19:55:09 GMT"
}
] | 2007-06-13T00:00:00 | [
[
"Warner",
"Simeon",
""
],
[
"Bekaert",
"Jeroen",
""
],
[
"Lagoze",
"Carl",
""
],
[
"Liu",
"Xiaoming",
""
],
[
"Payette",
"Sandy",
""
],
[
"Van de Sompel",
"Herbert",
""
]
] | TITLE: Pathways: Augmenting interoperability across scholarly repositories
ABSTRACT: In the emerging eScience environment, repositories of papers, datasets,
software, etc., should be the foundation of a global and natively-digital
scholarly communications system. The current infrastructure falls far short of
this goal. Cross-repository interoperability must be augmented to support the
many workflows and value-chains involved in scholarly communication. This will
not be achieved through the promotion of a single repository architecture or
content representation, but instead requires an interoperability framework to
connect the many heterogeneous systems that will exist.
We present a simple data model and service architecture that augments
repository interoperability to enable scholarly value-chains to be implemented.
We describe an experiment that demonstrates how the proposed infrastructure can
be deployed to implement the workflow involved in the creation of an overlay
journal over several different repository systems (Fedora, aDORe, DSpace and
arXiv).
| no_new_dataset | 0.946051 |
0704.1028 | Jianlin Cheng | Jianlin Cheng | A neural network approach to ordinal regression | 8 pages | null | null | null | cs.LG cs.AI cs.NE | null | Ordinal regression is an important type of learning, which has properties of
both classification and regression. Here we describe a simple and effective
approach to adapt a traditional neural network to learn ordinal categories. Our
approach is a generalization of the perceptron method for ordinal regression.
On several benchmark datasets, our method (NNRank) outperforms a neural network
classification method. Compared with the ordinal regression methods using
Gaussian processes and support vector machines, NNRank achieves comparable
performance. Moreover, NNRank has the advantages of traditional neural
networks: learning in both online and batch modes, handling very large training
datasets, and making rapid predictions. These features make NNRank a useful and
complementary tool for large-scale data processing tasks such as information
retrieval, web page ranking, collaborative filtering, and protein ranking in
Bioinformatics.
| [
{
"version": "v1",
"created": "Sun, 8 Apr 2007 17:36:00 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Cheng",
"Jianlin",
""
]
] | TITLE: A neural network approach to ordinal regression
ABSTRACT: Ordinal regression is an important type of learning, which has properties of
both classification and regression. Here we describe a simple and effective
approach to adapt a traditional neural network to learn ordinal categories. Our
approach is a generalization of the perceptron method for ordinal regression.
On several benchmark datasets, our method (NNRank) outperforms a neural network
classification method. Compared with the ordinal regression methods using
Gaussian processes and support vector machines, NNRank achieves comparable
performance. Moreover, NNRank has the advantages of traditional neural
networks: learning in both online and batch modes, handling very large training
datasets, and making rapid predictions. These features make NNRank a useful and
complementary tool for large-scale data processing tasks such as information
retrieval, web page ranking, collaborative filtering, and protein ranking in
Bioinformatics.
| no_new_dataset | 0.949342 |
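A hedged Python sketch of one standard way to let an ordinary feed-forward network handle ordinal targets, in the spirit of the abstract above: rank r out of K classes is encoded as K-1 cumulative binary outputs and decoded by counting the outputs that fire. The network size, the synthetic data and the use of scikit-learn's MLPClassifier in multilabel mode are choices made for the illustration, not details of NNRank itself.

import numpy as np
from sklearn.neural_network import MLPClassifier

def to_cumulative(y, n_classes):
    # rank r -> [1]*r + [0]*(n_classes-1-r); e.g. K=4, r=2 -> [1, 1, 0]
    return (np.arange(n_classes - 1)[None, :] < np.asarray(y)[:, None]).astype(int)

def from_cumulative(bits):
    # Predicted rank = number of thresholds passed; a non-monotone pattern such
    # as [1, 0, 1] is simply counted, more careful decoding is possible.
    return bits.sum(axis=1)

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 5))
y = np.digitize(X @ rng.normal(size=5), [-1.0, 0.0, 1.0])   # ordinal labels 0..3

net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
net.fit(X, to_cumulative(y, n_classes=4))
y_hat = from_cumulative(net.predict(X))
print("training MAE:", np.mean(np.abs(y_hat - y)))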
0704.2374 | Daniel Fraiman | Daniel Fraiman | Growing Directed Networks: Estimation and Hypothesis Testing | 4 pages, 3 figures | null | null | null | physics.soc-ph physics.data-an | null | Based only on the information gathered in a snapshot of a directed network,
we present a formal way of checking if the proposed model is correct for the
empirical growing network under study. In particular, we show how to estimate
the attractiveness, and present an application of the model presented in
[arxiv:0704.1847] to the scientific publications network from the ISI dataset.
| [
{
"version": "v1",
"created": "Wed, 18 Apr 2007 16:08:32 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Fraiman",
"Daniel",
""
]
] | TITLE: Growing Directed Networks: Estimation and Hypothesis Testing
ABSTRACT: Based only on the information gathered in a snapshot of a directed network,
we present a formal way of checking if the proposed model is correct for the
empirical growing network under study. In particular, we show how to estimate
the attractiveness, and present an application of the model presented in
[arxiv:0704.1847] to the scientific publications network from the ISI dataset.
| no_new_dataset | 0.950273 |
0704.2668 | Alex Smola J | Le Song, Alex Smola, Arthur Gretton, Karsten Borgwardt, Justin Bedo | Supervised Feature Selection via Dependence Estimation | 9 pages | null | null | null | cs.LG | null | We introduce a framework for filtering features that employs the
Hilbert-Schmidt Independence Criterion (HSIC) as a measure of dependence
between the features and the labels. The key idea is that good features should
maximise such dependence. Feature selection for various supervised learning
problems (including classification and regression) is unified under this
framework, and the solutions can be approximated using a backward-elimination
algorithm. We demonstrate the usefulness of our method on both artificial and
real world datasets.
| [
{
"version": "v1",
"created": "Fri, 20 Apr 2007 08:26:29 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Song",
"Le",
""
],
[
"Smola",
"Alex",
""
],
[
"Gretton",
"Arthur",
""
],
[
"Borgwardt",
"Karsten",
""
],
[
"Bedo",
"Justin",
""
]
] | TITLE: Supervised Feature Selection via Dependence Estimation
ABSTRACT: We introduce a framework for filtering features that employs the
Hilbert-Schmidt Independence Criterion (HSIC) as a measure of dependence
between the features and the labels. The key idea is that good features should
maximise such dependence. Feature selection for various supervised learning
problems (including classification and regression) is unified under this
framework, and the solutions can be approximated using a backward-elimination
algorithm. We demonstrate the usefulness of our method on both artificial and
real world datasets.
| no_new_dataset | 0.944791 |
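A compact Python sketch of the dependence measure and the greedy loop the abstract describes: a biased empirical HSIC with a Gaussian kernel on the features (median-heuristic bandwidth) and a delta kernel on the labels, used inside backward elimination. The function names, the bandwidth rule and the stopping size n_keep are assumptions of this sketch rather than the paper's exact algorithm.

import numpy as np

def _rbf_kernel(X):
    # Gaussian kernel with a median-heuristic bandwidth.
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    scale = np.median(sq[sq > 0]) if np.any(sq > 0) else 1.0
    return np.exp(-sq / (scale + 1e-12))

def hsic(X_sub, y):
    # Biased empirical HSIC between a feature subset and (class) labels.
    n = len(y)
    K = _rbf_kernel(np.asarray(X_sub, dtype=float))
    L = (np.asarray(y)[:, None] == np.asarray(y)[None, :]).astype(float)  # delta kernel
    H = np.eye(n) - np.ones((n, n)) / n
    return float(np.trace(K @ H @ L @ H)) / (n - 1) ** 2

def backward_elimination(X, y, n_keep):
    # Repeatedly drop the feature whose removal leaves the highest dependence
    # between the remaining features and the labels.
    active = list(range(X.shape[1]))
    while len(active) > n_keep:
        drop = max(active, key=lambda j: hsic(X[:, [a for a in active if a != j]], y))
        active.remove(drop)
    return active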
0704.2803 | Jure Leskovec | Jure Leskovec, Mary McGlohon, Christos Faloutsos, Natalie Glance,
Matthew Hurst | Cascading Behavior in Large Blog Graphs | null | null | null | null | physics.soc-ph physics.data-an | null | How do blogs cite and influence each other? How do such links evolve? Does
the popularity of old blog posts drop exponentially with time? These are some
of the questions that we address in this work. Our goal is to build a model
that generates realistic cascades, so that it can help us with link prediction
and outlier detection.
Blogs (weblogs) have become an important medium of information because of
their timely publication, ease of use, and wide availability. In fact, they
often make headlines, by discussing and discovering evidence about political
events and facts. Often blogs link to one another, creating a publicly
available record of how information and influence spreads through an underlying
social network. Aggregating links from several blog posts creates a directed
graph which we analyze to discover the patterns of information propagation in
blogspace, and thereby understand the underlying social network. Not only are
blogs interesting on their own merit, but our analysis also sheds light on how
rumors, viruses, and ideas propagate over social and computer networks.
Here we report some surprising findings of the blog linking and information
propagation structure, after we analyzed one of the largest available datasets,
with 45,000 blogs and ~ 2.2 million blog-postings. Our analysis also sheds
light on how rumors, viruses, and ideas propagate over social and computer
networks. We also present a simple model that mimics the spread of information
on the blogosphere, and produces information cascades very similar to those
found in real life.
| [
{
"version": "v1",
"created": "Fri, 20 Apr 2007 22:37:13 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Leskovec",
"Jure",
""
],
[
"McGlohon",
"Mary",
""
],
[
"Faloutsos",
"Christos",
""
],
[
"Glance",
"Natalie",
""
],
[
"Hurst",
"Matthew",
""
]
] | TITLE: Cascading Behavior in Large Blog Graphs
ABSTRACT: How do blogs cite and influence each other? How do such links evolve? Does
the popularity of old blog posts drop exponentially with time? These are some
of the questions that we address in this work. Our goal is to build a model
that generates realistic cascades, so that it can help us with link prediction
and outlier detection.
Blogs (weblogs) have become an important medium of information because of
their timely publication, ease of use, and wide availability. In fact, they
often make headlines, by discussing and discovering evidence about political
events and facts. Often blogs link to one another, creating a publicly
available record of how information and influence spreads through an underlying
social network. Aggregating links from several blog posts creates a directed
graph which we analyze to discover the patterns of information propagation in
blogspace, and thereby understand the underlying social network. Not only are
blogs interesting on their own merit, but our analysis also sheds light on how
rumors, viruses, and ideas propagate over social and computer networks.
Here we report some surprising findings of the blog linking and information
propagation structure, after we analyzed one of the largest available datasets,
with 45,000 blogs and ~ 2.2 million blog-postings. Our analysis also sheds
light on how rumors, viruses, and ideas propagate over social and computer
networks. We also present a simple model that mimics the spread of information
on the blogosphere, and produces information cascades very similar to those
found in real life.
| no_new_dataset | 0.949435 |
0704.2883 | Roehner | Charles Jego, Bertrand M. Roehner | A physicist's view of the notion of "racism" | 14 pages, 3 figures, 1 table | null | null | null | physics.soc-ph | null | It is not uncommon, e.g. in the media, that specific groups are categorized
as being racist. Based on an extensive dataset of intermarriage statistics our
study questions the legitimacy of such characterizations. It suggests that, far
from being group-dependent, segregation mechanisms are instead
situation-dependent. More precisely, the degree of integration of a minority in
terms of the frequency of intermarriage is seen to crucially depend upon the
proportion p of the minority. Thus, a population may have a segregative
behavior with respect to a high-p (p>20%) minority A and at the same time a
tolerant attitude toward a low-p (p<2%) minority B. This remains true even when
A and B represent the same minority; for instance Black-White intermarriage is
much more frequent in Montana than it is in South Carolina. In short, the
nature of minority groups is largely irrelevant, the key factor being their
proportion in a given area.
| [
{
"version": "v1",
"created": "Sun, 22 Apr 2007 13:42:57 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Jego",
"Charles",
""
],
[
"Roehner",
"Bertrand M.",
""
]
] | TITLE: A physicist's view of the notion of "racism"
ABSTRACT: It is not uncommon, e.g. in the media, that specific groups are categorized
as being racist. Based on an extensive dataset of intermarriage statistics our
study questions the legitimacy of such characterizations. It suggests that, far
from being group-dependent, segregation mechanisms are instead
situation-dependent. More precisely, the degree of integration of a minority in
terms of the frequency of intermarriage is seen to crucially depend upon the
proportion p of the minority. Thus, a population may have a segregative
behavior with respect to a high-p (p>20%) minority A and at the same time a
tolerant attitude toward a low-p (p<2%) minority B. This remains true even when
A and B represent the same minority; for instance Black-White intermarriage is
much more frequent in Montana than it is in South Carolina. In short, the
nature of minority groups is largely irrelevant, the key factor being their
proportion in a given area.
| no_new_dataset | 0.940134 |
0705.1110 | Edgar Graaf de | Edgar de Graaf Joost Kok Walter Kosters | Mining Patterns with a Balanced Interval | null | null | null | null | cs.AI cs.DB | null | In many applications it will be useful to know those patterns that occur with
a balanced interval, e.g., a certain combination of phone numbers is called
almost every Friday or a group of products are sold a lot on Tuesday and
Thursday.
In previous work we proposed a new measure of support (the number of
occurrences of a pattern in a dataset), where we count the number of times a
pattern occurs (nearly) in the middle between two other occurrences. If the
number of non-occurrences between two occurrences of a pattern stays almost the
same then we call the pattern balanced.
It was noticed that some very frequent patterns obviously also occur with a
balanced interval, meaning in every transaction. However more interesting
patterns might occur, e.g., every three transactions. Here we discuss a
solution using standard deviation and average. Furthermore we propose a simpler
approach for pruning patterns with a balanced interval, making estimating the
pruning threshold more intuitive.
| [
{
"version": "v1",
"created": "Tue, 8 May 2007 15:22:38 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Kosters",
"Edgar de Graaf Joost Kok Walter",
""
]
] | TITLE: Mining Patterns with a Balanced Interval
ABSTRACT: In many applications it will be useful to know those patterns that occur with
a balanced interval, e.g., a certain combination of phone numbers is called
almost every Friday or a group of products are sold a lot on Tuesday and
Thursday.
In previous work we proposed a new measure of support (the number of
occurrences of a pattern in a dataset), where we count the number of times a
pattern occurs (nearly) in the middle between two other occurrences. If the
number of non-occurrences between two occurrences of a pattern stays almost the
same then we call the pattern balanced.
It was noticed that some very frequent patterns obviously also occur with a
balanced interval, meaning in every transaction. However more interesting
patterns might occur, e.g., every three transactions. Here we discuss a
solution using standard deviation and average. Furthermore we propose a simpler
approach for pruning patterns with a balanced interval, making estimating the
pruning threshold more intuitive.
| no_new_dataset | 0.941331 |
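A small Python sketch of the "standard deviation and average" idea mentioned above: given the transactions in which a pattern occurs, look at the gaps between consecutive occurrences and call the pattern balanced when the spread of the gaps is small relative to their mean. The threshold value and the exact statistic are illustrative choices, not the paper's precise support measure.

import numpy as np

def gap_statistics(occurrences):
    # occurrences: indices of the transactions in which the pattern occurs.
    occ = np.asarray(sorted(occurrences))
    if len(occ) < 3:
        return None
    gaps = np.diff(occ)                      # transactions between consecutive occurrences
    return float(gaps.mean()), float(gaps.std())

def is_balanced(occurrences, max_rel_spread=0.25):
    stats = gap_statistics(occurrences)
    if stats is None:
        return False
    mean_gap, std_gap = stats
    return std_gap <= max_rel_spread * mean_gap

# A pattern seen roughly every third transaction, with a little jitter:
print(is_balanced([0, 3, 6, 9, 13, 16, 19]))     # True
# A pattern with erratic gaps:
print(is_balanced([0, 1, 9, 11, 30, 31]))        # False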
0705.1390 | Tshilidzi Marwala | M.A. Herzog, T. Marwala and P.S. Heyns | Machine and Component Residual Life Estimation through the Application
of Neural Networks | 22 pages | null | null | null | cs.CE | null | This paper concerns the use of neural networks for predicting the residual
life of machines and components. In addition, the advantage of using
condition-monitoring data to enhance the predictive capability of these neural
networks was also investigated. A number of neural network variations were
trained and tested with the data of two different reliability-related datasets.
The first dataset represents the renewal case where the failed unit is repaired
and restored to a good-as-new condition. Data was collected in the laboratory
by subjecting a series of similar test pieces to fatigue loading with a
hydraulic actuator. The average prediction error of the various neural networks
being compared varied from 431 to 841 seconds on this dataset, where test
pieces had a characteristic life of 8,971 seconds. The second dataset was
collected from a group of pumps used to circulate a water and magnetite
solution within a plant. The data therefore originated from a repaired system
affected by reliability degradation. When optimized, the multi-layer perceptron
neural networks trained with the Levenberg-Marquardt algorithm and the general
regression neural network produced a sum-of-squares error within 11.1% of each
other. The potential for using neural networks for residual life prediction and
the advantage of incorporating condition-based data into the model were proven
for both examples.
| [
{
"version": "v1",
"created": "Thu, 10 May 2007 05:52:22 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Herzog",
"M. A.",
""
],
[
"Marwala",
"T.",
""
],
[
"Heyns",
"P. S.",
""
]
] | TITLE: Machine and Component Residual Life Estimation through the Application
of Neural Networks
ABSTRACT: This paper concerns the use of neural networks for predicting the residual
life of machines and components. In addition, the advantage of using
condition-monitoring data to enhance the predictive capability of these neural
networks was also investigated. A number of neural network variations were
trained and tested with the data of two different reliability-related datasets.
The first dataset represents the renewal case where the failed unit is repaired
and restored to a good-as-new condition. Data was collected in the laboratory
by subjecting a series of similar test pieces to fatigue loading with a
hydraulic actuator. The average prediction error of the various neural networks
being compared varied from 431 to 841 seconds on this dataset, where test
pieces had a characteristic life of 8,971 seconds. The second dataset was
collected from a group of pumps used to circulate a water and magnetite
solution within a plant. The data therefore originated from a repaired system
affected by reliability degradation. When optimized, the multi-layer perceptron
neural networks trained with the Levenberg-Marquardt algorithm and the general
regression neural network produced a sum-of-squares error within 11.1% of each
other. The potential for using neural networks for residual life prediction and
the advantage of incorporating condition-based data into the model were proven
for both examples.
| no_new_dataset | 0.922132 |
astro-ph/0510688 | Michael Noble S. | M.S. Noble, J.C. Houck, J.E. Davis, A. Young, M. Nowak | Using the Parallel Virtual Machine for Everyday Analysis | 4 pages; manuscript for oral presentation given at ADASS XV, Madrid | null | null | null | astro-ph cs.DC | null | A review of the literature reveals that while parallel computing is sometimes
employed by astronomers for custom, large-scale calculations, no package
fosters the routine application of parallel methods to standard problems in
astronomical data analysis. This paper describes our attempt to close that gap
by wrapping the Parallel Virtual Machine (PVM) as a scriptable S-Lang module.
Using PVM within ISIS, the Interactive Spectral Interpretation System, we've
distributed a number of representive calculations over a network of 25+ CPUs to
achieve dramatic reductions in execution times. We discuss how the approach
applies to a wide class of modeling problems, outline our efforts to make it
more transparent for common use, and note its growing importance in the context
of the large, multi-wavelength datasets used in modern analysis.
| [
{
"version": "v1",
"created": "Mon, 24 Oct 2005 15:17:36 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Noble",
"M. S.",
""
],
[
"Houck",
"J. C.",
""
],
[
"Davis",
"J. E.",
""
],
[
"Young",
"A.",
""
],
[
"Nowak",
"M.",
""
]
] | TITLE: Using the Parallel Virtual Machine for Everyday Analysis
ABSTRACT: A review of the literature reveals that while parallel computing is sometimes
employed by astronomers for custom, large-scale calculations, no package
fosters the routine application of parallel methods to standard problems in
astronomical data analysis. This paper describes our attempt to close that gap
by wrapping the Parallel Virtual Machine (PVM) as a scriptable S-Lang module.
Using PVM within ISIS, the Interactive Spectral Interpretation System, we've
distributed a number of representative calculations over a network of 25+ CPUs to
achieve dramatic reductions in execution times. We discuss how the approach
applies to a wide class of modeling problems, outline our efforts to make it
more transparent for common use, and note its growing importance in the context
of the large, multi-wavelength datasets used in modern analysis.
| no_new_dataset | 0.948632 |
cond-mat/0207711 | Johannes Berg | Johannes Berg, Michael L\"assig (U Cologne), and Andreas Wagner (U New
Mexico) | Structure and evolution of protein interaction networks: A statistical
model for link dynamics and gene duplications | published version | BMC Evolutionary Biology 4:51 (2004) | null | null | cond-mat.stat-mech physics.bio-ph q-bio.MN | null | The structure of molecular networks derives from dynamical processes on
evolutionary time scales. For protein interaction networks, global statistical
features of their structure can now be inferred consistently from several
large-throughput datasets. Understanding the underlying evolutionary dynamics
is crucial for discerning random parts of the network from biologically
important properties shaped by natural selection. We present a detailed
statistical analysis of the protein interactions in Saccharomyces cerevisiae
based on several large-throughput datasets. Protein pairs resulting from gene
duplications are used as tracers into the evolutionary past of the network.
From this analysis, we infer rate estimates for two key evolutionary
processes shaping the network: (i) gene duplications and (ii) gain and loss of
interactions through mutations in existing proteins, which are referred to as
link dynamics. Importantly, the link dynamics is asymmetric, i.e., the
evolutionary steps are mutations in just one of the binding partners. The link
turnover is shown to be much faster than gene duplications. According to this
model, the link dynamics is the dominant evolutionary force shaping the
statistical structure of the network, while the slower gene duplication
dynamics mainly affects its size. Specifically, the model predicts (i) a broad
distribution of the connectivities (i.e., the number of binding partners of a
protein) and (ii) correlations between the connectivities of interacting
proteins.
| [
{
"version": "v1",
"created": "Tue, 30 Jul 2002 14:11:58 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Apr 2003 13:03:55 GMT"
},
{
"version": "v3",
"created": "Sat, 27 Nov 2004 16:20:56 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Berg",
"Johannes",
"",
"U Cologne"
],
[
"Lässig",
"Michael",
"",
"U Cologne"
],
[
"Wagner",
"Andreas",
"",
"U New\n Mexico"
]
] | TITLE: Structure and evolution of protein interaction networks: A statistical
model for link dynamics and gene duplications
ABSTRACT: The structure of molecular networks derives from dynamical processes on
evolutionary time scales. For protein interaction networks, global statistical
features of their structure can now be inferred consistently from several
large-throughput datasets. Understanding the underlying evolutionary dynamics
is crucial for discerning random parts of the network from biologically
important properties shaped by natural selection. We present a detailed
statistical analysis of the protein interactions in Saccharomyces cerevisiae
based on several large-throughput datasets. Protein pairs resulting from gene
duplications are used as tracers into the evolutionary past of the network.
From this analysis, we infer rate estimates for two key evolutionary
processes shaping the network: (i) gene duplications and (ii) gain and loss of
interactions through mutations in existing proteins, which are referred to as
link dynamics. Importantly, the link dynamics is asymmetric, i.e., the
evolutionary steps are mutations in just one of the binding partners. The link
turnover is shown to be much faster than gene duplications. According to this
model, the link dynamics is the dominant evolutionary force shaping the
statistical structure of the network, while the slower gene duplication
dynamics mainly affects its size. Specifically, the model predicts (i) a broad
distribution of the connectivities (i.e., the number of binding partners of a
protein) and (ii) correlations between the connectivities of interacting
proteins.
| no_new_dataset | 0.951097 |
cond-mat/0305279 | Michele Caselle | M. Caselle, F. Di Cunto and P. Provero | A computational approach to regulatory element discovery in eukaryotes | 7 pages, 2 figures | Proceedings of the 2002 ECMTB conference | null | DFTT 13/2003 | cond-mat.dis-nn physics.bio-ph q-bio.GN | null | Gene regulation in Eukaryotes is mainly effected through transcription
factors binding to rather short recognition motifs generally located upstream
of the coding region. We present a novel computational method to identify
regulatory elements in the upstream region of Eukaryotic genes. The genes are
grouped in sets sharing an overrepresented short motif in their upstream
sequence. For each set, the average expression level from a microarray
experiment is determined: if this level is significantly higher or lower than
the average taken over the whole genome, then the overrepresented motif shared
by the genes in the set is likely to play a role in their regulation. We
illustrate the method by applying it to the genome of {\it S. cerevisiae}, for
which many datasets of microarray experiments are publicly available. Several
known binding motifs are correctly recognized by our algorithm, and a new
candidate is suggested for experimental verification.
| [
{
"version": "v1",
"created": "Tue, 13 May 2003 12:40:59 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Caselle",
"M.",
""
],
[
"Di Cunto",
"F.",
""
],
[
"Provero",
"P.",
""
]
] | TITLE: A computational approach to regulatory element discovery in eukaryotes
ABSTRACT: Gene regulation in Eukaryotes is mainly effected through transcription
factors binding to rather short recognition motifs generally located upstream
of the coding region. We present a novel computational method to identify
regulatory elements in the upstream region of Eukaryotic genes. The genes are
grouped in sets sharing an overrepresented short motif in their upstream
sequence. For each set, the average expression level from a microarray
experiment is determined: if this level is significantly higher or lower than
the average taken over the whole genome, then the overrepresented motif shared
by the genes in the set is likely to play a role in their regulation. We
illustrate the method by applying it to the genome of {\it S. cerevisiae}, for
which many datasets of microarray experiments are publicly available. Several
known binding motifs are correctly recognized by our algorithm, and a new
candidate is suggested for experimental verification.
| no_new_dataset | 0.946646 |
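A rough Python sketch of the screening idea in the abstract above: group genes that share a short upstream motif and flag motifs whose gene set shows a mean expression level far from the genome-wide average. The inputs (upstream: gene -> upstream sequence, expression: gene -> microarray expression value), the motif length, the minimum set size and the z-score cut-off are all placeholder assumptions; the paper's treatment of motif overrepresentation is not reproduced.

from collections import defaultdict
import numpy as np

def candidate_motifs(upstream, expression, k=6, min_genes=10, z_cut=4.0):
    genes = list(upstream)
    expr = np.array([expression[g] for g in genes], dtype=float)
    mu, sd = expr.mean(), expr.std()
    # Map every length-k upstream motif to the set of genes whose upstream
    # region contains it.
    motif_to_genes = defaultdict(set)
    for g in genes:
        seq = upstream[g].upper()
        for i in range(len(seq) - k + 1):
            motif_to_genes[seq[i:i + k]].add(g)
    hits = []
    for motif, gene_set in motif_to_genes.items():
        if len(gene_set) < min_genes:
            continue
        m = np.mean([expression[g] for g in gene_set])
        z = (m - mu) / (sd / np.sqrt(len(gene_set)))   # naive z-score of the set mean
        if abs(z) > z_cut:
            hits.append((motif, len(gene_set), float(z)))
    return sorted(hits, key=lambda t: -abs(t[2]))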
cond-mat/0305681 | Gorban | A. N. Gorban, A. Yu. Zinovyev, T. G. Popova | Seven clusters in genomic triplet distributions | Correction of URL. 16 pages, 5 figures. The software and datasets are
available at http://www.ihes.fr/~zinovyev/bullet and
http://www.ihes.fr/~zinovyev/7clusters Paper also available at
http://www.bioinfo.de/isb/2003/03/0039 | In Silico Biology, 3 (2003), 0039, 471-482 | null | null | cond-mat.dis-nn cs.CV physics.bio-ph physics.data-an q-bio.GN | null | In several recent papers new gene-detection algorithms were proposed for
detecting protein-coding regions without requiring learning dataset of already
known genes. The fact that unsupervised gene-detection is possible closely
connected to existence of a cluster structure in oligomer frequency
distributions. In this paper we study cluster structure of several genomes in
the space of their triplet frequencies, using pure data exploration strategy.
Several complete genomic sequences were analyzed, using visualization of tables
of triplet frequencies in a sliding window. The distribution of 64-dimensional
vectors of triplet frequencies displays a well-detectable cluster structure.
The structure was found to consist of seven clusters, corresponding to
protein-coding information in three possible phases in one of the two
complementary strands and in the non-coding regions with high accuracy (higher
than 90% on the nucleotide level). Visualizing and understanding the structure
allows to analyze effectively performance of different gene-prediction tools.
Since the method does not require extraction of ORFs, it can be applied even
for unassembled genomes. The information content of the triplet distributions
and the validity of the mean-field models are analysed.
| [
{
"version": "v1",
"created": "Thu, 29 May 2003 11:36:34 GMT"
},
{
"version": "v2",
"created": "Wed, 14 Apr 2004 17:01:56 GMT"
},
{
"version": "v3",
"created": "Mon, 1 Nov 2004 11:08:03 GMT"
},
{
"version": "v4",
"created": "Tue, 23 Nov 2004 13:09:00 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Gorban",
"A. N.",
""
],
[
"Zinovyev",
"A. Yu.",
""
],
[
"Popova",
"T. G.",
""
]
] | TITLE: Seven clusters in genomic triplet distributions
ABSTRACT: In several recent papers new gene-detection algorithms were proposed for
detecting protein-coding regions without requiring a learning dataset of already
known genes. The fact that unsupervised gene-detection is possible is closely
connected to the existence of a cluster structure in oligomer frequency
distributions. In this paper we study cluster structure of several genomes in
the space of their triplet frequencies, using pure data exploration strategy.
Several complete genomic sequences were analyzed, using visualization of tables
of triplet frequencies in a sliding window. The distribution of 64-dimensional
vectors of triplet frequencies displays a well-detectable cluster structure.
The structure was found to consist of seven clusters, corresponding to
protein-coding information in three possible phases in one of the two
complementary strands and in the non-coding regions with high accuracy (higher
than 90% on the nucleotide level). Visualizing and understanding the structure
allows to analyze effectively performance of different gene-prediction tools.
Since the method does not require extraction of ORFs, it can be applied even
for unassembled genomes. The information content of the triplet distributions
and the validity of the mean-field models are analysed.
| no_new_dataset | 0.952131 |
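A brief Python sketch of the data preparation behind the analysis described above: triplet frequencies counted in a sliding window give one 64-dimensional vector per window, which can then be clustered, here with k-means and k = 7 to echo the seven clusters reported. Window size, step, the single reading frame used for counting, and the random demo sequence are illustrative assumptions only.

from itertools import product
import numpy as np
from sklearn.cluster import KMeans

TRIPLETS = ["".join(t) for t in product("ACGT", repeat=3)]
INDEX = {t: i for i, t in enumerate(TRIPLETS)}

def triplet_frequencies(genome, window=300, step=100):
    # One normalised 64-dimensional triplet-frequency vector per sliding window.
    rows = []
    for start in range(0, len(genome) - window + 1, step):
        counts = np.zeros(64)
        chunk = genome[start:start + window].upper()
        for i in range(0, window - 2, 3):          # non-overlapping triplets, one frame
            idx = INDEX.get(chunk[i:i + 3])
            if idx is not None:                    # skips triplets with ambiguous bases
                counts[idx] += 1
        rows.append(counts / max(counts.sum(), 1.0))
    return np.array(rows)

# Random sequence only to make the sketch executable; a real genome is expected.
genome = "".join(np.random.default_rng(0).choice(list("ACGT"), size=20000))
freqs = triplet_frequencies(genome)
labels = KMeans(n_clusters=7, n_init=10, random_state=0).fit_predict(freqs)
print(freqs.shape, np.bincount(labels))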
cs/0005005 | Davis King | Davis King, Jarek Rossignac, and Andrzej Szymczak | Connectivity Compression for Irregular Quadrilateral Meshes | null | null | null | GVU Tech Report GIT-GVU-99-36 | cs.GR cs.CG cs.DS | null | Applications that require Internet access to remote 3D datasets are often
limited by the storage costs of 3D models. Several compression methods are
available to address these limits for objects represented by triangle meshes.
Many CAD and VRML models, however, are represented as quadrilateral meshes or
mixed triangle/quadrilateral meshes, and these models may also require
compression. We present an algorithm for encoding the connectivity of such
quadrilateral meshes, and we demonstrate that by preserving and exploiting the
original quad structure, our approach achieves encodings 30 - 80% smaller than
an approach based on randomly splitting quads into triangles. We present both a
code with a proven worst-case cost of 3 bits per vertex (or 2.75 bits per
vertex for meshes without valence-two vertices) and entropy-coding results for
typical meshes ranging from 0.3 to 0.9 bits per vertex, depending on the
regularity of the mesh. Our method may be implemented by a rule for a
particular splitting of quads into triangles and by using the compression and
decompression algorithms introduced in [Rossignac99] and
[Rossignac&Szymczak99]. We also present extensions to the algorithm to compress
meshes with holes and handles and meshes containing triangles and other
polygons as well as quads.
| [
{
"version": "v1",
"created": "Thu, 4 May 2000 18:15:08 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"King",
"Davis",
""
],
[
"Rossignac",
"Jarek",
""
],
[
"Szymczak",
"Andrzej",
""
]
] | TITLE: Connectivity Compression for Irregular Quadrilateral Meshes
ABSTRACT: Applications that require Internet access to remote 3D datasets are often
limited by the storage costs of 3D models. Several compression methods are
available to address these limits for objects represented by triangle meshes.
Many CAD and VRML models, however, are represented as quadrilateral meshes or
mixed triangle/quadrilateral meshes, and these models may also require
compression. We present an algorithm for encoding the connectivity of such
quadrilateral meshes, and we demonstrate that by preserving and exploiting the
original quad structure, our approach achieves encodings 30 - 80% smaller than
an approach based on randomly splitting quads into triangles. We present both a
code with a proven worst-case cost of 3 bits per vertex (or 2.75 bits per
vertex for meshes without valence-two vertices) and entropy-coding results for
typical meshes ranging from 0.3 to 0.9 bits per vertex, depending on the
regularity of the mesh. Our method may be implemented by a rule for a
particular splitting of quads into triangles and by using the compression and
decompression algorithms introduced in [Rossignac99] and
[Rossignac&Szymczak99]. We also present extensions to the algorithm to compress
meshes with holes and handles and meshes containing triangles and other
polygons as well as quads.
| no_new_dataset | 0.941493 |
cs/0006001 | Ninan Sajeeth Philip | Ninan Sajeeth Philip, K. Babu Joseph | Boosting the Differences: A fast Bayesian classifier neural network | latex 18pages no figures | null | null | IDA2000 | cs.CV | null | A Bayesian classifier that up-weights the differences in the attribute values
is discussed. Using four popular datasets from the UCI repository, some
interesting features of the network are illustrated. The network is suitable
for classification problems.
| [
{
"version": "v1",
"created": "Wed, 31 May 2000 23:37:48 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Philip",
"Ninan Sajeeth",
""
],
[
"Joseph",
"K. Babu",
""
]
] | TITLE: Boosting the Differences: A fast Bayesian classifier neural network
ABSTRACT: A Bayesian classifier that up-weights the differences in the attribute values
is discussed. Using four popular datasets from the UCI repository, some
interesting features of the network are illustrated. The network is suitable
for classification problems.
| no_new_dataset | 0.952882 |
cs/0006002 | Ninan Sajeeth Philip | Ninan Sajeeth Philip, K. Babu Joseph | Distorted English Alphabet Identification : An application of Difference
Boosting Algorithm | latex 14pages no figures | null | null | ADCOM2000 | cs.CV | null | The difference-boosting algorithm is used on letters dataset from the UCI
repository to classify distorted raster images of English alphabets. In
contrast to rather complex networks, the difference-boosting is found to
produce comparable or better classification efficiency on this complex problem.
| [
{
"version": "v1",
"created": "Wed, 31 May 2000 23:52:31 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Philip",
"Ninan Sajeeth",
""
],
[
"Joseph",
"K. Babu",
""
]
] | TITLE: Distorted English Alphabet Identification : An application of Difference
Boosting Algorithm
ABSTRACT: The difference-boosting algorithm is used on letters dataset from the UCI
repository to classify distorted raster images of English alphabets. In
contrast to rather complex networks, the difference-boosting is found to
produce comparable or better classification efficiency on this complex problem.
| no_new_dataset | 0.951278 |
cs/0103022 | Judith Beumer | Bill Allcock, Joe Bester, John Bresnahan, Ann L. Chervenak, Ian
Foster, Carl Kesselman, Sam Meder, Veronika Nefedova, Darcy Quesnel, Steven
Tuecke | Secure, Efficient Data Transport and Replica Management for
High-Performance Data-Intensive Computing | 15 pages | null | null | ANL/MCS-P871-0201 | cs.DC cs.DB | null | An emerging class of data-intensive applications involve the geographically
dispersed extraction of complex scientific information from very large
collections of measured or computed data. Such applications arise, for example,
in experimental physics, where the data in question is generated by
accelerators, and in simulation science, where the data is generated by
supercomputers. So-called Data Grids provide essential infrastructure for such
applications, much as the Internet provides essential services for applications
such as e-mail and the Web. We describe here two services that we believe are
fundamental to any Data Grid: reliable, high-speed transporet and replica
management. Our high-speed transport service, GridFTP, extends the popular FTP
protocol with new features required for Data Grid applciations, such as
striping and partial file access. Our replica management service integrates a
replica catalog with GridFTP transfers to provide for the creation,
registration, location, and management of dataset replicas. We present the
design of both services and also preliminary performance results. Our
implementations exploit security and other services provided by the Globus
Toolkit.
| [
{
"version": "v1",
"created": "Wed, 28 Mar 2001 20:42:34 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Allcock",
"Bill",
""
],
[
"Bester",
"Joe",
""
],
[
"Bresnahan",
"John",
""
],
[
"Chervenak",
"Ann L.",
""
],
[
"Foster",
"Ian",
""
],
[
"Kesselman",
"Carl",
""
],
[
"Meder",
"Sam",
""
],
[
"Nefedova",
"Veronika",
""
],
[
"Quesnel",
"Darcy",
""
],
[
"Tuecke",
"Steven",
""
]
] | TITLE: Secure, Efficient Data Transport and Replica Management for
High-Performance Data-Intensive Computing
ABSTRACT: An emerging class of data-intensive applications involve the geographically
dispersed extraction of complex scientific information from very large
collections of measured or computed data. Such applications arise, for example,
in experimental physics, where the data in question is generated by
accelerators, and in simulation science, where the data is generated by
supercomputers. So-called Data Grids provide essential infrastructure for such
applications, much as the Internet provides essential services for applications
such as e-mail and the Web. We describe here two services that we believe are
fundamental to any Data Grid: reliable, high-speed transport and replica
management. Our high-speed transport service, GridFTP, extends the popular FTP
protocol with new features required for Data Grid applications, such as
striping and partial file access. Our replica management service integrates a
replica catalog with GridFTP transfers to provide for the creation,
registration, location, and management of dataset replicas. We present the
design of both services and also preliminary performance results. Our
implementations exploit security and other services provided by the Globus
Toolkit.
| no_new_dataset | 0.947962 |
cs/0104009 | Naren Ramakrishnan | Batul J. Mirza, Benjamin J. Keller, and Naren Ramakrishnan | Evaluating Recommendation Algorithms by Graph Analysis | null | null | null | null | cs.IR cs.DM cs.DS | null | We present a novel framework for evaluating recommendation algorithms in
terms of the `jumps' that they make to connect people to artifacts. This
approach emphasizes reachability via an algorithm within the implicit graph
structure underlying a recommender dataset, and serves as a complement to
evaluation in terms of predictive accuracy. The framework allows us to consider
questions relating algorithmic parameters to properties of the datasets. For
instance, given a particular algorithm `jump,' what is the average path length
from a person to an artifact? Or, what choices of minimum ratings and jumps
maintain a connected graph? We illustrate the approach with a common jump
called the `hammock' using movie recommender datasets.
| [
{
"version": "v1",
"created": "Tue, 3 Apr 2001 22:07:28 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Mirza",
"Batul J.",
""
],
[
"Keller",
"Benjamin J.",
""
],
[
"Ramakrishnan",
"Naren",
""
]
] | TITLE: Evaluating Recommendation Algorithms by Graph Analysis
ABSTRACT: We present a novel framework for evaluating recommendation algorithms in
terms of the `jumps' that they make to connect people to artifacts. This
approach emphasizes reachability via an algorithm within the implicit graph
structure underlying a recommender dataset, and serves as a complement to
evaluation in terms of predictive accuracy. The framework allows us to consider
questions relating algorithmic parameters to properties of the datasets. For
instance, given a particular algorithm `jump,' what is the average path length
from a person to an artifact? Or, what choices of minimum ratings and jumps
maintain a connected graph? We illustrate the approach with a common jump
called the `hammock' using movie recommender datasets.
| no_new_dataset | 0.947478 |
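The record above (cs/0104009) describes reachability analysis through a "hammock" jump, where two people are linked if they co-rate enough common artifacts. A minimal illustrative sketch of that idea in Python — not the paper's implementation; the width threshold `w` and the toy ratings are purely hypothetical:

```python
# Hammock-style jump: connect two users if they co-rate at least `w` artifacts,
# then check connectivity and average path length of the resulting graph.
import itertools
import networkx as nx

def hammock_graph(ratings, w):
    """ratings: dict mapping user -> set of rated artifact ids."""
    g = nx.Graph()
    g.add_nodes_from(ratings)
    for u, v in itertools.combinations(ratings, 2):
        if len(ratings[u] & ratings[v]) >= w:   # hammock width threshold
            g.add_edge(u, v)
    return g

ratings = {
    "alice": {"m1", "m2", "m3"},
    "bob":   {"m2", "m3", "m4"},
    "carol": {"m4", "m5"},
}
for w in (1, 2):
    g = hammock_graph(ratings, w)
    if nx.is_connected(g):
        print(w, "connected, average path length:",
              nx.average_shortest_path_length(g))
    else:
        print(w, "disconnected")
```

Varying `w` this way mirrors the question posed in the abstract of which minimum-rating and jump choices keep the person-artifact graph connected.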
cs/0109106 | Michael D. Smith | Atip Asvanund, Karen Clay, Ramayya Krishnan, Michael Smith | Bigger May Not Be Better: An Empirical Analysis of Optimal Membership
Rules in Peer-To-Peer Networks | 29th TPRC Conference, 2001 | null | null | TPRC-2001-049 | cs.CY | null | Peer to peer networks will become an increasingly important distribution
channel for consumer information goods and may play a role in the distribution
of information within corporations. Our research analyzes optimal membership
rules for these networks in light of positive and negative externalities
additional users impose on the network. Using a dataset gathered from the six
largest OpenNap-based networks, we find that users impose a positive network
externality based on the desirability of the content they provide and a
negative network externality based on demands they place on the network.
Further we find that the marginal value of additional users is declining and
the marginal cost is increasing in the number of current users. This suggests
that multiple small networks may serve user communities more efficiently than
single monolithic networks and that network operators may wish to specialize in
their content and restrict membership based on capacity constraints and user
content desirability.
| [
{
"version": "v1",
"created": "Tue, 25 Sep 2001 02:05:14 GMT"
},
{
"version": "v2",
"created": "Mon, 1 Oct 2001 17:26:24 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Asvanund",
"Atip",
""
],
[
"Clay",
"Karen",
""
],
[
"Krishnan",
"Ramayya",
""
],
[
"Smith",
"Michael",
""
]
] | TITLE: Bigger May Not Be Better: An Empirical Analysis of Optimal Membership
Rules in Peer-To-Peer Networks
ABSTRACT: Peer to peer networks will become an increasingly important distribution
channel for consumer information goods and may play a role in the distribution
of information within corporations. Our research analyzes optimal membership
rules for these networks in light of positive and negative externalities
additional users impose on the network. Using a dataset gathered from the six
largest OpenNap-based networks, we find that users impose a positive network
externality based on the desirability of the content they provide and a
negative network externality based on demands they place on the network.
Further we find that the marginal value of additional users is declining and
the marginal cost is increasing in the number of current users. This suggests
that multiple small networks may serve user communities more efficiently than
single monolithic networks and that network operators may wish to specialize in
their content and restrict membership based on capacity constraints and user
content desirability.
| no_new_dataset | 0.954774 |
cs/0204047 | Naren Ramakrishnan | Naren Ramakrishnan and Chris Bailey-Kellogg | Sampling Strategies for Mining in Data-Scarce Domains | null | null | null | null | cs.CE cs.AI | null | Data mining has traditionally focused on the task of drawing inferences from
large datasets. However, many scientific and engineering domains, such as fluid
dynamics and aircraft design, are characterized by scarce data, due to the
expense and complexity of associated experiments and simulations. In such
data-scarce domains, it is advantageous to focus the data collection effort on
only those regions deemed most important to support a particular data mining
objective. This paper describes a mechanism that interleaves bottom-up data
mining, to uncover multi-level structures in spatial data, with top-down
sampling, to clarify difficult decisions in the mining process. The mechanism
exploits relevant physical properties, such as continuity, correspondence, and
locality, in a unified framework. This leads to effective mining and sampling
decisions that are explainable in terms of domain knowledge and data
characteristics. This approach is demonstrated in two diverse applications --
mining pockets in spatial data, and qualitative determination of Jordan forms
of matrices.
| [
{
"version": "v1",
"created": "Mon, 22 Apr 2002 19:41:24 GMT"
},
{
"version": "v2",
"created": "Mon, 22 Apr 2002 21:56:55 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Ramakrishnan",
"Naren",
""
],
[
"Bailey-Kellogg",
"Chris",
""
]
] | TITLE: Sampling Strategies for Mining in Data-Scarce Domains
ABSTRACT: Data mining has traditionally focused on the task of drawing inferences from
large datasets. However, many scientific and engineering domains, such as fluid
dynamics and aircraft design, are characterized by scarce data, due to the
expense and complexity of associated experiments and simulations. In such
data-scarce domains, it is advantageous to focus the data collection effort on
only those regions deemed most important to support a particular data mining
objective. This paper describes a mechanism that interleaves bottom-up data
mining, to uncover multi-level structures in spatial data, with top-down
sampling, to clarify difficult decisions in the mining process. The mechanism
exploits relevant physical properties, such as continuity, correspondence, and
locality, in a unified framework. This leads to effective mining and sampling
decisions that are explainable in terms of domain knowledge and data
characteristics. This approach is demonstrated in two diverse applications --
mining pockets in spatial data, and qualitative determination of Jordan forms
of matrices.
| no_new_dataset | 0.952926 |
cs/0204053 | Chris Bailey-Kellogg | Chris Bailey-Kellogg, Naren Ramakrishnan | Qualitative Analysis of Correspondence for Experimental Algorithmics | 11 pages | null | null | null | cs.AI cs.CE | null | Correspondence identifies relationships among objects via similarities among
their components; it is ubiquitous in the analysis of spatial datasets,
including images, weather maps, and computational simulations. This paper
develops a novel multi-level mechanism for qualitative analysis of
correspondence. Operators leverage domain knowledge to establish
correspondence, evaluate implications for model selection, and leverage
identified weaknesses to focus additional data collection. The utility of the
mechanism is demonstrated in two applications from experimental algorithmics --
matrix spectral portrait analysis and graphical assessment of Jordan forms of
matrices. Results show that the mechanism efficiently samples computational
experiments and successfully uncovers high-level problem properties. It
overcomes noise and data sparsity by leveraging domain knowledge to detect
mutually reinforcing interpretations of spatial data.
| [
{
"version": "v1",
"created": "Fri, 26 Apr 2002 17:25:51 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Bailey-Kellogg",
"Chris",
""
],
[
"Ramakrishnan",
"Naren",
""
]
] | TITLE: Qualitative Analysis of Correspondence for Experimental Algorithmics
ABSTRACT: Correspondence identifies relationships among objects via similarities among
their components; it is ubiquitous in the analysis of spatial datasets,
including images, weather maps, and computational simulations. This paper
develops a novel multi-level mechanism for qualitative analysis of
correspondence. Operators leverage domain knowledge to establish
correspondence, evaluate implications for model selection, and leverage
identified weaknesses to focus additional data collection. The utility of the
mechanism is demonstrated in two applications from experimental algorithmics --
matrix spectral portrait analysis and graphical assessment of Jordan forms of
matrices. Results show that the mechanism efficiently samples computational
experiments and successfully uncovers high-level problem properties. It
overcomes noise and data sparsity by leveraging domain knowledge to detect
mutually reinforcing interpretations of spatial data.
| no_new_dataset | 0.95222 |
cs/0205065 | Lillian Lee | Regina Barzilay and Lillian Lee | Bootstrapping Lexical Choice via Multiple-Sequence Alignment | 8 pages; to appear in the proceedings of EMNLP-2002 | null | null | null | cs.CL | null | An important component of any generation system is the mapping dictionary, a
lexicon of elementary semantic expressions and corresponding natural language
realizations. Typically, labor-intensive knowledge-based methods are used to
construct the dictionary. We instead propose to acquire it automatically via a
novel multiple-pass algorithm employing multiple-sequence alignment, a
technique commonly used in bioinformatics. Crucially, our method leverages
latent information contained in multi-parallel corpora -- datasets that supply
several verbalizations of the corresponding semantics rather than just one.
We used our techniques to generate natural language versions of
computer-generated mathematical proofs, with good results on both a
per-component and overall-output basis. For example, in evaluations involving a
dozen human judges, our system produced output whose readability and
faithfulness to the semantic input rivaled that of a traditional generation
system.
| [
{
"version": "v1",
"created": "Sat, 25 May 2002 21:32:09 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Barzilay",
"Regina",
""
],
[
"Lee",
"Lillian",
""
]
] | TITLE: Bootstrapping Lexical Choice via Multiple-Sequence Alignment
ABSTRACT: An important component of any generation system is the mapping dictionary, a
lexicon of elementary semantic expressions and corresponding natural language
realizations. Typically, labor-intensive knowledge-based methods are used to
construct the dictionary. We instead propose to acquire it automatically via a
novel multiple-pass algorithm employing multiple-sequence alignment, a
technique commonly used in bioinformatics. Crucially, our method leverages
latent information contained in multi-parallel corpora -- datasets that supply
several verbalizations of the corresponding semantics rather than just one.
We used our techniques to generate natural language versions of
computer-generated mathematical proofs, with good results on both a
per-component and overall-output basis. For example, in evaluations involving a
dozen human judges, our system produced output whose readability and
faithfulness to the semantic input rivaled that of a traditional generation
system.
| no_new_dataset | 0.946051 |
cs/0206004 | Bart Goethals | Toon Calders, Bart Goethals | Mining All Non-Derivable Frequent Itemsets | 3 figures | null | null | null | cs.DB cs.AI | null | Recent studies on frequent itemset mining algorithms resulted in significant
performance improvements. However, if the minimal support threshold is set too
low, or the data is highly correlated, the number of frequent itemsets itself
can be prohibitively large. To overcome this problem, recently several
proposals have been made to construct a concise representation of the frequent
itemsets, instead of mining all frequent itemsets. The main goal of this paper
is to identify redundancies in the set of all frequent itemsets and to exploit
these redundancies in order to reduce the result of a mining operation. We
present deduction rules to derive tight bounds on the support of candidate
itemsets. We show how the deduction rules allow for constructing a minimal
representation for all frequent itemsets. We also present connections between
our proposal and recent proposals for concise representations and we give the
results of experiments on real-life datasets that show the effectiveness of the
deduction rules. In fact, the experiments even show that in many cases, first
mining the concise representation, and then creating the frequent itemsets from
this representation outperforms existing frequent set mining algorithms.
| [
{
"version": "v1",
"created": "Mon, 3 Jun 2002 14:13:51 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Calders",
"Toon",
""
],
[
"Goethals",
"Bart",
""
]
] | TITLE: Mining All Non-Derivable Frequent Itemsets
ABSTRACT: Recent studies on frequent itemset mining algorithms resulted in significant
performance improvements. However, if the minimal support threshold is set too
low, or the data is highly correlated, the number of frequent itemsets itself
can be prohibitively large. To overcome this problem, recently several
proposals have been made to construct a concise representation of the frequent
itemsets, instead of mining all frequent itemsets. The main goal of this paper
is to identify redundancies in the set of all frequent itemsets and to exploit
these redundancies in order to reduce the result of a mining operation. We
present deduction rules to derive tight bounds on the support of candidate
itemsets. We show how the deduction rules allow for constructing a minimal
representation for all frequent itemsets. We also present connections between
our proposal and recent proposals for concise representations and we give the
results of experiments on real-life datasets that show the effectiveness of the
deduction rules. In fact, the experiments even show that in many cases, first
mining the concise representation, and then creating the frequent itemsets from
this representation outperforms existing frequent set mining algorithms.
| no_new_dataset | 0.951997 |
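The deduction rules mentioned in the record above are inclusion-exclusion bounds on the support of a candidate itemset, computed from the supports of its proper subsets. A small sketch of that calculation — an interpretation of the rules, not the authors' code; the toy support counts are made up:

```python
# Tight support bounds for an itemset I from the supports of its proper subsets.
# For each X < I, sigma_X(I) = sum over X <= J < I of (-1)^(|I|-|J|+1) * supp(J)
# is an upper bound when |I \ X| is odd and a lower bound when it is even.
from itertools import combinations

def proper_subsets(items):
    items = tuple(items)
    return (frozenset(c) for r in range(len(items))
            for c in combinations(items, r))

def support_bounds(itemset, supp):
    """supp maps frozenset -> support count; must cover every proper subset,
    including the empty set (= number of transactions)."""
    I = frozenset(itemset)
    lowers, uppers = [], []
    for X in proper_subsets(I):
        sigma = sum((-1) ** (len(I) - len(J) + 1) * supp[J]
                    for J in proper_subsets(I) if X <= J)
        (uppers if (len(I) - len(X)) % 2 else lowers).append(sigma)
    return max(lowers), min(uppers)

# 10 transactions, supp(a) = 6, supp(b) = 7
supp = {frozenset(): 10, frozenset("a"): 6, frozenset("b"): 7}
print(support_bounds("ab", supp))   # (3, 6): 3 <= supp({a,b}) <= 6
```

When the lower and upper bounds coincide, the itemset is derivable and, in the spirit of the abstract, need not be stored in the condensed representation.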
cs/0208011 | Jim Gray | Jim Gray, Wyman Chong, Tom Barclay, Alex Szalay, Jan vandenBerg | TeraScale SneakerNet: Using Inexpensive Disks for Backup, Archiving, and
Data Exchange | original at
http://research.microsoft.com/scripts/pubs/view.asp?TR_ID=MSR-TR-2002-54 | null | null | MSR-TR-2002-54 | cs.NI cs.DC | null | Large datasets are most economically transmitted via parcel post given the
current economics of wide-area networking. This article describes how the Sloan
Digital Sky Survey ships terabyte scale datasets both within the US and to
Europe and Asia. We use 3GT storage bricks (GHz processor, GB RAM, Gbps Ethernet, TB
disk) for about 2k$ each. These bricks act as database servers on the LAN. They
are loaded at one site and read at the second site. The paper describes the
bricks, their economics, and some software issues that they raise.
| [
{
"version": "v1",
"created": "Wed, 7 Aug 2002 22:32:46 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Gray",
"Jim",
""
],
[
"Chong",
"Wyman",
""
],
[
"Barclay",
"Tom",
""
],
[
"Szalay",
"Alex",
""
],
[
"vandenBerg",
"Jan",
""
]
] | TITLE: TeraScale SneakerNet: Using Inexpensive Disks for Backup, Archiving, and
Data Exchange
ABSTRACT: Large datasets are most economically transmitted via parcel post given the
current economics of wide-area networking. This article describes how the Sloan
Digital Sky Survey ships terabyte scale datasets both within the US and to
Europe and Asia. We use 3GT storage bricks (GHz processor, GB RAM, Gbps Ethernet, TB
disk) for about 2k$ each. These bricks act as database servers on the LAN. They
are loaded at one site and read at the second site. The paper describes the
bricks, their economics, and some software issues that they raise.
| no_new_dataset | 0.941385 |
cs/0208020 | Masaki Murata | Masaki Murata and Hitoshi Isahara | Using the DIFF Command for Natural Language Processing | 10 pages. Computation and Language. This paper is the rough English
translation of our Japanese paper | null | null | null | cs.CL | null | Diff is a software program that detects differences between two data sets and
is useful in natural language processing. This paper shows several examples of
the application of diff. They include the detection of differences between two
different datasets, extraction of rewriting rules, merging of two different
datasets, and the optimal matching of two different data sets. Since diff comes
with any standard UNIX system, it is readily available and very easy to use.
Our studies showed that diff is a practical tool for research into natural
language processing.
| [
{
"version": "v1",
"created": "Tue, 13 Aug 2002 03:39:20 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Murata",
"Masaki",
""
],
[
"Isahara",
"Hitoshi",
""
]
] | TITLE: Using the DIFF Command for Natural Language Processing
ABSTRACT: Diff is a software program that detects differences between two data sets and
is useful in natural language processing. This paper shows several examples of
the application of diff. They include the detection of differences between two
different datasets, extraction of rewriting rules, merging of two different
datasets, and the optimal matching of two different data sets. Since diff comes
with any standard UNIX system, it is readily available and very easy to use.
Our studies showed that diff is a practical tool for research into natural
language processing.
| no_new_dataset | 0.946941 |
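In the same spirit as the record above — though using Python's difflib rather than the UNIX diff command — a small sketch of detecting differences between two sentences and reading off candidate rewriting rules; the example sentences are invented:

```python
# Align two token sequences and extract (source, target) rewriting candidates
# from the non-matching regions reported by the differ.
import difflib

def rewriting_rules(src, tgt):
    a, b = src.split(), tgt.split()
    matcher = difflib.SequenceMatcher(a=a, b=b)
    rules = []
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag != "equal":
            rules.append((" ".join(a[i1:i2]), " ".join(b[j1:j2])))
    return rules

print(rewriting_rules("he do not like it", "he does not like it"))
# [('do', 'does')]
```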
cs/0304037 | Judith Beumer | Sudharshan Vazhkudai and Jennifer M. Schopf | Using Regression Techniques to Predict Large Data Transfers | 29 pages, 11 figures | null | null | Preprint ANL/MCS-P1033-0303 | cs.DC | null | The recent proliferation of Data Grids and the increasingly common practice
of using resources as distributed data stores provide a convenient environment
for communities of researchers to share, replicate, and manage access to copies
of large datasets. This has led to the question of which replica can be
accessed most efficiently. In such environments, fetching data from one of the
several replica locations requires accurate predictions of end-to-end transfer
times. The answer to this question can depend on many factors, including
physical characteristics of the resources and the load behavior on the CPUs,
networks, and storage devices that are part of the end-to-end data path linking
possible sources and sinks. Our approach combines end-to-end application
throughput observations with network and disk load variations and captures
whole-system performance and variations in load patterns. Our predictions
characterize the effect of load variations of several shared devices (network
and disk) on file transfer times. We develop a suite of univariate and
multivariate predictors that can use multiple data sources to improve the
accuracy of the predictions as well as address Data Grid variations
(availability of data and sporadic nature of transfers). We ran a large set of
data transfer experiments using GridFTP and observed performance predictions
within 15% error for our testbed sites, which is quite promising for a
pragmatic system.
| [
{
"version": "v1",
"created": "Wed, 23 Apr 2003 20:36:09 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Vazhkudai",
"Sudharshan",
""
],
[
"Schopf",
"Jennifer M.",
""
]
] | TITLE: Using Regression Techniques to Predict Large Data Transfers
ABSTRACT: The recent proliferation of Data Grids and the increasingly common practice
of using resources as distributed data stores provide a convenient environment
for communities of researchers to share, replicate, and manage access to copies
of large datasets. This has led to the question of which replica can be
accessed most efficiently. In such environments, fetching data from one of the
several replica locations requires accurate predictions of end-to-end transfer
times. The answer to this question can depend on many factors, including
physical characteristics of the resources and the load behavior on the CPUs,
networks, and storage devices that are part of the end-to-end data path linking
possible sources and sinks. Our approach combines end-to-end application
throughput observations with network and disk load variations and captures
whole-system performance and variations in load patterns. Our predictions
characterize the effect of load variations of several shared devices (network
and disk) on file transfer times. We develop a suite of univariate and
multivariate predictors that can use multiple data sources to improve the
accuracy of the predictions as well as address Data Grid variations
(availability of data and sporadic nature of transfers). We ran a large set of
data transfer experiments using GridFTP and observed performance predictions
within 15% error for our testbed sites, which is quite promising for a
pragmatic system.
| no_new_dataset | 0.952838 |
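A minimal sketch of the kind of multivariate predictor described in the record above, assuming past end-to-end throughput observations paired with concurrent network and disk load measurements; all numbers below are hypothetical and this is not the paper's model:

```python
# Regress observed throughput on network and disk load, then predict the
# transfer time of a new file under given load conditions.
import numpy as np

# design matrix columns: [intercept, network_load, disk_load]
X = np.array([[1.0, 0.2, 0.1],
              [1.0, 0.5, 0.3],
              [1.0, 0.8, 0.6],
              [1.0, 0.4, 0.7]])
throughput = np.array([9.1, 6.8, 4.0, 5.5])          # MB/s, end-to-end

coef, *_ = np.linalg.lstsq(X, throughput, rcond=None)

def predict_transfer_time(file_size_mb, net_load, disk_load):
    rate = float(np.array([1.0, net_load, disk_load]) @ coef)
    return file_size_mb / max(rate, 1e-6)             # seconds; guard rate <= 0

print(predict_transfer_time(500, 0.3, 0.2))
```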
cs/0306048 | Judith Beumer | Jianwei Li, Wei-keng Liao, Alok Choudhary, Robert Ross, Rajeev Thakur,
William Gropp, Rob Latham | Parallel netCDF: A Scientific High-Performance I/O Interface | 10 pages,7 figures | null | null | Preprint ANL/MCS-P1048-0503 | cs.DC | null | Dataset storage, exchange, and access play a critical role in scientific
applications. For such purposes netCDF serves as a portable and efficient file
format and programming interface, which is popular in numerous scientific
application domains. However, the original interface does not provide an
efficient mechanism for parallel data storage and access. In this work, we
present a new parallel interface for writing and reading netCDF datasets. This
interface is derived with minimum changes from the serial netCDF interface but
defines semantics for parallel access and is tailored for high performance. The
underlying parallel I/O is achieved through MPI-IO, allowing for dramatic
performance gains through the use of collective I/O optimizations. We compare
the implementation strategies with HDF5 and analyze both. Our tests indicate
programming convenience and significant I/O performance improvement with this
parallel netCDF interface.
| [
{
"version": "v1",
"created": "Wed, 11 Jun 2003 20:25:52 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Li",
"Jianwei",
""
],
[
"Liao",
"Wei-keng",
""
],
[
"Choudhary",
"Alok",
""
],
[
"Ross",
"Robert",
""
],
[
"Thakur",
"Rajeev",
""
],
[
"Gropp",
"William",
""
],
[
"Latham",
"Rob",
""
]
] | TITLE: Parallel netCDF: A Scientific High-Performance I/O Interface
ABSTRACT: Dataset storage, exchange, and access play a critical role in scientific
applications. For such purposes netCDF serves as a portable and efficient file
format and programming interface, which is popular in numerous scientific
application domains. However, the original interface does not provide an
efficient mechanism for parallel data storage and access. In this work, we
present a new parallel interface for writing and reading netCDF datasets. This
interface is derived with minimum changes from the serial netCDF interface but
defines semantics for parallel access and is tailored for high performance. The
underlying parallel I/O is achieved through MPI-IO, allowing for dramatic
performance gains through the use of collective I/O optimizations. We compare
the implementation strategies with HDF5 and analyze both. Our tests indicate
programming convenience and significant I/O performance improvement with this
parallel netCDF interface.
| no_new_dataset | 0.942135 |
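The record above concerns the C PnetCDF interface; as a rough sketch of the same collective-write usage pattern, here is the parallel mode of the netCDF4-python bindings (this assumes an MPI-enabled netCDF/HDF5 or PnetCDF build, and the file and variable names are made up):

```python
# Each MPI rank writes its own slab of a shared netCDF variable collectively.
from mpi4py import MPI
import numpy as np
from netCDF4 import Dataset

comm = MPI.COMM_WORLD
rank, nprocs = comm.Get_rank(), comm.Get_size()

nc = Dataset("checkpoint.nc", "w", parallel=True, comm=comm, info=MPI.Info())
nc.createDimension("x", 1024 * nprocs)
var = nc.createVariable("temperature", "f4", ("x",))
var.set_collective(True)          # collective I/O, the optimization highlighted above

start = rank * 1024               # contiguous slab owned by this rank
var[start:start + 1024] = np.random.rand(1024).astype("f4")
nc.close()
```

Run under mpiexec so that all ranks open the file together.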
cs/0306061 | Artem Trunov | Tofigh Azemoon, Adil Hasan, Wilko Kroeger, Artem Trunov | Operational Aspects of Dealing with the Large BaBar Data Set | Presented for Computing in High Energy Physics, San Diego, March 2003 | null | null | null | cs.DB cs.DC | null | To date, the BaBar experiment has stored over 0.7PB of data in an
Objectivity/DB database. Approximately half this data-set comprises simulated
data of which more than 70% has been produced at more than 20 collaborating
institutes outside of SLAC. The operational aspects of managing such a large
data set and providing access to the physicists in a timely manner is a
challenging and complex problem. We describe the operational aspects of
managing such a large distributed data-set as well as importing and exporting
data from geographically spread BaBar collaborators. We also describe problems
common to dealing with such large datasets.
| [
{
"version": "v1",
"created": "Fri, 13 Jun 2003 00:40:18 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Azemoon",
"Tofigh",
""
],
[
"Hasan",
"Adil",
""
],
[
"Kroeger",
"Wilko",
""
],
[
"Trunov",
"Artem",
""
]
] | TITLE: Operational Aspects of Dealing with the Large BaBar Data Set
ABSTRACT: To date, the BaBar experiment has stored over 0.7PB of data in an
Objectivity/DB database. Approximately half this data-set comprises simulated
data of which more than 70% has been produced at more than 20 collaborating
institutes outside of SLAC. The operational aspects of managing such a large
data set and providing access to the physicists in a timely manner is a
challenging and complex problem. We describe the operational aspects of
managing such a large distributed data-set as well as importing and exporting
data from geographically spread BaBar collaborators. We also describe problems
common to dealing with such large datasets.
| no_new_dataset | 0.935169 |
cs/0306068 | Pablo Saiz | Pablo Saiz, Predrag Buncic, Andreas J. Peters | AliEn Resource Brokers | 5 pages, 8 figures, CHEP 03 conference | null | null | null | cs.DC | null | AliEn (ALICE Environment) is a lightweight GRID framework developed by the
Alice Collaboration. When the experiment starts running, it will collect data
at a rate of approximately 2 PB per year, producing O(10^9) files per year. All
these files, including all simulated events generated during the preparation
phase of the experiment, must be accounted and reliably tracked in the GRID
environment. The backbone of AliEn is a distributed file catalogue, which
associates universal logical file name to physical file names for each dataset
and provides transparent access to datasets independently of physical location.
The file replication and transport is carried out under the control of the File
Transport Broker. In addition, the file catalogue maintains information about
every job running in the system. The jobs are distributed by the Job Resource
Broker that is implemented using a simplified pull (as opposed to traditional
push) architecture. This paper describes the Job and File Transport Resource
Brokers and shows that a similar architecture can be applied to solve both
problems.
| [
{
"version": "v1",
"created": "Fri, 13 Jun 2003 16:00:45 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Saiz",
"Pablo",
""
],
[
"Buncic",
"Predrag",
""
],
[
"Peters",
"Andreas J.",
""
]
] | TITLE: AliEn Resource Brokers
ABSTRACT: AliEn (ALICE Environment) is a lightweight GRID framework developed by the
Alice Collaboration. When the experiment starts running, it will collect data
at a rate of approximately 2 PB per year, producing O(10^9) files per year. All
these files, including all simulated events generated during the preparation
phase of the experiment, must be accounted and reliably tracked in the GRID
environment. The backbone of AliEn is a distributed file catalogue, which
associates universal logical file name to physical file names for each dataset
and provides transparent access to datasets independently of physical location.
The file replication and transport is carried out under the control of the File
Transport Broker. In addition, the file catalogue maintains information about
every job running in the system. The jobs are distributed by the Job Resource
Broker that is implemented using a simplified pull (as opposed to traditional
push) architecture. This paper describes the Job and File Transport Resource
Brokers and shows that a similar architecture can be applied to solve both
problems.
| no_new_dataset | 0.947769 |
cs/0306069 | Teela Pulliam | Teela Pulliam, Peter Elmer, Alvise Dorigo | Distributed Offline Data Reconstruction in BaBar | CHEP03 paper, MODT012 | null | null | SLAC-PUB-9903 | cs.DC | null | The BaBar experiment at SLAC is in its fourth year of running. The data
processing system has been continuously evolving to meet the challenges of
higher luminosity running and the increasing bulk of data to re-process each
year. To meet these goals a two-pass processing architecture has been adopted,
where 'rolling calibrations' are quickly calculated on a small fraction of the
events in the first pass and the bulk data reconstruction done in the second.
This allows for quick detector feedback in the first pass and allows for the
parallelization of the second pass over two or more separate farms. This
two-pass system allows also for distribution of processing farms off-site. The
first such site has been set up at INFN Padova. The challenges met here were
many. The software was ported to a full Linux-based, commodity hardware system.
The raw dataset, 90 TB, was imported from SLAC utilizing a 155 Mbps network
link. A system for quality control and export of the processed data back to
SLAC was developed. Between SLAC and Padova we are currently running three
pass-one farms, with 32 CPUs each, and nine pass-two farms with 64 to 80 CPUs
each. The pass-two farms can process between 2 and 4 million events per day.
Details about the implementation and performance of the system will be
presented.
| [
{
"version": "v1",
"created": "Fri, 13 Jun 2003 16:16:44 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Pulliam",
"Teela",
""
],
[
"Elmer",
"Peter",
""
],
[
"Dorigo",
"Alvise",
""
]
] | TITLE: Distributed Offline Data Reconstruction in BaBar
ABSTRACT: The BaBar experiment at SLAC is in its fourth year of running. The data
processing system has been continuously evolving to meet the challenges of
higher luminosity running and the increasing bulk of data to re-process each
year. To meet these goals a two-pass processing architecture has been adopted,
where 'rolling calibrations' are quickly calculated on a small fraction of the
events in the first pass and the bulk data reconstruction done in the second.
This allows for quick detector feedback in the first pass and allows for the
parallelization of the second pass over two or more separate farms. This
two-pass system allows also for distribution of processing farms off-site. The
first such site has been set up at INFN Padova. The challenges met here were
many. The software was ported to a full Linux-based, commodity hardware system.
The raw dataset, 90 TB, was imported from SLAC utilizing a 155 Mbps network
link. A system for quality control and export of the processed data back to
SLAC was developed. Between SLAC and Padova we are currently running three
pass-one farms, with 32 CPUs each, and nine pass-two farms with 64 to 80 CPUs
each. The pass-two farms can process between 2 and 4 million events per day.
Details about the implementation and performance of the system will be
presented.
| no_new_dataset | 0.949529 |
cs/0307032 | Praveen Boinee | M. Frailis, A. De Angelis, V. Roberto | Data Management and Mining in Astrophysical Databases | 10 pages, Latex | S. Ciprini, A. De Angelis, P. Lubrano and O. Mansutti (eds.):
Proc. of ``Science with the New Generation of High Energy Gamma-ray
Experiments'' (Perugia, Italy, May 2003). Forum, Udine 2003, p. 157 | null | null | cs.DB astro-ph physics.data-an | null | We analyse the issues involved in the management and mining of astrophysical
data. The traditional approach to data management in the astrophysical field is
not able to keep up with the increasing size of the data gathered by modern
detectors. An essential role in the astrophysical research will be assumed by
automatic tools for information extraction from large datasets, i.e. data
mining techniques, such as clustering and classification algorithms. This asks
for an approach to data management based on data warehousing, emphasizing the
efficiency and simplicity of data access; efficiency is obtained using
multidimensional access methods and simplicity is achieved by properly handling
metadata. Clustering and classification techniques, on large datasets, pose
additional requirements: computational and memory scalability with respect to
the data size, interpretability and objectivity of clustering or classification
results. In this study we address some possible solutions.
| [
{
"version": "v1",
"created": "Sat, 12 Jul 2003 12:35:37 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Jul 2003 12:49:46 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Frailis",
"M.",
""
],
[
"De Angelis",
"A.",
""
],
[
"Roberto",
"V.",
""
]
] | TITLE: Data Management and Mining in Astrophysical Databases
ABSTRACT: We analyse the issues involved in the management and mining of astrophysical
data. The traditional approach to data management in the astrophysical field is
not able to keep up with the increasing size of the data gathered by modern
detectors. An essential role in the astrophysical research will be assumed by
automatic tools for information extraction from large datasets, i.e. data
mining techniques, such as clustering and classification algorithms. This asks
for an approach to data management based on data warehousing, emphasizing the
efficiency and simplicity of data access; efficiency is obtained using
multidimensional access methods and simplicity is achieved by properly handling
metadata. Clustering and classification techniques, on large datasets, pose
additional requirements: computational and memory scalability with respect to
the data size, interpretability and objectivity of clustering or classification
results. In this study we address some possible solutions.
| no_new_dataset | 0.947088 |
cs/0307038 | Alfred Hero III | Jose Costa and Alfred Hero | Manifold Learning with Geodesic Minimal Spanning Trees | 13 pages, 3 figures | null | null | null | cs.CV cs.LG | null | In the manifold learning problem one seeks to discover a smooth low
dimensional surface, i.e., a manifold embedded in a higher dimensional linear
vector space, based on a set of measured sample points on the surface. In this
paper we consider the closely related problem of estimating the manifold's
intrinsic dimension and the intrinsic entropy of the sample points.
Specifically, we view the sample points as realizations of an unknown
multivariate density supported on an unknown smooth manifold. We present a
novel geometrical probability approach, called the
geodesic-minimal-spanning-tree (GMST), to obtaining asymptotically consistent
estimates of the manifold dimension and the R\'{e}nyi $\alpha$-entropy of the
sample density on the manifold. The GMST approach is striking in its simplicity
and does not require reconstructing the manifold or estimating the multivariate
density of the samples. The GMST method simply constructs a minimal spanning
tree (MST) sequence using a geodesic edge matrix and uses the overall lengths
of the MSTs to simultaneously estimate manifold dimension and entropy. We
illustrate the GMST approach for dimension and entropy estimation of a human
face dataset.
| [
{
"version": "v1",
"created": "Wed, 16 Jul 2003 23:50:53 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Costa",
"Jose",
""
],
[
"Hero",
"Alfred",
""
]
] | TITLE: Manifold Learning with Geodesic Minimal Spanning Trees
ABSTRACT: In the manifold learning problem one seeks to discover a smooth low
dimensional surface, i.e., a manifold embedded in a higher dimensional linear
vector space, based on a set of measured sample points on the surface. In this
paper we consider the closely related problem of estimating the manifold's
intrinsic dimension and the intrinsic entropy of the sample points.
Specifically, we view the sample points as realizations of an unknown
multivariate density supported on an unknown smooth manifold. We present a
novel geometrical probability approach, called the
geodesic-minimal-spanning-tree (GMST), to obtaining asymptotically consistent
estimates of the manifold dimension and the R\'{e}nyi $\alpha$-entropy of the
sample density on the manifold. The GMST approach is striking in its simplicity
and does not require reconstructing the manifold or estimating the multivariate
density of the samples. The GMST method simply constructs a minimal spanning
tree (MST) sequence using a geodesic edge matrix and uses the overall lengths
of the MSTs to simultaneously estimate manifold dimension and entropy. We
illustrate the GMST approach for dimension and entropy estimation of a human
face dataset.
| no_new_dataset | 0.949716 |
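As a rough sketch of the GMST idea described in the record above (not the paper's estimator): build a k-NN graph, take shortest-path distances as geodesics, compute the total MST length, and read the intrinsic dimension off the growth rate L(n) ~ n^((d-1)/d). The k value, sample sizes, and the synthetic 2D manifold below are all assumptions.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import shortest_path, minimum_spanning_tree

def gmst_length(points, k=10):
    # assumes k is large enough that the k-NN graph is connected
    knn = kneighbors_graph(points, k, mode="distance")
    geo = shortest_path(knn, directed=False)       # geodesic distance matrix
    return minimum_spanning_tree(geo).sum()        # total GMST edge length

rng = np.random.default_rng(0)
sizes, lengths = [200, 400, 800], []
for n in sizes:
    t = rng.uniform(0.5, 4 * np.pi, n)             # 2D "roll" embedded in 3D
    pts = np.column_stack([t * np.cos(t), rng.uniform(0, 5, n), t * np.sin(t)])
    lengths.append(gmst_length(pts))

# L(n) ~ n^((d-1)/d)  =>  slope of log L vs. log n is (d-1)/d
slope = np.polyfit(np.log(sizes), np.log(lengths), 1)[0]
print("estimated intrinsic dimension:", 1.0 / (1.0 - slope))   # close to 2
```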
cs/0311034 | Gibby Koldenhof | Gibby Koldenhof | Visualization of variations in human brain morphology using
differentiating reflection functions | 10 pages, keywords: MRI, Medical Visualization, Volume rendering,
BRDF, Specular reflection overlap | null | null | null | cs.GR | null | Conventional visualization media such as MRI prints and computer screens are
inherently two dimensional, making them incapable of displaying true 3D volume
data sets. By applying only transparency or intensity projection, and ignoring
light-matter interaction, results will likely fail to give optimal results.
Little research has been done on using reflectance functions to visually
separate the various segments of a MRI volume. We will explore if applying
specific reflectance functions to individual anatomical structures can help in
building an intuitive 2D image from a 3D dataset. We will test our hypothesis
by visualizing a statistical analysis of the genetic influences on variations
in human brain morphology because it inherently contains complex and many
different types of data, making it a good candidate for our approach.
| [
{
"version": "v1",
"created": "Sat, 22 Nov 2003 18:17:26 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Koldenhof",
"Gibby",
""
]
] | TITLE: Visualization of variations in human brain morphology using
differentiating reflection functions
ABSTRACT: Conventional visualization media such as MRI prints and computer screens are
inherently two dimensional, making them incapable of displaying true 3D volume
data sets. By applying only transparency or intensity projection, and ignoring
light-matter interaction, results will likely fail to give optimal results.
Little research has been done on using reflectance functions to visually
separate the various segments of a MRI volume. We will explore if applying
specific reflectance functions to individual anatomical structures can help in
building an intuitive 2D image from a 3D dataset. We will test our hypothesis
by visualizing a statistical analysis of the genetic influences on variations
in human brain morphology because it inherently contains complex and many
different types of data making it a good candidate for our approach
| no_new_dataset | 0.945751 |