Dataset schema (column: type, observed range; ⌀ marks columns that may be null):

- id: string, length 9 to 16
- submitter: string, length 3 to 64, ⌀
- authors: string, length 5 to 6.63k
- title: string, length 7 to 245
- comments: string, length 1 to 482, ⌀
- journal-ref: string, length 4 to 382, ⌀
- doi: string, length 9 to 151, ⌀
- report-no: string, 984 classes
- categories: string, length 5 to 108
- license: string, 9 classes
- abstract: string, length 83 to 3.41k
- versions: list, length 1 to 20
- update_date: timestamp[s], 2007-05-23 to 2025-04-11
- authors_parsed: sequence, length 1 to 427
- prompt: string, length 166 to 3.49k
- label: string, 2 classes
- prob: float64, 0.5 to 0.98

Each record below gives these 17 fields in order, separated by " | ".
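For working with the dump programmatically, here is a minimal sketch. It assumes the records are available as JSON Lines with the field names above; the file name `arxiv_labeled.jsonl` is a placeholder, not something provided by the original dump.

```python
# Minimal sketch (assumptions noted above): load the records with pandas and
# inspect the classification columns described in the schema.
import pandas as pd

# Placeholder path; one JSON object per line, keys matching the schema above.
df = pd.read_json("arxiv_labeled.jsonl", lines=True)

print(df.shape)                      # number of records x 17 columns
print(df["label"].value_counts())    # e.g. no_new_dataset vs. new_dataset counts
print(df["prob"].describe())         # confidence scores, roughly 0.5 to 0.98 here

# Keep only high-confidence rows for downstream use.
confident = df[df["prob"] >= 0.9]
print(len(confident), "records with prob >= 0.9")
```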
0711.0729 | Massimiliano Zanin | Massimiliano Zanin | Forbidden patterns in financial time series | 4 pages, 4 figures; affiliation updated | null | 10.1063/1.2841197 | null | q-fin.ST physics.data-an physics.soc-ph | null | The existence of forbidden patterns, i.e., certain missing sequences in a
given time series, is a recently proposed instrument of potential application
in the study of time series. Forbidden patterns are related to the permutation
entropy, which has the basic properties of classic chaos indicators, thus
allowing one to separate deterministic (usually chaotic) from random series;
however, it requires fewer values of the series to be calculated, and it is
suitable for use with small datasets. In this Letter, the appearance of
forbidden patterns is studied in different economic indicators like stock
indices (Dow Jones Industrial Average and Nasdaq Composite), NYSE stocks (IBM
and Boeing) and others (10-year Bond interest rate), to find evidence of
deterministic behavior in their evolution.
| [
{
"version": "v1",
"created": "Mon, 5 Nov 2007 20:02:25 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Nov 2007 18:58:11 GMT"
}
] | 2009-11-13T00:00:00 | [
[
"Zanin",
"Massimiliano",
""
]
] | TITLE: Forbidden patterns in financial time series
ABSTRACT: The existence of forbidden patterns, i.e., certain missing sequences in a
given time series, is a recently proposed instrument of potential application
in the study of time series. Forbidden patterns are related to the permutation
entropy, which has the basic properties of classic chaos indicators, thus
allowing one to separate deterministic (usually chaotic) from random series;
however, it requires fewer values of the series to be calculated, and it is
suitable for use with small datasets. In this Letter, the appearance of
forbidden patterns is studied in different economic indicators like stock
indices (Dow Jones Industrial Average and Nasdaq Composite), NYSE stocks (IBM
and Boeing) and others (10-year Bond interest rate), to find evidence of
deterministic behavior in their evolution.
| no_new_dataset | 0.952264 |
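As the record above illustrates, the `prompt` field is simply the `title` and `abstract` fields joined with `TITLE:` / `ABSTRACT:` prefixes. A minimal sketch of that reconstruction follows; the exact whitespace handling is an assumption, since the dump wraps long lines.

```python
# Rebuild the prompt field from title and abstract, following the pattern
# visible in the records ("TITLE: ...\nABSTRACT: ..."). Whitespace handling
# is assumed, not specified by the dump itself.
def make_prompt(title: str, abstract: str) -> str:
    return f"TITLE: {title.strip()}\nABSTRACT: {abstract.strip()}"

# Example using the first record above (abstract shortened here).
print(make_prompt(
    "Forbidden patterns in financial time series",
    "The existence of forbidden patterns, i.e., certain missing sequences in a given time series, ...",
))
```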
0802.2138 | Mahesh Pal Dr. | Mahesh Pal and Paul M. Mather | Support Vector classifiers for Land Cover Classification | 11 pages, 1 figure, Published in MapIndia Conference 2003 | null | 10.1080/01431160802007624 | null | cs.NE cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Support vector machines represent a promising development in machine learning
research that is not widely used within the remote sensing community. This
paper reports the results of Multispectral (Landsat-7 ETM+) and Hyperspectral
(DAIS) data in which multi-class SVMs are compared with maximum likelihood and
artificial neural network methods in terms of classification accuracy. Our
results show that the SVM achieves a higher level of classification accuracy
than either the maximum likelihood or the neural classifier, and that the
support vector machine can be used with small training datasets and
high-dimensional data.
| [
{
"version": "v1",
"created": "Fri, 15 Feb 2008 04:53:33 GMT"
}
] | 2009-11-13T00:00:00 | [
[
"Pal",
"Mahesh",
""
],
[
"Mather",
"Paul M.",
""
]
] | TITLE: Support Vector classifiers for Land Cover Classification
ABSTRACT: Support vector machines represent a promising development in machine learning
research that is not widely used within the remote sensing community. This
paper reports the results of Multispectral (Landsat-7 ETM+) and Hyperspectral
(DAIS) data in which multi-class SVMs are compared with maximum likelihood and
artificial neural network methods in terms of classification accuracy. Our
results show that the SVM achieves a higher level of classification accuracy
than either the maximum likelihood or the neural classifier, and that the
support vector machine can be used with small training datasets and
high-dimensional data.
| no_new_dataset | 0.953708 |
0805.2182 | Frederic Moisy | J. Seiwert, C. Morize, F. Moisy | On the decrease of intermittency in decaying rotating turbulence | 5 pages, 5 figures. In revision for Phys. Fluids Letters | Phys. Fluids 20, 071702 (2008). | 10.1063/1.2949313 | null | physics.flu-dyn | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The scaling of the longitudinal velocity structure functions, $S_q(r) = < |
\delta u (r) |^q > \sim r^{\zeta_q}$, is analyzed up to order $q=8$ in a
decaying rotating turbulence experiment from a large Particle Image Velocimetry
(PIV) dataset. The exponent of the second-order structure function, $\zeta_2$,
increases throughout the self-similar decay regime, up to the Ekman time scale.
The normalized higher-order exponents, $\zeta_q / \zeta_2$, are close to those
of the intermittent non-rotating case at small times, but show a marked
departure at larger times, on a time scale $\Omega^{-1}$ ($\Omega$ is the
rotation rate), although a strictly non-intermittent linear law $\zeta_q /
\zeta_2 = q/2$ is not reached.
| [
{
"version": "v1",
"created": "Thu, 15 May 2008 15:36:45 GMT"
}
] | 2009-11-13T00:00:00 | [
[
"Seiwert",
"J.",
""
],
[
"Morize",
"C.",
""
],
[
"Moisy",
"F.",
""
]
] | TITLE: On the decrease of intermittency in decaying rotating turbulence
ABSTRACT: The scaling of the longitudinal velocity structure functions, $S_q(r) = < |
\delta u (r) |^q > \sim r^{\zeta_q}$, is analyzed up to order $q=8$ in a
decaying rotating turbulence experiment from a large Particle Image Velocimetry
(PIV) dataset. The exponent of the second-order structure function, $\zeta_2$,
increases throughout the self-similar decay regime, up to the Ekman time scale.
The normalized higher-order exponents, $\zeta_q / \zeta_2$, are close to those
of the intermittent non-rotating case at small times, but show a marked
departure at larger times, on a time scale $\Omega^{-1}$ ($\Omega$ is the
rotation rate), although a strictly non-intermittent linear law $\zeta_q /
\zeta_2 = q/2$ is not reached.
| no_new_dataset | 0.946695 |
0807.2515 | Joseph Mohr | Joseph J. Mohr (1), Wayne Barkhouse (2), Cristina Beldica (1),
Emmanuel Bertin (3), Y. Dora Cai (1), Luiz da Costa (4), J. Anthony Darnell
(1), Gregory E. Daues (1), Michael Jarvis (5), Michelle Gower (1), Huan Lin
(6), leandro Martelli (4), Eric Neilsen (6), Chow-Choong Ngeow (1), Ricardo
Ogando (4), Alex Parga (1), Erin Sheldon (7), Douglas Tucker (6), Nikolay
Kuropatkin (6), Chris Stoughton (6) ((1) University of Illinois, (2)
University of North Dakota, (3) Institut d'Astrophysque, Paris, (4)
Observatorio Nacional, Brasil, (5) University of Pennsylvania, (6) Fermilab,
(7) New York University) | The Dark Energy Survey Data Management System | To be published in the proceedings of the SPIE conference on
Astronomical Instrumentation (held in Marseille in June 2008). This preprint
is made available with the permission of SPIE. Further information together
with preprint containing full quality images is available at
http://desweb.cosmology.uiuc.edu/wiki | null | 10.1117/12.789550 | null | astro-ph cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Dark Energy Survey collaboration will study cosmic acceleration with a
5000 deg2 griZY survey in the southern sky over 525 nights from 2011-2016. The
DES data management (DESDM) system will be used to process and archive these
data and the resulting science ready data products. The DESDM system consists
of an integrated archive, a processing framework, an ensemble of astronomy
codes and a data access framework. We are developing the DESDM system for
operation in the high performance computing (HPC) environments at NCSA and
Fermilab. Operating the DESDM system in an HPC environment offers both speed
and flexibility. We will employ it for our regular nightly processing needs,
and for more compute-intensive tasks such as large scale image coaddition
campaigns, extraction of weak lensing shear from the full survey dataset, and
massive seasonal reprocessing of the DES data. Data products will be available
to the Collaboration and later to the public through a virtual-observatory
compatible web portal. Our approach leverages investments in publicly available
HPC systems, greatly reducing hardware and maintenance costs to the project,
which must deploy and maintain only the storage, database platforms and
orchestration and web portal nodes that are specific to DESDM. In Fall 2007, we
tested the current DESDM system on both simulated and real survey data. We used
Teragrid to process 10 simulated DES nights (3TB of raw data), ingesting and
calibrating approximately 250 million objects into the DES Archive database. We
also used DESDM to process and calibrate over 50 nights of survey data acquired
with the Mosaic2 camera. Comparison to truth tables in the case of the
simulated data and internal crosschecks in the case of the real data indicate
that astrometric and photometric data quality is excellent.
| [
{
"version": "v1",
"created": "Wed, 16 Jul 2008 08:37:43 GMT"
}
] | 2009-11-13T00:00:00 | [
[
"Mohr",
"Joseph J.",
""
],
[
"Barkhouse",
"Wayne",
""
],
[
"Beldica",
"Cristina",
""
],
[
"Bertin",
"Emmanuel",
""
],
[
"Cai",
"Y. Dora",
""
],
[
"da Costa",
"Luiz",
""
],
[
"Darnell",
"J. Anthony",
""
],
[
"Daues",
"Gregory E.",
""
],
[
"Jarvis",
"Michael",
""
],
[
"Gower",
"Michelle",
""
],
[
"Lin",
"Huan",
""
],
[
"Martelli",
"leandro",
""
],
[
"Neilsen",
"Eric",
""
],
[
"Ngeow",
"Chow-Choong",
""
],
[
"Ogando",
"Ricardo",
""
],
[
"Parga",
"Alex",
""
],
[
"Sheldon",
"Erin",
""
],
[
"Tucker",
"Douglas",
""
],
[
"Kuropatkin",
"Nikolay",
""
],
[
"Stoughton",
"Chris",
""
]
] | TITLE: The Dark Energy Survey Data Management System
ABSTRACT: The Dark Energy Survey collaboration will study cosmic acceleration with a
5000 deg2 griZY survey in the southern sky over 525 nights from 2011-2016. The
DES data management (DESDM) system will be used to process and archive these
data and the resulting science ready data products. The DESDM system consists
of an integrated archive, a processing framework, an ensemble of astronomy
codes and a data access framework. We are developing the DESDM system for
operation in the high performance computing (HPC) environments at NCSA and
Fermilab. Operating the DESDM system in an HPC environment offers both speed
and flexibility. We will employ it for our regular nightly processing needs,
and for more compute-intensive tasks such as large scale image coaddition
campaigns, extraction of weak lensing shear from the full survey dataset, and
massive seasonal reprocessing of the DES data. Data products will be available
to the Collaboration and later to the public through a virtual-observatory
compatible web portal. Our approach leverages investments in publicly available
HPC systems, greatly reducing hardware and maintenance costs to the project,
which must deploy and maintain only the storage, database platforms and
orchestration and web portal nodes that are specific to DESDM. In Fall 2007, we
tested the current DESDM system on both simulated and real survey data. We used
Teragrid to process 10 simulated DES nights (3TB of raw data), ingesting and
calibrating approximately 250 million objects into the DES Archive database. We
also used DESDM to process and calibrate over 50 nights of survey data acquired
with the Mosaic2 camera. Comparison to truth tables in the case of the
simulated data and internal crosschecks in the case of the real data indicate
that astrometric and photometric data quality is excellent.
| no_new_dataset | 0.941385 |
0812.1178 | Serge Meimon | Serge Meimon, Laurent M. Mugnier and Guy Le Besnerais | A self-calibration approach for optical long baseline interferometry
imaging | null | null | 10.1364/JOSAA.26.000108 | null | physics.optics | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current optical interferometers are affected by unknown turbulent phases on
each telescope. In the field of radio-interferometry, the self-calibration
technique is a powerful tool to process interferometric data with missing phase
information. This paper intends to revisit the application of self-calibration
to Optical Long Baseline Interferometry (OLBI). We cast rigorously the OLBI
data processing problem into the self-calibration framework and demonstrate the
efficiency of the method on real astronomical OLBI dataset.
| [
{
"version": "v1",
"created": "Fri, 5 Dec 2008 16:51:34 GMT"
}
] | 2009-11-13T00:00:00 | [
[
"Meimon",
"Serge",
""
],
[
"Mugnier",
"Laurent M.",
""
],
[
"Besnerais",
"Guy Le",
""
]
] | TITLE: A self-calibration approach for optical long baseline interferometry
imaging
ABSTRACT: Current optical interferometers are affected by unknown turbulent phases on
each telescope. In the field of radio-interferometry, the self-calibration
technique is a powerful tool to process interferometric data with missing phase
information. This paper intends to revisit the application of self-calibration
to Optical Long Baseline Interferometry (OLBI). We cast rigorously the OLBI
data processing problem into the self-calibration framework and demonstrate the
efficiency of the method on real astronomical OLBI dataset.
| no_new_dataset | 0.944331 |
physics/0608069 | Andreas P. Nawroth | A. P. Nawroth and J. Peinke | Multiscale reconstruction of time series | 4 pages, 3 figures | null | 10.1016/j.physleta.2006.08.024 | null | physics.data-an | null | A new method is proposed which allows a reconstruction of time series based
on higher order multiscale statistics given by a hierarchical process. This
method is able to model the time series not only on a specific scale but for a
range of scales. It is possible to generate complete new time series, or to
model the next steps for a given sequence of data. The method itself is based
on the joint probability density which can be extracted directly from given
data, thus no estimation of parameters is necessary. The results of this
approach are shown for a real world dataset, namely for turbulence. The
unconditional and conditional probability densities of the original and
reconstructed time series are compared and the ability to reproduce both is
demonstrated. Therefore in the case of Markov properties the method proposed
here is able to generate artificial time series with correct n-point
statistics.
| [
{
"version": "v1",
"created": "Mon, 7 Aug 2006 17:33:31 GMT"
}
] | 2009-11-13T00:00:00 | [
[
"Nawroth",
"A. P.",
""
],
[
"Peinke",
"J.",
""
]
] | TITLE: Multiscale reconstruction of time series
ABSTRACT: A new method is proposed which allows a reconstruction of time series based
on higher order multiscale statistics given by a hierarchical process. This
method is able to model the time series not only on a specific scale but for a
range of scales. It is possible to generate complete new time series, or to
model the next steps for a given sequence of data. The method itself is based
on the joint probability density which can be extracted directly from given
data, thus no estimation of parameters is necessary. The results of this
approach are shown for a real world dataset, namely for turbulence. The
unconditional and conditional probability densities of the original and
reconstructed time series are compared and the ability to reproduce both is
demonstrated. Therefore in the case of Markov properties the method proposed
here is able to generate artificial time series with correct n-point
statistics.
| no_new_dataset | 0.951639 |
astro-ph/0605042 | Somak Raychaudhury | Juan C. Cuevas-Tello (1,3), Peter Tino (1) and Somak Raychaudhury (2)
((1) School of Computer Science, University of Birmingham, UK; (2) School of
Physics & Astronomy, University of Birmingham, UK; (3) University of San Luis
Potosi, Mexico) | How accurate are the time delay estimates in gravitational lensing? | 14 pages, 12 figures; accepted for publication in Astronomy &
Astrophysics | Astron.Astrophys. 454 (2006) 695-706 | 10.1051/0004-6361:20054652 | null | astro-ph cs.LG | null | We present a novel approach to estimate the time delay between light curves
of multiple images in a gravitationally lensed system, based on Kernel methods
in the context of machine learning. We perform various experiments with
artificially generated irregularly-sampled data sets to study the effect of the
various levels of noise and the presence of gaps of various size in the
monitoring data. We compare the performance of our method with various other
popular methods of estimating the time delay and conclude, from experiments
with artificial data, that our method is least vulnerable to missing data and
irregular sampling, within reasonable bounds of Gaussian noise. Thereafter, we
use our method to determine the time delays between the two images of quasar
Q0957+561 from radio monitoring data at 4 cm and 6 cm, and conclude that if
only the observations at epochs common to both wavelengths are used, the time
delay gives consistent estimates, which can be combined to yield 408\pm 12
days. The full 6 cm dataset, which covers a longer monitoring period, yields a
value which is 10% larger, but this can be attributed to differences in
sampling and missing data.
| [
{
"version": "v1",
"created": "Mon, 1 May 2006 20:42:03 GMT"
}
] | 2009-11-11T00:00:00 | [
[
"Cuevas-Tello",
"Juan C.",
""
],
[
"Tino",
"Peter",
""
],
[
"Raychaudhury",
"Somak",
""
]
] | TITLE: How accurate are the time delay estimates in gravitational lensing?
ABSTRACT: We present a novel approach to estimate the time delay between light curves
of multiple images in a gravitationally lensed system, based on Kernel methods
in the context of machine learning. We perform various experiments with
artificially generated irregularly-sampled data sets to study the effect of the
various levels of noise and the presence of gaps of various size in the
monitoring data. We compare the performance of our method with various other
popular methods of estimating the time delay and conclude, from experiments
with artificial data, that our method is least vulnerable to missing data and
irregular sampling, within reasonable bounds of Gaussian noise. Thereafter, we
use our method to determine the time delays between the two images of quasar
Q0957+561 from radio monitoring data at 4 cm and 6 cm, and conclude that if
only the observations at epochs common to both wavelengths are used, the time
delay gives consistent estimates, which can be combined to yield 408\pm 12
days. The full 6 cm dataset, which covers a longer monitoring period, yields a
value which is 10% larger, but this can be attributed to differences in
sampling and missing data.
| no_new_dataset | 0.948632 |
physics/0509247 | Jose J. Ramasco | Jose J. Ramasco, Steven A. Morris | Social inertia in collaboration networks | 7 pages, 5 figures | Phys. Rev. E 73, 016122 (2006) | 10.1103/PhysRevE.73.016122 | null | physics.soc-ph cond-mat.stat-mech | null | This work is a study of the properties of collaboration networks employing
the formalism of weighted graphs to represent their one-mode projection. The
weight of the edges is directly the number of times that a partnership has been
repeated. This representation allows us to define the concept of "social
inertia" that measures the tendency of authors to keep on collaborating with
previous partners. We use a collection of empirical datasets to analyze several
aspects of the social inertia: 1) its probability distribution, 2) its
correlation with other properties, and 3) the correlations of the inertia
between neighbors in the network. We also contrast these empirical results with
the predictions of a recently proposed theoretical model for the growth of
collaboration networks.
| [
{
"version": "v1",
"created": "Thu, 29 Sep 2005 15:35:00 GMT"
}
] | 2009-11-11T00:00:00 | [
[
"Ramasco",
"Jose J.",
""
],
[
"Morris",
"Steven A.",
""
]
] | TITLE: Social inertia in collaboration networks
ABSTRACT: This work is a study of the properties of collaboration networks employing
the formalism of weighted graphs to represent their one-mode projection. The
weight of the edges is directly the number of times that a partnership has been
repeated. This representation allows us to define the concept of "social
inertia" that measures the tendency of authors to keep on collaborating with
previous partners. We use a collection of empirical datasets to analyze several
aspects of the social inertia: 1) its probability distribution, 2) its
correlation with other properties, and 3) the correlations of the inertia
between neighbors in the network. We also contrast these empirical results with
the predictions of a recently proposed theoretical model for the growth of
collaboration networks.
| no_new_dataset | 0.950088 |
physics/0601223 | Kwang-Il Goh | K.-I. Goh, Y.-H. Eom, H. Jeong, B. Kahng, and D. Kim | Structure and evolution of online social relationships: Heterogeneity in
warm discussions | 7 pages, 7 figures, 2 tables | null | 10.1103/PhysRevE.73.066123 | null | physics.data-an cond-mat.stat-mech physics.soc-ph | null | With the advancement in the information age, people are using electronic
media more frequently for communications, and social relationships are also
increasingly resorting to online channels. While extensive studies on
traditional social networks have been carried out, little has been done on
online social network. Here we analyze the structure and evolution of online
social relationships by examining the temporal records of a bulletin board
system (BBS) in a university. The BBS dataset comprises 1,908 boards, in
which a total of 7,446 students participate. An edge is assigned to each
dialogue between two students, and it is defined as the appearance of the name
of a student in the from- and to-field in each message. This yields a weighted
network between the communicating students with an unambiguous group
association of individuals. In contrast to a typical community network, where
intracommunities (intercommunities) are strongly (weakly) tied, the BBS network
contains hub members who participate in many boards simultaneously but are
strongly tied, that is, they have a large degree and betweenness centrality and
provide communication channels between communities. On the other hand,
intracommunities are rather homogeneously and weakly connected. Such a
structure, which has never been empirically characterized in the past, might
provide a new perspective on social opinion formation in this digital era.
| [
{
"version": "v1",
"created": "Tue, 31 Jan 2006 17:27:30 GMT"
}
] | 2009-11-11T00:00:00 | [
[
"Goh",
"K. -I.",
""
],
[
"Eom",
"Y. -H.",
""
],
[
"Jeong",
"H.",
""
],
[
"Kahng",
"B.",
""
],
[
"Kim",
"D.",
""
]
] | TITLE: Structure and evolution of online social relationships: Heterogeneity in
warm discussions
ABSTRACT: With the advancement in the information age, people are using electronic
media more frequently for communications, and social relationships are also
increasingly resorting to online channels. While extensive studies on
traditional social networks have been carried out, little has been done on
online social network. Here we analyze the structure and evolution of online
social relationships by examining the temporal records of a bulletin board
system (BBS) in a university. The BBS dataset comprises 1,908 boards, in
which a total of 7,446 students participate. An edge is assigned to each
dialogue between two students, and it is defined as the appearance of the name
of a student in the from- and to-field in each message. This yields a weighted
network between the communicating students with an unambiguous group
association of individuals. In contrast to a typical community network, where
intracommunities (intercommunities) are strongly (weakly) tied, the BBS network
contains hub members who participate in many boards simultaneously but are
strongly tied, that is, they have a large degree and betweenness centrality and
provide communication channels between communities. On the other hand,
intracommunities are rather homogeneously and weakly connected. Such a
structure, which has never been empirically characterized in the past, might
provide a new perspective on social opinion formation in this digital era.
| no_new_dataset | 0.883588 |
0911.1455 | Loet Leydesdorff | Wilfred Dolfsma, Loet Leydesdorff | "Medium-tech" industries may be of greater importance to a local economy
than "High-tech" firms: New methods for measuring the knowledge base of an
economic system | null | Medical Hypotheses, 71(3) (2008) 330-334 | null | null | physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we offer a way to measure the knowledge base of an economy in
terms of probabilistic entropy. This measure, we hypothesize, is an indication
of the extent to which a system, including the economic system, self-organizes.
In a self-organizing system, interactions between dimensions or subsystems will
unintentionally give rise to anticipations that are properly aligned. The
potential reduction of uncertainty can be measured as negative entropy in the
mutual information among three (or more) dimensions. For a knowledge-based
economy, three dimensions can be considered as key: the distribution of firm
sizes, the geographical locations, and the technological classifications of
firms. Based on statistics of these three dimensions and drawing on a unique
dataset of all Dutch firms registered with the Chambers of Commerce, we are
able to refine well-known empirical findings for the geographical dimension.
Counter-intuitive, however, are our empirical findings for the dimension of
technology. Knowledge diffusion through medium-tech industry is much more
important for a localized economy than knowledge creation in high-tech
industry. Knowledge-intensive services tend to uncouple economic activities
from the regional dimension.
| [
{
"version": "v1",
"created": "Sat, 7 Nov 2009 19:42:43 GMT"
}
] | 2009-11-10T00:00:00 | [
[
"Dolfsma",
"Wilfred",
""
],
[
"Leydesdorff",
"Loet",
""
]
] | TITLE: "Medium-tech" industries may be of greater importance to a local economy
than "High-tech" firms: New methods for measuring the knowledge base of an
economic system
ABSTRACT: In this paper we offer a way to measure the knowledge base of an economy in
terms of probabilistic entropy. This measure, we hypothesize, is an indication
of the extent to which a system, including the economic system, self-organizes.
In a self-organizing system, interactions between dimensions or subsystems will
unintentionally give rise to anticipations that are properly aligned. The
potential reduction of uncertainty can be measured as negative entropy in the
mutual information among three (or more) dimensions. For a knowledge-based
economy, three dimensions can be considered as key: the distribution of firm
sizes, the geographical locations, and the technological classifications of
firms. Based on statistics of these three dimensions and drawing on a unique
dataset of all Dutch firms registered with the Chambers of Commerce, we are
able to refine well-known empirical findings for the geographical dimension.
Counter-intuitive, however, are our empirical findings for the dimension of
technology. Knowledge diffusion through medium-tech industry is much more
important for a localized economy than knowledge creation in high-tech
industry. Knowledge-intensive services tend to uncouple economic activities
from the regional dimension.
| no_new_dataset | 0.932944 |
astro-ph/0310624 | Anil K. Pradhan | Sultana N. Nahar and Anil K. Pradhan (Ohio State) | Self-Consistent R-matrix Approach To Photoionization And Unified
Electron-Ion Recombination | 33 pages, 13 figures, Review in "Radiation Processes In Physics and
Chemistry", Elsevier (in press). Postscript file with higher resolution
figures at http://www.astronomy.ohio-state.edu/~pradhan/pr.ps | Radiat.Phys.Chem. 70 (2004) 323-344 | 10.1016/j.radphyschem.2003.12.019 | null | astro-ph physics.atom-ph | null | A unified scheme using the R-matrix method has been developed for
electron-ion recombination subsuming heretofore separate treatments of
radiative and dielectronic recombination (RR and DR). The ab initio coupled
channel approach unifies resonant and non-resonant phenomena, and enables a
general and self-consistent treatment of photoionization and electron-ion
recombination employing identical wavefunction expansion. Detailed balance takes
account of interference effects due to resonances in cross sections, calculated
explicitly for a large number of recombined (e+ion) bound levels over extended
energy regions. The theory of DR by Bell and Seaton is adapted for high-n
resonances in the region below series limits. The R-matrix method is employed
for (A) partial and total photoionization and photorecombination cross sections
of (e+ion) bound levels, and (B) DR and (e+ion) scattering cross sections.
Relativistic effects and fine structure are considered in the Breit-Pauli
approximation. Effects such as radiation damping may be taken into account
where necessary. Unified recombination cross sections are in excellent
agreement with measurements on ion storage rings to about 10-20%. In addition
to high accuracy, the strengths of the method are: (I) both total and
level-specific cross sections and rate coefficients are obtained, and (II) a
single (e+ion) recombination rate coefficient for any given atom or ion is
obtained over the entire temperature range of practical importance in
laboratory and astrophysical plasmas, (III) self-consistent results are
obtained for photoionization and recombination; comprehensive datasets have
been computed for over 50 atoms and ions. Selected data are presented for iron
ions.
| [
{
"version": "v1",
"created": "Tue, 21 Oct 2003 20:20:15 GMT"
}
] | 2009-11-10T00:00:00 | [
[
"Nahar",
"Sultana N.",
"",
"Ohio State"
],
[
"Pradhan",
"Anil K.",
"",
"Ohio State"
]
] | TITLE: Self-Consistent R-matrix Approach To Photoionization And Unified
Electron-Ion Recombination
ABSTRACT: A unified scheme using the R-matrix method has been developed for
electron-ion recombination subsuming heretofore separate treatments of
radiative and dielectronic recombination (RR and DR). The ab initio coupled
channel approach unifies resonant and non-resonant phenomena, and enables a
general and self-consistent treatment of photoionization and electron-ion
recombination employing identical wavefunction expansion. Detailed balance takes
account of interference effects due to resonances in cross sections, calculated
explicitly for a large number of recombined (e+ion) bound levels over extended
energy regions. The theory of DR by Bell and Seaton is adapted for high-n
resonances in the region below series limits. The R-matrix method is employed
for (A) partial and total photoionization and photorecombination cross sections
of (e+ion) bound levels, and (B) DR and (e+ion) scattering cross sections.
Relativistic effects and fine structure are considered in the Breit-Pauli
approximation. Effects such as radiation damping may be taken into account
where necessary. Unified recombination cross sections are in excellent
agreement with measurements on ion storage rings to about 10-20%. In addition
to high accuracy, the strengths of the method are: (I) both total and
level-specific cross sections and rate coefficients are obtained, and (II) a
single (e+ion) recombination rate coefficient for any given atom or ion is
obtained over the entire temperature range of practical importance in
laboratory and astrophysical plasmas, (III) self-consistent results are
obtained for photoionization and recombination; comprehensive datasets have
been computed for over 50 atoms and ions. Selected data are presented for iron
ions.
| no_new_dataset | 0.955236 |
astro-ph/0410487 | Dirk Petry | Dirk Petry (Joint Center for Astrophysics, UMBC & NASA/GSFC) | The Earth's Gamma-ray Albedo as observed by EGRET | To be published in the proceedings of the Gamma 2004 Symposium on
High-Energy Gamma-Ray Astronomy, Heidelberg, July, 2004 (AIP Proceedings
Series) | null | 10.1063/1.1878488 | null | astro-ph physics.geo-ph | null | The Earth's high energy gamma-ray emission is caused by cosmic ray
interactions with the atmosphere. The EGRET detector on-board the CGRO
satellite is only the second experiment (after SAS-2) to provide a suitable
dataset for the comprehensive study of this emission. Approximately 60% of the
EGRET dataset consists of gamma photons from the Earth. This conference
contribution presents the first results from the first analysis project to
tackle this large dataset. Ultimate purpose is to develop an analytical model
of the Earth's emission for use in the GLAST project. The results obtained so
far confirm the earlier results from SAS-2 and extend them in terms of
statistical precision and angular resolution.
| [
{
"version": "v1",
"created": "Wed, 20 Oct 2004 19:09:38 GMT"
}
] | 2009-11-10T00:00:00 | [
[
"Petry",
"Dirk",
"",
"Joint Center for Astrophysics, UMBC & NASA/GSFC"
]
] | TITLE: The Earth's Gamma-ray Albedo as observed by EGRET
ABSTRACT: The Earth's high energy gamma-ray emission is caused by cosmic ray
interactions with the atmosphere. The EGRET detector on-board the CGRO
satellite is only the second experiment (after SAS-2) to provide a suitable
dataset for the comprehensive study of this emission. Approximately 60% of the
EGRET dataset consists of gamma photons from the Earth. This conference
contribution presents the first results from the first analysis project to
tackle this large dataset. Ultimate purpose is to develop an analytical model
of the Earth's emission for use in the GLAST project. The results obtained so
far confirm the earlier results from SAS-2 and extend them in terms of
statistical precision and angular resolution.
| no_new_dataset | 0.933249 |
cs/0402016 | Marco Frailis | M. Frailis, A. De Angelis, V. Roberto | Perspects in astrophysical databases | null | Physica A338 (2004) 54-59 | 10.1016/j.physa.2004.02.024 | null | cs.DB astro-ph | null | Astrophysics has become a domain extremely rich of scientific data. Data
mining tools are needed for information extraction from such large datasets.
This asks for an approach to data management emphasizing the efficiency and
simplicity of data access; efficiency is obtained using multidimensional access
methods and simplicity is achieved by properly handling metadata. Moreover,
clustering and classification techniques on large datasets pose additional
requirements in terms of computation and memory scalability and
interpretability of results. In this study we review some possible solutions.
| [
{
"version": "v1",
"created": "Mon, 9 Feb 2004 19:13:17 GMT"
}
] | 2009-11-10T00:00:00 | [
[
"Frailis",
"M.",
""
],
[
"De Angelis",
"A.",
""
],
[
"Roberto",
"V.",
""
]
] | TITLE: Perspects in astrophysical databases
ABSTRACT: Astrophysics has become a domain extremely rich of scientific data. Data
mining tools are needed for information extraction from such large datasets.
This asks for an approach to data management emphasizing the efficiency and
simplicity of data access; efficiency is obtained using multidimensional access
methods and simplicity is achieved by properly handling metadata. Moreover,
clustering and classification techniques on large datasets pose additional
requirements in terms of computation and memory scalability and
interpretability of results. In this study we review some possible solutions.
| no_new_dataset | 0.946101 |
physics/0307098 | Alessandra Retico | A. Lauria, M.E. Fantacci, U. Bottigli, P. Delogu, F. Fauci, B.
Golosio, P.L. Indovina, G.L. Masala, P. Oliva, R. Palmiero, G. Raso, S.
Stumbo, S. Tangaro | Diagnostic performance of radiologists with and without different CAD
systems for mammography | 6 pages, 3 figures; to appear in the Proceedings of The International
Society for Optical Engineering, SPIE Conference, 15-20 February 2003, San
Diego, California, USA | null | 10.1117/12.480079 | null | physics.med-ph | null | The purpose of this study is the evaluation of the variation of performance
in terms of sensitivity and specificity of two radiologists with different
experience in mammography, with and without the assistance of two different CAD
systems. The CAD considered are SecondLookTM (CADx Medical Systems, Canada),
and CALMA (Computer Assisted Library in MAmmography). The first is a commercial
system, the other is the result of a research project, supported by INFN
(Istituto Nazionale di Fisica Nucleare, Italy); their characteristics have been
already reported in the literature. To compare the results with and without these
tools, a dataset composed of 70 images of patients with cancer (biopsy proven)
and 120 images of healthy breasts (with a three-year follow up) has been
collected. All the images have been digitized and analysed by two CAD, then two
radiologists with respectively 6 and 2 years of experience in mammography
independently made their diagnosis without and with the support of the two CAD
systems. In this work sensitivity and specificity variation, the Az area under
the ROC curve, are reported. The results show that the use of a CAD allows for
a substantial increment in sensitivity and a less pronounced decrement in
specificity. The extent of these effects depends on the experience of the
readers and is comparable for the two CAD considered.
| [
{
"version": "v1",
"created": "Sat, 19 Jul 2003 13:22:06 GMT"
}
] | 2009-11-10T00:00:00 | [
[
"Lauria",
"A.",
""
],
[
"Fantacci",
"M. E.",
""
],
[
"Bottigli",
"U.",
""
],
[
"Delogu",
"P.",
""
],
[
"Fauci",
"F.",
""
],
[
"Golosio",
"B.",
""
],
[
"Indovina",
"P. L.",
""
],
[
"Masala",
"G. L.",
""
],
[
"Oliva",
"P.",
""
],
[
"Palmiero",
"R.",
""
],
[
"Raso",
"G.",
""
],
[
"Stumbo",
"S.",
""
],
[
"Tangaro",
"S.",
""
]
] | TITLE: Diagnostic performance of radiologists with and without different CAD
systems for mammography
ABSTRACT: The purpose of this study is the evaluation of the variation of performance
in terms of sensitivity and specificity of two radiologists with different
experience in mammography, with and without the assistance of two different CAD
systems. The CAD considered are SecondLookTM (CADx Medical Systems, Canada),
and CALMA (Computer Assisted Library in MAmmography). The first is a commercial
system, the other is the result of a research project, supported by INFN
(Istituto Nazionale di Fisica Nucleare, Italy); their characteristics have been
already reported in the literature. To compare the results with and without these
tools, a dataset composed of 70 images of patients with cancer (biopsy proven)
and 120 images of healthy breasts (with a three-year follow up) has been
collected. All the images have been digitized and analysed by two CAD, then two
radiologists with respectively 6 and 2 years of experience in mammography
independently made their diagnosis without and with the support of the two CAD
systems. In this work sensitivity and specificity variation, the Az area under
the ROC curve, are reported. The results show that the use of a CAD allows for
a substantial increment in sensitivity and a less pronounced decrement in
specificity. The extent of these effects depends on the experience of the
readers and is comparable for the two CAD considered.
| new_dataset | 0.971047 |
physics/0312077 | Guglielmo Lacorata | Guglielmo Lacorata, Erik Aurell, Bernard Legras and Angelo Vulpiani | Evidence for a k^{-5/3} spectrum from the EOLE Lagrangian balloons in
the low stratosphere | 19 pages, 1 table + 5 (pdf) figures | J. Atmos. Sci. 61, 23, 2936-2942 (2004) | 10.1175/JAS-3292.1 | null | physics.ao-ph nlin.CD | null | The EOLE Experiment is revisited to study turbulent processes in the lower
stratosphere circulation from a Lagrangian viewpoint and resolve a discrepancy
on the slope of the atmospheric energy spectrum between the work of Morel and
Larcheveque (1974) and recent studies using aircraft data. Relative dispersion
of balloon pairs is studied by calculating the Finite Scale Lyapunov Exponent,
an exit time-based technique which is particularly efficient in cases where
processes with different spatial scales are interfering. Our main result is to
reconciliate the EOLE dataset with recent studies supporting a k^{-5/3} energy
spectrum in the range 100-1000 km. Our results also show exponential separation
at smaller scale, with characteristic time of order 1 day, and agree with the
standard diffusion of about 10^7 m^2/s at large scales. A still open question
is the origin of a k^{-5/3} spectrum in the mesoscale range, between 100 and
1000 km.
| [
{
"version": "v1",
"created": "Thu, 11 Dec 2003 16:46:41 GMT"
}
] | 2009-11-10T00:00:00 | [
[
"Lacorata",
"Guglielmo",
""
],
[
"Aurell",
"Erik",
""
],
[
"Legras",
"Bernard",
""
],
[
"Vulpiani",
"Angelo",
""
]
] | TITLE: Evidence for a k^{-5/3} spectrum from the EOLE Lagrangian balloons in
the low stratosphere
ABSTRACT: The EOLE Experiment is revisited to study turbulent processes in the lower
stratosphere circulation from a Lagrangian viewpoint and resolve a discrepancy
on the slope of the atmospheric energy spectrum between the work of Morel and
Larcheveque (1974) and recent studies using aircraft data. Relative dispersion
of balloon pairs is studied by calculating the Finite Scale Lyapunov Exponent,
an exit time-based technique which is particularly efficient in cases where
processes with different spatial scales are interfering. Our main result is to
reconciliate the EOLE dataset with recent studies supporting a k^{-5/3} energy
spectrum in the range 100-1000 km. Our results also show exponential separation
at smaller scale, with characteristic time of order 1 day, and agree with the
standard diffusion of about 10^7 m^2/s at large scales. A still open question
is the origin of a k^{-5/3} spectrum in the mesoscale range, between 100 and
1000 km.
| no_new_dataset | 0.95096 |
physics/0104028 | Wentian Li | Wentian Li | Zipf's Law in Importance of Genes for Cancer Classification Using
Microarray Data | 11 pages, 5 figures. submitted | W Li and Y Yang (2002), J. Theoretical Biology, 219(4):539-551. | 10.1006/jtbi.2002.3145 | physics/0104028 | physics.bio-ph physics.data-an q-bio.QM | null | Microarray data consists of mRNA expression levels of thousands of genes
under certain conditions. A difference in the expression level of a gene at two
different conditions/phenotypes, such as cancerous versus non-cancerous, one
subtype of cancer versus another, before versus after a drug treatment, is
indicative of the relevance of that gene to the difference of the high-level
phenotype. Each gene can be ranked by its ability to distinguish the two
conditions. We study how the single-gene classification ability decreases with
its rank (a Zipf's plot). Power-law function in the Zipf's plot is observed for
the four microarray datasets obtained from various cancer studies. This
power-law behavior in the Zipf's plot is reminiscent of similar power-law
curves in other natural and social phenomena (Zipf's law). However, due to our
choice of the measure of importance in classification ability, i.e., the
maximized likelihood in a logistic regression, the exponent of the power-law
function is a function of the sample size, instead of a fixed value close to 1
for a typical example of Zipf's law. The presence of this power-law behavior is
important for deciding the number of genes to be used for a discriminant
microarray data analysis.
| [
{
"version": "v1",
"created": "Fri, 6 Apr 2001 00:07:44 GMT"
}
] | 2009-11-09T00:00:00 | [
[
"Li",
"Wentian",
""
]
] | TITLE: Zipf's Law in Importance of Genes for Cancer Classification Using
Microarray Data
ABSTRACT: Microarray data consists of mRNA expression levels of thousands of genes
under certain conditions. A difference in the expression level of a gene at two
different conditions/phenotypes, such as cancerous versus non-cancerous, one
subtype of cancer versus another, before versus after a drug treatment, is
indicative of the relevance of that gene to the difference of the high-level
phenotype. Each gene can be ranked by its ability to distinguish the two
conditions. We study how the single-gene classification ability decreases with
its rank (a Zipf's plot). Power-law function in the Zipf's plot is observed for
the four microarray datasets obtained from various cancer studies. This
power-law behavior in the Zipf's plot is reminiscent of similar power-law
curves in other natural and social phenomena (Zipf's law). However, due to our
choice of the measure of importance in classification ability, i.e., the
maximized likelihood in a logistic regression, the exponent of the power-law
function is a function of the sample size, instead of a fixed value close to 1
for a typical example of Zipf's law. The presence of this power-law behavior is
important for deciding the number of genes to be used for a discriminant
microarray data analysis.
| no_new_dataset | 0.95469 |
cs/0208013 | Jim Gray | Alexander S. Szalay, Jim Gray, Jan vandenBerg | Petabyte Scale Data Mining: Dream or Reality? | originals at
http://research.microsoft.com/scripts/pubs/view.asp?TR_ID=MSR-TR-2002-84 | SPIE Astronomy Telescopes and Instruments, 22-28 August 2002,
Waikoloa, Hawaii | 10.1117/12.461427 | MSR-TR-2002-84 | cs.DB cs.CE | null | Science is becoming very data intensive. Today's astronomy datasets with
tens of millions of galaxies already present substantial challenges for data
mining. In less than 10 years the catalogs are expected to grow to billions of
objects, and image archives will reach Petabytes. Imagine having a 100GB
database in 1996, when disk scanning speeds were 30MB/s, and database tools
were immature. Such a task today is trivial, almost manageable with a laptop.
We think that the issue of a PB database will be very similar in six years. In
this paper we scale our current experiments in data archiving and analysis on
the Sloan Digital Sky Survey data six years into the future. We analyze
these projections and look at the requirements of performing data mining on
such data sets. We conclude that the task scales rather well: we could do the
job today, although it would be expensive. There do not seem to be any
show-stoppers that would prevent us from storing and using a Petabyte dataset
six years from today.
| [
{
"version": "v1",
"created": "Wed, 7 Aug 2002 22:49:56 GMT"
}
] | 2009-11-07T00:00:00 | [
[
"Szalay",
"Alexander S.",
""
],
[
"Gray",
"Jim",
""
],
[
"vandenBerg",
"Jan",
""
]
] | TITLE: Petabyte Scale Data Mining: Dream or Reality?
ABSTRACT: Science is becoming very data intensive. Today's astronomy datasets with
tens of millions of galaxies already present substantial challenges for data
mining. In less than 10 years the catalogs are expected to grow to billions of
objects, and image archives will reach Petabytes. Imagine having a 100GB
database in 1996, when disk scanning speeds were 30MB/s, and database tools
were immature. Such a task today is trivial, almost manageable with a laptop.
We think that the issue of a PB database will be very similar in six years. In
this paper we scale our current experiments in data archiving and analysis on
the Sloan Digital Sky Survey data six years into the future. We analyze
these projections and look at the requirements of performing data mining on
such data sets. We conclude that the task scales rather well: we could do the
job today, although it would be expensive. There do not seem to be any
show-stoppers that would prevent us from storing and using a Petabyte dataset
six years from today.
| no_new_dataset | 0.929824 |
cs/0208015 | Jim Gray | Alexander S. Szalay, Tamas Budavari, Andrew Connolly, Jim Gray,
Takahiko Matsubara, Adrian Pope, Istvan Szapudi | Spatial Clustering of Galaxies in Large Datasets | original documents at
http://research.microsoft.com/scripts/pubs/view.asp?TR_ID=MSR-TR-2002-86 | SPIE Astronomy Telescopes and Instruments, 22-28 August 2002,
Waikoloa, Hawaii | 10.1117/12.476761 | TR_ID=MSR-TR-2002-86 | cs.DB cs.DS | null | Datasets with tens of millions of galaxies present new challenges for the
analysis of spatial clustering. We have built a framework that integrates a
database of object catalogs, tools for creating masks of bad regions, and a
fast (NlogN) correlation code. This system has enabled unprecedented efficiency
in carrying out the analysis of galaxy clustering in the SDSS catalog. A
similar approach is used to compute the three-dimensional spatial clustering of
galaxies on very large scales. We describe our strategy to estimate the effect
of photometric errors using a database. We discuss our efforts as an early
example of data-intensive science. While it would have been possible to get
these results without the framework we describe, it will be infeasible to
perform these computations on the future huge datasets without using this
framework.
| [
{
"version": "v1",
"created": "Wed, 7 Aug 2002 23:06:40 GMT"
}
] | 2009-11-07T00:00:00 | [
[
"Szalay",
"Alexander S.",
""
],
[
"Budavari",
"Tamas",
""
],
[
"Connolly",
"Andrew",
""
],
[
"Gray",
"Jim",
""
],
[
"Matsubara",
"Takahiko",
""
],
[
"Pope",
"Adrian",
""
],
[
"Szapudi",
"Istvan",
""
]
] | TITLE: Spatial Clustering of Galaxies in Large Datasets
ABSTRACT: Datasets with tens of millions of galaxies present new challenges for the
analysis of spatial clustering. We have built a framework that integrates a
database of object catalogs, tools for creating masks of bad regions, and a
fast (NlogN) correlation code. This system has enabled unprecedented efficiency
in carrying out the analysis of galaxy clustering in the SDSS catalog. A
similar approach is used to compute the three-dimensional spatial clustering of
galaxies on very large scales. We describe our strategy to estimate the effect
of photometric errors using a database. We discuss our efforts as an early
example of data-intensive science. While it would have been possible to get
these results without the framework we describe, it will be infeasible to
perform these computations on the future huge datasets without using this
framework.
| no_new_dataset | 0.951006 |
astro-ph/0103178 | Anil K. Pradhan | Hong Lin Zhang (Los Alamos National Laboratory), Sultana N. Nahar and
Anil K. Pradhan (Ohio State University) | Relativistic close coupling calculations for photoionization and
recombination of Ne-like Fe XVII | 19 pages, 8 figures, Phys. Rev. A (submitted) | null | 10.1103/PhysRevA.64.032719 | null | astro-ph physics.atom-ph | null | Relativistic and channel coupling effects in photoionization and unified
electronic recombination of Fe XVII are demonstrated with an extensive 60-level
close coupling calculation using the Breit-Pauli R-matrix method.
Photoionization and (e + ion) recombination calculations are carried out for
the total and the level-specific cross sections, including the ground and
several hundred excited bound levels of Fe XVII (up to fine structure levels
with n = 10). The unified (e + ion) recombination calculations for (e + Fe
XVIII --> Fe XVII) include both the non-resonant and resonant recombination
(`radiative' and `dielectronic recombination' -- RR and DR). The low-energy and
the high energy cross sections are compared from: (i) a 3-level calculation
with 2s^2p^5 (^2P^o_{1/2,3/2}) and 2s2p^6 (^2S_{1/2}), and (ii) the first
60-level calculation with \Delta n > 0 coupled channels with spectroscopic
2s^2p^5, 2s2p^6, 2s^22p^4 3s, 3p, 3d, configurations, and a number of
correlation configurations. Strong channel coupling effects are demonstrated
throughout the energy ranges considered, in particular via giant
photoexcitation-of-core (PEC) resonances due to L-M shell dipole transition
arrays 2p^5 --> 2p^4 3s, 3d in Fe XIII that enhance effective cross sections by
orders of magnitude. Comparison is made with previous theoretical and
experimental works on photoionization and recombination that considered the
relatively small low-energy region (i), and the weaker \Delta n = 0 couplings.
While the 3-level results are inadequate, the present 60-level results should
provide reasonably complete and accurate datasets for both photoionization and
(e + ion) recombination of Fe~XVII in laboratory and astrophysical plasmas.
| [
{
"version": "v1",
"created": "Mon, 12 Mar 2001 22:33:34 GMT"
}
] | 2009-11-06T00:00:00 | [
[
"Zhang",
"Hong Lin",
"",
"Los Alamos National Laboratory"
],
[
"Nahar",
"Sultana N.",
"",
"Ohio State University"
],
[
"Pradhan",
"Anil K.",
"",
"Ohio State University"
]
] | TITLE: Relativistic close coupling calculations for photoionization and
recombination of Ne-like Fe XVII
ABSTRACT: Relativistic and channel coupling effects in photoionization and unified
electronic recombination of Fe XVII are demonstrated with an extensive 60-level
close coupling calculation using the Breit-Pauli R-matrix method.
Photoionization and (e + ion) recombination calculations are carried out for
the total and the level-specific cross sections, including the ground and
several hundred excited bound levels of Fe XVII (up to fine structure levels
with n = 10). The unified (e + ion) recombination calculations for (e + Fe
XVIII --> Fe XVII) include both the non-resonant and resonant recombination
(`radiative' and `dielectronic recombination' -- RR and DR). The low-energy and
the high energy cross sections are compared from: (i) a 3-level calculation
with 2s^2p^5 (^2P^o_{1/2,3/2}) and 2s2p^6 (^2S_{1/2}), and (ii) the first
60-level calculation with \Delta n > 0 coupled channels with spectroscopic
2s^2p^5, 2s2p^6, 2s^22p^4 3s, 3p, 3d, configurations, and a number of
correlation configurations. Strong channel coupling effects are demonstrated
throughout the energy ranges considered, in particular via giant
photoexcitation-of-core (PEC) resonances due to L-M shell dipole transition
arrays 2p^5 --> 2p^4 3s, 3d in Fe XIII that enhance effective cross sections by
orders of magnitude. Comparison is made with previous theoretical and
experimental works on photoionization and recombination that considered the
relatively small low-energy region (i), and the weaker \Delta n = 0 couplings.
While the 3-level results are inadequate, the present 60-level results should
provide reasonably complete and accurate datasets for both photoionization and
(e + ion) recombination of Fe~XVII in laboratory and astrophysical plasmas.
| no_new_dataset | 0.948728 |
physics/0004009 | Gaddy Getz | G. Getz, E. Levine and E. Domany | Coupled Two-Way Clustering Analysis of Gene Microarray Data | null | null | 10.1073/pnas.210134797 | null | physics.bio-ph physics.comp-ph physics.data-an q-bio.QM | null | We present a novel coupled two-way clustering approach to gene microarray
data analysis. The main idea is to identify subsets of the genes and samples,
such that when one of these is used to cluster the other, stable and
significant partitions emerge. The search for such subsets is a computationally
complex task: we present an algorithm, based on iterative clustering, which
performs such a search. This analysis is especially suitable for gene
microarray data, where the contributions of a variety of biological mechanisms
to the gene expression levels are entangled in a large body of experimental
data. The method was applied to two gene microarray data sets, on colon cancer
and leukemia. By identifying relevant subsets of the data and focusing on them
we were able to discover partitions and correlations that were masked and
hidden when the full dataset was used in the analysis. Some of these partitions
have clear biological interpretation; others can serve to identify possible
directions for future research.
| [
{
"version": "v1",
"created": "Tue, 4 Apr 2000 14:10:53 GMT"
}
] | 2009-11-06T00:00:00 | [
[
"Getz",
"G.",
""
],
[
"Levine",
"E.",
""
],
[
"Domany",
"E.",
""
]
] | TITLE: Coupled Two-Way Clustering Analysis of Gene Microarray Data
ABSTRACT: We present a novel coupled two-way clustering approach to gene microarray
data analysis. The main idea is to identify subsets of the genes and samples,
such that when one of these is used to cluster the other, stable and
significant partitions emerge. The search for such subsets is a computationally
complex task: we present an algorithm, based on iterative clustering, which
performs such a search. This analysis is especially suitable for gene
microarray data, where the contributions of a variety of biological mechanisms
to the gene expression levels are entangled in a large body of experimental
data. The method was applied to two gene microarray data sets, on colon cancer
and leukemia. By identifying relevant subsets of the data and focusing on them
we were able to discover partitions and correlations that were masked and
hidden when the full dataset was used in the analysis. Some of these partitions
have clear biological interpretation; others can serve to identify possible
directions for future research.
| no_new_dataset | 0.950549 |
0911.0674 | James Bagrow | James P. Bagrow, Tal Koren | Investigating Bimodal Clustering in Human Mobility | 4 pages, 2 figures | International Conference on Computational Science and Engineering,
4: 944-947, 2009 | 10.1109/CSE.2009.283 | null | physics.data-an physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We apply a simple clustering algorithm to a large dataset of cellular
telecommunication records, reducing the complexity of mobile phone users' full
trajectories and allowing for simple statistics to characterize their
properties. For the case of two clusters, we quantify how clustered human
mobility is, how much of a user's spatial dispersion is due to motion between
clusters, and how spatially and temporally separated clusters are from one
another.
| [
{
"version": "v1",
"created": "Tue, 3 Nov 2009 21:42:02 GMT"
}
] | 2009-11-05T00:00:00 | [
[
"Bagrow",
"James P.",
""
],
[
"Koren",
"Tal",
""
]
] | TITLE: Investigating Bimodal Clustering in Human Mobility
ABSTRACT: We apply a simple clustering algorithm to a large dataset of cellular
telecommunication records, reducing the complexity of mobile phone users' full
trajectories and allowing for simple statistics to characterize their
properties. For the case of two clusters, we quantify how clustered human
mobility is, how much of a user's spatial dispersion is due to motion between
clusters, and how spatially and temporally separated clusters are from one
another.
| no_new_dataset | 0.939415 |
0911.0787 | Rdv Ijcsis | Shailendra Singh, Sanjay Silakari | Generalized Discriminant Analysis algorithm for feature reduction in
Cyber Attack Detection System | 8 pages IEEE format, International Journal of Computer Science and
Information Security, IJCSIS 2009, ISSN 1947 5500, Impact Factor 0.423,
http://sites.google.com/site/ijcsis/ | International Journal of Computer Science and Information
Security, IJCSIS, Vol. 6, No. 1, pp. 173-180, October 2009, USA | null | ISSN 1947 5500 | cs.CR cs.CV cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This Generalized Discriminant Analysis (GDA) has provided an extremely
powerful approach to extracting nonlinear features. The network traffic data
provided for the design of intrusion detection systems are always large, with
ineffective information; thus, we need to remove the worthless information from
the original high dimensional database. To improve the generalization ability,
we usually generate a small set of features from the original input variables
by feature extraction. The conventional Linear Discriminant Analysis (LDA)
feature reduction technique has its limitations. It is not suitable for
nonlinear datasets. Thus we propose an efficient algorithm based on the Generalized
Discriminant Analysis (GDA) feature reduction technique, which is a novel approach
used in the area of cyber attack detection. This not only reduces the number of
the input features but also increases the classification accuracy and reduces
the training and testing time of the classifiers by selecting the most
discriminating features. We use Artificial Neural Network (ANN) and C4.5
classifiers to compare the performance of the proposed technique. The result
indicates the superiority of the proposed algorithm.
| [
{
"version": "v1",
"created": "Wed, 4 Nov 2009 11:29:57 GMT"
}
] | 2009-11-05T00:00:00 | [
[
"Singh",
"Shailendra",
""
],
[
"Silakari",
"Sanjay",
""
]
] | TITLE: Generalized Discriminant Analysis algorithm for feature reduction in
Cyber Attack Detection System
ABSTRACT: This Generalized Discriminant Analysis (GDA) has provided an extremely
powerful approach to extracting nonlinear features. The network traffic data
provided for the design of intrusion detection systems are always large, with
ineffective information; thus, we need to remove the worthless information from
the original high dimensional database. To improve the generalization ability,
we usually generate a small set of features from the original input variables
by feature extraction. The conventional Linear Discriminant Analysis (LDA)
feature reduction technique has its limitations. It is not suitable for
nonlinear datasets. Thus we propose an efficient algorithm based on the Generalized
Discriminant Analysis (GDA) feature reduction technique, which is a novel approach
used in the area of cyber attack detection. This not only reduces the number of
the input features but also increases the classification accuracy and reduces
the training and testing time of the classifiers by selecting the most
discriminating features. We use Artificial Neural Network (ANN) and C4.5
classifiers to compare the performance of the proposed technique. The result
indicates the superiority of the proposed algorithm.
| no_new_dataset | 0.947088 |
0911.0460 | Lester Mackey | Joseph Sill, Gabor Takacs, Lester Mackey, David Lin | Feature-Weighted Linear Stacking | 17 pages, 1 figure, 2 tables | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ensemble methods, such as stacking, are designed to boost predictive accuracy
by blending the predictions of multiple machine learning models. Recent work
has shown that the use of meta-features, additional inputs describing each
example in a dataset, can boost the performance of ensemble methods, but the
greatest reported gains have come from nonlinear procedures requiring
significant tuning and training time. Here, we present a linear technique,
Feature-Weighted Linear Stacking (FWLS), that incorporates meta-features for
improved accuracy while retaining the well-known virtues of linear regression
regarding speed, stability, and interpretability. FWLS combines model
predictions linearly using coefficients that are themselves linear functions of
meta-features. This technique was a key facet of the solution of the second
place team in the recently concluded Netflix Prize competition. Significant
increases in accuracy over standard linear stacking are demonstrated on the
Netflix Prize collaborative filtering dataset.
| [
{
"version": "v1",
"created": "Tue, 3 Nov 2009 08:17:05 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Nov 2009 08:55:28 GMT"
}
] | 2009-11-04T00:00:00 | [
[
"Sill",
"Joseph",
""
],
[
"Takacs",
"Gabor",
""
],
[
"Mackey",
"Lester",
""
],
[
"Lin",
"David",
""
]
] | TITLE: Feature-Weighted Linear Stacking
ABSTRACT: Ensemble methods, such as stacking, are designed to boost predictive accuracy
by blending the predictions of multiple machine learning models. Recent work
has shown that the use of meta-features, additional inputs describing each
example in a dataset, can boost the performance of ensemble methods, but the
greatest reported gains have come from nonlinear procedures requiring
significant tuning and training time. Here, we present a linear technique,
Feature-Weighted Linear Stacking (FWLS), that incorporates meta-features for
improved accuracy while retaining the well-known virtues of linear regression
regarding speed, stability, and interpretability. FWLS combines model
predictions linearly using coefficients that are themselves linear functions of
meta-features. This technique was a key facet of the solution of the second
place team in the recently concluded Netflix Prize competition. Significant
increases in accuracy over standard linear stacking are demonstrated on the
Netflix Prize collaborative filtering dataset.
| no_new_dataset | 0.94625 |
0911.0465 | Konstantinos Pelechrinis | Theodoros Lappas, Konstantinos Pelechrinis, Michalis Faloutsos | A Simple Conceptual Generator for the Internet Graph | 9 pages | null | null | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The evolution of the Internet during the last years has led to a dramatic
increase of the size of its graph at the Autonomous System (AS) level. Soon -
if not already - its size will make the latter impractical for use from the
research community, e.g. for protocol testing. Reproducing a smaller-size
snapshot of the AS graph is thus important. However, the first step towards
this direction is to obtain the ability to faithfully reproduce the full AS
topology. The objective of our work is to create a generator able to
accurately emulate and reproduce the distinctive properties of the Internet
graph. Our approach is based on (a) the identification of the jellyfish-like
structure [1] of the Internet and (b) the consideration of the peer-to-peer and
customer-provider relations between ASs. We are the first to exploit the
distinctive structure of the Internet graph together with utilizing the
information provided by the AS relationships in order to create a tool with the
aforementioned capabilities. Comparing our generator with the existing ones in
the literature, the main difference is found on the fact that our tool does not
try to satisfy specific metrics, but tries to remain faithful to the conceptual
model of the Internet structure. In addition, our approach can lead to (i) the
identification of important attributes and patterns in the Internet AS
topology, as well as, (ii) the extraction of valuable information on the
various relationships between ASs and their effect on the formulation of the
Internet structure. We implement our graph generator and we evaluate it using
the largest and most recent available dataset for the AS topology. Our
evaluations clearly show the ability of our tool to capture the structural
properties of the Internet topology at the AS level with high accuracy.
| [
{
"version": "v1",
"created": "Tue, 3 Nov 2009 00:38:01 GMT"
}
] | 2009-11-04T00:00:00 | [
[
"Lappas",
"Theodoros",
""
],
[
"Pelechrinis",
"Konstantinos",
""
],
[
"Faloutsos",
"Michalis",
""
]
] | TITLE: A Simple Conceptual Generator for the Internet Graph
ABSTRACT: The evolution of the Internet during the last years has led to a dramatic
increase of the size of its graph at the Autonomous System (AS) level. Soon -
if not already - its size will make the latter impractical for use from the
research community, e.g. for protocol testing. Reproducing a smaller-size
snapshot of the AS graph is thus important. However, the first step towards
this direction is to obtain the ability to faithfully reproduce the full AS
topology. The objective of our work is to create a generator able to
accurately emulate and reproduce the distinctive properties of the Internet
graph. Our approach is based on (a) the identification of the jellyfish-like
structure [1] of the Internet and (b) the consideration of the peer-to-peer and
customer-provider relations between ASs. We are the first to exploit the
distinctive structure of the Internet graph together with utilizing the
information provided by the AS relationships in order to create a tool with the
aforementioned capabilities. Comparing our generator with the existing ones in
the literature, the main difference is found on the fact that our tool does not
try to satisfy specific metrics, but tries to remain faithful to the conceptual
model of the Internet structure. In addition, our approach can lead to (i) the
identification of important attributes and patterns in the Internet AS
topology, as well as, (ii) the extraction of valuable information on the
various relationships between ASs and their effect on the formulation of the
Internet structure. We implement our graph generator and we evaluate it using
the largest and most recent available dataset for the AS topology. Our
evaluations clearly show the ability of our tool to capture the structural
properties of the Internet topology at the AS level with high accuracy.
| no_new_dataset | 0.945801 |
astro-ph/0002230 | Anil K. Pradhan | Sultana N. Nahar (1), Franck Delahaye (1), Anil K. Pradhan (1), C.J.
Zeippen (2) (1 - Ohio State University, 2 - Observatoire de Meudon) | Atomic data from the Iron Project.XLIII. Transition probabilities for Fe
V | 19 pages, 1 figure. This paper marks the beginning of a large-scale
effort of ab initio atomic calculations that should eventually lead to
re-calculation of accurate iron opacities. Astron. Astrophys. Suppl. Ser. (in
press) | null | 10.1051/aas:2000339 | null | astro-ph physics.atom-ph | null | An extensive set of dipole-allowed, intercombination, and forbidden
transition probabilities for Fe V is presented. The Breit-Pauli R-matrix (BPRM)
method is used to calculate 1.46 x 10^6 oscillator strengths for the allowed
and intercombination E1 transitions among 3,865 fine-structure levels dominated
by configuration complexes with n <= 10 and l <= 9. These data are complemented
by an atomic structure configuration interaction (CI) calculation using the
SUPERSTRUCTURE program for 362 relativistic quadrupole (E2) and magnetic dipole
(M1) transitions among 65 low-lying levels dominated by the 3d^4 and 3d^3 4s
configurations. Procedures have been developed for the identification of the
large number of fine-structure levels and transitions obtained through the BPRM
calculations. The target ion Fe VI is represented by an eigenfunction expansion
of 19 fine-structure levels of 3d^3 and a set of correlation configurations. Fe
V bound levels are obtained with angular and spin symmetries SL\pi and J\pi of
the (e + Fe VI) system such that 2S+1 = 5,3,1, L <= 10, J <= 8 of even and odd
parities. The completeness of the calculated dataset is verified in terms of
all possible bound levels belonging to relevant LS terms and transitions in
correspondence with the LS terms. The fine-structure averaged relativistic
values are compared with previous Opacity Project LS coupling data and other
works. The 362 forbidden transition probabilities considerably extend the
available data for the E2 and M1 transitions, and are in good agreement with
those computed by Garstang for the 3d^4 transitions.
| [
{
"version": "v1",
"created": "Thu, 10 Feb 2000 16:27:57 GMT"
}
] | 2009-10-31T00:00:00 | [
[
"Nahar",
"Sultana N.",
""
],
[
"Delahaye",
"Franck",
""
],
[
"Pradhan",
"Anil K.",
""
],
[
"Zeippen",
"C. J.",
""
]
] | TITLE: Atomic data from the Iron Project.XLIII. Transition probabilities for Fe
V
ABSTRACT: An extensive set of dipole-allowed, intercombination, and forbidden
transition probabilities for Fe V is presented. The Breit-Pauli R-matrix (BPRM)
method is used to calculate 1.46 x 10^6 oscillator strengths for the allowed
and intercombination E1 transitions among 3,865 fine-structure levels dominated
by configuration complexes with n <= 10 and l <= 9. These data are complemented
by an atomic structure configuration interaction (CI) calculation using the
SUPERSTRUCTURE program for 362 relativistic quadrupole (E2) and magnetic dipole
(M1) transitions among 65 low-lying levels dominated by the 3d^4 and 3d^3 4s
configurations. Procedures have been developed for the identification of the
large number of fine-structure levels and transitions obtained through the BPRM
calculations. The target ion Fe VI is represented by an eigenfunction expansion
of 19 fine-structure levels of 3d^3 and a set of correlation configurations. Fe
V bound levels are obtained with angular and spin symmetries SL\pi and J\pi of
the (e + Fe VI) system such that 2S+1 = 5,3,1, L <= 10, J <= 8 of even and odd
parities. The completeness of the calculated dataset is verified in terms of
all possible bound levels belonging to relevant LS terms and transitions in
correspondence with the LS terms. The fine-structure averaged relativistic
values are compared with previous Opacity Project LS coupling data and other
works. The 362 forbidden transition probabilities considerably extend the
available data for the E2 and M1 transitions, and are in good agreement with
those computed by Garstang for the 3d^4 transitions.
| no_new_dataset | 0.946597 |
astro-ph/9911102 | Raul Jimenez | Alan Heavens (IfA, Edinburgh), Raul Jimenez (IfA, Edinburgh), Ofer
Lahav (IoA, Cambridge) | Massive Lossless Data Compression and Multiple Parameter Estimation from
Galaxy Spectra | Minor modifications to match revised version accepted by MNRAS | Mon.Not.Roy.Astron.Soc. 317 (2000) 965 | 10.1046/j.1365-8711.2000.03692.x | null | astro-ph math.RA physics.data-an | null | We present a method for radical linear compression of datasets where the data
are dependent on some number $M$ of parameters. We show that, if the noise in
the data is independent of the parameters, we can form $M$ linear combinations
of the data which contain as much information about all the parameters as the
entire dataset, in the sense that the Fisher information matrices are
identical; i.e. the method is lossless. We explore how these compressed numbers
fare when the noise is dependent on the parameters, and show that the method,
although not precisely lossless, increases errors by a very modest factor. The
method is general, but we illustrate it with a problem for which it is
well-suited: galaxy spectra, whose data typically consist of $\sim 10^3$
fluxes, and whose properties are set by a handful of parameters such as age,
brightness and a parametrised star formation history. The spectra are reduced
to a small number of data, which are connected to the physical processes
entering the problem. This data compression offers the possibility of a large
increase in the speed of determining physical parameters. This is an important
consideration as datasets of galaxy spectra reach $10^6$ in size, and the
complexity of model spectra increases. In addition to this practical advantage,
the compressed data may offer a classification scheme for galaxy spectra which
is based rather directly on physical processes.
| [
{
"version": "v1",
"created": "Sat, 6 Nov 1999 01:01:09 GMT"
},
{
"version": "v2",
"created": "Tue, 23 May 2000 10:29:36 GMT"
}
] | 2009-10-31T00:00:00 | [
[
"Heavens",
"Alan",
"",
"IfA, Edinburgh"
],
[
"Jimenez",
"Raul",
"",
"IfA, Edinburgh"
],
[
"Lahav",
"Ofer",
"",
"IoA, Cambridge"
]
] | TITLE: Massive Lossless Data Compression and Multiple Parameter Estimation from
Galaxy Spectra
ABSTRACT: We present a method for radical linear compression of datasets where the data
are dependent on some number $M$ of parameters. We show that, if the noise in
the data is independent of the parameters, we can form $M$ linear combinations
of the data which contain as much information about all the parameters as the
entire dataset, in the sense that the Fisher information matrices are
identical; i.e. the method is lossless. We explore how these compressed numbers
fare when the noise is dependent on the parameters, and show that the method,
although not precisely lossless, increases errors by a very modest factor. The
method is general, but we illustrate it with a problem for which it is
well-suited: galaxy spectra, whose data typically consist of $\sim 10^3$
fluxes, and whose properties are set by a handful of parameters such as age,
brightness and a parametrised star formation history. The spectra are reduced
to a small number of data, which are connected to the physical processes
entering the problem. This data compression offers the possibility of a large
increase in the speed of determining physical parameters. This is an important
consideration as datasets of galaxy spectra reach $10^6$ in size, and the
complexity of model spectra increases. In addition to this practical advantage,
the compressed data may offer a classification scheme for galaxy spectra which
is based rather directly on physical processes.
| no_new_dataset | 0.940188 |
0806.3284 | Daniel M. Gordon | Daniel M. Gordon, Victor Miller and Peter Ostapenko | Optimal hash functions for approximate closest pairs on the n-cube | IEEE Transactions on Information Theory, to appear | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One way to find closest pairs in large datasets is to use hash functions. In
recent years locality-sensitive hash functions for various metrics have been
given: projecting an n-cube onto k bits is a simple hash function that performs
well. In this paper we investigate alternatives to projection. For various
parameters hash functions given by complete decoding algorithms for codes work
better, and asymptotically random codes perform better than projection.
| [
{
"version": "v1",
"created": "Fri, 20 Jun 2008 17:19:44 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Oct 2009 15:40:02 GMT"
}
] | 2009-10-15T00:00:00 | [
[
"Gordon",
"Daniel M.",
""
],
[
"Miller",
"Victor",
""
],
[
"Ostapenko",
"Peter",
""
]
] | TITLE: Optimal hash functions for approximate closest pairs on the n-cube
ABSTRACT: One way to find closest pairs in large datasets is to use hash functions. In
recent years locality-sensitive hash functions for various metrics have been
given: projecting an n-cube onto k bits is a simple hash function that performs
well. In this paper we investigate alternatives to projection. For various
parameters hash functions given by complete decoding algorithms for codes work
better, and asymptotically random codes perform better than projection.
| no_new_dataset | 0.951414 |
0910.2279 | Chunhua Shen | Chunhua Shen, Junae Kim, Lei Wang, Anton van den Hengel | Positive Semidefinite Metric Learning with Boosting | 11 pages, Twenty-Third Annual Conference on Neural Information
Processing Systems (NIPS 2009), Vancouver, Canada | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The learning of appropriate distance metrics is a critical problem in image
classification and retrieval. In this work, we propose a boosting-based
technique, termed \BoostMetric, for learning a Mahalanobis distance metric. One
of the primary difficulties in learning such a metric is to ensure that the
Mahalanobis matrix remains positive semidefinite. Semidefinite programming is
sometimes used to enforce this constraint, but does not scale well.
\BoostMetric is instead based on a key observation that any positive
semidefinite matrix can be decomposed into a linear positive combination of
trace-one rank-one matrices. \BoostMetric thus uses rank-one positive
semidefinite matrices as weak learners within an efficient and scalable
boosting-based learning process. The resulting method is easy to implement,
does not require tuning, and can accommodate various types of constraints.
Experiments on various datasets show that the proposed algorithm compares
favorably to those state-of-the-art methods in terms of classification accuracy
and running time.
| [
{
"version": "v1",
"created": "Tue, 13 Oct 2009 00:54:31 GMT"
}
] | 2009-10-14T00:00:00 | [
[
"Shen",
"Chunhua",
""
],
[
"Kim",
"Junae",
""
],
[
"Wang",
"Lei",
""
],
[
"Hengel",
"Anton van den",
""
]
] | TITLE: Positive Semidefinite Metric Learning with Boosting
ABSTRACT: The learning of appropriate distance metrics is a critical problem in image
classification and retrieval. In this work, we propose a boosting-based
technique, termed \BoostMetric, for learning a Mahalanobis distance metric. One
of the primary difficulties in learning such a metric is to ensure that the
Mahalanobis matrix remains positive semidefinite. Semidefinite programming is
sometimes used to enforce this constraint, but does not scale well.
\BoostMetric is instead based on a key observation that any positive
semidefinite matrix can be decomposed into a linear positive combination of
trace-one rank-one matrices. \BoostMetric thus uses rank-one positive
semidefinite matrices as weak learners within an efficient and scalable
boosting-based learning process. The resulting method is easy to implement,
does not require tuning, and can accommodate various types of constraints.
Experiments on various datasets show that the proposed algorithm compares
favorably to those state-of-the-art methods in terms of classification accuracy
and running time.
| no_new_dataset | 0.945298 |
0910.2405 | Maya Ramanath | Maya Ramanath, Kondreddi Sarath Kumar, Georgiana Ifrim | Generating Concise and Readable Summaries of XML Documents | null | null | null | MPI-I-2009-5-002 | cs.IR cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | XML has become the de-facto standard for data representation and exchange,
resulting in large scale repositories and warehouses of XML data. In order for
users to understand and explore these large collections, a summarized, bird's
eye view of the available data is a necessity. In this paper, we are interested
in semantic XML document summaries which present the "important" information
available in an XML document to the user. In the best case, such a summary is a
concise replacement for the original document itself. At the other extreme, it
should at least help the user make an informed choice as to the relevance of
the document to his needs. In this paper, we address the two main issues which
arise in producing such meaningful and concise summaries: i) which tags or text
units are important and should be included in the summary, ii) how to generate
summaries of different sizes. We conduct user
studies with different real-life datasets and show that our methods are useful
and effective in practice.
| [
{
"version": "v1",
"created": "Tue, 13 Oct 2009 14:19:01 GMT"
}
] | 2009-10-14T00:00:00 | [
[
"Ramanath",
"Maya",
""
],
[
"Kumar",
"Kondreddi Sarath",
""
],
[
"Ifrim",
"Georgiana",
""
]
] | TITLE: Generating Concise and Readable Summaries of XML Documents
ABSTRACT: XML has become the de-facto standard for data representation and exchange,
resulting in large scale repositories and warehouses of XML data. In order for
users to understand and explore these large collections, a summarized, bird's
eye view of the available data is a necessity. In this paper, we are interested
in semantic XML document summaries which present the "important" information
available in an XML document to the user. In the best case, such a summary is a
concise replacement for the original document itself. At the other extreme, it
should at least help the user make an informed choice as to the relevance of
the document to his needs. In this paper, we address the two main issues which
arise in producing such meaningful and concise summaries: i) which tags or text
units are important and should be included in the summary, ii) how to generate
summaries of different sizes. We conduct user
studies with different real-life datasets and show that our methods are useful
and effective in practice.
| no_new_dataset | 0.950457 |
0910.1849 | N Vunka Jungum | Sanjay Silakari, Mahesh Motwani and Manish Maheshwari | Color Image Clustering using Block Truncation Algorithm | " International Journal of Computer Science Issues, IJCSI, Volume 4,
Issue 2, pp31-35, September 2009" | S. Silakari, M. Motwani and M. Maheshwari," Color Image Clustering
using Block Truncation Algorithm", International Journal of Computer Science
Issues, IJCSI, Volume 4, Issue 2, pp31-35, September 2009 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the advancement in image capturing devices, image data is being generated
at high volume. If images are analyzed properly, they can reveal useful
information to human users. Content-based image retrieval addresses the
problem of retrieving images relevant to the user's needs from image databases on
the basis of low-level visual features that can be derived from the images.
Grouping images into meaningful categories to reveal useful information is a
challenging and important problem. Clustering is a data mining technique to
group a set of unsupervised data based on the conceptual clustering principle:
maximizing the intraclass similarity and minimizing the interclass similarity.
The proposed framework focuses on color as a feature. Color Moment and Block
Truncation Coding (BTC) are used to extract features for the image dataset. An
experimental study using the K-Means clustering algorithm is conducted to group the
image dataset into various clusters.
| [
{
"version": "v1",
"created": "Fri, 9 Oct 2009 20:21:23 GMT"
}
] | 2009-10-13T00:00:00 | [
[
"Silakari",
"Sanjay",
""
],
[
"Motwani",
"Mahesh",
""
],
[
"Maheshwari",
"Manish",
""
]
] | TITLE: Color Image Clustering using Block Truncation Algorithm
ABSTRACT: With the advancement in image capturing devices, image data is being generated
at high volume. If images are analyzed properly, they can reveal useful
information to human users. Content-based image retrieval addresses the
problem of retrieving images relevant to the user's needs from image databases on
the basis of low-level visual features that can be derived from the images.
Grouping images into meaningful categories to reveal useful information is a
challenging and important problem. Clustering is a data mining technique to
group a set of unsupervised data based on the conceptual clustering principle:
maximizing the intraclass similarity and minimizing the interclass similarity.
The proposed framework focuses on color as a feature. Color Moment and Block
Truncation Coding (BTC) are used to extract features for the image dataset. An
experimental study using the K-Means clustering algorithm is conducted to group the
image dataset into various clusters.
| no_new_dataset | 0.95018 |
0910.1650 | Dingyin Xia | Dingyin Xia, Fei Wu, Xuqing Zhang, Yueting Zhuang | Local and global approaches of affinity propagation clustering for large
scale data | 9 pages | J Zhejiang Univ Sci A 2008 9(10):1373-1381 | 10.1631/jzus.A0720058 | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently a new clustering algorithm called 'affinity propagation' (AP) has
been proposed, which efficiently clustered sparsely related data by passing
messages between data points. However, we want to cluster large scale data
where the similarities are not sparse in many cases. This paper presents two
variants of AP for grouping large scale data with a dense similarity matrix.
The local approach is partition affinity propagation (PAP) and the global
method is landmark affinity propagation (LAP). PAP passes messages in the
subsets of data first and then merges them as the number of initial step of
iterations; it can effectively reduce the number of iterations of clustering.
LAP passes messages between the landmark data points first and then clusters
non-landmark data points; it is a large global approximation method to speed up
clustering. Experiments are conducted on many datasets, such as random data
points, manifold subspaces, images of faces and Chinese calligraphy, and the
results demonstrate that the two approaches are feasible and practicable.
| [
{
"version": "v1",
"created": "Fri, 9 Oct 2009 04:55:41 GMT"
}
] | 2009-10-12T00:00:00 | [
[
"Xia",
"Dingyin",
""
],
[
"Wu",
"Fei",
""
],
[
"Zhang",
"Xuqing",
""
],
[
"Zhuang",
"Yueting",
""
]
] | TITLE: Local and global approaches of affinity propagation clustering for large
scale data
ABSTRACT: Recently a new clustering algorithm called 'affinity propagation' (AP) has
been proposed, which efficiently clustered sparsely related data by passing
messages between data points. However, we want to cluster large scale data
where the similarities are not sparse in many cases. This paper presents two
variants of AP for grouping large scale data with a dense similarity matrix.
The local approach is partition affinity propagation (PAP) and the global
method is landmark affinity propagation (LAP). PAP passes messages in the
subsets of data first and then merges them as the number of initial step of
iterations; it can effectively reduce the number of iterations of clustering.
LAP passes messages between the landmark data points first and then clusters
non-landmark data points; it is a large global approximation method to speed up
clustering. Experiments are conducted on many datasets, such as random data
points, manifold subspaces, images of faces and Chinese calligraphy, and the
results demonstrate that the two approaches are feasible and practicable.
| no_new_dataset | 0.954308 |
0910.0820 | Rdv Ijcsis | Adhistya Erna Permanasari, Dayang Rohaya Awang Rambli, Dhanapal Durai
Dominic | Prediction of Zoonosis Incidence in Human using Seasonal Auto Regressive
Integrated Moving Average (SARIMA) | 8 pages IEEE format, International Journal of Computer Science and
Information Security, IJCSIS 2009, ISSN 1947 5500, Impact Factor 0.423,
http://sites.google.com/site/ijcsis/ | International Journal of Computer Science and Information
Security, IJCSIS, Vol. 5, No. 1, pp. 103-110, September 2009, USA | null | ISSN 1947 5500 | cs.LG q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Zoonosis refers to the transmission of infectious diseases from animals to
humans. The increasing number of zoonosis incidences causes great losses of
lives, both human and animal, and also has a social and economic impact. This
motivates the development of a system that can predict the future number of
zoonosis occurrences in humans. This paper analyses and presents the use of
the Seasonal Autoregressive Integrated Moving Average (SARIMA) method for
developing a forecasting model that is able to support and provide predictions
of the number of human zoonosis incidences. The dataset for model development
was collected as a time series of monthly human tuberculosis occurrences in the
United States, comprising fourteen years of data obtained from a study
published by the Centers for Disease Control and Prevention (CDC). Several
trial SARIMA models were compared to obtain the most appropriate model. Then,
diagnostic tests were used to determine model validity. The results showed that
SARIMA(9,0,14)(12,1,24)12 is the best-fitting model. In terms of accuracy, the
selected model achieved a Theil's U value of 0.062, implying that the model was
highly accurate and a close fit. This also indicated the capability of the
final model to closely represent and make predictions based on the historical
tuberculosis dataset.
| [
{
"version": "v1",
"created": "Mon, 5 Oct 2009 18:36:11 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Oct 2009 11:05:26 GMT"
}
] | 2009-10-08T00:00:00 | [
[
"Permanasari",
"Adhistya Erna",
""
],
[
"Rambli",
"Dayang Rohaya Awang",
""
],
[
"Dominic",
"Dhanapal Durai",
""
]
] | TITLE: Prediction of Zoonosis Incidence in Human using Seasonal Auto Regressive
Integrated Moving Average (SARIMA)
ABSTRACT: Zoonosis refers to the transmission of infectious diseases from animals to
humans. The increasing number of zoonosis incidences causes great losses of
lives, both human and animal, and also has a social and economic impact. This
motivates the development of a system that can predict the future number of
zoonosis occurrences in humans. This paper analyses and presents the use of
the Seasonal Autoregressive Integrated Moving Average (SARIMA) method for
developing a forecasting model that is able to support and provide predictions
of the number of human zoonosis incidences. The dataset for model development
was collected as a time series of monthly human tuberculosis occurrences in the
United States, comprising fourteen years of data obtained from a study
published by the Centers for Disease Control and Prevention (CDC). Several
trial SARIMA models were compared to obtain the most appropriate model. Then,
diagnostic tests were used to determine model validity. The results showed that
SARIMA(9,0,14)(12,1,24)12 is the best-fitting model. In terms of accuracy, the
selected model achieved a Theil's U value of 0.062, implying that the model was
highly accurate and a close fit. This also indicated the capability of the
final model to closely represent and make predictions based on the historical
tuberculosis dataset.
| no_new_dataset | 0.944689 |
0910.1273 | Fabien Moutarde | Taoufik Bdiri (CAOR), Fabien Moutarde (CAOR), Nicolas Bourdis (CAOR),
Bruno Steux (CAOR) | Adaboost with "Keypoint Presence Features" for Real-Time Vehicle Visual
Detection | null | 16th World Congress on Intelligent Transport Systems (ITSwc'2009),
Su\`ede (2009) | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present promising results for real-time vehicle visual detection, obtained
with adaBoost using new original "keypoints presence features". These
weak-classifiers produce a boolean response based on the presence or absence in the
tested image of a "keypoint" (~ a SURF interest point) with a descriptor
sufficiently similar (i.e. within a given distance) to a reference descriptor
characterizing the feature. A first experiment was conducted on a public image
dataset containing lateral-viewed cars, yielding 95% recall with 95% precision
on the test set. Moreover, analysis of the positions of adaBoost-selected keypoints
shows that they correspond to a specific part of the object category (such as
"wheel" or "side skirt") and thus have a "semantic" meaning.
| [
{
"version": "v1",
"created": "Wed, 7 Oct 2009 14:26:01 GMT"
}
] | 2009-10-08T00:00:00 | [
[
"Bdiri",
"Taoufik",
"",
"CAOR"
],
[
"Moutarde",
"Fabien",
"",
"CAOR"
],
[
"Bourdis",
"Nicolas",
"",
"CAOR"
],
[
"Steux",
"Bruno",
"",
"CAOR"
]
] | TITLE: Adaboost with "Keypoint Presence Features" for Real-Time Vehicle Visual
Detection
ABSTRACT: We present promising results for real-time vehicle visual detection, obtained
with adaBoost using new original "keypoints presence features". These
weak-classifiers produce a boolean response based on the presence or absence in the
tested image of a "keypoint" (~ a SURF interest point) with a descriptor
sufficiently similar (i.e. within a given distance) to a reference descriptor
characterizing the feature. A first experiment was conducted on a public image
dataset containing lateral-viewed cars, yielding 95% recall with 95% precision
on the test set. Moreover, analysis of the positions of adaBoost-selected keypoints
shows that they correspond to a specific part of the object category (such as
"wheel" or "side skirt") and thus have a "semantic" meaning.
| no_new_dataset | 0.943086 |
0910.1294 | Fabien Moutarde | Taoufik Bdiri (CAOR), Fabien Moutarde (CAOR), Bruno Steux (CAOR) | Visual object categorization with new keypoint-based adaBoost features | null | IEEE Symposium on Intelligent Vehicles (IV'2009), XiAn : China
(2009) | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present promising results for visual object categorization, obtained with
adaBoost using new original "keypoints-based features". These weak-classifiers
produce a boolean response based on the presence or absence in the tested image of
a "keypoint" (a kind of SURF interest point) with a descriptor sufficiently
similar (i.e. within a given distance) to a reference descriptor characterizing
the feature. A first experiment was conducted on a public image dataset
containing lateral-viewed cars, yielding 95% recall with 95% precision on the
test set. Preliminary tests on a small subset of a pedestrian database also give
promising 97% recall with 92% precision, which shows the generality of our new
family of features. Moreover, analysis of the positions of adaBoost-selected
keypoints shows that they correspond to a specific part of the object category
(such as "wheel" or "side skirt" in the case of lateral cars) and thus have a
"semantic" meaning. We also made a first test on video for detecting vehicles
from adaBoost-selected keypoints filtered in real-time from all detected
keypoints.
| [
{
"version": "v1",
"created": "Wed, 7 Oct 2009 15:42:30 GMT"
}
] | 2009-10-08T00:00:00 | [
[
"Bdiri",
"Taoufik",
"",
"CAOR"
],
[
"Moutarde",
"Fabien",
"",
"CAOR"
],
[
"Steux",
"Bruno",
"",
"CAOR"
]
] | TITLE: Visual object categorization with new keypoint-based adaBoost features
ABSTRACT: We present promising results for visual object categorization, obtained with
adaBoost using new original "keypoints-based features". These weak-classifiers
produce a boolean response based on the presence or absence in the tested image of
a "keypoint" (a kind of SURF interest point) with a descriptor sufficiently
similar (i.e. within a given distance) to a reference descriptor characterizing
the feature. A first experiment was conducted on a public image dataset
containing lateral-viewed cars, yielding 95% recall with 95% precision on the
test set. Preliminary tests on a small subset of a pedestrian database also give
promising 97% recall with 92% precision, which shows the generality of our new
family of features. Moreover, analysis of the positions of adaBoost-selected
keypoints shows that they correspond to a specific part of the object category
(such as "wheel" or "side skirt" in the case of lateral cars) and thus have a
"semantic" meaning. We also made a first test on video for detecting vehicles
from adaBoost-selected keypoints filtered in real-time from all detected
keypoints.
| no_new_dataset | 0.946151 |
physics/0701339 | David Smith | David M.D. Smith, Jukka-Pekka Onnela, Neil F. Johnson | Accelerating networks | 12 pages, 8 figures | New J. Phys. 9 181 (2007) | 10.1088/1367-2630/9/6/181 | null | physics.soc-ph cond-mat.dis-nn | null | Evolving out-of-equilibrium networks have been under intense scrutiny
recently. In many real-world settings the number of links added per new node is
not constant but depends on the time at which the node is introduced in the
system. This simple idea gives rise to the concept of accelerating networks,
for which we review an existing definition and -- after finding it somewhat
constrictive -- offer a new definition. The new definition provided here views
network acceleration as a time dependent property of a given system, as opposed
to being a property of the specific algorithm applied to grow the network. The
definition also covers both unweighted and weighted networks. As time-stamped
network data becomes increasingly available, the proposed measures may be
easily carried out on empirical datasets. As a simple case study we apply the
concepts to study the evolution of three different instances of Wikipedia,
namely, those in English, German, and Japanese, and find that the networks
undergo different acceleration regimes in their evolution.
| [
{
"version": "v1",
"created": "Tue, 30 Jan 2007 14:53:48 GMT"
}
] | 2009-10-08T00:00:00 | [
[
"Smith",
"David M. D.",
""
],
[
"Onnela",
"Jukka-Pekka",
""
],
[
"Johnson",
"Neil F.",
""
]
] | TITLE: Accelerating networks
ABSTRACT: Evolving out-of-equilibrium networks have been under intense scrutiny
recently. In many real-world settings the number of links added per new node is
not constant but depends on the time at which the node is introduced in the
system. This simple idea gives rise to the concept of accelerating networks,
for which we review an existing definition and -- after finding it somewhat
constrictive -- offer a new definition. The new definition provided here views
network acceleration as a time dependent property of a given system, as opposed
to being a property of the specific algorithm applied to grow the network. The
definition also covers both unweighted and weighted networks. As time-stamped
network data becomes increasingly available, the proposed measures may be
easily carried out on empirical datasets. As a simple case study we apply the
concepts to study the evolution of three different instances of Wikipedia,
namely, those in English, German, and Japanese, and find that the networks
undergo different acceleration regimes in their evolution.
| no_new_dataset | 0.947381 |
0910.0542 | Om Patri | Om Prasad Patri, Amit Kumar Mishra | Pre-processing in AI based Prediction of QSARs | 6 pages, 12 figures, In the Proceedings of the 12th International
Conference on Information Technology, ICIT 2009, December 21-24 2009,
Bhubaneswar, India | null | null | null | cs.AI cs.NE q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Machine learning, data mining and artificial intelligence (AI) based methods
have been used to determine the relations between chemical structure and
biological activity, called quantitative structure activity relationships
(QSARs) for the compounds. Pre-processing of the dataset, which includes the
mapping from a large number of molecular descriptors in the original high
dimensional space to a small number of components in the lower dimensional
space while retaining the features of the original data, is the first step in
this process. A common practice is to use a mapping method for a dataset
without prior analysis. This pre-analysis has been stressed in our work by
applying it to two important classes of QSAR prediction problems: drug design
(predicting anti-HIV-1 activity) and predictive toxicology (estimating
hepatocarcinogenicity of chemicals). We apply one linear and two nonlinear
mapping methods on each of the datasets. Based on this analysis, we conclude
the nature of the inherent relationships between the elements of each dataset,
and hence, the mapping method best suited for it. We also show that proper
preprocessing can help us in choosing the right feature extraction tool as well
as give an insight about the type of classifier pertinent for the given
problem.
| [
{
"version": "v1",
"created": "Sat, 3 Oct 2009 18:46:00 GMT"
}
] | 2009-10-06T00:00:00 | [
[
"Patri",
"Om Prasad",
""
],
[
"Mishra",
"Amit Kumar",
""
]
] | TITLE: Pre-processing in AI based Prediction of QSARs
ABSTRACT: Machine learning, data mining and artificial intelligence (AI) based methods
have been used to determine the relations between chemical structure and
biological activity, called quantitative structure activity relationships
(QSARs) for the compounds. Pre-processing of the dataset, which includes the
mapping from a large number of molecular descriptors in the original high
dimensional space to a small number of components in the lower dimensional
space while retaining the features of the original data, is the first step in
this process. A common practice is to use a mapping method for a dataset
without prior analysis. This pre-analysis has been stressed in our work by
applying it to two important classes of QSAR prediction problems: drug design
(predicting anti-HIV-1 activity) and predictive toxicology (estimating
hepatocarcinogenicity of chemicals). We apply one linear and two nonlinear
mapping methods on each of the datasets. Based on this analysis, we conclude
the nature of the inherent relationships between the elements of each dataset,
and hence, the mapping method best suited for it. We also show that proper
preprocessing can help us in choosing the right feature extraction tool as well
as give an insight about the type of classifier pertinent for the given
problem.
| no_new_dataset | 0.949809 |
0907.3426 | Theodore Alexandrov | Theodore Alexandrov, Klaus Steinhorst, Oliver Keszoecze, Stefan
Schiffler | SparseCodePicking: feature extraction in mass spectrometry using sparse
coding algorithms | 10 pages, 6 figures | null | null | null | stat.ML physics.med-ph stat.AP stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mass spectrometry (MS) is an important technique for chemical profiling which
calculates for a sample a high dimensional histogram-like spectrum. A crucial
step of MS data processing is the peak picking which selects peaks containing
information about molecules with high concentrations which are of interest in
an MS investigation. We present a new procedure of the peak picking based on a
sparse coding algorithm. Given a set of spectra of different classes, i.e. with
different positions and heights of the peaks, this procedure can extract peaks
by means of unsupervised learning. Instead of an $l_1$-regularization penalty
term used in the original sparse coding algorithm we propose using an
elastic-net penalty term for better regularization. The evaluation is done by
means of simulation. We show that for a large region of parameters the proposed
peak picking method based on the sparse coding features outperforms a mean
spectrum-based method. Moreover, we demonstrate the procedure applying it to
two real-life datasets.
| [
{
"version": "v1",
"created": "Mon, 20 Jul 2009 15:50:22 GMT"
},
{
"version": "v2",
"created": "Mon, 5 Oct 2009 08:58:10 GMT"
}
] | 2009-10-05T00:00:00 | [
[
"Alexandrov",
"Theodore",
""
],
[
"Steinhorst",
"Klaus",
""
],
[
"Keszoecze",
"Oliver",
""
],
[
"Schiffler",
"Stefan",
""
]
] | TITLE: SparseCodePicking: feature extraction in mass spectrometry using sparse
coding algorithms
ABSTRACT: Mass spectrometry (MS) is an important technique for chemical profiling which
calculates for a sample a high dimensional histogram-like spectrum. A crucial
step of MS data processing is the peak picking which selects peaks containing
information about molecules with high concentrations which are of interest in
an MS investigation. We present a new procedure of the peak picking based on a
sparse coding algorithm. Given a set of spectra of different classes, i.e. with
different positions and heights of the peaks, this procedure can extract peaks
by means of unsupervised learning. Instead of an $l_1$-regularization penalty
term used in the original sparse coding algorithm we propose using an
elastic-net penalty term for better regularization. The evaluation is done by
means of simulation. We show that for a large region of parameters the proposed
peak picking method based on the sparse coding features outperforms a mean
spectrum-based method. Moreover, we demonstrate the procedure applying it to
two real-life datasets.
| no_new_dataset | 0.949623 |
0910.0253 | Ken Bloom | Kenneth Bloom | The CMS Computing System: Successes and Challenges | To be published in the proceedings of DPF-2009, Detroit, MI, July
2009, eConf C090726 | null | null | CMS CR-2009/90 | physics.ins-det hep-ex | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Each LHC experiment will produce datasets with sizes of order one petabyte
per year. All of this data must be stored, processed, transferred, simulated
and analyzed, which requires a computing system of a larger scale than ever
mounted for any particle physics experiment, and possibly for any enterprise in
the world. I discuss how CMS has chosen to address these challenges, focusing
on recent tests of the system that demonstrate the experiment's readiness for
producing physics results with the first LHC data.
| [
{
"version": "v1",
"created": "Thu, 1 Oct 2009 20:01:45 GMT"
}
] | 2009-10-05T00:00:00 | [
[
"Bloom",
"Kenneth",
""
]
] | TITLE: The CMS Computing System: Successes and Challenges
ABSTRACT: Each LHC experiment will produce datasets with sizes of order one petabyte
per year. All of this data must be stored, processed, transferred, simulated
and analyzed, which requires a computing system of a larger scale than ever
mounted for any particle physics experiment, and possibly for any enterprise in
the world. I discuss how CMS has chosen to address these challenges, focusing
on recent tests of the system that demonstrate the experiment's readiness for
producing physics results with the first LHC data.
| no_new_dataset | 0.948251 |
0909.5530 | Xiaokui Xiao | Xiaokui Xiao, Guozhang Wang, Johannes Gehrke | Differential Privacy via Wavelet Transforms | null | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Privacy preserving data publishing has attracted considerable research
interest in recent years. Among the existing solutions, {\em
$\epsilon$-differential privacy} provides one of the strongest privacy
guarantees. Existing data publishing methods that achieve
$\epsilon$-differential privacy, however, offer little data utility. In
particular, if the output dataset is used to answer count queries, the noise in
the query answers can be proportional to the number of tuples in the data,
which renders the results useless.
In this paper, we develop a data publishing technique that ensures
$\epsilon$-differential privacy while providing accurate answers for {\em
range-count queries}, i.e., count queries where the predicate on each attribute
is a range. The core of our solution is a framework that applies {\em wavelet
transforms} on the data before adding noise to it. We present instantiations of
the proposed framework for both ordinal and nominal data, and we provide a
theoretical analysis on their privacy and utility guarantees. In an extensive
experimental study on both real and synthetic data, we show the effectiveness
and efficiency of our solution.
| [
{
"version": "v1",
"created": "Wed, 30 Sep 2009 07:16:38 GMT"
}
] | 2009-10-01T00:00:00 | [
[
"Xiao",
"Xiaokui",
""
],
[
"Wang",
"Guozhang",
""
],
[
"Gehrke",
"Johannes",
""
]
] | TITLE: Differential Privacy via Wavelet Transforms
ABSTRACT: Privacy preserving data publishing has attracted considerable research
interest in recent years. Among the existing solutions, {\em
$\epsilon$-differential privacy} provides one of the strongest privacy
guarantees. Existing data publishing methods that achieve
$\epsilon$-differential privacy, however, offer little data utility. In
particular, if the output dataset is used to answer count queries, the noise in
the query answers can be proportional to the number of tuples in the data,
which renders the results useless.
In this paper, we develop a data publishing technique that ensures
$\epsilon$-differential privacy while providing accurate answers for {\em
range-count queries}, i.e., count queries where the predicate on each attribute
is a range. The core of our solution is a framework that applies {\em wavelet
transforms} on the data before adding noise to it. We present instantiations of
the proposed framework for both ordinal and nominal data, and we provide a
theoretical analysis on their privacy and utility guarantees. In an extensive
experimental study on both real and synthetic data, we show the effectiveness
and efficiency of our solution.
| no_new_dataset | 0.950503 |
0906.4284 | Eric Lerner | Eric J. Lerner | Tolman Test from z = 0.1 to z = 5.5: Preliminary results challenge the
expanding universe model | 12 pages, 4 figures. 2nd Crisis in Cosmology Conference, 7-11
September, 2008, Port Angeles, WA. accepted in Proceedings of the 2nd Crisis
in Cosmology Conference, Astronomical Society of the Pacific Conference
series | null | null | null | physics.gen-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We performed the Tolman surface-brightness test for the expansion of the
universe using a large UV dataset of disk galaxies in a wide range of redshifts
(from 0.03 to 5.7). We combined data for low-z galaxies from GALEX observations
with those for high-z objects from HST UltraDeep Field images. Starting from
the data in publicly-available GALEX and UDF catalogs, we created 6 samples of
galaxies with observations in a rest-frame band centered at 141 nm and 5 with
data from one centered on 225 nm. These bands correspond, respectively, to the
FUV and NUV bands of GALEX for objects at z = 0.1. By maintaining the same
rest-frame wave-band of all observations we greatly minimized the effects of
k-correction and filter transformation. Since SB depends on the absolute
magnitude, all galaxy samples were then matched for the absolute magnitude
range (-17.7 < M(AB) < -19.0) and for mean absolute magnitude. We performed
homogeneous measurements of the magnitude and half-light radius for all the
galaxies in the 11 samples, obtaining the median UV surface brightness for each
sample. We compared the data with two models: 1) The LCDM expanding universe
model with the widely-accepted evolution of galaxy size R \propto H(z)^{-1} and 2) a
simple, Euclidean, non-expanding (ENE) model with the distance given by
d=cz/H0. We found that the ENE model was a significantly better fit to the data
than the LCDM model with galaxy size evolution. While the LCDM model provides a
good fit to the HUDF data alone, there is a 1.2 magnitude difference in the SB
predicted from the model for the GALEX data and observations, a difference at
least 5 times larger than any statistical error. The ENE provides a good fit to
all the data except the two points with z>4.
| [
{
"version": "v1",
"created": "Tue, 23 Jun 2009 15:19:07 GMT"
}
] | 2009-09-30T00:00:00 | [
[
"Lerner",
"Eric J.",
""
]
] | TITLE: Tolman Test from z = 0.1 to z = 5.5: Preliminary results challenge the
expanding universe model
ABSTRACT: We performed the Tolman surface-brightness test for the expansion of the
universe using a large UV dataset of disk galaxies in a wide range of redshifts
(from 0.03 to 5.7). We combined data for low-z galaxies from GALEX observations
with those for high-z objects from HST UltraDeep Field images. Starting from
the data in publicly-available GALEX and UDF catalogs, we created 6 samples of
galaxies with observations in a rest-frame band centered at 141 nm and 5 with
data from one centered on 225 nm. These bands correspond, respectively, to the
FUV and NUV bands of GALEX for objects at z = 0.1. By maintaining the same
rest-frame wave-band of all observations we greatly minimized the effects of
k-correction and filter transformation. Since SB depends on the absolute
magnitude, all galaxy samples were then matched for the absolute magnitude
range (-17.7 < M(AB) < -19.0) and for mean absolute magnitude. We performed
homogeneous measurements of the magnitude and half-light radius for all the
galaxies in the 11 samples, obtaining the median UV surface brightness for each
sample. We compared the data with two models: 1) The LCDM expanding universe
model with the widely-accepted evolution of galaxy size R prop H(z)-1 and 2) a
simple, Euclidean, non-expanding (ENE) model with the distance given by
d=cz/H0. We found that the ENE model was a significantly better fit to the data
than the LCDM model with galaxy size evolution. While the LCDM model provides a
good fit to the HUDF data alone, there is a 1.2 magnitude difference in the SB
predicted from the model for the GALEX data and observations, a difference at
least 5 times larger than any statistical error. The ENE provides a good fit to
all the data except the two points with z>4.
| no_new_dataset | 0.955527 |
0908.3131 | Mehdi Moussaid | Mehdi Moussaid, Dirk Helbing, Simon Garnier, Anders Johansson, Maud
Combe, Guy Theraulaz | Experimental study of the behavioural mechanisms underlying
self-organization in human crowds | null | M. Moussaid et al. (2009) Proceedings of the Royal Society B 276,
2755-2762 | 10.1098/rspb.2009.0405 | null | physics.soc-ph physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In animal societies as well as in human crowds, many observed collective
behaviours result from self-organized processes based on local interactions
among individuals. However, models of crowd dynamics are still lacking a
systematic individual-level experimental verification, and the local mechanisms
underlying the formation of collective patterns are not yet known in detail. We
have conducted a set of well-controlled experiments with pedestrians performing
simple avoidance tasks in order to determine the laws ruling their behaviour
during interactions. The analysis of the large trajectory dataset was used to
compute a behavioural map that describes the average change of the direction
and speed of a pedestrian for various interaction distances and angles. The
experimental results reveal features of the decision process when pedestrians
choose the side on which they evade, and show a side preference that is
amplified by mutual interactions. The predictions of a binary interaction model
based on the above findings were then compared to bidirectional flows of people
recorded in a crowded street. Simulations generate two asymmetric lanes with
opposite directions of motion, in quantitative agreement with our empirical
observations. The knowledge of pedestrian behavioural laws is an important step
ahead in the understanding of the underlying dynamics of crowd behaviour and
allows for reliable predictions of collective pedestrian movements under
natural conditions.
| [
{
"version": "v1",
"created": "Fri, 21 Aug 2009 14:13:48 GMT"
}
] | 2009-09-30T00:00:00 | [
[
"Moussaid",
"Mehdi",
""
],
[
"Helbing",
"Dirk",
""
],
[
"Garnier",
"Simon",
""
],
[
"Johansson",
"Anders",
""
],
[
"Combe",
"Maud",
""
],
[
"Theraulaz",
"Guy",
""
]
] | TITLE: Experimental study of the behavioural mechanisms underlying
self-organization in human crowds
ABSTRACT: In animal societies as well as in human crowds, many observed collective
behaviours result from self-organized processes based on local interactions
among individuals. However, models of crowd dynamics are still lacking a
systematic individual-level experimental verification, and the local mechanisms
underlying the formation of collective patterns are not yet known in detail. We
have conducted a set of well-controlled experiments with pedestrians performing
simple avoidance tasks in order to determine the laws ruling their behaviour
during interactions. The analysis of the large trajectory dataset was used to
compute a behavioural map that describes the average change of the direction
and speed of a pedestrian for various interaction distances and angles. The
experimental results reveal features of the decision process when pedestrians
choose the side on which they evade, and show a side preference that is
amplified by mutual interactions. The predictions of a binary interaction model
based on the above findings were then compared to bidirectional flows of people
recorded in a crowded street. Simulations generate two asymmetric lanes with
opposite directions of motion, in quantitative agreement with our empirical
observations. The knowledge of pedestrian behavioural laws is an important step
ahead in the understanding of the underlying dynamics of crowd behaviour and
allows for reliable predictions of collective pedestrian movements under
natural conditions.
| no_new_dataset | 0.938857 |
0808.3296 | Stevan Harnad | Stevan Harnad | Confirmation Bias and the Open Access Advantage: Some Methodological
Suggestions for the Davis Citation Study | 17 pages, 17 references, 1 table; comment on 0808.2428v1 | null | null | null | cs.DL cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Davis (2008) analyzes citations from 2004-2007 in 11 biomedical journals. 15%
of authors paid to make them Open Access (OA). The outcome is a significant OA
citation Advantage, but a small one (21%). The author infers that the OA
advantage has been shrinking yearly, but the data suggest the opposite. Further
analyses are necessary:
(1) Not just author-choice (paid) OA but Free OA self-archiving needs to be
taken into account rather than being counted as non-OA.
(2) proportion of OA articles per journal per year needs to be reported and
taken into account.
(3) The Journal Impact Factor and the relation between the size of the OA
Advantage article 'citation-bracket' need to be taken into account.
(4) The sample-size for the highest-impact, largest-sample journal analyzed,
PNAS, is restricted and excluded from some of the analyses. The full PNAS
dataset is needed.
(5) The interaction between OA and time, 2004-2007, is based on retrospective
data from a June 2008 total cumulative citation count. The dates of both the
cited articles and the citing articles need to be taken into account.
The author proposes that author self-selection bias is the primary cause
of the observed OA Advantage, but this study does not test this or any of
the other potential causal factors. The author suggests that paid OA is not
worth the cost, per extra citation. But with OA self-archiving both the OA and
the extra citations are free.
| [
{
"version": "v1",
"created": "Mon, 25 Aug 2008 03:36:14 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Aug 2008 17:09:08 GMT"
}
] | 2009-09-29T00:00:00 | [
[
"Harnad",
"Stevan",
""
]
] | TITLE: Confirmation Bias and the Open Access Advantage: Some Methodological
Suggestions for the Davis Citation Study
ABSTRACT: Davis (2008) analyzes citations from 2004-2007 in 11 biomedical journals. 15%
of authors paid to make them Open Access (OA). The outcome is a significant OA
citation Advantage, but a small one (21%). The author infers that the OA
advantage has been shrinking yearly, but the data suggest the opposite. Further
analyses are necessary:
(1) Not just author-choice (paid) OA but Free OA self-archiving needs to be
taken into account rather than being counted as non-OA.
(2) proportion of OA articles per journal per year needs to be reported and
taken into account.
(3) The Journal Impact Factor and the relation between the size of the OA
Advantage article 'citation-bracket' need to be taken into account.
(4) The sample-size for the highest-impact, largest-sample journal analyzed,
PNAS, is restricted and excluded from some of the analyses. The full PNAS
dataset is needed.
(5) The interaction between OA and time, 2004-2007, is based on retrospective
data from a June 2008 total cumulative citation count. The dates of both the
cited articles and the citing articles need to be taken into account.
The author proposes that author self-selection bias is the primary cause
of the observed OA Advantage, but this study does not test this or any of
the other potential causal factors. The author suggests that paid OA is not
worth the cost, per extra citation. But with OA self-archiving both the OA and
the extra citations are free.
| no_new_dataset | 0.955444 |
0811.4013 | Matthew Jackson | Benjamin Golub and Matthew O. Jackson | How Homophily Affects Diffusion and Learning in Networks | Expanded version includes additional empirical analysis | null | null | null | physics.soc-ph physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We examine how three different communication processes operating through
social networks are affected by homophily -- the tendency of individuals to
associate with others similar to themselves. Homophily has no effect if
messages are broadcast or sent via shortest paths; only connection density
matters. In contrast, homophily substantially slows learning based on repeated
averaging of neighbors' information and Markovian diffusion processes such as
the Google random surfer model. Indeed, the latter processes are strongly
affected by homophily but completely independent of connection density,
provided this density exceeds a low threshold. We obtain these results by
establishing new results on the spectra of large random graphs and relating the
spectra to homophily. We conclude by checking the theoretical predictions using
observed high school friendship networks from the Adolescent Health dataset.
| [
{
"version": "v1",
"created": "Tue, 25 Nov 2008 04:40:37 GMT"
},
{
"version": "v2",
"created": "Mon, 9 Feb 2009 05:03:24 GMT"
}
] | 2009-09-29T00:00:00 | [
[
"Golub",
"Benjamin",
""
],
[
"Jackson",
"Matthew O.",
""
]
] | TITLE: How Homophily Affects Diffusion and Learning in Networks
ABSTRACT: We examine how three different communication processes operating through
social networks are affected by homophily -- the tendency of individuals to
associate with others similar to themselves. Homophily has no effect if
messages are broadcast or sent via shortest paths; only connection density
matters. In contrast, homophily substantially slows learning based on repeated
averaging of neighbors' information and Markovian diffusion processes such as
the Google random surfer model. Indeed, the latter processes are strongly
affected by homophily but completely independent of connection density,
provided this density exceeds a low threshold. We obtain these results by
establishing new results on the spectra of large random graphs and relating the
spectra to homophily. We conclude by checking the theoretical predictions using
observed high school friendship networks from the Adolescent Health dataset.
| no_new_dataset | 0.946051 |
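A small numerical companion to the record above: the speed of learning by repeated averaging of neighbours' beliefs is governed by the second eigenvalue of the row-normalized adjacency matrix, and homophily pushes that eigenvalue up while leaving density unchanged. The two-block random-graph construction and the parameter choices below are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def two_group_graph(n, p_in, p_out, rng):
    """Symmetric random graph: two equal groups, edge prob p_in within groups, p_out across."""
    half = n // 2
    same = np.zeros((n, n), dtype=bool)
    same[:half, :half] = True
    same[half:, half:] = True
    prob = np.where(same, p_in, p_out)
    a = (rng.random((n, n)) < prob).astype(float)
    a = np.triu(a, 1)
    return a + a.T

def second_eigenvalue(a):
    """Second-largest |eigenvalue| of the row-stochastic matrix that drives repeated averaging."""
    deg = a.sum(axis=1)
    deg[deg == 0] = 1.0
    t = a / deg[:, None]
    return np.sort(np.abs(np.linalg.eigvals(t)))[-2]

rng = np.random.default_rng(1)
n = 400
for p_in, p_out in [(0.10, 0.10), (0.15, 0.05), (0.19, 0.01)]:
    lam2 = second_eigenvalue(two_group_graph(n, p_in, p_out, rng))
    print(f"p_in={p_in:.2f}  p_out={p_out:.2f}  lambda_2 ~ {lam2:.3f}")
# Expected degree is the same in all three cases, yet lambda_2 rises with homophily,
# i.e. consensus by repeated averaging of neighbours' information slows down.
```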
cs/0211018 | Vladimir Pestov | Vladimir Pestov and Aleksandar Stojmirovic | Indexing schemes for similarity search: an illustrated paradigm | 19 pages, LaTeX with 8 figures, prepared using Fundamenta
Informaticae style file | Fundamenta Informaticae Vol. 70 (2006), No. 4, 367-385 | null | null | cs.DS | null | We suggest a variation of the Hellerstein--Koutsoupias--Papadimitriou
indexability model for datasets equipped with a similarity measure, with the
aim of better understanding the structure of indexing schemes for
similarity-based search and the geometry of similarity workloads. This in
particular provides a unified approach to a great variety of schemes used to
index into metric spaces and facilitates their transfer to more general
similarity measures such as quasi-metrics. We discuss links between performance
of indexing schemes and high-dimensional geometry. The concepts and results are
illustrated on a very large concrete dataset of peptide fragments equipped with
a biologically significant similarity measure.
| [
{
"version": "v1",
"created": "Thu, 14 Nov 2002 19:10:16 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Oct 2005 21:06:17 GMT"
}
] | 2009-09-29T00:00:00 | [
[
"Pestov",
"Vladimir",
""
],
[
"Stojmirovic",
"Aleksandar",
""
]
] | TITLE: Indexing schemes for similarity search: an illustrated paradigm
ABSTRACT: We suggest a variation of the Hellerstein--Koutsoupias--Papadimitriou
indexability model for datasets equipped with a similarity measure, with the
aim of better understanding the structure of indexing schemes for
similarity-based search and the geometry of similarity workloads. This in
particular provides a unified approach to a great variety of schemes used to
index into metric spaces and facilitates their transfer to more general
similarity measures such as quasi-metrics. We discuss links between performance
of indexing schemes and high-dimensional geometry. The concepts and results are
illustrated on a very large concrete dataset of peptide fragments equipped with
a biologically significant similarity measure.
| new_dataset | 0.964954 |
cs/9503102 | null | P. D. Turney | Cost-Sensitive Classification: Empirical Evaluation of a Hybrid Genetic
Decision Tree Induction Algorithm | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 2, (1995),
369-409 | null | null | cs.AI | null | This paper introduces ICET, a new algorithm for cost-sensitive
classification. ICET uses a genetic algorithm to evolve a population of biases
for a decision tree induction algorithm. The fitness function of the genetic
algorithm is the average cost of classification when using the decision tree,
including both the costs of tests (features, measurements) and the costs of
classification errors. ICET is compared here with three other algorithms for
cost-sensitive classification - EG2, CS-ID3, and IDX - and also with C4.5,
which classifies without regard to cost. The five algorithms are evaluated
empirically on five real-world medical datasets. Three sets of experiments are
performed. The first set examines the baseline performance of the five
algorithms on the five datasets and establishes that ICET performs
significantly better than its competitors. The second set tests the robustness
of ICET under a variety of conditions and shows that ICET maintains its
advantage. The third set looks at ICET's search in bias space and discovers a
way to improve the search.
| [
{
"version": "v1",
"created": "Wed, 1 Mar 1995 00:00:00 GMT"
}
] | 2009-09-25T00:00:00 | [
[
"Turney",
"P. D.",
""
]
] | TITLE: Cost-Sensitive Classification: Empirical Evaluation of a Hybrid Genetic
Decision Tree Induction Algorithm
ABSTRACT: This paper introduces ICET, a new algorithm for cost-sensitive
classification. ICET uses a genetic algorithm to evolve a population of biases
for a decision tree induction algorithm. The fitness function of the genetic
algorithm is the average cost of classification when using the decision tree,
including both the costs of tests (features, measurements) and the costs of
classification errors. ICET is compared here with three other algorithms for
cost-sensitive classification - EG2, CS-ID3, and IDX - and also with C4.5,
which classifies without regard to cost. The five algorithms are evaluated
empirically on five real-world medical datasets. Three sets of experiments are
performed. The first set examines the baseline performance of the five
algorithms on the five datasets and establishes that ICET performs
significantly better than its competitors. The second set tests the robustness
of ICET under a variety of conditions and shows that ICET maintains its
advantage. The third set looks at ICET's search in bias space and discovers a
way to improve the search.
| no_new_dataset | 0.943712 |
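The fitness function described above (average cost = test costs plus misclassification costs) is easy to sketch. The snippet below evaluates it for candidate feature subsets and replaces the genetic search over induction biases with random sampling, so it is only a rough stand-in for ICET; it also charges every selected test for every case rather than only the tests on the tree path actually taken.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

def average_cost(feature_mask, X, y, test_costs, error_cost, rng):
    """Cost of using only the selected features: price of the tests plus price of the errors."""
    cols = np.flatnonzero(feature_mask)
    if cols.size == 0:
        return np.inf
    Xtr, Xte, ytr, yte = train_test_split(X[:, cols], y, test_size=0.3,
                                          random_state=int(rng.integers(1 << 30)))
    tree = DecisionTreeClassifier(random_state=0).fit(Xtr, ytr)
    error_rate = (tree.predict(Xte) != yte).mean()
    return test_costs[cols].sum() + error_cost * error_rate

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))
y = (X[:, 0] + X[:, 3] > 0).astype(int)             # toy data: only features 0 and 3 matter
test_costs = rng.uniform(0.5, 3.0, size=8)          # hypothetical per-test costs

# Stand-in for ICET's genetic search: score random feature subsets and keep the cheapest.
best = min(average_cost(rng.random(8) < 0.5, X, y, test_costs, error_cost=10.0, rng=rng)
           for _ in range(20))
print("best average cost found:", round(float(best), 3))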
cs/9701101 | null | D. R. Wilson, T. R. Martinez | Improved Heterogeneous Distance Functions | See http://www.jair.org/ for an online appendix and other files
accompanying this article | Journal of Artificial Intelligence Research, Vol 6, (1997), 1-34 | null | null | cs.AI | null | Instance-based learning techniques typically handle continuous and linear
input values well, but often do not handle nominal input attributes
appropriately. The Value Difference Metric (VDM) was designed to find
reasonable distance values between nominal attribute values, but it largely
ignores continuous attributes, requiring discretization to map continuous
values into nominal values. This paper proposes three new heterogeneous
distance functions, called the Heterogeneous Value Difference Metric (HVDM),
the Interpolated Value Difference Metric (IVDM), and the Windowed Value
Difference Metric (WVDM). These new distance functions are designed to handle
applications with nominal attributes, continuous attributes, or both. In
experiments on 48 applications the new distance metrics achieve higher
classification accuracy on average than three previous distance functions on
those datasets that have both nominal and continuous attributes.
| [
{
"version": "v1",
"created": "Wed, 1 Jan 1997 00:00:00 GMT"
}
] | 2009-09-25T00:00:00 | [
[
"Wilson",
"D. R.",
""
],
[
"Martinez",
"T. R.",
""
]
] | TITLE: Improved Heterogeneous Distance Functions
ABSTRACT: Instance-based learning techniques typically handle continuous and linear
input values well, but often do not handle nominal input attributes
appropriately. The Value Difference Metric (VDM) was designed to find
reasonable distance values between nominal attribute values, but it largely
ignores continuous attributes, requiring discretization to map continuous
values into nominal values. This paper proposes three new heterogeneous
distance functions, called the Heterogeneous Value Difference Metric (HVDM),
the Interpolated Value Difference Metric (IVDM), and the Windowed Value
Difference Metric (WVDM). These new distance functions are designed to handle
applications with nominal attributes, continuous attributes, or both. In
experiments on 48 applications the new distance metrics achieve higher
classification accuracy on average than three previous distance functions on
those datasets that have both nominal and continuous attributes.
| no_new_dataset | 0.951997 |
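A compact sketch of the heterogeneous-distance idea from the record above: continuous attributes contribute a difference normalized by four standard deviations, and nominal attributes contribute a class-conditional value-difference term. The exact normalizations and the handling of unseen values below follow common descriptions of HVDM but should be treated as approximations of the paper's definitions.

```python
import numpy as np
from collections import Counter, defaultdict

def fit_hvdm(X, y, nominal):
    """Precompute per-attribute statistics for an HVDM-style distance.
    X: list of mixed-type records, y: class labels, nominal: set of nominal column indices."""
    classes = sorted(set(y))
    stds, cond = {}, {}          # cond[a][value] = vector of P(class | attribute a = value)
    for a in range(len(X[0])):
        col = [row[a] for row in X]
        if a in nominal:
            per_val = defaultdict(Counter)
            for v, c in zip(col, y):
                per_val[v][c] += 1
            cond[a] = {v: np.array([cnt[c] for c in classes], float) / sum(cnt.values())
                       for v, cnt in per_val.items()}
        else:
            stds[a] = float(np.std(np.asarray(col, float))) or 1.0
    return stds, cond

def hvdm(x, z, stds, cond, nominal):
    total = 0.0
    for a in range(len(x)):
        if a in nominal:
            pa, pb = cond[a].get(x[a]), cond[a].get(z[a])
            d = 1.0 if pa is None or pb is None else np.sqrt(np.sum((pa - pb) ** 2))
        else:
            d = abs(float(x[a]) - float(z[a])) / (4.0 * stds[a])   # 4-sigma normalization
        total += d * d
    return np.sqrt(total)

X = [(25, "red"), (40, "blue"), (33, "red"), (58, "blue")]   # (age, colour)
y = [0, 1, 0, 1]
stds, cond = fit_hvdm(X, y, nominal={1})
print(hvdm((30, "red"), (55, "blue"), stds, cond, nominal={1}))
```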
cs/9803102 | null | A. Moore, M. S. Lee | Cached Sufficient Statistics for Efficient Machine Learning with Large
Datasets | See http://www.jair.org/ for any accompanying files | Journal of Artificial Intelligence Research, Vol 8, (1998), 67-91 | null | null | cs.AI | null | This paper introduces new algorithms and data structures for quick counting
for machine learning datasets. We focus on the counting task of constructing
contingency tables, but our approach is also applicable to counting the number
of records in a dataset that match conjunctive queries. Subject to certain
assumptions, the costs of these operations can be shown to be independent of
the number of records in the dataset and loglinear in the number of non-zero
entries in the contingency table. We provide a very sparse data structure, the
ADtree, to minimize memory use. We provide analytical worst-case bounds for
this structure for several models of data distribution. We empirically
demonstrate that tractably-sized data structures can be produced for large
real-world datasets by (a) using a sparse tree structure that never allocates
memory for counts of zero, (b) never allocating memory for counts that can be
deduced from other counts, and (c) not bothering to expand the tree fully near
its leaves. We show how the ADtree can be used to accelerate Bayes net
structure finding algorithms, rule learning algorithms, and feature selection
algorithms, and we provide a number of empirical results comparing ADtree
methods against traditional direct counting approaches. We also discuss the
possible uses of ADtrees in other machine learning methods, and discuss the
merits of ADtrees in comparison with alternative representations such as
kd-trees, R-trees and Frequent Sets.
| [
{
"version": "v1",
"created": "Sun, 1 Mar 1998 00:00:00 GMT"
}
] | 2009-09-25T00:00:00 | [
[
"Moore",
"A.",
""
],
[
"Lee",
"M. S.",
""
]
] | TITLE: Cached Sufficient Statistics for Efficient Machine Learning with Large
Datasets
ABSTRACT: This paper introduces new algorithms and data structures for quick counting
for machine learning datasets. We focus on the counting task of constructing
contingency tables, but our approach is also applicable to counting the number
of records in a dataset that match conjunctive queries. Subject to certain
assumptions, the costs of these operations can be shown to be independent of
the number of records in the dataset and loglinear in the number of non-zero
entries in the contingency table. We provide a very sparse data structure, the
ADtree, to minimize memory use. We provide analytical worst-case bounds for
this structure for several models of data distribution. We empirically
demonstrate that tractably-sized data structures can be produced for large
real-world datasets by (a) using a sparse tree structure that never allocates
memory for counts of zero, (b) never allocating memory for counts that can be
deduced from other counts, and (c) not bothering to expand the tree fully near
its leaves. We show how the ADtree can be used to accelerate Bayes net
structure finding algorithms, rule learning algorithms, and feature selection
algorithms, and we provide a number of empirical results comparing ADtree
methods against traditional direct counting approaches. We also discuss the
possible uses of ADtrees in other machine learning methods, and discuss the
merits of ADtrees in comparison with alternative representations such as
kd-trees, R-trees and Frequent Sets.
| no_new_dataset | 0.946745 |
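To make the "cached sufficient statistics" idea concrete, here is a toy in which conjunctive-query counts are memoized and reused to build contingency tables. A real ADtree is a sparse tree that also avoids storing counts deducible from others, so this is only a minimal illustration of the caching principle, not the data structure itself.

```python
from functools import lru_cache

# Toy dataset of symbolic records: (smoker, blood_pressure, age_group).
RECORDS = [
    ("yes", "high", "old"), ("no", "high", "old"), ("yes", "low", "young"),
    ("yes", "high", "young"), ("no", "low", "old"), ("yes", "low", "old"),
]

@lru_cache(maxsize=None)
def count(query):
    """Number of records matching a conjunctive query ((attribute_index, value), ...).
    The cache plays the role of the tree's stored counts; the real ADtree is a sparse
    structure that also skips counts deducible from others (e.g. by subtraction)."""
    return sum(all(rec[i] == v for i, v in query) for rec in RECORDS)

def contingency_table(attr_i, attr_j):
    """Two-way contingency table assembled from cached conjunctive counts."""
    vals_i = sorted({r[attr_i] for r in RECORDS})
    vals_j = sorted({r[attr_j] for r in RECORDS})
    return {(a, b): count(((attr_i, a), (attr_j, b))) for a in vals_i for b in vals_j}

print(contingency_table(0, 2))   # smoker x age_group counts; repeated calls reuse the cache
```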
0909.3609 | Vinay Jethava | Vinay Jethava, Krishnan Suresh, Chiranjib Bhattacharyya, Ramesh
Hariharan | Randomized Algorithms for Large scale SVMs | 17 pages, Submitted to Machine Learning journal (October 2008) -
under revision | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a randomized algorithm for training Support vector machines(SVMs)
on large datasets. By using ideas from Random projections we show that the
combinatorial dimension of SVMs is $O({log} n)$ with high probability. This
estimate of combinatorial dimension is used to derive an iterative algorithm,
called RandSVM, which at each step calls an existing solver to train SVMs on a
randomly chosen subset of size $O({log} n)$. The algorithm has probabilistic
guarantees and is capable of training SVMs with Kernels for both classification
and regression problems. Experiments done on synthetic and real life data sets
demonstrate that the algorithm scales up existing SVM learners, without loss of
accuracy.
| [
{
"version": "v1",
"created": "Sat, 19 Sep 2009 23:40:10 GMT"
}
] | 2009-09-22T00:00:00 | [
[
"Jethava",
"Vinay",
""
],
[
"Suresh",
"Krishnan",
""
],
[
"Bhattacharyya",
"Chiranjib",
""
],
[
"Hariharan",
"Ramesh",
""
]
] | TITLE: Randomized Algorithms for Large scale SVMs
ABSTRACT: We propose a randomized algorithm for training Support vector machines(SVMs)
on large datasets. By using ideas from Random projections we show that the
combinatorial dimension of SVMs is $O({log} n)$ with high probability. This
estimate of combinatorial dimension is used to derive an iterative algorithm,
called RandSVM, which at each step calls an existing solver to train SVMs on a
randomly chosen subset of size $O({log} n)$. The algorithm has probabilistic
guarantees and is capable of training SVMs with Kernels for both classification
and regression problems. Experiments done on synthetic and real life data sets
demonstrate that the algorithm scales up existing SVM learners, without loss of
accuracy.
| no_new_dataset | 0.954478 |
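A rough sketch of the sampling loop described above: train on a small random subset, fold in points the current model violates, and repeat. The subset size, violation margin, and stopping rule are illustrative choices, and scikit-learn's SVC stands in for "an existing solver"; none of RandSVM's probabilistic guarantees are reproduced here.

```python
import numpy as np
from sklearn.svm import SVC

def rand_svm(X, y, subset_size, n_rounds=10, rng=None):
    """Train an SVM on a small random subset, repeatedly folding in violated points."""
    rng = rng if rng is not None else np.random.default_rng(0)
    idx = rng.choice(len(X), size=subset_size, replace=False)
    clf = None
    for _ in range(n_rounds):
        clf = SVC(kernel="rbf", C=1.0).fit(X[idx], y[idx])
        signed = np.where(y == clf.classes_[1], 1.0, -1.0)
        margins = clf.decision_function(X) * signed          # functional margin of every point
        violators = np.flatnonzero(margins < 1.0)
        if violators.size == 0:
            break
        extra = rng.choice(violators, size=min(subset_size, violators.size), replace=False)
        idx = np.unique(np.concatenate([idx, extra]))
    return clf

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1.5).astype(int)          # non-linearly separable toy labels
model = rand_svm(X, y, subset_size=int(8 * np.log(n)), rng=rng)
print("training accuracy:", (model.predict(X) == y).mean())
```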
0909.3481 | Pan Hui | Pan Hui, Richard Mortier, Tristan Henderson, Jon Crowcroft | Planet-scale Human Mobility Measurement | 6 pages, 2 figures | null | null | null | cs.NI cs.CY cs.GL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Research into, and design and construction of mobile systems and algorithms
requires access to large-scale mobility data. Unfortunately, the wireless and
mobile research community lacks such data. For instance, the largest available
human contact traces contain only 100 nodes with very sparse connectivity,
limited by experimental logistics. In this paper we pose a challenge to the
community: how can we collect mobility data from billions of human
participants? We re-assert the importance of large-scale datasets in
communication network design, and claim that this could impact fundamental
studies in other academic disciplines. In effect, we argue that planet-scale
mobility measurements can help to save the world. For example, through
understanding large-scale human mobility, we can track and model and contain
the spread of epidemics of various kinds.
| [
{
"version": "v1",
"created": "Fri, 18 Sep 2009 16:27:51 GMT"
}
] | 2009-09-21T00:00:00 | [
[
"Hui",
"Pan",
""
],
[
"Mortier",
"Richard",
""
],
[
"Henderson",
"Tristan",
""
],
[
"Crowcroft",
"Jon",
""
]
] | TITLE: Planet-scale Human Mobility Measurement
ABSTRACT: Research into, and design and construction of mobile systems and algorithms
requires access to large-scale mobility data. Unfortunately, the wireless and
mobile research community lacks such data. For instance, the largest available
human contact traces contain only 100 nodes with very sparse connectivity,
limited by experimental logistics. In this paper we pose a challenge to the
community: how can we collect mobility data from billions of human
participants? We re-assert the importance of large-scale datasets in
communication network design, and claim that this could impact fundamental
studies in other academic disciplines. In effect, we argue that planet-scale
mobility measurements can help to save the world. For example, through
understanding large-scale human mobility, we can track and model and contain
the spread of epidemics of various kinds.
| no_new_dataset | 0.948632 |
0909.3193 | Loet Leydesdorff | Loet Leydesdorff, Felix de Moya-Anegon and Vicente P. Guerrero-Bote | Journal Maps on the Basis of Scopus Data: A comparison with the Journal
Citation Reports of the ISI | Journal of the American Society for Information Science and
Technology (forthcoming) | null | null | null | physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Using the Scopus dataset (1996-2007) a grand matrix of aggregated
journal-journal citations was constructed. This matrix can be compared in terms
of the network structures with the matrix contained in the Journal Citation
Reports (JCR) of the Institute of Scientific Information (ISI). Since the
Scopus database contains a larger number of journals and covers also the
humanities, one would expect richer maps. However, the matrix is in this case
sparser than in the case of the ISI data. This is due to (i) the larger number
of journals covered by Scopus and (ii) the historical record of citations older
than ten years contained in the ISI database. When the data is highly
structured, as in the case of large journals, the maps are comparable, although
one may have to vary a threshold (because of the differences in densities). In
the case of interdisciplinary journals and journals in the social sciences and
humanities, the new database does not add a lot to what is possible with the
ISI databases.
| [
{
"version": "v1",
"created": "Thu, 17 Sep 2009 17:43:55 GMT"
}
] | 2009-09-18T00:00:00 | [
[
"Leydesdorff",
"Loet",
""
],
[
"de Moya-Anegon",
"Felix",
""
],
[
"Guerrero-Bote",
"Vicente P.",
""
]
] | TITLE: Journal Maps on the Basis of Scopus Data: A comparison with the Journal
Citation Reports of the ISI
ABSTRACT: Using the Scopus dataset (1996-2007) a grand matrix of aggregated
journal-journal citations was constructed. This matrix can be compared in terms
of the network structures with the matrix contained in the Journal Citation
Reports (JCR) of the Institute of Scientific Information (ISI). Since the
Scopus database contains a larger number of journals and covers also the
humanities, one would expect richer maps. However, the matrix is in this case
sparser than in the case of the ISI data. This is due to (i) the larger number
of journals covered by Scopus and (ii) the historical record of citations older
than ten years contained in the ISI database. When the data is highly
structured, as in the case of large journals, the maps are comparable, although
one may have to vary a threshold (because of the differences in densities). In
the case of interdisciplinary journals and journals in the social sciences and
humanities, the new database does not add a lot to what is possible with the
ISI databases.
| no_new_dataset | 0.937498 |
0709.1981 | Bin Jiang | Bin Jiang and Chengke Liu | Street-based Topological Representations and Analyses for Predicting
Traffic Flow in GIS | 14 pages, 9 figures, 6 tables, submitted to International Journal of
Geographic Information Science | International Journal of Geographical Information Science, 23(9),
2009, 1119-1137. | 10.1080/13658810701690448 | null | physics.data-an | null | It is well received in the space syntax community that traffic flow is
significantly correlated to a morphological property of streets, which are
represented by axial lines, forming a so called axial map. The correlation
co-efficient (R square value) approaches 0.8 and even a higher value according
to the space syntax literature. In this paper, we study the same issue using
the Hong Kong street network and the Hong Kong Annual Average Daily Traffic
(AADT) datasets, and find surprisingly that street-based topological
representations (or street-street topologies) tend to be better representations
than the axial map. In other words, vehicle flow is correlated to a
morphological property of streets better than that of axial lines. Based on the
finding, we suggest the street-based topological representations as an
alternative GIS representation, and the topological analyses as a new
analytical means for geographic knowledge discovery.
| [
{
"version": "v1",
"created": "Thu, 13 Sep 2007 03:27:23 GMT"
}
] | 2009-09-15T00:00:00 | [
[
"Jiang",
"Bin",
""
],
[
"Liu",
"Chengke",
""
]
] | TITLE: Street-based Topological Representations and Analyses for Predicting
Traffic Flow in GIS
ABSTRACT: It is well received in the space syntax community that traffic flow is
significantly correlated to a morphological property of streets, which are
represented by axial lines, forming a so called axial map. The correlation
co-efficient (R square value) approaches 0.8 and even a higher value according
to the space syntax literature. In this paper, we study the same issue using
the Hong Kong street network and the Hong Kong Annual Average Daily Traffic
(AADT) datasets, and find surprisingly that street-based topological
representations (or street-street topologies) tend to be better representations
than the axial map. In other words, vehicle flow is correlated to a
morphological property of streets better than that of axial lines. Based on the
finding, we suggest the street-based topological representations as an
alternative GIS representation, and the topological analyses as a new
analytical means for geographic knowledge discovery.
| no_new_dataset | 0.956796 |
0909.1766 | Yi Zhang | Yi Zhang (Duke University), Herodotos Herodotou, Jun Yang (Duke) | RIOT: I/O-Efficient Numerical Computing without SQL | CIDR 2009 | null | null | null | cs.DB | http://creativecommons.org/licenses/by/3.0/ | R is a numerical computing environment that is widely popular for statistical
data analysis. Like many such environments, R performs poorly for large
datasets whose sizes exceed that of physical memory. We present our vision of
RIOT (R with I/O Transparency), a system that makes R programs I/O-efficient in
a way transparent to the users. We describe our experience with RIOT-DB, an
initial prototype that uses a relational database system as a backend. Despite
the overhead and inadequacy of generic database systems in handling array data
and numerical computation, RIOT-DB significantly outperforms R in many
large-data scenarios, thanks to a suite of high-level, inter-operation
optimizations that integrate seamlessly into R. While many techniques in RIOT
are inspired by databases (and, for RIOT-DB, realized by a database system),
RIOT users are insulated from anything database related. Compared with previous
approaches that require users to learn new languages and rewrite their programs
to interface with a database, RIOT will, we believe, be easier to adopt by the
majority of the R users.
| [
{
"version": "v1",
"created": "Wed, 9 Sep 2009 18:09:27 GMT"
}
] | 2009-09-15T00:00:00 | [
[
"Zhang",
"Yi",
"",
"Duke University"
],
[
"Herodotou",
"Herodotos",
"",
"Duke"
],
[
"Yang",
"Jun",
"",
"Duke"
]
] | TITLE: RIOT: I/O-Efficient Numerical Computing without SQL
ABSTRACT: R is a numerical computing environment that is widely popular for statistical
data analysis. Like many such environments, R performs poorly for large
datasets whose sizes exceed that of physical memory. We present our vision of
RIOT (R with I/O Transparency), a system that makes R programs I/O-efficient in
a way transparent to the users. We describe our experience with RIOT-DB, an
initial prototype that uses a relational database system as a backend. Despite
the overhead and inadequacy of generic database systems in handling array data
and numerical computation, RIOT-DB significantly outperforms R in many
large-data scenarios, thanks to a suite of high-level, inter-operation
optimizations that integrate seamlessly into R. While many techniques in RIOT
are inspired by databases (and, for RIOT-DB, realized by a database system),
RIOT users are insulated from anything database related. Compared with previous
approaches that require users to learn new languages and rewrite their programs
to interface with a database, RIOT will, we believe, be easier to adopt by the
majority of the R users.
| no_new_dataset | 0.939637 |
0909.2345 | Andri Mirzal M.Sc. | Andri Mirzal | Weblog Clustering in Multilinear Algebra Perspective | 16 pages, 7 figures | International Journal of Information Technology, Vol. 15 No. 1,
2009 | null | null | cs.IR | http://creativecommons.org/licenses/by-nc-sa/3.0/ | This paper describes a clustering method to group the most similar and
important weblogs with their descriptive shared words by using a technique from
multilinear algebra known as PARAFAC tensor decomposition. The proposed method
first creates labeled-link network representation of the weblog datasets, where
the nodes are the blogs and the labels are the shared words. Then, 3-way
adjacency tensor is extracted from the network and the PARAFAC decomposition is
applied to the tensor to get pairs of node lists and label lists with scores
attached to each list as the indication of the degree of importance. The
clustering is done by sorting the lists in decreasing order and taking the
pairs of top ranked blogs and words. Thus, unlike standard co-clustering
methods, this method not only groups the similar blogs with their descriptive
words but also tends to produce clusters of important blogs and descriptive
words.
| [
{
"version": "v1",
"created": "Sat, 12 Sep 2009 15:53:33 GMT"
}
] | 2009-09-15T00:00:00 | [
[
"Mirzal",
"Andri",
""
]
] | TITLE: Weblog Clustering in Multilinear Algebra Perspective
ABSTRACT: This paper describes a clustering method to group the most similar and
important weblogs with their descriptive shared words by using a technique from
multilinear algebra known as PARAFAC tensor decomposition. The proposed method
first creates labeled-link network representation of the weblog datasets, where
the nodes are the blogs and the labels are the shared words. Then, 3-way
adjacency tensor is extracted from the network and the PARAFAC decomposition is
applied to the tensor to get pairs of node lists and label lists with scores
attached to each list as the indication of the degree of importance. The
clustering is done by sorting the lists in decreasing order and taking the
pairs of top ranked blogs and words. Thus, unlike standard co-clustering
methods, this method not only groups the similar blogs with their descriptive
words but also tends to produce clusters of important blogs and descriptive
words.
| no_new_dataset | 0.951774 |
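To show how score vectors for blogs and words fall out of a tensor decomposition, the snippet below runs a rank-1 alternating power iteration on a toy blog-blog-word adjacency tensor. Full PARAFAC extracts several components (e.g. via ALS or a library such as tensorly), so this is only a minimal sketch of the scoring step described in the record above.

```python
import numpy as np

blogs = ["b0", "b1", "b2", "b3"]
words = ["python", "music", "travel"]

# T[i, j, k] = 1 if blogs i and j are linked and the link is labelled with shared word k (toy data).
T = np.zeros((4, 4, 3))
T[0, 1, 0] = T[1, 0, 0] = 1.0        # b0 <-> b1 share "python"
T[0, 2, 0] = T[2, 0, 0] = 1.0        # b0 <-> b2 share "python"
T[2, 3, 1] = T[3, 2, 1] = 1.0        # b2 <-> b3 share "music"

u, v, w = np.ones(4), np.ones(4), np.ones(3)      # blog, blog and word score vectors
for _ in range(50):                               # alternating power updates for a rank-1 fit
    u = np.einsum("ijk,j,k->i", T, v, w); u /= np.linalg.norm(u)
    v = np.einsum("ijk,i,k->j", T, u, w); v /= np.linalg.norm(v)
    w = np.einsum("ijk,i,j->k", T, u, v); w /= np.linalg.norm(w)

print("blogs ranked by score:", [blogs[i] for i in np.argsort(-u)])
print("words ranked by score:", [words[k] for k in np.argsort(-w)])
```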
0909.0844 | Francis Bach | Francis Bach (INRIA Rocquencourt) | High-Dimensional Non-Linear Variable Selection through Hierarchical
Kernel Learning | null | null | null | null | cs.LG math.ST stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of high-dimensional non-linear variable selection for
supervised learning. Our approach is based on performing linear selection among
exponentially many appropriately defined positive definite kernels that
characterize non-linear interactions between the original variables. To select
efficiently from these many kernels, we use the natural hierarchical structure
of the problem to extend the multiple kernel learning framework to kernels that
can be embedded in a directed acyclic graph; we show that it is then possible
to perform kernel selection through a graph-adapted sparsity-inducing norm, in
polynomial time in the number of selected kernels. Moreover, we study the
consistency of variable selection in high-dimensional settings, showing that
under certain assumptions, our regularization framework allows a number of
irrelevant variables which is exponential in the number of observations. Our
simulations on synthetic datasets and datasets from the UCI repository show
state-of-the-art predictive performance for non-linear regression problems.
| [
{
"version": "v1",
"created": "Fri, 4 Sep 2009 09:43:38 GMT"
}
] | 2009-09-08T00:00:00 | [
[
"Bach",
"Francis",
"",
"INRIA Rocquencourt"
]
] | TITLE: High-Dimensional Non-Linear Variable Selection through Hierarchical
Kernel Learning
ABSTRACT: We consider the problem of high-dimensional non-linear variable selection for
supervised learning. Our approach is based on performing linear selection among
exponentially many appropriately defined positive definite kernels that
characterize non-linear interactions between the original variables. To select
efficiently from these many kernels, we use the natural hierarchical structure
of the problem to extend the multiple kernel learning framework to kernels that
can be embedded in a directed acyclic graph; we show that it is then possible
to perform kernel selection through a graph-adapted sparsity-inducing norm, in
polynomial time in the number of selected kernels. Moreover, we study the
consistency of variable selection in high-dimensional settings, showing that
under certain assumptions, our regularization framework allows a number of
irrelevant variables which is exponential in the number of observations. Our
simulations on synthetic datasets and datasets from the UCI repository show
state-of-the-art predictive performance for non-linear regression problems.
| no_new_dataset | 0.945551 |
0909.1127 | Raymond Chi-Wing Wong | Raymond Chi-Wing Wong, Ada Wai-Chee Fu, Ke Wang, Yabo Xu, Jian Pei,
Philip S. Yu | Anonymization with Worst-Case Distribution-Based Background Knowledge | null | null | null | null | cs.DB cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background knowledge is an important factor in privacy preserving data
publishing. Distribution-based background knowledge is one of the well studied
background knowledge. However, to the best of our knowledge, there is no
existing work considering the distribution-based background knowledge in the
worst case scenario, by which we mean that the adversary has accurate knowledge
about the distribution of sensitive values according to some tuple attributes.
Considering this worst case scenario is essential because we cannot overlook
any breaching possibility. In this paper, we propose an algorithm to anonymize
dataset in order to protect individual privacy by considering this background
knowledge. We prove that the anonymized datasets generated by our proposed
algorithm protects individual privacy. Our empirical studies show that our
method preserves high utility for the published data at the same time.
| [
{
"version": "v1",
"created": "Mon, 7 Sep 2009 01:44:36 GMT"
}
] | 2009-09-08T00:00:00 | [
[
"Wong",
"Raymond Chi-Wing",
""
],
[
"Fu",
"Ada Wai-Chee",
""
],
[
"Wang",
"Ke",
""
],
[
"Xu",
"Yabo",
""
],
[
"Pei",
"Jian",
""
],
[
"Yu",
"Philip S.",
""
]
] | TITLE: Anonymization with Worst-Case Distribution-Based Background Knowledge
ABSTRACT: Background knowledge is an important factor in privacy preserving data
publishing. Distribution-based background knowledge is one of the well studied
background knowledge. However, to the best of our knowledge, there is no
existing work considering the distribution-based background knowledge in the
worst case scenario, by which we mean that the adversary has accurate knowledge
about the distribution of sensitive values according to some tuple attributes.
Considering this worst case scenario is essential because we cannot overlook
any breaching possibility. In this paper, we propose an algorithm to anonymize
a dataset in order to protect individual privacy by considering this background
knowledge. We prove that the anonymized datasets generated by our proposed
algorithm protect individual privacy. Our empirical studies show that our
method preserves high utility for the published data at the same time.
| no_new_dataset | 0.949106 |
0909.0572 | Andri Mirzal M.Sc. | Andri Mirzal and Masashi Furukawa | A Method for Accelerating the HITS Algorithm | 10 pages, 3 figures, to appear in Journal of Advanced
Computational Intelligence and Intelligent Informatics, Vol. 14 No. 1, 2010 | null | null | null | cs.IR | http://creativecommons.org/licenses/by-nc-sa/3.0/ | We present a new method to accelerate the HITS algorithm by exploiting
hyperlink structure of the web graph. The proposed algorithm extends the idea
of authority and hub scores from HITS by introducing two diagonal matrices
which contain constants that act as weights to make authority pages more
authoritative and hub pages more hubby. This method works because in the web
graph good authorities are pointed to by good hubs and good hubs point to good
authorities. Consequently, these pages will collect their scores faster under
the proposed algorithm than under the standard HITS. We show that the authority
and hub vectors of the proposed algorithm exist but are not necessarily
unique, and then give a treatment to ensure the uniqueness property of the
vectors. The experimental results show that the proposed algorithm can improve
HITS computations, especially for back button datasets.
| [
{
"version": "v1",
"created": "Thu, 3 Sep 2009 05:34:35 GMT"
}
] | 2009-09-04T00:00:00 | [
[
"Mirzal",
"Andri",
""
],
[
"Furukawa",
"Masashi",
""
]
] | TITLE: A Method for Accelerating the HITS Algorithm
ABSTRACT: We present a new method to accelerate the HITS algorithm by exploiting
hyperlink structure of the web graph. The proposed algorithm extends the idea
of authority and hub scores from HITS by introducing two diagonal matrices
which contain constants that act as weights to make authority pages more
authoritative and hub pages more hubby. This method works because in the web
graph good authorities are pointed to by good hubs and good hubs point to good
authorities. Consequently, these pages will collect their scores faster under
the proposed algorithm than under the standard HITS. We show that the authority
and hub vectors of the proposed algorithm exist but are not necessarily
unique, and then give a treatment to ensure the uniqueness property of the
vectors. The experimental results show that the proposed algorithm can improve
HITS computations, especially for back button datasets.
| no_new_dataset | 0.950595 |
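The record above modifies HITS by inserting two diagonal weight matrices into the authority and hub updates. The sketch below does exactly that on a toy link matrix; the particular degree-based weights are only a guess at the kind of constants intended, not the authors' construction.

```python
import numpy as np

def weighted_hits(A, D_auth=None, D_hub=None, iters=100):
    """A[i, j] = 1 if page i links to page j. Returns (authority, hub) score vectors."""
    n = A.shape[0]
    D_auth = np.eye(n) if D_auth is None else D_auth
    D_hub = np.eye(n) if D_hub is None else D_hub
    a = np.ones(n) / n
    h = np.ones(n) / n
    for _ in range(iters):
        a = D_auth @ (A.T @ h); a /= a.sum()      # authority update, re-weighted
        h = D_hub @ (A @ a);    h /= h.sum()      # hub update, re-weighted
    return a, h

A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
# Heuristic weights: boost pages that already look authoritative (in-degree) or hub-like (out-degree).
D_auth = np.diag(1.0 + A.sum(axis=0))
D_hub = np.diag(1.0 + A.sum(axis=1))
print("plain HITS    :", weighted_hits(A))
print("weighted HITS :", weighted_hits(A, D_auth, D_hub))
```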
0906.0684 | Chris Giannella | Chris Giannella | New Instability Results for High Dimensional Nearest Neighbor Search | null | Information Processing Letters 109(19), 2009. | null | null | cs.DB cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Consider a dataset of n(d) points generated independently from R^d according
to a common p.d.f. f_d with support(f_d) = [0,1]^d and sup{f_d([0,1]^d)}
growing sub-exponentially in d. We prove that: (i) if n(d) grows
sub-exponentially in d, then, for any query point q^d in [0,1]^d and any
epsilon>0, the ratio of the distance between any two dataset points and q^d is
less than 1+epsilon with probability -->1 as d-->infinity; (ii) if
n(d)>[4(1+epsilon)]^d for large d, then for all q^d in [0,1]^d (except a small
subset) and any epsilon>0, the distance ratio is less than 1+epsilon with
limiting probability strictly bounded away from one. Moreover, we provide
preliminary results along the lines of (i) when f_d=N(mu_d,Sigma_d).
| [
{
"version": "v1",
"created": "Wed, 3 Jun 2009 15:13:12 GMT"
}
] | 2009-09-01T00:00:00 | [
[
"Giannella",
"Chris",
""
]
] | TITLE: New Instability Results for High Dimensional Nearest Neighbor Search
ABSTRACT: Consider a dataset of n(d) points generated independently from R^d according
to a common p.d.f. f_d with support(f_d) = [0,1]^d and sup{f_d([0,1]^d)}
growing sub-exponentially in d. We prove that: (i) if n(d) grows
sub-exponentially in d, then, for any query point q^d in [0,1]^d and any
epsilon>0, the ratio of the distance between any two dataset points and q^d is
less than 1+epsilon with probability -->1 as d-->infinity; (ii) if
n(d)>[4(1+epsilon)]^d for large d, then for all q^d in [0,1]^d (except a small
subset) and any epsilon>0, the distance ratio is less than 1+epsilon with
limiting probability strictly bounded away from one. Moreover, we provide
preliminary results along the lines of (i) when f_d=N(mu_d,Sigma_d).
| no_new_dataset | 0.943919 |
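Result (i) above is easy to probe numerically: draw a sub-exponentially growing number of uniform points in [0,1]^d and watch the max/min distance ratio to a random query point drift toward 1 as d grows. The polynomial growth rule n = 20d below is an arbitrary sub-exponential choice; this is an empirical illustration, not the proof.

```python
import numpy as np

rng = np.random.default_rng(0)
for d in (2, 10, 50, 200, 500):
    n = 20 * d                                   # polynomial, hence sub-exponential, growth in d
    data = rng.random((n, d))                    # i.i.d. uniform points in [0, 1]^d
    q = rng.random(d)                            # a random query point
    dist = np.linalg.norm(data - q, axis=1)
    print(f"d={d:4d}  n={n:6d}  max/min distance ratio = {dist.max() / dist.min():.3f}")
```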
0908.4349 | Michael Hapgood | Mike Hapgood | Scientific Understanding and the Risk from Extreme Space Weather | Submitted to Advances in Space Research | null | null | null | physics.space-ph physics.plasm-ph physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Like all natural hazards, space weather exhibits occasional extreme events
over timescales of decades to centuries. Historical events provoked much
interest but had little economic impact. However, the widespread adoption of
advanced technological infrastructures over the past fifty years gives these
events the potential to disrupt those infrastructures - and thus create
profound economic and societal impact. However, like all extreme hazards, such
events are rare, so we have limited data on which to build our understanding of
the events. Many other natural hazards (e.g. flash floods) are highly
localised, so statistically significant datasets can be assembled by combining
data from independent instances of the hazard recorded over a few decades. But
we have a single instance of space weather so we would have to make
observations for many centuries in order to build a statistically significant
dataset. Instead we must exploit our knowledge of solar-terrestrial physics to
find other ways to assess these risks. We discuss three alternative approaches:
(a) use of proxy data, (b) studies of other solar systems, and (c) use of
physics-based modelling. The proxy data approach is well-established as a
technique for assessing the long-term risk from radiation storms, but does not
yet provide any means to assess the risk from severe geomagnetic storms. This
latter risk is more suited to the other approaches. We need to develop and
expand techniques to monitoring key space weather features in other solar
systems. To make progress in modelling severe space weather, we need to focus
on the physics that controls severe geomagnetic storms, e.g. how can dayside
and tail reconnection be modulated to expand the region of open flux to envelop
mid-latitudes?
| [
{
"version": "v1",
"created": "Sat, 29 Aug 2009 17:28:06 GMT"
}
] | 2009-09-01T00:00:00 | [
[
"Hapgood",
"Mike",
""
]
] | TITLE: Scientific Understanding and the Risk from Extreme Space Weather
ABSTRACT: Like all natural hazards, space weather exhibits occasional extreme events
over timescales of decades to centuries. Historical events provoked much
interest but had little economic impact. However, the widespread adoption of
advanced technological infrastructures over the past fifty years gives these
events the potential to disrupt those infrastructures - and thus create
profound economic and societal impact. However, like all extreme hazards, such
events are rare, so we have limited data on which to build our understanding of
the events. Many other natural hazards (e.g. flash floods) are highly
localised, so statistically significant datasets can be assembled by combining
data from independent instances of the hazard recorded over a few decades. But
we have a single instance of space weather so we would have to make
observations for many centuries in order to build a statistically significant
dataset. Instead we must exploit our knowledge of solar-terrestrial physics to
find other ways to assess these risks. We discuss three alternative approaches:
(a) use of proxy data, (b) studies of other solar systems, and (c) use of
physics-based modelling. The proxy data approach is well-established as a
technique for assessing the long-term risk from radiation storms, but does not
yet provide any means to assess the risk from severe geomagnetic storms. This
latter risk is more suited to the other approaches. We need to develop and
expand techniques for monitoring key space weather features in other solar
systems. To make progress in modelling severe space weather, we need to focus
on the physics that controls severe geomagnetic storms, e.g. how can dayside
and tail reconnection be modulated to expand the region of open flux to envelop
mid-latitudes?
| no_new_dataset | 0.919859 |
0908.4144 | Ping Li | Ping Li | ABC-LogitBoost for Multi-class Classification | null | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We develop abc-logitboost, based on the prior work on abc-boost and robust
logitboost. Our extensive experiments on a variety of datasets demonstrate the
considerable improvement of abc-logitboost over logitboost and abc-mart.
| [
{
"version": "v1",
"created": "Fri, 28 Aug 2009 07:09:19 GMT"
}
] | 2009-08-31T00:00:00 | [
[
"Li",
"Ping",
""
]
] | TITLE: ABC-LogitBoost for Multi-class Classification
ABSTRACT: We develop abc-logitboost, based on the prior work on abc-boost and robust
logitboost. Our extensive experiments on a variety of datasets demonstrate the
considerable improvement of abc-logitboost over logitboost and abc-mart.
| no_new_dataset | 0.953794 |
0903.2999 | Filippo Radicchi | Filippo Radicchi | Human Activity in the Web | 10 pages, 9 figures. Final version accepted for publication in
Physical Review E | Phys. Rev. E 80, 026118 (2009) | 10.1103/PhysRevE.80.026118 | null | physics.soc-ph cond-mat.stat-mech cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The recent information technology revolution has enabled the analysis and
processing of large-scale datasets describing human activities. The main source
of data is represented by the Web, where humans generally use to spend a
relevant part of their day. Here we study three large datasets containing the
information about Web human activities in different contexts. We study in
details inter-event and waiting time statistics. In both cases, the number of
subsequent operations which differ by tau units of time decays power-like as
tau increases. We use non-parametric statistical tests in order to estimate the
significance level of reliability of global distributions to describe activity
patterns of single users. Global inter-event time probability distributions are
not representative for the behavior of single users: the shape of single
users' inter-event distributions is strongly influenced by the total number of
operations performed by the users and distributions of the total number of
operations performed by users are heterogeneous. A universal behavior can be
anyway found by suppressing the intrinsic dependence of the global probability
distribution on the activity of the users. This suppression can be performed by
simply dividing the inter-event times with their average values. Differently,
waiting time probability distributions seem to be independent of the activity
of users and global probability distributions are able to significantly
represent the replying activity patterns of single users.
| [
{
"version": "v1",
"created": "Tue, 17 Mar 2009 16:24:02 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Jul 2009 15:21:41 GMT"
}
] | 2009-08-20T00:00:00 | [
[
"Radicchi",
"Filippo",
""
]
] | TITLE: Human Activity in the Web
ABSTRACT: The recent information technology revolution has enabled the analysis and
processing of large-scale datasets describing human activities. The main source
of data is represented by the Web, where humans generally use to spend a
relevant part of their day. Here we study three large datasets containing the
information about Web human activities in different contexts. We study in
details inter-event and waiting time statistics. In both cases, the number of
subsequent operations which differ by tau units of time decays power-like as
tau increases. We use non-parametric statistical tests in order to estimate the
significance level of reliability of global distributions to describe activity
patterns of single users. Global inter-event time probability distributions are
not representative for the behavior of single users: the shape of single
users' inter-event distributions is strongly influenced by the total number of
operations performed by the users and distributions of the total number of
operations performed by users are heterogeneous. A universal behavior can be
anyway found by suppressing the intrinsic dependence of the global probability
distribution on the activity of the users. This suppression can be performed by
simply dividing the inter-event times with their average values. Differently,
waiting time probability distributions seem to be independent of the activity
of users and global probability distributions are able to significantly
represent the replying activity patterns of single users.
| no_new_dataset | 0.933673 |
0908.1453 | R Doomun | Roya Asadi, Norwati Mustapha, Nasir Sulaiman | Training Process Reduction Based On Potential Weights Linear Analysis To
Accelerate Back Propagation Network | 11 pages IEEE format, International Journal of Computer Science and
Information Security, IJCSIS 2009, ISSN 1947 5500, Impact factor 0.423 | International Journal of Computer Science and Information
Security, IJCSIS, Vol. 3, No. 1, July 2009, USA | null | ISSN 1947 5500 | cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning is the important property of Back Propagation Network (BPN) and
finding the suitable weights and thresholds during training in order to improve
training time as well as achieve high accuracy. Currently, data pre-processing
such as dimension reduction input values and pre-training are the contributing
factors in developing efficient techniques for reducing training time with high
accuracy and initialization of the weights is the important issue which is
random and creates paradox, and leads to low accuracy with high training time.
One good data preprocessing technique for accelerating BPN classification is
dimension reduction technique but it has problem of missing data. In this
paper, we study current pre-training techniques and new preprocessing technique
called Potential Weight Linear Analysis (PWLA) which combines normalization,
dimension reduction input values and pre-training. In PWLA, the first data
preprocessing is performed for generating normalized input values and then
applying them by pre-training technique in order to obtain the potential
weights. After these phases, dimension of input values matrix will be reduced
by using real potential weights. For experiment results XOR problem and three
datasets, which are SPECT Heart, SPECTF Heart and Liver disorders (BUPA) will
be evaluated. Our results, however, will show that the new technique of PWLA
will change BPN to new Supervised Multi Layer Feed Forward Neural Network
(SMFFNN) model with high accuracy in one epoch without training cycle. Also
PWLA will be able to have power of non linear supervised and unsupervised
dimension reduction property for applying by other supervised multi layer feed
forward neural network model in future work.
| [
{
"version": "v1",
"created": "Tue, 11 Aug 2009 05:30:01 GMT"
}
] | 2009-08-12T00:00:00 | [
[
"Asadi",
"Roya",
""
],
[
"Mustapha",
"Norwati",
""
],
[
"Sulaiman",
"Nasir",
""
]
] | TITLE: Training Process Reduction Based On Potential Weights Linear Analysis To
Accelerate Back Propagation Network
ABSTRACT: Learning is the key property of the Back Propagation Network (BPN):
finding suitable weights and thresholds during training in order to improve
training time as well as achieve high accuracy. Currently, data pre-processing
steps such as dimension reduction of input values and pre-training are the main
contributing factors in developing efficient techniques for reducing training
time with high accuracy, while initialization of the weights remains an
important issue: random initialization creates a paradox and leads to low
accuracy with high training time. One good data preprocessing technique for
accelerating BPN classification is dimension reduction, but it suffers from the
problem of missing data. In this paper, we study current pre-training
techniques and a new preprocessing technique called Potential Weight Linear
Analysis (PWLA), which combines normalization, dimension reduction of input
values and pre-training. In PWLA, data preprocessing is first performed to
generate normalized input values, which are then used by the pre-training
technique in order to obtain the potential weights. After these phases, the
dimension of the input-value matrix is reduced using the real potential
weights. For the experiments, the XOR problem and three datasets, namely SPECT
Heart, SPECTF Heart and Liver Disorders (BUPA), are evaluated. Our results show
that the new PWLA technique changes BPN into a new Supervised Multi Layer Feed
Forward Neural Network (SMFFNN) model with high accuracy in one epoch, without
a training cycle. PWLA also provides the power of non-linear supervised and
unsupervised dimension reduction, which can be applied to other supervised
multi-layer feed-forward neural network models in future work.
| no_new_dataset | 0.952397 |
0811.1067 | Lek-Heng Lim | Xiaoye Jiang, Lek-Heng Lim, Yuan Yao, Yinyu Ye | Statistical ranking and combinatorial Hodge theory | 42 pages; minor changes throughout; numerical experiments added | null | null | null | stat.ML cs.DM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a number of techniques for obtaining a global ranking from data
that may be incomplete and imbalanced -- characteristics almost universal to
modern datasets coming from e-commerce and internet applications. We are
primarily interested in score or rating-based cardinal data. From raw ranking
data, we construct pairwise rankings, represented as edge flows on an
appropriate graph. Our statistical ranking method uses the graph Helmholtzian,
the graph theoretic analogue of the Helmholtz operator or vector Laplacian, in
much the same way the graph Laplacian is an analogue of the Laplace operator or
scalar Laplacian. We study the graph Helmholtzian using combinatorial Hodge
theory: we show that every edge flow representing pairwise ranking can be
resolved into two orthogonal components, a gradient flow that represents the
L2-optimal global ranking and a divergence-free flow (cyclic) that measures the
validity of the global ranking obtained -- if this is large, then the data does
not have a meaningful global ranking. This divergence-free flow can be further
decomposed orthogonally into a curl flow (locally cyclic) and a harmonic flow
(locally acyclic but globally cyclic); these provide information on whether
inconsistency arises locally or globally. An obvious advantage over the NP-hard
Kemeny optimization is that discrete Hodge decomposition may be computed via a
linear least squares regression. We also investigated the L1-projection of edge
flows, showing that this is dual to correlation maximization over bounded
divergence-free flows, and the L1-approximate sparse cyclic ranking, showing
that this is dual to correlation maximization over bounded curl-free flows. We
discuss relations with Kemeny optimization, Borda count, and Kendall-Smith
consistency index from social choice theory and statistics.
| [
{
"version": "v1",
"created": "Fri, 7 Nov 2008 01:23:09 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Aug 2009 10:34:29 GMT"
}
] | 2009-08-10T00:00:00 | [
[
"Jiang",
"Xiaoye",
""
],
[
"Lim",
"Lek-Heng",
""
],
[
"Yao",
"Yuan",
""
],
[
"Ye",
"Yinyu",
""
]
] | TITLE: Statistical ranking and combinatorial Hodge theory
ABSTRACT: We propose a number of techniques for obtaining a global ranking from data
that may be incomplete and imbalanced -- characteristics almost universal to
modern datasets coming from e-commerce and internet applications. We are
primarily interested in score or rating-based cardinal data. From raw ranking
data, we construct pairwise rankings, represented as edge flows on an
appropriate graph. Our statistical ranking method uses the graph Helmholtzian,
the graph theoretic analogue of the Helmholtz operator or vector Laplacian, in
much the same way the graph Laplacian is an analogue of the Laplace operator or
scalar Laplacian. We study the graph Helmholtzian using combinatorial Hodge
theory: we show that every edge flow representing pairwise ranking can be
resolved into two orthogonal components, a gradient flow that represents the
L2-optimal global ranking and a divergence-free flow (cyclic) that measures the
validity of the global ranking obtained -- if this is large, then the data does
not have a meaningful global ranking. This divergence-free flow can be further
decomposed orthogonally into a curl flow (locally cyclic) and a harmonic flow
(locally acyclic but globally cyclic); these provide information on whether
inconsistency arises locally or globally. An obvious advantage over the NP-hard
Kemeny optimization is that discrete Hodge decomposition may be computed via a
linear least squares regression. We also investigated the L1-projection of edge
flows, showing that this is dual to correlation maximization over bounded
divergence-free flows, and the L1-approximate sparse cyclic ranking, showing
that this is dual to correlation maximization over bounded curl-free flows. We
discuss relations with Kemeny optimization, Borda count, and Kendall-Smith
consistency index from social choice theory and statistics.
| no_new_dataset | 0.953405 |
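Editor's note: the abstract above states that the l2-optimal global ranking (the gradient component of the Hodge decomposition) can be obtained by a linear least squares regression on pairwise flows. A minimal Python sketch of that computation follows; the edge list, flow values, gauge fix and names are editorial assumptions, not the authors' code.
import numpy as np

def l2_global_ranking(n_items, edges, flows):
    # edges: list of (i, j) pairs; flows[k] is the average score difference on edge k.
    # Solve min_s sum_k (s[j] - s[i] - flows[k])^2; s is defined up to an additive constant.
    A = np.zeros((len(edges), n_items))
    for k, (i, j) in enumerate(edges):
        A[k, i] = -1.0
        A[k, j] = 1.0
    s, *_ = np.linalg.lstsq(A, np.asarray(flows, dtype=float), rcond=None)
    s = s - s.mean()                                     # fix the additive gauge freedom
    residual = np.asarray(flows, dtype=float) - A @ s    # divergence-free part: inconsistency
    return s, residual

# Toy usage: three items with a perfectly consistent pairwise flow (residual is zero).
scores, residual = l2_global_ranking(3, [(0, 1), (1, 2), (0, 2)], [1.0, 1.0, 2.0])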
0907.5442 | Jian Li | Jian Li, Amol Deshpande, Samir Khuller | On Computing Compression Trees for Data Collection in Sensor Networks | null | null | null | null | cs.NI cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We address the problem of efficiently gathering correlated data from a wired
or a wireless sensor network, with the aim of designing algorithms with
provable optimality guarantees, and understanding how close we can get to the
known theoretical lower bounds. Our proposed approach is based on finding an
optimal or a near-optimal {\em compression tree} for a given sensor network: a
compression tree is a directed tree over the sensor network nodes such that the
value of a node is compressed using the value of its parent. We consider this
problem under different communication models, including the {\em broadcast
communication} model that enables many new opportunities for energy-efficient
data collection. We draw connections between the data collection problem and a
previously studied graph concept, called {\em weakly connected dominating
sets}, and we use this to develop novel approximation algorithms for the
problem. We present comparative results on several synthetic and real-world
datasets showing that our algorithms construct near-optimal compression trees
that yield a significant reduction in the data collection cost.
| [
{
"version": "v1",
"created": "Thu, 30 Jul 2009 22:40:53 GMT"
}
] | 2009-08-03T00:00:00 | [
[
"Li",
"Jian",
""
],
[
"Deshpande",
"Amol",
""
],
[
"Khuller",
"Samir",
""
]
] | TITLE: On Computing Compression Trees for Data Collection in Sensor Networks
ABSTRACT: We address the problem of efficiently gathering correlated data from a wired
or a wireless sensor network, with the aim of designing algorithms with
provable optimality guarantees, and understanding how close we can get to the
known theoretical lower bounds. Our proposed approach is based on finding an
optimal or a near-optimal {\em compression tree} for a given sensor network: a
compression tree is a directed tree over the sensor network nodes such that the
value of a node is compressed using the value of its parent. We consider this
problem under different communication models, including the {\em broadcast
communication} model that enables many new opportunities for energy-efficient
data collection. We draw connections between the data collection problem and a
previously studied graph concept, called {\em weakly connected dominating
sets}, and we use this to develop novel approximation algorithms for the
problem. We present comparative results on several synthetic and real-world
datasets showing that our algorithms construct near-optimal compression trees
that yield a significant reduction in the data collection cost.
| no_new_dataset | 0.948346 |
0907.3315 | Zi-Ke Zhang Mr. | Zi-Ke Zhang, Tao Zhou | Effective Personalized Recommendation in Collaborative Tagging Systems | null | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, collaborative tagging systems have attracted more and more
attention and have been widely applied in web systems. Tags provide highly
abstracted information about personal preferences and item content, and can
therefore potentially help in improving personalized recommendations.
In this paper, we propose a tag-based recommendation algorithm considering the
personal vocabulary and evaluate it in a real-world dataset: Del.icio.us.
Experimental results demonstrate that the usage of tag information can
significantly improve the accuracy of personalized recommendations.
| [
{
"version": "v1",
"created": "Sun, 19 Jul 2009 18:56:37 GMT"
}
] | 2009-07-21T00:00:00 | [
[
"Zhang",
"Zi-Ke",
""
],
[
"Zhou",
"Tao",
""
]
] | TITLE: Effective Personalized Recommendation in Collaborative Tagging Systems
ABSTRACT: Recently, collaborative tagging systems have attracted more and more
attention and have been widely applied in web systems. Tags provide highly
abstracted information about personal preferences and item content, and can
therefore potentially help in improving personalized recommendations.
In this paper, we propose a tag-based recommendation algorithm considering the
personal vocabulary and evaluate it in a real-world dataset: Del.icio.us.
Experimental results demonstrate that the usage of tag information can
significantly improve the accuracy of personalized recommendations.
| no_new_dataset | 0.951459 |
0907.1815 | Hal Daum\'e III | Hal Daum\'e III | Frustratingly Easy Domain Adaptation | null | ACL 2007 | null | null | cs.LG cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We describe an approach to domain adaptation that is appropriate exactly in
the case when one has enough ``target'' data to do slightly better than just
using only ``source'' data. Our approach is incredibly simple, easy to
implement as a preprocessing step (10 lines of Perl!) and outperforms
state-of-the-art approaches on a range of datasets. Moreover, it is trivially
extended to a multi-domain adaptation problem, where one has data from a
variety of different domains.
| [
{
"version": "v1",
"created": "Fri, 10 Jul 2009 13:25:48 GMT"
}
] | 2009-07-13T00:00:00 | [
[
"Daumé",
"Hal",
"III"
]
] | TITLE: Frustratingly Easy Domain Adaptation
ABSTRACT: We describe an approach to domain adaptation that is appropriate exactly in
the case when one has enough ``target'' data to do slightly better than just
using only ``source'' data. Our approach is incredibly simple, easy to
implement as a preprocessing step (10 lines of Perl!) and outperforms
state-of-the-art approaches on a range of datasets. Moreover, it is trivially
extended to a multi-domain adaptation problem, where one has data from a
variety of different domains.
| no_new_dataset | 0.944434 |
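Editor's note: the preceding abstract describes its preprocessing step as roughly ten lines of code but does not spell it out. The feature-augmentation scheme commonly associated with this paper maps each source example x to <x, x, 0> and each target example to <x, 0, x>; the Python sketch below is an editorial illustration under that assumption, with hypothetical function and variable names.
import numpy as np

def augment(X, domain):
    # X: (n, d) feature matrix; domain: 'source' or 'target'.
    # Returns an (n, 3d) matrix: a shared copy of the features plus a domain-specific copy.
    zeros = np.zeros_like(X)
    if domain == 'source':
        return np.hstack([X, X, zeros])
    return np.hstack([X, zeros, X])

Xs = augment(np.random.rand(5, 3), 'source')   # shape (5, 9)
Xt = augment(np.random.rand(2, 3), 'target')   # shape (2, 9)
# A standard classifier trained on the concatenation of Xs and Xt can learn shared
# weights in the first block and domain-specific corrections in the other blocks.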
0904.3761 | Charalampos Tsourakakis | Charalampos E. Tsourakakis, Mihail N. Kolountzakis, Gary L. Miller | Approximate Triangle Counting | 1) 16 pages, 2 figures, under submission 2) Removed the erroneous
random projection part. Thanks to Ioannis Koutis for pointing out the error.
3) Added experimental session | null | null | null | cs.DS cs.DM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Triangle counting is an important problem in graph mining. Clustering
coefficients of vertices and the transitivity ratio of the graph are two
metrics often used in complex network analysis. Furthermore, triangles have
been used successfully in several real-world applications. However, exact
triangle counting is an expensive computation. In this paper we present the
analysis of a practical sampling algorithm for counting triangles in graphs.
Our analysis yields optimal values for the sampling rate, thus resulting in
tremendous speedups ranging from \emph{2800}x to \emph{70000}x when applied to
real-world networks. At the same time the accuracy of the estimation is
excellent.
Our contributions include experimentation on graphs with several millions of
nodes and edges, where we show how practical our proposed method is. Finally,
our algorithm's implementation is part of the PEGASUS library (code and
datasets are available at http://www.cs.cmu.edu/~ctsourak/), a Peta-Graph
Mining library implemented in Hadoop, the open source version of MapReduce.
| [
{
"version": "v1",
"created": "Fri, 24 Apr 2009 14:21:13 GMT"
},
{
"version": "v2",
"created": "Tue, 30 Jun 2009 09:02:34 GMT"
}
] | 2009-06-30T00:00:00 | [
[
"Tsourakakis",
"Charalampos E.",
""
],
[
"Kolountzakis",
"Mihail N.",
""
],
[
"Miller",
"Gary L.",
""
]
] | TITLE: Approximate Triangle Counting
ABSTRACT: Triangle counting is an important problem in graph mining. Clustering
coefficients of vertices and the transitivity ratio of the graph are two
metrics often used in complex network analysis. Furthermore, triangles have
been used successfully in several real-world applications. However, exact
triangle counting is an expensive computation. In this paper we present the
analysis of a practical sampling algorithm for counting triangles in graphs.
Our analysis yields optimal values for the sampling rate, thus resulting in
tremendous speedups ranging from \emph{2800}x to \emph{70000}x when applied to
real-world networks. At the same time the accuracy of the estimation is
excellent.
Our contributions include experimentation on graphs with several millions of
nodes and edges, where we show how practical our proposed method is. Finally,
our algorithm's implementation is part of the PEGASUS library (code and
datasets are available at http://www.cs.cmu.edu/~ctsourak/), a Peta-Graph
Mining library implemented in Hadoop, the open source version of MapReduce.
| no_new_dataset | 0.947672 |
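Editor's note: the abstract above describes a practical sampling algorithm for approximate triangle counting without giving its details. One standard edge-sparsification estimator consistent with that description keeps each edge with probability p and rescales the exact count of the sparsified graph by 1/p^3; the Python sketch below is an editorial illustration, not the paper's algorithm, and its names are assumptions.
import random

def approx_triangles(edges, p, seed=0):
    # Keep each edge independently with probability p, count triangles exactly
    # in the sparsified graph, then rescale the count by 1/p^3.
    rng = random.Random(seed)
    kept = [e for e in edges if rng.random() < p]
    adj = {}
    for u, v in kept:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    t = sum(len(adj[u] & adj[v]) for u, v in kept)  # each triangle is counted three times
    return (t / 3.0) / (p ** 3)

# Exact when p = 1: the toy graph below contains the single triangle {0, 1, 2}.
print(approx_triangles([(0, 1), (1, 2), (0, 2), (2, 3)], p=1.0))  # -> 1.0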
0906.4927 | Lijun Chang | Lijun Chang, Jeffrey Xu Yu, Lu Qin | Fast Probabilistic Ranking under x-Relation Model | null | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The probabilistic top-k queries based on the interplay of score and
probability, under the possible worlds semantics, have become an important
research issue, as they consider both score and uncertainty on the same basis. In the
literature, many different probabilistic top-k queries are proposed. Almost all
of them need to compute the probability of a tuple t_i to be ranked at the j-th
position across the entire set of possible worlds. The cost of such computing
is the dominant cost and is known to be O(kn^2), where n is the size of the
dataset. In this paper, we propose a novel algorithm that computes such probability
in O(kn).
| [
{
"version": "v1",
"created": "Fri, 26 Jun 2009 13:24:57 GMT"
}
] | 2009-06-29T00:00:00 | [
[
"Chang",
"Lijun",
""
],
[
"Yu",
"Jeffrey Xu",
""
],
[
"Qin",
"Lu",
""
]
] | TITLE: Fast Probabilistic Ranking under x-Relation Model
ABSTRACT: The probabilistic top-k queries based on the interplay of score and
probability, under the possible worlds semantics, have become an important
research issue, as they consider both score and uncertainty on the same basis. In the
literature, many different probabilistic top-k queries are proposed. Almost all
of them need to compute the probability of a tuple t_i to be ranked at the j-th
position across the entire set of possible worlds. The cost of such computing
is the dominant cost and is known to be O(kn^2), where n is the size of the
dataset. In this paper, we propose a novel algorithm that computes such probability
in O(kn).
| no_new_dataset | 0.94625 |
0906.3741 | Lillian Lee | Cristian Danescu-Niculescu-Mizil, Gueorgi Kossinets, Jon Kleinberg,
Lillian Lee | How opinions are received by online communities: A case study on
Amazon.com helpfulness votes | null | Proceedings of WWW, pp. 141--150, 2009 | null | null | cs.CL cs.IR physics.data-an physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There are many on-line settings in which users publicly express opinions. A
number of these offer mechanisms for other users to evaluate these opinions; a
canonical example is Amazon.com, where reviews come with annotations like "26
of 32 people found the following review helpful." Opinion evaluation appears in
many off-line settings as well, including market research and political
campaigns. Reasoning about the evaluation of an opinion is fundamentally
different from reasoning about the opinion itself: rather than asking, "What
did Y think of X?", we are asking, "What did Z think of Y's opinion of X?" Here
we develop a framework for analyzing and modeling opinion evaluation, using a
large-scale collection of Amazon book reviews as a dataset. We find that the
perceived helpfulness of a review depends not just on its content but also, in
subtle ways, on how the expressed evaluation relates to other
evaluations of the same product. As part of our approach, we develop novel
methods that take advantage of the phenomenon of review "plagiarism" to control
for the effects of text in opinion evaluation, and we provide a simple and
natural mathematical model consistent with our findings. Our analysis also
allows us to distinguish among the predictions of competing theories from
sociology and social psychology, and to discover unexpected differences in the
collective opinion-evaluation behavior of user populations from different
countries.
| [
{
"version": "v1",
"created": "Sun, 21 Jun 2009 01:59:21 GMT"
}
] | 2009-06-24T00:00:00 | [
[
"Danescu-Niculescu-Mizil",
"Cristian",
""
],
[
"Kossinets",
"Gueorgi",
""
],
[
"Kleinberg",
"Jon",
""
],
[
"Lee",
"Lillian",
""
]
] | TITLE: How opinions are received by online communities: A case study on
Amazon.com helpfulness votes
ABSTRACT: There are many on-line settings in which users publicly express opinions. A
number of these offer mechanisms for other users to evaluate these opinions; a
canonical example is Amazon.com, where reviews come with annotations like "26
of 32 people found the following review helpful." Opinion evaluation appears in
many off-line settings as well, including market research and political
campaigns. Reasoning about the evaluation of an opinion is fundamentally
different from reasoning about the opinion itself: rather than asking, "What
did Y think of X?", we are asking, "What did Z think of Y's opinion of X?" Here
we develop a framework for analyzing and modeling opinion evaluation, using a
large-scale collection of Amazon book reviews as a dataset. We find that the
perceived helpfulness of a review depends not just on its content but also, in
subtle ways, on how the expressed evaluation relates to other
evaluations of the same product. As part of our approach, we develop novel
methods that take advantage of the phenomenon of review "plagiarism" to control
for the effects of text in opinion evaluation, and we provide a simple and
natural mathematical model consistent with our findings. Our analysis also
allows us to distinguish among the predictions of competing theories from
sociology and social psychology, and to discover unexpected differences in the
collective opinion-evaluation behavior of user populations from different
countries.
| new_dataset | 0.729231 |
0906.2274 | D\v{z}enan Zuki\'c | D\v{z}enan Zuki\'c, Christof Rezk-Salama, Andreas Kolb | A Neural Network Classifier of Volume Datasets | 10 pages, 10 figures, 1 table, 3IA conference http://3ia.teiath.gr/ | International Conference on Computer Graphics and Artificial
Intelligence, Proceedings (2009) 53-62 | null | null | cs.GR cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many state-of-the-art visualization techniques must be tailored to the
specific type of dataset, its modality (CT, MRI, etc.), the recorded object or
anatomical region (head, spine, abdomen, etc.) and other parameters related to
the data acquisition process. While parts of the information (imaging modality
and acquisition sequence) may be obtained from the meta-data stored with the
volume scan, there is important information which is not stored explicitly
(anatomical region, tracing compound). Also, meta-data might be incomplete,
inappropriate or simply missing.
This paper presents a novel and simple method of determining the type of
dataset from previously defined categories. 2D histograms based on intensity
and gradient magnitude of datasets are used as input to a neural network, which
classifies it into one of several categories it was trained with. The proposed
method is an important building block for visualization systems to be used
autonomously by non-experts. The method has been tested on 80 datasets, divided
into 3 classes and a "rest" class.
A significant result is the ability of the system to classify datasets into a
specific class after being trained with only one dataset of that class. Other
advantages of the method are its easy implementation and its high computational
performance.
| [
{
"version": "v1",
"created": "Fri, 12 Jun 2009 11:17:05 GMT"
}
] | 2009-06-15T00:00:00 | [
[
"Zukić",
"Dženan",
""
],
[
"Rezk-Salama",
"Christof",
""
],
[
"Kolb",
"Andreas",
""
]
] | TITLE: A Neural Network Classifier of Volume Datasets
ABSTRACT: Many state-of-the-art visualization techniques must be tailored to the
specific type of dataset, its modality (CT, MRI, etc.), the recorded object or
anatomical region (head, spine, abdomen, etc.) and other parameters related to
the data acquisition process. While parts of the information (imaging modality
and acquisition sequence) may be obtained from the meta-data stored with the
volume scan, there is important information which is not stored explicitly
(anatomical region, tracing compound). Also, meta-data might be incomplete,
inappropriate or simply missing.
This paper presents a novel and simple method of determining the type of
dataset from previously defined categories. 2D histograms based on intensity
and gradient magnitude of datasets are used as input to a neural network, which
classifies it into one of several categories it was trained with. The proposed
method is an important building block for visualization systems to be used
autonomously by non-experts. The method has been tested on 80 datasets, divided
into 3 classes and a "rest" class.
A significant result is the ability of the system to classify datasets into a
specific class after being trained with only one dataset of that class. Other
advantages of the method are its easy implementation and its high computational
performance.
| no_new_dataset | 0.948346 |
0806.2925 | Dzenan Zukic | D\v{z}enan Zuki\'c, Andreas Elsner, Zikrija Avdagi\'c, Gitta Domik | Neural networks in 3D medical scan visualization | 8 pages, 6 figures published on conference 3IA'2008 in Athens, Greece
(http://3ia.teiath.gr) | International Conference on Computer Graphics and Artificial
Intelligence, Proceedings (2008) 183-190 | null | null | cs.AI cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For medical volume visualization, one of the most important tasks is to
reveal clinically relevant details from the 3D scan (CT, MRI ...), e.g. the
coronary arteries, without obscuring them with less significant parts. These
volume datasets contain different materials which are difficult to extract and
visualize with 1D transfer functions based solely on the attenuation
coefficient. Multi-dimensional transfer functions allow a much more precise
classification of data which makes it easier to separate different surfaces
from each other. Unfortunately, setting up multi-dimensional transfer functions
can become a fairly complex task, generally accomplished by trial and error.
This paper explains neural networks, and then presents an efficient way to
speed up the visualization process by semi-automatic transfer function generation.
We describe how to use neural networks to detect distinctive features shown in
the 2D histogram of the volume data and how to use this information for data
classification.
| [
{
"version": "v1",
"created": "Wed, 18 Jun 2008 08:36:15 GMT"
},
{
"version": "v2",
"created": "Fri, 12 Jun 2009 08:25:23 GMT"
}
] | 2009-06-12T00:00:00 | [
[
"Zukić",
"Dženan",
""
],
[
"Elsner",
"Andreas",
""
],
[
"Avdagić",
"Zikrija",
""
],
[
"Domik",
"Gitta",
""
]
] | TITLE: Neural networks in 3D medical scan visualization
ABSTRACT: For medical volume visualization, one of the most important tasks is to
reveal clinically relevant details from the 3D scan (CT, MRI ...), e.g. the
coronary arteries, without obscuring them with less significant parts. These
volume datasets contain different materials which are difficult to extract and
visualize with 1D transfer functions based solely on the attenuation
coefficient. Multi-dimensional transfer functions allow a much more precise
classification of data which makes it easier to separate different surfaces
from each other. Unfortunately, setting up multi-dimensional transfer functions
can become a fairly complex task, generally accomplished by trial and error.
This paper explains neural networks, and then presents an efficient way to
speed up the visualization process by semi-automatic transfer function generation.
We describe how to use neural networks to detect distinctive features shown in
the 2D histogram of the volume data and how to use this information for data
classification.
| no_new_dataset | 0.954732 |
0906.1814 | Renqiang Min | Martin Renqiang Min, David A. Stanley, Zineng Yuan, Anthony Bonner,
and Zhaolei Zhang | Large-Margin kNN Classification Using a Deep Encoder Network | 13 pages (preliminary version) | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | KNN is one of the most popular classification methods, but it often fails to
work well with inappropriate choice of distance metric or due to the presence
of numerous class-irrelevant features. Linear feature transformation methods
have been widely applied to extract class-relevant information to improve kNN
classification, which is very limited in many applications. Kernels have been
used to learn powerful non-linear feature transformations, but these methods
fail to scale to large datasets. In this paper, we present a scalable
non-linear feature mapping method based on a deep neural network pretrained
with restricted Boltzmann machines for improving kNN classification in a
large-margin framework, which we call DNet-kNN. DNet-kNN can be used for both
classification and for supervised dimensionality reduction. The experimental
results on two benchmark handwritten digit datasets show that DNet-kNN has much
better performance than large-margin kNN using a linear mapping and kNN based
on a deep autoencoder pretrained with restricted Boltzmann machines.
| [
{
"version": "v1",
"created": "Tue, 9 Jun 2009 20:06:45 GMT"
}
] | 2009-06-11T00:00:00 | [
[
"Min",
"Martin Renqiang",
""
],
[
"Stanley",
"David A.",
""
],
[
"Yuan",
"Zineng",
""
],
[
"Bonner",
"Anthony",
""
],
[
"Zhang",
"Zhaolei",
""
]
] | TITLE: Large-Margin kNN Classification Using a Deep Encoder Network
ABSTRACT: KNN is one of the most popular classification methods, but it often fails to
work well with inappropriate choice of distance metric or due to the presence
of numerous class-irrelevant features. Linear feature transformation methods
have been widely applied to extract class-relevant information to improve kNN
classification, which is very limited in many applications. Kernels have been
used to learn powerful non-linear feature transformations, but these methods
fail to scale to large datasets. In this paper, we present a scalable
non-linear feature mapping method based on a deep neural network pretrained
with restricted Boltzmann machines for improving kNN classification in a
large-margin framework, which we call DNet-kNN. DNet-kNN can be used for both
classification and for supervised dimensionality reduction. The experimental
results on two benchmark handwritten digit datasets show that DNet-kNN has much
better performance than large-margin kNN using a linear mapping and kNN based
on a deep autoencoder pretrained with restricted Boltzmann machines.
| no_new_dataset | 0.951142 |
0904.2623 | James Petterson | James Petterson, Tiberio Caetano, Julian McAuley, Jin Yu | Exponential Family Graph Matching and Ranking | null | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a method for learning max-weight matching predictors in bipartite
graphs. The method consists of performing maximum a posteriori estimation in
exponential families with sufficient statistics that encode permutations and
data features. Although inference is in general hard, we show that for one very
relevant application - web page ranking - exact inference is efficient. For
general model instances, an appropriate sampler is readily available. Contrary
to existing max-margin matching models, our approach is statistically
consistent and, in addition, experiments with increasing sample sizes indicate
superior improvement over such models. We apply the method to graph matching in
computer vision as well as to a standard benchmark dataset for learning web
page ranking, in which we obtain state-of-the-art results, in particular
improving on max-margin variants. The drawback of this method with respect to
max-margin alternatives is its runtime for large graphs, which is comparatively
high.
| [
{
"version": "v1",
"created": "Fri, 17 Apr 2009 03:48:02 GMT"
},
{
"version": "v2",
"created": "Fri, 5 Jun 2009 03:54:58 GMT"
}
] | 2009-06-05T00:00:00 | [
[
"Petterson",
"James",
""
],
[
"Caetano",
"Tiberio",
""
],
[
"McAuley",
"Julian",
""
],
[
"Yu",
"Jin",
""
]
] | TITLE: Exponential Family Graph Matching and Ranking
ABSTRACT: We present a method for learning max-weight matching predictors in bipartite
graphs. The method consists of performing maximum a posteriori estimation in
exponential families with sufficient statistics that encode permutations and
data features. Although inference is in general hard, we show that for one very
relevant application - web page ranking - exact inference is efficient. For
general model instances, an appropriate sampler is readily available. Contrary
to existing max-margin matching models, our approach is statistically
consistent and, in addition, experiments with increasing sample sizes indicate
superior improvement over such models. We apply the method to graph matching in
computer vision as well as to a standard benchmark dataset for learning web
page ranking, in which we obtain state-of-the-art results, in particular
improving on max-margin variants. The drawback of this method with respect to
max-margin alternatives is its runtime for large graphs, which is comparatively
high.
| no_new_dataset | 0.945601 |
0903.4217 | John Langford | Alina Beygelzimer, John Langford, Yuri Lifshits, Gregory Sorkin, and
Alex Strehl | Conditional Probability Tree Estimation Analysis and Algorithms | null | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of estimating the conditional probability of a label
in time $O(\log n)$, where $n$ is the number of possible labels. We analyze a
natural reduction of this problem to a set of binary regression problems
organized in a tree structure, proving a regret bound that scales with the
depth of the tree. Motivated by this analysis, we propose the first online
algorithm which provably constructs a logarithmic depth tree on the set of
labels to solve this problem. We test the algorithm empirically, showing that
it works successfully on a dataset with roughly $10^6$ labels.
| [
{
"version": "v1",
"created": "Wed, 25 Mar 2009 00:28:44 GMT"
},
{
"version": "v2",
"created": "Wed, 3 Jun 2009 21:19:34 GMT"
}
] | 2009-06-04T00:00:00 | [
[
"Beygelzimer",
"Alina",
""
],
[
"Langford",
"John",
""
],
[
"Lifshits",
"Yuri",
""
],
[
"Sorkin",
"Gregory",
""
],
[
"Strehl",
"Alex",
""
]
] | TITLE: Conditional Probability Tree Estimation Analysis and Algorithms
ABSTRACT: We consider the problem of estimating the conditional probability of a label
in time $O(\log n)$, where $n$ is the number of possible labels. We analyze a
natural reduction of this problem to a set of binary regression problems
organized in a tree structure, proving a regret bound that scales with the
depth of the tree. Motivated by this analysis, we propose the first online
algorithm which provably constructs a logarithmic depth tree on the set of
labels to solve this problem. We test the algorithm empirically, showing that
it works successfully on a dataset with roughly $10^6$ labels.
| no_new_dataset | 0.94428 |
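Editor's note: the abstract above reduces conditional probability estimation over n labels to binary regression problems organized in a logarithmic-depth tree. A minimal Python sketch of how such a tree is evaluated in O(log n) follows; the implicit balanced layout, the stand-in node models and all names are editorial assumptions.
def label_probability(x, label, n_labels, node_prob):
    # node_prob(node_id, x) returns the estimated P(descend right | x) at an internal node.
    # The probability of a label is the product of the branch probabilities on its path.
    lo, hi, p, node_id = 0, n_labels, 1.0, 0
    while hi - lo > 1:                                   # O(log n) node evaluations
        mid = (lo + hi) // 2
        p_right = node_prob(node_id, x)
        if label >= mid:
            p, lo, node_id = p * p_right, mid, 2 * node_id + 2
        else:
            p, hi, node_id = p * (1.0 - p_right), mid, 2 * node_id + 1
    return p

# Toy usage: a constant 0.5 split at every node gives the uniform distribution over 8 labels.
print(label_probability(None, 5, 8, lambda node_id, x: 0.5))  # -> 0.125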
0707.4638 | Fengzhong Wang | Fengzhong Wang, Kazuko Yamasaki, Shlomo Havlin and H. Eugene Stanley | Indication of multiscaling in the volatility return intervals of stock
markets | 19 pages, 6 figures | Phys. Rev. E 77, 016109 (2008) | 10.1103/PhysRevE.77.016109 | null | q-fin.ST physics.soc-ph | null | The distribution of the return intervals $\tau$ between volatilities above a
threshold $q$ for financial records has been approximated by a scaling
behavior. To explore how accurate the scaling is, and therefore to understand the
underlying non-linear mechanism, we investigate intraday datasets of the 500 stocks
which constitute the Standard & Poor's 500 index. We show that the cumulative
distribution of return intervals has systematic deviations from scaling. We
support this finding by studying the m-th moment $\mu_m \equiv
<(\tau/<\tau>)^m>^{1/m}$, which shows a certain trend with the mean interval
$<\tau>$. We generate surrogate records using the Schreiber method, and find
that their cumulative distributions almost collapse to a single curve and the
moments are almost constant over most of the range of $<\tau>$. Those substantial
differences suggest that non-linear correlations in the original volatility
sequence account for the deviations from a single scaling law. We also find
that the original and surrogate records exhibit slight tendencies for short and
long $<\tau>$, due to the discreteness and finite size effects of the records
respectively. To avoid those effects as much as possible when testing the multiscaling
behavior, we investigate the moments in the range $10<<\tau>\leq100$, and find
the exponent $\alpha$ from the power law fitting $\mu_m\sim<\tau>^\alpha$ has a
narrow distribution around $\alpha\neq0$ which depends on m for the 500 stocks.
The distribution of $\alpha$ for the surrogate records is very narrow and
centered around $\alpha=0$. This suggests that the return interval distribution
exhibits multiscaling behavior due to the non-linear correlations in the
original volatility.
| [
{
"version": "v1",
"created": "Tue, 31 Jul 2007 15:14:47 GMT"
}
] | 2009-06-02T00:00:00 | [
[
"Wang",
"Fengzhong",
""
],
[
"Yamasaki",
"Kazuko",
""
],
[
"Havlin",
"Shlomo",
""
],
[
"Stanley",
"H. Eugene",
""
]
] | TITLE: Indication of multiscaling in the volatility return intervals of stock
markets
ABSTRACT: The distribution of the return intervals $\tau$ between volatilities above a
threshold $q$ for financial records has been approximated by a scaling
behavior. To explore how accurate the scaling is, and therefore to understand the
underlying non-linear mechanism, we investigate intraday datasets of the 500 stocks
which constitute the Standard & Poor's 500 index. We show that the cumulative
distribution of return intervals has systematic deviations from scaling. We
support this finding by studying the m-th moment $\mu_m \equiv
<(\tau/<\tau>)^m>^{1/m}$, which shows a certain trend with the mean interval
$<\tau>$. We generate surrogate records using the Schreiber method, and find
that their cumulative distributions almost collapse to a single curve and the
moments are almost constant over most of the range of $<\tau>$. Those substantial
differences suggest that non-linear correlations in the original volatility
sequence account for the deviations from a single scaling law. We also find
that the original and surrogate records exhibit slight tendencies for short and
long $<\tau>$, due to the discreteness and finite size effects of the records
respectively. To avoid those effects as much as possible when testing the multiscaling
behavior, we investigate the moments in the range $10<<\tau>\leq100$, and find
the exponent $\alpha$ from the power law fitting $\mu_m\sim<\tau>^\alpha$ has a
narrow distribution around $\alpha\neq0$ which depends on m for the 500 stocks.
The distribution of $\alpha$ for the surrogate records is very narrow and
centered around $\alpha=0$. This suggests that the return interval distribution
exhibits multiscaling behavior due to the non-linear correlations in the
original volatility.
| no_new_dataset | 0.944842 |
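Editor's note: the abstract above defines the moment statistic mu_m = <(tau/<tau>)^m>^(1/m) on the return intervals between volatilities exceeding a threshold q. A minimal Python sketch of that computation on a placeholder series follows; the synthetic data and the threshold are illustrative, not the paper's dataset.
import numpy as np

def return_intervals(volatility, q):
    # Intervals tau between successive records whose volatility exceeds the threshold q.
    idx = np.flatnonzero(np.asarray(volatility) > q)
    return np.diff(idx)

def mu_m(tau, m):
    # mu_m = <(tau/<tau>)^m>^(1/m); a dependence of mu_m on <tau> indicates multiscaling.
    r = np.asarray(tau, dtype=float) / np.mean(tau)
    return np.mean(r ** m) ** (1.0 / m)

vol = np.abs(np.random.randn(10000))        # placeholder volatility series, not market data
tau = return_intervals(vol, q=2.0)
print(np.mean(tau), mu_m(tau, m=2))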
0905.4627 | Claudio Lucchese | Paolo Bolettieri, Andrea Esuli, Fabrizio Falchi, Claudio Lucchese,
Raffaele Perego, Tommaso Piccioli and Fausto Rabitti | CoPhIR: a Test Collection for Content-Based Image Retrieval | 15 pages | null | null | null | cs.MM cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The scalability, as well as the effectiveness, of the different Content-based
Image Retrieval (CBIR) approaches proposed in literature, is today an important
research issue. Given the wealth of images on the Web, CBIR systems must in
fact leap towards Web-scale datasets. In this paper, we report on our
experience in building a test collection of 100 million images, with the
corresponding descriptive features, to be used in experimenting new scalable
techniques for similarity searching, and comparing their results. In the
context of the SAPIR (Search on Audio-visual content using Peer-to-peer
Information Retrieval) European project, we had to experiment with our distributed
similarity searching technology on a realistic data set. Therefore, since no
large-scale collection was available for research purposes, we had to tackle
the non-trivial process of image crawling and descriptive feature extraction
(we used five MPEG-7 features) using the European EGEE computer GRID. The
result of this effort is CoPhIR, the first CBIR test collection of such scale.
CoPhIR is now open to the research community for experiments and comparisons,
and access to the collection was already granted to more than 50 research
groups worldwide.
| [
{
"version": "v1",
"created": "Thu, 28 May 2009 12:14:07 GMT"
},
{
"version": "v2",
"created": "Mon, 1 Jun 2009 07:44:19 GMT"
}
] | 2009-06-01T00:00:00 | [
[
"Bolettieri",
"Paolo",
""
],
[
"Esuli",
"Andrea",
""
],
[
"Falchi",
"Fabrizio",
""
],
[
"Lucchese",
"Claudio",
""
],
[
"Perego",
"Raffaele",
""
],
[
"Piccioli",
"Tommaso",
""
],
[
"Rabitti",
"Fausto",
""
]
] | TITLE: CoPhIR: a Test Collection for Content-Based Image Retrieval
ABSTRACT: The scalability, as well as the effectiveness, of the different Content-based
Image Retrieval (CBIR) approaches proposed in literature, is today an important
research issue. Given the wealth of images on the Web, CBIR systems must in
fact leap towards Web-scale datasets. In this paper, we report on our
experience in building a test collection of 100 million images, with the
corresponding descriptive features, to be used in experimenting new scalable
techniques for similarity searching, and comparing their results. In the
context of the SAPIR (Search on Audio-visual content using Peer-to-peer
Information Retrieval) European project, we had to experiment with our distributed
similarity searching technology on a realistic data set. Therefore, since no
large-scale collection was available for research purposes, we had to tackle
the non-trivial process of image crawling and descriptive feature extraction
(we used five MPEG-7 features) using the European EGEE computer GRID. The
result of this effort is CoPhIR, the first CBIR test collection of such scale.
CoPhIR is now open to the research community for experiments and comparisons,
and access to the collection was already granted to more than 50 research
groups worldwide.
| new_dataset | 0.87925 |
0905.4138 | Christos Attikos | Christos Attikos, Michael Doumpos | Faster estimation of the correlation fractal dimension using
box-counting | 4 pages, to appear in BCI 2009 - 4th Balkan Conference in Informatics | null | null | null | cs.DB cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fractal dimension is widely adopted in spatial databases and data mining,
among others as a measure of dataset skewness. State-of-the-art algorithms for
estimating the fractal dimension exhibit linear runtime complexity whether
based on box-counting or approximation schemes. In this paper, we revisit a
correlation fractal dimension estimation algorithm that redundantly rescans the
dataset and, extending that work, we propose another linear, yet faster and as
accurate method, which completes in a single pass.
| [
{
"version": "v1",
"created": "Tue, 26 May 2009 08:52:42 GMT"
}
] | 2009-05-27T00:00:00 | [
[
"Attikos",
"Christos",
""
],
[
"Doumpos",
"Michael",
""
]
] | TITLE: Faster estimation of the correlation fractal dimension using
box-counting
ABSTRACT: Fractal dimension is widely adopted in spatial databases and data mining,
among others as a measure of dataset skewness. State-of-the-art algorithms for
estimating the fractal dimension exhibit linear runtime complexity whether
based on box-counting or approximation schemes. In this paper, we revisit a
correlation fractal dimension estimation algorithm that redundantly rescans the
dataset and, extending that work, we propose another linear, yet faster and as
accurate method, which completes in a single pass.
| no_new_dataset | 0.956513 |
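Editor's note: the abstract above concerns a single-pass box-counting estimate of the correlation fractal dimension. The usual computation accumulates per-cell point counts C_i for each grid size r in one scan and takes the slope of log sum C_i^2 against log r; the Python sketch below illustrates this, with grid sizes and the fitting range as editorial choices rather than the paper's settings.
import numpy as np
from collections import Counter

def correlation_dimension(points, grid_sizes):
    points = np.asarray(points, dtype=float)
    counters = {r: Counter() for r in grid_sizes}
    for p in points:                                   # single pass over the dataset
        for r in grid_sizes:
            counters[r][tuple(np.floor(p / r).astype(int))] += 1
    log_r = [np.log(r) for r in grid_sizes]
    log_s = [np.log(sum(c * c for c in counters[r].values())) for r in grid_sizes]
    slope, _ = np.polyfit(log_r, log_s, 1)             # estimate of D2
    return slope

pts = np.random.rand(5000, 2)                          # uniform points in the plane
print(correlation_dimension(pts, [0.02, 0.04, 0.08, 0.16]))   # close to 2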
0905.4022 | Paramveer Dhillon | Paramveer S. Dhillon, Dean Foster and Lyle Ungar | Transfer Learning Using Feature Selection | Masters' Thesis | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present three related ways of using Transfer Learning to improve feature
selection. The three methods address different problems, and hence share
different kinds of information between tasks or feature classes, but all three
are based on the information theoretic Minimum Description Length (MDL)
principle and share the same underlying Bayesian interpretation. The first
method, MIC, applies when predictive models are to be built simultaneously for
multiple tasks (``simultaneous transfer'') that share the same set of features.
MIC allows each feature to be added to none, some, or all of the task models
and is most beneficial for selecting a small set of predictive features from a
large pool of features, as is common in genomic and biological datasets. Our
second method, TPC (Three Part Coding), uses a similar methodology for the case
when the features can be divided into feature classes. Our third method,
Transfer-TPC, addresses the ``sequential transfer'' problem in which the task
to which we want to transfer knowledge may not be known in advance and may have
different amounts of data than the other tasks. Transfer-TPC is most beneficial
when we want to transfer knowledge between tasks which have unequal amounts of
labeled data, for example the data for disambiguating the senses of different
verbs. We demonstrate the effectiveness of these approaches with experimental
results on real world data pertaining to genomics and to Word Sense
Disambiguation (WSD).
| [
{
"version": "v1",
"created": "Mon, 25 May 2009 14:29:59 GMT"
}
] | 2009-05-26T00:00:00 | [
[
"Dhillon",
"Paramveer S.",
""
],
[
"Foster",
"Dean",
""
],
[
"Ungar",
"Lyle",
""
]
] | TITLE: Transfer Learning Using Feature Selection
ABSTRACT: We present three related ways of using Transfer Learning to improve feature
selection. The three methods address different problems, and hence share
different kinds of information between tasks or feature classes, but all three
are based on the information theoretic Minimum Description Length (MDL)
principle and share the same underlying Bayesian interpretation. The first
method, MIC, applies when predictive models are to be built simultaneously for
multiple tasks (``simultaneous transfer'') that share the same set of features.
MIC allows each feature to be added to none, some, or all of the task models
and is most beneficial for selecting a small set of predictive features from a
large pool of features, as is common in genomic and biological datasets. Our
second method, TPC (Three Part Coding), uses a similar methodology for the case
when the features can be divided into feature classes. Our third method,
Transfer-TPC, addresses the ``sequential transfer'' problem in which the task
to which we want to transfer knowledge may not be known in advance and may have
different amounts of data than the other tasks. Transfer-TPC is most beneficial
when we want to transfer knowledge between tasks which have unequal amounts of
labeled data, for example the data for disambiguating the senses of different
verbs. We demonstrate the effectiveness of these approaches with experimental
results on real world data pertaining to genomics and to Word Sense
Disambiguation (WSD).
| no_new_dataset | 0.947137 |
0905.2200 | Debprakash Patnaik | Yong Cao, Debprakash Patnaik, Sean Ponce, Jeremy Archuleta, Patrick
Butler, Wu-chun Feng, and Naren Ramakrishnan | Towards Chip-on-Chip Neuroscience: Fast Mining of Frequent Episodes
Using Graphics Processors | null | null | null | null | cs.DC cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Computational neuroscience is being revolutionized with the advent of
multi-electrode arrays that provide real-time, dynamic, perspectives into brain
function. Mining event streams from these chips is critical to understanding
the firing patterns of neurons and to gaining insight into the underlying
cellular activity. We present a GPGPU solution to mining spike trains. We focus
on mining frequent episodes which captures coordinated events across time even
in the presence of intervening background/"junk" events. Our algorithmic
contributions are two-fold: MapConcatenate, a new computation-to-core mapping
scheme, and a two-pass elimination approach to quickly find supported episodes
from a large number of candidates. Together, they help realize a real-time
"chip-on-chip" solution to neuroscience data mining, where one chip (the
multi-electrode array) supplies the spike train data and another (the GPGPU)
mines it at a scale unachievable previously. Evaluation on both synthetic and
real datasets demonstrates the potential of our approach.
| [
{
"version": "v1",
"created": "Wed, 13 May 2009 21:04:03 GMT"
}
] | 2009-05-15T00:00:00 | [
[
"Cao",
"Yong",
""
],
[
"Patnaik",
"Debprakash",
""
],
[
"Ponce",
"Sean",
""
],
[
"Archuleta",
"Jeremy",
""
],
[
"Butler",
"Patrick",
""
],
[
"Feng",
"Wu-chun",
""
],
[
"Ramakrishnan",
"Naren",
""
]
] | TITLE: Towards Chip-on-Chip Neuroscience: Fast Mining of Frequent Episodes
Using Graphics Processors
ABSTRACT: Computational neuroscience is being revolutionized with the advent of
multi-electrode arrays that provide real-time, dynamic, perspectives into brain
function. Mining event streams from these chips is critical to understanding
the firing patterns of neurons and to gaining insight into the underlying
cellular activity. We present a GPGPU solution to mining spike trains. We focus
on mining frequent episodes which captures coordinated events across time even
in the presence of intervening background/"junk" events. Our algorithmic
contributions are two-fold: MapConcatenate, a new computation-to-core mapping
scheme, and a two-pass elimination approach to quickly find supported episodes
from a large number of candidates. Together, they help realize a real-time
"chip-on-chip" solution to neuroscience data mining, where one chip (the
multi-electrode array) supplies the spike train data and another (the GPGPU)
mines it at a scale unachievable previously. Evaluation on both synthetic and
real datasets demonstrates the potential of our approach.
| no_new_dataset | 0.946941 |
0905.2203 | Debprakash Patnaik | Debprakash Patnaik, Sean P. Ponce, Yong Cao, Naren Ramakrishnan | Accelerator-Oriented Algorithm Transformation for Temporal Data Mining | null | null | null | null | cs.DC cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Temporal data mining algorithms are becoming increasingly important in many
application domains including computational neuroscience, especially the
analysis of spike train data. While application scientists have been able to
readily gather multi-neuronal datasets, analysis capabilities have lagged
behind, due to both lack of powerful algorithms and inaccessibility to powerful
hardware platforms. The advent of GPU architectures such as Nvidia's GTX 280
offers a cost-effective option to bring these capabilities to the
neuroscientist's desktop. Rather than port existing algorithms onto this
architecture, we advocate the need for algorithm transformation, i.e.,
rethinking the design of the algorithm in a way that need not necessarily
mirror its serial implementation strictly. We present a novel implementation of
a frequent episode discovery algorithm by revisiting "in-the-large" issues such
as problem decomposition as well as "in-the-small" issues such as data layouts
and memory access patterns. This is non-trivial because frequent episode
discovery does not lend itself to GPU-friendly data-parallel mapping
strategies. Applications to many datasets and comparisons to CPU as well as
prior GPU implementations showcase the advantages of our approach.
| [
{
"version": "v1",
"created": "Wed, 13 May 2009 21:18:31 GMT"
}
] | 2009-05-15T00:00:00 | [
[
"Patnaik",
"Debprakash",
""
],
[
"Ponce",
"Sean P.",
""
],
[
"Cao",
"Yong",
""
],
[
"Ramakrishnan",
"Naren",
""
]
] | TITLE: Accelerator-Oriented Algorithm Transformation for Temporal Data Mining
ABSTRACT: Temporal data mining algorithms are becoming increasingly important in many
application domains including computational neuroscience, especially the
analysis of spike train data. While application scientists have been able to
readily gather multi-neuronal datasets, analysis capabilities have lagged
behind, due to both lack of powerful algorithms and inaccessibility to powerful
hardware platforms. The advent of GPU architectures such as Nvidia's GTX 280
offers a cost-effective option to bring these capabilities to the
neuroscientist's desktop. Rather than port existing algorithms onto this
architecture, we advocate the need for algorithm transformation, i.e.,
rethinking the design of the algorithm in a way that need not necessarily
mirror its serial implementation strictly. We present a novel implementation of
a frequent episode discovery algorithm by revisiting "in-the-large" issues such
as problem decomposition as well as "in-the-small" issues such as data layouts
and memory access patterns. This is non-trivial because frequent episode
discovery does not lend itself to GPU-friendly data-parallel mapping
strategies. Applications to many datasets and comparisons to CPU as well as
prior GPU implementations showcase the advantages of our approach.
| no_new_dataset | 0.94474 |
0905.2288 | Michele Marchesi | Hongyu Zhang, Hee Beng Kuan Tan, Michele Marchesi | The Distribution of Program Sizes and Its Implications: An Eclipse Case
Study | 10 pages, 2 figures, 6 tables | null | null | null | cs.SE cs.PL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A large software system is often composed of many inter-related programs of
different sizes. Using the public Eclipse dataset, we replicate our previous
study on the distribution of program sizes. Our results confirm that the
program sizes follow the lognormal distribution. We also investigate the
implications of the program size distribution on size estimation and quality
prediction. We find that the nature of the size distribution can be used to
estimate the size of a large Java system. We also find that a small percentage
of largest programs account for a large percentage of defects, and the number
of defects across programs follows the Weibull distribution when the programs
are ranked by their sizes. Our results show that the distribution of program
sizes is an important property for understanding large and complex software
systems.
| [
{
"version": "v1",
"created": "Thu, 14 May 2009 09:24:51 GMT"
}
] | 2009-05-15T00:00:00 | [
[
"Zhang",
"Hongyu",
""
],
[
"Tan",
"Hee Beng Kuan",
""
],
[
"Marchesi",
"Michele",
""
]
] | TITLE: The Distribution of Program Sizes and Its Implications: An Eclipse Case
Study
ABSTRACT: A large software system is often composed of many inter-related programs of
different sizes. Using the public Eclipse dataset, we replicate our previous
study on the distribution of program sizes. Our results confirm that the
program sizes follow the lognormal distribution. We also investigate the
implications of the program size distribution on size estimation and quality
prediction. We find that the nature of the size distribution can be used to
estimate the size of a large Java system. We also find that a small percentage
of largest programs account for a large percentage of defects, and the number
of defects across programs follows the Weibull distribution when the programs
are ranked by their sizes. Our results show that the distribution of program
sizes is an important property for understanding large and complex software
systems.
| no_new_dataset | 0.947284 |
0905.2141 | Ilya Volnyansky | Ilya Volnyansky | Curse of Dimensionality in the Application of Pivot-based Indexes to the
Similarity Search Problem | 56 pages, 7 figures Master's Thesis in Mathematics, University of
Ottawa (Canada) Supervisor: Vladimir Pestov | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work we study the validity of the so-called curse of dimensionality
for indexing of databases for similarity search. We perform an asymptotic
analysis, with a test model based on a sequence of metric spaces $(\Omega_d)$
from which we pick datasets $X_d$ in an i.i.d. fashion. We call the subscript
$d$ the dimension of the space $\Omega_d$ (e.g. for $\mathbb{R}^d$ the
dimension is just the usual one) and we allow the size of the dataset $n=n_d$
to be such that $d$ is superlogarithmic but subpolynomial in $n$.
We study the asymptotic performance of pivot-based indexing schemes where the
number of pivots is $o(n/d)$. We pick the relatively simple cost model of
similarity search where we count each distance calculation as a single
computation and disregard the rest.
We demonstrate that if the spaces $\Omega_d$ exhibit the (fairly common)
concentration of measure phenomenon the performance of similarity search using
such indexes is asymptotically linear in $n$. That is, for large enough $d$ the
difference between using such an index and performing a search without an index
at all is negligible. Thus we confirm the curse of dimensionality in this
setting.
| [
{
"version": "v1",
"created": "Wed, 13 May 2009 16:24:21 GMT"
}
] | 2009-05-14T00:00:00 | [
[
"Volnyansky",
"Ilya",
""
]
] | TITLE: Curse of Dimensionality in the Application of Pivot-based Indexes to the
Similarity Search Problem
ABSTRACT: In this work we study the validity of the so-called curse of dimensionality
for indexing of databases for similarity search. We perform an asymptotic
analysis, with a test model based on a sequence of metric spaces $(\Omega_d)$
from which we pick datasets $X_d$ in an i.i.d. fashion. We call the subscript
$d$ the dimension of the space $\Omega_d$ (e.g. for $\mathbb{R}^d$ the
dimension is just the usual one) and we allow the size of the dataset $n=n_d$
to be such that $d$ is superlogarithmic but subpolynomial in $n$.
We study the asymptotic performance of pivot-based indexing schemes where the
number of pivots is $o(n/d)$. We pick the relatively simple cost model of
similarity search where we count each distance calculation as a single
computation and disregard the rest.
We demonstrate that if the spaces $\Omega_d$ exhibit the (fairly common)
concentration of measure phenomenon the performance of similarity search using
such indexes is asymptotically linear in $n$. That is, for large enough $d$, the
difference between using such an index and performing a search without an index
at all is negligible. Thus we confirm the curse of dimensionality in this
setting.
| no_new_dataset | 0.943764 |
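The cost model in this record counts only distance computations in a pivot-based index. A small illustrative sketch (not from the thesis; the Euclidean space, the number of pivots and the query radius are assumptions) of triangle-inequality pruning and of that cost accounting:

```python
import numpy as np

def build_pivot_index(data, pivots, dist):
    """Precompute distances from every dataset point to every pivot."""
    return np.array([[dist(x, p) for p in pivots] for x in data])

def range_search(query, radius, data, pivots, pivot_dists, dist):
    """Range search that prunes via the triangle inequality:
    |d(q, p) - d(x, p)| <= d(q, x); if that lower bound exceeds the
    radius for some pivot p, x cannot be an answer."""
    q_to_pivots = np.array([dist(query, p) for p in pivots])
    results, distance_computations = [], len(pivots)
    for i, x in enumerate(data):
        lower_bound = np.max(np.abs(q_to_pivots - pivot_dists[i]))
        if lower_bound > radius:
            continue                      # pruned: no distance computation spent
        distance_computations += 1
        if dist(query, x) <= radius:
            results.append(i)
    return results, distance_computations

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    d = 50                                # try d = 5 vs d = 50 to see pruning degrade
    data = rng.standard_normal((5000, d))
    pivots = data[rng.choice(len(data), size=16, replace=False)]
    euclid = lambda a, b: float(np.linalg.norm(a - b))
    pd = build_pivot_index(data, pivots, euclid)
    hits, cost = range_search(rng.standard_normal(d), 0.9 * np.sqrt(2 * d),
                              data, pivots, pd, euclid)
    print(f"{len(hits)} hits, {cost} distance computations out of {len(data)}")
```

Raising `d` while keeping everything else fixed makes the pruning bound useless and drives the cost toward a linear scan, which is the concentration-of-measure effect the abstract analyses.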
0905.1744 | Fahad Saeed | Fahad Saeed and Ashfaq Khokhar | A Domain Decomposition Strategy for Alignment of Multiple Biological
Sequences on Multiprocessor Platforms | 36 pages, 17 figures, Accepted manuscript in Journal of Parallel and
Distributed Computing(JPDC) | as: F. Saeed, A. Khokhar, A domain decomposition strategy for
alignment of multiple biological sequences on multiprocessor platforms, J.
Parallel Distrib. Comput. (2009) | 10.1016/j.jpdc.2009.03.006 | null | cs.DC q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multiple Sequences Alignment (MSA) of biological sequences is a fundamental
problem in computational biology due to its critical significance in wide
ranging applications including haplotype reconstruction, sequence homology,
phylogenetic analysis, and prediction of evolutionary origins. The MSA problem
is considered NP-hard and known heuristics for the problem do not scale well
with an increasing number of sequences. On the other hand, with the advent of a new
breed of fast sequencing techniques it is now possible to generate thousands of
sequences very quickly. For rapid sequence analysis, it is therefore desirable
to develop fast MSA algorithms that scale well with the increase in the dataset
size. In this paper, we present a novel domain decomposition based technique to
solve the MSA problem on multiprocessing platforms. The domain decomposition
based technique, in addition to yielding better quality, gives enormous
advantage in terms of execution time and memory requirements. The proposed
strategy makes it possible to decrease the time complexity of any known heuristic of
O(N)^x complexity by a factor of O(1/p)^x, where N is the number of sequences,
x depends on the underlying heuristic approach, and p is the number of
processing nodes. In particular, we propose a highly scalable algorithm,
Sample-Align-D, for aligning biological sequences using the Muscle system as the
underlying heuristic. The proposed algorithm has been implemented on a cluster
of workstations using the MPI library. Experimental results for different problem
sizes are analyzed in terms of quality of alignment, execution time and
speed-up.
| [
{
"version": "v1",
"created": "Tue, 12 May 2009 01:04:40 GMT"
}
] | 2009-05-13T00:00:00 | [
[
"Saeed",
"Fahad",
""
],
[
"Khokhar",
"Ashfaq",
""
]
] | TITLE: A Domain Decomposition Strategy for Alignment of Multiple Biological
Sequences on Multiprocessor Platforms
ABSTRACT: Multiple Sequences Alignment (MSA) of biological sequences is a fundamental
problem in computational biology due to its critical significance in wide
ranging applications including haplotype reconstruction, sequence homology,
phylogenetic analysis, and prediction of evolutionary origins. The MSA problem
is considered NP-hard and known heuristics for the problem do not scale well
with an increasing number of sequences. On the other hand, with the advent of a new
breed of fast sequencing techniques it is now possible to generate thousands of
sequences very quickly. For rapid sequence analysis, it is therefore desirable
to develop fast MSA algorithms that scale well with the increase in the dataset
size. In this paper, we present a novel domain decomposition based technique to
solve the MSA problem on multiprocessing platforms. The domain decomposition
based technique, in addition to yielding better quality, gives enormous
advantage in terms of execution time and memory requirements. The proposed
strategy makes it possible to decrease the time complexity of any known heuristic of
O(N)^x complexity by a factor of O(1/p)^x, where N is the number of sequences,
x depends on the underlying heuristic approach, and p is the number of
processing nodes. In particular, we propose a highly scalable algorithm,
Sample-Align-D, for aligning biological sequences using the Muscle system as the
underlying heuristic. The proposed algorithm has been implemented on a cluster
of workstations using the MPI library. Experimental results for different problem
sizes are analyzed in terms of quality of alignment, execution time and
speed-up.
| no_new_dataset | 0.948489 |
0905.1755 | Raymond Chi-Wing Wong | Raymond Chi-Wing Wong, Ada Wai-Chee Fu, Ke Wang, Yabo Xu, Philip S. Yu | Can the Utility of Anonymized Data be used for Privacy Breaches? | 11 pages | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Group based anonymization is the most widely studied approach for privacy
preserving data publishing. This includes k-anonymity, l-diversity, and
t-closeness, to name a few. The goal of this paper is to raise a fundamental
issue on the privacy exposure of the current group based approach. This has
been overlooked in the past. The group based anonymization approach basically
hides each individual record behind a group to preserve data privacy. If not
properly anonymized, patterns can actually be derived from the published data
and be used by the adversary to breach individual privacy. For example, from
the medical records released, if patterns such as "people from certain countries
rarely suffer from some disease" can be derived, then the information can be
used to imply linkage of other people in an anonymized group with this disease
with higher likelihood. We call the derived patterns from the published data
the foreground knowledge. This is in contrast to the background knowledge that
the adversary may obtain from other channels as studied in some previous work.
Finally, we show by experiments that the attack is realistic in the privacy
benchmark dataset under the traditional group based anonymization approach.
| [
{
"version": "v1",
"created": "Tue, 12 May 2009 03:36:26 GMT"
}
] | 2009-05-13T00:00:00 | [
[
"Wong",
"Raymond Chi-Wing",
""
],
[
"Fu",
"Ada Wai-Chee",
""
],
[
"Wang",
"Ke",
""
],
[
"Xu",
"Yabo",
""
],
[
"Yu",
"Philip S.",
""
]
] | TITLE: Can the Utility of Anonymized Data be used for Privacy Breaches?
ABSTRACT: Group based anonymization is the most widely studied approach for privacy
preserving data publishing. This includes k-anonymity, l-diversity, and
t-closeness, to name a few. The goal of this paper is to raise a fundamental
issue on the privacy exposure of the current group based approach. This has
been overlooked in the past. The group based anonymization approach basically
hides each individual record behind a group to preserve data privacy. If not
properly anonymized, patterns can actually be derived from the published data
and be used by the adversary to breach individual privacy. For example, from
the medical records released, if patterns such as "people from certain countries
rarely suffer from some disease" can be derived, then the information can be
used to imply linkage of other people in an anonymized group with this disease
with higher likelihood. We call the derived patterns from the published data
the foreground knowledge. This is in contrast to the background knowledge that
the adversary may obtain from other channels as studied in some previous work.
Finally, we show by experiments that the attack is realistic in the privacy
benchmark dataset under the traditional group based anonymization approach.
| no_new_dataset | 0.945851 |
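The "foreground knowledge" attack in this record is essentially pattern mining over the published table itself. A toy sketch of that idea (the table, attribute names and probabilities below are invented for illustration; the paper's actual attack and benchmark data are not reproduced):

```python
from collections import Counter, defaultdict

# Toy "published" table: each row is (group_id, country, disease).
published = [
    (1, "A", "flu"), (1, "B", "flu"), (1, "B", "hepatitis"),
    (2, "A", "flu"), (2, "A", "flu"), (2, "B", "hepatitis"),
    (3, "A", "flu"), (3, "B", "hepatitis"), (3, "B", "hepatitis"),
]

# Foreground knowledge: estimate P(disease | country) from the published data itself.
country_totals = Counter(c for _, c, _ in published)
pair_counts = Counter((c, d) for _, c, d in published)
for c in sorted(country_totals):
    for d in sorted({d for _, _, d in published}):
        p = pair_counts.get((c, d), 0) / country_totals[c]
        print(f"P({d} | country={c}) = {p:.2f}")

# An adversary who also sees group membership can use a near-zero pattern
# (here: country A never shows hepatitis) to rule individuals out of owning a
# sensitive record inside their group, sharpening the linkage for the others.
groups = defaultdict(list)
for g, c, d in published:
    groups[g].append((c, d))
for g, members in groups.items():
    print(g, members)
```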
0902.1475 | Frank E. Walter | Frank E. Walter, Stefano Battiston, Frank Schweitzer | Personalised and Dynamic Trust in Social Networks | Revised, added Empirical Validation, submitted to Recommender Systems
2009 | null | null | null | cs.CY cs.IR physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel trust metric for social networks which is suitable for
application in recommender systems. It is personalised and dynamic and allows one
to compute the indirect trust between two agents which are not neighbours based
on the direct trust between agents that are neighbours. In analogy to some
personalised versions of PageRank, this metric makes use of the concept of
feedback centrality and overcomes some of the limitations of other trust
metrics. In particular, it does not neglect cycles and other patterns
characterising social networks, as some other algorithms do. In order to apply
the metric to recommender systems, we propose a way to make trust dynamic over
time. We show by means of analytical approximations and computer simulations
that the metric has the desired properties. Finally, we carry out an empirical
validation on a dataset crawled from an Internet community and compare the
performance of a recommender system using our metric to one using collaborative
filtering.
| [
{
"version": "v1",
"created": "Mon, 9 Feb 2009 16:53:01 GMT"
},
{
"version": "v2",
"created": "Sat, 9 May 2009 17:48:23 GMT"
}
] | 2009-05-09T00:00:00 | [
[
"Walter",
"Frank E.",
""
],
[
"Battiston",
"Stefano",
""
],
[
"Schweitzer",
"Frank",
""
]
] | TITLE: Personalised and Dynamic Trust in Social Networks
ABSTRACT: We propose a novel trust metric for social networks which is suitable for
application in recommender systems. It is personalised and dynamic and allows one
to compute the indirect trust between two agents which are not neighbours based
on the direct trust between agents that are neighbours. In analogy to some
personalised versions of PageRank, this metric makes use of the concept of
feedback centrality and overcomes some of the limitations of other trust
metrics. In particular, it does not neglect cycles and other patterns
characterising social networks, as some other algorithms do. In order to apply
the metric to recommender systems, we propose a way to make trust dynamic over
time. We show by means of analytical approximations and computer simulations
that the metric has the desired properties. Finally, we carry out an empirical
validation on a dataset crawled from an Internet community and compare the
performance of a recommender system using our metric to one using collaborative
filtering.
| no_new_dataset | 0.949106 |
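The record above computes indirect trust from direct trust via a feedback-centrality iteration in the spirit of personalised PageRank. The sketch below is a generic personalised propagation, not the paper's exact update rule; the damping factor, iteration count and toy trust matrix are assumptions:

```python
import numpy as np

def indirect_trust(direct_trust, source, alpha=0.85, iters=100):
    """Propagate direct (neighbour-to-neighbour) trust to non-neighbours with a
    personalised, PageRank-like feedback iteration seeded at `source`.
    direct_trust[i, j] in [0, 1] is i's direct trust in j (0 if not neighbours)."""
    n = direct_trust.shape[0]
    # Row-normalise so each agent distributes its outgoing trust mass.
    row_sums = direct_trust.sum(axis=1, keepdims=True)
    T = np.divide(direct_trust, row_sums, out=np.zeros_like(direct_trust),
                  where=row_sums > 0)
    seed = np.zeros(n)
    seed[source] = 1.0
    t = seed.copy()
    for _ in range(iters):
        t = alpha * T.T @ t + (1 - alpha) * seed   # restart term keeps it personalised
    return t

if __name__ == "__main__":
    # Agent 0 trusts 1, agent 1 trusts 2: 0 has no direct edge to 2 but still
    # acquires some indirect trust in it through the propagation.
    D = np.array([[0.0, 0.9, 0.0],
                  [0.0, 0.0, 0.8],
                  [0.0, 0.0, 0.0]])
    print(np.round(indirect_trust(D, source=0), 3))
```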
0904.4041 | Mario Nascimento | Jie Luo and Mario A. Nascimento | Content-Based Sub-Image Retrieval with Relevance Feedback | A preliminary version of this paper appeared in the Proceedings of
the 1st ACM International Workshop on Multimedia Databases, p. 63-69. 2003 | null | null | null | cs.DB cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The typical content-based image retrieval problem is to find images within a
database that are similar to a given query image. This paper presents a
solution to a different problem, namely that of content based sub-image
retrieval, i.e., finding images from a database that contain another image.
Note that this is different from finding a region in a (segmented) image that
is similar to another image region given as a query. We present a technique for
CBsIR that explores relevance feedback, i.e., the user's input on intermediary
results, in order to improve retrieval efficiency. Upon modeling images as a
set of overlapping and recursive tiles, we use a tile re-weighting scheme that
assigns penalties to each tile of the database images and updates the tile
penalties for all relevant images retrieved at each iteration using both the
relevant and irrelevant images identified by the user. Each tile is modeled by
means of its color content using a compact but very efficient method which can,
indirectly, capture some notion of texture as well, despite the fact that only
color information is maintained. Performance evaluation on a largely
heterogeneous dataset of over 10,000 images shows that the system can achieve a
stable average recall value of 70% within the top 20 retrieved (and presented)
images after only 5 iterations, with each such iteration taking about 2 seconds
on an off-the-shelf desktop computer.
| [
{
"version": "v1",
"created": "Sun, 26 Apr 2009 17:50:33 GMT"
}
] | 2009-04-28T00:00:00 | [
[
"Luo",
"Jie",
""
],
[
"Nascimento",
"Mario A.",
""
]
] | TITLE: Content-Based Sub-Image Retrieval with Relevance Feedback
ABSTRACT: The typical content-based image retrieval problem is to find images within a
database that are similar to a given query image. This paper presents a
solution to a different problem, namely that of content based sub-image
retrieval, i.e., finding images from a database that contain another image.
Note that this is different from finding a region in a (segmented) image that
is similar to another image region given as a query. We present a technique for
CBsIR that explores relevance feedback, i.e., the user's input on intermediary
results, in order to improve retrieval efficiency. Upon modeling images as a
set of overlapping and recursive tiles, we use a tile re-weighting scheme that
assigns penalties to each tile of the database images and updates the tile
penalties for all relevant images retrieved at each iteration using both the
relevant and irrelevant images identified by the user. Each tile is modeled by
means of its color content using a compact but very efficient method which can,
indirectly, capture some notion of texture as well, despite the fact that only
color information is maintained. Performance evaluation on a largely
heterogeneous dataset of over 10,000 images shows that the system can achieve a
stable average recall value of 70% within the top 20 retrieved (and presented)
images after only 5 iterations, with each such iteration taking about 2 seconds
on an off-the-shelf desktop computer.
| no_new_dataset | 0.947332 |
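The tile-based modelling in this record can be illustrated with a bare-bones version: overlapping tiles plus per-tile colour histograms matched against the query sub-image. The compact colour descriptor and the penalty re-weighting scheme of the paper are not reproduced; tile size, stride and bin count below are assumptions:

```python
import numpy as np

def tile_histograms(img, tile, stride, bins=8):
    """Slide an overlapping `tile` x `tile` window over an RGB image and return
    an L1-normalised colour histogram for each tile position."""
    h, w, _ = img.shape
    feats, positions = [], []
    for y in range(0, h - tile + 1, stride):
        for x in range(0, w - tile + 1, stride):
            patch = img[y:y + tile, x:x + tile]
            hist, _ = np.histogramdd(patch.reshape(-1, 3),
                                     bins=(bins, bins, bins),
                                     range=[(0, 256)] * 3)
            hist = hist.ravel()
            feats.append(hist / hist.sum())
            positions.append((y, x))
    return np.array(feats), positions

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    db_image = rng.integers(0, 256, size=(128, 128, 3), dtype=np.uint8)
    query = db_image[32:64, 48:80]                      # a sub-image of db_image
    q_hist, _ = tile_histograms(query, tile=32, stride=32)
    tiles, pos = tile_histograms(db_image, tile=32, stride=16)
    d = np.abs(tiles - q_hist[0]).sum(axis=1)           # L1 histogram distance
    print("best matching tile at", pos[int(np.argmin(d))])
```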
0904.3316 | Shariq Bashir Mr. | Shariq Bashir, and Abdul Rauf Baig | Ramp: Fast Frequent Itemset Mining with Efficient Bit-Vector Projection
Technique | null | null | null | cs.DB cs.AI cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mining frequent itemsets using a bit-vector representation approach is very
efficient for dense datasets, but highly inefficient for sparse datasets
due to the lack of any efficient bit-vector projection technique. In this paper we
present a novel efficient bit-vector projection technique, for sparse and dense
datasets. To check the efficiency of our bit-vector projection technique, we
present a new frequent itemset mining algorithm Ramp (Real Algorithm for Mining
Patterns) built upon our bit-vector projection technique. The performance of
Ramp is compared with that of the current best (all, maximal and closed) frequent
itemset mining algorithms on benchmark datasets. Different experimental results
on sparse and dense datasets show that mining frequent itemset using Ramp is
faster than the current best algorithms, which shows the effectiveness of our
bit-vector projection idea. We also present a new local maximal frequent
itemset propagation and maximal itemset superset checking approach, FastLMFI,
built upon our PBR bit-vector projection technique. Our computational
experiments suggest that itemset maximality checking using FastLMFI is faster and
more efficient than the previously well-known progressive focusing approach.
| [
{
"version": "v1",
"created": "Tue, 21 Apr 2009 18:49:13 GMT"
}
] | 2009-04-22T00:00:00 | [
[
"Bashir",
"Shariq",
""
],
[
"Baig",
"Abdul Rauf",
""
]
] | TITLE: Ramp: Fast Frequent Itemset Mining with Efficient Bit-Vector Projection
Technique
ABSTRACT: Mining frequent itemsets using a bit-vector representation approach is very
efficient for dense datasets, but highly inefficient for sparse datasets
due to the lack of any efficient bit-vector projection technique. In this paper we
present a novel efficient bit-vector projection technique, for sparse and dense
datasets. To check the efficiency of our bit-vector projection technique, we
present a new frequent itemset mining algorithm Ramp (Real Algorithm for Mining
Patterns) built upon our bit-vector projection technique. The performance of
Ramp is compared with that of the current best (all, maximal and closed) frequent
itemset mining algorithms on benchmark datasets. Different experimental results
on sparse and dense datasets show that mining frequent itemset using Ramp is
faster than the current best algorithms, which shows the effectiveness of our
bit-vector projection idea. We also present a new local maximal frequent
itemset propagation and maximal itemset superset checking approach, FastLMFI,
built upon our PBR bit-vector projection technique. Our computational
experiments suggest that itemset maximality checking using FastLMFI is faster and
more efficient than the previously well-known progressive focusing approach.
| no_new_dataset | 0.951639 |
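The core idea behind bit-vector frequent itemset mining (though not Ramp's PBR projection itself, which is not reproduced here) is that support counting reduces to a bitwise AND plus a popcount. A minimal Python sketch with invented toy transactions:

```python
from itertools import combinations

def to_bitvectors(transactions, items):
    """Map each item to an integer bit-vector: bit t is set if transaction t
    contains the item. Support counting is then an AND plus a popcount."""
    vectors = {i: 0 for i in items}
    for t, transaction in enumerate(transactions):
        for item in transaction:
            vectors[item] |= 1 << t
    return vectors

def support(itemset, vectors):
    """AND the item bit-vectors together and popcount the result."""
    it = iter(itemset)
    bv = vectors[next(it)]
    for item in it:
        bv &= vectors[item]
    return bin(bv).count("1")

if __name__ == "__main__":
    transactions = [{"a", "b", "c"}, {"a", "c"}, {"a", "d"}, {"b", "c", "d"}]
    items = sorted(set().union(*transactions))
    vectors = to_bitvectors(transactions, items)
    minsup = 2
    frequent = [(pair, s) for pair in combinations(items, 2)
                if (s := support(pair, vectors)) >= minsup]
    print(frequent)   # frequent 2-itemsets with their supports
```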
0904.3319 | Shariq Bashir Mr. | Shariq Bashir, Zahoor Jan, Abdul Rauf Baig | Fast Algorithms for Mining Interesting Frequent Itemsets without Minimum
Support | 25 Pages | null | null | null | cs.DB cs.AI cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Real-world datasets are sparse, dirty and contain hundreds of items. In such
situations, discovering interesting rules (results) using the traditional frequent
itemset mining approach by specifying a user-defined input support threshold is
not appropriate, since without any domain knowledge, setting the support threshold too
small or too large can output nothing or a large number of redundant, uninteresting
results. Recently a novel approach of mining only N-most/Top-K interesting
frequent itemsets has been proposed, which discovers the top N interesting
results without specifying any user defined support threshold. However, mining
interesting frequent itemsets without a minimum support threshold is more costly
in terms of itemset search space exploration and processing cost. Thus, the
efficiency of their mining highly depends upon three main factors (1) Database
representation approach used for itemset frequency counting, (2) Projection of
relevant transactions to lower level nodes of search space and (3) Algorithm
implementation technique. Therefore, to improve the efficiency of mining
process, in this paper we present two novel algorithms called (N-MostMiner and
Top-K-Miner) using the bit-vector representation approach which is very
efficient in terms of itemset frequency counting and transactions projection.
In addition, several efficient implementation techniques for N-MostMiner
and Top-K-Miner are also presented, which we employed in our implementation.
Our experimental results on benchmark datasets suggest that N-MostMiner and
Top-K-Miner are very efficient in terms of processing time as compared to
current best algorithms BOMO and TFP.
| [
{
"version": "v1",
"created": "Tue, 21 Apr 2009 19:07:35 GMT"
}
] | 2009-04-22T00:00:00 | [
[
"Bashir",
"Shariq",
""
],
[
"Jan",
"Zahoor",
""
],
[
"Baig",
"Abdul Rauf",
""
]
] | TITLE: Fast Algorithms for Mining Interesting Frequent Itemsets without Minimum
Support
ABSTRACT: Real-world datasets are sparse, dirty and contain hundreds of items. In such
situations, discovering interesting rules (results) using the traditional frequent
itemset mining approach by specifying a user-defined input support threshold is
not appropriate, since without any domain knowledge, setting the support threshold too
small or too large can output nothing or a large number of redundant, uninteresting
results. Recently a novel approach of mining only N-most/Top-K interesting
frequent itemsets has been proposed, which discovers the top N interesting
results without specifying any user defined support threshold. However, mining
interesting frequent itemsets without a minimum support threshold is more costly
in terms of itemset search space exploration and processing cost. Thus, the
efficiency of their mining highly depends upon three main factors (1) Database
representation approach used for itemset frequency counting, (2) Projection of
relevant transactions to lower level nodes of search space and (3) Algorithm
implementation technique. Therefore, to improve the efficiency of mining
process, in this paper we present two novel algorithms called (N-MostMiner and
Top-K-Miner) using the bit-vector representation approach which is very
efficient in terms of itemset frequency counting and transactions projection.
In addition, several efficient implementation techniques for N-MostMiner
and Top-K-Miner are also presented, which we employed in our implementation.
Our experimental results on benchmark datasets suggest that N-MostMiner and
Top-K-Miner are very efficient in terms of processing time as compared to
current best algorithms BOMO and TFP.
| no_new_dataset | 0.949529 |
0904.3320 | Shariq Bashir Mr. | Shariq Bashir, Saad Razzaq, Umer Maqbool, Sonya Tahir, Abdul Rauf Baig | Using Association Rules for Better Treatment of Missing Values | null | null | null | null | cs.DB cs.AI cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The quality of training data for knowledge discovery in databases (KDD) and
data mining depends upon many factors, but handling missing values is
considered to be a crucial factor in overall data quality. Today real world
datasets contains missing values due to human, operational error, hardware
malfunctioning and many other factors. The quality of knowledge extracted,
learning and decision problems depend directly upon the quality of training
data. By considering the importance of handling missing values in KDD and data
mining tasks, in this paper we propose a novel Hybrid Missing values Imputation
Technique (HMiT) using association rules mining and hybrid combination of
k-nearest neighbor approach. To check the effectiveness of our HMiT missing
values imputation technique, we also perform detail experimental results on
real world datasets. Our results suggest that the HMiT technique is not only
better in term of accuracy but it also take less processing time as compared to
current best missing values imputation technique based on k-nearest neighbor
approach, which shows the effectiveness of our missing values imputation
technique.
| [
{
"version": "v1",
"created": "Tue, 21 Apr 2009 19:09:57 GMT"
}
] | 2009-04-22T00:00:00 | [
[
"Bashir",
"Shariq",
""
],
[
"Razzaq",
"Saad",
""
],
[
"Maqbool",
"Umer",
""
],
[
"Tahir",
"Sonya",
""
],
[
"Baig",
"Abdul Rauf",
""
]
] | TITLE: Using Association Rules for Better Treatment of Missing Values
ABSTRACT: The quality of training data for knowledge discovery in databases (KDD) and
data mining depends upon many factors, but handling missing values is
considered to be a crucial factor in overall data quality. Today, real-world
datasets contain missing values due to human and operational errors, hardware
malfunctioning and many other factors. The quality of the extracted knowledge,
and of learning and decision problems, depends directly upon the quality of the training
data. By considering the importance of handling missing values in KDD and data
mining tasks, in this paper we propose a novel Hybrid Missing values Imputation
Technique (HMiT) using association rules mining and hybrid combination of
k-nearest neighbor approach. To check the effectiveness of our HMiT missing
values imputation technique, we also report detailed experimental results on
real-world datasets. Our results suggest that the HMiT technique is not only
better in terms of accuracy but also takes less processing time compared to
the current best missing-values imputation technique based on the k-nearest neighbor
approach, which shows the effectiveness of our missing values imputation
technique.
| no_new_dataset | 0.952264 |
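HMiT combines association rules with a k-nearest-neighbor fallback; only the k-NN component is sketched below (the association-rule stage is not reproduced, and the toy matrix, distance over mutually observed columns, and k value are assumptions):

```python
import numpy as np

def knn_impute(X, k=3):
    """Fill NaN entries of a numeric matrix with the mean of the k nearest rows,
    where distance is computed over the columns both rows have observed."""
    X = np.asarray(X, dtype=float)
    filled = X.copy()
    for i in range(X.shape[0]):
        missing = np.isnan(X[i])
        if not missing.any():
            continue
        observed = ~missing
        dists = []
        for j in range(X.shape[0]):
            if j == i:
                continue
            common = observed & ~np.isnan(X[j])
            if not common.any() or np.isnan(X[j][missing]).any():
                continue                    # neighbour cannot supply every missing value
            d = np.linalg.norm(X[i, common] - X[j, common]) / np.sqrt(common.sum())
            dists.append((d, j))
        dists.sort()
        neighbours = [j for _, j in dists[:k]]
        if neighbours:
            filled[i, missing] = np.nanmean(X[np.array(neighbours)][:, missing], axis=0)
    return filled

if __name__ == "__main__":
    X = np.array([[1.0, 2.0, np.nan],
                  [1.1, 2.1, 3.0],
                  [0.9, 1.9, 2.8],
                  [5.0, 6.0, 7.0]])
    print(knn_impute(X, k=2))   # the NaN is filled from the two closest rows
```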
0904.3321 | Shariq Bashir Mr. | Shariq Bashir, Saad Razzaq, Umer Maqbool, Sonya Tahir, Abdul Rauf Baig | Introducing Partial Matching Approach in Association Rules for Better
Treatment of Missing Values | null | null | null | null | cs.DB cs.AI cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Handling missing values in training datasets for constructing learning models
or extracting useful information is considered to be an important research task
in data mining and knowledge discovery in databases. In recent years, many
techniques have been proposed for imputing missing values by considering attribute
relationships between the missing-value observation and other observations in the training
dataset. The main deficiency of such techniques is that they depend upon a
single approach and do not combine multiple approaches, which is why they are less
accurate. To improve the accuracy of missing values imputation, in this paper
we introduce a novel partial matching concept in association rules mining,
which shows better results as compared to full matching concept that we
described in our previous work. Our imputation technique combines the partial
matching concept in association rules with the k-nearest neighbor approach. Since
this is a hybrid technique, its accuracy is much better than that of
those techniques which depend upon a single approach. To check the
efficiency of our technique, we also provide detailed experimental results on
a number of benchmark datasets, which show better results compared to previous
approaches.
| [
{
"version": "v1",
"created": "Tue, 21 Apr 2009 19:16:00 GMT"
}
] | 2009-04-22T00:00:00 | [
[
"Bashir",
"Shariq",
""
],
[
"Razzaq",
"Saad",
""
],
[
"Maqbool",
"Umer",
""
],
[
"Tahir",
"Sonya",
""
],
[
"Baig",
"Abdul Rauf",
""
]
] | TITLE: Introducing Partial Matching Approach in Association Rules for Better
Treatment of Missing Values
ABSTRACT: Handling missing values in training datasets for constructing learning models
or extracting useful information is considered to be an important research task
in data mining and knowledge discovery in databases. In recent years, many
techniques have been proposed for imputing missing values by considering attribute
relationships between the missing-value observation and other observations in the training
dataset. The main deficiency of such techniques is that they depend upon a
single approach and do not combine multiple approaches, which is why they are less
accurate. To improve the accuracy of missing values imputation, in this paper
we introduce a novel partial matching concept in association rules mining,
which shows better results as compared to full matching concept that we
described in our previous work. Our imputation technique combines the partial
matching concept in association rules with the k-nearest neighbor approach. Since
this is a hybrid technique, its accuracy is much better than that of
those techniques which depend upon a single approach. To check the
efficiency of our technique, we also provide detailed experimental results on
a number of benchmark datasets, which show better results compared to previous
approaches.
| no_new_dataset | 0.950457 |
0904.2476 | Alessandra Retico | I. Gori, F. Bagagli, M.E. Fantacci, A. Preite Martinez, A. Retico, I.
De Mitri, S. Donadio, C. Fulcheri, G. Gargano, R. Magro, M. Santoro, S.
Stumbo | Multi-scale analysis of lung computed tomography images | 18 pages, 12 low-resolution figures | 2007 JINST 2 P09007 | 10.1088/1748-0221/2/09/P09007 | null | physics.med-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A computer-aided detection (CAD) system for the identification of lung
internal nodules in low-dose multi-detector helical Computed Tomography (CT)
images was developed in the framework of the MAGIC-5 project. The three modules
of our lung CAD system, a segmentation algorithm for lung internal region
identification, a multi-scale dot-enhancement filter for nodule candidate
selection and a multi-scale neural technique for false positive finding
reduction, are described. The results obtained on a dataset of low-dose and
thin-slice CT scans are shown in terms of free response receiver operating
characteristic (FROC) curves and discussed.
| [
{
"version": "v1",
"created": "Thu, 16 Apr 2009 12:29:04 GMT"
}
] | 2009-04-17T00:00:00 | [
[
"Gori",
"I.",
""
],
[
"Bagagli",
"F.",
""
],
[
"Fantacci",
"M. E.",
""
],
[
"Martinez",
"A. Preite",
""
],
[
"Retico",
"A.",
""
],
[
"De Mitri",
"I.",
""
],
[
"Donadio",
"S.",
""
],
[
"Fulcheri",
"C.",
""
],
[
"Gargano",
"G.",
""
],
[
"Magro",
"R.",
""
],
[
"Santoro",
"M.",
""
],
[
"Stumbo",
"S.",
""
]
] | TITLE: Multi-scale analysis of lung computed tomography images
ABSTRACT: A computer-aided detection (CAD) system for the identification of lung
internal nodules in low-dose multi-detector helical Computed Tomography (CT)
images was developed in the framework of the MAGIC-5 project. The three modules
of our lung CAD system, a segmentation algorithm for lung internal region
identification, a multi-scale dot-enhancement filter for nodule candidate
selection and a multi-scale neural technique for false positive finding
reduction, are described. The results obtained on a dataset of low-dose and
thin-slice CT scans are shown in terms of free response receiver operating
characteristic (FROC) curves and discussed.
| no_new_dataset | 0.949995 |
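The nodule-candidate stage above uses a multi-scale dot-enhancement filter. The sketch below substitutes a simpler scale-normalised Laplacian-of-Gaussian blob response for the paper's Hessian-based filter (the scales, volume size and synthetic "nodule" are assumptions, not values from the CAD system):

```python
import numpy as np
from scipy import ndimage

def multiscale_dot_response(volume, sigmas=(1.0, 2.0, 4.0)):
    """Blob-style candidate enhancement: at every voxel, take the maximum over
    scales of the scale-normalised, sign-flipped Laplacian of Gaussian, which
    responds strongly to bright dot-like structures."""
    responses = [
        -(s ** 2) * ndimage.gaussian_laplace(volume.astype(float), sigma=s)
        for s in sigmas
    ]
    return np.max(responses, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    vol = rng.normal(0, 0.05, size=(48, 48, 48))
    vol[24, 24, 24] = 1.0                      # a tiny synthetic "nodule"
    vol = ndimage.gaussian_filter(vol, 1.5)    # give it some spatial extent
    resp = multiscale_dot_response(vol)
    print("strongest candidate at voxel", np.unravel_index(resp.argmax(), resp.shape))
```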
0904.2160 | Debprakash Patnaik | Debprakash Patnaik and Srivatsan Laxman and Naren Ramakrishnan | Inferring Dynamic Bayesian Networks using Frequent Episode Mining | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Motivation: Several different threads of research have been proposed for
modeling and mining temporal data. On the one hand, approaches such as dynamic
Bayesian networks (DBNs) provide a formal probabilistic basis to model
relationships between time-indexed random variables but these models are
intractable to learn in the general case. On the other, algorithms such as
frequent episode mining are scalable to large datasets but do not exhibit the
rigorous probabilistic interpretations that are the mainstay of the graphical
models literature.
Results: We present a unification of these two seemingly diverse threads of
research, by demonstrating how dynamic (discrete) Bayesian networks can be
inferred from the results of frequent episode mining. This helps bridge the
modeling emphasis of the former with the counting emphasis of the latter.
First, we show how, under reasonable assumptions on data characteristics and on
influences of random variables, the optimal DBN structure can be computed using
a greedy, local, algorithm. Next, we connect the optimality of the DBN
structure with the notion of fixed-delay episodes and their counts of distinct
occurrences. Finally, to demonstrate the practical feasibility of our approach,
we focus on a specific (but broadly applicable) class of networks, called
excitatory networks, and show how the search for the optimal DBN structure can
be conducted using just information from frequent episodes. Applications on
datasets gathered from mathematical models of spiking neurons as well as real
neuroscience datasets are presented.
Availability: Algorithmic implementations, simulator codebases, and datasets
are available from our website at http://neural-code.cs.vt.edu/dbn
| [
{
"version": "v1",
"created": "Tue, 14 Apr 2009 17:32:00 GMT"
}
] | 2009-04-15T00:00:00 | [
[
"Patnaik",
"Debprakash",
""
],
[
"Laxman",
"Srivatsan",
""
],
[
"Ramakrishnan",
"Naren",
""
]
] | TITLE: Inferring Dynamic Bayesian Networks using Frequent Episode Mining
ABSTRACT: Motivation: Several different threads of research have been proposed for
modeling and mining temporal data. On the one hand, approaches such as dynamic
Bayesian networks (DBNs) provide a formal probabilistic basis to model
relationships between time-indexed random variables but these models are
intractable to learn in the general case. On the other, algorithms such as
frequent episode mining are scalable to large datasets but do not exhibit the
rigorous probabilistic interpretations that are the mainstay of the graphical
models literature.
Results: We present a unification of these two seemingly diverse threads of
research, by demonstrating how dynamic (discrete) Bayesian networks can be
inferred from the results of frequent episode mining. This helps bridge the
modeling emphasis of the former with the counting emphasis of the latter.
First, we show how, under reasonable assumptions on data characteristics and on
influences of random variables, the optimal DBN structure can be computed using
a greedy, local, algorithm. Next, we connect the optimality of the DBN
structure with the notion of fixed-delay episodes and their counts of distinct
occurrences. Finally, to demonstrate the practical feasibility of our approach,
we focus on a specific (but broadly applicable) class of networks, called
excitatory networks, and show how the search for the optimal DBN structure can
be conducted using just information from frequent episodes. Applications on
datasets gathered from mathematical models of spiking neurons as well as real
neuroscience datasets are presented.
Availability: Algorithmic implementations, simulator codebases, and datasets
are available from our website at http://neural-code.cs.vt.edu/dbn
| no_new_dataset | 0.94801 |
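The bridge in this record runs through counts of fixed-delay episodes in an event stream. A naive illustration of that counting step (it counts all delayed pairs rather than the paper's non-overlapped distinct occurrences, and the toy spike stream is invented):

```python
from collections import Counter, defaultdict

def count_fixed_delay_episodes(event_stream, delay, window=0):
    """Count occurrences of fixed-delay pairs A -> B, i.e. symbol B occurring
    exactly `delay` time units after symbol A (with an optional +/- `window`
    tolerance). `event_stream` is a list of (timestamp, symbol) pairs."""
    by_time = defaultdict(set)
    for t, sym in event_stream:
        by_time[t].add(sym)
    counts = Counter()
    for t, sym in event_stream:
        for dt in range(delay - window, delay + window + 1):
            for later in by_time.get(t + dt, ()):
                counts[(sym, later)] += 1
    return counts

if __name__ == "__main__":
    # Toy spike-train-like stream: neuron A tends to excite neuron B 2 ticks later.
    stream = [(0, "A"), (2, "B"), (3, "A"), (5, "B"), (5, "C"), (7, "B"), (9, "C")]
    print(count_fixed_delay_episodes(stream, delay=2).most_common(3))
```

Counts like these, aggregated over the candidate parent sets, are the statistics from which the excitatory DBN structure is scored in the approach the abstract describes.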
physics/0701244 | Alessandra Retico | P. Delogu, M.E. Fantacci, P. Kasae, A. Retico | An automatic system to discriminate malignant from benign massive
lesions in mammograms | 4 pages, 2 figure; Proceedings of the Frontier Science 2005, 4th
International Conference on Frontier Science, 12-17 September, 2005, Milano,
Italy | Volume XL. Frontier Science 2005 - New Frontiers in Subnuclear
Physics. Eds. A. Pullia and M. Paganoni. | null | null | physics.med-ph | null | Evaluating the degree of malignancy of a massive lesion on the basis of the
mere visual analysis of the mammogram is a non-trivial task. We developed a
semi-automated system for massive-lesion characterization with the aim to
support the radiological diagnosis. A dataset of 226 masses has been used in
the present analysis. The system performance has been evaluated in terms of
the area under the ROC curve, obtaining A_z=0.80+-0.04.
| [
{
"version": "v1",
"created": "Mon, 22 Jan 2007 10:58:23 GMT"
}
] | 2009-04-15T00:00:00 | [
[
"Delogu",
"P.",
""
],
[
"Fantacci",
"M. E.",
""
],
[
"Kasae",
"P.",
""
],
[
"Retico",
"A.",
""
]
] | TITLE: An automatic system to discriminate malignant from benign massive
lesions in mammograms
ABSTRACT: Evaluating the degree of malignancy of a massive lesion on the basis of the
mere visual analysis of the mammogram is a non-trivial task. We developed a
semi-automated system for massive-lesion characterization with the aim to
support the radiological diagnosis. A dataset of 226 masses has been used in
the present analysis. The system performance has been evaluated in terms of
the area under the ROC curve, obtaining A_z=0.80+-0.04.
| new_dataset | 0.830457 |
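The figure of merit in this record is the area under the ROC curve (A_z). A self-contained way to compute it is the Mann-Whitney formulation below; the benign/malignant score distributions are synthetic stand-ins, not the 226-mass dataset:

```python
import numpy as np

def auc_mann_whitney(scores, labels):
    """Area under the ROC curve (A_z) via the Mann-Whitney U statistic: the
    probability that a randomly chosen malignant case scores higher than a
    randomly chosen benign one (ties count one half)."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, int)
    pos, neg = scores[labels == 1], scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    benign = rng.normal(0.35, 0.15, size=120)      # toy classifier outputs
    malignant = rng.normal(0.60, 0.15, size=106)
    scores = np.concatenate([benign, malignant])
    labels = np.concatenate([np.zeros(120, int), np.ones(106, int)])
    print(f"A_z = {auc_mann_whitney(scores, labels):.2f}")
```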
0901.0148 | Michal Zerola | Michal Zerola, Jerome Lauret, Roman Bartak and Michal Sumbera | Using constraint programming to resolve the multi-source/multi-site data
movement paradigm on the Grid | 10 pages; ACAT 2008 workshop | null | null | null | cs.PF | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In order to achieve both fast and coordinated data transfer to collaborative
sites as well as to create a distribution of data over multiple sites,
efficient data movement is one of the most essential aspects in a distributed
environment. With such capabilities at hand, truly distributed task scheduling
with minimal latencies would be reachable by internationally distributed
collaborations (such as those in HENP) seeking to scavenge or maximize
geographically spread computational resources. But it is often not at all clear
(a) how to move data when available from multiple sources or (b) how to move
data to multiple compute resources to achieve an optimal usage of available
resources. We present a method of creating a Constraint Programming (CP) model
consisting of sites, links and their attributes such as bandwidth for grid
network data transfer, also considering user tasks as part of the objective
function for an optimal solution. We will explore and explain the trade-off between
schedule generation time and divergence from the optimal solution and show how
to improve and render viable the solution's finding time by using search tree
time limit, approximations, restrictions such as symmetry breaking or grouping
similar tasks together, or generating sequence of optimal schedules by
splitting the input problem. Results of data transfer simulation for each case
will also include a well known Peer-2-Peer model, and time taken to generate a
schedule as well as time needed for a schedule execution will be compared to a
CP optimal solution. We will additionally present a possible implementation
aimed at bringing distributed datasets (multiple sources) to a given site in
minimal time.
| [
{
"version": "v1",
"created": "Wed, 31 Dec 2008 21:25:32 GMT"
}
] | 2009-04-14T00:00:00 | [
[
"Zerola",
"Michal",
""
],
[
"Lauret",
"Jerome",
""
],
[
"Bartak",
"Roman",
""
],
[
"Sumbera",
"Michal",
""
]
] | TITLE: Using constraint programming to resolve the multi-source/multi-site data
movement paradigm on the Grid
ABSTRACT: In order to achieve both fast and coordinated data transfer to collaborative
sites as well as to create a distribution of data over multiple sites,
efficient data movement is one of the most essential aspects in a distributed
environment. With such capabilities at hand, truly distributed task scheduling
with minimal latencies would be reachable by internationally distributed
collaborations (such as those in HENP) seeking to scavenge or maximize
geographically spread computational resources. But it is often not at all clear
(a) how to move data when available from multiple sources or (b) how to move
data to multiple compute resources to achieve an optimal usage of available
resources. We present a method of creating a Constraint Programming (CP) model
consisting of sites, links and their attributes such as bandwidth for grid
network data transfer, also considering user tasks as part of the objective
function for an optimal solution. We will explore and explain the trade-off between
schedule generation time and divergence from the optimal solution and show how
to improve and render viable the solution's finding time by using search tree
time limit, approximations, restrictions such as symmetry breaking or grouping
similar tasks together, or generating sequence of optimal schedules by
splitting the input problem. Results of data transfer simulation for each case
will also include a well known Peer-2-Peer model, and time taken to generate a
schedule as well as time needed for a schedule execution will be compared to a
CP optimal solution. We will additionally present a possible implementation
aimed at bringing distributed datasets (multiple sources) to a given site in
minimal time.
| no_new_dataset | 0.947672 |
0904.1931 | Byron Gao | Obi L. Griffith, Byron J. Gao, Mikhail Bilenky, Yuliya Prichyna,
Martin Ester, Steven J.M. Jones | KiWi: A Scalable Subspace Clustering Algorithm for Gene Expression
Analysis | International Conference on Bioinformatics and Biomedical Engineering
(iCBBE), 2009 | null | null | null | cs.DB cs.AI q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Subspace clustering has gained increasing popularity in the analysis of gene
expression data. Among subspace cluster models, the recently introduced
order-preserving sub-matrix (OPSM) has demonstrated high promise. An OPSM,
essentially a pattern-based subspace cluster, is a subset of rows and columns
in a data matrix for which all the rows induce the same linear ordering of
columns. Existing OPSM discovery methods do not scale well to increasingly
large expression datasets. In particular, twig clusters having few genes and
many experiments incur explosive computational costs and are completely pruned
off by existing methods. However, it is of particular interest to determine
small groups of genes that are tightly coregulated across many conditions. In
this paper, we present KiWi, an OPSM subspace clustering algorithm that is
scalable to massive datasets, capable of discovering twig clusters and
identifying negative as well as positive correlations. We extensively validate
KiWi using relevant biological datasets and show that KiWi correctly assigns
redundant probes to the same cluster, groups experiments with common clinical
annotations, differentiates real promoter sequences from negative control
sequences, and shows good association with cis-regulatory motif predictions.
| [
{
"version": "v1",
"created": "Mon, 13 Apr 2009 08:16:53 GMT"
}
] | 2009-04-14T00:00:00 | [
[
"Griffith",
"Obi L.",
""
],
[
"Gao",
"Byron J.",
""
],
[
"Bilenky",
"Mikhail",
""
],
[
"Prichyna",
"Yuliya",
""
],
[
"Ester",
"Martin",
""
],
[
"Jones",
"Steven J. M.",
""
]
] | TITLE: KiWi: A Scalable Subspace Clustering Algorithm for Gene Expression
Analysis
ABSTRACT: Subspace clustering has gained increasing popularity in the analysis of gene
expression data. Among subspace cluster models, the recently introduced
order-preserving sub-matrix (OPSM) has demonstrated high promise. An OPSM,
essentially a pattern-based subspace cluster, is a subset of rows and columns
in a data matrix for which all the rows induce the same linear ordering of
columns. Existing OPSM discovery methods do not scale well to increasingly
large expression datasets. In particular, twig clusters having few genes and
many experiments incur explosive computational costs and are completely pruned
off by existing methods. However, it is of particular interest to determine
small groups of genes that are tightly coregulated across many conditions. In
this paper, we present KiWi, an OPSM subspace clustering algorithm that is
scalable to massive datasets, capable of discovering twig clusters and
identifying negative as well as positive correlations. We extensively validate
KiWi using relevant biological datasets and show that KiWi correctly assigns
redundant probes to the same cluster, groups experiments with common clinical
annotations, differentiates real promoter sequences from negative control
sequences, and shows good association with cis-regulatory motif predictions.
| no_new_dataset | 0.946051 |
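The OPSM cluster model in this record has a very compact consistency test: all selected rows must induce the same column ordering. A minimal check on a toy expression matrix (the matrix values are invented; ties are ignored for simplicity):

```python
import numpy as np

def is_opsm(matrix, rows, cols):
    """Check whether the sub-matrix given by `rows` x `cols` is an
    order-preserving sub-matrix: every selected row induces the same
    linear ordering of the selected columns."""
    sub = np.asarray(matrix)[np.ix_(rows, cols)]
    orders = np.argsort(sub, axis=1)
    return bool((orders == orders[0]).all())

if __name__ == "__main__":
    expr = np.array([
        [0.1, 0.5, 0.9, 0.3],   # gene 0
        [1.0, 2.0, 3.0, 1.5],   # gene 1: same column ordering as gene 0
        [0.9, 0.2, 0.1, 0.5],   # gene 2: different ordering
    ])
    print(is_opsm(expr, rows=[0, 1], cols=[0, 1, 2, 3]))   # True
    print(is_opsm(expr, rows=[0, 2], cols=[0, 1, 2, 3]))   # False
```

A miner like KiWi searches for large row/column subsets for which this predicate holds, including the narrow "twig" case of few rows and many columns.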
0904.1313 | Hao Zhang | Hao Zhang, Gang Li, Huadong Meng | A Class of Novel STAP Algorithms Using Sparse Recovery Technique | 8 pages, 5 figures | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A class of novel STAP algorithms based on a sparse recovery technique is
presented. The intrinsic sparsity of the distribution of clutter and target energy on
the spatial-frequency plane is exploited from the viewpoint of compressed sensing.
The original sample data and the distribution of target and clutter energy are
connected by an ill-posed linear algebraic equation, and the popular $L_1$
optimization method can be utilized to search for its sparse solution.
Several new filtering algorithms acting on this solution are
designed to clean the clutter component on the spatial-frequency plane effectively for
detecting invisible targets buried in clutter. The method above is called
CS-STAP in general. CS-STAP shows its advantage over conventional
STAP techniques, such as SMI, in two ways. Firstly, the resolution of CS-STAP in
estimating the distribution of clutter and target energy is ultra-high, such
that clutter energy can be annihilated almost completely by a carefully tuned
filter; the output SCR of the CS-STAP algorithms is far superior to the requirement of
detection. Secondly, a much smaller training sample support compared
with the SMI method is required for CS-STAP. Even with only one snapshot
(from the target range cell) the CS-STAP method is able to reveal the existence
of the target clearly. The CS-STAP method displays great potential for use in
heterogeneous situations. Experimental results on a dataset from the Mountaintop
program provide evidence for our assertions on CS-STAP.
| [
{
"version": "v1",
"created": "Wed, 8 Apr 2009 11:58:02 GMT"
}
] | 2009-04-09T00:00:00 | [
[
"Zhang",
"Hao",
""
],
[
"Li",
"Gang",
""
],
[
"Meng",
"Huadong",
""
]
] | TITLE: A Class of Novel STAP Algorithms Using Sparse Recovery Technique
ABSTRACT: A class of novel STAP algorithms based on a sparse recovery technique is
presented. The intrinsic sparsity of the distribution of clutter and target energy on
the spatial-frequency plane is exploited from the viewpoint of compressed sensing.
The original sample data and the distribution of target and clutter energy are
connected by an ill-posed linear algebraic equation, and the popular $L_1$
optimization method can be utilized to search for its sparse solution.
Several new filtering algorithms acting on this solution are
designed to clean the clutter component on the spatial-frequency plane effectively for
detecting invisible targets buried in clutter. The method above is called
CS-STAP in general. CS-STAP shows its advantage over conventional
STAP techniques, such as SMI, in two ways. Firstly, the resolution of CS-STAP in
estimating the distribution of clutter and target energy is ultra-high, such
that clutter energy can be annihilated almost completely by a carefully tuned
filter; the output SCR of the CS-STAP algorithms is far superior to the requirement of
detection. Secondly, a much smaller training sample support compared
with the SMI method is required for CS-STAP. Even with only one snapshot
(from the target range cell) the CS-STAP method is able to reveal the existence
of the target clearly. The CS-STAP method displays great potential for use in
heterogeneous situations. Experimental results on a dataset from the Mountaintop
program provide evidence for our assertions on CS-STAP.
| no_new_dataset | 0.946001 |
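The $L_1$ recovery step in this record can be illustrated with a generic iterative soft-thresholding (ISTA) solver on a real-valued toy problem; a real STAP formulation would use a complex angle-Doppler steering dictionary, and the dimensions, regularisation weight and scatterer positions below are assumptions:

```python
import numpy as np

def ista(A, y, lam=0.05, iters=500):
    """Iterative soft-thresholding (ISTA) for the L1-regularised least squares
    problem min_x 0.5*||A x - y||^2 + lam*||x||_1, a standard way to recover a
    sparse scene from underdetermined linear measurements."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - y)
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_cells, n_samples = 400, 80           # sparse "angle-Doppler" grid, few snapshots
    A = rng.standard_normal((n_samples, n_cells)) / np.sqrt(n_samples)
    x_true = np.zeros(n_cells)
    x_true[[17, 230, 301]] = [3.0, -2.0, 4.0]    # a few strong scatterers
    y = A @ x_true + 0.01 * rng.standard_normal(n_samples)
    x_hat = ista(A, y)
    print("recovered support:", np.flatnonzero(np.abs(x_hat) > 0.5))
```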
0903.4035 | Iraklis Varlamis | A. Kritikopoulos, M. Sideri, I. Varlamis | BLOGRANK: Ranking Weblogs Based On Connectivity And Similarity Features | 9 pages, in 2nd international workshop on Advanced architectures and
algorithms for internet delivery and applications | Proceedings of the 2nd international Workshop on Advanced
Architectures and Algorithms For internet Delivery and Applications (Pisa,
Italy, October 10 - 10, 2006). AAA-IDEA '06 | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A large part of the hidden web resides in weblog servers. New content is
produced in a daily basis and the work of traditional search engines turns to
be insufficient due to the nature of weblogs. This work summarizes the
structure of the blogosphere and highlights the special features of weblogs. In
this paper we present a method for ranking weblogs based on the link graph and
on several similarity characteristics between weblogs. First we create an
enhanced graph of connected weblogs and add new types of edges and weights
utilising many weblog features. Then, we assign a ranking to each weblog using
our algorithm, BlogRank, which is a modified version of PageRank. For the
validation of our method we run experiments on a weblog dataset, which we
process and adapt to our search engine. (http://spiderwave.aueb.gr/Blogwave).
The results suggest that the use of the enhanced graph and the BlogRank
algorithm is preferred by the users.
| [
{
"version": "v1",
"created": "Tue, 24 Mar 2009 08:36:21 GMT"
}
] | 2009-03-25T00:00:00 | [
[
"Kritikopoulos",
"A.",
""
],
[
"Sideri",
"M.",
""
],
[
"Varlamis",
"I.",
""
]
] | TITLE: BLOGRANK: Ranking Weblogs Based On Connectivity And Similarity Features
ABSTRACT: A large part of the hidden web resides in weblog servers. New content is
produced on a daily basis and the work of traditional search engines turns out to
be insufficient due to the nature of weblogs. This work summarizes the
structure of the blogosphere and highlights the special features of weblogs. In
this paper we present a method for ranking weblogs based on the link graph and
on several similarity characteristics between weblogs. First we create an
enhanced graph of connected weblogs and add new types of edges and weights
utilising many weblog features. Then, we assign a ranking to each weblog using
our algorithm, BlogRank, which is a modified version of PageRank. For the
validation of our method we run experiments on a weblog dataset, which we
process and adapt to our search engine. (http://spiderwave.aueb.gr/Blogwave).
The results suggest that the use of the enhanced graph and the BlogRank
algorithm is preferred by the users.
| new_dataset | 0.766731 |
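BlogRank is described above as a modified PageRank over an enhanced, weighted weblog graph. The sketch below is a generic weighted power iteration, not BlogRank's specific edge types or weighting; the damping factor and toy weight matrix are assumptions:

```python
import numpy as np

def weighted_pagerank(W, damping=0.85, tol=1e-10, max_iter=200):
    """Power iteration for a PageRank-style ranking on a weighted directed
    graph. W[i, j] is the weight of edge i -> j (link counts plus any extra
    similarity-based edges, already combined into one weight)."""
    n = W.shape[0]
    out = W.sum(axis=1, keepdims=True)
    # Dangling nodes (no out-links) spread their weight uniformly.
    P = np.where(out > 0, W / np.where(out > 0, out, 1), 1.0 / n)
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        r_new = damping * P.T @ r + (1 - damping) / n
        if np.abs(r_new - r).sum() < tol:
            break
        r = r_new
    return r

if __name__ == "__main__":
    # 4 weblogs; entry [i, j] mixes link counts and similarity weight i -> j.
    W = np.array([[0.0, 1.0, 0.5, 0.0],
                  [0.2, 0.0, 1.0, 0.0],
                  [0.0, 0.3, 0.0, 1.0],
                  [0.0, 0.0, 0.0, 0.0]])   # weblog 3 has no out-links
    ranks = weighted_pagerank(W)
    print(np.argsort(ranks)[::-1], np.round(ranks, 3))
```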
0801.1647 | Vincenzo Nicosia | V. Nicosia, G. Mangioni, V. Carchiolo and M. Malgeri | Extending the definition of modularity to directed graphs with
overlapping communities | 22 pages, 11 figures | J. Stat. Mech. (2009) P03024 | 10.1088/1742-5468/2009/03/P03024 | null | physics.data-an physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Complex networks topologies present interesting and surprising properties,
such as community structures, which can be exploited to optimize communication,
to find new efficient and context-aware routing algorithms or simply to
understand the dynamics and meaning of relationships among nodes. Complex
networks are gaining more and more importance as a reference model and are a
powerful interpretation tool for many different kinds of natural, biological
and social networks, where directed relationships and contextual belonging of
nodes to many different communities is a matter of fact. This paper starts from
the definition of modularity function, given by M. Newman to evaluate the
goodness of network community decompositions, and extends it to the more
general case of directed graphs with overlapping community structures.
Interesting properties of the proposed extension are discussed, a method for
finding overlapping communities is proposed and results of its application to
benchmark case-studies are reported. We also propose a new dataset which could
be used as a reference benchmark for overlapping community structures
identification.
| [
{
"version": "v1",
"created": "Thu, 10 Jan 2008 18:04:35 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Jan 2008 16:05:02 GMT"
},
{
"version": "v3",
"created": "Tue, 29 Jan 2008 17:57:26 GMT"
},
{
"version": "v4",
"created": "Tue, 24 Mar 2009 18:43:28 GMT"
}
] | 2009-03-24T00:00:00 | [
[
"Nicosia",
"V.",
""
],
[
"Mangioni",
"G.",
""
],
[
"Carchiolo",
"V.",
""
],
[
"Malgeri",
"M.",
""
]
] | TITLE: Extending the definition of modularity to directed graphs with
overlapping communities
ABSTRACT: Complex networks topologies present interesting and surprising properties,
such as community structures, which can be exploited to optimize communication,
to find new efficient and context-aware routing algorithms or simply to
understand the dynamics and meaning of relationships among nodes. Complex
networks are gaining more and more importance as a reference model and are a
powerful interpretation tool for many different kinds of natural, biological
and social networks, where directed relationships and contextual belonging of
nodes to many different communities is a matter of fact. This paper starts from
the definition of modularity function, given by M. Newman to evaluate the
goodness of network community decompositions, and extends it to the more
general case of directed graphs with overlapping community structures.
Interesting properties of the proposed extension are discussed, a method for
finding overlapping communities is proposed and results of its application to
benchmark case-studies are reported. We also propose a new dataset which could
be used as a reference benchmark for overlapping community structures
identification.
| new_dataset | 0.95995 |
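The starting point of the record above is Newman's modularity extended to directed graphs. The sketch below computes the crisp directed (Leicht-Newman) form only; the paper's overlapping extension replaces the community indicator with belonging coefficients, which is not reproduced here, and the toy graph is invented:

```python
import numpy as np

def directed_modularity(A, communities):
    """Directed modularity
        Q = (1/m) * sum_ij [A_ij - k_i_out * k_j_in / m] * delta(c_i, c_j),
    where A[i, j] is the weight of edge i -> j and communities[i] is the
    (single, non-overlapping) community of node i."""
    A = np.asarray(A, float)
    m = A.sum()
    k_out, k_in = A.sum(axis=1), A.sum(axis=0)
    c = np.asarray(communities)
    same = (c[:, None] == c[None, :])
    return float(((A - np.outer(k_out, k_in) / m) * same).sum() / m)

if __name__ == "__main__":
    # Two directed 3-cycles joined by a single edge: a clear two-community split.
    A = np.zeros((6, 6))
    for i, j in [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3), (2, 3)]:
        A[i, j] = 1
    print(round(directed_modularity(A, [0, 0, 0, 1, 1, 1]), 3))
```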
0903.3228 | Alberto Accomazzi | Michael J. Kurtz, Alberto Accomazzi, Stephen S. Murray | The Smithsonian/NASA Astrophysics Data System (ADS) Decennial Report | 6 pages, whitepaper submitted to the National Research Council
Astronomy and Astrophysics Decadal Survey | null | null | null | astro-ph.IM cs.DL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Eight years after the ADS first appeared the last decadal survey wrote:
"NASA's initiative for the Astrophysics Data System has vastly increased the
accessibility of the scientific literature for astronomers. NASA deserves
credit for this valuable initiative and is urged to continue it." Here we
summarize some of the changes concerning the ADS which have occurred in the
past ten years, and we describe the current status of the ADS. We then point
out two areas where the ADS is building an improved capability which could
benefit from a policy statement of support in the ASTRO2010 report. These are:
The Semantic Interlinking of Astronomy Observations and Datasets and The
Indexing of the Full Text of Astronomy Research Publications.
| [
{
"version": "v1",
"created": "Wed, 18 Mar 2009 19:36:57 GMT"
}
] | 2009-03-19T00:00:00 | [
[
"Kurtz",
"Michael J.",
""
],
[
"Accomazzi",
"Alberto",
""
],
[
"Murray",
"Stephen S.",
""
]
] | TITLE: The Smithsonian/NASA Astrophysics Data System (ADS) Decennial Report
ABSTRACT: Eight years after the ADS first appeared the last decadal survey wrote:
"NASA's initiative for the Astrophysics Data System has vastly increased the
accessibility of the scientific literature for astronomers. NASA deserves
credit for this valuable initiative and is urged to continue it." Here we
summarize some of the changes concerning the ADS which have occurred in the
past ten years, and we describe the current status of the ADS. We then point
out two areas where the ADS is building an improved capability which could
benefit from a policy statement of support in the ASTRO2010 report. These are:
The Semantic Interlinking of Astronomy Observations and Datasets and The
Indexing of the Full Text of Astronomy Research Publications.
| no_new_dataset | 0.943556 |
0807.0023 | Marko A. Rodriguez | Marko A. Rodriguez, Johan Bollen, Herbert Van de Sompel | Automatic Metadata Generation using Associative Networks | null | ACM Transactions on Information Systems, volume 27, number 2,
pages 1-20, ISSN: 1046-8188, ACM Press, February 2009 | 10.1145/1462198.1462199 | LA-UR-06-3445 | cs.IR cs.DL | http://creativecommons.org/licenses/publicdomain/ | In spite of its tremendous value, metadata is generally sparse and
incomplete, thereby hampering the effectiveness of digital information
services. Many of the existing mechanisms for the automated creation of
metadata rely primarily on content analysis which can be costly and
inefficient. The automatic metadata generation system proposed in this article
leverages resource relationships generated from existing metadata as a medium
for propagation from metadata-rich to metadata-poor resources. Because of its
independence from content analysis, it can be applied to a wide variety of
resource media types and is shown to be computationally inexpensive. The
proposed method operates through two distinct phases. Occurrence and
co-occurrence algorithms first generate an associative network of repository
resources leveraging existing repository metadata. Second, using the
associative network as a substrate, metadata associated with metadata-rich
resources is propagated to metadata-poor resources by means of a discrete-form
spreading activation algorithm. This article discusses the general framework
for building associative networks, an algorithm for disseminating metadata
through such networks, and the results of an experiment and validation of the
proposed method using a standard bibliographic dataset.
| [
{
"version": "v1",
"created": "Mon, 30 Jun 2008 21:23:28 GMT"
},
{
"version": "v2",
"created": "Sat, 7 Mar 2009 01:20:48 GMT"
}
] | 2009-03-07T00:00:00 | [
[
"Rodriguez",
"Marko A.",
""
],
[
"Bollen",
"Johan",
""
],
[
"Van de Sompel",
"Herbert",
""
]
] | TITLE: Automatic Metadata Generation using Associative Networks
ABSTRACT: In spite of its tremendous value, metadata is generally sparse and
incomplete, thereby hampering the effectiveness of digital information
services. Many of the existing mechanisms for the automated creation of
metadata rely primarily on content analysis which can be costly and
inefficient. The automatic metadata generation system proposed in this article
leverages resource relationships generated from existing metadata as a medium
for propagation from metadata-rich to metadata-poor resources. Because of its
independence from content analysis, it can be applied to a wide variety of
resource media types and is shown to be computationally inexpensive. The
proposed method operates through two distinct phases. Occurrence and
co-occurrence algorithms first generate an associative network of repository
resources leveraging existing repository metadata. Second, using the
associative network as a substrate, metadata associated with metadata-rich
resources is propagated to metadata-poor resources by means of a discrete-form
spreading activation algorithm. This article discusses the general framework
for building associative networks, an algorithm for disseminating metadata
through such networks, and the results of an experiment and validation of the
proposed method using a standard bibliographic dataset.
| no_new_dataset | 0.953966 |
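The record above (arXiv:0807.0023) describes propagating metadata from metadata-rich to metadata-poor resources by discrete-form spreading activation over an associative network. A minimal Python sketch of that general idea is given below; the toy network, the decay factor, the activation threshold, and the term vectors are assumptions made for the example and are not taken from the article.

from collections import defaultdict

# Associative network: resource -> {neighbour: association weight in [0, 1]}.
# In the proposed system this network is derived from occurrence and
# co-occurrence statistics over existing repository metadata; it is
# hard-coded here only to keep the sketch self-contained.
network = {
    "doc_rich_1": {"doc_poor_1": 0.8, "doc_poor_2": 0.3},
    "doc_rich_2": {"doc_poor_1": 0.5},
    "doc_poor_1": {"doc_rich_1": 0.8, "doc_rich_2": 0.5},
    "doc_poor_2": {"doc_rich_1": 0.3},
}

# Existing metadata: resource -> {term: activation}. Metadata-poor
# resources start with empty term vectors.
metadata = {
    "doc_rich_1": {"information retrieval": 1.0, "metadata": 1.0},
    "doc_rich_2": {"digital libraries": 1.0},
    "doc_poor_1": {},
    "doc_poor_2": {},
}

def spread(network, metadata, steps=2, decay=0.5, threshold=0.2):
    # Discrete-step propagation: at each step every node pushes its term
    # activations to its neighbours, attenuated by edge weight and decay.
    state = {node: dict(terms) for node, terms in metadata.items()}
    for _ in range(steps):
        incoming = defaultdict(lambda: defaultdict(float))
        for node, terms in state.items():
            for neighbour, weight in network.get(node, {}).items():
                for term, activation in terms.items():
                    incoming[neighbour][term] += activation * weight * decay
        for node, terms in incoming.items():
            for term, activation in terms.items():
                state[node][term] = max(state[node].get(term, 0.0), activation)
    # Keep only sufficiently activated terms as suggested metadata.
    return {node: {t: a for t, a in terms.items() if a >= threshold}
            for node, terms in state.items()}

for node, terms in spread(network, metadata).items():
    print(node, sorted(terms.items(), key=lambda kv: -kv[1]))

Running the sketch prints ranked candidate terms for the two metadata-poor documents, illustrating how existing metadata can seed suggestions for resources that have none.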
0903.0625 | Edith Cohen | Edith Cohen, Haim Kaplan | Leveraging Discarded Samples for Tighter Estimation of Multiple-Set
Aggregates | 16 pages | null | null | null | cs.DB cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many datasets such as market basket data, text or hypertext documents, and
sensor observations recorded in different locations or time periods, are
modeled as a collection of sets over a ground set of keys. We are interested in
basic aggregates such as the weight or selectivity of keys that satisfy some
selection predicate defined over keys' attributes and membership in particular
sets. This general formulation includes basic aggregates such as the Jaccard
coefficient, Hamming distance, and association rules.
On massive data sets, exact computation can be inefficient or infeasible.
Sketches based on coordinated random samples are classic summaries that support
approximate query processing.
Queries are resolved by generating a sketch (sample) of the union of sets
used in the predicate from the sketches of these sets and then applying an
estimator to this union-sketch.
We derive novel tighter (unbiased) estimators that leverage sampled keys that
are present in the union of applicable sketches but excluded from the union
sketch. We establish analytically that our estimators dominate estimators
applied to the union-sketch for {\em all queries and data sets}. Empirical
evaluation on synthetic and real data reveals that on typical applications we
can expect a 25% to 4-fold reduction in estimation error.
| [
{
"version": "v1",
"created": "Tue, 3 Mar 2009 21:21:02 GMT"
}
] | 2009-03-05T00:00:00 | [
[
"Cohen",
"Edith",
""
],
[
"Kaplan",
"Haim",
""
]
] | TITLE: Leveraging Discarded Samples for Tighter Estimation of Multiple-Set
Aggregates
ABSTRACT: Many datasets such as market basket data, text or hypertext documents, and
sensor observations recorded in different locations or time periods, are
modeled as a collection of sets over a ground set of keys. We are interested in
basic aggregates such as the weight or selectivity of keys that satisfy some
selection predicate defined over keys' attributes and membership in particular
sets. This general formulation includes basic aggregates such as the Jaccard
coefficient, Hamming distance, and association rules.
On massive data sets, exact computation can be inefficient or infeasible.
Sketches based on coordinated random samples are classic summaries that support
approximate query processing.
Queries are resolved by generating a sketch (sample) of the union of sets
used in the predicate from the sketches of these sets and then applying an
estimator to this union-sketch.
We derive novel tighter (unbiased) estimators that leverage sampled keys that
are present in the union of applicable sketches but excluded from the union
sketch. We establish analytically that our estimators dominate estimators
applied to the union-sketch for {\em all queries and data sets}. Empirical
evaluation on synthetic and real data reveals that on typical applications we
can expect a 25% to 4-fold reduction in estimation error.
| no_new_dataset | 0.941547 |
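The record above (arXiv:0903.0625) works with sketches built from coordinated random samples and with estimators applied to the sketch of a union of sets. The Python sketch below is only a generic illustration of that setting, assuming coordinated bottom-k sampling with a shared hash-based rank and a plain union-sketch Jaccard estimate; it does not reproduce the paper's tighter estimators, which additionally exploit sampled keys outside the union sketch.

import hashlib

def rank(key):
    # Shared pseudo-random rank in [0, 1); coordination across sets comes from
    # hashing the key itself rather than drawing fresh randomness per set.
    digest = hashlib.sha1(key.encode("utf-8")).hexdigest()
    return int(digest[:15], 16) / 16 ** 15

def bottom_k(keys, k=64):
    # Coordinated bottom-k sketch: keep the k keys with the smallest shared rank.
    return set(sorted(keys, key=rank)[:k])

def jaccard_from_sketches(sketch_a, sketch_b, k=64):
    # Build the bottom-k sketch of the union from the two per-set sketches and
    # estimate |A intersect B| / |A union B| by the fraction of union-sketch
    # keys that are present in both per-set sketches.
    union_sketch = bottom_k(sketch_a | sketch_b, k)
    if not union_sketch:
        return 0.0
    in_both = sum(1 for key in union_sketch if key in sketch_a and key in sketch_b)
    return in_both / len(union_sketch)

a = {"key%d" % i for i in range(0, 800)}
b = {"key%d" % i for i in range(400, 1200)}
sketch_a, sketch_b = bottom_k(a), bottom_k(b)
print("estimated Jaccard:", round(jaccard_from_sketches(sketch_a, sketch_b), 3))
print("true Jaccard:", round(len(a & b) / len(a | b), 3))

Because the ranks are coordinated, any key of the union sketch that belongs to both sets necessarily appears in both per-set sketches, which is what the membership test above relies on.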