Dataset columns: source_text (string, 27–368 characters), label (int64, 0 or 1), target_text (string, 1–5.38k characters).
They showed that it was useful to abstract away from the details of the formalism and examine the nature of the derivation process as reflected by properties of the derivation trees, finding that several of the formalisms considered can be seen as closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
M must enter a universal state and check that each of the k constant substrings is in the appropriate place (as determined by the contents of the first 2k work tapes) on the input tape.
These clusters are computed using an SVD variant without relying on transitional structure.
0
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.
0
To make the projection practical, we rely on the twelve universal part-of-speech tags of Petrov et al. (2011).
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
Further, the special hash 0 suffices to flag empty buckets.
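To make this trick concrete, here is a minimal sketch (ours, not KenLM's actual implementation) of an open-addressing table in which a stored hash of 0 marks an empty bucket, so no separate occupancy flags are needed; real keys are assumed never to hash to 0, which the sketch enforces by remapping:

    # Linear-probing table using hash 0 as the empty-bucket sentinel.
    # Illustrative only: a real implementation (e.g., KenLM's PROBING
    # model) stores 64-bit hashes in C++ and compares hashes, not keys.
    EMPTY = 0

    class ProbingTable:
        def __init__(self, size):
            self.hashes = [EMPTY] * size  # hash 0 doubles as "unused"
            self.values = [None] * size

        def _hash(self, key):
            h = hash(key) & 0xFFFFFFFFFFFFFFFF
            return h if h != EMPTY else 1  # reserve 0 for empty buckets

        def insert(self, key, value):
            # Assumes the table never fills up completely.
            h = self._hash(key)
            i = h % len(self.hashes)
            while self.hashes[i] != EMPTY:
                i = (i + 1) % len(self.hashes)
            self.hashes[i], self.values[i] = h, value

        def lookup(self, key):
            h = self._hash(key)
            i = h % len(self.hashes)
            while self.hashes[i] != EMPTY:
                if self.hashes[i] == h:
                    return self.values[i]
                i = (i + 1) % len(self.hashes)
            return None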
They have made use of local and global features to deal with instances of the same token in a document.
0
This feature imposes constraints that are based on the probability of each name class during training (Table 1 lists the features based on the token string).
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
Compared with SRILM, IRSTLM adds several features: lower memory consumption, a binary file format with memory mapping, caching to increase speed, and quantization.
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
All the sentences have been analyzed by our chunker and NE tagger.
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations for entities already mentioned.
0
This process is repeated 5 times by rotating the data appropriately.
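As a minimal sketch of this rotation scheme (our illustration of standard 5-fold cross-validation; the function name is ours):

    # Rotate the data so that each fifth serves once as the test set.
    def five_fold_splits(data, k=5):
        fold = len(data) // k
        for i in range(k):
            test = data[i * fold:(i + 1) * fold]
            train = data[:i * fold] + data[(i + 1) * fold:]
            yield train, test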
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
Nonetheless, parse quality is much lower in the joint model because a lattice is effectively a long sentence.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
This makes memory usage comparable to our PROBING model.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement.
0
TIS systems in general need to do more than simply compute the.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
If the log backoff of w_n^f is also zero (it may not be in filtered models), then w_f should be omitted from the state.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
In order to minimize Z_t, at each iteration the final algorithm should choose the weak hypothesis (i.e., a feature x_t) whose values of W_+ and W_− minimize Eq. (4).
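To make the selection step concrete, here is a small sketch under the standard AdaBoost analysis for abstaining weak hypotheses, where Z = W_0 + 2·sqrt(W_+ · W_−) and the chosen confidence is alpha = 0.5 · ln(W_+ / W_−); the data layout and the smoothing constant eps are our assumptions:

    import math

    # Pick the feature whose W_+ (weight of examples it labels correctly)
    # and W_- (weight it labels incorrectly) minimize Z; W_0 is the weight
    # of examples on which the feature abstains. Labels are in {+1, -1}
    # and each feature is assumed to predict +1 when present.
    def choose_feature(examples, candidate_features, eps=1e-4):
        best = None
        for f in candidate_features:
            w_plus = w_minus = w_zero = 0.0
            for feats, y, weight in examples:
                if f not in feats:
                    w_zero += weight        # feature abstains
                elif y == +1:
                    w_plus += weight        # feature would be correct
                else:
                    w_minus += weight       # feature would be wrong
            z = w_zero + 2.0 * math.sqrt((w_plus + eps) * (w_minus + eps))
            if best is None or z < best[0]:
                alpha = 0.5 * math.log((w_plus + eps) / (w_minus + eps))
                best = (z, f, alpha)
        return best  # (Z_t, chosen feature x_t, confidence alpha_t)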
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
0
Morphological segmentation decisions in our model are delegated to a lexeme-based PCFG, and we show that using a simple treebank grammar, a data-driven lexicon, and a linguistically motivated unknown-tokens handling technique, our model outperforms (Tsarfaty, 2006) and (Cohen and Smith, 2007) on the joint task and achieves state-of-the-art results on a par with current respective standalone models.
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations for entities already mentioned.
1
We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations for entities already mentioned.
The instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
0
The features are weighted within a logistic model to give an overall weight that is applied to the phrase pair’s frequency prior to making MAP-smoothed relative-frequency estimates (different weights are learned for each conditioning direction).
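A hedged sketch of this weighting scheme (the feature names, the prior strength tau, and the exact smoothing form are our assumptions, not verified details of the paper):

    import math

    # Logistic instance weight for an OUT-domain phrase pair.
    def instance_weight(features, lam):
        # features/lam: dicts mapping feature names to values/weights.
        s = sum(lam[k] * x for k, x in features.items())
        return 1.0 / (1.0 + math.exp(-s))

    # MAP-smoothed relative frequency: weighted counts interpolated
    # with a prior probability (e.g., an IN-domain estimate) of
    # strength tau.
    def map_smoothed_prob(w_pair_count, w_source_count, prior, tau=1.0):
        return (w_pair_count + tau * prior) / (w_source_count + tau)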
The approach has been successfully tested on the 8 000-word Verbmobil task.
0
Combining multiple highly-accurate independent parsers yields promising results.
0
The precision and recall of similarity switching and constituent voting are both significantly better than the best individual parser, and constituent voting is significantly better than parser switching in precision. Constituent voting gives the highest accuracy for parsing the Penn Treebank reported to date.
BABAR achieved successful results in both the terrorism and natural disaster domains, and its contextual-role knowledge proved particularly helpful for resolving pronouns.
0
For example, the company may refer to Company X in one paragraph and Company Y in another.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
Each trie node is individually allocated and full 64-bit pointers are used to find them, wasting memory.
We chose one of four labels for each example: location, person, organization, or noise, where the noise category was used for items that were outside the three categories.
0
{x_{i1}, …, x_{im_i}} associated with the ith example.
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
Time for Moses itself to load, including loading the language model and phrase table, is included.
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
0
The precision and recall of similarity switching and constituent voting are both significantly better than the best individual parser, and constituent voting is significantly better than parser switching in precision. Constituent voting gives the highest accuracy for parsing the Penn Treebank reported to date.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
As a (crude) approximation, we normalize the extraction patterns with respect to active and passive voice and label those extractions as agents or patients.
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
It should be clear from the outset that the particle b (“in”) in ‘bcl’ may then attach higher than the bare noun cl (“shadow”).
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
0
While we used the standard metrics of the community, the way we presented translations and prompted for assessment differed from other evaluation campaigns.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
Linguistic intuitions like those in the previous section inform language-specific annotation choices.
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
This feature has a linguistic justification.
All the texts were annotated by two people.
0
That is, we can use the discourse parser on PCC texts, emulating for instance a “co-reference oracle” that adds the information from our co-reference annotations.
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.
0
We used 22 features for the logistic weighting model, divided into two groups: one intended to reflect the degree to which a phrase pair belongs to general language, and one intended to capture similarity to the IN domain.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
In (b) is a plausible segmentation for this sentence; in (c) is an implausible segmentation.
They focused on phrases which connect two Named Entities, and proceeded in two stages.
0
3.1 Corpora.
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
0
For the translation model Pr(f_1^J | e_1^I), we assume that each source word is aligned to exactly one target word.
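Under this assumption one standard factorization introduces a hidden alignment a_1^J mapping each source position j to a target position a_j; an HMM-style decomposition (our illustration, consistent with the notation above but not necessarily the paper's exact model) is:

    \Pr(f_1^J \mid e_1^I)
      = \sum_{a_1^J} \prod_{j=1}^{J} \Pr(a_j \mid a_{j-1}, I)\,\Pr(f_j \mid e_{a_j})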
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages.
0
In the label propagation stage, we propagate the automatic English tags to the aligned Italian trigram types, followed by further propagation solely among the Italian vertices. The Italian vertices are connected to an automatically labeled English vertex.
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
Instead, we extend the variation n-gram method of Dickinson (2005) to compare annotation error rates in the WSJ and ATB.
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
To investigate the influence of these factors, we analyze Modern Standard Arabic (henceforth MSA, or simply “Arabic”) because of the unusual opportunity it presents for comparison to English parsing results.
This paper conducted research in the area of automatic paraphrase discovery.
0
We will describe the evaluation of such clusters in the next subsection.
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers.
0
Table 2 shows the features used in the current version of the parser.
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations for entities already mentioned.
0
A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.
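A minimal sketch of such a dynamic-programming (Viterbi-style) search; the probability tables init, trans, and emit are hypothetical placeholders (log probabilities would be used in practice to avoid underflow):

    # Select the word-class sequence with the highest probability.
    # init[c]: P(c starts), trans[p][c]: P(c | p), emit[c][w]: P(w | c).
    def best_class_sequence(words, classes, init, trans, emit):
        score = {c: init[c] * emit[c].get(words[0], 1e-9) for c in classes}
        backptrs = []
        for w in words[1:]:
            prev, score, ptr = score, {}, {}
            for c in classes:
                b = max(classes, key=lambda p: prev[p] * trans[p][c])
                score[c] = prev[b] * trans[b][c] * emit[c].get(w, 1e-9)
                ptr[c] = b
            backptrs.append(ptr)
        last = max(score, key=score.get)
        seq = [last]
        for ptr in reversed(backptrs):
            seq.append(ptr[seq[-1]])
        return list(reversed(seq))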
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
These systems rely on a training corpus that has been manually annotated with coreference links.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
The 95% confidence intervals for type-level errors are (5580, 9440) for the ATB and (1400, 4610) for the WSJ.
We chose one of four labels for each example: location, person, organization, or noise, where the noise category was used for items that were outside the three categories.
0
The two new terms force the two classifiers to agree, as much as possible, on the unlabeled examples.
There is no global pruning.
0
Corpus statistics (German / English). Training: Sentences 58 073; Words 519 523 / 549 921; Words* 418 979 / 453 632; Vocabulary Size 7 939 / 4 648; Singletons 3 454 / 1 699. Test-147: Sentences 147; Words 1 968 / 2 173; Perplexity – / 26.5. Table 4: Multi-reference word error rate (mWER) and subjective sentence error rate (SSER) for three different search procedures.
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
This paper, however, provides a comprehensive overview of the data collection effort and its current state.
There is no global pruning.
0
In the last example, the less restrictive IbmS word reordering leads to a better translation, although the QmS translation is still acceptable.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
Yet, some hanzi are far more probable in women's names than they are in men's names, and there is a similar list of male-oriented hanzi: mixing hanzi from these two lists is generally less likely than would be predicted by the independence model.
They showed that it was useful to abstract away from the details of the formalism and examine the nature of the derivation process as reflected by properties of the derivation trees, finding that several of the formalisms considered can be seen as closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
A corollary of the result of Section 4.3 is that polynomial time recognition of MCTAG's is possible.
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
We follow the guidelines developed in the TIGER project (Brants et al. 2002) for syntactic annotation of German newspaper text, using the Annotate3 tool for interactive construction of tree structures.
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
0
The restriction can be expressed in terms of the number of uncovered source sentence positions to the left of the rightmost position m in the coverage set.
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
0
These universal POS categories not only facilitate the transfer of POS information from one language to another, but also relieve us from using controversial evaluation metrics, by establishing a direct correspondence between the induced hidden states in the foreign language and the observed English labels.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
Then, token-level HMM emission parameters are drawn conditioned on these assignments such that each word is only allowed probability mass on a single assigned tag.
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech.
0
This may seem surprising, given the experiments reported in section 4, but the explanation is probably that the non-projective dependencies that can be recovered at all are of the simple kind that only requires a single lift, where the encoding of path information is often redundant.
Here we present two algorithms.
0
The first unsupervised algorithm we describe is based on the decision list method from (Yarowsky 95).
They have made use of local and global features to deal with instances of the same token in a document.
0
MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech.
0
However, it can be noted that the results for the least informative encoding, Path, are almost comparable, while the third encoding, Head, gives substantially worse results for both data sets.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
This WFST represents the segmentation of the text into the words AB and CD, word boundaries being marked by arcs mapping between ε and part-of-speech labels.
In this paper, Das and Petrov approached the problem of inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language.
0
As Figure 1 shows, this word has no high-confidence alignment in the Italian-English bitext.
Combining multiple highly-accurate independent parsers yields promising results.
0
None of the parsers produce parses with crossing brackets, so none of them votes for both of the assumed constituents.
These clusters are computed using an SVD variant without relying on transitional structure.
0
Table 2: Statistics (number of tokens, word types, and tags) for the corpora used in the experiments: English, Danish, Dutch, German, Portuguese, Spanish, and Swedish.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
Given a sufficient number of randomly drawn unlabeled examples (i.e., edges), we will induce two completely connected components that together span the entire graph.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
There are clearly eight orthographic words in the example given, but if one were doing syntactic analysis one would probably want to consider I'm to consist of two syntactic words, namely I and am.
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
A straightforward way to find the shortest tour is by trying all possible permutations of the n cities.
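Enumerating all permutations costs O(n!); the classic Held-Karp dynamic program, the kind of DP solution built on here (our sketch, not the paper's exact formulation), reduces this to O(n^2 · 2^n) by memoizing the cheapest way to reach city j having visited a set S of cities:

    from itertools import combinations

    # Held-Karp DP for TSP. dist[i][j] is the travel cost from i to j.
    def held_karp(dist):
        n = len(dist)
        # cost[(S, j)]: cheapest path starting at city 0, visiting
        # exactly the cities in bitmask S (which contains 0 and j),
        # and ending at city j.
        cost = {(1 | (1 << j), j): dist[0][j] for j in range(1, n)}
        for size in range(2, n):
            for subset in combinations(range(1, n), size):
                S = 1
                for j in subset:
                    S |= 1 << j
                for j in subset:
                    prev_S = S ^ (1 << j)
                    cost[(S, j)] = min(cost[(prev_S, k)] + dist[k][j]
                                       for k in subset if k != j)
        full = (1 << n) - 1
        return min(cost[(full, j)] + dist[j][0] for j in range(1, n))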
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
We therefore used the arithmetic mean of each interjudge precision-recall pair as a single measure of interjudge similarity.
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
0
A straightforward way to find the shortest tour is by trying all possible permutations of the n cities.
The manual evaluation of scoring translations on a graded scale from 1 to 5 seemed to be very hard to perform.
0
Systems that generally do better than others will receive a positive average normalized judgement per sentence.
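One simple way to compute such a score (our reading of the setup, not the campaign's exact script): center each judge's raw scores at zero, then average a system's normalized scores over the sentences it was judged on:

    from collections import defaultdict

    # judgements: list of (judge, system, raw_score) triples.
    def normalized_system_scores(judgements):
        by_judge = defaultdict(list)
        for judge, _, score in judgements:
            by_judge[judge].append(score)
        judge_mean = {j: sum(s) / len(s) for j, s in by_judge.items()}
        per_system = defaultdict(list)
        for judge, system, score in judgements:
            per_system[system].append(score - judge_mean[judge])
        return {sys: sum(v) / len(v) for sys, v in per_system.items()}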
In this paper, Das and Petrov approached the problem of inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language.
0
Table 2 shows our complete set of results.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
Throughout this paper we shall give Chinese examples in traditional orthography, followed.
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language.
0
It would have therefore also been possible to use the integer programming (IP) based approach of Ravi and Knight (2009) instead of the feature-HMM for POS induction on the foreign side.
The use of global features has shown excellent results on the MUC-6 and MUC-7 test data.
0
The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
It is known that CFG's, HG's, and TAG's can be recognized in polynomial time since polynomial time algorithms exist for each of these formalisms.
The features were weighted within a logistic model to give an overall weight that was applied to the phrase pair's frequency prior to making MAP-smoothed relative-frequency estimates, which were combined linearly with relative-frequency estimates from an in-domain phrase table.
0
The dev corpus was taken from the NIST05 evaluation set, augmented with some randomly-selected material reserved from the training set.
The AdaBoost algorithm was developed for supervised learning.
0
(7), such as the likelihood function used in maximum-entropy problems and other generalized additive models (Lafferty 99).
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
The question is how to normalize the probabilities in such a way that smaller groupings have a better shot at winning.
Most IE researchers have been creating paraphrase knowledge by hand for specific tasks.
0
Step 3.
Their results show that their high-performance NER uses less training data than other systems.
0
Local features are features that are based on neighboring tokens, as well as the token itself.
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.
0
Given the bilingual graph described in the previous section, we can use label propagation to project the English POS labels to the foreign language.
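A sketch of a generic label-propagation update of this kind (not the authors' exact objective): labeled English vertices stay clamped to their tag distributions, and each unlabeled foreign vertex repeatedly takes the edge-weighted average of its neighbors' distributions:

    # vertices: iterable of vertex ids; edges[v]: list of (neighbor, weight);
    # seed[v]: fixed tag distribution for labeled (English) vertices.
    def propagate(vertices, edges, seed, n_tags, iters=10):
        q = {v: seed.get(v, [1.0 / n_tags] * n_tags) for v in vertices}
        for _ in range(iters):
            new_q = {}
            for v in vertices:
                if v in seed:            # labeled vertices stay clamped
                    new_q[v] = seed[v]
                    continue
                total, z = [0.0] * n_tags, 0.0
                for u, w in edges[v]:
                    z += w
                    for t in range(n_tags):
                        total[t] += w * q[u][t]
                new_q[v] = [x / z for x in total] if z > 0 else q[v]
            q = new_q
        return q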
In this paper, Das and Petrov approached the problem of inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language.
0
For all languages, the vocabulary sizes increase by several thousand words.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
to represent the ith word type emitted by the HMM:

P(T_i, t^{(i)} | T_{-i}, W, t^{(-i)}, w, α, β) = P(T_i | W, T_{-i}, β) · P(t^{(i)} | T_i, t^{(-i)}, w, α)

P(t^{(i)} | T_i, t^{(-i)}, w, α) ∝ ∏_{(t_b, t_a)} P(w | T_i, t^{(-i)}, w^{(-i)}, α) · P(T_i | t_b, t^{(-i)}, α) · P(t_a | T_i, t^{(-i)}, α)

All terms are Dirichlet distributions whose parameters can be computed analytically from counts in t^{(-i)}, where T_{-i} denotes all type-level tag assignments except T_i, and t^{(-i)} and w^{(-i)} denote all token-level tags and words except those of word type i (Johnson, 2007).
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
In addition to the optimizations specific to each data structure described in Section 2, we implement several general optimizations for language modeling.
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
The difference in performance between pronouns and definite noun phrases surprised us.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
Wu and Fung introduce an evaluation method they call nk-blind.
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back.
0
Systran submitted their commercial rule-based system that was not tuned to the Europarl corpus.
They have made use of local and global features to deal with instances of the same token in a document.
0
Lexicon Feature of Previous and Next Token: The string of the previous token and the next token is used together with the initCaps information of the current token. If the current token has initCaps, then a feature (initCaps, token string) is set to 1.
They focused on phrases which connect two Named Entities, and proceeded in two stages.
0
Because of their size, the examples (Figures 2 to 4) appear at the end of the paper.
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures.
0
The rest of the paper is structured as follows.
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
0
The sequence of states needed to carry out the word reordering example in Fig.
This paper presents pseudo-projective dependency parsing.
0
At each point during the derivation, the prediction is based on six word tokens: the two topmost tokens on the stack and the next four input tokens.
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be.
0
An easy way to achieve this is to put the domain-specific LMs and TMs into the top-level log-linear model and learn optimal weights with MERT (Och, 2003).
Replacing this with a ranked evaluation seems to be more suitable.
0
In this shared task, we were also confronted with this problem, and since we had no funding for paying for human judgements, we asked participants in the evaluation to share the burden.
In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
BABAR uses unsupervised learning to acquire this knowledge from plain text without the need for annotated training data.
They have made use of local and global features to deal with instances of the same token in a document.
0
As can be seen in Table 4, our training data is much smaller than that used by MENE and IdentiFinder.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
2.
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
7 Conclusion and Future Work.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
However, there is again local grammatical information that should favor the split in the case of (1a): both ma3 'horse' and ma3lu4 'road' are nouns, but only ma3 is consistent with the classifier pi1, the classifier for horses. By a similar argument, the preference for not splitting ma3lu4 could be strengthened in (1b) by the observation that the classifier tiao2 is consistent with long or winding objects like ma3lu4 'road' but not with ma3 'horse.'
This paper presents a maximum entropy-based named entity recognizer (NER).
0
If they are found in a list, then a feature for that list will be set to 1.
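Concretely, this amounts to one binary indicator per lexicon; a minimal sketch with hypothetical list names:

    # One binary feature per lexicon the token appears in.
    LEXICONS = {
        "person-first-names": {"john", "mary"},
        "company-suffixes": {"inc", "corp", "ltd"},
    }

    def lexicon_features(token):
        t = token.lower()
        return {f"in:{name}": 1 for name, lst in LEXICONS.items() if t in lst}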
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
Each x_{ij} is a member of X, where X is a set of possible features.
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure.
0
This has solutions where p_I(s|t) is derived from the IN corpus using relative-frequency estimates, and p_O(s|t) is an instance-weighted model derived from the OUT corpus.