source_text (stringlengths 27–368) | label (int64 0–1) | target_text (stringlengths 1–5.38k) |
---|---|---|
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language. | 0 | We formulate the update as follows: where ∀ui ∈ Vf \ Vfl, γi(y) and κi are defined as: We ran this procedure for 10 iterations. |
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers. | 0 | This drastic tree manipulation is not appropriate for situations in which we want to assign particular structures to sentences. |
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems. | 0 | The other half was replaced by other participants, so we ended up with roughly the same number. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | In this way, the method reported on here will necessarily be similar to a greedy method, though of course not identical. |
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result. | 0 | For our experiments we also report the mean of precision and recall, which we denote by (P + R)/2 and F-measure. |
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | In all figures, we present the per-sentence normalized judgements. |
Combining multiple highly-accurate independent parsers yields promising results. | 0 | The first two rows of the table are baselines. |
They found replacing it with a ranked evaluation to be more suitable. | 0 | Typically, judges initially spent about 3 minutes per sentence, but then accelerate with experience. |
This paper talks about Unsupervised Models for Named Entity Classification. | 0 | (Yarowsky 95) describes the use of more sophisticated smoothing methods. |
This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | Several methods exist to handle collisions; we use linear probing because it has less memory overhead when entries are small. |
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models. | 0 | 4.3 Morphological Analysis. |
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | Without using the same test corpus, direct comparison is obviously difficult; fortunately, Chang et al. include a list of about 60 sentence fragments that exemplify various categories of performance for their system. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | As a first step towards modeling transliterated names, we have collected all hanzi occurring more than once in the roughly 750 foreign names in our dictionary, and we estimate the probability of occurrence of each hanzi in a transliteration (p_TN(hanzi_i)) using the maximum likelihood estimate. |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | Presence of the determiner J Al. 2. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | We measured recall (Rec), precision (Pr), and the F-measure (F) with recall and precision equally weighted. |
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming. | 0 | Source sentence words are aligned with hypothesized target sentence words, where the choice of a new source word, which has not been aligned with a target word yet, is restricted1. |
The resulting model is compact, efficiently learnable and linguistically expressive. | 0 | 8 1 2. |
Explanations for this phenomenon are the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation. | 0 | 96 75. |
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform. | 0 | Typically, judges initially spent about 3 minutes per sentence, but then accelerate with experience. |
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming. | 0 | The search starts in hypothesis (∅; 0) and ends in the hypotheses ({1, …, J}; j), with j ∈ {1, …, J}. |
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | We settled on contrastive evaluations of 5 system outputs for a single test sentence. |
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results. | 0 | The implication of this ambiguity for a parser is that the yield of syntactic trees no longer consists of space-delimited tokens, and the expected number of leaves in the syntactic analysis is not known in advance. |
Most IE researchers have been creating paraphrase knowledge by hand and for specific tasks. | 0 | So, we set a threshold that at least two examples are required to build a link. |
Most IE researchers have been creating paraphrase knowledge by hand and for specific tasks. | 0 | Step 2. |
They found replacing it with a ranked evaluation to be more suitable. | 0 | The domain is general politics, economics and science. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | Interpolation search is therefore a form of binary search with better estimates informed by the uniform key distribution. |
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process. | 0 | While our method also enforces a single tag per word constraint, it leverages the transition distribution encoded in an HMM, thereby benefiting from a richer representation of context. |
It is probably the first analysis of Arabic parsing of this kind. | 0 | In the initial release of the ATB, inter-annotator agreement was inferior to other LDC treebanks (Maamouri et al., 2008). |
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier. | 0 | But we will show that the use of unlabeled data can drastically reduce the need for supervision. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | IRSTLM’s quantized variant is the inspiration for our quantized variant. |
Most IE researchers have been creating paraphrase knowledge by hand and for specific tasks. | 0 | Evaluation within a set: The evaluation of paraphrases within a set of phrases which share a keyword is illustrated in Figure 4. |
We chose one of four labels for each example: location, person, organization, or noise, where the noise category was used for items that were outside the three categories. | 0 | Note, however, that there might be situations in which Zco in fact increases. |
It is probably the first analysis of Arabic parsing of this kind. | 0 | The resulting structural differences between tree- banks can account for relative differences in parsing performance. |
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Ma¨rkische Allgemeine Zeitung, a German regional daily. | 0 | Figure 2 shows a screenshot (which is of somewhat limited value, though, as color plays a major role in signalling the different statuses of the information). |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | Techniques for automatic vocalization have been studied (Zitouni et al., 2006; Habash and Rambow, 2007). |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | For each pair of judges, consider one judge as the standard. |
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | 52 15. |
Their results show that their high-performance NER uses less training data than other systems. | 0 | This group consists of (3 × total number of possible zones) features. |
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result. | 0 | These three parsers have given the best reported parsing results on the Penn Treebank Wall Street Journal corpus (Marcus et al., 1993). |
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model. | 0 | 3 68.9 50. |
Finally, several coreference systems have successfully incorporated anaphoricity determination modules. | 0 | We tagged each noun with the top-level semantic classes assigned to it in WordNet. |
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University. | 0 | 11 www.ling.unipotsdam.de/sfb/projekt a3.php 12 This step was carried out in the course of the diploma thesis work of David Reitter (2003), which deserves special mention here. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | Performance improvements transfer to the Moses (Koehn et al., 2007), cdec (Dyer et al., 2010), and Joshua (Li et al., 2009) translation systems where our code has been integrated. |
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse. | 0 | The CFLex and CFNet knowledge sources provide positive evidence that a candidate NP and anaphor might be coreferent. |
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech. | 0 | In the following, we use the notation wi →r wj to mean that (wi, r, wj) ∈ A; we also use wi → wj to denote an arc with unspecified label, and wi →∗ wj for the reflexive and transitive closure of the (unlabeled) arc relation. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | Tokens tagged as PUNC are not discarded unless they consist entirely of punctuation. |
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM. | 0 | Given a key k, it estimates the position pivot ≈ N · (k − A[0]) / (A[N−1] − A[0]), where N is the array length. If the estimate is exact (A[pivot] = k), then the algorithm terminates successfully. |
Here we show how non-projective dependency parsing can be achieved by combining a data driven projective parser with special graph transformation techniques. | 0 | In the experiments below, we employ a data-driven deterministic dependency parser producing labeled projective dependency graphs,3 previously tested on Swedish (Nivre et al., 2004) and English (Nivre and Scholz, 2004). |
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 0 | Their default variant implements a forward trie, in which words are looked up in their natural left-to-right order. |
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | There may be occasionally a system clearly at the top or at the bottom, but most systems are so close that it is hard to distinguish them. |
This paper talks about Unsupervised Models for Named Entity Classification. | 0 | In the appositive case, the contextual predictor was the head of the modifying appositive (president in the Maury Cooper example); in the second case, the contextual predictor was the preposition together with the noun it modifies (plant_in in the Georgia example). |
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | Maamouri et al. |
Here both parametric and non-parametric models are explored. | 0 | The maximum precision oracle is an upper bound on the possible gain we can achieve by parse hybridization. |
It is annotated with several layers: morphology, syntax, rhetorical structure, connectors, coreference, and information structure. | 0 | Having explained the various layers of annotation in PCC, we now turn to the question of what all this might be good for. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | Consider the following sentences: (a) Jose Maria Martinez, Roberto Lisandy, and Dino Rossy, who were staying at a Tecun Uman hotel, were kidnapped by armed men who took them to an unknown place. |
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers. | 0 | Combining multiple highly-accurate independent parsers yields promising results. |
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold. | 0 | We did have a problem classifying some modified noun phrases where the modified phrase does not represent a qualified or restricted form of the head, like “chairman” and “vice chairman”, as these are both represented by the keyword “chairman”. |
These clusters are computed using an SVD variant without relying on transitional structure. | 0 | Ablation Analysis We evaluate the impact of incorporating various linguistic features into our model in Table 3. |
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound. | 0 | The same argument holds for resolving PP attachment of a prefixed preposition or marking conjunction of elements of any kind. |
Replacing this with a ranked evaluation seems to be more suitable. | 0 | At first glance, we quickly recognize that many systems are scored very similarly, both in terms of manual judgement and BLEU. |
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints. | 0 | There are two key benefits of this model architecture. |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | We use the log-linear tagger of Toutanova et al. |
Their results show that their high-performance NER uses less training data than other systems. | 0 | A list of words occurring more than 10 times in the training data is also collected (commonWords). |
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | However, there are several reasons why this approach will not in general work: 1. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | In the numerator, however, the counts of ni1s are quite irregular, including several zeros (e.g., RAT, none of whose members were seen). |
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data. | 0 | For each sentence, we counted how many n-grams in the system output also occurred in the reference translation. |
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models. | 0 | gao1xing4 'happy' => gao1gao1xing4xing4 |
The PROBING data structure uses linear probing hash tables and is designed for speed. | 0 | We elected to run Moses single-threaded to minimize the impact of RandLM’s cache on memory use. |
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. | 0 | A tree set may be said to have dependencies between paths if some "appropriate" subset can be shown to have dependent paths as defined above. |
This topic has been getting more attention, driven by the needs of various NLP applications. | 0 | They first collect the NE instance pairs and contexts, just like our method. |
Here we show how non-projective dependency parsing can be achieved by combining a data driven projective parser with special graph transformation techniques. | 0 | The Danish Dependency Treebank (DDT) comprises about 100K words of text selected from the Danish PAROLE corpus, with annotation of primary and secondary dependencies (Kromann, 2003). |
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | All systems (except for Systran, which was not tuned to Europarl) did considerably worse on out-of-domain training data. |
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | The first point we need to address is what type of linguistic object a hanzi represents. |
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, and Spanish texts to English and back. | 0 | By taking the ratio of matching n-grams to the total number of n-grams in the system output, we obtain the precision pn for each n-gram order n. These values for n-gram precision are combined into a BLEU score: BLEU = min(1, c/r) · (p1 p2 p3 p4)^(1/4), where min(1, c/r) is a brevity penalty for too-short output, based on the total number of words in the system output c and in the reference r. BLEU is sensitive to tokenization. |
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures. | 0 | However, since we want to preserve as much of the original structure as possible, we are interested in finding a transformation that involves a minimal number of lifts. |
They focused on phrases which connect two Named Entities, and proceed in two stages. | 0 | Here, the term frequency (TF) is the frequency of a word in the bag and the inverse term frequency (ITF) is the inverse of the log of the frequency in the entire corpus. |
The resulting model is compact, efficiently learnable and linguistically expressive. | 0 | With the exception of the Dutch data set, no other processing is performed on the annotated tags. |
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective tree banks is often neglected because of the relative scarcity of problematic constructions. | 0 | While the proportion of sentences containing non-projective dependencies is often 15–25%, the total proportion of non-projective arcs is normally only 1–2%. |
The texts were annotated with the RSTtool. | 0 | Instead, the designs of the various annotation layers and the actual annotation work are results of a series of diploma theses, of students' work in course projects, and to some extent of paid assistantships. |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | Equ. |
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models. | 0 | Throughout this paper we shall give Chinese examples in traditional orthography, followed. |
It is probably the first analysis of Arabic parsing of this kind. | 0 | Lexicalizing several POS tags improves performance. |
BABAR's performance in both domains of terrorism and natural disaster, and the contextual-role knowledge in pronouns have shown successful results. | 0 | Scoping filters a candidate if it is outside the anaphor's scope. |
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP). | 0 | Then, the German infinitive 'besuchen' and the negation particle 'nicht' are translated. |
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance. | 0 | In addition to using the simple features directly, we also trained an SVM classifier with these features to distinguish between IN and OUT phrase pairs. |
They found replacing it with a ranked evaluation to be more suitable. | 0 | The average fluency judgement per judge ranged from 2.33 to 3.67, the average adequacy judgement ranged from 2.56 to 4.13. |
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories. | 0 | (Riloff and Shepherd 97) describe a bootstrapping approach for acquiring nouns in particular categories (such as "vehicle" or "weapon" categories). |
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages. | 0 | Fortunately, performance was stable across various values, and we were able to use the same hyperparameters for all languages. |
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | The minimal dictionary encoding this information is represented by the WFST in Figure 2(a). |
they showed that the average agreement among the human judges is .76, and the average agreement between ST (the system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | Two measures that can be used to compare judgments are: 1. |
It also incorporates the Good-Turing methodin estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | We evaluate the system's performance by comparing its segmentation 'Tudgments" with the judgments of a pool of human segmenters, and the system is shown to perform quite well. |
The second algorithm builds on a boosting algorithm called AdaBoost. | 0 | A spelling rule might be a simple look-up for the string (e.g., a rule that Honduras is a location) or a rule that looks at words within a string (e.g., a rule that any string containing Mr. is a person). |
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account. | 0 | A summary of the corpus used in the experiments is given in Table 3. |
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse. | 0 | BABAR employs information extraction techniques to represent and learn role relationships. |
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | Methods for expanding the dictionary include, of course, morphological rules, rules for segmenting personal names, as well as numeral sequences, expressions for dates, and so forth (Chen and Liu 1992; Wang, Li, and Chang 1992; Chang and Chen 1993; Nie, Jin, and Hannan 1994). |
This paper talks about Unsupervised Models for Named Entity Classification. | 0 | Given parameter estimates, the label for a test example x is defined as We should note that the model in equation 9 is deficient, in that it assigns greater than zero probability to some feature combinations that are impossible. |
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | Systems that generally do worse than others will receive a negative one. |
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods. | 0 | First, we will describe their method and compare it with our method. |
They plan on extending instance-weighting to other standard SMT components and capture the degree of generality of phrase pairs. | 0 | For developers of Statistical Machine Translation (SMT) systems, an additional complication is the heterogeneous nature of SMT components (word-alignment model, language model, translation model, etc.). |
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, and Spanish texts to English and back. | 0 | It was our hope that this competition, which included the manual and automatic evaluation of statistical systems and one rule-based commercial system, will give further insight into the relation between automatic and manual evaluation. |
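
Several rows above quote algorithmic fragments that are easier to follow with a short sketch. The Python sketches below are illustrative only: every class, function, and variable name is invented for the example, and none reproduces the cited systems' actual code (KenLM, for instance, is C++). First, the KenLM fragment's linear-probing collision handling, which keeps per-entry memory overhead low by storing entries in flat arrays:

```python
class LinearProbingTable:
    """Minimal linear-probing hash table; a sketch of the collision
    strategy the KenLM row describes, not KenLM's implementation."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.keys = [None] * capacity    # flat arrays: low per-entry overhead
        self.values = [None] * capacity

    def insert(self, key, value):
        # Assumes the table never fills up; real code tracks the load factor.
        i = hash(key) % self.capacity
        while self.keys[i] is not None and self.keys[i] != key:
            i = (i + 1) % self.capacity  # collision: probe the next slot
        self.keys[i], self.values[i] = key, value

    def lookup(self, key):
        i = hash(key) % self.capacity
        while self.keys[i] is not None:
            if self.keys[i] == key:
                return self.values[i]
            i = (i + 1) % self.capacity  # keep probing past other keys
        return None                      # empty slot reached: key is absent

table = LinearProbingTable(8)
table.insert(("the", "quick"), -0.25)
assert table.lookup(("the", "quick")) == -0.25
```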
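
The interpolation-search rows describe "a form of binary search with better estimates informed by the uniform key distribution". Below is a minimal sketch of that standard algorithm, assuming a sorted array of integers; the cited paper's exact pivot formula may differ:

```python
def interpolation_search(A, k):
    """Find k in sorted array A, estimating each pivot by linear
    interpolation between the endpoint keys."""
    lo, hi = 0, len(A) - 1
    while lo <= hi and A[lo] <= k <= A[hi]:
        if A[hi] == A[lo]:
            pivot = lo                   # all remaining keys are equal
        else:
            pivot = lo + (k - A[lo]) * (hi - lo) // (A[hi] - A[lo])
        if A[pivot] == k:
            return pivot                 # estimate exact: terminate successfully
        if A[pivot] < k:
            lo = pivot + 1
        else:
            hi = pivot - 1
    return -1                            # key not present

assert interpolation_search([2, 5, 9, 14, 23, 42], 14) == 3
```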
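
The BLEU rows define n-gram precisions p1..p4 and a brevity penalty over output length c and reference length r. Here is a toy single-sentence version, assuming whitespace tokenization and the min(1, c/r) penalty given above; real BLEU aggregates counts over a whole test set and smooths zero counts:

```python
from collections import Counter

def ngram_counts(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(output, reference, max_n=4):
    out, ref = output.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        out_ng, ref_ng = ngram_counts(out, n), ngram_counts(ref, n)
        # Clipped matches: an output n-gram counts only as often as it
        # actually appears in the reference.
        matches = sum(min(c, ref_ng[g]) for g, c in out_ng.items())
        total = sum(out_ng.values())
        precisions.append(matches / total if total else 0.0)
    if min(precisions) == 0.0:
        return 0.0                           # toy version: no smoothing
    geo_mean = 1.0
    for p in precisions:
        geo_mean *= p ** (1.0 / max_n)       # geometric mean of p_1..p_4
    brevity = min(1.0, len(out) / len(ref))  # penalize too-short output
    return brevity * geo_mean

assert bleu("the cat sat on the mat", "the cat sat on the mat") == 1.0
```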
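
The TF/ITF row defines TF as a word's frequency in the bag and ITF as the inverse of the log of its frequency in the entire corpus. A direct sketch, assuming every scored word occurs at least twice in the corpus so the log stays positive (rare words would need smoothing or filtering); the counts are hypothetical:

```python
import math
from collections import Counter

def tf_itf(bag, corpus_freq):
    """Weight each word in `bag` by TF * ITF, with
    ITF = 1 / log(corpus frequency), as the row above defines it."""
    tf = Counter(bag)
    return {w: tf[w] / math.log(corpus_freq[w]) for w in tf}

corpus_freq = {"acquire": 150, "share": 2000, "of": 100000}
weights = tf_itf(["acquire", "share", "share", "of"], corpus_freq)
# The frequent function word "of" receives the lowest weight.
print(sorted(weights, key=weights.get, reverse=True))
```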
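
Finally, the parser-combination rows report both (P + R)/2 and F-measure. The two differ in how they treat imbalance between precision and recall, as this minimal sketch shows:

```python
def mean_pr(p, r):
    return (p + r) / 2            # arithmetic mean: the (P + R)/2 measure

def f_measure(p, r):
    return 2 * p * r / (p + r)    # balanced F (harmonic mean); needs p + r > 0

# Same arithmetic mean either way, but the harmonic mean punishes imbalance.
print(mean_pr(0.9, 0.5), f_measure(0.9, 0.5))  # 0.7 vs ~0.643
```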