Lexicase: an on-line reference manual [to hold.doc]
Representations [reps]
In this section I will present and illustrate the basic notational conventions of lexicase dependency grammar. The system is basically simple: a sentence or phrase is a network of words connected by binary unidirectional lexically licensed links. This information may be optionally represented as a dependency stemma, but this is only an expository convenience; all the information on dependency and linear precedence is included as atomic and valency (contextual) features in the lexical matrices themselves. Representing some of this information redundantly as edges in a stemma does not alter anything.
In a phrase or sentence, each word is given an index based on its linear position, and each word depends on at most one regent word and/or governs one or more dependent words. The binary link between pairs of words is formalized by copying the linear index of the dependent into a valency feature of the regent, as illustrated in ).
) Dependency links
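The indexing-and-copying convention described above can be sketched in Python. This is only an illustrative encoding, not lexicase notation: the names `Word` and `deps` are my own, and `deps` stands in for the valency (contextual) features of the lexical matrix.

```python
from dataclasses import dataclass, field

@dataclass
class Word:
    """A lexical matrix: the word form, its linear index in the
    sentence, and a valency feature listing the linear indices of
    its dependents (the regent 'copies in' each dependent's index)."""
    form: str
    index: int                                # 1-based linear position
    deps: list = field(default_factory=list)  # indices of dependents

# "the old man": 'man' is the regent of 'the' and 'old',
# so their indices are copied into its valency feature.
the = Word("the", 1)
old = Word("old", 2)
man = Word("man", 3, deps=[1, 2])

# Each word depends on at most one regent, so no index may appear
# in more than one word's valency feature.
all_deps = the.deps + old.deps + man.deps
assert len(all_deps) == len(set(all_deps))
```

The binary unidirectional links are thus carried entirely by the lexical entries themselves; a stemma drawn over these three words would add no information.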
The Chomskyan 'minimalist program' advocates 'bare phrase structure', constituent structure trees without node labels &&. Like other basic components of X-bar theory, this idea seems to have been taken without attribution from dependency grammar. Since 1979/1988 && (Starosta 1979 EoPS &&, 1988:9,24), lexicase dependency trees have been 'bare' because given the constraints applied to lexicase representations, node labels are predictable from lexical information. This is far from saying that the two systems are now identical, however. Minimalism would have to shed its empty and 'light' categories and binary structure (and of course all movement rules) before it began to approximate the level of constraints and empirical content characteristic of lexicase.
Node labels are redundant in lexicase dependency representations because of the locality constraint: all dependency links are strictly local. Stated in stemma terms, there can be no intermediate nodes such as N, N', or N" on the path from a word to its regent or dependent. Given locality, N, N', and N" must be non-distinct, and an N node is redundant because the same information is given by the word class feature [N]. For convenience of reference, a 'phrase' can be defined as a word and all the words that depend on it directly or indirectly. If the head word is an Adj, then the phrase headed by the Adj is an AdjP. However, phrases as such do not play a role in the statement of generalizations in a pure dependency grammar.1
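The definition of 'phrase' just given is a transitive closure over dependency links, which can be sketched as follows. The function name `phrase` and the dictionary encoding (regent index mapped to a list of dependent indices) are my own illustrative choices, not part of lexicase notation.

```python
def phrase(head, deps):
    """Indices of the phrase headed by `head`: the head word plus
    every word that depends on it directly or indirectly."""
    result, stack = {head}, [head]
    while stack:
        for d in deps.get(stack.pop(), []):
            if d not in result:
                result.add(d)
                stack.append(d)
    return sorted(result)

# "very old men sleep": sleep(4) governs men(3), men governs old(2),
# and old governs very(1).
deps = {4: [3], 3: [2], 2: [1]}
print(phrase(3, deps))   # phrase headed by 'men': [1, 2, 3]
print(phrase(2, deps))   # phrase headed by the Adj 'old': [1, 2]
```

Since the head of the second phrase is an Adj, that phrase is an AdjP; the label follows from the head's word class feature [Adj] and need not be stated anywhere in the representation.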
It is in fact technically possible to streamline such representations even further. In ), dependency stemmas are shown with a vertical mast above each word, and dependency link edges are tied to these masts. An even simpler approach would be to eliminate these masts and link words by edges directly, as shown in ). (I have kept the lexicase convention of using horizontal edges to represent exocentric constructions.)
) Minimal stemmas
This notation has several advantages over the mast notation: 1) it is closer to Tesnière's original version, and 2) it emphasizes the fact that dependency representations are composed of direct word-to-word links. In spite of that, I have kept the mast notation, which is adapted from John Anderson's (Anderson 1971) and David Hays' && and Jane Robinson's && dependency stemma representations. The mast notation is perhaps a bit easier to read when feature matrices are added to the lexical entries, and I think easier to draw precisely. No question of theory is involved, since the two notations give exactly the same information, and since all stemmas are redundant because the same information is included in the lexical entries.