39 The Lexical Perspective

The semantic perspective for analyzing relationships is the fundamental one, but it is intrinsically tied to the lexical one because a relationship is always expressed using words in a specific language. For example, we understand the relationships among the concepts or classes of “food,” “meat,” and “beef” by using the words “food,” “meat,” and “beef” to identify progressively smaller classes of edible things in a class hierarchy.

The connection between concepts and words is not so simple. In the Simpson family example with which we began this chapter, we noted with “father” and “padre” that languages differ in the words they use to describe particular kinship relationships. Furthermore, we pointed out that cultures differ in which kinship relationships are conceptually distinct, so that languages like Chinese make distinctions about the relative ages of siblings that are not made in English.[1]

This is not to suggest that an English speaker cannot notice the difference between his older and younger sisters, only that this distinction is not lexicalized (captured in a single word) as it is in Chinese. This “missing word” in English, from the perspective of Chinese, is called a lexical gap. Exactly when a lexical gap exists is sometimes tricky to determine, because it depends on how we define “word”: “polar bear” and “sea horse” are not lexicalized as single words, but each is a single meaning-bearing unit because we do not decompose and reassemble meaning from the two separate words. These lexical gaps differ from language to language, whereas “conceptual gaps” (the things we cannot think of or directly experience, like the pull of gravity) may be innate and universal. We revisit this issue as “linguistic relativity” in Categorization: Describing Resource Classes and Types.[2]

Earlier in this book we discussed the naming of resources (“The Problems of Naming”) and the design of a vocabulary for resource description (“Scope, Scale, and Resource Description”), and we explained how increasing the scope and scale of an organizing system made it essential to be more systematic and precise in assigning names and descriptions. We need to be sure that the terms we use to organize resources capture the similarities and differences between them well enough to support our interactions with them. After our discussion about semantic relationships in this chapter, we now have a clearer sense of what is required to bring like things together, keep different things separate, and to satisfy any other goals for the organizing system.

For example, if we are organizing cars, buses, bicycles, and sleds, all of which are vehicles, there is an important distinction between vehicles that are motorized and those that are powered by human effort. It might also be useful to distinguish vehicles with wheels from those that lack them. Not making these distinctions leaves an unbalanced or uneven organizing system for describing the semantics of the vehicle domain. However, only the “motorized” concept is lexicalized in English, which is why we needed to invent the “wheeled vehicle” term in the second case.[3]

Simply put, we need to use words effectively in organizing systems. To do that, we need to be careful about how we talk about the relationships among words and how words relate to concepts. There are two different contexts for those relationships.

Relationships among Word Meanings

There are several different types of relationships among word meanings. Not surprisingly, in most cases they parallel the types of relationships among concepts that we described in “The Semantic Perspective”.

Hyponymy and Hyperonymy

When words encode the semantic distinctions expressed by class inclusion, the word for the more specific class in this relationship is called the hyponym, while the word for the more general class to which it belongs is called the hypernym. George Miller suggested an exemplary formula for defining a hyponym as its hypernym preceded by adjectives or followed by relative clauses that distinguish it from its co-hyponyms, mutually exclusive subtypes of the same hypernym.

    hyponym = {adjective+} hypernym {distinguishing clause+}

For example, robin is a hyponym of bird, and could be defined as “a migratory bird that has a clear melodious song and a reddish breast with gray or black upper plumage.” This definition does not describe every property of robins, but it is sufficient to differentiate robins from bluebirds or eagles.[4]
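Miller’s formula is simple enough to sketch in code. The following toy function (its name and structure are illustrative, not from the text) composes a definition string from an adjective list, a hypernym, and distinguishing clauses:

```python
# A toy rendering of Miller's definitional template:
#   hyponym = {adjective+} hypernym {distinguishing clause+}

def define_hyponym(hypernym, adjectives=(), clauses=()):
    """Compose a definition string from Miller's template."""
    definition = "a " + " ".join([*adjectives, hypernym])
    if clauses:
        definition += " that " + " and ".join(clauses)
    return definition

print(define_hyponym(
    "bird",
    adjectives=["migratory"],
    clauses=["has a clear melodious song",
             "has a reddish breast with gray or black upper plumage"]))
# -> a migratory bird that has a clear melodious song and
#    has a reddish breast with gray or black upper plumage
```

As the text notes, such a definition need not describe every property of the hyponym; it only has to differentiate it from its co-hyponyms.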

Metonymy

Part-whole or meronymic semantic relationships have lexical analogues in metonymy, in which an entity is described by something that is contained in or otherwise part of it. A country’s capital city, or a building where its top leaders reside, is often used as a metonym for the entire government: “The White House announced today…” Similarly, important concentrations of business activity are often metonyms for their entire industries: “Wall Street was bailed out again…”

Synonymy

Synonymy is the relationship between words that express the same semantic concept. The strictest definition is that “synonyms are words that can replace each other in some class of contexts with insignificant changes of the whole text’s meaning.”[5] This is an extremely hard test to pass, except for acronyms or compound terms like “USA,” “United States,” and “United States of America” that are completely substitutable.

Most synonyms are not absolute synonyms, and instead are considered propositional synonyms. Propositional synonyms are not identical in meaning, but they are equivalent enough that substituting one for the other will not change the truth value of the sentence. This weaker test lets us treat words as synonyms even though their meanings subtly differ. For example, if Lisa Simpson can play the violin, then because “violin” and “fiddle” are propositional synonyms, no one would disagree with an assertion that Lisa Simpson can play the fiddle.

An unordered set of synonyms is often called a synset, a term first used by the WordNet “semantic dictionary” project started in 1985 by George Miller at Princeton.[6] Instead of using spelling as the primary organizing principle for words, WordNet uses their semantic properties and relationships to create a network that captures the idea that words and concepts are an inseparable system. Synsets are interconnected by both semantic relationships and lexical ones, enabling navigation in either space.[7]

Polysemy

We introduced the lexical relationship of polysemy, in which a word has several different meanings or senses, in the context of problems with names (“Homonymy, Polysemy, and False Cognates”). For example, the word “bank” can refer to a river bank, a money bank, a bank shot in basketball or billiards, an aircraft maneuver, and other concepts.[8]

Polysemy is represented in WordNet by including a word in multiple synsets. This makes WordNet an extremely useful resource for word sense disambiguation in natural language processing research and applications. When a polysemous word is encountered, it and the words nearby in the text are looked up in WordNet. By following the lexical relationships in the synset hierarchy, a “synset distance” can be calculated. The smallest semantic distance between the words, which identifies their most semantically specific hypernym, can be used to identify the correct sense. For example, in the sentence:

Put the money in the bank

Two of the three WordNet senses for “money” are:

1) the most common medium of exchange
2) the official currency issued by a government or national bank

and the first two of the ten WordNet senses for “bank” are:

1) a financial institution that accepts deposits
2) sloping land, especially the slope beside a body of water

The synset hierarchies for the two senses of “money” intersect after a very short path with the hierarchy for the first sense of “bank,” but do not intersect with the second sense of “bank” until they reach very abstract concepts.[9]
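The disambiguation procedure described above can be sketched with a tiny hand-built hypernym hierarchy standing in for the real WordNet database. Every synset name and link below is an illustrative assumption (real systems use WordNet itself, for example through the NLTK interface):

```python
# A toy hypernym hierarchy: each synset maps to its immediate hypernym.
# All names and links here are assumptions for illustration only.
TOY_HYPERNYMS = {
    "money.n.01": "medium_of_exchange",
    "medium_of_exchange": "standard",
    "standard": "abstraction",
    "bank.n.01": "financial_institution",  # sense 1: deposit-taking institution
    "financial_institution": "institution",
    "institution": "organization",
    "organization": "abstraction",
    "bank.n.02": "slope",                  # sense 2: sloping land beside water
    "slope": "geological_formation",
    "geological_formation": "object",
    "object": "entity",
    "abstraction": "entity",
}

def hypernym_chain(synset):
    """Return the synset followed by all of its hypernyms up to the root."""
    chain = [synset]
    while chain[-1] in TOY_HYPERNYMS:
        chain.append(TOY_HYPERNYMS[chain[-1]])
    return chain

def synset_distance(a, b):
    """Path length between two synsets through their most specific
    common hypernym; smaller means semantically closer."""
    chain_a, chain_b = hypernym_chain(a), hypernym_chain(b)
    lca = next(s for s in chain_a if s in chain_b)  # lowest common hypernym
    return chain_a.index(lca) + chain_b.index(lca)

# "money" intersects the financial sense of "bank" sooner than the
# riverbank sense, so the financial sense is selected:
print(synset_distance("money.n.01", "bank.n.01"))  # shorter path
print(synset_distance("money.n.01", "bank.n.02"))  # longer path
```

The two distances differ because “money” and the financial sense of “bank” meet at a relatively specific shared hypernym, whereas “money” and the riverbank sense only meet at a very abstract root concept, exactly the pattern the text describes.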

Antonymy

Antonymy is the lexical relationship between two words that have opposite meanings. Antonymy is a very salient lexical relationship, and for adjectives it is even more powerful than synonymy. In word association tests, when the probe word is a familiar adjective, the most common response is its antonym; a probe of “good” elicits “bad,” and vice versa. Like synonymy, antonymy is sometimes exact and sometimes more graded.[10]

Contrasting or binary antonyms are used in mutually exclusive contexts where one or the other word can be used, but never both. For example, “alive” and “dead” can never be used at the same time to describe the state of some entity, because the meaning of one excludes or contradicts the meaning of the other.

Other antonymic relationships between word pairs are less semantically sharp because they can sometimes appear in the same context as a result of the broader semantic scope of one of the words. “Large” and “small,” or “old” and “young,” generally suggest particular regions on size or age continua, but “how large is it?” or “how old is it?” can be asked about resources that are objectively small or young.[11]

Thesauri

The words that people naturally use when they describe resources reflect their unique experiences and perspectives, and this means that people often use different words for the same resource and the same words for different ones. Guiding people when they select description words from a controlled vocabulary is a partial solution to this vocabulary problem (“The Vocabulary Problem”) that becomes increasingly essential as the scope and scale of the organizing system grows. A thesaurus is a reference work that organizes words according to their semantic and lexical relationships. Thesauri are often used by professionals when they describe resources.

Thesauri have been created for many domains and subject areas. Some thesauri are very broad and contain words from many disciplines, like the Library of Congress Subject Headings (LOC-SH) used to classify any published content. Other commonly used thesauri are more focused, like the Art and Architecture Thesaurus (AAT) developed by the Getty Trust and the Legislative Indexing Vocabulary developed by the Library of Congress.[12]

We can return to our simple food taxonomy to illustrate how a thesaurus annotates vocabulary terms with lexical and semantic relationships. The class inclusion relationships of hypernymy and hyponymy are usually encoded using BT (“broader term”) and NT (“narrower term”):

Meat BT Food
Meat NT Beef

The BT and NT relationships in a thesaurus create a hierarchical system of words, but a thesaurus is more than a lexical taxonomy for some domain because it also encodes additional lexical relationships for the most important words. Many thesauri emphasize the cluster of relationships for these key words and de-emphasize the overall lexical hierarchy.

Because the purpose of a thesaurus is to reduce synonymy, it distinguishes among synonyms or near-synonyms by indicating one of them as a preferred term using UF (“used for”):

Food UF Sustenance, Nourishment

A thesaurus might employ USE as the inverse of the UF relationship to refer from a less preferred or variant term to a preferred one:

Victuals USE Food

Thesauri also use RT (“related term” or “see also”) to indicate terms that are not synonyms but which often occur in similar contexts:

Food RT Cooking, Dining, Cuisine
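One way to see how these annotations support software is to encode them as data. The structure below is an assumed sketch, not a standard thesaurus exchange format; it derives the USE index as the inverse of the UF relationship so that variant terms can be resolved to preferred ones:

```python
# A toy encoding of the thesaurus entries above. The dictionary layout
# is an illustrative assumption, not a standard interchange format.
THESAURUS = {
    "Meat": {"BT": ["Food"], "NT": ["Beef"]},
    "Food": {"NT": ["Meat"],
             "UF": ["Sustenance", "Nourishment", "Victuals"],
             "RT": ["Cooking", "Dining", "Cuisine"]},
}

# USE is the inverse of UF: each variant points to its preferred term.
USE = {variant: preferred
       for preferred, relations in THESAURUS.items()
       for variant in relations.get("UF", [])}

def preferred_term(term):
    """Resolve a variant term to its preferred term; preferred terms
    (and unknown terms) map to themselves."""
    return USE.get(term, term)

print(preferred_term("Victuals"))  # -> Food
print(preferred_term("Meat"))      # -> Meat
```

Resolving every description term through `preferred_term` before indexing is one simple way an organizing system can reduce the vocabulary problem the section describes.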

Relationships among Word Forms

The relationships among word meanings are critically important, but whenever we create, combine, or compare resource descriptions we also need to pay attention to relationships between word forms. These relationships begin with the idea that all natural languages create words and word forms from smaller units. The basic building blocks for words are called morphemes; they can express semantic concepts (when they are called root words) or abstract grammatical concepts (like “pastness” or “plural”). The analysis of the ways by which languages combine morphemes is called morphology.[13]

Simple examples illustrate this:

“dogs” = “dog” (root) + “s” (plural)
“uncertain” = “certain” (root) + “un” (negation)
“denied” = “deny” (root) + “ed” (past tense)

Morphological analysis of a language is heavily used in text processing to create indexes for information retrieval. For example, stemming (discussed in more detail in Interactions with Resources) is morphological processing that removes prefixes and suffixes to leave the root form of words. Similarly, simple text processing applications like hyphenation and spelling correction solve word form problems using roots and rules because this approach is more scalable and robust than using word lists. Many misspellings of common words (e.g., “pain”) are correct spellings of words of lower frequency (e.g., “pane”), so adding “pane” to a list of misspelled words would occasionally flag it incorrectly. In addition, because natural languages are generative and create new words all the time, a word list can never be complete; for example, when “flickr” occurs in a text, is it a misspelling of “flicker” or the correct spelling of the popular photo-sharing site?
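A suffix-stripping stemmer of this kind can be sketched in a few lines. The rules below are a deliberately tiny, illustrative set; real stemmers such as the Porter stemmer use dozens of context-sensitive rules:

```python
# A toy suffix-stripping stemmer. The rule list is an illustrative
# assumption; production stemmers have far more (and subtler) rules.
SUFFIX_RULES = [
    ("ies", "y"),  # "ponies" -> "pony"
    ("ing", ""),   # "organizing" -> "organiz" (overstemming is common)
    ("ed", ""),    # "denied" -> "deni" (toy rules are imperfect)
    ("s", ""),     # "dogs" -> "dog"
]

def stem(word):
    """Strip the first matching suffix, leaving an approximate root.
    The length check avoids mangling very short words like 'is'."""
    for suffix, replacement in SUFFIX_RULES:
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)] + replacement
    return word

print(stem("dogs"))    # -> dog
print(stem("ponies"))  # -> pony
```

Even this toy version shows why stemming helps retrieval: queries and documents that use different inflected forms of the same root are mapped to a common index term.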

Derivational Morphology

Derivational morphology deals with how words are created by combining morphemes. Compounding, putting two “free morphemes” together as in “batman” or “catwoman,” is an extremely powerful mechanism. The meaning of some compounds is easy to understand when the first morpheme qualifies or restricts the meaning of the second, as in “birdcage” and “tollbooth.”[14] However, many compounds take on new meanings that are not as literally derived from the meanings of their constituents, like “seahorse” and “batman.”

Other types of derivations using “bound” morphemes follow more precise rules for combining them with “base” morphemes. The most common types of bound morphemes are prefixes and suffixes, which usually create a word of a different part-of-speech category when they are added. Familiar English prefixes include “a-,” “ab-,” “anti-,” “co-,” “de-,” “pre-,” and “un-.” Among the most common English suffixes are “-able,” “-ation,” “-ify,” “-ing,” “-ity,” “-ize,” “-ment,” and “-ness.” Compounding and adding prefixes or suffixes are simple mechanisms, but very complex words like “unimaginability” can be formed by using them in combination.

Inflectional Morphology

Inflectional mechanisms change the form of a word to represent tense, aspect, agreement, or other grammatical information. Unlike derivation, inflection never changes the part-of-speech of the base morpheme. The inflectional morphology of English is relatively simple compared with other languages.[15]

  1. Languages and cultures differ in how they distinguish and describe kinship, so Bart might find the system of family organization easier to master in some countries and cultures and more difficult in others.

  2. (Bentivogli and Pianta 2000).

  3. This example comes from (Fellbaum 2010, pages 236-237). German has a word Kufenfahrzeug for vehicle on runners.

  4. (Miller 1998).

  5. (Bolshakov and Gelbukh 2004), p. 314. The quote continues: “The references to ‘some class’ and to ‘insignificant change’ make this definition rather vague, but we are not aware of any significantly stricter definition. Hence the creation of synonymy dictionaries, which are known to be quite large, is rather a matter of art and insight.”

  6. George Miller made many important contributions to the study of mind and language during his long scientific career. His most famous article, The Magical Number Seven, Plus or Minus Two (Miller 1956), was seminal in its proposals about information organization in human memory, even though it is one of the most misquoted scientific papers of all time. Relatively late in his career Miller began the WordNet project to build a semantic dictionary, which is now an essential resource in natural language processing applications. See http://wordnet.princeton.edu/.

  7. This navigation is easiest to carry out using the commercial product called “The Visual Thesaurus” at http://www.visualthesaurus.com/.

  8. These contrasting meanings for “bank” are clear cases of polysemy, but there are often much subtler differences in meaning that arise from context. The verb “save” seems to mean something different in “The shopper saved...” versus “The lifeguard saved...” although they overlap in some ways. (Fillmore and Atkins 2000) and others have proposed definitions of polysemy, but there is no rigorous test for determining when word meanings diverge sufficiently to be called different senses.

  9. Many techniques for using WordNet to calculate measures of semantic similarity have been proposed. See (Budanitsky and Hirst 2006).

  10. See (Gross and Miller, 1990).

  11. This type of “lexical asymmetry” is called “markedness.” The broader or dominant term is the unmarked one and the narrower one is the marked one. See (Battistella 1996).

  12. http://www.loc.gov/library/libarch-thesauri.html, http://www.getty.edu/research/tools/vocabularies/aat/index.html.

  13. Languages differ a great deal in morphological complexity and in the nature of their morphological mechanisms. Mandarin Chinese has relatively few morphemes and few grammatical inflections, which leads to a huge number of homophones. English is pretty average on this scale. A popular textbook on morphology is (Haspelmath and Sims 2010).

  14. These so-called endocentric compounds essentially mean what the morphemes would have meant separately. But if a “birdcage” is exactly a “bird cage,” what is gained by creating a new word? This question has long been debated in subject classification, where it is framed as the contrast between “pre-coordination” and “post-coordination.” For example, is it better to pre-classify some resources as about “Sports Gambling” or should such resources be found by intersecting those classified as about “Sports” and about “Gambling.” See (Svenonius 2000, pages 187-192).

  15. English nouns have plural (book/books) and possessive forms (the professor’s book), adjectives have comparatives and superlatives (big/bigger/biggest), and regular verbs have only four inflected forms (see http://cla.calpoly.edu/~jrubba/morph/morph.over.html). In contrast, in Classical Greek each noun can have 11 word forms, each adjective 30, and every regular verb over 300 (Anderson 2001).

