On Linguistics and Politics
Noam Chomsky interviewed by Günther Grewendorf
Protosociology, Vol. 6, 1994, pp. 293-303
QUESTION: Professor Chomsky, for nearly four decades now you have been the leading figure in generative grammar, which is the most influential paradigm in modern linguistics, and you have become one of the leading Western intellectuals in the political campaign against manipulation and oppression. I would like to ask you a few questions ranging from linguistic theory to your political opinions.

Surveying the development of Generative Grammar from Syntactic Structures (1957) up until the Minimalist Program, would you characterize this development as a continuous process whose inner logic has culminated in a minimalist program for linguistic theory, or has generative grammar gone astray at some point and then returned to insights of a former stage?

CHOMSKY: In retrospect, I think one can detect a kind of inner logic, though it would be an exaggeration to say that it was evident all along. And over the years, there have been many tendencies, experiments, and conflicting ideas, some of which have proven more fruitful than others, some of which seem (again in retrospect) to have been wrong turnings.

QUESTION: Are there stages within the history of Generative Linguistics that you would highlight as rather revolutionary, marking major breakthroughs in the field?

CHOMSKY: I think there was a really significant change that crystallized about 1980, in the so-called principles-and-parameters (P&P) approach. To explain, it's necessary to sketch some basic problems of the field and the way they were addressed.

The first attempt to give a general presentation of generative grammar was my Logical Structure of Linguistic Theory (LSLT, 1955; parts of a 1956 revision were published in 1975); Syntactic Structures, which you mention, was an adaptation of this for an undergraduate course at MIT, with material added on topics of particular interest in this context (specifically automata theory). The main line of work since then will be easier to sketch with some current terminology.

We are concerned with the human language faculty. Like other biological systems, it has a modular structure. We can at once distinguish two components: a cognitive system that stores information, and performance systems that access this information for articulation, perception, talking about the world, asking questions, and so on. Let's focus on the cognitive component. It has an initial state, close to invariant across the species, and undergoes changes of state until it pretty much stabilizes, apparently pre-puberty. We may refer to a state of the language faculty as a language, or to avoid pointless terminological debate, an I-language, where I is to suggest internal, individual, and intensional, the approach being internalist and individualist and characterizing in intension a procedure that generates expressions (thus, it provides a specific way of generating the class of expressions that constitute its extension). We say that a theory of a language (a grammar) achieves descriptive adequacy to the extent that it characterizes the I-language correctly, and thus provides a correct account of the expressions generated and their properties. We say that a theory of the language faculty attains explanatory adequacy to the extent that it gives the initial state. The term explanation is appropriate in the sense that the initial state can be considered abstractly as a procedure that determines an I-language, given data, and thus provides an explanation for the properties of the I-language and the expressions it generates, given the boundary conditions of experience. By language henceforth I mean I-language, in the technical sense.

LSLT and other detailed work of the 1950s (particularly G.H. Matthews, Hidatsa Syntax) at once revealed a tension between descriptive and explanatory adequacy. As soon as serious descriptive work was undertaken, it was discovered that available accounts of language, however extensive, barely scratched the surface; even the most comprehensive grammar provided little more than hints that sufficed for the intelligent reader; the language faculty was tacitly presupposed (without awareness, of course). The same is true of the most comprehensive dictionary. To attain descriptive adequacy, it seemed necessary to construct extremely intricate and complex grammars, radically different for different languages. On the other hand, to approach explanatory adequacy it was necessary to assume that the states attained are determined to an overwhelming extent by the initial state, which is language-invariant. Thus languages must all be cast in the same mold, differing only superficially. The major research project was aimed at overcoming this tension by showing that the apparent complexity and variety of language were only superficial, the result of minor changes in a fixed and invariant system.

This work reached its culmination in the P&P model, which, unlike earlier work in generative grammar, constituted a major break from a tradition of some 2500 years. Traditional grammars are rule- and construction-based. Thus a grammar will contain rules for constructing verb phrases or relative clauses in English, and these are different from the rules for constructions in other languages. As noted, traditional grammars only gave hints and examples, but generative grammar proceeded pretty much on the same model, attempting to fill in the gaps, which were quickly discovered to be huge. The P&P approach dispensed entirely with both rules and constructions. It postulated general principles of the language faculty and a finite array of options (parameters), which appear to be limited to a subpart of the lexicon and peripheral parts of the phonology. A language is determined by a choice of values for the parameters.

The constructions of traditional and early generative grammar disappear; they are taxonomic artifacts, like "terrestrial mammal". The verb phrases and relative clauses of English result from the interplay of fixed principles that are not specific to these constructions, with particular values for parameters. This approach could be called a breakthrough, in that it provided a radically new conception of language and also led to a great deal of new empirical work in languages of broad typological variety. New theoretical ideas were also developed in the effort to show that the variety of possible human languages can be reduced to fixed and invariant principles with limited options of variation -- so that, in effect, we can deduce Hungarian by setting parameters in the fixed system one way, and deduce Swahili by setting them a different way. That goal is, naturally, far from attainment, but for the first time it has been intelligibly formulated and the task of approaching it can be addressed constructively, and indeed has been.

QUESTION: With your 1991 paper on economy of derivation and representation, a discussion has been generated as to what extent the language faculty is contingent on the operation of cognitive principles of economy. Doesn't this appeal to general cognitive principles of economy create a problem for the claim of the autonomous nature of the language faculty?

CHOMSKY: The question of autonomy of the language faculty is not a dogma, but an apparent discovery, one that remains unaffected by the introduction of principles of economy of derivation and representation. Even the earliest work in generative grammar made crucial appeal to such principles, though at the time, in a different way: as part of the evaluation procedure that selected among grammars of a permissible format, given data. The principles of economy that have been proposed in recent work are quite natural, in that they seem to provide a kind of optimal design for a system like the language faculty (an intuitive judgment, but not a vacuous one). But they still appear to be specific to the language faculty, not general cognitive principles. More recent work in the minimalist program carries these efforts further, attempting to reduce assumptions to what is conceptually necessary and to show that the properties of the I-language are otherwise determined by interface conditions (that is, by the ways performance systems access the linguistic expressions generated by the I-language).

If this program succeeds, the specificity of the language faculty will reside in the nature of the interface conditions, the properties of the lexicon, the economy conditions, and some elementary features of the computational procedure that forms linguistic expressions from lexical items -- perhaps close to invariant across languages. If correct, this approach will identify the specificity of the language faculty, and we may then ask about its autonomy: are there cognitive systems, or other biological systems, which share these properties in significant ways? For the moment, the answer to that question seems negative -- but again, it is an empirical issue, not a theological doctrine.

QUESTION: Contrary to views that have proclaimed the end of transformationalism, the derivational nature of the computational system still plays a crucial role in the most recent developments of Generative Grammar. Can this role only be established through theory-internal considerations or can you think of any external evidence that might support this view?

CHOMSKY: First, we have to distinguish the two notions transformationalism and depth of derivation. On derivational depth, I'd go beyond what you suggest. As principles and assumptions become simpler, quite typically explanations become longer and more complex. Much the same seems to be true here. As the principles of the language faculty have become more refined, the derivation of linguistic expressions, though determined by fixed principles and parametric choices, becomes increasingly intricate; properties of expressions are not directly stated by rules specific to them but derived from an interplay of invariant principles. Accordingly, the derivational nature of the computational system becomes even more fundamental than before. As for transformationalism, that has to do with a specific property of natural languages, namely, that expressions are commonly interpreted in a position different from the one in which they physically appear. Thus in the sentence "the book seems to have been written t by John", the phrase "the book" is interpreted in the position of t (its trace); it is given the same interpretation as in "has written the book". That property of natural language is pervasive, and every theory of language has to capture it somehow. Speaking abstractly, the relation of the phrase to the position of its interpretation (in the example given, the relation of "the book" to its trace) is a transformation. Exactly how transformations are to be described is an empirical question. For now, the evidence seems to me considerable that they are operations that move an expression from one position to another, leaving a copy (a trace). That's a theory-internal conclusion, but one that seems reasonably solid at the moment.

As for external evidence, the term is not very clear, in my opinion. The example I just gave is one of innumerably many like it that provide evidence of the kind of displacement that is captured by transformational rules. Is it internal or external? Suppose that a study of priming effects in parsing distinguishes effects of trace from other unpronounced items. Would that be internal or external evidence? The same question can be asked about studies of electrical activity of the brain that distinguish between the effects of different transformational operations. The notions external and internal derive from an approach to the study of language that seems to me dubious from the start, an approach that seeks to distinguish linguistic from psychological evidence. A specific datum does not come with its purpose written on its sleeve. It is just a bit of data, which may be understood as evidence for something, within a particular theoretical framework. Judgments about the interpretation of sentences (essentially, perceptual judgments) are data, along with the results of priming studies and electrical activity of the brain. We seek any data that may provide evidence bearing on the nature of the language faculty.

QUESTION: In his recent book, Bright Air, Brilliant Fire: On the Matter of the Mind, G.M. Edelman claims that cognitive science rests on a set of unexamined assumptions, one of its crucial deficiencies being that it makes only marginal reference to the biological foundations that underlie the mechanisms that cognitive science purports to explain. Do you feel that this view has something to it and that it might also affect the kind of linguistics that you are advocating?

CHOMSKY: Edelman's description is correct: there is a great gap between computational and connectionist theories of the brain, on the one hand, and the study of the anatomy and physiology of the brain, on the other. But his conclusions from this familiar observation are seriously in error, in my opinion. Our common concern is to learn about the brain: its architecture and components, their states and properties, their interactions, their constitution, and so on. Within the cognitive sciences, computational and neural net theories have been developed to account for some of these aspects of the brain and its function. There are also accounts of the brain in other terms: in terms of cells, electrical activity, and so on. Naturally, we hope to unify these: to show, for example, how electrical activity relates to representations and derivations, or how these elements of computational systems relate to cells. For the present, there is no real understanding of the problems of unification. That is a rather typical feature of the history of science. A century ago, for example, there was no way to unify chemistry with physics. Chemists described the world in terms of elements, valence, the periodic table, Kekulé's rational formulae, atomic weights, etc. Physicists described the world in terms of particles in motion, electromagnetic fields, etc. The two approaches could not then be unified; no one knew why. It turned out that the more fundamental discipline had to be radically modified for unification to proceed. With the Bohr atom and the quantum theoretic revolution, Linus Pauling was able to give an account of the chemical bond in terms of the new physics, thus unifying the disciplines (note that this was not a case of reduction, in any meaningful sense of the term).

Suppose that a century ago someone had argued that chemistry "rests on a set of unexamined assumptions, one of its crucial deficiencies being that it makes only marginal reference to the physical foundations that underlie the mechanisms that it purports to explain." That would have been true in the sense that the unification problem remained unsolved, and indeed was unsolvable in terms of the physics of the day. The standard Norton History of Chemistry points out that "The chemist's matter was discrete and discontinuous, the physicist's energy continuous", a "nebulous mathematical world of energy and electromagnetic waves ..." It seemed (and indeed was) impossible to relate the former to the latter.

Like many others before him, Edelman is properly intrigued by the distinction between the discrete and discontinuous nature of the brain as seen from the standpoint of the cognitive sciences and the continuous and endlessly varied character of the brain as seen by the neuroscientist. He sees this gap as a "crisis", which seems to me a bit melodramatic. However one evaluates that, he then proceeds to a completely erroneous conclusion: that it is a crisis for the cognitive sciences, which would be on a par with the claim, a century ago, that chemistry is facing a "crisis" because it is based on "unexamined assumptions" of discontinuity that are "refuted" by the physics of the day -- inadequate physics, as was later learned. Neither move makes any sense at all.

What we should do is to pursue all approaches to the brain as best we can, seeing what one can learn from the other. The discoveries of the chemist provided certain guidelines for the revolution in physics, and it could turn out that the discoveries of the cognitive scientists will do the same for the brain sciences. Or, the latter might develop some new approach to properties of language and other aspects of cognition that would suggest new directions for the cognitive sciences. One can have no doctrines about such matters.

QUESTION: It has been claimed by representatives of the generative paradigm that linguistics can be ascribed a foundational role among the cognitive sciences. Don't you think that this claim is a bit overstated?

CHOMSKY: I'm not familiar with such claims, but I see no merit to them. As far as I understand these matters, the language faculty seems to have quite different properties from other cognitive systems, and thus plays no foundational role (whatever exactly that might mean). It is possible that some other systems derive in some fashion from the language faculty. It's not implausible to speculate, for example, that the innate basis for mathematical understanding is a kind of abstraction from properties of the language faculty; that might explain the fact that humans have an innate capacity for mathematics that has surely not been specifically selected. The same might be suggested for some aspects of musical ability. It has also been suggested that properties of language derive in some fundamental way from properties of the visual system. Again, these are questions about which little is known; the chips fall where they may.

QUESTION: What are your views on the sociobiological research program? Do you think that there are any connections with the basic assumptions of generative linguistics?

CHOMSKY: Sociobiology is reasonable enough as a research program. It has substantial results for simpler organisms, but little to say about humans, to my knowledge, beyond speculations of various kinds, the earliest in Kropotkin's work on mutual aid as a factor in evolution. The research program is concerned with biologically-determined elements of human behavior and understanding, and in this respect, has a similarity to the study of language -- or, for that matter, any biological system. Beyond that, I see no clear connections.

QUESTION: Donald Davidson's view of the Private Language Problem seems to come close to what you said about this problem in Knowledge of Language. Do you see any correspondence between Davidson's theory of language and your theory of linguistic knowledge, or is there any other contemporary philosopher of language you would consider to advocate a related notion of linguistic knowledge?

CHOMSKY: Davidson observes correctly that in an ordinary communication situation between (say) Smith and Jones, each will use any means to determine the intentions of the other. Thus in his terms, Smith will construct a passing theory to interpret what Jones is saying, employing any evidence available. From that correct observation he concludes that there is no such thing as language. This conclusion has two aspects. First: there is no need to postulate a common language shared by Jones and Smith to account for their communication. Second: there is no portable interpreting machine set to grind out the meaning of an arbitrary utterance -- that is, no I-language.

Three comments are in order. First, his argument is invalid in both cases. From the fact that Smith constructs a passing theory, nothing follows about the basis on which he does so. It would be like concluding from the chaotic properties of weather systems that there is no jetstream. In particular, nothing follows about common language or I-language.

Second, despite the invalidity of the argument, Davidson's first conclusion is correct, though understated. Not only is there no need to postulate a common language, but there is no intelligible notion of common language to postulate. That is a truism that has always been assumed, virtually without comment, in the empirical study of language, which has no place for such notions as Chinese or German, as is well known.

Third, Davidson's conclusion about a portable interpreting machine (an I-language) is incorrect, as far as we know; he suggests no reason to assume otherwise.

There are philosophers whose concept of linguistic knowledge is close to my own: James Higginbotham, Sylvain Bromberger, Julius Moravcsik, Akeel Bilgrami, Jerry Fodor, and others in varying degrees and ways, though there are also differences among us. I do, however, feel that the leading currents in philosophy of language and mind are seriously in error, for reasons I've outlined elsewhere.

QUESTION: In your political work you studied the structure of power and the manufacturing of consent. Do you think the conclusions that you reached with respect to politics are applicable to the development of scientific paradigms and the dynamics established in academic fields? For example, in a recent book by Randy Harris, The Linguistic Wars, a picture is put forth that suggests the existence of analogous structures in the linguistic debates.

CHOMSKY: Studies of human interactions in social and political systems reveal factors that surely enter into scientific work as well. Doubtless a close look at the world of scholarship and science will reveal all sorts of conniving, malice, pursuit of self-interest, attempts to establish a guild structure that protects interests and power, and so on. James Watson depicts his work with Francis Crick in such terms, and cocoon-like protective structures are quite common, and highly destructive, in the humanities and social sciences, surely. The history of modern linguistics reflects such factors. I think they are largely responsible for the fact that generative grammar has found its academic home largely outside institutions in which the humanities faculty wields extensive influence. Within generative grammar, fortunately, there has not been too much of this, at least in the parts I'm familiar with.

I've experienced a good deal of this personally, for idiosyncratic reasons. I actually have no serious professional qualifications in any field that was identifiable 40 years ago -- which is why I am teaching at MIT, a scientific university, where no one cared much about credentials. I'm largely self-taught (including linguistics), and my work happens to have ranged fairly widely. Some years ago I did some work on mathematical theory of automata. At the time, I gave invited lectures in mathematics and engineering departments at major universities. No one believed that I was an accomplished professional mathematician, but no one cared either; people were interested in determining whether what I said was true or false, interesting or not, susceptible to improvement and further work or not. On the other hand, when I've worked in such areas as history of ideas or international affairs, the reaction has commonly been quite different, ranging from near-hysteria of an often comical variety to fury that I should even dare to step upon this sacred turf without the proper letters after my name. I don't think it's very hard to explain the difference, which is quite striking.

Turning to Harris's book, I'm afraid it is largely fantasy, of a currently fashionable kind. Harris constructs a breathless account of a great war, in which I'm supposed to have led one of the contending armies, fighting a great battle to maintain my iron control of the discipline -- a construction based on the needs of the text, if I have the terminology straight. Unfortunately for the story, I had little interest in the war and took no part in it. At the time (late '60s and early '70s), I was much engaged in a war, but it was a different and rather more significant one; the games he is describing struck me as largely childish, and I kept to what seemed to me more serious pursuits. In order to sustain his story line, Harris claims that the papers I wrote were my effort to destroy the new heresy. One paper was indeed in part a response to criticisms from generative semantics; it was based on a talk at a 1969 conference in which the organizer, Stanley Peters, virtually pleaded with me to respond to such criticisms, which I had previously ignored; having little time, I flew to the conference, gave the paper, and went on to other and more pressing demands. The other papers that Harris mentions had quite different sources and motivations, as is perfectly clear from internal evidence. As for my efforts to destroy the heresy, the story also requires omission of the fact that every syntax appointment in my own department in those years happened to be a generative semanticist, that we made serious efforts to induce them to stay on, and that I also went well out of my way to help other young generative semanticists to establish their own departments. But in postmodern fairy tales, facts are an irrelevance.

Attempts to provide psychosocial Foucaultian accounts of what happens in science may or may not have some interest (in my opinion, they are of little interest), but they have to be done seriously and accurately. Otherwise we have something on the level of gossip columns.

QUESTION: What do you think about the recent agreements between Israel and the PLO?

CHOMSKY: This is a long story, which I've written about extensively elsewhere. To understand the agreements, one has to be clear about the course of Middle East diplomacy since the 1967 war. Two major issues have held up a settlement: territorial arrangements and Palestinian rights. There has been general agreement since late 1967 that a settlement of the Israel-Palestine issue should be based on UN 242, which calls for a full peace settlement among states, saying nothing about Palestinian rights, and calls for Israeli withdrawal from territories occupied in the war. The international consensus at the time was that withdrawal meant complete withdrawal to the pre-war borders, apart from minor and mutual adjustments. In particular, that was the explicit stand of the United States, as the documentary record shows very clearly. At the time, the Arab states rejected full peace and Israel rejected full withdrawal. Thus an impasse.

Matters changed in February 1971, when President Sadat of Egypt endorsed the proposal of UN negotiator Gunnar Jarring for a full peace settlement with Israel in these terms (with nothing for the Palestinians). Israel recognized this as a genuine peace offer, but rejected it, stating that it would not withdraw to the pre-war borders. The US then had to make a decision: would it keep to its earlier policy, thus accepting Sadat's offer, or would it abandon its earlier policy and adopt Israel's refusal to withdraw? There was an internal bureaucratic struggle, in which Kissinger prevailed. In accord with his position, the US insisted upon stalemate, meaning no diplomacy. The Jarring-Sadat initiative collapsed, and it has been largely erased from history, being inconsistent with the preferred image of Washington the peacemaker. In the real world, since 1971 the US and Israel have opposed the international consensus on withdrawal, and still do.

The matter of Palestinian national rights came to a head in January 1976, when the UN Security Council debated a resolution calling for a settlement incorporating the provisions of UN 242, but adding the call for a Palestinian state in the West Bank and Gaza. The resolution was backed by virtually the entire world, including the Soviet bloc, the PLO, and the major Arab states (including Egypt, Jordan, Syria). It was opposed by Israel and vetoed by the US. Since then the US and Israel have led the rejectionist camp, refusing to accept the right of national self-determination of one of the two contending parties: the indigenous population of the former Palestine. The US-Israeli position is thus a counterpart to that of the fringe rejection front in the Arab world, although matters are not described in these accurate but politically incorrect terms, for obvious reasons. The events just described have also been essentially excised from history, even in most of scholarship.

Given its international isolation on these issues, the US has been compelled throughout this period to veto Security Council resolutions, vote alone with Israel (and occasionally one or another client state) against General Assembly resolutions, and block diplomatic initiatives from all sides. That continued until the Gulf War, which established that "What We Say Goes", as President Bush defined the New World Order while the bombs and missiles were falling. Specifically, the war in effect extended the Monroe Doctrine to the Middle East, establishing unchallenged US control over the region and putting Washington in a position to implement its own rejectionist solution. By then, the PLO was a declining force, increasingly unpopular in the territories. By 1993, Israel had come to recognize that Arafat would be more capitulationist than the Palestinian negotiators, and therefore decided to make an agreement with him directly; presumably Arafat agreed on the assumption that he could salvage some personal authority and power in no other way. The Oslo agreements basically affirm US-Israeli rejectionism. The long-term solution is to be based on UN 242 alone, with no recognition of Palestinian national rights. There is to be no general Israeli withdrawal. Rather, with US financial aid and diplomatic support, Israel will continue to establish facts in the territories according to long-standing plans to take over the resources and usable land, but without responsibility for the population; there have been a series of such Israeli plans, from the Allon Plan of 1968 to the Sharon plan and others of the last few years. Settlement and huge infrastructure projects continue in the occupied territories after the Oslo agreements, always with US support. The clear intention is to divide the West Bank into two cantons, separated by the expanding region of greater Jerusalem, which extends to a few miles from Jericho. 
Israel has also made it clear that it intends to keep the Jordan valley and the more valuable parts of the Gaza Strip (Gush Katif). The Palestinian cantons are to be absorbed within the Israeli economy, along lines spelled out in the May 1994 Cairo agreements.

For the Palestinians, the accords are an almost complete surrender, showing again that those who have the guns usually get their way. It could be argued that this is the most they can obtain, under prevailing conditions of external power. Perhaps so, but that is a different matter.

It should be stressed that these matters are presented in an entirely different light by intellectual opinion, which has its own tasks and commitments. But that's the way the facts look to me. As noted, I have more detailed discussion elsewhere.

QUESTION: Thank you very much for this interview, Professor Chomsky.