That language and cognition are closely related in humans is undisputed, but in machines the connection is more problematic and more disputed. A dialogue is a form that can sharpen difficult questions, or at least that is our hope, but it is not one that lends itself to historical or scholarly introductions. It should nevertheless be said at the outset that the central question in what follows, the nature, origin, acquisition and use of the symbols in AI representations, is one that has been discussed in print many times, by AI researchers among others, as well as by many spectators of AI in philosophy and the other cognitive sciences. After the discussion created by some of these papers, the issues raised disappear again, unresolved, perhaps because they are unresolvable. This paper starts from common ground between two opposed, and apparently irreconcilable, positions, and then attempts to refine the differences between them piecemeal: these positions are that the symbols we use in AI representations are (or, conversely, are utterly different from) the English words they plainly resemble. Much AI research is comprehensible only on the negative assumption (the parenthesized one above), yet an important question, rarely asked, is whether it makes any real difference to the course of, or the results from, AI research which of these positions is true.
This paper, as will become clear, is in a very conservative tradition: core representationalist AI. It retains Newell's assumptions about semantics as information processing as much as it does McCarthy's vision of AI as the method of heuristics, as opposed to continuing attempts to make the properties of representational structures provable in some strong sense. Indeed, some version of this dialogue gets published as a paper every five years or so, as with the papers mentioned above, and readers will have to judge for themselves whether any progress is being made. We want, above all, to clarify the questions and to seek clear differences among the answers.
Within the representationalist camp, we wish to separate ourselves from those adherents of the formalist movement, whether within linguistics or mainstream AI, who believe that the solution to whatever problem there is here is to continue to seek formalisms with a logical semantics. We have discussed elsewhere the claims of the formal approach to natural language processing (NLP) and will not repeat that discussion here. In a nutshell, our view is that there is no reason to believe that systems for which notions like deductive closure are important bear any demonstrable relationship to NLP, either as an empirical, engineering task or as a model of human processing.
The central issues for us are: first, whether or not one believes the symbols in representations (whether of language itself or of some other part of the world) are fundamentally language-like in nature; and, second, whether or not the answer to this question affects our expectations concerning the development of large-scale application systems and the largely automatic acquisition of representational resources (lexicons, knowledge bases, etc.).
If there were no relationship between our enquiry here into the nature of symbols and the processes within which we intend (as AI researchers) to use them, then our enterprise would be purely philosophical. The second issue above is currently of great practical importance in NLP, but we will argue here that the first, apparently more philosophical, question may also influence the outcome of any research program.
Our discussion will be organized round the following five questions: