YW: Acquisition in our sense is linked to the necessity or otherwise of symbol ambiguity, because much acquisition (especially automatic acquisition, i.e. machine learning) is of new ambiguities or senses of symbols.
SN: The extent to which automatic acquisition of content is possible may indeed be a major practical undercurrent of this paper. A question for you: does explaining the meaning of an ambiguous symbol in terms of another ambiguous symbol actually constitute disambiguation?
YW: This is a practical question, too, of course. We are seeking, in our everyday research and outside dialogues like this, practical, robust NL processors, not necessarily wedded to one particular theory, but ones that tackle areas of NL and KR representations. I am, in a sense, rather neutral about particular representations but strong on assessment and large systems and data. On your question: again, I accept that an (ambiguous) symbol can be defined, more or less, by a string that is not, as a whole, ambiguous.
An assumption about communication behind all this is that the familiar trivial diagram of humans communicating their separate representations (in head balloons) via the very narrow linear language stream from their mouths is wrong in one crucial respect. It is normally shown with the SAME structure in the two heads. But there is no reason at all to believe that human communication requires identical logics, lexicons, grammars, parsers etc. in both heads, any more than it does identical beliefs.
I suggest the most striking feature of communication is that humans who differ about these structures can communicate, just as can individuals with different dialects, or those writing to others at later historical periods.
SN: Yes, there should be no presupposition of similarity between the knowledge and processing resources of various people, modulo the hardware (wetware?) and possibly some other, perhaps genetic, constraints. The difference is clear in the case of conversations between people who are native speakers of different languages, who belong to different professional and social strata, who differ in age, etc. It is indeed amazing how adaptable people are when viewed as information processors. At the same time, on the surface, all this shows is that there may be as many ``proprietary'' devices for processing language as there are people.
YW: The commonsense fact is that communication can take place within a bandwidth of difference, and human-computer communication in a way explores the limits of this bandwidth and how far it can be extended in special cases by tuning lexicon structures and beliefs to each other in the course of communication itself. But this issue cannot be separated from the problem of language representation itself, for we cannot understand the nature of the representation of meaning in lexicons, say, unless we can see how to extend lexicons in the presence of incoming data that does not fit the lexicon we started with. Extension of representation is part of an adequate theory of representation.
SN: I think I understand your intended meaning: first, no set of static knowledge sources will have complete coverage; therefore, representations need to be extensible; therefore, there must be a mechanism for adding elements to representations, preferably on the fly.
Further, many such representation elements are lexical. And the easiest way of naming these new elements would be through the natural language strings that refer to them in the input and that triggered the representation augmentation process in the first place.
This, of course, presupposes automatic acquisition, because if a human is involved in acquisition, other naming conventions could become quite palatable. In short, the argument for allowing natural language into a representation thus also becomes practical: we need it because otherwise we will have problems naming new atoms.
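The naming strategy just described can be made concrete with a minimal sketch in Python; the class and its internals are invented here for illustration, and a lexicon is reduced to a map from surface strings to representation atoms. When the lexicon encounters a string it does not cover, it creates a new atom on the fly and names it after the triggering string itself:

```python
class Lexicon:
    """A toy lexicon: surface strings mapped to representation atoms."""

    def __init__(self, entries=None):
        self.entries = dict(entries or {})

    def lookup(self, word):
        if word not in self.entries:
            # The augmentation step: the new atom is named by the very
            # natural-language string that triggered its creation.
            self.entries[word] = f"#{word}"
        return self.entries[word]

lex = Lexicon({"bank": "#bank"})
print(lex.lookup("bank"))        # covered by the initial lexicon -> #bank
print(lex.lookup("derivative"))  # unknown string: acquired on the fly -> #derivative
```

This is, of course, exactly where the disambiguation worry re-enters: the new atom `#derivative` inherits whatever ambiguity the string "derivative" carries.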
YW: Suppose we write

    (I)  corpus + structure1 -> structure2

as a basic model of acquisition of a representational structure, be it an ontology or a lexicon, to indicate that a state of the structure itself plays a role in the acquisition, of which structure2 is then a proper extension (capturing new concepts, senses etc.). This is a different model from the wholly automatic model of lexicon acquisition in, say, TIPSTER-related work (e.g. 20), which can be written:

    (II)  corpus -> structure
This case is one which does not update or ``tune'' an existing lexicon but derives one directly and automatically from a corpus. We are arguing for the essential role of representational structure in this process, and hence for the first model, which we may also take to involve some essential human intervention as well. But whatever the case about that, we are not discussing the ab initio / tabula rasa case. Interestingly, perhaps, neither of these is an analogue to the Chomskian approach to (first) language acquisition, which might be written:

    (III)  corpus + constraints -> structure
If the constraints here are of the same format as a lexicon structure, then this third form is closer to I above, especially in Fodor's work, where the constraints become a sort of primitive ontology or lexicon.
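The contrast between the three schemata can be sketched as toy Python signatures; every name here is invented for illustration, and a ``structure'' is again reduced to a string-to-atom map:

```python
def acquire_by_extension(corpus, structure1):
    # (I) corpus + structure1 -> structure2: structure2 is a proper
    # extension of structure1, which is left untouched.
    structure2 = dict(structure1)
    for word in corpus:
        structure2.setdefault(word, f"#{word}")
    return structure2

def acquire_ab_initio(corpus):
    # (II) corpus -> structure: derived directly and automatically,
    # with no prior structure playing any role.
    return {word: f"#{word}" for word in corpus}

def acquire_with_constraints(corpus, admissible):
    # (III) corpus + constraints -> structure: the Chomskian analogue,
    # where only items licensed by the (innate) constraints enter.
    return {word: f"#{word}" for word in corpus if admissible(word)}
```

The point of the sketch is only the difference in what each function takes as input: (I) is the only one in which an existing structure shapes the result, which is why, if the constraints in (III) are themselves structured like a lexicon, (III) collapses toward (I).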
SN: This classification seems to skirt the issue of human involvement. In reality, fully automatic acquisition of lexical information does not, at this time, go anywhere near deep enough to yield material of use in solving hard problems such as full-text lexical disambiguation or even syntactic analysis. In TIPSTER, for instance, as far as I know, the automatic acquisition of subcategorization patterns for some English verbs was accompanied by massive manual acquisition. Personally, I would choose a combination of all three of the above methods of acquisition, depending on the quality of the input data and the availability of good-quality constraints and structures.