YW: How can anything that is a language be other than extensible? If that is obvious, one can then ask how such extended information about a language can be acquired. This could be seen as a traditional Chomskian question about language and the child's learning of L1, its first language, but we intend it in the more accessible sense of an enquiry about how a computer can come to acquire new information about language, and whether that could ever be equated with the mastery of a merely finite, static resource.
SN: Of course, language is extensible. However, any sublanguage used in an application has, up till now, been finite and static. In AI applications, acquisition of knowledge typically precedes its use. When a new word must be entered in the lexicon of an MT system, this has been done by people or, at least, sanctioned by people. One can argue that the associated representation language was static and was applied as such to any new input text, until the need for further extensions arose.
YW: Another of our key questions here is whether this feature of language is universal and, if so, whether it must also be possessed by RLs.
SN: It is well known that people have difficulty recognizing ambiguities; they immediately choose the contextually appropriate sense for each word or phrase. This seems to suggest that, if indeed meanings are represented, the elements of the representation are not ambiguous, as the operation of retrieving the other senses of an input language element is so expensive.
YW: Ah, yes, this is Wittgenstein's famous point that ``the senses of a word do not pass in front of my mind''. But your point does not, to me, prove anything about the nature of the representation: it is only a point about our lack of ACCESS to our processes. And in any case I am not claiming that representations are ambiguous: only that the items in them can be ambiguous (out of context, presumably) in just the way NL items can. A difference of emphasis between us reflects our intellectual upbringings: you, I fear, focus on the whole representation (in RL), I on the RL's constituents!
Do we therefore need to discuss the issue of what it is to know, or assess, objectively in some sense, that a symbol in a representational system/language is ambiguous (within or out of context)? It is clear from the variation of lexicographic intuitions (10 senses for a given word versus 2, in different dictionaries) that mere intuition is not enough. Remember, too, Wierzbicka's argument that polysemy is mostly an illusion.
SN: Surely, lexicographic intuitions are about NL, not necessarily RL. That lexicographers disagree may simply mean that there does not exist some ``correct'' number of senses. I intuitively dislike the suggestion (that there is such a number), but maybe in some system-operational approach, one could define word senses cross-linguistically. This latter point connects with the idea of using an almost Hjelmslevian view of the semes across languages as an impetus for humans to select senses for representation even in an internalized RL.
YW: Yes, the translation-as-representation case, between NLs, has had a new lease of life recently, hasn't it? And it is a strange reprise of the Fodorian comedy of the LOT as the translation one cannot get at. I used to suffer the temptation at meetings to ask Fodor how he KNEW the LOT wasn't, say, Latin, but I fortunately never gave way to it, since I know he doesn't know.
More seriously, and given that LISP was considered almost a Language-of-Thought by AIers in the seventies: consider NIL in LISP, now usually thought of as 3-ways ambiguous (an empty list, an atom, and a Boolean value). Was there an objective test of that? Did it matter until it was noticed, in terms of the usefulness or otherwise of LISP? Was there a formal criterion for spotting it: i.e. is ``giving a formal semantics of a representation'' a revelatory mechanism for exposing ``ambiguity''? I suspect not.
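NIL's three roles can be sketched in a few lines. Since the point is language-independent, here is a minimal illustration in Python rather than LISP; the names `NIL`, `lookup`, and `table` are invented for the example, with an empty tuple standing in for NIL (like NIL, it doubles as an empty sequence and as Boolean falsehood):

```python
# Illustrative sketch (in Python, with invented names) of LISP's NIL
# conflating three roles: the empty list, a distinguished atom, and
# Boolean falsehood.
NIL = ()  # stand-in: an empty sequence that is also "false"

def lookup(key, alist):
    """Return the value stored under `key`, or NIL if `key` is absent."""
    for k, v in alist:
        if k == key:
            return v
    return NIL

table = [("a", 1), ("b", NIL)]  # "b" is genuinely bound to NIL

# The benign-until-noticed ambiguity: a key bound to NIL is
# indistinguishable from a key that is not bound at all.
absent_vs_bound = lookup("b", table) == lookup("c", table)  # True
```

This is exactly the sense in which the ambiguity went unnoticed without harming LISP's usefulness: most programs never need to distinguish the two cases, and no formal semantics was required to expose the conflation once someone tripped over it.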
SN: The fact that a lexical ambiguity in a representation language can be contextually ``benign'' does not necessarily prove that ambiguity can be introduced with similar impunity into RLs designed for the purpose of representing meanings of texts.
YW: Agreed, and as we know, NLs, unlike RLs, can be metalanguages for themselves, and this is probably a point on your side, showing a clear NL/RL difference. Though I still do not need to concede what you insist on: that we can allow or prevent ambiguity in RLs. These matters are under no one's control: in RLs like CYC no one was able to control the coders' use of the predicates effectively. There is no RL/NL distinction there, where you seem to want one for RL coding, and this, for me, rebuts your earlier claim that applications are static and finite.
The case of corpus statistics may be interesting here because its users (e.g. ) generally have no use for terms like ``word sense'' which they find unbearably intuitive; for them, symbols simply occur in environments which may or may not be usefully separable into classes of occurrence.
I am not sure there is any objective demonstration of the ambiguity of a symbol; would that not require showing the Reality of Word Senses? I have always used the Schvaneveldt Pathfinder nets as a justification; they can show ``bank'' having separable subgraphs with an algorithm that requires no seeding or stimulation to do that. The other well-known statistical methods usually do not show ambiguity unless you assume it to start with.
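The intuition behind those separable subgraphs can be sketched without the Pathfinder algorithm itself. Below is a toy Python illustration (the mini-contexts and all names are invented for the example, and no claim is made that this is Schvaneveldt's method): the co-occurrence neighbours of ``bank'' fall into two connected components once ``bank'' itself is deleted from the graph, with no sense inventory assumed in advance:

```python
# Toy sketch (NOT the Pathfinder algorithm): build a co-occurrence graph
# around "bank" from invented mini-contexts, delete "bank", and see its
# neighbours separate into distinct connected components -- one per
# putative sense -- without any seeding of senses.
from collections import defaultdict
from itertools import combinations

contexts = [
    ["bank", "money", "loan"],
    ["bank", "loan", "interest"],
    ["money", "interest"],
    ["bank", "river", "water"],
    ["bank", "water", "fishing"],
    ["river", "fishing"],
]

# Undirected co-occurrence graph: an edge for each pair in a context.
graph = defaultdict(set)
for ctx in contexts:
    for a, b in combinations(ctx, 2):
        graph[a].add(b)
        graph[b].add(a)

def components_without(word):
    """Connected components of the graph after deleting `word`."""
    nodes = set(graph) - {word}
    seen, comps = set(), []
    for start in nodes:
        if start in seen:
            continue
        comp, stack = set(), [start]
        while stack:
            n = stack.pop()
            if n in seen:
                continue
            seen.add(n)
            comp.add(n)
            stack.extend(graph[n] - {word} - seen)
        comps.append(comp)
    return comps

comps = components_without("bank")
# Two clusters emerge: {money, loan, interest} and {river, water, fishing}.
```

The point of the sketch is that the two ``senses'' of ``bank'' are an output of the graph structure, not an input: nothing in the procedure mentions word senses at all.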
SN: Your position here is similar to that of the ``lexical-rule''-oriented lexical semanticists (e.g. ), who prefer to propose few word senses (usually, one) for recording in the dictionary, and then to add rules for accommodating meanings that do not directly conform to statically defined constraints. This single word sense is, indeed, ambiguous. Unfortunately, the generative lexicon approach does not discuss the vocabulary of the representation language in any detail.
YW: No, I am not assuming single senses for words, nor lexical rules for creating dictionaries. Look at this a slightly different way: the relation between an expression in NL and in its corresponding RL may be either a relationship like that between a language and its metalanguage or one of (presumably mutual) translation. If the former case holds, then the languages need not really differ in type; they simply have an asymmetric relationship and might differ in expressiveness, but, as is well known, a meta-RL is as much in need of its meta-language as the object-NL. There is an agreement in the formal world to stop worrying about this, and probably rightly, but, if the relationship is of that sort, there is no reason to believe the two levels differ over, say, polysemousness or extensibility of meanings.
Alternatively, if the relationship is one of translation, then, almost by definition, TRANSLATE (X, Y) if X and Y are both symbolic, requires that X and Y be of the same TYPE, that is, both are NL-like, in this case!
SN: Of course, we cannot tolerate an infinite regression of metalanguages. The relation between NL and RL is, to me, asymmetrical, though there will be both many-to-one relations between elements of NL and RL (e.g., synonymy) and one-to-many ones (most notably, polysemy). Internal consistency is achieved for RL through maintaining the complex cross-relationships in an ontology (the RL vocabulary). The issue of meaning grounding is more difficult, and we might want to state, cautiously, that it is achieved through the multiple connections of elements of an RL with multiple NLs, through human judgement of the quality of translation correspondence.
Your argument about the relation of translation hinges centrally on how one defines TYPE. It may be that we do not disagree, but you elect to stress similarities between NL and RL while I persist in looking for differences. Let them be of the same type, but RLs must support machine inferencing while this cannot be asked of NLs. The case of the Dutch company BSO working with Esperanto as its interlingua for MT clearly showed how much a human-oriented (though invented) language had to be modified in order to serve as a kind of RL. Even the developers themselves, Esperanto enthusiasts all, had to give the new language a different name: BCE, or ``binary-coded Esperanto.''