SN: Representations for humans assume the presence of an extremely powerful analysis system and a huge amount of background knowledge. One has to specify things at a much finer grain of description for machines than for humans even if the purposes of the two descriptions are compatible.
YW: This is an excellent question and not as much discussed as it should be. The difference between machine and human handling of representations used to be called the Gensym issue: a machine can handle English expressed as arbitrary Gensyms substituted for words, but a human native speaker cannot without vast retraining, if then. We can both accept that humans need comprehensible representations while those same representations have no intrinsic meaning for machines, and yet use that fact to argue opposite views of NLs and RLs. Your observation proves to me that, for that very reason, RLs must be accessible to humans (as well as machines) and THEREFORE must be NL-like in certain respects.
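[To make the Gensym point concrete, here is a minimal Python sketch; the toy corpus, the G0000-style symbol names, and the co-occurrence task are all invented for illustration. A procedure that treats tokens as opaque symbols yields structurally identical results whether it is given English words or arbitrary gensyms; only the human reader loses anything in the substitution.]

```python
import itertools

# A machine-side procedure: count word co-occurrences within sentences.
# It treats tokens as opaque symbols, so it behaves the same whether the
# tokens are English words or arbitrary gensyms.
def cooccurrence(sentences):
    counts = {}
    for sent in sentences:
        for a, b in itertools.combinations(sorted(set(sent)), 2):
            counts[(a, b)] = counts.get((a, b), 0) + 1
    return counts

# Replace every distinct word with a generated symbol (a "gensym").
def gensymize(sentences):
    table = {}
    counter = itertools.count()
    def sym(word):
        if word not in table:
            table[word] = f"G{next(counter):04d}"
        return table[word]
    return [[sym(w) for w in sent] for sent in sentences]

english = [["the", "cat", "sat"], ["the", "dog", "sat"]]
gensyms = gensymize(english)  # e.g. [['G0000', 'G0001', 'G0002'], ...]

# The two result tables are isomorphic under the renaming: the machine
# loses nothing, while the gensym version is unreadable to a person.
print(cooccurrence(english))
print(cooccurrence(gensyms))
```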
SN: This is a weaker form of your original argument about NLs as RLs, and I would fully agree with the premise: just as computer programs are written in part to be read by people (an estimated 80% of software engineers' time is spent on maintenance, that is, reading and improving other people's code), so should the knowledge structures in an RL be. That the names of atoms in, say, Mikrokosmos are English words or phrases is due exactly to this fact. It is, on my view, a conceptual fallacy to read more into this state of affairs: for instance, to claim that there is an intrinsic necessity for RL elements to be also elements of an NL.
YW: Charniak's final argument against connectionism was that you couldn't understand the structures such systems acquired; they were therefore not acceptable representations, regardless of whether machines could use them or not. How much of our underlying disagreement is over whether representational structures must be comprehensible?
SN: Well, to comprehend anything non-trivial, one must learn. One can, in fact, learn to read meaning representations; this has been demonstrated in practice. Of course, it is very desirable to avoid having people read unadorned RL structures, but this might be a premature hope.
YW: Would we be helped by thinking about how actual coders use RLs? An example that has interested me is how some Japanese researchers use interlinguas with English symbols, for, say, MT or in the Tokyo EDR dictionary project. It has been argued that this may be an advantage for them because, in many cases, they do not see more than the main sense of any primitive, which makes its use easier and less confusing than for a native speaker of the interlingua, if you will allow that term. The question might then be: does that fact have any analogy with how you see a machine handling a representation? The difference between human and machine handling of representations is, I think, crucial for your position, though not for mine.
SN: The analogy with understanding by machines is clear: they operate with fewer word and phrase senses (to say nothing of connotations) than people do. However, I do not see what bearing this observation has on the differences between RLs and NLs. If the Japanese researchers you mention do not know English well, this does not impinge in any way on the issue of whether RLs for computers should be good or bad English or any other NL, or an artificial language (with either narrow or broad coverage of the meanings in an NL).
YW: Maybe when we model understanding we aim at too high a target: in ordinary situations people may understand just a fraction of what a speaker says, and they ask clarification questions only when it matters. In reality, there are few penalties for failures such as miscommunication or misunderstanding: contrast medical counselling dialogue, legal searches, patents, and philosophical discussions, in all of which misunderstanding is thought disastrous and may carry real costs.
I have a feeling we may have swapped sides here a bit. Part of our difference may arise from what one could call my Wittgensteinian prejudices [28], from bad early training perhaps, which cause me to think language central and irreplaceable in thought and representations, so that there will never be any alternative to doing what we do now, whatever happens to AI or computational linguistics, because we are self-defined by language and we cannot expunge it from representations.
SN: If the issue here is that, however people may try, they will not be able to produce RLs which are not ambiguous, then it is, or will be, a verifiable matter. Possibly progress will be asymptotic. But it is surely not plausible that people are somehow constitutionally unable to come up with an unambiguous RL, not because of the size and complexity of the problem (which can be ameliorated through tools, partial automation, etc.) but by definition! We would need a much more detailed discussion of the influence of the fallibility of human acquirers on the nature of the RL: Sapir-Whorfian influences of native tongues, and difficulties with listing all the senses of a lexeme or all the synonyms of a word, as opposed to the human faculty of judging whether any two words are synonymous.
YW: Ah, so at the end we really do differ. I think it is beyond human ability to design an RL without the features such languages now have, and for the reason we touched on: they must remain comprehensible to us, and if they do they will be like NL, where I quite accept that ``like'' inevitably remains a bit fuzzy. It is as if comprehensibility will carry a price: the loss of control of the sort you think we can retain.
SN: But it is exactly your understanding of the meaning of ``like'' which is the crux of the matter here! As long as it remains fuzzy, one cannot very well argue about it. Further, I assume that by ``control'' you mean whether people can be deliberately taught to produce representations that, as they or their project managers believe, would be processable by machines. As I understand it, you think this implausible because vestiges of human language will remain and corrupt the RL representations. I think it necessary, unless we can teach machines to reason with knowledge bases which are inconsistent or ambiguous. Mind you, I have no illusions about the practical attainability of knowledge bases which are fully consistent and unambiguous. The methodological choice is to carry on pretending that they are until special mechanisms are developed for dealing with such inconsistencies and ambiguities.
YW: Our misunderstandings persist to the end: vestiges of NL in an RL do not mean, for me, that a machine cannot ``understand'' it in a particular application. Of course they do not: that happens all the time in millions of working programs.
SN: Right, so long as we agree on what constitutes understanding; but that would need another conversation!