It seems that the postscript to the 1995 NELS paper will include my current assessment of my 1985 dissertation, including what concerns I had at the time of writing it and what I tried to do about them in my works between 1985 and 1995, including the 1990 manuscript. The papers included in the Ohsumi volume in fact all grew out of those concerns, which eventually led to Language Faculty Science.
I have come to realize that it is not an easy task to write a postscript just to the 1995 NELS paper, because of what is noted above. It may therefore be a while before I can post a draft of the postscript here. So, I will give a very rough outline of the points that I intend to include in the postscript.
As indicated in (some of) the earlier postings under this thread, what has driven my research since my 1985 dissertation is a pursuit of rigorous testability and reproducibility. I was concerned about how reliably my own judgments as reported in Hoji 1985 would be reproduced, not only by other speakers of Japanese but also by myself, if we took the reported generalizations as they are stated there.
Rigorous empirical research ought to make it clear exactly how we expect to obtain reproducibility of the claimed empirical generalizations. Hoji 1985, however, only presents empirical discussion of various lexical choices, reflecting how I thought I/we had obtained clear judgments with certain lexical items but not with others. I was aware that different lexical choices affect the judgments, and that is why I tried different "QPs" in Hoji 1985. But the discussion in Hoji 1985 did not clearly state what lexical items to use as A and B of BVA(A, B) (or A of DR(A, B)) to obtain a confirmed predicted schematic asymmetry (using the terminology in my CUP book, Language Faculty Science), or why that should be the case.
The exposition there expects native-speaker-of-Japanese readers to be willing to accept the validity of the claimed generalizations on the basis of the existence of a paradigm, with particular choices of lexical items, for which they have the predicted judgments, assuming that they indeed do. I would have to check Hoji 1985 to make sure that this is an accurate assessment, but that is how I recall the point at hand.
One of the central concerns in my work since 1985, up to the CUP book, has been how to identify the best choices of A and B of BVA(A, B) (and also A of DR(A, B)) in Japanese, the best choices in the sense that they lead to the clearest judgments in accordance with the claimed generalizations. (It must be understood that back in 1985 I did not have a clear understanding of the importance of the fundamental asymmetry between the *Schema-based prediction and the okSchema-based prediction, in the terms of the CUP book. So I used to put much more effort than I would now into making okExamples (more) acceptable (not only for myself but also for other informants).)
Back in 1985, and for a long time after that, I thought I was investigating properties of BVA (and DR (= a wide-scope distributive reading involving two "quantity-expressing" expressions)). While working on the 2009 manuscript and the CUP book, I came to understand that the reason I consider informant judgments about the availability of BVA(A, B) (and DR(A, B), and also the sloppy-identity reading) is that I want to find out about properties of (the Computational System of) the language faculty and to identify what can serve as the best probes for doing that. Focusing just on FD -- a hypothesized formal object at LF -- the work on BVA(A, B) and the sloppy-identity reading is for the purpose of finding out about properties of FD and hence of the Computational System. I have known since my work on the 1990 book manuscript (available at http://www.gges.xyz/hoji/download/upload.cgi) that what appears to be BVA(A, B) can arise in more than one way, and that the same is true of what appears to be the sloppy-identity reading. In order to find out about properties of the Computational System, we must therefore focus on the type of BVA(A, B) and the type of sloppy-identity reading that are crucially based on FD.
Here are a few things that are important to note, I think.
First, although the CUP book deals only with the LF c-command condition on FD (in addition to the lexical condition on FD), I have been concerned, since shortly after 1985, with an additional structural condition on FD, often associated with Principle B of the Binding Theory. For ease of exposition, let us call this additional condition on FD "the anti-locality condition."
Second, when the judgments on a certain paradigm are not as clear as we want them to be, and when they seem to be affected by certain factors we do not fully understand, we work on (slightly) different paradigms, to see if our judgments get replicated across different types of paradigms and to see if we get the sense that the judgments are converging. Well, that is what I tried in Hoji 1985 by considering DR(A, B) in addition to BVA(A, B). It is in that context that, since 1985, I have looked into the effects of (i) the LF c-command condition, (ii) the anti-locality condition, and (iii) the lexical condition on FD with regard to [BVA(A, B) and the sloppy-identity reading].
My 1990 book manuscript addressed various paradigms in this regard. The 1995 (NELS) paper focuses on the anti-locality condition on FD, in the context briefly outlined above.
When I saw Otani and Whitman's (1991) LI paper, it was immediately clear to me that the type of sloppy-identity reading they were considering need not be based on FD. Upon further consideration, I came to understand that it cannot be based on FD. (I did not have the term "FD" then, but that is not very important.) If the sloppy-identity reading is necessarily based on FD, the informant judgments should be very clear -- especially with regard to what is called in the CUP book a *Schema-based prediction -- but we got nothing even close to that in Otani and Whitman's paradigms. My "reply" to their paper did not appear until 1998, for reasons I do not get into here. I only mention here that the main point of my 1998 LI paper was to show that the sloppy-identity paradigm presented in Otani and Whitman's paper does not reflect the grammatical property it is claimed to represent, by demonstrating that what appears to be the sloppy-identity reading is available even when necessary conditions for the establishment of FD, such as the LF c-command condition or the lexical condition (see above), are not satisfied.
There is a rather interesting aspect of the linguistics field, at least the part of the field I am familiar with to some extent, namely Japanese syntax and, to a lesser degree, syntax in general. That is the absence of the understanding that the accumulation of knowledge is based on the demonstration that something is not right about our hypotheses, which is common sense in physics, as far as I understand; see Feynman's remarks quoted below (copied from General Remarks board [44446] "The theory could never be proved right," which is under [44413] "A key to language faculty science as an exact science").
"You can see, of course, that with this method we can attempt to disprove any definite theory. If we have a definite theory, a real guess, from which we can conveniently compute consequences which can be compared with experiment, then in principle we can get rid of any theory. There is always the possibility of proving any definite theory wrong; but notice that we can never prove it right. Suppose that you invent a good guess, calculate the consequences, and discover every time that the consequences you have calculated agree with experiment. The theory is then right? No, it is simply not proved wrong. In the future you could compute a wider range of consequences, there could be a wider range of experiments, and you might then discover that the thing is wrong. That is why laws like Newton's laws for the motion of planets last such a long time. He guessed the law of gravitation, calculated all kinds of consequences for the system and so on, compared them with experiment--and it took several hundred years before the slight error of the motion of Mercury was observed. During all that time the theory had not been proved wrong, and could be taken temporarily to be right. But it could never be proved right, because tomorrow's experiment might succeed in proving wrong what you thought was right. We never are definitely right, we can only be sure we are wrong. However, it is rather remarkable how we can have some ideas which will last so long." (Feynman 1965/94 (The Character of Physical Law): 151-152)
The paragraph that immediately follows this starts with:
"One of the ways of stopping science would be only to do experiments in the regions where you know the law. But experimenters search most diligently, and with the greatest efforts, in exactly those places where it seems most likely that we can prove our theories wrong. In other words we are trying to prove ourselves wrong as quickly as possible, because only in that way can we find progress."
At a more elementary or basic level, if you are interested in finding out about properties of the Computational System and use informant intuitions/judgments as evidence for or against your hypotheses, you want to focus on informant intuitions/judgments that can reasonably be understood as a reflection of properties of the Computational System. You cannot expect to make progress if you do not do that. Once we accept this, it follows that a certain set of observations (in the form of intuitions/judgments by informants, including the researcher's own) may not be something that research concerned with the Computational System of the language faculty should try to provide a formal account of.
It, however, seems to be a fairly common practice in the field to demand that there be an account of a set of observations, as long as those observations have been presented with an analysis (in published work), even when it has been demonstrated that they do not reflect a robust empirical generalization. It is because of that demand that I included a section in Hoji 1998 where I provided an account of some contrasts that cannot be considered a reflection of properties of the Computational System, at least as things were understood at that point. One may find the proposed account under discussion in Hoji 1998 less than compelling. And that is as expected; the account was not supposed to be a formal account in Hoji 1998, and it would not lead to a predicted schematic asymmetry, in the terms of my CUP book. See the postings under General Remarks board [40868] "Recent ellipsis-related discussion in Japanese" for related discussion/remarks.
It has turned out to be extremely difficult to come up with a confirmed predicted schematic asymmetry dealing with the sloppy-identity reading, as can be seen in my 2003 (Surface and Deep ...) paper (available at http://www.gges.xyz/hoji/download/upload.cgi). That is why I stopped working on the topic shortly after preparing that paper.
In light of what is briefly noted above, my papers in the late 1990s and early 2000s dealing with the sloppy-identity reading should be understood as a report on how I tried to come up with confirmed predicted schematic asymmetries dealing with the sloppy-identity reading but failed to arrive at an effective experimental design. Although I failed in that regard, I think I indicated fairly clearly what needed to be done if one wanted to pursue rigorous testability in dealing with the sloppy-identity reading. If we read Hoji 2003 (Surface and Deep ...) from the perspective of my CUP book, we will perhaps get a sense of how the author of Hoji 2003 tried his best but failed to achieve what he wanted to.
A clear understanding of my own concern about the deducibility of definite predictions about our judgments on the basis of universal and language-particular hypotheses came much later, and it came only very gradually. My 2003 (Lingua) paper is an interim report on my understanding around 2000.
A revised and expanded version of what is stated above could serve as the Preface to the Kindle edition of the Ohsumi volume.
As I noted elsewhere on some of the Discussion boards here, I plan to prepare (a) paper(s) in which I critically evaluate some of the papers included in the Ohsumi volume by following the methodology proposed in the CUP book.