The Summary section of Chapter 5 is as follows.
In this chapter, I have discussed how our Experiments are designed and conducted and how the Experimental results are interpreted, in accordance with the proposed methodology for language faculty science. The general design of our Experiment is that it consists of a set of three Example sentences: one *Example instantiating the *Schema and one okExample instantiating the okSchema, both with the dependency interpretation under discussion, and another okExample, which is (as) identical (as possible) to the *Example but without involving the dependency interpretation. That our Experiment consists of Examples instantiating Schemata is in accordance with the fundamental schematic asymmetry recognized in Chapter 2, in pursuit of rigorous testability in a research program that aims at discovering properties of the universal aspect of the language faculty by dealing with judgments of speakers of a particular language.

We check the acceptability of Example sentences with a dependency interpretation that we hypothesize to be crucially based on an LF c-command relation, so as to maximize the testability of our hypotheses stated within the general conception of the Computational System of the language faculty suggested in Chomsky 1993. The fact that our general Experimental design allows us to check the effects of variables along two distinct dimensions is in harmony with the view that the hypothesized LF relation/object FD, which is hypothesized to underlie the dependency interpretation of BVA(A, B) with particular choices of A and B, is constrained by a structural condition as well as a lexical condition. In language faculty science as proposed here, we work with confirmed predicted schematic asymmetries, instead of statistically significant contrasts. A confirmed predicted schematic asymmetry is about an individual informant.
We thus focus on the %(Y) on Schema B (and to a lesser degree, on that on Schema A) and %(I), instead of the "average" responses among a group of informants. In (52), I provide a review of what is meant by "%(Y) on a Schema," "%(I)," and "N(I)."
(52) a. The "%(Y)" on a Schema stands for the percentage of Yes answers (i.e., the reported judgment that the example in question is acceptable at least to some extent (with the intended BVA in the case of Schema A and Schema B in [31]-4, for example)) among all the answers/judgments given on the Examples instantiating the Schema in question.
b. The "%(I)" stands for the percentage of the informants who gave a Yes answer on at least one *Example in a given Experiment.
c. The "N(I)" is the number of the informants who have provided answers on the Examples being considered.
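The quantities defined in (52) are straightforward to compute. The following is a minimal illustrative sketch, not part of the original text; the data structures, function names, and toy judgments are all hypothetical:

```python
# Hypothetical sketch of the quantities in (52). All names and data are
# illustrative assumptions, not the book's own notation or results.

def percent_yes(responses):
    """%(Y): percentage of Yes answers among all answers on a Schema's Examples."""
    if not responses:
        return None
    return 100.0 * sum(1 for r in responses if r == "Yes") / len(responses)

def percent_I(informant_answers):
    """%(I): percentage of informants who gave a Yes on at least one *Example.

    informant_answers maps each informant to their answers on the *Examples;
    N(I) is simply the number of informants who provided answers.
    """
    n_I = len(informant_answers)  # N(I)
    if n_I == 0:
        return None
    with_yes = sum(1 for answers in informant_answers.values()
                   if any(r == "Yes" for r in answers))
    return 100.0 * with_yes / n_I

# Toy data: two informants' judgments on three *Examples (Schema B).
star_answers = {
    "informant1": ["No", "No", "No"],
    "informant2": ["No", "Yes", "No"],  # one Yes on a *Example
}
schema_b_all = [r for answers in star_answers.values() for r in answers]
print(percent_yes(schema_b_all))  # 1 Yes out of 6 answers, i.e. about 16.7
print(percent_I(star_answers))    # 50.0: one of the two informants gave a Yes
```

On this toy data the %(Y) on Schema B is non-zero, so the categorical prediction discussed below would be disconfirmed for informant2 but not for informant1.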
Our prediction is that the %(Y) on Schema B is 0 and those on Schema A (and on Schema C) are not 0 in our Main-Experiment, as long as the informant clearly understands what is intended by our instructions (including the intended dependency interpretation) and the Sub-Hypotheses in the Main-Experiment are valid for that informant. It follows that the predicted %(I) in a multiple-informant Experiment is also 0 in our Main-Experiment as long as we focus on such informants as classified in its Sub-Experiments, provided that everything else about our Experiments is done properly and correctly and that our hypotheses are all valid.

As discussed in section 4.4, a multiple-informant experiment is a collection of single-informant experiments. Its purpose is to see whether the result of a single-researcher-informant experiment is replicated, with regard to whether we obtain a confirmed predicted schematic asymmetry, rather than to see whether there is a significant difference between the average responses of a group of informants on the *Examples and the okExamples. In Chapter 7, I will present experimental results―both in English and in Japanese―showing that the significance of a contrast among a group of informants that falls (far) short of a confirmed predicted schematic asymmetry can only be assessed in light of the results of Sub-Experiments. The discussion there includes a demonstration that two seemingly identical Experimental results exhibiting a contrast can turn out to have radically different interpretations once we consider the results of Sub-Experiments and the informant classification they lead us to. In the next two chapters, I will illustrate the proposed methodology for language faculty science on the basis of actual experiments. The general experimental design and the specific aspects of actual Experiments still need improvement.
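The categorical (0 vs. non-0) character of the prediction, as opposed to a statistical contrast between group averages, can be made concrete with a small sketch. This is a hypothetical illustration, not the book's formal definition; the function and the schema labels are assumptions:

```python
# Hypothetical sketch: the prediction for a single informant is categorical.
# It is confirmed only if %(Y) on Schema B is exactly 0 while %(Y) on
# Schema A (and Schema C) is non-zero; no significance test is involved.

def asymmetry_confirmed(yes_pct_by_schema):
    """True iff the predicted schematic asymmetry holds for one informant."""
    return (yes_pct_by_schema["B"] == 0
            and yes_pct_by_schema["A"] > 0
            and yes_pct_by_schema["C"] > 0)

print(asymmetry_confirmed({"A": 100.0, "B": 0.0, "C": 66.7}))   # True
print(asymmetry_confirmed({"A": 100.0, "B": 33.3, "C": 66.7}))  # False
```

Note that even a single Yes answer on a *Example (a non-zero %(Y) on Schema B) disconfirms the prediction for that informant, which is why the methodology checks individual informants rather than averaging over a group.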
The fact that we have been able to obtain experimental results that are quite close to our definite and categorical predictions, however, provides support for the viability of language faculty science as proposed here.