To answer the question whether a prediction referring to an aggregate really follows from assumptions about individuals, we need more mathematical formalization than for the commonly practiced 'loose' derivation criticized by Meehl (1967/1970). But this doesn't mean that we must make assumptions about the precise mathematical structure of probability distribution functions characterizing individuals and aggregates. The reformulation of classical test theory in the mathematical framework of conditional expectations by Zimmerman (1975, 1976) has opened a way leading to statistical hypotheses for experiments which undoubtedly deserves to be called a 'derivation', and no assumptions about parametric families of distributions (like normal, exponential, or Gamma distributions) are necessary. It is sufficient to assume that the dispositions of individuals to produce certain values of a dependent variable can be modelled by probabilities, whose dependence upon experimental conditions will reflect the causal effects of interest.
With minor differences in notation, we follow an outline of this approach by Steyer et al. (1995, 1996), confining this outline to experiments with a real-valued dependent variable. The data of an experiment are regarded as realizations of the following process: select a 'unit' from a set D of persons or persons in situations (usually the domain of the hypothesis under study), choose a treatment from a set C of conditions, and observe the value of the dependent variable, the set of potential values of this variable being the set R of real numbers or a subset of R.
In this framework, every unit u belonging to the set D can be characterized by a map f_u: C × R → [0,1] such that the function value f_u(c,x) (with c ∈ C and x ∈ R) is the probability of obtaining a value up to (and including) x of the dependent variable, if unit u is selected and observed under condition c. Similarly, the entire process of selecting and observing a unit can be characterized by a map y: C × R → [0,1] such that y(c,x) is the probability of obtaining a value up to x of the dependent variable, if condition c is applied. In the approach based on conditional expectations, these probabilities are conditional probabilities.4 Adding the notations μ_u(c) and μ(c) for the expectations of random variables with distributions given by the probabilities f_u(c,x) resp. y(c,x),5 we can translate into this formalization an explication by Steyer et al. (1995, 1996) of two concepts going back to Neyman (1923/1990).6 In a situation with C = {a, b}, the difference μ_u(b) − μ_u(a) is the individual causal effect of condition b (vs. a) for unit u on the expectation of the dependent variable Y, whereas μ(b) − μ(a) is the average causal effect on this expectation.7 A basic result of these authors refers to these effects: If the random variables U and X indicating the selected unit resp. the experimental treatment are stochastically independent, then a hypothesis claiming a positive individual causal effect μ_u(b) − μ_u(a) > 0 for every u ∈ D implies a positive average effect μ(b) − μ(a) > 0, this implication being valid for every distribution of the random variable U.8 On the other hand, if the independence of the random variables U and X is violated, then the average effect may be negative, although the individual causal effect is positive for every u ∈ D.
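Both halves of this result can be illustrated by a minimal numerical sketch. All units, expectations, and selection probabilities below are hypothetical numbers chosen for illustration, not taken from the article: under independence the average effect is the selection-weighted sum of the (positive) individual effects, while condition-dependent selection can reverse its sign.

```python
# Sketch of average vs. individual causal effects on expectations
# (hypothetical two-unit domain D = {u1, u2}).

# Individual expectations mu_u(c) under conditions a and b:
mu = {
    "u1": {"a": 10.0, "b": 11.0},   # individual effect +1
    "u2": {"a": 20.0, "b": 21.0},   # individual effect +1
}

def average_effect(p_a, p_b):
    """Average causal effect mu(b) - mu(a) when condition c is applied
    with selection distribution p_c over the units."""
    mu_a = sum(p_a[u] * mu[u]["a"] for u in mu)
    mu_b = sum(p_b[u] * mu[u]["b"] for u in mu)
    return mu_b - mu_a

# U and X independent: the same selection distribution for both conditions.
p = {"u1": 0.5, "u2": 0.5}
print(average_effect(p, p))      # 1.0: positive, as the result guarantees

# Independence violated: condition b selects the low-scoring unit more often.
p_a = {"u1": 0.1, "u2": 0.9}
p_b = {"u1": 0.9, "u2": 0.1}
print(average_effect(p_a, p_b))  # negative despite positive individual effects
```

The second call shows the Simpson-like reversal: each unit improves under b, but b is observed mostly on the unit with the lower level of the dependent variable.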
It is almost trivial that the result for a situation with stochastically dependent variables U and X can be transferred to hypotheses expressing treatment effects in terms of medians (which will be denoted as Md_u(c) resp. Md(c) in the sequel). However, it may be considered non-trivial that a situation with Md_u(b) > Md_u(a) for every u ∈ D and Md(b) < Md(a) may also occur with independent random variables U and X. This fact will be shown in Section 2.
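A numerical sketch can make this plausible in advance. The two-unit domain below is a hypothetical construction for illustration only (the systematic demonstration follows in Section 2): each unit's median increases from condition a to b, the selection distribution is the same under both conditions, and yet the median of the aggregate decreases.

```python
# Hypothetical two-unit illustration: every individual median effect is
# positive, U and X are independent (one selection distribution for both
# conditions), yet the median effect of the aggregate is negative.

def median(dist):
    """Median inf{x : F(x) >= 1/2} of a finite distribution {value: prob}."""
    cum = 0.0
    for x in sorted(dist):
        cum += dist[x]
        if cum >= 0.5:
            return x

# f_u(c, .) as finite distributions over values of the dependent variable:
f = {
    "u1": {"a": {8: 1.0},            "b": {0: 0.49, 30: 0.51}},
    "u2": {"a": {1: 0.50, 20: 0.50}, "b": {2: 0.52, 20: 0.48}},
}
p = {"u1": 0.5, "u2": 0.5}        # selection distribution, same for a and b

def mixture(c):
    """Distribution of the aggregate: p-weighted mixture of the f_u(c, .)."""
    out = {}
    for u, pu in p.items():
        for x, prob in f[u][c].items():
            out[x] = out.get(x, 0.0) + pu * prob
    return out

for u in f:                       # individual median effects: both positive
    print(u, median(f[u]["b"]) - median(f[u]["a"]))
print(median(mixture("b")) - median(mixture("a")))   # -6: aggregate effect negative
```

The reversal works because unit u1's b-distribution places just under half of its mass very low: too little to move that unit's own median, but enough, pooled with u2's low values, to pull the aggregate's 0.5 quantile down.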
Since the further analysis will be confined to situations with stochastic independence of the variables U and X, we can introduce a convenient terminology and notation to account for the fact that the probability distribution of the dependent variable in the process of selecting a unit and observing it depends on the probabilities governing the selection of a unit. In situations with a finite domain set D (which will be considered mainly in the sequel), the selection process can be characterized by the assignment of a selection probability p(u) to every u ∈ D. These probabilities form the distribution of the random variable U, which will be called the selection distribution in the sequel. The process of selecting a unit with selection distribution p and observing it under one of the conditions a and b will be called an RSO-process ('RSO' is an abbreviation for 'random selection and observation'). Every such process can be characterized by a map y_p: C × R → [0,1] in the same way as every unit can be characterized by a map f_u. The function value y_p(c,x) is the probability of obtaining a value up to (and including) x in the RSO-process with selection distribution p, if condition c is realized.9 The expectation and the median of the respective probability distribution will be denoted as μ_p(c) and Md_p(c), and the difference μ_p(b) − μ_p(a) can be called more precisely the average causal effect for the RSO-process with selection distribution p on the expectation of the dependent variable. Under these assumptions, the relation between the probabilities f_u(c,x) and y_p(c,x) can be expressed by the equation

    y_p(c,x) = Σ_{u ∈ D} p(u) · f_u(c,x).                    (1)
A generalization of Eq. (1) to a situation with an infinite domain set D is given by the equation

    y_p(c,x) = ∫_D f_u(c,x) dp(u),                    (2)

where the selection distribution p is a probability measure on a suitable σ-algebra in D.10 Since this generalization will be used only marginally in the sequel, it can be ignored by readers not familiar with the underlying Lebesgue integrals.
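For a finite domain, Eq. (1) can be evaluated directly: the distribution function of the RSO-process at any point x is the p-weighted average of the individual distribution functions at x. The following sketch uses hypothetical units and probabilities.

```python
# Numerical sketch of Eq. (1) for a hypothetical finite domain D = {u1, u2}:
# the CDF y_p(c, x) of the RSO-process is the p-weighted mixture of the
# individual CDFs f_u(c, x).

# Individual distributions given as {value: probability}:
f = {
    "u1": {"a": {1: 0.4, 3: 0.6}, "b": {2: 0.3, 4: 0.7}},
    "u2": {"a": {0: 0.5, 5: 0.5}, "b": {1: 0.2, 6: 0.8}},
}
p = {"u1": 0.3, "u2": 0.7}   # selection distribution

def F(dist, x):
    """CDF of a finite distribution at x: probability of a value up to x."""
    return sum(prob for v, prob in dist.items() if v <= x)

def y_p(c, x):
    """Eq. (1): y_p(c, x) = sum over u in D of p(u) * f_u(c, x)."""
    return sum(p[u] * F(f[u][c], x) for u in p)

print(y_p("a", 2))   # 0.3 * 0.4 + 0.7 * 0.5 = 0.47
```

Since expectation is linear in the distribution, the same weighting carries over to μ_p(c), which is why an order of expectations survives aggregation; the median, a quantile of the mixed CDF, has no such representation.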
Up to this point, the probabilities f_u(c,x) and y_p(c,x) have been introduced as conditional probabilities, which are derived in a 'top down' mode from a complex probability space. We should also mention a 'bottom up' approach, which is known under headings like 'aggregation' or 'mixture distributions' and recommended by some authors (e.g., Sixtl, 1985; Rost & Langeheine, 1991) as a basis of a methodology of statistics in psychology. There we would start with the probabilities f_u(c,x) and derive the probabilities y_p(c,x) from Eq. (1) resp. (2). Since both approaches lead to identical results for the problems to be discussed in the sequel, we can refer to another article (Iseler, 1996b) for a discussion of some reasons motivating a preference for the bottom up way.11
The same article formalizes a concept of 'stability under aggregation', referring to the fact that some (but not all) properties are inherited by an aggregate if all aggregated elements have this property.12 The formal definition can be applied to the present situation as follows: A property H of maps like f_u and y_p (e.g., a hypothesized order of the expectations or medians for conditions a and b) is stable under aggregation if and only if the following implication holds for every selection distribution p: If there is a subset A of D such that the selection probability associated with this set is 1, and if furthermore the property H holds for the map f_u of every u ∈ A, then this property follows for the map y_p.13 In particular, the premise of this implication is true for every selection distribution p if the maps f_u of all units belonging to the domain set D have the property H: Then the set D can take the role of the set A in the above implication, and the property H will follow for every selection distribution p. Note that an identical selection distribution for all conditions is assumed by Equations (1) resp. (2) underlying the above definition of stability under aggregation. Using this terminology, the above-quoted result of Steyer et al. (1995, 1996) for situations with independence of U and X (i.e., with identical selection distributions for all conditions) can be summarized in the conclusion that a positive causal effect of a condition b (vs. a) upon the expectation of a dependent variable is stable under aggregation. On the other hand, the lacking aggregation stability of the corresponding property of medians is the subject of the present article.
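The 'for every selection distribution' clause of the definition can be probed empirically. In the hypothetical two-unit domain below (illustrative numbers, constructed so that every individual unit satisfies both the expectation order and the median order for b vs. a), a random sample of selection distributions never violates the order of expectations, as the quoted stability result guarantees, but does violate the order of medians.

```python
# Probe of the 'for every selection distribution p' quantifier in the
# stability definition (hypothetical two-unit domain, random p sampled).
import random

# For each unit, both mean and median of f_u(b, .) exceed those of f_u(a, .):
f = {
    "u1": {"a": {8: 1.0},            "b": {0: 0.49, 30: 0.51}},
    "u2": {"a": {1: 0.50, 20: 0.50}, "b": {2: 0.52, 20: 0.48}},
}

def mean(dist):
    return sum(x * pr for x, pr in dist.items())

def median(dist):
    cum = 0.0
    for x in sorted(dist):          # median as inf{x : F(x) >= 1/2}
        cum += dist[x]
        if cum >= 0.5:
            return x

def mixture(p, c):                  # Eq. (1) for the two-unit domain
    out = {}
    for u, pu in p.items():
        for x, pr in f[u][c].items():
            out[x] = out.get(x, 0.0) + pu * pr
    return out

random.seed(1)
mean_stable, median_stable = True, True
for _ in range(1000):               # sample selection distributions p
    w = random.random()
    p = {"u1": w, "u2": 1.0 - w}
    mean_stable &= mean(mixture(p, "b")) > mean(mixture(p, "a"))
    median_stable &= median(mixture(p, "b")) > median(mixture(p, "a"))
print(mean_stable, median_stable)   # expectation order inherited, median order not
```

Of course a sampling probe can only refute, never establish, stability; for the expectation order the universal claim is secured by the linearity of Eq. (1), while a single sampled p suffices to refute it for the median order.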
Methods of Psychological Research 1997 Vol.1 No.4
© 1997 Pabst Science Publishers