1. FGAP Dissertation Competition 1999

From 19 to 21 July 1999, the doctoral competition announced by the Fachgruppe was held at the Universität Regensburg. The three best of the 16 submissions, all of them outstanding, were awarded prizes.

The first prize, a laptop donated by Toshiba, went to Adrian von Mühlenen, Universität Bern (now Universität Leipzig), for his dissertation on "Perceptual integration of motion and form information in a visual search task".

The second prize, a high-end digital camera, went to Karin Zimmer, Universität Regensburg (now Universität Oldenburg), for her dissertation on "Intrinsic geometry of binocular visual space".

The third prize, a mobile phone donated by Siemens, went to Klaus Rothermund, Universität Trier, for his dissertation on "Persistence and reorientation: Perseverance and dissolution of goal-related attentional sets".

Conference report of the Fachgruppe Allgemeine Psychologie: "Fortschritte der Experimentellen Psychologie"

On 20 and 21 July 1999, the final round of the doctoral competition announced by the Fachgruppe Allgemeine Psychologie of the DGPs took place at the Universität Regensburg. A committee of four "editors" (Jan Drösler, Regensburg; Reinhold Kliegl, Potsdam; Josef Lukas, Halle; Rolf Ulrich, Tübingen) had all submitted dissertations evaluated according to the peer-review system. To make the assessments as comparable as possible, the reviewers were provided with a questionnaire that called for a rating of 0 to 3 points on each of eleven criteria. The criteria covered not only the timeliness and originality of the research question, but also the quality of the theoretical treatment and of the experimental work.

In view of the uniformly high quality of all submissions, all 16 applicants were invited to the final round. They were recent doctoral graduates from Germany, Austria, and Switzerland, all with dissertations in experimental psychology. Summaries of their contributions can be read on the Fachgruppe's homepage. For the final round, the applicants had to submit a poster on their dissertation, which was on display throughout the meeting, to give a 20-minute talk on it, and then to face a 10-minute discussion. All attending members of the Fachgruppe Allgemeine Psychologie were entitled to rate the talk and discussion with 0 to 6 points and the poster presentation with 0 to 4 points. The final score was formed from the sum of all three assessments (written dissertation, oral presentation, poster presentation).
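
Expressed schematically (a sketch only; the symbols below are introduced for illustration, and the report does not state how the ratings of the individual reviewers and voting members were combined), a candidate's total score thus took the form

    S = c_1 + c_2 + ... + c_11 + p_talk + p_poster

with c_k in {0, ..., 3} the peer-review rating on the k-th of the eleven criteria for the written dissertation, p_talk in {0, ..., 6} the rating of the talk and discussion, and p_poster in {0, ..., 4} the rating of the poster presentation.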

As it happened, the Philosophische Fakultät II of the Universität Regensburg awarded an honorary doctorate that same week to Patrick Suppes, Stanford University, for his outstanding achievements, not least in providing a theoretical foundation for experimental psychology. Patrick Suppes and the external scientists who had traveled to Regensburg for his honoring were invited to take part in the final round of the doctoral competition as judges. The competitors therefore had to prepare their posters in English and to give their talks and hold their discussions in English.

To the surprise of everyone involved, this language requirement caused no noticeable difficulties for any of the participants. All those present were impressed by the natural and confident command of English shown by every one of the young scientists. Equally convincing was the consistently high level of the scientific contributions. When a meeting is distinguished by the fact that every speaker can look back on at least two years of preparation for his or her contribution, a solid performance could be expected on that basis alone. What was remarkable, however, was that beyond this expectation, the depth of the research questions, the precision of the experimental implementation, the thoroughness of the statistical testing, and the caution exercised in drawing conclusions were common to nearly all contributions. As far as the observers are concerned, the response was exceptionally positive on all counts.

And what do the participants themselves say? An informal survey at the end of the meeting showed that the pronounced competitive situation, although perceived as such, had no negative effects whatsoever. After all, there were valuable prizes to be won. The first prize, a modern laptop donated by Toshiba, went to Adrian von Mühlenen, Universität Bern (now Universität Leipzig); the second prize, a high-end digital camera, to Karin Zimmer, Universität Regensburg (now Universität Oldenburg); and the third prize, a Siemens mobile phone, to Klaus Rothermund, Universität Trier. In their comments, the participants highlighted the combination with the award of an honorary doctorate to a prominent psychologist as well worth imitating, even if it entails language requirements. It appears that the younger generation has not only recognized the problem of internationalizing our discipline but has, in essential respects, already solved it. Following the meeting, the representatives of the Fachgruppe Allgemeine Psychologie will now evaluate all of the experience gained and report on it at the next meeting of the Fachgruppe.

Jan Drösler, Karl Gegenfurtner, Josef Lukas

Abstracts of all 16 contributions

Auditory sentence comprehension: Evidence from event-related brain potential studies Anja Hahne, Freie Universität Berlin*

A set of experiments explored the temporal parameters of auditory language comprehension. Using event-related brain potentials (ERPs) as the dependent variable, the characteristics and possible interactions of semantic, syntactic and prosodic processes in sentence comprehension were investigated. ERPs to correct sentences were compared to ERPs elicited by sentences containing either a semantic or a syntactic violation. A first experiment replicated previous results using new materials. While semantically incorrect sentences elicited a centro-parietally distributed N400 component, syntactic phrase structure violations elicited an early left anterior negativity (ELAN), which was followed by a late parietally distributed positivity (P600). This data pattern is compatible with a language comprehension model assuming three processing phases, two of them being mainly syntactic.

The next experiment tested the degree of automaticity of the two syntactic processing steps. To do so, the proportion of correct and syntactically incorrect sentences was varied (20% vs. 80% violations), thereby inducing strategic behavior on the part of the participants. While the early negativity was not influenced by the proportion manipulation, the P600 was observed only when the proportion of syntactically incorrect sentences was low. This suggests that the first syntactic processing step is rather automatic, while the late syntactic processing step is rather controlled.

Possible interactions of semantic and syntactic processing were examined in a subsequent experiment by additionally introducing a combined semantic-syntactic violation with the violation being realized on the same target word. This combined condition elicited the same ERP pattern as the pure syntactic condition, namely an ELAN followed by a P600 component. Interestingly, there was no N400 despite the overt semantic violation. This indicates that the semantic integration of a word in a sentence may only be initiated after this word has been successfully integrated into the syntactic phrase structure.

A subsequent experiment used the same sentences but different instructions. Rather than judging the sentences for overall correctness (as in the preceding experiment), participants now evaluated sentences for "semantic coherence" while ignoring syntactic aspects. The ERP results were clear-cut: even when focusing on semantic aspects of the sentence, an early anterior negativity was elicited in the pure syntactic condition as well as in the combined semantic-syntactic condition. This finding further testifies to the highly autonomous character of the first syntactic processing step. A further important result of this experiment was that, in contrast to the previous experiment, an N400 component was elicited in the combined condition as well. The fact that the N400 was dependent on the instructions suggests that the underlying processes are under the participants' strategic control.

A final set of two experiments studied possible interactions of syntactic and prosodic factors by investigating the effect of a non-canonical prosodic form. These data demonstrate that prosodic information, unlike semantic information, may influence even the early syntactic processes reflected in the ELAN component.

Taken together, the data provide important constraints for the temporal and functional coordination of semantic, syntactic, and prosodic processes in auditory language comprehension. Recent extensions of this experimental work demonstrate that the paradigm can also be successfully applied to uncover specific details of the comprehension processes in first and second language learners as well as in hearing-impaired patients.

*Presently Max-Planck-Institut, Leipzig.

Constraint Relaxation and Chunk Decomposition in Insight Problem Solving Günther Knoblich, Universität Hamburg*

Insight problems cause impasses because they deceive the problem solver into constructing an inappropriate initial representation. The main theoretical problem of explaining insight is to identify the cognitive processes by which impasses are resolved. It is hypothesized that impasses are broken by changing the problem representation and two hypothetical mechanisms for representational change are described: the relaxation of constraints on the solution and the decomposition of perceptual chunks. These two mechanisms generate specific predictions about the relative difficulty of individual problems, about differential transfer effects, and about the structure of eye movements. The predictions were tested in several experiments using matchstick arithmetic problems. The results were consistent with the predictions. Representational change is a powerful explanation for insight, if the hypothesized change processes are specified in detail. The results support the view that a key component of creative thinking is to overcome the processing imperatives of past experience.

*Presently Max-Planck-Institut, München.

Speed of Comprehension of Visualized Ordered Sets Christof Körner, Universität Graz

Visualizations and their comprehension are a vital prerequisite for the analysis and communication of numerical and non-numerical information. In a series of four experiments, visualizations of the non-numerical data structure of ordered sets were investigated by means of so-called upward drawings.

Mathematicians and computer scientists have investigated the properties of planarity, slope, and levels for ordered sets. Ordered sets need not have these properties. If, however, these properties hold for a set, they can (but need not) be visualized in a respective upward drawing. Mathematicians and computer scientists claim that consideration of these properties facilitates the comprehension of upward drawings.

The effect of adequate visualizations regarding the properties planarity, slope, and levels of a single ordered set was investigated. In each of the experiments 30 participants had to answer interpretation questions which were shown together with specific visualizations of the ordered set. The interpretation questions required comparisons between the elements of the set. The number of comparisons was also varied systematically. Moreover, the participants' instructed knowledge of ordered sets varied across experiments. The latencies of participants' responses were recorded.

Analysis of the response latencies with non-parametric models shows that visualization of planarity is the most influential variable regardless of slope and levels, the number of comparisons, and the instructed knowledge. The speed of comprehension varies with the number of required comparisons as expected. If the number of comparisons between elements is increased, slope has an effect on the speed of comprehension as well.

A joint model accounts for effects of both properties of the drawings and properties of the interpretation questions.

Memory search instead of template matching. Representation-guided inference in "same-different" performance. Thomas Lachmann, Universität Leipzig

The influence of stimulus material organization upon performance in visual recognition is considered within a previously advanced framework of memory-guided inference (Geissler & Puffe, 1983; Geissler & Lachmann, 1996). This approach attempts to specify strategies as a function of task and representation of sets of objects in perception and memory. A basic assumption is that, due to organizational constraints, strategies in complex recognition tasks exhibit characteristic deviations from unconditionally optimal performance. This was referred to as "seeming redundancy" (Geissler, 1985, 1995).

This contribution presents data from several "same-different matching" experiments with regular sets of 5-dot patterns, obtained by rotations and reflections on an imaginary 3 × 3 grid (Garner & Clement, 1963). When subjects were instructed to rate the patterns as "same" independent of their orientation, mean RTs of both "same" and "different" responses can be described as a function of the number of possible transformational alternatives of the presented patterns. This holds even when the patterns were physically identical. These results are incompatible with common theories assuming template matching and minimal mental transformations. They are instead understood as evidence for a group-code-related search reflecting an extreme case of seeming redundancy in information processing.

This interpretation is supported by the finding that the experimental manipulation of the probability of occurrence of one single pattern affects the performance for all members of the group of transformationally related patterns.

Decomposing inconsistent choice behavior Martin Lages, Universität Heidelberg*

An efficient graph-theoretical decomposition technique is introduced that treats inconsistencies in binary choice data as adaptive behavior rather than random error. The ear decomposition reduces inconsistencies to a basis of directed cycles. The incidence vectors of this basis generate a cycle space comprising all possible cycles. The basis characterizes inconsistencies in any finite binary data set and its size offers an improved measure of inconsistency.

A version of the ear decomposition employs the sequence of choice-trials. This decomposition was applied to pair comparisons where the sequence of choice-trials was systematically varied. In the repetition block design, one of the alternatives from the preceding trial was repeated in the subsequent trial. In the resolution block design, each alternative appeared only once in a block of choice-trials. In a third block design the sequence of pairs was randomized. The effect of the block designs on intransitive choice was tested between and within subjects.

The results suggest that, contrary to the assumptions of classical deterministic and probabilistic choice models, intransitivities vary systematically between block designs and across sessions. Additional analyses show that the decomposition by sequence may serve as a model of how inconsistencies emerge in a sequence of trials. It is concluded that the ear decomposition constitutes a promising tool for analyzing inconsistent choice behavior without introducing randomness or domain-specific assumptions. In general, algebraic decompositions are regarded as a first step toward a qualitative theory of error in the social sciences.
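
As a rough illustration of the underlying idea (a sketch only, not the ear decomposition itself; the data, the variable names, and the use of the networkx library are assumptions made for this example), pairwise choices can be coded as a directed graph so that intransitivities appear as directed cycles:

    # Pairwise choices "x chosen over y" become directed edges x -> y, so that
    # intransitivities in the choice data show up as directed cycles in the graph.
    from itertools import combinations
    import networkx as nx

    # Hypothetical data: one intransitive triad (a > b, b > c, c > a) plus
    # consistent choices involving a fourth alternative d.
    choices = [("a", "b"), ("b", "c"), ("c", "a"),
               ("a", "d"), ("b", "d"), ("c", "d")]
    G = nx.DiGraph(choices)

    # Enumerate the elementary directed cycles, i.e. the concrete intransitivities.
    print(list(nx.simple_cycles(G)))                      # one cycle through a, b, c

    # Count circular triads, a classical (cruder) index of inconsistency.
    def cyclic(a, b, c):
        return ((G.has_edge(a, b) and G.has_edge(b, c) and G.has_edge(c, a)) or
                (G.has_edge(a, c) and G.has_edge(c, b) and G.has_edge(b, a)))

    print(sum(cyclic(*t) for t in combinations(G.nodes, 3)))   # 1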

*Presently Max-Planck-Institut, Berlin.

Vestibular Visual Interaction: Psychophysical and electrophysiological investigations Rainer Loose, Universität Düsseldorf*

Vestibular visual interaction is defined as the influence of vestibular stimulation on visual perception. The effect of concurrent self-motion on the perception of visual motion-direction was investigated in several experiments. Random-dot kinematograms with coherently moving pixels and randomly moving pixels were used as visual stimuli. The smallest percentage of coherently moving pixels leading to a clear perception of motion direction represented the perception threshold in the psychophysical studies. In addition, electrophysiological investigations using visual evoked potentials were conducted. The essential results are summarized as follows:

  1. Perception of visual motion-direction is impaired by concurrent body rotations about the vertical axis when visual and vestibular motion directions are incongruous (i.e., visual and vestibular stimulation are in the same direction). Normally, rotation is invariably accompanied by retinal image motion of opposite sign. During postnatal development, the system is probably calibrated for physiologically congruent stimulation.
  2. Velocity but not acceleration of self-motion mediates vestibular-visual interaction.
  3. Translational egomotion does not influence the concurrent perception of visual motion-direction. Obviously the vestibular visual interaction is caused only by stimulation of the semicircular canals.
  4. Visual motion-direction evoked potentials decrease in amplitude during the rotation of subjects about their vertical axis, when visual and vestibular motion directions are incongruous. Potentials evoked by visual pattern reversal remain unaffected by vestibular stimulation. The vestibular visual interaction is therefore related to visual motion perception.
  5. For rotations about the interaural axis, decreased amplitudes of visual motion-direction evoked potentials are found for both congruous and incongruous combinations of visual and vestibular stimulation. Differences in the processing of motion-direction in the vertical and horizontal planes may be related to different demands for re-calibration within these planes: self-generated vertical retinal image motion occurs less frequently.

The visual perception of motion-direction is processed particularly in the middle temporal visual area (MT). The medial superior temporal area (MST) is activated in a direction-specific manner by visual and vestibular stimulation. Areas MT and MST are closely and reciprocally connected. A sensory interaction model based on reciprocal inhibition of areas MT and MST explains vestibular visual interactions as well as visual vestibular interactions.

*Presently Universität Regensburg.

The effect of directed attention on the perceived duration of brief intervals Stefan Mattes, Universität Wuppertal*

Stelmach, Herdman, and McNeil (1994) recently suggested that the perceived duration of attended stimuli is shorter than that of unattended ones. In contrast, however, an attention model of time perception (Thomas & Weaver, 1975) suggests the reverse relation between directed attention and perceived duration. Support for the latter model has come from dual-task studies, in which the amount of attention available for a duration judgment was manipulated. It is, however, an open question whether this finding generalizes to a situation where attention is directed by means of a precue. A series of eight experiments was conducted to test the validity of the two contradictory hypotheses. In all experiments, attention was directed to one of two possible stimulus sources with durations of less than 500 ms. In Experiments 1-3, a stimulus appeared either in the visual or the auditory modality. In the other experiments, visual attention was directed to one of two possible locations within the visual field. Furthermore, interval type (filled vs. empty) and the psychophysical procedure were varied across experiments. In accordance with the attentional model, the present results support the assumption that directed attention prolongs the perceived duration of a brief stimulus (or interval). Contrary to the results of the above-mentioned dual-task studies, however, attention did not affect discrimination performance (DL). An attentional-switch hypothesis is discussed as an alternative account for the present results.

*Presently Universität Tübingen.

Explaining Interference in Serial Short-Term Memory: Working Memory and Changing-State Hypothesis Thorsten Meiser, Universität Bonn*

Differential effects of secondary tasks on serial short-term memory were investigated to test conflicting predictions derived from the working-memory model (A. D. Baddeley, 1986, 1997) and the changing-state hypothesis (D. Jones, P. Farrand, G. Stuart, & N. Morris, 1995). In Experiments 1 and 2, disruptive effects were analyzed as a function of the temporal location, the changing-state characteristic, and the modality of secondary tasks. Specific disruptions due to the changing-state characteristic occurred in the encoding phase of spatial and verbal serial memory tasks, but not in a retention interval. Secondary-task modality moderated the amount of interference in both phases, as indicated by cross-over dissociations between spatial and verbal memory performance. The results extend the findings of D. Jones et al. (1995) and support a novel explanation of changing-state effects in terms of an overload of the central executive caused by the concurrent requirements to encode serial information and to perform changing-state activities. This explanation was further sustained by an analysis of task demands. Experiments 3 and 4 corroborated that changing-state activities impose higher demands on the central executive than steady-state activities. Experiments 5 and 6 revealed that serial short-term memory is particularly susceptible to limitations of central-executive resources during the encoding phase.

*Presently Cardiff University, Wales, UK.

Perceptual Integration of Motion and Form Information in a Visual Search Task Adrian von Mühlenen, Universität Bern*

A common assumption of many theories of visual selective attention is that objects in the visual field compete with one another for access to detailed visual processing and/or control of behavior. The visual search paradigm, in which the participant has to detect a single target object in an array of multiple nontarget objects, has become a test bed for alternative theories of competition. The experiments that I will present all required visual search for conjunctions of motion and form.

One group of experiments re-investigated whether motion-based filtering (e.g., McLeod, Driver, Dienes, & Crisp, 1991) is direction-selective and whether cuing of the target direction promotes efficient search performance. Search was less efficient when items moved in multiple (2, 3, and 4) directions compared to just one direction. Furthermore, pre-cuing of the target direction facilitated the search. A second group of experiments was designed to estimate the relative contributions of stationary and moving nontargets to the search rate. Search rates were primarily determined by the number of moving nontargets; stationary nontargets sharing the target form also exerted a significant effect, but this was only about half as strong as that of moving nontargets. Finally, a third group of experiments investigated the processes involved in the efficient detection of motion direction - form conjunction targets. The results showed that the proportion of moving nontarget Xs as well as the number of movement directions contributed to the search rates.

Two principles are proposed to explain the overall results: (i) interference on direction computation between items moving in different directions (e.g., Qian & Andersen, 1994) and (ii) selective direction tuning of motion detectors involving a receptive-field contraction (cf. Moran & Desimone, 1985; Treue & Maunsell, 1996). In conclusion, a relatively simple model derived from the Guided Search theory can provide a satisfactory account for the results in motion-form conjunction search. However, to account for the full pattern of results specific properties of different visual subsystems have to be taken into account.

References

McLeod, P., Driver, J., Dienes, Z., & Crisp, J. (1991). Filtering by movement in visual search. Journal of Experimental Psychology: Human Perception and Performance, 17, 55-64.

Qian, N., & Andersen, R. A. (1994). Transparent motion perception as detection of unbalanced motion signals: II. Physiology. Journal of Neuroscience, 14, 7367-7380.

Moran, J., & Desimone, R. (1985). Selective attention gates visual processing in the extrastriate cortex. Science, 229, 782-784.

Treue, S., & Maunsell, J. H. R. (1996). Attentional modulation of visual motion processing in cortical areas MT and MST. Nature, 382, 539-541.

*Presently Universität Leipzig.

Focusing of visuospatial attention in subjects with and without attentional expertise: An ERP and RT study Caterina Pesce Anzeneder, Freie Universität Berlin

Focusing attention on delimited areas of the visual field seems to enhance the efficiency of processing of stimuli occurring at the attended locations. Behavioral results have been obtained by cueing attention with peripheral cues of different size [1]. Additional recording of event-related brain potentials (ERPs) may be useful, but the neural manifestation of the attentional allocation following peripheral cues may be confounded with effects of sensory interaction between the cue and the following target stimulus [2]. Nevertheless, endogenous attentional effects might be segregated by comparing subjects who practise attention-demanding tasks daily, such as skilled athletes, with non-practisers [3].

High-level volleyball players and control subjects were submitted to a variation of Posner's paradigm for exploring covert orienting of visuospatial attention. In a simple RT task, a peripheral cue of varying size was presented unilaterally or bilaterally relative to a central fixation point and followed by a target at different stimulus onset asynchronies (SOAs). The target could occur validly inside the cue or invalidly outside it, with varying spatial relation to its boundary. EEG was recorded from F3, F4, P3, P4, O1 and O2 electrode sites. Target-elicited ERPs and mean RTs to targets were computed for each task condition. ANOVAs were performed on mean RTs [4] and on ERP mean amplitude measures for the early and late Nd time intervals (130-180 and 220-360 msec after target onset, respectively).

Targets following unilateral small cues elicited an enhanced negativity over anterior and posterior scalp regions, both in the early and in the late Nd time interval of the ERPs, as compared to the unilateral large cue condition. RT benefits for trials with smaller vs. larger cues were found regardless of the unilateral/bilateral cueing condition. Also, a negative enhancement occurred in the late Nd time interval of the ERPs for trials with longer SOA as compared to trials with shorter SOA. Most interestingly, this effect occurred only with unilateral and small cues in the case of volleyball players, but with bilateral and large cues in the case of controls. Correspondingly, only volleyball players showed benefits at the longer SOA for unilateral vs. bilateral attending and for focusing on smaller vs. larger cued areas. Further RT results showed differences between volleyball players and controls depending on cue size and target position. Our results suggest: (1) The negative enhancement in the early and late Nd time intervals of target-elicited ERPs seems to be an electrocortical manifestation of oriented and focused attending, but only the late Nd effects are related to attentional skill. (2) Volleyball players modulate visuospatial attention differently than controls: they have presumably automatized the use of a large span of attention, which is adapted to their usual task demands, and endogenously increase their attentional effort to cope with less usual focused-attending conditions. (3) Their attentional skill seems to lead to a more precise distribution of resources within and around the focus of attention.

References

[1] Castiello, U., and Umilta, C. Size of the attentional focus and efficiency of processing. Acta Psychologica, 1990, 73, 195-209.

[2] Eimer, M. An ERP study on visual spatial priming with peripheral onsets. Psychophysiology, 1994, 31, 154-163.

[3] Zani, A., and Rossi, B. Cognitive psychophysiology as an interface between cognitive and sport psychology. International Journal of Sport Psychology, 1991, 22, 376-398.

[4] Pesce Anzeneder, C., and Bösel, R. Modulation of the spatial extent of the attentional focus in high-level volleyball players. European Journal of Cognitive Psychology, 1998, 10, 247-267.

Integration of elementary visual stimulus attributes Tobias E. Reisbeck, Universität Tübingen

Objects in our visual environment are defined by various attributes such as form, color, depth, and motion. During the last few years, research has primarily been concerned with the investigation of modules that process these attributes in a parallel and selective manner. In contrast, the results of many anatomical, physiological, and psychophysical studies indicate that interactions between these stimulus attributes do take place during visual processing. The work presented here examines the question of whether integration of visual information already takes place at early stages of the visual processing hierarchy. Our interest is focused on elementary stimulus attributes that are known to be processed by neurons in the first visual area (V1), for example orientation, spatial frequency, or color.

A first project was concerned with the question of whether color and form are processed by separate visual pathways. If this were the case, orientation discrimination based on stimuli defined by color only (isoluminant stimuli) should be significantly impaired. However, the results clearly show that orientation discrimination, an essential aspect of form perception, is not affected under conditions of isoluminance. In addition, the processing of orientation differences for luminance and isoluminant stimuli is qualitatively and quantitatively similar. These results hold not only for simple tasks such as orientation discrimination but also for complex tasks in which stimuli are characterized by simultaneous changes in orientation and contrast. The results of these experiments therefore provide strong evidence that similar mechanisms underlie orientation discrimination for luminance and isoluminant stimuli. Furthermore, the assumption of strictly parallel and separated processing pathways can be refuted.

Interaction of elementary visual stimulus attributes plays a crucial role in the perception of velocity. Velocity has a direct physical interpretation as the ratio of temporal to spatial frequency in moving luminance patterns. The basic elements in all models of velocity perception are neurons in the first visual area (V1) which are sensitive to narrow ranges of temporal and spatial frequency. The experiments conducted here were concerned with the question of whether there exist mechanisms in the human visual system which are sensitive to pattern velocity or mechanisms that are based upon sensitivity to spatial and temporal frequencies only. The results of these experiments clearly provide evidence for velocity tuned mechanisms in human motion perception. The coding of velocity is realized during early stages of visual processing. Analogous experiments with isoluminant stimuli failed to exhibit evidence for velocity tuning, supporting the notion that the human color vision system is impaired in its coding of stimulus speed, despite excellent sensitivity to direction of motion.
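
In symbols (the notation is chosen here only for illustration), the physical relation referred to at the beginning of the preceding paragraph is

    v = f_t / f_s

where f_t is the temporal frequency of the moving luminance pattern (in cycles per second) and f_s its spatial frequency (in cycles per degree of visual angle), so that v is the pattern speed in degrees of visual angle per second.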

The results of the various experiments demonstrate that integration of information about different stimulus attributes such as color, form, and motion already takes place during early stages of visual processing. These attributes are processed by neurons in the first cortical visual area (V1). The results are at odds with the hypothesis of strictly separated and parallel processing pathways whose information is integrated only at higher stages of cortical visual processing.

Persistence and reorientation: Perseverance and dissolution of goal-related attentional sets Klaus Rothermund, Universität Trier

The adoption of a goal or task is accompanied by a focusing of attention on goal- or task-related information. Two components of this focusing can be distinguished: (1) cognitive resonance for relevant information is increased, and (2) processing of irrelevant information is suppressed. Cognitive focusing is typically deactivated when the goal is achieved or the task has been performed successfully. But what happens to a goal-related attentional set when goal pursuit has definitely failed? In a first set of experiments, cognitive resonance for goal-related information after experimentally induced failure was investigated. In a first phase of the experiments, participants received positive, negative or neutral feedback in a complex labyrinth task (Experiment 1) or in a number of synonym tasks (Experiments 2 and 3). In a second phase of the experiments, automatic attentional capture for stimuli relating to the previous labyrinth or synonym tasks was measured by presenting these stimuli as distractors in a word-naming task. Interference effects of the distractors were increased after failure in all experiments. In a second set of experiments, inhibition of task-irrelevant information was analyzed during a failure episode. In a first study, participants had to work on a set of either solvable or unsolvable anagrams that were surrounded by task-irrelevant distractor words (Experiment 4). Subjects that had received unsolvable anagrams showed better recall of the distractor words in a subsequent surprise free recall test. In another study, participants had to work on a solvable or unsolvable labyrinth task while being exposed to acoustic distractor words (Experiment 5). Inhibition of irrelevant information was measured continuously during the labyrinth task by presenting the distractors as stimuli in a secondary color-naming task. Again, interference effects of the distractors increased during the unsolvable labyrinth but not during the solvable task. In sum, the experiments support the hypothesis that mechanisms of a preferred processing of relevant information and a blocking out of irrelevant information are differentially affected by failure. An increased cognitive resonance for goal-related information persists even after a definite failure of goal pursuit. This perseverance of sensitivity to goal-related information guarantees that possible future opportunities for a successful goal pursuit will not be overlooked. On the other hand, an inhibition of irrelevant information is not maintained in the face of failure. The experience of repeated unsuccessful attempts to reach a goal induces an open, defocalized mode of information processing that is functional for a reorientation after failure.

Existing memory theories cannot explain memory performance after enactment - or can they? Melanie Caroline Steffens, Universität Trier

Memory for action phrases (e.g., "open the book", "clap your hands") seems to improve when the actions are actually carried out, as compared to verbal learning or other encoding conditions. Many researchers have claimed that well-established regularities of memory do not hold for action memory. For instance, they failed to find a primacy effect, a generation effect, a levels-of-processing effect, or an effect of intentional as opposed to incidental learning when actions were carried out. They concluded that existing memory theories cannot explain memory performance after enactment. However, a critical re-analysis of the relevant findings reveals that this conclusion is not mandatory. Rather, better memory after enactment may very well be explained by existing memory theories, specifically, by the encoding specificity principle, by the retrieval cues provided by enactment, and by the semantic processing that enactment implies. Two predictions are deduced from this theoretical reconstruction. On the one hand, there should be circumstances under which memory performance after enactment is worse than memory performance after verbal learning. On the other hand, enactment should improve memory performance for some action phrases more than for others. Particularly, it should improve memory performance for those action phrases that are, when enacted, associated with retrieval cues. Both predictions are empirically demonstrated.

The role of feature integration in action planning Gijsbert Stoet, Universität München*

The feature integration hypothesis proposes that representations of simple features are bound together, allowing the building of complex representations from a limited set of building blocks. There is accumulating evidence for feature integration in the perceptual domain. In the present work, I hypothesize that feature integration also occurs in the motor domain. Furthermore, I hypothesize that action and perception share a common set of features that is acted upon by a general mechanism of feature integration.

The first set of experiments shows evidence for the hypothesis that action planning leads to the temporal binding of response codes. Adult human subjects prepared a left or right finger movement (A) but did not execute it. Next, they performed an independent left or right finger movement (B) before finally executing action A. The results show that when A and B were movements of the same finger, reaction times (RT) for B increased. These results, supported by control experiments, are consistent with the idea that once a spatial feature - in this case the position of the effector - is bound to one action plan, it is harder to bind that same spatial feature into a second action plan.

The second set of experiments shows evidence for the hypothesis that feature integration in action and perception relies partly on the same set of building blocks. This hypothesis predicts that once a feature has been bound into a perceptual object (e.g. the position of the object), it will be harder to bind that same feature into an action plan (e.g. a left or right finger movement). Participants memorized the features of an object (A), then performed a left or right finger movement (B) towards a letter, and finally answered questions about the features of (A). The results show that if A and B shared a spatial feature (e.g. A was on the left and B a movement with the left finger), RT of B increased. Similar results were found when A was merely attended to rather than being memorized.

In summary, the results suggest that feature integration is a mechanism through which action plans as well as perceptual representations are constructed from a common set of features.

*Presently Washington University, Saint Louis, USA.

Intrinsic geometry of binocular visual space Karin Zimmer, Universität Regensburg*

The structure of binocular space is frequently supposed to be either Euclidean or hyperbolic; empirical results, however, remain equivocal. In order to decide whether either geometry is suitable to represent the structure of binocular space, an axiom system [Suppes, Synthese, 24, 298-316 (1972)] fundamental to both geometries was investigated. This system establishes a one-dimensional order of four points on a visual line. Its axioms served as suppositions concerning the structure of visual space and were tested in four experiments, using a computer-driven apparatus. The stimuli were presented in absolute darkness at the subjects' eye level.

The first experiment showed that, for 6 out of 8 participants, at least one of the axioms was violated. Three subsequent experiments were performed to test the robustness of this result with respect to the distances of the stimuli from the observer, to the spatial extent of the stimulus configuration, and to its spatial orientation. An axiom that had been empirically invalid in 6 out of 8 cases in Experiment I was violated by 2 of 6 subjects when a configuration nearer to the observer was used, and by 3 of 7 subjects when the configuration subtended a smaller visual angle. When, instead of an obliquely positioned configuration, a frontoparallel configuration was presented, the axiom was violated by 4 of 6 subjects. Statistically, the proportion of axiom violations did not differ across experiments.

The results show that, contrary to previous assumptions, neither Euclidean nor hyperbolic geometry serves as a valid representation of binocular visual space.

*Presently Universität Oldenburg.

Experimental investigation of the effect of the context on highly saturated colors Rainer Zwisler, Universität Regensburg*

Adaptation to a spatial or temporal context changes the color within that context. Since colors can be represented by their coordinates in a convex cone embedded in three-dimensional real vector space, the influence of the context can be modeled by linear, affine, or projective transformations. Because only projective transformations predict an invariance of the spectrum locus, the effect of a context surrounding a highly saturated stimulus could differentiate between these models. A new experimental setup based on a liquid crystal tunable imaging filter (LCTF) was developed to present a variable, highly saturated or even monochromatic stimulus surrounded by a colored context. Four subjects produced cross-context matches by adjusting the stimulus within the target context to look the same as a similar stimulus previously presented within another context. The resulting data can be described by neither a linear, an affine, nor a projective transformation. The "pure" context effect, adjusted for the effect of memory for colors, reveals a better fit of the projective model. There are further indications favoring the projective model: if the linear or affine models were valid, subjects should not be able to match certain highly saturated stimuli within the target context, but in fact they do so. They even rate these matches as especially good and produce them faster and more efficiently than other matches. These results strongly suggest that a change of the context leads to a projective transformation of the colors contained within that context.
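
For reference, the three classes of transformations mentioned above can be written, in one standard parameterization (the symbols are chosen here for illustration only and are not taken from the dissertation), as

    x' = M x                          (linear)
    x' = M x + t                      (affine)
    x' = (M x + t) / (w^T x + d)      (projective)

where x is the coordinate vector of a color in the three-dimensional cone, M is a 3 x 3 matrix, t and w are vectors, and d is a scalar; the affine and linear maps are the special cases w = 0, d = 1 and, additionally, t = 0.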

*Presently Bezirksklinikum Regensburg.

E-mail addresses:

Anja Hahne, hahne@cns.mpg.de
Günther Knoblich, knoblich@mpipf-muenchen.mpg.de
Christof Körner, christof.koerner@kfunigraz.ac.at
Thomas Lachmann, lachmann@psychologie.uni-leipzig.de
Martin Lages, lages@mpib-berlin.mpg.de
Rainer Loose, rainer.loose@psychologie.uni-regensburg.de
Stefan Mattes, mattes@uni-wuppertal.de
Thorsten Meiser, meisert@cardiff.ac.uk
Adrian von Mühlenen, vonmuehlenen@uni-leipzig.de
Caterina Pesce Anzeneder, schwarz-biegger@t-online.de
Tobias Reisbeck, tobias.reisbeck@sap-ag.de
Klaus Rothermund, rothermu@uni-trier.de
Melanie C. Steffens, steffens@uni-trier.de
Gijsbert Stoet, stoet@thalamus.wustl.edu
Karin Zimmer, zimmer@psychologie.uni-oldenburg.de
Rainer Zwisler, rainer.zwisler@bkr-regensburg.de
