Philippos Vanger, Robert Hoenlinger, Hermann Haken
This paper presents a method for producing prototypical facial expressions of different emotions based on the computation and deformation of digitized facial images. Facial expressions of six basic emotions were portrayed by subjects. Each individual facial image was then deformed to conform to a "face stencil" defined by standard points on the facial structure. Prototypes for the expression of each emotion were created by averaging the images of all individual faces. In this way, the physiognomic variability of individual subjects is reduced to a single computer-generated face while the facial expression is retained. Further combinations of upper and lower face parts produced various facial expressions with less clear emotional meaning. Applications and possibilities for further development of this method are discussed.
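The core of the procedure described above — aligning each face to a common stencil of standard points and then averaging the aligned images pixel-wise — can be sketched as follows. This is only an illustrative reconstruction, not the authors' implementation: the function names, the use of a least-squares affine fit for the alignment step, and NumPy are our assumptions (the paper's actual deformation to the face stencil is a more elaborate, landmark-driven warp).

```python
import numpy as np

def align_to_stencil(landmarks, stencil):
    """Fit a 2-D affine transform (3x2 matrix in homogeneous form)
    that maps one face's landmark points onto the shared stencil
    points, in the least-squares sense."""
    # Pad landmarks with a column of ones so translation is included.
    A = np.hstack([landmarks, np.ones((len(landmarks), 1))])
    # Solve A @ M ~= stencil for the affine matrix M.
    M, *_ = np.linalg.lstsq(A, stencil, rcond=None)
    return M

def warp_points(points, M):
    """Apply the fitted affine transform to a set of 2-D points."""
    return np.hstack([points, np.ones((len(points), 1))]) @ M

def prototype(aligned_images):
    """Pixel-wise average of face images already warped onto the
    stencil; physiognomic differences cancel while the shared
    expression is retained."""
    return np.mean(np.stack(aligned_images), axis=0)
```

In this sketch the geometric normalization is reduced to a single global affine map for brevity; a faithful implementation would warp each image region locally so that every standard point lands exactly on its stencil position before averaging.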
Cacioppo et al. (1992), in their discussion of facial signal systems, point out that a facial image contains information that can be subdivided into:
"1. Static facial signals, e.g., the permanent features of the face such as the bony structure and soft tissue masses that contribute to an individual's appearance.
2. Slow facial signals, e.g., changes in the appearance of the face that occur gradually over time, such as the development of permanent wrinkles and changes in skin texture.
3. Artificial signals, e.g., exogenously determined features of the face such as eyeglasses and cosmetics.
4. Rapid facial signals, e.g., phasic changes in neuromuscular activity that may lead to visually detectable changes in facial appearance." (p. 9)
A great deal of psychological research on the face has so far concentrated on rapid facial signals, or facial expressions, and their role in interpersonal communication (Ekman, Friesen, & Ellsworth, 1972). Furthermore, a large body of literature has been concerned with demonstrating that facial expression is important and effective in communicating various emotional states in social interaction (DePaulo, 1992) and that the experience of emotion triggers the activation of facial muscles, producing the specific facial expression corresponding to each of the basic emotions (Buck, 1984; Ekman, 1972, 1977; Izard, 1977; Tomkins, 1962). Within this line of research, a number of decoding studies have been conducted employing facial material of spontaneously emitted or posed facial activity. This is usually photographic, film, or video material of real persons, such as that developed by Ekman and Friesen (1978) for the FACS manual. However, since different decoding studies ask different questions, different research groups have developed their own facial material tailored to the needs of their studies (Etcoff & Magee, 1992). This means that although the facial expressions under study may be identical, there is inevitably great variability in the physiognomic characteristics of the real persons involved in producing the facial material.