
Invited Speakers

Martha W. Alibali (University of Wisconsin – Madison, USA)

Toward an Integrated Framework for Gesture Production and Comprehension

In this talk, I draw on multiple lines of research to sketch an integrated framework for gesture production and gesture comprehension. The first part of the talk will focus on gesture production. I will present evidence that gestures derive from simulated actions and perceptual states. I will argue that these mental simulations and the corresponding gestures serve to schematize spatial and motoric features of objects and events, by focusing on some features and neglecting others. Further, I will argue that, because of its ability to schematize, gesture can affect thinking and speaking in specific ways. The second part of the talk will focus on gesture comprehension. I will argue that seeing others’ gestures evokes simulations of actions and perceptual states in listeners. In turn, these simulations guide listeners to schematize objects and events in particular ways. These simulations may also give rise to gestures or actions. The third section of the talk will seek to bring production and comprehension together. I will argue that, with experience and via processes of statistical pattern detection, people develop expectations about when others are likely to produce gestures. These expectations guide people's attention to others’ gestures at times when those gestures are likely to contribute to comprehension. Thus, gesture production and comprehension are linked, both because of their shared ties to the action system, and because gesture comprehension depends, in part, on patterns that arise due to regularities in gesture production. 

Alessandro Duranti (UCLA, USA)

Eyeing Each Other: Visual Access during Jazz Concerts

During jazz concerts it is expected that the members of small combos will take one or more “solos,” that is, turns at creating “on the spot” melodies, chord substitutions, or rhythmic patterns. The absence of a conductor and the expectation that what is being played is different from whatever is annotated on the page create a number of interactional problems that need to be resolved. I will focus on one problem: musicians need to know at any given time who is going to solo next or when the solos are ending and all the band members join in to play the melody one more time. A number of possible principles are made available by the history and culture of jazz, including a sometimes explicit, other times implicit “hierarchy of players and instruments” (e.g., the band leader goes first; horn players go before rhythm section players like the pianist or the guitarist; the drummer takes one solo during each set). In most situations, however, the aesthetics of jazz improvisation leaves room for ambiguity about the identity of the next player and the length of each solo. As I will show, it is in these contexts that eye gaze and other gestures, as well as body postures, come to play an important role. But I will also argue that gestures and body postures can only be meaningful and effective against a shared understanding of where a transition point is possible.

Scott Liddell (Gallaudet University, USA)

Signers depict to such an extent that it is difficult to find a stretch of discourse without some type of depiction. Tokens are minimal depictions that take the form of invisible, isolated entities in the space within the signer’s reach. Although invisible, tokens are conceptually present at those sites and signers can direct pronouns and indicating verbs toward them for referential purposes. Other invisible depictions include linear spatial paths that depict time. Buoys, a class of signs produced by the non-dominant hand, also depict entities, but buoys make the depictions visible. Theme buoys, fragment buoys, and list buoys also give a physical form to the entities associated with them. Surrogates are depictions of (typically) humans and may be visible or invisible. A visible surrogate takes the form of (part of) the signer’s body and depicts actions and dialogue. Visible surrogates frequently interact with life-sized invisible surrogate people or entities. Another type of depiction involves shapes or topographical scenes, including actions within those scenes. Depicting verbs create and elaborate this type of depiction and will be the focus of this presentation. Depicting verbs comprise a very large category with unique lexical and functional properties. Their lexical uniqueness comes from their lack of a specified place of articulation and, for some, unspecified aspects of the hand’s orientation. Their functional uniqueness comes from the requirement to produce every instance of a depicting verb within a spatial depiction. The depicting verb VEHICLE-BE-AT, for example, expresses the fixed, lexical meaning ‘a vehicle is located on a surface’. But a depicting verb never expresses only its lexical meaning. That meaning is always embedded within a depiction. Since VEHICLE-BE-AT has no lexically specified place of articulation, signers must provide one each time they use the verb. The place selected locates the vehicle within a topographical depiction and the orientation of the hand depicts the vehicle’s orientation. Combining the lexical meaning with the depiction produces something like ‘A vehicle is located (right here in the depiction) on a surface (facing this way in the depiction)’. The combination of fixed lexical meaning and non-fixed, creative depiction produces a vastly enhanced meaning from a single depicting verb. Video examples from adults and children will illustrate the extent to which depicting verbs are used, the nature of what they depict, and the speed at which signers are able to shift between depictions.

Cornelia Müller (European University Viadrina Frankfurt (Oder), Germany)

Frames of Experience – The Embodied Meaning of Gestures

Addressing gestures as an embodied form of communication might appear somewhat tautological; in fact, however, most of the current debates in philosophy, linguistics, psychology, anthropology, or the cognitive sciences more generally have not had much impact on theorizing the meaning of gestures in its specificity as a bodily mode of human expression (Streeck’s 2010 praxeological view is an important exception). My attempt to offer an embodied understanding of the meaning of gestures is related to Streeck’s work but also informed by the cognitive-linguistic perspective on the embodied grounds of meaning more generally. Philosopher Mark Johnson formulates this position in his book The Meaning of the Body: An Aesthetics of Human Understanding: “[…] meaning grows from our visceral connections to life and the bodily conditions of life. We are born into the world as creatures of the flesh, and it is through our bodily perceptions, movements, emotions, and feelings that meaning becomes possible and takes the form it does.” (Johnson 2007: 17)

I am going to suggest that gestures are a primary field in which to study how meaning emerges from bodily experiences. Not only are they grounded in very specific forms of embodied experience, but, by studying gestures, we can actually learn something about how meaning and even some very basic linguistic structures may emerge from embodied frames of experience, notably in conjunction with their interactive contexts-of-use. This take on gestural meaning includes referential as well as pragmatic gestures. Informed by the Aristotelian concept of mimesis as a fundamental human capacity, a systematics for an embodied cognitive semantics and pragmatics of gestures will be presented. I will argue that the meaning of gestures, whether they refer metaphorically or non-metaphorically, is experientially grounded in different forms of bodily mimesis, and that the same holds for pragmatic forms of gesturing (see also Zlatev 2014).

Putting the mimetic potential of gestures center-stage opens a systematic pathway to accounting for the meaning of a given gestural form. Gestural mimesis, however, never happens outside a given moment in a communicative interaction. The meaning of gestures therefore always incorporates this specific contextual moment, and this is what I refer to as a frame of experience (Fillmore 1982). In conventionalization processes of co-speech gestures, we can witness sedimentations of the interplay between a motivated kinesic form and aspects of context that result in ‘semantizations’ of form clusters and kinesic patterns. Sometimes this involves the analytic singling out of a meaningful kinesic core with particular contextualized meanings, as for example in the case of a group of gestures sharing a movement away from the body.

The meaning of gestures thus emerges from embodied frames of experience, where embodiment involves both the sensory-motor experience of the body in motion and the specific intersubjective contextual embedding of this bodily experience.

Catherine Pélachaud (CNRS, Télécom ParisTech, France)

Modeling Conversational Nonverbal Behaviors for Virtual Characters

In this talk I will present our ongoing effort to model virtual characters with nonverbal capacities.

We have been developing Greta, an interactive Embodied Conversational Agent platform. It is endowed with socio-emotional and communicative behaviors. Through its behaviors, the agent can sustain a conversation as well as show various attitudes and levels of engagement.

The ECA is able to display a large variety of multimodal behaviors to convey communicative intentions. We rely on a lexicon whose entries are defined as temporally coordinated multimodal signals. At run time, the signals for given communicative intentions and emotions are instantiated and their animations realized. Communicative behaviors are not produced in isolation from one another. We have developed models that generate sequences of behaviors; that is, behaviors are not instantiated individually, but the surrounding behaviors are taken into account.
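The abstract does not specify Greta's internal data formats, so the following is a rough, hypothetical sketch in Python of what a lexicon of temporally coordinated multimodal signals and its run-time instantiation could look like. The names (Signal, LexiconEntry, instantiate) and the relative-timing scheme are assumptions made for illustration, not Greta's actual API.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a behavior lexicon in the spirit of the Greta
# platform described above; structure and names are illustrative assumptions.

@dataclass
class Signal:
    modality: str   # e.g. "gesture", "head", "gaze", "face"
    shape: str      # identifier of the concrete behavior, e.g. "nod"
    start: float    # onset, as a fraction of the intention's time span (0..1)
    end: float      # offset, as a fraction of the intention's time span (0..1)

@dataclass
class LexiconEntry:
    intention: str                          # communicative intention realized
    signals: list[Signal] = field(default_factory=list)

# One example entry: an "affirm" intention realized by a head nod
# temporally coordinated with a smile.
LEXICON = {
    "affirm": LexiconEntry("affirm", [
        Signal("head", "nod", 0.0, 0.6),
        Signal("face", "smile", 0.2, 1.0),
    ]),
}

def instantiate(intention: str, t0: float, t1: float) -> list[tuple[str, str, float, float]]:
    """Map an intention spanning [t0, t1] seconds to concretely timed signals."""
    entry = LEXICON[intention]
    span = t1 - t0
    return [(s.modality, s.shape, t0 + s.start * span, t0 + s.end * span)
            for s in entry.signals]

# Usage: realize "affirm" over the speech segment from 1.0 s to 2.0 s.
print(instantiate("affirm", 1.0, 2.0))
```

Expressing onsets and offsets as fractions of the intention's span lets one lexicon entry stretch to fit utterance segments of any duration; sequencing models of the kind mentioned above would then operate over the resulting timed signals rather than over isolated entries.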

During this talk, I will first introduce how we build the lexicon of the virtual character using various methodologies, e.g., corpus annotation, user-centered design, or motion capture data. The behaviors can be displayed with different qualities and intensities to simulate various communicative intentions and emotional states. I will also describe the multimodal behavior planner of the virtual agent platform.

Special Guest

Leonard Talmy (University at Buffalo, USA)

Gestures as Cues to a Target

This talk examines one particular class of co-speech gestures: "targeting gestures". In the circumstance addressed here, a speaker wants to refer to something -- her "target" -- located near or far in the physical environment, and to get the hearer's attention on it jointly with her own at a certain point in her discourse. At that discourse point, she inserts a demonstrative such as this, that, here, or there that refers to her target, and produces a targeting gesture. Such a gesture is defined by two criteria: 1) it is associated specifically with the demonstrative; 2) it must help the hearer single the target out from the rest of the environment. That is, it must provide a gestural cue to the target.

The main proposal here is that, on viewing a speaker's targeting gesture, a hearer cognitively generates an imaginal chain of fictive constructs that connect the gesture spatially with the target.  Such an imaginal chain has the properties of being unbroken and directional (forming progressively from the gesture to the target).  The fictive constructs that, in sequence, comprise the chain consist either of schematic (virtually geometric) structures, or of operations that move such structures -- or of both combined.  Such fictive constructs include projections, sweeps, traces, trails, gap crossing, filler spread, and radial expansion.

Targeting gestures can in turn be divided into ten categories based on how the fictive chain from the gesture most helps a hearer determine the target.  The fictive chain from the gesture can intersect with the target, enclose it, parallel it, co-progress with it, sweep through it, follow a non-straight path to it, present it, neighbor it, contact it, or affect it.

The prototype of targeting gestures is pointing -- e.g., a speaker aiming her extended forefinger at her target while saying That's my horse. But the full range of such gestures is actually prodigious. This talk will present some of this range and place it within an analytic framework.

This analysis of targeting gestures will need to be assessed through experimental and videographic techniques.  What is already apparent, though, is that it is largely consonant with certain evidence from the linguistic analysis of fictive motion and from the psychological analysis of visual perception.
