Technical Program
Paper Detail
Paper: TP-P7.1
Session: Image and Video Modeling
Time: Tuesday, October 10, 14:20 - 17:00
Presentation: Poster
Title: EXTRACTING STATIC HAND GESTURES IN DYNAMIC CONTEXT
Authors: Thomas Burger, France Telecom R&D; Alexandre Benoit, LIS; Alice Caplier, LIS
Abstract:
Cued Speech is a visual code that complements the lip-reading of oral language by adding static hand gestures (a gesture is static in that it can be captured in a single photograph, as it contains no motion). By its static nature, Cued Speech appears simple enough to be automatically recognizable. Unfortunately, despite this static definition, fluent Cued Speech has a significant dynamic dimension due to co-articulation. Hence, reducing a continuous Cued Speech coding stream to the corresponding discrete chain of static gestures is a genuine issue for automatic Cued Speech processing. We present how the biological motion analysis method of [1] has been combined with a fusion strategy based on Belief Theory in order to perform this reduction.
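The abstract does not detail the fusion strategy, but Belief Theory (Dempster-Shafer theory) combines evidence from several sources via Dempster's rule of combination. The sketch below illustrates that rule on a hypothetical two-gesture frame of discernment; the mass values, source names, and gesture labels are illustrative assumptions, not the paper's actual data.

```python
# Sketch of Dempster's rule of combination (Belief Theory),
# the general fusion mechanism named in the abstract.
# Mass functions map frozensets of hypotheses to belief mass.

def combine(m1, m2):
    """Fuse two mass functions with Dempster's rule,
    normalizing out the mass assigned to conflicting evidence."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # contradictory evidence
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    norm = 1.0 - conflict
    return {s: m / norm for s, m in combined.items()}

# Hypothetical sources voting over two candidate gestures g1, g2.
G1, G2 = frozenset({"g1"}), frozenset({"g2"})
BOTH = G1 | G2  # ignorance: mass on the whole frame

m_shape = {G1: 0.6, BOTH: 0.4}            # e.g. a hand-shape cue
m_motion = {G1: 0.5, G2: 0.3, BOTH: 0.2}  # e.g. a motion cue

fused = combine(m_shape, m_motion)
print(fused)  # fused masses sum to 1; g1 is reinforced by agreement
```

Mass left on the full frame `BOTH` models source ignorance, which is what distinguishes Belief Theory from a plain Bayesian vote: an undecided source weakens neither hypothesis.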