Monday, November 19, 2012

Disembodied pronunciation: computer-generated, animated images of learners' inappropriate articulation


Clip art: Clker
This may start a new series of blog posts focusing on amazing-looking pronunciation techniques that, from a HICPR perspective, are so thoroughly disembodied or "dys-haptic" (generally depending heavily on visual modalities alone, lacking a somatic, physical basis) that their chances of working are probably not all that good, at best, such as this one:
"Improvement of animated articulatory gesture extracted from speech for pronunciation training," by Manosavan, Katsurada, Hayashi, Zhu, Nitta of Toyohashi University, a paper from the 2012 IEEE Convention--available for 31 bucks to nonmembers. (Have not read the full paper, just the abstract. My general policy is to pay for no research papers that cost more than 6 Starbucks Vente Carmel Frappuccinoes.) Computer-assisted Pronunciation Training (CAPT) is probably the future of the field, but a system that creates a moving cartoon-like representation of what a learner is doing wrong and then juxtaposes that with an animated image of how to do it right cannot possibly work effectively or efficiently-expect perhaps for those who are CAPT designers and gamers. (What do they need appropriate pronunciation for anyway?) 

However, if that video image were to be merged with "haptic cinema" technique and technology (linked is a very "a-peeling" example, in fact!), they may still be on to something.
