Friday, November 30, 2012

The "music" of pronunciation teaching: Just "duet!"

It would be difficult to find anyone who does not support the use of music in language teaching, for any number of reasons. For the most part the rationales are common sense and intuitive--and backed by generations of validating classroom and extra-classroom experience. But here is a study by Lindenberger and colleagues at the Max Planck Institute (reported by Science Daily) that goes in the other direction, examining the synchronous brain activity that is evident in "making music together," in this case two guitarists playing a duet while connected up to EEG technology (including the usual bathing caps with dozens of wires attached!).

There are similar studies of "duets" of conversationalists, lovers, mothers and infants and others, which show coordination or mirroring of minds and brains. Likewise, studies of empathy show analogous "sync-ing" between participants. (In EHIEP work, there is extensive mirroring, complementary background music and use of music in supporting rhythmic practice.) Can you imagine a more effective occasion for anchoring of new or changed pronunciation than when instructor and learner are locked in (neurophysiologically and pedagogically appropriate) synchronized dance from across the room--making music together? That is music to more than the ears--and not a bad place to begin in understanding when instruction enables uptake and when it doesn't. Take note . . . and your mp3 player.

Thursday, November 29, 2012

Let's (not) get (too) physical in pronunciation teaching!

With apologies to Olivia Newton-John, I still get that response occasionally in workshops and in reaction to blogposts. The focus of HICPR is not on developing a "physical" method or approach to pronunciation teaching but rather on ensuring that the body is given an appropriate place in the process, especially with the development of technology and haptic-grounded virtual reality. Those who are not by nature "connected" to their bodies, whether because they (a) don't listen to the body much at all or (b) are overly sensitive to how it feels and looks, may not be at ease in the "haptic" lesson or in integrating movement, touch and general body awareness into their work.

Have done a couple of earlier posts related to mindfulness theory, meditation practices and body representation. A fascinating study by Dijkstra and Barelds of Groningen University, entitled, "Examining a model of dispositional mindfulness, body comparison, and body satisfaction," suggests something of a different approach to better orienting learners and instructors to haptic engagement: dispositional mindfulness training. The research demonstrated " . . . a positive relation between mindfulness and body satisfaction: as individuals are more mindful, they are more satisfied with their body . . . consistent with the fact that non-judgment, a central component of mindfulness, is also highly relevant to the construct of body image . . . "

The key element there is "dispositional," part of a general, eminently trainable, response to internal and external pressures and stressors, characterizing one's disposition or style of responding (varying from extremely reactive to non-reactive, for example). Combine that with mindfulness, a general, relatively nonjudgmental awareness or comprehension of what is going on, and you have what appears to be a near optimal mindset for learning pronunciation for any . . . body. Dispositional (haptic-integrated) mindful pronunciation learning: DHIMPL!

Some of that is embodied in EHIEP today, the felt sense of confident, comfortable, (dimpled?) managed pedagogical movement, but it should also be the model underlying language instruction in general. The secret to getting there is your point of departure, Lessac's dictum: Train the body first!

Wednesday, November 28, 2012

Aiming at good pronunciation: on the Q(E)T

Always looking for ways to enhance haptic anchoring, I came across some interesting new research by Wood and Wilson of Exeter University using Quiet Eye Training (QET), a well-established technique for helping one (especially professional athletes under pressure) aim at (or focus attention on) a target. The training assists the shooter in putting distraction out of mind. (Some studies report even more generalized impact on everyday cognitive functioning and sense of control as well.)

This is potentially a good fit with other attention management strategies in the EHIEP approach. Early on in the development of the system we experimented with some eye-tracking techniques similar to those used in OEI but discovered that they were a little too "high octane" for general pronunciation work. (In working with "fossilized" individuals I still use some of those regularly, however.) Since QET does not require instructor presence when the shot is taken, it may be possible to use it in some form. Will figure out how to adapt QET training to better enable learners to anchor what they do on the q.t., and get back to you.

Monday, November 26, 2012

Physical vs social domains in pronunciation work

Ever wonder why students may not be able to use a new piece of pronunciation in pair work or controlled conversation or on their way out the door? Forthcoming research (already!) published in NeuroImage by Jack, Dawson, Begany, Leckie, Barry, Ciccia and Snyder, "fMRI reveals reciprocal inhibition between social and physical cognitive domains" (in the brain), suggests part of the answer: "Regardless of presentation modality, we observed clear evidence of reciprocal suppression: social tasks deactivated regions associated with mechanical reasoning and mechanical tasks deactivated regions associated with social reasoning."
The implications of that for integration of pronunciation work, both in the lesson and in the brain of the learner, are worth an "uninhibited" reexamination. For one, perhaps insight, explanation, meaningful conversations, "lite drills" and metacognitive encouragement are not enough for efficient "uptake" to occur. Likewise, decontextualized "body drills" that focus primarily on the mechanics of articulation are not going to automatically bridge the "domain gap" either--in the classroom or on the street. Optimal learning in both domains must go on either simultaneously or in some kind of intricate dance that achieves both outcomes. Haptic integration is one answer to that, where the "channels" of communication and change are not in quite such direct competition. The only problem is often just overcoming the inhibitions of the "haptically challenged."

Sunday, November 25, 2012

Play it again, HIRREM! (A musical tone approach to balanced pronunciation learning?)

With apologies to Humphrey Bogart, one of the basic "learning" assumptions in most training systems is that some degree of balance between relevant areas of the brain, whether left~right, top~bottom or front~back (or all of those), is optimal. How that is to be achieved is the question, of course. As blogged earlier on several occasions, brain research (e.g., as in neurotherapy) is now beginning to offer alternatives or at least complements to cognitive and physical exercises or disciplines: brain frequency "adjustment."

In a new study by Tegeler and colleagues at Wake Forest University (summarized by Science Daily), musical tones were mirrored back to the brains of subjects to achieve a more balanced overall brain frequency profile--which appeared to successfully lessen insomnia, at least for a month or so. Tegeler does note that " . . . the changes observed with HIRREM, could be due to a placebo effect. In addition, because HIRREM therapy involves social interaction and relaxation, there may be other non-specific mechanisms for improvement, in addition to the tonal mirroring."

Now granted, this specific technology may not directly impact a learner's ability to learn new or repaired sounds--or even "HIRREM" better--but it is clearly on the right track. (Nothing to lose sleep over if you can't spring for the 30k to get you a " . . . high-resolution, relational, resonance-based, electroencephalic mirroring or, as it's commercially known, Brainwave Optimization™ . . . " set up!) But multiple-modality and balanced "all-brain" engagement is the key to pronunciation change. It's coming. Keep in touch.

Saturday, November 24, 2012

An alternative (hand) approach to (haptic) pronunciation teaching!

Have done a few posts on "exercise persistence" research, trying to figure out how to help learners practice consistently. Among the variables will always be something like "self-control or self-discipline," along with other socially-oriented factors. One of the reasons I have found such studies of interest, of course, is the connection to movement and physical exercise in haptic pronunciation work.

A new review article by Denson, DeWall and Finkel (summarized, of course, by Science Daily!) refers to a study by Denson in which he (simply) had subjects use their non-dominant hand (in this case the left, as all were right-handers) for two weeks for various "normal" functions, to see whether that might enhance self-control and reduce aggression. It worked! Denson doesn't say exactly why . . . but we can maybe help him.

In the "brain business," such organizations as Luminosity and Brain Gym and many others, use a wide range of "out of the box" but proven, physical, bi-lateral hand and arm movements to manage thought in many forms, from emotion to brainstorming to creativity. They often report or claim the same general effect.

In EHIEP work, for rhythm, intonation, fluency and (some types of) integration, the left hand moves across the visual field to the right hand. The left hand, in effect, "conducts" intonation, pitch and pace functions during correction and practice--and regulates overall speaking performance. The right hand (on the other hand) serves as the anchor for word, phrase, sentence and discourse focus. Denson's research is fascinating. Clearly, some of the effectiveness of the EHIEP system may also be due (simply) to increased activation and engagement of the left hand and arm. We'll take it, whatever the explanation.

Will see if I can work out a protocol, something a bit out of the (fuzzy-haptic) box, to moderate the sometimes mildly (or wildly) "aggressive" reactions of the "hyper-cognitive" or "hapticaphobic" to haptic techniques--before they walk out of the next workshop . . . (See previous post on haptic "fuzziness.")

Friday, November 23, 2012

Do-it-Yourself! haptic-integrated pronunciation teaching


Haptic work is, by definition . . . touching! As explored in several previous posts, there is a wide range of conditions under which haptic anchoring of movement, visual images and sound may or may not be effective in instruction. (According to new research by Patterson and colleagues at the University of Leicester, summarized by Science Daily, there may even be a bias in favor of those of us over the age of 65 in responding to the typical "fuzziness" of haptic cinema!)

One of the most striking discoveries in our work has been the realization that some of the EHIEP pedagogical movement patterns can be taught well face-to-face but others may be better introduced by a video model, especially vowels, vowel "compaction" and intonation. That video model can be the instructor, him or herself, or someone else--such as in the EHIEP system of videos and student workbooks that I am developing, of course! Why that should be is complex but understood. (See this blogpost by Grant on http://filmanalytical.blogspot.ca/.)

In essence, face-to-face modelling is emotionally and interpersonally very powerful. In some contexts, either because of the personality of the instructor or the class, video is a better option for perhaps half of the PMPs. One reason for that is the impact of eye contact on mirroring in a classroom setting: vivid, "moving" visual feedback from students, whether negative or positive, can dramatically undermine an instructor's ability to teach PMPs. Once they are introduced, however, classroom use of a PMP to anchor vowels, stress, rhythm, intonation or pitch/volume/pace seems to be less susceptible to disruption.

Bottom line: It takes training to do pronunciation work of any kind effectively or efficiently. Either you get trained or have somebody else do it for you, either in your program or through technology. Haptic video and its post-production technology are very promising. I am tempted to use a term like "CAPT Video," Computer-Assisted-Pronunciation-Teaching with Video, were there not already a near-relevant song by that name . . .

Wednesday, November 21, 2012

Pronunciation & body & media fit

If you have been reading the blog occasionally, you are aware of the basis of the EHIEP model: (a) initial pronunciation teaching and (b) practice outsourced to video with subsequent (c) integrated use in the classroom, (d) strong haptic engagement (movement and touch) and (e) somatic or body awareness and training. For the latter piece, body monitoring, maybe what we need is something like the "BodyMedia FIT" system. I love the company's come-on line: "Your body talks. We listen." Wish I had the spare change to buy one of those arm bands, just for fun. The research on effectiveness of the technology, using web-based systems, is interesting. "Body training," in general, is biofeedback of one kind or another. This type of technology could easily be adapted to provide constant feedback on the quality of movement, relaxation, energy expenditure and body resonance. For much less money and hassle--and with a modicum of self-discipline and persistence--learners can have the same kind of integrated experience of speaking and pronunciation change with us. The future, however, is with technology such as this linked to CAPT (see previous post) and haptic cinema. But if you have difficulty consistently managing your "current classroom body image" and its caloric correlates, consider "arming yourself" with such a band.

Monday, November 19, 2012

Disembodied pronunciation: computer-generated, animated images of learners' inappropriate articulation


May start a new series of blogposts focusing on amazing-looking pronunciation techniques that, from a HICPR perspective, are so thoroughly disembodied or "dys-haptic" (generally depending heavily on visual modalities alone, lacking a somatic, physical basis) that their chances of working are probably not all that good, at best. Such as this one:
"Improvement of animated articulatory gesture extracted from speech for pronunciation training," by Manosavan, Katsurada, Hayashi, Zhu, Nitta of Toyohashi University, a paper from the 2012 IEEE Convention--available for 31 bucks to nonmembers. (Have not read the full paper, just the abstract. My general policy is to pay for no research papers that cost more than 6 Starbucks Vente Carmel Frappuccinoes.) Computer-assisted Pronunciation Training (CAPT) is probably the future of the field, but a system that creates a moving cartoon-like representation of what a learner is doing wrong and then juxtaposes that with an animated image of how to do it right cannot possibly work effectively or efficiently-expect perhaps for those who are CAPT designers and gamers. (What do they need appropriate pronunciation for anyway?) 

However, if that video image were to be merged with "haptic cinema" technique and technology (linked is a very "a-peeling" example, in fact!), they may still be on to something.

Sunday, November 18, 2012

Got an itch to teach pronunciation?

This is fun. Several of the pedagogical movement patterns in the EHIEP system involve either scratching (or brushing) one hand with the fingernails (or just fingers) of the other hand, as the sound is articulated. Have known for some time that when it is demonstrated by the instructor (on video) and learners are asked to mirror that movement, the pattern catches on very quickly. Now we know why. Research by Holle, Warne, Seth, Critchley and Ward of the Universities of Sussex and Hull (abstract on PNAS website) even suggests which personality trait might respond more readily to seeing someone else scratch an itch: neuroticism (tendency to respond disproportionately to negative emotions.)

Research on mirror neurons alone demonstrates just how powerful the impact of witnessing movement or gesture by another person can be. In this study the extension to tactile/touch is important for understanding just how haptic-integrated pronunciation instruction works, especially the potential effectiveness of pronunciation-based haptic anchors (gesture which includes hands touching as a stressed syllable of a word is spoken.)

Not sure exactly how neuroticism figures in, but in some of the protocols (sets of training techniques) we do use contrasting sets of positive and negative terms, anchored on opposite sides of the body or visual field, e.g. tough/nice, tricky/easy, puzzling/beautiful, complicated/fascinating. The "negatives" may actually resonate more with some! So don't be too concerned if you get an itch to get "tough" on your potentially neurotic students or colleagues who are critical of our work, who see it as too puzzling, tricky or complicated . . . 

Saturday, November 17, 2012

Your pronunciation teaching going off in all directions? Good!

At least a-parent-ly! As reported in previous blogposts, semiotically, almost any framework for personality, behaviour, groups or the nature of the visual field can be positioned on north-south, east-west axes. Here's another example. A study led by Hunter at the University of Virginia, summarized by Science Daily, provides a categorization schema for describing four "family cultures" in the contemporary US. Each category " . . . represents a complex configuration of moral beliefs, values and dispositions -- often implicit and rarely articulated in daily life -- largely independent of basic demographic factors, such as race, ethnicity and social class." Here they are, followed by my interpretation of their general "direction" in parentheses:

 A. The (American, idealistic) dreamers (27%) " . . . defined by their optimism about their children's abilities and opportunities." (North = Externally oriented, more meta-cognitive, extrovert-ish)
B. The (less educated, pragmatic) detached (21%) "Let kids be kids and let the cards fall where they may." (South = Internally oriented, less-conscious, introvert-ish) 
C. The (liberal) engaged progressives (21%) " . . . guided . . . by their own personal experience or what "feels right" to them." (East = Change oriented, creative) 
D. The (conservative/traditional) faithful (20%) " . . . seek to defend and multiply the traditional social and moral order." (West = Stability and structure-oriented) 

The four stereotypes presented in the (necessarily) brief summary are wonderfully artificial--especially in how they covertly reintroduce race, ethnicity and social class, despite the disclaimer, in the form of parenting cultures. (It is worth reading just for the entertainment value. I assume the full research report is still also worth reading for a more complete, scholarly contextualization of the study.) What is relevant is the basic set of four "directions," based on "beliefs, values and dispositions."  The "finding" of the research appears to be that the culture is fracturing, with ominous consequences, of course. Substitute in "learners" for children/kids and "cognitive/behavioural" for progressive/conservative above. 

The same principle applies to any integrated system, including pronunciation teaching, especially how it is experienced by the learner. For an interesting exercise, identify your "coordinates." (In this model a "perfect program" might even be at 0/0, in fact, although at times in the process it may veer off radically in one direction or another for various intermediate learning outcomes.) I'd position EHIEP, by design overall, generally at about 10 degrees North latitude and 20 degrees East longitude. In other words, requiring somewhat more public risk taking/performance and also more ongoing experience of change, but still not too far off center, particularly in reference to language structure and private, "inner speak."

Thursday, November 15, 2012

FLASH! Conscious suppression of pronunciation work!

Continuous Flash Suppression (CFS) technology could well be in the future of pronunciation teaching, based on research by Hassin, Sklar, Goldstein, Levy, Mandel and Maril at Hebrew University, as reported in Science Daily. CFS is described as " . . . one eye is exposed to a series of rapidly changing images, while the other is simultaneously exposed to a constant image. The rapid changes in the one eye dominate consciousness, so that the image presented to the other eye is not experienced consciously." What they discovered was that the material not experienced consciously was still processed and responded to non-consciously in various ways.

Their conclusion: " . . . humans can perform complex, rule-based operations unconsciously, contrary to existing models of consciousness and the unconscious." Avoiding conscious interference with pronunciation change is big. Now that may sound like a candidate for your "Well . . . duh!" file (A finding that is not only common sense but probably not worth the grant money blown on coming up with it.) Two important developments here, however:

  • First, so much of what happens between instruction and spontaneous performance in pronunciation work is unconscious--or at least not the subject of research today. Even the focus in HICPR on the "clinical" is still a relative "outlier" in this field, although not in some related disciplines. We should be able to study that more systematically. 
  • Second, all methodologists assign a great deal of the work to the "dark side," whether they make that explicit (consciously) or not, some more than others, such as Lozanov . . . or Acton! We need to stop suppressing the use of several great techniques that have been proven by experience to work the subconscious effectively.

Would love to get ahold of some of that CFS technology and try it out with haptic anchoring of academic word list vocabulary in time for TESOL in Dallas. Just imagine the impact of a pedagogical movement pattern accompanying the "constant" image of the acronym "CFS." Hard to suppress the excitement already . . .   

Wednesday, November 14, 2012

Pronunciation change readiness: Meditate amygdala affect collar? Better pronunciation should "faller!"


This one is a bit of a stretch . . . stick with me. The impact of affect and emotion on pronunciation, both acquisition and production, is reasonably well understood--but how to manage it is not. One of the principles or assumptions has been that management of emotion should go on simultaneously with instruction, and that a learner's affective state (relatively out of consciousness) tends to be pretty fragile and easily disrupted. (That certainly seems to be the case with one's "haptic state," at least. A number of studies have been reported on the blog pointing to the importance of attention management during haptic work.)

In new research by Desbordes and colleagues at Boston University, summarized by Science Daily, on the lasting impact of meditation training, it has been demonstrated that the effect of mediating amygdala responsiveness--through two types of standard meditation work--may persist for some time, the "physical" changes to the brain being clearly evident in increased mass and activity, or lack thereof, in the targeted area.

What that means for us, in principle, is that some kind of brain "training" (or maybe analogous neuro-therapeutic treatment) could have real promise for enhancing pronunciation change. The key here is that what is done (a) impacts general emotional responsiveness, and (b) may well be unrelated to what is considered "normal" classroom instruction, as long as it assists the learner in achieving a more "amiable (and less hyper-reactive) amygdala." Now if that immediately strikes you as utter nonsense . . . you, yourself, may be a good candidate for a little mindful, "amygdala tune-up"!

Monday, November 12, 2012

EHIEP "haptic video" system development update!


By late February everything should be ready for use in local programs, most anywhere on the planet. At this point, these courses could be offered in several formats:
  • A one-hour introductory session and then 
  • 8 or 9 weeks of classes, one module per week (or just selected modules, relevant for that class)
  • Each class would begin with a 30-minute instructional video, and could then involve either
    • immediate in-class follow-up by the instructor, or
    • assigned homework, or
    • (simply) integrated use of the techniques by the instructor in subsequent speaking, listening or vocabulary instruction.
  • For each module there are 3 homework practice videos, accompanied by a section from the student workbook. 
  • There are also about 12 five-minute mini-modules for selected consonants.
EHIEP can be done
  • online, 
  • in schools or at informal venues,
  • as independent study,
  • by trained or untrained instructors.
Based in part on the recent "TED" blogpost, will have some new video to introduce the EHIEP system, etc. to prospective students. Have been talking with the university about collaboration in some venues to get official certificates or join in w/advertising, etc. Will also set up a "profit sharing" framework for other potential partners who run or sponsor a class.

Am setting up one-day teacher training workshops in a number of places, beginning in April. (We are doing one already at the TESOL convention in Dallas.) The idea there will be to do a day of training at relatively low cost to participants and then make available the online video and materials, either by download or subscription. Will announce those here as they are confirmed.

If you'd like to try out a specific, pre-publication EHIEP "haptic" video in your class, let me know. (wracton@gmail.com)

Sunday, November 11, 2012

The value of haptic pronunciation teaching

How would you convince your students or colleagues of the advantages of going "haptic?" Now assuming that the list of features in the recent blogpost doesn't quite do the job, what will work? We know that getting a learner or instructor new to the idea to come along with us and experience a couple of the protocols in a demonstration is best, but, at least up to now, that has required that one of us be physically present to lead that experiential introduction. Ultimately, to get the word out, the appeal or "pitch" must be delivered by video.

Research by Usher of Tel Aviv University and colleagues, summarized by Science Daily, suggests something of the way to do that. (The catchy title of the SD summary: "Going With Your Gut Feeling: Intuition Alone Can Guide Right Choice, Study Suggests.") Subjects were required to watch a fast-moving video focusing on two alternative products or actions, presented with no clear logical, linear or conceptual organization, and then asked to quickly pick one, in effect using their "intuitions." What they found was that judgments were amazingly accurate, the better alternative being selected. The point being that perception of value goes on in very complex ways, in addition to careful, conscious calculation. (Subsequent research will apparently further examine just where and how in the brain that happens.)

I have been using the model of the short (6-minute or so), high-impact TED talk for some time now in trying to develop a new approach to introducing EHIEP, one that "moves" the viewer, in several senses. Preliminary efforts have met with some limited success. The key is to present the viewer with a (seemingly unordered) set of images that produce an immediate, less conscious response of high quality and value, not simply a reasoned, thoughtful, metacognitive assessment. Here is a great 4-minute promotion of TED 2012, A taste of TED, that does it very well, not surprisingly. A little more work to do before I audition for TED, but will post a "TED-wannabe" video here for your "gut reaction"--once I get one that feels right.


Saturday, November 10, 2012

ESP: "Social rewards" to encourage pronunciation practice and change!


How's this for a conclusion? " . . . a person performs better when they receive a social reward after completing an exercise. There seems to be scientific validity behind the message 'praise to encourage improvement'. Complimenting someone could become an easy and effective strategy to use in the classroom and during rehabilitation."  Really?

As self-evident and "Pavlovian" as that may sound, there is actually an interesting twist in the research by Sugawara, Tanaka, Okazaki, Watanabe and Sadato, entitled, "Social rewards enhance offline improvements in motor skill," as reported by Science Daily. Two key terms there: offline and motor, meaning performance on a keyboard finger dexterity task. Those who were praised after a trial, regardless of their relative performance, tended to do better on the next one; those who weren't, tended not to, at least not as much. (Their earlier research had established the concept that a cash reward had about the same effect--in the same area of the brain.)

The extensive research on the effect of praise for behaviour other than "offline motor" skills is ambiguous at best. Verbal reinforcement, like all instruction, must be thoroughly contextualized and situated. How and when to provide praise, as opposed to "corrective" feedback in pronunciation work, is a skill that develops with experience and constant, informed reflection on classroom practice (such as watching yourself teaching on video regularly!)

To the extent that pronunciation change is "motor-based" the research is certainly relevant. That is, of course, especially the case in "haptic" work, where learners are given feedback initially (almost exclusively) on accuracy of pedagogical movement patterns (which are done simultaneously as the sound, word or phrase is spoken)--not accuracy of articulation of the sound in question. The explicit movement, touch and body resonance focus in EHIEP, for example, provides an analogous framework for such timely "social rewards" . . .  We need to "cash in" on this, so to speak.

 "Embodied social praise" (ESP!) I like that! Looking good!

Friday, November 9, 2012

In the mood to better manage the milieu during pronunciation work?

Intuitively, everyone from marketers to mothers understands the power of music to alter mood and help manage behaviour. Some language teaching methods, such as Suggestopedia, have been very intentional in what kind of music is applied, how and when. Previous posts have addressed the value of using music synchronized to movement in training and practice in kinaesthetic and haptic-integrated work. (Some of the EHIEP videos are being redesigned to be strongly music/rhythm-synchronized.)

Like many of you, I have experimented over the years with background or "mood" music in a wide range of classroom settings. In general, I think it is fair to say that it always "worked." The problems, however, were simply time and technology: time, in that it took so much of it to identify and prepare appropriate pieces and excerpts; technology, in that the equipment at the time was so cumbersome that often just the effect or distraction of operating the system during a lesson was enough to more than cancel out any potential benefit. (At one point I did have a great system in a mammoth classroom with a 6-CD capacity that seemed to be very effective at times.)

A 2011 study by Jolij and Meurs of the University of Groningen (Summarized by Science Daily) again points to the potential of background/mood music in our work. That research demonstrates dramatically how music can alter perceptions and expectations--based not just on experience, but mood (affected by music) as well. Although the study itself was relatively simple, basically varying speed of identifying happy and sad icons, depending on background music, the underlying effect appeared to be strong. Now that the technology is readily available to quickly create collections of songs with seamless transitions that complement the tasks involved, it is clearly time to reconsider managing the milieu more systematically--with music. 

Thursday, November 8, 2012

Mindful, embodied (less-stressful) monitored speaking!

For some learners, monitoring their spontaneous speech can be very problematic, interfering with fluency . . . or ability to think! In many schools of singing instruction, kinaesthetic monitoring is standard practice. I have done a few blogposts on kinaesthetic monitoring and mindfulness. When you combine embodiment theory with mindfulness, not unlike what is suggested by Stressreductionatwork.com below, you get an interesting heuristic that in various forms or adaptations can be useful in our work: (italics, mine)

"As you speak, keep your main focus on your body sensations, while focusing on what you are saying secondarily. Notice the breath as it enters your body, and be aware of it as it leaves. Notice the touch points of the bodyyour sit bones and shoulders on the chair, your feet on the floor, your hands in your lap. Don’t be as concerned about what exactly it is you need to say or how people will perceive you as you say it. Your words will be just as comprehensible as before, but they’ll be more in tune with your inner presence, integrity and authenticity. One way of visualizing this is that as you speak, let the words come more from your body and less from your head."

Those are typical mindfulness-type suggestions, attention-management strategies. The debilitating effect of stress on pronunciation in various contexts is well-established. Experience has shown that the "felt sense" that embodied mindfulness techniques create can be helpful, especially for the chronically stressed and uptight in dealing with their self-monitoring (or not over-monitoring) of their pronunciation. Try it out first at your next contentious committee meeting, post-election political discussion or intimate gourmet dinner. 

Wednesday, November 7, 2012

The scent of pronunciation work: what you don't know can help you!


Previous posts have looked at the potential impact of olfaction on pronunciation instruction. A new study by de Groot, Smeets, Kaldewaij, Duijndam and Semin of the University of Utrecht, summarized by Science Daily, looked at the role of scent in signalling emotion. One conclusion: "The findings provide support for the embodied social-communication model, suggesting that chemosignals act as a medium through which people can be 'emotionally synchronized' outside of conscious awareness." Basically, subjects' reactions were recorded as they sniffed sweat collected earlier from people in various states of stress. Not surprisingly, as we all know from lived experience, body odor communicates, often quite unambiguously.

So what? Apparently, if a student is stressed, fearful or threatened, that can covertly contaminate the lesson with the same emotional unease. Is that important? Research on multiple modality learning would suggest that it certainly can be. Can that be mediated with "de-stressing" exercises and techniques? (Check with your local "Affective" colleague!) To some extent, yes, but a more practical solution at this point may be to just mask it.

Also as noted earlier, I have experimented with mixed success over the years with a number of room scents or hand creams. Some students, of course, know how to use chemo-signals, such as perfumes and pheromones, very effectively! This research reaffirms the concept that aspects of embodied social communication which function generally outside of conscious awareness such as body motion and scent . . . are certainly nothing to sneeze or sniff at . . .  

Monday, November 5, 2012

Merging pronunciation with posture and gesture (PGMs)

Have you been watching any of the US presidential debates or video clips of both candidates? In doing a delightfully biased analysis of the use of body movement by both men, researchers at NYU and UC Berkeley, summarized by Science Daily, set the stage this way:

 "Physical motions of speakers determine how voters feel about them. How they move influences whether you believe they are standing behind what they are saying -- or if you get the impression they are simply repeating a memorized list of terms. A speaker's physical movements -- arms, legs, shoulders, and facial expression -- can undermine or even contradict the verbal message."

Setting aside the results of the study, what is of particular interest are PGMs, " . . . full-body gesture movements, also called Posture-Gesture Mergers (PGMs), occurred when the candidates were stating their own beliefs and lauding their own accomplishments, with emphasis added in their beliefs by those body motions."

The concept of the PGM is actually a good way to characterize haptic (-integrated) anchoring: engaging the whole attention of the learner, using gesture, the visual field, posture and pronunciation. I will use that acronym from now on. In fact, had I a video camera handy, would love to whole-heartedly and whole-bodily PGM the strength of our belief in that regard and go on to "laud" some typical EHIEP stories and accomplishments.


Font of pronunciation work? A difference that makes a difference.


Research from several disciplines addresses the impact of font choice on everything from handwriting, to reading comprehension, to emotional reaction . . . to mediation of political perspective. Take a little tour through the pronunciation materials you have at hand. (If you are a "Phon-haptician," that has particular relevance, of course!) And then, read over the brief synopsis of font psychology as it relates to website design presented on TemplateMonsterBlog.

Most pronunciation change systems make explicit use of font design and manipulation, even if that only involves basic size, spacing, upper/lower case, super/sub-scripting, color, bold face, italics, underlining, etc. From a "haptic" perspective, the key is not just the impact of the visual display itself, but how that interacts with the rest of the multiple modality system. As many previous posts have explored, visual "clutter" can have a powerful, often neutralizing effect on haptic anchoring of a word or sound.

Taking the TemplateMonster approach, the answer may be to figure out how to represent words graphically, probably in isolation for some procedures, with optimal emotional "sculpting." Hmmm. Pronunciation . . . Pronunciation . . . Pronunciation . . . Pronunciation . . . Pronunciation . . . Pronunciation . . . Which one do you like? (Those are my only easy choices here!) Or do you prefer to create the graphic images in any of several styles of handwriting? With new computer interfaces that is a piece of cake now as well. As we design the EHIEP visual interfaces and student workbooks, that issue is, understandably, very important.

From the research, your first assumption should be that your favorite font-age for use in anchoring work is very likely not the same as that of many of your students. Giving them at least some responsibility for and guidance in creating their own visual schemata for pronunciation change may be key. Our experience is that, for whatever reason, it does make a DIFFERENCE!

Sunday, November 4, 2012

Anchoring pronunciation: Do you see what you are saying?


You can, in fact--if you are pronouncing a sound, word or phrase using EHIEP-like pedagogical movement patterns, PMPs (gestures across the visual field terminating in some form of touch by both hands.) Not only CAN you, according to research by Xi and colleagues at Northwestern University, summarized by Science Daily, but your eyes strongly interpret for you the "feeling of how it happens." The visual "character" of the dynamic gesture (its positioning, fluidity, distance from the eyes and texture on contact with the other hand) may well override the actual tactile feedback from your hands and proprioceptive "coordinates" of movement from your arms.

In the study, subjects were simultaneously presented with video clips that slightly contradicted what their hands and arms were doing. It was clearly demonstrated that even though subjects were also instructed to ignore the video and concentrate on the actual positioning, movement and related information about touch and weight coming from the hands, the "eyes have it." What they were seeing reinterpreted the other incoming sensory data.

As noted in earlier posts, visual input can often override other modalities. What is "new" here and contributes to our understanding of how and why haptic-integration works is that the subjects' perception of the EHIEP sound-touch-movement "event" would appear to be strongly influenced by the style or flair or precision and consistency of the PMP. That has been one of the key problems in creating the video models: insufficient clarity and consistency in the execution of PMPs (by me!)

This is both good news and bad news. Good, in that the PMP is, indeed, potentially a very powerful anchor--and that the visual "feel" of each can contribute substantially to anchoring effectiveness. Bad, in that for maximal effectiveness the video/visual model needs to be exceedingly precise and consistent. (I have explored the use of Avatars instead of me but there are even bigger potential issues there.) Preparing/getting in shape now to do a new set of videos after the holidays, based on this and similar research. Can't wait to see what those feel like!

Saturday, November 3, 2012

Treasuring listening: near-ear training for pronunciation work

Good TED talk by Julian Treasure. For enhanced interpersonal listening he ends with the acronym RASA (Receive, Appreciate, Summarize, Ask), your basic attending skills--and even world peace! What is worth "listening to," however, is how he gets there, what he terms "savouring, mixing and listening positioning." In essence, "savouring" is focusing for a period of time either on one sound in your environment--or silence--for a couple of minutes; "mixing" is focusing briefly on the sounds in your environment, one after another for maybe half a minute each; "positioning" is the process of intentionally listening with a purpose or conceptual "filter" in mind (for example, to very consciously listen empathetically or critically or sympathetically.)

Now I'm not quite sure how you do the third (positioning) in our work, but the first two forms of auditory attention management, savouring and mixing, are intriguing. Those appear to be apt, applicable analogs for what is involved in "training the body first" to attend to the felt sense of movement and somatic resonance (good vibrations in the vocal tract and upper body.) I have not systematically worked with pre-pre-listening such as that described by Treasure, but it sounds like a perfect fit. First chance I get I'll "embody" some of it in an upcoming EHIEP session and report back. Hear, hear!

Friday, November 2, 2012

Minimal pairs booed! Bad, Bud?


Say it ain't so! And if so, so? Using minimal pairs in reading and (by extension) pronunciation instruction to teach phonic rules has been the "go to" technique for generations. Now a new study by Apfelbaum, McMurray and Hazeltine at the University of Iowa suggests that phonic rules are learned much more efficiently when encountered with "variability," to quote the researchers--as reported in Science Daily:

"During the study, one group of students learned using lists of words with a small, less variable set of consonants, such as maid, mad, paid, and pad. This is close to traditional phonics instruction, which uses similar words to help illustrate the rules and, presumably, simplify the problem for learners. A second group of students learned using a list of words that was more variable, such as bait, sad, hair, and gap, but which embodied (italics, mine) the same rules."

EMBODIED! See that? Maybe that is why it worked--or maybe not? Caveat emptor: They used a commercially available system called Access Code, which has been around for some time, to provide the treatment for the study.

This is going to take some time to process, of course . . . At a minimum, will first have to try it out in several different contexts and compare. There!

Thursday, November 1, 2012

Pronunciation improvement: analyze or empathize?

Just not at the same time, according to new research on the interplay between analytic and emotional processing in the brain (summarized by Science Daily) by Jack and colleagues at Case Western Reserve. One of the conclusions: "Empathetic and analytic thinking are, at least to some extent, mutually exclusive in the brain." Turns out, both types of processing occur in the same "channel," in the same neurological network, so to speak. (An earlier post, The change-the-channel fallacy, addressed some similar questions in relation to basic pronunciation change, and why, for example, oral repetition as a strategy to correct an "incorrect" articulation may not be effective in many cases.) That also explains, in part, how meta-cognitive (analysis, monitoring, reflection, planning) activity can compete with embodiment (affect, movement, felt-sense of articulation and vocal resonance) for the attention of the learner. It's sort of analogous to just not having enough "bandwidth" to handle all the messaging.

Or it would be something like trying to listen to Fraser and Dornyei simultaneously . . . Fraser in your right ear; Dornyei, in your left--which would be a terrific idea for a symposium, by the way. (Dornyei's new website is a gold mine of free downloads--as is Fraser's.)