The 31st Second Language Research Forum
Building Bridges Between Disciplines: SLA in Many Contexts
October 18-21, 2012

Conference Program


Colloquium I:  L2 Speech Perception in Richly Informative Environments


Organized by:
Dr. Luca Onnis
University of Hawaii at Mānoa

Studies geared at improving the perception of non-native phonemic contrasts have focused on presenting speech tokens multiple times, either in isolation (e.g., /l/ versus /r/) or in minimal pair words (e.g., /light/ versus /right/) (e.g., Akahane-Yamada et al., 2004). This approach emphasizes a single acoustic dimension and de-emphasizes, if not eliminates altogether, contextual cues that are potentially very informative. Because natural languages are extremely rich in cues signaling structural properties at many levels of analysis, it has been proposed that successful language learning and processing should capitalize extensively on the integration of such cues, with bottom-up and top-down processes influencing each other in multidirectional ways (McClelland, Mirman, & Holt, 2006; Onnis & Spivey, 2012). The four independent contributions in this colloquium all point to the benefits of a multiple-cue integration approach to learning difficult contrasts in a second language, highlighting the role of cues that are both acoustic and not inherently acoustic, such as sublexical, lexical, and orthographic information, in scaffolding L2 perception. The authors also introduce methodologies that gradually move away from minimal pair training towards more naturalistic tasks in which attention is not necessarily oriented towards categorization per se. As such, these methods may be better tailored to classroom activities.

Learning Foreign Sounds in an Alien World
Lori L. Holt, Sung-joo Lim, and Ran Liu
Carnegie Mellon University

Laboratory speech training studies demonstrate that adults maintain plasticity that supports non-native phonetic category acquisition. However, these studies have most often involved explicit training with overt categorization responses and explicit response feedback, characteristics atypical of natural speech experience. We investigate non-native speech learning within an active videogame task that involves no overt speech categorization and no categorization-performance feedback. Paradoxically, directing listeners’ attention and action away from categorization appears to influence adults’ non-native speech learning very efficiently, even when sounds are embedded in continuous speech. This approach makes it possible to investigate simultaneous learning across multiple levels of statistical regularity without presuming the functional units across which regularities are computed. We will describe a series of experiments conducted within this task and discuss implications for the nature of speech category acquisition, word segmentation, perceptual cue weighting, and retention of learning in non-native language learning among adults.

Second language phonemes can be retuned by lexical knowledge
Eva Reinisch (1,2), Andrea Weber (2), Holger Mitterer (2)
(1) Carnegie Mellon University, (2) Max Planck Institute for Psycholinguistics
 
Native listeners adapt to non-canonically produced speech by retuning phoneme boundaries by means of lexical knowledge. When hearing "giraffe" with the /f/ replaced by an ambiguous sound between /f/ and /s/, listeners later categorize more steps along an /f/-/s/ continuum as /f/. We asked whether a second language lexicon can also guide category retuning. Dutch and German listeners performed a Dutch lexical decision task including manipulated words like "giraffe". The categorization of minimal pairs (graph-grass) was used as a test. Both native and nonnative listeners showed boundary shifts of a similar magnitude. This suggests, first, that second language phoneme categories can be shifted in a controlled fashion and, second, that lexical representations in a second language are specific enough to support lexically guided retuning. Having shown this effect with phonemes that exist in the learners' first and second languages, we will also discuss new data on effects for unfamiliar phonemes (English /th/).

Orthographic influences on second language phonological acquisition
Rachel Hayes-Harb, University of Utah
Catherine Showalter, Indiana University

Recent auditory word learning studies have provided evidence that second language (L2) learners make inferences about the phonological forms of new L2 words from simultaneously presented orthographic forms. In fact, learners appear to make such inferences even when instructed to focus exclusively on the words’ auditory forms, and even when the orthographic and auditory information conflict. In addition, orthographic contrasts have been shown to improve memory for difficult-to-perceive auditory contrasts. Thus far, these studies have focused on cases where the orthographic forms are presented in an orthography familiar to the learners (i.e., the native and second languages both use the Roman alphabet). We report experiments designed to investigate the extent of this orthographic effect: can L2 learners exploit even unfamiliar orthographic information to learn the phonological forms of new words? We have found that some types of unfamiliar orthographic information support memory for phonological forms, while others do not.

Many Ways to Speech: Phonotactic and orthographic distributional regularities can aid categorical speech perception
Luca Onnis, University of Hawaii
Yoko Uchida, University of Hawaii and Tokyo University of Marine Science and Technology (Japan)

Perceiving speech contrasts in a foreign language can be hard. What cues available to learners could assist this process? A corpus analysis of English revealed that the distribution of speech segments surrounding a contrast that is difficult for Japanese speakers (/l/ versus /r/), and the equivalent distribution of letters in written words, provide a highly useful cue for correctly predicting the contexts in which L and R occur in English words. This predictive role of distributional regularities was then tested on Japanese learners and native speakers of English. Advanced Japanese learners of English had become sensitive to orthotactic information and used it to correctly predict an L or R in pseudowords they had never encountered before. Learners with better knowledge of orthotactics also showed finer speech discrimination for the non-native /l/-/r/ contrast. We suggest novel training regimes that capitalize on the probabilistic distribution of segments and letters as additional non-acoustic cues.
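
To make the idea of orthotactic distributional cues concrete, the following is a minimal Python sketch of the kind of count-and-predict analysis the abstract describes; the toy word list and the single-flanking-letter definition of "context" are illustrative assumptions, not the authors' actual corpus or procedure.

from collections import Counter, defaultdict

# Toy word list standing in for an English corpus (illustrative only).
words = ["tree", "true", "dry", "drip", "sleep", "slow",
         "glass", "grass", "play", "pray", "milk", "girl"]

# Tally which of 'l' or 'r' occurs between each pair of flanking letters.
context_counts = defaultdict(Counter)   # (previous, next) -> Counter({'l': n, 'r': m})
for word in words:
    padded = "#" + word + "#"           # '#' marks word boundaries
    for i, ch in enumerate(padded):
        if ch in ("l", "r"):
            context_counts[(padded[i - 1], padded[i + 1])][ch] += 1

def predict(prev, nxt):
    """Guess whether 'l' or 'r' is more likely between prev and nxt,
    using only the orthotactic counts gathered above."""
    counts = context_counts.get((prev, nxt))
    if not counts:
        return None                     # unseen context: no prediction
    return counts.most_common(1)[0][0]

# In this toy corpus, 'r' is predicted after 'd' (as in "dr-")
# and 'l' after 's' (as in "sl-"):
print(predict("d", "i"), predict("s", "o"))   # prints: r l

On this view, a learner who has internalized such letter (and segment) co-occurrence statistics can often guess which member of the /l/-/r/ contrast a context licenses even before the acoustic signal is fully resolved.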
