
December 2000
EDO-FL-00-14

Simulated Oral Proficiency Interviews: Recent Developments

Margaret Malone, Center for Applied Linguistics


The simulated oral proficiency interview (SOPI) is a performance-based, tape-mediated speaking test. It follows the general structure of the oral proficiency interview (OPI) used by government agencies and the American Council on the Teaching of Foreign Languages (ACTFL) to measure speaking proficiency. Whereas the OPI is a face-to-face interview, the SOPI relies on audiotaped instructions and a test booklet to elicit language from the examinee. Unlike many semi-direct tests, the SOPI contextualizes all tasks to ensure that they appear as authentic as possible.

The prototypical SOPI follows the same four phases as the OPI: warm-up, level checks, probes, and wind-down. The warm-up phase, designed to ease examinees into the test format, begins with simple personal background questions posed on the tape in a simulated encounter with a native speaker of the target language. The examinee responds to each warm-up question during a brief pause that follows it on the tape. The next phase of the test consists of tasks similar to the level check and probe phases of the OPI. These tasks assess the examinee's ability to perform different functions at the ACTFL Intermediate, Advanced, and Superior levels. (For more information on the ACTFL Guidelines, see Stansfield, 1992.) The prototypical SOPI includes picture-based tasks, such as asking questions, giving directions based on a simple map, describing a place, or narrating a sequence of events based on the illustrations provided.

Other SOPI tasks require examinees to speak about selected topics or perform in real-life situations. These tasks assess the examinee's ability to manage functions at the Advanced and Superior levels, including apologizing, describing a process, supporting an opinion, and speaking persuasively. Because these tasks involve functions too complex for lower-level examinees, the test may be stopped midway for such examinees.

How Is the SOPI Administered?

SOPI administration materials include a master test tape containing all test instructions and tasks; an examinee response tape on which the examinee records his or her responses; and a test booklet, which includes all test tasks except the warm-up. Directions for all tasks are presented in English in the test booklet and on the test tape. The directions provide the context of each speaking task, including whom the examinee is addressing, what the situation is, why the speaking task is being performed, and any other relevant information. After listening to and reading the directions, the examinee hears a native speaker of the target language make a statement or ask a question relevant to the task described. The examinee then performs the task by responding to the native speaker prompt.

The prototypical SOPI ends with a brief wind-down consisting of simple questions in the target language. After the SOPI is completed, the examinee response tape is scored by trained raters who apply the ACTFL Guidelines. Scores range from Novice-Mid to Superior.

Research on the SOPI

In several studies involving different test development teams and different languages, the SOPI has proved to be a valid and reliable surrogate for the OPI. Clark and Li (1986) developed four forms of the SOPI in Chinese. The four forms, along with an OPI, were then administered to 32 students of Chinese at two universities. Each test was scored by two raters, and the scores on the two tests were correlated. The results showed a correlation of .93 between the SOPI and the OPI.
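The correlations reported in this and the following studies are computed over paired scores on the two tests. As a rough illustration only, and not the procedure used in any study cited here, the sketch below maps hypothetical ACTFL ratings to integers and computes a Pearson correlation between SOPI and OPI scores; the level ordering, the numeric mapping, and the sample data are all assumptions for demonstration.

    # A minimal sketch, assuming ACTFL ratings can be mapped to evenly
    # spaced integers. The mapping and the sample data are hypothetical,
    # not the scoring procedure used in the studies cited in this digest.
    from statistics import correlation  # Pearson's r (Python 3.10+)

    # Assumed ordering of the rating scale mentioned in this digest
    # (Novice-Mid through Superior).
    ACTFL_LEVELS = [
        "Novice-Mid", "Novice-High",
        "Intermediate-Low", "Intermediate-Mid", "Intermediate-High",
        "Advanced", "Advanced-High", "Superior",
    ]
    LEVEL_TO_SCORE = {level: i for i, level in enumerate(ACTFL_LEVELS)}

    def pearson_r(sopi_ratings, opi_ratings):
        """Correlate paired SOPI and OPI ratings after numeric conversion."""
        x = [LEVEL_TO_SCORE[r] for r in sopi_ratings]
        y = [LEVEL_TO_SCORE[r] for r in opi_ratings]
        return correlation(x, y)

    # Hypothetical paired ratings for five examinees.
    sopi = ["Intermediate-Mid", "Advanced", "Novice-High", "Superior", "Advanced"]
    opi = ["Intermediate-Mid", "Advanced", "Intermediate-Low", "Superior", "Advanced-High"]
    print(f"r = {pearson_r(sopi, opi):.2f}")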

Stansfield et al. (1990) reported on the development of three forms of a SOPI in Portuguese. This test and an OPI were administered to 30 adults at four institutions. Two raters scored each test. In this study, a correlation of .93 was found between the SOPI and the OPI. In addition, the SOPI proved to be slightly more reliable and easier to rate than the OPI.

Shohamy, Gordon, Kenyon, and Stansfield (1989) reported on a Center for Applied Linguistics/University of Tel Aviv project that developed and validated the Hebrew Speaking Test. Two forms of this SOPI were developed for use at Hebrew language schools for immigrants to Israel, and two forms were developed for use in North America. The first two forms, along with an OPI, were administered to 20 foreign students at the University of Tel Aviv, and the two North American forms were administered to 20 students of Hebrew at U.S. universities. The correlation between the OPI and the Israeli version of the SOPI was .89, while the correlation for the U.S. version was .94.

Subsequently, Stansfield and Kenyon (1992, 1993) described the development and validation of SOPIs in Indonesian and Hausa. In the Indonesian study, the correlation with the OPI for 16 adult learners was .95. In the Hausa study, no ACTFL-certified tester was available to conduct OPIs, so two Hausa speakers were trained in applying the ACTFL scale and subsequently used this training to score the tests. The raters showed high interrater reliability (.91) in scoring the test. In more recent research (Kenyon & Tschirner, 2000), 90% of the students studied received the same ACTFL rating on an OPI in German and a German SOPI.
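The Kenyon and Tschirner (2000) figure is a percent exact agreement statistic: the proportion of examinees who receive the identical ACTFL rating on both tests. A minimal sketch, again with hypothetical data:

    # Percent exact agreement between two sets of ACTFL ratings for the
    # same examinees; the ratings below are hypothetical examples.
    def exact_agreement(ratings_a, ratings_b):
        """Fraction of examinees given the identical rating on both tests."""
        assert len(ratings_a) == len(ratings_b)
        matches = sum(a == b for a, b in zip(ratings_a, ratings_b))
        return matches / len(ratings_a)

    opi_ratings = ["Intermediate-Mid", "Advanced", "Intermediate-High", "Advanced"]
    sopi_ratings = ["Intermediate-Mid", "Advanced", "Intermediate-High", "Superior"]
    print(f"Exact agreement: {exact_agreement(opi_ratings, sopi_ratings):.0%}")  # 75%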

SOPIs are currently available in Arabic, Chinese, French, German, Hausa, Hebrew, Indonesian, Japanese, Portuguese, Russian, and Spanish.

Rater Training

As these tests have been operationalized, the need for trained raters to score them has been addressed through live rater training workshops as well as through the development of self-instructional rater training kits and a CD-ROM-based training program. Rater Training Kits are available in Arabic, Chinese, French, German, Japanese, and Russian for language instructors who would like to administer and rate the SOPI themselves. For each language, the Rater Training Kit consists of a manual, a workbook, and a reference guide for scoring; three cassette tapes; and the SOPI testing materials. Research on the self-instructional Rater Training Kits suggests that they are an effective way to acquire rating skills without participating in live rater training (Kenyon, 1997). Further research has examined the usefulness of the German Rater Training Kit in learning to apply the ACTFL Guidelines (Norris, 1997). In addition, the Center for Applied Linguistics (CAL) is currently developing multimedia rater training programs (MMRTP) that include a CD-ROM with interactive self-instructional materials from the Rater Training Kits; practice exercises and quizzes; speech samples for practice rating; SOPI testing materials; and a reference guide. The MMRTP will be available in early 2001 in Spanish, French, and German. All Rater Training Kits and the MMRTP have been updated to incorporate the revised 1999 ACTFL Speaking Proficiency Guidelines.

Applications

Because the SOPI format is flexible, it can be, and often is, tailored to the desired proficiency level and to specific examinee age groups, backgrounds, and professions. For several of the SOPIs developed by CAL, a lower-level version of the test can be created by administering only the first part. Such a version is available for rating proficiency from the Novice-Mid to Advanced levels.

The SOPI format has been used by various institutions to develop tests that meet their specific needs. For example, the University of Minnesota and the Minnesota Department of Education developed a SOPI in which seven tasks are combined to follow one integrated story line or theme (Chalhoub-Deville, 1997). Because this test focuses on examinees at the Novice-High to Intermediate-Low levels, it includes only Intermediate-level tasks.

Another SOPI with a specific focus is the Texas Oral Proficiency Test (TOPT), developed by CAL. A score of Advanced on the TOPT is required of all who seek teaching certification in Texas in French, Spanish, or bilingual education. This full-length test consists of 15 tasks and is taken by examinees at the Intermediate-Mid level or higher. Practice tests are available for the French and Spanish TOPT.

Many universities and school systems have incorporated SOPIs into their testing programs. A handbook on designing SOPIs was developed to assist such programs in developing their own tests (Stansfield, 1996). Currently, Stanford University uses a SOPI for diagnosis and placement of students in foreign language classes. In addition, the SOPI is administered to all students at the end of the third quarter to ensure that they meet the oral proficiency standard in their language.

The SOPI format has many practical benefits. Any teacher, language lab technician, or aide can administer the SOPI. This has proved to be an advantage in locations where a trained interviewer is not available or in languages that lack ACTFL-certified testers. In addition, the SOPI can be administered simultaneously to a group of examinees by one administrator, whereas a live interview can be administered only individually. Thus, the SOPI may be preferable when many examinees need to be tested in a short time frame.

The SOPI may also offer psychometric advantages in terms of reliability and validity, particularly in standardized testing situations. The SOPI offers the same quality of interview to all examinees, and all examinees respond to the same questions. Because the test is recorded for later scoring, examinees can be rated by the most reliable raters under controlled conditions. Raters who have scored both a live interview and a SOPI report that it is often easier to score a SOPI. This may be due in part to the SOPI's longer speech sample and to the fact that each examinee responds to the same questions; as a result, distinctions in proficiency may be more apparent to the rater.

New Directions

Just as advances in technology led to the development of the MMRTP to help train raters, they have also led to research on new approaches to semi-direct testing. CAL conducted a 2-year study on the development of the Computerized Oral Proficiency Instrument, or COPI. Like the SOPI, the COPI relies on taped and written directions to elicit language from the examinee. Unlike the SOPI, however, the COPI adapts to the examinee's proficiency level. On the COPI, the examinee and the computer cooperate to produce a speech sample ratable according to the ACTFL Guidelines (Malabonga & Kenyon, 1999). The COPI allows examinees some choice in the difficulty level of the tasks presented to them. To ensure adequate probing and level checking, however, examinees have control over only a portion of the tasks on the test: examinees select some tasks they believe are appropriate to their own level of speaking ability, and the computer program includes tasks at other levels. Because the COPI was developed principally as a research study, CAL has not yet operationalized this approach to testing speaking proficiency.
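The division of control that this paragraph describes, examinee choice over some tasks plus program-inserted tasks for level checks and probes, can be pictured in a short sketch. The task counts, level labels, and selection rules below are hypothetical illustrations, not CAL's actual COPI design.

    # A hypothetical sketch of the COPI-style selection logic described
    # above: the examinee picks the level of some tasks, and the program
    # adds tasks at adjacent levels so level checks and probes still
    # occur. This does not reflect CAL's actual implementation.
    import random

    LEVELS = ["Intermediate", "Advanced", "Superior"]

    def build_task_sequence(self_assessed_level, n_tasks=6, n_examinee_choice=3):
        """Mix examinee-chosen task levels with program-chosen levels."""
        sequence = [self_assessed_level] * n_examinee_choice
        # Program-chosen tasks probe one level up and check one level
        # down where the scale allows, alternating between the two.
        idx = LEVELS.index(self_assessed_level)
        neighbors = [LEVELS[i] for i in (idx + 1, idx - 1) if 0 <= i < len(LEVELS)]
        for i in range(n_tasks - n_examinee_choice):
            sequence.append(neighbors[i % len(neighbors)])
        random.shuffle(sequence)  # interleave the two kinds of tasks
        return sequence

    print(build_task_sequence("Advanced"))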

Conclusions

This discussion suggests that the SOPI is a reliable, easily administered test of speaking performance. The development of the COPI and the MMRTP suggests that applying advances in technology to both test administration and rater training has the potential to further improve semi-direct approaches to performance testing.

References

Chalhoub-Deville, M. (1997). The Minnesota Articulation Project and its proficiency-based assessments. Foreign Language Annals, 30(4), 492-502.

Clark, J.L.D., & Li, Y.C. (1986). Development, validation, and dissemination of a proficiency-based test of speaking ability in Chinese and an associated assessment model for other less commonly taught languages. Washington, DC: Center for Applied Linguistics.

Kenyon, D. (1997). Further research on the efficacy of rater self-training. In A. Huhta, V. Kohonen, L. Kurki-Suonio, & S. Luoma (Eds.), Current developments and alternatives in language assessment: Proceedings of LTRC 96 (pp. 257-273). Jyväskylä, Finland: University of Jyväskylä.

Kenyon, D.M., & Tschirner, E. (2000). The rating of direct and semi-direct oral proficiency interviews: Comparing performance at lower proficiency levels. Modern Language Journal, 84(1), 85-101.

Malabonga, V.A., & Kenyon, D.M. (1999). Multimedia computer technology and performance-based language testing: A demonstration of the Computerized Oral Proficiency Instrument (COPI). In M.B. Olsen (Ed.), Association for Computational Linguistics/International Association of Language Learning Technologies Symposium proceedings. Computer mediated language assessment and evaluation in natural language processing (pp. 16-23). New Brunswick, NJ: Association for Computational Linguistics.

Norris, J.M. (1997). The German Speaking Test: Utility and caveats. Die Unterrichtspraxis/Teaching German, 30(2), 148-158.

Shohamy, E., Gordon, C., Kenyon, D.M., & Stansfield, C.W. (1989). The development and validation of a semi-direct test for assessing oral proficiency in Hebrew. Bulletin of Hebrew Higher Education, 4, 4-9.

Stansfield, C.W. (1992). ACTFL Speaking Proficiency Guidelines. ERIC Digest. Washington, DC: ERIC Clearinghouse on Languages and Linguistics.

Stansfield, C.W. (1996). Test development handbook: Simulated Oral Proficiency Interview. Washington, DC: Center for Applied Linguistics.

Stansfield, C.W., & Kenyon, D.M. (1992). The development and validation of a simulated oral proficiency interview. Modern Language Journal, 76, 129-141.

Stansfield, C.W., & Kenyon, D.M. (1993). Development and validation of the Hausa Speaking Test with the ACTFL Proficiency Guidelines. Issues in Applied Linguistics, 4, 5-31.

Stansfield, C.W., Kenyon, D.M., Paiva, D., Doyle, F., Ulsh, I., & Cowles, M.A. (1990). Development and validation of the Portuguese Speaking Test. Hispania, 73, 641-651.


This digest was prepared with funding from the U.S. Dept. of Education, Office of Educational Research and Improvement, National Library of Education, under contract no. ED-99-CO-0008. The opinions expressed do not necessarily reflect the positions or policies of ED, OERI, or NLE.