Program

New! PDFs of the plenary and other presentations are available for viewing. Click on the links in the agenda below.

Please note that Friday morning's ILR panel has been cancelled due to the government shutdown. An amended schedule is available below.

Click the link below for a PDF of the agenda. An outline of the agenda and the plenary abstract follow.

ECOLT 2013 Agenda (PDF)

Agenda

Friday, October 25, 2013

All paper presentations will be given in the Intercultural Center (ICC) Auditorium, 2nd floor. Please note that the main entrance to the ICC is on the 3rd floor.

Refreshments will be available in the foyer of the auditorium.

Poster presentations will be located in ICC room 107, 1st floor.

8:30-9:45         Registration
9:45-10:15       Additional Paper Session (1 paper)

A test of productive grammatical ability in academic English: Validation inferences of score interpretations
Yoo-Ree Chung
                       
10:15-10:30     Break
10:30-11:30     Plenary Presentation

Reconsidering assessment validity at the intersection of measurement and evaluation
John Norris

11:30-12:30     Paper Session 1 (2 papers)

The effects of passage length, type-token ratio, and listeners’ working memory capacity on listening comprehension
Amber Bloomfield, Sarah Wayland, Lelyn Saner, Debra Kramasz, Stephen O’Connell, Kassandra Gynther, Jared Linck

Investigating the rater behaviors and the rater training effects of the writing section of a placement test
Saerhim Oh

12:30-1:45       Lunch
1:45-3:15         Paper Session 2 (3 papers)

The effect of modeling on examinee language in a computerized speaking test
Megan J. Montee, Margaret E. Malone, Lindsey Massoud, Samantha Musser

Constructing elicited oral response items for an English speaking proficiency screening test
Troy Cox, Ray Clifford

“What do you say?” Assessment of young ELLs’ use of oral English for social interaction
Kimberly Woo

3:15-3:30         Break
3:30-4:30         Paper Session 3 (2 papers)

A new use of automated scoring in an operational speaking test: The Test of English-for-Teaching
Larry Davis, Klaus Zechner, Keelan Evanini, Su-Youn Yoon, Xinhao Wang, Lei Chen, Chong Min Lee, Chee Wee Leong

Change in listening and reading skills over time: An investigation of loss and gain in foreign language proficiency test scores with longitudinal data
Amber Bloomfield, Megan Masters, Kassandra Gynther, Steven Ross, Beth Mackey

4:30-4:45         Break/Poster Set Up
4:45-6:00         Poster Session (7 posters)

Examining the validity of vocabulary as an indicator of speaking proficiency
Luke Wander Amoroso

Comparing scores of native and non-native English speakers on an assessment of second language proficiency in a K-12 setting
Christina Crowl, Tim Farnsworth

Surveying the assessment needs of language researchers and teacher educators
Margaret E. Malone, Victoria C. Nier

Promoting assessment literacy for language instructors through an online course
Victoria C. Nier, Margaret E. Malone

Formulaic sequences in office hours: Validating the International Teaching Assistant Speaking Assessment (ITASA)
Ildi Porter-Szucs, Ummehaany Jameel

From standards to oral language assessments for the Mississippi Band of Choctaw
Jill Robbins, Charles Stansfield, Roseanna Thompson

Comparing writing on essay tests with writing in the disciplines
Sara Weigle

ECOLT Plenary Speaker
Dr. John Norris, Georgetown University

Title of Plenary: Reconsidering assessment validity at the intersection of measurement and evaluation

Abstract: Assessments are used for a variety of purposes in language education and the language-related professions, from making placement decisions to improving curriculum to certifying a candidate’s abilities to use language on the job. Traditionally, concerns with an assessment’s quality have been the purview of validation research, with a primary focus on the extent to which an assessment is measuring the construct it is purported to measure and the reliability with which it does so. While technical measurement qualities such as these are important, their prioritization by language testing researchers may downplay other critical concerns, including in particular the ways in which assessment information is actually interpreted by diverse stakeholders (teachers, students, program developers, employers, etc.), and the ways in which assessments are actually used to inform specific decisions and actions related to curriculum and instruction, hiring and promotion, and other elements of language programs and professions. As a result, contemporary notions of ‘assessment validity’ may offer little value for understanding, judging, or improving assessments as they are actually put into practice. In this talk, I first suggest that technical standards and traditions of practice for measurement validation are insufficient for ensuring the quality of educational and professional language assessments, although they can play an important role in drawing our attention to the nature of the language ability phenomena of interest. I then articulate a framework for purposefully evaluating language assessments that are intended to meet most educational and professional ends. In my approach to “validity evaluation”, which builds upon key ideas introduced by Cronbach (1982, 1988), intended uses (i.e., purposes, goals) for assessments—and the reasons for evaluating them in the first place—drive the validation questions posed, the methods adopted, and any resulting changes made in assessment practices. By reflecting on several examples from my work with placement testing, task-based procedures, large-scale proficiency testing, and learning outcomes indicators, I highlight the ways in which distinct assessment uses call upon diverse methodologies for validation, ranging from fine-grained analyses of language production phenomena to innovative statistical inference procedures to triangulated qualitative inquiry involving assessment stakeholders. I conclude by proposing several avenues of validation-related work that are currently needed for all language assessments, focusing on how to answer the most fundamental question, as Cureton (1951) pointed out, of “how well a test does the job it was employed to do” (p. 621).

Bio statement
John Norris is an associate professor in the Linguistics Department at Georgetown University. His research and teaching interests include educational assessment, program evaluation, language pedagogy (task-based language teaching in particular), and research methods. He has taught language and applied linguistics courses, and consulted on assessment, evaluation, and teacher development projects, in Belgium, Brazil, Germany, Japan, Spain, and around the U.S. Prior to joining Georgetown, he served for eight years as a professor in the Department of Second Language Studies at the University of Hawai‘i, and for two years as outcomes assessment specialist at Northern Arizona University. John’s publications have appeared in journals such as Applied Linguistics, Foreign Language Annals, Language Learning, Language Learning & Technology, Language Teaching Research, Language Testing, Modern Language Journal, TESOL Quarterly, and Die Unterrichtspraxis. His most recent books explore the topics of language teaching (Task-based language teaching: A reader), evaluation (Toward useful program evaluation in college foreign language education), assessment (Validity evaluation in language assessment), and research synthesis (Synthesizing research on language learning and teaching). Currently, he serves as chair of the TOEFL Committee of Examiners and the International Consortium on Task-Based Language Teaching, and he recently completed a sabbatical as Fulbright scholar at the University of Alicante in Spain. John speaks German, Spanish, and Portuguese, and he is an avid runner/hiker/surfer.

Contact information
Dr. John M. Norris
Dept. of Linguistics
1437 37th Street NW
Box 571051
Poulton Hall 240
Georgetown University
Washington DC 20057-1051
Email: norrisj@georgetown.edu

Please also check the website of the Improving Quantitative Reasoning in Second Language Research conference, which takes place immediately after ECOLT, October 26-27, at Georgetown University.

If you have any questions about the program, please email Meg Malone or Victoria Nier.