Past Presentations

American Association for Applied Linguistics (AAAL) 2008 Conference

Omni Shoreham Hotel, Washington, DC
Saturday, March 29 to Tuesday, April 1, 2008
Visit the 2008 AAAL Conference Web Site

Incorporating Examinee Feedback in Test Development (Poster Session)
This poster described the development and operationalization of a computer-based, semi-adaptive test of Arabic or Spanish oral proficiency for secondary students, postsecondary students, and adults. By presenting the results of piloting, specifically the role of user feedback in test revision, the presenters demonstrated the iterative nature of test development.
Presenters: Christina Cavella, Larry Thomas, and Amelia DiCola
Date, Time and Location TBD
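
For readers unfamiliar with the term, a semi-adaptive test typically adjusts difficulty in coarse steps (for example, between stages or sections) rather than after every item. The short Python sketch below illustrates one plausible routing rule; it is a hypothetical example, not the presenters' design, and the level names, score thresholds, and stage structure are all assumptions.

    # Hypothetical semi-adaptive routing: the examinee starts at a level
    # suggested by a self-assessment, then moves up or down at most one
    # level between stages based on stage scores. Thresholds are invented.
    LEVELS = ["novice", "intermediate", "advanced"]

    def next_level(current: str, stage_score: float) -> str:
        """Move up one level after a strong stage, down after a weak one."""
        i = LEVELS.index(current)
        if stage_score >= 0.8 and i < len(LEVELS) - 1:
            return LEVELS[i + 1]
        if stage_score <= 0.4 and i > 0:
            return LEVELS[i - 1]
        return current

    # Example: a self-assessment places the examinee at "intermediate";
    # three stage scores then route the remaining stages.
    level = "intermediate"
    for score in (0.85, 0.75, 0.35):
        level = next_level(level, score)
        print(level)  # advanced, advanced, intermediate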

Teacher Input in High Stakes Assessment (Poster Session)
This poster focused on the role of stakeholders in high-stakes testing. Teacher input can provide a practical perspective on test development and training. This poster described the process of revising an online training course for administrators of the ACCESS for ELLs Speaking Test, a K-12 test for English Language Learners.
Presenters: Megan Montee and Margaret E. Malone
Date, Time and Location TBD


Monday, March 31, 2008

Federal Funding Sources for Research in Applied Linguistics

Organized by: Donna Christian, Center for Applied Linguistics

This session provided AAAL members with relevant information about U.S. federal funding sources that support research in applied linguistics. Representatives of federal agencies including the National Institutes of Health, the National Science Foundation, the National Endowment for the Humanities, and the Department of Education discussed funding opportunities and priorities for the current year and prospects for the future. Resource materials were distributed.

Presenters:

Julia Huston Nguyen
Senior Program Officer
Division of Education Programs
National Endowment for the Humanities

D. Terence Langendoen
Program Director, Linguistics
National Science Foundation

Peggy McCardle
Chief, Child Development & Behavior Branch
National Institute of Child Health and Human Development
National Institutes of Health

Cindy Ryan
Office of English Language Acquisition
Department of Education

Ed McDermott
International Education Programs Service
Office of Postsecondary Education
Department of Education

Celia Rosenquist
National Center for Special Education Research
Institute of Education Sciences
Department of Education

Elizabeth Albro
National Center for Education Research
Institute of Education Sciences
Department of Education

Empire Room, 11:55 am – 1:55 pm

Predicting Item Difficulty: A Rubrics-Based Approach
Can item developers learn to accurately predict empirical item difficulty in a language test? In this study, the presenters described an effort to identify factors that influence item difficulty and to use that knowledge to help item developers write items that more closely match their intended proficiency levels. Items from a K-12 test of English language proficiency, based on standards developed by a multi-state consortium, were analyzed using the Rasch model, and empirical difficulty was compared with target difficulty across the six proficiency levels defined by the standards. Items that met their target difficulty were analyzed for linguistic features that may influence difficulty; likewise, items shown empirically to be easier or harder than their target level were analyzed for features that may have contributed to missing the target. Based on this analysis, a rubric was designed to help item reviewers judge how closely selected-response items fit the standards at their intended proficiency levels.

To examine the rubric's efficacy and potential as an item review tool, three raters were trained on the rubric and then independently rated the features of 100 operational test items. The raters' judgments were analyzed for consistency and for the degree to which they successfully predicted the empirical difficulty of the items. The results indicate the extent to which the rubric and training in its use, intended to strengthen the item and test development process, succeeded in sensitizing raters to item difficulty. These findings show how empirical data can be combined with qualitative analysis to evaluate and strengthen the development of a standards-based test anchored in second language acquisition theory.
Presenters: David MacGregor, Jennifer Christenson, and Dorry M. Kenyon
Senate Conference Room, 4:30 – 5:00 pm
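
By way of illustration only, the Python sketch below mimics the two analyses the abstract describes: comparing each item's empirical Rasch difficulty with a logit band assumed for its target proficiency level, and checking pairwise agreement among raters. All item IDs, logit values, band cut points, and ratings are invented; the actual study used operational items and calibrated estimates.

    from itertools import combinations

    # Empirical Rasch difficulty estimates (logits) and target levels
    # (invented values for illustration).
    items = [
        {"id": "A1", "target": 1, "b": -2.2},
        {"id": "A2", "target": 2, "b": -1.0},
        {"id": "A3", "target": 3, "b": 0.1},
        {"id": "A4", "target": 4, "b": -0.3},  # easier than intended
        {"id": "A5", "target": 5, "b": 1.9},
        {"id": "A6", "target": 6, "b": 2.8},
    ]

    # Assumed logit bands for the six proficiency levels.
    bands = {1: (-3.0, -1.5), 2: (-1.5, -0.5), 3: (-0.5, 0.5),
             4: (0.5, 1.5), 5: (1.5, 2.5), 6: (2.5, 3.5)}

    def fit(item):
        """Classify an item as on, below, or above its target band."""
        low, high = bands[item["target"]]
        if item["b"] < low:
            return "easier than target"
        if item["b"] > high:
            return "harder than target"
        return "on target"

    for it in items:
        print(it["id"], fit(it))

    # Pairwise exact agreement among three raters judging the same items.
    ratings = {"r1": [1, 2, 3, 4, 5, 6],
               "r2": [1, 2, 3, 3, 5, 6],
               "r3": [2, 2, 3, 4, 5, 5]}
    for a, b in combinations(ratings, 2):
        same = sum(x == y for x, y in zip(ratings[a], ratings[b]))
        print(a, b, f"{same}/{len(ratings[a])} exact matches")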


Tuesday, April 1, 2008

Academic Literacy through Sheltered Instruction for Secondary English Language Learners
This paper described findings on teacher change and student achievement from a quasi-experimental research study of the Sheltered Instruction Observation Protocol (SIOP) Model in secondary content and ESL classrooms. Results offered guidance for strengthening professional development for content teachers working with ELLs and suggestions for improving students' language and academic outcomes.
Presenter: Deborah Short
Governor's Room, 10:45 – 11:15 am

Return to CAL's list of past presentations.