In this colloquium we explore the use of technology-based language analysis tools that may help test developers gauge the extent to which items and passages reflect the language of the construct at a level appropriate for the target age group. The colloquium consists of the following papers:
- Introduction (David MacGregor)
- Textual analysis of proficiency-based differences in test reading passages (Megan Montee)
- Predicting empirical item difficulty using linguistic features (Shu Jing Yen, David MacGregor, and Dorry M. Kenyon)
- A rubrics-based approach to predicting item difficulty (David MacGregor, Shu Jing Yen, and Dorry M. Kenyon)
The discussion will center on a comparison of the results of the three studies, and on how such tools can both improve the item-writing process and generate evidence that can be incorporated into an assessment use argument. The discussant will be Carsten Wilmes.