Past CAL Presentations

Language Testing Research Colloquium 2009
March 17-20, 2009

Thursday, March 19, 2009
Developing an oral assessment protocol and rating rubric for applicants to the English for Heritage Language Speakers (EHLS) program

This work-in-progress presentation described the development of a telephone interview protocol used in the English for Heritage Language Speakers (EHLS) program. The EHLS program gives native speakers of languages other than English the opportunity to achieve professional proficiency in English, thereby increasing their marketability with the U.S. government. To complete the program successfully, participants must enter with advanced proficiency in English. Previously, each applicant submitted a detailed written application; selected applicants then participated in English language testing, after which final selections were made. For the 2009 application cycle, a 15-minute telephone interview of each applicant was added to the procedure to provide additional information for selecting provisionally accepted candidates. An interview protocol and corresponding rating rubric were developed to elicit and assess each candidate's language. The session described how the protocol and rubric were developed, discussed issues that were identified, and offered an informal evaluation of the efficacy of such an assessment tool. The research questions concerned the relationship between the phone interview ratings and selection for further testing (through formal OPIs), the subsequent OPI scores, and final selection into the program. We hypothesized that the phone protocol ratings would generally predict applicants' further success in the selection process.
The findings of our research seem to support our initial hypothesis; however, data analysis also uncovered additional considerations to take into account in future uses of the protocol and rubric.

Presenters: Natalia Jacobsen, Genesis Ingersoll, Anne Donovan

Friday, March 20, 2009
Assessing domain-general and domain-specific academic English language proficiency

The purpose of this study was to examine domain-general and domain-specific academic English language (AEL) proficiency from the angle of a construct validity study using a latent variable modeling approach. Specifically, the goal was to model domain-general and domain-specific variance in a latent factor model in order to evaluate and compare the salience of these variance sources. The analyses were carried out on data from multiple test forms targeting academic English language proficiency at different grade and proficiency levels, which affords comparisons of the latent factor models across different ELL student populations. The results revealed that, across grade levels, at low levels of English language proficiency, domain-specific variance did not play a significant role in explaining examinee performance on test items across the five standards. At the mid and high levels of proficiency, however, the presence of domain-specific variance became increasingly observable through a general improvement in model fit and the increasing salience of individual item factor loadings. The empirical findings suggest that AEL differentiates into domain-general and domain-specific dimensions as English language proficiency increases. Thus, when considering the construct of AEL, level of English language proficiency must not be ignored.

Presenters: Anja Römhild, Dorry M. Kenyon, David MacGregor

The discourse of assessments: Addressing linguistic complexity in content and English language proficiency tests through linguistic analyses

This symposium addressed the issue of language complexity in both content tests and English language proficiency tests.
Efforts over the last decade to better understand and address the requirements for valid and reliable tests of content knowledge and English language proficiency for English language learners have generated deliberation over how to determine the language complexity of test items. Papers in the symposium explored how insights from discourse-based and cognitive-based approaches to linguistics can be used to more fully understand the functionality of test items.

Presenters: Jim Bauman, Laura Wright, David MacGregor, Abbe Spokane
Copyright © 2009 CAL