
Past CAL Presentations

Language Testing Research Colloquium 2009

March 17 - 20, 2009
Denver Marriott Tech Center
Denver, Colorado, USA
Visit the LTRC 2009 Conference Web site.


Thursday, March 19, 2009

Developing an oral assessment protocol and rating rubric for applicants to the English for Heritage Language Speakers (EHLS) program

This work-in-progress presentation described the development of a telephone interview protocol used in the English for Heritage Language Speakers (EHLS) program. The EHLS program gives native speakers of languages other than English the opportunity to achieve professional proficiency in English and thus increase their marketability with the U.S. government. To complete the program successfully, participants must have advanced proficiency in English at entry.

Previously, each program applicant submitted a detailed written application; selected applicants then participated in English language testing, and final selections for the program were made on that basis. For the 2009 application process, a 15-minute telephone interview of each applicant was added to the procedure in order to increase the information available to inform the selection of provisionally accepted candidates. An interview protocol and corresponding rating rubric were developed to elicit and assess a candidate's language.

This work-in-progress session described how the protocol and rubric were developed, discussed issues that were identified, and provided an informal evaluation of the efficacy of such an assessment tool. The research questions concerned the relationship between the phone interview ratings and selection for further testing (through formal OPIs), the subsequent OPI scores, and final selection into the program. We hypothesized that the ratings from the phone protocol would generally predict applicants' further success in the selection process. The findings of our research seem to support our initial hypothesis; however, data analysis also uncovered additional considerations to be taken into account for future uses of the protocol and rubric.

Presenters: Natalia Jacobsen, Genesis Ingersoll, Anne Donovan
Time: 2:00 – 3:30 pm


Friday, March 20, 2009

Assessing domain-general and domain-specific academic English language proficiency
Within the last decade, academic English language proficiency has become a major focus in the assessment of English language learners (ELLs) in the United States. However, the construct of academic English language (AEL) is still not very well understood. A general definition of AEL (Chamot and O'Malley, 1994) is the language used in the classroom for the purpose of acquiring content-specific skills and knowledge. Recently, Bailey and Butler (2003) developed a conceptual framework that describes in more detail how the construct of AEL can be operationalized in language tests and language curricula. They hypothesized that an important dimension of variation is between linguistic features that are common to various academic domains (i.e., domain-general) and linguistic features that are unique to individual content areas (i.e., domain-specific). While this distinction is conceptually intuitive, it has not been investigated empirically.

The purpose of this study was to examine domain-general and domain-specific AEL from the angle of a construct validity study using a latent variable modeling approach. Specifically, the goal was to model domain-general and domain-specific variance in a latent factor model in order to evaluate and compare the salience of these variance sources. The analyses were carried out on data from multiple test forms targeting academic English language proficiency at different grade and proficiency levels, which affords comparisons of the latent factor models across different ELL student populations.
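As an illustration of this general approach (a sketch under assumed notation; the abstract does not give the exact model specification), the latent structure can be pictured as a bifactor-style decomposition in which every item loads on a domain-general AEL factor and, in addition, on a factor for its own content domain, with the factors kept orthogonal:

\[
x_{ij} = \lambda_{j}^{G}\,\eta_{i}^{G} + \lambda_{j}^{D}\,\eta_{i}^{d(j)} + \varepsilon_{ij},
\qquad \operatorname{Cov}\bigl(\eta^{G}, \eta^{d}\bigr) = 0,
\]

where \(x_{ij}\) is examinee \(i\)'s response to item \(j\), \(d(j)\) indexes the content domain of item \(j\), \(\eta^{G}\) is the domain-general factor, and the \(\eta^{d}\) are domain-specific factors; all symbols here are illustrative. The salience of domain-specific variance can then be judged by comparing this model's fit with that of a restricted model in which the domain-specific loadings \(\lambda_{j}^{D}\) are fixed to zero, and by inspecting the size of the estimated domain-specific loadings.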
 
The study is based on test data from a large-scale assessment of English language proficiency for K-12 learners currently used by 19 states. The test is an operationalization of model performance indicators defining five English Language Proficiency Standards. The test purports to measure academic English language proficiency in four content areas in addition to Social and Instructional language. The test is organized into five grade clusters, within each of which three overlapping test forms target five different proficiency levels. For this study, data from nine test forms from the upper elementary, middle, and high school clusters were analyzed.

The results of this study revealed that across grade levels, at low levels of English language proficiency, domain-specific variance did not play a significant role in explaining examinee performance on test items across the five standards. At the mid and high levels of proficiency, however, the presence of domain-specific variance was increasingly observable through a general increase in model fit and increasing salience of individual item factor loadings. The empirical findings suggest that AEL differentiates between domain-general and domain-specific dimensions with increasing English language proficiency. Thus, when considering the construct of AEL, level of English language proficiency must not be ignored.

Presenters: Anja Römhild, Dorry M. Kenyon, David MacGregor
Time: 10:30 am – 12:00 pm

The discourse of assessments: Addressing linguistic complexity in content and English language proficiency tests through linguistic analyses

This symposium addressed the issue of language complexity in both content tests and English language proficiency tests. Efforts over the last decade to better understand and address the requirements for valid and reliable tests of content knowledge and English language proficiency for English language learners have generated deliberation over how to determine the language complexity of test items. Papers in the symposium explored how insights from discourse-based and cognitive-based approaches to linguistics can be used to more fully understand the functionality of test items.

Presenters: Jim Bauman, Laura Wright, David MacGregor, Abbe Spokane, Megan Montee
Time: 3:30 – 5:30 pm


Return to CAL's list of past presentations.