American Educational Research Association (AERA) 2008 Annual Meeting
March 24 - 28, 2008 New York, NY
Monday, March 24, 2008
Testing for the Future: Addressing the Needs of Low-Literacy English Learners by Moving Beyond the Use of Common Item Types Across Tests
Presenters: Rebecca Kopriva and Jim Bauman
How can students who have learned science content in a language other than English demonstrate that knowledge while they are still acquiring English language proficiency? Previous research convincingly suggests that low-proficiency English language learners (ELLs) have difficulty accessing large-scale tests that use traditional item formats. This presentation will report progress on a project whose objective is to use computer capabilities to make a science assessment core more accessible to low-proficiency ELL students. Items in the science assessment core are being developed to reflect the same item targets as a statewide assessment (made up of multiple-choice and constructed-response items) that will be used by a consortium of states on their statewide test. For this project, eight different approaches to structuring the computer-based items will be presented, only one of which is multiple choice. Items spanning the full range of cognitive complexity will be included, and the discussion will center on how cognitive complexity, linguistic complexity, and the essential and non-essential item components interact within each of the eight approaches. The ultimate goal of the project's activities, data collection, and analyses is to produce a computer-based science assessment core for low-literacy English language learners that is comparable, at the student score level, to the general science test cores used in states' testing programs.
This work is groundbreaking because it not only addresses the access needs of this population but also extends the argument for producing comparable assessments beyond the use of identical item types across forms. The research will examine the extent to which the new approaches allow these students to show what they know and can do in science as a result of improved accessibility. In addition, the project challenges the notion that item types must remain invariant across test versions in order to produce comparable results. To the extent that content targets can be maintained across versions while validity is increased for the populations who need such changes, this work may move the field ahead in how targeted construct elements can be reliably measured.
Cognitive lab findings associated with the current project will also be presented. Preliminary findings suggest that both native English speakers and low-proficiency ELLs are affected by the change in format. However, when results are compared across the standard and access-enhanced versions, the validity of the inferences for low-proficiency ELLs appears to increase more than it does for native English speakers. Results from several iterations of cognitive lab trials, completed as the items were being conceptualized and refined, will be discussed.
Crowne Plaza Hotel Times Square, Room 1506, 15th Floor
Tuesday, March 25, 2008
The Case of SEI in Three States: Appropriate Action for English Learners?
Presenters include Sarah Moore
The panel provides a critical analysis of the legal and educational issues raised pursuant to Proposition 227, Proposition 203, and Question 2 in California, Arizona, and Massachusetts, respectively. It will offer an overview of the legal and historical context of the current structured English immersion (SEI) policies, the legal cases and issues related to the SEI mandates, and a report on the status of programs for ELLs since Lau v. Nichols, including the untold story of Lau v. Nichols and its legacy and influence on the education of ELLs; the post-Proposition 227 cases; the conflation of Proposition 203 and the Flores Consent Order related to SEI implementation in Arizona; and the historical and current policy context of Question 2, with attention to legal, parent, and community perspectives.
New York Marriott Marquis Times Square, Duffy/Columbia Room, 7th Floor
The Limits of DIF: Why This Item Evaluation Tool Is Flawed for Learning-Disabled, Hearing-Impaired, and English Language Learners
Presenters: Catherine Cameron (CAL) and Rebecca Kopriva (University of Wisconsin)
This presentation will discuss the value and limits of differential item functioning (DIF) statistics as they are currently conceived and used, and will compare DIF and item-analysis findings from items in four subjects and six grades on a statewide test. The discussion of current methods will highlight why all approaches share the same problems for students with language challenges, as well as for some other populations. DIF results from two sets of items, standard and access-enhanced, will be compared with detailed findings from distractor analyses and a systematic evaluation of other data from the same language arts, mathematics, science, and social studies items. Performance on the test items by control-group students (native English speakers without IEPs) will be compared with that of students identified as having a learning disability, hearing-impaired students, and English learners identified as having pre-functional, functional, or intermediate levels of English proficiency.
Crowne Plaza Hotel Times Square, 501-502, B2
Assessing Academic English Language Proficiency: Clarifying the Construct
Presenters: Anja Roemhild, Dorry M. Kenyon, David MacGregor
On the basis of empirical replications describing the relationship between general English language skills in reading and listening and language in the academic content areas, across multiple proficiency levels and multiple grades, this paper helps clarify the relationship between English language proficiency level and the development of academic English language proficiency.
New York Marriott Marquis Times Square / Westside Ballroom, Salon 3, 5th Floor
Friday, March 28, 2008
Survey Non-Response and Ratings Bias for Online Course Evaluation
Presenter: Carolyn Grim Fidelman
Survey nonresponse does not necessarily lead to survey ratings bias. Low response rates, especially those accompanying the increasing use of online surveys, are a source of considerable concern. The present study, however, shows that some online surveys may in fact be more robust against nonresponse bias than other survey modes. The author conducted a study of summative online course evaluation in 12 courses with 798 students, using what is known as a rich sampling frame to provide rare insights into nonresponse. Logistic regression paired with hierarchical linear modeling demonstrates one way to determine where bias, if any, is occurring.
Sheraton Exec Conf Ctr, Conf Rm E - Lower Lobby
Copyright © 2009 CAL