Assessment & Evaluation

Assessment and evaluation provide valuable information to learners, instructors, and program administrators. Assessment refers to the use of instruments and procedures to gather data on a regular basis for such purposes as identifying learners' needs, documenting learners' progress, or determining how program services are meeting learners' needs. Evaluation refers to the process of interpreting and analyzing assessment data at a given point in time for the purpose of improving and documenting program and learner outcomes.

Assessment and evaluation take place in many different contexts and for many different purposes. In a classroom or tutoring situation, learners want to know how they are progressing, and teachers and tutors want to know how effective their instructional approaches and materials are. At the program level, administrators are concerned about the implementation and success of the program. No single assessment procedure or instrument can provide everything to all stakeholders, but information from a variety of assessments can provide a more comprehensive portrait of the learner and of the success of instructional services.

Please see the CAELA resources and links below for more information.

Also, see below the discussion of why it is important for adult ESL programs to have a comprehensive assessment and evaluation plan, as well as other questions about assessment and adult ESL addressed by CAELA consultant Carol Van Duzer.

Briefs

Using the ESL Program Standards to Evaluate and Improve Adult ESL Programs

Digests and Q&As

Issues in Accountability and Assessment in Adult ESL

Needs Assessment for Adult ESL Learners

Valid, Reliable, and Appropriate Assessments for Adult English Language Learners

Other CAELA Resources

Assessing Adult English Language Learners (from Practitioner Toolkit)

Assessment and Accountability in Programs for Adult English Language Learners: What Do We Know? What Do We Have in Place? What Do We Need? (proceedings of the 2003 symposium)

Assessment with Adult English Language Learners (CAELA Fact Sheet)

Assessment and Evaluation in Adult ESL (CAELA Resource Collection)

Content Standards and Adult ESL (annotated bibliography)

Program Standards and Adult ESL (annotated bibliography)

English Language Assessment Instruments for Adults Learning English (from Practitioner Toolkit)


1. Why is it important for adult ESL programs to have a comprehensive assessment and evaluation plan?

A mark of a quality adult ESL program is a coherent plan for implementing and evaluating instruction and for assessing learner progress. Various stakeholders—learners, instructors, program administrators, funding agencies—may require different information from assessment and evaluation. Learners want to see how they are progressing, teachers want feedback on the effectiveness of their instruction, program administrators want evidence that the program is meeting its goals and the needs of its learners, and funding agencies want to see that their money is being well spent.

A single assessment rarely meets all of these demands. For example, the results of a standardized test that funding agencies use to compare learner gains across programs may not be readily understandable to learners. Conversely, feedback from a teacher-made vocabulary quiz may show how many new words the learners can use, but those results cannot be used to compare learner progress with another program that was not studying the same words.

Frequently, the National Reporting System (NRS) requirement to use a standardized assessment procedure to show level gain takes over and becomes the entire assessment plan. In fact, it should be only one piece of a comprehensive plan that provides accurate and useful information for each of the stakeholders and for modifying and improving programs.

2. Can you describe the history of standardized assessment requirements in adult ESL for U.S. federally funded programs?

In 1988, the use of standardized tests to evaluate adult education programs appeared in amendments to the Adult Education Act. The amendments required states to evaluate one-third of their grant recipients using standardized tests. The National Literacy Act of 1991 required the U.S. Department of Education (ED) to develop indicators of program quality that would assist states and local programs in judging the effectiveness of programs that provide adult education services. Indicators were specifically called for in the areas of recruitment, retention, and educational gains. ED sought input from the field by reviewing state and local practices related to program quality, commissioning papers by experts in the field, holding focus groups, and working closely with the state directors of adult education. Standardized tests, teacher reports of improvement in communication competencies, and demonstrated improvement on alternative assessments (e.g., portfolios, checklists of specific life skills, and student reports of attainment) were identified as sample measures of learners' attainment of basic skills and competencies that support their educational needs.

With the passage of the Government Performance and Results Act (GPRA) in 1993, an emphasis was placed on performance measurement as a requirement of government-funded program evaluations. Since the Workforce Investment Act of 1998, states have been required to negotiate with ED acceptable target levels of performance on three core indicators of quality, one of which is demonstrated improvement in skill levels in reading, writing, and speaking the English language. The core indicator levels of performance must be expressed in objective, quantifiable, and measurable form and must show the progress of each eligible program toward continual improvement of learner performance. The National Reporting System was established by ED, working with the state directors of adult education, to facilitate the accounting and reporting process.

3. What types of adult ESL programs are required to report learner gains to the U.S. Department of Education? What’s the process?

The Workforce Investment Act (WIA) of 1998 requires the U.S. Department of Education (ED) to negotiate levels of performance for core measures of performance with each state that receives federal funds for adult education under this act. All programs receiving federal funds through their state education office must provide information on the core measures for each student enrolled in their programs. One of the core measures is level gain. This information is reported through the National Reporting System (NRS). The NRS has defined six ESL functioning levels that describe the skills students are expected to demonstrate in listening and speaking, reading and writing, and functional and workplace skills.
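To make the level-gain measure concrete, here is a minimal sketch in Python. The six level names follow the NRS's ESL educational functioning levels, but the scale-score cut points below are invented for illustration and do not come from any actual standardized test; in practice, each approved test maps its own published score ranges to these levels.

    # Illustrative only: hypothetical cut scores marking the bottom of
    # each NRS-style ESL functioning level.
    from bisect import bisect_right

    LEVEL_CUTS = [0, 180, 200, 220, 240, 260]
    LEVEL_NAMES = [
        "Beginning ESL Literacy",
        "Low Beginning ESL",
        "High Beginning ESL",
        "Low Intermediate ESL",
        "High Intermediate ESL",
        "Advanced ESL",
    ]

    def functioning_level(scale_score):
        """Map a scale score to the functioning level it falls in."""
        return LEVEL_NAMES[bisect_right(LEVEL_CUTS, scale_score) - 1]

    def made_level_gain(pre_score, post_score):
        """A learner shows level gain when the post-test places them in a
        higher functioning level than the pre-test did."""
        pre = LEVEL_NAMES.index(functioning_level(pre_score))
        post = LEVEL_NAMES.index(functioning_level(post_score))
        return post > pre

    print(functioning_level(195))       # Low Beginning ESL
    print(made_level_gain(195, 212))    # True: advanced one level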

The states have the flexibility to set assessment policy, including selecting the tests or assessment procedures that local programs may use for pre- and post-testing, establishing a time frame for assessment (e.g., a calendar time for testing or a number of instructional hours before post-testing), and providing training to local program staff on requirements and procedures. States have also established statewide data collection systems for recording and reporting core measure results.

The NRS data is either aggregated at the local level and used to generate reports for the state or is sent directly to the state education office to be aggregated. The data is submitted to ED in the form of reporting tables. The Department of Education uses the data to demonstrate program effectiveness to Congress and to award grants to states that exceed their levels of performance. States are expected to use the data for program management and improvement purposes.
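As a rough illustration of the local aggregation step, the following sketch reuses the hypothetical functioning_level and made_level_gain helpers from the previous sketch to tally, for each entering level, how many learners advanced. The record fields and the miniature table layout are invented; the actual NRS reporting tables are defined by ED.

    from collections import Counter

    # Hypothetical local records: pre- and post-test scale scores.
    records = [
        {"learner_id": 1, "pre": 175, "post": 190},
        {"learner_id": 2, "pre": 195, "post": 212},
        {"learner_id": 3, "pre": 205, "post": 209},
    ]

    enrolled = Counter()   # learners counted by entering level
    advanced = Counter()   # of those, how many showed a level gain

    for r in records:
        level = functioning_level(r["pre"])
        enrolled[level] += 1
        if made_level_gain(r["pre"], r["post"]):
            advanced[level] += 1

    # Print a miniature "reporting table": gains over enrollment per level.
    for level, count in enrolled.items():
        print(f"{level}: {advanced[level]}/{count} advanced a level")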

4. What are the challenges in testing language proficiency level gains for adult English language learners?

There are many definitions of language proficiency. Because language has so many facets and so many uses, different tests approach different aspects of language proficiency. Over the years, proficiency testing has reflected changes in our understanding of language theory. It has moved from a structural view (e.g., discrete point tests of grammar, phonology, or other components of language), through a sociolinguistic view (e.g., integrative tests such as cloze and dictation), to a communicative view (e.g., oral interviews that assess the learner’s ability to use language to carry out communicative tasks). Today, given the focus on real-life practical content in adult ESL instruction and the goals of the learners—as well as in the educational functioning level descriptors of the National Reporting System (NRS)—a test should, in some way, look at language as communication.

Nevertheless, most items in most tests do not relate directly to either theoretically or empirically derived understandings of adult English language proficiency development. It is not enough to assume that if a test is constructed in English and requires responses in English, then higher scores will automatically correspond to higher levels of English proficiency.

At the same time, a number of tests designed for English language learners relate scores to some broadly defined scale of proficiency levels that are described in very global terms. The actual items in the test, however, may be particular to a competency area and may sample very narrowly from the broad ranges of behavior described by the proficiency levels. The generalizability of performance on the test items to all the situations consistent with a particular level description may be very difficult to establish.

The use of a single test form to assess the full spectrum of proficiency levels also means that most items will not match any particular test taker’s current level of functioning—that is, there are too few items at any one level of proficiency. Adaptive tests or tests with multiple levels usually make more accurate assessments of functioning levels.
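A toy sketch of the adaptive idea follows: after each response, pick the unused item whose difficulty is closest to the current ability estimate, and nudge the estimate up or down. This is only a heuristic illustration of why adaptive tests concentrate items near a test taker's level; real adaptive tests rely on item response theory models, not this simple rule, and the difficulties and update step here are invented.

    def next_item(ability, difficulties, used):
        """Pick the unused item whose difficulty is closest to the estimate."""
        candidates = [i for i in range(len(difficulties)) if i not in used]
        return min(candidates, key=lambda i: abs(difficulties[i] - ability))

    def run_adaptive_test(answer, difficulties, ability=0.0, step=0.5, length=5):
        """answer(i) -> True if the test taker gets item i right.
        Raise the estimate after a correct response, lower it after an
        incorrect one, so items converge on the taker's level."""
        used = set()
        for _ in range(length):
            i = next_item(ability, difficulties, used)
            used.add(i)
            ability += step if answer(i) else -step
        return ability

    # Example: a taker who answers correctly whenever an item's
    # difficulty is at or below 1.0.
    difficulties = [-2.0, -1.0, 0.0, 1.0, 2.0, 3.0]
    print(run_adaptive_test(lambda i: difficulties[i] <= 1.0, difficulties))

Note how the selection rule spends items near the taker's estimated level rather than across the whole scale, which is the source of the accuracy advantage the paragraph above describes.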

Therefore, it is important for program and instructional staff to read (and ask for) the supporting test development documentation, not just the administration and scoring guidelines, for commercially available tests. Look for information about the population for whom the test is designed, the theoretical underpinnings upon which it was constructed, and the reliability and validity studies that support what the test claims it can do. Evaluate the test’s usefulness for the population and the purposes for which it will be used by your program.