
Evaluating Workplace ESL Instructional Programs


Miriam Burt and Mark Saccomano
Center for Applied Linguistics
September 1995

As the United States continued its shift from a manufacturing- to a service-based economy in the late 1980s and early 1990s, researchers reported that changes in employment patterns would require workers to have better communication skills and to be both literate and proficient in English (McGroarty & Scott, 1993). Not surprisingly, there was a rise in the number of workplace education programs for both native and non-native speakers of English. The U.S. Department of Education's National Workplace Literacy Program (NWLP), which funded demonstration workplace projects offering instruction in basic skills, literacy, and English as a second language (ESL), fueled this increase by funding more than 300 projects between 1988 and 1994. Forty-nine percent of these projects included at least some ESL instruction.

With this increase in workplace instructional programs has come a need for procedures to evaluate program effectiveness. Evaluations of ESL workplace programs seek to determine whether the attention given to improving basic skills and English language proficiency has produced change in participants and in the workplace. They also identify practices associated with program effectiveness so that successes can be replicated (Alamprese, 1994). This digest examines evaluation measures and activities used in workplace programs and discusses issues associated with evaluating workplace ESL programs.

Evaluation Measures and Activities

Because numbers alone cannot show the depth or breadth of a program's impact, evaluations often use both quantitative and qualitative measures to gauge success in attaining program outcomes (Padak & Padak, 1991). Qualitative measures include focus groups and individual interviews, workplace observations, and portfolios of learner classwork (Alamprese, 1994). Quantitative measures include commercially available tests, scaled performance ratings, and program-developed assessment instruments.

Focus Groups and Stakeholder Interviews

What is examined in an evaluation is determined by the stated goals of the stakeholders (employers, labor unions, participants, teachers, and funders), the expected outcomes for the program, and the resources available to the evaluator (Patton, 1987). Because stakeholders may have different, possibly conflicting goals, it is important to clarify these goals beforehand and to reach a consensus on which are most important to examine with the available resources (Fitz-Gibbon & Morris, 1987). The information gathered from focus groups and stakeholder interviews should be recorded and kept accessible to program staff and evaluators throughout the program.

Observations

Task analyses are generally used in curriculum development: educators observe and record the discrete steps that make up workplace tasks such as setting up the salad bar in a cafeteria or making change for a customer at the cash register. The recorded observations are then plotted on a matrix of basic skills or English language skills. Although programs have relied on these analyses as a key data source for workplace outcomes (Alamprese, 1994), they do not represent the totality of skills used at the workplace. To better understand the range of skills needed for workplace success, other workplace-related activities, such as staff meetings and union functions, should also be observed.

Participant and Supervisor Interviews

Pre-program interviews with participants solicit information on their goals, their reasons for enrolling in the classes, and their perceived basic skills and English language needs for the workplace. When matched with exit interview data, these responses provide a basis for evaluating program outcomes. Because the purpose of these interviews is to obtain information about learner perceptions rather than to assess learner skills, it is advisable to conduct them in the native language when participants have low English skills.

Similarly, the direct supervisors of participants should be interviewed both before and after the program to compare initial assessment of learner needs and expected outcomes with actual results. It is also useful to interview the direct supervisors midway through the program for their feedback on worker improvement and to identify unmet needs.

Tests and Other Types of Assessment

Commercially available tests are a common source of quantitative data. The perceived objectivity of these tests and their long tradition of use make them appealing to managers and funders, who often use them to make decisions about continuing a program. Moreover, test-taking is a skill all learners need, and ESL participants are likely to encounter this type of test in other contexts as well.

Two commercially available tests that include workplace-related items and are often used in ESL programs are the Basic English Skills Test (BEST) and the Comprehensive Adult Student Assessment System (CASAS) ESL Appraisal. These instruments are easy to use, their reliability has been tested, and they allow for comparison among programs. The objections to these tests are that they may not measure what has been taught in the classroom, and they may have little applicability to specific workplace tasks or to a particular workplace. And, as with all tests, when interpreting results, evaluators and program staff should be aware that some errors may be due to ESL participants' unfamiliarity with the format of the tests rather than to lack of content knowledge.

Because of the limitations of commercially available tests, a complete evaluation of learner progress requires using tests created for the program. Program-developed tests are designed to measure the learner's ability to apply what has been learned to specific workplace tasks (Alamprese & Kay, 1993). Because these tests are built from authentic materials (e.g., job schedules, pay stubs, and union contracts) from participants' own workplaces, the content is appropriate and likely to be familiar to the participants.

Another assessment measure is the portfolio of learner work. Portfolios often include samples of class work, checklists where learners rate their progress in basic and workplace skills, and journals where they record their reactions to class and workplace activities. Like interviews, these measures can provide vital information on learner attitudes and concerns. They are also a venue for self-assessment, and allow participants who are unable or unwilling to express themselves orally, or who have difficulty with formal tests, to demonstrate progress towards their goals.

Quantifying Qualitative Measures

To increase credibility and help ensure the reliability of qualitative measures, evaluators collect multiple types of evidence (such as interviews and observations) from various stakeholders around a single outcome (Alamprese, 1994; Lynch, 1990; Patton, 1987). Data collected from the various measures can then be arranged into matrices. This chart-like format organizes material thematically and enables analysis of data across respondents by theme (see Fitz-Gibbon & Morris, 1987; Lynch, 1990; Sperazi & Jurmo, 1994).
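
To make the matrix format concrete, here is a minimal sketch in Python (not part of the original digest) of how coded evidence might be cross-tabulated by theme and stakeholder group; the themes, groups, and records are invented for illustration.

    # Hypothetical cross-respondent matrix: count pieces of evidence
    # supporting each theme, broken down by stakeholder group.
    from collections import defaultdict

    # Each record pairs a stakeholder group with the theme a comment
    # or observation was coded under (all values are invented).
    evidence = [
        ("supervisor", "communication"), ("supervisor", "safety"),
        ("participant", "communication"), ("participant", "confidence"),
        ("teacher", "communication"), ("teacher", "confidence"),
        ("supervisor", "communication"),
    ]

    # matrix[theme][group] = number of pieces of evidence
    matrix = defaultdict(lambda: defaultdict(int))
    for group, theme in evidence:
        matrix[theme][group] += 1

    groups = ["supervisor", "participant", "teacher"]
    print(f"{'theme':<15}" + "".join(f"{g:>12}" for g in groups))
    for theme in sorted(matrix):
        print(f"{theme:<15}" + "".join(f"{matrix[theme][g]:>12}" for g in groups))

Each cell then shows how many independent pieces of evidence support a theme for a given stakeholder group, making it easy to see where multiple sources converge on the same outcome.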

Questionnaire and interview data can be quantified by creating a scale that categorizes responses and assigns them a numeric value. Improvement in such subjective areas as worker attitudes can then be demonstrated to funders and managers in a numeric or graphic form.
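
As a minimal sketch of that quantification step (again in Python and not from the original digest), the snippet below assigns a hypothetical three-point scale to categorized interview responses and compares pre- and post-program averages; the categories and responses are invented for illustration.

    # Hypothetical scale: map categorized responses to numeric values.
    from statistics import mean

    SCALE = {"negative": 1, "neutral": 2, "positive": 3}

    # Invented pre- and post-program attitude responses
    pre_responses = ["negative", "neutral", "negative", "neutral", "positive"]
    post_responses = ["neutral", "positive", "positive", "neutral", "positive"]

    pre_score = mean(SCALE[r] for r in pre_responses)    # 1.80
    post_score = mean(SCALE[r] for r in post_responses)  # 2.60

    print(f"pre-program mean:  {pre_score:.2f} (1-3 scale)")
    print(f"post-program mean: {post_score:.2f} (1-3 scale)")
    print(f"change:            {post_score - pre_score:+.2f}")

The resulting averages can then be reported to funders and managers directly or presented in a simple chart.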

Issues in Program Evaluation

Many issues surround program evaluation for workplace ESL instruction. Stakeholders may have unrealistic expectations of how much improvement a few hours of instruction can effect. It is unlikely that a workplace ESL class of 40-60 hours will turn participants with low-level English skills into fluent speakers of English. Therefore, when interpreting findings, it is important for stakeholders to realize that ESL workplace programs may not provide enough practice time to accomplish substantial progress in English language proficiency.

The measurement of workplace improvement presents a special challenge, especially in workplace programs at hospitals, residential centers, and restaurants. What measures of workplace productivity exist where no product is being manufactured? Improved safety (decreased accidents on the job) is a quantifiable measure, as is a reduction in the amount of food wasted in preparation. But how is improved worker attitude measured? Some ESL programs measure success by counting increases in the number of verbal and written suggestions offered on the job by language minority workers, or in workers' willingness to indicate lack of comprehension (Mikulecky & Lloyd, 1994; Mrowicki & Conrath, 1994). Other programs record participant requests to be cross-trained or to learn other jobs at their workplaces (Alamprese & Kay, 1993). A long-term view is often needed, however, to discern changes in worker performance and in workplace productivity; longitudinal studies, in which stakeholders are interviewed six months to a year after completion of a program, are recommended.
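
The arithmetic behind such quantifiable measures is simple but worth making explicit: raw counts should be normalized (for example, per hours worked) before pre- and post-program figures are compared, since staffing levels rarely stay constant. The Python sketch below uses entirely hypothetical figures.

    # Hypothetical normalization of a countable workplace indicator.
    def per_thousand_hours(events, hours_worked):
        """Convert a raw event count to a rate per 1,000 hours worked."""
        return events / hours_worked * 1000

    # Invented six-month windows before and after instruction
    accidents_before = per_thousand_hours(events=12, hours_worked=48_000)  # 0.25
    accidents_after = per_thousand_hours(events=7, hours_worked=50_000)    # 0.14

    print(f"accident rate before: {accidents_before:.2f} per 1,000 hours")
    print(f"accident rate after:  {accidents_after:.2f} per 1,000 hours")

The same normalization applies to suggestion counts or cross-training requests, where the denominator might be the number of participating workers rather than hours worked.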

Even when data from longitudinal studies are available, it is not accurate to give the instructional program all of the credit for improvement in worker attitude or workplace productivity (or all of the blame for the lack thereof). Sarmiento (1993) asserts that other factors (Are there opportunities for workers to advance? Are the skills of all workers appreciated and used? Is worker input in decision making valued?) need to be considered when evaluating workplace programs. However, for ESL participants who come from cultures where assertiveness, ambition, and speaking up on the job may not be valued, the presentation of opportunities to succeed is not enough. Advancing oneself in the U.S. workplace is a cross-cultural skill that, like language and literacy skills, must be taught.

Finally, funding is an important issue in evaluation. The activities described above (focus groups, interviews in English or in the native language, program-developed assessment instruments, extensive contacts with all stakeholders from before the program begins until months after completion) are costly. As federal funds are unlikely to be available, evaluations need to be structured so that they can provide practical information to the employers funding them.

Conclusion

Evaluation is a complex process that involves all stakeholders and must be an integral part of a workplace ESL instructional program from before instruction begins until after it has been completed. When done appropriately, evaluation can increase program effectiveness by providing valuable information about the impact of programs and by highlighting areas where improvement is needed (Jurmo, 1994). A rigorous and complete evaluation can also identify replicable best practices, enabling a program to serve as a model for other workplace ESL instructional programs.


References

  • Alamprese, J.A. (1994). Current practice in workplace literacy evaluation. Mosaic: Research Notes on Literacy, 4(1), 2.

  • Alamprese, J.A., & Kay, A. (1993). Literacy on the cafeteria line: Evaluation of the Skills Enhancement Training Program. Washington, DC: COSMOS Corporation and Ruttenberg, Kilgallon, & Associates. (EDRS No. ED 368 933)

  • Fitz-Gibbon, C.T., & Morris, L.L. (1987). How to design a program evaluation. Beverly Hills, CA: Sage.

  • Jurmo, P. (1994). Workplace education: Stakeholders' expectations, practitioners' responses, and the role evaluation might play. East Brunswick, NJ: Literacy Partnerships. (EDRS No. ED 372 282)

  • Lynch, B.K. (1990). A context-adaptive model for program evaluation. TESOL Quarterly, 24(1), 23-42.

  • McGroarty, M., & Scott, S. (1993). Workplace ESL instruction: Varieties and constraints. Washington, DC: National Center for ESL Literacy Education. (EDRS No. ED 367 190)

  • Mikulecky, L., & Lloyd, P. (1994). Handbook of ideas for evaluating workplace literacy programs. Bloomington, IN: Indiana University. (EDRS No. ED 375 264)

  • Mrowicki, L., & Conrath, J. (1994). Evaluation guide for basic skills programs. Des Plaines, IL: Workplace Education Division, The Center - Resources for Education. (EDRS No. ED 373 261)

  • Padak, N.D., & Padak, G.M. (1991). What works: Adult literacy program evaluation. Journal of Reading, 34(5), 374-379.

  • Patton, M.Q. (1987). How to use qualitative methods in evaluation. Beverly Hills, CA: Sage.

  • Sarmiento, A. (1993). Articulation and measurement of program outcomes. In MSPD Evaluation Support Center (Ed.), Alternative designs for evaluating workplace literacy programs (pp. 5: 1-13). Research Triangle Park, NC: Research Triangle Institute. (EDRS No. ED 375 312)

  • Sperazi, L., & Jurmo, P. (1994). Team evaluation: A guide for workplace education programs. East Brunswick, NJ: Literacy Partnerships. (EDRS No. ED 372 284)

 


This document was funded in part by the Andrew W. Mellon Foundation through a grant to the Center for Applied Linguistics (4646 40th Street, NW, Washington, DC 20016; 202-362-0700). Additional funding was provided by the U.S. Department of Education (ED), Office of Educational Research and Improvement, under contract no. RR 93002010. The opinions expressed in this report do not necessarily reflect the positions or policies of ED or the Andrew W. Mellon Foundation. This document is in the public domain and may be reproduced without permission.