
CAL Staff Presentations at AAAL 2024

March 16

Less Commonly Taught Language Courses in U.S. Community Colleges: A National Review

Although community colleges are consistently more likely to serve historically minoritized students (Nagano et al., 2017) and better reflect the diversity of their local communities than other types of institutions of higher education (Cohen et al., 2013), little is known about the extent to which opportunities for language study in community colleges also reflect that diversity, and more research is needed on the availability of less commonly taught language courses in these settings.

This poster presents results from a comparative analysis of less commonly taught language courses (i.e., courses for modern languages other than English, French, Spanish, or German) at community colleges in the United States. The project addresses the following research questions:

  • What types of less commonly taught languages, including endangered and Indigenous languages, are taught in U.S. community colleges?
  • What course levels and sequences are offered for less commonly taught languages at U.S. community colleges?
  • To what extent are less commonly taught language courses offered in community colleges designated as minority-serving institutions?

Researchers collected data through a web search of two-year public institutions of higher education in each of the 50 states and the District of Columbia (n = 1014), using information from the institutions’ websites. Data were collected on the types of less commonly taught languages offered and on the course levels and number of semesters offered for each language.

This poster will provide an overview of the current state of less commonly taught language courses offered in community colleges nationwide, focusing on languages rarely taught in U.S. school systems and on courses offered at institutions serving diverse populations. Presenters will discuss the implications of these findings and how they can be used to support advocacy for, and greater enrollment in, less commonly taught language courses at the community college level.
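As a rough illustration only, and not the authors' actual analysis, course data of this kind could be tabulated along the following lines; the records, field layout, and institution names in this sketch are invented:

    # Hypothetical sketch: tabulating less commonly taught language (LCTL)
    # course data collected from institution websites.
    # All records and field names are invented; they are not study data.
    from collections import Counter, defaultdict

    # Each record: (institution, state, is_minority_serving, language, semesters_offered)
    records = [
        ("College A", "CA", True,  "Korean",   4),
        ("College A", "CA", True,  "Tagalog",  2),
        ("College B", "NM", True,  "Navajo",   2),
        ("College C", "OH", False, "Japanese", 4),
    ]

    # RQ1: which LCTLs are offered, and how often do they appear?
    language_counts = Counter(language for _, _, _, language, _ in records)

    # RQ2: length of the course sequence (in semesters) offered per language
    semesters_by_language = defaultdict(list)
    for _, _, _, language, semesters in records:
        semesters_by_language[language].append(semesters)

    # RQ3: share of LCTL course offerings located at minority-serving institutions
    msi_share = sum(1 for _, _, is_msi, _, _ in records if is_msi) / len(records)

    print(language_counts.most_common())
    print({lang: max(terms) for lang, terms in semesters_by_language.items()})
    print(f"{msi_share:.0%} of listed LCTL offerings are at minority-serving institutions")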

Presenters:

  • Jamie Morgan, CAL
  • Jillian Marie Seitz, Georgetown University
  • Francesca Di Silvio, CAL

Computer-Mediated Speaking Tests for Young Learners: Interactions Between Interface Design and Task Performance

While computer-mediated speaking assessments of young language learners allow for efficient large-scale test administration, test developers and researchers need to understand how the technology features of the computer delivery interface interact with test-taker experiences and performances to ensure that the test design reflects the intended speaking construct.

In this roundtable session, we discuss the design of and preliminary results from a research study focused on how technology interface features affect the speaking performances of young language learners in a computer-based speaking test of academic English language proficiency. In particular, we examine how opportunities to expand on previous answers through different automated prompts and scaffolds support students in providing test performances that optimally reflect their speaking ability.

In the study, we will administer speaking tasks to young English learners (ages 7-10) using different iterations of a speaking test interface, via a series of cognitive labs featuring student observations and interviews with the students and their educators. In the cognitive labs, students will respond to different tasks with various scaffolds and multimodal prompts for additional language. We will then code and analyze observation and interview data about students’ interactions with design features, along with task scores, and analyze students’ spoken responses to the test tasks. This session will present the study’s design, discuss preliminary results from observations of and interviews with English learners (n = 20) and their educators, and elicit discussion from attendees about the study’s implications.

We expect the study’s results to inform future technology updates to a large-scale test of English language proficiency for young learners. Additionally, they will contribute to a more systematic understanding of how test interface design impacts student speaking performances and the relationship between the test design and the speaking construct.

Presenters:

  • Fabiana MacMillan, WIDA, University of Wisconsin-Madison
  • Megan Montee, Georgetown University
  • Mark Chapman, WIDA, University of Wisconsin-Madison
  • David MacGregor, WIDA, University of Wisconsin-Madison
  • Justin Kelly, CAL

Research on the Current Landscape of U.S. K-12 Heritage Language Education

Although over 20% of children in the United States are heritage language learners (HLLs) who speak a language other than English at home (Carreira, 2022), few K-12 heritage language (HL) programs currently exist (Potowski, 2021), and those that do may use materials and approaches designed for L2 learners (Kibler & Valdés, 2016), perhaps due to a lack of content standards, curriculum frameworks, and proficiency scales specifically designed for HLLs (Gironzetti & Belpoliti, 2021). In addition, most research on HL programs is conducted at the postsecondary level, with limited research on and resources for HL education at the K-12 level.

This paper presents findings from a 2022 study conducted for the Massachusetts Department of Elementary and Secondary Education investigating the following research questions:

  • What is the landscape of heritage language programming in the U.S.?
  • What, if any, U.S. heritage language program models, elements, and practices in schools, districts, and states are associated with positive student outcomes?
  • What are the existing Massachusetts heritage language practices and dispositions?

To address the first and second research questions, we conducted a literature review examining how HL definitions, frameworks and standards, and program models are commonly used in the U.S. We also analyzed and compared HL definitions, frameworks and standards, program models, resources, and teacher training programs across 50 states, DC, and Puerto Rico; 123 world language, dual language, and HL organizations; and 33 individual districts/programs. To address the third research question, we conducted virtual focus groups (n = 14), interviews (n = 10), and an online survey (n = 142) with Massachusetts educators who have experience working with HLLs.

Presenters will share major findings, including challenges and needs for resources identified by educators working with HLLs in K-12 schools, and discuss considerations for developing additional resources and training materials to further support HLLs in U.S. school systems.

Presenters:

  • Leslie Fink, CAL
  • Francesca Di Silvio, CAL
  • Jamie Morgan, CAL

Rubric Development for Assessing Pragmatic Competence: Lessons Learned from the Professional Performance Assessment

Constructing valid and reliable rubrics is a challenge in designing pragmatic competency assessments. Pragmatic competence is a difficult construct to operationalize (Laughlin et al., 2015), and ratings are prone to subjectivity and rater bias (Kasper & Rose, 2002; Lee, 2009; Taguchi, 2015). The Professional Performance Assessment, a locally designed assessment for specific purposes, has successfully measured students’ workplace pragmatic competence in English using a task-specific, checklist-style rubric based on Lukacsi’s (2020) methodology. This paper presents the development of the rubric and accompanying training, the reliability of resulting scores, and this assessment’s impact on stakeholders.

Several steps contributed to the success of the rubric and training development process. Scoring descriptors were based on pilot testing and divided into task completion and pragmatic competence categories. The checklist was adapted to be task-specific, as the three assessment tasks differ in both pragmatic language use and demonstrated workplace skills. Scoring anchors and justifications were developed for each task, and raters reviewed these and completed practice items before each scoring session. All student responses were double-scored, and discrepancies of two or more points between total scores were adjudicated by the raters with reference to detailed scoring notes.
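As an illustration only, and not part of the assessment program itself, double-scoring statistics of this kind (exact and adjacent agreement, plus flagging of large discrepancies for adjudication) could be computed along the following lines; the rater scores below are invented:

    # Hypothetical sketch: exact/adjacent agreement on double-scored responses
    # and flagging of discrepancies of two or more points for adjudication.
    # The scores below are invented examples, not study data.
    rater1 = [12, 10, 8, 11, 9, 14, 13]
    rater2 = [12, 11, 8, 13, 9, 14, 12]

    exact = sum(a == b for a, b in zip(rater1, rater2))
    adjacent = sum(abs(a - b) == 1 for a, b in zip(rater1, rater2))
    needs_adjudication = [i for i, (a, b) in enumerate(zip(rater1, rater2))
                          if abs(a - b) >= 2]

    total = len(rater1)
    print(f"Exact agreement:          {exact / total:.1%}")
    print(f"Exact + adjacent:         {(exact + adjacent) / total:.1%}")
    print(f"Responses to adjudicate:  {needs_adjudication}")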

This approach has successfully differentiated between student performances and shown high levels of rater agreement, with up to 96.1% exact and adjacent agreement on scores across tasks. However, agreement at the descriptor level was low, and scorers have noted a tension between the analytic rubric and the holistic nature of the pragmatic construct. This paper discusses newly adopted training considerations designed to address these concerns.

The success of this rubric and training shows that task-specific, checklist-style rubrics combined with a robust training program can support effective assessment of pragmatic competence. The study has implications for other assessments of pragmatic competence and highlights the usefulness of scoring methods that are highly tailored to context and task.

Presenters:

  • Katherine Moran, CAL
  • Mathilda Reckford, CAL
  • Leslie Fink, CAL