Using the Many Facet Rasch Model to Hire Online Instructors

Date & Time

Aug 4th at 12:45 PM until 1:30 PM


Online Education Administration 



In 2018, adjunct instructors met 47% of instructional staff needs in higher education (NCES, 2018). Many institutions rely heavily on adjunct instructors to teach a growing number of online sections, but hiring qualified adjuncts who fit an institution’s online model and organizational structure is challenging because teaching online is a complex task that requires a unique skillset. Hiring processes should be well calibrated to an institution’s scope and mission. In addition, rating processes can introduce unwanted error variance that decreases the validity and reliability of the scores (e.g., raters may be more or less severe).

BYU-Idaho uses an evaluation course to observe the performance of candidates and make judgments based on simulated interactions with remote students. Candidates’ performance is rated, and each candidate is given a score. To help ensure that the resulting scores are fair and consistent, we have applied the Many-Facet Rasch Model (MFRM) to our rating process. The MFRM belongs to a family of item response theory (IRT) models designed to account for error variance associated with various facets (e.g., rater severity or question difficulty). The MFRM explicitly accounts for the variance associated with each rating facet (e.g., rater, rating occasion, item), places each level of each facet on a common “ability” scale, and produces a weighted ability score for instructor candidates that accounts for these facets. The resulting MFRM scores are directly comparable, given what we know about the facets involved.

Our research questions included: Can the MFRM successfully be used to measure online instructor candidate performance in an evaluation course environment? What benefits do we realize from using the MFRM, as opposed to methods that do not account for variance across facets? What do we learn about our hiring process using the MFRM?
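As a sketch, a common rating-scale form of the MFRM (following Linacre) models the log-odds of a candidate receiving rating category k rather than k−1 on a rubric item as the candidate’s ability minus an item difficulty, a rater severity, and a category threshold. The three-facet structure shown here (candidate, item, rater) is illustrative; the facets actually modeled in a given hiring process may differ:

```latex
\log\!\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right) = B_n - D_i - C_j - F_k
```

where \(B_n\) is the ability of candidate \(n\), \(D_i\) the difficulty of rubric item \(i\), \(C_j\) the severity of rater \(j\), and \(F_k\) the threshold of category \(k\) relative to \(k-1\). Because \(B_n\) is estimated net of \(C_j\), two candidates scored by raters of differing severity are placed on the same scale, which is what makes the resulting measures comparable across raters.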
Results indicate that the MFRM can produce candidate performance measures that are comparable across raters, rating occasions, and hiring constructs. Using the MFRM, we are able to confidently identify candidates who fit our institution’s mission and instructional model, regardless of the severity or leniency of their raters. MFRM output also allows us to evaluate the effectiveness of the evaluation course, the rubric items, and the raters themselves. Other institutions may benefit from using the MFRM to ensure instructor fit and mitigate the negative impact of poorly performing or misaligned instructors.