Show simple item record

dc.contributor.author  Kahraman, Nilufer
dc.date.accessioned  2024-03-19T12:03:34Z
dc.date.available  2024-03-19T12:03:34Z
dc.date.issued  2014
dc.identifier.issn  1302-597X  en_US
dc.identifier.uri  http://hdl.handle.net/11727/11876
dc.description.abstract  Problem: Practitioners working with multiple-choice tests have long used Item Response Theory (IRT) models to evaluate the performance of test items for quality assurance. Similar applications for performance tests, however, are often encumbered by the challenges of working with complicated data sets in which local calibrations alone provide a poor model fit.
Purpose: The purpose of this study was to investigate whether the item calibration process for a performance test, the computer-based case simulations (CCS) taken from the United States Medical Licensing Examination® (USMLE®) Step 3® examination, may be improved through explanatory IRT models. It was hypothesized that explanatory IRT may improve data modeling for performance assessment tests by allowing important person predictors to be added to a conventional IRT model, which is otherwise limited to item predictors alone.
Methods: The responses of 767 examinees to a six-item CCS test were modeled using the Partial Credit Model (PCM) and four explanatory model extensions, each incorporating one predictor variable of interest. The predictor variables were the examinees' gender, the order in which examinees encountered an individual item (item sequence), the time it took each examinee to respond to each item (response time), and the examinees' ability score on the multiple-choice part of the examination.
Results: Results demonstrate a superior model fit for the explanatory PCM with examinee ability score from the multiple-choice portion of Step 3. Explanatory IRT model extensions might prove useful in complex performance assessment settings where item calibrations are often problematic due to short tests and small samples.
Recommendations: The findings have practical value and implications for researchers working with small or complicated response data. Explanatory IRT methodology not only provides a way to improve data modeling for performance assessment tests but also enhances the inferences made, by allowing important person predictors to be incorporated into a conventional IRT model.  en_US
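For reference, a minimal sketch of the models named in the abstract, in conventional notation rather than notation taken from the article itself; the symbols theta_p (person ability), delta_ij (step difficulties of item i), and Z_p (an observed person covariate, e.g. the examinee's multiple-choice ability score) are assumed here for illustration. The Partial Credit Model gives the probability that examinee p scores in category x of item i, and an explanatory, person-predictor extension replaces the free ability parameter with a latent regression:

% Partial Credit Model (Masters, 1982), with m_i + 1 ordered score
% categories per item and the convention
% \sum_{j=0}^{0} (\theta_p - \delta_{ij}) \equiv 0:
P(X_{pi} = x \mid \theta_p)
  = \frac{\exp \sum_{j=0}^{x} (\theta_p - \delta_{ij})}
         {\sum_{k=0}^{m_i} \exp \sum_{j=0}^{k} (\theta_p - \delta_{ij})},
  \qquad x = 0, 1, \ldots, m_i .

% Explanatory extension via latent regression: a person-level covariate
% Z_p (e.g. gender or the MCQ ability score) predicts the latent ability,
\theta_p = \beta_0 + \beta_1 Z_p + \varepsilon_p,
  \qquad \varepsilon_p \sim N(0, \sigma^2) .

Person-by-item covariates such as item sequence or response time would enter the linear predictor theta_p - delta_ij directly rather than through the person regression; in either case the plain PCM and each extension can be compared on likelihood-based fit indices, which is the kind of comparison the Results section reports.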
dc.language.iso  eng  en_US
dc.relation.isversionof  10.14689/ejer.2014.54.7  en_US
dc.rights  info:eu-repo/semantics/closedAccess  en_US
dc.subject  Explanatory Item Response Theory  en_US
dc.subject  Partial Credit Model  en_US
dc.subject  Item Response Theory  en_US
dc.subject  Performance Tests  en_US
dc.subject  Item calibration  en_US
dc.subject  Ability estimation  en_US
dc.subject  Small tests  en_US
dc.title  An Explanatory Item Response Theory Approach for a Computer-Based Case Simulation Test  en_US
dc.type  article  en_US
dc.relation.journal  EURASIAN JOURNAL OF EDUCATIONAL RESEARCH  en_US
dc.identifier.volume  54  en_US
dc.identifier.startpage  117  en_US
dc.identifier.endpage  134  en_US
dc.identifier.wos  000422367200007  en_US
dc.identifier.scopus  2-s2.0-84904724291  en_US
dc.relation.publicationcategory  Article - International Peer-Reviewed Journal  en_US
dc.contributor.researcherID  S-9457-2018  en_US


