Browsing by Author "Kalender, Ilker"
Item: Can Computerized Adaptive Testing Work in Students' Admission to Higher Education Programs in Turkey? (2017)
Berberoglu, Giray; Kalender, Ilker

Admission to university in Turkey is highly competitive and presents a number of practical problems, concerning not only the test administration process itself but also the psychometric properties of the test scores. Computerized adaptive testing (CAT) is seen as a possible alternative approach for solving these problems. In the first phase of the study, a series of CAT simulations based on real students' responses to science items was conducted to determine which test-termination rule produced results most comparable with scores on the paper-and-pencil version of the test. An average of 17 items was sufficient to terminate the CAT administration at a reasonable reliability level, as opposed to the usual 45 items. Moreover, CAT-based science scores not only produced similar correlations when mathematics subtest scores were used as an external criterion, but also ranked the students similarly to the paper-and-pencil version. In the second phase, a live CAT administration was implemented with an item bank of 242 items and a group of students who had previously taken the paper-and-pencil version of the exam. A correlation of .76 was found between the CAT and paper-and-pencil scores for this group. The results support the CAT version of the subtests as a feasible alternative in Turkey's university admission system.

Item: The Measurement Invariance of University Students' Ratings of Instruction (2019)
Kalender, Ilker; Berberoglu, Giray

The invariance of student ratings of instruction was studied across high- and low-achieving classrooms. Achievement levels were determined by two criteria: self-reported expected grades and end-of-semester grades. The data included 625 classrooms. The equality of (i) factorial structure, (ii) factor loadings, (iii) item intercepts, and (iv) error variances of the 7-item rating scale was studied across these groups. With respect to self-reported expected grades, high- and low-achieving classes produced invariant scale characteristics at every level except strict invariance. With respect to end-of-semester grades, however, full equality of item intercepts and error variances was not achieved. Comparing rating results across classrooms and courses without regard to students' achievement levels may therefore be misleading, especially for high-stakes decisions, since the origin of the scale is not the same across high- and low-achieving groups.
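The termination rule at the heart of the first study (stop the test once the ability estimate is precise enough, rather than after a fixed 45 items) can be sketched as a minimal simulation. The sketch below assumes a Rasch (1PL) item response model, maximum-information item selection, and a standard-error cutoff; the item bank, cutoff value, and update scheme are illustrative assumptions, not the study's actual procedure.

```python
import math
import random

def p_correct(theta, b):
    # Rasch (1PL) model: probability that an examinee of ability theta
    # answers an item of difficulty b correctly.
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def cat_simulation(true_theta, bank, se_cutoff=0.5, max_items=45,
                   min_items=5, seed=0):
    """Administer items adaptively from `bank` (a list of difficulties)
    and stop once the standard error of the ability estimate falls
    below `se_cutoff`, or after `max_items` items."""
    rng = random.Random(seed)
    used, responses = set(), []
    theta, se = 0.0, float("inf")
    while len(used) < min(max_items, len(bank)):
        # For the Rasch model, item information p*(1-p) is largest when the
        # item difficulty is closest to the current ability estimate.
        idx = min((i for i in range(len(bank)) if i not in used),
                  key=lambda i: abs(bank[i] - theta))
        used.add(idx)
        b = bank[idx]
        responses.append((b, 1 if rng.random() < p_correct(true_theta, b) else 0))
        # Newton-Raphson steps toward the maximum-likelihood ability
        # estimate, clamped to keep extreme response patterns bounded.
        for _ in range(20):
            grad = sum(x - p_correct(theta, d) for d, x in responses)
            info = sum(p_correct(theta, d) * (1 - p_correct(theta, d))
                       for d, x in responses)
            theta = max(-4.0, min(4.0, theta + grad / info))
        se = 1.0 / math.sqrt(info)
        if len(used) >= min_items and se <= se_cutoff:
            break
    return theta, len(used), se
```

With a bank of items spread across the difficulty range, the loop typically terminates well before the full-length test, which is the mechanism behind the shorter average test length the study reports.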