Abstract

This research aimed to: 1) examine the inter-rater reliability of the alignment between science test items and indicators at the lower secondary level; and 2) evaluate the alignment between science test items and indicators at the lower secondary level. The research subjects were 1,089 science test items used at the lower secondary level, chosen through a multi-stage random sampling procedure. The analysis relied on 20 expert panelists to evaluate the alignment. The data were analyzed for inter-rater reliability using Fleiss' kappa and the intra-class correlation (ICC), and mean alignment scores between science test items and indicators were calculated. The findings revealed that: 1) in the cognitive complexity evaluation, there was good inter-rater reliability, as demonstrated by Fleiss' kappa (Kf = 0.510); 2) in the evaluation of the alignment between science test items and indicators, there was excellent inter-rater reliability, as demonstrated by the intra-class correlation (ICC = 0.954, Sig. = .000); and 3) 92.93 percent of the items aligned with the specified indicators, with mean scores of 3.20-4.0.
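For readers unfamiliar with the two reliability statistics named in the abstract, the sketch below shows how Fleiss' kappa (for categorical cognitive-complexity ratings) and an intra-class correlation (for continuous alignment ratings) are typically computed. It is illustrative only: the ratings are randomly generated placeholders, the items-by-raters layout is an assumption, and the statsmodels and pingouin packages are one possible toolset, not the authors' actual analysis.

```python
# Minimal sketch, assuming hypothetical rating data (not the study's data).
import numpy as np
import pandas as pd
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa
import pingouin as pg

rng = np.random.default_rng(0)

# Hypothetical categorical ratings: 30 items x 20 raters, 4 complexity levels (codes 0-3).
categorical_ratings = rng.integers(0, 4, size=(30, 20))

# Convert to an items-by-categories count table, then compute Fleiss' kappa.
table, _ = aggregate_raters(categorical_ratings)
kappa = fleiss_kappa(table, method="fleiss")
print(f"Fleiss' kappa: {kappa:.3f}")

# Hypothetical continuous alignment scores (1-4 scale) in long format for the ICC.
scores = pd.DataFrame({
    "item":  np.repeat(np.arange(30), 20),
    "rater": np.tile(np.arange(20), 30),
    "score": rng.uniform(1, 4, size=30 * 20),
})
icc = pg.intraclass_corr(data=scores, targets="item", raters="rater", ratings="score")
print(icc[["Type", "ICC"]])
```

With the study's own panel ratings in place of the random placeholders, the same two calls would yield values comparable to the reported Kf = 0.510 and ICC = 0.954.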

Publisher

Faculty of Education, Chulalongkorn University

DOI

10.58837/CHULA.EDUCU.48.3.9

First Page

144

Last Page

163

Included in

Education Commons
