Figure 26. Target Distribution of Score Points for Scientific Competencies

Scientific Competencies                          % of score points
Explaining phenomena scientifically              40-50%
Evaluating and designing scientific enquiry      20-30%
Interpreting data and evidence scientifically    30-40%
TOTAL                                            100%
109. Item contexts will be spread across personal, local/national and global settings in roughly the ratio 1:2:1, as was the case in 2006. A wide selection of areas of application will be used for units, subject as far as possible to the constraints imposed by the distribution of score points shown in Figure 25 and Figure 26.
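The score-point bands in Figure 26 and the 1:2:1 context ratio act as joint constraints on test assembly. As a minimal illustrative sketch only (none of the function names, data structures, or tolerances below appear in the framework; in particular, the 10-percentage-point tolerance used to operationalise "roughly" is an assumption), such constraints might be checked against a proposed item pool as follows:

```python
from collections import Counter

# Target score-point bands per competency, taken directly from Figure 26.
COMPETENCY_BANDS = {
    "explaining phenomena scientifically": (40, 50),
    "evaluating and designing scientific enquiry": (20, 30),
    "interpreting data and evidence scientifically": (30, 40),
}

# Target context spread of roughly 1:2:1 (personal : local/national : global).
CONTEXT_TARGETS = {"personal": 0.25, "local/national": 0.50, "global": 0.25}


def check_distribution(items, tolerance=0.10):
    """items: list of (competency, context, score_points) tuples.

    Returns True if the pool satisfies both the Figure 26 bands and the
    context ratio (within the assumed tolerance); prints any violations.
    """
    ok = True

    # Each competency's share of total score points must fall in its band.
    total_points = sum(points for _, _, points in items)
    points_by_competency = Counter()
    for competency, _, points in items:
        points_by_competency[competency] += points
    for competency, (low, high) in COMPETENCY_BANDS.items():
        share = 100.0 * points_by_competency[competency] / total_points
        if not low <= share <= high:
            ok = False
            print(f"{competency}: {share:.1f}% outside target {low}-{high}%")

    # Item contexts are counted per item, not per score point.
    items_by_context = Counter(context for _, context, _ in items)
    n_items = sum(items_by_context.values())
    for context, target in CONTEXT_TARGETS.items():
        share = items_by_context[context] / n_items
        if abs(share - target) > tolerance:
            ok = False
            print(f"context {context}: {share:.0%} vs target {target:.0%}")

    return ok
```

A pool would be passed in as tuples such as ("explaining phenomena scientifically", "personal", 4); because the bands overlap and sum to more than 100%, many different pools can satisfy the targets simultaneously, which is what gives item developers the latitude paragraph 109 describes.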
Reporting Scales
110. To meet the aims of PISA, the development of scales of student achievement is essential. A descriptive scale of levels of competence needs to be based on a theory of how the competence develops, not just on a post-hoc interpretation of what items of increasing difficulty seem to be measuring. The 2015 draft framework has therefore defined explicitly the parameters of increasing competence and progression, allowing item developers to design items that represent this growth in ability (Kane, 2006; Mislevy and Haertel, 2006). Initial draft descriptions of the scales are offered below, though it is recognised that these may need to be modified as data accumulate after field testing of the items. Although comparability with the 2006 scale descriptors (OECD, 2007) has been maximised in order to enable trend analyses, the new elements of the 2015 framework, such as depth of knowledge, have also been incorporated. The scales have also been extended by the addition of a level ‘1b’ that describes students at the lowest level of ability, who demonstrate only minimal evidence of scientific literacy and would previously not have been included in the reporting scales. The initial draft scales for the 2015 framework therefore propose more detailed and more specific descriptors of the levels of scientific literacy, not an entirely different model.