Education Indicators in Canada: Handbook for the Report of the Pan-Canadian Education Indicators Program September 2017
Section C: Elementary-secondary education
C1 Early years and school readiness
Indicator C1 assesses the early years and school readiness of 4- and 5-year-old children by examining their health status (including any health limitations), participation in activities, exposure to reading and reading materials, and their language scores/vocabulary skills.
Concepts and definitions
- The child’s general health was classified as: excellent; very good; good; or fair or poor. The categories were read to the adult respondents who answered on behalf of their children in the National Longitudinal Survey of Children and Youth (NLSCY).
- This indicator also considers certain health limitations affecting the child. One set of questions asked about the child’s day-to-day health and focused on his or her abilities relative to other children of the same age. The adult respondents were told that these same questions would be asked of everyone. This indicator considers the following: difficulty seeing; difficulty hearing; difficulty being understood when speaking; difficulty walking; and pain or discomfort. Pain or discomfort reflects the “no” responses to a question asking if the child is “usually free of pain or discomfort.” These questions are part of an index called the Health Utility Index.
- Before being asked about chronic conditions, the adult who was responding on behalf of the child was told that this referred to “conditions that have lasted or are expected to last six months or more and have been diagnosed by a health professional” and was instructed to mark all that apply. This indicator presents information for long-term allergies and long-term bronchitis, as well as asthma. The questions for asthma were asked separately, and the information presented reflects the percentage of children aged 4 or 5 who had ever been diagnosed with asthma, not just those who had had an asthma attack in the 12 months before the survey interview.
- Weekly physical activities outside of school hours refers to weekly participation (ranging from most days to about once a week) in: sports that involved a coach or instructor (except dance, gymnastics or martial arts); lessons or instruction in organized physical activities such as dance, gymnastics or martial arts; lessons or instruction in music, art or other non-sport activities; and participation in any clubs, groups or community programs with leadership (for example, Beavers, Sparks or church groups). The adults who responded on behalf of these young children were asked to provide information on the children’s physical activities for the 12-month period leading up to the survey interview.
- Daily reading activities outside of school hours reflects some of the information obtained from questions about literacy, including how often a parent read aloud to the child or listened to the child read (or try to read). Respondents were also asked how often the child looked at books, magazines, comics, etc. on his/her own, or tried to read on his/her own (at home).
- The Peabody Picture Vocabulary Test-Revised (PPVT-R) measures children’s receptive vocabulary, which is the vocabulary that is understood by the child when he or she hears the words spoken. It is a “normed” test; that is, a child’s performance is scored relative to that of an overall population of children at the same age level as the child. A wide range of scores represents an average level of ability, taking the age of the child into consideration. Scores below the lower threshold of this average range reflect a delayed receptive vocabulary, and scores above the higher threshold demonstrate an advanced receptive vocabulary.
- The PPVT-R is scaled to an average of 100. The range of average receptive vocabulary measured by the PPVT-R covers scores from 85 to 115. A score below 85 is considered to indicate delayed receptive vocabulary; a score above 115, advanced. Scoring is adjusted to reflect the different abilities of 4- and 5-year-olds. English and French scores are assessed separately and are not directly comparable.
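As a minimal illustration of how these thresholds are applied, the hypothetical function below simply restates the ranges described above; it is not the NLSCY scoring routine, and the score value used is illustrative only.

```python
def classify_ppvt_r(standard_score: float) -> str:
    """Classify a PPVT-R standard score (scaled to an average of 100)
    using the average range of 85 to 115 described above."""
    if standard_score < 85:
        return "delayed receptive vocabulary"
    if standard_score > 115:
        return "advanced receptive vocabulary"
    return "average receptive vocabulary"

# Hypothetical example: a child with a standard score of 92 falls in the average range.
print(classify_ppvt_r(92))  # average receptive vocabulary
```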
Methodology
- The National Longitudinal Survey of Children and Youth (NLSCY) is a long-term study of Canadian children that follows their development and well-being from birth to early adulthood. The survey was designed to collect information about factors influencing a child’s social, emotional and behavioural development and to monitor the impact of these factors on the child’s development over time.
- This indicator is based on nationally representative data for 4- and 5-year-olds from cycle 8 of the NLSCY, which was conducted in 2008/2009.
- The information presented was obtained from the NLSCY child component; specifically, the questions on child health, activities (sports, lessons, clubs, etc.) and literacy. Responses were provided by the person most knowledgeable (PMK) about the child, who is usually the mother.
Limitations
- The NLSCY relies on the perceptions of the adult most familiar with the child to report on the child’s general health and development, and such reports may not always be entirely objective or accurate.
- The following are possible sources of non-sampling errors in the NLSCY: response errors due to sensitive questions, poor memory, translated questionnaires, approximate answers, and conditioning bias; non-response errors; and coverage errors.
Data source
- National Longitudinal Survey of Children and Youth (NLSCY), Statistics Canada. For more information, consult “Definitions, data sources and methods”, Statistics Canada Web site, survey 4450.
C2 Elementary-secondary school: enrolments and educators
Characteristics of the educator workforce are captured in Indicator C2 (CANSIM 477-0109, 477-0107, and 477-0108).
Concepts and definitions
- Public schools are publicly funded elementary and secondary schools that are operated by school boards or the province or territory. They include all regular publicly funded schools, as well as provincial reformatory or custodial schools and other schools that are recognized and funded by the province or territory.
- Educators include all employees in the public schools who belong to one of the three following categories: teachers, school administrators and pedagogical support.
- Teachers include personnel involved in direct student instruction on a group or one-on-one basis. They include classroom teachers; special education teachers; specialists (music, physical education); and other teachers who work with students as a whole class in a classroom, in small groups in a resource room, or one-on-one inside or outside a regular classroom, including substitute/supply teachers. Chairpersons of departments who spend the majority of their time teaching, and personnel temporarily not at work (e.g., for reasons of illness or injury, maternity or parental leave, or holiday or vacation), are reported in this category. This category excludes teacher's aides and student teachers, as well as other personnel who are not paid for their work. For paid teacher's aides or educational assistants, see the category "pedagogical support" below.
- School administrators include all personnel who support the administration and management of the school, such as principals, vice-principals and other management staff with similar responsibilities, provided they do not spend the majority of their time teaching. This category excludes those in higher-level management; receptionists, secretaries, clerks and other staff who support the administrative activities of the school; and those reported under "other than educators".
- Pedagogical support staff includes professional non-teaching personnel who provide services to students to support their instruction program. It includes educational assistants, paid teacher's aides, guidance counselors and librarians. This category excludes those in health and social support who should be reported under "other than educators".
- Educator headcount is defined as the number of educators on September 30th (or as close as possible thereafter) of the school year who are responsible for providing services to the students reported in the enrolment headcount tables.
Methodology
- The Elementary-Secondary Education Survey (ESES) is a national survey that enables Statistics Canada to provide information on enrolments, graduates, educators and finance of Canadian elementary-secondary public and private educational institutions. It also provides enrolment information for home-schooled students.
- The ESES is an annual survey that collects aggregate data from each provincial/territorial Ministry or Department of Education. The information on enrolments is collected by type of program (regular, upgrading, and vocational), by grade and sex and by age and sex.
- The survey also collects data on secondary school graduates by type of program (regular, upgrading, and vocational), by age and sex.
- Information pertaining to full-time and part-time educators by age group and sex is also collected. Finally, the survey gathers expenditure data by level of government (school board and other government) and by type of expenditure.
Limitations
- Due to the nature of the Elementary-Secondary Education Survey (ESES) data collection, these data are updated on an ongoing basis and are therefore subject to further revisions.
- Care should be taken with cross-jurisdictional comparisons. The proportion of educators (comprising a mix of teachers, administrators and pedagogical support) differs in each jurisdiction.
Data source
- Elementary-Secondary Education Survey, Statistics Canada. For more information, consult "Definitions, data sources and methods", Statistics Canada Web site, survey 5102.
C4 Student achievement
Programme for International Student Assessment (PISA)
Indicator C4 reports on student achievement in three key areas—reading, mathematics, and science—and looks at changes in results over time. Performance was examined using results from the Programme for International Student Assessment (PISA), an international program of the Organisation for Economic Co-operation and Development (OECD).
This sub-indicator presents detailed information on the performance of 15-year-old students in Canada in the major PISA domains of reading, mathematics and science (CANSIM 479-0001, 479-0002, 479-0003).
Concepts and definitions
- The Programme for International Student Assessment (PISA) is a collaborative effort of member countries of the OECD along with partner countries to regularly assess youth outcomes, using common international tests, for three domains: reading, mathematics, and science. The goal of PISA is to measure students’ skills in reading, mathematics, and science not only in terms of mastery of the school curriculum, but also in terms of the knowledge and skills needed for full participation in society.
- Reading: An individual’s capacity to understand, reflect on, and engage with written texts, in order to achieve one’s goals, to develop one’s knowledge and potential and to participate in society.
- Mathematics: An individual’s capacity to identify and understand the role that mathematics plays in the world, to make well-founded judgments and to use and engage with mathematics in ways that meet the needs of that individual’s life as a constructive, concerned and reflective citizen.
- Science: An individual’s capacity to use scientific knowledge, to identify questions and to draw evidence-based conclusions in order to understand and help make decisions about the natural world and the changes made to it through human activity.
Methodology
- Internationally, around 510,000 students from 72 countries and economies participated in PISA 2015. PISA’s target population comprises 15-year-olds who are attending school. In Canada, the student sample is drawn from Canada’s 10 provinces; the territories have not participated in PISA to date. The PISA assessments are administered in schools, during regular school hours, in the spring. Students of schools located on Indian reserves were excluded, as were students of schools for those with severe learning disabilities, schools for blind and deaf students, and students who were being home-schooled.
- While all three of the PISA domains are tested in each assessment, only one forms the major domain in each cycle, meaning it includes more assessment items than the others. In each cycle, two-thirds of testing time is devoted to the major domain. Mathematics was the major domain in 2003 and 2012, reading in 2000 and 2009, and science in 2006 and 2015.
- Results for the major domains are available in a combined domain scale (which represents students’ overall performance across all the questions in the assessment for that domain), as well as on the sub-domains that make up each overall scale. As fewer items are tested as part of the minor domains, only combined or overall results are available from PISA.
- In PISA, student performance is expressed as a number of points on a scale constructed so that the average score for the major domains for students in all participating countries was 500 and its standard deviation was 100.
- PISA results can also be presented as the distribution of student performance across levels of proficiency. The levels range from the lowest, Level 1, to the highest, Level 6. Descriptions of each level have been generated, based on the framework-related cognitive demands imposed by tasks located within that level; these descriptions outline the kinds of knowledge and skills needed to complete those tasks successfully and characterize the substantive meaning of each level.
- According to the OECD, Level 2 can be considered a baseline level of proficiency, at which students begin to demonstrate the competencies that will enable them to participate effectively and productively in life. Students performing below Level 2 can still accomplish some tasks successfully, but they lack some of the fundamental skills that would prepare them to enter the workforce or pursue postsecondary education.
- When comparing student performance among countries, provinces, or population subgroups, the PISA tables identify statistically significant differences. Statistical significance is determined by mathematical formulas and considers issues such as sampling and measurement errors. Sampling errors relate to the fact that performance was computed from the scores of random samples of students from each country and not from the entire population of students in each country. Consequently, it cannot be said with certainty that a sample average has the same value as a population average that would have been obtained had all 15-year-old students been assessed. Additionally, a degree of error is associated with the scores describing student skills as these scores are estimated based on student responses to test items.
- Standard errors and confidence intervals have been used as the basis for performing comparative statistical tests. The standard error expresses the degree of uncertainty around the survey results associated with sampling and measurement errors. The standard error is used to construct a confidence interval, which indicates the probability that a given error range (given by the standard error) around the sample statistic includes the population value. The PISA survey results are statistically different if the confidence intervals do not overlap (see the sketch following this list). Furthermore, an additional t-test was conducted to confirm statistical difference.
- It is possible to compare changes in student performance over time in each PISA domain because a number of common test questions are used in each survey. However, the limited number of such common test items increases the chances of measurement error. To account for this, an extra error factor, known as the linking error, is introduced into the standard error. The standard errors with linking errors should be used whenever comparing performance across assessments (but not when comparing results across countries/economies or subpopulations within a particular assessment).
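The two preceding points can be illustrated with a short sketch. The code below, using hypothetical scores and standard errors, builds 95% confidence intervals, checks whether they overlap, and combines a linking error with the cycle standard errors in quadrature when comparing two assessment cycles. It is an illustrative simplification, not the OECD's published procedure; consult the PISA technical reports for the exact methods.

```python
import math

def confidence_interval(mean: float, se: float, z: float = 1.96) -> tuple:
    """95% confidence interval around a sample estimate, given its standard error."""
    return (mean - z * se, mean + z * se)

def intervals_overlap(ci_a: tuple, ci_b: tuple) -> bool:
    """True if two confidence intervals overlap (i.e., the difference is not significant)."""
    return ci_a[0] <= ci_b[1] and ci_b[0] <= ci_a[1]

def se_with_linking_error(se_cycle1: float, se_cycle2: float, linking_error: float) -> float:
    """Standard error of a between-cycle difference, with the linking error combined
    in quadrature with the sampling/measurement errors (an assumed combination rule;
    see the PISA technical reports for the exact procedure)."""
    return math.sqrt(se_cycle1 ** 2 + se_cycle2 ** 2 + linking_error ** 2)

# Hypothetical within-cycle comparison: country A at 527 (SE 2.1), country B at 496 (SE 2.4).
ci_a = confidence_interval(527, 2.1)
ci_b = confidence_interval(496, 2.4)
print("statistically different" if not intervals_overlap(ci_a, ci_b) else "not statistically different")

# Hypothetical across-cycle comparison: cycle SEs of 2.0 and 2.3 points, linking error of 4.5 points.
print(round(se_with_linking_error(2.0, 2.3, 4.5), 2))
```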
Limitations
- Looking at the relative performance of different groups of students on the same or comparable assessments at different time periods shows whether the level of achievement is changing. Obviously, scores on an assessment alone cannot be used to evaluate a school system, because many factors combine to produce the average scores. Nonetheless, these assessments are one of the indicators of overall performance.
- Since data are compared for only two points in time, it is not possible to assess to what extent the observed differences are indicative of longer term trends.
- Statistical significance is determined by mathematical formulas and considers issues such as sampling. Whether a difference in results has implications for education is a matter of interpretation; for example, a statistically significant difference may be quite small and have little effect. There are also situations in which a difference that is perceived to have educational significance may not, in fact, have statistical significance.
Data sources
- Council of Ministers of Education, Canada; Employment and Social Development Canada; and Statistics Canada.
- Organisation for Economic Co-operation and Development. Programme for International Student Assessment (PISA).
Pan-Canadian Assessment Program (PCAP)
Indicator C4 reports on student achievement in three core learning areas (also referred to as domains): mathematics, science, and reading. It also examines the process of mathematics problem-solving. This sub-indicator examines performance by presenting results from the Pan-Canadian Assessment Program (PCAP), an initiative of the provinces and territories conducted through the Council of Ministers of Education, Canada (CMEC).
Detailed information on the performance of Grade 8 students in Canada in the major PCAP domain of science, assessed in 2013, is presented. Mean scores and the distribution of students by performance levels for the overall science domain, as well as mean scores for the science sub-domains, are also outlined. The performance of students in science and reading in 2013 is also shown, in addition to performance over time for science. Results are presented by the language of the school system.
Concepts and definitions
- The Pan-Canadian Assessment Program (PCAP) is a cyclical program of assessments that measures the achievement of Grade 8 students in Canada. It is conducted by the Council of Ministers of Education, Canada (CMEC). PCAP provides a detailed look at each of three core learning areas, or domains, in the years when it is a major focus of the assessment (reading in 2007, mathematics in 2010, and science in 2013), along with a minor focus on the other two domains. PCAP, which was first conducted in 2007, has replaced CMEC’s School Achievement Indicators Program (SAIP). PCAP was designed to determine whether students across Canada reach similar levels of performance in these core learning areas at about the same age, and to complement existing assessments in each jurisdiction.
- PCAP 2013 focused on science literacy, defined through three competencies (science inquiry, problem solving, and scientific reasoning); four subdomains (nature of science, life science, physical science, and Earth science); as well as attitudes about science and its role in society. Science performance levels were developed in consultation with independent experts in education and assessment and align broadly with internationally accepted practice. Provinces also worked to ensure that the unique qualities of Canada's education systems were taken into account.
- Mathematics: Mathematics is assessed as a conceptual tool that students can use to increase their capacity to calculate, describe, and solve problems.
- Reading is considered a dynamic, interactive process during which the reader constructs meaning from texts. The process of reading involves the interaction of reader, text, purpose and context, before, during, and after reading.
- While all three of the PCAP domains are tested in each assessment, each cycle places a major focus on only one domain, meaning it will include more assessment items than the other two. PCAP has been, and will be, administered to students as follows:
| Domain focus | 2010 | 2013 | 2016 | 2019 | 2022 |
|---|---|---|---|---|---|
| Major | Mathematics | Science | Reading | Mathematics | Science |
| Minor | Science | Reading | Mathematics | Science | Reading |
| Minor | Reading | Mathematics | Science | Reading | Mathematics |
Methodology
- Approximately 32,000 Grade 8 students from Canada’s 10 provinces and Yukon participated in PCAP 2010. The Northwest Territories and Nunavut have not yet participated in the PCAP assessments.
- When PCAP began in 2007, its target population was all 13-year-old students. In 2010, the target was modified to capture all Grade 8 students, regardless of age. This simplified the selection of students and reduced disruptions to the schools and in the classrooms. In 2007, 13-year-old students accounted for most of the PCAP sample, although these students may not have all been in Grade 8 at the time.
- The assessment adopted the following stratified sampling process in the selection of participants:
- the random selection of schools from each jurisdiction, drawn from a complete list of publicly funded schools provided by the jurisdiction;
- the random selection of Grade 8 classes, drawn from a list of all eligible Grade 8 classes within the school;
- the selection of all students enrolled in the selected Grade 8 class;
- when intact Grade 8 classes could not be selected, a random selection of Grade 8 students.
- The PCAP participation rate was over 85% of sampled students. The school determined whether or not a student could be exempted from participating in the PCAP assessment. Students were excused from the assessments if they had, for example: functional disabilities; intellectual disabilities; socio-emotional conditions; or limited language proficiency in the target language of the assessment.
- The PCAP structure was designed to align with that used for the Programme for International Student Assessment (PISA), which is conducted by the Organisation for Economic Co-operation and Development (OECD).
- PCAP 2013 tested approximately 32,000 students in English, and about 8,000 students in French. The results for students in the French school system were reported as French language, and the results for students in the English school system were reported as English language. The overall results for a jurisdiction represent those for students in both systems. Results for French immersion students who wrote in French were calculated as part of the English results since these students are considered part of the English-language cohort. (Caution is advised when comparing achievement results based on assessment instruments that were prepared in two different languages. Despite extensive efforts to produce an equivalent test in both languages, each language has unique features that may make direct comparisons difficult.)
- Results for the major domains are available in an overall domain scale (which represents students’ overall performance across all the questions in the assessment for that domain), as well as on the sub-domains that make up each overall scale. As fewer items are tested as part of the minor domains, only combined or overall results are available from PCAP.
- When scores obtained from different populations and on different versions of a test are compared over time, a common way of reporting achievement scores that allows for direct comparisons is needed. One commonly used method converts the raw scores numerically to “standard scale scores.” For PCAP 2013, raw scores were converted to a scale on which the average for the Canadian population was set at 500, with a standard deviation of 100. From this conversion, the scores of two-thirds of all participating students fell within the range of 400 to 600 points, which represents a “statistically normal distribution” of scores (see the sketch following this list).
- Results for a major domain in PCAP can also be presented as the percentage of students who had different performance levels. Performance levels represent how well students were doing based on the cognitive demand and degree of difficulty of the test items. Cognitive demand is defined by the level of reasoning required by the student to correctly answer an item, from high demand to low demand; degree of difficulty is defined by a statistical determination of the collective performance of the students on the assessment. There were four levels of performance in the science component of PCAP 2013:
- Level 4 (scores of 655 and above)
- Level 3 (scores between 516 and 654)
- Level 2 (scores between 379 and 515)
- Level 1 (scores of 378 and below)
- Level 2 represents the expected level of performance for Grade 8 students, and Level 1, a level below that expected of students in their Grade 8 level group. Levels 3 and 4 represent higher levels of performance. These definitions of the expected levels of performance were established by a panel of assessment and education experts from across Canada, and were confirmed as reasonable given the actual student responses from the PCAP assessments.
- When comparing student performance among provinces and territories, or across population sub-groups, statistically significant differences must be considered. Standard errors and confidence intervals were used as the basis for performing comparative statistical tests. The standard error expresses the degree of uncertainty around the survey results associated with sampling and measurement errors. The standard error is used to construct a confidence interval. The confidence interval represents the range within which the score for the population is likely to fall, with 95% probability. It is calculated as a range of plus or minus about two standard errors around the estimated average score. The differences between estimated average scores are statistically significant if the confidence intervals do not overlap.
- This indicator compares the performance of students in mathematics on the 2013 PCAP assessment with the first major assessment of this domain in PCAP 2010. It is not possible to compare the results from any minor assessments that took place before the first major (full) assessment of a domain because the framework for the domain is not fully developed until the cycle in which it is assessed as a major domain. Consequently, the results measured as a minor domain beforehand are not comparable.
- In addition to the assessment of students’ knowledge and skills in mathematics, reading, and science, PCAP also administers accompanying contextual questionnaires to students, teachers, and schools.
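The scale conversion and performance levels described above can be summarized in a short sketch. The raw scores below are hypothetical, the linear standardization is an illustrative simplification of the operational PCAP scaling, and the cut scores simply restate the published 2013 science ranges.

```python
import statistics

def to_scale_scores(raw_scores: list) -> list:
    """Linearly convert raw scores to a scale with a mean of 500 and a standard deviation of 100."""
    mean = statistics.mean(raw_scores)
    sd = statistics.pstdev(raw_scores)
    return [500 + 100 * (x - mean) / sd for x in raw_scores]

def pcap_science_level(scale_score: float) -> int:
    """Map a PCAP 2013 science scale score to its performance level."""
    if scale_score >= 655:
        return 4
    if scale_score >= 516:
        return 3
    if scale_score >= 379:
        return 2   # expected level of performance for Grade 8 students
    return 1       # below the expected level

# Hypothetical raw scores converted to the 500/100 scale, then mapped to performance levels.
scaled = to_scale_scores([18, 25, 31, 36, 42])
print([pcap_science_level(s) for s in scaled])
```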
Limitations
- An examination of the relative performance of different groups of students on the same or comparable assessments at different time periods shows whether the level of achievement is changing. However, scores on an assessment alone cannot be used to evaluate a school system, because many factors combine to produce the average scores. Nonetheless, these assessments are one of the indicators of overall performance.
- Since data are compared for only two points in time, it is not possible to assess to what extent the observed differences are indicative of longer term trends.
- Statistical significance is determined by mathematical formulas and considers issues such as sampling. Whether a difference in results has implications for education is a matter of interpretation; for example, a statistically significant difference may be quite small and have little effect. There are also situations in which a difference that is perceived to have educational significance may not, in fact, have statistical significance.
Data source
- Pan-Canadian Assessment Program, PCAP-2013: Report on the Pan-Canadian Assessment of Science, Reading, and Mathematics, Council of Ministers of Education, Canada (CMEC), 2014.
C5 Information and communications technologies (ICT)
Indicator C5 reports on computer and software availability in schools, computer use among students at school, and student self-confidence in performing computer tasks. Information is presented for Canada, the provinces, and selected member countries of the Organisation for Economic Co-operation and Development (OECD) using results from the OECD’s 2009 Programme for International Student Assessment (PISA).
Concepts and definitions
- Information for this indicator is obtained through the 2009 Programme for International Student Assessment (PISA), which evaluates the skills and knowledge of 15-year-old students that are considered to be essential for full participation in modern economies, and sheds light on a range of factors that contribute to successful students, schools, and education systems. Information on computer and software availability in schools is obtained through the PISA school context questionnaire in which principals provided information on the availability of computers at their schools and whether they felt a lack of computers or software hindered instruction. Information on computer use among students at school and student self-assessment of their confidence in performing computer tasks was obtained from the optional ICT familiarity component of the PISA student context questionnaire.
- The number of computers per student is often used as a proxy to indicate the technology available to students. It refers to the total number of computers available for educational purposes to students in schools in the national modal grade for 15-year-olds (Grade 10 or equivalent in Canada) divided by the total number of students in the modal grade.
- A shortage or inadequacy of computers or software for instruction was explored in the PISA 2009 school context questionnaire as another way of looking at student access to ICT resources. In this questionnaire, principals reported on their perceptions of whether their school’s capacity to provide instruction was hindered by a shortage of computers or computer software for instruction. Schools are considered to have a shortage or inadequacy of computers or software for instruction when school principals reported that this situation was hindering instruction to “some extent” or “a lot”. The principals’ subjective perceptions of shortages should be interpreted with some caution, because cultural factors and expectations, along with pedagogical practices, may influence the degree to which principals consider shortages a problem. Perceptions of inadequacy may be related to higher expectations among principals for ICT-based instruction rather than fewer computers available for learning.
- The Index of self-confidence in information and communications technologies high-level tasks was constructed to summarize students’ self-confidence in performing certain computer tasks. This index reflects a composite score based on students’ indications of the extent to which they could perform the following five different types of technical tasks: edit digital photographs or other graphic images; create a database; use a spreadsheet to plot a graph; create a presentation; create a multimedia presentation. For each task there were four possible responses: I can do this very well by myself; I can do this with help from someone; I know what this means but I cannot do it; I don't know what this means. This index was constructed so that the average OECD student would have an index value of zero, and about two-thirds of the OECD student population would be between -1 and 1 (a sketch of this kind of standardization follows this list). For this index, a negative score indicates a level of confidence that is lower than the average calculated for students across OECD countries. Students' subjective judgments of task competency may vary across jurisdictions. Each index is self-contained; that is, a jurisdiction’s score on one index cannot be directly compared with its score on another.
- The Index of computer use at school was constructed to summarize how frequently students perform different types of ICT activities at school. This index reflects a composite score based on students’ responses when asked how frequently they perform the following nine activities: chat on-line; use e-mail; browse the Internet for schoolwork; download, upload or browse material from the school Web site; post work on the school’s Web site; play simulations; practice and do drills (e.g., for mathematics or learning a foreign language); do individual homework; and do group work and communicate with other students. For each activity there were four possible responses: never or hardly ever; once or twice a month; once or twice a week; every day or almost every day. This index was constructed so that the average OECD student would have an index value of zero, and about two-thirds of the OECD student population would be between -1 and 1. Index points above zero indicate a frequency of use above the OECD average. Each index is self-contained; that is, a jurisdiction’s score on one index cannot be directly compared with its score on another.
- The modal grade attended by 15-year-olds is the grade attended by most 15-year-olds in the participating country or economy. In Canada, most 15-year-olds attend Grade 10 (or equivalent).
- Students’ socio-economic status is measured by the PISA Index of Economic, Social and Cultural Status (ESCS). It is important to emphasize that this indicator presents information organized according to the socio-economic status of the student, not of the school attended by the student.
- The PISA Index of Economic, Social and Cultural Status (ESCS) provides a measure of the socio-economic status of the student. This index was constructed based on information provided by the representative sample of 15-year-old students who participated in the PISA student background questionnaire, in which information on students’ backgrounds was obtained from their answers to a 30-minute questionnaire that covered topics such as educational background, family and home situation, reading activities, and school characteristics. The PISA ESCS index was derived from the following variables: the international socio-economic index of occupational status of the father or mother, whichever is higher; the level of education of the father or mother, whichever is higher, converted into years of schooling; and the index of home possessions, obtained by asking students whether they had a desk at which they studied at home, a room of their own, a quiet place to study, a computer to use for school work, educational software, a link to the Internet, their own calculator, classic literature, books of poetry, works of art (e.g., paintings), books to help them with their school work, a dictionary, a dishwasher, a DVD player, three other country-specific items, and the number of cellular phones, televisions, computers, cars and bathrooms at home. The rationale for choosing these variables is that socio-economic background is usually seen as being determined by occupational status, education, and wealth. As no direct measure of parental income or wealth was available from PISA, information on access to household items was used as a proxy as students would have knowledge of these items within the home. These questions were selected to construct the indices based on theoretical considerations and previous research. Structural equation modeling was used to validate the indices.
- Greater values on the Index of Economic, Social and Cultural Status (ESCS) represent a more advantaged social background, while smaller values represent a less advantaged social background. A negative value indicates that the socio-economic status is below the OECD mean. The index is divided into quarters based on students’ values on the ESCS index: students in the bottom quarter have the lowest ESCS values, and students in the top quarter have the highest.
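As a minimal sketch of the standardization and quartile grouping described above, assuming simple composite scores per student (hypothetical values; the operational PISA indices are derived with more elaborate scaling, so this is illustrative only):

```python
import statistics

def standardize_to_oecd(scores: list, oecd_mean: float, oecd_sd: float) -> list:
    """Express composite scores as deviations from the OECD student average,
    in OECD standard-deviation units (an index value of 0 = the OECD average)."""
    return [(x - oecd_mean) / oecd_sd for x in scores]

def escs_quarter(value: float, escs_values: list) -> int:
    """Return the quarter (1 = bottom, 4 = top) of an ESCS value within a distribution."""
    q1, q2, q3 = statistics.quantiles(escs_values, n=4)
    if value <= q1:
        return 1
    if value <= q2:
        return 2
    if value <= q3:
        return 3
    return 4

# Hypothetical composite ICT-confidence scores, with a hypothetical OECD mean and standard deviation:
print([round(v, 2) for v in standardize_to_oecd([12, 15, 9, 18, 14], oecd_mean=13.5, oecd_sd=3.0)])

# Hypothetical ESCS values for a small group of students:
values = [-1.2, -0.6, -0.1, 0.3, 0.8, 1.4]
print([escs_quarter(v, values) for v in values])
```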
Methodology
- The target population for PISA 2009 comprised 15-year-olds who were attending schools in one of Canada’s 10 provinces; the territories have not participated in PISA to date. Students of schools located on Indian reserves were excluded, as were students of schools for those with severe learning disabilities, schools for blind and deaf students, and students who were being home-schooled.
- In 2009, PISA was administered in 65 countries and economies, including Canada and all other OECD member countries. Between 5,000 and 10,000 students aged 15 from at least 150 schools were typically tested in each country. In Canada, approximately 23,000 students from about 1,000 schools participated in the 10 provinces. This large Canadian sample was needed to produce reliable estimates for each province.
- The information for this indicator is obtained from certain responses to three contextual questionnaires that were administered along with the main PISA skills assessment: a student background questionnaire that provided information about students and their homes; a questionnaire on familiarity with ICT that was administered to students; and a questionnaire administered to school principals. The questionnaire framework that is the basis of the context questionnaires and the questionnaires themselves are found in PISA 2009 Assessment Framework: Key Competencies in Reading, Mathematics and Science (OECD 2010).
- All member countries of the OECD participated in the PISA 2009 main assessment (including the student and school background questionnaires that are a main source of data for this indicator), and 29 member countries chose to administer the optional ICT familiarity questionnaire. This indicator presents information for a subset of these participating countries; namely, the G-8 countries (Canada, France, Germany, Italy, Japan, the Russian Federation, the United Kingdom, and the United States) and nine selected OECD countries that were deemed to be among Canada’s social and economic peers and therefore of key comparative interest (Australia, Denmark, Finland, Ireland, Korea, New Zealand, Norway, Sweden, and Switzerland).
- The statistics in this indicator represent estimates based on samples of students, rather than values obtained from the entire population of students in each country. This distinction is important as it cannot be said with certainty that a sample estimate has the same value as the population parameters that would have been obtained had all 15-year-old students been assessed. Consequently, it is important to measure the degree of uncertainty of the estimates. In PISA, each estimate has an associated degree of uncertainty, which is expressed through the standard error. In turn, the standard error can be used to construct a confidence interval around the estimate (calculated as the estimate +/- 1.96 x standard error), which provides a way to make inferences about the population parameters in a manner that reflects the uncertainty associated with the sample estimates. Using this confidence interval, it can be inferred that the population parameter would lie within the confidence interval in 95 out of 100 replications of the measurement, using different samples randomly drawn from the same population (a brief worked illustration follows this list).
- When comparing sample estimates among countries, provinces and territories, or population subgroups, statistically significant differences must be considered in order to determine if the true population parameters are likely different from each other. Standard errors and confidence intervals are used as the basis for performing comparative statistical tests. Results are statistically different if the confidence intervals do not overlap.
- In “Percentage of 15-year-old students in schools whose principals reported shortage or inadequacy of computer hardware or software for instruction, by students’ socio-economic status, Canada, provinces, G-8 and selected OECD countries, 2009,” differences in the percentage of students whose principals reported a shortage or inadequacy of computers or software between the top and bottom quarters of the PISA Index of Economic, Social, and Cultural Status were tested for statistical significance at Statistics Canada’s Centre for Education Statistics. The testing method involved calculating the confidence intervals surrounding the percentage of students whose principals reported computer or software inadequacies for both the top and bottom quarters of the index. If these confidence intervals did not overlap, then the difference was determined to be statistically significant at the 95% confidence level.
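As a brief worked illustration of the confidence intervals described above, using hypothetical numbers: an estimated proportion of 0.12 with a standard error of 0.03 gives a 95% confidence interval of 0.12 +/- 1.96 x 0.03, or roughly 0.06 to 0.18; two such intervals are judged statistically different only if they do not overlap.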
Limitations
- Some data previously presented in Indicator C5 of Pan-Canadian Education Indicators Program (PCEIP) are not available from PISA 2009 as some of the questions were not repeated, or the information is not comparable with that used in past iterations of the PISA assessment.
- The PISA background questionnaires that explored ICT topics were not designed to assess the quality of ICT use at school, nor the integration of ICT in pedagogy and its impact on students’ cognitive skills.
- The territories have not participated in PISA to date.
Data sources
- Statistics Canada, Programme for International Student Assessment (PISA), 2009 database; Organisation for Economic Co-operation and Development (OECD), 2009 PISA database.