Statistics Canada
Chapter C: Elementary-secondary education


C1 Early years and school readiness
C2 Elementary-secondary school: enrolments and educators
C3 Secondary school graduation
C4 Student achievement
C5 Information and communications technologies (ICT)

C1 Early years and school readiness

Tables C.1.1 and C.1.2

Indicator C1 assesses the early years and school readiness of 4- and 5-year-old children by examining their health status (including any health limitations), participation in activities, exposure to reading and reading materials (Table C.1.1), and language scores/vocabulary skills (Table C.1.2).

Concepts and definitions

  • The child’s general health was classified as excellent, very good, good, fair, or poor; the categories were read to the adult respondents who answered on behalf of their children in the 2004/2005 National Longitudinal Survey of Children and Youth (NLSCY).
  • Before being asked about any health limitations that affected the child’s daily activities, the adult respondents were told that a “difficulty, condition or health problem is one that has lasted or is expected to last six months or more.” This indicator considered the following:  long-term bronchitis, long-term allergies, asthma, pain/discomfort, difficulty walking, difficulty being understood when speaking, difficulty hearing, and difficulty seeing.
  • The percentage of children with asthma reflects those aged 4 or 5 who had received a diagnosis of asthma from a health care professional at some point, not just those who had had an asthma attack in the 12 months before the survey. Pain or discomfort reflects “no” responses to the question asking whether the child is “usually free of pain or discomfort.”
  • Regularly scheduled activities outside of school hours refers to weekly participation in:  sports that involved a coach or instructor; dance, gymnastics or martial arts; music, art or other non-sport activities; and clubs, groups or community programs.
  • Exposure to books reflects some of the information obtained from questions about literacy, including how often a parent read aloud to the child or listened to the child read (or try to read). Respondents were also asked how often the child looked at books, magazines, comics, etc. on his/her own, or tried to read on his/her own.
  • The Peabody Picture Vocabulary Test-Revised (PPVT-R) measures children’s receptive vocabulary, which is the vocabulary that is understood by the child when he or she hears the words spoken. It is a normed test; that is, a child’s performance is scored relative to that of an overall population of children at the same age level as the child. A wide range of scores represents an average level of ability, taking the age of the child into consideration. Scores below the lower threshold of this average range reflect a delayed receptive vocabulary, and scores above the higher threshold demonstrate an advanced receptive vocabulary.
Methodology

  • This indicator is based on nationally representative data for 4- and 5-year-olds from cycle 6 of the NLSCY, conducted in 2004/2005. Data collection covered the reference period between September 27, 2004 and June 24, 2005. All samples were drawn from the Labour Force Survey’s sample of respondent households.
  • The information presented was obtained from the NLSCY child component; specifically, the questions on child health, activities (sports, lessons, clubs, etc.) and literacy. Responses were provided by the person most knowledgeable (PMK) about the child, which is usually the mother.
  • The PPVT-R is scaled to an average of 100. The range of average receptive vocabulary measured by the PPVT-R covers scores from 85 to 115. A score below 85 is considered to indicate delayed receptive vocabulary; a score above 115, advanced. Scoring is adjusted to reflect the different abilities of 4- and 5-year-olds. English and French scores are normed separately and are not directly comparable.
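The banding described above is a simple threshold rule; a minimal sketch, assuming only the 85/115 cut-points on the PPVT-R standard scale given in the text:

```python
def classify_ppvt_r(score: float) -> str:
    """Band a PPVT-R standard score (scale average 100) into the
    three receptive-vocabulary categories described in the text.
    Thresholds of 85 and 115 are taken directly from the indicator."""
    if score < 85:
        return "delayed"
    if score > 115:
        return "advanced"
    return "average"

# Example: a score of 92 falls within the average range.
print(classify_ppvt_r(92))   # average
```

Because English and French scores are normed separately, such a banding would be applied within each language group, never across them.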
Limitations

  • The NLSCY relies on the perceptions of the adult most familiar with the child to report on the child’s general health and development, and such reports may not always be entirely objective or accurate.
  • Some possible sources of non-sampling errors in the NLSCY are:  response errors due to sensitive questions, poor memory, translated questionnaires, approximate answers, and conditioning bias; non-response errors; and coverage errors.
Data source

  • National Longitudinal Survey of Children and Youth, cycle 6, 2004/2005, Statistics Canada.
C2 Elementary-secondary school: enrolments and educators

Tables C.2.1 through C.2.7

Information on enrolment in public schools at the elementary-secondary level (Table C.2.1), as well as on the number of educators (Table C.2.2), is captured in Indicator C2. A student–educator ratio, which measures the total human resources available to students, is also presented (Table C.2.3), along with some characteristics of the educator workforce (Tables C.2.4 through C.2.7).

Concepts and definitions

  • Public schools are those established and operated by local school authorities pursuant to the public schools legislation of the province or territory. Also included in this category are Protestant and Roman Catholic separate schools and schools operated in Canada by National Defence within the framework of the public school systems.
  • All data in this indicator are for public elementary and secondary schools only and do not include private schools, federal schools and schools for the visually and hearing impaired. Schools are classified as elementary if they provide Grade 6 and under or a majority of elementary grades, and secondary if they offer Grade 7 and over or a majority of secondary grades. Federal schools include schools administered directly by the federal government, overseas schools operated by the Department of National Defence for dependants of Canadian Forces personnel, and schools operated by Indian and Northern Affairs Canada or by band councils.
  • Full-time equivalent (FTE) enrolments represent full-time elementary-secondary enrolments on September 30th (or as close as possible thereafter) of the school year, plus the sum of part-time enrolments according to their percentage of a full-time enrolment allocation (determined by the province or territory) (Table C.2.1).
  • Educators include all employees in the public school system (school-based or school district-based) who are required to hold teaching certification as a condition of their employment. This definition generally includes teaching staff, principals, vice-principals and professional non-teaching staff such as pedagogical consultants, guidance counsellors and special education teachers. It covers all educators in regular public schools, in provincial reformatory or custodial schools, and in other programs recognized and funded by a province or territory, such as correspondence or distance programs (private or independent schools financed by federal departments such as the Department of National Defence and the Department of Indian and Northern Affairs are excluded). Substitute/supply teachers, temporary replacement teachers, teachers on leave, student assistants and teaching assistants are excluded. All teachers in regular programs for youth, in adult upgrading programs and in vocational programs for youth and adults are included in this definition.
  • Full-time equivalent (FTE) educators means the number of full-time educators on September 30th (or as close as possible thereafter) of the school year, plus the sum of part-time educators according to their percentage of a full-time employment allocation (determined by the province or territory) (Table C.2.2). For example, if a normal full-time work allocation is 10 months per year, an educator who works for 6 months of the year would be counted as 6/10 (0.6) of a full-time equivalent, or an employee who works part-time for 10 months at 60% of full-time would be 0.6 of an FTE. Full-time educators (headcount) (Table C.2.4) refers to the number of educators on September 30th (or as close as possible thereafter) of the school year who are responsible for providing services to the headcount enrolment students.
  • The labour force comprises the portion of the civilian, non-institutional population 15 years of age and over who form the pool of available workers in Canada. To be considered a member of the labour force, an individual must be working either full- or part-time or be unemployed but actively looking for work. The age distribution of the full-time employed labour force is presented in Table C.2.5.
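The FTE arithmetic in the educator examples above reduces to a single fraction of the full-time allocation; a small sketch, using the 10-month allocation figures quoted in the text:

```python
def educator_fte(units_worked: float, full_time_units: float) -> float:
    """An educator's full-time equivalent (FTE) is the fraction of
    the jurisdiction's full-time work allocation actually worked,
    e.g. months worked over months in a full-time year."""
    return units_worked / full_time_units

# An educator working 6 months of a 10-month full-time allocation:
print(educator_fte(6, 10))         # 0.6
# A part-timer working all 10 months at 60% of full-time:
print(educator_fte(10 * 0.6, 10))  # 0.6
```

The same fraction-of-allocation logic applies to FTE enrolments (a half-time kindergarten student counts as 0.5, a quarter-load student as 0.25).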
Methodology

  • The Elementary-Secondary Education Statistics Project (ESESP) is a national survey that enables Statistics Canada to provide information on enrolments (including minority and second language programs), graduates, educators and finance of Canadian elementary-secondary public educational institutions. Annually, the department or ministry of education in each jurisdiction sends data pertaining to enrolments, graduates, educators and finance of the public elementary-secondary schools under their jurisdictions to Statistics Canada. ESESP is a census of all provinces and territories. The frame used is the list of all provinces and territories. It is an annual survey; the reference period is September (or as close as possible thereafter) of the school year.
  • ESESP was first introduced in 2003, and it has replaced the following surveys: Elementary-Secondary School Enrolment Survey; Minority and Second Language Education; Secondary School Graduates Survey; and the Elementary-Secondary Education Staff Survey.
  • The full-time equivalent (FTE) enrolment rate represents the time fraction spent in the classroom and for which students are funded. If this fraction is not known, an estimate should be used. For example, for junior kindergarten and kindergarten students taking a half-time program and where a half-time program is being funded, the FTE enrolment would be the headcount enrolment divided by two (0.5). If a student is only taking one-quarter of the usual course load and is funded on that basis, the FTE enrolment would be the headcount enrolment divided by 4, which is 0.25.
  • The student–educator ratio (Table C.2.3) is calculated using full-time equivalent enrolment in Grades 1 to 12 (OAC in Ontario) and ungraded programs, plus pre-elementary full-time equivalent enrolment, divided by the full-time equivalent number of educators, both teaching and non-teaching.
  • The Labour Force Survey data used to compare the age distribution of the overall full-time employed labour force with that of the full-time educator workforce are based on a monthly average from September to August (Table C.2.5).
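The student–educator ratio defined above amounts to one division over aggregated FTE counts; a minimal sketch (the variable names and counts are illustrative, not survey values):

```python
def student_educator_ratio(fte_grades_1_12: float,
                           fte_pre_elementary: float,
                           fte_educators: float) -> float:
    """Student-educator ratio as defined for Table C.2.3:
    FTE enrolment in Grades 1 to 12 (and ungraded programs) plus
    pre-elementary FTE enrolment, divided by FTE educators,
    both teaching and non-teaching."""
    return (fte_grades_1_12 + fte_pre_elementary) / fte_educators

# Hypothetical jurisdiction: 95,000 graded + 5,000 pre-elementary
# FTE students, served by 6,250 FTE educators.
print(student_educator_ratio(95_000, 5_000, 6_250))  # 16.0
```

Note that because non-teaching educators sit in the denominator, this ratio is lower than (and not a proxy for) average class size.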
Interjurisdictional comparability

    • Until 2000/2001 in New Brunswick, Quebec and Manitoba, full-time equivalent enrolments (Table C.2.1) include those in adult programs and professional training under the authority of the school boards or districts. Certain jurisdictions include all students whether or not they are funded, while others include only funded students.
    • In Ontario, data for full-time equivalent enrolments (Table C.2.1) and full-time equivalent educators (Table C.2.2) exclude publicly funded hospital and provincial schools, care, treatment and correctional facilities.
    • Nunavut was created on April 1, 1999. Prior to that date, full-time equivalent enrolment/educator data for Nunavut were included with data for the Northwest Territories. This creates a break in series for the Northwest Territories in 1999/2000. As a result, the overall percentage change is calculated for the period 1999/2000 to 2004/2005 for the Northwest Territories and Nunavut. For all other jurisdictions, the overall percentage change is calculated for the 1997/1998-to-2004/2005 period.
    • Data on the number/distribution of full-time educators by sex and age group are not available for Canada because of a lack of data from Prince Edward Island, Nova Scotia, Manitoba, Yukon, the Northwest Territories and Nunavut. Prince Edward Island, Nova Scotia, Manitoba and Yukon report combined full-time and part-time educator data only. The Northwest Territories and Nunavut do not report headcount data for full-time or part-time educators (or combined), only data on full-time equivalent educators. Data for females, all ages, include a small number of cases for which age is not reported. Percentage distributions are based on educators for whom age is reported.
    • The following points apply to data for Saskatchewan: (1) Educators in provincially funded schools (including “associated independent” and “historic” high schools) are included, but those in “independent” “First Nations” schools and postsecondary sites are excluded. (2) The counts vary year by year partly because the number of “associated independent” schools that receive provincial funding through agreements with school boards has changed over time. (3) Educators in Lloydminster serve students who live in Alberta and in Saskatchewan, but all of these educators are included in Saskatchewan’s counts only (Tables C.2.2 and C.2.4). (4) Headcounts and FTEs include: classroom teachers, school administrators at the school level (but not at the higher level management) and pedagogical support.

Limitations

  • The student–educator ratio should not be taken as a measure of classroom size. Average classroom size depends not only on the number of teachers and students, but also on the hours of instructional time per week, the per-teacher hours worked, and the division of time between classroom instruction and other activities. The number of educators in this indicator includes both teaching and non-teaching educators (such as school principals, librarians, guidance counsellors, etc.).
  • Canada-level data for full-time educator headcounts, sex and age breakdowns, and part-time educator headcounts are unavailable because several jurisdictions do not report these data.
Data sources

  • Elementary-Secondary Education Statistics Project, 1997/1998 to 2004/2005, Statistics Canada.
  • Labour Force Survey, 2004/2005, Statistics Canada.
C3 Secondary school graduation

Table C.3.1

Indicator C3 provides information on trends in high school graduation rates. Overall rates, along with comparisons of graduation rates at typical and after-typical ages of graduation, are presented for the 1997/1998 and 2002/2003 academic years (Table C.3.1). High school graduation rates have historically been used as a basic indicator of educational outcomes.

Concepts and definitions

  • For this indicator, graduates are students who obtain a secondary school graduation certificate. This definition does not include those who complete high school outside the regular secondary school systems. Data on graduations from some secondary programs are not uniformly available across jurisdictions, and high school equivalency (General Educational Development or GED), adult basic upgrading and education, and education from adult day school, which take place outside regular secondary school programs, are, in most instances, not included.
  • Typical age at graduation is the age at which an individual completes high school if he/she starts at the prescribed age and experiences no repetition or interruption in schooling. The typical age of graduation is 18 for all jurisdictions except Quebec, where it is 17.
  • High school graduate refers to completion of grade 12 in all jurisdictions except Quebec (Secondary V). High school graduate statistics are presented for the 1997/1998 and 2002/2003 academic years. In Quebec, graduates from adult and trade/vocational programs are included.
Methodology

  • Data are from the Secondary School Graduates Survey (SSGS), which was discontinued after the 2002/2003 reference year. The SSGS was an administrative survey that collected aggregate counts from provincial/territorial departments/ministries of education who originally collected the data for their own purposes. Each October, the provinces and territories were asked to send graduate data by age and sex. The reference period was the academic year (September to June), including “late graduates” from the fall. Response was mandatory.
  • Graduation rates are based on administrative data and are calculated by Statistics Canada based on data reported by ministries/departments of education/training, together with population estimates produced by the Demography Division at Statistics Canada. The data reported are guided by the following standards, and the rates for individual jurisdictions are considered comparable.
  • Overall graduation rate = (sum of graduates of all ages) / (sum of the population at the typical age of graduation).

    Typical-age graduation rate = (sum of graduates whose age is equal to or less than the typical age of graduation) / (sum of the population at the typical age of graduation).

    After-typical-age graduation rate = (sum of graduates whose age is greater than the typical age of graduation) / (sum of the population at the typical age of graduation).

    It should be noted that the administrative data pertain to graduations from the regular school system only, and not to “second chance” programs. Hence, these rates measure after-typical-age graduations in the regular school system only, and reveal nothing about the level or trend of after-typical-age graduations in the “second chance” system.
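The three rate definitions can be expressed directly in code; a minimal sketch using hypothetical counts (note that all three rates share the population at the typical age as denominator, so the overall rate can in principle exceed 100):

```python
def graduation_rates(grads_by_age: dict[int, int],
                     typical_age: int,
                     population_at_typical_age: int) -> dict[str, float]:
    """Compute the overall, typical-age and after-typical-age
    graduation rates (as percentages) per the definitions above."""
    total = sum(grads_by_age.values())
    typical = sum(n for age, n in grads_by_age.items() if age <= typical_age)
    after = total - typical
    pop = population_at_typical_age
    return {
        "overall": 100 * total / pop,
        "typical_age": 100 * typical / pop,
        "after_typical_age": 100 * after / pop,
    }

# Hypothetical jurisdiction with 10,000 people at the typical age of 18:
rates = graduation_rates({17: 500, 18: 6_500, 19: 800, 20: 200},
                         typical_age=18,
                         population_at_typical_age=10_000)
print(rates)  # {'overall': 80.0, 'typical_age': 70.0, 'after_typical_age': 10.0}
```

The overall rate is always the sum of the typical-age and after-typical-age rates.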

Interjurisdictional comparability

  • Both in-migration and out-migration may also affect graduation rates. As well, given that some jurisdictions use a different methodology than PCEIP, the provincial secondary school graduation rate figures presented in the 2007 PCEIP Report may differ somewhat from those reported by the jurisdictions themselves.
  • Data on high school graduation rates for 2002/2003 are available for all provinces and territories except Ontario. Graduates in Ontario generally represent about 37% of all graduates in Canada. Because of the elimination of Grade 13 (OAC), two cohorts graduated in 2002/2003. These cohorts are not reflected in the Canadian average.
  • All rates for Canada exclude Ontario, as well as Quebec, where secondary graduations include graduates from adult and trade/vocational programs.
Limitations

  • When the SSGS was discontinued after the 2002/2003 reference year, the Elementary-Secondary Education Statistics Project (ESESP) took over data collection for high school graduation counts. However, data for subsequent reference years were not available for the 2007 PCEIP series of indicators because, at the time of the development of indicators, the jurisdictions had yet to agree upon the methodology for providing these counts. An agreement was reached in June 2007, enabling an update in the 2009 PCEIP Report.
  • Administrative data tend to underestimate the true graduation rate because people who complete high school outside the regular secondary school systems are not included. Data on graduations from some secondary programs are not uniformly available across jurisdictions, and high school equivalency (General Educational Development or GED), adult basic upgrading and education, and graduation from adult day school, which all take place outside regular secondary school programs are, in most instances, not included. Administrative data refer only to those enrolled in the school system of the particular jurisdiction.
  • The graduation counts are based on all “youth” graduates from regular programs. The exact definition of a “graduate” is not easily determined. This is partly attributable to a number of special equivalency programs offered by the provincial and territorial departments or ministries of education and the fact that there is no clear distinction between “youth” and “adult” graduates. Furthermore, school graduations are not always compiled by provincial/territorial departments/ministries of education; in some cases, data are not available.
  • From 1997/1998 to 1999/2000 in Newfoundland and Labrador, high school graduation was based on school results only; there were no provincial examinations.
Data source

  • Secondary School Graduates Survey, 1997/1998 and 2002/2003, Statistics Canada.
C4 Student achievement

Programme for International Student Assessment (PISA)

Tables C.4.1 through C.4.5

Indicator C4 reports on student achievement in four key areas—reading, writing, mathematics, and science—and looks at changes in results over time. Performance was examined using results from two assessment programs: the Programme for International Student Assessment (PISA), an international program of the member countries of the Organisation for Economic Co-operation and Development (OECD), and the School Achievement Indicators Program (SAIP), an initiative of the provinces and territories conducted through the Council of Ministers of Education, Canada (CMEC).

This sub-indicator presents detailed information on the performance of 15-year-old students in Canada on the major PISA domains of mathematics (Table C.4.1) and reading (Table C.4.2) by looking at average scores and the distribution of students by proficiency levels. It also compares performance in mathematics (Table C.4.3), reading (Table C.4.4) and science (Table C.4.5) between 2000 and 2003.

Concepts and definitions

  • The Programme for International Student Assessment (PISA) is a collaborative effort among OECD member countries to regularly assess youth outcomes, using common international tests, for three domains: mathematical literacy, reading literacy, and scientific literacy. In addition, PISA 2003 measured problem-solving skills (which are not reported on in PCEIP 2007), as a one-time domain. PISA defines reading, mathematics, and science not only in terms of mastery of the school curriculum, but also in terms of the knowledge and skills needed for full participation in society. PISA uses the term “literacy” to reflect the practical, or applied, aspects of learning.
  • Mathematical literacy (also referred to as “mathematics”): An individual’s capacity to identify and understand the role that mathematics plays in the world, to make well-founded judgments and to use and engage with mathematics in ways that meet the needs of that individual’s life as a constructive, concerned and reflective citizen.

    Reading literacy (also referred to as “reading”): An individual’s capacity to understand, use and reflect on written texts, in order to achieve one’s goals, to develop one’s knowledge and potential and to participate in society.

    Scientific literacy (also referred to as “science”): An individual’s capacity to use scientific knowledge, to identify questions and to draw evidence-based conclusions in order to understand and help make decisions about the natural world and the changes made to it through human activity.

  • Information on two of the four mathematics sub-domains is presented for this indicator:
  • Shape and space relates to spatial and geometric phenomena and relationships, drawing on the discipline of geometry. It requires looking for similarities and differences when analyzing the components of shapes, recognizing shapes in different representations and different dimensions, as well as understanding the properties of objects and their relative positions.

    Change and relationships involves mathematical manifestations of change as well as functional relationships and dependency among variables. It relates most closely to algebra. Mathematical relationships often take the shape of equations or inequalities, but relationships of a more general nature (e.g., equivalence, divisibility, inclusion) are also relevant. Relationships are given a variety of different representations, including symbolic, algebraic, graphical, tabular, and geometrical representations. Since different representations may serve different purposes and have different properties, translation between representations is often of key importance in dealing with situations and tasks.

Methodology

  • PISA’s target population comprises students who are 15 years of age and attending school in one of Canada’s 10 provinces; the territories have not participated in PISA to date. The 2003 PISA assessment was administered in schools, during regular school hours, in April and May 2003. The 2000 assessment was administered in April and May of that year. Students of schools located on Indian reserves were excluded, as were students of schools for those with severe learning disabilities, schools for blind and deaf students, and students who were being home-schooled.
  • While all three of the PISA domains are tested in each assessment, only one forms the major focus (or major domain) in each cycle: reading in 2000, mathematics in 2003, and science in 2006. Results for the science assessment are not reported in PCEIP 2007 as they had not been released at the time the publication was being prepared. Information on these results can be found at www.cmec.ca.
  • Results for the major domains are available on a combined domain scale (which represents students’ overall performance in that domain), as well as on the sub-domains that make up each overall scale. For the PCEIP 2007 report, it was decided that the focus would be on the overall performance of students for reading 2000 and mathematics 2003 (in other words, performance on the sub-domains for these two domains is not reported). As fewer items are tested as part of the minor domains, only single measures are available for them.
  • In PISA, student performance is expressed as a number of points on a scale constructed so that the average score for the major domains for students in all participating countries was 500 and its standard deviation was 100. This means that about two-thirds of the students scored between 400 and 600. The OECD average for the combined mathematics score was established using weighted data so that each OECD country contributed equally. As the anchoring of the scale was done for the combination of the four mathematics sub-domain scales, the mean and standard deviation for the sub-domain scores differ from 500 and 100 score points.
  • In PISA 2000, five levels were used in reporting reading achievement, to identify the most difficult test items a student could answer; therefore, a student at one level could be assumed to have the ability to answer questions at all lower levels. To help in interpretation, these levels were linked to specific score ranges on the original scale. The five levels are: Level 1 (score 335 to 407); Level 2 (408 to 480); Level 3 (481 to 552); Level 4 (553 to 626); and Level 5 (above 626).
  • Students performing below Level 1 (total reading score below 335) are not able to routinely show the most basic type of knowledge and skills that PISA seeks to measure. Such students have serious difficulties in using reading literacy as a tool to advance their knowledge and skills in other areas. Placement at this level does not mean that these students have no literacy skills. Most of these students are able to correctly complete some of the PISA items. Their pattern of responses to the assessment is such that they would be expected to solve less than half of the tasks from a test composed of only Level 1 items.
  • Six levels were used to assess proficiency in mathematics: Level 1 (score 359 to 420); Level 2 (421 to 482); Level 3 (483 to 544); Level 4 (545 to 606); Level 5 (607 to 668); and Level 6 (above 668).
  • Students performing below Level 1 (mathematics score below 359) are not able to routinely show the most basic type of knowledge and skills that PISA seeks to measure. Such students have serious difficulties in using mathematical literacy as a tool to advance their knowledge and skills in other areas. Placement at this level does not mean that these students have no mathematics skills. Most of these students are able to correctly complete some of the PISA items. Their pattern of responses to the assessment is such that they would be expected to solve less than half of the tasks from a test composed of only Level 1 items.
  • The 2003 confidence interval includes a linking error associated with the uncertainty that results from making comparisons with PISA 2000. Please refer to Annex 8 of the OECD (2004), Learning for Tomorrow’s World—First results from PISA 2003, for an explanation of the methods used to establish the link between PISA 2000 and PISA 2003.
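The score-to-level mappings above are simple range lookups; a sketch using the cut-points quoted in the text, with the lower bound of each level stored and searched with `bisect`:

```python
import bisect

# Lower bound of each proficiency level, from the text:
# reading Levels 1-5 (Level 5 is "above 626"),
# mathematics Levels 1-6 (Level 6 is "above 668").
READING_CUTS = [335, 408, 481, 553, 627]
MATH_CUTS = [359, 421, 483, 545, 607, 669]

def proficiency_level(score: float, cuts: list[int]) -> int:
    """Return the PISA proficiency level for a score;
    0 means 'below Level 1'."""
    return bisect.bisect_right(cuts, score)

print(proficiency_level(500, READING_CUTS))  # 3 (reading Level 3: 481-552)
print(proficiency_level(350, MATH_CUTS))     # 0 (below mathematics Level 1)
```

Because the levels are cumulative, a student placed at a given level is assumed able to answer items at all lower levels as well.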
Limitations

  • Looking at the relative performance of different groups of students on the same or comparable assessments at different time periods shows whether the level of achievement is changing. Obviously, scores on an assessment alone cannot be used to evaluate a school system, because many factors combine to produce the average scores. Nonetheless, these assessments are one of the indicators of overall performance.
  • The average scores in the PISA assessments were computed from the scores of random samples of students from each country and not from the population of students in each country. Consequently, it cannot be said with certainty that a sample average has the same value as a population average that would have been obtained had all 15-year-old students been assessed. Additionally, a degree of error is associated with the scores describing student skills as these scores are estimated based on student responses to test items. When comparing scores among countries, provinces or population subgroups, the degree of error in each average should be considered in order to determine if averages are different from each other. Standard errors and confidence intervals may be used as the basis for performing these comparative statistical tests. The standard error can be used to construct a confidence interval, which provides a means of making inferences about the population means and proportions in a manner that reflects the uncertainty associated with sample estimates. A 95% confidence interval is used in this report and represents a range of plus or minus about two standard errors around the sample average.
  • This indicator compares the performance of students in Canada in the PISA assessments in mathematics, reading, and science in 2003 with those in 2000 (Tables C.4.3, C.4.4 and C.4.5). Because of differences in the content areas covered by the PISA 2000 and 2003 assessments in mathematics, it is not appropriate to make this comparison in terms of overall mathematics performance (as is done for reading and science). However, it is possible to compare change in the two mathematics sub-domains of space and shape, and change and relationships, as these were included in both assessments.
  • Since data are available for only two points in time, it is not possible to assess to what extent the observed differences are indicative of longer term trends.
  • It is possible to compare results between PISA 2000 and PISA 2003 for reading, science, and the two mathematics sub-domains (shape and space, and change and relationships), but small differences should be interpreted with caution.
  • Statistical significance is determined by mathematical formulas and considers issues such as sampling. Whether a difference in results has implications for education is a matter of interpretation; for example, a statistically significant difference may be quite small and have little effect. There are also situations where a difference that is perceived to have educational significance may not in fact have statistical significance.
  • For more information about PISA concepts, definitions and methodology, consult the PISA section of the CMEC Web site, www.cmec.ca/pisa, or the PISA Canada Web site, www.pisa.gc.ca.
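The confidence-interval logic described above can be sketched in a few lines of Python. This is an illustrative sketch only, with made-up score and standard-error values (not actual PISA results); `significantly_different` shows one common form of the comparative test mentioned, assuming independent samples.

```python
import math

def confidence_interval(mean, standard_error, z=1.96):
    """Return the (lower, upper) bounds of a confidence interval.

    z = 1.96 corresponds to the 95% level -- roughly the
    "plus or minus about two standard errors" described above.
    """
    margin = z * standard_error
    return (mean - margin, mean + margin)

def significantly_different(m1, se1, m2, se2, z=1.96):
    """One common test for a difference between two independent
    sample averages: the gap must exceed z standard errors of
    the difference."""
    return abs(m1 - m2) > z * math.sqrt(se1 ** 2 + se2 ** 2)

# Hypothetical sample average of 532 with a standard error of 1.8:
low, high = confidence_interval(532, 1.8)
print(f"95% CI: {low:.1f} to {high:.1f}")  # 95% CI: 528.5 to 535.5

# Hypothetical comparison of two averages whose intervals do not overlap:
print(significantly_different(532, 1.8, 524, 2.1))  # True
```

When two averages are compared this way, using the standard error of the difference avoids the overly conservative practice of simply checking whether the two confidence intervals overlap.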
Data source

  • Programme for International Student Assessment, 2000 and 2003, Organisation for Economic Co-operation and Development/Statistics Canada.
School Achievement Indicators Program (SAIP)

    Tables C.4.6 and C.4.7; text tables C.4.a and C.4.b

    Indicator C4 reports on student achievement in four key areas—reading, writing, mathematics, and science—and looks at changes in results over time. Performance was examined using results from two assessment programs: the Programme for International Student Assessment (PISA), an international program of the member countries of the Organisation for Economic Co-operation and Development (OECD), and the School Achievement Indicators Program (SAIP), an initiative of the provinces and territories conducted through the Council of Ministers of Education, Canada (CMEC).

    This sub-indicator, based on data from SAIP, examines reading, mathematical and scientific literacy (Tables C.4.6 and C.4.7; also see text tables C.4.a and C.4.b).

    Concepts and definitions

  • On a pan-Canadian basis, student achievement has been reported through the School Achievement Indicators Program (SAIP). The provinces and territories, through CMEC, developed SAIP to assess the performance of samples of 13- and 16-year-old students across Canada in the key subject areas of mathematics, reading, writing, and science. Three cycles of SAIP have been conducted.
    Table 1
    The SAIP assessment schedule

    Mathematics    Reading and Writing    Science
    1993           1994                   1996 (written and practical)
    1997           1998                   1999 (written and practical)
    2001           2002 (writing)         2004 (written)

    The SAIP science assessment of 2004 was the last SAIP administered. Starting in 2007, the new Pan-Canadian Assessment Program (PCAP) is being introduced.

  • The SAIP mathematics assessment had two components: a content component and a problem-solving component. The content component was designed to test knowledge of mathematics concepts in the areas of numbers and operations, data management and statistics, algebra and functions, and measurement and geometry; the problem-solving component allowed students to demonstrate mathematics problem-solving skills.
  • The SAIP writing assessment in 2002 was the third in a series of writing assessments. However, the results of this assessment cannot be compared with those from earlier assessments. In this assessment, students were given a specific real-life environmental dilemma and asked to write to generate public awareness about this dilemma. This was intended as a cross-curricular theme linking environmental, scientific, social and political information and issues relevant to both classrooms and local communities. The task design also included a brief reading response to stimulate engagement with the theme.
  • The written component of the SAIP science assessment was designed to test students' knowledge of science concepts (physical sciences/chemistry, life science/biology, physical sciences/physics, and earth and space sciences), the nature of science, and the relationship of science to technology and societal issues. Questions also dealt with conceptual knowledge and understanding, procedural knowledge and skills, and the ability to use science to solve problems.
Methodology

  • The target population for SAIP is students in the 10 provinces and 3 territories aged 13 and 16 (i.e., those students who had their 13th or 16th birthdays between September 1 and August 31 of the previous year).
  • SAIP presents achievement results for Canada as a whole and for each participating province and territory. SAIP also provides results for the English and French school systems within a jurisdiction. Beginning with the 1999 science assessment, SAIP began to collect contextual information on student performance to help interpret and explain the achievement results (this background information is not reported on in PCEIP 2007).
  • In all SAIP assessments, achievement results are described over five levels, representing a continuum of knowledge and skills acquired by students over the entire elementary and secondary school experience. Criteria for Level 1 were representative of knowledge and skills typically acquired during early elementary education, while Level 5 criteria were typical of those acquired by the most capable students taking specialized courses near the end of their secondary education. More detailed descriptions of the SAIP performance levels in relation to the subject of the assessment can be found in the SAIP reports, as well as the Handbook for Schools, which may be found on the CMEC Web site.
  • The questions in SAIP assessments are designed with the expectation that most 13-year-olds would achieve Level 2 or higher, while most 16-year-olds would achieve Level 3 or higher.
  • In each assessment, both age groups write components of the same test. Thus direct comparisons between 13- and 16-year-olds can be made.
  • For all SAIP assessments, development teams composed of representatives from provinces and territories have worked with CMEC staff to consult with all jurisdictions to establish a common framework and set of criteria for each subject area studied. These frameworks are intended to reflect the commonly accepted knowledge and skills students should acquire during their elementary and secondary education in Canada.
  • A statistic called the "standard error" is used to express the degree of uncertainty in the scores for the sample compared with the population. Using the standard error, a confidence interval (CI) can be constructed. The CI is a range of scores within which it can be said, with a known probability (such as 95%), that the score for the full population is likely to fall. The 95% confidence interval used in this report represents a range of plus or minus about two standard errors around the average.
Limitations

  • Looking at the relative performance of different groups of students on the same or comparable assessments at different time periods shows whether the level of achievement is changing. Obviously, scores on an assessment alone cannot be used to evaluate a school system, because many factors combine to produce the average scores. Nonetheless, these assessments are one of the indicators of overall performance.
  • The SAIP assessments were not designed to measure achievement at the school or individual student levels.
  • Statistical significance is determined by mathematical formulas and considers issues such as sampling. Whether a difference in results has implications for education is a matter of interpretation; for example, a statistically significant difference may be quite small and have little effect. There are also situations where a difference that is perceived to have educational significance may not in fact have statistical significance.
  • Although mathematics has been assessed since 1993 in SAIP, only those assessments conducted in 1997 and 2001 are comparable because of significant changes that were made in the scoring methods and assessment design after the 1993 assessment.
  • For the writing assessment, caution is advised when comparing achievement results based on assessment instruments prepared in different languages, despite the extensive efforts to ensure equivalence for the sake of equity and fairness for all students. Every language has unique features that are not readily equivalent, rendering comparisons between languages inherently difficult. For specific jurisdictional results, comparisons are made to the Canadian results by language: the results for English jurisdictions are compared with the Canadian English average, and the results for French jurisdictions with the Canadian French average.
  • SAIP assessments in all subject areas were generally designed to retain sufficient elements from one administration to the next, allowing comparisons of student achievement over time within each subject area, while making certain modifications to reflect changes in educational policies and practices. An important factor to consider when comparing assessments over time, however, is the impact of changes in curriculum and in teaching practice.
  • The performance of students in Canada (and within each jurisdiction) was compared by looking at the proportion of students meeting or exceeding each level of performance in each jurisdiction and at the cumulative distributions of these proportions. Since the available scores were based on samples of students from each jurisdiction, it cannot be said with certainty that these scores are the same as those that would have been obtained had all 13- and 16-year-old students been tested.
  • For more information about SAIP concepts, definitions and methodology, consult the PCAP (SAIP) section of the CMEC Web site.
Data source

  • School Achievement Indicators Program (SAIP), 1996, 1997, 1999, 2001, 2002 and 2004, Council of Ministers of Education, Canada:
  • Writing, SAIP 2002;
  • Mathematics, SAIP 1997 and 2001;
  • Science, SAIP 1996, 1999 and 2004.

    C5 Information and communications technologies (ICT)

    Tables C.5.1 through C.5.5

    Indicator C5 presents data on computer use among students—at school and at home. Availability of computers and the Internet (Tables C.5.1 and C.5.2), frequency of use (Tables C.5.3 and C.5.5), and computers as learning aids (Table C.5.4) are explored.

    Concepts and definitions

  • Information on computer accessibility and use is available through the Programme for International Student Assessment (PISA), which evaluates the performance and achievements of 15-year-old students.
  • The average number of students per computer, or the student–computer ratio, is often used as a proxy for the technology available to students. It refers to the total number of students enrolled in a school divided by the total number of computers in the school. This indicator uses data from PISA, which reports this ratio for schools in which 15-year-olds are enrolled.
  • Computer use at home/school was categorized as follows: frequent (the computer is used almost every day or a few times each week); infrequent (the computer is used between once a week and once a month, or less than once a month); and never (the computer is never used).
  • The Organisation for Economic Co-operation and Development (OECD) is a multidisciplinary international body made up of 30 member countries that offers a forum for governments to consult and co-operate with each other in order to develop and refine economic and social policy. This indicator presents data from the Russian Federation and the following OECD countries: Australia, Belgium, Canada, Finland, France, Germany, Italy, Japan, Mexico, Sweden, Switzerland, the United Kingdom, and the United States. The other OECD members are Austria, the Czech Republic, Denmark, Greece, Hungary, Iceland, Ireland, Korea, Luxembourg, the Netherlands, New Zealand, Norway, Poland, Portugal, the Slovak Republic, Spain, and Turkey.
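The frequency grouping described above can be expressed as a simple lookup table. This is a hypothetical sketch: the response labels below are assumptions chosen to match the category definitions, not the exact PISA questionnaire wording.

```python
# Hypothetical response labels; the grouping follows the
# frequent / infrequent / never categorization described above.
FREQUENCY_GROUPS = {
    "almost every day": "frequent",
    "a few times each week": "frequent",
    "between once a week and once a month": "infrequent",
    "less than once a month": "infrequent",
    "never": "never",
}

def usage_group(response: str) -> str:
    """Map a reported frequency of computer use to its category."""
    return FREQUENCY_GROUPS[response.strip().lower()]

print(usage_group("A few times each week"))   # frequent
print(usage_group("Less than once a month"))  # infrequent
```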
Methodology

  • The target population for PISA 2003 comprised 15-year-olds who were attending schools in one of Canada's 10 provinces; the territories have not participated in PISA to date. Students of schools located on Indian reserves were excluded, as were students of schools for those with severe learning disabilities, schools for blind and deaf students, and students who were being home-schooled. Forty-one countries participated in PISA 2003.
  • In most countries, between 4,500 and 10,000 15-year-olds participated in PISA, for a total of over 250,000 students. In Canada, 30,000 students from 1,200 schools in the 10 provinces took part. This large Canadian sample was needed to produce reliable estimates for each province.
  • The information for this indicator is from responses to the short PISA questionnaire that focused on information technology. The average number of students per school computer (the student–computer ratio) was calculated by dividing the total number of students enrolled in the school by the total number of computers in the school, for the schools in which 15-year-olds are enrolled.
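The ratio calculation above amounts to a single division per school; a minimal sketch, using made-up enrolment figures rather than PISA data:

```python
def student_computer_ratio(students_enrolled: int, computers: int) -> float:
    """Total students enrolled in a school divided by its total
    number of computers, as described above."""
    if computers == 0:
        raise ValueError("school reports no computers")
    return students_enrolled / computers

# A hypothetical school with 600 students and 120 computers:
print(student_computer_ratio(600, 120))  # 5.0
```

A lower ratio indicates more computers available per student; the zero-computer guard reflects the fact that the ratio is undefined for schools reporting no computers.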
Limitations

  • Some data previously presented in Indicator C5 of PCEIP 2003 using PISA 2000 are not available from PISA 2003 as some of the questions were not repeated.
Data source

  • Programme for International Student Assessment (PISA), 2003, Organisation for Economic Co-operation and Development/Statistics Canada.