Purpose, theories and methods


This opening chapter describes the purpose and specific objectives of the International Study of Reading Skills. It also offers a summary overview of the theoretical foundations, the definitions applied in the study, and the instruments used for the data collection. A few preliminary comments on the significance and limitations of the study are also given. The next section sets the stage for the study by reviewing some issues and questions that are pertinent to policy makers, educators and literacy advocates in Canada.

The policy context
Purpose of the study
Theory, definitions and instruments
Significance and limitations
Organization of the report

The policy context

Canada has a comparatively high level of educational attainment, one outcome of spending more on education as a percentage of GDP than most OECD countries over decades. Canada also has applied a selective immigration policy, a fact that explains why immigrants arrive with higher levels of educational attainment than their Canadian-born peers. Together, Canada's education and immigration policies have contributed to creating one of the highest standards of living in the world. These policies also have allowed Canadian workers and firms to compete successfully in an increasingly global economy. The same policies have also helped to shape what many believe to be one of the most tolerant, culturally diverse and creative societies in the world.

Literacy – the ability to access and apply information gleaned from the printed word – is known from research studies to enable access to social and economic systems, and to play a key role in determining long-term economic growth rates (Coulombe and Tremblay, 2006; Coulombe, Tremblay and Marchand, 2004). Furthermore, differences in the level and distribution of adult literacy are associated with large differences in employment, wages, health, lifelong learning opportunities, and participation in broader society.

Although Canada has one of the highest levels of average literacy skill among OECD countries (OECD and Statistics Canada, 2000; 2005), many adults do not possess the level of skill that observers believe is needed to maintain competitiveness in an increasingly global knowledge economy. This observation has led to calls for new investment in adult literacy training (Canadian Council on Learning, 2006; Movement for Canadian Literacy, 2006). Arguments for additional investment have been justified in three ways. First, the case has been made on the basis of fairness, as investment needed to level the playing field, to allow all Canadians full and equal access to labour markets, health services, education systems and democratic institutions. Second, the argument has been advanced on the basis of broad economic self-interest, as investment needed to maintain the position of the nation in the global knowledge economy. Finally, the case has been justified on the basis of narrow economic self-interest, as a means to reduce the demand for, and cost of, delivering public goods and services such as education, health and criminal justice.

A large body of empirical evidence, much of it generated by secondary analyses of data collected for the International Adult Literacy Survey, supports the three claims, showing inter alia:

  • Large differences in the average level and distribution of literacy and numeracy competencies exist between Canada and many of its key competitors;

  • Differences in the average level and distribution of literacy and numeracy exist among Canada's provinces and territories, with scores declining from West to East;

  • These skill differences matter to individual quality of life over a range of outcomes, including employability, wages, physical health, social engagement, and access to lifelong learning opportunities. They also explain a significant proportion of the variance in long-term GDP growth rates in advanced OECD countries;

  • Shifts in economic structures, work organizations and technologies of production are expected to amplify the impact that literacy has on individual and collective outcomes. Key among these is the impact of literacy on the rate at which firms can adopt advances in information and communication technologies;

  • Rapid increases in the global supply of economically important skills, including literacy, are expected to place severe pressure on employment and wage rates as multi-national firms shift production to lower cost but equally skilled labour markets; and

  • Large proportions of key population groups, including recent secondary graduates and Aboriginal peoples, fail to attain, or retain, the threshold of literacy and numeracy deemed necessary to meet the rising skill demands of the economy and society.

The results presented in this volume will shed light on how to improve the literacy of low-skilled (Level 1 and 2) adults and on how to help youth avoid leaving initial education with low literacy skills. In addition, the ISRS findings presented will help policy makers determine how best to invest in literacy skills development by providing answers to the following questions:

  1. What are the learning needs of different groups of low skilled adults?

  2. What kinds of intervention programs would be needed to address these learning needs?

  3. What magnitude of investment would yield a marked improvement in skill?

  4. Where, given the implicit trade-offs between efficiency and equity, would new investments have the most impact?

  5. Who should underwrite the costs of this investment? Is there evidence of a market failure of the sort that would justify public investment? Do individuals and families have the financial resources to underwrite a part of the cost of improving their skill levels? What role should employers play in financing literacy programs?

  6. What are the consequences of inaction? What are the opportunity costs of investing in literacy?

Answers to these questions depend upon a subtle understanding of the literacy learning needs of different groups of adults, most particularly those with the lowest skill levels. But despite the fact that Canada has acted as a pathfinder in developing valid and reliable measures of adult literacy, little data is currently available to shed light on these questions.

Purpose of the study

This report uses data from a new study, the International Study of Reading Skills (ISRS), to address the issues raised above. The main purpose is to describe in depth the reading abilities of the least-skilled adult readers in society and to identify the basic reading profiles of these adults, based on their strengths and needs in reading. The goal is to supply policy makers, researchers and practitioners with new information useful for making decisions about how to plan and deliver appropriate and efficient reading instruction for different adult learners. As such, the current report only addresses the first of the six questions enumerated above.

Specifically, the ISRS was designed to characterize the reading profiles and learning needs of demographically different groups of low skilled Canadian adults by administering a battery of clinical reading tests to a sample of adults who previously had participated in an international literacy assessment. The new data set should inform the development of better diagnostic systems for low skilled adults, tailoring the content and modalities of instruction to their needs, and creating improved strategies to encourage active participation by adult learners.

The ISRS study has the potential to shed light on all but the last of the questions listed above. Answers to the sixth question, however, will only emerge from a nuanced analysis of Canada's economic prospects by a range of public and private actors, and from a focussed public debate.

As explained in Box 1.1, Canadian and US based teams jointly developed the ISRS, building on the theories and assessment frameworks developed for two prior international assessments of adult literacy: the International Adult Literacy Survey (IALS), fielded in 20 countries between 1994 and 1998, and the International Adult Literacy and Skills Survey (IALSS), implemented in seven countries or territories in 2003. Representative sub-samples of respondents to the English and French variants of the IALSS were selected for the Canadian component of the ISRS. Since the ISRS was a follow-up to the IALSS, the information gathered from the two surveys could be combined in the analysis.

Box 1.1 The international dimension of the study

The ISRS has an international dimension not only because it builds on large-scale comparative assessments of adult literacy but also because its design, data collection and analysis involved several US and Canadian research teams.

The ISRS is a joint project of the Educational Testing Service, Princeton and Statistics Canada, Ottawa, implemented in co-operation with the National Center for the Study of Adult Learning and Literacy at the Harvard Graduate School of Education and Westat, Inc., based in Maryland.

Human Resources and Social Development Canada and Statistics Canada funded the Canadian part of the study while the US part was financed by the Office of Vocational and Adult Education and the National Center for Education Statistics of the US Department of Education.

The US and Canadian studies had slightly different objectives and surveyed different populations but shared common approaches to measuring component reading skills. Initial results of the US study may be found in Adult Education in America: A First Look at Results from the Adult Education Program and Learners Surveys (ETS, 2007).

Theory, definitions and instruments

"Low skill" in the ISRS was defined as proficiency below Level 3 on the IALSS prose literacy scale. This choice is in keeping with the view that Level 3 is the threshold adults need to participate fully and fairly in the knowledge economy: Level 3 skills are known to be associated with satisfactory job performance in the overwhelming majority of Canadian occupations, with the effective use of public health information and with active community participation (Statistics Canada and OECD, 2005; Statistics Canada, 2005). The Level 3 threshold is also one that reading researchers believe marks an important shift in the underlying cognitive strategies that readers must deploy to access and apply information embedded in print.

The Canadian component of the ISRS selected representative sub-samples of a total of 1,815 respondents in the 10 provinces; 986 of them completed the tasks in English, and 829 did so in French. There were 232, 332 and 422 individuals at Levels 1, 2 and 3+ in the English sample, and 98, 312 and 419 individuals at Levels 1, 2 and 3+ in the French sample.

Adults scoring at Levels 1 and 2 were over-sampled in order to provide a means of studying the relationship of the component skills and the prose literacy scale.

The ISRS sample included every French language adult classified at Level 1 in the IALSS study who agreed to be re-contacted. Although the number of such adults was lower than desired, the resulting estimates represent the adult population of Canada aged 16 to 65 living in the 10 provinces.

The ISRS was administered in respondents' homes using several instruments. First, respondents were invited to complete a background questionnaire, which consisted of several information modules required to relate the tested skills to social and economic background variables. They were asked a series of questions about their education, the languages they use in various situations and their labour force status, as well as another set of questions about health and disabilities. Next, the prose and document literacy components, which required respondents to complete a number of tasks, were administered. First came a booklet of nine simple tasks; if respondents successfully completed at least three of them, they were given a second test booklet containing 31 tasks. If they did not, they moved directly to the survey's third component, a series of additional exercises designed to measure reading-related component skills.
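The routing rule described above can be rendered as a simple branching function. This is an illustrative sketch only, not the actual survey software; the function name and return labels are ours:

```python
def next_stage(core_tasks_correct: int) -> str:
    """Route a respondent after the nine-task core booklet.

    Respondents who successfully complete at least three of the nine
    simple tasks move on to the 31-task main booklet; all others
    proceed directly to the reading-components exercises.
    """
    if core_tasks_correct >= 3:
        return "main booklet (31 tasks)"
    return "component skills exercises"
```

This kind of screening step keeps very low-skilled respondents from facing a long battery of tasks they are unlikely to complete, while still collecting diagnostic component data from them.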

For prose literacy the IALSS definition is used – the knowledge and skills needed to understand and use information from texts including editorials, news stories, brochures and instruction manuals. Similarly, document literacy is defined as the knowledge and skills required for locating and using information contained in various formats, including job applications, payroll forms, transportation schedules, maps, tables and charts.

Prose literacy and document literacy are measured on a scale of 0 to 500. A score at a given point on the scale means that a person has an 80 percent chance of correctly performing tasks of equivalent difficulty. To simplify reporting of the results, the scales are also divided into five levels, with each level representing a set of tasks that an individual at that level is capable of performing. Table 1.1 describes the increasing levels of task difficulty.

Table 1.1
Five levels of difficulty for the prose and document literacy scales

Level 1 (0 to 225 points)

Prose: Most of the tasks in this level require the respondent to read relatively short text to locate a single piece of information that is identical to or synonymous with the information given in the question or directive. If plausible but incorrect information is present in the text, it tends not to be located near the correct information.

Document: Tasks in this level tend to require the respondent either to locate a piece of information based on a literal match or to enter information from personal knowledge onto a document. Little, if any, distracting information is present.

Level 2 (226 to 275 points)

Prose: Some tasks in this level require respondents to locate a single piece of information in the text; however, several distractors or plausible but incorrect pieces of information may be present, or low-level inferences may be required. Other tasks require the respondent to integrate two or more pieces of information or to compare and contrast easily identifiable information based on a criterion provided in the question or directive.

Document: Tasks in this level are more varied than those in Level 1. Some require the respondents to match a single piece of information; however, several distractors may be present, or the match may require low-level inferences. Tasks in this level may also ask the respondent to cycle through information in a document or to integrate information from various parts of a document.

Level 3 (276 to 325 points)

Prose: Tasks in this level tend to require respondents to make literal or synonymous matches between the text and information given in the task, or to make matches that require low-level inferences. Other tasks ask respondents to integrate information from dense or lengthy text that contains no organizational aids such as headings. Respondents may also be asked to generate a response based on information that can be easily identified in the text. Distracting information is present, but is not located near the correct information.

Document: Some tasks in this level require the respondent to integrate multiple pieces of information from one or more documents. Others ask respondents to cycle through rather complex tables or graphs containing information that is irrelevant or inappropriate to the task.

Level 4 (326 to 375 points)

Prose: These tasks require respondents to perform multiple-feature matches and to integrate or synthesize information from complex or lengthy passages. More complex inferences are needed to perform successfully. Conditional information is frequently present in tasks at this level and must be taken into consideration by the respondent.

Document: Tasks in this level, like those at the previous levels, ask respondents to perform multiple-feature matches, cycle through documents, and integrate information; however, they require a greater degree of inference. Many of these tasks require respondents to provide numerous responses but do not designate how many responses are needed. Conditional information is also present in the document tasks at this level and must be taken into account by the respondent.

Level 5 (376 to 500 points)

Prose: Some tasks in this level require the respondent to search for information in a dense text that contains a number of plausible distractors. Others ask respondents to make high-level inferences or use specialized background knowledge. Some tasks ask respondents to contrast complex information.

Document: Tasks in this level require the respondent to search through complex displays that contain multiple distractors, to make high-level text-based inferences, and to use specialized knowledge.

Source: Learning a Living: Initial Results of the Adult Literacy and Life Skills Survey.
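The score bands in Table 1.1 map mechanically onto the 0-to-500 scale. A minimal sketch of that mapping (the function name and error handling are illustrative, not part of the survey methodology):

```python
def literacy_level(score: float) -> int:
    """Map a prose or document literacy score on the 0-to-500 scale
    to its reporting level, using the score bands in Table 1.1."""
    if not 0 <= score <= 500:
        raise ValueError("score outside the 0 to 500 scale")
    for upper_bound, level in [(225, 1), (275, 2), (325, 3), (375, 4), (500, 5)]:
        if score <= upper_bound:
            return level
```

For example, a score of 300 falls in the 276-to-325 band and is reported as Level 3, the threshold discussed throughout this report.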

The clinical reading tests administered in the ISRS study measure the word reading and vocabulary skills that are thought to underlie the emergence of the fluent and automatic reading associated with Level 3 performance on the IALS and IALSS prose literacy scales. Although the emergence of fluent and automatic reading also depends on other factors, research studies cited in Chapter 3 have shown that few learners manage to reach Level 3 proficiency without having first mastered these component skills. Hence mastery of component skills is a necessary but not sufficient condition for the acquisition of Level 3 performance. Other factors also play a role, including the relevance of the material for readers' lives or whether they are familiar with the specific genre of text, e.g., fiction, academic writing, persuasive essays, poetry, etc. Notwithstanding these factors, individual performance on the clinical reading tests used in the ISRS explains up to 80 percent of performance on the overall literacy proficiency scale.

The component measures administered as part of the ISRS were selected according to several related considerations. First, it had to have been established on both theoretical and empirical grounds that a given component was important to the acquisition of Level 3 skills; Chapter 3 describes the theory and evidence underlying the reading components assessed in the ISRS. Second, the measures had to be amenable to administration by non-specialist interviewers within the context of a household survey. Third, the measures had to display good psychometric properties in terms of their validity, reliability and comparability. Finally, equivalent measures were to be employed to assess component skills in both English and French. Although conceptually identical, the English and French measures were found not to provide strictly comparable results. Accordingly, the relationships among the components, and between them and the emergence of fluent and automatic reading, were shown to differ in certain respects between the two language groups. In addition to completing the component reading measures, ISRS respondents were also assessed on their ability to understand spoken English or French, and to speak it intelligibly at a native conversational pace on everyday topics.

Six instruments were used to measure the reading-related component skills. The first was the abridged Peabody Picture Vocabulary Test (PPVT-m), which required respondents to identify which of four images corresponded to a word spoken by the interviewer. Second came the Rapid Automatized Naming (RAN) test, in which respondents were asked to read a series of random letters as quickly as possible. The third was the Test of Word Reading Efficiency, which required respondents to read a list of real words (TOWRE-A), followed by a list of pseudo-words (TOWRE-B), as quickly as possible. The usual time limit for each word list is 45 seconds; however, to capture as much variability as possible, a 60-second limit was used in the ISRS. The fourth instrument was PhonePass, which contained three different tasks: repetition of simple sentences, a set of short-answer questions, and reading of simple sentences. The fifth test involved repeating a series of digits in order and another series of digits in reverse order. The sixth and final exercise was a spelling test.

The component measures were scored individually. In order to facilitate analysis, the raw component scores were scaled separately using a two-parameter logistic (2PL) model based on item response theory (Birnbaum, 1968; Lord, 1980). The score for each component varies from 0 to 1 and represents the expected proportion correct on the entire test. More information about the scaling of the components is given in Annex B.
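Under a 2PL model, the probability that a respondent of ability theta answers an item correctly is 1 / (1 + exp(-a(theta - b))), where a is the item's discrimination and b its difficulty; the reported 0-to-1 component score is then the expected proportion correct across the test's items. A minimal sketch of this scaling, with invented item parameters for illustration (the actual ISRS parameters are documented in Annex B):

```python
import math

def p_correct(theta: float, a: float, b: float) -> float:
    """2PL item response function: probability of a correct answer
    for a respondent of ability theta on an item with discrimination
    a and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def component_score(theta: float, items) -> float:
    """Expected proportion correct over all items of a component
    test; this corresponds to the 0-to-1 reporting scale."""
    return sum(p_correct(theta, a, b) for a, b in items) / len(items)

# Hypothetical (a, b) parameters for a five-item component test
items = [(1.2, -1.0), (0.8, -0.5), (1.0, 0.0), (1.5, 0.5), (0.9, 1.0)]
```

Because each item response function rises with ability, the expected proportion correct increases smoothly with theta, which is what allows raw counts of correct answers to be placed on a common scale across respondents.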

Significance and limitations

The ISRS assessment of the component reading profiles and learning needs of low-skilled adults is by far the largest study of its kind ever undertaken in Canada. The study uses a large, representative sample of adults in order to support the generalization of results and also provides a means to estimate the absolute number of different types of adult learners in the population.

Despite the utility of the ISRS findings for educators, researchers and policy makers, the study is not without its limitations. The French and English findings were analyzed separately, both to capture differences in how the component measures relate to overall reading ability and to reflect large demographic differences between the two populations, particularly with respect to the characteristics of immigrants. Interpretation of the findings is also made more complex than is usually the case in survey research because the population sampled for the ISRS is a subgroup of those who participated previously in the IALSS, with a focus on those scoring at the lowest levels of literacy proficiency. Unfortunately, the least literate respondents were also those who had the highest refusal and non-response rates among those sampled. Although statistical procedures were implemented to correct for non-response bias, some residual upward bias in component scores may be present.

Large as they are compared to other research studies in the field, the sample sizes fielded in the ISRS are still relatively small. The limited number of low-skilled respondents available from the IALSS and the high cost of administering the component reading tests to a geographically widely distributed sample of adults in 10 provinces precluded further increases in the ISRS sample sizes. Having established the utility of the approach, future research could expand the scope of the enquiry in useful ways.

Given the link of the ISRS to major comparative literacy assessments, every effort was made to establish the validity, reliability, comparability and interpretability of estimates, and to control and quantify errors that might interfere with or bias interpretation. Notes to charts and tables are used to alert readers whenever errors might affect interpretation. The data presented in this report are estimated from representative but complex samples of adults in Canada. The sample design is described in Annex A. Tables reporting the results of the data analyses are included in Annex C. These annex tables also give the standard errors, in parentheses, next to the actual estimates, expressing the degree of uncertainty associated with both sampling and measurement errors. Even though the sample size of the ISRS is the largest that has been used for this type of study to date, some key statistics have coefficients of variation that are higher than the standard cut-off set by Statistics Canada for publication and, as such, are suppressed in the data presented in this report.

Organization of the report

The report is divided into five chapters and is supported by five annexes.

Chapter 1 is the Introduction.

Chapter 2 presents an overview of the characteristics of adults who perform at Levels 1 and 2 on the IALSS proficiency scales, including their distribution by age group, gender, educational attainment, immigrant status and income characteristics. The chapter also highlights differences between adults at these levels in each of Canada's official languages and provides a rationale as to why the attainment of Level 3 skill is so important. This chapter uses the IALSS Canadian dataset, which has a larger sample size than the ISRS and, hence, can offer more reliable estimates of key characteristics.

Chapter 3 describes the theories and evidence derived from previous research studies that underlie the reading components that were assessed, and sets out their pertinence for instruction.

Chapter 4 explores the relationships between performance on the separate reading components and the emergence of fluent and automatic reading skill defined as the attainment of Level 3 prose literacy in each of Canada's official languages. This chapter also defines different groups of learners based upon their patterns of component skills and attempts to tease out what these patterns imply for the content, structure, mode and duration of remedial instruction. This chapter also explores the relationship between patterns of component skills, underlying causal factors and a range of social and economic outcomes observed at the individual level.

Chapter 5 presents a summary of key findings and a few implications for literacy policy and program design and delivery.

Annex A describes the survey and sample design employed.

Annex B explains the methods and statistical models applied in scaling and proficiency estimation.

Annex C is the statistical annex. It provides the estimates and associated standard errors for all data analyses presented in the report.

Annex D lists the references cited in the text and offers suggestions for further reading.

Annex E identifies the individuals and institutions that contributed to the study.