Alternative Assessment Portfolios For Students With Intellectual Disabilities A Case Study

Abstract

The purpose of this paper is to present findings on alternate assessments for students who are deaf or hard of hearing (SDHH). Drawn from the results of the “Second National Survey of Assessments and Accommodations for Students Who Are Deaf or Hard of Hearing,” this study investigated three alternate assessment formats: portfolio, checklists, and out-of-level testing. Analysis includes descriptive data of alternate assessment use across all three formats, qualitative analyses of teacher perspectives, and an exploratory logistic regression analysis on predictors of alternate assessment use. This exploratory analysis looks at predictors such as state policy, educational setting, grades served, language of instruction, and participant perspectives. Results indicate that predictors at the student, teacher, and system level may influence alternate assessment use for SDHH.

In the current high-stakes testing environment, large-scale, standardized assessments are the primary way that states measure student achievement (e.g., No Child Left Behind Act [2002]). When including students with disabilities in large-scale assessments, additional considerations need to be made as to how students will access the assessment content (Individuals with Disabilities Education Act, IDEA, 1997). Students with disabilities may face barriers to the content of the assessment for many reasons, including the paper-and-pencil format used by many test vendors. In some cases, testing accommodations can remove the barriers to accessing the target skill of the assessment and allow students to meaningfully participate in the test (American Educational Research Association, American Psychological Association, & National Council on Measurement in Education, 1999; Bolt & Thurlow, 2004). In cases where effective accommodations are not available, or changes made with the accommodation put the validity of the test at risk, “alternate assessment” formats can be used to effectively measure student knowledge and skill.

The purpose of this paper is to discuss alternate assessment use for students who are deaf or hard of hearing (SDHH). This manuscript will focus on three common alternate assessment formats selected because of their use with this student population (Cawthon & the Online Research Lab, 2006) or their inclusion in state alternate assessment policy (Thompson, Johnstone, Thurlow, & Altman, 2005): portfolios of student work, checklists of student knowledge and skill, and out-of-level testing. This literature review will first discuss the purpose of alternate assessments, followed by the format and technical issues for each of the three targeted alternate assessment formats. The paper will then turn to what is known about alternate assessment use for SDHH and the guiding research questions for this study. The paper then provides alternate assessment results from the “Second Annual National Survey of Assessment and Accommodations for Students Who Are Deaf or Hard of Hearing.” The discussion section considers the implications of these results for SDHH and their inclusion in the No Child Left Behind (NCLB) accountability framework.

Alternate Assessment

Alternate assessment formats.

Alternate assessments allow students who cannot participate in standardized assessments (even with accommodations) to be included in the large-scale evaluation process. These students are usually identified as those who have significant cognitive disabilities (CDs), though that definition and its application vary from state to state (Thompson et al., 2005). It is important to note that many students whose primary disability category is deaf or hard of hearing also have other disabilities. Approximately 40% of students counted in the 2003–2004 Gallaudet Annual Survey were listed as having an additional disability (Gallaudet Research Institute [GRI], 2005), including learning disabilities (LDs), cerebral palsy (CP), mental retardation, emotional disturbance, and attention deficit disorder. Although each individual disability may not be considered severe, a student with multiple disabilities may face challenges in participating in assessments that a student with only hearing loss may not.

State guidelines for developing alternate assessments emerged later than those for standardized tests and accommodations (Koretz & Barton, 2003; Quenemoen, Rigney, & Thurlow, 2002). Assessment development for students in regular education has had at least 50 years to mature into a robust field of study; this timeline has been compressed and accelerated for assessments for students with complex disabilities, due in part to the IDEA and NCLB legislation. Many states are still revising their alternate assessment policies in response to changes in federal laws, transitioning from a focus on functional skills to skills that reflect curriculum standards for all students (Thompson et al., 2005). In developing their assessment systems, states may choose from a range of alternate assessment strategies for students with the most significant disabilities. A brief review of the three formats of alternate assessments explored in this study, followed by a discussion of reliability and validity issues, is given below.

Portfolio (body of evidence).

A portfolio approach to alternate assessment typically involves gathering artifacts and documents from the student's classroom work into a collection of evidence of the student's content knowledge and skill (Turner, Baldwin, Kleinert, & Kearns, 2000; Wiener, 2006). By 2005, approximately half of all states, including the District of Columbia (n = 25), used this approach in their alternate assessments (Thompson et al., 2005). Of these states, 13 evaluated students based on a standardized set of items. For example, in Kentucky, a review board of three teachers scored a collection of work from multiple subject areas that spanned the previous 3 years. The teachers used a rubric that addressed how well the student met state academic standards, made progress toward Individualized Education Program (IEP) objectives, could apply skills to multiple settings, and had opportunities to develop relationships with peers (Kentucky Department of Education, 2007).

Checklist of knowledge and skill.

The second format of alternate assessments explored in this study is a checklist of student knowledge and skills. Approximately a third of all states (30%) used a checklist in 2004 assessments, but this number decreased to only 14% (n = 7) in 2005, the year of the data collected in the current study (Thompson et al., 2005). The checklist consists of a summary of a student's competencies that align to state standards and expectations for student achievement. The checklist can be a simple “yes” or “no” of targeted skills or a rating scale of relative progress that a student has made toward skill mastery. Compared with the portfolio, a checklist is not scored by an outside party and can, in some cases, be submitted without student work.

Out-of-level testing.

Out-of-level testing is different from the other formats of alternate assessments. By definition, it measures alternate standards rather than grade-level standards. Although the test is in a standardized format, out-of-level testing is considered to be an alternate assessment because scores cannot be compared with those of grade-level, typically developing peers. It is also one of the least commonly allowed formats in state assessment policies, with fewer states using out-of-level testing than even just 4 years prior (VanGetson, Minnema, & Thurlow, 2004). At the time of that study (2003–2004), 13 states had some form of out-of-level testing, but six of these states had pending changes reducing its use in statewide assessment. In their analysis, VanGetson et al. (2004) examined each state's published testing policies for statements regarding out-of-level testing options (either allowed or prohibited). The authors then provided further descriptions of the context for allowed out-of-level testing, including test subject, test purpose, and student selection criteria. These findings were corroborated through discussions with state administrators in the annual Survey of State Directors of Special Education conducted by the National Center on Educational Outcomes (e.g., Thompson & Thurlow, 2003). One rationale behind an out-of-level test is that it can help prevent the significant frustration experienced by a student who takes a test far above his or her academic proficiency. It may also be more meaningful to teachers and parents to test students on the content of instruction instead of material that they have not had access to in the classroom. From a technical standpoint, out-of-level tests are easier to score than portfolios and do not have some of the reliability issues that come with subjective evaluations of student work.

The use of subgroup norms for a specific population, in this case, for SDHH, is useful in understanding how a student is achieving relative to their peers. Researchers at the Gallaudet Research Institute (GRI) have compiled an extensive database on student achievement scores from the Stanford Achievement Tests (SAT), from the 7th edition through the present 10th edition (GRI, 1983, 1991, 1996, 2004). As part of the norming process, students take an out-of-level “screening test” to determine at what level they should participate in the standardized assessment. In other words, the norming process involves participation in an out-of-level test (Qi & Mitchell, 2007). This history may have an impact on state standardized assessment strategies used with SDHH.

There are significant limitations, however, in the use of out-of-level testing in accountability reforms such as NCLB. First, out-of-level testing used for state standardized assessments is different from the process used to develop deaf or hard of hearing norms on the SAT. Student scores are not compared against a different norm, or “average” level for a subgroup, but against those for the student group as a whole. By extension, out-of-level testing means that the score can no longer be treated as measuring the same skills as standardized test scores for the student's grade-level peers. In other words, the score of a student taking a test below grade level cannot be compared with the scores of students taking the on–grade level exam. For states that do allow out-of-level testing, the default is to assign a failing grade to the score regardless of how well a student performs on the test. Depending on the state policy and data reporting practice, passing scores on an out-of-level test may be reported as “not proficient” or “below proficient” (Minnema, Thurlow, Bielinski, & Scott, 2000). This practice leads to a reduction in information about the knowledge and skills the student may have, even if they are below grade level. Out-of-level testing thus has a limited role in assessments used to evaluate student progress toward grade-level standards.

Reliability and validity.

Issues of reliability and validity are especially challenging for alternate assessments. One reason for this is the central role of the person who compiles or assembles the evidence for student proficiency. “Reliability” refers to the consistency of assessment ratings, both across participants and across evaluators. For example, in an effort to maintain high-quality standards and reliable results, all teachers who submit portfolios in Kentucky must also attend workshops that explain how alternate assessments are scored. This training helps ensure that a student's portfolio is correctly developed and compiled, thereby avoiding penalization due to missing or underdeveloped items. This training also encourages teachers to provide students with opportunities to learn material that is aligned to state standards and focused on their IEP goals. There is little information available as to the extent of this process across states.

Work on the “validity” of alternate assessments focuses mainly on the portfolio approach (Johnson & Arnold, 2004). The validity of a student's score can be conceptualized in a number of ways. One question is whether the alternate assessment is aligned to state academic content standards, as required by NCLB (Roach, Elliott, & Webb, 2005). Sufficient alignment, or content validity, is important because the purpose of both standardized and alternate assessments is to measure student achievement on grade-level standards. Many states have recently revised their alternate assessment systems to meet these new guidelines, subject to peer review by the Department of Education (Thompson & Thurlow, 2003). States are responsible for developing their own evaluations of test validity as part of this process. Public reports of alternate assessment validity studies are thus only available for a handful of states (e.g., Johnson & Arnold, 2004; Roach et al., 2005; Turner, Baldwin, Kleinert, & Kearns, 2000; Wiener, 2006; Yovanoff & Tindal, 2007). Issues raised in these studies include (a) the number of skills used to substantiate a student's performance on standards; (b) the range of standards covered by the assessment; (c) the depth of knowledge measured; (d) the use of a single skill to “count” for multiple criteria; (e) the equating of proficiency ratings with scores from standardized assessments; and (f) variance in the above issues across subject areas (e.g., reading, math, science). State assessments vary in the extent to which they meet these technical quality criteria. Even with extensive teacher training, alternate assessment implementation requires significant investment in rigorous test design and assessment validation (Johnson & Arnold, 2004).

Alternate Assessment and NCLB

NCLB places significant restrictions on the proportion of student scores from alternate assessments that can be used toward a district's calculation of proficiency (Yell, Katsiyannis, & Shiner, 2006). Only a very small proportion (initially 1%) of students can be included in an alternate assessment, typically those with the most severe CDs. Alternate assessment scores above these cutoffs count toward participation rates, but not toward school, district, and state proficiency rates. Recent changes to the legislation allow up to an additional 2% of a school's eligible test takers to participate in alternate assessments that measure performance on “modified academic achievement standards” (Federal Register, December 15, 2005; Federal Register, April 9, 2007). The purpose of this new category is to recognize that some students with mild to moderate disabilities will not meet grade-level IEP proficiency goals and may not be able to meaningfully participate in a standardized assessment. However, these regulations only come into effect when it can be verified that students (a) received appropriate accommodations and (b) had the opportunity to learn grade-level content. In developing modified standards, states may change the level of mastery required to meet grade-level standards, but the content areas must remain the same. These new regulations are meant to give states further flexibility in what students are expected to learn and how their performance is measured. Furthermore, states can report student proficiency on modified standards in their overall evaluation of school effectiveness and progress toward NCLB Adequate Yearly Progress measures.

Alternate Assessments and SDHH

The first study by the authors, the “First Annual Survey of Assessments and Accommodations for Students Who Are Deaf or Hard of Hearing,” provided a preliminary look at nationwide trends in alternate assessment use with SDHH (Cawthon, 2006). The unit of analysis for this initial study was the “school or program” that served SDHH. A total of 71 schools or programs had students who participated in alternate assessments. Of these, 45% reported using an out-of-level test for at least one student, 42% a work sample, 37% a curriculum-based assessment, 24% a checklist or structured observations, and 17% unstructured observations. (Totals add to more than 100% because participants could choose more than one format.)

SDHH are taught in a variety of educational settings (Marschark, 1997). Depending on the setting, students may be educated with SDHH peers or as a single student integrated into the mainstream. The communication mode used in the classroom will also vary by setting: some schools for the deaf use sign language almost exclusively, whereas a fully mainstreamed student may have no one who signs in their learning environment (Luetke-Stahlman & Nielsen, 2003). In the “First Annual Survey,” there were some differences in alternate assessment use by educational setting. Schools for the deaf were most likely to have students participate in alternate assessments (76%), followed by district-wide/school programs (40%), and then by mainstreamed settings (12%). Findings also varied by the percent of students with severe or profound hearing loss served by the school or program. Schools that did have students participate in the alternate assessment served an average of 58% of students with severe or profound hearing loss. In contrast, schools that did not have students participate in an alternate assessment served an average of 40% of students with a severe or profound hearing loss. There appears to be a relationship, therefore, between educational setting, student characteristics, and alternate assessment use.

Purpose of study.

Previous findings demonstrate the possibility of differences in the prevalence of alternate assessment use due to both educational setting and student characteristics; however, these findings are at the school or program level and do not offer enough information to draw strong conclusions about the factors that contribute to alternate assessment use. Additional information, such as the state policy for alternate assessment formats and teacher recommendations for when to use alternate assessments, would help to clarify factors that contribute to decisions about student participation in alternate assessment. The purpose of this article was therefore to further investigate both the prevalence of alternate assessment formats and the factors that contribute to their use.

Research questions.

Three research questions guide this study:

  1. What alternate assessment formats did participants report using with SDHH in 2004–2005 statewide assessments?

  2. What recommendations do participants make about effective use of out-of-level testing or portfolios of student work?

  3. What factors predicted the use of out-of-level testing, portfolios, and checklist alternate assessment formats in 2004–2005 statewide assessments?

Method

Instruments and Procedures

The instruments and procedures for this study were originally reported in Cawthon and the Online Research Lab (2007). Data for this paper are drawn from the “Second Annual National Survey of Assessment and Accommodations for Students Who Are Deaf or Hard of Hearing” (National Survey). The “Second Annual National Survey” was available from April through June 2006. The survey consisted of three parts: demographics, perspectives on accommodations, and perspectives on alternate assessment. Whereas the previous publication focused on accommodations, this paper provides results from the alternate assessment component of the survey. The survey format included multiple-choice, Likert scale, and open-ended response items. The survey instrument was administered in two ways: (a) online at the project Web site www.dhh-assess-survey.org (developed using www.surveymonkey.com) and (b) on paper, with copies provided to individuals along with stamped, self-addressed envelopes for returning responses. Incentives for participation included entry in a drawing for one of four $25.00 gift certificates upon completion of the survey.

Participants were primarily recruited from participants in the “First National Survey” and the GRI's Annual Survey of Schools and Programs contact list. Contacts were also made through SDHH-related Web site affiliations, state lists of SDHH programs and services, e-mails, 687 personal postcard invitations from the principal investigator, and individual telephone calls to all of the schools for the deaf in the United States. Study recruitment therefore consisted of both direct contact and “snowballing” techniques, in which individuals refer their colleagues to the study through informal professional networks. Additional information about the multiple recruitment strategies used to contact potential participants is available in Cawthon (2006). Most of the respondents preferred completing the online version of the survey (89%) rather than the paper version (11%). Each respondent reported on a group of students that they served or taught in the 2004–2005 school year, making the participant the primary unit of analysis for the dependent variables.

Because participants had the option of remaining anonymous (with the exception of the school or district name), it was necessary to review the data set for potential duplicate information about alternate assessment use with SDHH. After all responses were collected, care was taken to verify whether more than one teacher reported data for the same students. Within each school or district, participants were first sorted by the grade range of the SDHH students they served. If there was any overlap in those figures, researchers then looked at data on the participation of students in alternate assessment use. If these figures were the same, the participant with the most complete set of responses, from demographic data through best practices recommendations, was left in the data set for student and alternate assessment results. This process led to the elimination of four participants who were from the same school and grade and had the same number of students as another participant. Their data on views of validity and best practices were left in the data set.
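
Although this screening was carried out by hand, a rough sketch of the same idea in code (with hypothetical field names, not the authors' procedure) is to flag respondents from the same school who report the same grade range and the same number of SDHH, and then review those rows manually:

```python
# Sketch only: flag potential duplicate respondents for manual review.
# Column names (school, grade_range, n_sdhh) are hypothetical.
import pandas as pd

responses = pd.DataFrame({
    "school": ["A", "A", "B", "B"],
    "grade_range": ["3-5", "3-5", "K-2", "6-8"],
    "n_sdhh": [4, 4, 7, 3],
})

# Mark every row that shares school, grade range, and student count with another row.
responses["possible_duplicate"] = responses.duplicated(
    subset=["school", "grade_range", "n_sdhh"], keep=False
)
print(responses)
```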

Analysis

Open-ended coding.

A key component of this survey investigated teacher recommendations of effective practices for two alternate assessment formats: out-of-level testing and portfolio use. We chose out-of-level testing and portfolios because of their unique use with SDHH (out-of-level) or their prevalence in national policy (portfolio). We did not include a question regarding checklists due to time and space constraints on the survey instrument. The questions were open ended and allowed participants to describe under what conditions they would recommend use of the alternate assessment format. Responses were analyzed for recurring themes using thematic content analysis. Categories reflect similar meanings across responses, even if different vocabulary was used to express them. Two research assistants coded each response. Both coders had developed the thematic categories and participated in analyzing pilot data. The initial interrater reliability was 93% for the portfolio question and 78% for the out-of-level testing question. The lead researcher evaluated responses where the coders did not agree and made the final coding designation.
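
For reference, interrater reliability here is simple percent agreement between the two coders. A minimal sketch of that calculation, using hypothetical category labels rather than the study's actual codes:

```python
# Percent agreement between two coders on the same set of open-ended responses.
# The category labels below are hypothetical placeholders.
coder_a = ["progress", "below_grade", "progress", "disability", "progress"]
coder_b = ["progress", "below_grade", "validity", "disability", "progress"]

agreements = sum(a == b for a, b in zip(coder_a, coder_b))
percent_agreement = 100 * agreements / len(coder_a)
print(f"Interrater agreement: {percent_agreement:.0f}%")  # 80% in this toy example
```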

Because teachers often gave several examples in their answers, responses could be coded for more than one theme. Coding categories included (a) aspects of the test; (b) student characteristics; and (c) issues surrounding testing students with disabilities, such as validity, policy, student academic level, student communication, test subject, test format, assessing progress, and additional disabilities. Examples of items for each code can be found in Appendices A and B.

Logistic regression.

The third research question guiding this study centers on variables that predict the use of the three alternate assessment formats: out-of-level testing, portfolios, and checklists. The purpose of this analysis was to explore how different levels of the system may affect alternate assessment use. The following analyses use logistic regression with the forward method based on the Wald statistic to identify the best models for predicting the use of each format. A logistic regression is similar to a multiple regression but with a categorical rather than continuous dependent variable. The forward selection method was chosen because this was an exploratory research project designed to identify which variables are good predictors of alternate assessment use (Thayer, 2002). In addition, the Wald statistic was chosen because it is an efficient criterion for variable selection (Harrell, 2001).
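
To make the selection procedure concrete, the sketch below shows one way a forward selection based on the Wald statistic could be implemented for a binary outcome. This is an illustration in Python with simulated data and hypothetical variable names, not the SPSS routine used in the study.

```python
# Forward selection for binary logistic regression: at each step, enter the
# candidate predictor with the most significant Wald statistic (z^2, 1 df),
# stopping when no remaining candidate reaches p < .05.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def forward_wald_selection(data, outcome, candidates, alpha=0.05):
    selected, remaining = [], list(candidates)
    while remaining:
        best = None
        for var in remaining:
            X = sm.add_constant(data[selected + [var]])
            fit = sm.Logit(data[outcome], X).fit(disp=False)
            wald = (fit.params[var] / fit.bse[var]) ** 2
            if fit.pvalues[var] < alpha and (best is None or wald > best[1]):
                best = (var, wald)
        if best is None:          # no remaining candidate is significant
            break
        selected.append(best[0])
        remaining.remove(best[0])
    return selected

# Simulated, hypothetical survey-like data (binary predictors and outcome).
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.integers(0, 2, size=(300, 4)),
                  columns=["school_for_deaf", "asl_used", "grade_5", "used_portfolio"])
print(forward_wald_selection(df, "used_portfolio",
                             ["school_for_deaf", "asl_used", "grade_5"]))
```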

We examined three categories of independent variables: student characteristics, teacher perspectives, and environmental/context characteristics. These variables were drawn from previously discussed components of the “National Survey” and are summarized below.

Student characteristics.

The student characteristics include candidate predictors that describe the students served by participants in this study: (a) number of students, (b) grade level, and (c) other disabilities. Participants indicated the number of students they served in each grade level. First, in order to create a categorical variable, the total number of SDHH served by each school program was dummy coded into one of four categories: 1–5, 6–10, 11–20, and 21 or more students. The total number of SDHH served was dummy coded because the variable was not normally distributed. Moreover, transformations were not conducted because they are more appropriate with ungrouped data, and it is difficult to interpret the log odds of transformed variables (Tabachnick & Fidell, 2007). The categories (i.e., 1–5, 6–10, 11–20, and 21 or more) were chosen based on the percent distribution of cases (one quartile in each group). The grade levels in the analysis included kindergarten through 12th grade (many teachers taught students in multiple grades). If a participant served a student in a grade (e.g., fifth grade), that grade variable received a code of 1 (served). Finally, study respondents identified whether or not they had worked with an SDHH student who had an LD, was deaf-blind (DB), had a CD, had CP, or had an emotional disorder (ED).
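
A minimal sketch of the dummy coding described above, assuming a hypothetical column `total_sdhh` holding the number of SDHH each participant served:

```python
# Bin the (non-normal) student counts into the four reported categories,
# then expand the bins into 0/1 indicator (dummy) columns.
import pandas as pd

df = pd.DataFrame({"total_sdhh": [3, 8, 15, 42, 5, 11, 27, 6]})  # hypothetical counts

bins = [0, 5, 10, 20, float("inf")]
labels = ["1-5", "6-10", "11-20", "21+"]
df["sdhh_group"] = pd.cut(df["total_sdhh"], bins=bins, labels=labels)
df = pd.concat([df, pd.get_dummies(df["sdhh_group"], prefix="sdhh")], axis=1)
print(df)
```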

Teacher perspectives.

Teacher perspectives were drawn from teacher ratings of ease of use for an alternate assessment as well as their views on best practices. Ease of use ratings were shown to be significant across many accommodations used with SDHH and thus were included in our models here (Cawthon & the Online Research Lab, 2007). To facilitate interpretation in the logistic regression, the ease of using each accommodation was coded in the following way: 1 = “very difficult,” 2 = “fairly difficult,” 3 = “neither easy nor difficult,” 4 = “fairly easy,” and 5 = “very easy.” These values are “flipped” from the responses participants gave on the survey itself. In previous analysis of factors that predict assessment accommodations use, we found no differences in responses from participants who identified themselves as “teachers” vs. “administrators” in their role working with SDHH (Cawthon & the Online Research Lab, 2007). We therefore analyzed all participants and did not disaggregate this analysis by participant role.
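
The recoding of the ease-of-use ratings amounts to reversing a 1–5 scale. A small sketch, assuming the raw survey responses were stored with 1 = “very easy” through 5 = “very difficult”:

```python
# Reverse a 1-5 rating so that higher values mean "easier to use",
# matching the coding reported above (1 = very difficult ... 5 = very easy).
import pandas as pd

ease_raw = pd.Series([1, 3, 5, 2, 4])   # hypothetical raw responses
ease_recoded = 6 - ease_raw             # 1<->5, 2<->4, 3 unchanged
print(ease_recoded.tolist())            # [5, 3, 1, 4, 2]
```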

Participant views on best practices for portfolio and out-of-level testing were drawn from the open-ended questions discussed in the previous section of this paper. For portfolio, the variables used in the logistic regression included the following reasons for recommendation: (a) a good method to see student progression, (b) good for students with disabilities, and (c) good for students who are below grade level. For out-of-level, variables included (a) whether this form of assessment helps to show student progression to parents and (b) whether it should be used when the student is below grade level.

Environment and context characteristics.

The environment and context characteristic variables included the educational setting, communication mode in the classroom, and state policy. Educational setting variables were whether the study participant taught in a school for the deaf, regional/district program, or a mainstreamed setting. The communication modes included (a) American Sign Language (ASL), (b) other signed language, (c) oral only (speech), (d) oral and signed language together by instructor, and (e) oral by instructor and signed language by interpreter.

The policy variable was adapted from the National Center on Educational Outcomes report on state policies on alternate assessment (Thompson et al., 2005). Thompson et al. analyzed state assessment policies and grouped them into the following categories: “portfolio,” “checklist,” “IEP,” “other,” and “in revision.” State policies were included for the portfolio use analysis; checklist and out-of-level testing had too many states with missing data to make state policy a viable predictor in those models. We coded all states that had a portfolio designation as P = 1 and the remaining states as P = 0. The state policy fields were merged into the National Survey database by each participant's state of residence, allowing us to summarize state policy on alternate assessment use for their students.
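
A hedged sketch of this merge step, with placeholder state labels and policy codes used purely for illustration:

```python
# Attach the state-level portfolio policy code (P = 1 if the state's alternate
# assessment policy includes a portfolio option) to each survey respondent
# by state of residence. All values here are placeholders.
import pandas as pd

survey = pd.DataFrame({"participant_id": [1, 2, 3],
                       "state": ["State A", "State B", "State C"]})
policy = pd.DataFrame({"state": ["State A", "State B", "State C"],
                       "portfolio_policy": [1, 0, 1]})

merged = survey.merge(policy, on="state", how="left")
print(merged)
```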

Dependent variables.

The alternate assessment participation section of the National Survey asked respondents whether or not each alternate assessment format was used with one or more of their students. The formats included checklists, portfolios, and out-of-level alternate assessments. Each of these alternate assessment variables was transformed into a dichotomous variable: none of the students served by the participant received the alternate assessment (coded as 0) or at least one student received the alternate assessment (coded as 1).
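
A small sketch of that transformation, assuming a hypothetical column holding the number of a participant's students who took a given format:

```python
# Dichotomize alternate assessment use: 0 = no students, 1 = at least one student.
import pandas as pd

df = pd.DataFrame({"n_portfolio_students": [0, 2, 0, 7]})   # hypothetical counts
df["used_portfolio"] = (df["n_portfolio_students"] > 0).astype(int)
print(df)
```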

Data reduction and missing cases.

The data for each model were screened for assumption violations, including multicollinearity (George & Mallery, 2006; Pallant, 2005; SPSS Training, 2001; Tabachnick & Fidell, 2007). None of the candidate predictor variables were found to be highly correlated. For example, in generating the collinearity diagnostics, all of the standardized beta coefficients were lower than 1 and all of the tolerance levels were higher than .01, indicating that multicollinearity was not a problem.
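
One common way to generate such collinearity diagnostics is to compute variance inflation factors and take tolerance as their reciprocal; the sketch below illustrates this with simulated, hypothetical predictors (it is not the SPSS output reported above).

```python
# Tolerance = 1 / VIF for each candidate predictor; values well above .01
# are taken as evidence that multicollinearity is not a problem.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(1)
X = pd.DataFrame(rng.integers(0, 2, size=(200, 3)),
                 columns=["school_for_deaf", "asl_used", "grade_5"])
Xc = sm.add_constant(X)

for i, col in enumerate(X.columns, start=1):   # index 0 is the constant
    vif = variance_inflation_factor(Xc.values, i)
    print(f"{col}: VIF = {vif:.2f}, tolerance = {1 / vif:.2f}")
```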

In addition to checking for multicollinearity, the number of candidate predictor variables was reduced using the formula discussed by Harrell (2001). The procedure identifies the number of cases required in each group of the dependent variable in a logistic regression in order to avoid overfitting the model (Peduzzi, Concato, Kemper, Holford, & Feinstein, 1996). Harrell's formula is p < m/10, where p is the number of candidate predictors and m is the number of cases required in each group of the dependent variable. As an illustration, in the checklist model there were 33 possible candidate predictors. Using the equation above to solve for m, each group of the dependent variable (i.e., those who did and did not receive the checklist format) would require at least 330 cases. Consequently, there were not enough cases in the checklist model to use 33 predictor variables: only 195 respondents did not use the format and only 97 did. In order to reduce the number of candidate predictors, only the predictors that were found to have a statistically significant chi-square statistic (α < .05) were loaded into the model (Harrell, 2001). This reduced the number of candidate predictor variables from 33 to 10. Referring back to the formula, the number of cases required in each group of the dependent variable is then 100. In summary, the maximum number of candidate predictor variables that could be used was 10 for the checklist model, 12 for the portfolio model, and 7 for the out-of-level model.
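
A brief worked illustration of the rule as stated above (p < m/10), using the checklist model's figures:

```python
# Given p candidate predictors, the rule requires at least m = 10 * p cases in
# each group of the dichotomous outcome.
def required_cases_per_group(p):
    return 10 * p

# Checklist model: 33 candidates would need 330 cases per group, far more than
# the 97 participants who reported checklist use; the screened set of 10
# candidates needs roughly 100 cases per group.
for p in (33, 10):
    print(f"{p} candidate predictors -> {required_cases_per_group(p)} cases per group")
```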

We also addressed issues with missing data in this analysis. Each of the three alternate assessment use models had candidate predictor variables with missing data ranging from 16% to 24%. In working with missing data, the randomness of the missing data was considered to be more important than the amount (Tabachnick & Fidell, 2007). Variables were first examined for randomness; if the missing data appeared to be random, then the missing values were randomly replaced with 0s and 1s (Harrell, 2001). If the missing data were found to be statistically significantly different (p < .05) from the nonmissing data, then the variable with the missing values was excluded from the analysis.
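
A hedged sketch of the random replacement step for a binary predictor whose missingness appears random (hypothetical data):

```python
# Replace missing values in a binary predictor with randomly drawn 0s and 1s.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
x = pd.Series([1, 0, np.nan, 1, np.nan, 0, 1])                 # hypothetical predictor
fill = pd.Series(rng.integers(0, 2, size=x.isna().sum()),
                 index=x[x.isna()].index)
x_imputed = x.fillna(fill).astype(int)
print(x_imputed.tolist())
```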

Results

Demographics

A total of 314 teachers or administrators from mainstreamed educational settings (n = 115, 37%), schools for the deaf or hard of hearing (n = 80, 25%), or district and regional programs (n = 119, 38%) participated in this study.1 Respondents served students as teachers of the deaf (50%); regular education, special education, or itinerant teachers (20%); administrators (8%); and those serving in multiple roles (6%). Participants lived in all regions of the country, with at least one participant in each state (including the District of Columbia). We do not have estimates of the national population of teachers and educational professionals who served SDHH, but we do have estimates of the student population. Although the sample was not random, the results are still fairly representative of SDHH throughout the United States (Table 1). As an example, 11% of the students in the National Survey were from the northeast, whereas an estimated 17% of SDHH nationwide live in that Census region (Mitchell, 2004). The south is overrepresented, with 41% of this sample compared with a population estimate of 32%. The proportion of students in schools for the deaf (53%) far exceeds the national average, an estimated 25% (GRI, 2005). Student characteristics, school resources, and instructional strategies may vary greatly from region to region; greater representation from the northeast and from mainstreamed settings is thus necessary to make stronger conclusions about how each factor contributes to alternate assessment use.

Table 1

Geographic distribution of participants and students served

                           School for the deaf   District/regional program   Mainstreamed    Total          National estimate^a
Northeast   Participants            18                     17                      15         50 (16%)           17%
            Students               446                    249                     161        856 (11%)
Midwest     Participants            16                     28                      34         78 (25%)           23%
            Students               898                    344                     332       1574 (21%)
South       Participants            33                     44                      40        117 (37%)           32%
            Students              1912                    754                     455       3121 (41%)
West        Participants            13                     30                      26         69 (22%)           25%
            Students               753                    976                     250       1979 (26%)
Total       Participants            80 (25%)              119 (38%)               115 (37%)  314
            Students              4009 (53%)             2323 (31%)              1198 (16%) 7530

^a National estimates of SDHH by Census region (Mitchell, 2004).

Language used in instruction.
