TEST ACCESS: Guidelines for
Computer Administered Testing*
James M. Allan, Ph.D., Lead Project Consultant
Nanette Bulla, M.Ed., Project Consultant
Stephen A. Goodman, Project Consultant
With a Foreword by
Larry Skutchan, Technical Consultant, Technology Project Leader
American Printing House for the Blind
American Printing House for the Blind, Louisville, Kentucky
*Book Number One in the TEST ACCESS Series, promoting accessibility of testing materials for persons who are blind or visually impaired.
© 2003 American Printing House for the Blind, Inc. This material may be distributed free of charge for educational and nonprofit use. No other use of this material is allowed without written permission. All trademarks are of their respective companies.
Disclaimer: Web links in this document were current as of the date of publication, but may have become deactivated or modified since then. Please report any dead or modified links to firstname.lastname@example.org. These links are for informational purposes only and do not constitute an endorsement or approval of policy, views, products, or services of the publishing organization.
Allan, J. M., Bulla, N. and Goodman, S.A. (2003). Test Access: Guidelines for Computer-Administered Testing. American Printing House for the Blind: Louisville, KY. Available from: http://www.aph.org.
1839 Frankfort Avenue
P.O. Box 6085
Louisville, KY 40206-0085
Toll Free: 800-223-1839
* Italicized terms in this document are defined in the glossary.
Barbara W. Henderson, Project Director
Test and Assessment Project Leader
Department of Educational and Technical Research
Larry Skutchan, Project Technical Advisor
Technology Project Leader
Department of Educational and Technical Research
Debbie Willis, Project Advisor
Test Central Manager
Educational and Advisory Services
Tessa Wright, Project Editor
Department of Educational and Technical Research
Kristopher Scott, Project Editor
Department of Educational and Technical Research
William W. Armstrong
Department of Educational and Technical Research
As a young college student in the late 1970s who had recently lost my sight, I quickly gained a first-hand familiarity with the problems of accessibility in testing and assessment. All the traditional assessment methods at that time consisted of printed tests and handwritten answers. (This occurred long before online assessment was feasible, so I didn't even consider the more complex questions of equipment training, flexible interfaces, and alternative input and output options.) I was very lucky that many of the professors at the University of Arkansas at Little Rock were patient and caring enough to provide alternate means of assessment, usually with the professor himself reading the questions and accepting my responses orally, but many of the alternative assessment techniques were not nearly so amenable or even fair.
I particularly recall an assessment where a freshman work-study student read me the questions and wrote my responses, and it was clear that the material was far above the knowledge level of that student. This situation is particularly troublesome in advanced course studies, where the pool of candidates qualified to administer such an examination shrinks as the complexity of the material on which the student is assessed grows. On the other hand, one has to wonder, especially with oral assessments, whether the student or professor sometimes inadvertently conveys information about the material. None of these situations makes for an accurate assessment of the student's knowledge and ability.
The No Child Left Behind initiative affirms the Bush administration's commitment to the ideals of this country, but implementation will take years. As one reading this document, of course, you already know that universal access benefits all, but getting the test publishing industry to realize this, persuading designers to work toward these goals, and convincing industry to put in the additional initial effort required to design an interface for universal application will all take time. The process cannot begin, however, without taking some first steps, and this document provides test publishers, test administrators, technology experts, and teachers the tools necessary to begin moving toward the day when assessment is fair and equitable for every student.
While the topics presented in this document are geared toward creating access solutions that work for blind and visually impaired children and adults, the general concepts apply to all students. The authors of this document bring years of experience with universal design concepts, tests and assessments, and the creation of innovative solutions to particularly complex problems in working with blind and visually impaired children.
You will find specific suggestions and recommendations about addressing particular issues in this document, and you will find valuable references to information that provides you with details about specific implementation issues. Most importantly, in this age of quickly changing information and the fast-evolving field of universal design concepts, this document is dynamic. Because it is published on the Internet, it need never become obsolete. As new information emerges and new technology evolves, the authors will incorporate new suggestions, techniques, and practices, and the links in this document will provide you with new and relevant information about the areas of test assessment and universal design in which you are interested.
Technology Project Leader
American Printing House for the Blind
This document highlights and addresses the problems of all aspects of test accessibility for individuals with disabilities, particularly those who are blind or have low vision. In spite of advances in technology (e.g., computer-administered tests), it is clear that these individuals are still not able to perform on equal ground with their sighted peers. The authors put forth the "Principles of Inclusive Design" and argue that tests must be made accessible to all potential test-takers, regardless of format and/or disability. Further, this is only possible by initiating the process at the design stage; accommodations at the point of test administration are not enough.
Technology is increasingly used as a learning format, and this is reflected in the growing use of electronic assessment, although not yet on an equivalent basis for all students. Students cannot be expected to perform their best if they are unfamiliar with a testing format. Accordingly, the authors stress that students must be tested in the format in which they typically learn. Further, content areas may differ in the way that they are presented and taught to students with respect to the inclusion of technology. The authors conclude that discrepancies between learning and testing formats must be minimized in order to ensure accurate evaluations of the individuals taking these tests.
Those who create, publish, or administer computer-based tests, including state departments of education, service agencies, test publishers, and software developers, would benefit from this guideline commentary. Separate parts of this document provide information about the educational impact of visual disabilities and a general overview of present testing accommodations for paper and pencil tests, as well as computer-based tests. Further sections detail a point-by-point review of the Guidelines for Computer-Based Testing: February, 2002, by the Association of Test Publishers (ATP), with comments and responses, and an accessibility-oriented review of web-based and commercial tests. Subsequent parts are devoted to design considerations for computerized and on-line tests, including the accessibility of images, graphs, charts, and maps. The text is followed by a complete bibliography, web sites for low vision and blindness information, and a glossary of terms.
This project was truly a collaborative effort. The idea was originally put forward in 1997 as a budget request to the U.S. Department of Education. It was awarded special federal initiative funding in FY1999. From inception, the project promised to make important information available to test publishers and educational policy-makers on making computerized testing accessible, and it drew in numerous experts from the fields of visual impairment and blindness, assessment, and technology. So many people had a part in the eventual discovery process that they are certainly not all named here. However, some notable contributions were made and those have not been forgotten. The project would not have come to fruition without those people.
At the outset, Dr. Gage Kingsbury, Director of Research at Northwest Evaluation Association (NWEA) in Portland, Oregon, was willing to listen patiently to our inquiries regarding computer adaptive testing and he became a valued information source. It was Gage who recommended that we get involved with the Association of Test Publishers (ATP). Larry Skutchan, APH Technology Project Leader and project Technical Advisor, came up with the name Test Access. After Larry "found" the name, the Test Access logo was conceptualized and beautifully designed by Bridgett Johnson.
As the project began to take shape, people from across the country showed tremendous enthusiasm. Because of their immediate positive responses to the project, major thanks are due to Dr. Phil Hatlen, Superintendent of the Texas School for the Blind and Visually Impaired, Austin, Texas, and Dr. Stuart Wittenstein, Superintendent of the California School for the Blind, Fremont, California. Both gave graciously of their employees' time and volunteered use of their school facilities in support of the project. Dr. Dean Stenehjem, Superintendent of the Washington State School for the Blind, Vancouver, Washington, eagerly gave of his own time in researching sources of information on computer-based testing. William Daugherty, Superintendent of the Kansas School for the Blind, Kansas City, Kansas, who became personally invested in the project early on, eventually recommended and put project staff in contact with the main project consultant. Finally, Dr. Ralph Bartley, Superintendent of the Kentucky School for the Blind, Louisville, Kentucky, was instrumental in connecting APH staff with the Kentucky Department of Education (KDE), Exceptional Children's Services Division. Thanks to Ralph's enthusiasm and persistence, APH and KDE are currently working together, with the software developer, to pilot accessible online delivery of the Commonwealth Accountability Testing System (CATS).
Larry Brown, Teacher of the Visually Impaired and Manager of the Oregon Textbook and Media Center, in Portland, Oregon, kept the momentum of the project going. Larry periodically advised us of progress on computer-based testing in his state and others. Always willing to get involved, Larry has acted as liaison between the field of visual impairment and state/local educational policy-makers. He has networked with other educators who have gone through the same discovery process with computerized testing. His good work and positive attitude deserve recognition. And it was indeed our good fortune to have known Sandra J. Thompson, Ph.D., of the National Center on Educational Outcomes (NCEO), and Karen Barton, Research Scientist, of CTB/McGraw-Hill Publishers, who kindly did reviews of the near-final version of this document. Their insights were invaluable, as were those of the project field reviewers.
Finally, much of the nitty-gritty was accomplished by APH Research Assistants: Tessa Wright, Kristopher Scott, Rosanne Hoffmann, Monica Vaught, Will Armstrong, and Sarah Ballard. Their hard work and diligence saved the day. And while they were editing the text, proofreading, word processing, and checking/rechecking the hyperlinks, Monica Coffey, Research Assistant, and Mario Eiland, Computer Programmer, tested many web sites for accessibility. Last, but not least, in the role of supreme information guru, Inge Formenti, the APH Librarian, tracked down elusive references. Thanks very much, all of you.
Barbara W. Henderson, Project Director
Six persons, each representative of one of the target audiences for this document, were chosen for the review process. Each came with impressive recommendations. Thankfully, all agreed to read and comment upon an early draft. Their subsequent excellent comments and suggestions resulted in a much better end-product, and provided an unequaled networking opportunity:
JoEllen V. Carlson, Ph.D., Assessment and Evaluation Consultant
Robert P. Dolan, Ph.D., Senior Research Scientist, Center for Applied Special Technology, Inc. (CAST)
Steve Hahn, Computer Instructor, Kansas State School for the Blind
Preston Lewis, Program Manager, Kentucky Department of Education, Division for Exceptional Children Services
Mary T. Terlau, Ph.D., Adult Life Project Leader, American Printing House for the Blind
Mila Truan, Ed.D., Reading Specialist, Tennessee School for the Blind
James M. Allan, Ph.D., Lead Project Consultant
Webmaster and Statewide Technical Support Specialist
Texas School for the Blind and Visually Impaired
Nanette Bulla, M.Ed., Project Consultant
Educational Diagnostician and AER Certified Low Vision Therapist
Texas School for the Blind and Visually Impaired
Stephen A. Goodman, Project Consultant
Director of Pupil Personnel Services
California School for the Blind
James M. Allan is the webmaster at the Texas School for the Blind and Visually Impaired in Austin, Texas. He has worked in the field of assistive technology and information access for over 20 years. Allan received a Distinguished Service Award from the Mayor's Committee on People with Disabilities for his work on the Accessibility Internet Rally (AIR Austin). Jim is a member of the World Wide Web Consortium--Web Accessibility Initiatives (W3C - WAI). An active participant in the User Agent Working Group, Jim previously served on the Authoring Tools and Education and Outreach Working Groups. He also chairs the Research and Development Working Group of the American Foundation for the Blind's (AFB) Textbooks and Instructional Materials Solutions Forum. Finally, Jim chaired the Accessibility Subcommittee of the Texas Education Agency's Computer Network Study Project, which produced a comprehensive guide for accessible multimedia and Internet textbook design and delivery. As webmaster at the Texas School for the Blind and Visually Impaired, Jim is committed to accessible web page design and the development of accessible multimedia textbooks, learning materials, and assessments.
Nanette Bulla is a graduate of the University of Texas (Austin) College of Education's program to train teachers of students with visual impairment, under the direction, guidance, and inspiration of Dr. Natalie Barraga. She has been employed in special education for almost 30 years, 26 of which have been at Texas School for the Blind and Visually Impaired. Nan has worked in a number of roles including homebound teacher, teacher of students with visual impairments, counselor, parent counselor for a Title VI-C deaf-blind project, educational diagnostician, and coordinator for low vision services. Nan is best known to her colleagues as the "eyeball lady" because of her passion for the topic of low vision, as well as being an expert in the field of assessment of students with visual impairments. She is often consulted or called upon to conduct workshops, in addition to responding to numerous national and international inquiries regarding vision and assessment. She is a native Texan who lives in Round Rock with her husband of 30 years, their two sons ages 21 and 18, and their most recent Humane Society adoptee, Beau, a black Labrador retriever.
Stephen A. Goodman attended the University of California, California State University, Hayward, and the University of Michigan. He has worked as a school psychologist, a coordinator of special education, a college and university teacher, a principal, and now serves as the Director of Pupil Personnel Services at the California School for the Blind, Fremont. In his current assignment, Steve has helped create assessment services at the school for the blind, including assessment programs serving students from throughout the state and within the school. Most recently, Steve initiated a vision clinic at the school. He is also involved in the delivery of orientation and mobility services, media services, and outreach programs. Professional participation for Steve includes: membership on the organizing committee that resulted in the creation of the National Association of School Psychologists (NASP), service as the first State Representative from California to NASP, Chapter President of the San Mateo County Chapter of the California Association of School Psychologists, and President of the Northern California Chapter of the Association for the Education and Rehabilitation of the Blind and Visually Impaired (AER). He is currently serving as Director-Elect of AER, Division 4. Finally, Steve co-edited Collaborative Assessment of Visually Impaired Students With and Without Additional Disabilities, to be published in Fall 2003.
"The question is no longer whether assessment must incorporate technology. It is how to do it responsibly, not only to preserve the validity, fairness, utility, and credibility of the measurement enterprise but, even more so, to enhance it" (Bennett, 2002, p. 15). Add to the list: accessibility. It is a key part of incorporating technology into assessment responsibly, and of enhancing it.
In the rush to create technology-based assessments, students with visual impairments are being disenfranchised. Accessibility is currently achieved through accommodations supplied by test publishers and testing entities. By assuming that accommodations alone will allow students with visual impairments to demonstrate their ability to achieve on an equal footing with their sighted peers, educational institutions are courting disaster. No amount of accommodation can compensate for poorly designed, inaccessible testing materials. Accessibility must be a consideration in all areas of assessment design and delivery. This document will discuss what that implies.
Designing a test in an accessible manner has benefits for all students, not just those with visual disabilities. A recent study of non-disabled students found that reduced screen resolution caused students to scroll more and lowered their performance on reading comprehension tasks (Bridgeman, Lennon, and Jackenthal, 2001). Many factors, such as monitor size, screen resolution, keyboard layout, connection speed, and other technical characteristics, may impact assessment results (Bennett, 2002).
These factors have parallels in accessibility and accommodations. A poorly designed test may cause a student to scroll horizontally and/or vertically in order to compensate for a non-flexible design, thus causing the student to take longer to read, comprehend, and respond to a test item. Also, images without proper descriptions, or ones that cannot be properly magnified, may prevent students from correctly responding to test items.
The idea for this project grew out of a desire to provide information on creating accessible computer-based tests BEFORE most states actually implemented them. While the Association of Test Publishers (ATP) did an excellent job of establishing the first industry standards of this kind, their Guidelines for Computer-Based Testing: February 4, 2002, did not specifically address the fundamental requirement that accessibility must be an integral part of the entire test creation process--from inception to delivery. Based on an evaluation of the ATP document, and utilizing expertise from the field of visual impairment and blindness, this document seeks to provide four pieces of information:
First, appropriate background information concerning the nature and educational impact of visual disabilities
Second, a review of testing accommodations in general
Third, a review of the Association of Test Publisher's (ATP) Guidelines for Computer-Based Testing: February, 2002, point by point, to expand the scope of the items to include accessibility, and to explain the rationale behind ensuring accessibility for each item
Fourth, a review of web-based and commercially available computer-based tests and practice materials regarding accessibility features
All of the above are predicated on the fact that accommodations cannot first be introduced to a student with a visual disability at the time he/she takes the test. Accommodations must be those used by the student in the classroom, and the student must know how to use them prior to taking the test. Further, if the test delivery system and the individual components that make up each test item are not accessible, then no amount of accommodations, assistive technology, or time will make the item or the test accessible. It may not be economically feasible to retrofit an existing test to make it accessible. In order to create an accessible test, accessibility must be a consideration from the beginning.
It is recognized that a test cannot be totally accessible to all students without some accommodations; that is, the test itself cannot have all of the alternative media (braille, audio, etc.) built into it. Many technological accommodations (assistive technology) exist and are being developed that can make the content of a test more accessible. We suggest using the "Principles of Inclusive Design," and including people with knowledge of visual impairment, in the design, development, and delivery process for all technologically-based assessments.
Inclusive design can be defined as, "the design of mainstream products and/or services that are accessible to, and usable by, as many people as reasonably possible on a global basis, in a wide variety of situations and to the greatest extent possible without the need for special adaptation or specialised design" (Tiresias, 2001, Inclusive design section, ¶ 2). Applied to testing, inclusive design recognizes that the content of the test must be presented in a manner that is consistent with the needs of assistive technology and usability standards. There are limitations in the use of assistive technology. For example, assistive technology cannot create content where none exists. A screenreader cannot create a description of a diagram. It must rely on the test author to provide the alternative content. The foundation of inclusive design, as it relates to computer-based testing, is predicated upon consideration of the needs of all students and recognition of the limitations of assistive technology.
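The limitation described above, that a screen reader cannot invent a description the test author never supplied, is one an inclusive-design process can check for mechanically during test authoring. The sketch below (the markup, file names, and function names are hypothetical illustrations, not taken from any actual test) flags images in an HTML test item that carry no alternative text for a screen reader to announce:

```python
# A minimal sketch of an authoring-time accessibility check: find <img>
# elements whose alt attribute is missing or empty, i.e. images a screen
# reader has nothing to say about. Uses only the Python standard library.
from html.parser import HTMLParser


class MissingAltChecker(HTMLParser):
    """Collects the sources of <img> tags lacking a non-empty alt description."""

    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            # alt absent or empty string: nothing for assistive technology to speak
            if not attr_map.get("alt"):
                self.missing.append(attr_map.get("src", "(unknown source)"))


def images_without_alt(html_fragment):
    """Return the src of every image in the fragment with no usable alt text."""
    checker = MissingAltChecker()
    checker.feed(html_fragment)
    return checker.missing


# Hypothetical test item: one described diagram, one bare image.
item = (
    '<p>Question 4: Study the diagram.</p>'
    '<img src="circuit.png" alt="Series circuit with one battery and two lamps">'
    '<img src="graph.png">'
)
print(images_without_alt(item))  # → ['graph.png']
```

A check like this only verifies that some alternative content exists; whether the description actually conveys what the diagram tests is a judgment that still requires an author with knowledge of visual impairment, which is why the inclusive-design process pairs tooling with human review.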
The purpose of this document is to provide accommodations guidelines for computer-based testing of students who are blind or visually impaired. These guidelines are not written about alternative assessment, but are concerned with detailing the design considerations and accommodations necessary for delivery of a valid test, in all testing environments, to test-takers who are blind or visually impaired.
"Current computer-based testing delivery and administration services are primarily delivering high-stakes computer-based tests. These tests include information technology certification tests, college and graduate school entrance tests, and professional licensure and certification tests. The increased use of practice tests and simulation environments, and the increased ratio of students to computers within schools and work environments, are expected to expand the volume of low-stakes computer-based testing environments. The Internet and World Wide Web can be used for administration of both high-stakes computer-based tests with appropriate proctoring or for administration of low-stakes computer-based tests" (Guidelines for Computer-Based Testing: February 4, 2002, Association of Test Publishers, 2002, p. 4).
Technical training for teachers and students is essential. Accommodations for testing should be the same as those used in teaching the general curriculum and not new or special circumstances introduced during testing.
These guidelines are written for those who create, publish, or administer computer-based tests, including, but not limited to, state departments of education, test publishers, test administrators, agencies, and software developers. All of these entities are required to provide accommodations for persons with disabilities as mandated by the Americans with Disabilities Act of 1990. The document will be of interest to the aforementioned groups as well as to educators and individuals who want up-to-date information about the status of computer-based testing.
The guidelines presented here were not the first to address accessibility of computer software. For example, the National Center for Accessible Media (NCAM®) and WGBH®, the not-for-profit Boston television and radio station, published "Making Educational Software Accessible: Design Guidelines including Math and Science" (December, 2000) after a three-year research project. The World Wide Web Consortium (W3C®) has published the "Web Content Accessibility Guidelines" (1999) and the "Authoring Tool Accessibility Guidelines" (2000), and on August 21, 2002, a working draft of the "User Agent Accessibility Guidelines" was posted at http://www.w3.org/TR/UAAG10/. Microsoft® has had long-standing software accessibility guidelines, such as "Microsoft Windows Guidelines for Accessible Software Design." IBM®, too, has established accessibility guidelines for software, Java®, and corporate web pages, among others. Finally, the federal government has published the "Telecommunications Act Accessibility Guidelines" (http://www.access-board.gov/telecomm/html/telfinal.htm) and Section 508 guidelines (http://www.section508.gov/). Most recently, Macromedia®, the developer of Director®, Shockwave®, and other software and web application development tools, announced an increased awareness of the need for accessibility. Macromedia is striving to ensure that their products and the content they create are more accessible.
Moreover, WGBH and several partners, including the Educational Testing Service (ETS®), Sun Microsystems, Microsoft, and the Center for Applied Special Technology (CAST), have joined to form the IMS [Instructional Management Systems] Global Learning Consortium, Inc. This partnership seeks to develop specifications and support implementation, enabling individuals with disabilities to access distributed/distance learning. In so doing, IMS Global also promises to improve Internet-mediated instruction for all users. Specifications will affect the entire community of public and private companies, organizations, and individuals developing learning resources. (http://www.imsglobal.org/)
While testing accommodations provide test-takers with disabilities the opportunity to demonstrate their abilities under standard conditions, they do not ensure that the test-taker using accommodations will be able to access the test. Accommodations assume that the testing material and delivery mechanisms are compatible with approved accommodations such as screen readers, screen magnification software, or other assistive technology. Accommodations provided during testing should be the same as those the student is afforded in the classroom. However, if the test is not designed accessibly, then accommodations will not facilitate test-taking.
The problem of inaccessibility lies in the development of the testing software at its inception. Testing software is no different from other commercial software. If design considerations relating to accessibility are applied to testing software at inception, then the program should be capable of providing accessible content within a stable electronic environment. In other words, accessibility is dependent upon the underlying coding, construction, and presentation to the test-taker. Simulations and other complex assessment strategies cannot be made accessible at the time of testing.
Critical areas of concern for accessibility are:
Position statement: If a test is designed without consideration of accepted accessibility guidelines, then no amount of accommodations will make the test accessible.
The National Center on Educational Outcomes (NCEO) has devoted much of its web site to a discussion of testing accommodations. An online accommodations bibliography is provided in addition to this definition:
Accommodations are changes in testing materials or procedures that enable students to participate in assessments in a way that allows abilities to be assessed rather than disabilities. They are provided to "level the playing field." Without accommodations, the assessment may not accurately measure the student's knowledge and skills.
Many states have grappled with their accommodations policies for students with disabilities. We know that all states have written guidelines to indicate which accommodations are "allowed." Accommodations are generally grouped into the following categories:
Although there is variability in the categories used across states, and often extreme variability in specific accommodations allowed, there now is common federal legislation. Several federal laws, including Section 504 of the Rehabilitation Act of 1973, the Americans with Disabilities Act of 1990, Title I of the Elementary and Secondary Education Act, and the Individuals with Disabilities Education Act Amendments of 1997, call for accommodations to be provided as necessary to allow students with disabilities to participate in assessments.
Many states have accommodations standards for statewide and other high-stakes assessments posted on their web sites. For example:
In Texas, local school districts determine appropriate accommodations for testing, as well as the appropriate level of testing to match a student's abilities, during the Individualized Educational Plan (IEP) team meeting required by IDEA.
California has specific General Educational Development (GED®) accommodations. Testing accommodations can include, but are not limited to, additional time to complete the test, one-on-one testing sessions, or a schedule of breaks. The GED Test is available in large print, braille, and recorded formats.
Colorado has five criteria for selecting accommodations:
In general, most colleges and universities that require testing for admittance follow or defer to the test manufacturers' guidelines, such as those from ETS (http://www.ets.org/disability/index.html). The ETS accommodations policy is used for all ETS tests, including the Graduate Record Exam (GRE®), PRAXIS, Graduate Management Admission Test (GMAT®), as well as the Scholastic Aptitude Test (SAT®), Advanced Placement Program Examination (AP®), College-Level Examination Program (CLEP®), and Preliminary Scholastic Aptitude Test (PSAT/NMSQT®) (http://www.collegeboard.org/disable/students/html/accom.html). The American River College and Portland Community College policies are very similar, if not identical, to the ETS accommodations policy.
Accommodative policies may carry over to other on-campus testing, or the university may establish additional policies. Oregon State University has developed the "Oregon State University Software Access Guidelines" (http://tap.oregonstate.edu/soft.html).
The purpose of the guidelines is to provide vendors to the university, and those individuals responsible for overseeing the purchase of information technology, with a set of minimally acceptable standards for accessibility that software applications must meet if they are to be purchased and used by university programs. These Software Access Guidelines include testing materials. (See Part V: Design Considerations for Stand-Alone Computerized Tests for the full guidelines.) These guidelines were based on the U.S. Department of Education's Assistive Technology Program software guidelines (http://www.ed.gov/offices/OCIO/programs_services/assistive_technology/).
It is crucial that exposure to accessible technology become a regular part of a student's academic curriculum. It is equally important to expose the student to the software and interface that will be used to administer the test prior to testing. If the testing software is unavailable before the test, simulation software may be substituted. However, any software substituted should be tested to ensure that the interface is the same as the interface of the actual test administration software. Though there are software simulations available for some tests, the software is often produced by a third-party company, and while the simulation may look similar to the actual test, the underlying design and behavior of the simulation may be completely different in terms of accessibility.
In Part II, testing accommodations were discussed in a general sense. Accommodations were said to include modifications in presentation, response, setting, and timing/scheduling. This section provides specific accommodative requirements when testing via computer, for students with visual impairments and blindness. Each accommodative requirement is detailed and the reasoning behind the requirement is explained.
There are two types of accommodations: the flexibility built into or allowed by the testing system, and the add-on assistive technologies that augment or enhance the existing capabilities of the testing environment. Each is detailed below.
Persons with visual impairments may need additional time to read and review information. Persons with low vision generally need additional time to review material visually, and persons who are blind usually need even more time to review materials through tactual or auditory methods. Additional time allowance, along with consideration for optional rest breaks, should be provided according to individual need. The need for special accommodations should be specified when the individual registers for the test. While this may be considered a modification of standard test administration, the allowance for extra time may be a necessary accommodation for users with visual impairments. In some cases, a special administration of the test, for example in a separate testing room with an individual monitor, will be necessary, since extended time allowances may represent a significant deviation from the conditions experienced by the rest of the test-takers. Test administrators and monitors should be aware of this possibility and be prepared to make the proper accommodations as they arise.
Many students with visual disabilities use a variety of computer access tools to meet the changing demands of the task and working environment, and these tools should be available during the test period. In most situations, a test-taker with visual disabilities will complete a form requesting special accommodations (e.g., screen magnification) prior to the test date. However, in some cases, the student may discover the need for additional accommodations (e.g., speech) upon taking the test. For example, the testing environment, the test itself, the computer, or the room where the computer is located may require that the test-taker also use a screenreader to access the information on the computer. If the test were designed appropriately, with built-in accessibility, then the accommodation needed, e.g., a screenreader, could be delivered instantly under control of the test-taker. The following sections discuss specific accommodative needs.
Students with low vision use a standard computer in two ways--with and without assistive technology. Examples of assistive technology include screen magnification software, hand-held or spectacle-mounted optical aids, and screenreader software. For a general review of assistive technology used by students with visual impairments see http://www.tsbvi.edu/technology/overview.htm.
When using a computer without assistive technology, a test-taker will rely on configuring the operating system to meet his/her visual needs. Most operating systems (Windows and Macintosh) allow the user to configure the foreground and background colors, as well as the default font style and size. Usually, the changes a test-taker makes to the operating system will carry over to applications installed on the machine.
However, some applications may not follow operating system guidelines and protocols. If operating system protocols are not followed, then the changes a student makes to the display properties of the operating system will not affect that application, and the chances of the application being inaccessible increase. If the student is not allowed to alter fonts and colors via the operating system, the student must then rely on the features of the application software or use assistive technology in order to interact with the software.
At minimum, test-takers with low vision must be able to configure the font size, font type, and foreground and background colors. Additionally, the testing software should be designed to allow the text to reflow or re-wrap when text size changes. This allows the reader to continue reading the text while scrolling vertically on the page. If the text does not re-wrap, it will expand beyond the borders of the screen (or boundaries of the application window). Thus, the burden is placed on the student with low vision who must now scroll horizontally (to read the text beyond the border of the screen) and vertically to read all of the information on the screen. If the test is delivered over the web and conforms to Web Content Accessibility Guidelines 1.0 (WCAG) of the World Wide Web Consortium, then the test should be usable by students with visual disabilities. Conformance will allow students to change foreground and background colors, font type, and font size, and text will reflow on the page.
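For web-delivered tests, the reflow behavior described above comes almost for free when pages are sized in relative units rather than fixed pixel widths. The fragment below is a minimal sketch; the class names and question text are illustrative, not drawn from any actual testing product:

```html
<!-- Sizes expressed in relative (em/%) units, so when the user enlarges
     the font via the browser or operating system, the text re-wraps
     instead of forcing horizontal scrolling. -->
<style>
  body        { font-size: 100%; }       /* respect the user's base size */
  .question   { max-width: 40em; }       /* em widths scale with the font */
  .question p { line-height: 1.5; }
</style>
<div class="question">
  <p>1. Which planet is closest to the sun?</p>
</div>
```

Because no element is given a fixed pixel width, enlarging the text simply narrows the number of words per line; the low vision reader scrolls vertically only.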
Recent attention has been given to the physical and visual problems experienced by users of computers or video display terminals. Increased attention to ergonomic design, positioning of equipment, excessive time spent staring at the computer screen or monitor, and visual accommodation problems (computer vision syndrome) have heightened awareness of the requirements test creators must consider when making accommodations for test-takers. Persons with low vision may be more prone to these problems because of the need to look at the screen for longer periods of time and/or at closer proximity to the screen. Additionally, persons with low vision may need more time to review material visually, creating a greater possibility for fatigue and the need for frequent breaks. Other problems with the visual environment such as reflections on the screen, glare from lighting, or inadequate lighting may further complicate the situation.
Test-takers with low vision may use one or more ways of accessing computer administered testing programs. Because low vision covers a wide variety of conditions and levels of visual functioning, the need for computer and environmental accommodations will vary depending on individual requirements. Even if the user requires and regularly uses software that enlarges print on the screen, some practice in the use of the program at the test site may be necessary prior to testing dates.
A functional vision evaluation is used to determine needed accommodations. Some users with low vision may need no accommodations, while others may need any or all of the following in order to take the test independently:
1. Magnification and readability of print
Most commercial software programs and operating systems have options for changing the print size on the screen, but these are often limited to 3x (three times normal) magnification. Even non-disabled individuals can benefit from enlarging print or changing font styles to prevent fatigue and reduce the possibility of errors. While this may be adequate for some, it may not be enough for other persons with low vision.
Software which enlarges the screen image up to 12-16x is available. When using magnification software the computer screen essentially becomes a window on the virtual monitor. For example, when using 2x magnification the virtual screen is twice as high and twice as wide as the actual screen, so the monitor is a window that is only able to view one-fourth (1/4) of the virtual screen. At 12x magnification the computer screen can only view one-one hundred forty-fourth (1/144) of the virtual screen. As magnification is increased, field of view is compromised. It is relatively simple to change from one magnification level to another. Font styles can also be changed to suit individual preferences, making print more easily readable and accessible. The larger the font, the less information will be viewable on the screen at one time. Therefore, more time will be needed for the individual with low vision to read and view material on the screen.
A major difficulty for screen magnification users is the viewing of images. Images can be pictures, graphs, charts, maps, and even words (such as logos). As images are magnified, their quality deteriorates. What was once a smooth, straight, diagonal line soon becomes a stair step. The smooth curve of a ball becomes a circular saw blade. Images begin to look like colored squares and very quickly become meaningless. Using vector graphics, such as Scalable Vector Graphics (SVG), rather than raster formats such as JPEG, GIF, or PNG, allows images to be scaled smaller or larger without loss of detail.
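The difference is visible in the markup itself: an SVG image stores drawing instructions rather than a grid of pixels, so the renderer can redraw it cleanly at any magnification. A minimal sketch of the shapes discussed above, a circle and a diagonal line:

```html
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100"
     width="100" height="100">
  <!-- Drawn from coordinates, not pixels, so edges stay smooth
       whether rendered at 1x, 2x, or 12x magnification -->
  <circle cx="50" cy="50" r="30" fill="none" stroke="black" stroke-width="2"/>
  <line x1="10" y1="90" x2="90" y2="10" stroke="black" stroke-width="2"/>
</svg>
```

Magnified, the circle remains a smooth curve and the line a clean diagonal, rather than the "circular saw blade" and "stair step" produced by enlarging a raster image.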
The use of commercially available "page magnifiers" is not recommended. Fresnel prisms can be placed on top of a page or book, but they offer little magnification and are of poor optical quality.
2. Screen display - monitor, contrast, color, and control for glare
Persons who have difficulty seeing small print or icons may choose to simply increase the size of their monitor. Increasing the monitor size, for example, from 14 to 19 inches, gives a little less than two times magnification; from 14 to 21 inches gives nearly two and one-half times magnification. This remedy may help some users with low vision, but not all. (See preceding comments on screen magnification software.) Note: Many low vision users maintain that the 640 x 480 resolution serves them best, so it may be desirable to design test presentations for this least common denominator.
Monitors should be of high quality. Generally, the more pixels there are, the better the image. Smaller dot pitches (less than 0.28 mm) are better for color monitors. Another consideration for monitors is the refresh rate, or the number of times a display screen is redrawn per second. Typically, refresh rates of 90 Hz and higher are preferred. Monitors with lower refresh rates have a "flickering" effect that is problematic for users who have low vision.
Most standard monitors have controls for contrast and brightness. Character resolution and definition should be maximized for better contrast. However, the person with low vision may need more control and options than what is typically available from the application or the operating system. Options within screen magnification software programs can be used to change foreground and background colors to the comfort level of the individual user.
The amount of glare emitted from the screen can also be a problem for some test-takers, but especially for those with low vision. Simply turning down the brightness control on a monitor may not suffice because of the possible loss of contrast. A colored filter or additional screen may be needed for glare control, but again should be the choice of the individual. Changing foreground and background colors may also help. For example, choosing a black background with white characters can provide good contrast. These should be options for the test-taker with low vision and should be specified in a student's Individual Education Plan (IEP) or in a functional vision assessment. Again, the student should have had ample time to practice using these options prior to the date of testing.
3. Lighting - task, room, and control for glare
The type, amount, and position of lighting all contribute to the ease or difficulty of a visual task. Overhead lighting may cause glare on the computer screen or monitor. Natural lighting from windows may do the same, if the monitor is facing a window. For test-takers with low vision, the option to turn off overhead lights or change the position of the monitor should be offered. Shades or blinds over the windows can help. Some may prefer task lighting on the work area while others may prefer no task lighting. Task lighting and additional illumination can take various forms and use different types of bulbs, such as incandescent, fluorescent, or halogen. Individuals may have specific needs depending on how direct or diffuse they prefer lighting and how intense they prefer the illumination, and such options should be available.
An optional, detachable glare reduction filter over the monitor may prevent reflective glare on the screen. However, some glare reduction filters may darken the screen too much and consequently diminish contrast and readability. Another option would be to add a shade or hood over the monitor. These can be purchased commercially or can be simply made with a file folder or cardboard attached to the top and sides of the monitor. In any case, the low vision user should have options to control lighting and reflective glare.
4. Ergonomics - move monitor to desired distance and height
Besides ergonomic considerations for seating, keyboard height, and monitor positioning, the test-taker with low vision may need to view the screen from a closer distance than is typically provided. Recommended monitor viewing distance is approximately 20-28 inches, with a minimum distance of 15 inches. Some users with low vision may need to be closer to the screen while others may need to be farther from the screen. Generally, the top of the screen should be in front and level with the test-taker's eyes. Unusual viewing positions may be necessary due to some types of vision problems. Mounting the monitor on an adjustable platform or movable surface can provide options for various viewing distances. Placing the keyboard in a keyboard drawer would also allow the monitor to be nearer to the edge of the desk or workstation edge for easier viewing. Other considerations are adaptive keyboards and alternative mouse options.
5. Screen review (screenreading) software that is compatible with print enlarging software
Some users with low vision may prefer to listen to large volumes of material rather than read it on a screen, looking at the screen only for brief periods in order to review specific information, answer questions, and proceed to the next test item. In that case, screen review software that is compatible with screen enlarging software should be provided; such software reads what is on the screen, giving the test-taker auditory feedback.
Test-takers who are blind interact with computers using a standard computer and assistive technology (see Part IV for information on adaptive devices). The Windows operating system is more widely used by consumers who are blind than any other. In order to be accessible to assistive technology, an application must conform to operating system guidelines and use operating system protocols. The Windows operating system has many features that allow enhanced accessibility to assistive technology users. Chief among these is Microsoft Active Accessibility (MSAA) http://www.msdn.microsoft.com/library/default.asp?url=/nhp/Default.asp?contentid=28000544. MSAA communicates important information that would otherwise be unavailable to the user. Without MSAA the assistive technology is less effective and may not provide necessary access to the application.
In addition to guaranteeing an accessible interface with assistive technology, test designers should be aware that it may be necessary to provide other kinds of accommodations to blind test-takers, depending upon the subject matter. Science, social studies, mathematics, and even reading comprehension tests may include items dependent upon diagrams and illustrations which must be presented in a tactile format.
If the user requires particular assistive devices, some familiarity or instruction in the use of the program and its interface with the assistive devices already being used is necessary prior to the testing date.
Taking into consideration the above, most blind test-takers will need any or all of the following accommodations in order to take the test independently:
1. Screenreader-accessible information displays or self-voicing output
Most screenreading applications use MSAA to provide information to the user via speech output. Another option test makers may use to make their testing applications accessible is to make them self-voicing. That is, the application speaks its contents without the need for assistive technology. (Applications which support text-to-speech rendering of text-based files are known as text-readers.) In a self-voicing test, the user interface (menus, buttons, and other controls) and all content of the test (images, math equations, media, etc.) would include alternative representations (e.g., verbal descriptions of images, captions of audio and video tracks) built into the application. This would eliminate the need and expense of providing assistive technology at the testing location.
For test-takers with visual or learning disabilities, using human readers during tests has several disadvantages. The problems include inconsistent quality of reading, test-taker anxiety and embarrassment at having the reader reread material, reader mistakes in recording answers, fatigue caused by the slowness and intensity of this reader/test-taker interaction, a greater need for testing time, and so forth. A prototype "self-voicing test" system that might reduce the need for human readers was evaluated with 19 individuals with visual and learning disabilities. Researchers conducted interviews, observed the individuals using the system, and conducted a focus group session. The study found that, in general, participants would "highly recommend" a self-voicing computer-based test like this system for individuals requiring speech output technology. Furthermore, several of the participants indicated that they would "much prefer" the self-voicing system over a human reader, particularly if improvements were made in the areas of speech quality, system responsiveness, navigation, and preparation materials (Hansen, Lee, and Forer, 2002).
2. Review capability and spell-check availability
Due to the complexities of the English language (homophones, etc.) and the errors introduced by the speech synthesizer (mispronunciation of words), test-takers need the ability to review testing materials with speech, that is, reread individual words or check the spelling of confusing or mispronounced words. If the computer-administered test is self-voicing, the test-taker must also be able to review text for the correct spelling of words. Also necessary is the ability to mark and return to sections of the test that have not been completed, if sighted students are given the opportunity to return to those sections.
3. Refreshable braille display
The refreshable braille display provides tactile access to textual information being read from the computer screen. As the screenreader reads the print line, a line of braille is displayed under the braille reader's fingertips. Rather than listening to audio descriptions and presentations of questions, the test-taker can read the braille display. At the present time, braille displays can accommodate a maximum of 40 braille cells and cannot display tactile graphics, icons, lines, etc., though further developments in technology may allow for full-page display. As technology improves, refreshable braille displays will also become more affordable.
4. Tactile supplement
Braille reading test-takers may need access to a "tactile supplement," composed of raised line versions of diagrams and illustrations, with braille labels, to accompany audio or "voiced" portions of the test.
5. ALT text / longdesc
Alternative or ALT text is a spoken description which enhances the meaning and understanding of a test item (usually a diagram or picture). Without appropriate ALT text (or a longer description called longdesc), a student using a screenreader does not have complete access to all of the information required to respond to the test item.
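In HTML, both attributes appear directly on the image element. A minimal sketch (the file names are illustrative):

```html
<!-- Short description in "alt"; link to a fuller prose description
     in "longdesc" for complex test items such as graphs -->
<img src="rainfall-graph.gif"
     alt="Bar graph of average monthly rainfall"
     longdesc="rainfall-graph-description.html">
```

A screenreader speaks the alt text in place of the image and can offer to open the longdesc page, giving the test-taker the full description needed to answer the item.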
Determination of the appropriate adaptive devices for use during computer administered testing depends on the delivery medium of the test and the nature of the disability of the student taking the test. For students with visual impairments, the preferred computer platform (stand-alone, networked, or connected to the Internet) is the personal computer with the Windows operating system, by far the most accessible. Several organizations have written guidelines to address accessibility within the Windows environment, for example the "Microsoft® Windows® Guidelines for Accessible Software Design" (http://www.cs.bgsu.edu/maner/uiguides/msaccess.htm) or WGBH's "Making Educational Software Accessible" (http://ncam.wgbh.org/cdrom/guideline/). The Windows operating system supports a greater variety of adaptive devices than any other operating system. For example, Windows supports at least eight screenreaders: JAWS®, WindowEyes, ASAW, Hal, Outspoken, WinVision, Protalk, and Screen Reader. The Macintosh® and Unix® (all varieties) operating systems each support only one screenreader: Outspoken for the Macintosh and Emacspeak for Unix®. Windows supports at least two screen magnification applications, ZoomText and Magic; in addition, Windows 98 and beyond include a screen magnifier utility as part of the operating system. Macintosh supports only one screen magnifier (InLarge) and a system utility (CloseView), while Unix® supports a few freeware screen magnification applications such as Dynamag and GMag. Windows also has accessibility tools (MSAA) and enhancements to the operating system that allow adaptive devices to work more efficiently and with fewer errors.
A computer-administered test can be delivered on an individual stand-alone (non-networked) computer, a networked computer, or a computer connected to the Internet. Adaptive software applications, such as screenreaders and screen magnifiers, generally work best if they are started on the computer before the testing application begins. This allows the student to become oriented to the testing application as soon as it starts. In the rare instance where the testing software or networking environment is DOS based, then the adaptive software must be loaded before the network connection is established and the testing application is loaded. It should be noted that the adaptive software working with the operating system does not guarantee that it will work with the testing software. The testing software must be written following "How to Write an Accessible Application" developed by Microsoft.
If the test is delivered via the Internet/World Wide Web, several options are available. If the test presentation conforms to the AA level of the W3C Web Content Accessibility Guidelines, and is presented using Microsoft's Internet Explorer®, then the user may use most screenreaders to interact with the test. Currently Microsoft's Internet Explorer and the Opera® browser are the most usable browsers for people with blindness and visual impairments. One additional web browser that deserves note is the Home Page Reader, available from IBM ( http://www-3.ibm.com/able/hpr.html). Home Page Reader is self-voicing, meaning it will need no additional software (i.e., screenreader) to speak or interact with the web presentation of the test. Home Page Reader works with valid HTML pages using HTML 4.0 and allows the user to change font styles, sizes, and foreground and background colors as well as change the speech intonation to signify headings or links, among other features.
The following principles should be adhered to when developing any software application, including test delivery software.
From IBM "Principles for Accessible Software" ( http://www-3.ibm.com/able/principles.html):
From "Microsoft Windows Guidelines for Accessible Software Design" http://msdn.microsoft.com/library/default.asp?url=/nhp/Default.asp?contentid=28000544
The previous section showed very general accessibility guidelines for the development of computer applications. Computer-based testing as a stand-alone application should follow these as a foundation for accessibility. This section describes how web accessibility guidelines can be applied to internet-based testing. Relevant parts of Section 508 of the Rehabilitation Act of 1973 (29 U.S.C. 794d) and Web Content Accessibility Guidelines (WCAG) are delineated. While there is considerable overlap between them, additional WCAG guidelines are used because they are more comprehensive than Section 508 regulations.
In "Prospects and Limitations of Psychological Testing on the Internet," from which the following quotation is excerpted, Barak and English (2002) ( http://construct.haifa.ac.il/~azy/eTesting.htm) discussed some of the drawbacks to internet-based testing. From the perspective of the psychometrist wanting to develop a sophisticated test, these may appear as limitations. From the perspective of a test-taker with a disability who is interacting with an online test, these limitations are seen as the foundation for accessibility:
Another drawback of using an Internet-based test has to do with still prevailing technological difficulties. Specifically, there is always the possibility that the client's browser, monitor, and/or video card may have other settings or an entirely different configuration than the designer intended so that the layout of the questionnaire or test might look somewhat different from that envisioned. The user's software and hardware ultimately display the test or questionnaire, and the hosting web server has no control over those versions, specifications, or settings. As a result, the designer of the test is best served by creating Web pages that are simple to display, using the most general, tried-and-true standards available so as to reach and serve as many varieties of client machines as possible. [emphasis added]
The above is not a limitation related to accessibility. The use of standard HTML and other web standards results in very accessible pages for test-takers with visual disabilities. Following accepted standards allows for the widest variety of client machines, connection speeds, and assistive technology.
Related to this problem is the fact that Internet connection quality varies among users, depending on their own equipment as well as the equipment used by the service providers and other intersections along the information superhighway. A slow modem, a small screen, or a problematic phone line might significantly undermine online testing. [emphasis added] (Barak and English, 2002)
This is another excellent reason to make web-based testing accessible. Well designed, accessible pages with appropriate alternative text generally display faster. The user can read the alternative text while waiting for the image to download. A well-designed page will scale according to the display parameters of the client computer. Again, accessible design compensates well for a slow modem and small screen.
The World Wide Web Consortium's "Web Content Accessibility Guidelines" (WCAG) and Section 508 ( http://www.tntluoma.com/sidebars/section508/) are guidelines that provide a strategy for web authors concerned with creating a web site on which the information presented is accessible to people with disabilities. All of the content guidelines should be followed during the construction of web pages, whether marketing materials, displaying general testing information, or delivering actual web-based tests. Many of these guidelines apply specifically to online testing. For example:
Section 508 (a), WCAG Checkpoint 1.1: Provide a text equivalent for every non-text element (e.g., via "alt", "longdesc", or in element content). This includes: images, graphical representations of text (including symbols), image map regions, animations (e.g., animated GIFs), applets and programmatic objects, ASCII art, frames, scripts, images used as list bullets, spacers, graphical buttons, sounds (played with or without user interaction), stand-alone audio files, audio tracks of video, and video.
Comment: When images of math equations or fractions are displayed, include a linearized text version as "alt." For example, an image of the fraction one-fourth has an "alt" of "1/4." Use standard markup rather than images to convey information on test questions, e.g., use table markup rather than using an image of a table to convey the information.
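Marked up as the checkpoint suggests, the image of the fraction carries its linearized form in the alt attribute (the file name is illustrative):

```html
<!-- The screenreader speaks "1/4" in place of the image -->
<img src="fraction-one-fourth.gif" alt="1/4">
```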
Section 508 (c), WCAG Checkpoint 2.1: Ensure that all information conveyed with color is also available without color, for example from context or markup.
Section 508 (g), WCAG Checkpoint 5.1: For data tables, identify row and column headers.
Comment: Use standard markup rather than images to convey information on test questions; e.g., use table markup rather than using an image of a table to convey the information.
Section 508 (h), WCAG Checkpoint 5.2: For data tables that have two or more logical levels of row or column headers, use markup to associate data cells and header cells.
Comment: By way of explanation for Guidelines 5.1 and 5.2, see the section on access to tables in "CD-ROMS for Math and Science" by M. Rothberg and T. Wlodkowski, 1998 (http://www.rit.edu/~easi/itd/itdv05n1-2/article1.html):
Reading and manipulating tables is an important way of processing scientific information and is a particular problem for blind users. Using data in a table requires referring to the headings for the row and column in order to interpret the information in a single cell. Currently, when navigating the tables (in most current software), blind users don't know what cell they are in at any given time, let alone the column and row headers that apply. A standard way of letting screen readers know which headers apply to each cell is needed, such as that proposed as part of the HTML 4.0 specification from the World Wide Web Consortium (W3C) (1997). Once this information is available, screen readers can create appropriate navigation commands and respond with the data for each cell in context. This will permit blind users to explore a set of tabular data more efficiently (Rothberg and Wlodkowski, 1998).
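The HTML 4.0 mechanism the excerpt refers to marks header cells with th elements and a scope attribute, so a screenreader can announce the row and column headers that apply to each data cell. A minimal sketch with illustrative data:

```html
<table summary="Boiling points of two liquids at sea level">
  <tr>
    <th scope="col">Liquid</th>
    <th scope="col">Boiling point (C)</th>
  </tr>
  <tr>
    <!-- scope="row" ties "Water" to every cell in this row -->
    <th scope="row">Water</th>
    <td>100</td>
  </tr>
  <tr>
    <th scope="row">Ethanol</th>
    <td>78</td>
  </tr>
</table>
```

For tables with two or more logical levels of headers (Checkpoint 5.2), HTML 4.0 also provides id attributes on header cells and a headers attribute on data cells to spell out each association explicitly.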
Section 508 (b), WCAG Checkpoint 1.3: Provide an auditory description of the important information of the visual track of a multimedia presentation.
Section 508 (b), WCAG Checkpoint 1.4: For any time-based multimedia presentation (e.g., a movie or animation), synchronize equivalent alternatives (e.g., captions or auditory descriptions of the visual track) with the presentation.
WCAG Checkpoint 2.2: Ensure that foreground and background color combinations provide sufficient contrast when viewed by someone having color deficits or when viewed on a black and white screen.
WCAG Checkpoint 10.1: Do not cause pop-ups or other windows to appear and do not change the current window without informing the user.
WCAG Checkpoint 9.4: Create a logical tab order through links, form controls, and objects.
WCAG Checkpoint 9.5: Provide keyboard shortcuts to important links (including those in client-side image maps), form controls, and groups of form controls (e.g. "next question" button).
Section 508 (n) WCAG Checkpoint 12.4: Associate labels explicitly with their form controls (e.g. multiple choice answers should have the entire answer as the label "C. dog: collie" rather than "C." or no label at all).
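In HTML, explicit labeling pairs each control's id with a label's for attribute; the same sketch also shows a keyboard shortcut per Checkpoint 9.5 (question text and names are illustrative):

```html
<!-- The entire answer is the label, so the screenreader speaks
     "C. dog: collie" when this radio button receives focus -->
<input type="radio" name="q1" id="q1c" value="c">
<label for="q1c">C. dog: collie</label>

<!-- Keyboard shortcut on an important control (Checkpoint 9.5) -->
<input type="submit" value="Next question" accesskey="n">
```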
These checkpoints, if followed, would allow assistive technology to appropriately and accurately convey the information to the test-taker with a visual disability.
Information concerning test item component accessibility can be found in "Accessibility of Information in Electronic Textbooks for All Students," ( http://www.tsbvi.edu/textbooks/tea1999.htm), an excerpt from Report on the Computer Network Study Project (1999, http://www.tea.state.tx.us/textbooks/archives/cnspt5.htm).
In addition to general guidelines for accessible software, the U.S. Department of Education Assistive Technology Program ( http://www.ed.gov/offices/OCIO/programs_services/assistive_technology/index.html) and at least one university (Oregon State University) have also developed accessible software guidelines ( http://osu.orst.edu/dept/tap/Policy/soft.html ). Both sets of guidelines are combinations of the Microsoft, IBM, and web (W3C-WCAG) accessibility guidelines.
Among the test items more difficult to make accessible to blind and low vision students are images used to convey necessary information. These include, but are not limited to, graphs, charts, diagrams, and maps. Below are suggestions for presenting images in a manner that helps simplify the identification process for the test-taker with visual disabilities. The information applies to both stand-alone and web-based test delivery methods.
For information regarding the tools and methods used to create tactile graphics see Appendix B.
Parts of the following section have been excerpted from Part 2 of "Computer-based Testing Guidelines," by the Association of Test Publishers (ATP) (2002). The ATP drafted this document as a step toward defining industry standards. The latest version was presented at their Conference on Computer-Based Testing, February 4, 2002, in Carlsbad, California.
Each of the guidelines ATP has developed is important to computer-based testing. However, the individual guidelines quoted here were chosen for comment and revision because they pertain to issues affecting test-takers with visual disabilities. Not all sections are addressed here. Additions made by the authors of this document follow the material directly quoted from ATP's current guidelines. Responses are bolded and comments are italicized. See updates at http://www.testpublishers.org/.
PART 2: COMPUTER-BASED TESTING GUIDELINES
Chapter 1: Planning and Design
This chapter provides guidelines for planning the computer-based test and for the information that should be provided in the test specifications (p. 16).
1.1 The purpose of a computer-based test should be investigated through a needs analysis to determine characteristics and requirements relevant to the test purpose for the test-takers, the test users, the test developers, the test publishers/test deliverers, and the test sponsors. If a needs analysis is conducted, the results should be documented (p. 16).
Response: What should be included in this particular guideline, however, is the recognition that for the needs analysis to account for all potential users, the accommodations a student might require in order to access the test must be considered as well.
1.3 Computer-based tests can be designed and developed to meet different purposes. Design documentation should include:
Response: Needed within this guideline is the acknowledgement that the test purpose, the content domain definitions, the content structure for the test items, and the required response formats for the test items must all be mouse independent. Further, test specifications should include required response input methods, the accessibility guidelines followed, and those elements designed to accommodate accessibility, e.g., recasting the test item in a non-visual manner, sample items explaining accessibility, and accessible test administration.
1.4 Design documentation should include the procedures concerning the required content for the test including how it will be obtained and reviewed. The review mode for the computer-based test should replicate the expected test delivery conditions to the greatest extent possible (p. 16).
Response: What should be emphasized in this guideline is the responsibility of the test developer to review the test for accessibility, using assistive technology that matches the accommodation available at the test site.
1.5 Information should be provided about the test delivery environment, minimum computer hardware for the computer-based test development and delivery environments, and minimum software configurations and requirements for computer-based-test development and delivery environments (p. 16).
Response: This guideline could support the assertion that test developers' specifications should include information about the minimum hardware and software requirements needed to operate the expected accommodations and related assistive technology required by test-takers who have disabilities.
PART 2: COMPUTER-BASED TESTING GUIDELINES
Chapter 2: Test Development
These guidelines relate to development of the computer-based test and provide direction on using computers in the process of test development. They apply to test administration only indirectly (p. 17).
2.1 Characteristics of the potential testing delivery environment(s) should be specified to match the needs of the test. It is important to make sure that items can be properly displayed in the test delivery environment and that test-taker input and results can be collected, aggregated, and reported. For example, graphics constraints need to be identified so that item writers do not create items that have too many colors or require a screen resolution that is too high for the current test delivery environment (p. 17).
Response: Additionally, those items displayed in the test delivery environment should be created to deliver accessibility via screen display (with and without large characters), speech, and braille output. Test-taker input should also be device independent. Moreover, foreground/background or other color combinations must be designed to be discernible by people who are colorblind. Do not rely on color alone to convey important information. Font size, screen clutter, and monitor size are also accommodations to be considered if the test delivery method is to test the required concept while allowing for the various methods of presentation needed by students who are visually impaired.
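The point about discernible foreground/background combinations can be made measurable. The sketch below computes a contrast ratio using the relative-luminance formula later standardized in WCAG 2.0 (the 1.0 checkpoints cited in this document give no formula), so both the formula and the 4.5:1 threshold are assumptions offered only as an illustration.

```python
# A sketch of quantifying "sufficient contrast" between foreground and
# background colors. The relative-luminance formula follows the later
# WCAG 2.0 definition; treating 4.5:1 as a passing threshold is likewise
# a WCAG 2.0 convention, assumed here only for illustration.
def _linear(c8):
    """Convert an 8-bit sRGB channel to its linear-light value."""
    c = c8 / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb):
    r, g, b = (_linear(v) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Ratio of the lighter luminance to the darker, each offset by 0.05."""
    lighter, darker = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))    # 21.0 (best case)
print(contrast_ratio((119, 119, 119), (136, 136, 136)) < 4.5)  # True: too close
```

Black on white yields the maximum 21:1 ratio, while two mid grays fall well below a readable level; a check of this kind could be run over a test's entire color palette before delivery.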
Comment: While testing accommodations provide students who are visually impaired the opportunity to demonstrate their abilities under standard conditions, such adaptations do not ensure that the test-taker using accommodations will be able to access the test. Accommodations assume that the testing material and delivery mechanism are compatible with approved accommodations such as screen readers, screen magnification software, or other assistive technology. If the test is not designed following established accessibility guidelines, then no amount of accommodations will make it accessible. See above, Part V: Design Considerations for Stand-Alone Computerized Tests.
The W3C User Agent Accessibility Guidelines (UAAG, http://www.w3.org/TR/UAAG10/) provide "guidelines for designing user agents that lower barriers to Web accessibility for people with disabilities." User agents include HTML browsers and other software applications that retrieve and render web content. "A user agent that conforms to these guidelines will promote accessibility through its own user interface and through other internal facilities, including its ability to communicate with other technologies (especially assistive technologies). By following these guidelines, developers will create more usable software for all web users." ( http://www.w3.org/TR/UAAG10/cover.html - abstract)
Non-web software that satisfies the requirements of the UAAG will also be more usable, flexible, manageable, extensible, and beneficial to all users.
2.4 Where necessary, item writers should be trained in the technology needed to author computer-based testing items that match target assessment constructs or objectives. Item writers may also collaborate with graphics designers and programmers in developing items and item display sequences (p. 17).
Response: Interfaces used by item writers should follow operating system standards. Device-independent (i.e. non-mouse) interfaces, such as using the tab or arrow keys to move between responses and using the Enter key as an equivalent for the mouse click, should also be built into the accessible items included in the design of the test interface. Further, item design, aside from impacting item performance, might affect accessibility, understandability, and comprehensibility.
2.5 When the presentation modality may affect test-taker performance, item writers and reviewers should view the item under conditions as close as possible to the actual testing delivery environment (p. 17).
Response: To ensure accessibility, reviewers who are visually impaired should be consulted to help examine the item in conditions that closely match the actual testing delivery environment.
2.6 The test development process should include the development of test introduction and instruction screens. These screens welcome the test-taker and provide information about the test and test administration conditions (e.g. time limits, number of questions, whether skipping and returning to items are allowed). Similar screens might be needed to separate subsections of the test and to prepare the test-taker for tasks to follow (p. 17).
Response: Test introduction and instruction screens must be made accessible to assistive technology. Such screens should indicate how test items are displayed and should provide instructions explaining to the test-taker how to interact with the software, i.e. keyboard shortcuts.
2.8 Instructions, tutorials or practice tests should be created for each computer-based test to familiarize test-takers with the features of the test and items to minimize construct irrelevant variance due to computer modality. Characteristics to consider include use of specific item types (e.g. drag-and-drop, point-and-click) and test navigation features (e.g. item skipping) (p. 17).
Response: Tutorials and practice tests designed to familiarize test-takers with the features of the test should also be made accessible. The use of device independent keyboard alternatives, such as arrow or tab navigation and enter key for response choice, should also be considered.
Comment: Technology used to make the test accessible should be made part of the regular curriculum. It is equally important to expose the student to the software and interface that will be used to administer the test prior to the testing date.
NOTE: There are simulation and practice materials available for most tests. These materials are usually produced by a third party company. Although they purport to be just like the real test, the underlying design and accessibility may be completely different.
2.9 Test developers should consider and design appropriate computerized test timing features. Several factors should be considered such as minimizing construct irrelevant variance; the use of introduction screens, surveys and tutorials; and item display time, item response latency, and speededness (p. 17).
Response: When determining time limits, the time needed for the student to respond to alternative representations (i.e. captions, descriptive video, tactile graphics, etc.) should also be considered.
2.10 Developers of computer-based tests should consider how aspects of computer delivery might impact fairness and equity and take appropriate action to minimize their effect. These factors may include aspects of test design, content, specific items, or format elements (p. 18).
Response: Developers of computer-based tests should also document those methods and procedures that are employed when striving toward accessibility.
2.11 Developers of computer-based tests should plan for, specify and document reasonable accommodations that can be useful for test-takers who have specific conditions that may make the standard form of a computer-based test an inaccurate measure of the test-taker's knowledge, skill, or performance. Test developers will need to take into account the extent to which an accommodation may affect the validity of the inferences made from test scores (p. 18).
Comment: This is a global guideline. While the intent is clear, test developers need more specific information in order to implement this guideline. Preceding and following guidelines seek to provide that information in context. The underlying premise is that the content is in accessible form. Accommodations cannot make inaccessible content accessible. The current document discusses this issue in detail.
2.12 Developers of computer-based tests should consider, specify and document item selection algorithms, scoring rules, delivery features and reporting options consistent with measurement objectives (p. 18).
Response: Additional delivery features should include appropriate alternatives required by test-takers who are visually impaired (e.g. alternative text and transcripts used for representing images, maps, graphs, and multimedia).
Delivery features could include graphics files in various sizes for low vision test-takers and appropriate descriptive text for screen readers, item display parameters that include variable font sizes and color, text labels used in the items, buttons with text labels, and the presentation of alternative accessible media, i.e., captions, descriptive video, audio, transcripts.
2.13 The test developer should consider processes and technologies to aid in item authoring and review, item classification, tracking of item changes, test version control, management of results data, etc. (p. 18).
Response: (1) The resulting test generated by the development system should be accessible by default. Along with edits, accuracy, and sensitivity, the written item review should also consider accessibility.
(2) Additionally, the quality of any accessibility feature(s) within a computer-based testing environment should be accurately evaluated and affirmed. Reviewers who are visually impaired should participate in the quality assurance process.
Comment: (1) There are two types of test development environments: stand-alone programs and online programs. A stand-alone test development environment should follow the basic principles for developing an accessible program, discussed above in Part V: Design Considerations for Stand-Alone Computerized Tests. For tests being developed for delivery over the World Wide Web, it is suggested that the tools used to develop a testing environment conform at a high level to the W3C's Authoring Tool Accessibility Guidelines 1.0 (ATAG1.0).
The goals of this document [ATAG1.0] can be stated as follows: that the authoring tool be accessible to authors regardless of disability, that it produce accessible content by default, and that it support and encourage the author in creating accessible content. Because most of the content of the Web is created using authoring tools, they play a critical role in ensuring the accessibility of the Web. Since the Web is both a means of receiving information and communicating information, it is important that both the Web content produced and the authoring tool itself be accessible. (ATAG, http://www.w3.org/TR/ATAG10)
Using a tool that conforms to ATAG ensures that the delivered test will meet the requirements of the W3C's Web Content Accessibility Guidelines.
(2) CTB/McGraw-Hill included forty visually impaired students in each testing group used in collecting the normative data for the TerraNova Series. One percent (1%) of the students included in the Harcourt-Brace norming of the Stanford Achievement Test, Ninth Edition, were categorized as visually impaired. The State of Kentucky has developed six versions of their state assessment tests. One version has been modified specifically for students who are blind or visually impaired. This version of the test is also used with the other versions during testing of the general student population.
2.14 Test developers should consider the use of standard item and data interchange formats to facilitate access and retrieval of information (p. 18).
Response: Tactile graphic production tools are crucial applications to which item banks should be connected.
PART 2: COMPUTER-BASED TESTING GUIDELINES
Chapter 3: Test Administration
This chapter provides guidelines about the administration of computer-based tests. It includes information for the test-taker, and information on the testing interface, the testing environment, hardware and software requirements, special accommodations for test-takers with disabilities, test security, and disaster recovery.
3.1 Test sponsors, developers and deliverers should establish acceptable hardware and software requirements (e.g. configuration standards) for the administration of computer-based tests (p. 18).
Response: The means for test-takers to navigate through the test (e.g., mark items for review, change answers) should be accessible. Moreover, if an accessibility flag is set and alternative media are being displayed, this time should be excluded from the test time allocated to the test-taker.
3.2 Variability across testing environments should have no meaningful impact on test scores. In addition to factors such as test-taker comfort, noise level, amount of workspace, and lighting, appropriate steps should be taken to ensure that the test environments meet the specified hardware and software requirements (p. 18).
Response: Assistive technology and other accommodations could be effectively listed in this guideline.
3.3 The test-taker should be provided with an opportunity to become familiar with, and to demonstrate facility with, the computer testing interface and functionality. For example, a test-taker should have the opportunity to take a set of practice questions or a tutorial prior to or during the actual testing administration. Instructions to test-takers should clearly indicate how to select, enter, or construct responses and use any special equipment (p. 18).
Response: For students who are visually impaired, accessible tutorials or practice questions should be made available in order to allow the test-taker the opportunity to become familiar with and demonstrate proficiency in using a computer testing interface.
Comment: The tutorial should include information concerning accessibility features and their functioning.
3.6 Reasonable accommodations in the test event should allow for a test experience that is fair and equitable for all test-takers. Modifications to the test event should be made in accordance with the procedures outlined by the appropriate stakeholders on the basis of carefully considered professional judgment. Examples of reasonable accommodations include a larger monitor for a test-taker with a visual impairment, the availability of an alternate means for responding to a test item for an individual with a physical or psychomotor impairment, and extended test time (p. 18).
Comment: See guidelines and comments above for specific instances where accessibility and accommodations can be incorporated into the test administration process.
Note: Testing accommodations provide test-takers with disabilities the opportunity to demonstrate their abilities under standard conditions. Assistive technology assumes [emphasis added] that the testing materials and delivery mechanisms are accessible to approved accommodations such as screen readers and screen magnification software. If the test is not designed following accepted accessibility guidelines, then no amount of accommodations will make the test usable by test-takers with disabilities.
PART 2: COMPUTER-BASED TESTING GUIDELINES
Chapter 4: Scoring and Score Reporting
This chapter contains guidelines concerning factors that influence scores, score interpretation, or reporting of scores on computer-based tests.
4.1 Any aspect of the hardware or software that might affect the interpretation of test scores should be thoroughly described in the test user's manual or other test documentation (p. 19).
Response: The features of accessible design, alternative presentations, assistive technology, and accommodative tools can potentially impact the interpretation of a test, and thus must be specified within the test documentation.
4.6 When test score information is released to stakeholders, appropriate interpretations of individual or group summary test scores should be available. The interpretations should clearly describe the meaning of the scores, the precision of the scores, common misinterpretations of the scores, and the use of the scores (p. 19).
Response: The interpretation of test scores and other related information should be communicated in an accessible manner, e.g., via electronic text.
PART 2: COMPUTER-BASED TESTING GUIDELINES
Chapter 6: Stakeholder Communications
There may be a need to educate test-takers and the public about the benefits, limitations, and capabilities of computer-based tests. Since test-takers may not understand the differences between computer-based and paper-based tests, these differences need to be communicated to all appropriate stakeholders. These differences could include but are not limited to test scheduling and delivery, tutorials, practice tests, marketing, web design, scoring and results, item types and presentation models, test design, development and delivery, and copyright ownership.
6.1 To ensure a successful transition from a paper-and-pencil based to computer-based format, test organization sponsors should develop and execute a well-conceived test communications and education program. Elements of such a program might include:
The transition plan should include ample time to ensure appropriate stakeholder understanding. (p. 20).
Response: Students with disabilities must be included as part of all stakeholder communications and the development of testing education programs.
Comment: Tutorials, practice tests, marketing, web design, and rights and responsibilities have an accessibility component. Are tutorials, practice tests, marketing, web design, and rights communicated in an accessible manner? Is the test-taker able to interact with these communications and tests in an accessible manner? (see information on the rights and responsibilities of test-takers at http://www.apa.org/science/ttrr.html)
6.2 Where appropriate, test developers should provide information concerning the test purpose and test content specification to test users prior to the availability of the test (p. 20).
Response: Test sponsors must relay to the test-taker in an accessible format all information regarding test purpose and specification.
6.3 Test-takers who are unfamiliar with a computer keyboard or a mouse should have access to practice questions, practice exams or tutorials. Any practice questions could be given prior to or during the computerized test administration process. When test-takers have had experience using previous versions of the testing system, it may be worthwhile to emphasize administration instructions specific to any changes (p. 20).
Response: Practice exams and tutorials should be accessible, and a user interface similar to the one used to represent the actual test should be made available so that the student who is visually impaired can become familiar with computer-based item types.
The Guidelines for Computer-Based Testing: February 4, 2002, published by the Association of Test Publishers (ATP), a non-profit organization that represents publishers of test and assessment materials, were drafted to "help assure high measurement quality of computer-based tests and to provide direction for the principles, procedures and best practices used for developing and administering these tests" (p. 2). ATP Guideline 3.3 states "a test-taker should have the opportunity to take a set of practice questions or a tutorial prior to or during the actual testing administration" (p. 18). This guideline highlights the statement made earlier that for students with visual disabilities, it is crucial to provide practice using testing software and assistive devices prior to the testing window.
During our investigation, a random review of current tutorials, practice exams, and other testing software programs was undertaken with regard to their accessibility. In the following section, commercially available test preparation software and other educational software with testing components are discussed first, followed by a discussion of web-based psychological and educational testing sites. One test generation software program that was included with a high school textbook was reviewed. Lastly, two accessible test creation and administration packages were considered.
Seven commercially available software packages were tested. They were either purchased at a local store or were available upon request from a test manufacturer. The format for this section is as follows: a list of musts for accessibility, the name of the software package, and bulleted items listing which portions of the software were tested for general accessibility, screenreader accessibility, screen magnification accessibility, and keyboard accessibility.
MUSTS FOR ACCESSIBILITY
If computer-based testing applications are to be accessible they should:
Kaplan Deluxe GMAT, LSAT, and GRE (2001 Edition)
U.S. History: Semester 2, by Fogware
StudyWorks!® Mathematics Deluxe, by Mathsoft
The Princeton Review: Inside the SAT, ACT, and PSAT Deluxe(2000 Edition), by The Learning Company
Excel @ Middle School, by Knowledge Adventure
There are many online testing and tutorial sites on the World Wide Web. A variety of sites were chosen for review based on content and question types. Moreover, six sites were taken from the article "Prospects and Limitations of Psychological Testing on the Internet" (Barak and English, 2002, http://construct.haifa.ac.il/~azy/eTesting.htm). The format for this section is as follows: name of the web site, a description and web site address (URL), and the accessibility review. All sites were tested using IBM's Home Page Reader, a web browser with integrated speech. The program uses speech synthesis to read web pages to the user and allows the user to navigate through the pages with a high degree of control and flexibility. While many modern screenreaders have direct support for web pages, the control and simplicity offered by IBM's product seem to be preferred by users, especially new users.
MUSTS FOR ACCESSIBILITY
If web delivered tests are to be accessible they must:
Psychiatry Information for the General Public
Queendom.com®: Tests, tests, tests
Self Discovery Workshop
Keirsey Character and Temperament Sorter®
What's Your Emotional Intelligence Quotient?
Organizational Diagnostics Online: Profiler
Prentice Hall® History test
My SAT Prep
Teacher's Pet®: Test Generation Software
In addition to these commercially available test packages, the American Printing House for the Blind (APH) produces an accessible test creation and administration program called Teacher's Pet. This program contains no specific test content, but, instead, provides a platform that lets the teacher or parent create their own content. Both the creation and the test taking process follow all accessibility guidelines and include support for incorporating graphics and sounds in each test question. You can get more information about Teacher's Pet at www.aph.org/tech.
Another test creation software package was reviewed for accessibility: ExamView® by FSCreations, Inc., version 2.01, developed in cooperation with Glencoe/McGraw-Hill. ExamView also has a web site for exploration of its product and web-based tests that it can generate ( http://www.examview.com/). The software allows the teacher to choose font type and size, or change them quickly. Test creators can choose the number of columns for the layout of a test. Tests can be exported to Rich Text Format (RTF) files, which allows the easy creation of braille versions of the test (it should be noted that valid HTML also allows easy conversion to appropriately formatted braille). A variety of question types, including multiple choice, short answer, essay and others were available. The sample tests on the web site were very accessible and usable with HomePage Reader. ExamView includes tools for the creation of online (web-based) tests.
Kaplan® Deluxe GMAT, LSAT and GRE® 2001 Edition. Encore Software, 2000. http://www.encoresoftware.com/
StudyWorks! III for Math. MathSoft, 2000. ISBN: 1-57682-084-X http://www.mathsoft.com/
U.S. History. Fogware, 2000.
The Princeton Review: Inside the SAT, ACT, and PSAT. The Learning Company, 1999. ISBN: 0-7630-3190-9 http://www.broderbund.com/Category.asp?CID=208
Excel @ Middle School. Knowledge Adventure, Inc., 2000. ISBN: 1-58189-460-0 http://www.knowledgeadventure.com/
Teacher's Pet: Test Creation Software, American Printing House for the Blind, Inc., http://www.aph.org/tech
Public education is being required to become more accountable for student achievement (No Child Left Behind Act of 2001). Achievement is generally measured through testing. Paralleling the need for accountability and testing is the increasing reliance on computers and the internet in today's schools. A study by the U.S. Department of Education noted that in 2001, 77% of classrooms in the United States had computers connected to the Internet (National Center for Educational Statistics, 2001). "As technology becomes more central to schooling, assessing students in a medium different from the one in which they typically learn will become increasingly untenable" (Bennett, 2002, p.2).
Increasingly schools are using technology-based assessments to measure student achievement. The Association of Test Publishers (2002) has recently published Guidelines for Computer-Based Testing: February 4, 2002 in support of efforts to move to computer-based tests. Bennett (2002) notes there are five factors currently driving technology-based assessment:
First, states are implementing technology-based assessment in elementary and secondary schools and in all of the key content areas…
Second, states plan to deliver both low- and high-stakes examinations through this medium.…
Third, some of these examinations are already being administered statewide…
Fourth, the tests generally use multiple-choice items exclusively…
Finally, …electronic assessment is part of an integrated state plan to employ technology throughout the educational process (p. 8-9).
The following states have some type of current or planned operational delivery of technology-based assessment: South Dakota, Oregon, Virginia, Georgia, Idaho, Utah, North Carolina, and Maryland. South Dakota even implemented their mandatory computerized assessment for grades 3, 6, and 10 statewide, in Spring 2002, without providing a paper counterpart (Bennett, 2002, p. 10-11). Additionally, in 2001, the State of Kentucky piloted online delivery of its Commonwealth Accountability Testing System (CATS), presented an online practice test session in October, 2002, and scheduled online presentation of the test statewide for Spring, 2003. Students with visual and other disabilities are successfully being fully included in the Kentucky online testing program.
[T]he incorporation of technology into assessment is inevitable because, as technology becomes intertwined with what and how students learn, the means we use to document achievement must keep pace (Bennett, 2002, p. 14).
Association of Test Publishers. (2002). Guidelines for computer-based testing. Washington, DC: Author. (Final Version presented February, 2002, at the ATP Conference on Computer-Based Testing, Carlsbad, CA)
Barak, A., and English, N. (2002). Prospects and limitations of psychological testing on the internet. Journal of Technology in Human Services, 19(2-3), 65-89. http://construct.haifa.ac.il/~azy/eTesting.htm
Bennett, R. E. (2002). Inexorable and inevitable: The continuing story of technology and assessment. Journal of Technology, Learning, and Assessment, 1(1), 1-9. http://www.bc.edu/research/intasc/jtla/journal/v1n1.shtml
Braille Authority of North America. (1983). Guidelines for mathematical diagrams. Washington, DC: Author.
Bridgeman, B., Lennon, M. L., and Jackenthal, A. (2001). Effects of screen size, screen resolution, and display rate on computer-based test performance. Princeton, NJ: Educational Testing Service.
Burk, M. (1999). Computerized test accommodations: A new approach for inclusion and success for students with disabilities. Washington, DC: A.U. Software.
Canadian Braille Authority. (1996). Tactile graphics research project report part II: Interim measures and supplement. Toronto, Ontario: Author.
Corn, A. L., and Koenig, A. J. (Eds.). (1996). Foundations of low vision: Clinical and functional perspectives. New York: AFB Press.
CTB/McGraw-Hill. (2001). Guidelines for inclusive test administration. Monterey, CA: Author.
D'Andrea, F. M., and Farrenkopf, C. (Eds.). (2000). Looking to learn: Promoting literacy for students with low vision. New York: AFB Press.
Elliott, S. N. (1999). Valid testing accommodations: Fundamental assumptions and methods for collecting validity evidence. Retrieved April 14, 2002, from University of Wisconsin-Madison, Wisconsin Center for Educational Research Web site: http://www.wcer.wisc.edu/testacc/publications/CCSSOvaliditypaper.699.htm
Freeman, P. B., and Jose, R. T. (1997). The art and practice of low vision. (2nd ed.). Newton, MA: Butterworth-Heinemann.
Hansen, E. G., Lee, M. J., and Forer, D. C. (2002). A "self-voicing" test for individuals with visual disabilities. (Research Notes). Journal of Visual Impairment and Blindness (JVIB), 96, 273-275.
Joint Forum for General Certification of Secondary Education (GCSE) and General Certificate of Education (GCE). (1998). Specification for the preparation and production of examination papers for visually-impaired candidates. Cambridge, England: Author.
Jose, R. T. (Ed.). (1983). Understanding low vision. New York: AFB Press.
Lewis, S., and Allman, C. B. (2000). Seeing eye to eye: An administrator's guide to students with low vision. New York: AFB Press.
Keates, S., Clarkson, P. J., Harrison, L., and Robinson, P. (2002). Towards a practical inclusive design approach. Retrieved November 7, 2002, from http://rehab-www.eng.cam.ac.uk/papers/lsk12/cuu2000/
McDonnell, L. M., McLaughlin, M. J., and Morison, P. (Eds.). (1997). Educating one and all: Students with disabilities and standards-based reform. Washington, DC: National Academy Press.
National Center for Educational Statistics. (2001). Internet access in U.S. public schools and classrooms: 1994-2000. Retrieved November 7, 2002, from http://nces.ed.gov/pubs2002/internet/ (NCES No. 2002018).
No Child Left Behind. (2001). The facts about…measuring progress. Retrieved November 7, 2002, from http://www.nochildleftbehind.gov/start/facts/testing.html
Rose, D.H., and Meyer, A. (2002). Teaching every student in the digital age: Universal design for learning. Alexandria, VA: Association for Supervision and Curriculum Development.
Rothberg, M., and Wlodkowski, T. (1998). CD-Roms for math and science. Information Technology and Disabilities Journal, 5(1-2). Retrieved April 18, 2002, from http://www.rit.edu/~easi/itd/itdv05n1-2/article1.html
Stuen, C., Aries, A., Horowitz, A., Lang, M. A., Rosenthal, B., and Seidman, K. (Eds.). (2000). Vision rehabilitation: Assessment, intervention and outcomes. Exton, PA: Swets and Zeitlinger.
Thompson, S. J., Thurlow, M. L., Quenemoen, R. F., and Lehr, C. A. (2002). Access to computer-based testing for students with disabilities (Synthesis Report 45). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes. Retrieved July 12, 2002, from http://education.umn.edu/NCEO/OnlinePubs/Synthesis45.html
Tiresias. Guidelines: Inclusive design. Retrieved November 7, 2002, from http://www.tiresias.org/guidelines/inclusive.htm
World Wide Web Consortium (W3C). Web content accessibility guidelines 1.0 (WCAG). Retrieved January 12, 2003, from http://www.w3.org/TR/WAI-WEBCONTENT/
AllAboutVision.com
American Academy of Ophthalmology
American Optometric Association
Mississippi State University, The Rehabilitation and Research Training Center on Blindness and Low Vision
The Low Vision Centers of Indiana
The Low Vision Gateway
The Low Vision Information Center
Low Vision Research Group Network
The National Eye Institute
The Screen Magnifiers Homepage
Trace Center, College of Engineering, University of Wisconsin at Madison
University of Houston College of Optometry
Westchester Low Vision
Adaptive and Assistive Technology Online
Links to Manufacturers of Assistive Technology Devices
http://www.state.me.us/rehab/Links%20to%20Manufacturers%20of%20Assistive%20Technology%20Devices.htm
Microsoft Accessibility Overview of Assistive Technology
Microsoft Developer's Network (MSDN)
Rehabilitation Engineering and Assistive Technology Society of North America
Technology Manufacturer's Links
The Alliance for Technology Access: Publications
VI Guide to Assistive Technology
This page, sponsored by the Texas School for the Blind and Visually Impaired, provides a series of links to helpful guidelines in the area of accessibility.
Disability, Race, and High-Stakes Testing of Students
In this National Center on Accessing the General Curriculum (NCAC) supported article, author Jay Heubert provides a comprehensive look at the issue of high stakes testing involving students with disabilities and minorities.
Dolan, R. P., and Hall, T. E. (2001). Universal design for learning: Implications for large-scale assessment. IDA Perspectives, 27(4), 22-25. Retrieved March 2002 from the World Wide Web.
In this article, the authors start with a comment about how much easier it can be for students with disabilities to gain access to the school building than the material taught within it. The site goes on to address this issue by providing methods for accommodating students with all varieties of disabilities.
eDescription: Detailed Overview. Extended, Enhanced, Educational Description.
This site describes a planned project to create a methodology, intended for blind and visually impaired students, for delivering extended description through an auditory medium.
Educational Testing Service Accommodations Policy
This document provides information regarding testing accommodations for students with disabilities. Concerns for computer-based testing and software accessibility are addressed as well as the frequent need for extended time and breaks during testing. Pen and paper accommodations are reviewed along with the options for alternative test formats.
Guidelines for Providing State Assessments in Alternate Formats for Students with Visual Impairments, by Carol Allman, Ph.D.
Dr. Allman's article gives guidelines for providing state assessments in literary braille, mathematical (Nemeth Code) braille, and large print formats.
Oregon State University Software Access Guidelines
The Oregon State University Software Access Guidelines contain information about keyboard access, icons, sounds, display, field labeling, and documentation.
Principles of Assistive Technology for Students with Visual Impairments, by Jim Allan and Jay Stiteley
This document outlines not only the legal definitions of assistive devices and services but also describes the evaluation process used to determine whether a student requires such technology. The authors go on to list several common forms of assistive technology and describe their features. The document also provides helpful information for both the student and the teacher about using assistive devices in the classroom environment.
Requirements for Accessible Electronic and Information Technology (E & IT) Design
These guidelines, established by the U.S. Department of Education, work to ensure that information is accessible to all employees and external customers regardless of disabilities.
Student's Recommendations for Digital Textbooks
Here, the not-for-profit organization, Center for Accessible Special Technology (CAST), fully details the processes and outcomes of a project funded by the U.S. Department of Education to research the advantages and feasibility of computerized textbooks for a variety of learners.
Corn, A. L., and Koenig, A. J. (Eds.). (1996). Foundations of low vision: Clinical and functional perspectives. New York: AFB Press.
This book provides a comprehensive look at low vision. Corn and Koenig cover the anatomy of the eye from a scientific perspective, address the social and psychological implications of vision loss, and offer detailed definitions of specific degrees of vision, among other approaches to the topic.
While this organization largely provides products for the blind and visually impaired, the site also describes processes of vision loss and emphasizes the difference between low vision and blindness by stating, "Low Vision = Usable Vision."
University of Houston College of Optometry Center for Sight Enhancement
This site defines low vision in the context of an individual's life. Additionally, it cites different causes of low vision as well as methods an individual with low vision may use to strengthen abilities that are impeded by a visual impairment.
Applied Measurement Professionals: lx.com
CollegeBoard.com: Question of the Day
Links to Test Prep Materials
Microsoft.com: Testing Demos
Microsoft Training & Certification Exam Resources
Steck-Vaughn GED Practice
Testcraft Assessment Software
Thomson/Peterson's Test Prep
APH Educational Research: Guidelines for Design of Tactile Graphics
APH Products: Communication Modes/Literacy
Products Manufactured by Quantum Technology: Tactile
Resources for Preparing Quality Tactile Graphics
Table comparing prices of Devices for Embossed Graphics
Tactile Graphics, An Overview and Resource Guide
By John A. Gardner, Science Access Project, Department of Physics, Oregon State University
The Real Challenge in Tactile Graphics
By Phil Hatlen, Superintendent, Texas School for the Blind and Visually Impaired
Types of Tactile Graphics
ACT, Inc.: Test Prep
Biolink, developer of access programs for the visually impaired
Cat Global Testing Network
Computerized Testing at Indiana University, Bloomington
Educational Testing Service: Computer-Based Testing
EdVision Performance: Standards-Based Measurement on the Web
EpiCenter: A Microcomputer-Based Testing System
Harcourt: Computer-Based and Online Testing
Inexorable and Inevitable: The Continuing Story of Technology and Assessment
Microsoft Accessibility Center
NCS Awarded Georgia's Electronic Testing Contract
NCS Awarded Maryland's Online Testing Contract
NCS Awarded Virginia's Online Testing Contract
NCS: Electronic Testing Services
Northwest Evaluation Association
Online Math Courses and Test Prep
SAT Program Information - Student Disability Requirements
Using OMNI 1000 with Shortcut Keys
W3C Speech Synthesis Markup Language Specification,
W3C Working Draft 5 April 2002
Colorado Department of Education: Understanding Accommodations
Consortium for Equity and Standards in Testing
Council of Chief State School Officers (CCSSO): state educational accountability systems
CTB/McGraw Hill Student Accommodations
ERIC: Disabilities and Their Implications for Testing
FairTest.com: Report on State Assessment and Accountability Systems
IMS Global Learning Consortium, Inc.
National Center on Educational Outcomes: Accommodations
StateStandards.com: State educational standards coupled to lesson plans
Students with Disabilities in National and Statewide Assessments -Minnesota Report 7
The National Center for Education Statistics
The National Fair Access Coalition on Testing (FACT)
The National Institute on Student Achievement, Curriculum and Assessment
Accessibility
Refers to the freedom or ability of an individual to obtain or make full use of a product or environment. A product is accessible to an individual with disabilities only if he or she is able to use it to carry out all of the same functions and to achieve the same results as individuals with similar skills and training who do not have disabilities. ( http://www.tea.state.tx.us/Textbooks/archives/cnstemp.htm, Texas Education Agency [TEA], 1999)
Accommodations
Changes that can be made to the way students with a disability access instruction and demonstrate performance. Accommodations can be made to instructional methods and materials, assignments and assessments, learning environment, time demands and schedules, and special communication systems.
Alternative assessment
The utilization of non-traditional approaches in judging student performance.
Alt text (alternative text)
Text written to replace an image or graphical representation, so that the reader gets a description of the image. This text is then accessible to assistive technology such as screen readers.
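For illustration, a minimal markup fragment showing how alt text accompanies an image (the file name and description are invented):

```html
<!-- The alt attribute supplies text that a screen reader can speak
     in place of the image -->
<img src="triangle.gif" alt="A right triangle with legs 3 and 4 units long">
```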
Animated GIF (Graphics interchange format)
An animated GIF (Graphics Interchange Format) file is a graphic image on a Web page that moves - for example, a twirling icon or a banner with a hand that waves or letters that magically get larger.
Applet
On the Web, an applet is a small program, written in the object-oriented programming language Java, that can be sent along with a Web page to a user.
ASCII
American Standard Code for Information Interchange (ASCII) is the most common format for text files for computers and on the Internet. In an ASCII file, each alphabetic, numeric, or special character is represented with a 7-bit binary number (a string of seven 0s or 1s). One hundred twenty-eight (128) possible characters are defined.
Assistive technology (sometimes used interchangeably with adaptive technology)
The Technology-Related Assistance for Individuals with Disabilities Act of 1988 (PL 100-407) gave us the first legal definition of assistive technology devices and services. An assistive technology device was defined as: any item, piece of equipment, or product system, whether acquired commercially off the shelf, modified, or customized, that is used to increase, maintain, or improve functional capabilities of individuals with disabilities. An assistive technology service was described as: any service that directly assists an individual with a disability in the selection, acquisition, or use of an assistive technology device. http://www.unf.edu/~tcavanau/publications/site2001/AT_in_IT.htm
Braillist
A person who transcribes braille materials and holds a certification in Literary, and possibly Nemeth, Braille Transcription from the National Library Service (NLS) of the Library of Congress, Washington, D.C.
Client-side image maps
Client-side image maps work by placing a complete representation of the active areas of an image, including their shape, size, and destination, into an SGML-compliant textual form. This markup may also optionally include a textual description for each area for display on non-textual browsers. This representation, or "map," is given a name to identify it.
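As a sketch, a client-side image map for a two-button navigation bar might be marked up as follows (file names, map name, and coordinates are invented):

```html
<!-- usemap points to the map by name; each area carries its own
     shape, coordinates, destination, and textual description (alt) -->
<img src="navbar.gif" alt="Navigation bar" usemap="#navmap">
<map name="navmap">
  <area shape="rect" coords="0,0,100,40" href="home.html" alt="Home">
  <area shape="rect" coords="100,0,200,40" href="tests.html" alt="Tests">
</map>
```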
Frame
In computer video display technology, a frame is the image that is sent to the display image-rendering devices. It is continuously updated or refreshed from a frame buffer, a highly accessible part of video RAM.
Fresnel lens
A thin optical lens, invented in 1822 by the French physicist Augustin-Jean Fresnel, consisting of concentric rings of segmental lenses that produce a concentrated beam of light. Having a short focal length, it is used primarily in spotlights, overhead projectors, and the headlights of motor vehicles.
High-stakes testing
…the use of scores on achievement tests to make decisions that have important consequences for examinees and others--as a primary strategy to promote accountability. Some high stakes decisions affect students, such as the use of test scores for promotion, tracking and graduation. Others affect teachers and principals when scores are used to determine merit pay or contract renewal. Still others affect schools, as when schools are awarded extra funds when scores increase or are put into intervention status when scores are low.
HTML (Hypertext Markup Language)
HTML is the lingua franca (formal language of choice) for publishing hypertext on the World Wide Web. It is a non-proprietary format based upon SGML, and can be created and processed by a wide range of tools, from simple plain text editors (you type it in from scratch) to sophisticated WYSIWYG authoring tools. HTML uses tags such as <h1> and </h1> to structure text into headings, paragraphs, lists, hypertext links, etc.
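For example, a short fragment (the content is invented) structured with such tags:

```html
<h1>Practice Test</h1>
<p>Read the <a href="instructions.html">instructions</a>,
   then answer each item.</p>
<ul>
  <li>Item one</li>
  <li>Item two</li>
</ul>
```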
Individual Education Program/Plan (IEP)
The Individualized Education Program (IEP) is the cornerstone of the Individuals with Disabilities Education Act (IDEA), which ensures educational opportunity for students with disabilities. The IEP is a quasi-contractual agreement to guide, orchestrate, and document specially designed instruction for each student with a disability based on his or her unique academic, social, and behavioral needs.
longdesc attribute
This attribute specifies a link to a long description of the image. This description should supplement the short description provided using the alt attribute. When the image has an associated image map, this attribute should provide information about the image map's contents.
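A hypothetical use of the attribute (both file names are invented):

```html
<!-- alt gives a short label; longdesc links to a separate page
     containing a fuller prose description of the graphic -->
<img src="scores.gif" alt="Bar graph of test scores by grade"
     longdesc="scores-description.html">
```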
Refreshable braille display
Braille on a paper page is an example of a Static Braille Display; once written it does not change. A refreshable Braille Display is one in which braille symbols can be changed by electromechanical means to reflect changes in screen data or keyboard input from the operator.
Screen magnification software
Software that can be installed on a PC to magnify both the text and graphics on the computer screen. Features may include: magnification from two up to 32 times normal size, reverse color contrast, cursor enhancement, auto tracking, optional screen reading capabilities, and automatic scanning of text documents.
Screen reader
When a screen reader is running, a synthesized voice reads items on the screen aloud, describes graphics, and states the user's keyboard commands. The program will, for example, say "tab" when the user presses the tab key. Designed to eliminate the need to use a mouse, screen readers enable users to navigate the screen and execute all commands using keyboard shortcuts.
Script
In computer programming, a script is a program or sequence of instructions that is interpreted or carried out by another program rather than by the computer processor (as a compiler program is).
(SGML) Standard Generalized Markup Language
SGML (Standard Generalized Markup Language) is an international standard for the definition of documents in electronic form that are both device and system-independent. It is a standardized approach to defining the characteristics of applications and documents. With SGML, a document is stored as a straight text file. This text file is formatted through the use of special streams of ASCII characters known as markups. Thus an SGML document is more than just a text file alone, it is a "marked-up" text file. This leads us to our next subject, the markup language.
Speededness
"Speededness" in testing is the effect of time limits on test-takers' scores. An exam is "speeded" to the extent that those taking it score lower than they would have if they had unlimited time. Most of the speededness statistics that are produced for AP Exams are based on the number of items that were "not reached." In each separately timed section or subsection, if a student answers item 48 but leaves items 47, 49, and 50 unanswered on a 50-item exam, it would be assumed that the student reached item 47 but not items 49 and 50.
Standard (non-valid) HTML
Standard HTML is HTML that has not been validated. That is, it may contain errors. Browsers interpret HTML. When the HTML is invalid, the browser attempts to "repair" or "takes a best guess" and displays (or attempts to display) the results on the screen. Non-valid HTML may confuse assistive technology, which interacts with the "repaired" version from the browser.
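A small invented example of the difference:

```html
<!-- Non-valid: the tags are closed out of order and the paragraph
     is never closed, so each browser must guess at a repair -->
<p><b><i>Important notice</b></i>

<!-- Valid: elements are closed in the reverse of the order opened -->
<p><b><i>Important notice</i></b></p>
```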
Students who are blind:
Students who are functionally blind
This group of students is identified as "functionally blind" because, although such students have some useful vision, they function much like blind students with regard to reading. That is, they primarily use braille because they generally do not have the ability to see details well enough to sustain the efficient reading of print. Students who are functionally blind utilize tactile, auditory, and visual methods for learning. Students in this group have a range of visual abilities from the ability to perceive light to the ability to read the print in textbooks slowly or for a short period of time. Most of these students require specialized instruction to learn to use effectively the vision that they do have (Lewis and Allman, 2000, p. 5).
Students who are legally blind
Members of the visually impaired population who have been identified as legally blind are entitled to particular government services and considerations. Students who are considered legally blind have a visual acuity of 20/200 or worse in their better eye with correction, or they have a severely restricted visual field of less than 20 degrees. Legal blindness is a medical term that does not refer to the ability of a student to use his or her vision functionally. Therefore, students who are legally blind have a wide variety of abilities and needs. (Lewis and Allman, 2000, p. 6).
Students who are totally blind
Students who are totally blind either do not have the ability to see anything or do not have the ability to use their limited vision purposefully to complete most tasks. They must use their other senses and tactile and auditory information to learn about the world in which they live. For reading, these students typically use braille as their primary literacy medium (Lewis and Allman, 2000, p. 5).
Students who are visually impaired
Those who have a vision disorder that affects their ability to function in daily life have a wide range of visual abilities, and their visual abilities can vary depending on the circumstances. Generally speaking, however, three broad categories of visual impairment are used to describe a student's level of visual impairment: total blindness, functional blindness, and low vision. In addition, the term legal blindness has been widely used as a definition for certain legal purposes, even though it can refer to students in any of the three categories (Lewis and Allman, 2000, p. 5).
Students with low vision
The distinguishing characteristic of students with low vision is the ability to see details well enough to use print as an efficient mode of reading. The primary method of learning for such students is visual. To improve their visual functioning, students with low vision often must use specialized optical devices such as magnifiers or telescopes, non-optical devices (such as bold line black markers), or environmental modifications (such as changing their seating in class to view a video). In many cases, specialized techniques must be taught to these students before visual efficiency is achieved. Like the students who fall into the functionally blind group, students with low vision represent an extremely heterogeneous population with regard to both their visual abilities and need for specialized instruction. Other terms that have been used to describe these students are partially sighted or visually impaired. Currently, the term low vision is used more widely (Lewis and Allman, 2000, pp. 5-6).
Valid HTML
Valid HTML conforms to the W3C HTML specification. Valid HTML is "error-free" in terms of the requirements of the HTML language. It will render accurate information better, faster, and in more browsers than HTML with errors. Because of accessibility features and requirements built into HTML, valid HTML generally enhances accessibility.
XHTML (Extensible Hypertext Markup Language)
A Reformulation of HTML 4 in XML 1.0.
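A brief invented fragment contrasting the two syntaxes:

```html
<!-- HTML 4 permits uppercase tags, minimized attributes,
     and unclosed empty elements -->
<INPUT TYPE=checkbox CHECKED>
<BR>

<!-- XHTML requires lowercase tags, quoted attribute values,
     and explicitly closed elements -->
<input type="checkbox" checked="checked" />
<br />
```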