The Honorable John King
Secretary
U.S. Department of Education
400 Maryland Avenue, SW
Washington, DC 20202
September 9, 2016
Re: THE INNOVATIVE ASSESSMENT AND ACCOUNTABILITY DEMONSTRATION AUTHORITY: COMMENTS SUBMITTED TO THE UNITED STATES DEPARTMENT OF EDUCATION REGARDING PROPOSED ESSA REGULATIONS
Dear Dr. King:
The Institute for Education Policy at Johns Hopkins University (“The Institute”) submits this letter and recommendations to inform the U.S. Department of Education’s (ED) implementation of the Innovative Assessment Demonstration Authority.
Background:
The Institute has been assisting Chiefs for Change and a number of its member states and districts as they consider applying for the Demonstration Authority. Our work has been informed by several nationally recognized experts in assessment design and implementation. After extensive discussions, we offer the following thoughts and a recommendation for your consideration.
Discussion:
We believe that the Demonstration Authority regulation should be constructed to encourage the development and implementation of innovative approaches to assessment and accountability that will fulfill the purposes stated in ESSA. Thus, the regulation should be written to encourage a wide range of possible innovations. In terms of assessments, the innovations could include:
• Different ways to do the same thing as current assessments (e.g., developing technology-enhanced items that yield scores closely comparable to those from tests based on selected-response items)
• Different ways to do things current assessments intend to do but do not do completely (e.g., developing performance assessments that produce evidence of higher-order knowledge and skills that current assessments do not measure well)
• Different ways to do things beyond what current assessments intend to do (e.g., providing evidence of student competency in less-standardized settings, such as individualized timing or real-world applied settings)
More importantly, innovation in assessment should be matched with innovation in intended uses that address the educational challenges identified by states and are consistent with ESSA, for example by supporting powerful instructional models (e.g., competency-based education) and/or by providing more valid assessment information (e.g., evidence of the deeper learning associated with college and career readiness).
In particular, requirements for technical or psychometric comparability should be appropriate to the intent of the innovative assessment. Greater comparability with the current standardized assessment is appropriate in the first bulleted situation above, while the second and third situations should require less comparability, since the reason for the innovative assessment is to do something different from, and better than, the current standardized assessment. Requiring the innovative assessment to be too tightly comparable, in technical terms, with the current assessment could undermine its purpose and intent.
We thus recommend that the proposed regulation be modified as follows:
(4) Provide for acceptable quality for State academic assessments under section 1111(b)(2) of the Act, including by generating results that are valid, reliable, and comparable for all students and for each subgroup of students under section 1111(b)(2)(B)(xi) of the Act, as indicated by passing Peer Review when proposed for statewide operational use. During any developmental period, the innovative assessments must be documented according to industry standards and evaluated annually. As appropriate to the claims and intended uses of the innovative assessments, the evaluation should include comparisons of results for students, schools, and districts participating in the Demonstration Authority pilot with results from other assessments (e.g., the results for such students on the State assessments). Consistent with the SEA's or consortium's evaluation plan under §200.78(e), the SEA must plan to determine comparability annually during each year of its demonstration authority period. The evaluation plan should build confidence in the suitability of the innovative assessment over time, ultimately leading to successful Peer Review. For example, an evaluation plan might progress from small pilot tests to full operational data, from internal consistency to comparisons with external criteria, and from implementation fidelity to sustainability.
Sincerely,
Dr. David Steiner
Executive Director
Johns Hopkins Institute for Education Policy
Professor, School of Education
Johns Hopkins University