According to the article "How Standardized Tests Shape - and Limit - Student Learning," standardized testing has a negative impact on student learning. The article focuses on three main areas: the changing nature of teaching, the narrowing of the curriculum, and the limiting of student learning, and how each is negatively affected by standardized testing.
Evidence provided to support this argument:
- Teachers lose between 60 and 110 hours of instructional time per year to testing.
- Mandatory curricula designed to prepare students for standardized tests limit teachers from providing instruction better suited to their students' needs. (I am experiencing this at full force this year.)
- Time spent prepping students for testing through practice tests or drill sessions further reduces the instructional time available to teachers.
- Non-tested subjects such as music, art, foreign languages, and social studies are squeezed out, depriving students of a well-rounded education.
- The curriculum is narrowed to focus on the literacy skills found on standardized tests rather than higher-order critical reading skills.
- The limitations of machine scoring for writing tests affect how writing instructional time is focused: greater emphasis is placed on the mechanical skills needed to score points rather than on the substance of the writing.
- Students designated as not proficient develop a negative perception of their own abilities.
I both agree and disagree with this article.
As a reading specialist, I see the need to administer standardized tests as indicators of student performance, particularly in reading. What I don't see is the need for them to be so high-stakes and stressful for everyone involved. A lack of good reading skills can have a direct impact on performance in other subjects, but that should not diminish the significance of those other subjects in a child's education. I also believe teachers should have more say in the tests administered to their students. Too often, these decisions are made at the administrative level by individuals who most likely have no experience with the grade they are deciding for. Tests seem to be chosen for financial reasons (fewer test choices covering broader grade ranges for purchasing power) rather than for their purpose at each individual grade level.
Last year, I worked only with K-2, so I did not use WestTest data for my instruction. I was in Hampshire County, and they had decided to no longer use DIBELS or Acuity. They had chosen STAR Reading for first through fifth grade, PASI (Phonemic Awareness Screener for Intervention) for kindergarten and first grade, and PSI (Phonics Screener for Intervention) for first and second. These tests not only provided valuable information about student weaknesses but also identified students who were advancing beyond the pace of the classroom.
I am now in a different county, working with K-4. Mineral County uses DIBELS but is no longer using Acuity (as expected, now that WestTest is no longer implemented). After administering DIBELS within just a few weeks of the start of school, my reaction was: okay, here are the kids identified as struggling, but now what do I do with them? Given the second-grade results, I decided to administer the PSI on my own to the entire second grade, and then I knew exactly what I needed to teach. As I found across all the grades, DIBELS was a good indicator for identifying struggling students but did not provide enough information to fully guide instruction. As far as I know, the county does not provide materials for further testing beyond this point, so it would be up to the classroom teacher and Reading Specialist to do so.
I administered DIBELS to the entire school within four days (with the exception of the first grade teacher, who wanted to do her own testing). I think it would be great for classroom teachers to administer these tests themselves, but as she found out, it took her almost four weeks just to do her class. A testing gap that large could render the results invalid. For individualized testing like this, I think it is much better to use a designated person, such as a Reading Specialist, than the classroom teacher. That way, students are pulled out individually and miss only 5-15 minutes of instructional time per test, versus the collective instructional time lost when the classroom teacher does the testing. Also, having one individual test school-wide would ensure more consistent results, especially for subjective areas such as the retell. Lastly, the testing is much more accurate if completed within as small a time frame as possible, especially in the K-2 grades.
In regard to kindergarten, DIBELS worked as a good indicator of students needing intervention, assessing skills such as letter naming, first sound recognition, phoneme segmentation, letter sound recognition, and the blending of those sounds. Once students identified for intervention were placed in an IPAP (Intensive Phonemic Awareness Program) group, other skill deficits became evident during instruction. While this testing easily identified students requiring phonemic awareness instruction, I am not sure that, as a stand-alone measure, it would be adequate to assess all the pre-reading skills a kindergartner needs.
While I was in Hampshire County, the kindergarten teachers no longer had access to DIBELS, so they were generating their own comprehensive assessment inventories based on the pacing of their instruction for each quarter. These tests were designed mainly by the teachers themselves, with input from the Reading Specialists. Since they were comprehensive and individualized, the tests were administered by the Reading Specialists to prevent a loss of instructional time for the teacher. During the 3rd and 4th quarters, the teachers administered the tests themselves for any students they were considering for retention. These assessments had a direct impact on the instructional plans of both the classroom teachers and the Reading Specialists, and they were used directly for progress reports to parents. This is the type of testing that is set to expand into first and second grade. My daughter attends pre-K in Maryland, and they have used this type of testing and reporting for a few years now. As both a teacher and a parent, I like the objectivity of the reporting.
For me, DIBELS isn't really as effective once you reach the middle of the year in third grade. Only one student was identified as Intensive, and a couple of others as Strategic. By this point, most students are beginning to struggle more with comprehension than with decoding and fluency, and DIBELS probably isn't the best indicator of comprehension. Ideally, I would like the opportunity to explore other testing options beyond this level.
While I was attending Frostburg State's MAT program, the reading assessment course was taught by a principal in Washington County. We attended class on Fridays at her school and administered various assessments to her students. Their county-wide choice of reading assessment was Fountas & Pinnell. I could see this being a much better choice than DIBELS for assessing comprehension in third grade and above. Since it is typically a differentiated assessment based on reading level, it would not be a standardized-type test. It would provide information on each student's instructional and independent reading levels and could be administered as needed per student.
I am also working with a group of five fourth graders. DIBELS was really of no use for instructional planning with this group. These students were identified for intervention by the classroom teacher, based mainly on classroom performance. Their MOY benchmark fluency ranges from 88 to 140 WPM. This has been the most challenging group for me to work with in the sense of knowing which skills to target. Their WestTest data also did not give me individual indicators of needed skills, just the overall concepts of what constitutes non-proficient. Since I did not work with grades above second last year in Hampshire County, I was not able to review STAR Reading data at this grade level to see whether it gives indications of skill deficits. I'm curious to know if anyone in our group is using this form of assessment.
I am very interested to see where Common Core testing will lead us. From what I have seen of Smarter Balanced during a brief professional development seminar and on the online site, I fear the students I work with are nowhere near ready for that level of testing. What I do like is the idea of computer-adaptive testing that focuses on growth rather than just a baseline.
Sorry for the super long post!!