My morning went quite well… For about 90 seconds.
The first thing I did was check my messages for news on House Bill 399, the legislation that was inspired by the work of the DPAS II Advisory Sub-Committee. Upon seeing that it had been passed with amendments, I was thrilled. At 2:30 am I had been expecting amendments, though I did not see the amendments that sent the bill back to the House, where it was approved in the wee small hours of the night, until they were linked in a blog post by Exceptional Delaware. Even the inimitable News Journal reporter Matt Albright hadn't gone into the depth of explanation that, when I read the amendment, sent me from excited to incensed in a very short span of time.
Frankly, I expected the amendment clarifying the administrator’s role in approving the goals set, and I don’t really have an issue with that because a) from what I understand, it rarely happens that the teacher/specialist and evaluator don’t agree and b) the admin already has that role. In fact, that was already in the language of the bill, but to my understanding there needed to be stronger language to address the concerns that it was too vague. I was pleasantly surprised to read Senator Bryan Townsend’s amendment, which protects students from the possibility of being victim to increased testing as an inadvertent outcome of the changes.
However, upon reading the actual amendment submitted by Senator Sokola, I realized that the language turns the entire set of recommendations into a pilot program. Not piloting the algorithm part of the recommendations, but turning virtually the entire contents of the bill into a pilot. The key parts of the Sub-Committee's recommendations are completely stripped from this legislation. In fact, the Sub-Committee specifically stated that this should not BE a pilot, aside from the mathematical component to ensure it was valid and reliable.
Just as troubling, there is a provision for adding student and parent surveys to Component IV, Professional Responsibilities. As aggravated as I am by the amendment and how this all went down, especially because the bill was sent back to the House so late, after intense WEIC discussions and votes, that no one really had a chance to digest the information, I'm actually more frustrated by this survey bit.
When I went through a messaging training session six years ago, one thing that stuck in my mind is how information can easily be manipulated based on the willingness of people to believe something, regardless of whether it is true. Essentially, information can be true and believable, true and not believable, false and believable, and false and not believable. As an example of this, there is a satire going around about the dangers of dihydrogen monoxide. Because of the juxtaposition of scientific terminology and outright fear-mongering, playing to the basest fears of people (what we eat and drink is poisoning us), there are people who believe water is dangerous to drink. In case you aren’t following, dihydrogen monoxide = water, simple H2O. In this case, the information is false but completely believable.
This is one way in which people in positions of power manipulate others to believe in something that is not true. Often one can tell this has happened because the individuals being manipulated will vigorously and with absolute certainty defend a position that is provably false. It helps that we as people hear what we want to hear, especially when it comes from a person in a perceived position of power and/or with access to information others do not have.
Before I continue, I’d like to state for the record that there are ways in which this manipulation is used that are entirely harmless. For instance, a slightly misleading headline that gets the viewer to read the article, or the time I told my daughter that she had a lie dot on her forehead, which is how I always knew when she was lying, though the truth was she would immediately cover her forehead every time she told a lie. (That and, as a parent, I rarely ask a question I don’t already know the answer to.) I’m in no way saying that the individuals being manipulated are weak, less intelligent than other people, not well-intentioned, or unwilling to be informed. I’m also not saying that those doing the manipulating are bad people; they may genuinely believe in what they are saying and doing, or they are trying to right a wrong, or get other people involved in a movement. This is the very foundation of politics, in which each side tries to prove that they are right and the other is wrong, when the reality is somewhere in the gray.
All that said, let me address the issue around parent and student surveys as part of an educator evaluation system.
This is a clear case of something being believable as having an impact, but not being true when one scratches beneath the surface.
We are going to ignore, for now, the fact that this legislation gives absolutely no guidance to how the surveys should be created, who should receive them, how they should be disseminated and collected, who will review them to collect the data for the evaluation, what types of questions should be asked, or exactly how the new data will fit into the existing Component IV criteria. We shouldn’t ignore that, but we are going to. For now.
Let’s begin with the very real fact that all sorts of surveys are given, and that the data gleaned from those surveys can be used for overall school evaluations. As a parent of three school-aged children in the Red Clay School District, I can honestly say I promptly return every single survey sent home, filled out in its entirety, and I immediately fill out any surveys that are emailed to me. This data is important, and I want to make sure it is counted. Based on the number of follow-up emails from the teachers and administrators at the school imploring us and reminding us to return the surveys, not many parents do.
This is concerning. How will we guarantee a response rate from parents sufficient to include surveys in the educator evaluation system? Furthermore, not all teachers and specialists work with the same types or numbers of students. For instance, a guidance counselor would be responsible for a very large portion of the total student population (hundreds of students), a mathematics teacher might have only 90 students for the entire school year, and an elective teacher might see more than 200 students throughout the year. Even assuming a 100% response rate, the numbers are so diverse and the spread so wide that there is no way to guarantee the validity of the data.
Additionally, in schools where there is a high rate of absenteeism, transience, homelessness, foster care, or any of a myriad of other circumstances, how likely is it that a representative sampling would be acquired to make the data meaningful? Would there be a minimum number of surveys set for the data to count? What happens if that minimum isn’t met? What happens if there are more? Does someone pick and choose what data gets included? In theory, all the data would be averaged and used, but then we are back to the concern about the dilution of the average for educators who have high numbers of students versus those with low numbers of students.
What happens in the (albeit rare) case that a parent requests their child not have a specific teacher, yet the school is unable to accommodate that request? Perhaps the parent knows this teacher is a bad match for the child. Perhaps the child has a medical reason he should not be in physical education, or is allergic to the class pet. Maybe that parent disagreed with the school’s restrictive bathroom pass policy and disliked the teachers who enforced it. Now the parent is predisposed to giving an average or even negative rating on the survey, not necessarily because of a lack of integrity, but because they genuinely had a bad experience.
True story: With the birth of my first child, I was at Christiana Hospital. I had a horrific experience there, and subsequent births were at St. Francis. However, Christiana is my go-to hospital for everything else: my gallbladder removal, my thyroidectomy, and even trips to the emergency room. And tons of people have had wonderful, amazing experiences there. To my knowledge, no one’s employment was ever put in jeopardy because of my negative survey rating, and therein lies the difference. You might argue that I could just take my business elsewhere, but keep in mind that, in Delaware at least, so can parents.
Let’s take a quick foray into the student survey side. My daughter LOVED her third grade teacher. Both of my school-aged boys have loved ALL of their teachers. Does that mean that all the teachers my boys have had were amazing, and all but one of my daughter’s teachers were terrible? My oldest two had the same exact first grade teacher, so even leaving my opinion aside, I think it’s obvious that the answer isn’t that the teacher was good for one child and bad for the other.
Then there’s the age thing. For my pre-k son, there’s recess and finger painting and drawing and reading and building and friends… What’s not to love about school? For my second grade son, everything is doable as long as he focuses and works and checks his work. For my fourth grade daughter, math is boring, writing is a real pain, but reading is super awesome. If we were to survey those three kids about their school experiences, I’m wondering what questions might be asked of them that, a) they’d understand well enough to answer usefully, and b) might give insight into the quality of the teacher.
Expand that survey process out to other educators. How does the high school student who rarely uses the library media center complete a survey about the effectiveness of the librarian? How does the student who has never been to the nurse evaluate the nurse’s job performance? What about paraprofessionals who only work with one student in a school year? Educational diagnosticians? Disciplinary deans? For that matter, how does a parent rate those educators? Based on what knowledge and experience?
And the parent of an elementary student likely deals with one or two classroom teachers and a handful of specialists interacting with the student in a year. Your average secondary student will be interacting with 10 or more teachers and specialists throughout the year. Is each parent and each student going to rate each teacher and each specialist? Can you imagine us going from “just” having weeks devoted to testing in schools to having weeks devoted to testing AND weeks devoted to surveys?
It is completely believable that parent and student surveys should count towards an educator’s evaluation. It is believable because this is a business model. I go to the Firefly Music Festival, I receive a product and a service, I submit an evaluation expressing my opinion about the product and service. Each time I go to Firefly I’m going to have a different experience, and as a result the evaluation I submit will reflect a different level of satisfaction. I can make the decision to attend or not attend the festival, but my poor evaluation is not going to cause the folks who run it to lose their jobs. The goods and services offered at Firefly are more holistic, more rounded, than what could be accurately reflected in a survey, even keeping in mind that surveys are often more likely to be filled out by the extremely satisfied and the extremely dissatisfied, thus skewing the results for the average individual.
Let me sum it up this way: My child is not a backpack full of cash. My child is not an interchangeable widget. My children, all four of them, are individual little people with personalities and opinions and work ethics and social issues, just like all children are. My children are going to have experiences that are good and experiences that are bad, and unless there is serious harm being done in the classroom (which is likely going to be known by the administration more concretely than I could make it on a survey), having interactions with authority figures we don’t necessarily like is actually a good life experience.
As for me, I’m at work when my kids are in school, and I don’t have time to go observe the classroom to collect evidence of what’s going on in there. I do not pretend that I’m an expert in how other teachers should teach or run their classrooms, and for me to impose my opinions on other educators is condescending and inappropriate. If I have an issue, I approach the teacher directly, or seek other support services offered by the school and district.
If you are looking for a parent trigger, that’s a different conversation. As for surveys, perhaps it’s my own lack of creativity, but I cannot see how it will be beneficial or effective at the implementation level. Finally, by the time a survey is submitted to evaluate the educator, it will be the end of the school year with no time or ability for the educator to receive meaningful feedback and make substantive change. And what would an improvement plan look like when generated by poor survey ratings?
These are all questions and issues I strongly believe should have been asked, discussed, and answered before this type of language was ever included in a piece of legislation with the potential to end someone’s career.