In any case, school systems that didn't like the idea were free not to apply for Race to the Top grants. MCPS didn't.
• Estimates from VAMs should always be accompanied by measures of precision and a discussion of the assumptions and possible limitations of the model. These limitations are particularly relevant if VAMs are used for high-stakes purposes.
o VAMs are generally based on standardized test scores, and do not directly measure potential teacher contributions toward other student outcomes.
o VAMs typically measure correlation, not causation: Effects – positive or negative – attributed to a teacher may actually be caused by other factors that are not captured in the model.
o Under some conditions, VAM scores and rankings can change substantially when a different model or test is used, and a thorough analysis should be undertaken to evaluate the sensitivity of estimates to different models.
• VAMs should be viewed within the context of quality improvement, which distinguishes aspects of quality that can be attributed to the system from those that can be attributed to individual teachers, teacher preparation programs, or schools. Most VAM studies find that teachers account for about 1% to 14% of the variability in test scores, and that the majority of opportunities for quality improvement are found in the system-level conditions. Ranking teachers by their VAM scores can have unintended consequences that reduce quality.
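For anyone wondering what a value-added estimate actually looks like mechanically, here is a minimal sketch with synthetic data. This is an illustrative residual-gain model only, not the ASA's or any state's actual specification (real VAMs use far more elaborate mixed models): regress current scores on prior scores, then call a teacher's mean residual their "value added."

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 4 teachers, 25 students each.
n_teachers, n_per = 4, 25
teacher = np.repeat(np.arange(n_teachers), n_per)

# Prior-year score drives most of the current score; the "true"
# teacher effects are deliberately small, consistent with studies
# finding teachers explain a minority of score variance.
prior = rng.normal(50, 10, size=n_teachers * n_per)
true_effect = np.array([-2.0, -0.5, 0.5, 2.0])
current = (5 + 0.9 * prior + true_effect[teacher]
           + rng.normal(0, 8, size=prior.size))

# Step 1: least-squares regression of current scores on prior scores.
X = np.column_stack([np.ones_like(prior), prior])
beta, *_ = np.linalg.lstsq(X, current, rcond=None)

# Step 2: a teacher's "value added" is the mean residual of that
# teacher's students -- growth beyond what prior scores predict.
residuals = current - X @ beta
vam = np.array([residuals[teacher == t].mean() for t in range(n_teachers)])
print(np.round(vam, 2))
```

Note that with 25 students per teacher and realistic score noise, the standard error of each estimate is large relative to the true effects, which is exactly the precision problem the first bullet above is about.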
It is unknown how full implementation of an accountability system incorporating test-based
indicators, such as those derived from VAMs, will affect the actions and dispositions of teachers,
principals and other educators. Perceptions of transparency, fairness and credibility will be
crucial in determining the degree of success of the system as a whole in achieving its goals of
improving the quality of teaching. Given the unpredictability of such complex interacting forces,
it is difficult to anticipate how the education system as a whole will be affected and how the
educator labor market will respond. We know from experience with other quality improvement
undertakings that changes in evaluation strategy have unintended consequences. A decision to
use VAMs for teacher evaluations might change the way the tests are viewed and lead to changes
in the school environment. For example, more classroom time might be spent on test preparation
and on specific content from the test to the exclusion of content that may lead to better long-term
learning gains or motivation for students. Certain schools may be hard to staff if there is a
perception that it is harder for teachers to achieve good VAM scores when working in them.
Overreliance on VAM scores may foster a competitive environment, discouraging collaboration
and efforts to improve the educational system as a whole.
There are also studies that show that value-added models do work. Here is the American Statistical Association's position statement on them:
https://www.amstat.org/policy/pdfs/ASA_VAM_Statement.pdf
Anonymous wrote:Value-added doesn't work because there are far more factors than the model can control for.
You mean that this wasn't thought through very well before they made it a requirement for Race to the Top?
There's some kind of a pattern here.
Value-added would work if they didn't include students who failed the year before (because those students would not be up to the past year's level but would be taking the next level's test). If they gave the failing students the same test from the year before, that might be fair (because they might have improved at the previous level). However, if a student was more than one level behind, they would have to go back to an even earlier test that could reflect the student's growth.
I'm sure this would all get ferreted out.
Value-added doesn't work because there are far more factors than the model can control for.
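To see concretely what "factors the model can't control for" does to the estimates, here is a small simulation. All names and numbers are hypothetical: two teachers are constructed to be equally effective, but an unmeasured factor (call it outside tutoring) is more common among one teacher's students, and a model that adjusts only for prior scores credits that teacher with the difference.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two teachers with identical true effectiveness (zero effect each).
n = 300
teacher = np.repeat([0, 1], n)

# An unmeasured factor -- say, access to outside tutoring -- that is
# more common among students assigned to teacher 1.
tutoring = rng.binomial(1, np.where(teacher == 1, 0.6, 0.2))

prior = rng.normal(50, 10, size=2 * n)
# Current score: prior achievement + tutoring boost + noise.
# Neither teacher adds anything, by construction.
current = 5 + 0.9 * prior + 6.0 * tutoring + rng.normal(0, 8, size=2 * n)

# A model that controls only for prior scores...
X = np.column_stack([np.ones_like(prior), prior])
beta, *_ = np.linalg.lstsq(X, current, rcond=None)
residuals = current - X @ beta

# ...attributes the tutoring boost to teacher 1.
vam = np.array([residuals[teacher == t].mean() for t in (0, 1)])
print(np.round(vam, 2))  # teacher 1 scores higher despite identical teaching
```

Whether real omitted factors are this strongly correlated with teacher assignment is an empirical question; the sketch only shows the mechanism by which they would bias the estimate.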
Anonymous wrote:
Value-added would work if they didn't include students who failed the year before (because those students would not be up to the past year's level but would be taking the next level's test). If they gave the failing students the same test from the year before, that might be fair (because they might have improved at the previous level). However, if a student was more than one level behind, they would have to go back to an even earlier test that could reflect the student's growth.
I'm sure this would all get ferreted out.
They, who?
Anonymous wrote:No, that's not how the test-results part of the performance evaluation works. Nobody is proposing a simple "high test scores = good teacher, lower test scores = bad teacher" performance evaluation method.
But you do agree that they are proposing to use test results as part of the performance evaluation.
No, that's not how the test-results part of the performance evaluation works. Nobody is proposing a simple "high test scores = good teacher, lower test scores = bad teacher" performance evaluation method.