If I were a college administrator, I would want to collect accurate data about my school’s performance.
(If you can’t measure it, you can’t improve it.)
This can’t be done by one or two people alone. Having gathered accurate information, deliberately reporting it incorrectly would probably require the collusion of multiple people. To avoid suspicion, it would have to be done consistently, year after year. How likely is that to happen, on a wide scale, in large communities of people who take pride in analyzing information accurately (especially when many of the same numbers are also reported to Moody’s, the Department of Education, etc.)? What seems more likely to introduce errors (or confusion) is imprecision in the CDS/USNWR instructions. If the instructions leave even a little room for more than one interpretation (or reporting option), then some schools will take advantage by following the most self-serving approach.
Does this affect the ranking results very much? For many individual ranking positions, maybe. For the overall set of top colleges, it might even affect an entire class of schools (like top state universities) enough to shift all their rankings one way or another, into or out of an arbitrarily defined tier. Has anyone here found any data, other than research production numbers, that clearly bumps Berkeley into the top 10, or Michigan/Virginia/UNC-CH into the top 20 (where the USNWR peer assessments place them)? Regardless, for most people who can attend one of these schools at in-state rates, it’s rational to treat them as top colleges when building an application list.
For many excellent students, the most important set of top N colleges shouldn’t be the USNWR T20, but the ~60 schools that claim to meet 100% of demonstrated financial need. Maybe those claims deserve at least as much scrutiny as the magazine rankings get.