<p>Oh, and to add another thing: it does date from the 1990s; the early-to-mid ’90s was the last update, if I recall correctly. Don’t you think things have changed just a tad at numerous schools since then? Even if there was any validity to his report (and there wasn’t), it is woefully out of date.</p>
<p>Alexandre is correct. The Gourman Report from 1997 is consistently corroborated by current independent sources. When the new National Academy of Sciences ranking comes out, we can check the Gourman rankings against it. I am not sure what happened to the U of Rochester history ranking. </p>
<p>The myth that Gourman did not explain his method is just that, a myth. Here is his methodology from the introduction to the 1997 edition.</p>
<p>INTRODUCTION</p>
<p>Since 1967, The Gourman Report has made an intensive effort to determine what
constitutes academic excellence or quality in American colleges and universities.
The result of that research and study is found within this book. </p>
<p>The Gourman Report is the only qualitative guide to institutions of higher education
that assigns a precise, numerical score to each school and program. This score is
derived from a comprehensive assessment of each program’s strengths and
shortcomings. This method makes it simple to examine the effectiveness of a given
educational program, or compare one program to another. </p>
<p>These deceptively simple numerical ratings take into account a wide variety of
empirical data. The Gourman Report is not a popularity contest or an opinion poll,
but an objective evaluation of complex information drawn from the public record,
private research foundations, and universities themselves. Many of the resources
employed in this research, while public, are not easily accessible. Individual
researchers attempting to collect this data in order to compare institutions or
programs would face a daunting task. </p>
<p>This book is intended for use by: </p>
<p>• Young people and parents wishing to make informed choices
about higher education.
• Educators and administrators interested in an independent
evaluation of their programs …
• Prospective employers who wish to assess the educational
qualifications of college graduates.
• Schools wishing to improve undergraduate programs.
• Foundations involved in funding colleges and universities.
• Individuals interested in identifying fraudulent or inferior
institutions …
• Citizens concerned about the quality of today’s higher education.</p>
<p>For all of these researchers, the breadth and convenience of the data in The
Gourman Report can greatly facilitate the study of higher education. </p>
<p>Method of Evaluation </p>
<p>Much of the material used in compiling The Gourman Report is internal, drawn
from educators and administrators at the schools themselves. These individuals are
permitted to evaluate only their own programs, as they know them from daily
experience, and not the programs of other institutions. Unsolicited appraisals are
occasionally considered (and weighed accordingly), but the bulk of our
contributions come from people chosen for their academic qualifications, their
published works, and their interest in improving the quality of higher education. It
attests to the dedication of these individuals (and also to the serious problems in
higher education today) that over 90% of our requests for contributions are met
with a positive response. </p>
<p>In addition, The Gourman Report draws on many external resources which are a
matter of record, such as funding for public universities as authorized by legislative
bodies, required filings by schools to meet standards of non-discrimination, and
material provided by the institutions (and independently verified) about faculty
makeup and experience, fields of study offered, and physical plant. </p>
<p>Finally, The Gourman Report draws upon the findings of individuals, associations
and agencies whose business it is to make accurate projections of the success that
will be enjoyed by graduates from given institutions and disciplines. While the
methods employed by these resources are proprietary, their findings have
consistently been validated by experience, and they are an important part of our
research. </p>
<p>The Gourman Report’s rating of educational institutions is analogous to the grading
of a college essay examination. What may appear to be a subjective process is in
fact a patient sifting of empirical data by analysts who understand both the “subject
matter” (the fields of study under evaluation), and the “students” (the colleges and
universities themselves). The fact that there are virtually no “tie” scores indicates
the accuracy and effectiveness of this methodology. So does the consistent
affirmation of the ratings in The Gourman Report by readers who are in a position
to evaluate certain programs themselves. </p>
<p>The following criteria are taken into consideration in the evaluation of each
educational program and institution. It should be noted that, because disciplines
vary in their educational methodology, the significance given each criterion will vary
from the rating of one discipline to the next; however, our evaluation is consistent
for all schools listed within each field of study. </p>
<ol>
<li>Auspices, control and organization of the institution; </li>
<li>Number of educational programs offered and degrees conferred
(with additional attention to “sub-fields” available to students
within a particular discipline);</li>
<li>Age (experience level) of the institution and of the individual
discipline or program and division;</li>
<li>Faculty, including qualifications, experience, intellectual interests,
attainments, and professional productivity (including research);</li>
<li><p>Students, including quality of scholastic work and records of
graduates both in graduate study and in practice;</p></li>
<li><p>Basis of and requirements for admission of students (overall and
by individual discipline);</p></li>
<li><p>Number of students enrolled (overall and for each discipline); </p></li>
<li><p>Curriculum and curricular content of the program or discipline
and division;</p></li>
<li><p>Standards and quality of instruction (including teaching loads); </p></li>
<li><p>Quality of administration, including attitudes and policy toward
teaching, research and scholarly production in each discipline,
and administration research;</p></li>
<li><p>Quality and availability of non-departmental areas such as
counseling and career placement services;</p></li>
<li><p>Quality of physical plant devoted to undergraduate, graduate and
professional levels; </p></li>
<li><p>Finances, including budgets, investments, expenditures and
sources of income for both public and private institutions;</p></li>
<li><p>Library, including number of volumes, appropriateness of
materials to individual disciplines, and accessibility of materials;</p></li>
<li><p>Computer facility sufficient to support current research activities
for both faculty and students;</p></li>
<li><p>Sufficient funding for research equipment and infrastructure; </p></li>
<li><p>Number of teaching and research assistantships; </p></li>
<li><p>Academic-athletic balance.</p></li>
</ol>
<p>Specific information about the data used to rank institutions and programs is
available in Appendix A and Appendix B.</p>
<p>Yes, I have seen all that. And when the UPenn researcher contacted the people supposedly contacted by Gourman, none of them had in fact been contacted. Also, saying what factors you use without saying how you use them is not explaining your methodology. Finally, again, using grad-school-type criteria (how does the NAS in any way speak to history, btw?) says little about the undergrad quality or experience. The very idea of being able to “rank” undergraduate disciplines in most areas is preposterous. There are so many things wrong with it that I wouldn’t know where to start, but it has been discussed at length elsewhere. I am not going to go through it again. At least you agree it is out of date.</p>
<p>But hey, anyone should feel free to use the Gourman report all they want. It is a faulty premise to begin with, using these kinds of reports to pick where one chooses to go to school for undergrad. If they want to make that kind of mistake and pick a school based on the supposed strength of a single department, that is their problem.</p>
<p>The NAS Assessment of Grad Programs will include history programs. Undergrad history programs are not all equal. MIT and Yale would not offer the same experience in history.</p>
<p>The Gourman Report rankings are a useful piece of information when combined with other sources. I would never choose a college based on only one source of information. You should always look at the big picture.</p>
<p>fallenchemist, saying that there isn’t a strong correlation between graduate rankings and undergraduate quality is incorrect. Undergrads have access to virtually all the faculty and resources that make graduate programs strong. </p>
<p>In my opinion, a strong graduate program necessarily means strong undergraduate offerings, but the absence of a strong graduate program does not necessarily mean the opposite. As you aptly point out, LACs and many quasi-LACs (Brown, Dartmouth, William and Mary, etc.) provide great undergraduate educations (on par with the top research universities), particularly in non-technical fields. That is why I recommended that the OP not look at rankings but choose a good university that fits her/his needs.</p>
<p>As for the Penn researcher, I would love to see his argument against Gourman when every other ranking out there copied him. Gourman’s first rankings came out in 1988. His 1997 rankings were his 10th edition and were virtually identical to his 1st edition rankings. The USNWR and the NRC mimic his findings.</p>
<p>Prove it, if you please. If an LAC can provide a strong undergrad education in these fields without all the fancy resources, then why not a more undergraduate-focused research university? And since you are obviously saying that is the case, how do you know the undergrad experiences at the strong research universities are better than the others? You don’t, because you can’t.</p>
<p>But this is a useless argument. You two can go right on believing that undergraduate programs can be measured for quality, whether by strength of graduate programs or otherwise. I think that is absurd on its face.</p>
<p>Except for intro courses, college history classes tend to home in on a period or angle. You don’t have to decide yet, but it may help you. </p>
<p>Evaluating is not about the most Ivy PhDs. It’s about finding the ideal, empowering experience for you. Start with the course catalog and the department’s web info. Some classes may be taught by another dept’s faculty, or could combine angles: Islamic World may be taught by a religion prof; African history could be designed to include anthropology; medieval lit could combine a gender angle. Does this work for you? </p>
<p>Don’t be bewitched by faculty with “top name” PhDs. It’s better to understand whether their alma mater is tops in the prof’s specific field. Also, know that a fab prof may only teach jr-srs or grad students. You may not have access. He or she may teach only rarely, if at all.</p>
<p>Grant money? Some outside $$$ sound impressive, but are meant to cover the prof’s research off-campus for some period, not hiring research assistants. At some schools, RAs are paid from the dept budget (or work-study), not outside grants. In some programs, there may be fewer RA-ships than students competing for them. Etc. Good luck.</p>
<p>Let me add this: rankings are media dolls. They serve a purpose for the author(s): attention, fame, income. They tend to try to apply statistical processes to a subject that is not statistically oriented.<br>
The strength of a grad program in no way guarantees the strength of undergrad. Sorry. Sure, grad school is a select group of students, working closely (in theory) with top professor specialists. Undergrad can be huge classes, more general in nature, with the kid sitting next to you texting, or who chose a history major cuz he had to choose something. In some cases, TAs run the large study groups… and set grades.</p>
<p>I agree with you, lookingforward.</p>
<p>@collegehelp: Using MIT and Yale is a red herring. Few people go to MIT for a history degree, I rather suspect. Maybe History of Science, lol. And I never said the experiences would be the same. That’s absurd. The question is: would the quality of a history education at Yale be better than one from Chicago, or Wash U in St. Louis, or UCLA? You cannot answer that, and no one can. Expanding on what lookingforward says, if an 18-year-old without a day of college under their belt can be 100% sure they want to study the history of the Austro-Hungarian Empire (although various studies have shown that between 50% and 70% of students change majors at least once, and that is majors, not just topics), then sure. If Yale has an expert in that area, by all means that would be a great school to pick, I am sure. Assuming, of course, they are smart enough and like that environment and don’t mind the lack of big-time sports and so on and so forth.</p>
<p>OP, you also might look at my post in this thread:</p>
<p><a href="http://talk.collegeconfidential.com/other-college-majors/924594-ranking-history-departments.html">http://talk.collegeconfidential.com/other-college-majors/924594-ranking-history-departments.html</a></p>
<p>Regarding the G methodology quoted by collhelp: on close reading, it says very little that proves the methods applied were academically sound. It counts on our assuming that it is technically valid because it says so. </p>
<p>“…rating of educational institutions is analogous to the grading of a college essay examination.” Oops! Essay grading is inherently subjective. Even if you gain/lose pts for particular details, it is ultimately a prof’s opinion.</p>
<p>“What may appear to be a subjective process is in fact a patient sifting of empirical data by analysts who understand both the “subject matter” (the fields of study under evaluation…)” “Empirical”: relying on experience or observation alone, often without due regard for system and theory. </p>
<p>I was once an “analyst” sifting empirical data for a prominent guidebook. I “understood” the subject matter. I was 24- I opened survey envelopes and made piles. </p>
<p>“… and the “students” (the colleges and universities themselves).” OOOPS! Don’t you know how my college got ranked top-5 in category X? 20 students/faculty answered the survey. Not scientific.</p>
<p>“The fact that there are virtually no “tie” scores indicates the accuracy and effectiveness of this methodology.” The absence of ties says ZERO about accuracy. I find it worrisome, in terms of statistical probability.</p>
<p>And so on. And, I can jump on USNWR, too, but they admit their flaws.</p>
<p>The whole thing about ties was BS anyway. The researchers had a field day with that. I don’t remember the details now, but the results were so statistically improbable as to be laughable. It was something like every single school was separated by 0.01 or whatever. You are right, lookingforward, there should have been ties. It was obvious Gourman just made it up, or fudged the data enough to break ties, thinking ties would look suspicious, when in fact it is the absence of ties that is suspicious. And that line about empirical data is a howler. He never tells us what the data is or how it was derived, I don’t think.</p>
<p>I’ll see if I can find that article again.</p>
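<p>To put rough numbers on the no-ties point, here is a minimal birthday-problem sketch in Python. The inputs are assumptions for illustration only (roughly 100 ranked schools and scores on a 2.01 to 5.00 grid in 0.01 steps, i.e., 300 possible values), and treating the scores as independent and uniform is an idealization, not a claim about Gourman’s actual procedure.</p>
<pre><code>from math import prod

def p_no_ties(n_schools: int, n_scores: int) -> float:
    # Birthday-problem probability that n_schools independent scores,
    # each uniform over n_scores equally likely values, are all distinct.
    return prod(1 - k / n_scores for k in range(n_schools))

# Assumed-for-illustration figures: ~100 ranked schools, a 2.01-5.00
# score grid in 0.01 steps (300 possible values).
print(p_no_ties(100, 300))  # ~1e-8: a tie-free list is wildly improbable
</code></pre>
<p>Even granting a much finer score grid, the chance of zero ties across that many schools stays negligible, which is presumably the point the researchers were making.</p>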
<p>Pay ZERO attention to Gourman Report and Ruggs Recommendations. It is the same elitist crowd. </p>
<p>History departments at many colleges are outstanding, especially at private schools. </p>
<p>For example, missing from Gourman and Ruggs is Fordham. And Fordham’s History Department is stellar with some incredible profs with amazing credentials (Ivy).</p>
<p>Those top schools that Gourman recommends produce some top level PhD’s and THOSE PROFS then go out and teach at many LAC’s or low first tier/second tier schools.</p>
<p>“Those top schools that Gourman recommends produce some top level PhD’s and THOSE PROFS then go out and teach at many LAC’s or low first tier/second tier schools.”</p>
<p>I tend to disagree with that emphatically. While some graduates of the top PhD programs inevitably end up at LACs, the best go into post-doc programs at other major research universities and then take tenure-track positions at AAU universities. The best young history professors are going to end up in departments where primary research is valued and they’re given the opportunity to mentor doctoral candidates.</p>
<p>Sorry, lenny. Fact is: most new liberal arts PhDs, even from great institutions, struggle to find permanent employment anywhere. Many are itinerant: a class here, maybe another there. Some go on to other fields. Go to one of the respected academic media to view the dialogue, e.g., the Chronicle of Higher Ed. Thus, many who end up at good/best LACs are hand-picked because of their academic qualifications. Second, there is an ongoing debate, among faculty as well as their employers, about what makes the best college teachers: research focus or teaching focus. It is a struggle to do justice to both. Yes, many PhDs of any age would prefer to mentor doctoral students, BUT those jobs are few and far between. For those of us who routinely note openings in the Chronicle, etc., it is not encouraging. And profs are not machines. Many prefer the atmosphere of a LAC: smaller class size, closer to the source of admin policies, quality of life, housing affordability, etc. All you need do, to determine the academic backgrounds of LAC profs, is look at their bios on their college websites.</p>
<p>The Gourman Report last came out in 1997. And no, it never published its methodology, which has apparently produced some statistical impossibilities. </p>
<p>US News graduate history rankings are 100% reliant on peer assessment surveys. The response rate for the surveys about history programs was an appalling 23%. No actual statistical data is included in its methodology.</p>
<p>Just know what you are getting when you look at these things.</p>
<p>The Gourman Report DOES contain ties. Another myth.</p>
<p>Here are the latest NRC rankings in history based on “regression” 5th percentile (whatever that means).
COLUMBIA UNIVERSITY IN THE CITY OF NEW YORK
JOHNS HOPKINS UNIVERSITY
UNIVERSITY OF CALIFORNIA-BERKELEY
HARVARD UNIVERSITY
PRINCETON UNIVERSITY
STANFORD UNIVERSITY
UNIVERSITY OF CALIFORNIA-LOS ANGELES
UNIVERSITY OF CHICAGO
UNIVERSITY OF PENNSYLVANIA
YALE UNIVERSITY
UNIVERSITY OF MICHIGAN-ANN ARBOR
NEW YORK UNIVERSITY
UNIVERSITY OF NORTH CAROLINA AT CHAPEL HILL
RUTGERS THE STATE UNIVERSITY OF NEW JERSEY NEW BRUNSWICK CAMPUS
OHIO STATE UNIVERSITY MAIN CAMPUS
UNIVERSITY OF MINNESOTA-TWIN CITIES
DUKE UNIVERSITY
NORTHWESTERN UNIVERSITY
INDIANA UNIVERSITY AT BLOOMINGTON
CORNELL UNIVERSITY
UNIVERSITY OF WISCONSIN-MADISON
UNIVERSITY OF CALIFORNIA-SANTA BARBARA
UNIVERSITY OF NOTRE DAME
BRANDEIS UNIVERSITY
BROWN UNIVERSITY
UNIVERSITY OF ARIZONA
VANDERBILT UNIVERSITY
PENN STATE UNIVERSITY
UNIVERSITY OF ILLINOIS AT URBANA-CHAMPAIGN
UNIVERSITY OF CALIFORNIA-SAN DIEGO
UNIVERSITY OF VIRGINIA
ARIZONA STATE UNIVERSITY
UNIVERSITY OF CALIFORNIA-DAVIS
UNIVERSITY OF IOWA
UNIVERSITY OF TEXAS AT AUSTIN
TUFTS UNIVERSITY
EMORY UNIVERSITY
UNIVERSITY OF ROCHESTER
UNIVERSITY OF WASHINGTON
GEORGETOWN UNIVERSITY
NORTHEASTERN UNIVERSITY
UNIVERSITY OF SOUTH CAROLINA COLUMBIA
MICHIGAN STATE UNIVERSITY
UNIVERSITY OF MASSACHUSETTS AMHERST</p>
<p>Of the top 45 schools in the new NRC history ranking, 36 are also in the Gourman list. The correlation between the Gourman ranking and the NRC ranking is +0.67, which is pretty high. I wouldn’t expect a perfect correlation, because Gourman ranked undergrad and the NRC ranked grad programs.</p>
<p>This is further validation of the Gourman rankings. If the rankings are valid then all the debate about method is pointless.</p>
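<p>For what it’s worth, a rank correlation like that +0.67 is easy to sanity-check. Below is a minimal sketch of a plain Spearman rank correlation in Python (no tie handling); the five-school rank vectors are made-up toy data, not the actual Gourman or NRC positions.</p>
<pre><code>def spearman(xs, ys):
    # Spearman rank correlation: the Pearson correlation of the two
    # rank vectors (assumes no tied values).
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0.0] * len(vals)
        for rank, i in enumerate(order, start=1):
            r[i] = float(rank)
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Toy example: positions of five hypothetical schools in each list.
gourman_positions = [1, 2, 3, 4, 5]
nrc_positions = [3, 1, 2, 4, 5]
print(spearman(gourman_positions, nrc_positions))  # 0.7 for this toy data
</code></pre>
<p>Run it on the actual positions of the schools that appear on both lists and you could reproduce (or refute) the reported figure.</p>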
<p>Face it, Gourman used criteria like age of the institution, number of students in the program, quality of physical plant, quality of the administration, number of teaching assistants, etc- and this little gem: academic-athletic balance.</p>
<p>Are we to assume that an old school with a new liberal arts building and 100 majors is superior to some other? Despite any legit criteria, the presence of these is a red flag.</p>
<p>A few decades ago, I transferred from a university ranked in the teens by Gourman to a university ranked well within the top ten. My transfer application stressed the strength of “new school’s” programs versus those of “old school”.</p>
<p>In retrospect, I can’t say that there was much of a difference in quality between the two schools, at least within my powers of discernment.</p>