Man Bites Dog That Bit Him
UCLA researchers analyzed the think tanks quoted by media outlets to see whether there was a bias toward left-leaning or right-leaning sources. Citation patterns are a plausible proxy for bias, so it's an interesting research approach -- but a flawed one in this case, as the WSJ points out:
In a world where Cindy Sheehan thinks she got negative coverage, it's hard to devise a method to measure media bias. The UCLA approach isn't wrong on its face; it just shows that good research requires good execution.

First, its measure of media bias consists entirely of counting the number of mentions of, or quotes from, various think tanks that the researchers determine to be "liberal" or "conservative." By this logic, a mention of Al Qaeda in a story would suggest the newspaper endorses its views, which is obviously not the case. And if a think tank is explicitly labeled "liberal" or "conservative" within a story to provide context to readers, that example doesn't count at all. The researchers simply threw out such mentions.
Second, the universe of think tanks and policy groups in the study hardly covers the universe of institutions with which Wall Street Journal reporters come into contact. What are we to make of the validity of a list of important policy groups that doesn’t include, say, the Chamber of Commerce, the National Association of Manufacturers, the AFL-CIO or the Concord Coalition, but that does include People for the Ethical Treatment of Animals? Moreover, the ranking the study gives to some of the groups on the list is simply bizarre. How seriously are we to take a system that ranks the American Civil Liberties Union slightly to the right of center, and that ranks the RAND Corp. as more liberal than Amnesty International? Indeed, the more frequently a media outlet quotes the ACLU in this study, the more conservative its alleged bias.
Third, the reader of this report has to travel all the way to Table III on page 57 to discover that the researchers' "study" of the content of The Wall Street Journal covers exactly four months in 2002, while the period examined for CBS News covers more than 12 years, and National Public Radio's content is examined for more than 11 years. This huge analytical flaw results in an assessment based on comparative citations during vastly differing time periods, when the relative newsworthiness of various institutions could vary widely. Thus, Time magazine is "studied" for about two years, while U.S. News and World Report is examined for eight years. Indeed, the periods of time covered for the Journal, the Washington Post and the Washington Times are so brief as to suggest that they were simply thrown into the mix as an afterthought. Yet the researchers give those findings the same weight as all the others, without bothering to explain that in any meaningful way to the study's readers.