Blogs & Polls: A Chaotic, Untamed Frontier
Due to our expanding global reach and unfailingly blinding insights, Calbuzz is inundated with invitations to share our unconventional wisdom with organizations in the business of politics and policy (also: plumbing fixtures and pet care products, if the money’s right).
Alas, our disciplined commitment to 18 holes a day and long naps in the afternoon makes it possible for us to accept only a tiny number of such requests, one of which was the recent 66th Annual Conference of the American Association for Public Opinion Research.
On a typically dreadful day in Phoenix (“Poor air quality is expected in the Valley again today due to the heat and sunshine,” advised weather person Sarah Walters) we joined California-based public opinion experts Susan Pinkus and Mark DiCamillo, along with several other polling honchos, to discuss and dissect “The Proliferation of Polling in the 2010 California Governor’s Race.”
Here’s our presentation, as prepared for delivery by our Department of Survey Research and Abacus Repair for Calbuzzard Phil Trounstine (handsome, limited edition leather-bound copies available for $3,500 cashier’s check or money order, plenty of free parking):
A Chaotic, Untamed Frontier
The problem with the use of polling by bloggers, web sites, news aggregators and others in the online world is that, in most corners of cyberspace, polls are not the reasoned, scientific measurement of public opinion that social scientists envision – they are just one more piece of content.
In the online world (and we’re not talking here about sites connected with legacy media like the New York Times, Washington Post, NBC News or PBS) the filtering process that once was carried out by professionals trained in newsgathering tradecraft no longer exists.
Instead, there’s a chaotic, untamed frontier in which the landscape includes everything from meticulous researchers to ideological gunslingers, from thoughtful analysts to ranters and ravers. The internet includes the good, the bad, the ugly and everything in between.
So your carefully designed, two-year survey into the lingering effects of immigration from France becomes “Frogs in America Not Jumping for Joy” at pollywog.com.
One minute it’s up on the Web and suddenly, poof, it’s gone, down the page and/or into Google and Yahoo archives in the blink of an eye. Thousands or perhaps hundreds of thousands of people may have clicked on the story and read through it for a minute or so. Who knows whether the write-up accurately reported the data? Certainly neither the margin of statistical error, the methodology nor any other AAPOR-sanctioned factoid was likely reported.
A Culture of Immediacy: Ironically, from a consumer’s point of view, that’s the best-case scenario: a legitimate, thorough, unbiased survey researcher releases a serious, methodologically sound study which is duly digested and reported.
It’s bad enough for the reporting of survey research that few, if any, web site denizens have the foggiest notion of how to read or interpret a survey. What’s worse is that survey researchers with an agenda – political, commercial, ideological, whatever – are pretty much in the driver’s seat on the web.
Our information universe is now what Bill Kovach and Tom Rosenstiel of the Project for Excellence in Journalism call the “culture of immediacy,” in which power has been shifted away from journalists to sources of information they rely on to fill airtime and web pages.
In what they call the “journalism of assertion,” news sources – like pollsters – “are in a position to dictate the terms of use” of the information they are peddling.
In our “culture of immediacy,” Kovach and Rosenstiel write in their new book “Blur,” there is something nearly akin to a physical law: “Speed, in news, is the enemy of accuracy. The less time one has to produce something, the more errors it will contain.”
When you’re editing and publishing a fast-paced political news and analysis web site like Calbuzz, it’s vital to know what polling you can trust and what polling you have to take with a huge hunk of salt. Otherwise, you’re in danger of passing along to your readers what we’ve dubbed “crapchurn.”
This is especially true if you’re sensitive to the effects of polling on political races.
With more than a combined 60 years’ experience covering California politics, my partner and I understand that political polling can help shape the field of contestants, dramatically impact fundraising, color public opinion about who’s viable and who’s not, and seriously affect a campaign’s dynamics, strategy and tactics. Of course, modern campaigns do a lot of their own polling, but publicly released surveys can be just as important in all these ways.
The key criteria: Your average citizen – even a sophisticated one – generally doesn’t know the difference between a Rasmussen or Survey USA poll and a survey produced by the Field Poll or the Los Angeles Times and USC. Sadly, it seems, plenty of our colleagues in cyberspace have no clue about the difference, either.
That’s why, back in October 2009, we explained the key information we wanted to know about any poll:
– Who paid for the poll and why was it done?
– Who did the poll?
– How was the poll conducted?
– How many people were interviewed and what’s the margin of sampling error? (A quick sketch of that calculation follows this list.)
– How were those people chosen? (Probability or non-probability sample? Random sampling? Non-random method?)
– What area or what group were people chosen from? (That is, what was the population being represented?)
– When were the interviews conducted?
– How were the interviews conducted?
– What questions were asked? Were they clearly worded, balanced and unbiased?
– What order were the questions asked in? Could an earlier question influence the answer of a later question that is central to your story or the conclusions drawn?
– Are the results based on the answers of all the people interviewed, or only a subset? If a subset, how many?
– Were the data weighted, and if so, to what?
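On the margin-of-sampling-error item, the underlying arithmetic is simple enough to sketch. What follows is a minimal illustration in Python, assuming a simple random sample and the conventional 95 percent confidence level; weighted or clustered designs carry larger effective margins:

    import math

    def margin_of_error(n, p=0.5, z=1.96):
        # Approximate 95% margin of sampling error for a simple random sample.
        # n: number of interviews
        # p: assumed proportion; 0.5 gives the widest (most conservative) margin
        # z: z-score for the confidence level (1.96 is roughly 95%)
        return z * math.sqrt(p * (1 - p) / n)

    for n in (400, 610, 1000):
        print(f"n={n}: +/- {margin_of_error(n):.1%}")

That loop prints roughly plus-or-minus 4.9, 4.0 and 3.1 points respectively – note that 610 interviews, the sample size of the PPP survey discussed below, works out to the 4-point margin its write-up reported.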
I cannot tell you how many times, during the 2010 governor’s race in California, Calbuzz threw cold water on snapshot, IVR (interactive voice response) polls or interest-group surveys that raced through the blogosphere breathlessly saying so-and-so was now ahead when, in fact, the quote-unquote survey being cited was utter hogwash.
Give us a break: We are fairly unflinching about separating good polling from agenda-driven polling. This is especially true of polls that use IVR, ignore cell-phone-onlys and constantly fiddle with the partisan composition of their sample. For example, here’s what we wrote in September 2010 about one such survey:
“Here’s all you need to know: the new Rasmussen poll has Whitman beating Brown among liberals 62-35%. That’s absurd. At the same time a poll from CNN, done by Opinion Research Corp., has … liberals voting for Brown 80-16%, which sounds about right.
“Rasmussen also has Whitman beating Brown 62-31% among voters 65 and older, compared to the CNN poll which has Brown over Whitman 50-47% in the same age group. Another stupid Rasmussen result.
“Mark our words: when it gets down to the wire, and reputable pollsters have weighed in with serious results from legitimate polling, outfits like Rasmussen and Survey USA will post surveys right on the money. However they get there.”
Soon after, by the way, Rasmussen tweaked the partisan make-up of its sample – going from a 2-point Democrat-over-Republican spread to a 6-point spread – and came out with a survey showing Brown ahead of Whitman 50-44%. (Of course, this was after Pew Research had reported that automated polling without cell phones produced a 4-to-6 point Republican bias.)
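To see how much that kind of fiddling with partisan composition matters, here’s a back-of-the-envelope sketch. The within-party vote splits and the size of the independent bloc are our own illustrative assumptions, not Rasmussen’s actual internals, which were never fully published:

    # Hypothetical within-party vote shares for Brown; illustrative only.
    splits = {"D": 0.85, "R": 0.10, "I": 0.45}

    def topline(mix):
        # Weighted Brown share given a partisan mix (shares summing to 1).
        return sum(mix[p] * splits[p] for p in mix)

    # Independents held at 25% in both scenarios, for illustration.
    mix_d2 = {"D": 0.385, "R": 0.365, "I": 0.25}  # D+2 spread
    mix_d6 = {"D": 0.405, "R": 0.345, "I": 0.25}  # D+6 spread

    print(f"Brown share at D+2: {topline(mix_d2):.1%}")  # about 47.6%
    print(f"Brown share at D+6: {topline(mix_d6):.1%}")  # about 49.1%

Same interviews, different topline: moving the party spread four points moves Brown about a point and a half without a single voter changing his mind. That’s why the partisan mix a pollster assumes deserves as much scrutiny as the horse-race number it produces.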
Nevertheless, Rasmussen never could get away from its apparent GOP bias and on October 29 – nine days before the election – declared the race a “tossup,” with Brown leading Whitman 49-45%. The final, by the way, was Brown 54, Whitman 41, so Rasmussen was both 5 points low for Brown and 4 points high for Whitman.
But my point is not to demonstrate how flawed Rasmussen surveys are. Nate Silver, Mark Blumenthal, Mark DiCamillo and others far more erudite and scholarly than I can handle that.
He’s up! He’s down! Oh, never mind: The problem is that too many writers and publishers on the web are simply feeding these kinds of faulty surveys through to their readers because they so desperately need new content that they’ll publish damn near anything.
Take, for example, the Huffington Post, where the results of a Public Policy Polling survey were reported just the other day under the headline “Trump Collapses in Republican Primary Poll.”
We learned that Donald Trump was now drawing just 8 percent of the potential Republican primary vote, down from 26 percent in PPP’s previous survey in April.
What we know from the Huffington Post is that PPP is a Democratic firm and that the poll was conducted between May 5 and May 8 among 610 “usual” Republican primary voters, using automated telephone technology, with a margin of error of 4 percentage points.
Follow the hyperlink from the story to PPP’s release and you’ll find no more information about how voters were sampled, where they were located, how they were contacted (except that we know it was some sort of telephonic instrument) or how “usual” Republican primary voters were identified. You will, however, find this note:
This poll was not paid for or authorized by any campaign or political organization. PPP surveys are conducted through automated telephone interviews. PPP is a Democratic polling company, but polling expert Nate Silver of the New York Times found that its surveys in 2010 actually exhibited a slight bias toward Republican candidates.
In other words, they’re partisan and biased, but not in their own interest. That’s supposed to reassure us that their polling can be trusted. And we only found that out by clicking on the hyperlink and reading through PPP’s release.
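For readers curious what a finding like Silver’s rests on, here is a deliberately simplified sketch of a house-effect calculation; his actual method adjusts for timing, sample size and more, and these poll-versus-result pairs are invented for illustration:

    # Hypothetical (poll margin, actual margin) pairs, in points;
    # positive numbers mean the Democratic candidate is ahead.
    races = [(3, 5), (-2, -1), (7, 8), (0, 2)]

    # Signed error: poll margin minus actual result. A negative average
    # means the pollster's numbers ran more Republican than the outcomes.
    errors = [poll - actual for poll, actual in races]
    house_effect = sum(errors) / len(errors)
    print(f"average signed error: {house_effect:+.1f} points")  # -1.5 here

A consistently negative number like that is the “slight bias toward Republican candidates” Silver describes; it says nothing about whether any one poll is right.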
Take a blogger to lunch: Few web sites have on staff anyone who understands the difference between random digit dialing, voter lists and IVR. They have little interest in, or time for, understanding how likely voters were identified in a survey. They have only the vaguest notion of margin of error, question order or weighting.
In short, any web story written about your legitimate survey is unlikely to include this kind of information. You may be able to get some of it included if you speak to the actual human being who will be writing up the survey, but at the very least make sure you include all of it in the release the story links to.
We can’t expect online news sites to put information into their stories that explains methodology. But web sites that report on surveys should be encouraged at least to link to your release, in which these kinds of questions are answered.
My advice – take a blogger to lunch. They’re poor and will be happy for the free meal. Explain a bit about what makes a poll reliable and what should set alarm bells ringing. And when they run surveys that you suspect are faulty, comment on their sites. Your comments will live on in cyberspace along with the original story and just might help future readers separate the wheat from the chaff.
Another huge problem with all kinds of polling, legitimate and not, is that the people being questioned don’t have a clue as to what they are talking about. For instance, a WaPo story yesterday was headlined “Poll: More fear U.S. debt than default.” Now I ask you, how many of the people who participated do you think understood the issues?
Good point!
While I also agree with Tony about the lack of voter knowledge – my own experience indicates that most can’t even reliably tell you what party they’re registered with – the polls are also at fault. Many are leading and don’t give you the options you’d really like. For example, when assessing the performance of a given politician, “poor” is usually the worst grade you can give them – rather than the “awful” rating I’d often prefer. You rarely even get an “other” or “none of the above” choice on questions.
The one time I got an open-ended question, I think I rather stunned the young woman taking the poll. When asked what I thought was the best thing then-Governor Schwarzenegger had done, I replied, “Kindergarten Cop.” I was rather pleased with the response. It also had the virtue of being true.
BTW, average readers may not know the difference between polling companies. But, as a Calbuzz reader, I make it a point to ignore anything Rasmussen says. They are as reliable as Faux Noise, and for the same reason.
“…legitimate polling, outfits like Rasmussen…”
Thank God for commas – at first read I thought they had put “Rasmussen” and “legitimate polling outfit” in the same clause.