Lubomyr Prytulak to Faron Ellis: Can Public Opinion Polling Subvert Democracy?
"Unless you are able to propose some justification for the JMCK claim of a 95% confidence interval of 56.9% � 2.9%, the calculation will stand accused of lacking scientific foundation, and in fact will project the appearance of a gratuitous claim whose purpose is to inspire trust in the mind of a public uneducated in scientific method." � Lubomyr Prytulak
20 October 2003



Faron Ellis
Department of Social Science
Lethbridge Community College
3000 College Drive South
Lethbridge, Alberta T1K 1L6

Faron Ellis:

Can Public Opinion Polling Subvert Democracy?

Recent events revive concern that public opinion polling may be an instrument for subverting democracy.  Take, for example, the following discrepancy that has been observed between polling results and evidence coming from everyday observation:

The news analysts cited the corroborating testimony of the opinion polls -- a 70 percent approval rating for President Bush, 60 percent in favor of sending the army to exterminate Saddam -- but in New York during the months of September and October I could find little trace or sign of the militant spirit presumably eager to pat the dog of war.  Not once in six weeks did I come across anybody who thought that the President had made a coherent argument in favor of an invasion of Iraq.  Whether at lunch with film producers in Greenwich Village or at dinner among investment bankers overlooking Central Park, the commentary on the President's repeated attempts at explanation invariably descended into sarcasm.  Who could take seriously the reasoning of a man armed with so few facts -- no proof of Saddam's connection to Al Qaeda, no indication that Iraq threatens the United States (or even the nearby states of Jordan, Saudi Arabia, and Iran), no evidence that Saddam possesses weapons of mass destruction, the presidential indictment based on surveillance photographs too secret to be seen, on old stories of past atrocities and the premise that America "did not ask for this present challenge, but we accept it"?

The last statement, drawn from the President's speech in Cincinnati on October 7, prompted a civil rights lawyer the next morning at breakfast to mockery.  "To whom does the man think he's talking?" she said.  "To people so stupid that they can't see through the window of his lies?  A Republican administration promotes a foreign war to hide the mess of its domestic politics, and the President asks us to believe that we're being attacked by Joseph Stalin?"

The lady voiced a New York opinion, and I accepted it as such.  Given my long confinement in the city's spheres of literary influence, I don't know many people who admire President Bush or who feel anything but loathing for the reactionary scholars who teach him lessons in geography.  In New York I expect to hear Bush compared to Little Lord Fauntleroy or Bernie Ebbers, and I take it for granted that nearly everybody else in the conversation shares my own low regard for the corporate-management theory that informs the making of American foreign policy.

What I didn't expect was the fierce opposition to the Iraqi adventure that I encountered elsewhere in the country.  Traveling to California in September and October, and then in Oregon, Connecticut, and Virginia, I sought out fellow citizens unmarked by the stigmata of effete, liberal intellectualism -- an aerospace engineer on the plane to Portland, a quorum of computer programmers in Hartford, two retired admirals on a golf course in San Diego, various unpublished social critics met with in hotel coffee shops and airport bars.  It was as if I hadn't left New York.  Never once did I find myself in the company of people who approved the Pentagon's strategies of "forward deterrence" and "anticipatory self-defense."  The general opinion of President Bush wavered between the phrases "toy soldier" and "dangerous fool."
Lewis H. Lapham, Notebook: Hail Caesar!, Harper's, December 2002, pp. 9-10.

The question which Lewis H. Lapham does not explicitly raise in his essay above, but which I do raise with you, is how anyone knows that the polls were representative of the views of the nation, and the Lapham observations were not.  If it should be the case that the poll results were not based on science or mathematics, or that they were subject to error and manipulation, then they might be wrong and the evidence of Lapham's experience might be right.  The question before us, then, is how we can evaluate the quality of polling results.

I Have Given Public Opinion Polling Some Thought

I am preparing a series of books on scientific method in the social sciences, a rough draft of the first part of Volume 3: Correlation being on public display at www.ukar.org/corr/corr02.html.  Unfortunately, my discussion of public opinion polling is to be found not within the above Volume 3: Correlation, but rather within Volume 4: Sampling, which I have not yet posted on the Internet.  Among my hopes is that insights gained in my discussion of polling with you might be incorporated into the forthcoming Volume 4: Sampling.

What I write about public opinion polling in Volume 4: Sampling is what I taught perhaps a thousand students in my methodology courses at the University of Western Ontario over the course of eleven years: that public opinion polling violates fundamental principles of scientific method, and that its claims to accuracy and representativeness are unfounded.  Given my long-standing position, you can perhaps now understand why I viewed Lapham's writing above not merely as his encounter with a curious but unaccountable incongruity, but as possibly his encounter with a symptom of the subversion of democracy.  That is, in addition to my being willing to entertain the hypothesis that the poll results showing support for the war happened to be representative of the national sentiment and that Lapham's personal observations happened to be unrepresentative, I am also willing to entertain the opposite hypothesis that some poll results are so unrepresentative that anyone who merely talks to people around him has a better chance of obtaining a representative impression.  Public-opinion polling in our society today plays the role of a referendum, and is accepted as measuring the pulse of the nation.  As politicians are to some degree guided by polls, and to some degree rely on them to justify their decisions, the pollsters can be said to exercise an influence over politics which they did not earn and cannot justify.  No more urgent task confronts the citizens of a democracy than protecting its fragile institutions from destruction, and to that protection the open discussion of public opinion polling is central.

Incidentally, as one who relies on statistics, you might be interested in my investigation into the reliability of a broad swath of statistical data, contained in my report of 31 March 1997 to Statistics Canada (Contract File No. 72800964039) titled Some Outliers Among the Monthly Series of StatCan Origin on the CANSIM CD-ROM of January 14, 1997, © Her Majesty the Queen in Right of Canada as represented by the Minister of Industry, Science, and Technology.  Although the terms of my contract prevent me from disclosing the contents of my report, it should be possible to obtain a copy under Access to Information legislation.  My report might be of particular interest to consumers of statistical data because its algorithms were adopted by StatCan for purposes of error detection.  (Unless one is particularly interested, it would be unnecessary to acquire the possibly thousands of pages of the complete report; how many pages is not obvious offhand, as Appendices A-L are not continuously paginated.  Rather, the introduction on pp. 1-25, followed immediately by a section of 86 graphs, conveys the gist, although these 86 graphs displaying functions in different colors will be interpretable only if reproduced in color.)  If the discipline of political science is to deserve the "science" that it has appropriated within its title, it must earn it by paying close attention to the accuracy and validity of the measurements on which it relies; conversely, disregard for the accuracy and validity of measurement disqualifies any endeavor from deserving to be called a "science."

Are You an Expert on Public Opinion Polling?

Before proceeding further, though, it might be best to verify whether I am addressing the right person, or whether my discussion would be better directed elsewhere.  That is, I address you on the assumption that you have expertise in public opinion polling, an assumption encouraged by such statements as the following, both taken from the JMCK web site at www.jmck.ca:

Dr. Ellis teaches political science, history and social science research methodology at Lethbridge Community College.

Our senior pollster, Dr. Faron Ellis, is a highly respected political scientist with 25 years of experience in the polling business.

At the same time, however, I fear that my assumption about the nature of your expertise could be in error.  That is, what I am looking for is not an expert in the sense of someone who trusts poll results and relies on them often, but rather an expert who has scrutinized and evaluated the methodology employed in public opinion polling.  The list of your publications that I find on the JMCK web site does not want for length, but it does leave open the possibility that you may be the former kind of expert: that you trust polling and rely on it, but do not analyze and critique it; in other words, that you work as a political commentator rather than as a methodologist.  If you have published methodological papers, then I would very much appreciate your sending me copies.

The upshot is that if you were to inform me that your expertise lies in realms other than methodology, I would apologize for having intruded upon you, and would carry my questions elsewhere.

On top of that, if you wished to excuse yourself from any methodological discussion of polling on the ground that on this question you are not a disinterested academic, but rather derive profit from polling, and so feel it prudent and ethical to recuse yourself from a discussion which touches on one of your sources of income, then I would understand that too, and would impose on you no further.

In the event that you do feel yourself to have methodological expertise, and that you do see yourself as unencumbered by any conflict of interest, I continue my discussion.

Appraising Quality

The evaluation of product quality that a consumer is typically able to make following his purchase is possible in the realm of public opinion polling only at election time, when the pollster's pre-election results are eventually compared to the election results themselves.  If this is to be the measure of polling success, then polling would have to be deemed unsuccessful, as it commonly predicts elections incorrectly, and follows up not by admitting its failure, but rather by denying any error, and instead gratuitously miscasting a bad prediction as a last-minute shift in voter sentiment.  In fact, however, a discrepancy between polling data and election outcome may arise not only from voter sentiment shifting, but also from the polls having been wrong because methodologically unsound.

On the other hand, in the more typical case when polling does not aim to predict election results, which is most of the time, there is no way to verify polling results, and they will be protected from disconfirmation even when grossly inaccurate.  Thus, pollsters are unique among retailers: a retailer of cars that don't run, or a retailer of weather forecasts that predict sun before it starts to rain, will be exposed and will go out of business; however, the retailer of polling results that are unrepresentative of public opinion finds the poor quality of his product uniquely protected from exposure.

JMCK Does Claim Quality

Despite an absence of any possibility of verifying the quality of the bulk of its products, the JMCK web site does claim that JMCK polling is "accurate," "consistent," "reliable," "representative," "scientifically tested," "proven," "robust," "high-quality," and "dedicated to quality," as for example in the following statements:

JMCK Polling offers accurate public opinion polling and research to a wide range of clients including Global Television, the Calgary Herald and the Canadian Taxpayers' Federation.

JMCK Polling delivers consistent, scientifically tested results at reasonable prices.

JMCK Polling is dedicated to quality.

National Reach
We can survey a representative cross-section of Canadians on a statistically reliable basis -- nationally and provincially.

Responsive, Reliable and Cost Effective
Responding to your needs reliably and cost effectively is our speciality.

Using creative applications of proven, statistically robust polling methodologies, JMCK Polling provides meaningful and usable information to our clients.

We don't outsource any part of the process in order to ensure you rapid, reliable, high-quality results.

Below I examine the JMCK claims of quality of its products under two headings: Push Polling and Methodology.

PUSH POLLING

JMCK's Alberta Kyoto Poll will be the basis of my discussion of push polling, where JMCK discloses its methodology as follows (and where JMCK reinforces its claims to accuracy and validity with a precise estimate of its "margin of error," which to the layman will carry a powerful message of scientific respectability):

A total of 1224 individuals of voting age were interviewed by telephone.  The sample has been statistically weighted to accurately represent the demographic distribution of the Alberta population.  The weighted sample size is 1204.  The margin of error for this sample is ± 2.9 per cent, 19 times out of 20.  The margin of error increases when analyzing sub-samples of the total.

The JMCK Alberta Kyoto Poll report presents a "selection of the findings" which I summarize as follows:

QUESTION:  Based on what you've heard, do you think the federal government should ratify the Kyoto Protocol?

ANSWERS ALLOWED        PERCENT CHOOSING
Ratify                      23.9 %
Don't Ratify                56.9 %
Undecided                    7.0 %
Don't Know Enough           12.3 %

QUESTION:  If the federal government ratifies Kyoto against the wishes of the Alberta government, what do you think Alberta should do?  Which of the following choices best reflects your opinion?
  • There's nothing we can do.
  • Albertans should begin to explore other options such as independence.
  • Alberta should seek to join the United States.
ANSWERS ALLOWED        PERCENT CHOOSING
Nothing                     43.8 %
Independence                46.8 %
Join USA                     9.4 %

Concerning the above Alberta Kyoto poll, my first question is whether those responsible for it have taken adequate care to avoid the appearance that it is a push poll.  As "push poll" has two meanings in current use, it is imperative at the outset to specify which of them I intend.

One meaning of "push poll" is the contacting of a large number of people under the guise of soliciting their opinions, where their opinions are not in fact being recorded and will not be analyzed or published, and where the real purpose of contacting them is to communicate to them information or disinformation, usually derogatory, against a position or against a candidate for office.  I will refer to this meaning of "push poll" as "push-poll-A."

A second common meaning of "push poll" is a poll in which data really are collected and published, but which engineers results desired by whoever commissioned the poll, or supportive of the interests of the pollsters.  I will refer to this meaning of "push poll" as "push-poll-B."

The question I put to you, then, is whether those responsible for producing the JMCK Alberta Kyoto poll have taken sufficient steps to avoid the appearance of having produced a push-poll-B.  It might seem to any objective reader of the JMCK Kyoto report that perhaps JMCK has not, for the following reasons.

JMCK Projects the Appearance of Having a Conflict of Interest

In publishing Ezra Levant's book, Fight Kyoto, JMCK Communications gives an indication that it is opposed to Canada ratifying the Kyoto Protocol; and in promoting Levant's book, JMCK Communications reveals its own opposition to Kyoto in partisan language:

"Fight Kyoto" is a thorough and insightful analysis of how implementing the Kyoto protocol will devastate Canada's economy.  Ezra Levant explains it all and tells us how to fight back.   (www.jmck.ca/Publishing.htm)

To some eyes, then, the JMCK Communications commitment to opposing Kyoto contraindicates JMCK Polling's conducting a public opinion poll on Kyoto.  At the very least, the JMCK Polling report should have disclosed this conflict of interest, which would have served to put readers on their guard that what they were about to read might be a push-poll-B.  You would be able to throw some light on the question of conflict of interest by disclosing whether any senior JMCK personnel have ties to the oil industry, or stand to profit from its economic success.

JMCK Invites the Perception that it is Selling Confirmation Rather Than Truth

JMCK promotional statements to the effect that JMCK offers the service of "validating" positions already taken further erode the image that it is geared to discovering truth, whether that truth is confirmatory of expectations or disconfirmatory:

Polls are invaluable tools to help you understand your customer and to validate your positions.   (Real-time Opinion Polling on the JMCK web site.)

Polls also validate your positions, thereby strengthening acceptance of decisions, products, or marketing efforts.   (Polling section on the JMCK web site.)

Scientific polling, in contrast, might be based on the proposition that he who pays for a poll is benefitted most by learning the truth, whether that truth confirms his hopes or not.  For example, a candidate who discovers that he is popular is benefitted by encouragement on a quest that has some chance of success, whereas a candidate who discovers that he is unpopular is benefitted by discouragement from a quest which promises to deplete his resources in a lost cause.

Thus, if JMCK wished to avoid the appearance of supplying push-poll-B services, it would
  1. avoid conducting a public opinion poll on a question which it had already expressed a strong opinion about,

  2. take every opportunity to disclose conflicts of interest, and

  3. advertise that it was in the business of discovering truth and not of "validating" preconceptions.
JMCK's First Kyoto Poll Question

We have already seen above that the first JMCK question in its Kyoto poll was, "Based on what you've heard, do you think the federal government should ratify the Kyoto Protocol?"  However, one has to wonder why this question's sixteen words are twice as many as are necessary, the following eight-word alternative seeming to suffice: "Should the federal government ratify the Kyoto Protocol?"

Particularly, why does JMCK preface the asking of the actual question with "Based on what you've heard"?  Is JMCK telling the interviewee to pay attention only to what he has heard, but to ignore everything that he has read?  Does JMCK want the interviewee to pay attention only to what others have told him, but to pay no attention to what he himself may have observed happening around him, and no attention to reasoning that he himself may have gone through?  Such explanations do not seem plausible, and JMCK offers no rationale for its appearing to block certain sources of interviewee information (blocking what he has read, observed, deduced) and not others (not blocking what he has heard).

The answer to this riddle that may enter some people's minds starts with the supposition that although the above-quoted question is the first thing that appears under the heading "Questionnaire" on the JMCK web site, it probably does not represent the first words spoken by the interviewer over the telephone.  Surely the interviewer introduced himself before asking any questions, and surely he mentioned that the subject of the call was Kyoto, and surely he refreshed the interviewee's memory of what Kyoto stood for.  And maybe it is such JMCK introductory statements on Kyoto that "based on what you've heard" refers to.  The interpretation that the JMCK report on its Kyoto poll invites, then, is that what the JMCK question conveyed to interviewees is something tending in the direction of, though undoubtedly not as extreme as, "Based on how terrible I've just told you Kyoto is, and based on Ezra Levant's book Fight Kyoto having proven that ratification will devastate Canada's economy, do you think the federal government should ratify it?"

JMCK could have avoided projecting this unflattering appearance by taking two additional steps beyond the three outlined above:
  1. fully disclose everything said to the interviewees, including all introductory material, and

  2. employ the eight-word version of the first question which omits the puzzling and suspicion-arousing narrowing of focus to "based on what you've heard."
JMCK's Second Kyoto Poll Question

A poll that presented interviewees with the following alternatives would discover overwhelming support for the second of them, as is indicated in imaginary percentages:

  1. I want my economic situation to remain unchanged      5 %
  2. I would like to win a million dollars in a lottery      90 %
  3. I would like to have the government raise the sales tax      5 %
However, a push-pollster who wanted to present the appearance of support for raising the sales tax could lump the last two alternatives together as follows, with more imaginary results:
  1. I want my economic situation to remain unchanged      25 %
  2. I would like to win a million dollars in a lottery and have the government raise the sales tax      75%
Some of the interviewees who flocked to the second alternative when there were three alternatives would abandon it when there were only two, out of abhorrence for the part about raising the sales tax; nevertheless, many would stick with it because of the part about winning a million dollars.  The push-pollster could then represent 75 % public support for the lumped alternative in the push poll as support for raising sales taxes, toward which end he might label the 75 % bar in his graph as "Raise Sales Tax."  We might summarize the two versions of our imaginary push-poll-B in the following table:

Proposition              Disinterested Poll      Push Poll
Remain Unchanged                 5 %                25 %
Win Million Dollars             90 %                75 %
Raise Sales Tax                  5 %          (lumped into the 75 % above)
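
To make the lumping effect concrete, here is a minimal simulation sketch in Python.  Every number in it is invented purely for illustration (the 5/90/5 preference split, and an assumed 22 % of million-dollar enthusiasts who so abhor a tax increase that they abandon the lumped alternative); it is not data from any real poll.

    import random

    random.seed(0)
    N = 10_000                                     # hypothetical number of interviewees

    # Hypothetical true favorites, split 5 % / 90 % / 5 % as in the imaginary poll above.
    favorites = random.choices(
        ["remain unchanged", "win million", "raise sales tax"],
        weights=[5, 90, 5], k=N)

    # Hypothetical share of million-dollar enthusiasts driven off by the tax component.
    ABHOR_TAX = 0.22

    three_option = {"remain unchanged": 0, "win million": 0, "raise sales tax": 0}
    two_option = {"remain unchanged": 0, "win million AND raise sales tax": 0}

    for fav in favorites:
        three_option[fav] += 1
        if fav == "remain unchanged" or (fav == "win million" and random.random() < ABHOR_TAX):
            two_option["remain unchanged"] += 1
        else:
            two_option["win million AND raise sales tax"] += 1

    print({k: f"{100 * v / N:.1f} %" for k, v in three_option.items()})
    print({k: f"{100 * v / N:.1f} %" for k, v in two_option.items()})

With these invented numbers the two-option version reports roughly the 25 % / 75 % split imagined above, and the push-pollster is then free to label the 75 % bar "Raise Sales Tax."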

What the above example illustrates is that projecting the appearance of public support for an unattractive proposition by lumping it together with an attractive proposition is a weapon in the push-pollster's arsenal.  JMCK appears to be guilty of a variation of this stratagem in the second question in its Kyoto poll that we saw above.  That is, as summarized in the table below, perhaps a disinterested poll would have offered interviewees an alternative such as "Alberta should pursue options to mitigate economic harm, such as asking the federal government to help offset losses to the Alberta economy, and promoting uses for oil other than burning it, such as turning it into plastics."  Alberta following such a "mitigate harm" path would be less disruptive and traumatic to both Alberta and Canada, would impose the smallest economic loss on all concerned, would most benefit the environment, would undoubtedly be favored by interviewees, and is possibly the sort of thing that interviewees had in mind when they heard JMCK propose that "Albertans should begin to explore other options."

If JMCK's Kyoto poll had been conducted as a disinterested poll, then the results might have resembled those in the Disinterested Poll column below; and JMCK's actual results may be viewed as having joined the most attractive alternative (Mitigate Harm) with one of the least attractive (Seek Independence).  The label under 46.8% in the JMCK bar graph on the JMCK web site does not say "Mitigate Harm" or "Other Solutions" or "Other Options" or "Other"; the JMCK label, rather, says "Independence," which misrepresents an alternative that really specifies "Other Options," with "Independence" being only one example of another option: "Alberta should adopt another option, such as independence."  JMCK labelling its 46.8% bar as "Independence" instead of as "Other" is almost as misleading as the push-pollster in our imaginary example above labelling his 75% bar as "Raise Sales Tax" instead of "Win a Million Dollars."

Proposition              Disinterested Poll      JMCK Poll
Do Nothing                      5.0 %              43.8 %
Mitigate Harm                  85.0 %              46.8 %
Seek Independence               5.0 %         (lumped into the 46.8 % above)
Join USA                        5.0 %               9.4 %

Thus, although the JMCK data, read uncritically, project an image of Albertans ready to take the most drastic steps in case of Kyoto ratification, in fact the data are compatible with the view that Albertans are predisposed to greeting Kyoto ratification calmly.

That what JMCK calls its "Independence" option was indeed favored by interviewees attracted to its "Other Options" component is supported by the pattern of reported data.  That is, if 56.9% chose "Don't Ratify" in the first question, and if 46.8% + 9.4% = 56.2% were for one or the other of the two radical reactions "Independence" or "Join USA," then we appear to be faced with the implausible conclusion that almost everyone who opposes ratification is also a hot-headed revolutionary yearning for the destruction of Canada (56.2 / 56.9 = 98.8%).  Escape from this implausibility, as has been argued above, lies in recognizing that many who chose the "Independence" alternative must have chosen it because it is really the "Other Options" (or, in their minds, the "Mitigate Harm") alternative.

It might also be wondered why the 12.3% who said initially that they didn't know enough about Kyoto to venture an opinion concerning ratification were not excused from being further asked whether Kyoto ratification was sufficient provocation to destroy Canada.  In exposing all interviewees to this second, and exceedingly provocative, question, JMCK appears to have ventured beyond measuring public opinion to inflaming it, which JMCK might have been doing when it introduced large numbers of people to the possibility that Kyoto ratification could be so destructive to the Alberta economy as to justify Alberta seceding from Canada.

My introductory quoting of Lewis H. Lapham above suggested the possibility that President Bush relied on push-polling-B of one sort or another to justify an ill-advised and unpopular war, and here we see suggested the possibility that JMCK similarly relied on push-polling-B to justify the disintegration of Canada.  One does indeed encounter reason for the view that close public scrutiny of public opinion polling may be indispensable to preserving our fragile democratic institutions against forces of manipulation and subversion.

METHODOLOGY

Although the above discussion of push-polling-B focussed on a single JMCK poll, the discussion of methodology which begins here covers not only all JMCK polls, but in fact all public opinion polls conducted by all pollsters.

I have read the reports of JMCK polls posted on the JMCK web site and find that every one of them omits two critical pieces of information, indeed the two most critical pieces of all, the sine qua non of scientific polling: a description of the population from which the sample was drawn, and reason to believe that the sample was drawn randomly from that population.  If both of these were present, then sample characteristics might be projected to the population according to well-understood and thoroughly dependable formulas.  In the absence of either of these, and most certainly in the absence of both, JMCK is able to conclude little more from any of its polls than that some group of people can be found, 9.4% of whom, for example, might prefer Alberta to join the US, but with no indication of what percent of Albertans generally would prefer to do the same, and with no justification whatever for claims such as that the "margin of error for this sample is ± 2.9 per cent, 19 times out of 20."

The Pollster is Unable to List the Population to which he Wishes to Generalize

In order to employ the accurate, powerful, and proven tool of random sampling, the pollster needs to start with a list of the population of interest, which is to say, the people whose feelings he wants to estimate or whose behavior he wants to predict.

When predicting an election, that list would be of all the people who are going to vote.  However, no such list exists.  There may exist a list of all the people eligible to vote; however, many of these will end up not voting, and many others will turn up to vote who are eligible but who had not been on any earlier list of eligible voters.  The absence of a list of precisely those, and only those, who are actually going to show up to vote on election day means that the pollster fails in the very first step that is required in random sampling, which is delineating the population that is to be sampled from.

What the pollster does have is some alternative list, most typically a list of telephone numbers.  As this alternative list is not a random sample from any larger population, it cannot be taken to be representative of any larger population.  For example, Canadians who do not have telephones are off the list and thus will be totally excluded from the sample.  Canadians with unlisted telephone numbers will be excluded too.  Canadians in outlying areas who are not entered in the larger telephone books may be overlooked.  If the pollster bases his telephone list on a telephone directory, then a family phone attributed to the man of the house will leave off the list all others living in his house, as for example his wife, his mother-in-law, and his son and daughter.  For such reasons, a list of telephone numbers is neither a list of people who are going to vote in an election, nor a list of all the Albertans or Canadians whose thoughts or feelings the pollster wishes to gauge, and so generalization to voters or to Albertans or to Canadians finds no justification.

Of the People the Pollster Designates for his Random Sample, Many will Prove Inaccessible

Even though the pollster has no list of the members of the population to which generalization is to be made, can it at least be said that he takes a random sample from whatever list he does have?  No, this cannot be said.  What must be expected when a pollster attempts to contact people in his designated sample is that most will prove to be inaccessible.

Thus, sometimes the telephone will be busy, usually because someone is using it, sometimes because the phone is off the hook.  In a home in which a telephone provides Internet access, the line may be tied up for hours at a time by people surfing the net or playing online video games.  Occasionally the telephone line will be out of order.  Or, if the pollster telephones during the day, then all the people who work during the day will be inaccessible.  If the pollster telephones during the evening, then all the people who work during the evening will be inaccessible.  Some Canadians will be out of town on vacation or on business trips.  Some Canadians won't be home because they are out shopping, or dropping their children off at school, or visiting relatives, or at the movies, or in hospital, or in jail.  Some Canadians won't hear their telephones because they are out mowing their lawns, or are in the shower.  Some Canadians can see the caller's telephone number displayed, and won't pick up when they think the call might be from a salesman or pollster.

However, the pollster has designated his sample (randomly, it is hoped), and if he fails to contact many of the people in the designated random sample, then what kind of sample does he succeed in contacting?  Well, if subjects were lost randomly (as, for example, according to random numbers generated by a computer), then the subjects not lost would still constitute a random sample.  However, if subjects are lost through any other process, the remaining sample is no longer a random sample.  The rule is that a random sample removed from a random sample leaves a random sample, whereas a non-random sample removed from a random sample leaves a non-random sample.

As it is inconceivable that all subjects in the designated random sample will be accessible, then it is inconceivable also that a pollster will succeed in addressing his questions to a random sample.  With the element of randomness missing, all claims to conforming to scientific method and to achieving accuracy collapse.
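
The effect of this rule can be illustrated with a small simulation sketch in Python.  All of its numbers are invented for illustration only (a hypothetical population in which 40% oppose ratification, and hypothetical reachability rates that differ between opponents and supporters); it is not a model of any actual poll.

    import random

    random.seed(1)
    POP = 1_000_000
    TRUE_OPPOSE = 0.40                              # hypothetical true population rate
    population = [random.random() < TRUE_OPPOSE for _ in range(POP)]

    designated = random.sample(population, 7650)    # a genuinely random sample

    # Case 1: random attrition -- keep a randomly chosen 1224 of the 7650.
    random_subset = random.sample(designated, 1224)

    # Case 2: non-random attrition -- suppose (hypothetically) that opponents of
    # ratification are more often home and willing than supporters are.
    reach_prob = {True: 0.25, False: 0.12}          # True means "opposes ratification"
    reached = [p for p in designated if random.random() < reach_prob[p]]

    def pct(xs):
        return 100 * sum(xs) / len(xs)

    print(f"population opposing:          {pct(population):.1f} %")
    print(f"designated random sample:     {pct(designated):.1f} %")
    print(f"after random attrition:       {pct(random_subset):.1f} %")
    print(f"after non-random attrition:   {pct(reached):.1f} %  ({len(reached)} respondents)")

Under these invented rates the randomly thinned sample still hovers near the true 40 %, while the non-randomly thinned sample drifts up into the high 50s, even though every individual in it came from a genuinely random starting sample.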

Of the People the Pollster is Able to Contact, Many Will Refuse to Cooperate

One of the weightier of the many reasons that public opinion polling methodology deviates from random sampling is that it is blighted by uncooperatives.  That is, even among the people that the pollster does succeed in contacting, many will refuse to answer his questions, of which even the methodological layman is reminded in the popular literature:

Percentage of those contacted for this survey who refused to participate in it: 60   (Harper's Index, Harper's, December 2002, p. 13.)

If it were the case that the uncooperatives were chosen randomly (which for present purposes I will take to mean chosen by a computer program relying on randomly-generated numbers), then the interviewees remaining in the sample who did answer questions would still be a random sample.  However, the non-respondents are not chosen by means of random numbers, but rather they choose themselves, which might be described as their being chosen naturally, the inescapable conclusion being that the remaining interviewees from whom answers are elicited cannot be considered a random sample of those accessed.  To repeat, a random sample removed from a random sample leaves a random sample; a non-random sample removed from a random sample leaves a non-random sample.  Inadequately represented among the cooperatives will be the shy, the private, and the busy.

So far, then, the sample that pollsters rely upon cannot be considered to be a random sample from any larger population because (1) no larger population is specified, (2) any designated random sample would be largely inaccessible, and (3) among those accessed would be a large number of uncooperatives.

To illustrate more concretely the devastation caused by inaccessibles and uncooperatives, let us imagine that the pollster initially designated a random sample of 7650, but that only 40% of these proved to be accessible, and only 40% of the accessibles cooperated, leaving an actual sample of 1224, which is the sample size in the JMCK Kyoto poll.  Now if 56.9% of the 1224 were against ratifying Kyoto, and if that 1224 had been a random sample from some large population, I would calculate the 95% confidence interval as follows, where an asterisk indicates multiplication and SQRT indicates the taking of a square root:

1.96 * SQRT(0.569 * 0.431 / 1224) = ± 2.77%

which corresponds approximately with the JMCK blanket estimate of the margin of error as ± 2.9%, and with the JMCK "19 times out of 20" being equivalent to what I am calling a "95% confidence interval."  However, calculations such as these do depend on random sampling, which polling techniques do not even approximate, making such calculations inappropriate and meaningless.
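
For the record, the arithmetic above can be reproduced in a few lines of Python; the calculation is valid only under the random-sampling assumption which, as argued throughout this letter, polling does not meet.

    from math import sqrt

    p, n = 0.569, 1224          # reported proportion and unweighted sample size
    z = 1.96                    # multiplier for a 95% ("19 times out of 20") interval
    margin = z * sqrt(p * (1 - p) / n)
    print(f"± {100 * margin:.2f} %")     # prints ± 2.77 %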

To demonstrate to my methodology students of yesteryear the devastation wrought by inaccessibles and uncooperatives on sample representativeness, I would have the students calculate an interval based on the assumption that either none or all of the 7650 - 1224 = 6426 inaccessibles plus uncooperatives stood outside the "Don't Ratify" camp.  (The 528 appearing in the calculations below is the number of the 1224 actual respondents who did not answer "Don't Ratify": 1224 minus the 696 who did, 696 being 56.9% of 1224.)  Thus, if none of the 6426 lost subjects had stood outside that camp, the percent not opposing ratification in the entire random sample of 7650 would have been

(528 + 0) / 7650 = 6.9%

and at the other extreme, if all of the 6426 lost subjects had stood outside the "Don't Ratify" camp, the percent not opposing ratification in the entire random sample of 7650 would have been

(528 + 6426) / 7650 = 90.9%

In other words, under the assumption of 40% accessible and a further 40% cooperative, what the JMCK observation of 56.9% opposing Kyoto ratification demonstrated with certainty was that the true percentage not opposing ratification in the entire designated random sample of 7650 would have fallen somewhere between 6.9% and 90.9%, which interval is laughably unhelpful and uninteresting.
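
The same worst-case arithmetic can be written out as a short Python sketch; the 7650-person designated sample, and the 40%-accessible, 40%-cooperative figures behind it, are the hypothetical numbers assumed above, not anything disclosed by JMCK.

    designated  = 7650
    respondents = 1224
    opposed     = round(0.569 * respondents)           # 696 answered "Don't Ratify"
    not_opposed = respondents - opposed                # 528 did not
    lost        = designated - respondents             # 6426 inaccessible or uncooperative

    low  = 100 * not_opposed / designated              # none of the lost outside the "Don't Ratify" camp
    high = 100 * (not_opposed + lost) / designated     # all of the lost outside it
    print(f"not opposing ratification: between {low:.1f} % and {high:.1f} %")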

Of course it is understood that having either none or all of the 6426 lost subjects fall outside the "Don't Ratify" camp is highly implausible.  More plausible than assuming either none or all is assuming that the 6426 non-respondents are not that different from the 1224 respondents, but in making such an assumption we abandon the world of scientific method, and step into a world of assumptions, hunches, gut feelings, and seat-of-the-pants navigation, which will sometimes be right and sometimes be wrong, and such hit-and-miss guesswork is no part of scientific method, but rather is the sort of pre-scientific muddle that science strives to replace.  Yes, it is plausible that respondents and non-respondents will sometimes differ inappreciably, and yet it is certain that often they will differ substantially, and certain too that sometimes they will differ radically.  Trusting always that respondents and non-respondents are equivalent is not science; it is a gratuitous assumption enhancing the profit of the polling business.

Unless you are able to propose some justification for the JMCK claim of a 95% confidence interval of 56.9% ± 2.9%, the calculation will stand accused of lacking scientific foundation, and in fact will project the appearance of a gratuitous claim whose purpose is to inspire trust in the mind of a public uneducated in scientific method.

The Pollster is Never Able to Achieve Situation Representativeness

On top of the pollster's inability to designate the population to which generalizations are to be made, and on top of his inability to access all the members of any random sample that he might designate from this population, and on top of his inability to win cooperation from all the members of any sample that he does succeed in accessing, the pollster is burdened by one further obstacle blocking his path to estimating population parameters: his inability to achieve situation representativeness.

It has been the subject of broad commentary that the pollster has it within his power to manufacture any results that he wants by choosing appropriate wording, as a pollster favoring Kyoto ratification can be imagined doing:

If the federal government ratifies Kyoto, what should Albertans do?

  • Recognize a Good Thing When They See It.  Greet Kyoto ratification as a first step toward cleaning up our environment and reversing global warming, and as a stimulus to discovering alternative uses for Alberta oil, such as producing improved plastics.

  • Undermine Canada by Seeking Independence.  Follow the lead of Quebec by holding repeated, divisive, and costly referendums on sovereignty, thereby blackmailing the rest of Canada into sending Alberta subsidies.

  • Destroy Canada by Joining U.S.  Become the 51st U.S. State, thereby throwing Alberta support behind Presidents like George W. Bush whose environmental policies will transform Alberta into a desert, and whose foreign policies will send Alberta's sons to die in foreign wars and will invite terrorist reprisals against Edmonton and Calgary of the sort that have to date been reserved for New York and Washington.

The obstacle to pollster credibility, of course, is not merely pollster ability to bias results by ham-fisted manipulation such as imagined above.  The much larger obstacle to credibility is pollster inability to escape sources of bias that are powerful and yet subtle or even invisible.

Consider, first, that people (especially if questioned by strangers) do not always tell the truth.  For example, they might be embarrassed about how little they earn, and place themselves in a higher income category; or they might be embarrassed about earning money illegally, and place themselves in a lower income category.  If unemployed or on welfare, they may be reluctant to admit it.  If underage, they might inflate their age to qualify for the interview; if sensitive about being old, they might deflate it.  If they intended to vote for a candidate whom the press was labelling a racist, they might prefer to answer that they intended to vote for some less-criticized candidate.  They might be ashamed to admit that they had no intention of voting in the upcoming election, or that they never voted.  Generally, they might be eager to please their interviewer by saying what they guessed, rightly or wrongly, the interviewer wanted them to say.

A considerable social science literature documents that people's answers depend on whether they are asked the question by someone they know or by a stranger, by a man or a woman, by a young person or an old one, by a civilian or someone in uniform, face to face or over the telephone, speaking with a local accent or a foreign one.

Thus, even if the population of interest could be listed, and even if a random sample of people from this population were accessible, and even if full cooperation were obtained from all those accessed (all of which is impossible), then at the same time a random sample of situations would be neither possible nor even conceivable, and results could not be generalized beyond the situations in which interviews had been conducted.  In short, on many issues there will be a vast gulf between what people really think and feel, and what they will disclose to a stranger who phones them while they are eating dinner.

Does Piece Work Lead to Data Fabrication?

If it is the case that polling interviewers are on piece work (which is to say, paid only for completed interviews), then some may attempt to increase their income by fabricating results, as for example by attributing answers to someone who did not pick up his phone, or who refused to answer questions, or who answered some questions but did not complete the interview.  Given that computer memory is cheap, and getting cheaper by the day, it might be a light burden for pollsters to keep complete audio recordings of all polling interviews, open to inspection by government monitors or responsible investigators, for perhaps two years following every poll.  At the moment, I wonder if you would be able to inform me how thorough JMCK monitoring and verification of polling interviews is, and more particularly, the number of JMCK interviewers fired in the past year or so for detected, or suspected, fabrication of data.

The Danger Posed by Real-Time Opinion Polling

The "real-time opinion polling" promoted on the JMCK web site strikes me as a particularly insidious variation of public-opinion polling, and one which calls for the closest scrutiny and the widest discussion.  As I understand it, cell phone owners voluntarily place themselves on a JMCK list which is periodically sent questions which arrive as text on cell-phone display screens.  Although such a procedure may greatly reduce the number of inaccessibles and uncooperatives, it more than wipes out this gain by being obviously a selection of peculiar individuals, not only far from a random sample of the population at large, and not only excluding from consideration the opinions of people who don't own cell phones, but more importantly excluding the majority who seek to avoid distraction and interruption instead of inviting it.

Such real-time opinion polling may be considered more than merely biased, but actually insidious and subversive, because as the pollster asks each interviewee several, or many, questions over an extended time, the pollster is able to predict how that interviewee will answer future questions from how he answered past ones, and thus the pollster will be able to command any imaginable poll result by sampling exactly those interviewees from his list of volunteers that he knows will give him the answers desired.  This will certainly maximize the satisfaction of pollster clients yearning for a given polling result, but will clash mightily with the public interest of not drowning the populace in disinformation.

What Needs to be Done?

Standards need to be spelled out for the polling business, which might include:

  1. Disclosure of the method of defining the population drawn from, and to which generalizations are to be made.  In the case of JMCK, this would be disclosure of the origin of the list of telephone numbers that JMCK relies upon in its polls.

  2. Disclosure of the method of sample selection that aspires to be random.  The JMCK web site makes no mention of this.

  3. Disclosure of the number in the designated sample who prove to be inaccessible.  Not a word on the JMCK web site on this.

  4. Disclosure of the number accessed who refuse to cooperate.  Nothing from JMCK here.

  5. Disclosure not merely of percentages of people preferring various answers within each subset of the data, but also of the raw numbers from which such percentages are calculated.

  6. Disclosure of the method of calculation that JMCK refers to when it says "The sample has been statistically weighted to accurately represent the demographic distribution of the Alberta population," or of how an actual sample size of 1224 comes to be reduced to a "weighted" sample size of 1204.

  7. Disclosure of audio recordings of all interviews conducted.

  8. Disclosure of what guarantees the public can rely upon that polling methods which approach the same interviewees repeatedly do not consult past answers to determine which interviewees would be included in future samples.

  9. Government monitoring of pollster compliance with polling-business standards.

  10. Whistle-blowing legislation aimed particularly at the polling business which would not only protect from reprisal, but would also compensate and even reward, pollster employees who were able to document deviations from polling-business standards.

  11. Increased public awareness both of established scientific method, and of the disqualifications to validity of common polling methods, which awareness can be awakened by incorporating the study of these two topics into the high-school curriculum.

  12. Prohibition of publication of "margin of error" calculations when these are based on assumptions of random sampling from the population of interest which a poll is unable to meet.

  13. Where margin of error calculations are justified, specifying the different margin of error that applies to each percentage reported, instead of a single margin of error covering all percentages, which is a mathematical impossibility (a short sketch illustrating this follows the list).
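
To illustrate point 13, the following few lines of Python apply the standard normal-approximation formula to the sample size and percentages reported in the JMCK Kyoto poll.  The calculation presupposes exactly the random sampling whose absence is argued above, so it is offered only to show that even on the pollster's own assumptions a single blanket margin cannot be exact for every percentage.

    from math import sqrt

    n = 1224                                        # JMCK Kyoto poll sample size
    for p in (0.239, 0.569, 0.070, 0.123, 0.50):
        margin = 1.96 * sqrt(p * (1 - p) / n)       # 95% ("19 times out of 20") margin
        print(f"reported {100 * p:4.1f} %  ->  margin of error ± {100 * margin:.1f} %")

The margins range from about ± 1.4 % for the 7.0 % figure up to about ± 2.8 % for percentages near 50, so the blanket ± 2.9 per cent roughly matches the worst case while overstating the margin for the smaller percentages.
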
JMCK could take a valuable and commendable step in the direction of upgrading the polling business, and of raising its own stature within that business, by supporting an initiative based on the above recommendations.  I for one would be interested in obtaining for the JMCK Kyoto poll some of the "disclosure" data itemized above, so as to enable me to explore more precisely the possibility that the Kyoto poll was indeed the push-poll-B that it gives indications of being, and also whether it may contain methodological errors in addition to those discussed in the present letter.  If you were able to supply me with even some of this additional data, I expect that it would provide grounds for discussion that would both enhance my understanding of the polling business and sharpen JMCK's appreciation of steps that could be taken to upgrade the validity of its products.




Lubomyr Prytulak

