On the eve of the elections, public interest in sociology rose sharply. I must honestly admit that at first the attention of the parties and the government was welcome, but then came the Verkhovna Rada's express ban on a series of polls two weeks before the elections. One gradually comes to the realization that what is going on is not so much Prince Charming courting Cinderella as the keen interest taken in the turkey before Thanksgiving or the goose (a piglet in our cuisine) before Christmas. Again, in my case no arguments could prove me on the right side of the law (whose law, I wonder!): not the fact that a group of top-notch US experts had taken part in selecting the respondents, nor that we had used a time-tested polling technique involving 300 interviewers, nor that we had used special textbooks and video cassettes in training them, nor that we had screened 20% of the interviews by paying repeated control visits to the respondents. Even though the results could boast a margin of error of only 1%, they were doomed all the same.
There are two reasons. The first is a conflict of interests, a reason so trivial it hardly deserves special note. There are not many polling services in Ukraine, perhaps three or four centers with more or less reliable interviewer networks. This, however, suffices to provide sound electoral forecasts. There are other "polling centers" that have no such facilities. So what happens? Do they close down for lack of information and professional support? Not at all. Instead, they specialize in casting aspersions on the centers that really are equipped to do the job. Of course, some of the political parties are loath to see the results. But if they don't like what they read, so what? In a civilized country this would mean nothing. In Ukraine, however, it means that whoever provides such unwelcome data is a falsifier.
The second reason is an objective one: the difference between what was predicted and what became hard fact. Here one comes across causes that are anything but trivial, and I will now beg the reader's patience, because I am going to enlarge on precisely this point.
The various polling centers used questioning techniques that varied little. First, the respondent was asked, "If tomorrow, a Sunday, were Parliamentary Election Day, would you go to the polls? Are you absolutely sure that you would? Or do you perhaps have reasons to doubt it?" Those who said they would come to cast their votes were then handed a printed list of parties and blocs, done in a format closely resembling the official ballot, and asked to indicate which party or bloc they would vote for. Table 1 shows the results obtained by three polling centers: the Kyiv International Institute of Sociology (KIIS), the Social Monitoring Center together with the Kyiv Institute of Social Research (SMC), and Socis Gallup in collaboration with the Democratic Initiatives Fund. To keep matters simple, we will focus on the parties that received over 1% of the votes.
Seeing this table, an armchair analyst is bound to say something like, "Forecasts? Where can you see any you can trust? Socis Gallup? They said the Communists would get 14% of the votes, and they got close to 25%. KIIS said only three parties were getting in one day before the elections, and there were eight! All this adds up to one conclusion: never trust the sociologists, they're full of hot air!" However, a closer look at the last few lines of Table 1 shows that about one-third of the respondents were undecided (plus, in Column 2, 2% who refused to comment). On election day itself, of course, there were no "undecided" voters: everyone voted either for one of the parties listed or against all of them (marking "neither" on the ballot). Thus, to form a general idea of how the votes would be allocated, the percentages are recalculated over those who actually expressed a preference for or against (see Table 2). While the information in Table 1 could be called "polling data," that in Table 2 is best described as "direct prognostication." Here 100% is the total of each column of Table 1 less the "Undecided" line and, for the Socis Gallup findings, the "Refused to comment" line. The result is simple to figure out. (Incidentally, The Day carried these data in its April 4 issue, adding the exit-poll data provided by Socis Gallup jointly with Democratic Initiatives and Media Club to complete the picture. There, people were interviewed leaving polling stations, so strictly speaking this was not forecasting, because the respondents had just cast their votes; rather, it was polling done prior to the official returns of the Central Election Commission.)
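For the technically inclined reader, here is a minimal sketch of that recalculation; the party names and percentages are invented placeholders, not the actual Table 1 figures.

```python
# A minimal sketch of "direct prognostication": drop the undecided
# (and the refusals) and rescale the remaining answers to 100%.
# Party names and percentages are invented placeholders, not the
# actual Table 1 figures.
poll = {"Party A": 18.0, "Party B": 9.0, "Party C": 4.0,
        "Against all": 3.0, "Undecided": 31.0, "Refused": 2.0}

excluded = {"Undecided", "Refused"}
decided_total = sum(v for k, v in poll.items() if k not in excluded)

forecast = {k: round(100 * v / decided_total, 1)
            for k, v in poll.items() if k not in excluded}
print(forecast)  # {'Party A': 52.9, 'Party B': 26.5, ...}
```

Applied column by column to Table 1, the same rescaling yields Table 2.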
These data can be treated as an election forecast, and their accuracy can be measured; that is precisely what yours truly has done. Table 3 provides the maximum and the average (in absolute magnitude) deviations from the official returns, both for the 16 parties that received over 1% of the votes and for all 30 parties.
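Reproducing the Table 3 figures is equally straightforward: for each party, take the absolute difference between the forecast share and the official share, then report the maximum and the mean. A sketch with invented figures:

```python
# The two summary figures of Table 3: maximum and mean absolute
# deviation between a forecast and the official returns, taken
# over all parties. The figures below are illustrative only.
forecast = {"Party A": 24.0, "Party B": 9.5, "Party C": 5.0}
official = {"Party A": 24.7, "Party B": 8.0, "Party C": 5.2}

errors = [abs(forecast[p] - official[p]) for p in forecast]
print(f"max error:  {max(errors):.1f} percentage points")              # 1.5
print(f"mean error: {sum(errors) / len(errors):.1f} percentage points")  # 0.8
```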
Obviously, the data provided by different polling services tally in many respects, with precision increasing as the election date approaches.
The accuracy of "direct prognosis," based on the responses of the "decided," actually supports the hypothesis that the "undecided" would cast their votes in the same proportions as those who had already made up their minds.
Finally, in addition to such "direct forecasts," there are "calculated prognoses" to consider, produced with certain mathematical techniques and models and based on different assumptions about the 30-35% of respondents who said they were not sure whom to vote for. To my knowledge, such calculations were made by two researchers working with KIIS data: Prof. Khmelko, head of the Sociology Department at the Kyiv-Mohyla Academy, and his American colleague Prof. Ordeshuk.
Prof. Khmelko proceeded from the assumption that the still-undecided would vote just like those with the same demographic profile who had already decided, an assumption corroborated by analysis of the 1994 presidential campaign returns. Studying the factors shaping the electorate's attitudes, he found that they were organized along two axes: a socioeconomic one (roughly, market vs. planned economy) and a national-political one (attitude toward the status of the Russian language and toward ties with Russia). He further applied a special technique to allocate those who refused to comment, relying on their orientation. (The calculated forecasts were made public at a KIIS news conference on March 10, while Prof. Ordeshuk's computations were kept from the public before the elections.)
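As a rough illustration of this kind of imputation, here is a sketch under assumed data; a single toy "orientation" variable stands in for the demographic profiles and two-axis positions actually used:

```python
# A sketch of imputing the undecided: assume each undecided
# respondent votes like the decided respondents who share his or
# her group (one toy grouping variable stands in for the real
# demographic/two-axis profile).
from collections import Counter, defaultdict

# (group, choice) pairs; choice is None for the undecided.
respondents = [
    ("pro-market", "Party A"), ("pro-market", "Party A"),
    ("pro-market", None),
    ("pro-planned", "Party B"), ("pro-planned", "Party B"),
    ("pro-planned", "Party A"), ("pro-planned", None),
]

decided = defaultdict(Counter)
for group, choice in respondents:
    if choice is not None:
        decided[group][choice] += 1

# Each undecided respondent is split across parties in proportion
# to the decided votes within the same group.
totals = Counter()
for group, choice in respondents:
    if choice is None:
        group_total = sum(decided[group].values())
        for party, n in decided[group].items():
            totals[party] += n / group_total
    else:
        totals[choice] += 1

print(totals)  # fractional "votes" after allocating the undecided
```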
The maximum error of the "calculated forecasts" turned out to be the same, 5.9%, while the mean error was 2.2%. In other words, the accuracy of these forecasts proved somewhat lower than that achieved by direct prognostication over the same period. Prof. Ordeshuk's forecasts (covering only ten parties) proved no better. Why, considering that the technique had worked in the 1994 presidential campaign? Perhaps because the electorate voting for President had a more or less accurate idea of the two candidates, so the undecided eventually made up their minds in line with their respective orientations. Now that people had to choose among thirty parties, the undecided had very little idea, if any, of how one party differed from another and voted as advised by those they trusted (family members, relatives, or friends). Hence "direct prognostication," relying on data from the "decided" part of the electorate, proved more accurate.
Summing up, I wish to stress that before using poll findings and listening to sociologists one should ascertain what exactly is under discussion: initial polling data, direct forecasts, or calculated forecasts. Findings from different polling services are best compared by means of direct forecasting, since this method reveals the dynamics of the ongoing processes. One should also remember that sample polling is inevitably subject to error: one must not expect unlimited precision from data obtained by interviewing, say, 1,500 or 2,000 respondents, where the margin of error is seldom below 2-3%. Improving this precision by another percentage point, on the other hand, will cause polling expenses to jump three- or fourfold. There is no problem, for example, in identifying the leaders and the outsiders in any given poll; but if a party's support is in the neighborhood of 4%, it is practically impossible to predict whether or not it will clear the 4% barrier. The timing of the poll is also a very serious factor. We ask our respondents, "Who would you vote for if the elections were to take place tomorrow?" Asking what they would do in a month's time would be too much. For this reason what we get in response is not so much a prognosis as public opinion at the time of interviewing. Considering that almost one-third of the electorate still do not know, a week before the election, whom they will vote for, an alternative method is hard to find. Despite all these difficulties, the polling results during the past campaign turned out to be fairly accurate, a fact acknowledged by our learned US colleagues at the American Association for Public Opinion Research.
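Incidentally, the cost trade-off mentioned above follows from the standard sampling-error formula for a proportion: the margin of error shrinks only with the square root of the sample size, so cutting it in half requires roughly quadrupling the interviews. A quick check, assuming simple random sampling at 95% confidence:

```python
# Margin of error for a proportion under simple random sampling:
# moe = z * sqrt(p * (1 - p) / n), with z ~ 1.96 for 95% confidence.
# The worst case is p = 0.5.
import math

def margin_of_error(n, p=0.5, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

for n in (1500, 2000, 6000):
    print(f"n = {n:5d}: +/- {100 * margin_of_error(n):.1f}%")
# n = 1500 gives roughly +/-2.5%; pushing below +/-1.3% already
# requires about 6,000 interviews, roughly four times the fieldwork.
```

This is also why a party polling near the 4% barrier cannot be confidently placed on either side of it.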
Finally, I wish to express sincere gratitude to the parties and the government for their time and consideration, and to assure People's Deputies and all those vying for the Presidency that their fears of such polls are greatly overstated, because only a very small percentage of the electorate will bother to read them.