A noted economist walked up to me at The Day’s photo exhibit commemorating the newspaper’s eighth anniversary, and said with a pleasant smile:
“So you’ve become a champion of sociology” (he was referring to my article on the Ukrainska Pravda Web site, Sept. 14, 2004, written jointly with V. Khmelko). “A waste of time and effort; no one is going to believe you. For a long time I have been trying to explain certain elementary things about economics, such as the actual inflation rate, but to no avail.”
Perhaps this was true, but his comments stung. Two weeks before the 2002 parliamentary elections, the leading sociological centers produced rather accurate forecasts of which political parties would win seats (the exceptions were the Green and Women for the Future parties, and even those miscalculations fell within the statistical margin of error). The exit poll findings almost matched the actual results: the deviation was less than 0.8% for all parties except United Ukraine, which received 1.4% more than predicted, and Our Ukraine, which received 1.4% less (see “Nationwide Exit Polls,” p. 86, Zapovit: Kyiv, 2002). American sociologists congratulated their Ukrainian colleagues on a job well done. Yet some journalists and a number of politicians claim that sociology has suffered a final defeat and that our sociologists are incompetent and corrupt; such accusations come from those in power and from the opposition alike. Why?
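To give a sense of the scale involved: the margin of error for a share estimated from a random sample follows a standard formula, and with exit poll samples running into the tens of thousands it shrinks to fractions of a percentage point. Below is a minimal sketch of that calculation; the party share and sample size are hypothetical, and a real exit poll with a clustered sample design would have a somewhat larger effective error.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a share p estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical example: a party polling at 12% in an exit poll of 25,000 respondents.
moe = margin_of_error(p=0.12, n=25_000)
print(f"margin of error: +/-{moe * 100:.2f} points")  # about +/-0.40 points
```

With samples of this size, a deviation of 0.8% between forecast and result is entirely consistent with ordinary sampling error.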
In the first place, this can be explained by campaign stunts: accusing sociologists of falsifying data is a way of mitigating the effect of low ratings on a given candidate, especially when the polls reflect reality and are actually trusted. (Incidentally, the assumption that lower ratings always have a negative effect on the electorate is wrong; it ignores the fact that a decline in ratings may also encourage voters to rally round their candidate and may persuade some who are still undecided to support him.)
Second, one must keep in mind the incompetence of politicians and journalists who compare ratings taken at different times (fluctuations over time may prove significant), or who rely on different indices, such as percentages of all respondents versus percentages of those who actually intend to vote (in the current campaign, Yushchenko’s supporters appear more active, so the gap between him and Yanukovych is larger among people who intend to vote).
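The following is a minimal illustration, with invented numbers rather than actual poll data, of how one and the same survey yields different gaps depending on the base used for the percentage.

```python
# Hypothetical raw counts from a single survey (not actual poll data).
respondents = 2000
candidate_a, candidate_b = 620, 580   # supporters of each candidate
not_voting = 400                      # respondents who do not intend to vote

likely_voters = respondents - not_voting

# Index 1: gap as a share of ALL respondents.
gap_all = (candidate_a - candidate_b) / respondents * 100
# Index 2: gap as a share of those who intend to vote.
gap_voters = (candidate_a - candidate_b) / likely_voters * 100

print(f"gap among all respondents: {gap_all:.1f} points")    # 2.0
print(f"gap among likely voters:   {gap_voters:.1f} points")  # 2.5
```

The raw difference in supporters is identical in both cases; only the denominator changes, yet headlines built on the two indices will disagree.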
Third, alleged discrepancies between the results of polls carried out by various polling centers are often created, and then capitalized upon, by those paying the piper, including a number of news media. Crude stunts do not work with professional, long-established polling centers, however: such centers will not accept dubious offers that would jeopardize their hard-earned reputations and eventually deprive them of commissions from the West, their principal source of income. But there are other methods. Ostap Bender, who had a healthy respect for the criminal code, claimed to know a good many comparatively honest techniques for relieving others of their money. Likewise, there are several comparatively “fair” methods of manipulating poll results.
The first method is to ignore unfavorable data while publicizing favorable data. Thus, during the current presidential campaign the major television channels have often replaced ratings with figures reflecting respondents’ opinions about who will be the next president (although part of the audience may well take these for ratings). Under international standards (Ukraine being represented in ESOMAR and WAPOR, the European and world polling associations), whoever commissions a poll owns the resulting data, so we cannot protest in public unless we have proof that our polling data has been replaced or corrupted in some way.
Another technique consists in rigging the questionnaire: including questions that are either non-neutral or “emphasized,” or arranging the questions in an order designed to skew the resulting data. I am not even speaking of outright “push questions.” Consider this one: “Do you agree with the introduction of Ukrainian-Russian entry visas, as proposed by Viktor Yushchenko?” Mr. Yushchenko has never proposed any such thing, but the question helps instill a negative attitude toward the candidate and can affect answers to other questions. Over the years we have been asked to carry out several such rigged polls, and we have always refused, no matter how much money we were offered. With “emphasized” questions, by contrast, it is a matter of nuances and shades of meaning rather than the imposition of a certain opinion.
The thing is that formulating a perfectly neutral question is easier said than done. Researchers have convictions and stereotypes of their own, and they carry these into questionnaires willy-nilly, even when they have no intention of skewing the data (a special study of questionnaires used by professional American polling centers found that such questions make up about 10-15% of the total). Such questions are also difficult to identify; it is often a matter of taste, and in such cases a sociologist can only voice his doubts to the client, who may insist all the same that the contract be carried out to a T. Here is an example. A Soviet city party committee (which was authorized to approve all such questionnaires) forbade us to ask Kyiv residents about their earnings and salaries, because the question was considered “provocative” (at the time I was working in the Sociology Department of the Institute of Philosophy). Several years ago, I convinced the Ukrainian representative of a transnational company that its study of advertising impact, in the format proposed, could never yield reliable data; yet the client left everything as it was, because the issue had been studied in precisely that format in dozens of countries for many years, and it had to be handled the same way here.
The only way to combat such comparatively “fair” methods of manipulating sociological data is to gather and publish one’s own data, at one’s own expense, which is precisely what we do from time to time.
Since late September, events have been unfolding that threaten the reputations of KMIS and SOCIS, and thereby of the consortium created by five organizations (the Democratic Initiatives Foundation, KMIS, SOCIS, the Razumkov Center, and Social Monitoring) to conduct a unique exit poll, one involving some 50,000 voters interviewed as they leave the polling stations. This is what compelled me to take up my virtual pen.
We received the first “present” from forces close to the government.
I must say that, contrary to what many people assert on Web sites (now and then I receive messages from “well-wishers” with links to postings about corrupt sociologists), having a variety of political forces among our clients makes our company more independent, not less. If a company works for a single political force, whether opposition or ruling, and that force chooses not to publish unfavorable results, the company is effectively barred from conducting its own research and publishing its own findings. If political studies do not dominate a company’s work, and if the company conducts them for different political clients, it becomes more politically independent. Honestly, at the peak of an election campaign we are strongly tempted to give up this kind of research altogether. However, quite apart from the fees, it is very important for us to check our results and verify the accuracy of our forecasts and methods, since no other studies allow us to assess the quality of our system as a whole. We need elections and referendums to calibrate our measurement instrument (I have repeatedly pointed out that politicians are as important to sociologists as laboratory mice are to biologists), which means we tend to our wounds between campaigns and then find ourselves involved in such projects once again.
This time the situation looks even more dismal.
We have spent several months doing polls for the Public Thought Foundation (Ukrainian abbreviation: FGD): FGD developed the questionnaires, while KMIS did the fieldwork and computer data processing. FGD ranks among Russia’s most prominent professional polling centers, on a par with the Levada Center (the former VCDGD). Under international regulations, the polling contractor has no right to publish such data, or even to disclose that the studies are under way. Until mid-September, the polls done for FGD, as well as our own polls (including those done for various political forces, whose results we were duly authorized to publicize), indicated a noticeable gap between Yushchenko and Yanukovych, one that still amounted to 6-8% in the last ten days of September.
After Yanukovych raised pensions and made a series of pro-Russian statements, his rating rose quickly, so the timing of each poll became critical (the chart below, by KMIS President Prof. V. Khmelko, presents a clear picture).
The first poll to register Yanukovych’s rising rating, a poll that took five days, was the one carried out for FGD. We sent the bulk of the data to Moscow and learned some time later, from news releases, that Yanukovych’s rating was now roughly the same as Yushchenko’s; Gleb Pavlovsky said as much on Ukraine’s Channel 5. Since this contradicted all the other available data, it was naturally viewed as yet another Moscow canard, and the Ukrainian sociologists asked to comment insisted that FGD had no polling network of its own in Ukraine, so the data must either have been falsified or have been compiled by incompetent personnel.
What can I say? I wouldn’t wish this situation on anyone. Although in 2002 FGD promptly posted all of its Ukrainian campaign survey data on its Web site, this time it refused to publish the data. Had the results of the polls KMIS conducted for FGD been published regularly and in a timely fashion, back when Yushchenko’s ratings were clearly higher than Yanukovych’s, FGD would have strengthened its professional reputation in Ukraine, and we would not have suffered either. The next day I received a phone call from FGD Director Aleksandr Oslonin, who said that we were now allowed to publish our data and comment on our collaboration (something we had long insisted on). Thanks a lot! A very timely move, indeed.
Then came the results of the KMIS and SOCIS polls commissioned by Mykhailo Pohrebynsky’s Kyiv Political and Conflict Study Center (KCPDK), which together covered over 11,000 respondents. The results, carried by the media, agreed: Yushchenko and Yanukovych were running even. That same day, SOCIS and the Democratic Initiatives Foundation (FDI) held a press conference summing up the special rolling poll that SOCIS had carried out on FDI’s commission (the table here shows these data for a single index, taken from the press release of the KMIS-SOCIS press conference of Oct. 7, 2004).
Another phase in our story ensued, this time marked by the discrediting of consortium members engineered by forces opposed to the government. To begin with, FDI, for reasons best known to its leadership, included in its press release rolling poll data covering a twenty-day period (see the second column of the table), whereas the data collected during the most recent period (the remaining 900 questionnaires, reflected in the third column) differ from the results obtained by KMIS and SOCIS with the KCPDK questionnaires by no more than one percent! SOCIS President Mykola Churylov said so during the press conference, but the media focused on the press release, so SOCIS ended up reporting two results: according to one, Yanukovych was catching up with Yushchenko; according to the other, a 5% gap remained (even though data for such close periods do not actually differ much). This did not pass unnoticed, and SOCIS, too, was criticized by Channel 5 and on various Web sites. In a word, SOCIS also landed on the list of falsifiers. Welcome to the crowd, Mr. Churylov!
All this is damaging to the reputation of both SOCIS and FDI, and generally to the exit poll consortium.
Second, on October 1, at a press conference at UNIAN, I spoke about the differences between FGD’s poll results and ours, citing (a) the indices used, (b) the timing of the polls, and (c) the structure of the questionnaire (incidentally, UNIAN’s report mentioned only the questionnaire structure and the fact that KMIS had done the fieldwork, although I had discussed all of these factors). I repeated these points at the joint press conference with SOCIS on Oct. 7, but stressed that a comparison of columns one and seven, and of columns one and four, shows that the SOCIS data for the same period also indicate the ratings converging. The differences in questionnaire structure can therefore explain no more than 2-3% of the difference, although this requires further investigation.
At his request, I sent the questionnaire to Serhiy Taran, who then published a statement of the Mass Information Institute, signing it both as the institute’s director and as a postgraduate student of a reputable American university. In that publication, relying on substantiated arguments as well as purely subjective assumptions, Serhiy Taran estimated that the questionnaire structure alone could add as much as 10% to the pro-government candidate’s rating, even though the empirical data do not support this hypothesis. Then came an admonition directed at KMIS and SOCIS, to the effect that working with KCPDK, the producer of such bad questionnaires, would damage the reputations of these respected companies. After that, Serhiy Taran and FDI Director Ilko Kucheriv held a press conference, reported by Channel 5 on Oct. 8, at which the newscaster suggested it could mean another 20% added to Yanukovych’s rating (and this when the KMIS data based on the questionnaire at issue differed from the SOCIS data, received a week later, by a mere 1%).
Why should anyone question our reputation on the basis of assumptions about how such a questionnaire might influence the results, when all of this can be determined precisely from the available data? Acting in strict accordance with ESOMAR/WAPOR regulations (І4.2 10), we commented on the possible limits of the poll’s accuracy and on our doubts about the questionnaire’s format, noting that it could widen the margin of error. Why should that damage our reputation? Had KCPDK commissioned nonprofessional companies to carry out the poll, journalists would never have learned of the possible divergences in the data, nor would they have obtained the questionnaire to analyze.
Judging by Internet forums and the letters we have received, the damage done by the Mass Information Institute and Channel 5 to the reputations of KMIS, SOCIS, and FDI, and thereby to the exit poll consortium as a whole, has proved entirely real (although Channel 5 deserves credit for partially rectifying the situation: a day later it invited KMIS President Khmelko and FDI Director Kucheriv to appear live and offer their explanations).
In summing up the above, I wish to say that:
1) Data provided by professional polling centers, when based on the same indices and covering the same period, are in close agreement, and the actual discrepancies do not exceed the permissible statistical error;
2) Political clients discredit researchers by publishing only the selected data that serve their purposes, thus creating the illusion that professional polling centers are producing inconsistent results;
3) Political forces undermine the reputation of sociologists by questioning their professional integrity every time poll results are not to their liking.
Gentlemen! Do not drag us into politics. Sociologists must assess the situation and state precisely what is happening. I know that we sometimes bring bad news to someone or other, but fighting us will not help you; you must fight the negative phenomena themselves, not those who monitor and record them. Our assessments are accurate enough (visit the professional polling Web sites, not the sites run by various media). The problems lie in society, not in the reading on your thermometer. If you sow the seeds of distrust toward sociologists, you will smash society’s mirror, exit polls and other surveys will become meaningless, and every politician will feel free to cite that notorious “majority opinion” prepared by his own campaign headquarters. Who could possibly benefit from that?
AUTHOR’S NOTE
A “rolling poll” works as follows: 2,000 individuals are polled first, in batches of 200, with a new batch interviewed every two days. Thereafter, another 200 respondents are added every two days and the oldest 200 are dropped, so that at every stage the rolling poll characterizes the situation over the previous twenty days: a wave added on Sept. 20-21, for example, joins the data gathered on Sept. 2-19. Thus a rolling poll that accumulates data over 20 days reflects, roughly speaking, the situation as of about ten days earlier. This is why the rolling poll conducted on Sept. 13-Oct. 2, with its results made public on Oct. 6, recorded only a modest narrowing of the gap between the leading candidates’ ratings: from 6-8% down to 4-5%.
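To make the arithmetic of that lag concrete, here is a minimal sketch of the rolling-window mechanics described above; the wave counts and supporter numbers are invented for illustration, not actual poll data.

```python
from collections import deque

# Rolling-poll mechanics: 10 waves of 200 respondents each cover ~20 days;
# every new wave pushes out the oldest one.
WINDOW_WAVES = 10
WAVE_SIZE = 200

window = deque(maxlen=WINDOW_WAVES)  # oldest wave is dropped automatically

def rolling_gap() -> float:
    """Gap between candidates A and B, in points, over the current window."""
    a = sum(w[0] for w in window)
    b = sum(w[1] for w in window)
    return (a - b) / (len(window) * WAVE_SIZE) * 100

# Hypothetical trend: A's support falls by one respondent per wave while
# B's rises, so the true gap shrinks steadily wave by wave.
for step in range(12):
    window.append((90 - step, 70 + step))  # (A supporters, B supporters)

newest_a, newest_b = window[-1]
print(f"20-day rolling gap: {rolling_gap():.1f} points")  # 3.5
print(f"newest wave alone:  {(newest_a - newest_b) / WAVE_SIZE * 100:.1f} points")  # -1.0
```

The published rolling figure still shows a lead of several points even when the newest wave alone already shows the challenger ahead, which is exactly the roughly ten-day lag described above.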