Theory and practice of election forecasting

BY B S Chandrasekhar | IN Digital Media | 24/04/2004
An overview of forecasting methods, their perils and the pollsters’ track record so far.
 


Since the announcement of elections to the Lok Sabha and four State Assemblies in January 2004, the media have been flooded with opinion polls. By the time voting started there were at least twenty pre-poll surveys, and on the very first day of voting we had half a dozen exit polls. With many more pre-poll surveys and exit polls promised over the next three weeks, this article examines the theoretical basis of election forecasting and its actual practice in our country.

Opinion Polls - The New Soap Operas

Election surveys are not new to India. We have been exposed to such surveys for over three decades, but in the present election the television programmes presenting election forecasts have assumed the form of mega serials - some sort of ‘sizzling sagas of our supercharged times’ with unexpected twists and turns. The Election Commission, by staggering the elections over a long period, has also helped give an ‘episodic’ character to such programmes. Most of the news channels plan to telecast 15-20 ‘episodes’ of election surveys over the next three weeks, and it would not be surprising if the forecasters turn this election into a cliffhanger.

There is yet another resemblance to soaps: the viewer has plenty to choose from. We have estimates to satisfy everyone. If you are a supporter of the NDA you have the estimate of 340 seats for the NDA and 105 for the Congress; if you are a Congress supporter you can quote another agency, which has given 200 seats to the Congress and only 230 to the NDA. Similarly, there has been a wide variety of results in the exit polls for the first phase of the elections, to satisfy all shades of political belief. Even at the state level there is choice: in Karnataka, Congress supporters are happy that the NDTV exit polls say Krishna will stay, and BJP supporters are happy that the Star News exit polls predict the exit of the Chief Minister.

A Multi-disciplinary Study

Election forecasting is a risky business everywhere. Even in a country like the UK (the size of Kerala, with the population of Andhra Pradesh), with two major political parties and a high percentage of committed voters, there has been more than one occasion in the recent past when forecasts were completely off the mark. We still remember that George Bush was not the favourite of the forecasters in the Presidential election of 2000. India, with so much diversity and so many political parties, is always a challenge for election forecasters, and it is not surprising that forecasts have been proved wrong many times. It goes to the credit of our election forecasters that in spite of such heavy odds they have been right on occasions.

Election forecast studies need expertise from various disciplines - Statistics, Sociology, Psychology, Political Science and so on. There are many stages in such forecasting, and the most important is perhaps estimating the vote shares of the different parties. The vote shares, as we all know, are estimated through large-scale sample surveys, and the success of such surveys depends on at least three factors: selecting a representative sample, developing a good questionnaire and conducting the fieldwork properly. Another crucial stage in forecasting is converting vote percentages into estimates of seats in the Lok Sabha or the Assembly.

Sampling - Theory

The basic principle of sampling is that we can draw reasonably valid conclusions about the whole (the population or universe) by examining only a small part (the sample). Sampling is based on probability theory and is applied in almost all fields of knowledge. According to statistical theory, from a representative and adequate sample we can estimate a population value within a certain interval with a certain degree of confidence. The width of the interval depends on the extent of the sampling error (the standard error), which in turn depends on the size of the sample. Without going into the details, we can say that when estimating a percentage from a sample of 10,000 the standard error will be at most 0.50 percent; and if in a survey of 10,000 voters 30 percent say they will vote for the Congress, then we can say with 95 percent confidence that in India as a whole 29 to 31 percent will vote for the Congress (the interval being roughly two standard errors on either side of the sample figure). In such surveys there is also another type of error, called non-sampling error, and controlling it is equally important for forecasting.
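
To make the arithmetic concrete, here is a minimal sketch in Python of the calculation behind the figures quoted above; the sample size of 10,000 and the 30 percent Congress vote share are simply the article's own illustrative numbers.

```python
import math

n = 10_000   # sample size used in the example above
p = 0.30     # proportion in the sample saying they will vote for Congress

standard_error = math.sqrt(p * (1 - p) / n)   # about 0.0046, i.e. roughly 0.5 percent
margin_95 = 1.96 * standard_error             # about two standard errors, ~0.9 points

low, high = p - margin_95, p + margin_95
print(f"95 percent confidence interval: {low:.1%} to {high:.1%}")   # ~29.1% to 30.9%
```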

The first requisite of a good sample is that it should be representative. Representativeness is achieved by selecting the sample ‘at random’, that is, by ensuring that each unit in the population has an equal probability of being included in the sample. The second requisite is that the size of the sample should be adequate - it should not be too small.

A few other points about sampling are worth noting. The sampling error does not depend on the population size: whether the sample is 0.01 percent or 0.0001 percent of the total electorate does not affect the error. Increasing the sample size does not reduce the error in the same proportion: if the error is 0.50 percent for a sample of 10,000, then for a sample of 40,000 - four times as large - the error will not fall to a quarter (0.125 percent) but only to half (0.25 percent). A larger sample invariably strains the quality of fieldwork and increases the non-sampling error, so a larger sample is not always a better sample. Finally, the standard error for state-level estimates will be very much larger than the error for the national estimates.
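
A short sketch of the square-root relationship described above; the worst-case assumption p = 0.5 is what gives the 0.50 percent figure for a sample of 10,000.

```python
import math

def standard_error(n, p=0.5):
    """Worst-case standard error of a sampled percentage (p = 0.5)."""
    return math.sqrt(p * (1 - p) / n)

print(f"{standard_error(10_000):.2%}")   # 0.50% for a sample of 10,000
print(f"{standard_error(40_000):.2%}")   # 0.25% - quadrupling the sample only halves the error
```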

Sampling - Practice

Let us see how this theory is put into practice in election surveys. Selecting 10,000 voters at random from 680 million is simply impossible, so the selection is done in several stages: State, Region, Constituency, Polling Booth and so on. In this process the standard error increases, but as long as the principle of ‘randomness’ is applied at each stage it is possible to estimate the error using appropriate formulae. However, more often than not, convenience dictates the selection of constituencies and polling booths (more about this later), compromising the ‘scientific’ nature of such studies.
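
Purely as an illustration of the multi-stage idea - the sampling frame, constituency names and counts below are invented, not any agency's actual design - the selection might be sketched like this:

```python
import random

def multi_stage_sample(frame, n_constituencies=2, n_booths=2):
    """Pick constituencies within each state, then booths within each
    constituency, at random - as the theory requires at every stage."""
    booths_chosen = []
    for state, constituencies in frame.items():
        for constituency in random.sample(sorted(constituencies), n_constituencies):
            for booth in random.sample(constituencies[constituency], n_booths):
                booths_chosen.append((state, constituency, booth))
    return booths_chosen   # respondents would then be interviewed at each chosen booth

# toy sampling frame: {state: {constituency: [polling booth ids]}}
frame = {"Karnataka": {"Bangalore North": [101, 102, 103, 104],
                       "Mysore": [201, 202, 203, 204]}}
print(multi_stage_sample(frame))
```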

In all election studies, the final stage of selecting voters uses a sampling scheme with set quotas for different demographic and social groups. Strictly speaking, the theory of sampling does not apply to quota sampling, since the selection is purposive. However, as long as there is no deliberate selection based on the political leanings of the respondents, a quota sample is considered as good as a random sample.
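
A hedged sketch of how quotas work at the final stage; the group labels and quota sizes here are invented for illustration only.

```python
def fill_quotas(voters, quotas):
    """Keep recruiting respondents until the quota for each group is filled.
    voters: iterable of dicts with a 'group' key; quotas: {group: target count}."""
    remaining = dict(quotas)
    selected = []
    for voter in voters:                    # whatever order the field throws up
        if remaining.get(voter["group"], 0) > 0:
            selected.append(voter)
            remaining[voter["group"]] -= 1
    return selected

stream = [{"name": "A", "group": "urban women"},
          {"name": "B", "group": "rural men"},
          {"name": "C", "group": "urban women"}]
print(fill_quotas(stream, {"urban women": 1, "rural men": 1}))   # A and B; C exceeds the quota
```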

As stated earlier, with a sample of 10,000 the standard error for the national estimates will be 0.50 percent, but in the same survey the error for a particular State's estimate could be as high as 2.5 percent, since a single state accounts for only a few hundred of the interviews. This is why a survey that gives correct estimates at the national level can still go wrong in some states. Most of the present surveys are not sensitive enough to track even 2-3 percent changes at the state level, and if a researcher talks about a one percent shift against one party in one state or a 2 percent shift in favour of another party in another state, such statements have to be taken not just with a pinch but with a large spoonful of salt.
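
The arithmetic behind the state-level caveat, assuming - purely for illustration - that a single state accounts for about 400 of the 10,000 national interviews:

```python
import math

national_se = math.sqrt(0.25 / 10_000)   # about 0.005, i.e. 0.5 percent
state_se = math.sqrt(0.25 / 400)         # about 0.025, i.e. 2.5 percent
print(f"national: {national_se:.1%}, single state: {state_se:.1%}")
```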

Questionnaire

In election surveys information is collected through a questionnaire, and it is very important that the questionnaire is prepared with care. "If you ask a silly question, you get a silly answer", and in many surveys such silly questions do creep in. The questions should elicit answers that are reliable (consistent) and valid (relevant to the context). Textbooks on social research methods list some of the questions a questionnaire designer should ask himself before framing a question, such as the following:

  • Will all respondents understand the question in the same way?
  • Will the respondents know, or can they reasonably be expected to know, the answer?
  • Will the respondents, even if they know the answer, tell the truth?
  • Will the question, as put, provide the information needed?

To give a simple example, most people can give a reliable answer to the question ‘Which soap are you using?’ But answering the question ‘Why are you using this soap?’ may not be that easy (if it were, there would not be so many brands of soap in the market). When such a question is asked, the respondent could arbitrarily tick one of the choices listed, and if the same question were asked at a different time he could tick a different answer altogether. The answers to such questions will not be consistent or reliable. The framers of questionnaires often forget this elementary point. One recent election forecast programme discussed findings about ‘the fear factor’, forgetting that a survey question on such a subject may well be self-defeating.

In most surveys, though proper care is taken in preparing the original questionnaire in English, the translation into regional languages is often left to interviewers who may not have a good knowledge of the language and idiom of the area where the survey is conducted. A badly prepared or badly translated questionnaire can spoil an otherwise well-organized survey.

Field Work

The quality of survey research depends mainly on how well the fieldwork is conducted, and this in turn depends on the professional capabilities of the interviewers who collect the data. The interviewer has to establish proper rapport with the respondents; his or her personality, manners, tone of asking questions and even "body language" contribute to the collection of accurate information. Interviewers have to be properly selected and adequately trained, and there has to be proper supervision during the fieldwork.

The current election surveys are being conducted by different agencies. In our country there are only two or three market research agencies with a network of field units staffed by professionally trained researchers, and even these have to recruit inexperienced local field workers for large surveys. The other agencies presently conducting election surveys surface only at election time, and there will always be a big question mark over the capabilities of their field staff and the extent to which their work is supervised.

The agencies conducting the fieldwork have to work to stringent deadlines, and more often than not convenience dictates their selection of constituencies, polling booths and so on. When an agency has to complete 25,000 interviews in the course of 6-8 hours in hundreds of towns and villages, one can only imagine how ‘representative’ the sample will be and to what extent the quality of fieldwork is ensured. Laloo Prasad Yadav is quite justified in complaining that these surveys do not cover his constituency of voters. Even in normal times market researchers seldom reach far-off places and the below-the-poverty-line sections of the population. (See the same author's comparison of Census and NRS figures for TV ownership in rural areas at www.thehoot.org/mediaresearch.)

In large-scale surveys things can go wrong in a number of ways, and they do go wrong. Nevertheless, in such surveys the universal law of errors operates in an interesting way: if one set of bad interviewers pulls the data in one direction, another set balances it by pulling in the opposite direction.

Converting Vote Shares to Seats

Sample surveys provide estimates of the vote shares of the different parties; the next crucial part of forecasting is converting these vote shares into seats. There are no ‘scientific’ formulae, and forecasters have their own methods based on accumulated experience. Recently the Congress was justifiably peeved when the same data, supplied by a market research agency for Maharashtra, was interpreted differently by two forecasters, one giving a majority of seats to that party and the other giving a big chunk of those seats to its rival. Forecasting the number of seats is comparatively easy when there is a direct contest between two parties, and quite hazardous when there are three or more contestants. Some years back in Uttar Pradesh the BJP got 2-3 percent more votes than the Samajwadi Party (SP), yet the latter won ten more seats than the BJP, a possibility no forecaster could have anticipated. For a similar reason, in the last parliamentary elections all the forecasters underestimated the strength of the SP and the BSP.
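
The article stresses that there is no agreed formula here. One textbook device, shown purely as an illustration and not as any forecaster's actual method, is a ‘uniform swing’: apply the estimated change in each party's national vote share to every constituency's previous result and count who comes out ahead. The party names, vote shares and swing figures below are invented.

```python
from collections import Counter

def uniform_swing_seats(previous_results, swing):
    """previous_results: one {party: vote share} dict per constituency;
    swing: {party: estimated change in vote share since the last election}."""
    seats = Counter()
    for constituency in previous_results:
        adjusted = {party: share + swing.get(party, 0.0)
                    for party, share in constituency.items()}
        seats[max(adjusted, key=adjusted.get)] += 1
    return seats

# toy example: three constituencies, two parties, a 2-point swing from A to B
past = [{"A": 0.46, "B": 0.44}, {"A": 0.51, "B": 0.39}, {"A": 0.41, "B": 0.45}]
print(uniform_swing_seats(past, {"A": -0.02, "B": 0.02}))   # B now leads in 2 of the 3 seats
```

Even this crude device shows how a small error in the estimated vote shares can translate into a much larger error in seats, which is exactly why three-cornered contests are so hazardous for forecasters.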

Survey on Surveys

How surveys may go wrong can be illustrated with an example. The CSDS has reported the results of ‘a poll on polls’ it conducted some time back in Delhi. The questions and the responses are given below:

Q1. ‘Have you read or heard about polls?’ Yes: 41%; No: 59%

Q2. ‘Were you influenced by these surveys?’ Yes: 10.2%; No: 30.8%

Q3. ‘How were you influenced?’ Shifted from BJP to Congress: 5.3% (bandwagon effect); shifted from Congress to BJP: 4.9% (sympathy factor).

Q1 is simple and easily answered, and 41 percent is a reasonable level of awareness for a literate community. But Q2 is not as simple a question as it looks. Political analysts have always said that voting decisions are influenced by many factors, and isolating one factor from the others is not easy. If such a question had to be asked at all, there should have been a third option - ‘Do not know/Cannot say’ - and a majority of the respondents might well have ticked it. If we assume that the respondents answered sincerely, then 10 percent of all voters - about a quarter of those exposed to the results - being influenced makes a strong case for banning such surveys. Q3 and its answers raise more questions: sympathy for whom? The candidate? The party? Its leaders? What benefit is there in joining the bandwagon? And so on.

Sample surveys are not a panacea, and in this case the wrong method was adopted. There are more suitable methods - depth interviews, projective techniques, longitudinal studies and the like - for researching such subjects.

Track Record

Public memory is short. The morning's newspaper becomes raddi by the afternoon, and the evening's television programmes are forgotten by the next morning; public memory of the media is shorter still, and the media's memory of media failures is perhaps the shortest. Let us recall how correct our election forecasters were in the state assembly elections held in 2002 and 2003:

Gujarat: Early forecasts - the BJP may win a two-thirds majority; mid-term corrections - Congress gaining ground (‘Patel discontent’) but the BJP will still win, with one magazine giving it to the Congress; final forecasts - comfortable majority for the BJP. Result - a sweep by the BJP, with 25-30 percent more seats than the final forecasts.

Madhya Pradesh: Early forecasts - the Congress will lose; mid-term corrections - Congress gaining ground (the CM's ‘Hindu card’ working); final forecasts - comfortable majority for the BJP. Result - a sweep by the BJP, with 20-25 percent more seats than the final forecast.

Rajasthan: Early forecasts - the Congress will win; mid-term corrections - Congress doing well (‘record of the CM’); final forecasts - a tough fight, the Congress can scrape through. Result - a two-thirds majority for the BJP.

Delhi: All stages - Congress winning. Result - a comfortable majority for the Congress.

Chhattisgarh: All stages - Congress winning. Result - a comfortable majority for the BJP.

The forecasters were right about Delhi. In Gujarat and Madhya Pradesh they were right in saying the BJP would win but did not anticipate the magnitude of the victory. In Rajasthan and Chhattisgarh they were completely wrong. What is their success rate - one out of five or three out of five? Let us leave the answer to the forecasters. The Delhi experience also suggests that at present market researchers are capable of understanding only the urban mind.

As mentioned at the beginning, forecasting in India is a risky business, and it is understandable that the estimates often go wrong. We should be glad that they have been right on certain occasions in spite of such heavy odds.

Impact on Voters

There has been extensive research on the role of the media in elections; in fact, serious research on the media started with studies of Presidential elections in America. The accumulated research says that the media play only a marginal role during elections and that, in general, voting decisions are not influenced by media campaigns. Of course, all our political knowledge comes from the media, and in the long run the media play an important role in politics. But what has been established is that in the short period of electioneering the media cannot change voting decisions, and as such there is no need to worry about the impact of election surveys.

General elections in India are often compared to kumbh melas. Election surveys could be considered one of the side attractions of these mahamelas. Let us enjoy them as long as the mela lasts.

(Chandrasekhar was Director of Audience Research at Doordarshan. Kannada University, Hampi, has published his book ‘Research Methods in Social Sciences’.) Contact: baguru@eth.net
