CNN-IBN-ADR's Rate your MPs survey raises questions

BY ANKITA PANDEY | IN Media Practice | 18/04/2014
The survey hopes to influence the agenda of elections by highlighting voter priorities. But there are limits to the usefulness of the conclusions drawn from the survey.
ANKITA PANDEY explains why

In an election season, survey-based programmes can help to attract an audience. No wonder the sting operations that exposed pre-election surveys have already been forgotten by the Indian media and it is business as usual. We continue to be presented with a variety of surveys of doubtful methodology.

Last month, CNN-IBN, a leading English television news channel, released ‘the first of its kind report card’ of our Members of Parliament (MPs). It is based on the Association for Democratic Reforms’ (ADR) Rate your MPs Survey. ADR claims that this is ‘perhaps the largest survey ever done in the world in one country . . . this survey is 10 times larger than any survey ever done.’ The survey tries to identify voter priorities. For each pre-specified issue, respondents were asked to indicate their priority by choosing among Very Important, Important, and Not Important. The survey also asked respondents to assess their MP’s performance on each issue by choosing among Good, Average, and Bad. The survey reveals that, overall, Better employment opportunities, Drinking water, Better roads, Better hospitals/Primary Healthcare Centres, and Better electric supply are the five most important issues.

There is some difference between the priorities of rural and urban respondents. The survey also shows that only 17 per cent of the MPs obtained an above-average rating. This survey draws attention to secular concerns of voters and, as ADR’s founder-trustee Dr. Ajit Ranade argues, it helps ‘generate [election] agenda bottom-up’ and challenges ‘election agenda . . . imposed top-down.’ The survey hopes to influence the agenda of elections as well as governance by highlighting voter priorities. ADR’s attempt is important because our political leaders are unable to focus on development and governance and rely on sentiments and identity to appeal to voters.

But there are limits to the usefulness of the conclusions drawn from the survey. The websites of CNN-IBN and ADR do not provide essential details about their survey methods. In fact, there are discrepancies between the two websites. This raises doubts about the authenticity of the survey.

The first concern relates to the number of constituencies covered by the survey. ADR’s website says that ‘Due to limitations of time, budget and logistics, we were able to do around 525 of the 543 MP constituencies,’ whereas the CNN-IBN website claims that it presents a ‘report card of all of India's 543 MPs.’ In addition, according to CNN-IBN the overall sample size was 187,431, whereas the ADR website suggests that ‘over 250,000 respondents in 525 constituencies’ were interviewed.

I contacted CNN-IBN/ADR about four weeks after CNN-IBN released the results of the survey. Prof. Trilochan Sastry, a founder-trustee of ADR, explained the discrepancy by noting that they ‘have just completed 525+ constituencies. At the time of giving it to CNN-IBN it was lower…The sample size was for the total survey which was just completed. When given to CNN-IBN it was 187,431.’ This justification raises an important question. Why did CNN-IBN not wait till the entire survey was completed? In a multi-stage survey, results should not be released until the entire survey is completed if there is a possibility that people’s perceptions are likely to be affected by information about how others are thinking. But CNN-IBN and others widely publicised the results of the (partial) survey. So, the release of part of the results while the survey was in progress could have affected the answers of respondents in the constituencies yet to be surveyed. Therefore, results from the phases of the survey completed before and after CNN-IBN’s release of results might not be comparable.

But this is not the only concern. The survey suffers from a number of other problems.

The survey was conducted across the country over a number of weeks, close to an important national election. The survey design should therefore have taken care of the possibility that, at different stages, respondents and interviewers could have been influenced by different political developments. Prof. Sastry denied the possibility of political developments affecting respondent behaviour. According to him, ‘the survey only asked the citizens what their priorities are and how they rate the local governance on the same issues. These include jobs, water, electricity etc . . . The election probably has no impact as people need these things whether elections are held or not.’

But this is only partly true. During the election season parties and candidates try to influence voter priorities and the intensity of the campaign changes over time. The questions in the survey dealt with defence and anti-terrorism on the one hand and drinking water and subsidised food on the other. If a constituency is surveyed after a major riot or after the rally of a pro-defence candidate, then voter priorities could be affected. So, results from different stages of the survey may not be comparable. Unfortunately, we do not know the exact schedule according to which constituencies were surveyed.

It is not clear how many surveyors were involved and how they were trained to minimise variations between them. It seems that the questionnaire was prepared only in Hindi and English. It is not clear how uniformity was ensured in oral translations into other languages. Even the Hindi and English versions of the questionnaire are not identical. For example, the question about transport facilities is different in the English (‘Better public transport’) and Hindi (‘yatayat ke achhe sadhan,’ good means of transport) versions of the questionnaire. The English version draws attention to public transport and the Hindi version is vague. In fact, even the English version of the question is vague. Public transport could refer to very different things in different parts of India. In his response, Prof. Sastry argued that they ‘consider transport as a need for people whether from public (government) or private sources. The survey only gauges the extent to which voters give importance to the issue (of say, transport).’ But this does not mean that the questions can refer to different things in different languages.

There is another concern about the manner in which the survey was conducted. There is very little information about sample design on the CNN-IBN and ADR websites. According to the ADR website, the survey covered ‘around 500 respondents in each constituency.’ The website does not give details about who was included in the survey. Prof. Sastry clarified that ‘stratified random sampling was used based on publicly available data to reflect rural-urban and other strata. All socio economic categories were adequately covered. Only potential voters over 18 [years] were covered.’ Neither the website nor ADR’s response to queries revealed which publicly available sources of data were used to design the sample to make it representative of different socio-economic groups. (Note that the detailed socio-economic tables of the 2011 Census have not yet been released.)
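To see why the undisclosed data sources matter, consider a minimal sketch of what stratified sampling with proportional allocation would involve in a single constituency. The population figures and the simple rural-urban split below are entirely hypothetical, since ADR has not published the strata or data it actually used.

```python
def allocate_sample(strata_populations, total_sample=500):
    """Allocate a fixed sample proportionally across strata.

    strata_populations: dict mapping stratum name -> population count.
    Returns a dict mapping stratum name -> number of interviews.
    """
    total_pop = sum(strata_populations.values())
    return {name: round(total_sample * pop / total_pop)
            for name, pop in strata_populations.items()}

# Hypothetical constituency: 7 lakh rural and 3 lakh urban voters.
allocation = allocate_sample({"rural": 700_000, "urban": 300_000})
# allocation -> {"rural": 350, "urban": 150}
```

The allocation is only as good as the population counts behind it: if the rural-urban split comes from outdated or inappropriate data, the 500 interviews will misrepresent the constituency no matter how carefully the interviews themselves are conducted.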

Another point needs to be noted. The ADR website suggests that the sample size was more or less the same in every constituency (‘around 500 respondents’). Prof. Sastry argued that ‘sample size need not vary [across constituencies]’ and Dr. Ranade added that ‘care was taken about appropriate sample sizes etc, subject to limitations of time, etc.’ But since ADR does not indicate the level – constituency, district, or state – at which the sample is representative, we cannot say whether their sample size was adequate and whether it should have varied across constituencies.

Using the link to the online software they used to decide sample size, I found that if the population is not divided into groups and is homogeneous for the purpose of the survey, then the required sample size (for the 5% accuracy claimed on their website) does not increase once the population exceeds about 223,000. But if the population is divided into groups and you want the survey results to be valid at the level of each group, then the overall sample size is the sum of the sample sizes calculated for the individual groups, which is definitely larger.
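This plateau can be reproduced with the standard Cochran sample-size formula plus the finite-population correction, which is what most online calculators implement. The sketch below assumes a 5% margin of error, 95% confidence, and maximum variability (p = 0.5); the exact settings behind the calculator ADR linked to are not stated, and the precise population at which the rounded answer stops changing depends on those settings.

```python
def sample_size(population, margin=0.05, z=1.96, p=0.5):
    """Cochran's sample size with finite-population correction."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population size
    return round(n0 / (1 + (n0 - 1) / population))

# The required sample barely grows once the population is large ...
small = sample_size(250_000)      # roughly one constituency's electorate
large = sample_size(10_000_000)   # forty times larger
# ... but requiring valid estimates for separate groups multiplies the total:
per_group = sample_size(80_000)
three_groups_total = 3 * per_group
```

Here `small` and `large` both come out at 384 respondents, while three groups needing their own estimates require 1,146 in total. This is exactly the article's point: a flat 500 per constituency may be adequate for constituency-level totals, but not if the results are meant to be valid separately for rural and urban respondents or other sub-groups.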

So far we have discussed problems related to survey design and implementation. We will now discuss a few problems related to the interpretation of survey data. CNN-IBN compared the performance scores of MPs from different constituencies. But it is not clear if such comparisons are possible. We have already noted that premature release of results creates problems for comparison. There are other problems too.

For example, the people of Jhansi might be happy with 16 hours of electricity supply a day, while the people of Bangalore might find even 20 hours a day unacceptable. Views can also differ within constituencies that have both urban and rural areas. So, even if electricity had the same importance across and within constituencies, views on performance could vary systematically for reasons that are not directly related to the performance of a sitting MP.

Prof. Sastry argued that ‘If someone says it is good, average or bad, (s)he may be using different scales or expectations from someone else even next door. There may be constituency or State specific differences as well. A good from someone with low expectations is different from a good given by someone with high expectations. In a large survey this is the best one can do.’ But this does not mean we can freely compare MPs’ performance scores. We should compare only those constituencies that are broadly similar in terms of some objective criteria. The following factors add to the difficulty of making comparisons. First, ruling parties often pay less attention to opposition-controlled constituencies, and the ruling party at the Centre is not in power in all states. Second, since the issues in assembly and parliament elections are not similar, results from states where only parliament elections are being held cannot be compared directly with results from states where both assembly and parliament elections are being held.

There is another problem. The survey data seems to have been processed assuming equal differences between the performance ratings (Good, Average, and Bad). According to Prof. Sastry, ADR ‘wanted a simple scale since the mass of voters may not be interested or able to fine tune.’ This is not appropriate. A small drop in performance can make a person move from Good/Average to Average/Bad in response to the question on performance. But a very large improvement may be needed to push a person from Bad/Average to Average/Good. While a carefully designed survey can handle this point, more time has to be spent in the field to conduct such a survey.
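The trouble with treating Good/Average/Bad as equally spaced numbers can be shown with a toy calculation. This is a minimal sketch; the coding Good = 3, Average = 2, Bad = 1 is an assumption about how such scores are typically computed, since ADR has not published its scoring method.

```python
def mean_score(good, average, bad):
    """Average ratings on an equal-interval scale: Good=3, Average=2, Bad=1."""
    total = good + average + bad
    return (3 * good + 2 * average + 1 * bad) / total

# Two MPs with the same 'performance score' but very different voter opinions:
mp_polarising = mean_score(good=50, average=0, bad=50)   # half love, half hate
mp_middling   = mean_score(good=0, average=100, bad=0)   # everyone lukewarm
```

Both MPs score exactly 2.0 on this scale, even though voters' views of them could hardly be more different. An equal-interval average collapses such distributions into a single number; reporting the full distribution of ratings, or using methods designed for ordinal data, avoids this loss of information.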

To conclude, in a hurry to beat competitors, television channels and their survey partners are ignoring methodological requirements while expanding the scale and frequency of surveys. CNN-IBN-ADR’s survey, whose methodology and interpretation of data raise doubts, is an example of this. ADR’s response to queries does not address the problems directly and instead stresses the motive behind the survey and the usefulness of its results. But without more precise information about the design and implementation of the survey, ADR’s effort will go to waste, as policymakers and researchers cannot use its results.

Ankita Pandey is an independent researcher based in Bangalore.
