The new TV ratings system has drawbacks too

IN Media Business | 29/05/2015
"The problem with BARC is that this measurement system is controlled by those who are being measured."
The Hoot talks to CHINTAMANI RAO about BARC, which has begun delivering metrics to broadcasters and media buyers (Pix: Chintamani Rao; credit: campaignindia.com).
Chintamani Rao is one of the few people who has seen TV audience measurement in India in all its aspects. He has been a subscriber on both the broadcasting and the media agency sides of the business, and has served on the industry bodies of both. He was involved with BARC (Broadcast Audience Research Council) from its inception, was its Chairman in its early days, and played a pivotal role in regulatory issues. He currently serves on the TAM Transparency Panel, an international group of independent experts that advises TAM on subscriber-related issues.

BARC is here. Does that take us closer to a more satisfactory audience measurement system than TAM, in your view? And if not, why not?

That BARC – or any vendor – is here cannot be an assurance of anything. The proof of the pudding is in the eating.
 
Do you see any improvement over TAM ratings in the metrics that the new measurement system has produced so far?
 
So far, no: mainly because this is household-level data. What that means is, BARC data as of now tells you only what a household is tuned into, not who in the household is watching. That’s of precious little use in media planning and buying. It is okay for testing the system and settling it down, but viewer-level data is table stakes: until you have that you aren’t in the game. So for the purpose of media planning and buying, BARC data is not yet useful.

Will BARC be able to deliver better value to advertisers and media planners?
 
In principle there is no reason why it shouldn’t. But my concern as an industry observer, and one who has been deeply involved with the subject, is that this measurement system is controlled by those who are being measured. 
 
BARC was intended to be an equal-stakes venture of three industry bodies (broadcasters, advertisers and advertising and media agencies). But finally broadcasters have 60% of the vote, while the other two constituents have only 20% each. In effect, those whose performance is being measured hold sway: they don’t need the support of the others to do pretty much anything. 
 
BARC was set up because the industry decided it didn’t want a vendor-driven system. It has traded that for a broadcaster-driven system. How that is better is for the advertisers and their media agencies to judge.
 
What are the implications and advantages of watermarking?
 
I’m not qualified to compare technologies, but there are two limitations I’m aware of.
 
One, the broadcaster controls the switch. If you’re disgruntled and don’t want your channel to be measured, you can simply stop watermarking, and the system will not be able to read your channel. That is not good. It will distort the picture. If I am doing market research on shampoo, for example, and am asking sample homes which brand of shampoo they use, I don’t want Brand X deciding that it won’t allow its consumers to reveal themselves. A major network doing that could hold the whole system to ransom.
 
Two, it is expensive. Small, single-channel broadcasters, of whom there are hundreds, will find it hard to afford. Other technologies don’t require anything of the broadcaster. So it could result in a partial picture, with data only for participating channels.

The universe from which the sample was chosen is the same as MRUC’s IRS. Is this a satisfactory universe, and if not why not?
 
Considering the question mark that hangs over the IRS, I would worry about it.

The sampling is 20,000 households, going to 50,000 in four years. Do you think that the sample will be sufficiently scientifically chosen to represent 160 million households satisfactorily? It is just double TAM’s earlier sample, which was pretty small.
 
First, the question of sample size. We are always concerned with sample size, perhaps because it is easy to grasp. 
 
People generally think what matters is the relationship between the size of the universe and that of the panel. That seems reasonable, but statisticians will tell you that a panel of 10,000 correctly representative of its universe will be as good a measure whether the universe is, say, 10 million or a billion. What matters is not the relationship between the size of the panel and the size of the universe but that between the size of the panel and the smallness of what it is trying to measure. 
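A rough back-of-envelope check of that statistical point, assuming simple random sampling and a textbook standard-error calculation (an illustration only, not BARC’s or TAM’s actual estimation method): the sampling error of a rating depends on the panel size and on how small the audience being measured is, and hardly at all on the size of the universe.

```python
import math

def rating_standard_error(p, n, universe=None):
    """Standard error of an estimated rating (share of homes tuned in).

    p        -- true proportion tuned in, e.g. 0.10 for a 10% rating
    n        -- panel size
    universe -- total TV households; if given, apply the finite-population
                correction, which turns out to be negligible here
    """
    se = math.sqrt(p * (1 - p) / n)
    if universe:
        se *= math.sqrt((universe - n) / (universe - 1))
    return se

# A 10% rating measured by a correctly drawn 10,000-home panel:
for universe in (10_000_000, 1_000_000_000):
    print(universe, round(rating_standard_error(0.10, 10_000, universe), 5))
# Both cases give roughly 0.003, i.e. about +/- 0.3 percentage points,
# whether the universe is 10 million homes or a billion.
```

Shrink the audience being measured, though, and the same panel gets noisy: for a 0.5% niche-channel rating the standard error works out to about 0.07 percentage points, which is some 14% of the figure being reported.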

When TV audience measurement started in this country the media world was relatively simple. There were few channels; TV was mainly entertainment; the most advertised products were fast-moving consumer goods (FMCG), mostly targeted at women, 15-44, SEC A-C. That was a broad measure, for which the then sample of perhaps 4,000-5,000 homes was adequate. 
 
Over time not only the number but the genres of TV channels grew; the range of advertised product categories widened to include financial services, auto, mobile telephones, consumer durables, … and, correspondingly, the audiences targeted began to include different demographics. So now you were measuring smaller and smaller channels, in relation to more finely defined audiences. 
 
And the TAM sample grew, too. Decision point: as you add more homes, should you cover more towns or increase the sample size in current towns, i.e., cover more towns thinly or cover fewer towns in depth? That depends on what you seek to achieve. I remember when news broadcasters asked TAM not to increase the geographical spread of its sample because more towns covered would mean more carriage fees.
 
Of course you can increase the sample in both width and depth of coverage, but someone has to pay for it. There’s no free lunch.
 
Second, BARC is starting with 20,000 meters, representing all markets. 70% of the meters will be urban: that’s 14,000 meters representing 71 million urban TV households spread across more towns and further down pop strata, compared with TAM’s 10,000 meters representing 61 million in Class 1+ pop strata.
 
BARC will also represent 82 million rural households with 6,000 meters. If sample size is an issue, how can that be even remotely representative of a universe as culturally diverse and geographically dispersed as rural India?
 
Yes, it is meant to go up to 50,000 meters over time. But that’s an intention, and eventually someone has to pay for it. If they do, and if it happens, wonderful.
 
But, as I said, it’s not just about the sample size: that’s simply the easiest handle to grab.
 
BARC will be using the National Consumer Classification System instead of the socio-economic one. Please explain this and tell us what its advantage is.
 
The Socio-Economic Classification (SEC) system came about in the late 1980s on the initiative of the Market Research Society of India (MRSI). The object was to find a better predictor of consumption behaviour than household income, which was commonly used until then. The reason is obvious: two households with similar incomes don’t necessarily behave alike with respect to consumption. So what is it about them that most influences consumption behaviour?
 
The MRSI did research to determine what parameters best indicated the propensity to consume of a household, and concluded that it was a combination of the education and occupation of the chief wage earner (CWE). That made sense intuitively, too.
 
The SEC was adopted starting with the National Readership Survey (NRS) of 1988. Some 12 or 15 years later users began to question its relevance, and the Media Research Users Council (MRUC) began to consider a new SEC structure. In 2011 MRUC and MRSI introduced the new system, the NCCS.
 
The NCCS is also based on two parameters, one of which is the education of the CWE. The other is not occupation, as it was earlier, but the number of durables owned by the household, from a list of 11.
 
There are all kinds of analyses to show why and how the NCCS is better. But to my mind it is terribly left-brained, the work of technocrats. If the purpose is to classify households according to their propensity to consume, we must look for indicators of their likely behaviour. Ownership of durables is manifest behaviour: the logic is, they consume a lot, so they must have a high propensity to consume.
 
The SEC system is more stable. For a household the parameters change slowly over time, and if they do change, that is the result of a life change. If the CWE has acquired more education or upgraded their occupation, that will definitely lead to a higher propensity to consume, so their SEC should change.
 
In the NCCS, on the other hand, if a household that owns one durable today bought two more next year its classification would change. Over time all households will own more durables, so everyone’s classification will change. That doesn’t make sense to me. 
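A toy sketch of the mechanism being described, with invented bands (the actual NCCS grid is published by MRSI; only the structure, the CWE’s education crossed with a count of durables owned from an 11-item list, is taken from the answer above):

```python
# Toy illustration of the NCCS idea, not the published grid: the real system
# crosses the chief wage earner's education with the count of durables owned
# (from a fixed 11-item list) in a lookup grid. Ranks and bands here are invented.

EDUCATION_RANK = {                 # hypothetical ordering of education levels
    "illiterate": 0, "school up to 4 years": 1, "school 5-9 years": 2,
    "SSC/HSC": 3, "some college": 4, "graduate": 5, "postgraduate": 6,
}

def nccs_class(cwe_education: str, durables_owned: int) -> str:
    """Return an illustrative class for a household (bands are made up)."""
    score = EDUCATION_RANK[cwe_education] + durables_owned   # 0..17
    if score >= 14:
        return "A"
    if score >= 10:
        return "B"
    if score >= 6:
        return "C"
    return "D" if score >= 3 else "E"

# The instability objected to above: buy two more durables and the household's
# class changes, with no other change in its circumstances.
print(nccs_class("graduate", 4))   # C
print(nccs_class("graduate", 6))   # B
```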
 
Do you think that the fact that BARC segregates the functions of data collection, analysis and reporting between three independent agencies will make it less prone to misuse and lead to more dependable metrics?
 
The practice is not unique. It is followed elsewhere in the world, too. There’s no single right way to do things. In the end what matters is panel and data security. The biggest problem is not misuse of the reported data: it’s before the data is reported, i.e., panel tampering, so the fewer the people or agencies involved in that process, the better.
 
In the end everything can be violated if you have a mind to: think of 9/11; think of 26/11. We can only try; and we can take deterrent action against those who are caught, to discourage stray thoughts in that direction.
 
The expectation is that a better-designed system with more sampling will lead to news broadcasters at least reducing sensationalism and being less driven by one big eyeball-grabbing story. Is that a realistic expectation?
 
I’m afraid not. News broadcasters do what they do not because the measurement system is imperfect but because that’s what they do. Do you think Times Now will be quite happy to show up as no. 2 to CNN-IBN because – thank God! – we at last have a sensible measurement system? They will continue to push frenetically to maximize their score, whatever the scoring system.

What is the sense you are getting of the industry response so far to BARC – from broadcasters, advertisers and advertising agencies, who are all represented in its ownership? Or is it too early to tell?
 
It’s too early to tell. 
 
So far the advertisers and advertising agencies are playing ball. The advertisers have agreed to do without data until BARC is able to supply it. They spent huge sums on the IPL after planning with individual-level TAM data, and are evaluating it either with household-level BARC data, or with the previous season’s TAM data, or not at all. That’s amazing. 
 
I can’t imagine what’s motivating them, but evidently getting BARC going is important enough to them to risk hundreds of crores of their advertising money, shooting in the dark at a moving target.
 