News over the weekend that the radio industry’s audience measurement is returning.
It’s seemingly been on a longer hiatus than One Direction, as its panel building was somewhat kiboshed by the Coronavirus.
It’s always popular to knock RAJAR (or any research methodology), but most wailing is pretty uninformed. RAJAR is one of Europe’s largest surveys, with nearly 100,000 people participating in a regular year. It’s demographically and geographically representative of the UK’s listeners (and non-listeners), and that takes some effort. How do you get someone to fill in a survey about radio listening when they don’t listen to any? It’s important to track that group too!
People also fail to recognise that RAJAR isn’t one national survey; it’s hundreds of interlocking surveys that give robust data about individual stations’ TSAs (their coverage areas). So you need to be representative in Portsmouth, Southampton and Bournemouth individually, as well as across the combined regional area too.
To get a truly representative sample, you can’t rely on people registering online (not everyone has the internet) or on ringing people up (do you pick up the phone to unknown numbers? do you even have a landline?). The best way is to present yourself to people in the street, or knock on doors. Laborious and expensive, but pretty resilient.
As James Cridland was reminding me the other day, that resilience is really noticeable in the data. Each quarter the entire sample changes - 20,000-odd people. Yet for national stations, especially behemoths like Radio 2 or Radio 4, the data stays incredibly stable quarter to quarter. There must be something in it after all.
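That stability is roughly what the maths predicts. Here’s a minimal sketch - assuming, for simplicity, a simple random sample and an illustrative reach figure; real RAJAR weighting and survey design are more involved than this:

```python
import math

def reach_interval(p, n, z=1.96):
    """Approximate 95% confidence interval for a weekly reach of p,
    estimated from a simple random sample of n people."""
    se = math.sqrt(p * (1 - p) / n)
    return p - z * se, p + z * se

# Illustrative figures only: a big national station reaching ~26% of
# adults, measured on a fresh quarterly sample of roughly 20,000 people.
low, high = reach_interval(0.26, 20_000)
print(f"26.0% reach -> roughly {low:.1%} to {high:.1%}")
# About 25.4% to 26.6% - so even a completely new sample each quarter
# should land in that narrow band, which is what the data shows.
```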
Once you’ve got the sample, panellists fill in their listening over a week in a number of different ways. Mostly digitally on a computer, tablet or phone, but you can still fill in a paper diary if you fancy - which is great: it makes the panel more representative.
Australia
In Australia, which has also used diaries, they announced an evolution of their measurement yesterday. Diaries are going to be combined with 2,000 people meters (in this case wrist-worn devices) that listen out for which stations people can hear, and next year some data from streaming will be added as well.
I think more data is generally a good thing, but if you look through all of the information, it’s perhaps telling that the diary-based element is still going to be pretty much the core of their ratings. It’s not sexy, but it’s really reliable. The shiny additions, which offer the chance to include some different types of information in reports, probably won’t add that much to the core ratings that are released.
2,000 watches over the 5 metro areas means 400 watches per metro. Let’s say, generously, that 60% are active each day - that’s 240 watches in an area. Let’s then assume there are 20 stations (with DAB+ there are more) - that’s an average of just 12 watches potentially tuned to any one station at a given moment. It’s not going to add that much.
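For what it’s worth, here’s that back-of-envelope arithmetic written out - every figure is one of the assumptions above, not anything from the Australian announcement:

```python
# Back-of-envelope sums, using only the assumptions in the paragraph
# above (none of these figures come from the Australian announcement).
total_watches = 2_000  # wrist-worn meters nationally
metros = 5             # metro survey markets
daily_active = 0.60    # generous guess at daily compliance
stations = 20          # stations per market (more with DAB+)

per_metro = total_watches / metros    # 400 watches per metro
active = per_metro * daily_active     # 240 active on a given day
per_station = active / stations       # 12 watches per station

print(f"{per_metro:.0f} per metro, {active:.0f} active daily, "
      f"~{per_station:.0f} potentially listening to any one station")
```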
I remember sitting down with Sharon Dastur when she ran Z100 in New York, and she showed me the previous day’s data for Ryan Seacrest’s show. It was cool to see, but the sample size was tiny - in single digits. As she said: “the information’s nice to see, but has that person turned off because they didn’t like a song, or because they just got out of a cab?”
18-Month Break
When the UK data returns, it’s likely to be pretty different to when it went away in Q1 2020. Firstly, we’re not entirely back to normal, so people’s listening habits won’t have returned exactly - particularly around car consumption (which accounts for around 20% of listening in the UK).
We’ll also see some of the effects of forced habit changes. Have people sampled and stayed with different stations? Has the mix of platforms at home meant they listened differently? Has competition from other services in the home - the TV, music streaming - sent people down a different road from which they haven’t returned?
An 18-month gap will also throw into stark relief what’s normally hidden by slow growth or decline. The Ofcom data I talked about a few weeks ago suggests there might be more of a shift for younger audiences. We may have been like the oblivious frog in slowly boiling water, not noticing a catastrophic change until it’s already happened.
We’ll also see some data from new stations - like Times Radio - which went on-air after the counting had stopped.
Streaming
Lots of people mention streaming data as the key to understanding audience consumption - it’s live! - it’s real! Whilst it’s incredibly helpful, there are also lots of problems with it.
Firstly - is there actually anyone at the other end listening? And if there is, how many people? One person, or 30 in an office? There’s no way of knowing. What type of people are they? If they’re not logged in, it’s impossible to know.
Every radio station’s been shouting about how much smart speaker listening they’ve been getting, but no-one asks where that listening has come from. Has a person unplugged a broadcast radio and plugged in an Alexa? Are they listening for longer, or for a shorter amount of time because they’re sharing the device with Spotify? The excitement of growth could actually be the story of decline.
Someone was telling me about the success of a competition because their streaming numbers peaked at the reveal point each day. On the surface that’s great, but what if the competition had turned off a load of regular listeners with a radio on in the background? You’re measuring the super-engaged listeners, who are more likely to seek out the station for that moment - and more likely to do it online. The peak might not be telling the full story.
That’s not to say streaming data isn’t useful. RAJAR and other research methodologies average data over days, weeks and quarters. You get resilient information about normal behaviour, but it isn’t great at measuring one-off changes - like a countdown or a special show. Streaming data can do a good job in those instances, as you can compare them to a normal day. But, just like the example above, it doesn’t always tell the whole story.
Confirmation
Research can be a good way to check whether you’ve made the right decision. There aren’t many methodologies, however, that measure whether the time you’ve spent has been well spent. So I’d like to give you some confirmation that the time you spend reading this is well spent.
With news that Andrew Neil is off from GB News, the always-good-to-read Jim Waterson from The Guardian dredged up the predictions that I’d made before the station went on-air:
See, dear reader, all the top insight, so your time spent here is worthwhile.
There are nearly a thousand of you who get this in your inbox, and about the same again read it from a social link. Just like a station’s RAJAR figures, I’m always interested in growth - so if there’s someone you think would like to read what I write, please do forward it across. Oh, and if you’re an occasional social-media clicker, do sign up and get it in your inbox each week: