DRNO - Daily Research News
News Article no. 25077
Published September 26 2017

ESOMAR Congress Review: Pollsters, Students and Robots

Contrary to what you might have concluded from our last two reports, there was plenty of talk in Amsterdam about both the great trends shaping market research and the specifics of current projects and problems. Our correspondent Nick Thomas found some gems in the latter. Part 3 of 4.

Enough of this search for inspiration, then, and down to the nitty-gritty. Every conference, it seems, must ponder the eternal question of where research is going and how we can remain relevant, and this one failed to buck the trend after a promising start. But more of that tomorrow: it also looked in some detail at a number of more specific issues, on which I was glad to have an update and an expert opinion.

Jon Puleston
Very Accurate... but Wrong

First, the state of political polling, and specifically: are we getting worse at it? At 11.15 on Wednesday on the Blue Stage, Jon Puleston (pictured), VP Innovation at Lightspeed, posed this question under the subtitle 'An analysis of 60 years of international polling data' - leaving us in no doubt that his argument would be well-founded, whether or not his conclusion was heartening. In fact Jon, and Lightspeed, have made a database of 30,000 global polls from 25 countries available to ESOMAR, allowing for some fascinating analysis.

Given the spotlight on polling accuracy in the last 2-3 years, Puleston looked to provide a reality check as well as to discuss some of the reasons why polls don't always get it right. Across all 30,000 polls, the average error in predicted vote share is +/- 2.5% - a figure 'you would take in most surveys'. However, three recent poll 'disasters' have put the spotlight on the profession: the UK general election of 2015, and the Brexit Referendum and US Presidential Election, both in 2016.

Puleston showed a well-rounded awareness of the situation, as well as a sense of humour, pointing out that 'polls for the Brexit referendum actually saw errors below this average rate' and polls for the US election were 'actually some of the most accurate in history' (think vote share vs the Electoral College). It's only because they got the *results* wrong that they've been seen as polling failures. That drew the intended laugh from the audience, but was also recognised as a serious point. Where a vote is actually very close, as both of these were, and the polls say it is very close or even 'too close to call' - a phrase trotted out regularly in both instances - it is of course possible to be very close and still pick the wrong side. Puleston didn't quite make the same case for the 2015 election, where the percentages were a bit further out, but he does believe that problem votes have proved 'a bit like buses' - three have come along at once, and that's why methods are so much under the microscope.

There are other findings from the database: for smaller parties, estimates of vote share early in the campaign produce some big errors, partly because people gravitate away from them later on, sometimes for tactical voting reasons. Voters make last-minute decisions - 'up to 40% in the week before the vote', says Puleston, a surprisingly high figure if true; and large leads tend to erode during a campaign - perhaps largely down to the media spotlight on the leading party - before a slight rebound at the end.

...all of which restored a bit of faith in the idea that pollsters are scientific, reasonable, and not doing too bad a job. Still, three in a row, eh?


Payback Time

At 3.55 the previous afternoon in the same room there was a session entitled 'Let's celebrate', which managed a crowd of around 300 delegates - not the largest of the Congress, but healthy. It began with a bells-and-whistles video celebration of ESOMAR, very slick and uplifting - who said researchers can't put together high-quality presentations? Then Finn Raben announced the winner of the young researchers' 'Yes' one-minute pitch competition: Stephanie Pineda from Kantar Millward Brown got a certificate, a life-changing 200 Euros and the chance to give a longer pitch, which she used to extol the virtues of Instant Messenger as a survey tool. I'm not sure what the sampling method was for the survey which showed people really liked doing surveys on IM more than via other media, but I hope and trust it wasn't an Instant Messenger survey. Clearly I'm a sceptic, but it's difficult to maintain such a position in the face of the enthusiasm of an obvious rising star like Stephanie.

The session also included finalists for the Research Effectiveness Awards, one of which was the #SeeHer movement, whose stated aim is 'to increase accurate portrayal of women and girls in media'. This was introduced by OTX founder Shelley Zalis (www.mrweb.com/drno/news11149.htm), supported by Gary Getto, President of US firm Advertising Benchmark Index, who conducted the research to establish the core GEM (Gender Equality Measure). Zalis is now CEO of the #SeeHer movement, launched by the Association of National Advertisers (ANA) and its subcommittee, the Alliance for Family Entertainment (AFE). The GEM has been applied to no fewer than 20,000 ads in the past year, helping to raise awareness of both existing bias and the benefits of removing it - #SeeHer says it has demonstrated a strong correlation between a high GEM score and improved reputation and ROI for the advertiser.

Another finalist was work by Colmar Brunton New Zealand, whose Executive Director Jocelyn Rout presented alongside Keith Taylor, Advisor to the country's Inland Revenue. The project began with a striking and straightforward problem: a very high proportion of New Zealand students who travel for long periods overseas (and that's most of them) default on repayment of their student loans - or did so in 2011 when the agency was called in to conduct behavioural change research. The situation was described as 'politically charged and fiscally challenging'.

Interviewing the former students involved, the research identified three groups segmented by attitude: 26% who intended to restart payments as soon as possible, 37% who had simply deferred thinking about the problem (the 'Parkers') and 37% 'Procrastinators' who 'see no real need ever to repay it'. Different strategies were adopted for the three, from gentle encouragement to arresting people at the New Zealand border when they returned to the country. The last in particular seems to have made a dramatic difference: the research delivered an ROI of 1,121% in its first two years, stepping up to 2,065% after the first arrest was made. Taylor was on hand to present the impressive stats, including a shrinkage in the number of 'Procrastinators'. More than NZ$389m in additional payments have been made, and there is strong community support for the actions taken.


Respect these kind souls

A little later, in the same room, SSI's marketing head Andy Jolls reviewed the progress of his company's Quest Awards, which seek to recognise those providing a good respondent experience. A word cloud of comments from people who had had a bad experience with surveys nicely illustrated the main problems, with 'boring' the biggest word by some distance. Jolls reiterated the importance of looking after the people without whom we'd be nowhere, with a number of quotations from industry figures - his and my favourite being 'Respect these kind souls', from Jay Levy, Chief Exec of SurveyUSA, in beautifully concise mode. SSI has created the 'Survey Score' rating, which checks various aspects of respondent experience, including interview length and presentation/screen compatibility, and builds this into the award rankings. The stats show an improvement in respondent experience since the ratings started, but Jolls noted that this had tailed off, even slipped back a little, of late, and he left the audience in no doubt - if they had started with any - that more needs to be done. Winners were announced in four categories and three world regions, listed below with the US winner first, EU second and APAC third:

Consumer: AC Nielsen de Colombia Ltda; Givaudan France SAS; Decathlon China
B2B: Schlesinger Interactive; Statista; SG Analytics
Mobile: SurveyUSA; Direct Research; The Boston Consulting Group
Tracker: WeddingWire; The Nielsen Company; nielsen China (Guangzhou)


Aye, Robot (vs Noe, Robot)

All this could of course be irrelevant in just a few years' time: interviews will be conducted by robots, for robots and with robots as respondents, if some people are to be believed. Some people who have invested heavily in the development of robots, I hear you ask? Maybe - I'm half cynical about this one, because I think it's massively overhyped at the moment, but clearly AI is coming on fast, clearly it's only going to accelerate, and therefore anything you say it can't do, or won't be able to do in n years' time, it may well be able to do in n+5 years' time. A bit of vagueness about timescales was, to my mind, the only slight problem with a very entertaining and well-run session on Wednesday on the Pink Stage (they needed to be entertaining and well-run on the Pink and Green Stages - see tomorrow's article).

Beginning at 9.35am, 'Your Honor, This is the Future of Automation' was hosted by Annelies Verhaege and Katia Pallini of InSites Consulting in Belgium, who put forward the proposition 'Robots are coming whether we want it or not' and led a debate about the impact of automation on our industry. An imposing slide show, and a neat system of dividing the audience in two and only allowing people on each side to argue for that side of the debate, regardless of their actual views, ensured that arguments were fairly even, and that delegates felt competitive, spoke up and listened. A panel of judges decided which side had won - or argued better - on each specific issue. As it turned out, the Court decided bots were able to replicate emotions, sarcasm and other relatively impenetrable human qualities, but (partly due to the eloquence of the speakers rather than the facts themselves) that storytelling in the classic human style might prove beyond them for 'the foreseeable future' - no kind of timescale was set, to my knowledge.

Key points in this victory for the 'bots can't' side were that AI has trouble deciding what to ignore and at present works best within very specific parameters; and, most poignantly, that 'Storytelling is about bringing out the inner child in human beings' - and until a robot can understand a human life span (or, indeed, has been a child itself), you'll never get it to tell stories that really resonate with humans.

Someone on the other side made the very valid point that, as in many of these areas, a combination of human and machine was perhaps the most powerful option of all. On such a positive note did the session end, with an overall vote declaring that researchers will not be replaced by robots - in the foreseeable future. Hurrah for the foreseeable future! This last vote was conducted by delegates picking up plastic lightsticks (the kind you bend to start them glowing) from under their chairs and holding up the colour of their choice. Hurrah also for the organisers, who made the whole thing very entertaining and kept everyone's attention despite the many possible distractions and the relatively early hour. The previous evening was the gala Awards night, so even starting at 9.35 was no mean feat.


Apologies for the day's delay in this instalment, which was due to illness - tomorrow we'll wrap up with a look at one of the more general 'research trends' sessions, and a verdict on the Congress as a whole. Thanks for reading!

www.mrweb.com/drno - Daily Research News Online is part of www.mrweb.com

Please email drnpq@mrweb.com with any questions.


© MrWeb Ltd