Sample Providers

Pete Cape

Pete has over 20 years' experience in market research. A founder member of TNS Interactive in the late Nineties, he has concentrated on online research ever since, joining Survey Sampling in 2005 and starting his current role in 2006.


Who's Been a Bad Respondent?

'Dr Pete' becomes Santa Pete, checks his list twice and finds some have been misjudged

22 December, 2009

For what seems like the longest time the industry has been concerned with online data quality, particularly panel data quality. But how do we know when we have poor quality data? Sometimes it is because it does not match data we have seen from previous waves of research. Other times it does not match an existing, known external piece of data. The recent ARF Foundations of Quality study and Mktg Inc's Grand Mean Project, among others, have highlighted the issue of inter-panel differences whilst stressing the relative stability of panel research results over time. This has led to fruitful discussions on practical blending and sampling practices.

Other times it can be seen within the data set itself: mismatches between purchase intent and product evaluations; incoherent or nonsense verbatims; inconsistencies between the same question asked twice or between oppositely worded statements; simple straightlining of an attribute battery. Poor quality data can also be identified through statistical techniques. Assuming the right sort of questions are present in the survey, we ought to be able to see how the answers to one question, or many questions together, predict the answer to another.
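As a concrete illustration of the within-data checks just described, here is a minimal sketch in Python with pandas showing how straightlining across an attribute battery and contradictory answers to oppositely worded statements might be flagged. The column names, scales and thresholds are my assumptions for illustration, not anything prescribed in this article.

```python
import pandas as pd

# Hypothetical respondent-level data: a ten-item attribute battery (q1..q10,
# answered on 1-5 scales) plus a pair of oppositely worded statements.
df = pd.DataFrame({f"q{i}": [3, i % 5 + 1, 2] for i in range(1, 11)})
df["pos_item"] = [4, 5, 2]  # e.g. "I enjoy shopping here" (agree = high)
df["neg_item"] = [2, 5, 4]  # e.g. "I dislike shopping here" (agree = high)

battery = [f"q{i}" for i in range(1, 11)]

# Straightlining: identical answers across the whole battery.
df["straightliner"] = df[battery].nunique(axis=1) == 1

# Inconsistency: agreeing with both a statement and its opposite.
df["inconsistent"] = (df["pos_item"] >= 4) & (df["neg_item"] >= 4)

print(df[["straightliner", "inconsistent"]])
```

Even where such flags fire, the argument that follows suggests treating them as evidence to be weighed, not as automatic grounds for expulsion.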

But, assuming we have some degree of poor data, who is to blame? For many it is obvious and simple: the data comprises the answers given by individuals, so if the answers are not good it must be the fault of those individuals. Find and expel the 'bad' respondents and you solve the problem. For the past few years this has been the goal of many in the market research industry – to devise ever more complex traps and checks to identify the 'bad' respondent. Attention has focussed on the 'inattentive', on the 'fraudulent', on 'professionals' and on those who join multiple panels.

The sum outcome of all this investigation has been an acknowledgement that there are very few actively fraudulent panellists but many who occasionally fail some trap or other and are subsequently deemed 'bad'. The question remains: are these people inherently bad, or just having a bad day?

This of course presupposes that they are 'bad' at all. It was once put to me that some people in life make fast decisions. They may not necessarily make good decisions, but they make them fast and they are their decisions. Why would we think they would behave any differently in an interview situation? If this is so, aren't their answers just as valid as everyone else's? If not, then we must learn to label our table bases not 'all respondents' but 'all respondents who think less quickly'. Ask any telephone interviewer if they think that everyone they interview is paying 100% attention to every question and you might start to question your exclusion practices in online research.

And if it is a matter of having a bad day, to what extent does what the researcher does contribute to that bad day? In online research the respondent is out there on their own: no interviewer to converse with, no-one to guide them through the survey, to clarify questions or to probe the meaning of their open answers. If we as researchers fail to give the respondent a question they can answer correctly, then where does the fault lie? Every element of the research design – from interview length, through qualification, to ambiguity in question wording and the design of the questions themselves – can have a negative effect on data quality.



Many research-on-research studies have shown a clear connection between interview length and deteriorating data quality. The surveys are simply too hard for the respondent to concentrate on for the full duration. The researcher rarely knows this, because the researcher rarely takes the survey as if they were a respondent. Consider all the mental effort you put into writing it in the first place: that is roughly the mental effort required of the respondent to answer it. In these over-long surveys the cognitive burden is too high; it simply hurts to think. But panellists press on: they have committed to the survey, and most will finish it, but at reduced quality. They will do just enough work to satisfy the demands of the researcher, behaviour that has become known as 'satisficing'. They have done their bit, but can we honestly say we have done ours?
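One common way such studies operationalise this, sketched below in Python with pandas, is to flag 'speeders' whose completion time is implausibly short for the questionnaire. The data and the half-the-median cutoff are illustrative assumptions of mine, not figures from the studies mentioned.

```python
import pandas as pd

# Hypothetical completion times, in minutes, for one over-long survey.
times = pd.Series([22.5, 19.0, 4.1, 25.3, 3.8, 21.7], name="minutes")

median = times.median()
speeders = times < median / 2  # rule of thumb: under half the median length

print(f"median length: {median:.1f} min")
print(f"speeders flagged: {int(speeders.sum())} of {len(times)}")
```

Note that a high speeder count may indict the questionnaire as much as the panellists.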

We (and I mean we researchers) must become experts in online questionnaire design, not simply port our CATI or CAPI scripts online - please don't laugh, this does still go on. But this is hard for us in the panel industry. We have a commendable record in what can be described as 'washing our dirty laundry in public' (if you like) or 'commissioning research-on-research into the panel research paradigm' (if you don't). But this is different: now we have to tell our customers they are not very good at their jobs! Anyone in sales can tell you that this is not the easiest route to the big sale and a trouble-free after-sales relationship!

We must find ways to replace the voice of the interviewer with the written word, or perhaps with something better suited to the medium. This is not something we never knew; it's just something we've forgotten. From ESOMAR's 1986 Consumer Market Research Handbook: 'the art of a good [self-completion] survey is to humanise it as much as possible to show that even if real people are not delivering it they have at least written it and are concerned whether you reply or not'. Who amongst us had good training in writing postal surveys? Not anyone I know, or even knew when I started. It's time to learn again.

Finally, we must give the potential respondent something to do when they offer us their time and effort. It is a scandal how often we screen people out of surveys, frequently without so much as a thank you.

In short, in everything we do, it’s time to put the respondent first and treat them with the respect they are due.

Pete Cape

Comments on this article



I enjoyed reading your ideas and thoughts on this topic. Your closing statement hit home for me. I worked in telephone recruiting for a marketing research company, and it was honestly a most difficult and conflicting challenge, one I could not stand. Over my time there I wasted so many respondents' time just screening them to see if they "fit the client's mould" and were acceptable: asking for personal information, and then at times searching desperately for a good lie to terminate them, as I had to many times due to age, race, religion and income. Sometimes the last qualifying question would come at the end of the screener, and it was so hard to make up lies on the fly so they would not know exactly why I had to terminate them. I felt horrible about initially contacting these people, claiming they could earn money and at times spending 10-15 minutes of their time on prequalifying questions, only to turn around and search for the words to tell them "unfortunately, you don't qualify for this study."

Jennifer Schutt (Interviewer)


© MrWeb Ltd 2009