Daily Research News Online

The global MR industry's daily paper since 2000

Interview: MindProber Founder and CEO Pedro Almeida

March 3 2022

Continuing our focus on 'Measuring Brains & Bodies', Pedro talks about sweat, motor racing, suppliers who always say Yes, investors who always said No, banning the bull and embracing the (one-trick) pony, and how EDA (electrodermal activity) can grow into a currency for media engagement.

Pedro Almeida
Pedro is a professor in the field of cognitive and affective neuroscience (biosocial psychology, statistics, and research methods) at the University of Porto and has combined his scientific career with consultancy in market research before founding MindProber.

This is an abridged version - watch the whole interview, including more on Pedro's entrepreneurial beginnings and discussion of MindProber's agency partnerships, at www.mrweb.com/mrt/pedroalmeida.htm



Parents

PA: Both my parents are first generation university graduates from Portugal. My mother graduated in history - so social sciences. My father, from a farming family, studied as an electrical engineer and in his work was responsible for technical development of medical devices - hence my love of instrumentation... When I was young he was teaching me how to do circuits, parallel and series, etc... So I've always wanted to be like my father, always wanted to be an engineer, but with a passion for social sciences - my mother taught me a lot of History.

I went on to study psychology, actually cognitive neuroscience - but then very much focused, during my PhD, on physiology and signal processing, with an engineering component: as much of an engineer as you can be when you're a psychologist! So basically I'm as much a mixture as I can be of those two profiles. And of course my parents are children of the Portuguese revolution, so they're very liberal, and I inherited those sets of values too.

My mother moved very early into management, she directs a public school, has been a director for some decades now - a 24/7 job. Being a professional was always a big part of who my parents were, and that translated into a big part of me and how we do our jobs at MindProber (MP) - a big part of our lives.


Academia

PA: My basic degree is social psychology - what became behavioural economics. I did a Master's degree and my internship, then I stayed on with that company as a consultant - it was an MR company and I started doing political polling, in 2005-6, while at the same time working as a research assistant at what was then the psychophysiology lab and is now the Neuropsychophysiology Lab at the University of Porto - my first work in cognitive neuroscience.

[Later] I reached an agreement with the lab and said 'Let's try to fund the lab by going to businesses who would want to use the facilities and the knowledge we have.' We worked with the air force, looking at people piloting drones and whether they were being cognitively overloaded, using an EEG - looking for what reduced that load as much as possible. Then we thought OK, there might be a market here in Portugal for applied cognitive neuroscience, so we started a small company called ANR - Applied Neurobehavioural Research - which we wanted to be a Portugal-focused business.

...We failed miserably because we knew nothing about business, our offering to the market was just not scalable, we had no business acumen. It took me ten years to learn how to not say things - how to not start talking scientific jargon! We would enter a meeting and start talking it and they wouldn't understand 90% of what we said, and then we would ask them for 100k to do something, and they would say [a very abrupt] No. We had to do that learning.


Skin Job

NT: Why is Galvanic Skin Response or GSR the main thing you do at MP, almost the sole focus?

PA: GSR was not what we thought we would be focusing on. My background is EEG, the thing I had most experience of, and the technique which comes from EEG called Event Related Potentials - if you search my academic profile it's around these. The first thing we thought about for MP was scalability - can we scale with EEG? There are some companies that sell wireless EEG sets and we tried a bunch of them, but the signal we were getting out of that was not lab quality, it was heavily filtered so we lost some of the components of the signal.

NT: ...because it was wireless?

PA: That and other things. If a person in an EEG setting starts moving a lot, the signal gets all jittery and you can't make sense of it. The signal-to-noise ratio of an EEG is low - what's interesting vs what is not - so this needs a very, very clean signal to get something out of it. If you're using it with people in their homes you will really need to filter the signal - 'I'll just remove everything that seems uninteresting'. Most companies which do this filter the signal so much that it looks really nice, but then there's nothing there. So this is really challenging. It's also very expensive to scale - and we couldn't replicate some of the things we were seeing in the fundamental literature on EEG and emotion, so we decided not even to think about this.

We thought about signals that can be scalable, and those are Heart Rate and Electrodermal Activity (EDA) - the latter is a pure measure of sympathetic nervous system activity. When you start getting nervous you sweat from your hands, so EDA looks at those micro-sudation patterns. In our sensors there are two electrodes which can go on the fingers or on the palm of the hand, and you measure the resistance between them. When you sweat, the resistance drops - equivalently, the conductance rises - so when something excites you, you produce a GSR, which is basically a change in the resistance of the skin.
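As a minimal sketch of the principle Pedro describes - not MindProber's actual signal pipeline, and with an invented detection threshold - converting a resistance reading to conductance and flagging a rise might look like:

```python
# Illustrative sketch only - not MindProber's pipeline. Skin conductance
# is the reciprocal of skin resistance: sweat lowers resistance, so
# conductance rises when something excites the wearer.

def resistance_to_conductance(resistance_ohms):
    """Convert skin resistance (ohms) to conductance in microsiemens."""
    return 1e6 / resistance_ohms

def detect_gsr_events(conductance, threshold=0.05):
    """Flag sample indices where conductance jumps by more than
    `threshold` microsiemens - a crude stand-in for proper phasic
    skin-conductance-response decomposition."""
    events = []
    for i in range(1, len(conductance)):
        if conductance[i] - conductance[i - 1] > threshold:
            events.append(i)
    return events

# A resting hand might read around 1 Mohm; arousal drops resistance.
readings = [1_000_000, 990_000, 900_000, 850_000, 845_000]
cond = [resistance_to_conductance(r) for r in readings]
print(detect_gsr_events(cond))  # indices where conductance spikes: [2, 3]
```

Real systems separate the slow tonic level from fast phasic responses; the fixed threshold here is purely for illustration.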

The first versions of our sensors actually had a Heart Rate (HR) sensor - a plethysmograph - so we would also measure HR. The problem is we got lots of data but we couldn't replicate anything we were seeing in the literature. We were calibrating the metrics: for example, showing people videos like Schindler's List, which are known to be highly emotional. EDA showed these reactions very, very well and was replicable (i.e. we would look at several samples and the results would hold) - so we trust this and can go to the market and sell it - but we couldn't get this with HR.

I'm not saying that it can't be done, even with EEG, I'm just saying that maybe we were not smart enough to do it, and maybe there are consumer neuro companies who have the algorithms to do it - we don't, and we don't want to be selling something on which we're not completely certain, so we just stripped the HR component from our sensor. [There is one company who use the MP system and they have HR - we have integrated a HR sensor for them - they get HR data and they do whatever they want with the signal but not through our platform.]

NT: To be clear, from GSR you get a single metric, right - whereas from EEG for example you might get different measures at the same time from different parts of the brain?

PA: Yes and No. On the one hand 'Yes' - if you are doing EDA, what you will get is a measure of emotional arousal second by second, and 'No' because what our platform does is it allows you to mix that EDA with other data. So for instance if you are watching a football match you'll have a second-by-second engagement record and you can mix that with every time a logo is exposed or when people see this ad, or hear this commentator. So you can ask 'How is this commentator ranking in engagement vs that one?' - what should I use, what should happen when there's a boring moment in the game, who should be talking, where should I place main sponsors? You can overlay that measure of engagement on those other metrics and make sense of them, and that's all we do but I think it's a lot - there's a lot of value in it.
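The overlay Pedro describes - a second-by-second engagement trace mixed with tagged moments like logo exposures - can be sketched roughly as follows. The event names, window length, and scores are invented for the example, not taken from MindProber's platform:

```python
# Illustrative sketch of overlaying an engagement time series with
# tagged broadcast events (names and numbers invented).

def engagement_around(engagement, event_seconds, window=5):
    """Mean second-by-second engagement in the `window` seconds
    starting at each tagged event (e.g. a logo exposure)."""
    scores = {}
    for label, starts in event_seconds.items():
        samples = []
        for t in starts:
            samples.extend(engagement[t:t + window])
        scores[label] = sum(samples) / len(samples)
    return scores

# One engagement value per second of a toy broadcast.
engagement = [0.2, 0.3, 0.9, 0.8, 0.7, 0.2, 0.1, 0.6, 0.7, 0.2]
events = {"sponsor_logo": [2], "commentator_a": [7]}
print(engagement_around(engagement, events, window=3))
```

Ranking commentators or sponsor placements then reduces to comparing these per-event averages across many broadcasts.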

NT: Do sensors go on the hand because this is the loudest and clearest signal? Would you get the same signals elsewhere on the body?

PA: What you need is a big density of eccrine sweat glands - there are a couple of places where you have this big density, the hands and the feet - so [obviously] we chose the hands! Now most sensors have wires, and we wanted to build our sensor in a way that there are no wires, because we send it to people's homes and they'll break the wires, so it needed to be one piece - this goes on your hand and after a few moments you won't even feel that you have it on. We want people to forget that they're taking part in the study, and usually we're looking at long format content, so people do.


Where GSR is Best

PA: We thought we were going to be a scalable copy testing biometrics platform, which is everything we did not become, actually, and the reason is that honestly there are a lot better solutions for this than just using EDA. The EDA time resolution is slow and variable (it can take 2-4 seconds to respond and then the response lasts around 7-8 seconds), so for short format content it's very hard to pinpoint exactly what is creating that response. Facial coding mixed with eye tracking is more powerful for this and much more scalable.
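The attribution problem can be made concrete with a small sketch. Using the 2-4 second response latency from the interview (the shot names and timings are invented), a response seen at a given moment is consistent with more than one stimulus in a fast-cut spot:

```python
# Sketch of why slow EDA latency blurs attribution in short content.
# Latency figures (2-4 s) are from the interview; shots are invented.

def candidate_stimuli(response_time, stimuli, min_lag=2, max_lag=4):
    """Return every stimulus whose onset could have triggered a
    response first seen at `response_time`, given the latency window."""
    return [name for name, onset in stimuli
            if min_lag <= response_time - onset <= max_lag]

# Three shots within a fast-cut 6-second spot:
stimuli = [("shot_1", 0), ("shot_2", 2), ("shot_3", 4)]
print(candidate_stimuli(response_time=6, stimuli=stimuli))
# -> ['shot_2', 'shot_3']: two candidates, so attribution is ambiguous
```

Over long-format content, stimuli are spread out enough that the latency window usually contains a single candidate - which is exactly the sweet spot described next.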

So where is our sweet spot? In long or very long format - no-one else can actually synchronise the data with (long-format) live events, so we started building our platform to be a sort of engagement currency for long format content. You know how many people are watching each second of the show but you actually don't know how involved people are with watching: you can overlay this analytic on other data, eg how many people are watching, and it'll give you a better picture of not only how many and who, but actually how involved they are with the content.


Case Study: On the Circuit

NT: Pick a recent project where you've had a lot of success with this, and give us an outline.

PA: OK. The two most common applications of our data are on the production side and on the commercial side, so I'll give you an example where we're using it for both, without revealing the client. This is a multi-season, ongoing data collection study - they've always renewed the contract.

Let's say I have a motor racing event, and I want to improve production decisions to engage my fans, including which camera angles work. So I have a new camera angle, say here [taps side of head] - let's call it Cockpit View: is this producing impact, and in what instances? What we need here is a lot of data - we can't just have one instance, because lots of things are happening: is it the person in first place or not, is anyone screaming in the background (and who?), is it a very exciting moment of the race, or not? So you need lots of instances, and we would look to collect data across lots of races so you can isolate the effect of this camera angle.

You can then start to build a knowledge base of what works and what doesn't, and feed that into the business: for example, whenever there's a dull moment, this person needs to speak; this camera angle works, but only when it's the person in second place, not first position - so you can begin to understand the nuances. This is one of our most successful uses - ongoing monitoring, understanding exactly what's working and then feeding that back into the business.
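The aggregation step Pedro describes - pooling many instances of a camera angle across races so context effects emerge - can be sketched as a simple group-by average. All field names, positions, and lift numbers here are invented for illustration:

```python
# Sketch of isolating a camera angle's effect by context. Each record
# is one use of an angle in one race, with an engagement lift measured
# around it (all values invented).
from collections import defaultdict

def mean_lift_by_context(records, angle):
    """Average engagement lift for `angle`, split by race position of
    the featured driver; many instances let the context effect emerge."""
    buckets = defaultdict(list)
    for r in records:
        if r["angle"] == angle:
            buckets[r["position"]].append(r["lift"])
    return {pos: sum(v) / len(v) for pos, v in buckets.items()}

records = [
    {"angle": "cockpit", "position": 1, "lift": 0.1},
    {"angle": "cockpit", "position": 2, "lift": 0.6},
    {"angle": "cockpit", "position": 2, "lift": 0.4},
    {"angle": "cockpit", "position": 1, "lift": 0.0},
]
print(mean_lift_by_context(records, "cockpit"))
# Suggests the angle works for the driver in second place, not first.
```

In practice the conditioning variables would be richer (race phase, crowd noise, excitement level), but the principle - average over many tagged instances per context - is the same.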

So I've talked mostly about the production side, but there's also the commercial side - eg showing that people are very involved with official sponsors. If you have an official sponsor they should have brilliant positions on the content: you'll want to show if you're selling the media rights that you are getting people really engaged when they are watching the official sponsor.

NT: Do you present any of that data to them live, in a dashboard so they can adjust cameras while they're in action, or is it project based?

PA: It's one or two business days later: data can be processed in real time but what we find is that it's very, very hard to digest this data and to react to it in real time, so we use computer vision to tag every time that this camera angle was shown, and compare with the engagement. We do have real-time feeds but we use them for other things.
One respect in which we have been evolving towards real time: we felt there's an opportunity not only for engagement with TV / media, which is something the market is asking for, but tracking when people actually do something, activate, around key moments. So we're also modelling to predict Twitter activity. For instance, in a match, people start fighting! - and you see an engagement response so you know that is interesting to people, and then one minute after you'll see an increase in Twitter volume. So we've started building models from our data to external data sets - we understand we can use this data in real time to give people doing campaigns on second screens a competitive advantage.
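The engagement-to-Twitter link Pedro mentions - a spike in arousal followed about a minute later by a rise in tweet volume - amounts to finding the lag that best aligns two time series. A minimal sketch, with invented toy series and time steps:

```python
# Sketch of finding the lag between an engagement spike and the
# following rise in tweet volume (series and lags invented).

def lagged_correlation(engagement, tweets, lag):
    """Pearson correlation of engagement[t] with tweets[t + lag]."""
    x = engagement[:len(engagement) - lag] if lag else engagement
    y = tweets[lag:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Tweet volume trails the engagement spikes by one time step here.
engagement = [1, 1, 5, 1, 1, 4, 1, 1]
tweets =     [0, 0, 0, 9, 0, 0, 7, 0]
best = max(range(3), key=lambda k: lagged_correlation(engagement, tweets, k))
print(best)  # lag 1 fits best
```

Once the typical lag is known, a live engagement spike becomes an early-warning signal for second-screen activity - the competitive advantage described above.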

NT: Are you doing a lot of interpretation of the data for clients, on the whole, or do you have clients who like to see the raw feed, as it were..?

PA: It will depend very much on the client. We have some who are completely hands-off - even some end clients. So we usually partner with agencies when there is data crunching that needs to be done; the agency acts as an intermediary, so we feed them the data and it reaches the end client through a report. On the other side, we have some end clients who actually like to go to the platform, and we've got all those dashboards with the impact on the brand... Some clients just export the data in an Excel file and have their own way of analysing it.

We find there's a lot of handholding in the beginning, even with agencies - we teach them how to work with the fieldwork platform, with our user research platform, and then they become more and more independent: they look at the data and get their own results. We've been working a lot on the platform to let you not have to ask us a lot of questions - you can just see the analytics going. Basically, after a few sessions clients become independent - there's no secret there.


The Future

NT: Do you think you'll stay focused on GSR because that's the one you've found most robust, or are there other things you might add into your own mix?

PA: Integrating sensors and other peripherals is easy - we have the infrastructure to do that. Our latest agency partner has integrated Heart Rate: they're a very, very well-known consumer science-focused media company, so we're very sure that they know what they're doing and have better algorithms than us - we give them our data and know they're doing a very good job. As a business, what we at MP are trying to do is increase the number of business objectives that you can tackle using GSR as a real-time measure of arousal. We're very focused on advancing the idea that EDA can be a currency for media engagement, and there are lots of applications for this: TV, but also less obvious applications like podcasting - we have worked with podcasting clients, with radio clients - and of course you can't get that from eye tracking, for instance, not for audio. So we have a sort of trans-platform measure of engagement: you can compare across audio and TV. If you figure out how to scale EDA measurement and synch it with media content, that opens up a realm of possibilities - media measurement, media activation, real-time / second screen activation...

NT: How do you see the niche for neuro techniques within MR developing?

PA: I think it will grow a lot. I think companies who actually use neuro techniques have very smart people running them; they're very knowledgeable. Sometimes there is the temptation to just say Yes to requests - it's revenue and it's a big client so I'll just say Yes - but if players in the sector can resist that... Everyone should be aware of what the abilities of these techniques are, so we do things where we can actually produce responses, and if people are creative enough, we can expand the number of things that we can tackle. To the best of my knowledge, at MP we're doing what we're doing in a way that lets me sleep at night - we have one trick, a one-trick pony with one measure of arousal, so let's expand what we can do with that. If you're an expert in facial coding, let's expand what we can do with facial coding - eg in cars, because it's super-important to know if the driver is falling asleep, they've identified an application for it - and I think that's the way to expand the market, rather than inventing things we can't really do.


Motto?

PA: 'No Bullshit.'

I think I've already sort of alluded to that: when people enter MP, one of the things we say is that we have a very transparent No Bullshit approach, and we operate that across the entire business. Everyone in MP knows the financial situation of the business, everyone's in the same boat; I won't say that we're selling more than we are or less than we are, and we enforce this at every level of the company. With investors, if there's bad news there's bad news; if there's good news there's good news - and the same in our conversations with clients. We're a no-bullshit company - and that has worked so far.

All articles 2006-23 written and edited by Mel Crowther and/or Nick Thomas, 2024- by Nick Thomas, unless otherwise stated.

