UK-based video data mining specialist LivingLens has enhanced its analysis of consumer video content with facial emotion recognition, tonal recognition and object recognition - allowing users to 'decipher the full range of human behaviour demonstrated in video content'.
The firm's AI software identifies 'key landmarks and expressions of the human face' and classifies facial expressions within video content, mapping each to a specific emotion. Tonal recognition moves beyond analysis of what people are saying to how they are saying it, adding a further layer of understanding to sentiment analysis. Object recognition identifies the objects within a video, giving additional context - for example, whether consumers are in a shop, at the airport or in a kitchen. Users can select from all the objects detected and navigate to where they appear in a video.
All three new options are time-stamped against the corresponding video content, allowing researchers to quickly and easily pinpoint the exact moments of interest.
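LivingLens has not published its implementation or data model, but the time-stamping approach described above can be illustrated with a generic sketch: if emotion, tone and object detections share a common time-stamped record, a researcher can jump straight to the moments where a given label appears. All names below (the `Detection` record and `moments_of_interest` helper) are hypothetical, for illustration only.

```python
from dataclasses import dataclass

# Hypothetical record type - not LivingLens's actual data model. It simply
# shows how emotion, tone and object detections can share one time-stamped
# shape so all three align against the video timeline.
@dataclass
class Detection:
    kind: str       # "emotion", "tone" or "object"
    label: str      # e.g. "joy", "enthusiastic", "kitchen"
    start_s: float  # start of the detection, in seconds into the video
    end_s: float    # end of the detection, in seconds

def moments_of_interest(detections, label):
    """Return the (start, end) time ranges where a given label appears."""
    return [(d.start_s, d.end_s) for d in detections if d.label == label]

# Example: detections produced for one consumer video (illustrative values).
detections = [
    Detection("emotion", "joy", 12.0, 14.5),
    Detection("object", "kitchen", 0.0, 30.0),
    Detection("tone", "enthusiastic", 12.0, 16.0),
    Detection("emotion", "joy", 41.2, 43.0),
]

print(moments_of_interest(detections, "joy"))  # → [(12.0, 14.5), (41.2, 43.0)]
```

Because every detection carries its own time range, "pinpointing the exact moments of interest" reduces to a simple filter over the records rather than a manual scrub through the footage.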
Company CEO Carl Wong says of the latest launch: 'We are delighted with the latest additions to our existing suite of capabilities, which provide a lens into the all-important emotions of consumers and give additional context to consumers' content through their surroundings. Historically, video has been challenging to work with, but we are seeing the use of video expand as technology continues to develop and improve, providing high levels of accuracy which previously would have required human intervention. Where once video was limited to small-scale studies, it's exciting to see projects with large volumes which simply weren't practical before.'
Four weeks ago, the company launched an app-based solution called CaptureMe, allowing individuals taking part in market research projects to send videos and images.
LivingLens has offices in Liverpool, London, New York and Toronto, and is on the web at www.livinglens.tv.
All articles 2006-18 written and edited by Mel Crowther and/or Nick Thomas.