Connecting Emotionally With and Through Computers
Submitting Institution: Queen's University Belfast
Unit of Assessment: Psychology, Psychiatry and Neuroscience
Summary Impact Type: Technological
Research Subject Area(s):
Medical and Health Sciences: Clinical Sciences, Public Health and Health Services
Psychology and Cognitive Sciences: Psychology
Summary of the impact
Emotional signals, whether obvious outbursts or, more often, subtle
changes in tone of voice or facial expression, play a key part in human
communication. Psychology researchers at Queen's have made fundamental
contributions to 'affective computing', which enables automatic systems to
use those signals. The team's work has shaped a new computing language
for describing these signals and the states they reveal: EmotionML
(Emotion Markup Language). The language has been recommended as a standard
by the World Wide Web Consortium to define how software describes
emotions.
The language is used by multinational corporations in a range of
applications in a rapidly expanding field. Queen's expertise in emotion
led Dr Gary McKeown to found a start-up company, Adoreboard (previously
known as Mediasights), along with entrepreneur Chris Johnston, which
specifically uses EmotionML in opinion and sentiment analysis for
marketing. Its product, also called Adoreboard, lets companies track
consumers' emotional responses to their products. The company has secured
funding of £470,000 and partnerships with three multinational
corporations, and was recently selected to take up residence at Google's
campus in London.
Underpinning research
Psychologists in the Perception, Action and Communication Research group
at Queen's are pioneers in the field of affective computing, which allows
computer systems to register emotion in human behaviour and communication,
and to give appropriate signs in return. This technology describes and
interprets the signals that drive everyday human interactions: changes in
the tone and rhythm of speech, facial expressions or posture that indicate
how someone feels about a topic or interaction, and which we pick up on in
order to respond appropriately. The Queen's group has made major
contributions to the theory and evidence that allow technology to make
use of these emotional cues.
The research, led by Professor Roddy Cowie since the 1990s, has centred
on databases that bring together details of the visible and audible
signs which convey everyday emotional 'colouring', together with
systematic descriptions that capture the emotional meaning of those signs
[2, 3, 4], across a variety of cultures [6]. The team pioneered ways to
record apparent emotion [1], and to analyse the relationships between the
signals that humans give out and the emotional states they correspond
to [1].
Most of this work was done in international and interdisciplinary
collaborations funded by the European Commission, with the Queen's group
playing a leading part. Eight of these European projects have run
consecutively since 1998, with total funding of over £1.5 million to
Queen's alone.
One of these projects, HUMAINE (an FP6 Network of Excellence), which
began in 2004 and was led by the Queen's group, started the process of
making sense of emotion-oriented computing. This interdisciplinary
network set out to establish a standard language that could be used to
describe everyday emotions, and the non-verbal signals that convey them,
in computing-related contexts. The standard that emerged from the
network's research, EmotionML, has since been recommended by the World
Wide Web Consortium.
From 2007 the team provided the psychological input to the SEMAINE
project [4], which confirmed that human-computer conversation could make
use of emotional signals picked up from cameras and microphones. The
SEMAINE project developed avatars that talked with a real person and
responded appropriately to the person's emotional signals. This kind of
ability has applications in emerging areas such as computer-aided teaching
systems and computer-mediated therapy.
Related research by Dr Gary McKeown (Research Fellow, 2006-present)
identified a different application for research on emotion. Building on
research into risk communication [5], he developed software to help
experts communicate risk to lay people, which led him to recognise the
role emotion plays in risk communication. The system uses a novel
agent-based modelling approach to opinion and sentiment analysis [5], in
which software sends messages to social networks as if from an individual,
and the responses to these interactions can then be assessed. In such
bounded confidence models, simulated individuals shift their opinions only
towards others whose opinions already lie within a confidence threshold of
their own, which makes it possible to study how consensus, clustering and
polarisation emerge. These approaches are now being implemented
commercially by the start-up company Adoreboard. Along with fellow Queen's
researcher Ian Sneddon (Senior Lecturer, Psychology), McKeown has also
studied how emotional responses vary geographically and over time [6].
References to the research
1. Cowie, R., Douglas-Cowie, E., Tsapatsoulis, N., Votsis, G., Kollias,
S., Fellenz, W., & Taylor, J. (2001). Emotion recognition in
human-computer interaction. IEEE Signal Processing Magazine, 18,
32-80.
2. Cowie, R., & Cornelius, R. (2003). Describing the emotional states
that are expressed in speech. Speech Communication, 40, 5-32.
3. Douglas-Cowie, E., Campbell, N., Cowie, R., & Roach, P. (2003).
Emotional speech: towards a new generation of databases. Speech
Communication, 40, 33-60.
4. McKeown, G., Valstar, M., Cowie, R., Pantic, M., & Schröder, M.
(2012). The SEMAINE database: Annotated multimodal records of emotionally
coloured conversations between a person and a limited agent. IEEE
Transactions on Affective Computing, 3(1), 5-17.
doi:10.1109/T-AFFC.2011.20
5. McKeown, G., & Sheehy, N. (2006). Mass media and polarisation
processes in the bounded confidence model of opinion dynamics. Journal
of Artificial Societies and Social Simulation, 9(1).
6. Sneddon, I., McKeown, G., McRorie, M., & Vukicevic, T. (2011).
Cross-cultural patterns in dynamic ratings of positive and negative
natural emotional behavior. PLoS ONE, 6(2), e14679.
doi:10.1371/journal.pone.0014679
Details of the impact
The team's influence on affective computing has international reach,
both within the world of software and commercially.
The World Wide Web Consortium (W3C) recommended EmotionML as a standard
in April 2013. W3C is the main international standards body for the web
(a familiar W3C standard is HTML, used to code all web pages). As a
standard, EmotionML is more than a research instrument: it defines a
policy with major implications for the web. Most immediately, using the
standard ensures that any computer program that describes or uses emotion
will be compatible with any other.
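To give a flavour of the language, the sketch below shows a minimal
EmotionML document, modelled on the kind of examples in the W3C
specification; the annotation values are invented for illustration.

    <emotionml version="1.0"
               xmlns="http://www.w3.org/2009/10/emotionml"
               category-set="http://www.w3.org/TR/emotion-voc/xml#big6">
      <!-- One annotated emotional state, described using the W3C's
           'big six' category vocabulary, with a confidence estimate -->
      <emotion>
        <category name="happiness" confidence="0.8"/>
      </emotion>
    </emotionml>

Because the vocabulary is identified by a URI rather than fixed in the
language itself, the same format can carry category, dimension or
appraisal descriptions drawn from different theoretical traditions.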
The need to develop suitable descriptions for everyday emotions was
highlighted by Cowie and Cornelius in 2003 (see section 3). The process of
translating the academic analysis into a working standard was initiated
during the Queen's-led HUMAINE project, which began in 2004. Former
Queen's research fellow Marc Schröder steered the standardisation effort
until 2012, leading an incubator group that included academics, companies
and W3C representatives. The specifications for describing emotions
combine ideas and tools from many sources. In particular, the work drew
on Queen's development of 'trace' methods to describe emotions that
change over time, and of the vocabularies needed to describe everyday
emotion. The formal defining documents for EmotionML (cited below)
acknowledge the input of Queen's researchers in developing the standard.
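The trace idea survives into the final language: a dimension can carry a
series of sampled values rather than a single rating. The sketch below is
illustrative (the sample values are invented; the dimension vocabulary
shown is the W3C's pleasure-arousal-dominance set).

    <emotion xmlns="http://www.w3.org/2009/10/emotionml"
             dimension-set="http://www.w3.org/TR/emotion-voc/xml#pad-dimensions">
      <!-- Arousal sampled ten times per second, rising over the
           course of the clip, in the style of a trace rating -->
      <dimension name="arousal">
        <trace freq="10Hz" samples="0.2 0.3 0.45 0.6 0.7"/>
      </dimension>
    </emotion>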
A W3C standard is a policy statement by a transnational body, and
influencing such a standard is an impact in its own right. A second level
of impact comes from applications of the language. In W3C terminology
these are `use cases', and establishing that such cases exist is part of
the approval process. The 2013 implementation report lists nine cases
where the standard has been implemented, and these demonstrate the
next level of impact. The commonest use of EmotionML is as an interface
that allows different components to communicate. Deutsche Telekom
implemented it to describe the output of their speech analysis software;
Swiss-based emotion technology company nViso integrates EmotionML into
market research software that infers emotion from facial expressions; the
widely used open-source MARY text-to-speech system developed at the German
Research Centre for Artificial Intelligence uses it to describe the
emotional colouring that speech should express; and the WASABI simulation
of central emotion processes, used in the artificial museum guide Max,
uses EmotionML to interface with other software.
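As an illustration of this interface role, the sketch below shows the kind
of fragment a speech-analysis component might emit; the file name, time
range and values are invented, but the elements used (category, reference,
expressed-through) are defined in the standard.

    <emotion xmlns="http://www.w3.org/2009/10/emotionml"
             category-set="http://www.w3.org/TR/emotion-voc/xml#big6"
             expressed-through="voice">
      <!-- Annotation attached to seconds 2-5 of a recorded call -->
      <category name="anger" confidence="0.6"/>
      <reference uri="call-0042.wav#t=2,5" role="expressedBy"/>
    </emotion>

A downstream component, such as a dialogue manager or an expressive
speech synthesiser, can consume this fragment without knowing anything
about the analysis software that produced it.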
In a local development that draws directly on QUB's expertise and
research, the start-up company Adoreboard was founded in collaboration
with Dr Gary McKeown in 2011 to build on the research into opinion
dynamics, affective computing, and emotion. The company has secured
funding of £470,000 so far. This includes £310,000 in private funding,
£60,000 from a Knowledge Transfer Partnership and up to £100,000 from
Invest NI, the regional business development agency. It currently has four
employees, and Dr Gary McKeown serves as Chief Scientific Officer.
Adoreboard is working with a number of multinational companies that are
interested in using its sentiment analysis software to assess consumers'
emotional responses to their products, drawing on news, media and blog
coverage as well as online comments on Twitter, forums, chat boards and
review websites. Adoreboard was recently selected from among UK start-ups
by Google to take up residence at Google Campus in London.
Sources to corroborate the impact
W3C documentation:
Recommendation for EmotionML: http://www.w3.org/TR/emotionml/
Use cases and incubator group membership: http://www.w3.org/2005/Incubator/emotion/XGR-emotion/#AppendixUseCases
Implementation report: http://www.w3.org/2002/mmi/2013/emotionml-ir/
Applications of EmotionML (brief descriptions are in the W3C
Implementation report)
Deutsche Telekom: Burkhardt, F. (2012). Fast Labeling and
Transcription with the Speechalyzer Toolkit. http://www.lrec-conf.org/proceedings/lrec2012/pdf/110_Paper.pdf
nViso: https://developer.nviso.net/
The MARY text-to-speech system: http://mary.dfki.de/
WASABI: Becker-Asano, C., & Wachsmuth, I. (2010). Affective
computing with primary and secondary emotions in a virtual human.
Autonomous Agents and Multi-Agent Systems, 20, 32-49.
Schuller, B., Baron-Cohen, S., Robinson, P., Golan, O., Newman, S.,
Camurri, A., & Baranger, A. (2012). Integrated Internet-Based Environment
for Social Inclusion of Children with Autism Spectrum Conditions,
deliverable D9.2. http://geniiz.com/wp-content/uploads/2012/12/Deliverable-289021-D9.2-Annual-report-year-1_updated.pdf
Verification of Queen's input to W3C standard
Director of Research, CNRS at LTCI, Telecom-Paris Tech, 37/39, rue
Dareau, 75014 Paris, France. (Senior member of W3C group).
Adoreboard (formerly known as Mediasights)
http://www.adoreboard.com
CEO, 3-5 Commercial Court, Belfast, BT1 2NB. Verification of issues
related to Adoreboard.
Investment Executive, QUBIS Ltd, 63 University Road, Belfast, BT7 1NF.
Verification of the relationship between Adoreboard and the School of
Psychology QUB.