Discovering Emotional Engines: what will it take to trust emotionally intelligent machines with data and decisions?

25 July 2017 at 18:00
Image: Nola by Studio Drift (used with permission)

"It's your voice of reason – keeps you in check, pushes you, but also knows what's best for you."
– Adnan, UK

The Shape of Things to Come

Everyday services will soon embrace a layer of functionality akin to the human quality of emotional intelligence, enabling them to respond to what their users are feeling. We can also expect another breed of services to emerge that will seek to regulate or manipulate emotion as, when, and to whatever end the user requires or desires.

Both of these manifestations of emotionally intelligent technologies will – in theory – also be able to act on their user's behalf, automating daily tasks and decisions. Moreover, both promise a future in which their adopters will find themselves in a constant state of equilibrium, as their ever-attuned objects and services tirelessly hyperpersonalise their environments.

That, at least, is the vision for the future as seen through the lens of the technology. To resist its obvious trappings, we wanted to explore emotional intelligence through the lens of the people who may, one day, be using versions of these products or services. To help us identify opportunities for customer-centric innovation in this area, whilst understanding the moral and ethical implications that surround it, we wanted to know: what do people think and feel about emotionally intelligent technology? What do they need or want it to do? And crucially, what will it need to do, or prove, to be trusted with data and decisions?

Method & Initial Learnings

We recruited an optimistic, 20-strong mix of young men and women in the UK and the US, all highly familiar with VPAs (virtual personal assistants – arguably a predecessor of emotionally intelligent 'response' services), to engage in topical discussions surrounding – and design tasks utilising – emotionally intelligent technology. In turn, a 1,200-strong panel across Spain, Germany and the US was consulted to quantify sentiment and needs. This is what we learned.

As we discovered early on, the assumption that emotionally intelligent services would be a direct evolution of AI and its automation capabilities was a somewhat naive hypothesis we had subscribed to at the outset of our exploration. In its application across everyday services, the benefits of AI (or 'functional intelligence') were perceived to be related to savings (time, money, effort). The perceived benefit of emotional intelligence, on the other hand, was wellbeing. Accordingly, it should not be assumed that AI services must necessarily evolve to become emotionally intelligent, nor that any service with emotional intelligence needs to automate tasks and decisions.

For example, emotional intelligence was rejected in the context of ecommerce, where it was considered exploitative of what was perceived to be a highly vulnerable data set. Current ecommerce mechanics have already made it painfully easy to buy without thinking (rationally, or at all), whilst the aftermath of irrational spending sprees was frequently associated with guilt and regret. It was no surprise that the prospect of an emotionally aware agent sitting on top of an ecommerce platform evoked fears of losing financial control, and was therefore rejected outright by our participants.

Response

An explosion of choices across both content and the platforms hosting it has increased the time spent on search and discovery, and many of today's recommendation or curation efforts often seem to compound – rather than solve – overload.

The prospect of letting their emotions filter content in real time, under the guise of 'response', was therefore seen as a welcome evolution by our respondents. In fact, it was even viewed as having the potential to bypass the trappings of content-based filter bubbles in the future ("It can open doors to find new things that might not have been found or experienced before"). Its potential to enhance or mitigate acute emotions in the process also garnered interest.
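To make the 'response' idea concrete, here is a minimal sketch of how such a filter might re-rank a content feed against a detected mood. Everything in it – the mood labels, the `energy` and `novelty` scores, and the `respond` function – is a hypothetical illustration rather than any product from the study; how the mood signal itself is produced is out of scope.

```python
from dataclasses import dataclass

@dataclass
class ContentItem:
    title: str
    energy: float   # hypothetical editorial score: 0 (calming) to 1 (stimulating)
    novelty: float  # hypothetical score: 0 (familiar) to 1 (outside the user's usual taste)

def respond(items: list[ContentItem], mood: str) -> list[ContentItem]:
    """Re-rank a feed against a mood label supplied by an upstream emotion-sensing layer."""
    if mood == "stressed":
        # Mitigate an acute emotion: surface the calmest content first.
        return sorted(items, key=lambda item: item.energy)
    if mood == "bored":
        # Counter the filter bubble: surface the least familiar content first.
        return sorted(items, key=lambda item: -item.novelty)
    # No strong signal: leave the feed untouched.
    return items

feed = [
    ContentItem("Ambient mixtape", energy=0.1, novelty=0.2),
    ContentItem("Hungarian folk-jazz", energy=0.6, novelty=0.9),
    ContentItem("Workout playlist", energy=0.9, novelty=0.1),
]
print([item.title for item in respond(feed, "bored")])  # the least familiar item comes first
```

Note how the 'bored' branch deliberately promotes unfamiliar content – the mechanism behind the hope, voiced above, that response services could open doors rather than narrow them.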


Regulation

Considering the compounding stresses of modern life, emotionally intelligent technologies' potential to regulate emotion was met with great enthusiasm by our participants. This is hardly surprising, given that our contemporary age has often been dubbed an age of anxiety (1 in 5 people in the UK and 4 in 15 globally can attest to this), in which career-, financial-, social- and health-related stresses are playing out against a backdrop of faltering governance, ideological polarisation, and ever-widening wealth gaps. Our previous work on the future of connectivity highlights the role connected technologies play in this stress cycle as well (see 'absorption' and 'overload').

Appetite for emotional regulation from technology has already been whetted by the continued popularity of yoga, secular adaptations of mindfulness (including clinical applications like MBSR, Mindfulness-Based Stress Reduction), and 'braintech' – an emerging genre that is gaining traction and investment (to the tune of half a billion dollars across 15 start-ups deemed worth watching).

Our participants revealed that beyond real-time regulation, specifically around stress and anxiety, they wanted the technology to help them build psychological resilience, cultivate willpower in the face of unhelpful, nutritionally poor or otherwise unhealthy temptations, and break bad habits – making the case for programmes that complement data-based emotion 'triage' with features that facilitate mind transformation over time, as sketched below.
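As an illustration of that pairing, a regulation service might combine an immediate triage rule with a slower, adaptive programme. This is a minimal sketch under invented assumptions – the thresholds, the 0-to-1 `stress_level` scale, and the intervention names are all hypothetical.

```python
def triage(stress_level: float) -> str:
    """Immediate, data-based response to a single acute reading (hypothetical thresholds)."""
    if stress_level > 0.8:
        return "start a guided breathing exercise"
    if stress_level > 0.5:
        return "suggest a short walk"
    return "no intervention"

def weekly_programme(readings: list[float]) -> str:
    """Longer-term 'mind transformation': adapt next week's plan to the overall trend."""
    average = sum(readings) / len(readings)
    if average > 0.6:
        return "resilience training: a daily ten-minute mindfulness block"
    return "maintenance: habit check-ins twice a week"

week = [0.9, 0.7, 0.6, 0.8, 0.5]
print(triage(week[-1]))        # today's reading alone needs no intervention...
print(weekly_programme(week))  # ...but the week's trend still triggers resilience work
```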

The Mechanics of Building & Sustaining Trust

Emotionally intelligent services will require people to generate and disclose a highly sensitive set of personal data. Our participants spoke about the vulnerability they felt when tracking their own emotions over several days as part of an experiment we conducted – not least because the data had the potential to undermine their investments in creating idealised versions of themselves for the world to see across multiple social networks (online and in real life). The idea of emotion data being harnessed for ecommerce and advertising purposes was perceived to exploit this vulnerability and was, unsurprisingly, strongly rejected.

Accordingly, any innovation efforts around emotion data must ensure this vulnerability is acknowledged and respected when it comes to designing how data is collected, handled and secured. Allowing for choices regarding how much or how little a user wants to share, proactive transparency (where / what / why / how, communicated simply and openly), time-limited storage, and corporate accountability will dictate whether or not a prospective user discloses their data in the first place.
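One way to picture those requirements is as defaults baked into the data model itself. The sketch below is a hypothetical illustration of that principle – the field names, the seven-day default retention and the `purge_expired` helper are all invented – with sharing choices that default to off, an advertising flag that honours the outright rejection described above, and storage that is time-limited by construction.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class EmotionDataConsent:
    """Hypothetical consent record: the user, not the service, sets the scope."""
    share_raw_signals: bool = False           # e.g. voice features, heart rate
    share_inferred_moods: bool = False        # e.g. "stressed", "calm"
    allow_advertising_use: bool = False       # rejected outright by participants
    retention: timedelta = timedelta(days=7)  # time-limited storage by default

@dataclass
class EmotionRecord:
    mood: str
    captured_at: datetime

def purge_expired(records: list[EmotionRecord], consent: EmotionDataConsent) -> list[EmotionRecord]:
    """Enforce the retention window: expired readings are dropped, not archived."""
    cutoff = datetime.now(timezone.utc) - consent.retention
    return [r for r in records if r.captured_at >= cutoff]
```

Proactive transparency would then amount to rendering exactly this record – where, what, why, how – back to the user in plain language.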

Trust is built and sustained by a myriad of factors, and the experience of a product plays a strong part. Lessons from human-to-human relationships provide a useful framework for how a service experience might unfold (mirroring → knowing → caring). Designing with a relevant archetype in mind can deliver desired characteristics and personality traits (mothers, gurus, therapists and pets were the dominant ones that emerged from our participants' design tasks).
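The mirroring → knowing → caring progression can also be read as a gate on what the service is allowed to do: automation arrives only once the relationship has earned it. The mapping below is a hypothetical sketch – the stage names come from the framework above, but the permissions attached to each are invented.

```python
from enum import IntEnum

class TrustStage(IntEnum):
    MIRRORING = 1  # reflect the user's state back without acting on it
    KNOWING = 2    # recognise patterns across days and contexts
    CARING = 3     # act proactively on the user's behalf

def allowed_actions(stage: TrustStage) -> list[str]:
    """Permissions unlock gradually as the relationship deepens (hypothetical mapping)."""
    actions = ["acknowledge the detected mood"]
    if stage >= TrustStage.KNOWING:
        actions.append("surface patterns ('you tend to feel low on Sunday evenings')")
    if stage >= TrustStage.CARING:
        actions.append("intervene proactively (e.g. suggest rescheduling a stressful meeting)")
    return actions

print(allowed_actions(TrustStage.MIRRORING))  # only the most modest action is available
```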

Towards 'Emotional Medicine'

"My dream is… [for] a new field of medicine to be established… something like emotional medicine."
– Avi Yaron, med-tech pioneer and CEO of Joy Ventures, speaking at Wired 2015

The opportunities around applying emotional intelligence to improve content discovery are vast in light of the sheer number of platforms and the multi-billion-dollar industries behind them. Likewise, positioning it as a means to amplify human potential, as some early products and services like Thync or Feel have demonstrated, is lucrative as people strive to make the most of their time and increase their productivity.

When taking into account its abilities to measure, make sense of and influence emotions, the potential for this technology to have a far greater socio-cultural purpose is evident, not least through the eyes of its prospective future users. To achieve this, the medical community's quest to connect mind and body must formally give way to a new mindset in the field at large. Avi Yaron's vision is a sound start – whilst 'mental health' is limited to issues that require solutions, emotional medicine (or wellbeing) is an ongoing process, for everyone.

Ethics

Social networks and behavioural analytics companies have recently come under fire for inferring emotion from behavioural data for commercial and / or political gain, unbeknownst to the majority of their users and targets, and with grave socio-cultural consequences. As always, ethics and legislation lag behind; yet once they have caught up and been translated into formats and language easily understood by ordinary citizens, they will play a key part in ensuring these practices are transparent and easy to opt out of.

As with all new technologies, it's tempting to rush progress and learn through trial and error. But there is also a strong case to be made for first truly understanding the context in which emotions manifest, from multiple angles – namely, the brain, consciousness and the mind. In his recent interview with Exponential View, Yuval Harari aptly warns: "if we don't understand the internal ecosystem, the result may be that we destabilise or unbalance it the same way we have unbalanced the external ecosystem."
