Image: Nola by Studio Drift (used with permission)
“It’s your voice of reason – keeps you in check, pushes you, but also knows what’s best for you.” Adnan, UK
The Shape of Things to Come
Everyday services will soon embrace a layer of functionality akin to the human quality of emotional intelligence, enabling them to be responsive to what their users are feeling. We can also assume the emergence of another breed of services that will seek to regulate or manipulate emotion – as, when, and to whatever end the user requires or desires.
Both of these manifestations of emotionally intelligent technologies will – in theory – also be able to act on their user’s behalf, automating daily tasks and decisions. Moreover, both promise a future in which their adopters will find themselves in a constant state of equilibrium, as their ever-attuned objects and services tirelessly hyperpersonalise their environments.
That, at least, is the vision for the future as seen through the lens of the technology. In order to resist its obvious trappings, we wanted to explore emotional intelligence through the lens of the people who may, one day, be using versions of the aforementioned products or services. To help us identify opportunities for customer-centric innovation in this area, whilst understanding the moral and ethical implications that surround it, we wanted to know – what do people think and feel about emotionally intelligent technology? What do they need or want it to do? And crucially – what will it need to do, or prove, to make sure it can be trusted with data and decisions?
Method & Initial Learnings
We recruited an optimistic, 20-strong mix of young men and women in the UK and the US, highly familiar with VPAs (arguably a predecessor of emotionally intelligent ‘response’ services), to engage in topical discussions surrounding – and design tasks utilising – emotionally intelligent technology. In turn, a 1200-strong panel across Spain, Germany and the US was consulted to quantify sentiment and needs. This is what we learned.
As we discovered early on, the assumption that emotionally intelligent services would be a direct evolution of AI and its automation capabilities was a somewhat naive hypothesis we subscribed to at the outset of our exploration. In its application across everyday services, the benefits of AI (or ‘functional intelligence’) were perceived to be related to savings (time, money, effort). The perceived benefit of emotional intelligence, on the other hand, was wellbeing. Accordingly, it should not be assumed that AI services must necessarily evolve to become emotionally intelligent – or that any service with emotional intelligence needs to automate tasks and decisions.
For example, emotional intelligence was rejected in the context of ecommerce, where it was considered exploitative of what was perceived to be a highly vulnerable data set. Current ecommerce mechanics have already made it painfully easy to buy without thinking (rationally, or at all), whilst the aftermath of irrational spending sprees was frequently associated with guilt and regret. It was no surprise that the prospect of an emotionally aware agent, sitting on top of an ecommerce platform, evoked fears of losing financial control and was therefore rejected outright by our participants.
An explosion of choices across both content and the platforms hosting it has increased the time spent on search and discovery, and many of today’s recommendation or curation efforts often seem to compound – rather than solve – overload.
The prospect of letting their emotions filter content in real time, under the guise of ‘response’, was therefore seen as a welcome evolution by our respondents. In fact, it was even viewed as having the potential to bypass the trappings of content-based filter bubbles in the future (“It can open doors to find new things that might not have been found or experienced before”). Its potential to enhance or mitigate acute emotions in the process also garnered interest.
Considering the compounding stresses of modern life, emotionally intelligent technologies’ potential to regulate emotion was met with great enthusiasm by our participants. This is hardly surprising, given that our contemporary age has often been dubbed an age of anxiety (1 in 5 people in the UK and 4 in 15 globally can attest to this), in which career-, financial-, social- and health-related stresses are playing out against a backdrop of faltering governance, ideological polarization, and ever-widening wealth gaps. Our previous work on the future of connectivity highlights the role connected technologies play in this stress cycle as well (see ‘absorption’ and ‘overload’).
Appetite for emotional regulation from technology has already been whetted by the continued popularity of yoga, secular adaptations of mindfulness (including clinical applications, like MBSR / Mindfulness-Based Stress Reduction), and ‘braintech’ – an emerging genre which is gaining traction and investment (to the tune of half a billion dollars across 15 start-ups deemed worth watching).
Our participants revealed that beyond real-time regulation, specifically around stress and anxiety, they wanted the technology to help them build psychological resilience, cultivate willpower in the face of unhelpful, nutritionally poor or otherwise unhealthy temptations, and break bad habits – thus making the case for programs that complement data-based emotion ‘triage’ with features that facilitate mind transformation over time.
The Mechanics of Building & Sustaining Trust
Emotionally intelligent services will require people to generate and disclose a highly sensitive set of personal data. Our participants spoke about the vulnerability they felt when tracking their own emotions over several days as part of an experiment we conducted – not least because it had the potential to undermine their investments in creating idealised versions of themselves for the world to see across multiple social networks (online and in real life). Perceived to exploit this vulnerability, the idea of emotion data being harnessed for ecommerce and advertising purposes was – unsurprisingly – strongly rejected.
Accordingly, any innovation efforts around emotion data must ensure this vulnerability is acknowledged and respected when it comes to designing how data is collected, handled and secured. Allowing for choices regarding how much or how little a user wants to share, proactive transparency (where / what / why / how, communicated simply and openly), time-limited storage, and corporate accountability will dictate whether or not a prospective user will disclose their data in the first place.
Trust is built and sustained by a myriad of factors, and the experience of a product plays a strong part. Lessons from human-to-human relationships provide a useful framework for how a service experience might unfold (mirroring, knowing, caring). Designing with a relevant archetype in mind can deliver desired characteristics and personality traits (mothers, gurus, therapists and pets were the dominant ones that emerged from our participants’ design tasks).
Towards ‘Emotional Medicine’
“My dream is…[for] a new field of medicine to be established…something like emotional medicine.” – Avi Yaron, med-tech pioneer and CEO of Joy Ventures, speaking at Wired 2015
The opportunities around applying emotional intelligence to improve content discovery are vast in light of the sheer number of platforms and the multi-billion-dollar industries that contain them. Likewise, positioning it as a means to amplify human potential, as some early products and services like Thync or Feel have demonstrated, is lucrative as people strive to make the most of their time and increase their productivity.
When taking into account its abilities to measure, make sense of and influence emotions, the potential for this technology to have a far greater socio-cultural purpose is evident, not least through the eyes of its prospective future users. To achieve this, the medical community’s quest to connect mind and body must formally give way to a new mindset in the field at large. Avi Yaron’s vision is a sound start – whilst ‘mental health’ is limited to issues that require solutions, emotional medicine (or wellbeing) is an ongoing process, for everyone.
Social networks and behavioral analytics companies have recently come under fire for inferring emotion from behavioral data for commercial and / or political gain, unbeknownst to the majority of their users / targets, and with grave socio-cultural consequences. As always, ethics and legislation lag behind; yet once they have caught up and been translated into formats and language easily understood by regular citizens, they will play a key part in ensuring these practices are transparent and easy to opt out of.
As with all new technologies, it’s tempting to rush progress and learn through trial and error. But there is also a strong case to be made for first truly understanding the context in which emotions manifest, from multiple angles – namely, the brain, consciousness and the mind. In his recent interview with Exponential View, Yuval Harari aptly warns, “if we don’t understand the internal ecosystem, the result may be that we destabilise or unbalance it the same way we have unbalanced the external ecosystem.”