An AI for the person you wish you were
Philippe Beaudoin
March 4 · 11 min


The most pressing issues we face as a species are global. We need to work together to figure out what the world needs in order to return to equilibrium, to become sustainable again, and, eventually, to thrive.

Science is our greatest ally in this endeavour, but it's not enough. Only deliberate, well thought-out collaborative efforts can get us where we need to be.

Collaboration on a global scale is a great thing to wish for, but there is no way for us to work together cohesively unless we understand what motivates us as humans. In order to solve problems that rely on collective action, we need to understand what drives us as individuals and what we truly want—our deepest personal hopes and ambitions.

Yet it is not enough to understand only our aspirations. The truth is that we often behave in ways that are very far from these aspirations. If we’re honest with ourselves, I’m sure we will all agree there is a difference between the person we are and the person we wish we were.

What the world needs

So we’re facing two chasms: one between what we do and what we want, and another between what we want and what the world actually needs.

If we really care about leaving this world better off than we found it, we should make it a priority to bridge these gaps.

Some say technology is the answer. But the solution to global problems isn’t better versions of existing code or incremental improvements to the gizmos we build. It’s true that we’ve achieved great things with the hardware and software developed over the past 40 years. But the goal has rarely been the betterment of humanity. And while we can build better satellite systems, more efficient farming equipment, or more engaging mobile apps, those don’t directly solve the big problems we face.


Unlocking our potential

The solution to global problems is us. It’s humanity. As humans, we do amazing things when we’re the best versions of ourselves. Let’s go back to the drawing board and build systems that help us achieve that.

We went to the moon and back with slide rules and manual computation. Imagine what we could do with assistance from technology that helps us unlock our potential.

We should look for ways to help people understand what the world needs and aspire to achieving those goals. At the same time, we should find ways to help people behave according to their aspirations.

The first gap is the realm of politics and priorities. Though important, it’s a much larger discussion beyond the scope of this article. The second gap is our focus: Our current AI-driven software often increases the distance between who we are and who we aspire to become.

We’ve spent the last 20 years building software that makes it harder for us to reach our potential. Though it was unintentional, our technology is built in a way that reinforces our previous actions and behaviours, not what we want to be or could become. But taking a new approach could shift this tendency and turn our AI assistants into strong allies in our quest for a better world.

We know AI-enabled software is powerful when it comes to user engagement. What if we could turn that power to help us live the lives we want to live?


Design for the reptilian brain


AI systems today make their decisions by collecting huge amounts of data. Many of these systems are trying to help us, the users, and to do that they need to collect data about us.

Yet the only data these tools collect is about what we do when we use them. What we click on, the web pages we visit, the news we reshare, the apps we spend time in.

To these AIs, people are the sum total of what they do: their behaviours and nothing more.

The behaviours we adopt when we interact with our technological tools are almost always fast, instinctive, and emotional. These are the kinds of behaviours neuroscientists in the 1960s would have attributed to our reptilian brain. They emerge from what Daniel Kahneman calls “System 1” in his best-seller Thinking, Fast and Slow: thinking that is fast, instinctive, and emotional. Mobile apps, video games, and even television news take advantage of this mode of thought.

The logician inside

We have an entirely different way of thinking, though: what Kahneman calls “System 2”. It’s a slower, more deliberate, and more logical form of thinking.

This is the system we refer to as the “conscious self”. It holds our beliefs and allows us to perform complex reasoning and make conscious choices after considering the outcomes.

Crucially, it’s the part of us that can monitor our own behaviour. This is also the part of us that we so rarely rely on when we interact with technology.

That’s because System 2 requires concentration and work, while we can tap into System 1 almost effortlessly.



Incomplete data leads to cat videos

What does this mean for our AI systems? Today, these systems mostly rely on machine learning and are therefore heavily dependent on the data they are exposed to.

Since we rely on System 1 for most of our interactions with technology, our AI systems end up being trained on data that is heavily biased towards decisions we make instinctively and emotionally. No effort is made to rebalance this training data to account for the fact that our instinctive decisions capture only a fraction of who we are.

The concrete effects of overfitting AI systems on instinctive and emotional decisions are numerous and pretty darn awful.

Let’s start with a first example: cat videos.

If you’re like me and billions of other travellers of the Internet, you find cat videos adorable. If a cute kitten catches your eye on a video website, you might very well click on it and coo delightedly as you watch mister kitten bounce clumsily around the screen.

There’s little doubt that System 1 is responsible for that click. Yet, to the AI, this is just another data point to add to its collection. What it learns from it is that you love spending your day watching cute cat videos.

So the next time you visit this video website, the recommendation engine is likely to throw more cute cats at you.

If you never spend time away from your technological gizmos reflecting on what you watch on the Internet, you may end up clicking on more and more cat videos, and the AI behind these recommendations will think it is doing its job perfectly.

Naturally, most of us have enough control to skip over cat videos when we need to do real work. However, the impact of AI trained on System 1 behaviours can be a lot more insidious.


Flood of notifications


A recent study by research firm Wonder found that people receive on average 30 to 80 push notifications a day on their smartphones. In other words, your phone is going to vibrate in your pocket 30 to 80 times a day, urging you to take it out and take a look at what you just received. It could be a call, a text, an email, a status update on Facebook, or anything else.

It’s very instinctive to react to that vibration and take a quick look at what’s on your phone. This is System 1 in action.

Now, if you do react, then the software responsible for this notification is gathering yet another data point. Even if you end up not clicking on the notification, it’s entirely possible that the software uses the inertial sensors to detect that you’ve taken your phone out of your pocket.

All these signals result from our instinctive reaction to a vibrating phone. Yet they could all be interpreted by the AI software as reinforcement that the notification was desirable.

As you can see, an AI trained on such instinctive behaviours is likely to drive us deeper and deeper into notification hell.

The examples of cat videos and notifications show how AI trained on System 1 behaviours can negatively impact our personal interactions with technology. This is bad enough, yet things can get even worse when we look at the impact of such AI on our social interactions.

Echo Chambers


In a paper published in 2015 in the journal Science, researchers analysed the news feeds of 10.1 million Facebook users who self-identified as Republicans or Democrats. They found that roughly 70 per cent of the news articles people saw on Facebook were aligned with their political views. In other words, the echo chambers are real.

It’s not unexpected that people would see more of the content they align with. After all, it’s much more tempting to click Like or to share a news post that reinforces our political beliefs.

Yet the act of liking or sharing often stems from the emotional reaction we had when we read the post. Again, this is a behaviour that can be attributed to our System 1 thinking.

These behaviours are used to train the AI behind each user’s personalized Facebook newsfeed. The news someone is exposed to is therefore largely dependent on their emotional reaction to previous similar news.

If, like me, you believe that the best way to form an enlightened opinion is to be exposed to different views and keep an open mind, then you may try to control every one of your clicks and to be very careful which links you follow. Even that, though, is likely to fail given how much of what you see depends on the behaviours of your friends on the social network.

In fact, it’s so hard to be exposed to the other echo chamber that The Wall Street Journal built an application called Blue Feed, Red Feed to expose you to news from both sides of the divide.

Designing better AI to better you

If our AI systems were trained on a balanced diet of data, capturing both our behaviours and our aspirations, then it’s likely that we would watch fewer cat videos, receive notifications only for the things we really care about, and read more diverse news.

So, how can we design such AI?

Nobody really knows yet, but it’s interesting to identify a number of steps that might take us in this direction.

Tracking

First, it’s important to realize that the user data we train our machine learning models on can come from fast, instinctive, and emotional actions; or it can come from slow, thoughtful, deliberate actions. As designers and engineers, we can take steps to identify which actions come from System 1 and which come from System 2.

For example, we could measure the time from stimulus to action. If that time is too short, then the user is likely to be in System 1 thinking.

Once we know this, we can keep track of that information, associating each point of user data with the type of thinking that generated it.
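
As a rough illustration, here’s a minimal sketch, in Python, of what that tagging could look like. The event fields, the two-second threshold, and the label names are all assumptions made for the example, not a prescription.

```python
from dataclasses import dataclass

# Illustrative cutoff only; a real system would need to calibrate this
# per type of action, since "fast" means different things in different contexts.
SYSTEM_1_THRESHOLD_SECONDS = 2.0

@dataclass
class InteractionEvent:
    user_id: str
    action: str                        # e.g. "clicked_video", "opened_notification"
    stimulus_to_action_seconds: float  # time between seeing the stimulus and acting

def label_thinking_mode(event: InteractionEvent) -> str:
    """Tag an event as 'system_1' (fast, instinctive) or
    'system_2' (slow, deliberate) based on reaction time."""
    if event.stimulus_to_action_seconds < SYSTEM_1_THRESHOLD_SECONDS:
        return "system_1"
    return "system_2"

# Keep the label attached to each data point so that training can
# later treat the two kinds of signal differently.
events = [
    InteractionEvent("u42", "clicked_cat_video", 0.8),
    InteractionEvent("u42", "saved_longread_for_later", 14.5),
]
labelled = [(event, label_thinking_mode(event)) for event in events]
```

Reaction time is only one possible signal; anything that helps distinguish a reflexive tap from a considered choice could play the same role.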


Rebalancing


The next step might be to try to rebalance our machine learning models by weighting the rare data points coming from System 2 more heavily than the numerous data points coming from System 1.

The result would likely be AI software that seems to understand us at a deeper level. A bit like how a friend who knows we’re trying to quit smoking is likely to help us do so, even if they see us smoking a pack a day.
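
To make this concrete, here’s a toy sketch of one way such rebalancing could be done, assuming each data point carries the System 1 or System 2 tag from the previous step. It uses per-sample weights with scikit-learn; the features, labels, and the factor of 10 are purely illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# X: feature vectors describing past interactions; y: whether the user engaged.
X = np.array([[0.9, 0.1], [0.8, 0.2], [0.7, 0.3], [0.1, 0.9]])
y = np.array([1, 1, 1, 0])

# The System 1 / System 2 tag attached to each data point (see the tracking sketch).
labels = np.array(["system_1", "system_1", "system_1", "system_2"])

# Weight the rare, deliberate System 2 points more heavily than the
# plentiful System 1 points. The factor of 10 is arbitrary here.
SYSTEM_2_WEIGHT = 10.0
sample_weight = np.where(labels == "system_2", SYSTEM_2_WEIGHT, 1.0)

model = LogisticRegression()
model.fit(X, y, sample_weight=sample_weight)
```

The right weighting is an open question, and a naive factor could easily overcorrect; the point is simply that the rare, deliberate signals should not be drowned out by the flood of instinctive ones.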

Capturing our aspirations

Tracking and rebalancing may be good first steps. However, we make the best use of System 2 when we sit quietly and reflect on the differences we want to make in our lives.

For now, there’s basically no way for our AI systems to be exposed to these deeper aspirations. If we want to change our behaviour in a meaningful way, for example by developing a new habit that would help improve the environment, then tough luck. Our technological gizmos will probably never learn about it.

This could be a new frontier in AI development. We don’t have the answers yet, but we have brilliant scientists from around the world already working on AI research. We should take time to think about how to build AI systems that capture our aspirations and who we want to be, not only who we are.

This may lead to AI assistants that encourage us to spend a lot more time offline. And that may mean AI software that helps us work together to solve the world's most pressing issues.