Artificial intelligence is on the rise again. Or, better said, it is finally on the rise. While AI has long been a favourite subject in popular culture (Her, Humans and Ex Machina), only recently has it begun to be used in real-world applications. AI has finally crept off our television screens and edged outside the circles of academia, into the products and services we use every day.
Google recently launched a special programme to replace the algorithms behind its tools with machine learning ones; we can also find AI algorithms in the way Facebook orders posts in our news feed and in Netflix's film recommendations. It's in stock market transactions, in pilot services aiding doctors with diagnostic imaging, and in research centres developing self-driving cars.
Shifting our attention to connected objects, one of the great benefits of connectivity is the ability to take advantage of online services. This means that connected objects have at their disposal an amount of processing power and memory they could never afford on their small chips. They are even able to use computation- and data-heavy tools such as those built on artificial intelligence algorithms. And with a plethora of online services offering 'free to test' AI functionalities, prototyping and testing products that integrate technologies such as image recognition, speech recognition or even ad-hoc machine learning algorithms is no longer reserved for R&D labs.
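To give a sense of how low the barrier has become, here is a minimal sketch of how a prototype might ask one such online service to label an image, using Google's Cloud Vision REST API as an example. The endpoint and payload shape follow Google's published API; the API key, image file and surrounding scaffolding are placeholders for illustration.

```python
# Minimal sketch: asking a cloud image-recognition service to label a photo.
# Uses the Google Cloud Vision "images:annotate" REST endpoint; an API key
# (not shown) would be needed to actually send the request.
import base64
import json

VISION_ENDPOINT = "https://vision.googleapis.com/v1/images:annotate"

def build_label_request(image_bytes, max_results=5):
    """Build the JSON payload asking the service for image labels."""
    return {
        "requests": [{
            "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
            "features": [{"type": "LABEL_DETECTION", "maxResults": max_results}],
        }]
    }

if __name__ == "__main__":
    # Actually sending the request needs network access and a key, e.g.:
    #   requests.post(f"{VISION_ENDPOINT}?key={API_KEY}",
    #                 json=build_label_request(open("photo.jpg", "rb").read()))
    payload = build_label_request(b"placeholder image bytes")
    print(json.dumps(payload, indent=2))
```

A few lines like these, pointed at a free trial tier, are enough to put image recognition inside a hardware prototype.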
The use of AI in objects will lead to new kinds of relationships between products and humans. With these new relationships come new challenges (and responsibilities) for designers and technologists. "Smart has no loyal opposition" argues Bruce Sterling, talking about smart cities and criticising the use of the term to describe a city filled with sensors and apps. 'Smart' at its worst is a one-size-fits-all attribute that steers us away from actually considering the advantages of the upgrade. This argument applies perfectly to objects too. It's easy to see the same problem in the current gold rush for the smartification of things, where putting a chip in something constitutes a value per se, regardless of the actual utility of the new product. It's important to eschew this oversimplification and adopt a more nuanced vocabulary if we are to better understand designing "smart" things.
Evgeny Morozov offers a useful distinction between 'good smart' and 'bad smart' connected objects. 'Good' here means smart objects that assist and support, but ultimately leave the final decision to the user. He mentions a teapot with a light indicator signalling the current state of the electrical grid, encouraging users to make an energy-informed, conscious decision each time they go to boil the kettle, thus potentially helping avoid overloading the grid in times of high consumption. Crucially, the final decision is left to the user, who can choose to ignore the advice and still make a cuppa.
Now for the 'bad'. These are objects that make decisions for the user, preventing certain options or enforcing specific ones. This approach, Morozov argues, is problematic because it assumes that the designer always knows how we should behave, and, in his terms, "fails to treat us as autonomous human beings, capable of weighing the options by ourselves". Matt Webb and Matt Jones also weigh in on the matter. Both former principals at innovative design studio BERG, they have been speculating on robots and products for many years, and in light of the recent resurgence of AI have briefly set out their views on designing with artificial intelligence. Matt Webb writes about intelligent products and services designed to do something for us, as opposed to allowing us to do even more. Matt Jones picks up on that and explains his idea of centaur designers - which sees human and non-human intelligences as collaborators in creation - and opposes it to the Butler AI, which simply does things for us.
These arguments raise interesting points, but they also reveal cases where the preferred approach is not always the ideal one: for better or worse, sometimes we might just want a product to do things for us and lighten our task load, even accepting that it might not always behave as we would like. A system that enforces or prevents certain user actions can surely be frustrating, but Morozov himself admits that a loss of agency might be an acceptable price to pay in certain critical situations - for instance in a system that checks if we are sober before letting us start the car.
AI technology is finally showing its capabilities in real-world cases, and tech companies such as Google, Microsoft, Amazon and IBM are making their AI technology available to use. These newly available AI resources, alongside the discussions detailed above, inspired us to spend a week trying to find a systematic approach to using this new tool, and to test that approach with some rapid prototypes.
To help us better understand the different ways AI can function, we developed different AI 'personas': the Butler, the Police and the Buddy. These personas demonstrate three different ways of employing smartness, to help us better design AI-powered products. By no means a rigid set of rules, these three personas are more of a vocabulary to help start thinking and designing with artificial intelligence.
Borrowing the term from Matt Jones’ aforementioned blog post, the Butler AI simply does things for us. The Butler in AI is what we would refer to as ‘seamless’ in UX: a system that knows what you might need and does it, requiring little or no explicit interaction.
There are cases in which this is a highly desirable capability, for instance a robot vacuum cleaner's behaviour. The new Roomba 980 uses computer vision to map your house in order to know where it has been and where it still needs to go. We are perfectly fine with it doing this autonomously, and don't moan about the robot not asking for permission to go into one room or another. But while there are hardly any risks in letting technology take care of a mundane task such as cleaning the floor, there are cases where this approach would be problematic.
The first is that such a model requires the designer either to consider all the different kinds of users and design for each, or, in the more common scenario, to design a standard default behaviour supposedly apt for the great majority of users - an approach that can fail when designing for diversity. The second issue centres on legibility. Seamless interactions and invisible interfaces might leave us even more disconnected from the technologies we use and rely on. As these become more complex and the effects of their actions more important, the risk of being alienated from such important infrastructure carries threatening implications.
A second approach we identified is the Police. This describes a smart system that prevents some choices and enforces others, following some ‘higher good’ that might even go against the user’s intentions.
This covers quite a few cases; a banal example is the smart alarm clock. We can enhance it by making it aware of sleep cycles, syncing it with our calendar and letting it learn from our routines, but eventually it still needs to enforce a certain action in order to perform its core function. The alarm clock, though, was inherently a 'police' device even before any technological upgrade, while in other products smartness introduces the enforcing behaviour as a novel functionality. We already mentioned a breathalyser that checks we are sober before letting us start the car as a good example of a police-like smart system. A more nuanced example is the learning function in the Nest thermostat. The first version of the algorithm would set the temperature to a value the user had never chosen, in order to help them save energy. This might be fine for users who accept the inconvenience for the prospect of a higher good (environmental, or a cheaper bill), but reactions against the perceived lack of control eventually led the company to change the algorithm and eliminate this behaviour.
It's easy to see the benefit of this approach in enforcing certain widely beneficial choices (environment, savings, safety…), but the problem is deciding who is in charge of those decisions, and to what extent they should be enforced. Exceptions occur, and there might be cases where it's better to get fined for driving after a couple of drinks than to be unable to drive to the hospital with an injury. The implications of those decisions can be highly critical, ranging from designing for good to designing dystopia.
The last persona is the Buddy: an AI that supports the user without substituting them. The machine here is a friend. You ask it for help, and it won't judge you or act on your behalf uninvited. The crucial aspect of this metaphor, and what distinguishes it from the previous two, is that the final choice is always left to the user. Options are not enforced or implied, but suggested.
A good example of this type of AI relationship is a historical one. Chess grandmaster Garry Kasparov was famously beaten by Deep Blue, IBM's supercomputer, in 1997. What's less famous about this story (although quite a common reference in talk of machines vs. humans) is what happened afterwards. Kasparov didn't resign from his profession in desolation, hailing the new robot lord of the game; instead he theorised and instituted what goes by the name of Centaur chess. This follows the same rules as traditional chess, the only difference being that players are allowed to use specific computer programs to explore possible moves at each stage of the game. And, most importantly, this human-machine team has proven superior to both humans and machines alone. It's easy to see the appeal of such a reference: here is a human-computer entity in which the human is empowered rather than replaced (quite a positive message, given the current Luddite wave against tech). In Centaur chess the machine does what it's best at, precise data crunching, while the human applies the skills he or she excels at: creativity and intuition.
Moving to a more concrete example, consider IBM Chef Watson. This cooking version of the IBM AI software generates a number of ingredient pairings and recipes, starting from an ingredient and a cooking style. The result is obtained by manipulating and processing information from both existing recipes and scientific data about flavour compounds. What's interesting is that Chef Watson is not culturally biased, and is not afraid of mashing up foods from different origins and pairing unexpected ingredients (a bit of playing with the service resulted in a cutlet using tofu and mozzarella - something that would surely make the spines of both my Italian parents and my Japanese friends shiver). These unexpected pairings can then be a novel starting point for creating new recipes.
What makes Chef Watson a Buddy AI? It does not try to automate a task, but rather to complement and augment human work. Although the AI can access, process and cross-reference a huge amount of data about global cuisines and the science of flavour compounds - an ability that not even the most experienced and culturally open chef could fully develop in a lifetime - its ingredient and recipe suggestions constitute just a starting point for the cook, who is left with the final decision on how to combine and prepare them, based on their own experience and taste.
The persona metaphors used here introduce a shift in the perception of how we relate to objects, towards more accurate and healthier relationships. Firstly, they help us avoid the term 'magic', often used in narratives around connected objects. Promising experiences where things react to our desires as if enchanted is risky: when the tech fails in critical situations, the fairy tale might even turn into a horror story. And as AI makes smart behaviours more complex and nuanced, seamless and frictionless interactions will likely be rare.
Personifying smart objects is not a novel approach. BERG used a real-world metaphor when discussing B.A.S.A.S.P. (be as smart as a puppy) in smart products: "Making smart things that don't try to be too smart and fail, and indeed, by design, make endearing failures in their attempts to learn and improve. Like puppies." More recently, Dan Hill used another real-world metaphor to describe a machine-human relationship in an opinion piece for Dezeen, referring to the shepherd and sheepdog as a useful way to describe how robots and humans will cooperate in the city. One particularly interesting aspect of this is the quality of the relationship, which is never master-servant, but that of separate entities with possibly the same agenda and different ways of doing things. In his words: "animals are rarely willing to just be servants [...] So "shepherd-sheepdog" might be a usefully complicated relationship, but it's a relationship with clear motive nonetheless."
Finally, just as people are not reducible to a single character, when designing the behaviour of AI-powered objects we are likely to employ each of the previous categories in different proportions, depending on each interaction and experience route of the product. Nonetheless, isolating them in specific aspects gives clarity on the goal and relationship to aim for when developing the product's behaviour.
The initial prototype born from our exploration of AI personas is Timbre, an emotional radio.