Complexity bias is one thing, but what other considerations do companies need to take into account? How can they limit the potential risks in their automated systems? And what responsibilities should they take on?
“When building an AI solution, you first need to consider the context of the solution you are going to build: whether you’re using it to make decisions about an object or a person, for an individual or for a group, whether the impact is immediate or long term, and so on. The result of this analysis will help you understand whether end-to-end automation is desirable for your solution. Some examples: when it comes to automated recommendation engines on web shops, the implications for our lives aren’t very severe. A personalized news feed, however, might not have an immediate effect on you personally, but on a longer time scale it can have a polarizing effect on society. As a company, you want to make your users aware of this, just like we are made aware of the effects of long-term use of medication. Last but not least, a system that automatically selects the best candidate for a job position… that’s where things get risky and we touch upon the well-known problem of unconscious bias.
“AI critics, like mathematician Cathy O’Neil, often use examples of systems that have been automated end to end, without any form of human agency or critical analysis upfront. These systems have had a big impact on people in the real world. My advice to companies is to be absolutely vigilant at the start of the development of AI systems and, even more importantly, to proactively build in ways to correct course when needed.
“But it’s also important to keep in mind that not every problem with automated systems can be attributed to algorithms. Sometimes it’s just a human error. This can occur when models are trained in an environment different from the one in which they are eventually deployed. A typical example is a facial recognition model trained in Western countries and deployed in Asia. That might be a painful error, but in the end, it’s simply the result of a data scientist not receiving the proper context from the business.”
Data makes it possible to strike an optimal balance between internal financial goals and customer satisfaction.
The use of computer vision applications, like facial and image recognition, makes sense in contexts like security and production. What do you think of the use of AI for marketing purposes?
“During my stint at Microsoft, I was asked by management to join the marketing team. As an engineer specialised in robotics and AI, I was initially reluctant. Marketing was just branded t-shirts and pens to me at the time; I didn’t see how my expertise could offer any value.
“Soon, however, it became clear to me that customer data held a wealth of insights that could turn the marketing department from a cost into a profit center, simply through the application of data science. The goal of a marketeer is to understand and predict the behaviour of customers, and to go from a push to a pull model. Data-driven decisions make it possible to redefine customer loyalty. They also allow companies to strike an optimal balance between internal financial goals and customer satisfaction.”
What has changed in terms of the data marketeers get access to?
“In the past, all marketeers had access to the same sociodemographic data. Today, data can provide far more valuable insights. Based on your clicking behaviour, for example, AI can tell whether you are in a surfing or a buying mode. Insights like these are a dream for any neuro-marketeer. They say a lot about customer preferences and how to optimize for them. However, marketing professionals need to keep an open mind when it comes to interpreting the data and adjusting according to real-world feedback.”
“For a long time, especially in the US, there has been an undue emphasis on hardcore marketing driven by personal data. In contrast, many European companies today are focusing on improving customer experience and services. In marketing, what one person experiences as inspiring can be highly irritating to someone else. Although my husband and I have almost identical sociodemographic profiles, we react very differently to certain forms of marketing. I absolutely hate it when my bank invites me to a face-to-face meeting to discuss investments. He loves that kind of personal approach.”
In marketing, what one person experiences as inspiring can be highly irritating to someone else.
And finally, do we, like Elon Musk, need to be fearful of AI?
“Someone recently sent me a great cartoon of an off switch, with the caption ‘man wins Go match against AI in just one set’. I mean, we need to remember that AI is just a tool, and that, in the end, we can switch it off at any time. It is not something we need to believe in; it is something that simply needs to work in line with our human ethics.
“I like to think of AI more in terms of opportunities and intelligent inspiration from nature, as it allows us to achieve things that are beyond our human capabilities. Articles about how AI techniques like image recognition are being deployed to protect bee colonies inspire me a lot. In my ideal world, AI will help us undo some of the damage we’ve done to the natural world.”
We need to remember that AI is just a tool, and that, in the end, we can switch it off at any time.
Need a comprehensive overview of AI, its implications, and what’s on the horizon? Grab a copy of Mieke De Ketelaere’s book Mens versus Machine (now available in Dutch, French and English).