7 May 2019
In this guest blog, Daniel Hulme - CEO of AI solutions company Satalia - discusses the future of AI and the role it will play in forcing us to collectively agree on the ethical behaviours that constitute 'the right thing to do'.
There are two definitions of Artificial Intelligence (AI), and the most popular one is the weakest, which is getting machines to do things that humans can do.
Over the past decade, due to advances in technologies like deep learning, we have started to build machines that can do things like recognise objects in images, and understand and respond to natural language. Humans are the most intelligent things we know in the universe, so when we start to see machines do tasks once constrained to the human domain, then we assume that is intelligence.
Benchmarking machine intelligence against human intelligence is not a sensible thing to do. Humans are good at finding patterns in at most four dimensions, and we're terrible at solving problems that involve more than seven things. Machines can find patterns in thousands of dimensions and can solve problems that involve millions of things. These technologies aren't AI; they're just algorithms, and they do the same thing over and over again. In fact, the definition of stupidity is doing the same thing over and over again and expecting a different result.
The best definition of intelligence that I've found is 'goal-directed adaptive behaviour'. Goal-directed means trying to achieve some defined objective, which in business might be to roster your staff more effectively, or to allocate marketing spend to sell the most ice creams. Behaviour is how quickly or frictionlessly you can move resources to achieve the objective. For example, if my goal is to sell lots of ice creams, how can I allocate my resources to make sure that I'm achieving the objective? But the key word for me in the definition of goal-directed adaptive behaviour is 'adaptive'. If your computer system is not making a decision, then learning whether that decision was good or bad, and then adapting its own internal model of the world, I would argue that it's not true AI.
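To make that decide–observe–adapt loop concrete, here is a minimal sketch (all numbers and names invented for illustration) of an ice-cream seller that decides how much to stock, observes actual demand, and updates its internal model of the world. The adaptation is the point: the system starts with a wrong estimate and converges toward reality without a human correcting it.

```python
import random

random.seed(42)

class AdaptiveStocker:
    """Goal-directed adaptive behaviour in miniature: the goal is to stock
    enough ice creams to meet demand; the adaptation is updating an internal
    demand estimate after each observed outcome."""

    def __init__(self, initial_estimate=50, learning_rate=0.3):
        self.estimate = initial_estimate  # internal model of daily demand
        self.learning_rate = learning_rate

    def decide(self):
        # Goal-directed: act on the current internal model.
        return round(self.estimate)

    def learn(self, observed_demand):
        # Adaptive: move the internal model toward what actually happened.
        self.estimate += self.learning_rate * (observed_demand - self.estimate)

stocker = AdaptiveStocker()
for day in range(30):
    stocked = stocker.decide()
    demand = random.gauss(80, 5)  # true demand the model has to discover
    stocker.learn(demand)

print(round(stocker.estimate))  # the estimate has moved from 50 toward ~80
```

A system that only ran `decide()` with a fixed estimate would, by the definition above, be a mere algorithm; the `learn()` step is what makes the behaviour adaptive.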
The true paradigm of AI (the second definition of AI) is systems that can learn and adapt themselves without the aid of a human. Adaptability is synonymous with intelligence.
In fact, most companies don't have Machine Learning problems; they have optimisation problems. Optimisation is the process of allocating resources to achieve an objective, subject to some constraints. Optimisation problems are exceptionally hard to solve. For example: how should I route my vehicles to minimise travel time? How do I allocate staff to maximise utilisation? How do I spend marketing money to maximise impact? How do I allocate sales staff to opportunities to maximise yield? There are only a handful of people across the world who are good at solving problems like this with AI.
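The last of those examples can be sketched in a few lines. This is a toy version (the yield figures are invented) of assigning sales staff to opportunities one-to-one to maximise total yield; it simply tries every assignment, which shows why real optimisation is hard: with n staff there are n! assignments, so brute force collapses long before business-sized problems.

```python
from itertools import permutations

# Hypothetical yields: yield_matrix[s][o] is the expected yield
# if salesperson s is assigned to opportunity o.
yield_matrix = [
    [7, 5, 3],
    [6, 8, 4],
    [5, 9, 6],
]

def best_assignment(yields):
    """Try every one-to-one assignment of salespeople to opportunities
    and return the best one with its total yield."""
    n = len(yields)
    best, best_total = None, float("-inf")
    for perm in permutations(range(n)):
        total = sum(yields[s][o] for s, o in enumerate(perm))
        if total > best_total:
            best, best_total = perm, total
    return best, best_total

assignment, total = best_assignment(yield_matrix)
print(assignment, total)  # -> (0, 1, 2) 21
```

For this particular problem structure there are efficient exact algorithms (the assignment problem is solvable in polynomial time), but most business optimisation problems, like vehicle routing, are not so lucky, which is why good solvers and the people who build them are scarce.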
For millennia, philosophers have debated how society should be structured and what it means to live a 'good' life. This is an exciting time in human history because designing AI systems is forcing us to collectively agree what the 'right' thing to do is in certain situations. We have to agree on what humanity's objective (or goal) is, which means questioning the purpose of our organisations and the socio-political and economic structures of the planet. As our environments start intelligently interacting with us, we're giving them the power to create and destroy. If you're building algorithms now that are making decisions in people's lives, then you need to be able to explain how those algorithms are making those decisions, which is extremely hard. We have to embed ethical behaviours into these systems, so we now have to agree on what those ethical behaviours should be.