If popular media is to be believed, Artificial Intelligence (AI) is everywhere and it’s either going to solve all our problems or robots are going to take over the world and we’re going to lose our jobs. AI is the new oil, or the new electricity.
In the enterprise market, I’d like to think we are slightly more used to navigating the sweeping predictions and assumptions that come with emerging technology hype. Or are we?
At the beginning of 2019, there was a lot of chatter about enterprises finally being ready to make the move to deliver on the promise of machine learning (ML). However, what we actually saw, again, was enterprises struggling to get into production with ML and other types of AI.
At DiUS, for the past three to four years, we have seen enterprise demand in two main areas:
- conversational AI—voice interaction and chatbots—to improve customer engagement and loyalty, and
- computer vision models to reinvent traditional processes and augment human decision-making processes—freeing workers to be more creative and deliver a more efficient and accurate outcome.
It was only at the end of last year that we started delivering ML projects based on recommendation algorithms and sentiment analysis – approaches we'd been talking to clients about for a long time.
And finally—and most excitingly—we’ve also started seeing, and working on, ML-powered products that were not feasible before. We know that the future has AI / UX at its centre, as it’s this combination that will drive those massive outcomes in how organisations interact with their customers. And that’s what DiUS’ AI Practice is focused on.
It’s definitely an exciting period, with technology advancements happening at rapid pace. Whether you’re already using ML in your enterprise or starting a new project, there’s a lot to consider.
We asked DiUS tech and UX experts Tom Wall, Gerd Wittchen, Elliott Murray, Sadia Mir, Duy-Tin Truong and Paul Marsh where they see ML going in 2020.
Tom Wall, Experience Designer
Our lives are already enhanced by ML in small hidden ways, and 2020 will see those enhancements grow – saving repetitive labour, correcting our clumsy human mistakes, optimising our lives to be greener, safer and faster, and personalising services at a large scale.
ML is getting creative – 2019 was a good year for uncanny text-generating algorithms, with GPT-2 giving its first interview to The Economist, AI-written ads outperforming human copywriters, and hilariously surreal Batman scripts. Spammy bot-generated content is already flooding the internet, but risk-taking garage geniuses trying to make a buck with new tools like deepfakes and GPT might just create something new and unexpected. ML has escaped the academic bottle; it doesn't always do what we expect, and things will get weird.
ML is scaling – we'll see new businesses built with data at their core, using ML to scale, unrestricted by legacy operating constraints and evolving faster than traditional enterprise. The Chinese Alipay spin-off Ant Financial demonstrated this, 'servicing 10x the customers with 1/10th the employees needed', with algorithms approving loans, providing financial insights and tailoring services dynamically for customers.
I'm hoping for a year of creativity, human enhancement and optimisation – but I think we'll see another year of disruption and a string of unintended consequences. Human-centred successes will elevate us, but failures will continue to highlight issues of privacy and ownership of data, and the need to understand the human impact when things go wrong.
As with all technology, there is a double edge – a young Chinese couple instantly approved for a home loan probably don't care how the decision is made, but as we've seen, a struggling unemployed Centrelink recipient mistakenly singled out by a system can bear the brunt of its consequences.
AI is misunderstood by the general public, and big tech is building trust with small, useful, discreet ML services. But as the data gets bigger and the models more sophisticated, we should probably be ready for things to get weirder and more unexpected before they get better – this is still all new.
Gerd Wittchen, ML Specialist
I believe we will see a growing split between two areas of AI. There will be increasing doubt, suspicion and awareness of AI's limitations—and of the businesses that promised magical solutions. On the other side, I predict lots of interest in businesses that focus on achievable goals and have actually delivered ML-powered solutions, which will grow stronger with carefully managed iteration.
We will see a move towards 'explainable AI' to provide more transparency, accountability, and reproducibility of AI models and techniques. I also expect to see a move towards automation of common data analytics and data science tasks. Data scientists and data engineers are a scarce resource, and the development of tools and platforms will allow businesses to scale their AI capability.
With the recent advancements in natural language processing, more advanced word and sentence embeddings are expected to enable companies to scale and transform the customer experience, while opening up a 24/7 pathway to the customer.
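To illustrate the core idea behind sentence embeddings, here is a minimal sketch using toy hand-made vectors (a real system would use a trained sentence encoder producing vectors with hundreds of dimensions; the utterances and numbers below are invented for illustration). Semantically similar customer utterances end up close together, so they can be matched by cosine similarity:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional "embeddings" — stand-ins for the output of a
# real sentence-embedding model.
embeddings = {
    "where is my order": [0.9, 0.1, 0.0, 0.2],
    "track my delivery": [0.8, 0.2, 0.1, 0.3],
    "cancel my account": [0.1, 0.9, 0.4, 0.0],
}

# Find the utterance closest in meaning to the customer's query.
query = embeddings["where is my order"]
best = max(
    (s for s in embeddings if s != "where is my order"),
    key=lambda s: cosine_similarity(query, embeddings[s]),
)
print(best)  # → track my delivery
```

This nearest-neighbour matching over embeddings is the building block behind chatbot intent matching and semantic search over support content.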
Elliott Murray, Head of Tech
I think we’ll start seeing progress to making ML accessible to a wider audience without necessarily needing as many highly specialised individuals. Demand is outstripping supply and that will continue, so making it easier to use these technologies is essential for adoption.
Managing data and having a clear and consistent path to production are still the biggest blockers for most organisations getting AI solutions to a real outcome. We’ve gone through the initial hype/PoC cycle and now it is about people finding how to apply these things in their own context.
Two ways in which I think this is already happening are ML as a service, and better tooling from big vendors that allows more people to do ML. Big cloud providers have started offering vertical solutions for different industries, a sign of the first. And on the second, technologies such as Kubeflow and Airflow – while still immature – are first steps towards opinionated ways to manage the ML lifecycle.
From an algorithmic point of view, it will be interesting to see people start applying reinforcement learning and generative adversarial networks. I can see these driving some interesting outcomes for online services with large customer bases, such as retail.
Sadia Mir, Experience Designer
After the widely publicised scrutiny and government inquiries into data privacy concerns over algorithms deployed by tech titans such as Facebook, Google and Apple, I see AI practitioners in 2020 and beyond shifting from accuracy as the central theme for success to trust as an equally valued metric. Actively working towards this, Google has released several tools that help ML practitioners bring some transparency to the 'black box' of neural networks. One such technique is using Activation Atlases to explore neural network behaviour, which can aid the interpretability of machine learning models.
Furthermore, I believe the industry will push towards even more ethically conscious AI. At NeurIPS last year, a panel of industry leaders discussed AI for climate change. During the session, Jeff Dean, Google's AI chief, spoke about his desire for the AI industry to strive to become a zero-carbon industry. This is a valid concern: University of Massachusetts researchers found that training even a single AI model can generate as much of a carbon footprint as five cars produce over their lifetimes. With the focus on faster-training neural networks, and more success with shifting to smaller data sets, I predict there will be significant advances in more energy-efficient ML.
Finally, my hope is that this coming decade sees greater learning and adoption of AI/ML across non-engineering and engineering disciplines, which will further broaden diversity within the field and bring about much needed variety in perspectives.
Duy-Tin Truong, ML Specialist
I hope to see smaller and more efficient DL models, and new ML approaches that allow training models with a fraction of the data that is currently required. Given the current investment of giant companies like Google, Facebook and Nvidia in model optimisation and specialised hardware, I expect ML models to run faster on CPUs and edge devices.
Finally, I am especially interested in neural-symbolic AI, which is arguably the closest approach to human-like reasoning. However, the field is still young, so we may not see its applications soon.
Paul Marsh, IoT Specialist
Cloud-based ML has its place, but I’m hoping we get closer to ML on the edge in 2020. IoT and ML are complementary systems. IoT provides vast amounts of data to ML systems to create more accurate models and ML provides the intelligence and adaptability that would otherwise be labour intensive.
Take the Tesla car, for example, which constantly uploads environment data and user actions from cars on the road to provide a better automated driving experience. It's a perfect example of an IoT system using ML to interpret streaming data, such as video, without human observation, and acting on those interpretations.
But isn't that rather a lot of network traffic for doing inference every time? 5G, I hear you say? As IoT devices grow into the thousands and beyond, the bandwidth for streaming data in and processing it through an ML engine in the cloud becomes expensive, and sometimes simply unavailable. And shifting large amounts of data to the cloud for processing can introduce lag that hurts time-critical applications, like applying the brakes to stop a car.
ML models still need to be trained in the cloud, but inference is much more feasible to run at the edge, or even on the IoT device itself. The need for ever more bandwidth and central processing is reduced, as each edge device handles the incoming data from its own subset of IoT devices. Data only needs to move to or from the cloud when, as with Tesla, an anomaly occurs and new models need to be created to handle it.
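That split of responsibilities can be sketched in a few lines. In this toy example the on-device model is a stand-in (the function names, sensor readings, and the 0.8 threshold are all invented for illustration — a real deployment would run a compiled or quantised model on the device): every reading is handled locally, and only the anomalous ones are queued for upload to the cloud, where they can inform retraining.

```python
def edge_inference(reading):
    """Stand-in for an on-device model: returns an anomaly score.
    In a real system this would be a local model invocation."""
    return reading["anomaly_score"]

def process_at_edge(readings, threshold=0.8):
    """Handle every reading locally; queue only anomalies for the cloud."""
    upload_queue = []
    for reading in readings:
        score = edge_inference(reading)
        if score >= threshold:
            # Anomaly: send to the cloud so new models can be trained.
            upload_queue.append(reading)
        # Otherwise: act locally — nothing leaves the device.
    return upload_queue

readings = [
    {"sensor": "cam-1", "anomaly_score": 0.1},
    {"sensor": "cam-1", "anomaly_score": 0.95},
    {"sensor": "cam-2", "anomaly_score": 0.3},
]
uploads = process_at_edge(readings)
print(uploads)  # only the 0.95 reading is queued for the cloud
```

The point of the pattern is in the numbers: three readings are processed, but only one crosses the network, which is how edge inference keeps bandwidth and latency in check as device fleets scale.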