By David J. Blumberg
It’s no surprise that AI is experiencing (another) hype cycle. Decades after the field’s founding in the mid-1950s, AI is enjoying a rebirth – this time with the promise of improving the way we work, live and play.
Certainly, businesses are adopting AI with gusto. Recent data shows that global enterprise use of AI has grown 270 percent over the past four years. However, consumers remain cautious or even fearful about its use, doubtful of its value and unsure of its relevance to their lives.
Why are consumers generally fearful of AI? It’s primarily because much of their knowledge is gained through popular movies that depict a sensational, dystopian future – one where jobs are lost and uncontrollable machines take over. Further, consumers often conflate cyber hacking with the risks of using AI, fueling fear that their personal data will end up in the wrong hands. And lastly, consumers simply gravitate toward the familiarity of human interaction rather than interaction with software they don’t understand and believe they can’t control.
The lack of education about AI and its practical, real-world uses has also contributed to this cultural fear. In fact, only 26 percent of consumers think they use AI on a daily basis. However, nearly half of companies already use AI in their customer-facing applications.
If AI is as pervasive as the data suggests, it’s safe to say that most consumers are interacting with AI every day through their smartphones, search engines, voice assistants, banking apps or one of the many other functions where the technology has already been deployed.
Where do we go from here?
How can businesses – both startups and large enterprises – combat this consumer reluctance and ensure that AI reaches its full potential in the near future?
The first step is to understand what is driving the fear. What is causing people to believe that AI will harm rather than improve their lives?
The answer is found all around us, every day. As mentioned, pop culture fundamentally misrepresents AI and fuels this fear. Over half (58 percent) of consumers get their information on AI from movies, TV or social media. With this in mind, it’s no surprise there is so much misinformation, an emphasis on potential negatives over positives and a consequent lack of understanding among consumers about how they interact with and benefit from AI on a daily basis.
There is also a deep, underlying fear around data privacy. Consumers have the perception that AI is directly tied to sharing personal information, and, given the frequent headlines of major data breaches in the news, consumers lack trust in the use of AI when it comes to their sensitive information.
Highlighting AI for good
It’s both the public duty and the self-interest of technology companies to increase education about AI and transparency around it in order to boost comprehension, promote rational discourse and, hopefully, change the narrative.
We must share examples of how AI is being used for good – whether it’s preventing a heart attack, predicting a natural disaster or stopping a terrorist attack – in order to truly change the fearful perception of AI in the minds of consumers.
However, while these life-saving use cases will go a long way toward shifting the overall cultural perspective, it’s the smaller-scale interactions that will soothe consumers’ preconceived fears and demonstrate the everyday benefits.
For example, LinkedIn might suggest a job opening that proves a perfect fit. Or Google Maps calculates a faster route on the daily commute. This is AI at work. It tames the firehose of data, paring it down to a narrow solution set – a job consumers could not do for themselves, and certainly not in real time. AI is truly improving lives.
We must also address the real privacy concerns. AI requires voluminous and sometimes sensitive data, opening up the potential for cyber hacking and heightening consumer fears. But in reality, AI can actually make data more secure. With the ability to analyze large volumes of data in near real time, there is less need to store sensitive data. Moreover, AI can recognize patterns of malicious behavior to detect threats and mitigate them before they materialize.
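To make the pattern-recognition point concrete, here is a deliberately minimal sketch – hypothetical data, a simple statistical rule, not any particular vendor’s method – of how a system might flag an unusual spike in failed logins, the kind of low-level signal real security products build far more sophisticated models on:

```python
import statistics

def flag_anomalies(counts, threshold=2.5):
    """Return indices of values more than `threshold` population
    standard deviations from the mean (a basic z-score test)."""
    mean = statistics.mean(counts)
    stdev = statistics.pstdev(counts)
    if stdev == 0:  # all values identical: nothing stands out
        return []
    return [i for i, c in enumerate(counts)
            if abs(c - mean) / stdev > threshold]

# Hypothetical hourly login-failure counts; the spike at index 5
# stands out against the normal background of 2-4 failures.
hourly_failures = [3, 2, 4, 3, 2, 250, 3, 4, 2, 3]
print(flag_anomalies(hourly_failures))  # → [5]
```

A real deployment would learn what “normal” looks like across many signals at once, but even this toy version shows the principle: the machine watches volumes of data no human could monitor and surfaces only the outliers.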
Regulation of AI is in its early days but will prove more difficult than other technologies due to its proprietary value, diverse nature and the rapidly evolving power of its algorithms. It’s important for technology companies to work with the proper authorities to create systems that are flexible and that reward innovation while preventing societal harm.
Increased awareness, increased hope
AI doesn’t need to be a black box. We should promote models that are transparent, explainable and well documented – making decisions interpretable, limiting bias and remaining open to external vetting. This will result in more accountability on the part of businesses, technologists and users, while giving other stakeholders an opportunity to understand AI-driven decision making.
Rather than relegating AI to research labs and the back office, let’s expose the technology for what it is and what it can do. Once consumers gain a better understanding, AI will be more readily, widely and rapidly adopted, unleashing the vast potential of this groundbreaking technology.
David J. Blumberg is the founder and managing partner of Blumberg Capital. Follow him on Twitter at @davidblumberg