Can we futureproof AI?

by Annefloor Robijn
Azalea Roseli, Ellen Söderberg, Stan van Wijk
22-05-2023

This article in 1 minute: 

  • You do not have to look far to see how humans have wholeheartedly incorporated AI, a jack of all trades, into our daily lives. At its current rate of development, AI will radically transform industries across the board.
  • Yet, despite AI’s proliferation in our homes, schools, and businesses, it has been unable to outrun the debate on ethics and morality left in its wake. Alarm bells have been raised on the risks posed by using AI without proper regulation and oversight. 
  • As we continue to push the boundaries of what we can use AI to achieve, we must consider the ethical implications of this development. By doing so, we can ensure that the capabilities of AI are harnessed for the benefit of society as a whole and not only for a select few. 
  • How can we use this technology to improve the world and create a positive future for all? The possibilities are endless, and it's up to all of us to take action and shape the future we want to see.

If you were to enter the prompt “Will AI take over the world?” into OpenAI’s ChatGPT, it would assure you that AI will do no such thing. You are reminded that, despite its friendly demeanor, ChatGPT is an artificial intelligence system created and programmed by humans for human use. ChatGPT ends its answer with a particularly rosy vision of the future, one where AI works alongside humans to “enhance our abilities and improve our quality of life”. Given that a super-intelligent AI overthrowing human society is a common theme in pop culture as well as in debates on artificial intelligence, OpenAI’s programmers will certainly have prepared for this line of questioning. It seems we have skipped over the flying cars and extraterrestrial colonization tropes of science fiction to pursue the artificial intelligence plotline, but is our story a utopia or a dystopia? 

From our traffic lights and email inboxes to our vacuums, you do not have to look far to see how wholeheartedly humans have incorporated AI into daily life. The speed and scale of AI adoption are not to be underestimated either – the chatbot ChatGPT reached 1 million users within just five days of being made accessible to the public. For comparison, it took Netflix 3.5 years, Facebook 5 months, and Instagram 1.5 months to reach the same milestone. Yet, despite AI’s proliferation in our homes, schools, and businesses, it has not been able to outrun the debate on ethics and morality left in its wake. Alarm bells have been raised, by the very people working on AI themselves, about the risks posed by using artificial intelligence without proper regulation and oversight. In March 2023, over 1,000 technology leaders and researchers penned an open letter calling for a moratorium on AI development, with the ominous warning that humans are racing to create ever more powerful digital minds which cannot be reliably controlled. 

It seems as though the timeless question of whether technology is a force for good or evil has found a new battleground in the arena of artificial intelligence. But before delving into arguments of morality and ethics, it is worth considering the scale, scope, and complexity of AI. What are we getting ourselves into?

AI as fuel for transformation

The adoption of AI by businesses has increased by roughly 25% per year in recent years, and this trend is expected to continue. As a result, AI is expected to have a significant impact on the global economy, contributing up to $15.7 trillion by 2030. So, what is coming our way, and how will it impact our lives? AI is a jack of all trades, and we are only at the beginning stages of understanding how it will radically transform industries across the board. It is common knowledge by now that AI can speed up routine tasks, from creating presentations to summarizing long documents. However, the benefits of AI extend beyond merely saving time. 

In a study of AI and customer support calls, MIT and Stanford researchers found that agents using an AI assistant resolved 14% more issues per hour on average, and that less-experienced workers saw a greater increase in productivity than their more-experienced counterparts. Rather than merely making a task easier, AI can provide extra information and suggestions that help those in junior positions close the knowledge gap when entering a new role. 

Beyond its economic impact, another key explanation for AI’s revolutionary effect is its low barrier to entry. Since all you need is a device and an adequate internet connection, more than 60% of the world’s population can access AI. In terms of educational benefits, AI can act as a virtual tutor or encyclopedia for students with less access to resources. However, this is not to say that AI will be replacing teachers in schools anytime soon. On the contrary, some teachers are integrating AI into their lesson plans as a way to prepare their students for working and living in a rapidly developing world. For every student using ChatGPT to cut corners and write their essay for them, there are a dozen ways the same software can be used to enhance learning and deepen critical thinking skills. 

We are only beginning to scratch the surface of what a future intertwined with AI can look like, but the road ahead is not entirely smooth. 

Who is at the steering wheel?

While navigating the road of AI advancement, it is easy to get caught up in the exciting promises of this technology, but it is just as important to recognize its far-reaching risks. The prevailing surge in AI development may lead to the belief that ‘newer is superior’, yet it raises the question of whether this rapid pace of progress is truly advantageous, and if so, for whom. According to Fei-Fei Li, Co-Director of the Stanford Institute for Human-Centered AI, the potential for AI to cause tremendous harm is a real concern, especially if it is developed without careful consideration of its ethical implications. 

Some of the key ethical concerns regarding AI include algorithmic bias, deepfake misuse, content inaccuracy, and trustworthiness. Whistle-blowers at Google and Microsoft have reported that in their eagerness to win the so-called ‘AI race’, tech giants have chosen to gloss over the ethical considerations raised by the use of artificial intelligence. For example, OpenAI’s ChatGPT and other generative tools are considered a significant risk for spreading misinformation among their users. Several studies have already shown that ChatGPT can be used to produce natural-sounding and persuasive material that repeats conspiracy theories and misleading narratives. Recently, the CEO of OpenAI, Sam Altman, called for the United States to form a new agency with the mandate of licensing AI companies such as his own, amid concerns about how AI can undermine democratic practices. Across the pond, the European Parliament approved the EU Artificial Intelligence Act, pushing it even closer to becoming law. The landmark act was proposed in April 2021 and drew from the UNESCO framework on AI and ethics, which champions principles such as data privacy and protection, transparency, non-discrimination, and multi-stakeholder collaboration. Experts have noted that the EU AI Act could serve as a ‘gold standard’ for countries looking to develop their own AI regulations. 

As the debate on AI’s benefits and drawbacks continues to unfold across all sectors, it is worth remembering that this conversation about whether technology is ‘good’ or ‘bad’ is nothing new. For as long as human creativity and ingenuity have pushed the limits of science and technology, they have been matched with calls for caution and reflection. The conversation on AI is the latest iteration of one that humans have engaged in for a long time: should we attribute a moral character to the tools we create, or should we focus on the intentions of those who wield them?

How else can we think about AI beyond the binary of good or evil?

The traffic light model

As an ever-present yet rarely noticed staple of our daily lives, even the humble traffic light has received some futuristic upgrades with the addition of artificial intelligence. Smart traffic lights are part of an intelligent transportation system that uses technology, including AI, to make our roads more efficient, safer, and, well, smarter. Inspired by this useful tool, we can adopt a traffic light model to characterize the different approaches that actors from business, government, and NGOs take in utilizing AI thoughtfully and impactfully. 

Red light

  • You are: Cautious and meticulous, and people can rely on you to get the job done right the first time.
  • Keyword: Safety
  • Views: Wants precautionary frameworks for regulating AI to ensure safety in the long term.
  • As a driver: You always make safety checks on the car before beginning a journey.

Imagine a time in the not-so-distant future when the incorporation of AI technology into businesses is so seamless that the very debates we are having now seem silly in retrospect. We have specialized AI Committees representing the perspectives of multiple stakeholders across society, expert committees to advise and regulate potential risks, and a united vision of how to use AI in a safe and responsible manner. Lengthy and fruitful discussions have produced widely adopted frameworks on how to best use AI in daily operations, which promote efficiency while avoiding the worst excesses of AI. With business, society, and AI aligned along the same values, what were we ever worried about? 

Although this might seem like wishful fantasy, some businesses are already forging a path toward an ethical, AI-friendly business environment. The trick, it seems, is to put safeguards for using AI responsibly in place before getting into any accidents. In other words, safety first! For example, companies like IBM have created new positions such as ‘Global Chief Data Ethics Officer’, whose remit is to project potential AI-related risk events into the future and to integrate pre-emptive measures into company systems. A similar position is the Head of Ethical AI at Google, who oversees the ethical implications of the company’s ongoing AI initiatives. Beyond individual roles, AI ethics boards also regularly assess the risks posed by AI while enlisting the expertise of additional specialists. 

Businesses can extend this proactive approach toward AI regulation by integrating ethical considerations into their core products and management practices. Such measures can include bias testing of data pools, monitoring where data comes from, and staying vigilant about wider risks around copyright, intellectual property, data privacy, and disinformation. As the European Union’s AI Act makes clear, policymakers are still at the early stages of tackling these challenges at scale, and doing so properly will take time, but businesses do not need to play catch-up. Rather than waiting for rules from the top, businesses can spearhead their own initiatives to internally regulate their use of AI. While implementing these structures may be costly upfront, the potential risks of neglecting them are far greater.

Yellow light

  • You are: Optimistic yet pragmatic, and you are often the mediator or diplomat among friends.
  • Keyword: Flexibility
  • Views: Interested in the potential of AI, but as a tool for promoting human collaboration rather than a silver bullet solution.
  • As a driver: You know where your destination is, but you don’t mind using a GPS to find an alternative shortcut.

As far as traffic signals go, the yellow light often gets a bad reputation - we've all heard the joke that where green means go, yellow means go faster! However, the yellow light can also be an important reminder to assess the situation and proceed with caution. In this case, the uncertainty of AI represents as much opportunity as it does risk. How can the risks of AI usage be managed so that its rewards can be reaped?

A possible solution is to highlight the collaborative potential of AI between different actors. Instead of viewing AI as a replacement for the human brain, we can see it as a catalyst that brings minds together, fostering collective intelligence and imagination. AI can play a transformative role in driving collaboration to tackle the world's most pressing issues, such as climate change, poverty, hunger, and healthcare. By injecting a fresh perspective into these wicked problems, NGOs, governments, and businesses can use AI to address societal challenges more efficiently. 

In 2016, the visionary ‘AI for Good’ movement was established to harness the power of AI to address some of the world's biggest challenges. With governments, industry leaders, and NGOs taking part, this movement is a powerful example of what we can accomplish by working together with AI. And it is already happening! Take the Philippines, where policymakers have used AI to locate the areas where poverty is most prevalent. Armed with this data, targeted interventions for specific contexts can be developed to make a tangible difference in people's lives. It is an exciting time to be alive, and with AI for Good leading the way, there is no telling what AI can help achieve in alleviating the worst of the world's injustices. 

However, as with any ambitious movement, AI for Good faces its challenges. To achieve its full potential, AI should be applied to issues that demand immediate action - but who gets to decide how the world's most pressing problems are prioritized? Perhaps the biggest hindrance to collaboration is striking a delicate balance between the free flow of data and the protection of sensitive information. NGOs, in particular, may have reservations about sharing data with industry partners or other organizations they do not yet fully trust. This is where the yellow light's reminder for thoughtful decision-making and willingness to adjust course when necessary comes into play. While obstacles may arise, they should be taken as opportunities for growth rather than deterrents on the greater collaborative journey.

NGOs play an essential role in shaping the future of AI for good, and their efforts will be critical to achieving meaningful social impact, but they cannot stand alone. As AI-powered tools spread across industries, private actors must ask themselves whether they will use this technology for society's betterment or for personal gain. The decisions we make today will shape our society's future, and it is up to us to ensure that AI is used to benefit all members of our global community, not just a select few.

Green light

  • You are: Always ahead of the curve, a trailblazer with larger-than-life goals.
  • Keyword: Innovation
  • Views: Sees AI as a second brain that can boost our strengths and reduce our blind spots.
  • As a driver: When using AI to accelerate social good, you see speed as an asset, given the urgency of such issues.

The green light metaphor extends beyond simply ‘go’ to symbolize how businesses can leverage AI to gain a competitive edge while creating positive externalities for the environment and society as a whole.  With more and more companies incorporating ESG and sustainability measures into their practices, adding AI into the mix can produce a win-win situation for all actors. Companies that invest in AI have a chance to expand their role as innovators, not only in terms of technology but also in social entrepreneurship, by applying AI to offer unique solutions to societal issues. 

It is not an overstatement to say that integrating AI in fields where technological improvements are needed can be game-changing. Waste management is one such field: AI can be used to identify and sort recyclable materials, significantly reducing the amount of waste sent to landfills. AMP Robotics is one of the companies at the forefront of this technology. By accurately and efficiently identifying and categorizing different types of recyclable materials, AMP Robotics is helping to reduce contamination in recycling streams and increase the overall recovery rate of recyclables. Companies that invest in technology like AMP Robotics’ can create significant environmental benefits and move us closer to a more sustainable future without compromising on efficiency. 

AMP Robotics offers benefits that go beyond just helping the environment. By reducing the amount of waste sent to landfills, companies can ultimately save on landfill fees and transportation costs, translating into significant financial advantages. Moreover, this technology can increase the efficiency of the entire waste management process, leading to a reduction in labor costs and improving overall operational efficiency. Additionally, with environmental regulations becoming more stringent, companies are required to reduce waste production. AI technology, such as that provided by AMP Robotics, can assist in complying with these measures and avoiding penalties.

While investing in AI can be revolutionary for businesses, it is not without challenges. One of the biggest obstacles is the cost of investment: implementing AI requires substantial spending on technology, infrastructure, and personnel, including highly skilled professionals who can develop, implement, and maintain AI systems. That is why industry experts advise companies to develop a comprehensive AI strategy that accounts for their goals, available resources, and ethical obligations. By doing so, businesses can make informed decisions about investing in AI, maximizing its benefits while minimizing its costs and potential risks. It is a balancing act that can yield significant financial and strategic advantages for those who get it right. 

Whether a company is socially conscious, profit-driven, or both, investing in AI could be the key to success. The green light is a call to action and an invitation to continue innovating beyond the technology of AI to imagine disruptive solutions to our biggest challenges. 

In the driver’s seat

In 1920, Czech writer Karel Čapek published the play R.U.R., short for ‘Rossum’s Universal Robots’. The plot of R.U.R. is straightforward, frightening, and by now very familiar: man invents robot, and robot overthrows man to our demise. Besides kicking off one of the most popular science fiction tropes of all time, Čapek also introduced the very word ‘robot’ into the English language, its root lying in the Czech word robota, which translates directly to corvée or serf labor.

Some hundred years later, our relationship with robots looks very different. Far from this etymology of hard work and drudgery, the robots we use on a daily basis are manufactured for intelligence and deep learning. How can we begin to imagine what this brave new world of man and machine will look like a hundred years from now? The answer may not be as far-fetched as it seems.

The fact of the matter is that artificial intelligence is produced by humans for other humans to use. Furthermore, given the extent to which AI has already been integrated into our lives, it is hard to imagine a future where artificial intelligence does not play a significant role. However, just because AI is here to stay does not mean that we cannot harness it as a force for societal betterment. Ultimately, the debate on the ethics and morality of AI is a mirror that forces us to ask a hard question: what is the purpose behind our quest for ever-further technological development? 

The traffic light model has sketched out how different levels of receptivity toward artificial intelligence can translate into real-world applications of AI in building a better future. The key takeaway of this approach is that there is no ‘right’ or ‘wrong’ color; a traffic light needs all three colors to function properly and keep our roads safe. A traffic light with only green or only red lights would not be useful to anybody. Whether you advocate for AI regulation, see AI as a catalyst for collaboration, or fully embrace the fast lane of AI acceleration, the variety of these approaches is a testament to the transformative power of AI. 

Creating a positive future with AI starts with acknowledging that responsibility for AI policy does not rest solely with businesses or the people who build the technology. The perspectives of governments, policymakers, experts, and everyday citizens are critical in building a regulatory framework for AI usage that promotes responsible use, transparency, and accountability. At a time when society faces many crossroads, it is up to us to decide whether AI will be a tool or a hindrance for the future. 

So what is next? Will our descendants, a hundred years from now, look back at this time of unfettered AI development as the precursor to a dystopian future where robots rule the world? Or will AI revolutionize lives for the better, making the benefits of technological progress more accessible to everyone? Either way, we can take the unprecedented development of artificial intelligence as an invitation to invent radical ideas of how to use this technology for good. Perhaps it is time to move beyond red, yellow, and green and embrace a new spectrum of possibilities. The answer lies with you: how will you make AI futureproof? 
