AI in 2024: Three Scholars on What to Expect
by Anjana Susarla, Casey Fiesler and Kentaro Toyama, The Conversation

2023 was a turning point in the evolution of artificial intelligence and its role in society. The year saw the emergence of generative AI, which moved the technology from the shadows to the center of the public imagination, and boardroom drama at an AI startup that dominated the news cycle for several days.
It also prompted the Biden administration to issue an executive order and the European Union to pass legislation aimed at regulating AI, moves perhaps best understood as attempts to bridle a horse that is already galloping along. We assembled a panel of AI scholars to describe the challenges AI developers, regulators, and ordinary people are likely to face in 2024, and to share their hopes and recommendations.
Casey Fiesler, Associate Professor of Information Science, University of Colorado Boulder
2023 was the year of AI hype. Whether the narrative was that AI would save the world or destroy it, visions of what AI might someday do seemed to overshadow what it can actually do now. I believe that anticipating future harms is an essential part of the ethical work of technology, but getting caught up in the hype risks creating a vision of AI that looks like magic rather than a technology that can still be shaped by deliberate choices. And taking control of those choices requires a better understanding of the technology.
One of the big AI debates of 2023 concerned the role of chatbots like ChatGPT in education. This time last year, the most prominent headlines focused on how students might use the technology to cheat and how educators were scrambling to stop them, often in ways that do more harm than good.
But as the year went on, there was a growing recognition that failing to teach students about AI might put them at a disadvantage, and many schools rescinded their bans. I don't think education should be overhauled to put AI at the center of everything. But if students don't learn how AI works, they won't understand its limitations, and therefore how it is useful and appropriate to use and how it isn't. This isn't just true for students. The more people understand how AI works, the more empowered they are to use it and to critique it.
So my prediction, or perhaps my hope, for 2024 is that there will be a huge push to learn. In 1966, Joseph Weizenbaum, creator of the ELIZA chatbot, wrote that while the machine was "enough to astonish even the most experienced observer," the spell would be broken if "its inner workings were explained in plain language." The challenge with generative AI is that, compared with ELIZA's very simple pattern-matching and substitution methodology, it is much harder to find language "plain enough" to dispel the magic.
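To make Weizenbaum's point concrete, here is a minimal sketch of ELIZA-style pattern matching and substitution in Python. The rules are invented for illustration rather than taken from Weizenbaum's original script, but the mechanism, match a template and echo back a transformed fragment, is the whole trick.

```python
import re

# A minimal sketch of ELIZA-style pattern matching and substitution.
# These rules are invented for illustration; they are not Weizenbaum's
# original DOCTOR script.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

def respond(utterance: str) -> str:
    """Return a canned response built from the first matching pattern."""
    for pattern, template in RULES:
        match = re.match(pattern, utterance, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # default when nothing matches

print(respond("I am feeling anxious"))  # How long have you been feeling anxious?
print(respond("I need a vacation"))     # Why do you need a vacation?
```

A few dozen rules like these were enough to astonish observers in 1966; there is no comparably plain description of what a billion-parameter model does with the same input.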
I think this can be done. I hope that universities rushing to hire technical AI experts put just as much effort into hiring AI ethicists. I hope the media help cut through the hype. I hope everyone reflects on their own uses of this technology and its consequences. And I hope that tech companies listen to informed critiques when considering the choices that will shape the future.

Kentaro Toyama, Professor of Community Information, University of Michigan
In 1970, Marvin Minsky, the artificial intelligence pioneer and neural network skeptic, told Life magazine, "Within three to eight years, we will have machines with the intelligence of an average human." The moment when machines match and then surpass human intelligence has not arrived yet, so it is safe to say that Minsky was off by at least a factor of 10. Making predictions about AI is perilous.
Still, making predictions for a year out doesn't seem quite as risky. What can be expected of AI in 2024? First, the race is on! Progress in AI has been steady since the days of Minsky's prime, but the public release of ChatGPT in 2022 kicked off an all-out competition for profit, glory, and global supremacy. Expect more powerful AI, along with a flood of new AI applications.
The biggest technical question is how quickly and thoroughly AI engineers can address the current Achilles' heel of deep learning: generalized hard reasoning, such as deductive logic. Will quick tweaks to existing neural network algorithms be sufficient, or is a fundamentally different approach needed, as cognitive scientist Gary Marcus suggests? Armies of AI scientists are working on this problem, so expect some headway in 2024.
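For readers unfamiliar with the term, here is a toy illustration of the kind of deductive logic the passage refers to: forward chaining over explicit if-then rules. Symbolic programs like this handle such inference trivially and reliably; the open question is whether neural networks can be made to do the same. The facts and rules below are invented for illustration.

```python
# A toy forward-chaining deducer: apply explicit if-then rules to a set of
# facts until nothing new can be derived. Facts and rules are invented
# for illustration.
facts = {"socrates_is_human"}
rules = [
    ({"socrates_is_human"}, "socrates_is_mortal"),
    ({"socrates_is_mortal"}, "socrates_will_die"),
]

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        # Fire a rule only when all of its premises are already known facts.
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # all three facts, two of them deduced
```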
Meanwhile, new AI applications are likely to bring new problems. You may soon start hearing about AI chatbots and assistants talking to each other, holding entire conversations on your behalf but behind your back. Some of it will go haywire, comically or tragically. Deepfakes, AI-generated images and videos that are difficult to detect, will continue to proliferate despite regulation, causing further harm to individuals and democracies around the world.
And there may be new kinds of AI calamities that would have been unthinkable even five years ago. Speaking of problems, the very people sounding the loudest alarms about AI, such as Elon Musk and Sam Altman, can't seem to stop themselves from building ever more powerful AI. I expect them to keep doing more of the same. They are like arsonists calling in the fire they set themselves, begging the authorities to put it out. Along those lines, what I hope for most in 2024, even if it seems slow in coming, is stronger regulation of AI at both the national and international levels.

Anjana Susarla, Professor of Information Systems, Michigan State University
In the year since ChatGPT's release, the development of generative AI models has continued at a dizzying pace. Unlike ChatGPT a year ago, which took text as input and produced text as output, the new class of generative AI models is trained to be multi-modal, meaning the data used to train them comes not only from text sources such as Wikipedia and Reddit but also from videos on YouTube, songs on Spotify, and other audio and visual information. With the new generation of multi-modal large language models (LLMs) powering these applications, text input can be used to generate not only images and text but also audio and video.
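As one concrete example of the text-to-other-modality generation described above, the open-source Hugging Face diffusers library can turn a text prompt into an image in a few lines. This is a minimal sketch; the checkpoint named here is one commonly used option, not a system the article itself discusses.

```python
# Requires: pip install torch diffusers transformers
import torch
from diffusers import StableDiffusionPipeline

# Load an open-source text-to-image model. The checkpoint name is one
# commonly used example, not a model named in the article.
device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe = pipe.to(device)

# Text in, image out: one of the modality crossings the article describes.
image = pipe("a watercolor painting of a robot reading a newspaper").images[0]
image.save("robot.png")
```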
Companies are racing to develop LLMs that can be deployed on a variety of hardware and in a variety of applications, including running an LLM on your smartphone. The emergence of these lightweight, open-source LLMs could usher in a world of autonomous AI agents, a world that society is not necessarily prepared for.
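For a sense of how lightweight, open-source LLMs run on commodity hardware, here is a minimal sketch using llama-cpp-python, one popular runtime for quantized models. The library choice is my illustration rather than something named in the article, and the model path is a placeholder for any GGUF checkpoint you have downloaded.

```python
# Requires: pip install llama-cpp-python, plus a quantized GGUF model file.
from llama_cpp import Llama

# The model path below is a placeholder for whatever checkpoint you download.
llm = Llama(model_path="./models/llama-2-7b-chat.Q4_K_M.gguf")

output = llm(
    "Q: Name three planets in the solar system. A:",
    max_tokens=32,
    stop=["Q:", "\n"],  # stop generating at the next question or newline
)
print(output["choices"][0]["text"])
```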
These advanced AI capabilities offer immense transformative power in applications ranging from business to precision medicine. My chief concern is that they will not only pose new challenges in distinguishing between human-generated and AI-generated content but also give rise to new kinds of algorithmic harms.
The deluge of synthetic content produced by generative AI could unleash a world in which malicious people and institutions can manufacture synthetic identities and orchestrate misinformation at scale. A flood of AI-generated content built to game algorithmic filters and recommendation engines could soon overwhelm critical functions such as fact-checking, information literacy, and the serendipity provided by search engines, social media platforms, and digital services.
The Federal Trade Commission has warned about the fraud, deception, privacy infringements, and other unfair practices made possible by the ease of creating AI-generated content. While digital platforms such as YouTube have instituted policy guidelines for disclosing AI-generated content, there is a need for greater scrutiny of algorithmic harms from regulators such as the FTC and from lawmakers working on data privacy protections.
A new bipartisan bill introduced in Congress seeks to codify algorithmic literacy as a key part of digital literacy. With AI increasingly intertwined with everything people do, it is clear that the time has come to think about algorithms not merely as pieces of technology, but in light of the contexts in which they operate: people, processes, and society.