On July 6 and 7, the United Nations hosted the sixth annual AI for Good Global Summit. During the panel “The next wave of AI for Good – towards 2030,” generative AI experts discussed the risks the technology poses today, how to educate the next generation about what it can do, and how the global community should come together to solve regulatory and social problems.
Jump to:
- Risks of generative AI include misinformation and unequal access to data
- Preparing the next generation for the world of generative AI
- Developing generative AI safely starts with a community
- How AI intersects with global concerns about distribution of resources
- The AI field needs to ease tension between innovation and regulation
Risks of generative AI include misinformation and unequal access to data
“The biggest near-term risk [of generative AI] is deliberately created misinformation using large language tools to disrupt democracies and markets,” said Gary Marcus, an entrepreneur, former professor of Psychology and Neural Science at New York University and chief executive officer of the newly created Center for Advancement of Trustworthy AI.
Marcus sees some upsides to generative AI as well. Automatic coding can reduce the strain on overworked programmers, he proposed.
Wendell Wallach, the co-director of the AI and Equality project within the Carnegie Council for Ethics and International Affairs, flagged inequality between wealthy northern hemisphere countries and poor southern hemisphere countries (the so-called Global North and Global South) as a problem exacerbated by generative AI. For example, the World Economic Forum published a blog post in January 2023 that notes generative AI is primarily both made and used in the Global North.
Generative AI draws from training data in a variety of languages. However, the languages with the largest number of speakers naturally generate the most data. Therefore, people who speak languages in which a lot of data is produced are more likely to find useful applications for generative AI, Marcus said.
“You have an expansion of inequality because people who operate in languages that are well-resourced and have a lot of money are able to do things people using other languages do not,” he said.
SEE: Generative AI also has artists concerned about copyrighted material. (TechRepublic)
Preparing the next generation for the world of generative AI
Karishma Muthukumar, a cognitive science graduate of the University of California, Irvine and a specialist in using AI to improve healthcare, pointed out that she hears from children who learn about generative AI from their peers or at home, not at school.
She proposed creating a curriculum for teaching the use of artificial intelligence.
“It’s going to require an intergenerational dialogue and to bring together the greatest minds to find a curriculum that really works,” Muthukumar said.
Developing generative AI safely starts with a community
Many panelists spoke about the importance of community and making sure all stakeholders have a voice in the conversation about generative AI. That means “scientists, social scientists, ethicists, people from civil society,” as well as governments and corporations, Marcus said.
“Global platforms like the ITU [International Telecommunication Union, a UN agency] and conferences like this are beginning to make us feel more connected and help AI help humans feel more connected,” Muthukumar said.
“My hope is that part of what’s coming out of this gathering we’ve had over the last few years is a recognition that this is on the table and that recognition passes on to our leaders so they begin to understand this is not one of those issues that we should be ignoring,” Wallach said.
Regarding the ethical issues of using generative AI to solve global problems, Muthukumar proposed that the question opens up other questions. “What is good, and how can we define it? The sustainable development goals of the UN are a great framework and a great starting point to find these sustainable goals and what we can achieve.”
How AI intersects with global concerns about distribution of resources
Wallach pointed out that the massive amounts of money being poured into generative AI companies do not necessarily go toward solving the problems to which the AI for Good summit proposes AI should be put.
“One of the problems with the value structure intrinsic to the digital economy is there’s usually a winner in every field,” he said. “And the capital gains go to those of us who have stocks in those winners. That’s deeply problematic in terms of the distribution of resources to meet sustainable development goals.”
He proposed that companies that develop generative AI and other technological solutions to global problems should also have “some responsibility to ameliorate the downsides, the trade-offs, to the solution [they] are picking.”
The AI field needs to ease tension between innovation and regulation
The United Nations also came under discussion. Wallach noted that while the UN’s efforts to bring stakeholders together to discuss global problems are commendable, the organization has “a mixed reputation” and cannot solve “the cacophony between the nations.”
However, he hopes that bringing the conversation about generative AI and ethics to a wider audience will be beneficial.
What ethical considerations mean in AI can also differ depending on the circumstance. “For instance, the concept of fairness in AI varies greatly based on its application,” said Haniyeh Mahmoudian, global AI ethicist at the AI and machine learning software company DataRobot and a member of the U.S. National AI Advisory Committee, in an email interview with TechRepublic. “When applied to a hiring system, fairness could mean equal representation, whereas in a facial recognition context, fairness might refer to consistent accuracy.”
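To make that distinction concrete, the two notions can be expressed as different measurements. The minimal Python sketch below (using invented toy data and helper names, not code from DataRobot or the panelists) contrasts a selection-rate gap, one common way to quantify equal representation, with a per-group accuracy gap, one way to quantify consistent accuracy.

```python
import numpy as np

def selection_rate_gap(predictions, groups):
    """Equal representation: difference in positive-prediction rates across groups."""
    rates = [predictions[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

def accuracy_gap(predictions, labels, groups):
    """Consistent accuracy: difference in prediction accuracy across groups."""
    accs = [(predictions[groups == g] == labels[groups == g]).mean()
            for g in np.unique(groups)]
    return max(accs) - min(accs)

# Toy hiring example: 1 = candidate advanced, with applicants from groups A and B.
hire_preds = np.array([1, 0, 1, 1, 0, 0, 1, 0])
hire_groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(f"Hiring selection-rate gap: {selection_rate_gap(hire_preds, hire_groups):.2f}")

# Toy facial recognition example: 1 = correct match against known ground truth.
fr_preds = np.array([1, 1, 0, 1, 1, 0, 0, 1])
fr_labels = np.array([1, 1, 1, 1, 1, 1, 1, 1])
fr_groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(f"Face recognition accuracy gap: {accuracy_gap(fr_preds, fr_labels, fr_groups):.2f}")
```

A model can score well on one of these measures and poorly on the other, which is why, as Mahmoudian noted, the appropriate fairness definition depends on the application.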
Marcus sees government regulation as an important part of ensuring a future in which generative AI works for good.
“There’s a tension right now between what’s called fostering innovation and regulation,” he said. “I think it’s a false tension. We can actually foster innovation through regulation that tells Silicon Valley you need to make your AI trustworthy and reliable.”
He compared the generative AI boom to the social media boom, in which companies grew faster than the regulation around them.
“If we play our cards right, we will seize this moment — in individual countries like the U.S., where I’m from, and at the global level — where people realize something needs to be done. If we don’t, we’ll have a year of hand-wringing,” Marcus said.