On Wednesday, the Rutgers Eagleton Institute of Politics hosted Danielle Allen, a professor in the Department of Government at Harvard University, to discuss her research regarding artificial intelligence’s effects on democracy and human existence.
Saladin Ambar, an associate professor in the Department of Political Science and senior scholar at the Center on the American Governor at Eagleton, sat down with Allen to analyze the benefits and detriments of AI and how policymakers should adapt to these rapidly emerging innovations.
"You have to focus on developing tech that is human complementing, not human replacing," Allen said.
During the discussion, Allen said that she, alongside thousands of other technologists, political theorists and ethicists, signed an open letter calling for a pause in AI development by companies and governments worldwide. She said the letter gained the attention of policymakers and pushed them to begin implementing regulations on AI.
Allen said policymakers need to start building frameworks to better regulate and govern technologies that are currently developing with little to no restrictions. She said the White House recently issued an executive order requiring technology firms to register with the government so that there is greater transparency between industry and regulators.
"If you look at a firm like OpenAI, it is literally about 100 people who are determining the design of models that are going to transform every sector of our existence," Allen said.
The discussion shifted toward concern about the effects of AI in university classroom settings. Allen said that technologies like ChatGPT can benefit students when they begin to research or brainstorm, though only if the students remain in control and fact-check what ChatGPT outputs.
Allen, who is also the director of the Allen Lab for Democracy Renovation at Harvard University Kennedy School's Ash Center for Democratic Governance and Innovation, said these new technologies can help structure a more individualized learning pathway than the current standard model.
"The quality of what you get out depends on the quality of what you put in," Allen said.
Ambar asked what the future of humanity will look like under the rising influence of AI. Allen said that current AI models tend toward singularity, while human society is centered on plurality, with diverse ideas and theories that AI cannot comprehend.
She said these new technologies need tools that uphold this plurality in human society in order to be efficient and effective.
"The argument is that the way we protect our humanity is actually by changing our sense of what the goal is when we are developing technology in the first place," Allen said.
The discussion between Allen and Ambar concluded with a question highlighting possible inconsistencies in regulations for new technology that might exist among different countries worldwide. Allen said that there are some parallels among these nations.
She said the U.K. and the European Union have begun diplomatic efforts and share a framework for regulating new AI innovations. She also said the Chinese government innovates technology within a governing framework rather than through private corporations.
The U.S. has pursued its technological advancements outside government control, and there is a need to shift from shareholder to stakeholder capitalism, Allen said. She said AI is heavily centralized in private business, which threatens political, economic and democratic power.
"The goal for technology development should be human flourishing," Allen said. "Human beings flourish when their empowerment is supported, and that is both their individual empowerment but also their work together in communities as co-creators of shared goals and norms to democracy."