
I agree with each of these points, and they could help us determine the practical boundaries at which we might mitigate the dark side of AI: disclosure of the content that trains large language models, such as those behind ChatGPT, and a way for people who don’t want their content included in what LLMs present to users to opt out; rules against built-in bias; antitrust enforcement to prevent a few large corporations from forming an AI cabal that homogenizes (and monetizes) nearly all the information we receive; and protections for the personal data those omniscient AI products consume.
But reading the list also highlights the difficulty of turning promising proposals into actual binding laws. Look closely at the points in the White House blueprint and it becomes clear that they apply not just to artificial intelligence but to pretty much everything in tech. Each seems to embody a user right that has long been violated. Big Tech didn’t sit around waiting for generative AI before developing unfair algorithms, opaque systems, abusive data practices, and missing opt-outs. That’s the point, folks: the fact that these issues are being raised in discussions of a new technology only highlights the failure to protect citizens from the ill effects of the technology we already have.
During the Senate hearing where Altman spoke, senator after senator repeated the same refrain: we screwed up regulating social media, so let’s not screw up AI. But there is no statute of limitations on enacting laws to curb previous abuses. Last time I checked, billions of people, including nearly everyone in the United States with the ability to poke a smartphone display, were still being bullied on social media, having their privacy compromised, and being exposed to horrors. Nothing stops Congress from getting tougher on these companies, most importantly by passing privacy legislation.
The fact that Congress has yet to do so casts serious doubt on the prospect of an AI bill. No wonder some regulators, notably FTC Chair Lina Khan, aren’t sitting around waiting for new laws. Khan argues that existing law already gives her agency sufficient jurisdiction over the bias, anti-competitive behavior, and privacy violations posed by new AI products.
Meanwhile, this week the White House released an update to the AI Bill of Rights, underscoring both the difficulty of actually enacting new laws and the enormity of the work that remains to be done. It explained that the Biden administration is developing a national AI strategy, but it is clear that the “national priorities” of that strategy are still undetermined.
Now, the White House wants tech companies, other AI stakeholders, and the public to submit answers to 29 questions about the benefits and risks of AI. Just as the Senate subcommittee asked Altman and his fellow panelists to propose a way forward, the administration is asking companies and the public for input. In the request for information, the White House pledged to “consider each comment, whether it contains personal narratives, experiences with AI systems, or technical legal, research, policy, or scientific material, or otherwise.” (Note that no large language models were solicited; even so, I’m sure GPT-4 will be a significant contributor despite the omission.)