The guidelines, which have been in the works for a year, are not binding in any way. But the White House hopes they will convince tech companies to take additional steps to protect consumers, including clearly explaining how and why an automated system is in use and designing AI systems to be equitable. The blueprint joins a number of other voluntary efforts to adopt rules on transparency and ethics in AI, which have come from government agencies, companies and nongovernmental groups, CNN reported.
Though the use of AI has proliferated in recent years, powering everything from confirming people's identities for unemployment benefits to generating highly realistic images from written prompts, the US legislative landscape has not kept pace. There are no federal laws specifically regulating AI or its applications, such as facial-recognition software, which privacy and digital rights groups have criticized for years over privacy concerns and its role in the wrongful arrests of at least several Black men, among other issues.
A handful of individual states have their own rules. Illinois, for instance, has a law known as the Biometric Information Privacy Act (BIPA), which forces companies to get permission from people before collecting biometric data like fingerprints or scans of facial geometry. It also allows Illinois residents to sue companies for alleged violations of the law. Since 2019, a number of communities and some states have also banned the use of facial-recognition software in various ways, though a few have since pulled back on such rules.
The Blueprint for an AI Bill of Rights includes five principles: that people should be protected from systems deemed "unsafe or ineffective"; that people shouldn't be discriminated against by algorithms and that AI-driven systems should be made and used "in an equitable way"; that people should be kept safe "from abusive data practices" by safeguards built into AI systems and should have control over how data about them is used; that people should be aware when an automated system is in use and understand how it could affect them; and that people should be able to opt out of such systems "where appropriate" and get help from a person instead of a computer.
"Much more than a set of principles, this is a blueprint to empower the American people to expect better and demand better from their technologies," said Alondra Nelson, the deputy director of the White House Office of Science and Technology Policy, during a press briefing.
While some privacy and technology advocates responded positively, they also pointed out that the guidelines are just that: guidelines, not legally binding rules.
In a statement, Alexandra Reeve Givens, president and CEO of the Center for Democracy and Technology, a Washington, DC-based nonprofit, said, "Today's agency actions are valuable, but they would be even more effective if they were built on a foundation set up by a comprehensive federal privacy law."
In a separate statement, ReNika Moore, director of the American Civil Liberties Union's Racial Justice Program, called the principles "an important step in addressing the harms of AI" and added that "there should be no loopholes or carve-outs for these protections."
"It's critical that the Biden administration use all levers available to make the promises of the Bill of Rights blueprint a reality," Moore said.
(The-CNN-Wire & 2022 Cable News Network, Inc., a Time Warner Company. All rights reserved.)