• Reports say the White House Office of Science and Technology Policy is developing a “bill of rights” to guard against artificial intelligence — with input from the public.
  • Biden’s chief science adviser has called for new safeguards against faulty and harmful uses of AI that can unfairly discriminate against people or violate their privacy.

This year has so far marked an important inflection point for artificial intelligence (AI) regulation. Within the first few months of 2021 alone, government bodies — including US financial regulators, the US Federal Trade Commission, and the European Commission — announced guidelines or policies for regulating AI.

Now, top science advisers to President Joe Biden are calling for a new “bill of rights” to guard against powerful new AI technology. In fact, the White House’s Office of Science and Technology Policy last Friday launched a fact-finding mission to examine facial recognition and other biometric tools used to identify people or assess their emotional or mental states and character. The move reflects just how rapidly the regulation of AI is evolving.

Regulation that gets out in front of emerging technology, like the European Commission’s proposal, can both protect consumers and drive innovation. The Biden administration appears aware of this: in an op-ed published yesterday in Wired, Biden’s chief science adviser, Eric Lander, and the OSTP deputy director for science and society, Alondra Nelson, argued for new safeguards against faulty and harmful uses of AI that can unfairly discriminate against people or violate their privacy.

Both reckon that enumerating these rights is only a first step. “What might we do to protect them? Possibilities include the federal government refusing to buy software or technology products that fail to respect these rights, requiring federal contractors to use technologies that adhere to this ‘bill of rights,’ or adopting new laws and regulations to fill gaps.”

Truth be told, this is not the first time the Biden administration has spoken about AI and its possible consequences for society. It is, however, one of America’s clearest steps toward doing something about it. The federal notice filed on Friday seeks public comments from AI developers, experts, and anyone who has been affected by biometric data collection.

According to a report by the Associated Press, the software trade association BSA, backed by companies such as Microsoft, IBM, Oracle, and Salesforce, said it welcomed the White House’s attention to combating AI bias but urged an approach that would require companies to assess the risks of their own AI applications and then show how they will mitigate those risks.

“It enables the good that everybody sees in AI but minimizes the risk that it’s going to lead to discrimination and perpetuate bias,” BSA’s vice president for global policy, Aaron Cooper, was quoted as saying in the report.

Biden’s top science advisers said the government is “starting here because of how widely they’re [AI technologies] being adopted, and how rapidly they’re evolving, not just for identification and surveillance, but also to infer our emotional states and intentions.” Both believe that developing a bill of rights for an AI-powered world won’t be easy, but that it is critical.

Elsewhere, European regulators took measures in April this year to rein in the riskiest AI applications that could threaten people’s safety or rights. More recently, just last week, European Parliament lawmakers passed a non-binding resolution calling for a ban on law enforcement use of facial recognition technology in public places.


Dashveenjit Kaur