Trustworthy AI is imperative, but shouldn’t be over-regulated, the White House says.
The White House’s Office of Science and Technology Policy (OSTP) has issued a draft memo to government agencies that spells out the principles they must abide by when creating regulations for the use of AI. The principles are designed to achieve three goals: ensure public engagement, limit regulatory overreach, and promote trustworthy technology. The memo lists 10 such principles that agencies must consider when drafting AI regulations.
The memo follows on from President Trump’s executive order on AI in February 2019, which set out the administration’s strategy for strengthening the US’s position of leadership in AI. This includes fostering public trust in AI systems by establishing appropriate governance of, and standards for, the technology.
These principles are designed to help agencies such as the Food and Drug Administration, with its approval process for AI-powered medical devices, or the Transportation Department, with its work on autonomous vehicles and drones.
“The principles promote a light-touch regulatory approach. The White House is directing federal agencies to avoid pre-emptive, burdensome or duplicative rules that would needlessly hamper AI innovation and growth,” wrote Michael Kratsios, CTO of the United States, in an op-ed published this week. “Agencies will be required to conduct risk assessments and cost-benefit analyses prior to regulatory action to evaluate the potential tradeoffs of regulating a given AI technology. Given the pace at which AI will continue to evolve, agencies will need to establish flexible frameworks that allow for rapid change and updates across sectors, rather than one-size-fits-all regulations. Automated vehicles, drones, and AI-powered medical devices all call for vastly different regulatory considerations.”
The ten principles for “Stewardship of AI” listed in the document are: public trust in AI, public participation, scientific integrity and information quality, risk assessment and management, benefits and costs, flexibility, fairness and non-discrimination, disclosure and transparency, safety and security, and interagency coordination.
Lynne Parker, Deputy CTO of the United States, described the US government’s hands-off approach to regulating the use of AI technology during a panel discussion at CES this week.
“Certainly, I think at the beginning, the role of the federal government is not to get in the way,” she said. “We want to foster innovation and make sure it’s being used in ways we can all benefit from, but… there are many areas in which we need to have more oversight.”
AI presents a unique challenge, she said, since the White House wants to foster innovation in technology but wants to make sure the result maintains the public’s trust, which panelists agreed was of paramount importance to any nation wishing to become a world leader in AI.
“At some point, the federal government needs to step up and say, okay, we’re actually hampering innovation by not having regulatory oversight or a process for it, or having any consistency,” she said.
“The bottom line is that these technologies are new… in terms of the application impact on society, and there are a lot of people that are concerned about a lot of use cases, but rather than jumping immediately to saying we’re so afraid of it, or we don’t want to use it, we need to be able to learn,” said Parker. “By being able to have safe areas, like regulatory sandboxing where we can test out these ideas and learn what works, what doesn’t work, then over time we can achieve those kinds of benefits that are useful for everyone.”
The White House has also urged its allies, including those in Europe, not to over-regulate AI.
European Commission president Ursula von der Leyen, in her pre-election manifesto “My Agenda for Europe”, made the human and ethical implications of AI a priority, promising to put forward legislation for a co-ordinated European approach during her first 100 days in office.
Since then, a report by Germany’s Data Ethics Commission recommended tough new rules for AI ethics with strong measures taken against “ethically indefensible uses of data.” This was widely seen as an indication that any new EU rules on AI uses would be just as tough, since a previous report from the Data Ethics Commission was the basis for the EU’s GDPR (General Data Protection Regulation).
As Europe tries to enact its own vision for ethical leadership in AI, it therefore seems likely that it will do so by defining more regulation, not less.
“Europe and our allies should avoid heavy handed innovation-killing models,” said a statement issued by the US OSTP. “The best way to counter authoritarian uses of AI is to make sure America and our international partners remain the global hubs of innovation, shaping the evolution of technology in a manner consistent with our common values.”
In his op-ed, Kratsios referred to alarming developments in countries such as China, where uses of AI technologies including facial recognition have provoked international outcry. However, the US government’s own use of AI technology on its citizens is not covered by the principles set out in the new memo.
The regulatory environment will certainly play a part in which of the global superpowers becomes a leader in AI technology, but whether technology leadership is incompatible with the ethical use of AI is unclear at best.