- Top execs aboard nonprofit’s AI governance center
- Initiative for privacy pros will offer certification
The International Association of Privacy Professionals is launching an AI governance center Wednesday that will feature resources, training, and a certification course. The offerings add to the nonprofit’s existing privacy certification programs for job-seekers in sectors that must comply with data protection regulations across the globe.
The latest initiative comes in response to the accelerating deployment of AI tools such as Microsoft’s Bing and Google’s Bard chatbots. Products that rely on computers to complete tasks or make decisions bring both promise and potential pitfalls: risks to people’s privacy and civil rights, damage to a company’s brand reputation, and the spread of misinformation or scams.
While the legal regime for AI is still being sorted out, some organizations have pledged to uphold principles like safety and fairness in systems that involve automated decision-making. Now there’s growing demand for professionals who can help navigate risks and make sure the technology is used responsibly.
“There are lots of AI frameworks and principles,” said J. Trevor Hughes, president and CEO of the IAPP, which has 80,000 members worldwide. “What there’s less of right now is tools and structures inside organizations to give life to these principles and frameworks.”
Microsoft’s Julie Brill, Google’s Keith Enright, and International Business Machines Corp.'s Chief Privacy and Trust Officer Christina Montgomery are among the advisers on a board that will work with the association’s new AI governance center. The advisory board’s members include representatives from the private sector, government, and academia.
Issues top of mind for organizations grappling with the implications of AI include privacy, harmful bias, bad governance, and lack of legal clarity, according to a recent survey from the association and FTI Consulting. More than half of surveyed organizations that are establishing AI governance approaches say they’re building on top of existing privacy programs.
One area of overlap is impact assessments. Privacy professionals use such assessments to determine how product launches or other business strategies would affect data an organization holds, including personal information. Similar assessments could measure how an AI system may impact users or society more broadly—for instance, whether automated decision-making could perpetuate discrimination in lending or other areas.
Compliance and controls for AI will have to operate “at the speed of digital business,” IBM’s Montgomery said in a statement.
“Privacy professionals are ideally suited to this challenge, and IAPP’s new AI Governance Center is key to ensuring they are ready to meet it head on,” Montgomery said.