Tech Being Tested to Bar Bots From Making Rulemaking ‘Comments’

July 27, 2020, 3:41 PM

The federal government is in the final stages of testing a year-long redesign of the Regulations.gov website, the central digital portal for agencies to post regulations and notices and for the public to file comments during the rulemaking process.

The new site is slated for launch in October. All visitors to Regulations.gov late last week were redirected to a beta site to test a mechanism designed to block bots from tainting the public comment process. A verification tool from Google, known as reCAPTCHA technology, was added to the beta site to improve system integrity by ensuring that comments can only be filed by humans.
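The GSA has not published how the beta site wires in reCAPTCHA, but Google's documented integration pattern is consistent: the browser solves a challenge and produces a token, and the site's server confirms that token with Google's siteverify endpoint before accepting the submission. A minimal sketch of that server-side check, using only Python's standard library (the function names here are illustrative, not Regulations.gov's actual code):

```python
import json
import urllib.parse
import urllib.request

# Google's documented server-side verification endpoint for reCAPTCHA.
VERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"

def build_verify_request(secret, token, remote_ip=None):
    """Build the POST request that asks Google to validate a client's reCAPTCHA token."""
    fields = {"secret": secret, "response": token}
    if remote_ip:
        fields["remoteip"] = remote_ip  # optional field in Google's API
    data = urllib.parse.urlencode(fields).encode("utf-8")
    return urllib.request.Request(VERIFY_URL, data=data, method="POST")

def is_human(verify_response_body):
    """Interpret Google's JSON reply: a top-level "success": true means the token passed."""
    return bool(json.loads(verify_response_body).get("success", False))
```

A comment would only be written to the docket when `is_human` returns true for the token submitted with the form; a failed check would reject the submission as likely automated.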

The original Regulations.gov site, launched in January 2003, provides the public with access to federal regulatory content. It has received almost 10 million public comments since 2006, making it a critical component of government transparency, the General Services Administration said in a blog post explaining the redesign.

But like other online platforms designed to receive submissions from the public, the site has been vulnerable to bots, fake identities, and mass submissions, which undermined confidence in a 2014 Environmental Protection Agency rulemaking and in another at the Federal Communications Commission three years later.

“The last thing we need is a common view that essentially the entire rulemaking process is being gamed by a variety of machines and shadowy players,” Michael Fitzpatrick, head of global regulatory affairs at Google, said at a public forum GSA hosted in January.

In addition to the bot blocker, the beta site includes enhanced search capabilities, a simplified commenting process, and a modernized design to improve user experience when comments are filed via multiple devices, according to the GSA, which took over management of Regulations.gov last year.

The beta version, created a year ago, is operating in parallel with Regulations.gov until it is ready to replace the original. The agency since May has been automatically redirecting users to the beta site on specific days to test different features and user experience. In the first phase of testing, the agency paused redirects on June 11 while it worked to preserve third-party links.

“The redirects will be ramped up further, based on public feedback, towards the goal of achieving a full cut-over to the new site and decommissioning the old in the fall,” a GSA spokesperson said.

Looking ahead, the agency plans to soon update the beta site’s application programming interface. These changes will allow the next-generation site to offer read-and-write capability, enabling users to both download regulatory information and upload comments.

The technology “would improve the ability of organizations to submit comments on behalf of their memberships,” the spokesperson said.
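The GSA has not detailed the next-generation interface, so the endpoint names, parameters, and header below are illustrative assumptions, loosely modeled on the existing Regulations.gov API's key-based access. A read-and-write API of the kind described might be exercised like this:

```python
import json
import urllib.parse
import urllib.request

# Hypothetical base URL for illustration; the final interface has not been published.
BASE_URL = "https://api.regulations.gov/v4"

def build_search_request(api_key, search_term):
    """Read side: build a GET request that searches posted regulatory documents."""
    query = urllib.parse.urlencode({"filter[searchTerm]": search_term})
    req = urllib.request.Request(f"{BASE_URL}/documents?{query}")
    req.add_header("X-Api-Key", api_key)  # assumed header-based key, as in the current API
    return req

def build_comment_request(api_key, docket_id, text):
    """Write side: build a POST request that files a comment on a docket."""
    body = json.dumps({"docketId": docket_id, "comment": text}).encode("utf-8")
    req = urllib.request.Request(f"{BASE_URL}/comments", data=body, method="POST")
    req.add_header("X-Api-Key", api_key)
    req.add_header("Content-Type", "application/json")
    return req
```

Under this sketch, a membership organization could script the write side to submit comments programmatically on behalf of its members, rather than filing each one through the web form.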

Restoring Trust

The technology now being tested guards only against comments submitted by bots. The beta site doesn't require users to certify their identity, and anonymous comments will continue to be permitted.

The integrity of the public comment process arose as a major concern in federal rulemaking in 2014, as the EPA was developing a rule on greenhouse gas emissions. The agency reported it received more than 4 million total comments, some from questionable sources.

Similarly, the FCC’s divisive 2017 net neutrality rulemaking garnered more than 22 million public comments. The New York attorney general’s office said “tens of thousands of New Yorkers may have had their identities misused” during this process.

Potential abuse of the e-rulemaking system was the subject of a joint hearing last October called by Sen. Rob Portman (R-Ohio), chairman of the Senate Homeland Security and Governmental Affairs Permanent Subcommittee on Investigations.

Yet the extent of the problem remains unclear. The Government Accountability Office in August 2018 launched an investigation into fraudulent identities used in public comments as a means of swaying federal agencies as they draft regulations. GAO issued one report in June 2019 on agency practices to identify the sources of public comments, and additional reports are expected in 2021, a GAO spokesman said.

The 2019 GAO report did not address the issue of bot-generated comments. The Administrative Procedure Act doesn’t require those who file comments to disclose their identities, nor does it require agencies to verify identities during the rulemaking process, the report said.

While agencies take any fraud seriously, public comments submitted during the rulemaking process are not the equivalent of votes for a rule. Agency officials have said they are able to identify mass-comment campaigns that use fake names and weigh them accordingly.

But a group of Democratic lawmakers wrote in 2017, after the FCC’s net neutrality rulemaking, that any use of false identities can taint the integrity of the rulemaking process, and that public trust in the system is critical—sentiments that senators from both political parties echoed during the Senate Homeland Security subcommittee hearing last year.

To contact the reporter on this story: Cheryl Bolen in Washington at cbolen@bgov.com

To contact the editors responsible for this story: John Lauinger at jlauinger@bloomberglaw.com; Andrew Harris at aharris@bloomberglaw.com
