Every day, government offices undertake a vast range of administrative tasks: granting licenses, issuing payments, adjudicating claims, and enforcing rules. Historically, this work has been executed by government workers. But in an increasingly digital world, more government work is likely to be automated.
Members of the public may even come to expect a digital automated state as they grow increasingly accustomed to conducting their private transactions online and having their complaints with online retailers resolved automatically through interactions with chatbots.
Further changes are on the horizon, as government agencies at every level start to explore ways that artificial intelligence can handle tasks traditionally driven by humans. Instead of having human officials make decisions, such as whether an individual qualifies for disability benefits or unemployment insurance, agencies could eventually rely on automated systems.
Automation’s obvious potential lies in efficiency, speed, and accuracy. But automation can also bring forth a government with greater consistency and fairness. Algorithms, after all, pose less risk of corruption than do human officials.
And even though some early algorithmic tools have been shown to exhibit unjust biases (usually because bias is baked into the human-created data on which they rely), these algorithms can be mathematically adjusted to reduce undesired, unjust outcomes—a much more achievable task than eliminating humans’ implicit biases. Because algorithms do exactly what they are told, an automated state could even be seen as the culmination of the ideals of fair, apolitical public administration that emerged in the Progressive Era at the end of the 19th century.
Yet, even with a responsible, uncorrupted, unbiased automated state, a key ingredient of good governance would turn out to be missing: the human touch. A future of digital government seems, at first blush, quite sterile and devoid of empathy.
Empathetic engagement with the public, after all, is an essential ingredient of any responsive, responsible governmental authority. Ensuring that members of the public have opportunities to engage with human officials, share their voices, and have their views acknowledged will remain a vital function of government—perhaps especially so in an increasingly digitized world.
In that world, government will continue to make highly consequential decisions that affect individuals. At these times, empathy will demand that administrative agencies provide opportunities for human interaction, listening, and validation of concerns.
The public’s need for empathy, though, does not mean that government should avoid automation. If planned well, the transition to an automated state could, surprisingly, make interacting with government more humane, not less.
Government should seek to ensure that, even as routine tasks become increasingly automated, civil servants work deliberately to “bring the personal, creative, and compassionate touch,” as technologist Kai-Fu Lee puts it in his book AI Superpowers.
Making government more empathetic might even be easier to accomplish in an automated state. Freed from drudgery and from backlogs of files awaiting processing, administrators, clerks, and auditors will have more time to engage genuinely with the individuals affected by their agency’s decisions.
The current human-driven bureaucracy, after all, is often far from sufficiently attentive and empathetic to many members of the public. Even as existing government jobs are automated, the state’s ability to meet the public’s need for empathy could be enhanced by conceiving and creating new responsibilities for civil servants.
Think of those who hold these new jobs not just as “essential workers”—to use the vernacular of our Covid time—but as “empathy workers.”
A Right to Human Engagement and Empathy
To ensure that government provides human empathy in an automated state, we will need to create processes and laws to reinforce the maintenance of human contact.
Today, a “right to a human decision” serves as a rallying cry for those who wish to resist the spread of artificial intelligence. But if automated decisions turn out increasingly to be more accurate and less biased than human ones, a right to a decision by humans would only deny the public the improvements in governmental performance that digital algorithms can deliver. And, in any event, not all human decisions are empathetic ones.
Instead of a right to a human decision, the challenge should be to define a right to human empathy. This right would acknowledge that people deserve an opportunity to be listened to in human-to-human interaction when confronting life-altering decisions.
If public administrators themselves do not assume the mantle of carrying out this right on their own, then courts in the future may need to determine when and how agencies must provide supplemental engagement and empathy to individuals subjected to automated processes.
The prospect of automating a vast swath of governmental decisions, then, promises more than just a path toward more efficient delivery of government services. It can provide, at the same time, an important opportunity to move toward a more empathetic government.
To reinvigorate the human element in governing, officials who today might gravitate toward the creation of digital apps also need to think about ways to create what Sherry Turkle calls “empathy apps.” They must never lose sight of the need to find ways to take a genuine interest in listening and responding to what members of the public have to say.
This column does not necessarily reflect the opinion of The Bureau of National Affairs, Inc. or its owners.
Cary Coglianese is the Edward B. Shils Professor of Law and professor of political science at the University of Pennsylvania Carey Law School, where he also serves as director of the Penn Program on Regulation. His most recent book is “Achieving Regulatory Excellence.”