- LCW attorneys examine DHS recommendations for AI tools
- A deliberate approach and appropriate safeguards are crucial
Generative artificial intelligence can supercharge how public agencies provide community services. The Department of Homeland Security published a playbook last month to guide the public sector in deploying generative AI.
Although the playbook was drafted under the Biden administration and may not receive support from the current administration, its recommendations remain sound and applicable. It reveals that the level of buy-in and care from senior leadership largely determines the success of implementation.
The DHS developed its playbook using three pilot programs. First, generative AI allowed DHS investigators to summarize and search reports for contextually relevant information. This system could lead to greater detection of fentanyl-related networks, better identification of perpetrators and victims of child exploitation, and sharper analysis of patterns and trends, the playbook noted.
Second, the Federal Emergency Management Agency used generative AI to help local governments prepare hazard mitigation plans and apply for related grants. The AI tool created draft planning elements from publicly available, well-researched sources that were then customized.
Third, US Citizenship and Immigration Services used generative AI to create personalized interview training with different scenarios, policies, and laws based on refugee and asylum immigration officers’ specific needs.
Following these pilots, the DHS recommended that public agencies embrace generative AI while recognizing its limitations and risks, which demand a careful, thoughtful development plan.
Agencies using the tool must understand their obligations to protect sensitive data and civil rights. Buy-in from risk management, compliance, legal, and civil rights and civil liberties experts is critical to protect agencies from risk.
The DHS advises designating an existing governance body—including technical experts—to provide oversight, or standing up a new one. Managers must have a level of fluency with generative AI to effectively supervise employees using it.
Public agencies should study the playbook to create a strategic approach for rolling out generative AI. It advises integrating generative AI into “mission-enhancing” processes but not using it for mission-critical tasks until it’s tested on lower-stakes functions. The playbook recommends narrowing a pilot program’s scope to a specific, measurable need so the agency can gauge success or failure.
Effective performance monitoring and measurement are crucial to the success and trustworthiness of deployments. This requires clear metrics, monitoring systems, and processes for regular assessment. The metrics should address the accuracy and usefulness of results, the impact on the mission, and adherence to responsible-use principles.
Regular reporting and iterative improvements based on feedback are essential to refining AI systems. For example, USCIS measured success in its training pilot program by looking for reduced interview times and improved officer training exam scores. These metrics provided objective data that allowed the DHS to measure the tools’ effectiveness.
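In practice, such metrics can be computed simply. The sketch below (with entirely hypothetical numbers, for illustration only) shows how an agency might quantify USCIS-style results: the change in average interview time and in training exam scores before and after a pilot.

```python
# Minimal sketch of pilot metrics like those USCIS tracked. All data
# below is hypothetical and for illustration only.
from statistics import mean

baseline_minutes = [95, 102, 88, 110]  # hypothetical pre-pilot interview times
pilot_minutes = [78, 85, 80, 92]       # hypothetical post-pilot interview times

baseline_scores = [72, 68, 75, 70]     # hypothetical pre-pilot exam scores
pilot_scores = [84, 81, 88, 79]        # hypothetical post-pilot exam scores

time_change = (mean(pilot_minutes) - mean(baseline_minutes)) / mean(baseline_minutes)
score_change = (mean(pilot_scores) - mean(baseline_scores)) / mean(baseline_scores)

print(f"Interview time change: {time_change:+.1%}")  # negative = faster interviews
print(f"Exam score change:     {score_change:+.1%}")  # positive = better training
```

Objective before-and-after measurements like these give leadership the data needed to decide whether a pilot should be expanded, adjusted, or ended.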
The DHS urges agencies to consider their needs carefully when choosing an AI tool. It noted the limitations of each category of generative AI tool: commercial, open-source, and open-weight.
Commercial AI platforms are publicly available on a free or subscription basis and can analyze résumés or redact video, for example. Open-source AI models offer more control and visibility into model functions and tend to be cheaper but may involve long-term maintenance costs. Open-weight models, which release the parameters but close off the code and architecture, generally require less maintenance and can be run offline—a potential plus for law enforcement.
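As a rough illustration of the offline advantage, the sketch below (our own example, assuming an open-weight checkpoint has already been downloaded to local disk and using the open-source Hugging Face transformers library; the model path is a placeholder) shows that running such a model requires no network connection.

```python
# A minimal sketch, assuming an open-weight checkpoint is already stored
# locally; the path below is a placeholder, not a real model name.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="/opt/models/open-weight-checkpoint",  # local files only; no network call
)

result = generator(
    "Draft a summary of the flood hazards identified in this plan:",
    max_new_tokens=200,
)
print(result[0]["generated_text"])
```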
DHS also recommends requiring individual users to be trained before they’re granted access to generative AI tools. While many members of the public have experimented with AI, an agency-directed approach can establish a baseline level of technical skill.
Agencies must emphasize to their employees the importance of human review of AI outputs, both for matters of civil rights and accuracy. Generative AI is prone to bias because it learns from humans, who exhibit their own explicit and implicit biases. It also can produce “hallucinations,” creating incorrect or misleading results. AI outputs should always be overseen by real, human intelligence.
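One way to operationalize human review (a hypothetical sketch; the names and workflow are our own, not drawn from the playbook) is a hard gate that refuses to release any AI output until a named reviewer signs off.

```python
# Hypothetical sketch of a human-in-the-loop gate; the class and field
# names are illustrative, not DHS terminology.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DraftOutput:
    ai_text: str                    # text produced by the generative AI tool
    reviewer: Optional[str] = None  # human who checked accuracy and bias
    approved: bool = False

    def approve(self, reviewer: str) -> None:
        """Record the human sign-off required before any release."""
        self.reviewer = reviewer
        self.approved = True

def release(output: DraftOutput) -> str:
    """Refuse to release any AI output a human has not reviewed."""
    if not output.approved:
        raise PermissionError("Human review is required before release.")
    return output.ai_text

# Usage: a draft cannot be released until a named reviewer approves it.
draft = DraftOutput(ai_text="Summary of hazard mitigation plan...")
draft.approve(reviewer="j.smith")
print(release(draft))
```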
The DHS playbook confirms generative AI holds great potential for public agencies, as long as they implement it with a deliberate approach and appropriate safeguards. Public agencies can use the DHS’s experience to benefit not just the organization, but also the community it serves.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.
Author Information
Paul D. Knothe is a partner at Liebert Cassidy Whitmore in Los Angeles.
Nicholas M. Grether is an associate at Liebert Cassidy Whitmore in Los Angeles.