- Salesforce’s Sabastian Niles assesses GCs’ role in AI
- Leadership means engaging internally and externally
The AI revolution has prompted a complex governance, technology, and regulatory challenge: how organizations build trust around the technology's use. Legal leaders and their teams have a key responsibility to set parameters around implementation and to ensure all employees know how to interact with the technology ethically and responsibly.
Customers and stakeholders are increasingly demanding transparency in AI systems, and general counsel can help businesses proactively meet and exceed those expectations. Here are some steps legal leaders can take to help their organizations deploy AI while building and maintaining stakeholder trust.
Establish clear guidelines and governance from the beginning
Absent comprehensive national or global AI regulation, corporate legal teams must help their organizations navigate complex, uncharted territory. This starts with creating clear operational guidelines and instituting proper oversight.
These policies should account for several factors: how the technology will enhance operations, what privacy protocols are needed (particularly where customer data is involved), and what risks each use case may carry.
Proper oversight is key to applying these policies and ensuring adherence to them. Many organizations have established trusted AI councils with representation from across the business, from legal to product experts, to help evaluate opportunities and use cases for AI applications.
These experts should also be part of the product development cycle from the very beginning, helping to ensure that effectiveness measures and ethical guardrails are built into AI systems from inception rather than bolted on as an afterthought.
Embrace the reskilling and upskilling opportunity ahead of us
It is impossible to set effective guidelines around a technology without a comprehensive understanding of how it works. Legal leaders should source talent for their teams from a variety of backgrounds, including people with hands-on AI experience, and facilitate opportunities for current team members to deepen their AI education.
Personally, I’ve taken time to dive into the intricacies of AI through a variety of avenues. Aside from participating in our internal AI summits and product development, I’ve taken online trainings offered by our learning platform, attended informative conferences, and engaged in in-depth discussions with peers and colleagues. I also encourage my team members to sign up for online or in-person courses and conferences they come across to sharpen their own AI skills and learn how other organizations are approaching AI. Most importantly, I encourage our teams to share learnings, feedback, and insights with one another and with our internal stakeholders so that we can help design the right AI future for the company and our customers.
Engage in a multi-stakeholder approach
As AI continues to evolve, it is imperative that governments, businesses, and civil society organizations partner to build the right frameworks and guidelines.
General counsel and legal teams should partner with their government affairs colleagues to inform policymakers during the development and enforcement of generative AI policies and risk-based AI regulation.
At Salesforce, we regularly consult with both internal and external resources—including the National Institute of Standards and Technology’s US AI Safety Institute Consortium—to guide us in this important work.
We also regularly engage with policymakers across the globe in these crucial governance discussions, contributing to important global gatherings such as the UN General Assembly and the AI safety and innovation summits held in England and South Korea.
Proactively communicate with customers
Legal leaders can play a critical role in communicating to customers how their organizations are using AI. Companies should disclose generative AI use to their customers and be prepared to explain how their AI systems make decisions, not only to satisfy regulatory requirements but also because doing so is integral to building trust with customers and stakeholders.
Additionally, working with customers to provide clear, understandable explanations of AI decision-making processes is becoming increasingly important, especially in regulated industries.
At Salesforce, the Office of Ethical and Humane Use and our legal teams collaborated to launch an AI acceptable use policy that gives our customers greater clarity about how we use AI and promotes safe, trusted experiences for everyone using our technologies.
We are at a critical turning point in the AI revolution, and, just as they have in areas of corporate compliance, general counsel have an opportunity to set the tone at their companies, lead by example for customers, and be the AI pioneers in their organizations.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.
Author Information
Sabastian Niles is president and chief legal officer of Salesforce, a cloud-based customer relationship management software provider.