Synthetic Data Can Reconcile Transatlantic AI Policy Divergences

Feb. 6, 2025, 9:30 AM UTC

Synthetic data adoption represents a significant opportunity for insurers and other industries to reconcile cross-pond regulatory frameworks with domestic guidance. The influence of European sentiments on data and AI is well established—the GDPR brought about the adoption of the California Privacy Rights Act and an alphabet soup’s worth of comparable state laws.

The expansion of European influence may accelerate in the wake of New York’s announcement in January of an international secondment program with the Bank of England. The program will allow regulators in New York and the UK to exchange staff, and enable “greater sharing of resources, knowledge, and regulatory approaches,” the New York State Department of Financial Services’ release said.

Domestic entities regulated by NYSDFS should look eagerly not only to the UK, but also to the broader European Economic Area, for potential guidance on a variety of issues, including data governance and artificial intelligence development and deployment. Data minimization is central to the European approach to AI and data governance. This principle dictates that personal data must be processed only when necessary for the specified purposes.

Meanwhile, New York has prioritized avoiding bias that would result in unfair or unlawful discrimination. Insurers should review NYSDFS’ Insurance Circular Letter No. 7, which presented NYSDFS’s interpretation and application of existing rules and regulations to AI systems. The directive outlines expectations for the development of AI systems and the use of consumer data, emphasizing that insurers must perform testing and analysis designed to uncover and mitigate hidden biases in AI systems.

The European Data Protection Board’s focus on data minimization might appear to conflict with NYSDFS’s emphasis on anti-discrimination measures. To accurately perform assessments as directed by NYSDFS, a data controller may be required to collect additional information not relevant to the insurance transaction or may require data proliferation into multiple repositories, expanding the disclosure and use of personal information.

However, these priorities need not be inherently contradictory. Insurers and insurance producers can anonymize sensitive data. But anonymization can be costly, lead to inefficiencies, and—depending on the technique used—may still permit re-identification by sufficiently sophisticated actors.

Synthetic data may provide a more powerful approach. Synthetic data is artificially generated data that replicates the statistical properties of real-world data without corresponding to any actual individual. It provides a compelling mechanism to bridge the regulatory approaches presented by the EDPB and NYSDFS—efficiently and effectively realizing the organization’s goals, while limiting the need to expand the use and collection of personal data.

Synthetic data allows organizations to train AI models on reliable and accurate data sets without the constraints placed on them by individual data subjects. Synthetic data offers additional advantages. Unlike real-world data, which can often be incomplete or have population sample biases, synthetic data can be meticulously tailored to reflect specific training needs. This controlled approach enhances the accuracy and robustness of AI models. Moreover, properly generated synthetic data carries minimal risk of reidentification, helping satisfy the EDPB’s criteria for safeguarding individual privacy.
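The core idea can be illustrated with a deliberately simplified sketch: fit summary statistics to a small set of (invented) policyholder records, then sample new records from those statistics. The data values, column choices, and function names here are hypothetical illustrations, and real synthetic-data systems model correlations between columns rather than sampling each column independently as this sketch does.

```python
import random
import statistics

# Hypothetical "real" records: (policyholder age, annual premium).
# Values are invented solely for this illustration.
real_data = [
    (34, 1200.0), (45, 1550.0), (29, 980.0),
    (52, 1720.0), (41, 1400.0), (38, 1310.0),
]

def fit_marginals(rows):
    """Estimate the mean and standard deviation of each column."""
    columns = list(zip(*rows))
    return [(statistics.mean(c), statistics.stdev(c)) for c in columns]

def generate_synthetic(params, n, seed=0):
    """Sample n new records from the fitted per-column distributions.

    The synthetic rows mimic the statistical shape of the real data
    but do not correspond to any actual individual.
    """
    rng = random.Random(seed)
    return [
        tuple(rng.gauss(mu, sigma) for mu, sigma in params)
        for _ in range(n)
    ]

params = fit_marginals(real_data)
synthetic = generate_synthetic(params, n=100)

# No synthetic record is an exact copy of a real one, which is the
# privacy property the article describes.
assert not any(s == r for s in synthetic for r in real_data)
```

Because the generator draws from distributions rather than copying records, the model trained downstream never sees an actual individual's data, which is what limits the reidentification exposure discussed above.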

Organizations can leverage synthetic data to align their practices with regulatory requirements and also enhance their AI systems’ fairness and efficiency. This alignment underscores the potential to create global standards for responsible AI that protect individual rights while promoting innovation—the underlying goal of all AI oversight programs. To realize the guidance of both the EDPB and NYSDFS, insurers and insurance professionals should embed synthetic data practices and strategies into their AI development frameworks.

As the regulatory landscape continues to evolve, synthetic data emerges not only as a tool for compliance but also as a means to lead the charge in ethical AI development and deployment.

This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.

Author Information

Ian Guthoff is associate in-house counsel at GuarantR Inc. and served as a special referee and court attorney for the New York Supreme Court.


To contact the editors responsible for this story: Jada Chin at jchin@bloombergindustry.com; Jessie Kokrda Kamens at jkamens@bloomberglaw.com
