Developers of artificial intelligence should be cognizant of potential biases perpetuated by their systems and address risks of intellectual property infringement, according to finalized AI guidance by the National Institute of Standards and Technology.
The AI Risk Management Framework released Thursday by the non-regulatory agency emphasized the importance of trustworthy AI systems and outlined general considerations that users of the technology could implement in their work to build public trust.
The guidance comes as the use of AI to create art, hire people, and write code has heightened concerns over copyright violations, employment bias, and cybersecurity.