Developers of artificial intelligence should be cognizant of potential biases perpetuated by their systems and address risks of intellectual property infringement, according to finalized AI guidance by the National Institute of Standards and Technology.
The AI Risk Management Framework released Thursday by the non-regulatory agency emphasized the importance of trustworthy AI systems and outlined general considerations that users of the technology could implement in their work to build public trust.
The guidance comes as the use of AI to create art, hire people, and write code has heightened concerns over copyright violations, employment bias, and cybersecurity.