For three years, chatbots have been the face of generative artificial intelligence. Type anything into one and it returns a personalized response, turning the exchange into a seemingly magical dialogue with a machine. While that conversational interface may seem like the best way to harness large language models, some companies are starting to ditch chatbots, worried about liability and loss of control.
They’ve found that even with guardrails, users can “jailbreak” the technology and get a chatbot to go off topic, sometimes in harmful or unsavory directions. They might be leaving magic on the table, but these firms are also potentially building safer, more focused products, and ...