That’s what I thought of, at first. Interestingly, the judge went with the angle of the chatbot being part of their web site, and they’re responsible for that info. When they tried to argue that the bot mentioned a link to a page with contradicting info, the judge said users can’t be expected to check one part of the site against another part to determine which part is more accurate. Still works in favor of the common person, just a different approach than how I thought about it.
I like this. LLMs are powerful tools, but being rebranded as “AI” and crammed into ~everything is just bullshit.
The more rulings like this happen, where the deploying entity is held responsible for the accuracy (or lack thereof) of what its chatbot says, the better. At some point companies will realize they can't guarantee that only correct information gets provided, since that's not how LLMs work in their role as stochastic parrots, and they'll stop using them for a lot of things. Hopefully sooner rather than later.