Anonymous wrote:AI is an echo chamber. If a substantial number of sources start putting out articles saying something wrong but new, such as "extensive research by leading medical researchers shows that tea leaves cause autism in children," and a large number of popular websites agree to repeat the wrong claim, after a while AI would repeat the same wrong claim. AI is a parrot that, fortunately, is often right, because what's out there on the web is often right. But AI can be fooled by a concentrated effort to train it on garbage.
Kinda. The thing is, though, that companies training foundation models are aware of this. The models aren't just indiscriminately ingesting stuff. Microsoft tried that with Tay, which continuously learned from its interactions with users. Twitter taught it to be a Nazi misogynist within 24 hours. Now companies curate what their models ingest more carefully - well, models other than Grok.
Anonymous wrote:Only people who don't understand what AI is would trust its advice.
This sounds like a college counselor speaking.