This is the best news. It’s one thing for AI to assist; it’s another for it to replace. Fuck ’em
Agreed. There’s a difference between using AI to assist workers and using it to replace them outright
If it were actually AI (or actually good), it’d be different.
I’m ashamed of millennial gamers for not forcing the terminology to be VI. Did we learn nothing from Mass Effect?
It’s a marketing thing. Calling LLMs “AI” was a very intentional move to evoke that sense of hyperintelligence. Whether it’s truly an artificial intelligence is up for debate, but calling them AI absolutely helped them gain attention (good and bad).
Also, obligatory “shut up Avina”.
It’s actually quite amusing to me that Wikipedia is an authority on “reliability”. It makes perfect sense, but can you imagine explaining that to a public school teacher twenty years ago?
Try explaining anything to a public school teacher lol. They always think they know better.
They’re not an authority though. They might want to be but they’re not.
January 2023, Futurism brought widespread attention to the issue and discovered that the articles were full of plagiarism and mistakes. […] After the revelation, CNET management paused the experiment, but the reputational damage had already been done.
So the “AI experiment” is no longer active, but the damage is already done.
It was also new to me that Wikipedia puts time-based reliability qualifiers on sources. That makes sense, of course. And this example shows how a source can have been good and reliable in the past but not anymore - differentiating between the two is important and necessary.
Honestly, they should just create a new category called “AI generated”. Reliability ratings in journalism should only apply to humans.
CNET began publishing articles written by an AI model under the byline “CNET Money Staff”.
(emphasis mine)
What a label. I assume the “byline” was the listed article author? “Money Staff”. Baffling.