- cross-posted to:
- technologie@feddit.de
The tech giant is evaluating tools that would use artificial intelligence to perform tasks that some of its researchers have said should be avoided.
Google’s A.I. safety experts had said in December that users could experience “diminished health and well-being” and a “loss of agency” if they took life advice from A.I. They had added that some users who grew too dependent on the technology could think it was sentient. And in March, when Google launched Bard, it said the chatbot was barred from giving medical, financial or legal advice. Bard shares mental health resources with users who say they are experiencing mental distress.
Most of this technology will of course be acontextual and lack any kind of critique of capitalism. I watch people I know make all kinds of “wise” decisions based on career expectations, ambition, and definitions of “success,” and they’re actually really miserable yet can’t seem to recognise why. I’m unconvinced that AI developed by capitalist companies would provide healthier perspectives. Heck, even humans fall short in roles like counselling when they lack class consciousness or a critique of neoliberalism. AI probably doesn’t stand much more of a chance.