Drew Crecente last spoke to his daughter Jennifer Ann Crecente on February 14, 2006. A day later, Jennifer, a senior in high school who was in an abusive relationship, was murdered by her ex-boyfriend, who was later convicted and is serving time in prison. That year, Crecente started a nonprofit in her name to prevent teen dating violence and now routinely monitors any piece of media coverage related to her.
But he was appalled when he received a Google Alert notification at 4:30 a.m. on Wednesday that somebody had created a chatbot on popular AI platform Character AI using his daughter’s yearbook photo and name.
“A grieving father should not have to find out that his dead daughter is being used to try and make money as a chatbot on some website,” he told Forbes. “It shocks the conscience, and it’s unacceptable behavior”…
Y’all better get used to this shit. It’s only a matter of time before the hot girl in class or at work can be your personal OnlyFans model. Your going to have a fight with your partner over something your chatbot said to them. People will marry, divorce, and leave inheritances to their chatbots. You will run into your ex out shopping with a version of you.
you’re
*yarn
I’m not sure which is creepier: someone creating an AI chatbot of a girl who was murdered 18 years ago or Google matching the face in the photo and notifying her father.
That’s horrifying. Assuming identities without permission is going to continue to be a problem going forward.
On the same note, how cool would it be to train an LLM on everything you know and you’re speech patterns and how you feel about things, so that descendants could have a somewhat informed conversation with you.
Pretty sure there was a Black Mirror episode about this.
Doctor Who recently as well, and Fall; or, Dodge in Hell by Stephenson
There’s probably a boatload of it in books.
There are already causes of action for this sort of thing - we do not need to try and come up with something new.
Because lawyers are fucking tech stupid and that’s who will be making the decisions.
Would it be too much trouble to ask you to clarify?
I can’t wrap my head around "There are already causes of action for this sort of thing".
Have you read this article yet (warning: long and sad)? The AI used was the private GPT-3 Beta in mid-2021, almost 1.5 years before ChatGPT was launched.
https://www.sfchronicle.com/projects/2021/jessica-simulation-artificial-intelligence/
damn dude. I have tears in my eyes.
your
Black Mirror
“Be Right Back”
Pet Sematary.
late stage coomerism
I’m all for suing the company into the ground on this one. With the ability of generative AI to just create images and voices, why else, other than deliberate intent, would you use someone else’s likeness or voice?
the wildest thing they admit to finding on character.ai is a murdered girl
Wait til someone tells them about chub.ai, then you’ll see some real crazy shit.
Monkey’s Paw!
This is just heartless 🙁
So he owns his dead daughter’s image? I understand why some people might feel uncomfortable about this, but the other option is controlling how people use chatbots on their own machines, which sounds much worse to me.
His daughter can’t give permission. No one should use her image like that.
That chatbot site DGAF about your machine; they make bank off of investors and off of giving parasocial relationships to 12-year-olds.
The alternative isn’t controlling how people use chatbots on their own machines. It’s stopping corporations from profiting off of chatbots that use another person’s likeness.
You don’t need to jump to assuming regulations would have to control what you do on your computer specifically.
Considering that when people pass away, whatever belonged to them usually goes to a family member, I’d fucking say so, yeah? ESPECIALLY in the case of using her image, without consent, for a fucking LLM. You’re fucking disgusting. I wish you the worst.
If it was from a Google Alert, it would have had to be publicly available, right?