People are adding typos, aggressively casual language and references to ‘The Office’ to stay ahead of armchair detectors; ‘It’s like the new McCarthyism.’
As for Lemmy, there are many articles from lesser-known sources that get positive attention here, seem to use AI, and yet do not get removed.
Well, Lemmy is not one thing, your instance for example is explicitly in favor of boosting AI-generated content. So that behavior is what I would expect if I had an account there. I personally wouldn’t go there expecting to see links to human-made content.
I don’t believe it’s possible for human writers to write both authentically and in a way that is coded to verify they are human (as the article discusses) that an LLM couldn’t eventually come to replicate. I also don’t believe it’s possible for an LLM to write from their unique perspective. Therefore, I believe the strongest method for verifying one’s own human-ness is to write from one’s own unique perspective.
signalling humanity in a way that resists automated systems
I think I would understand your perspective better if you gave an example or two of the signals that could be used.
What I’m talking about is posted across all popular instances and is not specific to db0, and imo there is a very big difference between content that is explicitly AI and AI blog posts that portray themselves as human-written. I support the existence of a space for the former while opposing the latter.
Therefore, I believe the strongest method for verifying one’s own human-ness is to write from one’s own unique perspective.
I agree, but it is possible to adjust your personal filter to let your unique signature be expressed in different ways, and it’s possible to write with your audience in mind without being inauthentic. Throwing up your hands and giving up is not the right approach, even though it’s a hard problem that by its nature resists specific actionable answers. The article gives an example of a contrived way AI can attempt to falsify such a signal:
“You’ll be reading someone’s Substack or blog post, and all of a sudden in the middle of a perfect paragraph, there’ll be a mistake sitting out there like a sore thumb,” said O’Bryan, 62. “It’s like, try harder.”
There are lots more, such as reducing the probability of the top-weighted words the LLM chooses from in the last stage of its process. But this level of extra attention to automated signaling isn’t always applied, and I believe it can be defeated by developed intuition, if people will bother to develop it. From the writing side, the approach should be to put more of yourself into more parts of what you write, to try to match the intuitions of readers, and to resist converging on notions of “correct writing” that conflict with this.
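The sampling tweak described above can be sketched roughly as follows. This is a toy illustration under my own assumptions, not any real model’s API: at the final decoding step, a penalty is subtracted from the highest-weighted logits before softmax sampling, nudging output away from the statistically most “expected” word choices.

```python
import math
import random

def penalized_sample(logits, top_n=3, penalty=2.0, rng=random):
    """Toy sketch: subtract a flat penalty from the top-n logits,
    then sample from the resulting softmax distribution, so the
    most 'expected' words become less likely to be emitted."""
    ranked = sorted(logits, key=logits.get, reverse=True)
    adjusted = dict(logits)
    for tok in ranked[:top_n]:
        adjusted[tok] -= penalty

    # softmax over the adjusted logits (max-subtraction for stability)
    m = max(adjusted.values())
    exps = {t: math.exp(v - m) for t, v in adjusted.items()}
    total = sum(exps.values())
    probs = {t: e / total for t, e in exps.items()}

    # sample one token according to the adjusted distribution
    r = rng.random()
    cum = 0.0
    for tok, p in probs.items():
        cum += p
        if r < cum:
            return tok
    return ranked[-1]  # guard against floating-point shortfall
```

Real decoders implement this family of ideas as temperature, top-k/top-p filtering, or repetition penalties; the point is only that the final word choice is one tunable knob, which is why purely word-level “AI tells” are unreliable.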
reducing the probability of the top-weighted words the LLM chooses from
My feeling is that a writer who adjusts their word choice to present in a particular way is, by definition, behaving inauthentically. I would characterize such writing as “slop” even if it’s human-made, because it was still heavily influenced by how LLMs “write”.
Put another way: I don’t believe that “not worrying about appearing as an LLM” is “giving up”; I think it’s a recognition that an LLM is not capable of fighting you in the first place. If you, a creative soul, allow fear of “coming off a certain way” (ANY way) to determine how you write, you have already lost.
To clarify, that quote was not what I am suggesting; rather, it’s part of the bar to be overcome.