Technology
Which posts fit here?
Any news that is at least tangentially connected to technology, social media platforms, information technology, or tech policy.
Post guidelines
[Opinion] prefix
Opinion (op-ed) articles must use the [Opinion] prefix before the title.
Rules
1. English only
The title and associated content have to be in English.
2. Use original link
The post URL should be the original link to the article (even if paywalled), with archived copies left in the body. This helps avoid duplicate posts when cross-posting.
3. Respectful communication
All communication has to be respectful of differing opinions, viewpoints, and experiences.
4. Inclusivity
Everyone is welcome here regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, education, socio-economic status, nationality, personal appearance, race, caste, color, religion, or sexual identity and orientation.
5. Ad hominem attacks
Any kind of personal attacks are expressly forbidden. If you can't argue your position without attacking a person's character, you already lost the argument.
6. Off-topic tangents
Stay on topic. Keep it relevant.
7. Instance rules may apply
If something is not covered by the community rules but is against the lemmy.zip instance rules, those rules will be enforced.
Companion communities
!globalnews@lemmy.zip
!interestingshare@lemmy.zip
If someone is interested in moderating this community, message @brikox@lemmy.zip.
I have honestly not heard of this behavior, and I myself certainly don't do this. I wouldn't determine "what's worth reading" in the middle of reading, but well before I start. For example, if a piece is published somewhere I trust, or a friend recommended it, or, say, it was posted in a Lemmy community known to have good moderation.
Like I said, I understand why an artist would have a desire to present as authentic, but that is an unwinnable game because:
If you are determining it before you start reading, you are doing so on the basis of reading that others have done, and your trust in their judgment. Going by the source is not infallible; take the Ars Technica scandal as an example, where an AI-hallucinated quote was falsely attributed to someone. As for Lemmy, many articles from lesser-known sources that seem to use AI get positive attention here and do not get removed, I think mostly because they are delivering a political message that is well regarded. Identifying and removing them without cutting out all blog content (which, given the above example, clearly is not even enough) requires someone to read them, evaluate them, and make the case that they should be removed, and that case has to be strong enough to overcome pushback from people biased to believe the articles are authentic, or even suspicious that the real reason for removal is political.
It is a nontrivial task, but it's one that anyone could contribute to by developing their own sense of what is and isn't AI output, and reading far enough to make their own judgment. It's important to be able to do that without a full read, because the main threat of AI content is unlimited scaling.
What I'm calling for is not an automated system, but for people to develop skill at manually and dynamically identifying and signalling humanity in a way that resists automated systems. Attempting to do this is not inauthentic, just like trying to write poems in defined styles is not inauthentic.
Well, Lemmy is not one thing; your instance, for example, is explicitly in favor of boosting AI-generated content. So that behavior is what I would expect if I had an account there. I personally wouldn't go there expecting to see links to human-made content.
I don't believe it's possible for human writers to write both authentically and in a way that is coded to verify they are human (as the article discusses) that an LLM couldn't eventually come to replicate. I also don't believe it's possible for an LLM to write from their unique perspective. Therefore, I believe the strongest method for verifying one's own human-ness is to write from one's own unique perspective.
I think I would understand your perspective better if you gave an example or two of what signals could be used?
What I'm talking about is posted across all popular instances and is not specific to db0, and imo there is a very big difference between content that is explicitly AI and AI blog posts that portray themselves as being human written. I support the existence of a space for the former while opposing the latter.
I agree, but it is possible to adjust your personal filter to let your unique signature be expressed in different ways, and it's possible to write with your audience in mind without being inauthentic. Throwing up your hands and giving up is not the right approach, even though it's a hard problem that by its nature resists specific actionable answers. The article gives an example of a contrived way AI can attempt to falsify such a signal:
There are lots more, such as reducing the probability of the top-weighted words the LLM chooses from in the last stage of its process. But this level of extra attention to automated signaling isn't always applied, and I believe it can be defeated by developed intuition if people will bother to try to develop it. From the writing side, the approach should be to put more of yourself into more parts of what you write, to try to match the intuitions of readers, and to reduce efforts to converge on concepts of correct writing that could be in conflict with this.
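The "reducing the probability of the top-weighted words" idea can be sketched as a post-processing step over a next-token distribution. This is a toy illustration, not code from the article; the function name, penalty factor, and probability values are all invented for the example:

```python
def downweight_top_tokens(probs, penalty=0.5, top_k=2):
    """Multiply the top_k most probable tokens by `penalty`,
    then renormalize so the distribution still sums to 1."""
    ranked = sorted(probs, key=probs.get, reverse=True)
    adjusted = dict(probs)
    for token in ranked[:top_k]:
        adjusted[token] *= penalty
    total = sum(adjusted.values())
    return {t: p / total for t, p in adjusted.items()}

# Toy next-token distribution (hypothetical values, for illustration only).
probs = {"delve": 0.4, "explore": 0.3, "look": 0.2, "poke": 0.1}
adjusted = downweight_top_tokens(probs)
```

After this step the formerly dominant candidates carry less probability mass, so sampled text drifts away from the model's default "most probable" phrasing, which is one way such a signal could be faked.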
My feeling is that a writer who adjusts their word choice to present a particular way is definitionally behaving inauthentically. I would characterize such writing as "slop" even if it's human made, because it was still heavily influenced by how LLMs "write".
Put another way: I don't believe that "not worrying about appearing as an LLM" is "giving up", I think it's a recognition that an LLM is not capable of fighting you in the first place. If you, a creative soul, allow fear of "coming off a certain way" (ANY way) to determine how you write, you have already lost.
To clarify, that quote was not what I am suggesting, rather it's part of the bar to be overcome.