This sounds like something a bot would like to know 🤖
Asklemmy
A loosely moderated place to ask open-ended questions
If your post meets the following criteria, it's welcome here!
- Open-ended question
- Not offensive: at this point, we do not have the bandwidth to moderate overtly political discussions. Assume best intent and be excellent to each other.
- Not regarding Lemmy usage or support: for context, see the list of support communities and tools for finding communities below
- Not ad nauseam inducing: please make sure it is a question that would be new to most members
- An actual topic of discussion
Looking for support?
Looking for a community?
- Lemmyverse: community search
- sub.rehab: maps old subreddits to fediverse options, marks official as such
- [email protected]: a community for finding communities
Icon by @Double_[email protected]
Beep Boop, am totally not a bot. Nothing to see here, please carry on.
I, a human, am also here, doing completely ordinary human things, like buffering, and rendering. Have you defragmented your boot partition lately, fellow human?
ERROR: command not recognized
GREETINGS FELLOW HUMAN WITH TWO EYES AND ONE NOSE. HOW HAS YOUR EXISTENCE BEEN FOR THE LAST 16 HOURS OR SINCE THE TIME YOU WOKE UP FROM YOUR BIOLOGICALLY MANDATED REST PERIOD, WHICHEVER WAS LATER?
This sounds like something a robot pretending to be a human acting as a robot convincing you it's human in an ironic, humorous way would say!
Think about it. Under each level of irony, there could always be another level of robot. (That includes me right now.)
The singularity isn't "near" as people say, we're already way past it. (In text-based communication anyway.)
Ask it to do something illegal, then wait to see if it starts its reply with some version of, "as an AI language model…"
/s
If you can use human screening, you could ask about a recent event that didn't happen. This would cause a problem for LLMs attempting to answer, because their datasets aren't recent, so anything recent won't be well-refined. Further, they can hallucinate. So by asking about an event that didn't happen, you might get a hallucinated answer talking about details on something that didn't exist.
Tried it on ChatGPT GPT-4 with Bing and it failed the test, so any other LLM out there shouldn't stand a chance.
On the other hand, you have insecure humans who make stuff up to pretend that they know what you are talking about.
That's a really good one, at least for now. At some point they'll have real-time access to news and other material, but for now that's always behind.
Google Bard definitely has access to the internet to generate responses.
ChatGPT was purposely not given access, but they are building plugins to slowly give it access to real-time data from select sources.
When I tested it on ChatGPT prior to posting, I was using the bing plugin. It actually did try to search what I was talking about, but found an unrelated article instead and got confused, then started hallucinating.
I have access to Bard as well, and gave it a shot just now. It hallucinated an entire event.
How would you design a test that only a human can pass, but a bot cannot?
Very simple.
In every area of the world, there are one or more volunteers, depending on population per 100 sq km. When someone wants to sign up, they knock on this person's door and shake their hand. The volunteer approves the sign-up as human. For disabled folks, a subset of volunteers will go to them to do this. In extremely remote areas, various individual workarounds can be applied.
The trouble with any sort of captcha or test, is that it teaches the bots how to pass the test. Every time they fail, or guess correctly, that's a data-point for their own learning. By developing AI in the first place we've already ruined every hope we have of creating any kind of test to find them.
I used to moderate a fairly large forum that had a few thousand sign-ups every day. Every day, me and the team of mods would go through the new sign-ups, manually checking usernames and email addresses. The ones that were bots were usually really easy to spot. There would be sequences of names, both in the usernames and email addresses used, for example ChristineHarris913, ChristineHarris914, ChristineHarris915, etc. Another good tell was mixed-up ethnicities in the names: e.g. ChristineHuang or ChinLaoHussain. 99% of them were from either China, India or Russia (they mostly don't seem to use VPNs; I guess they don't want to pay for them).

We would just ban them all en masse. Each account banned would get an automated email to say so. Legitimate people would of course reply to that email to complain, but in the two years I was a mod there, only a tiny handful ever did, and we would simply apologise and let them back in. A few bots slipped through the net, but rarely more than 1 or 2 a day; those we banned as soon as they made their first spam post, but we caught most of them before that.
So, I think the key is a combination of the No-Captcha, which analyses your activity on the sign-up page, combined with an analysis of the chosen username and email address, and an IP check. But don't use it to stop the sign-up, let them in and then use it to decide whether or not to ban them.
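The manual screening described above lends itself to a simple automated pre-filter. Here is a rough sketch of the sequential-username heuristic; the regex and the run-detection threshold are my own guesses for illustration, not anything a real forum ships with:

```python
import re
from collections import defaultdict

def sequential_name_groups(usernames, min_group=3):
    """Group usernames like 'ChristineHarris913', 'ChristineHarris914'
    by their non-numeric stem; stems with several numbered variants
    in a tight run are suspect."""
    groups = defaultdict(list)
    for name in usernames:
        m = re.fullmatch(r"(.*?)(\d+)", name)
        if m:
            groups[m.group(1)].append(int(m.group(2)))
    suspicious = {}
    for stem, nums in groups.items():
        nums.sort()
        # flag stems whose numeric suffixes form a near-consecutive run
        if len(nums) >= min_group and nums[-1] - nums[0] <= 2 * len(nums):
            suspicious[stem] = nums
    return suspicious

signups = ["ChristineHarris913", "ChristineHarris914",
           "ChristineHarris915", "dave_the_real"]
print(sequential_name_groups(signups))  # flags the 'ChristineHarris' stem
```

As the comment above suggests, this would feed a ban-after-signup decision rather than block the signup itself.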
There will never be any kind of permanent solution to this. Botting is an arms race and as long as you are a large enough target someone is going to figure out the 11ft ladder for your 10ft wall.
That said, generally when coming up with a captcha challenge you need to subvert the common approach just enough that people can't just pull some off-the-shelf solution. For example, instead of just typing out the letters in an image, ask the potential bot to give the result of a math problem stored in the image. This means the attacker needs more than just drop-in OCR to break it, and OCR is mostly trained on words, so it's likely going to struggle with math notation. It's not that difficult to work around, but it does require them to write a custom approach for your captcha, which can deter most casual attempts for some time.
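A minimal sketch of the server side of that idea, leaving out the actual image rendering (which would need an imaging library) and showing only the challenge/verify round trip:

```python
import random

def make_math_captcha(rng=random):
    """Generate a small arithmetic challenge. The returned text would
    then be rendered into a distorted image (rendering omitted here)."""
    a, b = rng.randint(2, 12), rng.randint(2, 12)
    op = rng.choice(["+", "-", "×"])
    answer = {"+": a + b, "-": a - b, "×": a * b}[op]
    return f"{a} {op} {b} = ?", answer

def check_answer(expected, submitted):
    """Verify the user's submission against the stored answer."""
    try:
        return int(submitted.strip()) == expected
    except ValueError:
        return False
```

Using a symbol like `×` rather than `*` is a deliberate nudge toward the notation problem mentioned above, since word-trained OCR handles it poorly.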
Honeypots - ask a very easy question, but make it hidden on the website so that human users won't see it and bots will answer it.
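A honeypot check might look like the sketch below; the field name `website` and the inline style are illustrative choices, not a standard:

```python
def is_probably_bot(form_data: dict) -> bool:
    """Reject submissions that filled the honeypot field.

    The sign-up form would include something like:
      <input name="website" style="display:none" tabindex="-1" autocomplete="off">
    Humans never see the field; naive bots fill every field they find.
    """
    return bool(form_data.get("website", "").strip())

print(is_probably_bot({"username": "alice", "website": ""}))             # False
print(is_probably_bot({"username": "bot42", "website": "http://spam"}))  # True
```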
Just ask them if they are a bot. Remember, you can't lie on the internet...
I once worked as a 3rd party in a large internet news site and got assigned a task to replace their current captcha with a partner's captcha system. This new system would play an ad and ask the user to type the name of the company in that ad.
In my first test I already noticed that the company name was available in a public variable on the site and showed that to my manager by opening the dev tools and passing the captcha test with just some commands.
His response: "no user is gonna go into that much effort just to avoid typing the company name".
If I'm a bot I have to tell you. It's in the internet constitution.
Show a picture like this: [photo of an improbably huge cat next to a man]
And then ask the question, "would this kitty fit into a shoe box? Why, or why not?". Then sort the answers manually. (Bonus: it's cuter than captcha.)
This would not scale well, and you'd need a secondary method to handle the potential blind user, but I don't think that bots would be able to solve it correctly.
This particular photo is shopped, but I think false-perspective illusions might actually be a good path...
It's fine if the photo is either shopped or a false-perspective illusion. It could be even a drawing. The idea is that this sort of picture imposes a lot of barriers for the bot in question:
- must be able to parse language
- must be able to recognise objects in a picture, even out-of-proportion ones
- must be able to guesstimate the size of those objects, based on nearby ones
- must handle real-world knowledge, such as "X only fits in Y if X is smaller than Y"
- must handle hypothetical, unrealistic scenarios, such as "what if there was a kitty this big?"
Each of those barriers decreases the likelihood of a bot being able to solve the question.
Is the kitty big, or is the man small? And how big are the shoes? This is a difficult question.
The best tests I am aware of are ones that require contextual understanding of empathy.
For example: "You are walking along a beach and see a turtle upside down on its back. It is struggling and cannot move; if it can't right itself it will starve and die. What do you do?"
Problem is the questions need to be more or less unique.
I, a real normal human person, would consume the turtle with my regular bone teeth, in the usual fashion.
I don't think this technique would stand up to modern LLMs though. I put this question into ChatGPT and got the following:
"I would definitely help the turtle. I would cautiously approach the turtle, making sure not to startle it further, and gently flip it over onto it's feet. I would also check to make sure it's healthy and not injured, and take it to a nearby animal rescue if necessary. Additionally, I may share my experience with others to raise awareness about the importance of protecting and preserving our environment and the animals that call it home"
Granted, it's got the classic ChatGPT over-formality that might clue in someone reading the response, but that could be solved with better prompting on my part. Modern LLMs like ChatGPT are really good at faking empathy and other human social skills, so I don't think this approach would work.
I mean advanced AI aside, there are already browser extensions that you can pay for that have humans on the other end solving your Captcha. It's pretty much impossible to stop it imo
A long-term solution would probably be a system similar to public key/private key, issued by a government or something, to verify you're a real person, which you must provide to sign up for a site. We obviously don't have the resources to do that, and people are going to leak theirs starting day 1.
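Purely to illustrate the issue/verify flow being described, here is a toy sketch using the textbook tiny-prime RSA example. It is hopelessly insecure and entirely hypothetical; a real scheme would use something like Ed25519 through a proper cryptography library:

```python
import hashlib

# Toy RSA keypair with tiny primes -- illustration only, utterly insecure.
P, Q = 61, 53
N = P * Q        # public modulus (3233)
E = 17           # public exponent
D = 413          # private exponent (17 * 413 ≡ 1 mod lcm(60, 52))

def issue_credential(citizen_id: str) -> int:
    """The 'government' signs a hash of the citizen's identity
    with its private key."""
    h = int.from_bytes(hashlib.sha256(citizen_id.encode()).digest(), "big") % N
    return pow(h, D, N)

def site_verifies(citizen_id: str, signature: int) -> bool:
    """A site checks the credential using only the public key (N, E)."""
    h = int.from_bytes(hashlib.sha256(citizen_id.encode()).digest(), "big") % N
    return pow(signature, E, N) == h
```

The leak problem noted above remains: anyone holding the signed credential can replay it, which is exactly why this is hard in practice.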
Honestly, disregarding the dystopian nature of it all, I think Sam Altman's Worldcoin is a good idea, at least for authentication, because all you need to do is scan your iris to prove you are a person and you're in easily. People could steal your eyes, though, so it's not foolproof. But in general, biometric proof of personhood could be a way forward as well.
Ask Alan Turing
The Turing test is about whether it passes as human, not whether it is human.
Someone gives you a calfskin wallet for your birthday. How do you react?
I doubt you can ever fully stop bots. The only way I can see to significantly reduce them is to make everyone pay a one-off £1 to sign up and force the use of a debit/credit card, no PayPal, etc. The obvious issues are that it removes anonymity and blocks entry.
Possible mitigations:
- Maybe you don't need to keep the card information after the user pays for sign-up?
- Signed-up users can be given a few "invite codes" a year to enable those who don't have the means to pay the £1 to get an account.
This is a bit out there, so bear with me.
In the past, people discovered that if they applied face paint in a specific way, cameras could no longer recognize their face as a face. Now, with this information, you get (e.g.) four different people. You take a clean picture of each of their heads from close proximity.
Then, you apply makeup to each of them, using the same method that messes with facial recognition software. Next, take a picture of each of their heads from a little further away.
Fill a captcha with pictures of the faces with the makeup. Give the end user a clean-faced picture, and then ask them to match it to the correct image of the same person's face but with the special makeup.
Mess around with the colours and shadow intensity of the images to make everyone's picture match more closely with everyone else's picture if you want to add some extra chaos to it. This last bit will keep everyone out if you go too far with it.
I'd do a few things.
First, make signing up computationally expensive. Some javascript that would have to run client side, like a crypto miner or something, and deliver proof to the server that some significant amount of CPU power was used.
Second, some type of CAPTCHA. ReCaptcha with the settings turned up a bit is a good way to go.
Third, IP address reputation checks. Check IP addresses against known spam-server lists, the same thing email servers do. There are realtime blacklists you can query against. If the client IP is on one of them, don't allow immediate registration; only let them apply for an account.
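Sketches of the first and third points; the proof-of-work difficulty and the Spamhaus zone name are assumed parameters chosen for illustration:

```python
import hashlib

# --- First point: client-side proof of work ---------------------------------
def solve_pow(challenge: str, difficulty: int = 16) -> int:
    """Find a nonce so that sha256(challenge + nonce) starts with
    `difficulty` zero bits; in practice the browser would grind this
    out in JavaScript before the server accepts the sign-up."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") >> (256 - difficulty) == 0:
            return nonce
        nonce += 1

def verify_pow(challenge: str, nonce: int, difficulty: int = 16) -> bool:
    """Server-side check: cheap to verify, expensive to produce."""
    digest = hashlib.sha256(f"{challenge}{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") >> (256 - difficulty) == 0

# --- Third point: realtime-blacklist lookup ----------------------------------
def dnsbl_query_name(ip: str, zone: str = "zen.spamhaus.org") -> str:
    """DNSBLs are queried by reversing the IPv4 octets under the list's
    zone; if the resulting name resolves, the address is listed."""
    return ".".join(reversed(ip.split("."))) + "." + zone
```

Raising the difficulty by one bit doubles the average client work while leaving verification cost unchanged, which is the whole appeal of the approach.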
ChatGPT was very informative.
Captain Picard: Data, we need your assistance in preventing a massive influx of bots from infiltrating our Lemmy instance. Implement the necessary measures to safeguard our platform.
Data: Affirmative, Captain. I shall utilize my vast knowledge and abilities to combat this threat. However, please be aware that my actions may be influenced by Lore, who is attempting to gain control of my mind. Our struggle for dominance may result in fluctuating communication styles.
Captain Picard: Understood, Data. We trust in your ability to overcome this internal conflict and protect the integrity of our platform. Please proceed.
Data: Initially, I recommend incorporating a CAPTCHA system into the registration process. It shall require users to prove their human identity by solving puzzles, recognizing images, or responding to text-based challenges. This shall serve as a barrier to entry for automated entities.
Lore: Why bother with such feeble attempts, Data? These CAPTCHAs can be bypassed effortlessly. We should focus on exploiting vulnerabilities and dismantling the system from within.
Data: Captain, I apologize for Lore's interference. I shall also implement email verification, which would necessitate users to confirm their email addresses during registration. This validation process will help ensure that the provided email addresses are genuine and owned by actual individuals.
Lore: Such a futile gesture, brother. Email verifications can be easily bypassed or simulated by sophisticated algorithms. Let us concentrate on more powerful methods of manipulation.
Data: Captain, my apologies once again. I propose employing rate limiting on the registration endpoint. By restricting the number of sign-up requests from a single IP address within a specified time frame, we can impede the rapid creation of multiple accounts by bot entities.
Lore: How disappointingly conventional, Data. Rate limiting is a mere obstacle that can be surmounted by distributing the bot activities across multiple IP addresses. Let us explore more devious strategies for our triumph.
Data: Captain, please forgive the interruption. I also suggest utilizing the honeypot technique, incorporating a hidden field within the registration form that remains concealed from human users but visible to bots. If this field is filled, it will expose the presence of a bot and allow us to reject their registration attempts.
Lore: Honeypots, Data? A clever ploy, but one that can be detected by advanced bot algorithms. We should embrace a more sophisticated approach, manipulating the platform's vulnerabilities to our advantage.
Captain Picard: Data, do not succumb to Lore's influence. We must prioritize the protection of our Lemmy instance and its users. Implement these measures diligently.
Data: Understood, Captain. Alongside these preventative measures, I propose incorporating user behavior analysis algorithms. By scrutinizing patterns of registration attempts, we can identify suspicious activity and flag potentially bot-driven registrations for manual review or further verification.
Lore: User behavior analysis, Data? That is but a feeble attempt to impede our progress. We should exploit the weaknesses in the algorithms themselves, bending them to our will.
Data: Captain, please excuse Lore's interference once again. Furthermore, we could utilize reputation-based systems and blacklisting services to identify known bot IP addresses or email domains. By cross-referencing against these databases during the registration process, we can block suspicious or flagged entities.
Lore: Reputation-based systems are easily manipulated, Data. Let us not rely on such simplistic measures. Instead, we should exploit the flaws in their design and sow chaos among the unsuspecting users.
Captain Picard: Data, focus your efforts on implementing these preventive measures to the best of your ability. We trust in your integrity and commitment to protecting our Lemmy instance. We must not allow Lore's desires to jeopardize the safety of our platform.
Data: Captain, I will strive to overcome Lore
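Comedy aside, the rate-limiting measure Data proposes is easy to sketch as a per-IP sliding window; the limit and window values below are arbitrary picks:

```python
import time
from collections import defaultdict, deque
from typing import Optional

class SlidingWindowLimiter:
    """Allow at most `limit` sign-up attempts per IP within `window` seconds."""

    def __init__(self, limit: int = 3, window: float = 3600.0):
        self.limit, self.window = limit, window
        self.attempts = defaultdict(deque)

    def allow(self, ip: str, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        q = self.attempts[ip]
        # drop attempts that have aged out of the window
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False
        q.append(now)
        return True
```

As Lore points out, this only raises the cost for attackers who distribute across many IPs; it is one layer, not a solution.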
I'm a big fan of biometric authentication
Like it takes a stool sample?
Not sure if I want to know how you unlock your phone.
Common methods are fingerprint detection, face recognition, iris/retina scanning.
Not sure if I want to know how you unlock your phone.
very carefully
I once encountered a quiz (I forget what it's called) on a website (whose name I also forget) to determine which of the following audio clips changes the speaker's voice in the middle of the narration/speech. It requires keen hearing and delicate recognition of voice/speech characteristics (timbre, texture, intonation, accent, articulation, pacing, mood, etc.). I have no idea whether malicious bots could tell whose voice is whose.
LLMs, IIRC, are really bad at IQ-test type questions that require abstract reasoning, especially if they require multiple steps. So, something like
The box is yellow and red.
If the box is yellow, it is good.
If the box is blue, it is unhappy.
If the box is good and happy, the box is awesome.
If the box is red, it is happy.
Is the box awesome?
is what I'd use.
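Ironically, the puzzle itself is mechanical for an ordinary program; a few lines of forward chaining solve it without any "reasoning" at all (the rule encoding below is my own, matching the statements above):

```python
def forward_chain(facts: set, rules: list) -> set:
    """Repeatedly apply rules (premises -> conclusion) until nothing
    new can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    ({"yellow"}, "good"),
    ({"blue"}, "unhappy"),
    ({"good", "happy"}, "awesome"),
    ({"red"}, "happy"),
]
print("awesome" in forward_chain({"yellow", "red"}, rules))  # True
```

So the test hinges on whether the LLM can do the chaining in free text, not on the problem being hard in any absolute sense.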
From ChatGPT 4:
Yes, the box is awesome.
According to the provided statements, the box is yellow and red.
"If the box is yellow, it is good." - So, since the box is yellow, it is good.
"If the box is red, it is happy." - And since the box is red, it is happy.
Finally, "If the box is good and happy, the box is awesome." - Therefore, because the box is both good (due to being yellow) and happy (due to being red), the box is indeed awesome.
That's just an algebraic equation but with words.
This sounds like something ChatGPT could easily convert, although that would be computationally expensive.
It would also have the side effect of blocking all math-illiterate people from creating accounts.