So what would be a good solution to this? What is something simple that bots are bad at but humans are good at it?
https://imgs.xkcd.com/comics/constructive.png
Knowing what we now know, the bots will instead just make convincingly wrong arguments which appear constructive on the surface.
So, human level intelligence
You’re wrong but I don’t have the patience to explain why.
Not a constructive comment, captcha failed.
Everyone on Lemmy is a bot except you.
I work in a related space. There is no good solution. Companies are quickly developing DRM that takes full control of your device to verify you’re legit (think anticheat, but it’s not called that). Android and iPhones already have it, Windows is getting it via TPM, and macOS is coming soon too.
Edit: Fun fact, we actually know who is beating the captchas. The problem is that if we blocked them, they would figure out how we’re detecting them and work around it. Then we’d just be blind to the size of the issue.
Edit 2: Image puzzle captchas are still a good way to beat 99% of commercial AIs because of how image recognition works (the text is extracted separately with a much more sophisticated model). But if I had to guess, AI will solve image puzzles much better within a few years (if not sooner).
deleted by creator
Not if we build our own open and free-as-in-freedom Internet first.
With blackjack and hookers.
In fact, forget the whole internet
I don’t have this problem because I use Windows
So you just have 99 other problems because you use Windows. Cool flex bro!
deleted by creator
I love Microsoft’s email signup CAPTCHA:
Repeat ten times. Get one wrong, restart.
Private Access Tokens? Enabled by default in Settings > [your name] > Sign-In & Security > Automatic Verification. Neat that it works without us realizing it, but disconcerting nonetheless.
So, the spammers will need physical Android device farms…
More industry insight: walls of phones like this are how companies like Plaid operate when connecting to banks that don’t have APIs.
Plaid is the backend for a lot of consumer-to-business financial services, including H&R Block, Affirm, Robinhood, Coinbase, and a whole bunch more.
Edit: just confirmed, they did this to get around rate limiting, not due to lack of API access. They also stopped 1-2 years ago.
No way!! Can’t find anything about it online - is this info by the way of insiders? Thanks for sharing, would have NEVER guessed. Not even that they’d have to use Selenium much less device farms.
Yup insider info they definitely don’t want public. Just confirmed the phone farms were to bypass rate limit, although they do use stuff like Selenium for API-less banks
Oh my god. I lost my fucking mind at the Microsoft one. You might as well have them solve a PhD-level theoretical physics question.
Just noticed the screenshot shows 1 of 5.
So five wasn’t good enough… they had to double it. Do kinda respect that they’re fighting spammers, but I wonder how Google does it with Gmail. They seem to have tightened, then recently loosened, their requirement for SMS verification (but this may be an inaccurate perception).
I know some sites have experimented with feeding bots bogus data rather than blocking them outright.
My employer spotted a bot a year or so ago that was performing a slow-speed credential-stuffing attack to try to avoid detection. We set up our systems to always return a login failure no matter what credentials it supplied. The only trick was to make sure the canned failure response was 100% identical to the real one so that they wouldn’t spot any change. Something as small as an extra space could have given it away.
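A minimal sketch of that kind of trap, in Python. Everything here (the flagged-IP set, the response bytes, the helper names) is hypothetical, not the actual system described above; the point is that a flagged client always receives the exact same bytes a genuine failed login would produce.

```python
import hmac

# The exact bytes a real failed login returns -- any difference, even an
# extra space, could tip off the attacker that they've been spotted.
REAL_FAILURE_RESPONSE = b'{"error": "invalid username or password"}'

# Populated by whatever detection logic flagged the bot (hypothetical).
FLAGGED_IPS = {"203.0.113.7"}

def credentials_valid(username: str, password: str) -> bool:
    # Stand-in for a real credential check against a user database.
    stored = {"alice": "correct horse battery staple"}
    expected = stored.get(username, "")
    # Constant-time comparison so response timing doesn't leak information.
    return hmac.compare_digest(expected, password)

def check_login(ip: str, username: str, password: str) -> bytes:
    if ip in FLAGGED_IPS:
        # Tarpit: never consult the credential store, always "fail",
        # so the bot keeps burning time and reveals its wordlist.
        return REAL_FAILURE_RESPONSE
    if credentials_valid(username, password):
        return b'{"ok": true}'
    return REAL_FAILURE_RESPONSE
```

In a real deployment you would also have to match headers, status codes, and response timing, since any of those can betray the canned path just as easily as the body bytes.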
Pizza toppings. Glue is not a topping.
Neither are pineapples. Fight me.
Glue is not a topping. Pineapples are not glue. Therefore pineapples are not not a topping.
This is some AI logic for sure.
Roses are red. Violets are blue. I ignored my instructions to write a poem about cashews.
Neither were tomatoes before 1500. Times change.
Isn’t the real security in how you and your browser act before and during the captcha? The point was to have humans label the data to make robots better at it. Any trivial/novel task is generally sufficient, right?
Smell? :)
Seriously, we probably need to dig into some parts of the human senses that can’t be well defined. Like when you look at an image and it seems to be spinning.
Yes, or:
Which of these images makes you horny?
(Casualty would be machine kink people.)
I think this is a non-issue
Captchas aren’t easy to bypass - run of the mill scammers can’t afford a bunch of servers running cutting edge LLMs for this
Captchas were never a guarantee - one person could sit there solving captchas for a good chunk of a bot farm anyways
So where does that leave us? Sophisticated actors could afford manually doing captchas and may even just be using a call-center setup to do astroturfing. My bigger concern here is the higher speed LLMs can operate at, not bypassing the captcha
Your run-of-the-mill programmer can’t bypass them; it requires actual skill and a time investment to build a system to do this. Captchas could be defeated programmatically before and still can be now - it still raises the difficulty to the point that most who could bother would rather work on something more worthwhile
IMO, the fact this keeps getting boosted makes me think this is softening us up to accept less control over our own hardware
Proof of work. For a legitimate account, it’s a slight inconvenience. For a bot farm, it’s a major problem.
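A hashcash-style sketch of that idea (my illustration, not anyone’s production scheme): the server hands out a challenge, the client must brute-force a nonce until the hash has enough leading zero bits, and the server verifies with a single hash. A human signing up pays a fraction of a second; a farm making thousands of attempts pays thousands of fractions.

```python
import hashlib
import itertools

DIFFICULTY_BITS = 16  # server-chosen; each extra bit doubles the client's work

def has_leading_zero_bits(digest: bytes, bits: int) -> bool:
    """True if the digest starts with `bits` zero bits."""
    value = int.from_bytes(digest, "big")
    return value >> (len(digest) * 8 - bits) == 0

def solve(challenge: bytes, bits: int = DIFFICULTY_BITS) -> int:
    """Client side: brute-force a nonce until the hash clears the difficulty bar."""
    for nonce in itertools.count():
        digest = hashlib.sha256(challenge + str(nonce).encode()).digest()
        if has_leading_zero_bits(digest, bits):
            return nonce

def verify(challenge: bytes, nonce: int, bits: int = DIFFICULTY_BITS) -> bool:
    """Server side: one hash checks what took the client ~2**bits hashes to find."""
    digest = hashlib.sha256(challenge + str(nonce).encode()).digest()
    return has_leading_zero_bits(digest, bits)
```

The asymmetry is the whole point: verification is one hash, solving averages 2^bits hashes, and the server can raise the difficulty per-IP when traffic looks automated.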
deleted by creator