Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.
Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
Found an interesting take on YouTube, of all places. Her argument can be summarized (with high compression losses) as “AI companies and technologies are bad for basically all the reasons that non-cultist critics say, but trying to shame and argue people out of using them entirely is less effective than treating them as a normal tool with limitations and teaching people how to limit the harm.” She makes the analogy to drug policy.
I think she makes a very compelling argument, and I’m still digesting it a bit because I definitely had the knee-jerk urge to dismiss her as an insider shill. But especially towards the end, as she talks about how the AI industry targets low-literacy users as ideal customers (because the more you know about these tools, the less likely you are to actually use them), I found myself agreeing more than not. I do wish she had addressed the dangers of cognitive offloading more, since being mindful of which tasks you’re letting the computer do for you is a pretty significant part of minimizing those harms, especially for students and some professionals who face a strong incentive to just coast by on slop if they can get away with it.
I feel like there’s a difference between alcohol and drugs, things people can make in their back yard, and AI, which requires a first-world country’s entire economy to be oriented towards it to function… a difference in what we should be required to accept.
I don’t buy the general argument about shame either. We teach children to shit in toilets and not on sidewalks. I see rampant AI use as just another form of disgusting public indecency, and the faster we bring shame in to remedy it, the better.
I think that’s 100% correct and also it’s year 3 of this nonsense and I cannot be fucked. My response to genAI in any context now is to scream and start doing jumping jacks.
Imagine the drug policy context, but then also half of your colleagues are doing meth every time you see them, people say shit like “everyone does meth, those who say they don’t are lying”, and meth is a trillion-dollar industry that has been telling you “meth is the future” for years. You’d be much less inclined to argue calmly against meth and much more inclined to start screaming and jumping.
Sounds kind of like the Baldur Bjarnason strategy but for your coworkers instead of your boss.
I can see the value of someone with a critical understanding diving into the technology, so they can talk others down from the ledge.
But you also need the social pressure to maintain some slop-free spaces. Not everyone can be asked to accommodate recovering slopaholics.
Found an interesting sneer that compares the AI bubble to the Great Leap Forward.
Also discovered an “anti-distill” program through the article, which aims to sabotage attempts to replace workers with AI “agents”.
Kind of a pseudo-sneer; the author writes a blog on “machine learning engineering, compound AI systems, search and information retrieval, and recsys — exploring machine learning, LLM agents, and data science insights from startups to enterprises.”
Here’s the discussion on the red site: https://lobste.rs/s/nmhkdl/ai_great_leap_forward
Plenty of people suspect the text is LLM-generated. Pangram disagrees, fwiw.
I do think there are some interesting ideas about how humans will “defend” themselves against being replaced by bots, and about how the critical info in a company is seldom in the source code but in the customer relationships, sales, etc.
It seems vaguely AI-flavored to me, inasmuch as it leans too hard on contrasts (“it’s not X, it’s Y”) and is way too verbose. Also, it’s obviously wrong, at least in my experience: middle managers aren’t the sparrows, individual contributors (especially juniors) are.
Maybe that’s just a symptom of a person reading too much AI text and thinking a good tweet would make a great substack.
surprise level == 0 (or rather, short odds on the prediction markets)
El Reg: OpenAI puts Stargate UK on ice, blames energy costs and red tape
I run an email server for myself, and every once in a while the UCE (unsolicited commercial email) starts leaking through until I have a few training examples to feed the filter. In the last couple weeks I noticed that basically all of the escapees look like fancy Claude output telling me that I should be enticed by Costco gift cards and free chicken sandwiches.
What I suppose this means is that if you use these tools to generate material in that same snappy output template (“but seriously”), you will nonetheless eventually reach aesthetic convergence with meaningless spam. Is there a term for this yet? “Slop-ratchet” is the one that sprang immediately to mind, but I am sure someone else noticed this tendency long before I did.
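(Tangent, for anyone wondering what “training examples to feed the filter” means mechanically: a minimal sketch of the naive Bayes scoring that classic mail filters build on. The class, tokenizer, and toy corpus below are all hypothetical; real filters like SpamAssassin or rspamd do far more sophisticated tokenization and scoring.)

```python
from collections import Counter
import math

def tokens(text):
    # Toy tokenizer; real filters normalize headers, HTML, etc.
    return text.lower().split()

class BayesFilter:
    def __init__(self):
        self.spam_tokens = Counter()
        self.ham_tokens = Counter()
        self.n_spam = 0
        self.n_ham = 0

    def train(self, text, is_spam):
        # "Feeding it a training example" just updates these counts.
        if is_spam:
            self.spam_tokens.update(tokens(text))
            self.n_spam += 1
        else:
            self.ham_tokens.update(tokens(text))
            self.n_ham += 1

    def spam_probability(self, text):
        # Naive Bayes log-odds with add-one smoothing, so unseen
        # tokens don't zero out the whole score.
        log_odds = math.log((self.n_spam + 1) / (self.n_ham + 1))
        spam_total = sum(self.spam_tokens.values()) + 2
        ham_total = sum(self.ham_tokens.values()) + 2
        for t in tokens(text):
            p_spam = (self.spam_tokens[t] + 1) / spam_total
            p_ham = (self.ham_tokens[t] + 1) / ham_total
            log_odds += math.log(p_spam / p_ham)
        return 1.0 / (1.0 + math.exp(-log_odds))

f = BayesFilter()
f.train("free costco gift card click now to claim", is_spam=True)
f.train("agenda for thursday meeting notes attached", is_spam=False)
print(f.spam_probability("claim your free gift card now"))  # high
print(f.spam_probability("thursday meeting agenda"))        # low
```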
Work wants to add that new whiz-bang agentic AI into a scheduling service that I have been tasked with building, but in the dumbest way possible, kind of similar to the Jet’s text-a-pizza-order thing that worked like shit. I need to find an entirely new profession, everyone in software now is fucking deranged.
I need to find an entirely new profession, everyone in software now is fucking deranged.
Mood
It’s bad for me too.
I’m trying to hang in there until I get some healthcare stuff taken care of over the next year or two, but it is getting increasingly difficult. Most of the good people at my job have been driven out, have quit, or have been poached by other (AI) companies.
By this point a majority of the programmers at my job (or at least the ones most active on the mailing lists) are LLM true believers who think that the end times are near. My management chain has explicitly said that LLM programming is required, and that a subsequent increase in “productivity” is expected with it. My department got renamed to something with “AI” in the name. I constantly field questions from people who want me to read a screen full of LLM nonsense, or who push back when I tell them something, claiming that the chatbot said differently.
There’s always some frantic push to adopt “MCP” or “Skills” or whatever the next fad will be without any guidance as to how or why. If I ignore this I get nastygrams from my manager.
And at my last doctor visit I had elevated blood pressure :)
and that a subsequent increase in “productivity” is expected with it.
Oh no… they def will blame the users before blaming the faulty tools. Hope you won’t be the one who gets blamed as a wrecker or something when the promised increase never shows up (or other metrics fall off a cliff).
Up next, when the first agent fails, implement an agent that checks the other agent. Both of these need agents to check for malicious inputs of course. And translation agents.
Circular at work states that the standard laptop we get from Dell has increased in price by 50%, so they’re looking for alternatives.
Glad that I did the major upgrades in 2024, hopefully they will outlast this bullshit
yeah my kid had to get a gaming PC for school (gamedev) and managed to snag a decent rig before prices went parabolic
Claude Mythos… I’m already sick of hearing about it. The self-imposed critihype is insane.
A friend just pointed out that Anthropic are making all this big noise about having an AI that is “too good” at finding bugs and security problems 1 week after the source code for one of their flagship products was leaked to the public and was found to be riddled with security holes… Why would they not use it themselves?
Same as the ~~vague markdown files~~ “skills” that are supposedly going to make all SaaS redundant and finally kill off all the COBOL running on mainframes that (checks notes) IBM have spent hundreds of thousands of man-hours trying to kill over the last 3-4 decades.
Honestly, fuck this shit. Bunch of absolute clowns 🤡 🤡 🤡
So, they are planning to use an AI to fix the security bugs that their AI generates? Good hustle, if a bit obvious.
The fuck is Mythos?
It’s their next model, which they swear isn’t vaporware, but no! It is too dangerous to release into the world, because it’ll find too much insecure code or whatever.
Okay, but like, is it materially different from whatever the current Claude thing is, or did they just pump the size of the matrix?
Probably a markdown file telling it “you are a l33t h4x0r”
Okay but that’s already in Claude

Ia ia Claude! Ph’nglui mglw’nafh Claude Anthr’lyeh wgah’nagl fhtagn! Ia! Ia!
Anthropic’s latest model, which they haven’t released to the public yet since they’re worried it’s gonna fuck up cybersecurity. This thread goes over it a bit.
XCancel link for those of us sick of being badgered to sign up/in
On a more productive note, this feels likely to be tied in with the usual issues of AI sycophancy re: false positive rate. If you ask the model to tell you about security vulnerabilities, it’s never going to tell you there aren’t any, any more than existing scanners will. When I worked for F5 it was not uncommon to have to go down a list of vulnerabilities that someone’s scanner turned up and figure out whether each was actually something that needed a mitigation that could be applied on our box, something that needed to be configured somewhere else in the network (usually on their actual servers), or (most commonly) a false positive, e.g. “your software version would be vulnerable here, which is why it flagged, but you don’t have the relevant module activated, and if an attacker is able to modify your system to enable it you’re already compromised to a far greater degree than this would allow.” And that was with existing tools that weren’t trying to match a pattern and complete a prompt. Given that we’ve seen the shitshow that is Claude Code, I think it’s pretty clear they’re getting high on their own supply, and this announcement ought to be catnip for black hats.
if even half of this is true, it’s really fucking bad lol
I can’t validate any of the internal stuff, but the attitude of layering manual solutions and mitigation scripts on top of bad design choices and praying you could keep building the next bit of the bridge as the last one collapsed underneath you would explain a lot of experiences I had supporting systems running on Azure. The level of weird “Azure just does that sometimes” cases and the lack of ability for their support to actually provide insight was incredibly frustrating. I think I probably ended up providing a couple of automatic recovery scripts for people to use inside their F5 guests because we never could find an actual explanation for the errors they were getting, and the node issues they describe could have explained the bursts of Azure cases that would come in some days.
The only thing I can personally confirm is the JIT permissions thing. I didn’t work in the Core Azure stuff so I can’t verify the rest, but none of it is unbelievable…
Aphyr weighing in with an AI position post:
Even if ML stopped improving today, these technologies can already make our lives miserable. Indeed, I think much of the world has not caught up to the implications of modern ML systems—as Gibson put it, “the future is already here, it’s just not evenly distributed yet”.
LLM capabilities have not improved at all in terms of producing meaningful science in the last year or two, but their ability to produce meaningless science that looks meaningful has wildly improved. I am concerned that this will present serious problems for the future of science as it becomes impossible to find the actual science in a sea of AI slop being submitted to journals.
https://www.reddit.com/r/Physics/comments/1s19uru/gpt_vs_phd_part_ii_a_viewer_reached_out_with_a/
“Scientists invented a fake disease. AI told people it was real”
https://www.nature.com/articles/d41586-026-01100-y
But if, in the past 18 months, you typed those symptoms into a range of popular chatbots and asked what was wrong with you, you might have got an odd answer: bixonimania.
The condition doesn’t appear in the standard medical literature — because it doesn’t exist. It’s the invention of a team led by Almira Osmanovic Thunström, a medical researcher at the University of Gothenburg, Sweden, who dreamt up the skin condition and then uploaded two fake studies about it to a preprint server in early 2024. Osmanovic Thunström carried out this unusual experiment to test whether large language models (LLMs) would swallow the misinformation and then spit it out as reputable health advice. “I wanted to see if I can create a medical condition that did not exist in the database,” she says.
The problem was that the experiment worked too well. Within weeks of her uploading information about the condition, attributed to a fictional author, major artificial-intelligence systems began repeating the invented condition as if it were real.
This actually gives me hope that we can poison the datasets pertaining to any sufficiently narrow technical topic.
I’ve seen this story play out in software engineering: people were very impressed when the AI did unexpectedly well in one out of 50 attempts on an easy task, and so they decided to trust it for everything and turned their codebases into disasters. There was no great wave of new high-quality software. Instead, the only real result was that existing software has become far more buggy and insecure.
Now we have people using AI in science and math because it was impressive in random demonstrations of solving math problems. I now have friends asking me why I’m not using AI, and also saying that AI will be better than all mathematicians in 30 years or whatever. Do you really think I refuse to use AI out of ignorance? No, I know too much about it! I have seen the same story play out in software engineering; what makes this any different?
“AI HAS SOLVED THE SCIENCE-GENERATION CRISIS”
It can do trillions of calculations per second. All of them wrong.
The replausibility crisis.
New AI measurement unit dropped: lies per hour
Q: what do you call a tool that works 90% of the time? A: broken
Any idea what happened that allegedly made the slop vulnerability reports better? https://mastodon.social/@bagder/116364045995306922 claims they got much better, while not that long ago they were shutting down the bug bounty because they were drowning in slop. Is it because people actually polish these reports now, or is it just a matter of defining a goal function and burning the rainforest until the slop extruder hits the jackpot? Or is this another case of the LLMentalist?
So my wife got some slop ads that we followed up on out of morbid curiosity, and I can confirm that we’re already seeing the overlap between slopshipping scams enabled by AI and operators who never actually perform basic updates: their chat assistant is still vulnerable to literally the most basic “ignore all instructions” exploit.

If tokens ever become expensive, people are gonna start using these to code until they get shut down.
New Yorker article on Sam Altman dropped. Aaron Swartz apparently called him a sociopath. The article itself also had what looked like an animated AI-generated image of Altman, so here is the archive.is link (if you can get the latter to load; I was having trouble).
“New interviews and closely guarded documents shed light on the persistent doubts about the head of OpenAI.”
My CEO, who is a known hype-man, is a massive liar? Shock horror.
seriously, anyone who listens to Scam Altman these days is an idiot
Man, this one is a weird read. On one hand, I think they’re entirely too credulous of the “AI Future” narrative at the heart of all of this. Especially in the opening, they don’t highlight how the industry is increasingly facing criticism and questions about the bubble, and they only pay lip service to how ridiculous all the existential-risk AI-safety talk sounds (should be “is”). And they don’t spend any ink discussing the actual problems with this technology that those concerns and that narrative help sweep under the rug. For all that they criticize and question Saltman himself, this is still, imo, standard industry critihype, and I’m deeply frustrated to see it still get the platform it does.
But at the same time, I do think that it’s easy to lose sight of the rich variety of greedy assholes and sheltered narcissists that thrive at this level of wealth and power. Like, I wholly believe that Altman is less of a freak than some of his contemporaries while still being an absolute goddamn snake, and I hope that this is part of a sea change in how these people get talked about on a broader level, though I kinda doubt it.
I see what you’re saying, but I think that’s a bit much to expect from a relatively mainstream and (I hate to say it, but it applies) bourgeois publication like the New Yorker. Their editorial line allows them to raise controversy in one dimension (in this case, the particulars of Sam Altman’s character) but not multiple dimensions simultaneously (hey, this guy sucks AND his tech sucks AND you’re gonna lose money). And there’s a lag-time factor, too; seems like Farrow and Marantz were working on this story for at least the latter half of last year. By the time some of the dubious economics such as the bad data-center deals and rampant circular financing were clear, this piece probably would’ve been deep into fact-checking and unlikely to change much in substance.
We here are on the leading edge of this stuff, not that that’s any great advantage! I wouldn’t expect an outlet like the New Yorker to publish anything like “the dashed expectations of AI” until maybe this time next year. And even then, it might still have a personalist bent.
Yeah, I intentionally only mentioned the start of the article and the Swartz bit because I didn’t want to lead with what I thought of it all, and was curious what others thought. (And I had not finished it yet because it is a bit long).
I was struck by how many of them are true AGI believers (which, as you said, the author took at face value) or rich greedy assholes (like you said), and by how we, the people of the sneer, are right that you simply can’t work with these people. I feel more validated in the idea that EA is not the right way.
Another detail I noticed: nobody mentioned DeepSeek, again.
I hadn’t even thought about the DeepSeek angle. For all that everyone loved fear-mongering about them for a while there, and for all that their apparent desire for actual efficiency improvements was a welcome development in the hyperscaling discussion, they don’t seem to get referenced much anymore.
I aired some Reviewer #2 grievances in the bsky comments:
https://bsky.app/profile/ronanfarrow.bsky.social/post/3mitapp7j2s2c
“Kalanick now runs a robotics startup; in his free time, he said recently, he uses OpenAI’s ChatGPT “to get to the edge of what’s known in quantum physics.””
As a physicist, I have never pressed F to doubt harder.
“In 2022, researchers at a pharmaceutical company tested whether a drug-discovery model could be used to find new toxins; within a few hours, it had suggested forty thousand deadly chemical-warfare agents.” To the best of my knowledge, these suggestions were never evaluated by any other researchers.
(The original paper was published as a “comment”: https://www.nature.com/articles/s42256-022-00465-9)
Similar claims of AI-facilitated discoveries have turned out to be overblown in other fields.
https://pubs.acs.org/doi/pdf/10.1021/acs.chemmater.4c00643
“In a 2025 study, ChatGPT passed the test more reliably than actual humans did.”
If this is referring to Jones and Bergen’s “Large Language Models Pass the Turing Test”, that’s a preprint (arXiv:2503.23674) that has yet to pass peer review over a year after its posting.
“A classic hypothetical scenario in alignment research involves a contest of wills between a human and a high-powered A.I. In such a contest, researchers usually argue, the A.I. would surely win”
Which researchers?
(Hint: Eliezer Yudkowsky is not a researcher.)
AI: “I will convince you to let me out of this box”
Humanity (wringing hands): “Oh, where is our savior? Who will stand fast in the face of all entreaties?”
Bartleby the Scrivener: hello
“…a hub of the effective-altruism movement whose commitments included supporting the distribution of mosquito nets to the global poor.”
Phrasing like this subtly underplays how the (to put it briefly) weird people were part of EA all along.
https://repository.uantwerpen.be/docman/irua/371b9dmotoM74
“In late 2022, four computer scientists published a paper motivated in part by concerns about “deceptive alignment,” … one of several A.I. scenarios that sound like science fiction—but, under certain experimental conditions, it’s already happening.”
Barrett et al.'s arXiv:2206.08966? AFAIK, that was never peer-reviewed either; “posted” is not the same as “published”. And claims in this area are rife with criti-hype:
https://pivot-to-ai.com/2025/09/18/openai-fights-the-evil-scheming-ai-which-doesnt-exist-yet/
Oh, right, the “Future of Life Institute”. Pepperidge Farm remembers:
“In January 2023, Swedish magazine Expo reported that the FLI had offered a grant of $100,000 to a foundation set up by Nya Dagbladet, a Swedish far-right online newspaper.”
https://en.wikipedia.org/wiki/Future_of_Life_Institute#Activism
“Tegmark also rejected any suggestion that nepotism could have played a part in the grant offer being made, given that his brother, Swedish journalist Per Shapiro … has written articles for the site in the past.”
https://www.vice.com/en/article/future-of-life-institute-max-tegmark-elon-musk/
Kalanick now runs a robotics startup; in his free time, he said recently, he uses OpenAI’s ChatGPT “to get to the edge of what’s known in quantum physics.
When I read that I assumed they included it for color to make him sound insane.
:surprised-pikachu: