My company is strongly pushing AI. There are a lot of experiments, demos, and effort from decently smart people toward integrating it into our workflows. There have been some impressive wins, with AI tooling producing some things fast. I am not in denial about this. And the SE department is tracking improved productivity (as measured by # of tickets being done, I guess?)
The problem is I hate AI. I hate every fucking thing about it. Its primary purpose, regardless of what utility is gained, is spam. I think it’s obvious how Google search results are spam, how spam songs and videos are being produced, etc. But even bad results from AI that have to be discarded, IMO, are spam.
And that isn’t even getting into the massive amounts of theft involved in training these models, or the immense amounts of electricity it takes to do training and inference, as well as run, all this crap. Nor the psychosis being inflicted on people who place their trust in these systems. Nor the fact that these tools are being used to empower authoritarian regimes to track vulnerable populations, both here (in the USA) and abroad. And all this AI shit serves to enrich the worst tech moguls and to displace people like artists and people like myself, a programmer.
I’m literally being told at my job that I should view myself basically as an AI babysitter, and that AI has been unambiguously proven in the industry, so the time for wondering about it, experimenting with it, or opposing it is over. The only fault and flaw is my (i.e. any given SE’s) unwillingness to adapt and onboard.
Looking for advice from people who have had to navigate similar crap. Because I feel like I’m at a point where I must adapt or eventually get fired.
Try to distance yourself from the quality of your work.
Produce AI slop like your overlords fetishise, then have a mouse jiggler wiggle the cursor and an AI answer your Teams messages.
You’re the sane one.
Stop caring about the quality of your output and just copy and paste the slop back and forth. It’s what they want.
Faster, not better.
The slop being copied back and forth actually is what they want. At the recent all-hands they basically said this without exaggeration. Quality and correctness were demoted to secondary importance.
This actually made something click for me: why I haven’t been able to find work for 3 years in software QA. It’s not that AI came for my job or that it replaced me. At some point people stopped caring about quality so the assurance became moot.
I’m not quite in the same boat (not a programmer) but my job has suggested giving AI a try for some things. I’ve poked around a bit to see if it’s good at those tasks but so far it’s wildly inconsistent. I told my boss I didn’t think it made much sense to try to come up with meaningful workflows using AI, because every few weeks the behavior changes: you’d have to re-write documentation and re-think the workflow under the new behavior to make sure it still resulted in good data that could be meaningfully compared to decades of archive. That frequent re-assessment of the entire workflow is necessary to ensure data integrity, and it takes longer than just doing the work manually in steps I purposely designed to make hard for a human to mess up.
AI needs human oversight for complex projects because you can’t have pieces shifting around without careful consideration. In my case using AI is a bigger time sink than using tools with precise and reliable behavior because it requires a lot more review. My boss found my reasoning compelling.
I can imagine it being useful to people in the early early prototype stage when the specifics or efficiency don’t matter much and you’re just trying to get the gist of a new idea.
I am also encouraged to use AI at work and also hate it. I agree with your points. I just had to learn to live with it. I’ve realized that I’m not going to make it go away. All I can do is recognize its limited strengths and significant weaknesses and only use it for limited tasks where it shines. I still avoid using it as much as possible. I also think “improved productivity” is a myth but fortunately that’s not a metric I have to worry about.
My rules for myself, in case they help:
- Use it as a tool only for appropriate tasks.
- Learn its strengths and use it for those things and nothing else. You have to keep thinking and exploring and researching for yourself. Don’t let it “think” for you. It’s easy to let it make you a lazy thinker.
- Quality check everything it gives you. It will often get things flat wrong and you will have to spend time correcting it.
- Take lots of deep breaths.
I agree with all your points. The problem is that quality checking AI outputs is something that only a few will do. The other day my son did a search with ChatGPT. He was doing an analysis of his competitors within a 20 km radius of home. He took all the results as granted and true. Then I looked at the list and found that many business names looked strange. When I asked for the links to the websites, I found that some were in different countries. My son said, “You can’t trust this.” When I pointed it out to ChatGPT, the damn thing replied, “Oh, I’m sorry, I got it wrong.” Then you realise that these AI things are not accountable. So quality checking is fundamental. The accountability will always sit with the user. I’d like to see the day when managers take accountability for AI crap. That won’t happen, so our jobs for now are secure.
Use it as a tool only for appropriate tasks.
Which tasks do you use it for?
For my purposes I find it good for summarizing existing documents and categorizing information. It is also good at reformatting stuff I write for different comprehension levels. I never let it compose anything itself. If I use it to summarize web data, and I rarely do, I make it provide the URLs of all sources so I can double-check validity of the data.
Sounds good. It can also write corporate emails well. I just write the insults and harsh truths I would want to throw at my conversation partners, and the LLM tones them down into bland corpo speak.
I also think “improved productivity” is a myth
Stop thinking, start knowing: https://fortune.com/2025/07/20/ai-hampers-productivity-software-developers-productivity-study/
Thanks! I skimmed this and have it in my reading list for later. I wonder how this pans out across disciplines other than software development. I would imagine there’s a huge diversity of skills out there that would affect how well people can craft prompts and interpret responses.
Ask ChatGPT “How do I unionize my workplace to protect my job against AI obsessed management?”
+1. Also look up “reverse centaur”; it’s a metaphor by Cory Doctorow which you may find interesting.
Same as everything else in life - like the bits that are useful to you and ignore the rest.
As for doing what you’re told at work, who said we had to like it provided it’s a reasonable request?
I’m at a point where I must adapt
What’s wrong with adapting? The one constant in life is that things change. This is a change and you’re not the only person who has faced their job changing - at least you still have it. Adapt or go raise goats.
Step 1: get a hammer
Step 2: smash noggin with hammer
Step 3: continue to smash your noggin with hammer
Step 4: keep smashing
Step 5: you are now a tech bro who loves AI.
I remind my boss that giving AI full access to our codebase and access to environments, including prod, is the exact plot of the Silicon Valley episode where Gilfoyle gave Son of Anton access. His AI deleted the codebase after being asked to clean up the bugs… deleting the entire codebase was the most efficient way of doing that.
If you don’t mind me asking, what do you do and what kind of AI? Maybe it’s the autism, but I find LLMs a bit limited and useless, while other use cases aren’t quite as bad. Training image recognition is a legitimately great use of AI and extremely helpful, and it’s already being used for such cases. I just installed a vision system on a few of my manufacturing lines. A bottling operation detects cap presence, as well as cross threads or un-torqued caps, based on how the neck vs. cap bottom angle and distance look as each bottle passes the camera. Checking 10,000 bottles a day as they scroll past would be a mind-numbing task for a human. The other line is making Fresnel lenses. Operators make the lenses and personally check each one for defects and power. Using a known background and training the AI on what distortion good lenses should create, it is showing good progress at screening just as well as my operators. In this case it’s doing what the human eye can’t: determining magnification and diffraction visually.
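To make the bottling check concrete, here is a minimal sketch of the kind of per-bottle decision logic a vision system like that feeds into. The camera reports geometry (cap tilt relative to the neck finish, cap-to-neck gap) and the reject rules run on each reading. All field names and tolerance values here are made up for illustration, not taken from a real line:

```python
# Hypothetical per-bottle cap check fed by a vision system's measurements.
# Thresholds and field names are illustrative assumptions, not real specs.
from dataclasses import dataclass

@dataclass
class CapReading:
    cap_present: bool
    tilt_deg: float   # angle between cap bottom and neck finish
    gap_mm: float     # distance from cap bottom to neck shoulder

def classify(reading: CapReading,
             max_tilt_deg: float = 1.5,
             max_gap_mm: float = 0.8) -> str:
    """Return 'ok' or a reject reason for one bottle."""
    if not reading.cap_present:
        return "reject: missing cap"
    if reading.tilt_deg > max_tilt_deg:
        return "reject: cross-threaded (tilted cap)"
    if reading.gap_mm > max_gap_mm:
        return "reject: under-torqued (cap sitting high)"
    return "ok"

# A few example readings; screening thousands a day this way is exactly
# the mind-numbing task a human couldn't sustain.
readings = [
    CapReading(True, 0.4, 0.3),    # good bottle
    CapReading(True, 3.2, 0.5),    # cross thread
    CapReading(True, 0.6, 1.4),    # loose cap
    CapReading(False, 0.0, 0.0),   # no cap at all
]
results = [classify(r) for r in readings]
```

The interesting (and hard) part in practice is upstream of this: getting reliable tilt and gap measurements out of the camera images, which is where the trained model earns its keep.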
The AI in this case is, for all intents and purposes, using Copilot to write all the code. It is basically beginning to be promoted as being the first resort, rather than a supplement.
I’m literally being told at my job that I should view myself basically as an AI babysitter
Feel you 100%.
I dunno why but my entire career everyone always talks like doing IT is simply a stepping stone to becoming a manager, so stupid. Like god forbid you’re not the lEaDeRsHiP type.
And now with the rise of “Agentic IDEs” it’s even fucking worse. I don’t want to be managing people, let alone herding a pack of ~~blind cats~~ autonomous agents. Unfortunately the only solution is to stop caring. Yes, really.
I know it hurts producing sub-par garbage when you know you’re capable of much more, but unfortunately there’s no other way.
If upper management doesn’t care about delivering quality products to their consumers anymore, you shouldn’t either. You’ll stress and burn yourself out meanwhile those responsible won’t lose a blink of sleep over it.
Do exactly what they want. Slop it all. Fuck it. Save your energy for what really matters. That, or start looking for another job, but you might struggle to find one that isn’t doing the same shit.
My company does annual reviews. You have to write your own review, then they will read it over and then sit down to talk to you about it.
Last year, I just had ChatGPT write it for me based on all of my past conversations with it. Turned it in. The first question they asked me was, ‘Did you use AI to write this?’ Without hesitation, I said absolutely. They loved it so much, they had me show everyone else how to do it and made them redo theirs. I couldn’t frikin believe it. Everyone is still pissed they have to use ChatGPT this year, but the bosses love that corporate hogwash so much.
They’re about to receive a stack of AI-generated drivel so bad that I bet they have everyone go back to handwriting them.
That’s especially saddening because writing the review is specifically meant for you to contemplate what went well and perhaps what can go better next time. You would think managers would want you to reflect on that. For the benefit of the company, at minimum.
But with stories like yours it is becoming more clear that the only objective is to “use ai” or deliver ai-generated results. Why even bother caring or trying when management does not?
Make a list of the tasks you hate doing or are repetitive, time consuming and not normally automateable. Then see if any of them are a good fit for an AI workflow.
Many people think they’re about 20% more productive with AI, but in the study below, experienced developers were actually about 20% slower.
https://fortune.com/2025/07/20/ai-hampers-productivity-software-developers-productivity-study/
The problem is I hate AI. I hate every fucking thing about it. Its primary purpose, regardless of what utility is gained, is spam.
You are describing one type of AI, that being Generative AI. Even more specifically, Generative AI from publicly trained models, examples being ChatGPT, Claude, and Grok. If you hate those, don’t use those. This isn’t the only AI that exists.
We’re getting into data science here, but you can build and train Machine Learning models exclusively on your own data. So no theft/spam contamination here. If your needs are in the Generative AI space, you could even build and deploy your own Fine Tuned model from your own data on top of one of the public models, so it would have knowledge of your business or industry.
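As one concrete (and deliberately tiny) illustration of the “train only on your own data” route: even something as simple as a bag-of-words nearest-centroid classifier, trained purely on in-house examples, can route support tickets with no public model involved. Every category and example ticket below is made up:

```python
# Minimal sketch: a nearest-centroid text classifier trained exclusively
# on your own (here, invented) ticket history. No external data or models.
from collections import Counter
import math

def vectorize(text):
    # Crude bag-of-words features: token counts
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def train(examples):
    """examples: list of (text, label) pairs from your own data."""
    centroids = {}
    for text, label in examples:
        centroids.setdefault(label, Counter()).update(vectorize(text))
    return centroids

def predict(centroids, text):
    # Assign the label whose centroid is most similar to the new text
    v = vectorize(text)
    return max(centroids, key=lambda lbl: cosine(centroids[lbl], v))

history = [
    ("password reset not working", "auth"),
    ("cannot log in after reset", "auth"),
    ("invoice total is wrong", "billing"),
    ("charged twice this month", "billing"),
]
model = train(history)
print(predict(model, "cannot reset my password"))   # -> auth
```

Obviously a real pipeline would use proper tokenization and a stronger model, but the point stands: the training data, and therefore the provenance, is entirely yours.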
All AI incarnations are just tools. You don’t start with a tool. You start with a problem to solve, and you use a tool to assist or make it better. So the beginning of this journey is asking the question: “What problem are you trying to solve?”