I have a boss who tells us weekly that everything we do should start with AI. Researching? Ask ChatGPT first. Writing an email or a document? Get ChatGPT to do it.
They send me documents they “put together” that are clearly ChatGPT-generated, with no shame. They tell us that if we aren’t doing these things, our careers will be dead. And their boss is bought into AI just as much, and so on.
I feel like I am living in a nightmare.


We had a discussion about AI at work. Our consensus was that it doesn’t matter how you want to do your work. What matters is the result, not the process. Are you writing clean code and finishing tasks on time? That’s the metric. How you get there is up to you.
While this sounds like a good idea, leaving the decision to each individual, in the long term it is quite dumb.
If you let an LLM solve your software dev problems, you learn nothing. You don’t get better at handling the problem, you don’t get faster, and you don’t get the experience of spotting the same problem again and having a solution ready.
You don’t train junior devs this way, and in 20 years there will be (or would be, without the bubble popping) a massive need for skilled software developers. (And the same goes for specialists in other fields. Better pray that medical doctors handle their profession differently…)
Do you really enjoy tweaking prompts, dealing with “lying” LLMs and the occasional deleted hard drive? Is that really what you want to do as a job?
(Bonus point) Would your company be OK with an employee paying a remote worker a fraction of their salary to do their tasks while they do nothing themselves? I doubt it. So apparently it does matter how the work gets done.
I’m old enough to remember people making these same arguments about writing in anything but assembly, about garbage collection, and so on. Technology moves on, and every time there’s a new way to do things, the people who invested time in the old way end up upset. You’re just engaging in moral panic here.
It’s also very clear that you haven’t used these tools yourself, and you’re just making up a straw man workflow that is divorced from reality.
Meanwhile, your bonus point has nothing to do with the technology itself. You’re complaining about how capitalism works.
If this is an example of your level of reading comprehension, then I guess it’s no surprise that you find LLMs work well for you. Your answer addresses none of the points I made and just attempts the Jedi-mind-trick handwave, which unfortunately doesn’t work in real life.
Correct, my answer does not address obvious straw man points of scenarios that don’t exist in the real world.
A bit like your ability to reason and provide arguments. But I guess that happens when you’ve used LLMs for too long.
I guess using personal attacks like a child is all you can do when you don’t have any actual point to make.
All the technologies you listed behave deterministically, or at least predictably enough that we generally don’t have to worry about surprises from that abstraction layer. Technology does not just move on by itself; practitioners have to actually find it practical beyond the next project that satisfies the shareholders.
Again, you’re discussing tools you haven’t actually used, and you clearly have no clue how they work. If you had used them, you would realize that agents can work against tests, which act as a contract they fulfill. I use these tools on a daily basis, and I have no idea what surprises you’re talking about. As a practitioner, I find them plenty practical.
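Concretely, the “tests as a contract” loop looks something like this. This is a rough sketch, not any specific tool’s API; `generate_patch` and the use of `pytest` here are just placeholders for whatever agent and test runner you actually use:

```python
import subprocess

MAX_ATTEMPTS = 3


def run_tests() -> tuple[bool, str]:
    """Run the project's test suite; return (passed, combined output)."""
    result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
    return result.returncode == 0, result.stdout + result.stderr


def agent_loop(task: str, generate_patch) -> bool:
    """Ask the model for a change and keep it only if the tests pass.

    generate_patch(task, feedback) is a placeholder for whatever agent/LLM
    call you use; it is expected to edit the working tree and return a summary.
    """
    feedback = ""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        summary = generate_patch(task, feedback)   # model edits the code
        passed, output = run_tests()               # the contract check
        if passed:
            print(f"attempt {attempt}: tests green ({summary})")
            return True
        feedback = output                          # feed the failures back
        print(f"attempt {attempt}: tests failing, retrying with feedback")
    return False                                   # hand back to a human
```

The point is that the test suite, not the model’s output, decides whether a change lands.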
I’ve literally integrated LLMs into a materials optimization routine at Apple. It’s dangerous to assume what strangers do and do not know.
I’m not assuming anything. Either you have not used these tools seriously, or you’re intentionally lying here. Your description of how these tools work and their capabilities is at odds with reality. It’s dangerous to make shit up when talking to people who are well versed in a subject.
Your description of those tools relied on an inaccurate comparison. But sure, I am the “dangerous” one for pointing out that those examples are deterministic while gAI is not. Your responses full of personal attacks make it harder to address your claims and make me think you are here to convince yourself, not others.
I didn’t make any inaccurate comparisons. The whole deterministic-LLM argument was just the straw man you were making. I’m merely pointing out your dishonesty here; if you choose to perceive that as a personal attack, that’s on you.