

still entertaining, and does describe how large corps work internally fairly accurately based on my experience
I was complaining about copilot specifically, which is an embarrassingly terrible product
I’ve heard about this kind of shit, but never seen it myself.
yeah most companies don’t even bother with the courtesy email anymore


Is it oversampling or just the fact there are a lot of users from China?


I thought horny chatbots were their latest business model?


To definitively say whether something is or isn’t conscious, we’d first need a clear definition of what we mean by consciousness in functional terms. So far there are a number of competing theories, and the definition will vary based on which theory you subscribe to. I’m personally a fan of the higher order theory of consciousness, which suggests that conscious experience consists of higher order thoughts that observe other thoughts; awareness of your own thoughts is the self referential property that would make this a plausible explanation. To show that a model was conscious in this framework, you’d have to show that there are secondary patterns occurring in response to the primary patterns that result from a stimulus.


Right, it’s the lack of any double checking that’s shocking. I use LLMs to make mermaid diagrams of code all the time, it’s super useful, but you have to actually read through what it generates.
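For reference, the kind of output I mean looks something like this; it’s a made-up example rather than a diagram from any real codebase:

```mermaid
flowchart TD
    A[HTTP handler] --> B[Auth middleware]
    B --> C[OrderService.create]
    C --> D[(orders table)]
    C --> E[Payment client]
```

Even on a toy diagram like this, the model can easily hallucinate an edge or a component that isn’t in the code, which is exactly why you have to read through what it generates.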


Yes, and my point is that the operational cycle of the model dominates total energy consumption. And it turns out that it’s not actually that high in the grand scheme of things, and it continues to improve all the time.
Meanwhile, it’s absolutely necessary to contextualize AI energy use in relation to the other ways we use energy to understand whether there’s something exceptional happening here or not. All the information for figuring out how much energy AI is using is available. We know how much energy models use, and rough numbers of people using them. So, that’s not a big mystery.
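As a rough illustration of that kind of contextualization, here’s a toy back-of-envelope calculation. Every number in it is an illustrative placeholder I picked for the sketch, not a measured figure:

```python
# Toy back-of-envelope: scale an assumed per-query energy cost up to a user base.
# All numbers below are illustrative placeholders, not measured figures.

wh_per_query = 3.0               # assumed energy per LLM query, in watt-hours
queries_per_user_per_day = 15    # assumed usage rate
users = 100_000_000              # assumed daily active users

daily_wh = wh_per_query * queries_per_user_per_day * users
daily_gwh = daily_wh / 1e9       # 1 GWh = 1e9 Wh

print(f"{daily_gwh:.1f} GWh/day")  # prints: 4.5 GWh/day
```

The point isn’t the specific total, it’s that the arithmetic is simple once you plug in real per-query and user numbers, so the overall footprint is not some unknowable mystery.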


Whether they’re trained from scratch or not is very much material, because training from scratch takes far more energy. Meanwhile, we consume energy as a civilization in general, and frankly a lot of it goes to far dumber things, like advertising. If you count all the energy that goes into producing and displaying ads, it dwarfs AI energy use. So it’s kind of weird to single out AI energy use here.


Model training is a one-off effort. Model usage is what matters, because that’s where energy is used continuously. Also, practically nobody trains models from scratch right now; people take existing base models and tune and extend them.


At this point, I’d trust the AI over the clowns running the Burger Reich.


I’m pretty excited to live to see western hegemony over the world finally breaking.


I get a strong impression that the whole extinction-of-humanity narrative is really just an astroturfed marketing campaign by AI companies. They’re basically scaremongering because it gets in the news, and the goal is to convince investors how smart these things are. It’s like OpenAI claiming they’re on the verge of AGI right before pivoting to horny chatbots. These are useful tools, and I also use them day to day, but the hype around them is absolutely incredible.
I think we have plenty of real risks to humanity to worry about, like the US starting a nuclear holocaust. We don’t need to waste time worrying about imaginary risks like AGI here.
I’d also argue the whole energy consumption argument is very myopic. The reality is that these things have been getting more and more efficient, and there is little reason to think that’s not going to continue being the case going forward. It’s completely new tech, and it’s basically just moved past the proof of concept stage. There’s going to be a lot of optimization happening down the road. And even when you contextualize current energy usage, it’s not as crazy as people seem to think https://www.simonpcouch.com/blog/2026-01-20-cc-impact/
We’re also starting to see stuff like this happening https://www.anuragk.com/blog/posts/Taalas.html


Honestly, that’s the most amazing revelation here. Turns out there’s no human reviewing what the agent does, and no testing environment to make sure stuff that gets pushed to prod is even minimally working.


when life imitates art. Amazing that it was made a decade ago, too.
Ah, I never read the book. Sounds like it could be entertaining.