This is great composition, and a cool photo
My meme/shitposting alt, other @Deebsters are available.
It’s almost a productivity hack.
So is this a human doing a great Attenborough impression, AI doing it, or the man himself*?
* wildcard option
You’ve had a good definition, but Wikipedia has (a lot) more info: https://en.wikipedia.org/wiki/Kayfabe
The quote’s a famous monologue from Hamlet.
Are they allowed to put jokes in legal documents like this? (I know it’s gone now)
I can’t find a suitable word in English, but I’m shocked and dismayed that German doesn’t have anything we could steal.
There are a lot of smaller communities that are only kept going by one dedicated poster, or that never got the critical mass to keep going, which is a shame.
Works here too, but when I tried to save it to the Internet Archive the saved page doesn’t have AI results 😟
Rickroll: (v) to troll the youth using memes
Makes sense!
Ironically, it’s a pretty well-known one itself (you see people just refer to it by mentioning “today’s 10000”).
The Dublin-NYC one’s reopened now, with a bunch of stuff automatically blurred out.
Huh, so it is! Growing up in the UK, the US version seemed to be on more, and I’d assumed that that was the original.
You’ve missed off the !, so Voyager thinks it’s an email address.
I doubt you can get your skin hot enough to denature those proteins without damaging yourself. I’ve given myself a blister trying that before.
Over many years, I’ve settled on hydrocortisone cream followed by an ice cube. Those little buggers love me.
Hmm, I think they’re close enough to be able to say a neural network is modelled on how a brain works - it’s not the same, but then you reach the other side of the semantics coin (like the “can a submarine swim” question).
The plasticity part is an interesting point, and I’d need to research that to respond properly. I don’t know, for example, if they freeze the model because otherwise input would ruin it (the internet teaching them to be sweaty racists, for example), or because it’s so expensive/slow to train, or because of high error rates, or because it’s impossible, etc.
When talking to laymen I’ve explained LLMs as a glorified text autocomplete, but there’s some discussion on the boundary of science and philosophy asking whether intelligence is a side effect of being able to predict better.
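To make the “glorified autocomplete” framing concrete, here’s a toy sketch (purely illustrative; a real LLM is a learned neural network over tokens, not a bigram count table, and the corpus here is made up): it just picks the most frequent next word seen after the current one and keeps going.

```python
from collections import Counter

# Toy "autocomplete": count which word follows which in a tiny corpus,
# then greedily pick the most likely next word at each step.
corpus = "the cat sat on the mat the cat ate the fish".split()
bigrams = Counter(zip(corpus, corpus[1:]))

def next_word(word):
    # Most frequent word seen after `word`, or None if unseen.
    candidates = {b: c for (a, b), c in bigrams.items() if a == word}
    return max(candidates, key=candidates.get) if candidates else None

def autocomplete(word, length=5):
    out = [word]
    for _ in range(length):
        nxt = next_word(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

print(autocomplete("the"))  # greedy continuation from the toy counts
```

An LLM does the same “predict the next token” loop, just with a vastly better predictor, which is where the interesting question about prediction and intelligence comes in.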
What a cute little Firefox