• 2 Posts
  • 69 Comments
Joined 2 years ago
Cake day: July 31st, 2023


  • I mean, considering law is the practical application of a moral construct, and this moral construct is mostly agreed upon, I would not wanna question the laws that make killing a crime, for example, although there is obviously nuance.

    I understand that some people think “there can be a justification for a killing,” but I would always say: if we justify some killings, there is always a chance people will abuse this “loophole” we created for crime. So saying all killings are illegal is about the best application of our morals we have. Unfortunately, it’s impossible to include every little nuance and detail in the moral system we base our laws upon, but that’s why our laws are not absolutely rigid, and our moral systems are bound to change inherently.

    I get it, it feels wrong, I really do. But there can be both: I can say that even a CEO shouldn’t be killed, and at the same time acknowledge that there are good reasons and that something like this was inevitable given the status quo.


  • I think it’s because this case is so big that the amount of people talking about it can’t really increase, but there’s also so much more to the case than this aspect, which makes it difficult to focus the conversation on it.

    I also wanna say that it makes sense for him to get charged, even though a lot of people don’t like it. Killing another human is a crime no matter the circumstances. And just because we think this crime stands for something bigger, that doesn’t justify the killing in the first place. It’s just shades of immorality.

    That said, healthcare is a huge issue and I hope this finally changes things. I also don’t agree with the terrorism charge, precisely because it says “terrorism” in it: even though there’s a small chance it fulfills the requirements, I have no angle to personally view this as terrorism.

    Does it instill terror? Everyone gets scared when someone is killed, but this doesn’t exceed that to the point where there is now a present danger. There’s no furtherance of the terror, only vigilance about the crime.

    Some lawyers even argue this is a pile-on to the charges, which might be the case, although I’m not an expert.

    But I do think the terrorism charge is gonna be hard to prove compared to everything else. Truly, the only threat to the prosecution of the other counts is jury nullification, which poses completely different risks.

    But that’s a story for another day.



  • I used it as a toy for the longest time, but by now I’ve had to do a lot of coding, and I was actually able to make good use of code-completion AI.

    It saved me about a quarter of my time. Definitely worth something. (FYI, I use the Supermaven free tier.)

    Also, I’m using ChatGPT to ask dumb questions, because that way I don’t have to constantly interrupt other people, and as a starting point for research. I usually start with ChatGPT, then Google specific jargon, and depending on the depth of the topic I’ll read studies, articles, or forum threads afterwards.

    It did take me a long time to figure out which AI to use and when, so mandating this onto the entire government is more of a gong show than anything.

    No, AI is not useless, but it’s always about a very specific use case.

    If you’re interested, I suggest using the free ChatGPT version to ask dumb questions together with Google to get a feel for what you get. Then you can better decide if it’s worth it for you.


  • Yeah, none of this is good advice generally.

    A person who has just made a suicide attempt needs routine and normality, but with a lot of friendly interactions sprinkled throughout.

    Sometimes family can even be one of the reasons a person gets so driven to go that far.

    So cutting off contact and potentially making his family put even more pressure on him is one of the worst things you could do.

    Cutting off contact only helps if you can’t handle the situation or the person anymore, and that still doesn’t help them, it just shields you from mental stress.


  • This might be a wild take, but people always make AI out to be way more primitive than it is.

    Yes, in its most basic form, an LLM can be described as auto-complete for conversations. But let’s be real: the amount of different optimizations and adjustments made before and after the fact is pretty complex, and the way the AI works is already pretty close to a brain. Hell, that’s where we started out: emulating a brain. And you can look into this; the basis for AI is usually neural networks, which learn to give specific parts of an input a specific amount of weight when generating the output. And when the output is not what we want, the AI slowly adjusts those weights to get closer.

    Our brain works the same way in its most basic form. We use electric signals, and we think in associative patterns. When an electric signal enters one node, that node is connected via stronger or weaker bridges to different nodes, forming our associations. Those bridges are exactly what we emulate when we use nodes with weighted connections in artificial neural networks.
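    The weight-adjustment idea above can be sketched in a few lines. This is a minimal illustration (not any real library’s API): a single artificial neuron whose weighted connections are nudged toward a target output, the toy version of how neural networks learn.

```python
# Minimal sketch of a single artificial neuron: inputs flow in over
# weighted connections (the "bridges"), and training nudges those
# weights until the output matches a target.

def neuron(weights, inputs):
    # Weighted sum: each input's influence depends on its weight.
    return sum(w * x for w, x in zip(weights, inputs))

def train_step(weights, inputs, target, lr=0.1):
    # Measure how far off the output is, then adjust each weight
    # slightly in the direction that shrinks that error.
    error = neuron(weights, inputs) - target
    return [w - lr * error * x for w, x in zip(weights, inputs)]

weights = [0.0, 0.0]
for _ in range(100):
    weights = train_step(weights, [1.0, 2.0], target=1.0)

# After many small adjustments, the neuron's output is near the target.
print(round(neuron(weights, [1.0, 2.0]), 2))
```

    Real networks stack millions of such weights across many layers, but the loop is the same: compute output, compare to target, adjust weights, repeat.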

    Quality-wise, our AI output right now is pretty good, but integrity- and security-wise it’s pretty bad (hallucinations, not following prompts, etc.). Saying it performs at the level of a three-year-old is simultaneously underselling and overselling how AI performs. We should be aware that just because it’s AI doesn’t mean it’s good, but it doesn’t mean it’s bad either. It just means there’s a feature (which is hopefully optional), and then we can decide if it’s helpful or not.

    I do music production and I need cover art. As a student, I can’t afford to commission good artwork every now and then, so AI is the way to go, and it’s been nailing it.

    As a software developer, I’ve come to appreciate that after about two years of bad code-completion AIs, there’s finally one that is a net positive for me.

    AI is just like anything else: it’s a tool that brings change. How that change manifests depends on us as a collective. Let’s punish bad or dangerous AI (Copilot, Tesla self-driving, etc.), let’s promote good AI (Gmail text completion, ChatGPT, code completion, image generators), and let’s also realize that the best things we can get out of AI won’t hit the ceiling of human-made products for a while. But if it costs too much, or you need quick pointers, at least you know where to start.