Mayqueeze

  • 2 Posts
  • 157 Comments

Cake day: June 12th, 2023



  • It’s an assumption that many people will be unemployed and unemployable in other functions. So far, every big change (like the Industrial Revolution or the advent of computers in the workplace) has led to temporary displacements, and the further back it happened, the more violent the side effects. But in the big picture, we have found ways to put the human resource back into the machine. Accountants were supposed to go extinct with the arrival of Microsoft Excel. But their numbers have increased, because they can do more useful things with their time than the math itself. The assumption may be more fear-mongering than anything. (And it’s too early to tell, if you ask me.)

    So I don’t think they will kill us off just yet, because it isn’t entirely clear that we’re not needed. It’s also possible that so-called AI frees up people and resources that can be channeled into professions that are chronically underfunded today, like teaching or medical care. We have a tendency to think of the future in Matrix or 1984 terms when more positive outcomes exist.

  • My preference is simple:

    Lemmy: minimalist, ordered by new, chronological (it used to be the same for me on Reddit before I stopped). Mastodon: chronological.

    If I look at how the algorithms on YouTube or Instagram (I don’t know which category they fall in) treat me, they always surface 80% irrelevant stuff and 20% that is okay but only in the rarest cases mind-blowingly good. That’s why on YouTube I tend to ignore the Home tab.

    Especially with the short-video algorithms, I fucking hate that if you don’t respond within a microsecond, you’ll get fed sloth videos or car crashes until you die. I’m all algorithm’ed out.


  • It remains to be seen whether reading about all the emotions and morals is the same as feeling them, acting according to them, or just being able to identify them in us meatbags. So even with the sum total of human knowledge at their disposal, this may not matter. We also don’t know how these models actually organize their “knowledge.” We can feed them and we can correct bad outcomes; beyond that, it is a black box, at least right now. So if the spark from statistical spellchecker (current so-called AI) to actual AI (or AGI) happens, we’ll probably not even notice until it writes us a literary classic of Shakespearean proportions, or until it turns us into fuel for its paperclip factory. That’s my assessment anyway.