AI can’t be all that bad. The problem I keep seeing with AI is that it’s a double-edged sword. You have corporations shoving AI into just about everything and treating it like it’s a cure for cancer, which really rubs people the wrong way. Then, on a societal level, you’ve got everything from people who make art with AI and still credit themselves as artists, to people who treat AI like a therapist, even though that’s not advised.

However, I’ve found some benefits with AI. For example, I’m chatting with ChatGPT about credit cards, because that’s something I may look into getting. It’s helping me understand them better than most people who have tried explaining them to me, simply because it gives me a more streamlined answer instead of beating around the bush.

  • ☂️-@lemmy.ml

    translation is pretty good.

they want to make ai npcs in games, which could be awesome if we can ever reduce the system requirements for running it.

  • Rai@lemmy.dbzer0.com

    Everything Kitboga has used it for and is currently doing with it. Hilarious AND interesting!

  • lepinkainen@lemmy.world

I have a script that uses yt-dlp to grab the subtitles off a YouTube video and summarises the main points with a language model, so I don’t have to watch a 20-minute top-10 video that could’ve been a BuzzFeed article.

    The whole thing is fully vibe engineered too.
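The script itself isn’t shared, but the workflow can be sketched. This is a minimal, hypothetical version (assuming yt-dlp is installed; the summarisation call is left out, since the commenter doesn’t name a model): yt-dlp fetches the auto-generated subtitles, and a small helper strips the WEBVTT markup down to plain text that can be handed to any language model.

```python
import re
import subprocess

def fetch_vtt(url: str) -> str:
    """Download auto-generated English subtitles for a video.

    Assumes yt-dlp is on PATH; writes subs.en.vtt to the working directory.
    """
    subprocess.run(
        ["yt-dlp", "--skip-download", "--write-auto-sub",
         "--sub-lang", "en", "-o", "subs", url],
        check=True,
    )
    with open("subs.en.vtt", encoding="utf-8") as f:
        return f.read()

def vtt_to_text(vtt: str) -> str:
    """Reduce a WEBVTT subtitle file to plain transcript text."""
    lines = []
    for line in vtt.splitlines():
        line = line.strip()
        # Drop the header, metadata, cue timings, and blank lines.
        if not line or line.startswith(("WEBVTT", "Kind:", "Language:")):
            continue
        if "-->" in line:
            continue
        # Drop inline timing/styling tags like <00:00:01.000> and <c>.
        line = re.sub(r"<[^>]+>", "", line)
        if line:
            lines.append(line)
    return " ".join(lines)
```

Piping the output of `vtt_to_text` into a local or hosted model with a “summarise the main points” prompt would complete the pipeline.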

  • verdigris@lemmy.ml

    Chatbots? Basically nothing. Any interaction I have with one leads to spending more time verifying its output, inevitably finding many mistakes, and eventually finding a primary source for what I’m actually looking for. The best actual impact it has is forcing me to narrow down my nebulous question into what I actually specifically want, but the bot itself is contributing very little to that.

Neural nets in general have limited real usefulness in analyzing large batches of data when other purpose-built analysis software doesn’t exist.

“AI” is a misnomer, and there is absolutely zero evidence to suggest that we’re even on a path toward actual AI, sometimes called AGI, though they’re also changing that to just mean a profitable LLM, which is fucking hilarious.

    Any task you use a bot to do, you will become worse at that task. For mass data analysis, that’s fine, poring over reams of data is already a skill that other technology has largely obsoleted. But using it to do research, to read or write for you, or god forbid to make actual decisions and think for you, are very slippery slopes that are already causing a lot of the general public to seriously erode their basic mental capabilities.

  • semi [he/him]@lemmy.ml

In computational biology and biotechnology, LLMs are being trained on biological sequences and can then be used to generate new genes or genetic variants. These genes can be placed into bacteria, which are then fed e.g. sugar to produce various valuable molecules from renewable resources instead of from crude oil via conventional chemistry. There is also work on enabling plastic biodegradation this way.

  • communism@lemmy.ml

    The relevance for me personally is whether or not they can be useful for programming, and if they’re accessible to run locally. I’m not interested in feeding my data to a datacentre. My AMD GPU also doesn’t support ROCm so LLMs run slow as fuck for me. So, generally, I avoid them.

LLMs consistently produce lower-quality, less correct, and less secure code than humans. However, they do seem to be getting better. I might be open to using them to generate unit tests if only they would run faster on my PC. I tried deepseek, llama3.1, and codellama; all take an hour or more to answer a programming question running on my CPU alone. So, really not feasible for anything.

    Depending on what you count as AI, I think some of the long-existing predictive ML like autosuggestions based on learning your input patterns are fine and helpful. And maybe if I get a supported GPU I won’t mind using local LLMs for some things. But generally I’m not dying to use them. I can do things myself.

  • CanadaPlus@lemmy.sdf.org

    Anything that’s fuzzy and impossible to automate with traditional algorithms, but that also has a reasonably high tolerance for error. It just makes up stuff a good portion of the time, you see.

However, I’ve found some benefits with AI. For example, I’m chatting with ChatGPT about credit cards, because that’s something I may look into getting. It’s helping me understand them better than most people who have tried explaining them to me, simply because it gives me a more streamlined answer instead of beating around the bush.

    Watch out, personal finances are not one of those things.

  • FoundFootFootage78@lemmy.ml
  • Searching a large dataset with vague search criteria.
    • Real-time feedback when studying a foreign language (since accuracy is less important than quantity).
    • Apparently in medicine they’re using generative AI for something meaningful, but I’m not entirely convinced it is actually generative AI and I’d need to do more research.
    • Sometimes it can help in programming and code security.
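The first bullet, vague-criteria search, usually means embedding the documents and the query as vectors and ranking by similarity. A toy sketch of that ranking step, with a trivial bag-of-words counter standing in for a real embedding model (an assumption for illustration, not how production systems embed text):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in "embedding": word counts. A real system would use a
    # learned sentence-embedding model here.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query: str, docs: list[str]) -> list[str]:
    # Rank all documents by similarity to the query, best first.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
```

A real deployment would swap `embed` for a neural embedding model and add an approximate-nearest-neighbour index for scale, but the ranking logic is the same.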
    • CanadaPlus@lemmy.sdf.org

If you’re thinking of protein design, it is, just with a sequence instead of natural-language text. Although it’s not just a straight LLM; there’s some kind of physics awareness engineered in as well.

  • TrackinDaKraken@lemmy.world

    For every small benefit, there are disastrous mistakes. We shouldn’t discuss one without the other:

    https://tech.co/news/list-ai-failures-mistakes-errors

    March 2026

    • Police used AI facial recognition to arrest a Tennessee woman for crimes committed in a state she says she’s never visited

    February 2026

    • Health advice given by AI chatbots is frequently wrong, says new study

    January 2026

    • Study reveals that fixing AI mistakes takes up to 40% of the time that it saves

    • An AI tool used by ICE to identify applicants with previous law enforcement experience falsely flagged applicants with no such experience, leading to the placement of unqualified recruits in field offices.

    December 2025

    • AI mistakes clarinet for gun at Florida school

    November 2025

    • Google Antigravity deletes entire content of user’s computer drive

    • Report finds AI hallucinations in 490 court filings from the past six months

    October 2025

    • Teenager handcuffed after AI mistakes Doritos packet for gun

    • Lawyer submits AI-assisted court filing with fake citations

• Man follows ChatGPT advice to stop eating salt, develops rare condition. The man was hospitalized, sectioned, and eventually treated for psychosis. He tried to escape the hospital within 24 hours of being admitted.

• ChatGPT-5 jailbroken within 24 hours of release

    July 2025

    • AI Coding app deletes entire company database

    • McDonald’s AI chatbot error exposes data of 64 million job applicants

    • AI program is tasked with running a small shop, goes insane, claims to be human

    • Apple Intelligence falsely presents BBC headline

    … and it just keeps going.

  • hexagonwin@lemmy.today

    seems to be decent for OCR, maybe also speech recognition. i hear it’s okay for finding some concept you can explain abstractly but don’t know the exact word for, but haven’t tried this personally.

    • lepinkainen@lemmy.world

      ChatGPT is pretty good at “what was that thing where…” type of stuff and usually gets it on the first go

  • bandwidthcrisis@lemmy.world

    Pixel phones can monitor phone calls for scam conversations (it runs locally on the phone, so audio doesn’t get saved or uploaded).

  • lattrommi@lemmy.ml

    I went to my local neighborhood association because I wanted to improve where I live. I was elected president of the association a couple months later, mostly because no one else wanted to do it. It’s a fairly poor part of a medium sized city in the U.S.

    I’ve been using AI (running locally on a computer I built that isn’t connected to the internet, to reduce harm to the environment) to apply for grants, plan events and help me run the meetings.

It is actually perfect for the job, and I’m saying that as someone who thinks AI is mostly hype and useless for the majority of its current common uses. I feed it the text from city grant applications or ask it to make a poster to increase attendance, and it’s saved me a lot of time. As someone diagnosed with ADHD, I would not have been able to do most of what I’ve accomplished so far without it.

  • Azrael@reddthat.com

    Some hospitals use AI to scan patients and find signs of illness before it becomes a problem. I’d say that’s a pretty good use of AI.