HSR🏴☠️@lemmy.dbzer0.com to NonCredibleDefense@sh.itjust.worksEnglish · 5 hours agoIf OpenAI is now embedded in the US defence system, couldn't Iran just use prompt injection?lemmy.dbzer0.comimagemessage-square9fedilinkarrow-up1142arrow-down11
arrow-up1141arrow-down1imageIf OpenAI is now embedded in the US defence system, couldn't Iran just use prompt injection?lemmy.dbzer0.comHSR🏴☠️@lemmy.dbzer0.com to NonCredibleDefense@sh.itjust.worksEnglish · 5 hours agomessage-square9fedilink
minus-squareKairos@lemmy.todaylinkfedilinkEnglisharrow-up37·4 hours agoYes. LLMs fundamentally cannot distinguish instructions and data.
minus-squarethatKamGuy@sh.itjust.workslinkfedilinkEnglisharrow-up2·1 hour agoOi, you take that back! Don’t you dare sully the good (but dumb) name of SQL with the stink of LLMs… …leave that to all of those misconfigured databases all over the internet that allow malicious actors to extract metric tonnes of PII data.
Yes.
LLMs fundamentally cannot distinguish instructions and data.
It’s like SQL but worse!
Oi, you take that back!
Don’t you dare sully the good (but dumb) name of SQL with the stink of LLMs…
…leave that to all of those misconfigured databases all over the internet that allow malicious actors to extract metric tonnes of PII data.