• 0 Posts
  • 10 Comments
Joined 2 years ago
Cake day: June 24th, 2023

  • I think that is less a problem with the technology itself than with how it is currently used or created. I wouldn’t say that anything generated with AI is stolen work, as that presupposes that AI necessarily involves stealing.

    I vaguely remember Adobe Firefly using only properly licensed images, to the point that they will allow themselves to be held legally responsible (though some AI-generated work did make it into their stock image site, which makes the ethics part vague, even if it will in all likelihood be legally impossible to pin down). Sadly, this is Adobe: this stuff is all behind closed doors, you have to pay them a pretty significant sum, and you can’t really mess with the internals there.

    So for now there is a choice between ethics, openness, and capability (pick at most two), which, frankly, is a terrible state to be in.


  • > The difference is photography can be art, but it isn’t always. Photo composition and content are used to convey meaning. The photo is a tool under the artist’s complete control. The photo is not art on its own. Just like if you accidentally spill paint on a canvas it isn’t necessarily art, a photo taken without intent isn’t necessarily art. If I accidentally hit the camera button on my phone that doesn’t make me a photographer.

    I don’t completely agree. While an accident is one example where intent is missing, publishing accidental shots could be a form of art in its own way, as the act of publishing itself has intent associated with it.

    Furthermore, nature photography is in my view also art, but provides much less control than studio photography, as the scene and subject are free to do whatever they want.

    > AI generated images can not do this. The user can give a prompt, but they don’t actually have control over the tool. They can modify their prompt to get different outputs, but the tool is doing its own thing. The user just has to keep trying until they get an output they like, but it isn’t done by their control. It’s similar to a user always accidentally doing things, until they get what they want. If you record every moment of your life you’re likely to have some frames that look good, but you aren’t a photographer because you didn’t intend to get that output.

    I don’t think recording everything would make it less of an art piece: you would have intentionally chosen to record continuously to capture that frame, and skimmed through those frames to find the right one. Like splattering paint on a canvas intentionally, you don’t intend to control the full picture - where the paint ends up - but rather the conceptual idea of a splatter of paint, leaving the details, in part, up to physics.

    There are limits to what repeatedly prompting an AI model can do, but that doesn’t stop you from doing other things with the output, or toying with how it functions or how it is used, as my example shows.

    While I wouldn’t discount something if it was created using AI, I need there to be something for me to interact with or think about in a piece of art. As the creation of an image is effectively driven by probability, anything missing from the prompt will in all likelihood be filled in with a probabilistically plausible answer, which makes the output rather boring and uninteresting. This doesn’t mean that AI cannot be used to create art, but it does mean you need to put in some effort to make it so.




  • The same thing happened to photography, and other kinds of modern art, too. Things are often excluded from being art until they are accepted as art (by at least a subset of people).

    With AI it is often questionable how much ‘intent’ someone has put into a work: ‘wrote a trivial prompt, generated a few images, shared all of them’ results in uninteresting slop, while ‘spent a lot of time making the AI generate exactly what you want, even coming up with weird ways to use the model (like this / non-archive link)’ is a lot more interesting in my view.



  • I would love if things weren’t as bad as they looked, but…

    > Most of the destruction of buildings in Gaza is of empty buildings with no inhabitants. The IDF blows up or bulldozes buildings when they find booby traps in them, have tunnel entrances, provide military advantage, were used for weapons storage or command, were used as sniper or RPG nests, block lines of sight, to clear security corridors, space for military camps and operations, and so on. The list of reasons is long and liberally applied by the bulldozer operators and sappers on the ground.

    (Emphasis mine.) While destroying military targets is fair, pretty much every building blocks a line of sight, including civilian housing, shops, hospitals, and so on. Applied liberally, this essentially amounts to destroying all buildings. Having your house (and nearby facilities, like shops, schools, and hospitals) bulldozed has a severe negative impact on your ability to live, even if you don’t die in the bulldozing or destruction of your house.

    > The IDF warns before major operations and then almost all civilians leave the area. The evacuation of Rafah is a good example for this. There are also targeted attacks, usually by air, in non evacuated areas, but these are only responsible for a small fraction of the destruction.

    (Emphasis mine.) While the IDF does do this, and it avoids immediate death for many, it still deprives people of the human right to housing. Furthermore, a warning does not provide those who evacuate or flee with housing, food, and water - all of which are currently in significant shortage - while acting on the warning has a severe negative impact on one’s ability to provide for oneself: one can only carry so much. A disregard for innocent human lives isn’t just civilian deaths, it is also the deprivation of the resources one needs to live.


  • It says ‘a neighborhood’, not ‘one neighborhood’. Furthermore, the article specifically mentions that it represents other neighborhoods in Gaza.

    A neighborhood provides an example of the disregard for innocent human lives behind the Israeli attacks, with visual proof provided by satellite imagery, even if it is one of many.

    Stating ‘one neighborhood’ would imply it is the only one. While the NY Times does not have the best track record, that criticism is needlessly reductive for an article that shows what is happening in Gaza. Especially as a picture of a single neighborhood can actually be more impactful than the whole: close enough that you can see the individual places where people live, far enough to see the extent of the destruction.


  • Yes, true, but that is assuming:

    1. That any potential future improvement solely comes from ingesting more useful data.
    2. That the amount of data produced is not ever-increasing (even excluding AI slop).
    3. That no (new) techniques that make it more efficient in terms of the data required for training are published or engineered.
    4. That no (new) techniques that improve reliability are used, e.g. by specializing it for code auditing specifically.

    What the author of the blogpost has shown is that it can find useful issues even now. If you apply this to a codebase, have a human categorize the findings as real or false positives, and train the model to make it more likely to generate real issues and less likely to generate false positives, it could still be improved specifically for this application (a rough sketch of that feedback loop is below). That does not require nearly as much data as general improvements.
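
    To make that concrete, here is a minimal, hypothetical sketch of such a feedback loop in Python: the model’s findings are labelled by a human reviewer, and the labels become a small dataset you could later fine-tune on. Every name and file format here (Finding, findings.json, and so on) is an assumption for illustration, not anything taken from the blogpost.

    ```python
    # Hypothetical sketch: turn human triage of model-generated findings
    # into a labelled dataset for later fine-tuning. Names are illustrative.
    import json
    from dataclasses import dataclass

    @dataclass
    class Finding:
        file: str
        description: str
        label: str | None = None  # "real" or "false_positive", set by a human

    def triage(findings: list[Finding]) -> list[Finding]:
        """Ask a human reviewer to mark each finding as real or a false positive."""
        for f in findings:
            answer = input(f"{f.file}: {f.description}\nReal issue? [y/n] ")
            f.label = "real" if answer.lower().startswith("y") else "false_positive"
        return findings

    def to_training_examples(findings: list[Finding]) -> list[dict]:
        """Turn labelled findings into preference-style examples for fine-tuning."""
        return [
            {"prompt": f"Audit {f.file}", "completion": f.description, "label": f.label}
            for f in findings
            if f.label is not None
        ]

    if __name__ == "__main__":
        # findings.json would come from running the audit model over a codebase.
        with open("findings.json") as fh:
            findings = [Finding(**d) for d in json.load(fh)]
        examples = to_training_examples(triage(findings))
        with open("feedback_dataset.json", "w") as fh:
            json.dump(examples, fh, indent=2)
    ```

    Collecting labels this way is cheap compared to gathering general training data, which is why a specialized improvement loop seems plausible even if general improvements stall.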

    While I agree that improvements are not a given, I wouldn’t assume that they can never happen again. Despite these companies having effectively exhausted all of the text on the internet, improvements are currently still being made left, right, and center. If the many billions they are spending improve these models such that we have a fancy new tool for making our software safer and more secure: great! If it ends up being an endless money pit and nothing ever comes of it, oh well. I’ll just wait and see which of the two will be the case.


  • Not quite, though. In the blogpost the pentester notes that it found a similar issue (one he had overlooked) occurring elsewhere, in the logoff handler, which he noticed and verified while sifting through a number of the reports it generated. Additionally, the pentester noted that the fix it supplied accounted for (and documented) an issue that his own suggested fix was (still) susceptible to. This shows that it could be(come) a new tool that allows us to identify issues that are not found with techniques like fuzzing and can be overlooked even by a pentester actively searching for them, never mind a kernel programmer.

    Now, these models generate a ton of false positives, which make the signal-to-noise ratio still much higher than what would be preferred. But the fact that a language model can locate and identify these issues at all, even if sporadically, is already orders of magnitude more than what I would have expected initially. I would have expected it to only hallucinate issues, not finding anything that is remotely like an actual security issue. Much like the spam the curl project is experiencing.