• 0 Posts
  • 13 Comments
Joined 2 years ago
Cake day: September 3rd, 2023

  • While reducing his sources of income will hurt him a little, Starlink is unfortunately very appealing to militaries and emergency services. Being able to access the internet is great for morale in the navy, and mission-critical for plenty of emergency services. This is particularly true in Australia, where we have vast unpopulated areas with very patchy phone coverage, let alone bandwidth for data services. I know some services are installing Starlink as an emergency backup for stations and in forward command vehicles. They’ll be paying big bucks for Starlink.

  • There are two types of military sonar: active and passive. Active sonar sends out pings and listens for the echo, but it’s not very commonly used by submarines. Passive sonar just listens for engine noise from ships and prop noise from other submarines. Warships use active sonar more because they’re already noisy, so if they think an enemy submarine is in the area they’ll use it to try to locate it; in peacetime, though, it’s mostly reserved for training exercises. It’s still terrible for wildlife: even if animals don’t die, it’s very distressing and disorienting. But it’s not constantly scanning the way a radar does.

  • Copyright gives the copyright holder exclusive rights to modify the work, to use the work for commercial purposes, and attribution rights. The use of a work as training data constitutes commercial use, since the companies building these models are distributing and licensing them for profit. I think it would be a marginal argument to say that the output of these models constitutes copyright infringement on the basis of modification, but it’s worth arguing nonetheless. Copyright only protects a work up to a certain, hard-to-define amount of modification, but some of the outputs would certainly constitute infringement in any other situation. And these AI companies would probably find it nigh impossible to disclose specifically who the data came from.

  • Nobody has been able to make a convincing argument in favour of generative AI. Sure, it’s a tool for creating art: it abstracts the art-making process away so that the barrier to entry is low enough that anyone can use it regardless of skill. A lot of people have used these arguments to defend the tools, and some artists argue that because it takes no skill it is bad. I think that’s beside the point. These models have been trained on data in a way that is, in my opinion, both unethical and unlawful. The companies have not been able to conclusively demonstrate that the data was acquired and used in line with copyright law. That leads to the second, more powerful argument: they are using the labour of artists without any form of compensation, recognition, permission, or credit.

    If, somehow, the tools could come up with their own styles and ideas, then it should be perfectly fine to use them. But until that happens (it won’t; nobody will see unintended changes in AI output as anything other than mistakes, because the model has no demonstrable intent), use of generative AI should be seen as plagiarism or copyright infringement.