I received a comment from someone telling me that one of my posts had bad definitions, and he was right. Despite the massive problems caused by AI, it’s important to specify what an AI does, how it is used, for what reason, and what type of people use it. I suppose judges might already be doing this, but regardless, an AI used by one dude for personal entertainment is different from a program used by a megacorporation to replace human workers, and must be judged differently. Here, then, are some specifications. If these are still too vague, please help with them.
a. What does the AI do?
- It takes in a dataset of images, specified by a prompt, and compiles them into a single image through a program (like StaDiff, Dall-E, &c);
- It takes in a dataset of text, specified by a prompt, and compiles that into a single string of text (like ChatGPT, Gemini, &c);
- It takes in a dataset of sound samples, specified by a prompt, and compiles that into a single sound (like AIVA, MuseNet, &c).
b. What is the AI used for?
- It is used for drollery (applicable to a1 and a2);
- It is used for pornography (a1);
- It is used to replace stock images (a1);
- It is used to write apologies (a2);
- It is used to write scientific papers (this has actually happened; a2);
- It is used to replace illustration that the user would’ve done themselves (a1);
- It is used to replace illustration by a wage-laborer (a1);
- It is used to write physical books to print out (a2);
- It is used to mock and degrade persons (a1, a3);
- It is used to mock and degrade persons sexually (a1, a3);
- It is used for propaganda (a1, a2, a3).
c. Who is using the AI?
- A lower-class to middle-class person;
- An upper-class person;
- A small business;
- A large business;
- An anonymous person;
- An organization dedicated to shifting public perception.
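Since Lemmy won’t let me nest lists, here’s a rough sketch of the same a/b/c taxonomy as plain Python data structures. It’s my own framing, not a finished definition; the names and the handful of example entries are just illustrative, and the point is only that a combination like “b1 applies to a1 and a2” can be written down and checked.

```python
# Sketch of the a/b/c taxonomy as data structures (illustrative only).
from dataclasses import dataclass
from enum import Enum, auto


class Output(Enum):          # section a: what the AI produces
    IMAGE = auto()           # a1
    TEXT = auto()            # a2
    SOUND = auto()           # a3


class User(Enum):            # section c: who is using it
    LOWER_TO_MIDDLE_CLASS = auto()
    UPPER_CLASS = auto()
    SMALL_BUSINESS = auto()
    LARGE_BUSINESS = auto()
    ANONYMOUS = auto()
    PERCEPTION_SHIFTING_ORG = auto()


@dataclass
class Use:                   # section b: a use case and which outputs it applies to
    description: str
    applies_to: frozenset


USES = [
    Use("drollery", frozenset({Output.IMAGE, Output.TEXT})),           # b1
    Use("pornography", frozenset({Output.IMAGE})),                     # b2
    Use("replacing stock images", frozenset({Output.IMAGE})),          # b3
    # ... the remaining b-items would follow the same pattern
]


@dataclass
class Scenario:              # one concrete case to be judged: output x use x user
    output: Output
    use: Use
    user: User

    def is_consistent(self) -> bool:
        # a scenario only makes sense if the use applies to that output type
        return self.output in self.use.applies_to
```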
This was really tough to do. I’ll see if I can touch it up myself. As of now, Lemmy cannot do lists in lists.
This seems more like a collection of examples than an actual attempt at a definition.
At its core, AI is a program that takes a given input and returns the output that, during its training phase, would be expected to minimize its error (or maximize its reward).
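A toy sketch of that idea, reduced to the simplest possible case: fitting a line to a few example pairs by gradient descent. The data, learning rate, and loop length are all made up for the illustration; real systems do the same thing with billions of parameters instead of one.

```python
# Toy illustration of "minimize error during training": fit y = w * x to a few
# example pairs by nudging w in whatever direction reduces the squared error.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]   # (input, expected output) pairs
w = 0.0                                        # the single trainable parameter
learning_rate = 0.01

for epoch in range(500):
    for x, target in data:
        prediction = w * x
        error = prediction - target
        # gradient of the squared error (error**2) with respect to w is 2 * error * x
        w -= learning_rate * 2 * error * x

print(f"learned w is roughly {w:.2f}")  # ends near 2, the slope that minimizes the error
```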
I should add b12 “It is used to impersonate celebrities” and b13 “It is used to sell scams” in the next draft. Found these thanks to Flying Squid.
I mean, I doubt that an organization dedicated to shifting public perception would use AI for pornography lmao