

That print does actually look pretty nice, but I hate how inconsistent the two images are. It’d drive me fing crazy to have those prints on my wall when the continuity of design is so clearly lacking.
I’m afraid that we seem to disagree on who an artist is and what is a valid moral trade-off.
Is it really the democratization of art? Or the commodification of art?
Art has, with the exception of extraordinary circumstances, always been democratic. You could at any point pick up a pencil and draw.
AI has funneled that skill, critically through theft, into a commodified product: the AI model. Through which they can make huge profits.
The machine does the art. And, even when run on your local machine the model was almost certainly trained on expensive machines through means you could not personally replicate.
I find it alarming that people are so willing to celebrate this. It’s like throwing a party because you can buy bottled Nestlé water at the grocery store, water which was taken by immoral means. It’s nice for you, but ultimately just further consolidation of power away from individuals.
Sorry, I might have gone a bit ham on you there, it was late at night. I think I might have been rude.
Intellectual property theft used to be legal, but protections were eventually put in place to protect the industry of art. (I’m not a staunch defender of the laws as they are, and I believe they actually, in many cases, stifle creativity.)
I bring up the law not recognizing machine generated art only to dismiss the idea that the legal system agrees wholeheartedly with the stance that AI art is defensibly sold on the free market.
A) To suggest a machine neural network “thinks like a human” is like suggesting a humanoid robot “runs like a human.” It’s true in an incredibly broad sense, but carries so little meaning with it.
Yes, AI models use advanced statistical multiplexing of parameters, which can metaphorically be compared to neurons, but only metaphorically. It’s just vaguely similar. Inspired by, perhaps.
B) It hardly matters if AI can create art. It hardly even matters if it does so in exactly the way humans do.
Because the operator doesn’t have the moral or ethical right to sell it in either case.
If the AI is just a stochastic parrot, then it is a machine of theft leveraged by the operator to steal intellectual labor.
If the AI is creative in the same way as a person, then it is a slave.
I’m not actually against AI art, but I am against selling it, and I respect artists for trying to protect their industry. It’s sad to see an entire industry of workers get replaced by machines, and doubly sad to see that those machines are made possible by the theft of their work. It’s like if the automatic loom had been assembled out of centuries of collected fabrics. Each worker nonconsensually, unknowingly, contributing to the near-total destruction of their livelihood. There is hardly a comparison which captures the perversion of it.
Counterpoints:
Artists also draw distinctions between inspiration and ripping off.
The legality of an act has no bearing on its ethics or morality.
The law does not protect machine generated art.
Machine learning models almost universally utilize training data which was illegally scraped off the Internet (see Meta’s recent book piracy incident).
Uncritically conflating machine generated art with actual human inspiration, while career artists generally lambaste the idea, is not exactly a reasonable stance to state so matter-of-factly.
It’s also a tacit admission that the machine is doing the inspiration, not the operator. The machine which is only made possible by the massive theft of intellectual property.
The operator contributes no inspiration. They only provide their whims and fancy with which the machine creates art through mechanisms you almost assuredly don’t understand. The operator is no more an artist than a commissioner of a painting. Except their hired artist is a bastard intelligence made by theft.
And here they are, selling it for thousands.
Yes, sorry, where I live it’s pretty normal for cars to be diesel powered. What I meant by my comparison was that a train, when measured uncritically, uses more energy to run than a car due to its size and behavior, but that when compared fairly, the train has obvious gains and tradeoffs.
DeepSeek, as a roughly 600B-parameter model, is more efficient than the 405B Llama model (a fairer size comparison) because it’s a mixture-of-experts model with far fewer active parameters, and even when run in the R1 reasoning configuration, it is probably still more efficient than a dense model of comparable intelligence.
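The active-parameter point is the crux, and a back-of-the-envelope sketch shows why. This uses public, approximate parameter counts and the common ~2N FLOPs-per-token rule of thumb (which ignores attention overhead, and reasoning mode generating more tokens), so treat the numbers as rough illustrations, not measurements:

```python
# Rough per-token compute comparison between MoE and dense LLMs.
# Parameter counts are approximate public figures; the 2 * N rule of
# thumb estimates forward-pass FLOPs per token from active parameters.

def flops_per_token(active_params_b: float) -> float:
    """Approximate forward-pass FLOPs per token: ~2 * active parameters."""
    return 2 * active_params_b * 1e9

models = {
    "DeepSeek-V3/R1 (MoE)":   {"total_b": 671, "active_b": 37},
    "Llama 3.1 405B (dense)": {"total_b": 405, "active_b": 405},
    "Llama 3.3 70B (dense)":  {"total_b": 70,  "active_b": 70},
}

for name, m in models.items():
    print(f"{name}: {m['total_b']}B total, {m['active_b']}B active, "
          f"~{flops_per_token(m['active_b']):.1e} FLOPs/token")
```

By this estimate the 671B MoE model is cheaper per generated token than even the 70B dense model, because only ~37B parameters are active per token; R1’s reasoning mode then spends more tokens per answer, which is why total energy per answer is the fair thing to compare.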
Yeah, I was thinking diesel powered trains
This article is comparing apples to oranges here. The DeepSeek R1 model is a mixture-of-experts reasoning model with 600 billion parameters, and the Meta model is a dense 70 billion parameter model without reasoning which performs much worse.
They should be comparing DeepSeek to reasoning models such as OpenAI’s o1. They are comparable in results, but o1 costs significantly more to run. It’s impossible to know how much energy it uses because it’s a closed source model and OpenAI doesn’t publish that information, but they charge a lot for it on their API.
Tldr: It’s a bad faith comparison. Like comparing a train to a car and complaining about how much more diesel the train used on a 3 mile trip between stations.
I was thinking as a cost cutting measure. As long as performance is comparable to a moderate CPU/GPU combination, it’s less silicon, fewer interconnects, less RAM and cooling, and fewer parts likely to break during shipping / assembly. Like a gaming console.
Such a PC could still use sockets with upgradable APUs or CPUs, as well as PCIe slots for dedicated GPUs, retaining basic upgradability. A lot depends on the upcoming AMD APUs.
Imo, 4060 Ti performance in a 600 to 800 dollar box running an AMD APU with 16 to 32 GB of shared RAM. That’s all they need.
https://openai.com/index/how-openai-is-approaching-2024-worldwide-elections/
Here is a direct quote from OpenAI:
“In addition to our efforts to direct people to reliable sources of information, we also worked to ensure ChatGPT did not express political preferences or recommend candidates even when asked explicitly.”
It’s not a conspiracy. It was explicitly their policy not to have the AI discuss these subjects in meaningful detail leading up to the election, even when the facts were not up for debate. Everyone using GPT during that period of time was unlikely to receive meaningful information on anything Trump related, such as the legitimacy of Biden’s election. I know because I tried.
This is ostensibly there to protect voters from fake news. I’m sure it does in some cases, but I’m sure China would say the same thing.
I’m not pro China, I’m suggesting that every country engages in these shenanigans.
Edit: it is obvious that a 100 billion dollar company like OpenAI, with its multitude of partnerships with news companies, could have made GPT communicate accurate and genuinely critical news regarding Trump, but that would be bad for business.
Perhaps now it is, but leading up to the election, I found GPT would outright refuse to discuss Trump in voice mode. Meta AI too. It was very frustrating. It would start, and then respond with something like, “I’m not able to talk about that, yet.”
https://www.wired.com/story/google-and-microsofts-chatbots-refuse-election-questions/
There are plenty of examples of AI either refusing to discuss subjects of the elections (I remember Meta AI basically just saying “I’m learning how to respond to these questions.”) or, in the above case, just hand waving away clear issues of wrongdoing.
ChatGPT’s advanced voice mode would constantly activate its guardrails when asked about Trump or “politically charged” topics.
Incidentally, no Western AI would make a statement on Donald Trump’s crimes leading up to the election. AI propaganda is a serious issue. In China the government enforces it; in America, billionaires.
I frequently forget that chrome is installed on my phone. The only time I’m forced to use it is about once a year when I order Papa John’s Pizza takeout. Their checkout page doesn’t seem to work in any other browser.
Something which clarified Zuck’s behavior in my mind was an interview where he said something along the lines of, “I could sell meta for x amount of dollars, but then I’d just start another company anyways, so I might as well not.”
The guy isn’t doing what financially makes sense. He’s uber-rich and working on whatever projects he thinks are cool. I wish Zuck would stop sucking in all his other ways, but he just doesn’t care about whether his ideas are going to succeed or not.
I actually don’t think this is shocking or something that needs to be “investigated.” Other than the sketchy website that doesn’t secure users’ data, that is.
Actual child abuse / grooming happens on social media, chat services, and local churches. Not in a one-on-one between a user and an LLM.
I wonder if you could just use your PC to hotspot when you need to use VR.
Yeah, right? I mean, imagine if YouTube went down and just deleted all the videos. People would be up in arms demanding legislative action. There would be endless lawsuits.
As a creative, you rely on platforms to not obliterate your stuff. At least not immediately. This guy has a horse in this race.
https://store.steampowered.com/app/2382520/Erenshor/