

Thanks!
Trouble is, your statement was in answer to @[email protected]’s comment that labeling lonely people as losers is problematic.
Also it still looks like you think people can only be lonely as a consequence of their own mistakes? Serious illness, neurodivergence, trauma, refugee status etc can all produce similar effects of loneliness in people who did nothing to “cause” it.
And Hastalavista if you wanted to find things that Altavista didn’t.
That’s really interesting. Its output to this prompt totally ignored the biggest and most obviously detrimental effect of this problem at scale.
Namely, emotional dependence will give the big tech companies that own AIs increased power over people.
It’s not as if these concepts aren’t widely discussed online: everything from Meta’s emotional manipulation experiments and Cambridge Analytica through to the meltdowns Replika users had over changes to the algorithm is relevant here.
That thing about macaques is interesting.
For now. Internationally, that phrase has been around for much longer than that, and I think it will be around for a lot longer.
I’ve always viewed it as a call to repurpose the resources currently hoarded by the rich.
I think Reddit is just pretending to interpret it as incitement to cannibalism.
Nooo, enshittification. I’ve only recently started using it.
What do we use instead? Is Matrix the only option?
I think I’m just going to have to agree to disagree.
AI getting a diagnosis wrong is one thing.
AI being built in such a way that it hands out destructive advice human scientists already know is wrong, like “vaccines cause autism”, homeopathy, etc, is a malevolent and irresponsible use of tech imo.
To me, it’s like watching a civilization downgrading its own scientific progress.
I take your point. The version I heard of that joke is “the person who graduated at the bottom of their class in med school”.
Still, at the moment we can try to avoid those doctors. I’m concerned about the popularizing and replication of bad advice beyond that.
The problem here is this tool is being marketed to GPs, not patients, so you wouldn’t necessarily know where the opinion is coming from.
I’d hope the bar for medical advice is higher than “better than the worst doctor”.
Will be interesting to see where liability lies with this one. In the example given, following the advice could permanently worsen patients’ conditions.
Given that the advice is proven to be wrong and goes against official medical guidance for doctors, that could potentially be material for a class action lawsuit.
When we look at passing scores, is there any way to quantitatively grade them for magnitude?
Not all bad advice is created equal.
It would have to be the fail rate of an average doctor, because if average doctors are the use case, then moving the bar to the fail rate of a bad doctor doesn’t make any sense. You would end up saying worse outcomes = better.
I think the missing piece here is accountability.
If doctors are being encouraged to give harmful out-of-date advice, who will end up with a class action lawsuit on their hands - doctors or OE?
Openr
Openster
Open .io
Ikr. Doing that with the richest man instead does not seem to have the same effect.
President Dwayne Elizondo Mountain Dew Herbert Camacho will be interested.
Thanks, cool! I was able to start following you by searching for [email protected] in my instance. For some reason your pictures aren’t showing up for me yet but maybe it will populate soon.
Me too. I can’t find you on pixelfed?
Sort of, but I think “influence over emotional states” is understating it and just the tip of the iceberg. It also makes it sound passive and accidental. The real problem will be overt control as a logical extension of the kinds of trade-offs we already see people make about, for example, data privacy. With the Replika fiasco, I bet heaps of those people would have paid good money to get their virtual love interests de-“lobotomized”.