Randomgal

joined 2 years ago
[–] Randomgal@lemmy.ca 1 points 9 hours ago

The three answers say literally completely different things. Wtf? Lmao

[–] Randomgal@lemmy.ca 6 points 14 hours ago

Pale brown with a green tint.

[–] Randomgal@lemmy.ca 6 points 15 hours ago (4 children)

Wtf is a torrent

[–] Randomgal@lemmy.ca -3 points 2 days ago

Do protests? Oh good, here I thought they were doing absolutely nothing.

[–] Randomgal@lemmy.ca 5 points 2 days ago (1 children)

You're literally voluntarily choosing ignorance.

[–] Randomgal@lemmy.ca 7 points 2 days ago

If you haven't seen them you need to start looking, because it's not even that hard.

[–] Randomgal@lemmy.ca 7 points 2 days ago (3 children)

Lmao. I think Monsanto is pouring money into Lemmy, because holy shit there are some brown-nosing bootlickers defending daddy Monsanto.

[–] Randomgal@lemmy.ca -2 points 2 days ago (1 children)

It's because the 'people' you are talking to are either propagandists or bots.

[–] Randomgal@lemmy.ca -2 points 3 days ago

Don't worry, I had AI TL;DR it for you:

Summary of "The Reverse-Centaur's Guide to Criticizing AI"

Cory Doctorow distinguishes between centaurs (humans assisted by machines) and reverse-centaurs (humans serving as appendages to machines). His core thesis: AI tools are marketed as centaur-making devices but deployed to create reverse-centaurs—workers subjected to algorithmic control and expected to catch machine errors while being blamed for failures.

The AI bubble exists primarily to maintain growth stock valuations. Once tech monopolies dominate their sectors, they face market reclassification from "growth" to "mature" stocks, triggering massive valuation drops. AI hype keeps investors convinced of continued expansion potential.

AI's actual business model: Replace high-wage workers (coders, radiologists, illustrators) with AI systems that cannot actually perform those jobs, while retaining skeleton crews as "accountability sinks"—humans blamed when AI fails. This strategy reduces payroll while maintaining superficial human oversight.

Why expanding copyright won't help creators: Despite 50 years of copyright expansion, creative workers earn less both absolutely and proportionally while media conglomerates profit enormously. New training-related copyrights would simply become contractual obligations to employers, not worker protections.

The effective counter-strategy: The U.S. Copyright Office's position that AI-generated works cannot receive copyright protection undermines corporate incentives to replace human creators entirely. Combined with sectoral bargaining rights (allowing industry-wide worker negotiations), this creates material resistance to worker displacement.

On AI art specifically: Generative systems produce "eerie" outputs—superficially competent but communicatively hollow. They cannot transfer the "numinous, irreducible feeling" that defines art because they possess no intentionality beyond statistical word/pixel prediction.

The bubble will collapse, leaving behind useful commodity tools (transcription, image processing) while eliminating economically unsustainable foundation models. Effective criticism should target AI's material drivers—the growth-stock imperative and labor displacement economics—not peripheral harms like deepfakes or "AI safety" concerns about sentience.

[–] Randomgal@lemmy.ca 0 points 3 days ago

No, it's only in the other direction of extremism.

[–] Randomgal@lemmy.ca 2 points 5 days ago

Bots are easier to test on smaller platforms. Lemmy also has a lot of extremists that can be easily tricked into blindly believing the bot because they agree with the machine's propaganda.

Bots are rampant, you just need to pay attention.

[–] Randomgal@lemmy.ca 6 points 5 days ago (2 children)

It's much worse if you start paying attention to text posts.
