Luddites and the Future of AI Resistance

I admit that this is speculative, but I think it would be awesome if Luddites armed themselves with baseball bats, axes, hammers and chainsaws and proceeded to destroy all the self-checkout machines, the robots and the AI data centres. 

I would cheer them on. 

 


1. Who Were the Luddites?

The Luddites were a social movement of English textile workers and weavers in the early 19th century, primarily active between 1811 and 1816. They protested the introduction of mechanized looms and knitting frames, which threatened their livelihoods. Key points about the movement:

  • Economic Threat: Machines allowed factory owners to produce textiles faster and cheaper, often with unskilled labor, undermining the skilled craft of weavers.

  • Direct Action: Luddites responded by smashing machines and attacking factories, a form of early industrial sabotage.

  • Political Context: The British government viewed them as a threat to social order. Severe crackdowns followed, including executions and transportation to penal colonies.

  • Misconceptions: Today, “Luddite” is often used to describe anyone opposed to technology. Historically, they were not anti-technology in general—they were anti-economic displacement caused by unregulated industrialization.


2. The Parallels with Modern AI

Many aspects of the Luddite struggle echo modern fears about AI and robotics:

  • Job Displacement: Just as mechanized looms replaced skilled weavers, AI threatens white-collar jobs, creative professions, and technical roles. Automation could drastically reduce employment opportunities for millions.

  • Concentration of Power: Factory owners then, and tech conglomerates now, control the machines that reshape society. AI amplifies wealth and influence for a few while leaving many behind.

  • Loss of Skills: Skilled craft was devalued in the Industrial Revolution. Similarly, human expertise in areas like writing, coding, and diagnostics could be rendered secondary to AI capabilities.

  • Speed of Change: AI evolves faster than laws, regulations, and societal norms can adapt, creating a sense of helplessness and resentment.


3. Why People Might Rise Against Robots and AI

If history is any guide, social unrest can follow rapid technological disruption. Factors that could drive a near-future uprising include:

  1. Mass Unemployment: Widespread AI-driven layoffs may create desperate populations who see destruction of AI as a form of reclaiming control.

  2. Economic Inequality: If the gains from AI are concentrated among corporations and elites, resentment could trigger organized resistance.

  3. Ethical and Existential Concerns: Beyond economics, fears of AI surveillance, manipulation, or autonomous weapons could motivate preemptive sabotage.

  4. Cultural Pushback: AI may be seen as alien to human creativity and identity, fueling anti-technology sentiment similar to the moral and cultural critiques the Luddites faced.


4. Historical Lessons

  • Suppression Does Not Solve the Problem: The British crackdown on Luddites didn’t stop industrialization; it merely forced the conflict underground.

  • Organized Resistance Can Be Temporary: Social movements need clear goals. Modern AI resistance might need structured frameworks to avoid chaos.

  • Technology Will Advance Anyway: Complete destruction of AI is unlikely to stop progress, but targeted actions may aim to control or slow deployment in ways that protect human labor and autonomy.

So...

Based on those lessons, the continued advance of AI seems inevitable.

Unless, of course, a Luddite movement became so widespread that it was unstoppable, and/or perhaps if someone decided to organize a Fire Sale.

A Fire Sale, for those unfamiliar with the term (it was popularized by the film Live Free or Die Hard)...

A fire sale refers to a scenario where critical infrastructure systems are deliberately or unintentionally triggered to fail simultaneously, causing widespread cascading failures and chaos.

  • Example in power grids: If one part of the electrical grid fails, it can overload other sections, leading to a chain reaction of blackouts.

  • Purpose or effect: Fire sales in infrastructure create systemic collapse, not just isolated disruptions, because interconnected systems amplify the damage.

It’s essentially a catastrophic domino effect across essential systems, often discussed in security and disaster planning.
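The domino effect described above can be sketched with a toy model. This is not a real grid simulation, just an illustration of the mechanism: each node (think: substation) carries a load near its capacity, and when one fails, its load is redistributed to surviving neighbors, which can push them past their own limits. All names and numbers here are made up for the example.

```python
def cascade(neighbors, load, capacity, initial_failure):
    """Return the set of nodes that fail after the cascade settles.

    Toy model: a failed node's load is split evenly among its surviving
    neighbors; any neighbor pushed past capacity fails in turn.
    """
    failed = set()
    queue = [initial_failure]
    while queue:
        node = queue.pop()
        if node in failed:
            continue
        failed.add(node)
        survivors = [n for n in neighbors[node] if n not in failed]
        if not survivors:
            continue
        share = load[node] / len(survivors)
        for n in survivors:
            load[n] += share
            if load[n] > capacity[n]:
                queue.append(n)
    return failed

# A small ring of five hypothetical substations, each running close to
# its capacity limit -- one failure is enough to take down all of them.
neighbors = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
load = {n: 8.0 for n in neighbors}
capacity = {n: 10.0 for n in neighbors}

print(sorted(cascade(neighbors, load, capacity, initial_failure=0)))
# -> [0, 1, 2, 3, 4]: every node fails
```

With generous headroom (say, capacity 100.0), the same initial failure stays isolated. The point of the sketch is that interconnection plus thin safety margins, not the initial failure itself, is what turns one outage into systemic collapse.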

So in theory, if the economics got really bad due to AI and robots taking all the jobs, Luddites might seek to organize a Fire Sale to deliberately collapse the system so that society could restart without the need for AI.

Speaking hypothetically, of course. 


 

Why AI Companies Suck

Remember when discussions of AI were science fiction, the future of AI was far in the future, and the worst thing about it was the prospect of killer robots and Skynet?
 
And if we were lucky, the killer robots would play tricks on each other while one of them tried to kill John Connor... 
 
Well... Congratulations. Now that we have AI, it sucks, the companies suck, there are no killer robots, there's no Skynet (not yet, at least), and AI is now fueling a stock market bubble. 
 
Oh, and good luck if you're entering the workforce and want an entry level position. AI has made your future job obsolete. You aren't needed any more.
 
And you cannot even get a job at a grocery store, because they've replaced the cashiers with self-checkout.
 
And you cannot get a factory job either. Guess why? Robots took your job.
 
I won't be surprised when people start taking baseball bats to the self-checkout machines.
 
Meanwhile, let's explain why all the AI companies suck. 

1. OpenAI

  • Why It Sucks: Despite raising massive funding and achieving high valuations, OpenAI remains unprofitable. New releases often fail to meet expectations, producing results that underwhelm users.

  • Overvaluation: The company’s high valuation is not backed by consistent revenue or significant technological breakthroughs.

  • Market Impact: OpenAI’s inflated valuation feeds into the broader AI stock market bubble.

2. Nvidia (NVDA)

  • Why It Sucks: Nvidia’s AI hardware dominates the market, but advances by smaller startups show that equally capable AI can be run with less computing power, challenging Nvidia’s assumed dominance.

  • Overvaluation: Despite strong revenue growth, its stock price reflects overly optimistic expectations.

  • Market Impact: Stock volatility highlights the instability of AI-sector investments.

3. Alphabet (GOOGL)

  • Why It Sucks: Alphabet’s AI initiatives have struggled to produce breakthroughs that meaningfully affect revenue.

  • Overvaluation: Stock prices remain elevated despite modest returns from AI, suggesting investor expectations are inflated.

  • Market Impact: As a major AI player, Alphabet heavily influences investor sentiment in the sector.

4. Microsoft (MSFT)

  • Why It Sucks: Microsoft’s AI projects, while high-profile, haven’t yet transformed core business operations or generated substantial incremental revenue.

  • Overvaluation: Stock prices reflect high expectations that may not be met in the near term.

  • Market Impact: Microsoft’s involvement amplifies market enthusiasm, which may be unsustainable.

5. Meta Platforms (META)

  • Why It Sucks: Meta’s AI initiatives face challenges in adoption, monetization, and demonstrating meaningful value.

  • Overvaluation: Its stock remains elevated despite limited returns from AI, suggesting overhype.

  • Market Impact: Meta’s performance affects perceptions of AI investments across the market.

6. Tesla (TSLA)

  • Why It Sucks: Tesla’s AI efforts in autonomous driving continue to face regulatory, technical, and safety hurdles.

  • Overvaluation: Stock prices assume faster progress and higher returns than realistic.

  • Market Impact: Tesla’s stock volatility contributes to instability in AI-related investments.

7. Amazon (AMZN)

  • Why It Sucks: Amazon’s AI initiatives have struggled to scale and deliver significant revenue improvements.

  • Overvaluation: Its stock reflects high expectations despite limited returns.

  • Market Impact: Amazon’s AI performance helps drive overall market hype, feeding the bubble.


How and Why GROK Sucks

GROK promised to be the next big thing in AI-assisted research, data parsing, and problem-solving. Yet, despite the hype, it often falls short—and part of that failure is tied directly to its association with Elon Musk. Here’s a breakdown of how and why GROK disappoints.

1. Elon Musk’s Toxic Brand

In today’s climate, anything associated with Elon Musk carries a level of toxicity:

  • Public controversies, erratic statements on social media, and high-profile business missteps have tainted perception of products under his name.

  • Users are skeptical by default, and early reviews of GROK often focus more on Musk’s behavior than the product itself.

  • Brand trust has eroded to the point that even a technically decent tool is viewed as unreliable or risky simply because of its association.

    Everything associated with Elon Musk is annoying, overpriced garbage by design.

2. Influence on AI Direction

Musk’s involvement in AI projects has arguably made GROK worse, rather than better:

  • Musk has a history of prioritizing hype and PR over substance, pushing ambitious timelines that lead to rushed or unfinished features.

  • His public fears about AI—claims that AI could be dangerous or uncontrollable—may have constrained GROK’s design, making it more conservative, limited, or prone to overly cautious output.

  • Decisions influenced by Musk appear to emphasize visionary branding over user-centered functionality, resulting in a product that looks flashy but underperforms in real-world use.

3. Overhyped Performance

Even without Musk, GROK’s AI engine struggles:

  • Responses are often generic or surface-level, lacking depth or insight.

  • GROK frequently misinterprets context, giving plausible-sounding but wrong answers.

  • On complex or nuanced topics, GROK can produce misleading or incorrect results.

4. Poor Integration

GROK markets itself as a tool to streamline workflows, but in practice:

  • Integrations with other platforms are buggy or incomplete.

  • Syncing data often breaks, leading to lost work.

  • Teams may spend more time troubleshooting GROK than using it productively.

5. User Experience Nightmares

  • The interface is cluttered and confusing, with essential functions buried behind extra clicks.

  • Documentation is sparse or outdated, leaving users guessing at solutions.

  • Customer support is slow or unhelpful, creating frustration instead of assistance.

6. Expensive for What It Is

  • GROK subscriptions are high-cost, yet the core features are underwhelming.

  • Users still need external tools or manual workarounds, reducing the value proposition.

     

    In short... GROK sucks donkey balls.
