Meta Is Worse Than You Think

Most of the criticism leveled at Meta—the privacy violations, the algorithm-driven radicalization, the Instagram mental health effects—is accurate.

By Shahzaib · 3 min read

But what makes Meta genuinely dangerous isn't any single scandal or policy. It's something more systemic and harder to articulate: the company has built a business model that profits directly from the degradation of human attention and social connection, and it's defended that model so aggressively that it's reshaped how we think about what's possible in technology.

The privacy stuff is real and worth being angry about. Meta collects data about you in ways most people don't fully understand—not just what you post, but where you go, what you buy, what you search for, which ads you linger on. The company uses this data to build profiles so detailed that advertisers can target you with surgical precision. When Frances Haugen leaked internal documents in 2021, they revealed that Meta knew its platforms were harming teen mental health, particularly for girls, and chose not to meaningfully change its approach. That's a choice, not an accident.

But here's what bothers me more than the privacy breaches: Meta has successfully convinced much of Silicon Valley—and by extension, much of the world—that this extraction model is the only viable way to run a social platform at scale. That's the real damage.

Before Meta dominated, different business models existed. Flickr wasn't destroying itself with engagement algorithms designed to maximize outrage. Early YouTube was messy and inefficient, but it wasn't algorithmically funneling people toward conspiracy videos. These platforms had real problems, but they weren't as systematically incentivized to corrupt human behavior as Meta's properties are. The difference is that they weren't operating under the assumption that user engagement is the primary metric that matters.

Meta's contribution wasn't innovation; it was normalization. The company didn't invent targeted advertising or engagement metrics. What it did was prove that if you optimize ruthlessly for engagement above all else, you can make billions. And then it did that repeatedly, across multiple platforms, at planetary scale. Instagram wasn't always a vehicle for curating your appearance and comparing yourself to others; it became that after Meta acquired it and integrated it into its engagement-maximization system. WhatsApp was designed as a privacy-respecting alternative to carrier messaging, and it stayed that way until Meta owned it and began dismantling those protections.

This creates a kind of gravity well. Once Meta proved that the extraction model works, competitors felt pressure to follow. TikTok doesn't have Meta's surveillance infrastructure, but it has an algorithm that's arguably even more effective at determining what keeps you scrolling. YouTube adopted recommendation systems based on engagement that push people toward increasingly extreme content. Reddit, Snapchat, Twitter—they all moved in this direction because the business case seemed undeniable.

The company's response to criticism has been contempt dressed up as inevitability. When researchers point out that Facebook's algorithm amplifies divisive content, Meta argues that the algorithm is just reflecting what people want. When critics raise mental health concerns, Meta publishes self-funded research that finds minimal harm. When regulators threaten action, Meta hires armies of lobbyists and threatens to abandon markets. The company doesn't seem to believe it bears any responsibility for the systems it built and actively maintains.

What makes this worse is that Mark Zuckerberg has shown, repeatedly, that he doesn't believe there's a better way. Or rather, he's convinced himself that admitting there might be is the same as admitting his entire project is immoral. So instead of exploring different business models, different governance structures, different ways of measuring success, Meta doubles down. It's all-in on the metaverse. It's investing billions in brain-computer interfaces. It's fighting regulators in courts around the world because the company cannot tolerate the idea that it might need to operate differently.

The worst part is that this has become self-fulfilling. Venture capitalists fund startups based on Meta's playbook because it's proven profitable. Talented engineers build careers optimizing engagement metrics because that's where the money is. Policy moves slowly, and by the time regulation catches up, the damage is already embedded in how we think about technology. The idea that "you're the product" isn't even shocking anymore—it's just how things are.

What we've lost, in the Meta era, is the sense that technology companies might have obligations beyond maximizing shareholder value. We've lost the possibility of social platforms that prioritize being useful over being addictive. We've normalized a level of surveillance that would have seemed dystopian twenty years ago. And we've taught a generation that their attention and their data are things to be harvested, not protected.

The company itself is just one actor. But it's a powerful one. Meta has shaped what's possible in technology, and right now, what Meta has made possible is extracting maximum value from human behavior while distributing maximum harm. That's not a feature. It's the whole business model.

The scary part is that it's working. That's what makes Meta worse than you think: the model works, everyone knows it works, and now everyone is copying it.
