Big Tech Is Using Our Data to Shape Us, Not Just Sell to Us

I’ve always known that my activity online was being tracked. Any time I opened Instagram or TikTok, it was obvious. At first, the feed felt totally random. Then within minutes, videos popped up that fit my interests perfectly. Old friends I hadn’t thought about in years were suddenly being suggested.

What I hadn’t fully grasped until recently, though, is just how far this tracking actually goes and how risky it can be. The moment this clicked for me was during a conversation with my law professor this spring. We were talking about U.S. politics when he said something that stuck with me: when people say they’ll “do their own research,” it’s not always as safe as it sounds. Platforms like TikTok and Instagram don’t just reflect what we’re curious about. They feed us exaggerated, misleading, or outright false information, reinforcing whatever we already believe.

I’d vaguely thought about social media’s role in political polarization before. But I hadn’t realized the full extent of how these platforms manipulate opinions or that they shape the very ideas people think they’ve come up with on their own. That realization made me want to dig deeper. How exactly do tech companies collect our data? What do they do with it? And why does it matter for more than just personalized ads?

Big Tech’s influence over our lives is growing faster than we can keep up with. It’s changing how we communicate, work, shop, and even how we think. That’s why it’s so important to understand what it really means when we say these companies “have our data.”

What Data Are Tech Companies Actually Collecting?

When people hear that their data is being collected, they usually imagine things like their name or email address. But it’s so much more than that. Big Tech builds entire digital profiles of us by gathering everything from contact info and IP addresses to browsing history, clicks, purchase records, and even how long we pause on certain posts.

Basically, if you’re using a device, you’re being tracked. Giants like Google, Meta, TikTok, and Amazon have turned personal data into a trillion-dollar business, using what they learn to predict what we’ll like, what we’ll buy, and what will keep us glued to the screen.

How Do They Collect This Data?

There are three main ways this happens.

  1. You Give It to Them
    Anytime you sign up for an account, fill out a form, enter a contest, or do a survey, that’s data you’re handing over willingly.

  2. They Watch You
    Every site you visit is tracking something: how long you stay, what you click, how fast you scroll. Little tools like cookies and pixels follow you around the internet, gathering details even when you think you’ve left the site. (A rough sketch of what one of these snippets looks like follows this list.)

  3. They Buy It
    Companies buy data from brokers who’ve pieced together information through cross-platform sharing from apps and websites. This gives them scarily complete profiles of who you are, what you do, and what you’re likely to want next.
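
To make the “they watch you” part concrete, here is a minimal sketch of what a tracking snippet embedded in a web page can do. The endpoint (collect.example.com) and the field names are made up for illustration, and real trackers are far more elaborate, but the basic pattern is the same: give the visitor an identifier, then quietly report back what they do.

// Toy tracking snippet in TypeScript (illustrative only; endpoint and fields are invented).

// 1. Give the visitor a persistent identifier via a cookie.
function getOrSetVisitorId(): string {
  const match = document.cookie.match(/visitor_id=([^;]+)/);
  if (match) return match[1];
  const id = crypto.randomUUID();
  // The cookie lives for a year, so the visitor is recognizable on every return visit.
  document.cookie = `visitor_id=${id}; max-age=${60 * 60 * 24 * 365}; path=/`;
  return id;
}

const visitorId = getOrSetVisitorId();
const pageLoadedAt = Date.now();

// 2. A "pixel": a tiny image request that smuggles data out in its URL.
const pixel = new Image();
pixel.src =
  `https://collect.example.com/pixel.gif?vid=${visitorId}` +
  `&page=${encodeURIComponent(location.href)}` +
  `&referrer=${encodeURIComponent(document.referrer)}`;

// 3. When the visitor leaves, report how long they stayed.
window.addEventListener("pagehide", () => {
  const secondsOnPage = Math.round((Date.now() - pageLoadedAt) / 1000);
  navigator.sendBeacon(
    "https://collect.example.com/event",
    JSON.stringify({ vid: visitorId, page: location.href, secondsOnPage })
  );
});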

What Do They Do With All This Data?

Their main goal is to create a super-detailed profile of you so they can target you with content and ads designed to keep you engaged as long as possible. The more time you spend, the more money they make.

Take TikTok. Its entire algorithm is built to feed you exactly what you’ll find hard to stop watching. It’s not just about showing you things you might like; it’s about keeping you there, endlessly scrolling.
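
No one outside TikTok knows exactly how its ranking works, so the sketch below is only a toy illustration of the general idea behind engagement-driven feeds: candidate videos get scored by how likely you are to keep watching and interacting, and the highest scorers fill your feed. The fields, weights, and example videos are all invented.

// Toy engagement-ranked feed (illustrative only; fields and weights are invented).
interface Candidate {
  id: string;
  predictedWatchSeconds: number; // the model's guess at how long you'll watch
  predictedLikeProb: number;     // chance you'll tap "like"
  predictedShareProb: number;    // chance you'll share it
}

// The score rewards whatever keeps you on screen, whether or not the content
// is accurate, healthy, or something you'd endorse on reflection.
function engagementScore(c: Candidate): number {
  return c.predictedWatchSeconds + 5 * c.predictedLikeProb + 10 * c.predictedShareProb;
}

function rankFeed(candidates: Candidate[]): Candidate[] {
  return [...candidates].sort((a, b) => engagementScore(b) - engagementScore(a));
}

// Example: an outrage-bait clip with a long predicted watch time outranks a calmer
// video the viewer might actually value more afterwards.
const feed = rankFeed([
  { id: "calm-explainer", predictedWatchSeconds: 12, predictedLikeProb: 0.4, predictedShareProb: 0.05 },
  { id: "outrage-bait",   predictedWatchSeconds: 45, predictedLikeProb: 0.1, predictedShareProb: 0.08 },
]);
console.log(feed.map(c => c.id)); // ["outrage-bait", "calm-explainer"]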

But there’s more. Companies like Meta, which owns Facebook and Instagram, are in the business of surveillance advertising. That means they make their money not by running platforms but by selling ultra-targeted access to you. Meta watches your activity across millions of sites, even when you’re not on Facebook or Instagram. This became obvious when Apple introduced App Tracking Transparency on iPhones: Meta reportedly lost around $10 billion in ad revenue because it suddenly couldn’t track users the way it used to. That’s how central your data is to their business.

Why Is This So Dangerous?

Sure, the idea of being sold to is annoying, but the risks are much bigger than that.

  1. Biased or Manipulative AI
    AI models learn from the past, including all its biases. One study showed mortgage approval algorithms were more likely to reject black applicants than white ones with identical finances because the training data reflected decades of unfair lending practices.

  2. Microtargeting and Opinion-Shaping
    Algorithms don’t just recommend. They influence. Think of Cambridge Analytica, which used Facebook data to target U.S. voters with political ads designed to sway their decisions. Or YouTube’s recommendation engine, which researchers found nudged right-leaning users toward conspiracy content and political extremism. These platforms aren’t just reflecting your interests. They’re actively shaping them.

  3. Privacy Erosion
    Even harmless-looking data points like what you search or where you shop can reveal deeply personal things: your sexuality, religion, health, or political leanings. This info can be sold or shared without your knowledge. One investigation even showed that location data collected through ordinary apps could be used to track U.S. military personnel on base, down to suspected nuclear weapons sites.

  4. Surveillance Misuse
    Governments and police increasingly buy this data, no warrants needed. In the UK, live facial recognition tech was found to misidentify women and people of color far more often than white men. Why? Because the systems were trained on biased and incomplete data.

So What Now?

When I finished researching all this, I felt frustrated and helpless. I hadn’t realized how much of my online life was being tracked or how that tracking could influence what I think and believe. Part of me wanted to quit social media altogether.

But that’s not realistic. These platforms are how I stay connected, make plans, and keep in touch with family and friends. Tech is woven into every part of life. It’s not something I can or want to fully walk away from.

So instead, I made some changes. I cut down my screen time. I stopped mindlessly scrolling. I got intentional about where I get my news and how I stay informed.

But personal changes aren’t enough. We also need bigger shifts: changes in how tech companies operate, and rules that put people first, not profit. That’s how I discovered the concept of humane technology.

Humane technology is all about designing tools and platforms that prioritize human well-being over clicks and dollars. It asks companies to think about mental health, fairness, ethics, and sustainability, not just engagement. It reminds us that tech should serve us, not manipulate us.

Individual choices matter. But real change means holding Big Tech accountable, pushing for better policies, and demanding systems that respect people.

To continue this conversation, the next post will explore what humane technology really means and how it could help us build a healthier digital future.

_____

Forbes Technology Council. (2025). 20 modern tech tools that are advancing public safety. Forbes. https://www.forbes.com/councils/forbestechcouncil/2025/05/23/20-modern-tech-tools-that-are-advancing-public-safety/

IT Chronicles. (n.d.). How do big companies collect customer data? https://itchronicles.com/big-data/how-do-big-companies-collect-customer-data/

ScienceDirect. (n.d.). User data. https://www.sciencedirect.com/topics/computer-science/user-data

Forbes Technology Council. (2022). What does big tech actually do with your data? Forbes. https://www.forbes.com/councils/forbestechcouncil/2022/02/16/what-does-big-tech-actually-do-with-your-data/

O’Flaherty, K. (2021). Apple’s new iPhone privacy features cost Facebook $10 billion. Forbes. https://www.forbes.com/sites/kateoflahertyuk/2021/11/06/apples-new-iphone-privacy-features-cost-facebook-10-billion/

Stacey, E. (2020). Can AI ever treat all people equally? Forbes. https://www.forbes.com/sites/edstacey/2020/02/20/can-ai-ever-treat-all-people-equally/

Rosenberg, M., & Confessore, N. (2018). Cambridge Analytica and Facebook: The scandal and the fallout so far. The New York Times. https://www.nytimes.com/2018/04/04/us/politics/cambridge-analytica-scandal-fallout.html

Martineau, P. (n.d.). How the U.S. military buys location data from ordinary apps. Wired. https://www.wired.com/story/phone-data-us-soldiers-spies-nuclear-germany/

Lehigh University. (n.d.). AI exhibits racial bias in mortgage underwriting decisions. https://news.lehigh.edu/ai-exhibits-racial-bias-in-mortgage-underwriting-decisions

Fransiska Blois

Fransiska is a Business Management student at ESCP Paris and a Product Management Intern at hansel.ai.
