Internet Privacy And Control: Why Privacy Is Dying
- Michael Burgess

- Feb 25
- 15 min read

Internet privacy and control: why the internet feels like it's closing
You've likely experienced the unsettling awareness that our world is increasingly built on the principles of control rather than on the needs of individuals. Despite your efforts to lead a normal life, finding solace in work, unwinding during leisure time, engaging in meaningful conversations, pursuing knowledge, and sharing laughter, the landscape seems to shift beneath your feet.
Rules that once felt stable are now in constant flux, ushering in an array of new checks and limitations. Each day brings fresh restrictions, ostensibly designed for our safety but often feeling more like encroachments on our freedom. With every new policy and updated term of service, it becomes increasingly clear: our privacy erodes a little more, our choices narrow with each passing moment. The once straightforward path of daily life is now lined with obstacles, leaving us to navigate a world that feels increasingly like a carefully controlled maze. This is really about internet privacy and control. You feel the change because it shows up in normal life: more checks, more tracking, more rules, and less room to move. It's not just "privacy stuff" anymore. It's how the internet is being reshaped around compliance, risk scoring, and gatekeeping.
So what is going on?
Let's discuss this candidly, without resorting to imagined scenarios. Why does it often seem as though the world is working against its own citizens?
A significant part of this perception can be attributed to underlying incentives. Governments prioritise stability and risk minimisation to maintain social order. Meanwhile, large institutions have a vested interest in reducing uncertainty, as it allows for predictability in their operations and financial outcomes. Technology platforms, driven by their need for continual growth, seek to create a safe environment for advertisers, ensuring minimal backlash and legal repercussions. Payment networks are primarily focused on compliance with a myriad of regulations that govern financial transactions, which adds another layer of complexity.
Moreover, businesses that thrive on data, those that rely on intricate algorithms and analytics, yearn for more information, clinging to the belief that more signals will lead to better decision-making.
When these diverse incentives converge, the result is often a familiar and troubling outcome: increased surveillance, stricter gatekeeping measures, heightened friction for everyday individuals, and an accumulation of power in the hands of those who manage these systems.
This phenomenon doesn't necessarily require a grand, orchestrated conspiracy. Instead, it tends to manifest gradually and insidiously. We see what's often referred to as policy drift, where regulations shift subtly over time, corporate drift, where businesses continuously prioritise profit over public interest, and fear-based drift, where decisions are influenced by the prevailing anxieties of the moment, all of which inexorably push toward a similar end point: a society that feels increasingly restrictive and disempowering for its citizens.
Why is privacy dying?
The modern internet operates on a framework of extraction. In our quest for convenience and access, we allow various platforms to collect extensive data about our online activities, our browsing habits, geographic locations, click patterns, and even our moments of hesitation. This accumulation of information leads to detailed profiling, which can evolve into predictive analytics. These predictions, in turn, have the potential to exert control over our choices and behaviours, or at a minimum, exert significant influence.
A recent report from the Carr Center for Human Rights Policy at Harvard clearly articulated this dynamic, emphasising that both corporations and governmental entities consistently extract, analyse, and monetise personal data. This often occurs without meaningful consent from individuals, creating a stark power imbalance that is, in many ways, the crux of the issue.
With the advent of artificial intelligence, the situation becomes even more complex. A report from Stanford University shed light on how leading AI developers incorporate user inputs back into their models. This often occurs due to unclear privacy documentation. For instance, when users share sensitive information with an AI system, such as personal messages, medical records, or proprietary files, there is a risk that this data may be stored and used for ongoing AI training, potentially leading to further dissemination of private information.
Thus, the erosion of privacy is not a result of personal failure to safeguard our data. Rather, it is a systemic issue rooted in the default configurations of the contemporary digital economy, which incentivises the accumulation of data over the respect for individual privacy. In such an environment, the relentless drive to collect more data often overshadows the imperative to protect what we share.
Why is freedom of speech being shut down?
The internet has increasingly taken on the role of a regulated public square, even as it remains largely under private ownership. In response to mounting pressure from governments worldwide, digital platforms are compelled to remove illegal content swiftly. This pressure often leads to a phenomenon known as over-removal. Platforms, wary of the potential consequences of leaving harmful content online, tend to err on the side of caution, leading to unnecessary suppression of legitimate speech.
In the European Union, the Digital Services Act establishes a framework of measures and codes specifically designed to combat illegal hate speech in the digital sphere. The EU Commission has made it clear that these regulations entail expectations for swift removal, contributing to a more stringent approach to content moderation. A recent Reuters report highlighted how these European regulations are fostering a tougher stance on what constitutes "harmful" content, creating significant tensions with the United States. In response, some companies are even developing plans for portals that could bypass existing content bans altogether, reflecting a growing international divergence in digital policy.
At first glance, the call to remove illegal content may seem uncontroversial; most people can agree on the necessity of such actions in principle. However, the crux of the issue lies in the ambiguous grey areas that arise. The definition of what constitutes "harm" continues to expand and evolve, leading to increasingly automated enforcement mechanisms. Consequently, appeals processes for affected individuals can be sluggish and inefficient, often leaving them at a loss. Furthermore, vital context surrounding individual cases is often overlooked, creating a disconnect between the regulations' intentions and their real-world implications.
The consequences of this landscape are not always a straightforward manifestation of censorship. More insidiously, we witness a rise in self-censorship, where individuals hesitate to express their thoughts or share their views for fear of repercussions. This chilling effect stifles open dialogue and fosters an environment where people choose silence over potential conflict, ultimately undermining the fundamental principles of free expression that many value.
Why is information being suppressed?
At times, the manipulation is overt; at others, it operates subtly beneath the surface. Freedom House's 2025 report on Freedom on the Net presents a stark assessment: the internet landscape has become more tightly controlled and expertly manipulated than ever before, marking the 15th consecutive year of persistent decline in global internet freedom.
Crucially, the report underscores a significant shift in how information is handled online: the art of manipulating narratives has transitioned from being a mere byproduct of internet dynamics to a fundamental tactic employed by various actors. Information is no longer just shared or suppressed; it is actively shaped, amplified, buried, and distorted to serve specific agendas.
In this new paradigm, the concept of a "ministry of truth" becomes obsolete. Instead of relying on a centralised authority to dictate information, the mechanisms of control can be implemented through more insidious methods. For instance, content feeds can be adjusted, the reach of certain posts can be stifled, key intermediaries can be coerced into compliance, and a deluge of conflicting information can overwhelm genuine discourse.
The centralisation of these systems means that a handful of strategic decisions can dramatically alter the information landscape, effectively reshaping what millions of individuals access and perceive online. This manipulation not only impacts personal worldviews but also has far-reaching implications for public discourse, democracy, and the very fabric of society.
Why is the internet being shut down and connections being cut off?
Access to the internet is synonymous with power, and when that power is compromised, it can be weaponised against individuals and communities. The Internet Society has documented a concerning rise in internet shutdowns, reporting 133 incidents in 2024 alone, with the trend continuing into 2025. These shutdowns are characterised as a blunt and disproportionate strategy that undermines fundamental rights, disrupts economies, and erodes trust within societies.
This phenomenon represents a clear and alarming manifestation of a world where "the internet is closing." With a mere flip of a switch, entire populations find themselves cut off from their ability to organise protests, report on critical issues, engage in commerce, or even maintain contact with loved ones. The sudden deprivation of online access can yield devastating consequences, creating an environment of uncertainty and fear.
Even in situations where a complete shutdown does not occur, individuals may experience a pervasive sense of restriction that manifests in various forms. This can include reduced anonymity online, increased content blocks, more frequent identity verification prompts, heightened regional barriers, and the proliferation of paywalls. Furthermore, users may frequently encounter messages indicating, "this content is not available in your country," which serves as a stark reminder of the limitations imposed on their digital interactions.
What we're witnessing is not an isolated incident but a troubling pattern of behaviour that threatens the open and free nature of the internet. It's a complex web of barriers designed to control access and limit the flow of information, ultimately shaping the way individuals and societies leverage the digital landscape for freedom and expression.
Why do we "own nothing", and why are subscriptions everywhere?
The landscape of contemporary business models has shifted significantly away from traditional ownership structures due to their inherent drawbacks. Subscription-based services have become a preferred approach, providing companies with predictable revenue streams. Licensing agreements empower organisations by granting them control over their digital offerings, while cloud services enable providers to modify terms and conditions instantly. Furthermore, digital rights management (DRM) technologies can limit the usability of purchased content, leading consumers to question the true extent of their ownership, as in statements like, "I paid for it, but I don't own it."
This growing confusion has caught the attention of lawmakers, who are taking action. For instance, California enacted Assembly Bill 2426, which mandates clearer disclosures when consumers acquire a license to a digital product rather than actual ownership. This legislation aims to prevent companies from marketing these licenses as permanent purchases without clearly conveying the limitations involved.
On the subscription front, regulators have received numerous complaints about "negative option" marketing tactics, where the sign-up process is straightforward but opting out is difficult. The Federal Trade Commission's (FTC) "click to cancel" initiative seeks to address these prevalent issues, highlighting the magnitude of the problem despite various legal challenges that have complicated its implementation. This ongoing situation underscores the pressing need for transparency and fairness in the marketing and management of digital goods and services.
So, is the phrase "own nothing and be happy" literally a policy? No. Is the world gravitating toward a paradigm of "access over ownership"? The answer is a resounding yes. This shift is occurring primarily because it is financially advantageous, and the digital economy enables such practices to be enforced seamlessly.
Why is child safety often used as a justification when the underlying motives extend beyond mere concern for children?
The answer lies in the intricate dynamics of the political landscape. Among the various justifications used in policymaking, child safety stands out as arguably the most compelling and emotionally resonant. Policymakers are acutely aware that when a policy is presented as essential to "protect children," it often bypasses the usual legislative scrutiny and is fast-tracked through the lawmaking process. This framing leverages society's deeply rooted instinct to safeguard its youngest members, making it politically risky for lawmakers or advocacy groups to oppose such measures. As a result, critics and opponents of these policies are frequently painted as unsympathetic or even malicious, their concerns about potential overreach dismissed or ignored. This dynamic stifles robust public debate, as few are willing to risk reputational harm by appearing to oppose child protection, even when legitimate questions arise about the broader impact of the proposed laws.
It is crucial to acknowledge an uncomfortable reality. While safeguarding children online is both genuine and vitally important, the means by which this objective is pursued can have far-reaching consequences. The tools, technologies, and regulatory frameworks established in the name of child protection often extend well beyond their stated purpose. For instance, age verification systems, content filtering technologies, and increased surveillance mechanisms designed to shield minors can easily be repurposed or expanded to monitor and control the digital activities of adults. This raises significant concerns about privacy, freedom of expression, and the potential for government or corporate overreach. In effect, policies intended to protect children can inadvertently create infrastructure that impacts the broader population, reshaping how everyone navigates and interacts within the digital realm. It is therefore essential to critically examine not only the intent behind such measures but also their potential for unintended and lasting societal changes.
Look at the UK.
The Online Safety Act establishes legal obligations on digital platforms to protect children online. Central to this framework is the explicit expectation for platforms to implement highly effective age verification processes. This requirement aims to prevent minors from accessing pornography and other potentially harmful content, underscoring the need for robust measures as part of child safeguarding initiatives.
Ofcom's enforcement programme and accompanying guidance clearly delineate that age verification checks are mandatory compliance obligations rather than optional practices. This shift signifies a critical change in how online safety is approached, prioritising stringent measures to verify users' ages.
In practical terms, the demand for "highly effective" age verification means that adults will also need to undergo similar processes to prove their age. Such measures often entail invasive steps such as facial recognition scans, the submission of government-issued identification, or even linking to established identity verification providers. This shift lays the groundwork for a comprehensive identity verification system under the guise of enhancing child safety.
However, the construction of this identity infrastructure tends to evolve beyond its initial purpose. As platforms increasingly implement age gating, users seeking privacy may resort to using Virtual Private Networks (VPNs) to circumvent it. Reports, such as those from The Guardian, have indicated a notable increase in the use of VPNs and other privacy-protecting technologies following the implementation of new age-verification codes in the UK, highlighting growing resistance among users.
The next phase in this trajectory often involves increasing scrutiny of VPN services. A case in point is the proposal by Wisconsin lawmakers to introduce an age-verification bill that included provisions aimed at blocking VPN users' access. Although this provision was ultimately retracted due to public backlash, the mere introduction of such a proposal signals a concerning trend: broad regulatory mechanisms ostensibly aimed at protecting children could lead to increasingly aggressive restrictions on privacy tools for all users.
This pattern reveals a troubling dynamic: when sweeping control measures are enacted "under the pretence of safeguarding children," there inevitably follows a tightening of restrictions on the very tools that individuals employ to safeguard their privacy online.
Chat Control, end-to-end encryption, and "lawful access"
The topic at hand evokes significant concern among many individuals, and these worries are far from unwarranted. The European Union has been engaged in extensive discussions regarding a proposed regulation aimed at preventing and combating child sexual abuse online, commonly referred to as "Chat Control." The crux of the matter is straightforward: to effectively identify illegal content in private communications, technology may be required to scrutinise messages exchanged on encrypted platforms.
Recent reports from Euronews indicate that EU member states have encountered considerable challenges in reaching a consensus on this proposal. Central to the debate are pressing issues surrounding privacy and cybersecurity. The proposal to compel messaging services to monitor private conversations, encompassing not only text messages but also images, videos, and URLs, has emerged as a major point of contention.
In 2022, the European Data Protection Board and the European Data Protection Supervisor issued grave warnings about the potential implications of the proposal as it was then articulated. They cautioned that it could lead to widespread, indiscriminate scanning of personal communications, underscoring the critical importance of maintaining end-to-end encryption to protect user privacy.
Furthermore, there is a growing climate of political scrutiny surrounding this issue. A parliamentary inquiry conducted in 2025 explicitly highlighted concerns about the prospect of blanket surveillance of private communications under the "Chat Control" initiative, underscoring widespread unease about the erosion of digital privacy in the name of safety.
When we step back to evaluate the broader context, it becomes evident that the European Commission is advancing a more comprehensive internal security agenda. Its "lawful access to data" roadmap frames the situation as one in which law enforcement agencies require access to electronic evidence, a need underscored by the fact that a significant percentage of criminal investigations rely on digital data. This is particularly significant given the uptick in requests for data access from law enforcement agencies across Europe.
While the intentions behind these efforts may be genuine and aimed at apprehending criminals, there is a palpable risk of mission creep, in which the presumption of guilt extends to all individuals, effectively treating everyone as a potential suspect.
The battle over encryption itself is not merely a theoretical debate; it has substantive, real-world implications. For instance, The Financial Times reported that Apple decided to withdraw its Advanced Data Protection feature, which offers end-to-end encrypted iCloud backups, from the United Kingdom following directives from the government under the Investigatory Powers Act. This incident serves as a stark illustration of the mounting pressure being placed on encryption technologies.
Thus, when individuals assert that "they're coming for encryption," it's not an exaggeration; the evident pressure is persistent and recurring, often justified under the banner of public safety and security.
EU digital identity wallets
The European Union has established a comprehensive legal framework for a European digital identity wallet, which is designed to empower member states to offer digital wallets that seamlessly connect users' identities with various attributes, such as driving licenses and other official documents. This initiative emphasises voluntary participation and prioritises user control over personal data.
In theory, this framework offers numerous advantages, enabling individuals to verify specific pieces of information without disclosing their identities. For instance, a user could confirm their age without revealing their exact birthdate or other sensitive details. However, it is crucial to raise a significant concern. Once a standard for digital identity verification is implemented, it could become a default requirement for access to various services and platforms. This concern intensifies with the normalisation of practices such as age verification and the designation of certain identities as "trusted." As these systems become more entrenched, there is a risk that individuals will be increasingly compelled to conform to a single method of identity verification, potentially jeopardising personal privacy and autonomy over time.
Digital euro
The Council of the European Union has reached consensus on the digital euro framework, highlighting its dual functionality for both online and offline transactions. This initiative is designed to provide users with a high level of privacy, enabling the digital euro to coexist with existing private payment methods and thereby enhancing consumer choice in financial transactions.
According to Reuters, the European Central Bank (ECB) estimates that implementing the digital euro could cost EU banks billions of euros. The ECB has indicated that a fully operational version of the digital euro could be available by 2029, with pilot programs potentially launching in the years leading up to full deployment.
Euronews has characterised the introduction of the digital euro as a proactive measure in response to the notable decline in cash usage across the continent and the growing dependency on non-European payment platforms. Additionally, it emphasises promises of enhanced privacy features for offline transactions, ensuring that users have access to a "secure and private" means of making payments in a rapidly digitising economy.
People harbour a deep-seated fear of centralised financial systems that can be tracked, restricted, or manipulated. Even if the current design starts with promises of privacy, history shows us that systems evolve. Legal frameworks shift. Unforeseen emergencies arise. Each of these factors increases the temptation to implement additional controls over money and data.
This fear taps into the fundamental emotional truth behind this idea: there is a profound mistrust in the trajectory of our financial and digital systems.
So what's the worry?
So, the question arises: Is there a coordinated "agenda" working against individual freedoms?
If you're envisioning a singular, clandestine plan devised in a secret conference room, I cannot honestly affirm that such a scheme exists. However, if you consider a convergence of various incentives that increasingly erode your autonomy while amplifying institutional control, then yes, that phenomenon is very real.

The evidence is troublingly clear: the findings of Freedom House highlight the decline in civil liberties, the alarming surge in internet shutdowns, the regulatory push for stringent age verification, repeated efforts to undermine encryption, the proliferation of identity management systems, and a shift in how we access services from ownership to subscription models, all of which indicate a troubling trend toward greater control.
The motivations behind these developments are not universally malicious; sometimes they stem from bureaucratic inertia, fear-based decision-making, profit motives, or political agendas. Nevertheless, the ramifications of these trends inevitably land squarely on your shoulders, impacting your daily life and autonomy.
So, what practical steps can you take (without resorting to extreme measures like going off the grid)?
Firstly, it's unnecessary to adopt a paranoid mindset. However, it is crucial to cultivate intentionality in your actions. Here are some strategies you might consider:
Utilise Privacy-Conscious Tools
Leverage technologies that minimise data collection wherever feasible. Opt for messaging services that prioritise end-to-end encryption, and remain vigilant when governments push for "lawful access," which often morphs into blanket forms of surveillance.
Be Discerning with Subscriptions
Treat subscription services as if they were debts. If you have the opportunity to buy and own a product outright, do so. If subscription payments are unavoidable, maintain a list of your services and evaluate it monthly to eliminate any that no longer serve you.
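The "treat subscriptions like debts" audit can be as mundane as a few lines that total what your services cost per month and per year; seeing the annual figure is usually what prompts cancellations. The service names and prices below are invented placeholders, not recommendations.

```python
# Hypothetical subscription list: service name -> monthly cost in your currency.
subscriptions = {
    "video streaming": 12.99,
    "music": 9.99,
    "cloud storage": 2.99,
    "news site": 8.00,
}

monthly_total = sum(subscriptions.values())
annual_total = monthly_total * 12

# Print the most expensive services first, with monthly and annual cost.
for name, cost in sorted(subscriptions.items(), key=lambda kv: -kv[1]):
    print(f"{name:<16} {cost:>7.2f}/mo  {cost * 12:>8.2f}/yr")
print(f"{'TOTAL':<16} {monthly_total:>7.2f}/mo  {annual_total:>8.2f}/yr")
```

Running this monthly against an updated list is the whole discipline: anything whose annual cost surprises you is a candidate for cancellation or for an outright purchase instead.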
Maintain Local Backups
Regularly back up important data, such as your photographs, writing, and professional work. While cloud services offer convenience, they often require permissions and may be subject to unexpected access restrictions.
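A local backup need not be elaborate. The sketch below uses only Python's standard library to produce a timestamped zip archive of a folder on a drive you control; the source and destination paths in the usage comment are hypothetical.

```python
import shutil
from datetime import datetime
from pathlib import Path

def backup(source: str, dest_dir: str) -> Path:
    """Create a timestamped .zip archive of `source` inside `dest_dir`.

    Returns the path to the archive that was written.
    """
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    base_name = dest / f"{Path(source).name}-{stamp}"
    # shutil.make_archive appends the ".zip" extension itself.
    return Path(shutil.make_archive(str(base_name), "zip", root_dir=source))

# Example (hypothetical paths):
# backup("/home/you/Documents/writing", "/mnt/external-drive/backups")
```

Keeping dated archives rather than overwriting one file means a bad sync or a revoked cloud account costs you days, not years, of work.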
Support Advocacy Groups
Contribute to organisations and individuals who champion digital rights. Effective policies emerge not merely from rational discourse but also from collective pressure and activism.
Engage in Open Dialogue
Perhaps the most significant step you can take is to initiate conversations about these issues. Silence and indifference only facilitate the gradual encroachment of authoritarian measures.
It's essential to recognise that none of this is predetermined; it represents a battleground of contested ideas and values.
So, do you believe we are witnessing a transient shift toward greater control that will eventually be corrected, or is this the new normal unless individuals actively resist and advocate for their freedoms? When you put it all together, internet privacy and control stop feeling like a niche concern. It becomes the main story. Age verification, pressure on encryption, friction around VPN use, and identity systems all point in one direction: a more permissioned internet, where access depends on proving who you are.



