From Care to Control
Why Trump’s “Making Health Technology Great Again” Policy Should Alarm Us All
By the Center for Racial and Disability Justice
At a White House event on July 30, 2025, President Trump stood flanked by HHS Secretary Robert F. Kennedy Jr. and CMS Administrator Dr. Mehmet Oz to unveil his latest “innovation”: a sweeping new digital health initiative dubbed “Making Health Technology Great Again.” With more than 60 major tech and health companies — including Apple, Google, Amazon, Epic, OpenAI, and CVS Health — pledging support, the policy was touted as a long-overdue modernization of America’s health data infrastructure. Beneath the surface, however, the policy is far more dangerous than it appears.
While marketed as a voluntary, patient-empowering system that will improve care coordination, this initiative is in fact part of a much larger, more troubling trend: the centralization of federal data under the Department of Government Efficiency (DOGE). Under the guise of technological progress, this effort risks turning health care systems into surveillance engines, placing some of the country’s most marginalized people — disabled individuals, low-income communities, and immigrants — at risk of coercion, exploitation, and state violence.
Policy Claims
The core of the policy is a CMS-managed Digital Health Ecosystem that allows Medicare and Medicaid recipients to “opt in” and share their personal health information — including electronic medical records, wearable data, and app-tracked wellness metrics — with a federated digital system. The ecosystem would aggregate public and private data to provide AI-powered patient tools and enable seamless data exchange between government, providers, insurers, and private tech developers.
At first glance, this sounds appealing. Who wouldn’t want their doctor to have a more complete view of their health, or an AI assistant to help manage a chronic condition? But dig deeper, and a more insidious picture emerges.
What Could Go Wrong?
This digital health ecosystem exists within a legal and regulatory framework that is inadequate to protect the rights and privacy of marginalized populations. It is being deployed through programs serving communities already subject to disproportionate surveillance, administrative hurdles, and systemic discrimination. Centralizing their health data in a federal infrastructure only heightens their exposure to harm.
HIPAA, our primary health privacy law, does not apply to many of the tech companies in this ecosystem, which remain largely unregulated when it comes to protecting health data. As a result, sensitive personal information can be used in ways the public neither expects nor consents to, such as monetization, misuse, or third-party surveillance. According to the Center for Telehealth & E‑Health Law, the plan intensifies ambiguity over how health data will be regulated. Without oversight and consent mechanisms, the policy risks harming patients more than helping them. Nor are there standards or oversight frameworks for the AI-powered tools in this ecosystem, leaving patients vulnerable to inaccurate recommendations, biased algorithms, and opaque decision-making.
This raises urgent concerns about data being used to profile people, flag them as “high-risk,” or justify restrictions on benefits and services. The lack of transparency, oversight, and consent creates a regulatory vacuum — one that endangers disabled people, immigrants, racialized communities, and those living in poverty.
Perhaps most alarming, the people most impacted had no voice in shaping this policy. There was no advisory council, no listening session, no public design process. Instead, decisions about how millions of Americans’ health data will be collected, processed, and shared are being made in closed rooms with private tech companies at the table, and the patients themselves on the menu. This system opens the door to medical surveillance and administrative control, deepening existing inequalities and stripping individuals of agency over their health information.
“Voluntary” Participation
The policy is described as “opt-in,” but when the rollout is being implemented by the federal agency responsible for Medicare and Medicaid, we have to ask: what does “voluntary” really mean?
For millions of low-income, elderly, and disabled Americans who rely on these programs, participation may not feel optional. With few alternatives to public insurance, beneficiaries may feel compelled to consent simply to avoid bureaucratic delays, maintain access to services, or keep up with “modernized” requirements. This functional coercion undermines the very concept of informed consent, particularly when compounded by barriers related to literacy, language access, digital inclusion, or cognitive disability. Unlike many people with private insurance, Medicaid and Medicare recipients often have no safety net if their data is misused. Further, there is no clear appeals process and there are no liability protections.
Marginalized people are the testing ground for this policy. Without civil rights guardrails, data governance frameworks, or strong oversight, this “voluntary” portal could become a new form of digital coercion, wrapped in the language of empowerment.
Surveillance State Health
The true danger of this initiative lies in the centralization of federal data infrastructure under DOGE, which has rapidly consolidated access to a staggering range of federal databases — merging Social Security, IRS, immigration, and now health data — into centralized repositories. Powered by AI tools, DOGE has slashed federal regulations, eliminated tens of thousands of jobs, and upended privacy protections established over decades. Meanwhile, AI-driven tools built on biased data reinforce racist and ableist assumptions, especially in diagnostics, mental health screening, and “risk scoring” systems.
The CMS digital health ecosystem plugs directly into this architecture. There is no legal or technical firewall preventing health data from being accessed by agencies tasked with immigration enforcement, policing, or public benefit fraud investigations. This includes not just clinical information from hospitals and providers, but also data from fitness apps, wearable trackers, and symptom-checking AI. The implications are staggering.
From Health to Harm
We’ve seen this before. The “public charge rule” has been used to deny green cards to immigrants receiving medical care or disability-related services. In schools, disabled students of color are disproportionately subjected to disciplinary surveillance and exclusion. In jails and prisons, health information is used to justify solitary confinement or forced treatment.
This centralized federal database, linked to AI-powered surveillance tools, risks institutionalizing discrimination on a national scale. This isn’t healthcare. It’s medicalized control.
There are steps we can take:
- Demand Legislative Guardrails: Congress must bar law enforcement or immigration access to CMS data and require federally funded digital health platforms to meet HIPAA-equivalent privacy standards.
- Hold CMS Accountable: Civil rights advocates should press CMS to halt implementation until accessibility, consent, data governance, and anti-discrimination protections are enforceable.
- File FOIA Requests & Lawsuits: Advocacy groups can seek records on data-sharing with DHS, ICE, or other enforcement agencies and sue if constitutional rights are violated.
- Center Marginalized Voices: Any patient-serving system must be co-designed with those most impacted: disabled people, immigrants, racialized communities, and low-income patients. Nothing about us, without us.
“Making Health Technology Great Again” claims to modernize American healthcare, but it is a radical reconfiguration of power — one that threatens to turn our most intimate health data into a tool of state surveillance. Technology can indeed improve healthcare, but not if it compromises our privacy, undermines our rights, and deepens inequality. The question we must ask is not whether the system is innovative — but who it serves, who it harms, and who decides.
We deserve better than a future where care is a cover for coercive control.
The Center for Racial and Disability Justice (CRDJ) at Northwestern Pritzker School of Law is a first-of-its-kind center dedicated to promoting justice for people of color, people with disabilities, and individuals at the intersection of race and disability.
Learn more about CRDJ by visiting the Center for Racial and Disability Justice webpage.
