Former President Donald Trump has announced a new artificial intelligence initiative focused on reducing federal oversight and addressing what he calls political bias within AI systems. As artificial intelligence rapidly expands across fields such as healthcare, national defense, and consumer technology, Trump’s approach marks a shift away from broader bipartisan and international efforts to impose stricter scrutiny on the technology.
Trump’s latest proposal, part of his broader 2024 campaign strategy, presents AI as both an opportunity for American innovation and a potential threat to free speech. Central to his plan is the idea that government involvement in AI development should be minimal, focusing instead on reducing regulations that, in his view, may hinder innovation or enable ideological control by federal agencies or powerful tech companies.
While other political leaders and regulatory bodies around the world are developing frameworks to ensure the safety, transparency, and ethical use of artificial intelligence (AI), Trump is presenting his strategy as a corrective measure against what he considers growing political interference in the development and use of these technologies.
At the heart of Trump’s plan for AI is a broad initiative aimed at decreasing what he perceives as excessive bureaucracy. He suggests limiting federal agencies’ ability to utilize AI in manners that may sway public perspectives, political discussions, or policy enforcement towards partisan ends. He contends that AI technologies, notably those employed in fields such as content moderation and monitoring, can be exploited to stifle opinions, particularly those linked to conservative perspectives.
Trump’s proposal suggests that any use of AI by the federal government should undergo scrutiny to ensure neutrality and that no system is permitted to make decisions with potential political implications without direct human oversight. This perspective aligns with his long-standing criticisms of federal agencies and large tech firms, which he has frequently accused of favoring left-leaning ideologies.
His strategy also involves establishing a team to oversee the deployment of AI in government operations and recommend measures to prevent what he describes as “algorithmic censorship.” The plan suggests that systems used to identify false information, hate speech, or unsuitable material could be misused against individuals or groups, and therefore should be strictly controlled, not by restricting their use but by requiring that they remain impartial.
Trump’s artificial intelligence platform also focuses on the supposed biases integrated into algorithms. He argues that numerous AI systems, especially those created by large technology companies, possess built-in political tendencies influenced by the data they are trained with and the objectives of the organizations that develop them.
While researchers in the AI community do acknowledge the risks of bias in large language models and recommendation systems, Trump’s approach emphasizes the potential for these biases to be used intentionally rather than inadvertently. He proposes mechanisms to audit and expose such systems, pushing for transparency around how they are trained, what data they rely on, and how outputs may differ based on political or ideological context.
His proposal does not outline particular technical methods for identifying or reducing bias; however, he suggests creating an independent body to evaluate AI tools used in sectors such as law enforcement, immigration, and digital communication. He emphasizes that the aim is to guarantee that these tools remain “unaffected by political influence.”
Beyond worries about fairness and oversight, Trump’s strategy aims to ensure that America leads in the AI competition. He expresses disapproval of current approaches that, in his opinion, impose “too much bureaucracy” on developers, while international competitors—especially China—progress in AI technologies with government backing.
To address this, he proposes tax incentives and deregulation for companies developing AI within the United States, along with expanded funding for public-private partnerships. These measures are intended to bolster domestic innovation and reduce reliance on foreign tech ecosystems.
On national security, Trump’s plan is less detailed, but he does acknowledge the dual-use nature of AI technologies. He advocates for tighter controls on the export of critical AI tools and intellectual property, particularly to nations deemed strategic competitors. However, he stops short of outlining how such restrictions would be implemented without stifling global research collaborations or trade.
Notably, Trump’s AI framework makes limited mention of data privacy, a concern that has become central to many other proposals in the U.S. and abroad. While he acknowledges the importance of protecting Americans’ personal information, the emphasis remains primarily on curbing what he views as ideological exploitation rather than the broader implications of AI-enabled surveillance or data misuse.
Privacy advocates have criticized this omission, arguing that AI technologies, especially when used in advertising, law enforcement, and the public sector, could pose significant dangers if deployed without adequate data security measures. Trump’s critics contend that his strategy prioritizes political grievances over comprehensive governance of a transformative technology.
Trump’s approach to AI policy differs markedly from recent legislative efforts in Europe. The EU is advancing the AI Act, which classifies systems by risk level and requires rigorous compliance for applications with substantial effects. In the United States, bipartisan efforts are underway to craft regulations that promote transparency, limit biased outcomes, and curb dangerous autonomous decision-making, especially in areas such as hiring and the criminal justice system.
By advocating a hands-off approach, Trump is betting on a deregulatory strategy that appeals to developers, entrepreneurs, and those skeptical of government intervention. However, experts warn that without safeguards, AI systems could exacerbate inequalities, propagate misinformation, and undermine democratic institutions.
The timing of Trump’s AI announcement seems strategically linked to his 2024 electoral campaign. His narrative—focusing on freedom of expression, equitable technology, and safeguarding against ideological domination—strikes a chord with his political supporters. By portraying AI as a field for American principles, Trump aims to set his agenda apart from other candidates advocating for stricter regulations or a more careful embrace of new technologies.
The proposal also reinforces Trump’s broader narrative of fighting against what he describes as an entrenched political and technological establishment. AI, in this context, becomes not just a technological issue, but a cultural and ideological one.
Whether Trump’s AI plan gains traction will depend largely on the outcome of the 2024 election and the makeup of Congress. Even if passed in part, the initiative would likely face challenges from civil rights groups, privacy advocates, and technology experts who caution against an unregulated AI landscape.
As artificial intelligence continues to evolve and reshape industries, governments around the world are grappling with how best to balance innovation with accountability. Trump’s proposal represents a clear, if controversial, vision—one rooted in deregulation, distrust of institutional oversight, and a deep concern over perceived political manipulation through digital systems.
What remains uncertain is whether such an approach can provide both the freedom and the safeguards needed to guide AI development in a direction that benefits society at large.