The US government’s latest move on artificial intelligence has shocked the scientific community. Under President Trump, the National Institute of Standards and Technology (NIST) has stripped “AI safety” and “AI fairness” from its priorities. Instead, researchers are being told to focus on eliminating “ideological bias” and boosting America’s competitive edge. What does this mean for the future of AI—and for society?
Key Points at a Glance
- NIST’s new directive removes references to AI safety, fairness, and misinformation prevention.
- Researchers are now tasked with reducing “ideological bias” and promoting human flourishing and economic competitiveness.
- Critics warn these changes could lead to AI systems that are unsafe, discriminatory, and unaccountable.
- The US AI strategy now prioritizes global leadership over ethical concerns and safety.
- The AI Safety Institute’s original mission has been radically reshaped under Trump’s administration.
The landscape of artificial intelligence in the United States is undergoing a seismic shift. This March, researchers associated with the AI Safety Institute (AISI), an organization founded to safeguard against the dangers of advanced AI systems, received a startling update. Gone are the priorities of fairness, safety, and responsibility. In their place, a new directive from the National Institute of Standards and Technology (NIST) emphasizes eliminating “ideological bias” and promoting American economic dominance.
For many in the AI research community, this shift represents not just a change in wording but a profound realignment of priorities. Under President Biden, the AISI had focused on mitigating the risks of AI, ensuring powerful models weren't used for malicious purposes, from cyberattacks to the creation of biological weapons. Fairness, responsibility, and safety were front and center. But President Trump's administration has reimagined these goals, emphasizing competitiveness and the reduction of what it calls ideological bias in AI systems.
The new agreement makes no mention of efforts to combat misinformation or track the origins of synthetic content. Labeling deepfakes and ensuring the authenticity of AI-generated information are no longer considered essential. Instead, researchers are being directed to develop tools that strengthen America’s global AI leadership, sidelining earlier commitments to ethical AI principles.
Critics argue this pivot may open the door to AI systems that are inherently discriminatory and unsafe. “Unless you’re a tech billionaire, this is going to lead to a worse future for you and the people you care about,” warned one anonymous researcher connected to the AI Safety Institute. They believe the removal of fairness and safety concerns will allow harmful biases in AI systems to go unchecked, exacerbating inequality and undermining public trust in AI technologies.
Some insiders claim the directive stems directly from the Trump White House. The Department of Government Efficiency (DOGE), led by Elon Musk, has been tasked with cutting costs and reducing government bureaucracy. Under Musk's stewardship, the department has already overseen mass firings within NIST and other government agencies, dismantling diversity, equity, and inclusion initiatives along the way.
Elon Musk himself has long criticized AI systems developed by Google and OpenAI, accusing them of harboring “woke” biases. His AI company, xAI, is competing fiercely with these tech giants. A researcher affiliated with xAI recently proposed techniques for shifting the political orientations of large language models—developments that raise fresh concerns about who controls AI and for what purposes.
The new policies were unveiled following an executive order issued in January 2025. While retaining the AI Safety Institute in name, the order rescinded Biden’s earlier guidelines, emphasizing that AI systems should be “free from ideological bias or engineered social agendas.” Trump’s vice president, JD Vance, further solidified the new administration’s stance during the AI Action Summit in Paris, stating that “the AI future is not going to be won by hand-wringing about safety.”
This hardline approach has drawn condemnation from many in the scientific community. Stella Biderman, executive director of EleutherAI, remarked, “The administration has made its priorities clear. Rewriting the plan was necessary to continue to exist.” Others view the shift as more sinister, accusing researchers of compromising ethical standards to maintain influence within the new power structure. “These people and their corporate backers are face-eating leopards who only care about power,” said one disillusioned expert.
The removal of safety and fairness mandates has sparked fears about the potential consequences. Without clear guidelines to prevent discrimination or address misinformation, AI models could perpetuate and even amplify societal inequalities. The sidelining of responsible AI initiatives, critics argue, could lead to AI technologies that harm the very people they are meant to serve.
Despite these concerns, Trump's administration appears resolute in its course. The AI Safety Institute, once envisioned as a watchdog against AI's darkest possibilities, has been repurposed as a tool to fortify American dominance in the AI arms race. Whether this strategy will deliver the promised human flourishing or usher in a more dangerous era of AI remains to be seen.
Source: WIRED