The U.S. government is providing taxpayer funding to universities and non-profits to develop AI-powered tools to permanently establish a mass online censorship regime.
The House Judiciary Committee’s Select Subcommittee on the Weaponization of the Federal Government earlier revealed that the U.S. government is funding Artificial Intelligence (AI) censorship technology through the National Science Foundation (NSF) for distribution to Big Tech platforms, with the purpose of censoring public debate over political issues and government policies.
“This interim report details the National Science Foundation’s (NSF) funding of AI powered censorship and propaganda tools, and its repeated efforts to hide its actions and avoid political and media scrutiny,” the report states.
“In the name of combatting alleged misinformation regarding COVID-19 and the 2020 election, NSF has been issuing multi-million-dollar grants to university and non-profit research teams. The purpose of these taxpayer-funded projects is to develop artificial intelligence (AI)-powered censorship and propaganda tools that can be used by governments and Big Tech to shape public opinion by restricting certain viewpoints or promoting others,” the report adds.
The NSF also developed a media strategy “that considered blacklisting certain American media outlets because they were scrutinizing NSF’s funding of censorship and propaganda tools,” the report added.
As reported by The Epoch Times on March 2, an NSF spokesman categorically rejected the report’s allegations.
“NSF does not engage in censorship and has no role in content policies or regulations. Per statute and guidance from Congress, we have made investments in research to help understand communications technologies that allow for things like deep fakes and how people interact with them,” the spokesman said.
“We know our adversaries are already using these technologies against us in multiple ways. We know that scammers are using these techniques on unsuspecting victims. It is in this nation’s national and economic security interest to understand how these tools are being used and how people are responding so we can provide options for ways we can improve safety for all.”
The Epoch Times report added that “the spokesman also denied that NSF ever sought to conceal its investments in the so-called Track F program, and that the foundation does not follow the policy regarding media that was outlined in the documents discovered by the committee.”
The House committee report cites the speaker’s notes from the University of Michigan’s first pitch to the National Science Foundation for its NSF-funded, AI-powered WiseDex tool.
“Our misinformation service helps policy makers at platforms who want to… push responsibility for difficult judgments to someone outside the company… by externalizing the difficult responsibility of censorship,” the statement reads.
The congressional report identifies the censorship and propaganda tools, the universities and non-profits that developed them, and the Big Tech companies involved in the mass censorship campaign.
The National Science Foundation is distributing grants to the University of Michigan, the University of Washington, the University of Wisconsin, the Massachusetts Institute of Technology, and the non-profit Meedan to develop censorship tools such as WiseDex, Course Correct, SearchLit, and Co-Insights, which are or were intended for use at Big Tech firms such as Facebook, Reddit, YouTube, and Twitter (now X), among others, the report revealed.
The House Weaponization Committee’s report criticizes the “fact-checking” industry developed to fight “disinformation” as a “pseudo-scientific” endeavor to censor disfavored ideological viewpoints and thereby undemocratically crush political dissent.
The University of Michigan’s WiseDex, one AI-powered censorship tool costing taxpayers $750,000, is designed to “assess the veracity of content on social media and assist large social media platforms with what content should be removed or otherwise censored.”
Meedan’s Co-Insights, for example, models itself on a digital snitching program called “community tiplines” in order to conduct what it refers to as “disinformation interventions.” The taxpayer bill for this public censorship program is $5.75 million.
The University of Wisconsin-Madison’s Course Correct is a tool that purports to “empower efforts by journalists, developers, and citizens to fact-check” “delegitimizing information” about “election integrity and vaccine efficacy” on social media. The taxpayer-funded grant for this program is $5.75 million.
MIT’s SearchLit is designed to develop “effective interventions” to educate Americans—specifically, those that the MIT researchers alleged “may be more vulnerable to misinformation campaigns”—on how to discern “fact from fiction” online, the report notes, adding, “In particular, the MIT team believed that conservatives, minorities, and veterans were uniquely incapable of assessing the veracity of content online.”
Thus, concerns that this is an ideologically biased operation to suppress political dissent are substantiated, according to the researchers’ own statements.
As the report notes as background about the National Science Foundation, “The scope of NSF’s mission has shifted over the years to encompass social and behavioral sciences. For example, NSF used to fund political science projects from the 1960s until 2012, when Congress banned such research from receiving NSF funding. However, in recent years, and after the academic outcry that Americans elected President Trump only because of ‘Russian disinformation,’ NSF has spent millions of taxpayer dollars funding projects to combat alleged mis- and disinformation.”
In June 2023, the National Science Foundation responded to House Judiciary Chairman Jim Jordan (R-OH) by deflecting criticism with the claim that the tools are designed to protect the American public from “foreign interference” in elections and public health emergencies.
“NSF is encouraged to consider additional research efforts that will help counter influence from foreign adversaries on the Internet and social media platforms designed to influence U.S. perspectives, sow discord during times of pandemic and other emergencies, and undermine confidence in U.S. elections and institutions,” the NSF claimed. “To the extent practicable, NSF should foster collaboration among scientists from disparate scientific fields and engage other Federal agencies and NAS to help identify areas of research that will provide insight that can mitigate adversarial online influence, including by helping the public become more resilient to undue influence.”
However, in the attached NSF documents, previously unavailable to the public, the agency openly states: “The overarching goal of this work is to equip the general public with the knowledge and skills needed to find trustworthy information online.”
But what if the watchdogs appointing themselves to be gatekeepers of “trustworthy” information are themselves not trustworthy?