
Scale AI Announces Multimillion-Dollar Defense Deal, a Major Step in U.S. Military Automation

Scale AI CEO Alexandr Wang testifies before a House Armed Services subcommittee hearing on cyber, information technology, and innovation about battlefield AI in Washington, 2022.

Jonathan Ernst | Reuters

Scale AI on Wednesday announced a landmark deal with the Department of Defense that could be a turning point in the controversial use of artificial intelligence tools in the military.

The AI giant, which provides training data to key AI players like OpenAI, Google, Microsoft and Meta, has been awarded a prototype contract from the Defense Department for "Thunderforge," the DOD's "flagship program" to use AI agents for U.S. military planning and operations.

It's a multimillion-dollar deal, according to a source familiar with the situation, who requested anonymity due to the confidential nature of the contract.

Spearheaded by the Defense Innovation Unit, the program will include a team of "global technology partners," including Anduril and Microsoft, to develop and deploy AI agents. Uses will include modeling and simulation, decision-making support, proposed courses of action and even automated workflows. The program's rollout will begin with U.S. Indo-Pacific Command and U.S. European Command and will then be scaled to other areas.

"Thunderforge marks a decisive shift toward AI-powered, data-driven warfare, ensuring that U.S. forces can anticipate and respond to threats with speed and precision," according to a release from the DIU, which also said that the program will "accelerate decision-making" and spearhead "AI-powered wargaming."

"Our AI solutions will transform today's military operating process and modernize American defense," CEO Alexandr Wang said in a statement.

Both Scale and the DIU emphasized speed and how AI will help military units make much faster decisions. The DIU mentioned the need for speed (or synonyms) eight times in its release.

Doug Beck, DIU director, emphasized "machine speed" in a statement, while Bryce Goodman, DIU Thunderforge program lead and contractor, said there's currently a "fundamental mismatch between the speed of modern warfare and our ability to respond."

Though Scale mentions that the program will operate under human oversight, the DIU did not highlight that point.


AI-military partnerships

Scale's announcement is part of a broader trend of AI companies not only walking back bans on military use of their products, but also entering into partnerships with the Defense Department.

In November, Anthropic, the Amazon-backed AI startup founded by ex-OpenAI research executives, and defense contractor Palantir announced a partnership with Amazon Web Services to "provide U.S. intelligence and defense agencies access to [Anthropic's] Claude 3 and 3.5 family of models on AWS." This fall, Palantir signed a new five-year, up to $100 million contract to expand U.S. military access to its Maven AI warfare program.

In December, OpenAI and Anduril announced a partnership allowing the defense tech company to deploy advanced AI systems for "national security missions."

The OpenAI-Anduril partnership focuses on "improving the nation's counter-unmanned aircraft systems (CUAS) and their ability to detect, assess and respond to potentially lethal threats in real time," according to a release at the time, which added that the deal will help reduce the burden on human operators.

Anduril, co-founded by Palmer Luckey, did not answer a question from CNBC at the time about whether reducing the burden on human operators would mean fewer humans involved in high-stakes warfare decisions.

At the time, OpenAI told CNBC it stands by the policy in its mission statement of prohibiting use of its AI systems to harm others.

But that's easier said than done, according to some industry professionals.

"The problem is that you don't have control over how the technology is actually used; if not in the current usage, then certainly in the longer term, once you have already shared the technology," Margaret Mitchell, researcher and chief ethics scientist at Hugging Face, told CNBC in an interview. "So I'm a little bit curious about how companies are handling that: do they have people who have security clearance who are literally examining the usage and verifying it meets constraints of no direct harm?"

Hugging Face, an AI startup and OpenAI competitor, has turned down military contracts before, including contracts that didn't seem to have the potential for direct harm, according to Mitchell. She said the team "understood how it was one step away from direct harm," adding that "even things that are seemingly innocuous, it's very clear that it's one piece in a pipeline of surveillance."

Alexandr Wang, CEO of Scale AI, speaking on CNBC's Squawk Box outside the World Economic Forum in Davos, Switzerland, on Jan. 23, 2025.

CNBC

Mitchell said that even summarizing social media posts is one step away from being harmful, since those summaries could be used to potentially identify and take out combatants.

"If it's one step away from harm and helping propagate harm, is that actually better?" Mitchell said. "I feel like it's a somewhat arbitrary line in the sand, and that works well for company PR and maybe employee morale without actually being a better ethical situation ... 'We'll give you this technology; please don't use this to harm people in any way,' and they can say, 'We have ethical values as well, and so we will align with our ethical values,' but that doesn't guarantee that it's not used for harm, and you as a company don't have visibility into it being used for harm."

Mitchell called it "a game of words that provides some kind of veneer of acceptability ... or non-visibility."

Tech's military pivot

Google in February removed a pledge to abstain from using AI for potentially harmful applications, such as weapons and surveillance, according to the company's updated "AI principles." It was a change from the prior version, in which Google said it would not pursue "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people," and "technologies that gather or use information for surveillance violating internationally accepted norms."

In January 2024, Microsoft-backed OpenAI quietly removed a ban on the military use of ChatGPT and its other AI tools, just as it had begun to work with the U.S. Department of Defense on AI tools, including open-source cybersecurity tools.

Until then, OpenAI's policies page specified that the company did not allow the use of its models for "activity that has high risk of physical harm," such as weapons development or military and warfare. But in the updated language, OpenAI removed the specific reference to the military, although its policy still states that users should not "use our service to harm yourself or others."

News of the military partnerships and mission-statement changes follows years of controversy about tech companies developing technology for military use, highlighted by the public concerns of tech workers, especially those working on AI.

Employees at virtually every tech giant involved with military contracts have voiced concerns, starting after thousands of Google employees protested the company's involvement with a Pentagon program that would use Google AI to analyze drone surveillance footage.

Palantir would later take over the contract.

Microsoft employees protested a $480 million Army contract that would provide soldiers with augmented-reality headsets, and more than 1,500 Amazon and Google workers signed a letter protesting a joint $1.2 billion, multiyear contract with the Israeli government and military, under which the tech giants would provide cloud computing services, AI tools and data centers.

"There are always pendulum swings with these kinds of things," Mitchell said. "We're in a swing now where employees have less say within technology companies, and company interests are weighing heavier than the interests of the individual employees."
