The people building robots for OpenAI have seen a terrifying future


Author: Geek Old Friend

On March 7, 2026, when I saw the news that Caitlin Kalinowski had resigned, my first reaction wasn’t shock. It was: “Finally, someone is taking action.”

Kalinowski was the head of hardware and robotics engineering at OpenAI. She joined only in November 2024, and less than a year and a half later, she chose to leave.

Her reason was straightforward and serious: she could not accept OpenAI signing a contract with the U.S. Department of Defense, one that could lead to domestic surveillance and autonomous-weapons applications.

This is not just an ordinary talent loss. This is someone who personally helped build AI bodies, telling the world with her resignation: she doesn’t want to be responsible for what her creations might do.

To understand Kalinowski’s departure, we must first go back to what happened about a week earlier.

On February 28, Sam Altman announced that OpenAI had reached an agreement with the U.S. Department of Defense, allowing the Pentagon to use OpenAI’s AI models within its classified networks. The announcement caused an uproar.

Interestingly, the contract’s backstory involved a competitor: Anthropic.

Not long before, Anthropic had turned down a similar deal proposed by the Pentagon, insisting that the contract include stricter ethical safeguards. In response, Defense Secretary Pete Hegseth attacked Anthropic on X, calling its stance “arrogance and betrayal,” in step with the Trump administration’s order to stop working with the company.

OpenAI then took over this deal.

Public reaction was intense. On February 28, ChatGPT uninstalls surged 295% over the previous day. The #QuitGPT movement quickly swept social media, and supporters of this digital resistance exceeded 2.5 million within three days. Meanwhile, Claude overtook ChatGPT as the most downloaded AI app in the U.S. by daily downloads, topping the Apple App Store’s free chart.

Under pressure, Altman publicly admitted on March 3 that “we shouldn’t have rushed to release this contract,” conceding that the announcement “seems opportunistic and hasty,” and announced revisions to the contract language to clarify that “AI systems should not be intentionally used for domestic surveillance of U.S. personnel and citizens.”

But the word “intentionally” is itself a loophole. Aaron Mackey, a lawyer at the EFF, pointed out sharply that intelligence and law enforcement agencies often rely on “accidentally” collected or “commercially purchased” data to bypass stronger privacy protections; adding “intentionally” does nothing to genuinely restrict these practices.

Kalinowski’s resignation happened against this backdrop.

01 She saw a more concrete problem than most of us imagined

While most people were still debating whether “OpenAI is compromising with the government,” Kalinowski faced a more specific and brutal problem: her team was building robots.

Hardware and robotics engineering is not an abstract task of coding and tuning parameters. It’s about giving AI hands, feet, and eyes. When OpenAI’s cooperation with the Department of Defense extended from “model usage” to potential future “embodied AI military applications,” the nature of Kalinowski’s work changed.

Researchers who study autonomous weapons had long warned of this day.

Current U.S. Department of Defense policies do not require human approval before autonomous weapons use force. In other words, the contract OpenAI signed technically does not prevent its models from becoming part of a system where “GPT decides to kill someone.”

This is not alarmism. Jessica Tillipman, who teaches government procurement law at George Washington University, argued that the revised contract “does not give OpenAI the freedom, like Anthropic, to prohibit legal government use”; it only states that the Pentagon cannot use OpenAI’s technology to violate “existing laws and policies,” which already leave huge gaps in the regulation of autonomous weapons.

A governance expert at Oxford shared a similar view, arguing that OpenAI’s agreement “is unlikely to fill the structural gaps” in the governance of AI-driven domestic surveillance and autonomous weapons systems.

Kalinowski’s departure was her personal response to this judgment.

02 What’s happening inside OpenAI

Kalinowski is not the first to leave, and probably not the last.

Data shows that turnover in OpenAI’s ethics and AI safety teams has reached 37%, with most departures citing “conflicts with company values” or an “inability to accept AI being used for military purposes” as their reasons. Research scientist Aidan McLaughlin wrote internally, “I personally think this deal isn’t worth it.”

It’s worth noting that this wave of departures coincided precisely with OpenAI’s rapid commercial expansion. Just before the defense-contract controversy, the company announced that its existing $38 billion deal with AWS would be extended to $100 billion over eight years; it also adjusted its externally communicated targets, projecting total revenue exceeding $280 billion by 2030.

While the business accelerates, the safety people keep leaving. This divergence is the most critical axis for understanding OpenAI’s current situation.

A company’s values are ultimately reflected in who it retains and who it loses. When those most concerned about “how this technology will be used” start leaving one after another, it’s easy to predict the direction the remaining organization will slide.

Anthropic chose a different path in this game: refusing the contract and bearing the Pentagon’s anger, but gaining a great deal of user trust. During that period, Claude’s downloads surged, proving that “principled refusal” can sometimes be a viable business strategy.

But Anthropic also paid a price—it was temporarily pushed out of government contracts.

This is the real dilemma: no choice is perfect.

Refusing might mean losing influence or being excluded from rule-making. Accepting means endorsing uses of your technology that you can’t fully control.

Kalinowski’s answer was the third way—leaving.

It’s the most honest thing she could do.

03 The beginning of Silicon Valley’s battle for its soul

If we broaden the perspective, this event’s significance goes far beyond one person’s resignation.

The integration of AI and the military is a choice the industry will face sooner or later. The Pentagon has the budget, the demand, and the technical capacity; it will keep reaching out to AI companies. And AI companies, whether OpenAI pursuing AGI, Anthropic emphasizing safety, or anyone else, will eventually have to give their own answers to this question.

Altman’s strategy is to accept the commercial reality while setting boundaries through contract language. But as many legal and governance experts have pointed out, those words offer more public-relations cover than real technical constraint.

The deeper issue is that once AI models are deployed into classified networks and start participating in military decision-making, the outside world has no way to verify whether those “guarantees” are truly being upheld.

Lack of transparency is itself the greatest risk.

Kalinowski spent less than a year and a half at OpenAI, yet chose to leave at this critical juncture. She issued no long public statement and named no one; she simply used her actions to draw her boundary.

In a sense, this is more powerful than any policy article.

AI hardware and robotics engineering was once one of Silicon Valley’s most exciting frontiers. When Kalinowski left, she took with her not just a résumé but a question, one left for everyone still in the industry:

How responsible are you willing to be for what you’ve built?
