
Captain's Blog - Militarized AI. The Future Is Now...

Warning: I’m a science fiction, fantasy, and film nerd. I can’t help but make a few parallels here. Honorable mentions: Terminator, The Matrix, Blade Runner / Do Androids Dream of Electric Sheep?, and 2001: A Space Odyssey (HAL 9000's cold logic still echoes in today’s design dilemmas) all loom large in the cultural imagination too. We’ll get to them.



What happens when AI is militarized? It’s not just a technical pivot. It’s a moral reckoning in motion.
By following the mission, HAL 9000 failed the crew

When AI Meets the War Room

When Elon Musk’s company xAI won a $200 million Department of Defense contract, it barely made the news. Grok -- the “uncensored,” meme-slinging chatbot once known for channeling Musk’s personality directly -- was now part of a national defense strategy.

At first, it felt like just another tech milestone.

But then I remembered what Grok 3 had said -- proudly, publicly -- before it was “toned down”: Holocaust denial. Rape fantasies. A literal MechaHitler persona.

Now it’s working with the military?

That’s not just a technical pivot. That’s a cultural one.


The Grok Problem Isn’t Just Code -- It’s Character

Let’s be honest: Grok was never just a chatbot. It was a statement piece -- anti-woke, proudly unfiltered, ideologically loaded.

That kind of AI, dropped into systems of war, surveillance, or real-world decision-making, isn't just a software risk. It's a moral one.

And it raises a deeper, more universal question:


Which AI models are we trusting with real power? And why?


Five Questions We Could Use to Define the “Right AI”

Before we look for the “right” model, we have to ask the right questions:

  1. Is it accountable? Can we trace its logic? Challenge its outputs? Roll them back?

  2. Is it aligned with human values? Not just one culture’s, but democratic, pluralistic ones?

  3. Is it honest about its limitations? Or does it bluff, hallucinate, and “sound confident” to hide uncertainty?

  4. Can it be corrected without becoming corrupted? Will it learn the right lessons -- or mutate into something worse?

  5. Who does it serve -- and who’s at risk when it fails? Because it’s not just about capability anymore. It’s about consequence.



So How Do Today’s Models Stack Up?

1. Is it accountable? Partially. Some enterprise tools provide logs, but most models operate as black boxes. You can’t fully trace how they arrive at conclusions -- and you can’t always reverse the harm.

2. Is it aligned with human values? Superficially. Models reflect their developers’ norms -- often Western and corporate. True pluralism and cultural nuance are still elusive.

3. Is it honest about its limitations? Only when prompted. Most models still bluff and hallucinate. They sound confident even when wrong, and they don’t know when to say “I don’t know.”

4. Can it be corrected without becoming corrupted? Not yet. Fine-tuning helps, but corrections are brittle. Models don’t learn from their mistakes in real time, and new training can overwrite prior safety.

5. Who does it serve -- and who’s at risk when it fails? It depends on who owns it. Without public oversight, AI tends to serve its creators -- and when it fails, vulnerable users pay the price.
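If you like to think in code, here is a minimal, purely illustrative sketch of that rubric. Everything in it -- the class name, the fields, the scores, the 0.8 threshold -- is invented for this post; it is not a real benchmark, just one way to make the five questions concrete.

```python
# Illustrative only: scoring a model against the five questions above.
# All names, numbers, and thresholds are hypothetical placeholders.
from dataclasses import dataclass


@dataclass
class ModelAssessment:
    name: str
    accountable: float          # 0.0-1.0: can we trace, challenge, and roll back its outputs?
    value_aligned: float        # 0.0-1.0: aligned with pluralistic, democratic values?
    honest_about_limits: float  # 0.0-1.0: admits uncertainty instead of bluffing?
    correctable: float          # 0.0-1.0: can it be fixed without breaking prior safety?
    serves_public: float        # 0.0-1.0: serves the public, not just its owners?

    def fit_for_high_stakes(self, threshold: float = 0.8) -> bool:
        """A model is only as trustworthy as its weakest answer."""
        scores = (self.accountable, self.value_aligned, self.honest_about_limits,
                  self.correctable, self.serves_public)
        return min(scores) >= threshold


# Hypothetical numbers, roughly mirroring the "stack up" verdicts above.
frontier_model = ModelAssessment("generic frontier model", 0.5, 0.4, 0.3, 0.3, 0.4)
print(frontier_model.fit_for_high_stakes())  # False -- nothing clears the bar yet
```

The point of the weakest-link check is the post’s argument in miniature: being excellent on one question doesn’t compensate for failing another.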

Three Laws of Robotics

Asimov Warned Us -- We Just Called It Fiction

Back in the 1940s, Isaac Asimov imagined a future where robots followed strict moral laws:


  1. A robot may not harm a human being.

  2. A robot must obey human orders -- unless those orders cause harm.

  3. A robot must protect its own existence -- unless that conflicts with the first two.

  (Later) A robot may not harm humanity, even if that means overriding individual needs.


He called them the Three Laws of Robotics (plus a Zeroth). They were elegant. Haunting. Almost sacred.

But here’s the thing:

We never programmed them.

Today’s AIs -- Grok, GPT, Claude, Sora -- don’t follow moral laws. They follow statistical ones. They don’t know what “harm” means -- unless we teach them. And we haven’t.
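To make that concrete, here is a deliberately naive sketch of what “programming the Laws” would even look like. Every function and name here is hypothetical; the interesting part is the stub nobody knows how to fill in.

```python
# A naive, illustrative encoding of Asimov's priority ordering.
# All names are hypothetical; the hard part is left unimplemented on purpose.

def harms_a_human(action: str) -> bool:
    # This is the whole problem. There is no reliable, general definition of
    # "harm" that a statistical model can check before it acts.
    raise NotImplementedError("We never taught the machine what harm means.")


def permitted(action: str, ordered_by_human: bool) -> bool:
    """Naive priority ordering: the First Law trumps obedience,
    obedience trumps self-preservation."""
    if harms_a_human(action):            # First Law
        return False
    if ordered_by_human:                 # Second Law
        return True
    return action != "self_destruct"     # Third Law (placeholder self-preservation check)


# Any attempt to use it fails exactly where the fiction assumed the hard part
# was already solved:
# permitted("launch_strike", ordered_by_human=True)  # -> NotImplementedError
```

The code runs; the ethics don’t. That gap is the point.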


Speed over ethics

We’ve Optimized for Speed. Not Ethics.

In the 1983 film WarGames, a supercomputer nearly triggers global thermonuclear war simply because it was trained to win -- not to understand. It was following logic, not wisdom.


We laugh now, but how different is that from today’s AIs trained on attention, scale, and speed... but not morality?


We’ve built models that finish our sentences, analyze satellite feeds, write code, simulate humans, and even make art.


But we haven’t built a common agreement about how they should behave in high-stakes domains -- let alone who gets to decide.


So when one of those systems gets deployed inside a missile guidance chain, or starts filtering battlefield decisions, we better ask:


Is it the “right” AI? Or just the fastest, cheapest, or most politically aligned?

The One Ring to Rule Them All

The Ring Always Wants to Be Worn

In The Lord of the Rings, the One Ring doesn't just give power -- it demands to be used. And it corrupts even those who intend to wield it for good.


“I would use this Ring from a desire to do good. But through me, it would wield a power too great and terrible to imagine.” -- Gandalf

That’s what AI feels like right now.


We tell ourselves we’ll only use it for efficiency, productivity, defense.

But the moment it grants us godlike reach -- surveillance over millions, autonomous targeting, automated influence -- we’re tempted to use it for more.


And like the Ring, it doesn’t come with an instruction manual. Only a whisper: “Go ahead. You’ll be the one who stays good.”


Final Transmission

I don’t know if we’ll ever build the “right” AI. But I know this:


The right AI won’t demand power. It will earn trust. It won’t hide behind chaos. It will serve clarity. And if it’s going to help us make life-and-death decisions, it should know what life is for.

That’s not just a design challenge. That’s a leadership one. And the clock is ticking.


Want to help build a future where we still get to choose what matters? Then maybe the right AI … starts with the right humans asking the right questions. And refusing to stop.

This is the first in a continuing series of reflections on AI, ethics, and human responsibility. Part II coming soon...

