The Venus & Mars Principles: Empathy vs. Hyper-Performance (and Militarization)

My book "Artificial Intelligence. The Revolution That Will Change Our Lives" highlights a critical imbalance in AI development: the need for empathic AI (the "Venus principle") to balance hyper-performing AI (the "Mars principle"). As Geoffrey Hinton humorously remarked, AI probably "didn't have a mother", underscoring its current lack of empathy amid its advanced rationality.

This "Mars principle" is particularly evident in the rampant militarization of AI:

  • Companies like Microsoft are key technology suppliers to the US military: Microsoft won the $10 billion JEDI contract in 2019, which was cancelled in 2021 and replaced by the multi-vendor JWCC program.

  • The US Army is already testing generative AI in Ukraine for military tasks and crisis response. Firms like Palantir supply AI-powered "battlefield intelligence systems" and drone imaging, reportedly "immediately useful in identifying potential Russian soldiers and infiltrators at checkpoints". Clearview AI has also offered its facial recognition services "free of charge" to the Ukrainian war effort.

  • The proliferation of fully autonomous lethal drones is seen by some as "the next logical and inevitable step" in warfare, raising profound ethical questions about "when a robot will 'knowingly' kill a human".

  • Mo Gawdat vividly captures this danger: "For the first time in human history, we have created a bomb that can create a bomb".

  • Concerns extend to AI becoming so complex that humans lose control, leading to potential safety risks like AI systems "wiping out humanity" (Hinton) or creating "sub-goals" that conflict with human interests (Bostrom).

Researchers have also found that general-purpose AIs like ChatGPT can outperform specialized AIs in chemistry research. This raises fears that such systems could be used to design highly lethal viruses or neurotoxic agents, especially since unrestricted AIs are already accessible on the Dark Web.

How can we ensure that AI development prioritizes both hyper-performance AND empathy? What specific regulations or ethical frameworks are crucial to prevent the unchecked militarization of this powerful technology?