By Ramakrishna Commuri
In the current battlefield scenario, autonomy should not be viewed as an abstract technological debate but as a necessity shaped by one fundamental question: Why bleed soldiers when machines can take the hit?
The debate around autonomous systems, particularly autonomous weapons, is often framed as a moral crossroads, asking whether machines should decide matters of life and death.
While important, this framing is incomplete. The more urgent reality facing modern militaries is whether forces can afford not to adopt autonomy.
As operational environments grow faster, more complex, and more hostile, autonomy is shifting from an experimental capability to an operational requirement.
Modern battlefields no longer resemble the linear conflicts of the past. They are distributed, information-dense, and contested across land, air, sea, cyber, and electromagnetic domains.
Decisions that once took minutes must now be made in milliseconds. Human cognition, remarkable as it is, has biological limits.
Autonomy does not replace human judgment, but it extends it beyond those limits. Built for the battlefield, autonomy is designed to spare lives, ensuring intent survives when time collapses and uncertainty dominates.
At its core, autonomy enables systems to perceive, decide, and act within predefined parameters when human intervention is constrained by speed, distance, or survivability.
This applies equally to non-weaponised robotics and weaponised platforms. A robot operating in a collapsed structure, a drone navigating GPS-denied terrain, or a ground system manoeuvring under fire all face conditions where continuous human control is impractical or impossible.
In these environments, autonomy becomes the difference between mission success and failure. Let machines face danger, so soldiers can lead where judgment, command, and accountability belong.
Critics of autonomous weapons argue that machines lack moral reasoning and contextual understanding. This concern is valid, but it assumes a false dichotomy between full autonomy and human control.
In practice, modern autonomous systems operate within layered frameworks of human intent, rules of engagement, and operational constraints.
Autonomy is not the absence of control; it is the disciplined delegation of execution within boundaries defined by humans.
Risk is deliberately transferred to machines, while the benefit always returns to humans. When designed responsibly, autonomy can strengthen compliance with ethical and legal standards.
Machines do not experience fear, fatigue, or rage. They do not act out of revenge or panic. Properly trained and constrained autonomous systems can apply rules consistently, process complex sensor data faster than humans, and reduce impulsive or emotionally driven errors.
The challenge lies not in autonomy itself but in the rigour of its design, validation, and governance. Autonomy that stands where soldiers shouldn’t must be engineered with restraint and accountability.
From a strategic perspective, autonomy is a force multiplier. Nations face shrinking military manpower, rising operational costs, and increasing pressure to protect soldiers from unnecessary harm.
Autonomous systems allow smaller forces to cover larger areas, operate continuously, and take on the most dangerous tasks.
In contested environments where communications are degraded or denied, autonomy ensures systems continue to function rather than becoming liabilities. The logic is clear: battlefield risk belongs to machines, not people.
Weaponised autonomy is often misunderstood as unchecked lethality. In reality, it is a response to the accelerating tempo of modern conflict.
Defensive systems that intercept incoming threats, counter-drone platforms reacting to swarms, and autonomous perimeter systems protecting critical infrastructure depend on rapid, reliable responses.
In such scenarios, waiting for human authorisation can be suicidal. Autonomy here is not about aggression but about protection, speed, and survival.
The global landscape reinforces this necessity. Potential adversaries are already investing heavily in autonomous capabilities.
Rejecting autonomy does not stop its development; it simply shifts the advantage elsewhere. Responsible nations must therefore focus on shaping how autonomy is designed, deployed, and governed, rather than pretending it can be uninvented.
This underscores the importance of building autonomous systems that are transparent, auditable, and aligned with human intent. Human-on-the-loop and human-in-the-loop models, rigorous testing, fail-safes, and clear accountability chains are not obstacles to autonomy; they are its enablers.
Autonomy done poorly is dangerous. Autonomy done well is stabilising. Beyond warfare, the same logic applies to disaster response, border security, infrastructure inspection, and hazardous industrial operations, where machines must operate independently when humans cannot.
The line between civilian and military autonomy continues to narrow, driven by shared technologies and real-world operational pressures.
The question, then, is not whether autonomous weapon systems should exist, but how they are responsibly integrated into a human-led defence architecture.
Autonomy is a tool, and like all powerful tools, it reflects the discipline, values, and intent of those who build it. As technology evolves, autonomy will become as fundamental to military systems as sensors, communications, and mobility.
Those who approach it with fear will fall behind. Those who approach it with seriousness, ethics, and engineering discipline will define the next era of defence capability.
In the years ahead, autonomy will not signal the absence of humanity in warfare, but the extent to which humanity is preserved by putting machines in harm’s way and bringing soldiers home.
Autonomy is not the end of human responsibility. It is the next chapter in how that responsibility is exercised.
(The author is the CEO of Bhairav Robotics, a startup focused on the design and development of robotic systems for the armed forces. He is a mechanical engineer with 30 years of experience and has worked for companies such as General Electric Company and Rolls-Royce. Views are personal.)
