Hamid Khoshayand – International Affairs Expert
The Zionist regime is among the disruptive actors that have invested heavily in AI and leverage this technology across various sectors, including its military capabilities. The Gaza war and the recent aggression against the Islamic Republic of Iran, in which the Zionist regime made extensive use of AI tools to assassinate senior military commanders and nuclear scientists, destroy residential areas, and massacre civilians, have once again drawn attention to the role of AI in the regime’s defensive and offensive strategies. Notably, the Zionist regime places no restrictions on the use of AI to advance its aggressive and military agendas.
The Gaza war, the assassinations of Resistance commanders in Lebanon and across the region, and the military aggression against Iran clearly demonstrate that the Zionist regime’s army, in blatant violation of international law and the rules of war, misuses AI to analyze vast amounts of data collected from sources such as drones, satellite imagery, and human intelligence. Systems like “The Gospel,” which process and analyze intelligence data to identify targets, are examples of such applications.
A primary concern at present is that the use of AI in warfare by the Zionist regime and the U.S. government has significantly increased civilian casualties and material damage to civilian areas, sites, and infrastructure as a result of systemic errors and automated decision-making. The most critical concern in this regard is the development and deployment of Lethal Autonomous Weapons Systems (LAWS), which can decide to destroy and kill without direct human intervention. Systematizing the use of AI by establishing clear red lines can ensure that ultimate control over military force and weaponry remains in human hands, and would go a long way toward preventing the algorithmic errors that produce civilian casualties and toward keeping such casualties from mounting.
As recent wars in the region, particularly those waged by the Zionist regime, have shown, the expanding use of AI in military affairs could trigger a new arms race and jeopardize regional and global stability. This makes the systematization of AI use in the military and arms domains a strategic necessity.
Although some governments, and notably the European Union, have taken positive steps toward systematizing the use of AI in the military and arms domains, such as enacting the AI Act, one of the strictest AI regulations to date, these measures are insufficient. Such regulations must be developed and adopted widely across the world’s regions, particularly in West Asia, to prevent the inhumane and unconventional use of AI in warfare and weaponry.
It is worth noting that the AI Act’s obligations for general-purpose AI models, which take effect on August 2, target companies such as OpenAI, Google, Meta, Anthropic, and Mistral. Under the law, models that the EU deems to pose systemic risks are required to undergo technical assessments, reporting, stress testing, and protective measures. Systemically risky models are those trained with extremely large amounts of computing power (the law presumes systemic risk above roughly 10^25 floating-point operations of training compute) that could negatively affect public health, security, fundamental rights, or social order. Companies failing to comply face hefty fines of up to 35 million euros or 7% of their global annual revenue, whichever is higher.
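To see how that penalty ceiling scales with company size, here is a minimal sketch; the revenue figures are hypothetical and are not drawn from the article:

```python
# Sketch of the AI Act penalty ceiling described above:
# 35 million euros or 7% of global annual revenue, whichever is higher.

def max_fine_eur(global_annual_revenue_eur: float) -> float:
    """Return the upper bound of a potential fine, in euros."""
    return max(35_000_000.0, 0.07 * global_annual_revenue_eur)

# Hypothetical revenue levels, for illustration only.
for revenue in (100e6, 1e9, 10e9):
    print(f"revenue {revenue:>16,.0f} EUR -> fine cap {max_fine_eur(revenue):>14,.0f} EUR")
```

For smaller firms the flat 35-million-euro ceiling dominates; above 500 million euros in annual revenue, the 7% term takes over.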
AI has become a key tool in military arsenals. This immense capability, however, brings unprecedented ethical, legal, and security challenges, further underscoring the urgent need to systematize and regulate its use in the military and arms domains.
When errors or war crimes result from AI-based systems, determining who is accountable, whether developers, users, or commanders, is highly challenging. Clear legal frameworks are therefore needed to assign accountability, make systematic oversight possible, and, where warranted, enable legal prosecution.
Proposed solutions may include:
- Establishing binding global and regional agreements to define permissible and prohibited uses of AI in military affairs.
- Developing and adhering to ethical guidelines by governments and companies active in this field.
- Continuous review and updating of regulatory frameworks in light of the rapid evolution of AI.
- Emphasizing the necessity of maintaining human control and oversight over critical decisions, particularly regarding the use of lethal force (a human-in-the-loop gate of the kind sketched below).
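To make the last point concrete, here is a minimal sketch of a human-in-the-loop authorization gate; the `Recommendation`, `Decision`, and `authorize` names are hypothetical and do not describe any deployed system:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Decision(Enum):
    APPROVED = auto()
    REJECTED = auto()

@dataclass
class Recommendation:
    """Output of an upstream AI system (all fields illustrative)."""
    target_id: str
    model_confidence: float      # classifier score, 0.0 to 1.0
    civilian_risk_estimate: int  # estimated civilians at risk

def authorize(rec: Recommendation, human_review) -> bool:
    """An action proceeds only if an accountable human reviewer
    explicitly approves it; the AI score alone is never sufficient."""
    return human_review(rec) is Decision.APPROVED

def cautious_reviewer(rec: Recommendation) -> Decision:
    # Illustrative policy: reject whenever any civilian risk is
    # estimated, no matter how confident the model is.
    if rec.civilian_risk_estimate > 0:
        return Decision.REJECTED
    return Decision.APPROVED

if __name__ == "__main__":
    rec = Recommendation("example-01", model_confidence=0.97,
                         civilian_risk_estimate=3)
    print(authorize(rec, cautious_reviewer))  # prints: False
```

The design point is that the model’s confidence score never appears in the authorization condition; only the explicit human decision does, which is what keeps ultimate control over lethal force in human hands.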
In summary, it is essential to ensure that the use of AI in military and arms domains is conducted responsibly, ethically, and effectively, thereby preventing unintended and dangerous consequences.

