Monday Morning Meeting on Artificial Intelligence and National Security

August 8, 2022

Dr. Sanur Sharma, Associate Fellow, Manohar Parrikar Institute for Defence Studies and Analyses (MP-IDSA), spoke on “Artificial Intelligence and National Security” at the Monday Morning Meeting held on 8 August 2022. The session was chaired by Dr. Cherian Samuel, Research Fellow. Ambassador Sujan R. Chinoy, Director General, MP-IDSA, Maj. Gen. (Dr.) Bipin Bakshi (Retd.), Deputy Director General, MP-IDSA, scholars of the Institute, and online participants were in attendance.

Executive Summary

Artificial Intelligence (AI) refers to the ability of machines to perform cognitive tasks like thinking, perceiving, learning, problem solving and decision making. Many AI applications are dual-use, with both military and civil applications. In the military sector, AI has various applications in Intelligence, Surveillance, and Reconnaissance; Logistics; Cyberspace Operations; Information Operations (and “Deep Fakes”); Command and Control; Semi-Autonomous and Autonomous Vehicles; and Lethal Autonomous Weapon Systems (LAWS). More than 30 countries, including the US, China, Russia, India, the UK and France, have released national plans and strategies on AI. Further, AI’s penetration into national security and global geopolitics has the potential to start a new arms race. There are also policy, regulatory and ethical issues pertaining to AI. In the context of India, the Defence Minister has stated that lessons drawn from the ongoing Russia-Ukraine conflict will push the Indian Armed Forces to adopt emerging technologies like AI in various defence systems.

Detailed Report

Dr. Cherian Samuel commenced the session by welcoming the audience to the Monday Morning Meeting. He introduced the topic and stated that everyone experiences Artificial Intelligence (AI) in one form or another, most commonly through voice assistants, Google Maps, etc. He mentioned that there are certain controversies surrounding AI, such as ethical issues, which are magnified when AI is used in defence applications like Lethal Autonomous Weapon Systems (LAWS). He also suggested that the full potential of AI is yet to be discovered, while on the flip side the potential dangers of using these technologies are a concern. Therefore, when it comes to AI and national security, the security dilemma is exacerbated, since there is a lack of clarity on how AI-enabled weapons will operate on the battlefield. There are many complexities with this technology that need to be unpacked.

With these remarks, Dr. Samuel invited Dr. Sanur Sharma to make her presentation. Dr. Sharma began by outlining the structure of her presentation. She stated that the literature offers numerous definitions of AI, and that AI is best understood as an enabling technology because it has applications across all sectors. She listed some distinctive characteristics of AI: AI applications are dual-use, meaning they have both military and civil applications, and they are relatively opaque, meaning their integration into a product may not be immediately recognisable, which creates concerns amongst policymakers. However, she noted that there is now a concept called explainable AI, whereby the decision maker is able to comprehend and trust the results generated by an AI model and interpret how a particular decision has been made by its algorithms. She elaborated on the use of AI in predictive analytics, deep learning, natural language processing (NLP), expert systems for healthcare, video and text recognition, robotics, and planning and optimisation.
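
To illustrate the idea of explainable AI mentioned above, the sketch below shows one simple and widely used approach: estimating how much each input feature contributed to a model's decisions, so that an analyst can interpret and verify the basis of a prediction. It is purely illustrative and not drawn from the presentation; the synthetic data and feature names are hypothetical.

```python
# Illustrative sketch only: a basic form of "explainable AI" in which a
# decision maker can inspect which input features drove a model's predictions.
# The dataset and feature names below are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
feature_names = ["signal_strength", "track_speed", "radar_cross_section", "altitude"]
X = rng.normal(size=(500, len(feature_names)))   # synthetic sensor readings
y = (X[:, 1] + 0.5 * X[:, 2] > 0).astype(int)    # synthetic "threat / no threat" label

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Permutation importance estimates how much each feature contributes to the
# model's decisions, giving the analyst a basis to interpret and trust them.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```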

She stated that political and business leaders have extensively debated the use of AI and the threats it poses, and that AI therefore requires regulation. However, she felt that the concern that AI is an existential threat to humanity was debatable. In the private sector alone, around US$95 billion was spent on AI in 2021. She further mentioned that AI has gained the attention of commercial investors, defence intellectuals, policy makers, and international competitors alike. Recent developments such as the increased use of AI in cyberattacks and the growth of hybrid warfare techniques have showcased how AI can potentially affect national security. Advances in AI will affect national security by driving change in three areas: military, information, and economic superiority.

She listed and explained in detail various AI defence applications, such as Intelligence, Surveillance, and Reconnaissance (ISR); Logistics; Cyberspace Operations; Information Operations (and “Deep Fakes”); Command and Control; Semi-Autonomous and Autonomous Vehicles; and LAWS. While explaining the ISR applications of AI, she gave the example of Project Maven, in which, using material shared between Google and the US Department of Defense, the machine learning model behind the project's software was trained to identify 38 different kinds of objects. The system has already been deployed to locations in the Middle East and Africa, where it is helping military analysts sort through the mountains of data their sensors and drones soak up on a daily basis.
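
The sketch below gives a rough sense of what training such an object-recognition model involves: a network pretrained on generic imagery is given a new output layer for a fixed set of object categories and fine-tuned on labelled image chips. It is a simplified illustration, not a description of the actual Project Maven system; the framework, model choice and training details are assumptions, and only the number of object categories (38) comes from the talk.

```python
# Illustrative sketch only: fine-tuning a pretrained image classifier to
# recognise a fixed set of object categories, broadly analogous to the kind
# of ISR object-recognition task described above. Details are assumptions.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 38  # number of object categories cited in the talk

# Start from a network pretrained on generic imagery and replace its final
# layer so that it predicts the ISR-specific object categories.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One gradient step on a batch of labelled image chips."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage (with a real DataLoader of labelled imagery):
#   for images, labels in loader:
#       train_step(images, labels)
```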

Further, the training and simulation applications of AI for the military were also discussed by Dr. Sharma. She stated that war-gaming is a well-established tradition in the PLA, given China's relative lack of real-world combat experience, and that AI-based war-gaming software for use in professional military education programmes is on the rise. Dr. Sharma apprised the audience of AI applications for command and control. She emphasised that the PLA's most significant AI-enabled C2 projects are likely classified, while several Chinese enterprises advertise AI systems capable of automating some elements of command and control, including knowledge mapping, decision support, weapon target assignment, and combat tasking. She asserted that AI was the top military technology in 2022 in terms of the impact it is expected to have on the military sector, and that almost 49 per cent of defence organisations are in the process of adopting AI.
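
Weapon target assignment, one of the C2 elements listed above, can be posed as a classic assignment problem, which the minimal sketch below illustrates. It is a toy example for intuition only; the effectiveness matrix is hypothetical, and real C2 systems use far richer models and constraints.

```python
# Illustrative sketch only: weapon-target assignment as a simple assignment
# problem solved with the Hungarian algorithm. The numbers are hypothetical.
import numpy as np
from scipy.optimize import linear_sum_assignment

# effectiveness[i, j] = estimated probability that weapon i neutralises target j
effectiveness = np.array([
    [0.9, 0.4, 0.2],
    [0.5, 0.8, 0.3],
    [0.3, 0.6, 0.7],
])

# Maximise total effectiveness by minimising its negative.
weapons, targets = linear_sum_assignment(-effectiveness)
for w, t in zip(weapons, targets):
    print(f"weapon {w} -> target {t} (effectiveness {effectiveness[w, t]:.1f})")
```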

Drawing attention to a military case study of AI, she spoke on the Russia-Ukraine conflict and AI's role in it. She stated that AI has been used extensively in the conflict: Russian troll farms have used AI to generate human faces for fake propagandist personas and to produce audio and video disinformation. Software such as Clearview AI, SpaceKnow and Snorkel AI is being used to analyse signals and adversary communications, identify high-value information, and use it to guide diplomacy and decision-making. She further talked about the disruptions caused by AI. The US, Russia, and China are trying to rapidly incorporate AI into their military equipment, and the absence of global coalitions to regulate AI-based tools can lead to rogue states and non-state actors gaining access to these technologies.

Elaborating on AI's penetration into national security and global geopolitics, she talked in detail about China, Russia and the US. On 20 July 2017, the Chinese Government released a strategy detailing its plan to take the lead in AI by 2030. Less than two months later, Vladimir Putin publicly announced Russia's intent to pursue AI technologies, stating, “Whoever becomes the leader in this field will rule the world” (‘Whoever leads in AI will rule the world’: Putin to Russian children on Knowledge Day, Russia Today, 1 September 2017). Similarly, in the US, the National Defense Strategy released in January 2018 identified artificial intelligence as one of the key technologies that will ensure [the United States] will be able to fight and win the wars of the future. More than 30 countries, including the UK, France, Germany, Finland, Canada, Japan, South Korea, Singapore, Mexico, Kenya, Tunisia and the UAE, have released national plans and strategies on AI. She emphasised that China is leapfrogging into this technology, while Europe is doing well with a very comprehensive General Data Protection Regulation (GDPR), and India is now moving ahead in its investments too. She then discussed the AI development timelines of the US, China and India. The proliferation of AI in weapon systems, combined with the absence of international regulation on their development, could lead to a new arms race. Finally, the dual-use nature of AI-based tools creates a scenario in which non-state actors hold significant resources and thus have the potential to weaponise AI.

Talking about the roadmap for the development of AI in India, she stated that India is not a late entrant in this field. India came up with an AI development policy under the NITI Aayog, but it was aimed at the civil and private sectors, with a focus on healthcare, agriculture, e-commerce, etc. In defence, India entered the AI field later, in 2019, through the Defence AI Council (DAIC) and the Defence AI Project Agency (DAIPA); the two agencies are collectively funded to the tune of Rs. 100 crore for R&D into AI-enabled products in the defence sector. India's advancements in AI have been promising. The Indian Navy is currently working on 30 AI projects on autonomous systems, maritime domain awareness, perimeter security, decision making, and predictive inventory maintenance and management. The Navy is also establishing an AI Centre of Excellence at INS Valsura in Jamnagar, equipped with a modern lab on AI and Big Data Analytics. The Army has set up an AI Centre at the Military College of Telecommunication Engineering in Mhow, and the Indian Army is conducting trials of indigenously developed AI-enabled unmanned all-terrain vehicles in Ladakh in August 2022. She quoted the Indian Defence Minister as stating that, through lessons drawn from the ongoing Russia-Ukraine war, the Indian Armed Forces are pushing for the adoption of new technologies.

Discussing the policy, regulatory and ethical issues of AI, Dr. Sharma explained that, with regard to ethics and transparency, AI systems raise questions concerning the criteria used in automated decision making. Dr. Sharma concluded her presentation by pointing out a few key takeaways. She stated that the entry of Artificial Intelligence into the security domain has shown glimpses of its complexity and of the preparedness required to assimilate it. With the further growth of AI, the threats it presents and the opportunities it creates for national security will progressively multiply. Based on the global experience with AI, she suggested a way forward for India involving infrastructure development, policy and regulations, research and development, and human resource development. Along with a clear policy, there is a dire need to invest in critical infrastructure so that data servers lie within Indian territory. Finally, she asserted the importance of indigenous development, civil-military fusion, tapping the civilian innovation ecosystem, international cooperation, and balancing adoption and innovation.

Discussion

Ambassador Sujan R. Chinoy congratulated Dr. Sharma for her presentation on a very complex topic. He stated that AI is going to be one of the determining factors in future great power contestations and wars, along with the future use and misuse of data and the manipulation of politics through social media; a lack of attention and investment in this area will therefore be at India's peril. He stated that since there are no ethical yardsticks currently available to guide India into investing only in certain areas, India must invest more, analyse what its competitors are doing, and try to match up. It has been observed how the Chinese ran the initiative on the hardware side first and then caught up on the software side; therefore, India needs to put in more work. Ambassador Chinoy also emphasised the importance of MP-IDSA working on AI, particularly on its implications for the military. He also highlighted the significance of Unmanned Autonomous Vehicles (UAVs), especially the threat of unmanned underwater vehicles deployed by adversaries that may collect our data.

Maj. Gen. (Dr.) Bipin Bakshi mentioned that AI-related systems require very complex programming, and that programming AI systems to learn for themselves is even more complicated and unpredictable. Transparency and predictability are not in the control of the designer, and that is where the danger lies. He also talked briefly about the applications of AI in the military sector.

Ms. Krutika Patil put forth her views on how AI differs from other technologies. Unlike other technologies, which can be bought directly and utilised in any environment with few modifications, such interoperability is not possible with AI. Building AI systems is pure hard work, and India would need unique data for AI to work for its defence requirements. She also mentioned that the US has a more advanced defence AI ecosystem than China due to its access to contextual data.

Dr. Rajiv Nayan too reiterated that the subject of AI is complex. He expressed his concern over the definitions of AI and stated that one must be careful with the meaning of these definitions, since when science becomes social science, concepts tend to get more complicated. He emphasised the need to delve into the debate over who makes war, man or machine, noting that technologies like AI are a higher form of automation that will make war simpler. He also mentioned that China is not surging ahead in AI as much as it appears.

Dr. Swasti Rao asked about the dangers related to the AI capabilities of non-state actors and expressed concern about open-source research on AI and the military, asking whether the complexity of the issue can be gauged through open-source research. Dr. Sanur Sharma answered by asserting that AI researchers working on security issues need to integrate with defence institutions to gain access to appropriate data to gauge the level of threats and complexities.

Ambassador Saurabh Kumar enquired whether India had taken any initiatives in the United Nations on the ethical and legal aspects of AI. He asserted that India should consider taking such steps, similar to its role in 1982 on the relationship between disarmament and development. Dr. Sharma responded that she is not aware of India taking such initiatives, but that she has come across reports of China insisting on a dialogue on the topic at the UN.

Mr. Dinesh Pathak emphasised that security has two components, reading risks and determining a response; AI can help with the first, but response requires the application of the human mind, and the role of a human remains essential.

Mr. Niranjan Oak posed a question on the role of private sector investment in AI innovation. He enquired about possible steps taken by Western countries to counter Chinese investments and influence in Western AI start-ups. Dr. Sharma answered that the US is taking various measures, such as creating a list of Chinese entities that are infiltrating US-based AI start-ups.

The discussion ended with a vote of thanks by Dr. Cherian Samuel.

The report has been prepared by Ms. Krutika Patil, Research Assistant, Cybersecurity Project, MP-IDSA.