Opportunities and limits of preventive arms control: missile defence, space technology and artificial intelligence
Abstract: Technological progress and military armament are closely linked. Whereas in the past the military was often a central driver of technological progress (think of the history of aviation, for example), today it is often the civilian sector whose technical innovations shape the further development of weapon systems, for example in the field of artificial intelligence. What are the security-policy risks of this development, and what role can arms control play in addressing these risks in the face of technological progress? We will discuss these two key questions using case studies from the fields of missile defence, space technology and artificial intelligence. There will be room for further questions over a small drink afterwards.
Abstract: In the past decade, AI has quickly percolated into every industry and into numerous application areas, including international security. Far from being a perfect technology, it has proven highly prone to exhibiting biases and producing flawed outputs, and it is now recognised as a ubiquitous regulatory challenge. This talk takes these issues into account and focuses on various aspects of the development, deployment and governance of AI for military applications, including the adverse effects of AI biases. It will cover how and where AI is employed for military purposes; how AI produces and exacerbates biases around gender and race, and how these can be countered; and what the governance paradigm around military AI looks like vis-à-vis discussions on LAWS (lethal autonomous weapon systems) at the UN and at other international fora. There will be room for further questions over a small drink afterwards.
Abstract: In addition to factors such as time and effectiveness, ethical and legal concerns play a central role in the discussions surrounding the increasing automation of weapon systems. From a military perspective, these systems must, in Germany, not only comply with international humanitarian law but also not conflict with the ethical guidelines of the Bundeswehr. Accordingly, industry and the military are making efforts to embed ethical and legal values in the algorithms of the weapon systems themselves, in order to develop a "value-based technology" that works "better than humans", as defence industry players put it. The lecture offers insights into current empirical research on how this is to be achieved and on the role that ethics, law and the (more-than-)human play in the development of autonomous weapon systems. There will be room for further questions over a small drink afterwards.