Recognizing rioters using AI: convenient or unethical?
Recognizing violence in surveillance footage of large crowds? It can be done with the help of AI. But we need to think carefully about whether we want to deploy such a model, and in what way, concludes Emmeke Veltmeijer. 'It is more relevant than ever to develop ethically sound models and to stay in conversation about this.'
AI researcher Emmeke Veltmeijer investigated how to automatically analyze large groups of people. This can be useful, for example, to security personnel and crowd managers who want to recognize riots based on large amounts of data.
Riots and noise
'I did this by automatically dividing crowds into smaller subgroups,' says Veltmeijer. 'You can analyze those subgroups using images, video footage, audio and posts on social media.' She developed the AI models on a computer and applied them in the real world. 'With these, I could automatically recognize fighting groups of people in video footage, and trace in which parts of Amsterdam riots were taking place based on social media messages. Based only on group-level noise, I could also determine whether the subgroups in a crowd were expressing themselves positively or negatively.'
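To give a flavor of what the video side of this involves, here is a minimal sketch, not Veltmeijer's actual model: it uses dense optical flow as a crude stand-in for a trained fight detector, flagging parts of the frame where motion is unusually intense. The file name, fixed grid, and threshold are illustrative assumptions; a real system would segment actual subgroups and use a learned classifier.

```python
# Minimal sketch: flag high-motion regions in crowd footage as possible
# altercations. Dense optical flow stands in for a trained violence
# classifier; "crowd.mp4", the 4x4 grid, and the threshold are all
# illustrative assumptions, not details from the thesis.
import cv2
import numpy as np

GRID = 4           # split each frame into a 4x4 grid of regions
THRESHOLD = 3.0    # mean flow magnitude (pixels/frame) that triggers an alert

cap = cv2.VideoCapture("crowd.mp4")
ok, prev = cap.read()
if not ok:
    raise SystemExit("could not read video")
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Dense optical flow between consecutive frames (Farneback's method).
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)   # per-pixel motion strength
    h, w = magnitude.shape
    for i in range(GRID):
        for j in range(GRID):
            cell = magnitude[i * h // GRID:(i + 1) * h // GRID,
                             j * w // GRID:(j + 1) * w // GRID]
            if cell.mean() > THRESHOLD:
                print(f"High motion in cell ({i},{j}): possible altercation")
    prev_gray = gray
cap.release()
```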
Veltmeijer focused on applicable systems that could make the work of security guards and other professionals easier. One example is a system that alerts security guards when a group behaves violently. Another possible application is analyzing the sound of rival supporters at a sports match. A third is analyzing social media messages about a particular region to help security agencies detect riots or other unexpected events.
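As a rough illustration of that third application, the sketch below flags sudden bursts of messages mentioning a region, a common baseline technique for event detection; the input format, five-minute window, and z-score threshold are assumptions for illustration, not details from the thesis.

```python
# Minimal sketch: flag time windows in which messages about a region spike
# far above the baseline rate. The unix-timestamp input, 5-minute window,
# and z-score threshold are illustrative assumptions.
from collections import Counter
from statistics import mean, stdev

def detect_spikes(timestamps, window=300, z_threshold=3.0):
    """Return start times (seconds) of windows with anomalously many messages."""
    if not timestamps:
        return []
    buckets = Counter(int(t) // window for t in timestamps)
    start, end = min(buckets), max(buckets)
    counts = [buckets.get(b, 0) for b in range(start, end + 1)]
    if len(counts) < 2:
        return []
    mu, sigma = mean(counts), stdev(counts)
    return [(start + i) * window
            for i, c in enumerate(counts)
            if sigma > 0 and (c - mu) / sigma > z_threshold]

# Toy example: one message per minute, then a burst of 100 around t=3600.
quiet = [i * 60 for i in range(60)]
burst = [3600 + i for i in range(100)]
print(detect_spikes(quiet + burst))   # -> [3600]
```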
Ethics
Whether it is desirable that such detailed information can be retrieved from a large crowd is another question entirely. Veltmeijer therefore also examined the impact of the European AI Act on automatic group analysis such as that in her study. 'Artificial intelligence is playing an ever-larger role in our society. It is more relevant than ever to develop ethically sound models and to stay in conversation about this.'
Her ethical analysis revealed that practical and ethical concerns remain even when the AI Act is applied. Suggestions for improvement include collecting data without capturing personal data, taking into account the context in which data is collected, and acknowledging the individual responsibility of scientists.
The results of her research are therefore relevant not only to people involved in crowd management, but to everyone who is ever part of a large crowd. Veltmeijer: 'It is important to know what your rights are, and in what way AI systems can contribute to your physical safety without violating your privacy.'