The United Nations Interregional Crime and Justice Research Institute (UNICRI) was created in 1965 to assist UN member states in understanding and acting on issues related to crime prevention, criminal justice, human rights and the rule of law. In 2017, the Centre for Artificial Intelligence and Robotics was established in The Hague, The Netherlands, as part of the institute. One of the objectives of the Centre, according to Irakli Beridze, its head, is to explore how technology can be used by law enforcement agencies in tracking crimes and how this fits into the debate over human rights and ethics. In an interview with Geospatial World, Beridze says that his team is bringing together the private sector, UN member states and academia to advance understanding of AI applications for crime prevention and criminal justice, while developing specialized tools to help countries empower law enforcement agencies to apply AI to different types of crime in a lawful and trustworthy manner.
How far has law enforcement been successful in adopting technologies such as Artificial Intelligence and Machine Learning?
Both AI and Machine Learning may have been used in areas like healthcare, transportation, energy and finance for quite some time, but these are new technologies for law enforcement. Today, we see so many companies displaying these technologies, which is really a big thing. Law enforcement agencies are rapidly adopting AI because countries have started to identify its benefits and are becoming aware of its potential in the fight against crime. However, a lot of this is still at a conceptual level.
There are not a lot of tools out there that are really being used on a daily basis by law enforcement. That said, there is a lot of investment going into them and a lot of interest in developing them. On our part, we are trying to contribute to the knowledge and understanding of law enforcement agencies by bringing together different stakeholders. We are running specialized forums for INTERPOL and have been holding meetings since last year on the theme of trustworthy and lawful AI for Law Enforcement.
So, is your role confined to bringing stakeholders together, or are you also involved in building systems?
Bringing people together is one part of the objective of our Centre. The other part is to work with the different stakeholders to actually develop and pilot the technology. For instance, we are developing projects to build tools to counter human trafficking, corruption, illicit financial flows, terrorism, child pornography and illegal fishing. We are looking at what Machine Learning and AI can contribute to identifying and countering these kinds of illegal activities.
How can AI and location technology contribute to strengthening law enforcement?
AI can be a powerful tool for law enforcement and help in addressing many types of crime. Factoring in location technology, it can help law enforcement optimize their resources in specific areas and at specific times, to cover as much ground as possible with the same or even fewer resources. Drones with sensors, for instance, can be used to detect illegal movements such as illegal border crossings, human traffickers and vessels fishing illegally. Location is a powerful piece of information for AI systems.
Do you think these technologies can be used by cybercriminals, and could they also help in boosting cybersecurity?
Yes, absolutely. One of the three areas of future crime related to AI we are looking at is digital crime, or cybercrime as it is also called. While right now most of these crimes are committed by human hackers with all sorts of existing technologies, in future these acts could be performed by autonomous AI-driven tools. What this means is that there would be no one getting tired, no one needing food or sleep, and the cyberattack could continue 24 hours a day, 7 days a week. AI can adapt itself, learn from itself and find loopholes in existing systems that humans might not be able to find, or at least not quickly enough. At the same time, it can also help to identify threats and attacks and even come up with effective solutions. Ultimately, if we don’t adapt and don’t develop an understanding of these techniques, we won’t be able to counter them.
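As a purely illustrative aside (not a UNICRI or INTERPOL tool), the defensive use Beridze describes can be as simple as unsupervised anomaly detection: train a model on what normal activity looks like and flag connections that deviate from it. The feature names and values below are assumptions made up for the sketch.

```python
# Illustrative sketch only: flagging unusual network connections with an
# unsupervised anomaly detector. Feature names and data are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [bytes_sent, bytes_received, duration_s, failed_logins]
rng = np.random.default_rng(0)
normal_traffic = rng.normal(
    loc=[5_000, 20_000, 30, 0], scale=[1_000, 5_000, 10, 0.5], size=(500, 4)
)

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# A connection with an unusual profile (huge upload, many failed logins)
suspicious = np.array([[900_000, 1_000, 600, 25]])
print(detector.predict(suspicious))          # -1 means flagged as anomalous
print(detector.predict(normal_traffic[:3]))  # mostly 1, i.e. normal
```

In practice the hard part is not the model but the feature engineering and the handling of false positives, which is exactly where the understanding Beridze calls for matters.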
Do you think a specific framework is required for policing, in terms of data sharing and access?
Absolutely. I think as AI becomes more effective and sophisticated, it will be used for all forms of policing in the not too distant future. Therefore, it is very important for law enforcement agencies to stick to notions like fairness, accountability, transparency and explainability. If the use of AI by law enforcement is not lawful and trustworthy, it jeopardizes our human rights and very much undermines the fundamental principles of law, such as the presumption of innocence, the privilege against self-incrimination, and proof beyond a reasonable doubt. To prevent this, some framework for law enforcement will be required.
How crucial do you think it is for these technologies to become explainable?
I believe developers really need to strive to ensure that AI is explainable and, at the same time, end users, like law enforcement agencies, need to equally underline that the technology must be explainable. We need to understand how algorithms are coming up with solutions. Unexplainable technology may yield hidden, biased results, will not help in building the trust of communities and may undermine trust in institutions such as the police. This is one of the biggest dangers of applying technology without being mindful of all the issues related to it.
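To make "explainable" a little more concrete, here is a minimal sketch, using a synthetic dataset and made-up feature names rather than any real policing system, of asking a trained model which inputs actually drive its predictions. If a proxy feature such as a neighbourhood identifier dominated, that would be exactly the kind of hidden bias Beridze warns about.

```python
# Illustrative sketch only: inspecting which features drive a model's output
# via permutation importance. Data and feature names are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
features = ["prior_incidents", "time_of_day", "neighbourhood_id"]
X = rng.normal(size=(1_000, 3))
# The synthetic label depends only on the first feature, so a faithful
# explanation should rank "prior_incidents" clearly highest.
y = (X[:, 0] > 0.5).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, score in zip(features, result.importances_mean):
    print(f"{name}: {score:.3f}")
```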
Do you know of a country using these technologies?
At the moment, lots of law enforcement agencies are trying to figure out how AI can be useful in their line of work. Predictive policing is, and will continue to be, a big field in the future. By predictive policing we don’t mean criminals being identified even before they commit crimes. What we mean is resource optimization: looking at where past crimes were committed and what the chances are of something happening at a particular place. This can save a lot of time, money and resources for law enforcement agencies. As far as a specific example is concerned, in The Netherlands the Dutch National Police is designing concepts and tools and exploring different ways of applying AI. Some of it is very useful and sophisticated.
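As a hedged illustration of predictive policing in this resource-optimization sense (the incident records and grid size below are invented, and this is not the Dutch National Police's method), the core idea can be as simple as binning past incidents by place and time and ranking the busiest cells.

```python
# Illustrative sketch only: counting where and when past incidents occurred
# and ranking grid cells so patrols can be allocated accordingly.
from collections import Counter

# Each record: (latitude, longitude, hour_of_day) of a past incident.
incidents = [
    (52.3702, 4.8952, 23), (52.3705, 4.8960, 22), (52.3698, 4.8949, 23),
    (52.0907, 5.1214, 14), (52.3701, 4.8955, 1),  (51.9244, 4.4777, 23),
]

def cell(lat: float, lon: float, hour: int) -> tuple:
    """Bin an incident into a roughly 1 km grid cell and a 4-hour block."""
    return (round(lat, 2), round(lon, 2), hour // 4)

hotspots = Counter(cell(lat, lon, hour) for lat, lon, hour in incidents)

# The busiest cells are candidates for extra patrols at those times.
for (lat, lon, block), count in hotspots.most_common(3):
    print(f"cell ({lat}, {lon}) hours {block*4:02d}-{block*4+3:02d}: {count} incidents")
```

Real systems add weighting for recency, seasonality and reporting bias, but the output is the same kind of ranked allocation of limited patrol resources described above.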
Do you think caution is a must before adoption?
This is a new field and we are in a phase where these technologies are being developed and used. They have a lot of potential to achieve good results for us and help us live safer lives. But at the same time, these technologies have the potential to be misused by individuals, companies or others. Before using them, we need to be mindful of all the dangers and issues associated with their use and always underline those issues.
For further information about the activities of the UNICRI AI and Robotics Centre please visit the relevant section on our website.