02 Jun Policing AI to empower society
Ashley Beck, Senior Industry Consultant – Global, Fraud & Security Intelligence Division at SAS, looks at how addressing the ethical challenges of AI within policing can help to build more equitable societies.
Decision-making in policing incidents and investigations involves many elements, each fed by information at every stage. Data with integrity, representative of all available information, is crucial to ensure decisions are made using the entire picture; if officers don’t have a clear view of all the information, the consequences can be catastrophic. Using Artificial Intelligence (AI) and analytics to support investigators with insights can make reactive and proactive duties more efficient and effective. It also helps law enforcement gain the much-needed trust and confidence of the communities they serve, providing them with the information they need to safeguard people from harm, whether physical, emotional or financial.
AI solutions bring many benefits to the world of law enforcement. Yet poor design or misuse of these solutions could cause irreparable harm to society. AI systems, for example, are vulnerable to biases and errors: the quality of any AI analysis depends heavily on the quality of the data provided, and if that data is already biased, the program will replicate the bias in its results. The development of AI systems must therefore always be conducted responsibly, built closely with investigators and aligned with their objectives.
So, how can AI be more responsible?
When it comes to the development of ethical AI technologies for law enforcement, it is essential that they are built alongside each police department, ensuring that people from different specialisms, cultures and experiences – and crucially, the people who will use the technology – are involved. This approach injects breadth and depth of understanding to extract the real value from every piece of data owned and received by law enforcement.
A deep understanding within the investigation of ‘The Policing Objective’ and of what data translates to valuable information is critical to encourage focus and improve efficiencies to protect people, while taking cognisance of privacy and civil liberties.
For instance, an investigator may be faced with thousands of lines of data that have been legally obtained for a policing purpose. If this data includes information unrelated to the investigation, such as personal information belonging to a completely unconnected person, the risk of collateral intrusion increases. And if that same data is weaved with indications of serious criminality, how do investigators minimise the intrusion of privacy while maintaining their ability to understand, from that data, the identity of subjects or suspects and the risk that they present?
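To make the trade-off concrete, a minimal sketch of this kind of data minimisation might look like the following. The record structure, field names and criminality indicators are entirely hypothetical, invented for illustration and not taken from any real policing system or SAS product:

```python
# Hypothetical sketch: minimising collateral intrusion when triaging
# legally obtained records. Only records matching criminality
# indicators are surfaced in full; everything else is reduced to a
# bare identifier so unconnected people's details stay withheld.

CRIMINALITY_INDICATORS = {
    "wire transfer to flagged account",   # illustrative indicator
    "coded contact request",              # illustrative indicator
}

def triage(records):
    """Split records into those relevant to the investigation and
    redacted stubs for everything else."""
    relevant, redacted = [], []
    for rec in records:
        if any(ind in rec["content"].lower() for ind in CRIMINALITY_INDICATORS):
            relevant.append(rec)              # retained for the investigator
        else:
            redacted.append({"id": rec["id"]})  # identity and content withheld
    return relevant, redacted

records = [
    {"id": 1, "content": "Wire transfer to flagged account on Friday"},
    {"id": 2, "content": "Dinner with family at 7pm"},
]
relevant, redacted = triage(records)
# Record 1 is surfaced in full; record 2 is reduced to its identifier.
```

In practice the indicators would come from the linguistic and analytical rules developed with investigators, but the principle is the same: the system narrows what a human ever sees, rather than exposing the whole dataset.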
This is where ethically developed technology, which embeds trust and confidence into its use and keeps human involvement at the heart of its development, comes into the picture.
Injecting community-generated value into AI systems
AI technologies work well only when they are used to enhance – not replace – human decision-making. AI systems do not understand our goals in society by default. When working to improve law enforcement, as well as criminal justice, we need to inform systems of our goals in order to achieve them in an equitable and fair manner. We also have to ensure that the stakeholders who inject value are involved at every stage, including pre- and post-implementation of technological support solutions, to gain different perspectives on output and impact.
We need to learn from experience and from data owned by significant others as well as law enforcement organisations, to feed back into the technology, support the prevention of further victims and detect criminals, stopping them in their tracks. The sources of information to support this approach could be derived from many data streams, some of which wouldn’t be immediately obvious. One example is an anonymous survey of young people to understand the volume and types of exploitation that have gone unreported. Questions could be asked such as: how many children have been randomly contacted by a stranger? On what platform did this occur? Understanding not only the information reported to police but also data external to the organisation can be key to developing technology that can truly capture, elevate and illuminate the threats and risks that law enforcement agencies need to act on quickly.
Post-pandemic, we have witnessed an 83% rise in children being targeted by strangers looking to exploit them. If we combine survey information, third-party organisation and public sector data alongside policing data, we could generate an enhanced proactive approach to tackle a growing problem where children remain at risk online on a daily basis.
In a recent exercise, an account was created on a mainstream platform with its settings set to public. A video was posted of a child playing with collector cards popular with children; the focus of the video was only on the child’s hands. Within six minutes the account received a message from a stranger: “Hi”. On reviewing the sender’s account, there were various posts that would appeal to young boys. Further investigation revealed that the account belonged to an adult male.
The risk is real: trusting children could perceive such an account as belonging to another child. Data can elevate the understanding of modus operandi in the criminal world, enabling preventative messages to communities and feeding into enhancements of the technology to further enrich future law enforcement capabilities.
Ensuring transparency and accountability
Anyone who develops technology that enhances decision-making with automated, data-driven insights should bear the responsibility for transparent and equitable outcomes. There are a number of ways to ensure transparency and accountability in a joint venture between software vendors, policing agencies and national security organisations. But first, can the process behind a decision be explained?
We have to make sure not only that the investigator is able to explain decision-making to authorities, the public and the judicial system, but also that police officers themselves understand the objectives, capabilities and limitations of AI. To inspire public confidence and trust, it is of utmost significance that the data is explainable to different people. Through the development of linguistic rules, for example, investigators would have a clear vision of why data elements are being presented to them. Users would also be presented with the probability of an event occurring, thereby sieving the data and creating rules and alerts to generate action quickly. They would also understand clearly why data has been omitted from the extraction, allowing a truly ethical, informed decision with the support of technology.
Machine Learning can also be utilised to feed into the explainable rules, creating an ecosystem of development, extraction, and decision-making that is ethical, trustworthy, and transparent.
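As a sketch of what such explainable, rule-based alerting could look like, consider the following. The rules, weights and flags are invented purely for illustration; they do not describe any real SAS capability or policing ruleset:

```python
# Illustrative sketch of explainable alerting: every score is returned
# together with the named rules that produced it, so an investigator
# can explain why a record was surfaced. Rule names and weights are
# hypothetical; in practice they would be developed with investigators
# (and could themselves be refined by machine learning).

RULES = [
    ("adult account initiating contact with minors", 50),
    ("message sent within minutes of first post", 30),
    ("content curated to appeal to children", 20),
]

def score(flags):
    """Return an alert probability (0-1) and the rules that fired."""
    fired = [(name, weight) for name, weight in RULES if name in flags]
    probability = min(100, sum(weight for _, weight in fired)) / 100
    reasons = [name for name, _ in fired]
    return probability, reasons

probability, reasons = score({
    "adult account initiating contact with minors",
    "message sent within minutes of first post",
})
# The two matching rules fire; both rule names are returned as the
# human-readable explanation alongside the combined probability.
```

Because each alert carries the names of the rules that triggered it, the output remains explainable to authorities, the public and the courts, which is exactly the transparency requirement described above.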
Accountability starts and ends with the law enforcement professional who uses AI to make a faster, better-informed decision. The initial collection of data plays a major part in this discussion as well.
Empowering investigators through education and training
Another point to keep in mind is that most law enforcement professionals did not sign up to be data scientists. The emphasis is therefore shifting to education and training in policing, which is crucial to remove the fear of data and of digital evidence and intelligence gathering.
Investigations have changed dramatically over the years: the sentiment that ‘every contact leaves a trace’ has expanded from the purely physical space of DNA, CCTV and the like to include the digital landscape among the areas where evidence exists and holds real value. Police officers are keen to learn because this shift towards gathering digital evidence has been recognised. They want to stay ahead and be able to investigate and explain complex crimes. Ongoing education and training therefore play a huge part in elevating investigative capability.
Better outcomes for police departments and the communities they serve
To ensure the ethical use of AI, law enforcement organisations need to partner with a technology company that enables the organisation, the technology and the police officers to grow and learn together. Most importantly, they need to make sure that this is not just a textbook exercise. This way, remaining ahead of the curve is achievable.