Archives

Using AI to investigate publicly available documents on violence prevention

 

Artificial intelligence (AI) systems are increasingly applied in public health, yet their use for analysing fragmented, multi-sectoral policy landscapes remains underdeveloped. Many applications have focused on service delivery, such as AI-powered chatbots, data surveillance and monitoring, and tracking social media interactions for emerging risks, with less attention paid to how AI might support policy analysis. This is especially true for the violence prevention sector, where AI is gaining traction as a solution for triaging help-seeking calls, detecting threatening messages, predicting conflict and improving police data, but not for understanding the policy landscape.

Policy responses to violence are undergoing scrutiny in the UK, coinciding with the recent publication of an updated cross-government strategy addressing violence against women and girls. This renewed focus places increased demands on researchers and policymakers to rapidly synthesise large and fragmented bodies of policy evidence spanning multiple sectors and both local and national government. Traditional, wholly manual approaches to policy review may struggle to meet these demands within policy-relevant timeframes.

This research, an exploratory, proof-of-concept case study, aimed to describe the development and preliminary exploration of an AI-enabled tool designed to synthesise evidence from violence-related policy documents in the UK. The team was led by VISION Research Fellow Dr Darren Cook and included several members from the wider VISION consortium: Dr Elizabeth Cook, Kimberly Cullen, Professor Sally McManus, Professor Gene Feder and Professor Mark Bellis.

For their article, Artificial intelligence in critical synthesis of public health responses to violence: A novel application to UK violence prevention policy, the team compiled a corpus of publicly available UK policy and strategy documents on violence (N = 343) through expert review, manual searches of government and third sector organisation websites, and automated web scraping.

Then, they used the corpus to train an existing AI framework and deployed it through a question-answer interface. Stakeholders working in violence prevention (academics, practitioners in specialist services and government officials) were invited to pose natural-language questions about violence policy and consider the system’s utility and the usefulness of its outputs. Their feedback indicated that the AI-generated reports were well-grounded in the underlying source documents. Syntheses aligned closely with the documents in the tool, and the inclusion of document references and page-level citations supported credibility assessments. Corpus coverage statistics were considered particularly helpful when judging the robustness of responses.

This research contributes by documenting the early application of an AI-enabled tool designed to support exploratory policy analysis. The team illustrates an emerging analytic capability and its potential role within policy-oriented research workflows. By demonstrating how a document-grounded, closed-domain AI system can be used to interrogate policy framings and identify potential silos, this work addresses a gap in current public health applications of AI, specifically in the context of violence prevention.
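The article does not reproduce the tool's implementation here, but the general pattern of a document-grounded, closed-domain question-answering system (retrieve relevant passages from a fixed corpus, then answer with document- and page-level citations) can be sketched as follows. Everything in this snippet — the corpus entries, the token-overlap scoring, and the function names — is an illustrative assumption, not the VISION tool's actual code.

```python
# Illustrative sketch of document-grounded, closed-domain Q&A:
# rank passages from a fixed corpus against a question, then return
# the best matches tagged with document- and page-level citations.
from collections import Counter

# A toy stand-in for the policy-document corpus (entries invented).
corpus = [
    {"doc": "Serious Violence Strategy", "page": 12,
     "text": "early intervention programmes aim to prevent youth violence"},
    {"doc": "Tackling VAWG Strategy", "page": 4,
     "text": "violence against women and girls requires cross government action"},
    {"doc": "Domestic Abuse Act Guidance", "page": 30,
     "text": "coercive control is recognised as a form of domestic abuse"},
]

def tokenise(text):
    return [w.strip(".,?").lower() for w in text.split()]

def retrieve(question, corpus, k=2):
    """Rank passages by simple token overlap with the question."""
    q_tokens = Counter(tokenise(question))
    scored = []
    for passage in corpus:
        p_tokens = Counter(tokenise(passage["text"]))
        overlap = sum((q_tokens & p_tokens).values())
        scored.append((overlap, passage))
    scored.sort(key=lambda s: s[0], reverse=True)
    return [p for score, p in scored[:k] if score > 0]

def answer_with_citations(question, corpus):
    """Return retrieved evidence, each item tagged with a citation."""
    hits = retrieve(question, corpus)
    return [f'{p["text"]} [{p["doc"]}, p. {p["page"]}]' for p in hits]

for line in answer_with_citations("How is youth violence prevented?", corpus):
    print(line)
```

A production system would replace the token-overlap scorer with semantic retrieval and pass the retrieved passages to a language model for synthesis, but the citation-carrying structure — which is what stakeholders found supported credibility assessments — stays the same.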

To access the VISION AI tool to ask your own questions about violence prevention: VISION: Violence, Health & Society  

To download the paper: Artificial intelligence in critical synthesis of public health responses to violence: A novel application to UK violence prevention policy

To cite: Cook, D., Cook, E., Cullen, K., Zachos, K., McManus, S., Feder, G., Bellis, M., Maiden, N. Artificial intelligence in critical synthesis of public health responses to violence: A novel application to UK violence prevention policy. Science Direct (2026). https://doi.org/10.1186/s40163-026-00272-2

Illustration from Adobe Photo Stock subscription

VISION responds to Parliamentary, government & non-government consultations

Consultation, evidence and inquiry submissions are an important part of our work at VISION. Responding to Parliamentary, government and non-government organisation consultations ensures that a wide range of opinions and voices are factored into the policy decision-making process. As our interdisciplinary research addresses violence and how it cuts across health, crime and justice and the life course, we think it is important to take the time to answer any relevant call and to share our insight and findings to support improved policy and practice. We respond as VISION, the Violence & Society Centre, and sometimes in collaboration with others. Below are links to our published responses and evidence since June 2022.

  1. UK Parliament – International Development Committee – Inquiry: Women, Peace and Security. Our submission was published in March 2026
  2. UK Parliament – Public Bill Committee – Call for evidence: Crime and Policing Bill. Our submission was published in 2025
  3. UK Parliament (Library) – POSTNote – Approved Work: Violence Against Women and Girls in schools and among children & young people. Two VISION reports were referenced in their POSTNote published in August 2025
  4. UK Parliament – Public Accounts Committee – Inquiry: Tackling Violence against Women and Girls (VAWG). Our submission was published in April 2025
  5. UK Parliament – House of Lords Select Committee on Social Mobility Policy – Call for Evidence: Exploring how education and work opportunities can be better integrated to improve social mobility across the UK. Our submission was published in 2025
  6. UK Parliament – Women and Equalities Committee – Inquiry: Community Cohesion. Our submission was published in February 2025
  7. UK Parliament – Call for evidence on the Terminally Ill Adults (End of Life) Bill. Our submission was published in February 2025
  8. UK Parliament – Public Accounts Committee – Inquiry: Use of Artificial Intelligence in Government. Our submission was published in January 2025
  9. UK Parliament – Public Accounts Committee – Inquiry: Tackling Homelessness. Our submission with Dr Natasha Chilman was published in January 2025. See the full report
  10. Home Office – Legislation consultation: Statutory Guidance for the Conduct of Domestic Homicide Reviews. Our submission was published on the VISION website in July 2024
  11. UK Parliament – Women and Equalities Committee – Inquiry: The rights of older people. Our submission was published in November 2023
  12. UK Parliament – Women and Equalities Committee – Inquiry: The impact of the rising cost of living on women. Our submission was published in November 2023
  13. UK Parliament – Women and Equalities Committee – Inquiry: The escalation of violence against women and girls. Our submission published in September 2023
  14. Home Office – Legislation consultation: Machetes and other bladed articles: proposed legislation (submitted response 06/06/2023). Government response to consultation and summary of public responses was published in August 2023
  15. Welsh Government – Consultation: National action plan to prevent the abuse of older people. Summary of the responses published in April 2023
  16. Race Disparity Unit (RDU) – Consultation: Standards for Ethnicity Data (submitted response 30/08/2022). Following the consultation, a revised version of the data standards was published in April 2023
  17. UK Parliament – The Home Affairs Committee – Call for evidence: Human Trafficking. Our submission was published in March 2023
  18. UN expert – Call for evidence: Violence, abuse and neglect in older people. Our submission was published in February 2023
  19. UK Parliament – The Justice and Home Affairs Committee – Inquiry: Family migration. Our submission was published in September 2022 and a report was published following the inquiry in February 2023
  20. Home Office – Consultation: Controlling or Coercive behaviour Statutory Guidance. Our submission was published in June 2022

For further information, please contact us at VISION_Management_Team@city.ac.uk

Photo by JaRiRiyawat from Adobe Stock downloads (licensed)

Discovering the Potential of Large Language Models in Social Science Research: Takeaways from an Oxford Workshop

By Dr Maddy Janickyj, Research Fellow in Natural Language Processing (NLP) for the Violence, Health, and Society (VISION) Consortium, University College London

As a data-focused VISION researcher with a PhD specialising in Natural Language Processing (NLP; see our previous blog for more about this), I initially avoided ChatGPT and similar tools. ChatGPT, a chatbot built on a Large Language Model (LLM) developed by OpenAI, offers capabilities such as summarising information, translating text, and even writing code.

While ChatGPT is perhaps the best-known example of an LLM, similar models are integrated into many everyday tools. For instance, LLMs are the underlying technology in many customer service chatbots, virtual assistants like Alexa, and writing tools such as Grammarly. These LLMs are trained on large datasets so that they can understand (and in some cases generate) language. The models draw on this training to complete various tasks and are fine-tuned to work in specific domains. Their breadth of abilities, and the many open-source models that have been developed, make them a powerful methodological tool for researchers in both computer science and the social sciences. For clarity, an open-source LLM is one whose code and architecture are publicly available.

To further understand how LLMs are being used by researchers, and to consider how the tools could integrate with and support violence-related research, I – a mathematician turned computational social scientist – attended the Oxford LLMs workshop. The event, held at Oxford’s Nuffield College, aimed to bring early-career scholars up to speed with the technical foundations, real-world applications, and research potential of LLMs. Throughout the week, I met PhD and Master’s students and other postdoctoral researchers interested in using LLMs to examine economic, linguistic, and political issues, among others.

Understanding LLMs: Lectures and Industry Insights

The first few days provided foundational lectures and talks showcasing the technical underpinnings and applications of LLMs. One of the big draws was the calibre of speakers. We heard from industry experts working at well-known companies such as Meta, Ori, Qdrant, Wayfair, Intento, Arize AI, and Google.

We then started our deep dive into LLMs, including how they are trained and evaluated. We heard about the numerous ways you can fine-tune LLMs, a step which occurs after general pre-training and tailors a model to meet domain/task-specific needs. Fine-tuning methods such as Continued Pre-training, Supervised Fine-tuning, and Preference Tuning were highlighted. Each technique offers different ways of adapting LLMs to specialised domains without needing to re-train them from scratch, saving computational resources.
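The distinction between general pre-training and continued pre-training on domain text can be illustrated with a deliberately tiny model. In the sketch below, a bigram count table stands in for an LLM's parameters; this is a conceptual toy of my own, not how the workshop's models (or any real LLM) are trained.

```python
# Toy illustration of pre-training followed by continued pre-training
# on domain text. A bigram count table stands in for model parameters;
# real LLM fine-tuning updates billions of weights by gradient descent.
from collections import defaultdict

def train(model, text):
    """Update bigram counts from a text (one 'training pass')."""
    words = text.lower().split()
    for a, b in zip(words, words[1:]):
        model[a][b] += 1

def next_word_prob(model, a, b):
    """P(b | a) under the bigram model."""
    total = sum(model[a].values())
    return model[a][b] / total if total else 0.0

model = defaultdict(lambda: defaultdict(int))

# General pre-training: broad text where "risk" is followed by
# everyday continuations.
train(model, "the bank took a risk on the loan and the risk paid off")

# Continued pre-training: domain text shifts the model towards
# domain-specific usage of the same words, without retraining from scratch.
train(model, "a risk assessment precedes every risk assessment review")

print(next_word_prob(model, "risk", "assessment"))
```

After the domain pass, "assessment" becomes the most probable continuation of "risk" — the counts from general pre-training are kept, but the domain data re-weights them, which is the essence of adapting a model without starting over.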

We also covered common challenges associated with fine-tuning models. One of these is “catastrophic forgetting,” where a model’s performance declines in one area when it is fine-tuned on another. For example, if a model is adjusted to improve name recognition, it may inadvertently lose accuracy in identifying locations. This side effect is something I encountered when fine-tuning other NLP models during my PhD, and it illustrates the balance required when refining LLMs.
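Catastrophic forgetting can be reproduced at the smallest possible scale: a one-parameter model fit to task A by gradient descent, then fine-tuned only on a conflicting task B, loses its accuracy on A. The tasks and numbers below are invented for illustration; real LLMs forget in the same way, just across billions of weights.

```python
# Toy demonstration of catastrophic forgetting: a one-parameter model
# fit to task A loses accuracy on A after further training only on task B.

def sgd(w, data, lr=0.1, steps=200):
    """Minimise squared error of y ≈ w * x by stochastic gradient descent."""
    for _ in range(steps):
        for x, y in data:
            grad = 2 * (w * x - y) * x
            w -= lr * grad
    return w

def error(w, data):
    """Total squared error of the model on a dataset."""
    return sum((w * x - y) ** 2 for x, y in data)

task_a = [(1.0, 2.0), (2.0, 4.0)]    # task A: y = 2x
task_b = [(1.0, -2.0), (2.0, -4.0)]  # task B: y = -2x, conflicts with A

w = sgd(0.0, task_a)          # initial training on task A
err_before = error(w, task_a) # near zero: task A is learned

w = sgd(w, task_b)            # fine-tune on task B only
err_after = error(w, task_a)  # task-A performance collapses

print(err_before, err_after)
```

Mitigations discussed in the fine-tuning literature, such as mixing in some task-A data during the second phase or constraining how far weights can move, all amount to stopping the second training run from overwriting what the first one learned.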

Applying LLMs: Collaborative Research Projects

In the latter half of the week, workshop attendees collaborated on research projects, exploring LLM applications across social science realms. This was a hands-on opportunity to test LLM methodologies discussed earlier and apply them to real-world social science challenges.

Leading up to the workshop, we had the chance to review the proposed project briefs, gather literature showcasing how LLMs are used in our respective disciplines, and finally rank the four projects according to our own skillsets and research interests. One of the projects we decided to tackle as a group focused on developing an LLM purely for social science research. LLMs are known to exhibit biases, for example against certain demographic groups, and with this ongoing project we wanted to create a fair, unbiased, and open-source LLM suited to the social sciences.

In another project, we examined gender bias in academia. For this, we used Google’s Gemini to classify the gender of authors in academic syllabi. By experimenting with prompts, we measured how well the LLM could assess gender trends in syllabus authorship. Using tools like Google Colab, we collaboratively coded and refined our approach, leveraging Gemini’s capabilities to highlight gender disparities effectively. In some cases, we found the model to correctly classify 100% of the authors’ genders. This project underscored both the potential and the limitations of LLMs in accurately capturing nuanced social issues.
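The prompt-based classification workflow described above can be sketched in a few lines: build a prompt per author, send it to a model, and normalise the completion into a fixed label set. The prompt wording here is invented, and `stub_model` is a placeholder standing in for the real API call (in the project this was Google's Gemini).

```python
# Sketch of prompt-based gender classification of syllabus authors.
# The prompt text is illustrative, and stub_model is a stand-in for
# a real LLM API call (Gemini in the workshop project).

def build_prompt(author_name):
    """Compose a one-word classification prompt for a single author."""
    return (
        "Classify the likely gender of the following author as "
        "'male', 'female', or 'unknown'. Answer with one word only.\n"
        f"Author: {author_name}"
    )

def parse_label(completion):
    """Normalise a model completion into one of three labels."""
    label = completion.strip().lower()
    return label if label in {"male", "female", "unknown"} else "unknown"

def stub_model(prompt):
    # Placeholder for the API call; always answers 'unknown'.
    return "unknown"

labels = [parse_label(stub_model(build_prompt(name)))
          for name in ["A. Smith", "B. Jones"]]
print(labels)
```

Constraining the model to a closed label set and normalising its free-text output is what makes accuracy measurable at all; anything outside the set is mapped to "unknown" rather than trusted, which matters when the downstream claim is about aggregate gender trends.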

Appreciating the Potential: Be Cautious

Overall, the Oxford workshop demonstrated how LLMs can be powerful tools in social science research, including violence-related research such as ours at VISION, provided they are tailored to specific domain needs and applied with caution. Hearing directly from researchers and industry professionals offered invaluable guidance on both leveraging and responsibly implementing LLMs. It’s also important to consider the data you are using and the outputs you are expecting. In my current area (which focuses on technology-facilitated abuse), an increasing number of researchers are using sensitive data, and the outcomes of such research can affect the lives of real individuals. Thus, for anyone in the social sciences looking to integrate cutting-edge NLP methods, understanding the complexities behind these models and their applications is essential. I encourage readers to look at the work currently being done by the workshop participants, and to keep an eye out for later outputs of the workshop!

For further information, please contact Maddy at m.janickyj@ucl.ac.uk