Archives

VISION responds to Parliamentary, government & non-government consultations

    Consultation, evidence and inquiry submissions are an important part of our work at VISION. Responding to Parliamentary, government and non-government organisation consultations ensures that a wide range of opinions and voices are factored into the policy decision-making process. As our interdisciplinary research addresses violence and how it cuts across health, crime and justice and the life course, we think it is important to take the time to answer any relevant call and to share our insights and findings to support improved policy and practice. We respond as VISION, the Violence & Society Centre, and sometimes in collaboration with others. Below are the links to our published responses and evidence from June 2022.

    1. UK Parliament – Women and Equalities Committee – Inquiry: Community Cohesion. Our submission was published in February 2025.
    2. UK Parliament – Call for evidence on the Terminally Ill Adults (End of Life) Bill. Our submission was published in February 2025.
    3. UK Parliament – Public Accounts Committee – Inquiry: Use of Artificial Intelligence in Government. Our submission was published in January 2025.
    4. UK Parliament – Public Accounts Committee – Inquiry: Tackling Homelessness. Our submission with Dr Natasha Chilman was published in January 2025. See the full report
    5. Home Office – Legislation consultation: Statutory Guidance for the Conduct of Domestic Homicide Reviews. Our submission was published on the VISION website in July 2024.
    6. UK Parliament – Women and Equalities Committee – Inquiry: The rights of older people. Our submission was published in November 2023.
    7. UK Parliament – Women and Equalities Committee – Inquiry: The impact of the rising cost of living on women. Our submission was published in November 2023.
    8. UK Parliament – Women and Equalities Committee – Inquiry: The escalation of violence against women and girls. Our submission was published in September 2023.
    9. Home Office – Legislation consultation: Machetes and other bladed articles: proposed legislation (submitted response 06/06/2023). The Government's response to the consultation and a summary of public responses were published in August 2023.
    10. Welsh Government – Consultation: National action plan to prevent the abuse of older people. A summary of the responses was published in April 2023.
    11. Race Disparity Unit (RDU) – Consultation: Standards for Ethnicity Data (submitted response 30/08/2022). Following the consultation, a revised version of the data standards was published in April 2023.
    12. UK Parliament – The Home Affairs Committee – Call for evidence: Human Trafficking. Our submission was published in March 2023.
    13. UN expert – Call for evidence: Violence, abuse and neglect in older people. Our submission was published in February 2023.
    14. UK Parliament – The Justice and Home Affairs Committee – Inquiry: Family migration. Our submission was published in September 2022 and a report was published following the inquiry in February 2023.
    15. Home Office – Consultation: Controlling or Coercive Behaviour Statutory Guidance. Our submission was published in June 2022.

    For further information, please contact us at VISION_Management_Team@city.ac.uk

    Photo by JaRiRiyawat from Adobe Stock downloads (licensed)

    Discovering the Potential of Large Language Models in Social Science Research: Takeaways from an Oxford Workshop

      By Dr Maddy Janickyj, Research Fellow in Natural Language Processing (NLP) for the Violence, Health, and Society (VISION) Consortium, University College London

      As a data-focused VISION researcher with a PhD specialising in Natural Language Processing (NLP; see our previous blog for more about this), I initially avoided ChatGPT and similar tools. ChatGPT, a chatbot built on a Large Language Model (LLM) and developed by OpenAI, offers capabilities such as summarising information, translating text, and even writing code.

      While ChatGPT is probably the best-known example of an LLM in action, similar models are integrated into many everyday tools. For instance, LLMs are the underlying technology in many customer service chatbots, virtual assistants like Alexa, and writing tools such as Grammarly. These LLMs are trained on large collections of data with the aim of getting them to understand (and in some cases generate) language. The models draw on this training to complete various tasks and are fine-tuned to work for specific domains. Their breadth of abilities, and the many open-source models that have been developed, make them a powerful methodological tool for researchers in both computer science and the social sciences. For clarity, an open-source LLM is one whose code and architecture are publicly available.
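      As a rough illustration of what "using an open-source LLM" can look like in practice, here is a minimal sketch based on the Hugging Face transformers library. The library, the small gpt2 model, and the prompt are illustrative assumptions on my part, not tools or materials specific to VISION or the workshop.

```python
# Minimal sketch: querying a small open-source LLM with the Hugging Face
# `transformers` library. The model and prompt are illustrative only.
from transformers import pipeline

# Download an openly licensed model and wrap it in a text-generation pipeline.
generator = pipeline("text-generation", model="gpt2")

prompt = "Large language models can help social scientists by"
outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1)

print(outputs[0]["generated_text"])
```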

      To further understand how LLMs are being used by researchers, and to consider how these tools could integrate with and support violence-related research, I – a mathematician turned computational social scientist – attended the Oxford LLMs workshop. The event, held at Oxford’s Nuffield College, aimed to bring early-career scholars up to speed with the technical foundations, real-world applications, and research potential of LLMs. Throughout the week, I met PhD and Master’s students and other postdoctoral researchers interested in using LLMs to investigate everything from economic and linguistic to political questions.

      Understanding LLMs: Lectures and Industry Insights

      The first few days provided foundational lectures and talks showcasing the technical underpinnings and applications of LLMs. One of the big draws was the calibre of speakers. We heard from industry experts working at well-known companies such as Meta, Ori, Qdrant, Wayfair, Intento, Arize AI, and Google.

      We then started our deep dive into LLMs, including how they are trained and evaluated. We heard about the numerous ways you can fine-tune LLMs, a step which occurs after general pre-training and tailors a model to meet domain/task-specific needs. Fine-tuning methods such as Continued Pre-training, Supervised Fine-tuning, and Preference Tuning were highlighted. Each technique offers different ways of adapting LLMs to specialised domains without needing to re-train them from scratch, saving computational resources.
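      To give a flavour of what supervised fine-tuning involves in code, here is a minimal, heavily simplified sketch using the Hugging Face transformers and datasets libraries. The model, the toy "domain-specific" texts, and the hyperparameters are placeholder assumptions of mine, not the workshop's materials.

```python
# Minimal supervised fine-tuning sketch with Hugging Face `transformers`
# and `datasets`. Model, data, and hyperparameters are placeholders.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "distilgpt2"  # small open model, illustrative only
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 models have no pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Toy corpus standing in for real domain-specific fine-tuning data.
texts = ["Domain-specific sentence one.", "Domain-specific sentence two."]
dataset = Dataset.from_dict({"text": texts}).map(
    lambda x: tokenizer(x["text"], truncation=True, max_length=128),
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="sft-demo",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=dataset,
    # mlm=False gives standard causal (next-token) language modelling labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```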

      We also covered common challenges associated with fine-tuning models. One of these is “catastrophic forgetting,” where a model’s performance declines in one area when it’s fine-tuned on another. For example, if a model is adjusted to improve name recognition, it may inadvertently lose accuracy in identifying locations. This side effect is something I encountered when fine-tuning other NLP models during my PhD, and it illustrates the balance required when refining LLMs.
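      One simple way to watch for catastrophic forgetting is to benchmark the original task before and after fine-tuning on the new one. The sketch below is purely conceptual: fine_tune, eval_f1, and the task datasets are hypothetical placeholders for whatever training and evaluation routines you actually use.

```python
# Conceptual check for catastrophic forgetting: evaluate the original task
# before and after fine-tuning on a new one. All helpers are hypothetical.
def check_forgetting(model, fine_tune, eval_f1, old_task, new_task):
    """fine_tune and eval_f1 stand in for your own training and evaluation
    routines; old_task and new_task are labelled evaluation/training sets."""
    before = eval_f1(model, old_task)    # e.g. F1 on location tagging
    model = fine_tune(model, new_task)   # e.g. tune for name recognition
    after = eval_f1(model, old_task)     # re-check the original task
    if after < before:
        print(f"Possible forgetting: F1 dropped {before:.2f} -> {after:.2f}")
    return model
```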

      Applying LLMs: Collaborative Research Projects

      In the latter half of the week, workshop attendees collaborated on research projects, exploring LLM applications across social science realms. This was a hands-on opportunity to test LLM methodologies discussed earlier and apply them to real-world social science challenges.

      Leading up to the workshop, we had the chance to review the proposed project briefs, gather literature showcasing how LLMs are used in our respective disciplines, and finally rank the four projects according to our own skillsets and research interests. One of the projects we decided to tackle as a group focused on developing an LLM purely for social science research. LLMs are known to exhibit biases, for example against certain demographic groups, and with this ongoing project we wanted to create a fair, unbiased, and open-source LLM suited to the social sciences.

      In another project, we examined gender bias in academia. For this, we used Google’s Gemini to classify the gender of authors in academic syllabi. By experimenting with prompts, we measured how well the LLM could assess gender trends in syllabus authorship. Using tools like Google Colab, we collaboratively coded and refined our approach, leveraging Gemini’s capabilities to highlight gender disparities. In some cases, the model correctly classified 100% of the authors’ genders. This project underscored both the potential and the limitations of LLMs in accurately capturing nuanced social issues.
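      For readers curious what this kind of prompt-based classification can look like, here is a minimal sketch using Google’s google-generativeai Python client. The API key, model name, prompt wording, and example author are illustrative assumptions and not the project’s actual code or data.

```python
# Illustrative prompt-based classification with the Gemini API via the
# `google-generativeai` client. Key, model name, and prompt are placeholders.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")            # replace with a real key
model = genai.GenerativeModel("gemini-1.5-flash")  # model name may vary

author = "Jane Doe"  # hypothetical syllabus author, not real project data
prompt = (
    "Based only on the first name, classify the likely gender of the author "
    f"'{author}' as 'female', 'male', or 'unknown'. Reply with one word."
)
response = model.generate_content(prompt)
print(response.text.strip())
```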

      Appreciating the Potential: Be Cautious

      Overall, the Oxford workshop demonstrated how LLMs can be powerful tools in social science research, including violence-related research such as the work we do at VISION, provided they are tailored to specific domain needs and applied with caution. Hearing directly from researchers and industry professionals offered invaluable guidance on both leveraging and responsibly implementing LLMs. It’s also important to consider the data you are using and the outputs you are expecting. In my current area (which focuses on technology-facilitated abuse), an increasing number of researchers are using sensitive data, and the outcomes of such research can impact the lives of real individuals. Thus, for anyone in the social sciences looking to integrate cutting-edge NLP methods, understanding the complexities behind these models and their applications is essential. I encourage readers to look at the work currently being done by the workshop participants, and to keep an eye out for later outputs of the workshop!

      For further information, please contact Maddy at m.janickyj@ucl.ac.uk