In summer 2025, Clara Seyfried and Dean Smith, two of our Associate Officers, along with our Deputy Director for Training, Dr Katy Keenan, carried out research investigating Scottish social science PhD students’ practices, opinions and needs when it comes to generative AI. We are delighted to present a summary of that research in the report below. The report outlines key findings and recommendations for institutions, which supervisors and institutions alike may find helpful when developing guidance and training for their PhD cohorts.
Key findings:
- GenAI use is now common among social science PhD students, with over three-quarters (77%) reporting some use and over one third (36%) using genAI tools at least weekly. Use ranges from light-touch proofreading to custom-built tools embedded in research workflows; the most common uses are supporting writing, literature review and scoping, and acting as a research assistant for brainstorming, checking concepts and troubleshooting coding problems.
- Use is highly individualised and largely self-taught, shaped through experimentation rather than formal training or guidance.
- Attitudes towards genAI span a continuum from enthusiastic adoption (rare) to outright rejection (around one quarter). From the survey data, there are no clear disciplinary or cohort patterns in adoption or attitudes.
- Students drew clear distinctions between ethically legitimate support roles for genAI (task alleviation, help with expression, idea development) and illegitimate uses (original knowledge generation, meaning-making from data, and uploading data), the latter seen as undermining the core purpose of doctoral research.
- Ethical responsibility is experienced as highly personalised and often anxiety-producing, with students feeling personally accountable in the absence of clear institutional rules. This uncertainty can be compounded by fears of judgement, misconduct, and inconsistent expectations.
- GenAI is rarely discussed within supervisory teams, with only one in five students reporting any open conversation. Silence, avoidance, and uneven supervisory knowledge contribute to uncertainty and missed opportunities for shared learning.
Recommendations:
- There is a need for clearer, more coordinated guidance and training for students that is consistent within and across institutions, co-designed with PhD researchers, and aligned with funders and learned societies.
- Training should combine critical AI literacy with practical, discipline-relevant skills, avoiding hype while preparing students for future academic and non-academic careers.
- A multi-generational approach, engaging both supervisors and students, was widely supported.
- Including PhD students in shaping ethical norms, policy responses and the co-design of training provision is important.