Prospective clinical evaluation of artificial intelligence in radiology
| ISRCTN | ISRCTN14908789 |
|---|---|
| DOI | https://doi.org/10.1186/ISRCTN14908789 |
| Integrated Research Application System (IRAS) | 357391 |
| Sponsor | Oxford University Hospitals NHS Foundation Trust |
| Funder | Specific to each sub-study |
- Submission date: 07/11/2025
- Registration date: 19/01/2026
- Last edited: 19/01/2026
- Recruitment status: Recruiting
- Overall study status: Ongoing
- Condition category: Other
Plain English summary of protocol
Background and study objectives
Artificial intelligence (AI) is increasingly used in healthcare, especially for interpreting medical images such as X-rays, CT scans, and MRIs. These images help doctors, nurses, and other healthcare professionals diagnose illnesses, but mistakes can happen, which may lead to delayed or incorrect treatment. AI tools have been developed to help clinicians identify problems more accurately and efficiently. While these tools have performed well in research studies and have been approved for use in healthcare by regulators, we do not yet know how well they work in real NHS hospitals and clinics.
This study will test AI tools in NHS hospitals to see how they help clinicians in their daily work. The AI will support clinicians by analysing medical images alongside them. We will collect information that is generated as part of normal clinical care to measure how often the AI helps improve diagnosis and patient care. The results will provide strong evidence on how AI can be best used in real-life healthcare settings.
Who can participate?
Patients: Anyone undergoing medical imaging (such as X-rays) as part of routine clinical care at participating NHS hospitals can be included in the study. For the fracture detection substudy, this includes patients attending emergency departments or minor injuries units for suspected fractures.
NHS staff: Clinicians who use the AI tool as part of their standard clinical practice will also participate.
What does the study involve?
The study uses an adaptive trial design that will evaluate multiple CE-marked AI tools across different imaging types. Each AI tool will be integrated into routine clinical care at participating NHS hospitals. During "AI on" periods, clinicians will be able to view AI outputs alongside their usual interpretation of medical images. During "AI off" periods, clinicians will work as normal without access to the AI tool.
For patients, there will be no change to usual care pathways. Medical images will be collected and analysed as part of normal clinical practice. Some patients may be invited to complete optional questionnaires about their experience one month after their visit.
What are the possible benefits and risks of participating?
The AI tools may improve diagnostic accuracy, reducing missed or incorrect diagnoses and improving patient care through better detection of abnormalities on medical images. They may also reduce unnecessary NHS contacts, hospital visits, and treatments, and the study will contribute evidence to help the NHS make informed decisions about adopting AI technologies.
The main risk is that the AI tool could produce false positive results (suggesting an abnormality when there isn't one) or false negative results (missing an abnormality). However, clinicians are trained to use AI outputs only as an adjunct to their clinical judgement, not as a replacement for it. All AI tools used in the study are CE-marked and approved for clinical use. Patient care will not deviate from standard clinical pathways, and clinicians retain full responsibility for all diagnostic decisions. Safety monitoring will be conducted throughout the study, and the AI tool will be suspended if significant safety concerns are identified.
Where is the study run from?
The study is run from Oxford University Hospitals NHS Foundation Trust, specifically from the Emergency Department.
When is the study starting and how long is it expected to run for?
The overall SAMURAI-Pro study protocol was approved in October 2025. Enrolment began on 28/10/2025 and the study is expected to run until 31/12/2028, with individual sub-studies having varying timelines.
Who is funding the study?
The funder varies for each substudy and will be specified individually for each component of the research. The overall study framework is sponsored by Oxford University Hospitals NHS Foundation Trust.
Who is the main contact?
Prof. Alex Novak, alex.novak@ouh.nhs.uk
Contact information
Prof. Alex Novak (Public, Scientific, Principal investigator)
OxCAIR, Kadoorie Centre
John Radcliffe Hospital
Headington
Oxford
OX3 9DU
United Kingdom
| ORCID | 0000-0002-5880-8235 |
|---|---|
| Phone | +44 (0)1865 231550 |
| Email | alex.novak@ouh.nhs.uk |
Study information
| Primary study design | Interventional |
|---|---|
| Study design | Adaptive multi-centre prospective interventional comparative studies |
| Secondary study design | |
| Scientific title | Systematic Assessment for the Medical Utility of Radiology Artificial Intelligence – Prospective Evaluation |
| Study acronym | SAMURAI-Pro |
| Study objectives | Aims: 1. Measure the diagnostic accuracy of AI-assisted image interpretation algorithms in a real-world clinical setting: 1.1. Evaluate the standalone diagnostic accuracy of AI algorithms in detecting and interpreting abnormalities by comparing their outputs to expert-verified ground truth data. 1.2. Identify variability in diagnostic accuracy across different clinical settings, in addition to patient and pathology subgroups. 2. Evaluate the clinical impact of implementing AI-assisted image interpretation algorithms: 2.1. Investigate the impact of AI algorithms on diagnostic decision-making and whether they reduce diagnostic errors in real-world practice. 2.2. Assess the influence of AI tools on optimising existing patient management pathways, such as improving the efficiency of referrals or reducing unnecessary follow-ups. 3. Evaluate the technical integration of AI tools into existing NHS computer systems and workflows: 3.1. Assess how AI tools integrate into existing NHS clinical workflows and computer systems, including their impact on clinician efficiency, workload, and decision-making processes. 3.2. Identify operational and technical barriers to successful implementation and recommend strategies to address these challenges. 4. Examine perceptions and acceptability: 4.1. Explore the perspectives of clinicians and patients regarding AI tools to understand their acceptability, perceived benefits, and concerns. 4.2. Identify barriers and facilitators to adoption from a behavioural and cultural standpoint, ensuring tools are aligned with clinical needs and practices. |
| Ethics approval(s) |
1. Approved 23/10/2025, Health Research Authority (2 Redman Place, Stratford, London, E20 1JQ, United Kingdom; +44 (0)20 7104 8000; approvals@hra.nhs.uk), ref: 357391 2. Approved 23/10/2025, South Central – Oxford C Research Ethics Committee (Oxford University Hospitals NHS Foundation Trust, Unipart House Business Centre, Garsington Road, Oxford, OX4 2PG, United Kingdom; -; oxfordc.rec@hra.nhs.uk), ref: 25/SC/0252 |
| Health condition(s) or problem(s) studied | Patients undergoing medical imaging as part of routine clinical care |
| Intervention | In each sub-study a CE-marked AI tool will be deployed. Clinicians will be able to view AI outputs in addition to standard results to aid in diagnosis when reviewing medical diagnostic tests. The SAMURAI trials describe an adaptive study design to provide reliable evidence on the real-world impact of CE-marked AI tools for medical imaging and diagnostics in clinical practice. These will be multi-centre prospective interventional comparative studies in which AI tools are integrated into routine clinical care. Each AI tool will be deployed only within its intended use case, i.e. to aid clinician interpretation of medical images, so there will be no deviation from standard clinical pathways. The study will be conducted at NHS sites where medical imaging forms part of the normal clinical pathway. Participants will be clinicians who use the AI tool, in addition to patients undergoing the medical imaging for which the AI tool is designed. Relevant data will be collected from EPR, CRIS, and the AI vendors during 'AI-on' and 'AI-off' phases of the study period. The efficacy and impact of implementation of the AI tool will be evaluated in line with the objectives and outcome measures. Relevant AI outputs will be obtained directly from vendors. Patients: 1. Patients will not be asked to directly consent to the study; however, they will be offered an option to opt out and have their data removed from the analysis. 2. Patients who have completed the national data opt-out will also be excluded from data analysis. NHS staff: 1. NHS staff will not be required to consent as they will be using these tools as part of their standard clinical practice. 2. For individual studies that invite clinician feedback questionnaires, NHS staff returning those questionnaires will be assumed to have consented to their inclusion in the study. The use of randomisation will be defined by, and dependent on, each sub-study. There will be no blinding or code-breaking procedure in this study due to the nature of the intervention. Each site will install a CE-marked, commercially available AI tool which has been approved for use in identifying abnormalities on a specific imaging or diagnostic test modality ("intended use"). The AI algorithm will process all eligible medical images or tests, providing AI outputs regarding any suspected abnormality, in addition to secondary captures which highlight suspected pathology on the medical image or test result. These will be displayed on PACS so that clinicians can view the findings and decide whether to accept or override them based on their clinical judgement. No additional changes to clinical practice will be required during the study, and no changes to usual patient care pathways will occur. The comparator will be usual clinical care with the AI tool "off": clinicians will not have access to the AI tool during the AI "off" period. Name of device: CE-marked AI tool. Manufacturer: dependent on the sub-study. Intended purpose: AI tools are decision support tools intended to aid clinicians in their interpretation of medical images or test results. Mode of operation: AI tools are integrated into existing clinical workflows. Upon acquisition of an eligible medical image, the DICOM file is sent for processing by the AI algorithm.
AI outputs are then displayed directly in CRIS and PACS, which means that any clinician viewing the medical image can review AI outputs which highlight suspected abnormalities on the image or test result. Training needed before use: participating clinicians will receive training on use of the specific AI tool as per standard clinical practice. This will be delivered through multiple modalities, such as videos, online training modules, and written information disseminated to staff as per the standard implementation procedure for the specific AI tool. Device safety: standard safety procedures will be used for implementation of the AI tool as per the specific technology's recommended use. Risks include errors in clinician judgement as a result of false negatives or false positives produced by the AI tool. Clinicians will be informed of these risks during training and reminded to use AI outputs as an adjunct to clinical judgement, as per standard implementation practice. Device accountability: post-implementation surveillance will be carried out by the research and deployment teams and the AI tool companies to ensure functionality and check usage as per standard practice. Outcomes will include assessment of the accuracy and technical factors of the AI tool. Throughout the study period, participants will be able to opt out of their data being used for the study by scanning a QR code displayed in clinical areas and waiting rooms, or by completing the national data opt-out. |
| Intervention type | Device |
| Phase | Not Applicable |
| Drug / device / biological / vaccine name(s) | In each sub-study a CE-marked AI tool will be deployed |
| Primary outcome measure(s) |
The standalone diagnostic accuracy of the AI algorithm: the sensitivity, specificity, accuracy, and AUROC of the AI tool compared with the clinical reference standard of the radiologist's report (an illustrative calculation of these metrics is sketched after this table). Timepoint: dependent on the specific sub-study. |
| Key secondary outcome measure(s) |
1. The technical performance of the AI tool within existing clinical workflows: the number of medical images or tests falling under the intended use of the AI algorithm that are rejected or not processed by it, and the reasons for this. Timepoint: dependent on the specific sub-study. |
| Completion date | 31/12/2028 |
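The primary outcome above is expressed in standard diagnostic test statistics. As a minimal sketch of how these quantities relate to AI outputs and radiologist reference labels (not part of the registered protocol; the per-image scores, labels, and decision threshold below are illustrative assumptions):

```python
# Illustrative only: assumes each AI output is reduced to a per-image
# probability score and the radiologist report provides a binary
# reference label (1 = abnormality present). Not the study's pipeline.

def diagnostic_metrics(preds, labels):
    """Sensitivity, specificity, and accuracy for binary predictions."""
    tp = sum(p == 1 and y == 1 for p, y in zip(preds, labels))
    fp = sum(p == 1 and y == 0 for p, y in zip(preds, labels))
    tn = sum(p == 0 and y == 0 for p, y in zip(preds, labels))
    fn = sum(p == 0 and y == 1 for p, y in zip(preds, labels))
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "accuracy": (tp + tn) / len(labels),
    }

def auroc(scores, labels):
    """AUROC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case scores above a negative one."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: five images with hypothetical AI scores and reference labels.
labels = [1, 0, 1, 0, 0]
scores = [0.9, 0.2, 0.6, 0.4, 0.1]
preds = [1 if s >= 0.5 else 0 for s in scores]  # assumed decision threshold
print(diagnostic_metrics(preds, labels))         # all 1.0 on this toy data
print(auroc(scores, labels))                     # 1.0: perfect ranking
```

In the study itself, these quantities would be estimated against the radiologist's report as the reference standard, with timepoints defined per sub-study.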
Eligibility
| Participant type(s) | Health professional, Patient |
|---|---|
| Age group | All |
| Sex | All |
| Target sample size at registration | 45000 |
| Key inclusion criteria | 1. NHS staff who are interacting with the AI tool as part of their standard clinical practice 2. Patients whose medical images, collected as part of standard care, are eligible for analysis using AI |
| Key exclusion criteria | Patients: 1. Those who choose to opt out or have completed the national data opt-out. NHS staff: 1. Those not directly involved in the patient care pathway. 2. Those using AI for non-clinical purposes, such as research |
| Date of first enrolment | 28/10/2025 |
| Date of final enrolment | 31/12/2028 |
Locations
Countries of recruitment
- United Kingdom
- England
Study participating centre
John Radcliffe Hospital
Headley Way
Headington
Oxford
OX3 9DU
England
Results and Publications
| Individual participant data (IPD) Intention to share | Yes |
|---|---|
| IPD sharing plan summary | Available on request, Stored in non-publicly available repository |
| IPD sharing plan | In accordance with UK regulations, including the GDPR, the Data Protection Act 2018, and the NHS Records Management Code of Practice 2021, all relevant patient data obtained from EPR will be handled with utmost care and confidentiality. Approval will be sought from the OUH Caldicott Guardian for use of any anonymised data for the training of healthcare professionals. To ensure patient and participant privacy, all data obtained will be anonymised using advanced pseudonymisation techniques, such as data masking, which replaces identifiable information with artificial identifiers (a toy illustration of this approach follows this table). This process will be carried out using secure, NHS-approved anonymisation tools to maintain data integrity while protecting patient identities. Similarly, any anonymised dataset will be stored in a password-protected, encrypted database that adheres to the Oxford University Hospitals NHS Foundation Trust information governance protocol. Access to this database will be strictly limited to authorised research personnel, and all data processing activities will be logged and audited regularly to ensure compliance with data protection regulations. It is important to note that when obtaining patient data, all patients who have opted out of data sharing through the National Data Opt-Out programme will have their preferences respected, and their data will be excluded from the study. The datasets generated during and/or analysed during the current study will be available upon request with the following specifications: Data custodian contact: Prof. Alex Novak, Chief Investigator, alex.novak@ouh.nhs.uk Type of data to be shared: Anonymised imaging data (medical images processed by AI algorithms during the study) and associated anonymised demographic and clinical metadata collected as part of routine clinical practice Timing of availability: Data will become available upon publication of the primary results Duration of availability: Data will be made available for a minimum of 5 years post-publication Access criteria and sharing mechanism: Access will be restricted to bona fide researchers undertaking research aligned with the SAMURAI-Pro study objectives. Requests will be reviewed by the OxCAIR Steering Group. Data will be shared via secure transfer mechanisms compliant with UK GDPR and NHS information governance requirements Participants' consent for data sharing: Participants are not required to provide explicit consent to the study. Patients will be offered an opt-out mechanism via the national data opt-out or an explicit opt-out form displayed in clinical areas. NHS staff using the AI tool as part of standard clinical practice do not require consent. Patients and staff invited to participate in questionnaires will provide informed consent at the time of invitation Data anonymisation: All data will be fully anonymised using advanced pseudonymisation techniques such as data masking, which replaces identifiable information with artificial identifiers. Anonymisation will be carried out using secure, NHS-approved anonymisation tools. No identifiable information will be retained, and re-identification will not be possible Ethical and legal restrictions: Data sharing will comply with UK GDPR, the Data Protection Act 2018, NHS Records Management Code of Practice 2021, and NHS information governance protocols. A data protection impact assessment (DPIA) has been conducted. Ethics approval obtained from South Central – Oxford C Research Ethics Committee (Reference 25/SC/0252, approved 23 October 2025).
No additional ethical restrictions are anticipated beyond standard regulatory requirements. Patients who have completed the national data opt-out will be excluded. |
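The data-masking idea described in the sharing plan can be illustrated with a toy sketch. Everything here (the field names, the keyed-hash scheme, the salt) is an assumption for illustration; the study itself will use secure, NHS-approved anonymisation tools, not this snippet:

```python
# Toy illustration of data masking: identifiable fields are replaced with
# stable artificial identifiers. The salt, field names, and HMAC scheme
# are hypothetical stand-ins for NHS-approved tooling.
import hashlib
import hmac

SECRET_SALT = b"replace-with-a-securely-managed-key"  # hypothetical key

def pseudonymise(record, identifying_fields=("nhs_number", "name")):
    """Return a copy of `record` with identifying fields masked."""
    masked = dict(record)
    for field in identifying_fields:
        if field in masked:
            digest = hmac.new(SECRET_SALT, str(masked[field]).encode(),
                              hashlib.sha256).hexdigest()[:12]
            masked[field] = f"PSEUDO-{digest}"  # artificial identifier
    return masked

# Non-identifying clinical metadata (e.g. an age band) passes through intact.
print(pseudonymise({"nhs_number": "943 476 5919", "name": "Jane Doe",
                    "age_band": "60-69"}))
```

A keyed hash is used here so the same input always maps to the same artificial identifier, allowing records to be linked without retaining the original identifier; the key would be held separately under the Trust's information governance controls.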
Editorial Notes
07/11/2025: Study's existence confirmed by the HRA.