Abstract
Background
Melanoma has one of the fastest rising incidence rates of any cancer. It accounts for a small percentage of skin cancer cases but is responsible for the majority of skin cancer deaths. History-taking and visual inspection of a suspicious lesion by a clinician are usually the first in a series of ‘tests’ to diagnose skin cancer. Establishing the accuracy of visual inspection alone is critical to understanding the potential contribution of additional tests to assist in the diagnosis of melanoma.
Objectives
To determine the diagnostic accuracy of visual inspection for the detection of cutaneous invasive melanoma and intraepidermal melanocytic variants in adults with limited prior testing and in those referred for further evaluation of a suspicious lesion. Studies were separated according to whether the diagnosis was recorded face-to-face (in-person) or based on remote (image-based) assessment.
Search methods
We undertook a comprehensive search of the following databases from inception up to August 2016: Cochrane Central Register of Controlled Trials; CINAHL; CPCI; Zetoc; Science Citation Index; US National Institutes of Health Ongoing Trials Register; NIHR Clinical Research Network Portfolio Database; and the World Health Organization International Clinical Trials Registry Platform. We studied reference lists and published systematic review articles.
Selection criteria
Test accuracy studies of any design that evaluated visual inspection in adults with lesions suspicious for melanoma, compared with a reference standard of either histological confirmation or clinical follow-up. Studies reporting data for ‘clinical diagnosis’ where dermoscopy may or may not have been used were excluded.
Data collection and analysis
Two review authors independently extracted all data using a standardised data extraction and quality assessment form (based on QUADAS-2). We contacted authors of included studies where information related to the target condition or diagnostic threshold was missing. We estimated summary sensitivities and specificities per algorithm and threshold using the bivariate hierarchical model. We investigated the impact of: in-person test interpretation; use of a purposely developed algorithm to assist diagnosis; and observer expertise.
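To make the summary measures concrete, the sketch below shows how each study's 2×2 table yields the sensitivity and specificity (and their logit transforms) that a bivariate hierarchical model pools, allowing for between-study variation and the correlation between the two measures. The counts are hypothetical and purely illustrative; the actual pooling in the review used the bivariate model fitted in dedicated meta-analysis software.

```python
import math

# Hypothetical 2x2 counts per study (illustrative only, not data from the review):
# (true positives, false negatives, false positives, true negatives)
studies = [
    (45, 5, 30, 220),
    (80, 20, 55, 640),
    (12, 4, 10, 150),
]

for i, (tp, fn, fp, tn) in enumerate(studies, start=1):
    sens = tp / (tp + fn)   # proportion of melanomas called positive
    spec = tn / (tn + fp)   # proportion of benign lesions called negative
    # The bivariate hierarchical model operates on the logit-transformed pair
    # from each study, estimating summary points and between-study variances.
    logit_sens = math.log(sens / (1 - sens))
    logit_spec = math.log(spec / (1 - spec))
    print(f"Study {i}: sens={sens:.3f} (logit {logit_sens:.2f}), "
          f"spec={spec:.3f} (logit {logit_spec:.2f})")
```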
Main results
Forty-nine publications reporting on a total of 51 study cohorts with 34,351 lesions (including 2499 cases) were included, providing 134 datasets for visual inspection. Across almost all study quality domains, the majority of study reports provided insufficient information to allow the risk of bias to be judged, while concerns regarding the applicability of study findings were scored as ‘High’ in three of the four domains assessed. Selective participant recruitment, lack of detail regarding the threshold for deciding on a positive test result, and lack of detail on observer expertise were particularly problematic. Attempts to analyse studies by degree of prior testing were hampered by a lack of relevant information and by the restricted inclusion of lesions selected for biopsy or excision. Accuracy was generally much higher for in-person diagnosis than for image-based evaluations (relative diagnostic odds ratio 8.54, 95% CI 2.89 to 25.3; P < 0.001). Meta-analysis of in-person evaluations that could be clearly placed on the clinical pathway showed a general trade-off between sensitivity and specificity, with the highest sensitivity (92.4%, 95% CI 26.2% to 99.8%) and lowest specificity (79.7%, 95% CI 73.7% to 84.7%) observed in participants with limited prior testing (n = 3 datasets). Summary sensitivities were lower for those referred for specialist assessment, but with much higher specificities (e.g. sensitivity 76.7% (95% CI 61.7% to 87.1%) and specificity 95.7% (95% CI 89.7% to 98.3%) for lesions selected for excision, n = 8 datasets). These differences may be related to differences in the spectrum of included lesions, differences in the definition of a positive test result, or variations in observer expertise. We found no clear evidence that the use of an algorithm to assist diagnosis improves accuracy across settings. Attempts to examine the effect of observer expertise in melanoma diagnosis were hindered by poor reporting.
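As a rough illustration of how such summary estimates translate into a single accuracy measure, the sketch below applies the standard diagnostic odds ratio formula, DOR = (sensitivity × specificity) / ((1 − sensitivity) × (1 − specificity)), to the point estimates quoted above. This is only a back-of-the-envelope calculation: it ignores the confidence intervals and is not the model-based relative diagnostic odds ratio of 8.54 reported for in-person versus image-based evaluation.

```python
def diagnostic_odds_ratio(sensitivity: float, specificity: float) -> float:
    """Odds of a positive result in melanoma cases divided by the odds in non-cases."""
    return (sensitivity * specificity) / ((1 - sensitivity) * (1 - specificity))

# Point estimates quoted in the results above (confidence intervals ignored).
dor_limited_prior_testing = diagnostic_odds_ratio(0.924, 0.797)  # in-person, limited prior testing
dor_selected_for_excision = diagnostic_odds_ratio(0.767, 0.957)  # in-person, lesions selected for excision

print(f"DOR, limited prior testing:         {dor_limited_prior_testing:.1f}")
print(f"DOR, lesions selected for excision: {dor_selected_for_excision:.1f}")
```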
Authors' conclusions
Visual inspection is a fundamental component of the assessment of a suspicious skin lesion; however, the evidence suggests that melanomas will be missed if visual inspection is used on its own. The evidence to support its accuracy in the range of settings in which it is used is flawed and very poorly reported. Although published algorithms do not appear to improve accuracy, there is insufficient evidence to suggest that the ‘no algorithm’ approach should be preferred in all settings. Despite the volume of research evaluating visual inspection, further prospective evaluation of the potential added value of using established algorithms according to the prior testing or diagnostic difficulty of lesions may be warranted.
Original language | English |
---|---|
Article number | CD013194 |
Journal | Cochrane Database of Systematic Reviews |
Issue number | 12 |
DOIs | |
Publication status | Published - 15 Jun 2018 |