The choice of databases depends on your research topic, but common databases include MEDLINE/PubMed for biomedical and health-related studies, CINAHL for nursing and allied health, and PsycINFO for psychology and behavioral sciences.
See the Search Sources for Systematic and Scoping Reviews GalterGuide for additional databases, including sources of grey literature that can help minimize publication bias in your review.
Developing a comprehensive search strategy involves identifying key concepts, selecting appropriate keywords and subject headings, using Boolean operators, testing and refining the strategy, and documenting the search process.
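For example, a search for a hypothetical question about exercise interventions for depression in older adults might combine keywords and MeSH subject headings for each concept. The following is an illustrative PubMed-style search only, not a validated strategy:

("Depression"[Mesh] OR depress*[tiab]) AND ("Exercise"[Mesh] OR exercis*[tiab] OR "physical activity"[tiab]) AND ("Aged"[Mesh] OR elderly[tiab] OR "older adults"[tiab])

Each parenthetical group represents one concept, with OR combining synonyms within a concept and AND linking the concepts together.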
The following classes and guide can help you get started with developing a search for your review:
Reach out to your liaison librarian for more tips and help.
Yes, AI can assist in developing searches for systematic reviews, but it should be used with caution.
AI tools can help generate search terms, suggest synonyms, and even translate searches across databases. However, AI-generated searches may lack transparency, reproducibility, and the nuanced understanding of subject-specific terminology that an experienced searcher provides. As a result, it is best to use AI as a supplementary tool.
Boolean operators help refine search results:
AND narrows a search by retrieving only records that contain all of the connected terms (e.g., exercise AND depression).
OR broadens a search by retrieving records that contain any of the connected terms (e.g., exercise OR "physical activity").
NOT excludes records that contain a term (e.g., depression NOT bipolar).
Learn more about Boolean operators through these Galter resources:
Yes, you can exclude certain words, concepts, or phrases in your search strategy by using the Boolean operator NOT. However, NOT may inadvertently exclude relevant studies, because any record that mentions the excluded term is removed even if the study itself meets your inclusion criteria.
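As a hypothetical PubMed example, the line

depress*[tiab] NOT bipolar[tiab]

would remove every record whose title or abstract mentions the word "bipolar," including studies of unipolar depression that list bipolar disorder only as an exclusion criterion. For this reason, many searchers prefer to handle unwanted concepts at the screening stage rather than with NOT.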
Yes, a librarian can assist you with designing a search strategy.
To get started, complete the online form. This form collects essential details to guide the development of your search. Once submitted, your liaison librarian will review it and follow up within a week to schedule a time to discuss your search strategy.
There is no set number of results or studies that is universally acceptable for a review; the appropriate number depends on several factors, including the breadth of the research question, the size and maturity of the existing literature, and the type of review being conducted.
Ultimately, the focus should be on the relevance and quality of the studies rather than aiming for a specific quantity.
Generally, no. Librarians should not run searches designed by others because their expertise lies in developing and refining comprehensive search strategies that meet the rigorous standards of systematic and scoping reviews. Running a pre-designed search without making intellectual contributions undermines the librarian’s role as a methodology expert and is inconsistent with the principles of authorship.
An exception to this is when a search needs to be updated, and the original librarian is unavailable to rerun it. The new librarian may assist with rerunning the search but should ensure that proper credit is given to the original search designer.
Benchmark articles (also called reference or seed articles/studies) are published studies that meet the inclusion criteria for your review. Because they meet the criteria, these articles will be included in the review and undergo data extraction and synthesis. The data items of interest, as specified in your protocol, will guide the team in extracting relevant information from these studies.
Why Benchmark Articles Matter
Failure to identify suitable benchmark articles often signals that the review topic may be overly narrow for a systematic or scoping review. In such cases, teams should consider whether an alternative review type might be more appropriate.
Testing the Data Extraction Process
Because benchmark articles already meet the inclusion criteria, they are ideal for testing the data extraction form and process. These studies will be screened in full text and undergo data extraction, providing an opportunity to refine the data collection methods and ensure consistency before applying the process to the full set of included studies.
Role in Search Strategy Development
Benchmark articles are indispensable for developing and testing search strategies. During the initial stages, these studies are used to validate that the search strategy is comprehensive and effective. They should appear in the database search results if the strategy is well designed.
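One way to test this in PubMed (illustrated here with made-up PMIDs) is to combine the draft strategy with the benchmark articles' PubMed IDs in the search history:

#1 AND (12345678[pmid] OR 23456789[pmid] OR 34567890[pmid])

where #1 is the draft search. If this combination retrieves fewer records than the number of benchmark articles, the strategy is missing at least one benchmark and should be revised.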
Benchmark Article Requirements
For teams collaborating with Galter librarians under the full collaboration model, a minimum of four to five benchmark articles is required.
In section 5.2.1 of the Cochrane Handbook, the Cochrane Collaboration states that "in a systematic review, studies rather than reports of studies are the principal unit of interest" and advises that "multiple reports of the same study should be linked together." Source: Chapter 5 of the Cochrane Handbook
This guidance may also apply to scoping reviews and other evidence synthesis projects. Consider establishing your team's approach to handling multiple reports of a single study (for example, a trial's protocol, primary results paper, and long-term follow-up report) while developing your review protocol.
Sources:
Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA (editors). Cochrane Handbook for Systematic Reviews of Interventions version 6.5 (updated August 2024). Cochrane, 2024. Available from www.training.cochrane.org/handbook.
Mayo-Wilson E, Li T, Fusco N, Dickersin K; MUDS investigators. Practical guidance for using multiple data sources in systematic reviews and meta-analyses (with examples from the MUDS study). Res Synth Methods. 2018 Mar;9(1):2-12. doi: 10.1002/jrsm.1277. Epub 2017 Dec 15. PMID: 29057573; PMCID: PMC5888128.