

  • Commentary
  • Open Access
  • Published:

Clarifying differences between review designs and methods


Abstract

This paper argues that the current proliferation of types of systematic reviews creates challenges for the terminology for describing such reviews. Terminology is necessary for planning, describing, appraising, and using reviews, building infrastructure to enable the conduct and use of reviews, and for further developing review methodology. There is insufficient consensus on terminology for a typology of reviews to be produced, and any such attempt is likely to be limited by the overlapping nature of the dimensions along which reviews vary. It is therefore proposed that the most useful strategy for the field is to develop terminology for the main dimensions of variation. Three such main dimensions are proposed: (1) aims and approaches (including what the review is aiming to achieve, the theoretical and ideological assumptions, and the use of theory and the logics of aggregation and configuration in synthesis); (2) structure and components (including the number and type of mapping and synthesis components and how they relate); and (3) breadth, depth, and the extent of 'work done' in addressing a research issue (including the breadth of review questions, the detail with which they are addressed, and the amount the review progresses a research agenda). This provides an overarching strategy to encompass more detailed descriptions of methodology and may lead in time to a more overarching system of terminology for systematic reviews.


Background

Research studies vary in many ways, including the types of research questions they are asking, the reasons these questions are being asked, the theoretical and ideological perspectives underlying these questions, and the research methods that they use (and the theoretical and ideological perspectives underlying these methods). Systematic reviews are a form of research; they are a way of bringing together what is known from the research literature using explicit and accountable methods [1]. Systematic methods of review have been successfully developed especially for questions concerning the impact of interventions; these synthesize the findings of studies which use experimental controlled designs. Yet the logic of systematic methods for reviewing the literature can be applied to all areas of research; there can therefore be as much variation in systematic reviews as is found in primary research [2, 3]. This paper discusses some of the important conceptual and practical differences between different types of systematic review. It does not aim to provide an overall taxonomy of all types of review; the rate of development of new approaches to reviewing is too fast, and the overlap of approaches too great, for that to be helpful. Instead, the paper argues that, for the present at least, it is more useful to identify the key dimensions on which reviews differ and to examine the multitude of different combinations of those dimensions. The paper also does not aim to describe all of the myriad actual and potential differences between reviews; this would be a task too large even for a book, let alone a paper. The focus instead is on three major types of dimensions of difference. The first dimension is the aims and approaches of reviews, particularly in terms of their methodologies (their ontological and epistemological foundations and methods of synthesis).
The second dimension is the structure and components of reviews. The third dimension is the breadth, depth, and extent of the work done by a review in engaging with a research issue. Once these three aspects of a review are clear, consideration can be given to more specific methodological issues such as methods of searching, identifying, coding, appraising, and synthesizing evidence. The aim of this paper is to clarify some of the major conceptual distinctions between reviews to assist the selection, evaluation, and development of methods for reviewing.

Clarifying the nature of variation in reviews

As forms of research, systematic reviews are undertaken according to explicit methods. The term 'systematic' distinguishes them from reviews undertaken without clear and accountable methods.

The history of systematic reviews is relatively recent [4, 5] and, despite early work on meta-ethnography [6], the field has been dominated by the development and application of statistical meta-analysis of controlled trials to synthesize the evidence on the effectiveness of health and social interventions. Over the past ten years, other methods for reviewing have been developed. Some of these methods aim to extend effectiveness reviews with data from qualitative studies [7]. The qualitative data may be used to inform decisions made in the statistical synthesis or be part of a mixed methods synthesis (discussed later). Other approaches have been developed from a perspective which, instead of the statistical aggregation of data from controlled trials, emphasizes the central role that theory can play in synthesizing existing research [8, 9], addresses the complexity of interventions [10], and the importance of understanding research within its social and paradigmatic context [11]. The growth in methods has not been accompanied by a clear typology of reviews. The result is a complex web of terminology [2, 12].

The lack of clarity about the range of methods of review has consequences which can limit their development and subsequent use. Knowledge or consensus about the details of specific methods may be lacking, creating the danger of over-generalization or inappropriate application of the terminology being used. Also, the branding of different types of review can lead to over-generalizations and simplification, with assumptions being made about differences between reviews that only apply to particular stages of a review or that are matters of degree rather than absolute differences. For example, concepts of quality assurance can differ depending upon the nature of the research question being asked. Similarly, infrastructure systems developed to enable the better reporting and critical appraisal of reviews, such as PRISMA [13], and for the registration of reviews, such as PROSPERO [14], currently apply predominantly to a subset of reviews, the defining criteria of which may not be fully clear.

A further problem is that systematic reviews have attracted criticism on the assumption that systematic reviewing is applicable only to empirical quantitative research [15]. In this way, polarized debates about the utility and relevance of different research paradigms may further complicate terminological issues and conceptual understandings about how reviews actually differ from one another. All of these difficulties are heightened because review methods are undergoing a period of rapid development, and so the methods being described are often being updated and refined.

Knowledge about the nature and strengths of different forms of review is necessary for: appropriate choice of review methods by those undertaking reviews; consideration of the importance of different issues of quality and relevance for each stage of a review; appropriate and accurate reporting and accountability of such review methods; interpretation of reviews; commissioning of reviews; development of procedures for assessing and undertaking reviews; and development of new methods.

Clarifying the nature of the similarities and differences between reviews is a first step to avoiding these potential limitations. A typology of review methods might be a solution. There are many diverse approaches to reviews that can be easily distinguished, such as statistical meta-analysis and meta-ethnography. A more detailed examination, however, reveals that the types of review currently described often have commonalities that vary across types of review and at different stages of a review. Three of these dimensions are described here. Exploring these dimensions also reveals how reviews differ in degree along these overlapping dimensions rather than falling into clear categories.

Review aims and approaches

Primary research and research reviews vary in their ontological, epistemological, ideological, and theoretical stance, their research paradigm, and the problems that they aim to address. In reviews, this variation occurs in both the method of review and the type of primary research that they consider. As reviews will include primary studies that address the focus of the review question, it is not surprising that review methods also tend to reflect many of the approaches, assumptions, and methodological challenges of the primary research that they include.

One indication of the aim and approach of a study is the research question which the study aims to answer. Questions commonly addressed by systematic reviews include: what is the effect of this intervention (addressed by, for example, the statistical meta-analysis of experimental trials); what is the accuracy of this diagnostic tool (addressed by, for example, meta-analysis of evaluations of diagnostic tests); what is the cost of this intervention (addressed by, for example, a synthesis of cost-benefit analyses); what is the meaning or process of a phenomenon (addressed by, for example, conceptual synthesis such as meta-ethnography or a critical interpretative synthesis of ethnographic studies); what is the effect of this complex intervention (addressed by, for example, multi-component mixed methods reviews); what is the effect of this approach to social policy in this context (addressed by, for example, realist synthesis of evidence of efficacy and relevance across different policy areas); and what are the attributes of this intervention or activity (addressed by, for example, framework synthesis framed by dimensions explicitly linked to particular perspectives).

Although different questions drive the review process and suggest different methods for reviewing (and methods of studies included), there is considerable overlap in the review methods that people may select to answer these questions; thus the review question alone does not provide a complete basis for generating a typology of review methods.

Role of theory

There is no agreed typology of research questions in the health and social sciences. In the absence of such a typology, one way to distinguish research is in the extent to which it is concerned with generating, exploring, or testing theory [16].

In addressing an impact question using statistical meta-analysis, the approach is predominantly the empirical testing of a theory that the intervention works. The theory being tested may be based on a detailed theory of change (logic model) or be a 'black box' where the mechanisms by which change may be effected are not articulated. The review may, in addition to testing theory, include methods to generate hypotheses about causal relations. Testing often (though not always) aims to add up or aggregate data from large representative samples to obtain a more precise estimate of effect. In the context of such reviews, searching aims to identify a representative sample of studies, usually by attempting to include all relevant studies in order to avoid bias from study selection (sometimes called 'exhaustive' searching). Theoretical work in such analyses is undertaken predominantly before and after the review, not during the review, and is concerned with developing the hypothesis and interpreting the findings.

In research examining processes or meanings, the approach is predominantly about developing or exploring theory. This may not require representative samples of studies (as in aggregative reviews) but does require variation to enable new conceptual understandings to be generated. Searching for studies in these reviews adopts a theoretical approach to identify a sufficient and appropriate range of studies, either through a rolling sampling of studies according to a framework that is developed inductively from the emerging literature (akin to theoretical sampling in primary research) [17], or through a sampling framework based on an existing body of literature (akin to purposive sampling in primary research) [18]. In both primary research and reviews, theoretical work is undertaken during the process of the research; and, just as with theory testing reviews, the nature of the concepts may be relatively simple or very complex.

Aggregative and configurative reviews

The distinction between research that tests and research that generates theory also equates to the distinction between review types made by Voils, Sandelowski and colleagues [19, 20] (although we have been very influenced by these authors, the detail of our use of these terms may differ in places). Reviews that are collecting empirical data to describe and test predefined concepts can be thought of as using an 'aggregative' logic. The primary research and reviews are adding up (aggregating) and averaging empirical observations to make empirical statements (within predefined conceptual positions). In contrast, reviews that are trying to interpret and understand the world are interpreting and arranging (configuring) information and are developing concepts (Figure 1). This heuristic also maps onto the way that the review is intended to inform knowledge. Aggregative research tends to be seeking evidence to inform decisions, whilst configuring research is seeking concepts to provide enlightenment through new ways of understanding.
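The aggregative logic described above, adding up and averaging empirical observations to sharpen an estimate, can be illustrated with a minimal sketch of fixed-effect inverse-variance pooling, a common computation in statistical meta-analysis. This sketch is illustrative only and is not from the paper; the effect sizes and variances are invented.

```python
import math

def fixed_effect_pool(effects, variances):
    """Inverse-variance weighted (fixed-effect) pooling of study effect sizes.

    Each study is weighted by the inverse of its variance, so larger,
    more precise studies contribute more to the pooled estimate.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))  # standard error of the pooled effect
    return pooled, pooled_se

# Hypothetical effect sizes (e.g., standardized mean differences) and variances
effects = [0.50, 0.30, 0.45]
variances = [0.10, 0.20, 0.05]
pooled, se = fixed_effect_pool(effects, variances)
print(f"pooled effect = {pooled:.3f}, standard error = {se:.3f}")
```

Note how the pooled estimate is an average constrained by predefined concepts (a single common effect), which is what distinguishes aggregation from the configuring logic discussed next.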

Figure 1. Continua of approaches in aggregative and configurative reviews.

Aggregative reviews are often concerned with using predefined concepts and then testing these using predefined (a priori) methods. Configuring reviews can be more exploratory and, although the basic methodology is determined (or at least assumed) in advance, specific methods are sometimes adjusted and selected (iteratively) as the research proceeds. Aggregative reviews are likely to be combining similar forms of data and so are interested in the homogeneity of studies. Configurative reviews are more likely to be interested in identifying patterns provided by heterogeneity [12].

The logic of aggregation relies on identifying studies that support one another and so give the reviewer greater certainty about the magnitude and variance of the phenomenon under investigation. As already discussed in the previous section, the approach to searching for studies to include (the search strategy) attempts to be exhaustive or, if not exhaustive, at least to avoid bias in the way that studies are found. Configuring reviews have the different purpose of aiming to find sufficient cases to explore patterns and so are not necessarily attempting to be exhaustive in their searching. (Most reviews incorporate elements of both aggregation and configuration, and so some may require an unbiased set of studies as well as sufficient heterogeneity to permit the exploration of differences between them.)

Aggregating and configuring reviews also vary in their approach to quality assurance. All reviews aim to avoid drawing misleading conclusions because of problems in the studies they contain. Aggregative reviews are concerned with a priori methods, and their quality assurance processes assess compliance with those methods. As the basis of quality assurance is known a priori, many aspects of this can be incorporated into the inclusion criteria of the review and can then be further checked at a later quality assessment stage. The inclusion criteria may, for example, require only certain types of study with specific methodological features. There is less consensus in the practice of quality assessment in configurative reviews; some adopt a similar strategy to those employed in aggregative reviews, whereas others reject the idea that the quality of a study can be assessed through an examination of its method, and instead prioritize other issues, such as relevance to the review and the contribution the study can make in the review synthesis to testing or generating theory [21–23]. Some of the differences between aggregating and configuring reviews are shown in Figure 1.

Although the logics of aggregating and configuring research findings demand different methods for reviewing, a review often includes components of both. A meta-analysis may contain a post hoc interpretation of statistical associations which may be configured to generate hypotheses for future testing. A configurative synthesis may include some components where data are aggregated (for example, framework synthesis) [24, 25]. Examples of reviews that are predominantly aggregative, configurative, or with high degrees of both aggregation and configuring are given in Table 1 (and for a slightly different take on this heuristic see Sandelowski et al. [20]).
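The idea that numerical results can be configured rather than aggregated, arranging them to surface patterns that generate hypotheses, can be sketched as a simple subgroup comparison. This is an invented illustration: the effect sizes and the 'setting' moderator are hypothetical, and real subgroup analyses would also weight by precision and test for differences formally.

```python
from collections import defaultdict

def subgroup_means(studies):
    """Configure study results into subgroups defined by a moderator variable,
    returning the mean effect per subgroup. A pattern across subgroups suggests
    a hypothesis for future testing rather than a single pooled answer."""
    groups = defaultdict(list)
    for effect, setting in studies:
        groups[setting].append(effect)
    return {setting: sum(es) / len(es) for setting, es in groups.items()}

# Hypothetical (effect size, delivery setting) pairs
studies = [(0.6, "school"), (0.5, "school"), (0.1, "clinic"), (0.2, "clinic")]
print(subgroup_means(studies))
```

Here the arrangement of the numbers (larger apparent effects in one setting) is the configurative product; confirming it would require a new, aggregative test.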

Table 1. Examples of review types.

Similarly, the nature of a review question, the assumptions underlying the question (or conceptual framework), and whether the review aggregates or configures the results of other studies may strongly suggest which methods of review are appropriate, but this is not always the case. Several methods of review are applicable to a wide range of review approaches. Both thematic synthesis [26] and framework synthesis [24, 25], which identify themes within narrative data, can, for example, be used with both aggregative and configurative approaches to synthesis.

Reviews that are predominantly aggregative may have similar epistemological and methodological assumptions to much quantitative research, and there may be similar assumptions between predominantly configurative reviews and qualitative research. However, the quantitative/qualitative distinction is not precise and does not reflect the differences in the aggregative and configurative research processes; quantitative reviews may use configurative processes and qualitative reviews can use aggregative processes. Some authors also use the term 'conceptual synthesis' for reviews that are predominantly configurative, but the process of configuring in a review does not have to be limited to concepts; it can also be the arrangement of numbers (as in subgroup analyses of statistical meta-analysis). The term 'interpretative synthesis' is also used to describe reviews where meanings are interpreted from the included studies. However, aggregative reviews also include interpretation: before, in the inspection of the studies to develop criteria for including studies, and after, in the synthesis of the findings to develop implications for policy, practice, and further research. Thus, the aggregate/configure framework cannot be thought of as another way of expressing the qualitative/quantitative 'divide'; it has a more specific meaning concerning the logic of synthesis, and many reviews have elements of both aggregation and configuration.

Further ideological and theoretical assumptions

In addition to the above is a range of issues about whose questions are being asked and the implicit ideological and theoretical assumptions driving both them and the review itself. These assumptions determine the specific choices made in operationalizing the review question and thus determine the way in which the review is undertaken, including the research studies included and how they are analyzed. Ensuring that these assumptions are transparent is therefore important both for the execution of the review and for accountability. Reviews may be undertaken to inform decision-making by non-academic users of research such as policymakers, practitioners, and other members of the public, and so there may be a broad range of different perspectives that can inform a review [27, 28]. The perspectives driving the review will also influence the findings of the review and thereby clarify what is known and not known (within those perspectives) and thus inform what further primary research is required. Both reviewer and user perspectives can thus have an ongoing influence in developing user-led research agendas. There may be many different agendas and thus a plurality of both primary research and reviews of research on any given issue.

A further fundamental issue that is related to the types of questions being asked and the ideological and theoretical assumptions underlying them is the ontological and epistemological position taken by the reviewers. Aggregative reviews tend to assume that there is (often within disciplinary specifications/boundaries) a reality about which empirical statements can be made, even if this reality is socially constructed (generalizations); in other words, they take a 'realist' philosophical position (a broader concept than the specific method of 'realist synthesis'). Some configurative reviews may not require such realist assumptions. They take a more relativist idealist position; the interest is not in seeking a single 'correct' answer but in examining the variation and complexity of different conceptualizations [12, 29]. These philosophical differences can be important in understanding the approach taken by different reviewers, just as they are in understanding variation in approach (and debates about research methods) in primary research. These differences also relate to how reviews are used. Aggregative reviews are often used to make empirical statements (within agreed conceptual perspectives) to inform decision making instrumentally, whilst configuring reviews are often used to develop concepts and enlightenment [30].

Structure and components of reviews

As well as varying in their questions, aims, and philosophical approach, reviews also vary in their structure. They can be single reviews that synthesize a specific literature to answer the review question. They may be maps of what research has been undertaken that are products in their own right and also a stage on the way to one or more syntheses. Reviews can also contain multiple components, equating to conducting many reviews or to reviewing many reviews.

Systematic maps

To some degree, most reviews describe the studies they contain and thus provide a map or account of the research field. Some reviews go further than this and more explicitly identify aspects of the studies that help describe the research field in some detail, the focus and extent of such description varying with the aims of the map. Maps are useful products in their own right but can also be used to inform the process of synthesis and the interpretation of the synthesis [3, 30]. Instead of automatically undertaking a synthesis of all included studies, an analysis of the map may lead to a decision to synthesize only a subset of studies, or to conduct several syntheses in different areas of the one map. A broader initial review question and a narrower subsequent review question allow the synthesis of a narrower subset of studies to be understood within the wider literature described in terms of research topics, primary research methods, or both. It also allows broader review questions to create a map for a series of reviews (Figure 2) or mixed methods reviews (Figure 3). In sum, maps have three main purposes: (1) to describe the nature of a research field; (2) to inform the conduct of a synthesis; and (3) to interpret the findings of a synthesis [3, 31]. The term 'scoping review' is also sometimes used in a number of different ways to describe (often non-systematic) maps and/or syntheses that rapidly examine the nature of the literature on a topic area [32, 33], sometimes as part of the planning for a systematic review.

Figure 2. A map leading to several syntheses.

Figure 3. A mixed methods review with three syntheses.

Mixed methods reviews

The inclusion criteria of a review may allow all types of primary research or only studies with specific methods that are considered most appropriate to best address the review question. Including several different methods of primary research in a review can create challenges at the synthesis stage. For example, a review asking about the impact of some life experience may examine both randomized controlled trials and large data sets on naturally occurring phenomena (such as in large scale cohort studies). Another strategy is to have sub-reviews that ask questions about different aspects of an issue and which are likely to consider different primary research [34, 35]; for instance, a statistical meta-analysis of impact studies compared with a conceptual synthesis of people's views of the issue being evaluated [34, 35]. The two sub-reviews can then be combined and contrasted in a third synthesis, as in Figure 3. Mixed methods reviews have many similarities with mixed methods in primary research, and there are therefore numerous ways in which the products of different synthesis methods may be combined [35].

Mixed knowledge reviews apply a similar approach but combine data from previous research with other forms of data; for instance, a survey of practice knowledge about an issue (Figure 4).

Figure 4. Mixed knowledge review.

Another example of a mixed methods review is realist synthesis [9], which examines the usefulness of mid-level policy interventions across different areas of social policy by unpacking the implicit models of change, followed by an iterative process of identifying and analyzing the evidence in support of each part of that model. This is quite similar to a theory-driven aggregative review (or series of reviews) that aggregatively tests different parts of a causal model. The first part of the process is a form of configuration in clarifying the nature of the theory and what needs to be empirically tested; the second part is the aggregative testing of those subcomponents of the theory. The difference between this method and more 'standard' systematic review methods is that the search for empirical evidence is more of an iterative, investigative process of tracking down and interpreting evidence. Realist synthesis will also consider a broad range of empirical evidence and will assess its value in terms of its contribution rather than according to some preset criteria. The approach therefore differs from the predominantly a priori strategy used in either standard 'black box' or theory driven aggregative reviews. There have also been attempts to combine aggregative 'what works' reviews with realist reviews [36]. These innovations are exploring how best to develop the breadth, generalizability, and policy relevance of aggregative reviews without losing their methodological protection against bias.

There are also reviews that use other pre-existing reviews as their source of data. These reviews of reviews may draw on the data of previous reviews either by using the findings of previous reviews or by drilling down to use data from the primary studies in the reviews [37]. Data drawn from many reviews can also be mined to understand more about a research field or research methods in meta-epidemiology [38]. As reviews of reviews and meta-epidemiology both use reviews as their data, they are sometimes both described as types of 'meta reviews'. This terminology may not be helpful, as it links together two approaches to reviews which have little in common apart from the shared type of data source. A further term is 'meta evaluation'. This can refer to the formative or summative evaluation of primary evaluation studies or can be a summative statement of the findings of evaluations, which is a form of aggregative review (see Gough et al., in preparation, and [39]).

Breadth, depth, and 'work done' by reviews

Primary research studies and reviews may be read as isolated products, yet they are usually one step in larger or longer-term research enterprises. A research study usually addresses a macro research issue and a specific focused sub-issue that is addressed by its specific data and analysis [16]. This specific focus can be broad or narrow in scope and deep or not so deep in the detail in which it is examined.

Breadth of question

Many single component aggregative reviews aim for homogeneity in the focus and method of included studies. They select narrowly defined review questions to ensure a narrow methodological focus of research findings. Although well justified, these decisions may lead to each review providing a very narrow view of both the research and the issue that is being addressed. A user of such reviews may need to take account of multiple narrow reviews in order to help them determine the most appropriate course of action.

The need for a broader view is raised by complex questions. One example is assessing the impact of complex interventions. There are often many variants of an intervention, but even within one particular highly specified intervention there may be variations in terms of the frequency, duration, degree, engagement, and fidelity of delivery [40]. All of this variation may result in different effects on different participants in different contexts. The variation may also impact differentially within the hypothesized program theory of how the intervention impacts on different causal pathways. Reviews therefore need a strategy for how they can engage with this complexity. One strategy is to achieve breadth through multi-component reviews; for instance, a broad map which can provide the context for interpreting a narrower synthesis, a series of related reviews, or mixed methods reviews. Other strategies include 'mega reviews', where the results from very many primary studies or meta-analyses are aggregated statistically (for example, [41, 42]), and multivariate analyses, where moderator variables are used to identify the 'active ingredients' of interventions (for example, [43, 44]). Whether breadth is achieved within a single review, from a sequence of reviews, from reviews of reviews, or from relating to the primary and review work of others, the cycle of primary research production and synthesis is part of a wider circle of engagement and response to users of research [45].

Review resources and breadth and depth of review

The resources required for a systematic review are not fixed. With different amounts of resources one can achieve different types of review. Broad reviews such as mixed methods and other multi-component reviews are likely to require more resources, all else being constant, than narrow single method reviews. Thus, in addition to the breadth of a review, there is the issue of its depth, or the detail with which it is undertaken. A broad review may not have greater resources than a narrow review, in which case those resources are spread more thinly and each aspect of that breadth may be undertaken with less depth.

When time and other resources are very restricted, a rapid review may be undertaken where some aspect of the review will be limited; for instance, the breadth of the review question, sources searched, data coded, quality and relevance assurance measures, and depth of analysis [46, 47]. Many students, for example, undertake literature reviews that may be informed by systematic review principles of rigor and transparency of reporting; some of these may be relatively small exercises, whilst others make up a substantial component of the thesis. If rigor of execution and reporting are reduced too far, then it may be more appropriate to characterize the work as non-systematic scoping than as a systematic review.

Reviews thus vary in the extent that they engage with a research issue. The enterprise may range in size from, for example, a specific program theory to a whole field of inquiry. The enterprise may be under study by one research team, by a broader group such as a review group in an international collaboration, or be the focus of study by many researchers internationally. The enterprises may be led by academic disciplines, applied review collaborations, by priority setting agendas, and by forums to enable different perspectives to be engaged in research agendas. Whatever the nature of the strategic content or process of these macro research issues, reviews vary in the extent that they plan to contribute to such more macro questions. Reviews thus vary in the extent that this research work is done within a review, rather than before and after a review (by primary studies or by other reviews).

Reviews can be undertaken with different levels of skill, efficiency, and automated tools [48], so resources do not equate exactly with the 'work done' in progressing a research issue. In general, a broad review with relatively little depth (providing a systematic overview) may be comparable in work done to a detailed narrow review (as in many current statistical meta-analyses). A multi-component review addressing complex questions using both aggregative and configuring methods may be attempting to achieve more work, though there may be challenges in terms of maintaining consistency or transparency of detail in each component of the review. In contrast, a rapid review has few resources and so is attempting less than other reviews, but there may be dangers that the limited scope (and limited contribution to the broader research agenda) is not understood by funders and users of the review. How best to apply available resources is a strategic issue depending upon the nature of the review question, the state of the research available on that issue, and the knowledge about that state of the research. It is an issue of being fit for purpose. A review doing comparatively little 'work' may be exactly what is needed in one situation but not in another.

Conclusion

Explicit accountable methods are required for primary research and for reviews of research. This logic applies to all research questions and thus multiple methods for reviews of research are required, just as they are required for primary research. These differences in types of reviews reflect the richness of primary research, not only in the range of variation but also in the philosophical and methodological challenges that they pose, including the mixing of different types of methods. The dominance of one form of review question and review method and the branding of other forms of review does not clearly describe the variation in review designs and methods or the similarities and differences between these methods. Clarity about the dimensions along which reviews vary provides a way to develop review methods further and to make the critical judgments necessary for the commissioning, production, evaluation, and use of reviews. This paper has argued for the need for clarity in describing the design and methods of systematic reviews along many dimensions; and that particularly useful dimensions for planning, describing, and evaluating reviews are:

  1.

    Review aims and approach: (i) approach of the review: ontological, epistemological, theoretical, and ideological assumptions of the reviewers and users of the review including any theoretical stance; (ii) review question: the type of answer that is being sought (and the type of data that would answer it); and (iii) aggregation and configuration: the relative use of these logics and strategies in the different review components (and the positioning of theory in the review process, the degree of homogeneity of data, and the iteration of review method).

  2.

    Structure and components of reviews: (iv) the systematic map and synthesis components of the review; and (v) the relation between these components.

  3.

    Breadth, depth, and 'work done' by reviews: (vi) macro research strategy: the positioning of the review (and the resources and the work aimed to be done) within the state of what is already known and other research planned by the review team and others; and (vii) the resources used to achieve this.

Clarifying some of the main dimensions along which reviews vary can provide a framework within which description of more detailed aspects of methodology can occur; for example, the specific strategies used for searching, identifying, coding, and synthesizing evidence, and the use of specific methods and techniques ranging from review management software to text mining to statistical and narrative methods of analysis. Such clearer descriptions may lead in time to a more overarching system of terminology for systematic reviews.

Authors' data

DG, JT, and SO are all directors of the Evidence for Policy and Practice Information and Co-ordinating Centre (EPPI-Centre) [49].

References

  1. Cooper H, Hedges L: The Handbook of Research Synthesis. 1994, Russell Sage Foundation, New York

  2. Gough D: Dimensions of difference in evidence reviews (Overview; I. Questions, evidence and methods; II. Breadth and depth; III. Methodological approaches; IV. Quality and relevance appraisal; V. Communication, interpretation and application). Series of six posters presented at National Centre for Research Methods meeting, Manchester. January 2007, EPPI-Centre, London,http://eppi.ioe.ac.uk/cms/Default.aspx?tabid=1919

  3. Gough D, Thomas J: Commonality and diversity in reviews. Introduction to Systematic Reviews. Edited by: Gough D, Oliver S, Thomas J. 2012, Sage, London, 35-65.

  4. Chalmers I, Hedges L, Cooper H: A brief history of research synthesis. Eval Health Prof. 2002, 25: 12-37. 10.1177/0163278702025001003.

  5. Bohlin I: Formalising syntheses of medical knowledge: the rise of meta-analysis and systematic reviews. Perspect Sci. in press

  6. Noblit G, Hare RD: Meta-ethnography: synthesizing qualitative studies. 1988, Sage Publications, Newbury Park CA

  7. Noyes J, Popay J, Pearson A, Hannes K, Booth A: Qualitative research and Cochrane reviews. Cochrane Handbook for Systematic Reviews of Interventions. Edited by: Higgins JPT, Green S. Version 5.1.0 (updated March 2011). The Cochrane Collaboration. www.cochrane-handbook.org

  8. Dixon-Woods M, Cavers D, Agarwal S, Annandale E, Arthur A, Harvey J, Hsu R, Katbamna S, Olsen R, Smith L, Riley R, Sutton AJ: Conducting a critical interpretive synthesis of the literature on access to healthcare by vulnerable groups. BMC Med Res Methodol. 2006, 6: 35-10.1186/1471-2288-6-35.

  9. Pawson R: Evidence-based policy: a realist perspective. 2006, Sage, London

  10. Shepperd S, Lewin S, Straus S, Clarke M, Eccles M, Fitzpatrick R, Wong G, Sheikh A: Can we systematically review studies that evaluate complex interventions?. PLoS Med. 2009, 6: 8-10.1371/journal.pmed.1000008.

  11. Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O, Peacock R: Storylines of research in diffusion of innovation: a meta-narrative approach to systematic review. Soc Sci Med. 2005, 61: 417-430. 10.1016/j.socscimed.2004.12.001.

  12. Barnett-Page E, Thomas J: Methods for the synthesis of qualitative research: a critical review. BMC Med Res Methodol. 2009, 9: 59-10.1186/1471-2288-9-59.

  13. Moher D, Liberati A, Tetzlaff J, Altman DG, The PRISMA Group: Preferred reporting items for systematic reviews and meta-analyses: the PRISMA Statement. PLoS Med. 2009, 6: 6-10.1371/journal.pmed.1000006.

  14. PLoS Medicine Editors: Best practice in systematic reviews: The importance of protocols and registration. PLoS Med. 2011, 8: 2-

  15. Thomas G: Introduction: evidence and practice. Evidence-based Practice in Education. Edited by: Pring R, Thomas G. 2004, Open University Press, Buckingham, 44-62.

  16. Gough D, Oliver S, Newman M, Bird G: Transparency in planning, warranting and interpreting research. Teaching and Learning Research Briefing 78. 2009, Teaching and Learning Research Programme, London

  17. Strauss A, Corbin J: Basics of qualitative research, grounded theory procedures and techniques. 1990, Sage, London

  18. Miles M, Huberman A: Qualitative Data Analysis. 1994, Sage, London

  19. Voils CI, Sandelowski M, Barroso J, Hasselblad V: Making sense of qualitative and quantitative findings in mixed research synthesis studies. Field Methods. 2008, 20: 3-25. 10.1177/1525822X07307463.

  20. Sandelowski M, Voils CI, Leeman J, Crandell JL: Mapping the Mixed Methods-Mixed Research Synthesis Terrain. Journal of Mixed Methods Research. 2011, 10.1177/1558689811427913.

  21. Pawson R, Boaz A, Grayson L, Long A, Barnes C: Types and Quality of Knowledge in Social Care. 2003, Social Care Institute for Excellence, London

  22. Oancea A, Furlong J: Expressions of excellence and the assessment of applied and practice-based research. Res Pap Educ. 2007, 22: 119-137. 10.1080/02671520701296056.

  23. Harden A, Gough D: Quality and relevance appraisal. Introduction to Systematic Reviews. Edited by: Gough D, Oliver S, Thomas J. 2012, Sage, London, 153-178.

  24. Thomas J, Harden A: Methods for the thematic synthesis of qualitative research in systematic reviews. BMC Med Res Methodol. 2008, 8: 45-10.1186/1471-2288-8-45.

  25. Oliver S, Rees RW, Clarke-Jones L, Milne R, Oakley AR, Gabbay J, Stein K, Buchanan P, Gyte G: A multidimensional conceptual framework for analysing public involvement in health services research. Health Expect. 2008, 11: 72-84. 10.1111/j.1369-7625.2007.00476.x.

  26. Carroll C, Booth A, Cooper K: A worked example of "best fit" framework synthesis: a systematic review of views concerning the taking of some potential chemopreventive agents. BMC Med Res Methodol. 2011, 11: 29-10.1186/1471-2288-11-29.

  27. Rees R, Oliver S: Stakeholder perspectives and participation in reviews. Introduction to Systematic Reviews. Edited by: Gough D, Oliver S, Thomas J. 2012, Sage, London, 17-34.

  28. Oliver S, Dickson K, Newman M: Getting started with a review. Introduction to Systematic Reviews. Edited by: Gough D, Oliver S, Thomas J. 2012, Sage, London, 66-82.

  29. Spencer L, Ritchie J, Lewis J, Dillon L: Quality in Qualitative Evaluation: a Framework for Assessing Research Evidence. 2003, Government Chief Social Researcher's Office, London

  30. Weiss C: The many meanings of research utilisation. Public Adm Rev. 1979, 29: 426-431.

  31. Peersman G: A Descriptive Mapping of Health Promotion Studies in Young People. EPPI Research Report. 1996, EPPI-Centre, London

  32. Arksey H, O'Malley L: Scoping Studies: towards a methodological framework. Int J Soc Res Methodol. 2005, 8: 19-32. 10.1080/1364557032000119616.

  33. Levac D, Colquhoun H, O'Brien KK: Scoping studies: advancing the methodology. Implement Sci. 2010, 5: 69-10.1186/1748-5908-5-69.

  34. Thomas J, Harden A, Oakley A, Oliver S, Sutcliffe K, Rees R, Brunton G, Kavanagh J: Integrating qualitative research with trials in systematic reviews: an example from public health. Brit Med J. 2004, 328: 1010-1012. 10.1136/bmj.328.7446.1010.

  35. Harden A, Thomas J: Mixed methods and systematic reviews: examples and emerging issues. Handbook of Mixed Methods in the Social and Behavioral Sciences. Edited by: Tashakkori A, Teddlie C. 2010, Sage, London, 749-774. 2

  36. van der Knaap LM, Leeuw FL, Bogaerts S, Nijssen LTJ: Combining Campbell standards and the realist evaluation approach: the best of two worlds?. Am J Eval. 2008, 29: 48-57. 10.1177/1098214007313024.

  37. Smith V, Devane D, Begley CM, Clarke M: Methodology in conducting a systematic review of systematic reviews of healthcare interventions. BMC Med Res Methodol. 2011, 11: 15-10.1186/1471-2288-11-15.

  38. Oliver S, Bagnall AM, Thomas J, Shepherd J, Sowden A, White I, Dinnes J, Rees R, Colquitt J, Oliver K, Garrett Z: RCTs for policy interventions: a review of reviews and meta-regression. Health Technol Assess. 2010, 14: 16-

  39. Scriven M: An introduction to meta-evaluation. Educational Product Report. 1969, 2: 36-38.

  40. Carroll C, Patterson M, Wood S, Booth A, Rick J, Balain S: A conceptual framework for implementation fidelity. Implement Sci. 2007, 2: 40-10.1186/1748-5908-2-40.

  41. Smith ML, Glass GV: Meta-analysis of psychotherapy outcome studies. Am Psychol. 1977, 32: 752-760.

  42. Hattie J: Visible Learning: A Synthesis of Over 800 Meta-Analyses Relating to Achievement. 2008, Routledge, London

  43. Cook TD, Cooper H, Cordray DS, Hartmann H, Hedges LV, Light RJ, Louis TA, Mosteller F: Meta-analysis for Explanation: A Casebook. 1992, Russell Sage Foundation, New York

  44. Thompson SG, Sharp SJ: Explaining heterogeneity in meta-analysis: a comparison of methods. Stat Med. 1999, 18: 2693-2708. 10.1002/(SICI)1097-0258(19991030)18:20<2693::AID-SIM235>3.0.CO;2-V.

  45. Stewart R, Oliver S: Making a difference with systematic reviews. Introduction to Systematic Reviews. Edited by: Gough D, Oliver S, Thomas J. 2012, Sage, London, 227-244.

  46. Government Social Research Unit: Rapid Evidence Assessment Toolkit. 2008,http://www.civilservice.gov.uk/networks/gsr/resources-and-guidance/rapid-evidence-assessment

  47. Abrami PC, Borokhovski E, Bernard RM, Wade CA, Tamim R, Persson T, Surkes MA: Issues in conducting and disseminating brief reviews of evidence. Evidence & Policy: A Journal of Research, Debate and Practice. 2010, 6: 371-389. 10.1332/174426410X524866.

  48. Brunton J, Thomas J: Information management in reviews. Introduction to Systematic Reviews. Edited by: Gough D, Oliver S, Thomas J. 2012, Sage, London, 83-106.

  49. Evidence for Policy and Practice Information and Co-ordinating Centre (EPPI-Centre):http://eppi.ioe.ac.uk


Acknowledgements

The authors wish to acknowledge the support and intellectual contribution of their previous and current colleagues at the EPPI-Centre. They also wish to acknowledge the support of their major funders, as many of the ideas in this paper were developed whilst working on research supported by their grants; this includes the Economic and Social Research Council, the Department of Health, and the Department for Education. The views expressed here are those of the authors and are not necessarily those of our funders.

Author information

Affiliations

Corresponding author

Correspondence to David Gough.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

All three authors have made substantial contributions to the conception of the ideas in this paper, have been involved in drafting or revising it critically for important intellectual content, and have given final approval of the version to be published.


Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Gough, D., Thomas, J. & Oliver, S. Clarifying differences between review designs and methods. Syst Rev 1, 28 (2012). https://doi.org/10.1186/2046-4053-1-28



  • DOI : https://doi.org/10.1186/2046-4053-1-28

Keywords

  • Aggregation configuration
  • Complex reviews
  • Mapping
  • Methodology
  • Mixed methods reviews
  • Research methods
  • Scoping reviews
  • Synthesis
  • Systematic reviews
  • Taxonomy of reviews


Source: https://systematicreviewsjournal.biomedcentral.com/articles/10.1186/2046-4053-1-28