They examined the relationship between changes on a scale measuring global well-being and changes in the score of the outcome measure of interest. The amount of change in the severity of shortness of breath that patients noticed in a global sense was deemed the MCID of the CHF scale.
Approaches have also come from the psychology literature, including those of Jacobson et al. and, along a similar vein, Kendall et al. In summary, different disciplines have taken divergent approaches to determining the clinical importance of their interventions, and there is no consensus on the appropriate methods of determining the clinical importance of therapies. Historically, despite the critical role of clinical importance in the interpretation of clinical trial results, journal editors have typically prompted authors to emphasize statistical significance over clinical importance when reporting their results.
More recently, with the publication of influential papers,36 including the evidence-based medicine series in JAMA,37 there has been a clear shift toward greater emphasis on clinical importance when deciding whether the results of clinical trials should change patients' management. Nevertheless, issues remain. For example, the revision of the CONSORT statement,38 a set of widely adopted recommendations designed to improve the quality of reporting of randomized controlled trials, does not specifically recommend that authors discuss the clinical importance of their results.
Also, published clinical trial reports have not consistently discussed their results in a clinical context. Other investigators have published methods of relating the clinical importance of trial results to their statistical significance. When the confidence interval lies entirely beyond the MCID, strong inferences for treatment can be drawn; alternatively, inferences for treatment are weaker when the confidence interval overlaps the MCID. Using a slightly different approach based on the number needed to treat, Guyatt et al. related trial results to a threshold number needed to treat.
Again, if the confidence interval fell completely on one side of the threshold, stronger recommendations for treatment could be made than if the confidence interval overlapped the threshold. Mahon et al. applied similar principles to patients with chronic airflow obstruction using an n-of-1 design.
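The threshold reasoning described above can be sketched in code. This is our illustration of the idea, not the authors' method; the event rates, confidence interval, and NNT threshold below are invented for the example.

```python
# Sketch (illustrative assumptions throughout): relating a number-needed-
# to-treat (NNT) threshold to a trial's confidence interval, in the
# spirit of the threshold approach described above.

def nnt(absolute_risk_reduction: float) -> float:
    """NNT is the reciprocal of the absolute risk reduction (ARR)."""
    return 1.0 / absolute_risk_reduction

# Hypothetical trial: ARR point estimate 0.05 with 95% CI (0.02, 0.08).
arr_low, arr_point, arr_high = 0.02, 0.05, 0.08
threshold_nnt = 25  # assumed largest NNT still considered worth treating

# Note the inversion: the CI for the NNT runs from 1/arr_high to 1/arr_low.
nnt_best, nnt_point, nnt_worst = nnt(arr_high), nnt(arr_point), nnt(arr_low)

if nnt_worst <= threshold_nnt:
    verdict = "stronger recommendation: whole CI on the favorable side"
elif nnt_best <= threshold_nnt:
    verdict = "weaker recommendation: CI overlaps the threshold"
else:
    verdict = "treatment not supported at this threshold"

print(nnt_point, verdict)
```

Here the point-estimate NNT of 20 is favorable, but the worst-case NNT of 50 exceeds the assumed threshold of 25, so the interval overlaps the threshold and only a weaker recommendation follows.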
Other authors including Detsky and Sackett 40 and Goodman and Berlin 10 have outlined the role of confidence intervals in the interpretation of clinical trials without statistically significant results. They stated that for therapies to be considered equivalent, not only should the comparison of the efficacies of the two interventions not reach statistical significance, but also the upper limit of the confidence interval should be smaller than the predetermined MCID.
Unfortunately, these approaches have not been widely adopted in the reporting of clinical trial results, and judging the clinical importance of study results can be more difficult. Fortunately, the work reviewed above provides a systematic method by which clinical importance can be determined (Fig.: possible combinations of the relationship between clinical importance and statistical significance; see text for details).
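One way to operationalize the four-level scheme in the figure is to classify a result by where the point estimate and its confidence interval fall relative to the MCID. The sketch below is our reading of the framework, not code from the paper, and the example values are assumptions.

```python
# Minimal sketch (our operationalization) of the four levels of clinical
# importance: compare the point estimate and confidence limits of a
# beneficial effect (larger = better) against the MCID.

def clinical_importance(lower: float, point: float, upper: float,
                        mcid: float) -> str:
    if lower >= mcid:
        return "definitely clinically important"    # whole CI at or above MCID
    if point >= mcid:
        return "probably clinically important"      # estimate above, CI overlaps
    if upper >= mcid:
        return "possibly clinically important"      # estimate below, CI overlaps
    return "definitely not clinically important"    # whole CI below MCID

# Illustrative (assumed) values, with an MCID of a 3% absolute risk reduction:
print(clinical_importance(0.04, 0.06, 0.08, mcid=0.03))
print(clinical_importance(0.01, 0.05, 0.09, mcid=0.03))
```

Statistical significance remains a separate question, namely whether the interval excludes zero (no effect), which is why each of the four levels can occur with or without significance.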
The study result may or may not be statistically significant. A recent randomized trial44 (Fig.) demonstrated that the study intervention reduced the chance of death in absolute terms. Thus, the study result is probably clinically important, but there is still a reasonable possibility of a clinically unimportant effect.
An example of a study with results that were not statistically significant but of probable clinical importance is a randomized controlled trial45 that assessed the efficacy of nursing visits on outcomes of elderly people in the community (Fig.). The authors showed a statistically insignificant absolute decrease in the primary outcome events (combined death and hospital admissions) of 4.
Thus, while the results are not statistically significant, it is probable that the efficacy of the study intervention is clinically important. Again, the study result may or may not be statistically significant. An example of a study with statistically significant results but only possible clinical importance is a randomized placebo-controlled trial by Silverstein et al. They assessed the efficacy of misoprostol to reduce serious gastrointestinal complications in patients with rheumatoid arthritis receiving nonsteroidal anti-inflammatory drugs.
Comparing the misoprostol and placebo groups, the results showed a statistically significant 0. In their sample size calculation, the authors used a 0. Using this value as the MCID, it is possible that the efficacy of misoprostol is or is not clinically important. Even though the result was statistically significant, the confidence interval included values below the MCID. Therefore, the clinical importance of the efficacy of the study intervention still needs confirmation. An example of a study with results that were not statistically significant but had possible clinical importance is a recent randomized double-blind placebo-controlled trial47 that assessed the efficacy of pentoxifylline in the treatment of venous leg ulcers (Fig.).
Again, the study results may or may not be statistically significant. An example of a study result that is statistically significant but definitely not clinically important is a randomized placebo-controlled trial48 assessing the effect of dietary supplementation with polyunsaturated fats (Fig.).
It demonstrated an absolute decrease of 1. If one assumes that this value represents the MCID of the intervention, the magnitude of the intervention's effect is definitely not clinically important. It did not find a statistically significant benefit of the intervention (relative risk, 0.).
The lack of both statistical significance and clinical importance suggests that no further studies are required to test the efficacy of this intervention in similar clinical situations. Table 1 shows the possible benefits of this method of determining clinical importance.
Clarifying the concept of clinical importance and its relationship to statistical significance will lead to improvements in both the design and interpretation of clinical trials. In turn, this will enhance the ability of readers to interpret the results of studies from the perspective of clinical importance. When differences in opinion arise regarding the clinical importance of study results, this method may allow the reasons for those differences (e.g., differing values for the MCID) to be more easily articulated.
Thus, more explicit, orderly, and focused discussion regarding the clinical importance of study results would be promoted. Other methodological issues in the design and conduct of clinical trials could also be addressed; for example, trials could be designed with sufficient power to confirm or exclude clinically important effects rather than merely to detect statistically significant ones. Clearly, such issues have serious implications for the sample sizes, and thus the feasibility, of prospective clinical trials, and further debate regarding this issue is necessary. Also, at present, stopping rules for trials are based primarily on the statistical significance of interim results.
If stopping rules instead incorporated clinical importance, trials would not be stopped until more reliable evidence of clinically important effects is found. There are other potential benefits: researchers could use this approach in combination with cumulative meta-analyses51 to provide a clearer indication of when no further studies are needed to assess the clinical importance of therapies.
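The sample-size implication noted earlier, powering a trial around the MCID rather than around statistical significance alone, can be illustrated with a standard calculation. The formula below is the usual normal-approximation sample size for comparing two independent means; the outcome standard deviation and MCID values are illustrative assumptions, not taken from any trial discussed here.

```python
# Sketch: per-arm sample size for a two-arm trial powered to detect the
# MCID of a continuous outcome (normal approximation, two-sided alpha).
import math
from statistics import NormalDist

def n_per_arm(mcid: float, sd: float, alpha: float = 0.05,
              power: float = 0.80) -> int:
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # e.g., 1.96 for alpha=0.05
    z_beta = NormalDist().inv_cdf(power)            # e.g., 0.84 for 80% power
    n = 2 * (z_alpha + z_beta) ** 2 * (sd / mcid) ** 2
    return math.ceil(n)

# Halving the difference the trial is powered to detect quadruples the
# required sample size, which is why anchoring the design to the MCID matters.
print(n_per_arm(mcid=5.0, sd=10.0))   # 63 per arm
print(n_per_arm(mcid=2.5, sd=10.0))   # 252 per arm
```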
Also, greater emphasis on reporting clinical importance encourages more rigorous interpretation of the results of trials without statistically significant results. For example, journals may recognize studies that reliably exclude clinically important treatment effects (Fig.), and editors may be more willing to publish such manuscripts in order to discourage other investigators from embarking on similar studies.
Finally, this method of clinical importance determination may also encourage investigators to develop methodologies to empirically determine the clinically important differences of therapies. Clearly, a major methodological limitation of the above approach is the uncertainty in the judgment and determination of MCIDs.
MCID values for specific interventions will vary from person to person, depending on their own values and the perspective (individual, professional, or societal) from which they view the intervention. However, by comparing the relationship of other possible MCID values to the point estimate of the efficacy of the intervention and its confidence interval, readers can easily determine the level of clinical importance of the study result on the basis of their own, or any other, estimate of the MCID.
Interestingly, the systematic determination of clinical importance of study results can help to resolve this issue. In turn, this would allow greater confidence in the clinical interpretation based on these benchmarks. Caution must also be taken to use this approach only for the interpretation of the clinical importance of individual studies and not with recommendations for treatment.
In most instances, this is not appropriate until the totality of pertinent evidence is considered, such as whether efficacy translates into effectiveness and whether the intervention is cost-effective. Only then can treatment recommendations be made with confidence. An inherent limitation of this approach is that it does not reflect the methodologic quality of studies. There is continued need to judge the validity and generalizability of study results independent of clinical importance.
Just as the statistical significance of the results of methodologically flawed studies should be viewed with scepticism, so should their clinical importance. Depending on many factors, including the seriousness of the condition, the outcome being measured, the potential of the intervention to cause adverse consequences, and the availability of alternative treatments, the proposed standards of clinical importance may be too strict or not strict enough.
For example, suppose an initial study investigating a possible cure for Creutzfeldt-Jakob disease (a rapidly fatal condition with no known effective treatment) found a beneficial effect that was statistically significant but did not meet the criteria for definite or even probable clinical importance.
In such situations, implementation into clinical practice may be warranted even though the clinical importance of the intervention's efficacy is not fully confirmed. Alternatively, for conditions in which extremely effective therapies exist, very rigorous standards for the clinical importance of new therapies should be set.
For example, if a medication is to become an effective alternative to corticosteroids in the treatment of temporal arteritis, extreme confidence in the clinical importance of its efficacy is necessary. Rather than using cut-points to indicate definitely, probably, possibly, and definitely not clinically important, an alternative approach would be to simply present the relationship between the point estimate of the efficacy and its confidence intervals and the MCID.
This approach would avoid an important issue that presently plagues the statistical interpretation of clinical trial results—the artificial dichotomization of results into those that are statistically significant and those that are statistically insignificant. However, we believe that our proposed cut-off values, resulting in 4 levels of clinical importance, provide clinicians with the added benefit of general guidance in interpreting clinical importance.
Further debate is necessary to resolve this issue. Finally, we recognize that further development of methods to determine the clinical importance of study results is necessary. For example, based on the relationship of the confidence interval to the MCID, it is possible to determine the probability that the true value of the efficacy of the intervention meets or exceeds an a priori determined MCID, and thus quantify the concept of clinical importance.
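The quantification suggested above can be sketched directly: under a normal approximation, the probability that the true effect meets or exceeds an a priori MCID follows from the point estimate and its 95% confidence interval. This is our illustration of the idea, and the numeric inputs are assumptions.

```python
# Sketch: probability that the true effect meets or exceeds the MCID,
# treating the estimate as normally distributed and backing the standard
# error out of a 95% confidence interval.
from statistics import NormalDist

def prob_effect_at_least(point: float, lower: float, upper: float,
                         mcid: float) -> float:
    se = (upper - lower) / (2 * 1.96)  # SE implied by a 95% CI
    return 1 - NormalDist(mu=point, sigma=se).cdf(mcid)

# Assumed example: estimate 0.05 (95% CI, 0.01 to 0.09) against an MCID of 0.03.
print(round(prob_effect_at_least(0.05, 0.01, 0.09, mcid=0.03), 2))
```

For these assumed inputs the probability is roughly 0.84, i.e., a result that is "probably clinically important" in the qualitative scheme maps to a high, but not certain, probability of exceeding the MCID.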
However, the intention of this paper is to summarize previous work on the rationale and framework for the concept of clinical importance. This will hopefully encourage further theoretical and statistical developments. Systematic determination of the clinical importance of study results will increase discussion about the concept of the MCID.
Thus, it has the potential of improving the methodological rigor with which clinical trials are designed, conducted, reported, and interpreted. These potential benefits may provide greater clarity for the process in which interventions with putative clinical benefits are incorporated into or rejected from the standard care of patients.
J Gen Intern Med. Jeffery Mahon.
Copyright by the Society of General Internal Medicine.

When formulating the results section, it's important to remember that the results of a study do not prove anything.
Findings can only confirm or reject the hypothesis underpinning your study. However, the act of articulating the results helps you to understand the problem from within, to break it into pieces, and to view the research problem from various perspectives. The page length of this section is set by the amount and types of data to be reported. Be concise, using non-textual elements appropriately, such as figures and tables, to present findings more effectively.
In deciding what data to describe in your results section, you must clearly distinguish information that would normally be included in a research paper from any raw data or other content that could be included as an appendix. In general, raw data that has not been summarized should not be included in the main text of your paper unless requested to do so by your professor.
Avoid providing data that is not critical to answering the research question. The background information you described in the introduction section should provide the reader with any additional context or explanation needed to understand the results. A good strategy is to always re-read the background section of your paper after you have written up your results to ensure that the reader has enough context to understand the results [and, later, how you interpreted the results in the discussion section of your paper].
For most research papers in the social and behavioral sciences, there are two possible ways of organizing the results. Both approaches are appropriate in how you report your findings, but choose only one format to use. Just as the literature review should be arranged under conceptual categories rather than systematically describing each source, organize your findings under key themes related to addressing the research problem.
This can be done under either format noted above. In general, use the past tense when referring to your results; findings should always be described as having already happened because the method of gathering data has been completed.
There are also common pitfalls to avoid when writing the results section. It's not unusual to find articles in social science journals where the author(s) have combined a description of the findings with a discussion about their implications.
You could do this, but think of the results section as the place where you report what your study found, and the discussion section as the place where you interpret your data and answer the "So What?" question. As you become more skilled at writing research papers, you may want to meld the results of your study with a discussion of its implications.
The Results. This guide provides advice on how to develop and organize a research paper in the social and behavioral sciences.
Definition. The results section is where you report the findings of your study based upon the methodology [or methodologies] you applied to gather information.
Present a synopsis of the results followed by an explanation of key findings. This approach can be used to highlight important findings. For example, you may have noticed an unusual correlation between two variables during the analysis of your findings.
It is appropriate to point this out in the results section. However, speculating as to why this correlation exists, and offering a hypothesis about what may be happening, belongs in the discussion section of your paper.