
Sunday, July 2, 2017

Evidence-based medicine

From Wikipedia, the free encyclopedia
Evidence-based medicine (EBM) is an approach to medical practice intended to optimize decision-making by emphasizing the use of evidence from well-designed and well-conducted research. Although all medicine based on science has some degree of empirical support, EBM goes further, classifying evidence by its epistemologic strength and requiring that only the strongest types (coming from meta-analyses, systematic reviews, and randomized controlled trials) can yield strong recommendations; weaker types (such as from case-control studies) can yield only weak recommendations. The term was originally used to describe an approach to teaching the practice of medicine and improving decisions by individual physicians about individual patients.[1] Use of the term rapidly expanded to include a previously described approach that emphasized the use of evidence in the design of guidelines and policies that apply to groups of patients and populations ("evidence-based practice policies").[2] It has subsequently spread to describe an approach to decision-making that is used at virtually every level of health care as well as other fields (evidence-based practice).

Whether applied to medical education, decisions about individuals, guidelines and policies applied to populations, or administration of health services in general, evidence-based medicine advocates that to the greatest extent possible, decisions and policies should be based on evidence, not just the beliefs of practitioners, experts, or administrators. It thus tries to assure that a clinician's opinion, which may be limited by knowledge gaps or biases, is supplemented with all available knowledge from the scientific literature so that best practice can be determined and applied. It promotes the use of formal, explicit methods to analyze evidence and makes it available to decision makers. It promotes programs to teach the methods to medical students, practitioners, and policy makers.

Background, history and definition

In its broadest form, evidence-based medicine is the application of the scientific method to healthcare decision-making. Medicine has a long tradition of both basic and clinical research that dates back at least to Avicenna[3][4] and, more recently, to Protestant Reformation exegesis of the 17th and 18th centuries.[5] An early critique of statistical methods in medicine was published in 1835.[6]

However, until recently, the process by which research results were incorporated in medical decisions was highly subjective.[citation needed] Called "clinical judgment" and "the art of medicine", the traditional approach to making decisions about individual patients depended on having each individual physician determine what research evidence, if any, to consider, and how to merge that evidence with personal beliefs and other factors.[citation needed] In the case of decisions which applied to groups of patients or populations, the guidelines and policies would usually be developed by committees of experts, but there was no formal process for determining the extent to which research evidence should be considered or how it should be merged with the beliefs of the committee members.[citation needed] There was an implicit assumption that decision makers and policy makers would incorporate evidence in their thinking appropriately, based on their education, experience, and ongoing study of the applicable literature.[citation needed]

Clinical decision making

Beginning in the late 1960s, several flaws became apparent in the traditional approach to medical decision-making. Alvan Feinstein's publication of Clinical Judgment in 1967 focused attention on the role of clinical reasoning and identified biases that can affect it.[7] In 1972, Archie Cochrane published Effectiveness and Efficiency, which described the lack of controlled trials supporting many practices that had previously been assumed to be effective.[8] In 1973, John Wennberg began to document wide variations in how physicians practiced.[9] Through the 1980s, David M. Eddy described errors in clinical reasoning and gaps in evidence.[10][11][12][13] In the mid-1980s, Alvan Feinstein, David Sackett and others published textbooks on clinical epidemiology, which translated epidemiological methods to physician decision making.[14][15] Toward the end of the 1980s, a group at RAND showed that large proportions of procedures performed by physicians were considered inappropriate even by the standards of their own experts.[16] These areas of research increased awareness of the weaknesses in medical decision making at the level of both individual patients and populations, and paved the way for the introduction of evidence-based methods.

Evidence-based

The term "evidence-based medicine", as it is currently used, has two main tributaries. Chronologically, the first is the insistence on explicit evaluation of evidence of effectiveness when issuing clinical practice guidelines and other population-level policies. The second is the introduction of epidemiological methods into medical education and individual patient-level decision-making.[citation needed]

Evidence-based guidelines and policies

The term "evidence-based" was first used by David M. Eddy in the course of his work on population-level policies such as clinical practice guidelines and insurance coverage of new technologies. He first began to use the term "evidence-based" in 1987 in workshops and a manual commissioned by the Council of Medical Specialty Societies to teach formal methods for designing clinical practice guidelines. The manual was widely available in unpublished form in the late 1980s and eventually published by the American College of Medicine.[17][18] Eddy first published the term "evidence-based" in March, 1990 in an article in the Journal of the American Medical Association that laid out the principles of evidence-based guidelines and population-level policies, which Eddy described as "explicitly describing the available evidence that pertains to a policy and tying the policy to evidence. Consciously anchoring a policy, not to current practices or the beliefs of experts, but to experimental evidence. The policy must be consistent with and supported by evidence. The pertinent evidence must be identified, described, and analyzed. The policymakers must determine whether the policy is justified by the evidence. A rationale must be written."[19] He discussed "evidence-based" policies in several other papers published in JAMA in the spring of 1990.[19][20] Those papers were part of a series of 28 published in JAMA between 1990 and 1997 on formal methods for designing population-level guidelines and policies.[21]

Medical education

The term "evidence-based medicine" was introduced slightly later, in the context of medical education. This branch of evidence-based medicine has its roots in clinical epidemiology. In the autumn of 1990, Gordon Guyatt used it in an unpublished description of a program at McMaster University for prospective or new medical students.[22] Guyatt and others first published the term two years later (1992) to describe a new approach to teaching the practice of medicine.[1]

In 1996, David Sackett and colleagues clarified the definition of this tributary of evidence-based medicine as "the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients. ... [It] means integrating individual clinical expertise with the best available external clinical evidence from systematic research."[23] This branch of evidence-based medicine aims to make individual decision making more structured and objective by better reflecting the evidence from research.[24][25] Population-based data are applied to the care of an individual patient,[26] while respecting the fact that practitioners have clinical expertise reflected in effective and efficient diagnosis and thoughtful identification and compassionate use of individual patients' predicaments, rights, and preferences.[23]

This tributary of evidence-based medicine had its foundations in clinical epidemiology, a discipline that teaches health care workers how to apply clinical and epidemiological research studies to their practices. Between 1993 and 2000, the Evidence-based Medicine Working Group at McMaster University published the methods for a broad physician audience in a series of 25 "Users’ Guides to the Medical Literature" in JAMA. In 1995, Rosenberg and Donald defined individual level evidence-based medicine as "the process of finding, appraising, and using contemporaneous research findings as the basis for medical decisions."[27] In 2010, Greenhalgh used a definition that emphasized quantitative methods: "the use of mathematical estimates of the risk of benefit and harm, derived from high-quality research on population samples, to inform clinical decision-making in the diagnosis, investigation or management of individual patients."[28] Many other definitions have been offered for individual level evidence-based medicine, but the one by Sackett and colleagues is the most commonly cited.[23]

The two original definitions[which?] highlight important differences in how evidence-based medicine is applied to populations versus individuals. When designing guidelines applied to large groups of people in settings where there is relatively little opportunity for modification by individual physicians, evidence-based policymaking stresses that there should be good evidence to document a test's or treatment's effectiveness.[29] In the setting of individual decision-making, practitioners can be given greater latitude in how they interpret research and combine it with their clinical judgment.[23][30] In 2005, Eddy offered an umbrella definition for the two branches of EBM: "Evidence-based medicine is a set of principles and methods intended to ensure that to the greatest extent possible, medical decisions, guidelines, and other types of policies are based on and consistent with good evidence of effectiveness and benefit."[31]

Progress

Both branches of evidence-based medicine spread rapidly. On the evidence-based guidelines and policies side, explicit insistence on evidence of effectiveness was introduced by the American Cancer Society in 1980.[32] The U.S. Preventive Services Task Force (USPSTF) began issuing guidelines for preventive interventions based on evidence-based principles in 1984.[33] In 1985, the Blue Cross Blue Shield Association applied strict evidence-based criteria for covering new technologies.[34] Beginning in 1987, specialty societies such as the American College of Physicians, and voluntary health organizations such as the American Heart Association, wrote many evidence-based guidelines. In 1991, Kaiser Permanente, a managed care organization in the US, began an evidence-based guidelines program.[35] In 1991, Richard Smith wrote an editorial in the British Medical Journal and introduced the ideas of evidence-based policies in the UK.[36] In 1993, the Cochrane Collaboration created a network of 13 countries to produce systematic reviews and guidelines.[37] In 1997, the US Agency for Healthcare Research and Quality (then known as the Agency for Health Care Policy and Research, or AHCPR) established Evidence-based Practice Centers (EPCs) to produce evidence reports and technology assessments to support the development of guidelines.[38] In the same year, a National Guideline Clearinghouse that followed the principles of evidence-based policies was created by AHRQ, the AMA, and the American Association of Health Plans (now America's Health Insurance Plans).[39] In 1999, the National Institute for Clinical Excellence (NICE) was created in the UK.[40] A central idea of this branch of evidence-based medicine is that evidence should be classified according to the rigor of its experimental design, and the strength of a recommendation should depend on the strength of the evidence.

On the medical education side, programs to teach evidence-based medicine have been created in medical schools in Canada, the US, the UK, Australia, and other countries. A 2009 study of UK programs found that more than half of UK medical schools offered some training in evidence-based medicine, although there was considerable variation in the methods and content, and EBM teaching was restricted by lack of curriculum time, trained tutors and teaching materials.[41] Many programs have been developed to help individual physicians gain better access to evidence. For example, UpToDate was created in the early 1990s.[42] The Cochrane Center began publishing evidence reviews in 1993.[35] BMJ Publishing Group launched a 6-monthly periodical in 1995 called Clinical Evidence that provided brief summaries of the current state of evidence about important clinical questions for clinicians.[43] Since then many other programs have been developed to make evidence more accessible to practitioners.

Current practice

The term evidence-based medicine is now applied to both the programs that are designing evidence-based guidelines and the programs that teach evidence-based medicine to practitioners. By 2000, "evidence-based medicine" had become an umbrella term for the emphasis on evidence in both population-level and individual-level decisions. In subsequent years, use of the term "evidence-based" extended to other levels of the health care system. An example is "evidence-based health services", which seek to increase the competence of health service decision makers and the practice of evidence-based medicine at the organizational or institutional level.[44] The concept has also spread outside of healthcare; for example, in his 1996 inaugural speech as President of the Royal Statistical Society, Adrian Smith proposed that "evidence-based policy" should be established for education, prisons and policing policy and all areas of government work.

The multiple tributaries of evidence-based medicine share an emphasis on the importance of incorporating evidence from formal research in medical policies and decisions. However, they differ on the extent to which they require good evidence of effectiveness before promulgating a guideline or payment policy, and they differ on the extent to which it is feasible to incorporate individual-level information in decisions. Thus, evidence-based guidelines and policies may not readily 'hybridise' with experience-based practices orientated towards ethical clinical judgement, and can lead to contradictions, contest, and unintended crises.[13] The most effective 'knowledge leaders' (managers and clinical leaders) use a broad range of management knowledge in their decision making, rather than just formal evidence.[14] Evidence-based guidelines may provide the basis for governmentality in health care and consequently play a central role in the distant governance of contemporary health care systems.[15]

Methods

Steps

The steps for designing explicit, evidence-based guidelines were described in the late 1980s:[12]
  1. Formulate the question (population, intervention, comparison intervention, outcomes, time horizon, setting).
  2. Search the literature to identify studies that inform the question.
  3. Interpret each study to determine precisely what it says about the question.
  4. If several studies address the question, synthesize their results (meta-analysis).
  5. Summarize the evidence in "evidence tables".
  6. Compare the benefits, harms and costs in a "balance sheet".
  7. Draw a conclusion about the preferred practice.
  8. Write the guideline.
  9. Write the rationale for the guideline.
  10. Have others review each of the previous steps.
  11. Implement the guideline.

For the purposes of medical education and individual-level decision making, five steps of EBM in practice were described in 1992,[45] and the experience of delegates attending the 2003 Conference of Evidence-Based Health Care Teachers and Developers was summarized into five steps and published in 2005.[46] This five-step process can broadly be categorized as:
  1. Translation of uncertainty into an answerable question; this includes critical questioning, study design and levels of evidence[47]
  2. Systematic retrieval of the best evidence available[48]
  3. Critical appraisal of evidence for internal validity that can be broken down into aspects regarding:[49]
    • Systematic errors as a result of selection bias, information bias and confounding
    • Quantitative aspects of diagnosis and treatment
    • The effect size and aspects regarding its precision
    • Clinical importance of results
    • External validity or generalizability
  4. Application of results in practice[50]
  5. Evaluation of performance[51]

Evidence reviews

Systematic reviews of published research studies are a major part of the evaluation of particular treatments. The Cochrane Collaboration is one of the best-known programs that conduct systematic reviews. Like other collections of systematic reviews, it requires authors to provide a detailed and repeatable plan of their literature search and evaluations of the evidence.[52] Once all the best evidence is assessed, treatment is categorized as (1) likely to be beneficial, (2) likely to be harmful, or (3) not supported by evidence of either benefit or harm.

A 2007 analysis of 1,016 systematic reviews from all 50 Cochrane Collaboration Review Groups found that 44% of the reviews concluded that the intervention was likely to be beneficial, 7% concluded that the intervention was likely to be harmful, and 49% concluded that evidence did not support either benefit or harm. 96% recommended further research.[53] A 2001 review of 160 Cochrane systematic reviews (excluding complementary treatments) in the 1998 database revealed that, according to two readers, 41% concluded positive or possibly positive effect, 20% concluded evidence of no effect, 8% concluded net harmful effects, and 21% of the reviews concluded insufficient evidence.[54] A review of 145 alternative medicine Cochrane reviews using the 2004 database revealed that 38.4% concluded positive effect or possibly positive (12.4%) effect, 4.8% concluded no effect, 0.7% concluded harmful effect, and 56.6% concluded insufficient evidence.[55]

Assessing the quality of evidence

Evidence quality can be assessed based on the source type (from meta-analyses and systematic reviews of triple-blind randomized clinical trials with concealment of allocation and no attrition at the top end, down to conventional wisdom at the bottom), as well as other factors including statistical validity, clinical relevance, currency, and peer-review acceptance. Evidence-based medicine categorizes different types of clinical evidence and rates or grades them[56] according to the strength of their freedom from the various biases that beset medical research. For example, the strongest evidence for therapeutic interventions is provided by systematic review of randomized, triple-blind, placebo-controlled trials with allocation concealment and complete follow-up involving a homogeneous patient population and medical condition. In contrast, patient testimonials, case reports, and even expert opinion have little value as proof because of the placebo effect, the biases inherent in observation and reporting of cases, difficulties in ascertaining who is an expert, and more. (Some critics have argued, however, that expert opinion "does not belong in the rankings of the quality of empirical evidence because it does not represent a form of empirical evidence" and that "expert opinion would seem to be a separate, complex type of knowledge that would not fit into hierarchies otherwise limited to empirical evidence alone.")[57]
Several organizations have developed grading systems for assessing the quality of evidence. For example, in 1989 the U.S. Preventive Services Task Force (USPSTF) put forth the following:[58]
  • Level I: Evidence obtained from at least one properly designed randomized controlled trial.
  • Level II-1: Evidence obtained from well-designed controlled trials without randomization.
  • Level II-2: Evidence obtained from well-designed cohort studies or case-control studies, preferably from more than one center or research group.
  • Level II-3: Evidence obtained from multiple time series designs with or without the intervention. Dramatic results in uncontrolled trials might also be regarded as this type of evidence.
  • Level III: Opinions of respected authorities, based on clinical experience, descriptive studies, or reports of expert committees.
Another example is the Oxford (UK) CEBM Levels of Evidence. First released in September 2000, the Oxford CEBM Levels of Evidence provide 'levels' of evidence for claims about prognosis, diagnosis, treatment benefits, treatment harms, and screening, which most grading schemes do not address. The original CEBM Levels were developed for the Evidence-Based On Call project, to make the process of finding evidence feasible and its results explicit. In 2011, an international team redesigned the Oxford CEBM Levels to make them more understandable and to take into account recent developments in evidence ranking schemes. The Oxford CEBM Levels of Evidence have been used by patients and clinicians, as well as to develop clinical guidelines, including recommendations for the optimal use of phototherapy and topical therapy in psoriasis[59] and guidelines for the use of the BCLC staging system for diagnosing and monitoring hepatocellular carcinoma in Canada.[60]

In 2000, a system was developed by the GRADE (short for Grading of Recommendations Assessment, Development and Evaluation) working group that takes into account more dimensions than just the quality of medical research.[61] It requires users of GRADE who are performing an assessment of the quality of evidence, usually as part of a systematic review, to consider the impact of different factors on their confidence in the results. Authors of GRADE tables grade the quality of evidence into four levels, on the basis of their confidence that the observed effect (a numerical value) is close to the true effect. The confidence value is based on judgements assigned in five different domains in a structured manner.[62] The GRADE working group defines 'quality of evidence' and 'strength of recommendations' as two different concepts that are commonly confused with each other.[62]

Systematic reviews may include randomized controlled trials that have low risk of bias, or observational studies that have high risk of bias. In the case of randomized controlled trials, the quality of evidence is high, but can be downgraded in five different domains.[63]
  • Risk of bias: a judgement made on the basis of the chance that bias in the included studies has influenced the estimate of effect.
  • Imprecision: a judgement made on the basis of the chance that the observed estimate of effect could change completely.
  • Indirectness: a judgement made on the basis of differences between how the study was conducted and how the results are actually going to be applied.
  • Inconsistency: a judgement made on the basis of the variability of results across the included studies.
  • Publication bias: a judgement made on the basis of whether all the research evidence has been taken into account.
In the case of observational studies per GRADE, the quality of evidence starts off lower and may be upgraded in three domains in addition to being subject to downgrading.[63]
  • Large effect: This is when methodologically strong studies show that the observed effect is so large that the probability of it changing completely is low.
  • Plausible confounding would change the effect: This is when, despite the presence of a possible confounding factor which is expected to reduce the observed effect, the effect estimate still shows a significant effect.
  • Dose response gradient: This is when the intervention used becomes more effective with increasing dose. This suggests that a further increase would likely bring about a greater effect.
Meaning of the levels of quality of evidence as per GRADE:[62]
  • High Quality Evidence: The authors are very confident that the presented estimate lies very close to the true value. One could interpret it as "there is very low probability of further research completely changing the presented conclusions."
  • Moderate Quality Evidence: The authors are confident that the presented estimate lies close to the true value, but it is also possible that it may be substantially different. One could interpret it as "further research may completely change the conclusions."
  • Low Quality Evidence: The authors are not confident in the effect estimate and the true value may be substantially different. One could interpret it as "further research is likely to change the presented conclusions completely."
  • Very Low Quality Evidence: The authors do not have any confidence in the estimate and it is likely that the true value is substantially different from it. One could interpret it as "new research will most probably change the presented conclusions completely."

Categories of recommendations

In guidelines and other publications, a recommendation for a clinical service is classified by the balance of risk versus benefit and the level of evidence on which this information is based. The U.S. Preventive Services Task Force uses:[64]
  • Level A: Good scientific evidence suggests that the benefits of the clinical service substantially outweigh the potential risks. Clinicians should discuss the service with eligible patients.
  • Level B: At least fair scientific evidence suggests that the benefits of the clinical service outweigh the potential risks. Clinicians should discuss the service with eligible patients.
  • Level C: At least fair scientific evidence suggests that there are benefits provided by the clinical service, but the balance between benefits and risks is too close for making general recommendations. Clinicians need not offer it unless there are individual considerations.
  • Level D: At least fair scientific evidence suggests that the risks of the clinical service outweigh potential benefits. Clinicians should not routinely offer the service to asymptomatic patients.
  • Level I: Scientific evidence is lacking, of poor quality, or conflicting, such that the risk versus benefit balance cannot be assessed. Clinicians should help patients understand the uncertainty surrounding the clinical service.

GRADE guideline panelists may make strong or weak recommendations on the basis of further criteria. Some of the important criteria are the balance between desirable and undesirable effects (not considering cost), the quality of the evidence, values and preferences and costs (resource utilization).[63]

Despite the differences between systems, the purposes are the same: to guide users of clinical research information on which studies are likely to be most valid. However, the individual studies still require careful critical appraisal.

Statistical measures

Evidence-based medicine attempts to express clinical benefits of tests and treatments using mathematical methods. Tools used by practitioners of evidence-based medicine include:
  • Likelihood ratio The pre-test odds of a particular diagnosis, multiplied by the likelihood ratio, determine the post-test odds. (Odds can be calculated from, and then converted to, the more familiar probability.) This reflects Bayes' theorem. The differences in likelihood ratio between clinical tests can be used to prioritize clinical tests according to their usefulness in a given clinical situation.
  • AUC-ROC The area under the receiver operating characteristic curve (AUC-ROC) reflects the relationship between sensitivity and specificity for a given test. High-quality tests will have an AUC-ROC approaching 1, and high-quality publications about clinical tests will provide information about the AUC-ROC. Cutoff values for positive and negative tests can influence specificity and sensitivity, but they do not affect AUC-ROC.
  • Number needed to treat (NNT)/Number needed to harm (NNH). Number needed to treat and number needed to harm are ways of expressing the effectiveness and safety, respectively, of interventions in a way that is clinically meaningful. NNT is the number of people who need to be treated in order to achieve the desired outcome (e.g. survival from cancer) in one patient. For example, if a treatment increases the chance of survival by 5%, then 20 people need to be treated in order to have 1 additional patient survive due to the treatment. The concept can also be applied to diagnostic tests. For example, if 1339 women age 50–59 have to be invited for breast cancer screening over a ten-year period in order to prevent one woman from dying of breast cancer,[65] then the NNT for being invited to breast cancer screening is 1339. (The likelihood-ratio and NNT arithmetic is illustrated in the sketch after this list.)
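A minimal sketch of these two calculations, assuming nothing beyond the formulas above (Python; the function names and the 30%/LR=5 diagnostic numbers are hypothetical, while the 5-percentage-point survival example comes from the text):

```python
# Illustrative sketch: pre-test odds x likelihood ratio gives post-test odds
# (Bayes' theorem), and NNT is the reciprocal of the absolute risk reduction.

def prob_to_odds(p: float) -> float:
    return p / (1.0 - p)

def odds_to_prob(o: float) -> float:
    return o / (1.0 + o)

def post_test_probability(pre_test_prob: float, likelihood_ratio: float) -> float:
    """Convert probability to odds, apply the likelihood ratio, convert back."""
    return odds_to_prob(prob_to_odds(pre_test_prob) * likelihood_ratio)

def number_needed_to_treat(control_rate: float, treated_rate: float) -> float:
    """NNT = 1 / absolute risk reduction."""
    return 1.0 / abs(control_rate - treated_rate)

# Hypothetical numbers: 30% pre-test probability, positive likelihood ratio of 5.
print(round(post_test_probability(0.30, 5.0), 2))      # 0.68

# The article's example: treatment raises survival from, say, 70% to 75%.
print(round(number_needed_to_treat(0.70, 0.75)))       # 20
```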

Quality of clinical trials

Evidence-based medicine attempts to objectively evaluate the quality of clinical research by critically assessing techniques reported by researchers in their publications.
  • Trial design considerations. High-quality studies have clearly defined eligibility criteria and have minimal missing data.
  • Generalizability considerations. Studies may only be applicable to narrowly defined patient populations and may not be generalizable to other clinical contexts.
  • Follow-up. Whether sufficient time was allowed for defined outcomes to occur can influence the observed study outcomes and the statistical power of a study to detect differences between a treatment and control arm.
  • Power. A mathematical calculation can determine if the number of patients is sufficient to detect a difference between treatment arms. A negative study may reflect a lack of benefit, or simply a lack of a sufficient number of patients to detect a difference (see the sample-size sketch after this list).
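As a rough sketch of such a power calculation (Python; the standard normal-approximation formula for comparing two proportions, with example survival rates that are assumptions for illustration), detecting even the 5-percentage-point survival difference used as an example above requires on the order of a thousand patients per arm:

```python
import math
from statistics import NormalDist

def sample_size_per_arm(p1: float, p2: float,
                        alpha: float = 0.05, power: float = 0.80) -> int:
    """Normal-approximation sample size per arm for comparing two proportions."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided significance threshold
    z_beta = z.inv_cdf(power)            # quantile for the desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Detecting 70% vs 75% survival takes ~1,250 patients per arm, so a small
# "negative" trial may simply have been underpowered.
print(sample_size_per_arm(0.70, 0.75))
```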

Limitations and criticism

Although evidence-based medicine is regarded as the gold standard of clinical practice, there are a number of limitations and criticisms of its use.[66] Two widely cited categorization schemes for the various published critiques of EBM include the three-fold division of Straus and McAlister ("limitations universal to the practice of medicine, limitations unique to evidence-based medicine and misperceptions of evidence-based medicine")[67] and the five-point categorization of Cohen, Stavri and Hersh (EBM is a poor philosophic basis for medicine, defines evidence too narrowly, is not evidence-based, is limited in usefulness when applied to individual patients, or reduces the autonomy of the doctor/patient relationship).[68]

In no particular order, some published objections include:
  • The theoretical ideal of EBM (that every narrow clinical question, of which hundreds of thousands can exist, would be answered by meta-analysis and systematic reviews of multiple RCTs) faces the limitation that research (especially the RCTs themselves) is expensive; thus, in reality, for the foreseeable future, there will always be much more demand for EBM than supply, and the best humanity can do is to triage the application of scarce resources.
  • Research produced by EBM, such as from randomized controlled trials (RCTs), may not be relevant for all treatment situations.[69] Research tends to focus on specific populations, but individual persons can vary substantially from population norms. Since certain population segments have been historically under-researched (racial minorities and people with co-morbid diseases), evidence from RCTs may not be generalizable to those populations.[70] Thus EBM applies to groups of people, but this should not preclude clinicians from using their personal experience in deciding how to treat each patient. One author advises that "the knowledge gained from clinical research does not directly answer the primary clinical question of what is best for the patient at hand" and suggests that evidence-based medicine should not discount the value of clinical experience.[57] Another author stated that "the practice of evidence-based medicine means integrating individual clinical expertise with the best available external clinical evidence from systematic research."[71]
  • Research can be influenced by biases such as publication bias and conflict of interest. For example, studies with conflicts due to industry funding are more likely to favor their product.[72][73]
  • There is a lag between when the RCT is conducted and when its results are published.[74]
  • There is a lag between when results are published and when these are properly applied.[75]
  • Hypocognition (the absence of a simple, consolidated mental framework that new information can be placed into) can hinder the application of EBM.[76]
  • Values: while patient values are considered in the original definition of EBM, the importance of values is not commonly emphasized in EBM training, a potential problem under current study.[77][78]

Application of evidence in clinical settings

One of the ongoing challenges with evidence-based medicine is that some healthcare providers do not follow the evidence. This happens partly because the current balance of evidence for and against treatments shifts constantly, and it is impossible to learn about every change.[79] Even when the evidence is unequivocally against a treatment, it usually takes ten years for other treatments to be adopted.[79] In other cases, significant change can require a generation of physicians to retire or die, and be replaced by physicians who were trained with more recent evidence.[79]

Another major cause of physicians and other healthcare providers treating patients in ways unsupported by the evidence is that these healthcare providers are subject to the same cognitive biases as all other humans. They may reject the evidence because they have a vivid memory of a rare but shocking outcome (the availability heuristic), such as a patient dying after refusing treatment.[79] They may overtreat to "do something" or to address a patient's emotional needs.[79] They may worry about malpractice charges based on a discrepancy between what the patient expects and what the evidence recommends.[79] They may also overtreat or provide ineffective treatments because the treatment feels biologically plausible.[79]

Education

The Berlin questionnaire and the Fresno Test[80][81] are the most validated instruments for assessing the effectiveness of education in evidence-based medicine.[82][83] These questionnaires have been used in diverse settings.[84][85]

A Campbell systematic review that included 24 trials examined the effectiveness of e-learning in improving evidence-based health care knowledge and practice. It was found that e-learning, compared to no learning, improves evidence-based health care knowledge and skills but not attitudes and behaviour. There is no difference in outcomes when comparing e-learning to face-to-face learning. Combining e-learning with face-to-face learning (blended learning) has a positive impact on evidence-based knowledge, skills, attitude and behaviour.

Consilience

From Wikipedia, the free encyclopedia
In science and history, consilience (also convergence of evidence or concordance of evidence) refers to the principle that evidence from independent, unrelated sources can "converge" to strong conclusions. That is, when multiple sources of evidence are in agreement, the conclusion can be very strong even when none of the individual sources of evidence is significantly so on its own. Most established scientific knowledge is supported by a convergence of evidence: if not, the evidence is comparatively weak, and there will not likely be a strong scientific consensus.

The principle is based on the unity of knowledge; measuring the same result by several different methods should lead to the same answer. For example, it should not matter whether one measures the distance between the Great Pyramids of Giza by laser rangefinding, by satellite imaging, or with a meter stick – in all three cases, the answer should be approximately the same. For the same reason, different dating methods in geochronology should concur, a result in chemistry should not contradict a result in geology, etc.

The word consilience was originally coined as the phrase "consilience of inductions" by William Whewell ("consilience" refers to a "jumping together" of knowledge).[1][2] The word comes from Latin com- "together" and -siliens "jumping" (as in resilience).[3]

Description

Consilience requires the use of independent methods of measurement, meaning that the methods have few shared characteristics. That is, the mechanism by which the measurement is made is different; each method is dependent on an unrelated natural phenomenon. For example, the accuracy of laser rangefinding measurements is based on the scientific understanding of lasers, while satellite pictures and meter sticks rely on different phenomena. Because the methods are independent, when one of several methods is in error, it is very unlikely to be in error in the same way as any of the other methods, and a difference between the measurements will be observed.[note 1] If the scientific understanding of the properties of lasers were inaccurate, then the laser measurement would be inaccurate but the others would not.

As a result, when several different methods agree, this is strong evidence that none of the methods are in error and the conclusion is correct. This is because of a greatly reduced likelihood of errors: for a consensus estimate from multiple measurements to be wrong, the errors would have to be similar for all samples and all methods of measurement, which is extremely unlikely. Random errors will tend to cancel out as more measurements are made, due to regression to the mean; systematic errors will be detected by differences between the measurements (and will also tend to cancel out since the direction of the error will still be random). This is how scientific theories reach high confidence – over time, they build up a large degree of evidence which converges on the same conclusion.[note 2]
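A minimal Monte Carlo sketch of this argument (Python; the three "methods", their noise levels, and the one deliberately biased method are hypothetical illustrations, not real instruments): random error averages away within each method as measurements accumulate, while a systematic error in one method shows up as a gap between that method and the others.

```python
# Illustrative simulation: three independent measurement methods of one true
# quantity. Random errors cancel with repetition; a systematic bias in a
# single method is exposed by its disagreement with the other methods.
import random

random.seed(42)
TRUE_VALUE = 100.0

def measure(bias: float, noise: float, n: int) -> float:
    """Mean of n noisy readings from a method with a fixed systematic bias."""
    return sum(TRUE_VALUE + bias + random.gauss(0, noise) for _ in range(n)) / n

methods = {"laser": 0.0, "satellite": 0.0, "meter stick": 2.5}  # one is biased
for name, bias in methods.items():
    print(name, round(measure(bias, noise=5.0, n=10_000), 2))
# The two unbiased methods converge on ~100; the biased one sits ~2.5 away,
# so the between-method difference reveals the systematic error.
```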

When results from different strong methods do appear to conflict, this is treated as a serious problem to be reconciled. For example, in the 19th century, the Sun appeared to be no more than 20 million years old, but the Earth appeared to be no less than 300 million years (resolved by the discovery of nuclear fusion and radioactivity, and the theory of quantum mechanics);[4] or current attempts to resolve theoretical differences between quantum mechanics and general relativity.[5]

Significance

Because of consilience, the strength of evidence for any particular conclusion is related to how many independent methods are supporting the conclusion, as well as how different these methods are. Those techniques with the fewest (or no) shared characteristics provide the strongest consilience and result in the strongest conclusions. This also means that confidence is usually strongest when considering evidence from different fields, because the techniques are usually very different.

For example, the theory of evolution is supported by a convergence of evidence from genetics, molecular biology, paleontology, geology, biogeography, comparative anatomy, comparative physiology, and many other fields.[6] In fact, the evidence within each of these fields is itself a convergence providing evidence for the theory. (As a result, to disprove evolution, most or all of these independent lines of evidence would have to be found to be in error.[2]) The strength of the evidence, considered together as a whole, results in the strong scientific consensus that the theory is correct.[6] In a similar way, evidence about the history of the universe is drawn from astronomy, astrophysics, planetary geology, and physics.[2]

Finding similar conclusions from multiple independent methods is also evidence for the reliability of the methods themselves, because consilience eliminates the possibility of all potential errors that do not affect all the methods equally. This is also used for the validation of new techniques through comparison with the consilient ones. If only partial consilience is observed, this allows for the detection of errors in methodology; any weaknesses in one technique can be compensated for by the strengths of the others. Alternatively, if using more than one or two techniques for every experiment is infeasible, some of the benefits of consilience may still be obtained if it is well-established that these techniques usually give the same result.

Consilience is important across all of science, including the social sciences,[7] and is often used as an argument for scientific realism by philosophers of science. Each branch of science studies a subset of reality that depends on factors studied in other branches. Atomic physics underlies the workings of chemistry, which studies emergent properties that in turn are the basis of biology. Psychology is not separate from the study of properties emergent from the interaction of neurons and synapses. Sociology, economics, and anthropology are each, in turn, studies of properties emergent from the interaction of countless individual humans. The concept that all the different areas of research are studying one real, existing universe is an apparent explanation of why scientific knowledge determined in one field of inquiry has often helped in understanding other fields.

Deviations

Consilience does not forbid deviations: in fact, since not all experiments are perfect, some deviations from established knowledge are expected. However, when the convergence is strong enough, then new evidence inconsistent with the previous conclusion is not usually enough to outweigh that convergence. Without an equally strong convergence on the new result, the weight of evidence will still favor the established result. This means that the new evidence is most likely to be wrong.
Science denialism (for example, AIDS denialism) is often based on a misunderstanding of this property of consilience. A denier may promote small gaps not yet accounted for by the consilient evidence, or small amounts of evidence contradicting a conclusion without accounting for the pre-existing strength resulting from consilience. More generally, to insist that all evidence converge precisely with no deviations would be naïve falsificationism,[8] equivalent to considering a single contrary result to falsify a theory when another explanation, such as equipment malfunction or misinterpretation of results, is much more likely.[8][note 3]

In history

Historical evidence also converges in an analogous way. For example: if five ancient historians, none of whom knew each other, all claim that Julius Caesar seized power in Rome in 49 BCE, this is strong evidence in favor of that event occurring even if each individual historian is only partially reliable. By contrast, if the same historian had made the same claim five times in five different places (and no other types of evidence were available), the claim is much weaker because it originates from a single source. The evidence from the ancient historians could also converge with evidence from other fields, such as archaeology: for example, evidence that many senators fled Rome at the time, that the battles of Caesar’s civil war occurred, and so forth.
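As a rough numerical illustration of why independence matters (the reliability figure is hypothetical, not from the source): if each of five independent historians is correct with probability p = 0.8, and their errors are assumed independent, then
$$P(\text{all five wrong}) = (1-p)^{5} = (0.2)^{5} = 0.00032,$$
whereas a single source repeated five times still fails with probability
$$P(\text{source wrong}) = 1-p = 0.2,$$
more than 600 times as often.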

Consilience has also been discussed in reference to Holocaust denial.
"We [have now discussed] eighteen proofs all converging on one conclusion...the deniers shift the burden of proof to historians by demanding that each piece of evidence, independently and without corroboration between them, prove the Holocaust. Yet no historian has ever claimed that one piece of evidence proves the Holocaust. We must examine the collective whole."[2]
That is, individually the evidence may underdetermine the conclusion, but together they overdetermine it. A similar way to state this is that to ask for one particular piece of evidence in favor of a conclusion is a flawed question.[6][9]

Outside the sciences

In addition to the sciences, consilience can be important to the arts, ethics and religion. Both artists and scientists have identified the importance of biology in the process of artistic innovation.[1]

History of the concept

Consilience has its roots in the ancient Greek concept of an intrinsic orderliness that governs our cosmos, inherently comprehensible by logical process, a vision at odds with mystical views in many cultures that surrounded the Hellenes. The rational view was recovered during the high Middle Ages, separated from theology during the Renaissance and found its apogee in the Age of Enlightenment.[1]
Whewell’s definition was that:[10]
The Consilience of Inductions takes place when an Induction, obtained from one class of facts, coincides with an Induction obtained from another different class. Thus Consilience is a test of the truth of the Theory in which it occurs.
More recent descriptions include:
"Where there is convergence of evidence, where the same explanation is implied, there is increased confidence in the explanation. Where there is divergence, then either the explanation is at fault or one or more of the sources of information is in error or requires reinterpretation."[11]
"Proof is derived through a convergence of evidence from numerous lines of inquiry--multiple, independent inductions, all of which point to an unmistakable conclusion."[6]

Edward O. Wilson

Although the concept of consilience in Whewell's sense was widely discussed by philosophers of science, the term was unfamiliar to the broader public until the end of the 20th century, when it was revived in Consilience: The Unity of Knowledge, a 1998 book by the humanist biologist Edward Osborne Wilson, as an attempt to bridge the culture gap between the sciences and the humanities that was the subject of C. P. Snow's The Two Cultures and the Scientific Revolution (1959).[1]

Wilson held that with the rise of the modern sciences, the sense of unity gradually was lost in the increasing fragmentation and specialization of knowledge in the last two centuries. He asserted that the sciences, humanities, and arts have a common goal: to give a purpose to understanding the details, to lend to all inquirers "a conviction, far deeper than a mere working proposition, that the world is orderly and can be explained by a small number of natural laws." Wilson's concept is a much broader notion of consilience than that of Whewell, who was merely pointing out that generalizations invented to account for one set of phenomena often account for others as well.[1]

A parallel view lies in the term universology, which literally means "the science of the universe." Universology was first promoted for the study of the interconnecting principles and truths of all domains of knowledge by Stephen Pearl Andrews, a 19th-century utopian futurist and anarchist.

Chemical equilibrium

From Wikipedia, the free encyclopedia

In a chemical reaction, chemical equilibrium is the state in which both reactants and products are present in concentrations which have no further tendency to change with time.[1] Usually, this state results when the forward reaction proceeds at the same rate as the reverse reaction. The reaction rates of the forward and backward reactions are generally not zero, but equal. Thus, there are no net changes in the concentrations of the reactant(s) and product(s). Such a state is known as dynamic equilibrium.[2][3]

Historical introduction

[Image: A burette, a common laboratory apparatus for carrying out titration, an important experimental technique in equilibrium and analytical chemistry.]

The concept of chemical equilibrium was developed after Berthollet (1803) found that some chemical reactions are reversible. For any reaction mixture to exist at equilibrium, the rates of the forward and backward (reverse) reactions are equal. In the following chemical equation with arrows pointing both ways to indicate equilibrium, A and B are reactant chemical species, S and T are product species, and α, β, σ, and τ are the stoichiometric coefficients of the respective reactants and products:
α A + β B ⇌ σ S + τ T
The equilibrium concentration position of a reaction is said to lie "far to the right" if, at equilibrium, nearly all the reactants are consumed. Conversely the equilibrium position is said to be "far to the left" if hardly any product is formed from the reactants.

Guldberg and Waage (1865), building on Berthollet’s ideas, proposed the law of mass action:
$$\text{forward reaction rate} = k_{+}\{\mathrm{A}\}^{\alpha}\{\mathrm{B}\}^{\beta}$$
$$\text{backward reaction rate} = k_{-}\{\mathrm{S}\}^{\sigma}\{\mathrm{T}\}^{\tau}$$
where A, B, S and T are active masses and k+ and k− are rate constants. Since at equilibrium forward and backward rates are equal:
$$k_{+}\{\mathrm{A}\}^{\alpha}\{\mathrm{B}\}^{\beta} = k_{-}\{\mathrm{S}\}^{\sigma}\{\mathrm{T}\}^{\tau}$$
and the ratio of the rate constants is also a constant, now known as an equilibrium constant:
$$K_{c} = \frac{k_{+}}{k_{-}} = \frac{\{\mathrm{S}\}^{\sigma}\{\mathrm{T}\}^{\tau}}{\{\mathrm{A}\}^{\alpha}\{\mathrm{B}\}^{\beta}}$$
By convention the products form the numerator. However, the law of mass action is valid only for concerted one-step reactions that proceed through a single transition state and is not valid in general because rate equations do not, in general, follow the stoichiometry of the reaction as Guldberg and Waage had proposed (see, for example, nucleophilic aliphatic substitution by SN1 or reaction of hydrogen and bromine to form hydrogen bromide). Equality of forward and backward reaction rates, however, is a necessary condition for chemical equilibrium, though it is not sufficient to explain why equilibrium occurs.

Despite the failure of this derivation, the equilibrium constant for a reaction is indeed a constant, independent of the activities of the various species involved, though it does depend on temperature as observed by the van 't Hoff equation. Adding a catalyst will affect both the forward reaction and the reverse reaction in the same way and will not have an effect on the equilibrium constant. The catalyst will speed up both reactions thereby increasing the speed at which equilibrium is reached.[2][4]
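To make the rate picture concrete, here is a minimal numerical sketch (Python; the rate constants, initial concentrations, and the simple Euler integration are illustrative assumptions, not a rigorous solver): integrating the mass-action rate equations for A + B ⇌ S + T drives the mixture to a state where forward and backward rates match and the concentration ratio equals Kc = k+/k−.

```python
# Illustrative sketch: A + B <=> S + T with all stoichiometric coefficients 1.
# Concentrations plateau where forward and backward rates are equal, and
# [S][T]/([A][B]) approaches K_c = k_plus / k_minus. All values are made up.
k_plus, k_minus = 2.0, 0.5     # forward / backward rate constants
a = b = 1.0                    # initial concentrations of A and B
s = t = 0.0                    # no products initially
dt = 1e-4
for _ in range(200_000):       # integrate to t = 20, well past equilibration
    net_rate = k_plus * a * b - k_minus * s * t
    a -= net_rate * dt
    b -= net_rate * dt
    s += net_rate * dt
    t += net_rate * dt

print(round(k_plus * a * b, 4), round(k_minus * s * t, 4))  # equal rates: 0.2222 0.2222
print(round((s * t) / (a * b), 3), k_plus / k_minus)        # 4.0 == K_c
```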

Although the macroscopic equilibrium concentrations are constant in time, reactions do occur at the molecular level. For example, in the case of acetic acid dissolved in water and forming acetate and hydronium ions,
CH3CO2H + H2O ⇌ CH3CO2− + H3O+
a proton may hop from one molecule of acetic acid on to a water molecule and then on to an acetate anion to form another molecule of acetic acid, leaving the number of acetic acid molecules unchanged. This is an example of dynamic equilibrium. Equilibria, like the rest of thermodynamics, are statistical phenomena, averages of microscopic behavior.

Le Châtelier's principle (1884) gives an idea of the behavior of an equilibrium system when changes to its reaction conditions occur. If a dynamic equilibrium is disturbed by changing the conditions, the position of equilibrium moves to partially reverse the change. For example, adding more S from the outside will cause an excess of products, and the system will try to counteract this by increasing the reverse reaction and pushing the equilibrium point backward (though the equilibrium constant will stay the same).

If mineral acid is added to the acetic acid mixture, increasing the concentration of hydronium ion, the amount of dissociation must decrease as the reaction is driven to the left in accordance with this principle. This can also be deduced from the equilibrium constant expression for the reaction:
$$K = \frac{\{\mathrm{CH_{3}CO_{2}^{-}}\}\{\mathrm{H_{3}O^{+}}\}}{\{\mathrm{CH_{3}CO_{2}H}\}}$$
If {H3O+} increases, {CH3CO2H} must increase and {CH3CO2−} must decrease. The H2O is left out, as it is the solvent and its concentration remains high and nearly constant.

A quantitative version is given by the reaction quotient.
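A short numerical sketch of this effect (Python; the 0.1 M concentration and the amount of added mineral acid are illustrative assumptions, while Ka ≈ 1.8 × 10⁻⁵ is the textbook value for acetic acid): solving the equilibrium expression shows the dissociated fraction dropping sharply once extra hydronium is supplied.

```python
# Illustrative sketch: dissociation of acetic acid with and without added
# strong acid. With added hydronium h0, the equilibrium condition
# Ka = x(x + h0)/(c - x) rearranges to x^2 + (Ka + h0)x - Ka*c = 0.
import math

KA = 1.8e-5   # acid dissociation constant of acetic acid (approximate)

def dissociated_fraction(c: float, h0: float = 0.0) -> float:
    bcoef = KA + h0
    x = (-bcoef + math.sqrt(bcoef**2 + 4 * KA * c)) / 2  # positive root
    return x / c

print(round(dissociated_fraction(0.1), 4))             # 0.0133 (no added acid)
print(round(dissociated_fraction(0.1, h0=0.01), 5))    # 0.00177, about 1/8 as much
```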

J. W. Gibbs suggested in 1873 that equilibrium is attained when the Gibbs free energy of the system is at its minimum value (assuming the reaction is carried out at constant temperature and pressure). What this means is that the derivative of the Gibbs energy with respect to reaction coordinate (a measure of the extent of reaction that has occurred, ranging from zero for all reactants to a maximum for all products) vanishes, signalling a stationary point. This derivative is called the reaction Gibbs energy (or energy change) and corresponds to the difference between the chemical potentials of reactants and products at the composition of the reaction mixture.[1] This criterion is both necessary and sufficient. If a mixture is not at equilibrium, the liberation of the excess Gibbs energy (or Helmholtz energy at constant volume reactions) is the "driving force" for the composition of the mixture to change until equilibrium is reached. The equilibrium constant can be related to the standard Gibbs free energy change for the reaction by the equation
$$\Delta_{\mathrm{r}}G^{\ominus} = -RT\ln K_{\mathrm{eq}}$$
where R is the universal gas constant and T the temperature.
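A few lines suffice to move between the two quantities (Python; the ΔrG° values are made up for illustration): Keq = exp(−ΔrG°/RT), so a negative standard Gibbs energy change corresponds to a product-favored equilibrium.

```python
# Illustrative sketch: equilibrium constant from the standard Gibbs energy
# change, K_eq = exp(-dG0 / (R*T)). The dG0 values are hypothetical.
import math

R = 8.314      # universal gas constant, J/(mol*K)
T = 298.15     # temperature, K

def k_eq(delta_g0: float) -> float:
    """Equilibrium constant from a standard Gibbs energy change in J/mol."""
    return math.exp(-delta_g0 / (R * T))

print(f"{k_eq(-20_000):.1f}")   # dG0 = -20 kJ/mol -> K ~ 3.2e3 (product-favored)
print(f"{k_eq(+20_000):.2e}")   # dG0 = +20 kJ/mol -> K ~ 3.1e-4 (reactant-favored)
```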

When the reactants are dissolved in a medium of high ionic strength the quotient of activity coefficients may be taken to be constant. In that case the concentration quotient, Kc,
$$K_{\mathrm{c}} = \frac{[\mathrm{S}]^{\sigma}[\mathrm{T}]^{\tau}}{[\mathrm{A}]^{\alpha}[\mathrm{B}]^{\beta}}$$
where [A] is the concentration of A, etc., is independent of the analytical concentration of the reactants. For this reason, equilibrium constants for solutions are usually determined in media of high ionic strength. Kc varies with ionic strength, temperature and pressure (or volume). Likewise Kp for gases depends on partial pressure. These constants are easier to measure and are the ones encountered in high-school chemistry courses.

Thermodynamics

At constant temperature and pressure, one must consider the Gibbs free energy, G, for the reaction; at constant temperature and volume, the Helmholtz free energy, A; and at constant internal energy and volume, the entropy, S.

The constant volume case is important in geochemistry and atmospheric chemistry where pressure variations are significant. Note that, if reactants and products were in standard state (completely pure), then there would be no reversibility and no equilibrium. Indeed, they would necessarily occupy disjoint volumes of space. The mixing of the products and reactants contributes a large entropy (known as entropy of mixing) to states containing equal mixture of products and reactants. The standard Gibbs energy change, together with the Gibbs energy of mixing, determine the equilibrium state.[5][6]

In this article only the constant pressure case is considered. The relation between the Gibbs free energy and the equilibrium constant can be found by considering chemical potentials.[1]

At constant temperature and pressure, the Gibbs free energy, G, for the reaction depends only on the extent of reaction, ξ (Greek letter xi), and can only decrease according to the second law of thermodynamics. This means that the derivative of G with respect to ξ must be negative if the reaction proceeds; at equilibrium the derivative is equal to zero.
$$\left(\frac{dG}{d\xi}\right)_{T,p} = 0 \qquad \text{(equilibrium)}$$
In order to meet the thermodynamic condition for equilibrium, the Gibbs energy must be stationary, meaning that the derivative of G with respect to the extent of reaction, ξ, must be zero. It can be shown that in this case, the sum of the chemical potentials of the products is equal to the sum of those corresponding to the reactants. Therefore, the sum of the Gibbs energies of the reactants must be equal to the sum of the Gibbs energies of the products:
$$\alpha\mu_{\mathrm{A}} + \beta\mu_{\mathrm{B}} = \sigma\mu_{\mathrm{S}} + \tau\mu_{\mathrm{T}}$$
where μ is in this case a partial molar Gibbs energy, a chemical potential. The chemical potential of a reagent A is a function of the activity, {A}, of that reagent:
$$\mu_{\mathrm{A}} = \mu_{\mathrm{A}}^{\ominus} + RT\ln\{\mathrm{A}\}$$
where $\mu_{\mathrm{A}}^{\ominus}$ is the standard chemical potential.

The definition of the Gibbs energy equation interacts with the fundamental thermodynamic relation to produce
$$dG = V\,dp - S\,dT + \sum_{i=1}^{k}\mu_{i}\,dN_{i}.$$
Inserting dNi = νi dξ into the above equation gives the stoichiometric coefficient ($\nu_{i}$) and a differential that denotes the reaction occurring once ($d\xi$). At constant pressure and temperature the above equations can be written as
$$\left(\frac{dG}{d\xi}\right)_{T,p} = \sum_{i=1}^{k}\mu_{i}\nu_{i} = \Delta_{\mathrm{r}}G_{T,p}$$
which is the Gibbs free energy change for the reaction.
This results in:
{\displaystyle \Delta _{\mathrm {r} }G_{T,p}=\sigma \mu _{\mathrm {S} }+\tau \mu _{\mathrm {T} }-\alpha \mu _{\mathrm {A} }-\beta \mu _{\mathrm {B} }\,}.
By substituting the chemical potentials:
{\displaystyle \Delta _{\mathrm {r} }G_{T,p}=(\sigma \mu _{\mathrm {S} }^{\ominus }+\tau \mu _{\mathrm {T} }^{\ominus })-(\alpha \mu _{\mathrm {A} }^{\ominus }+\beta \mu _{\mathrm {B} }^{\ominus })+(\sigma RT\ln\{\mathrm {S} \}+\tau RT\ln\{\mathrm {T} \})-(\alpha RT\ln\{\mathrm {A} \}+\beta RT\ln\{\mathrm {B} \})},
the relationship becomes:
{\displaystyle \Delta _{\mathrm {r} }G_{T,p}=\sum _{i=1}^{k}\mu _{i}^{\ominus }\nu _{i}+RT\ln {\frac {\{\mathrm {S} \}^{\sigma }\{\mathrm {T} \}^{\tau }}{\{\mathrm {A} \}^{\alpha }\{\mathrm {B} \}^{\beta }}}}
where
{\displaystyle \sum _{i=1}^{k}\mu _{i}^{\ominus }\nu _{i}=\Delta _{\mathrm {r} }G^{\ominus }}
is the standard Gibbs energy change for the reaction, which can be calculated using thermodynamical tables. The reaction quotient is defined as:
{\displaystyle Q_{\mathrm {r} }={\frac {\{\mathrm {S} \}^{\sigma }\{\mathrm {T} \}^{\tau }}{\{\mathrm {A} \}^{\alpha }\{\mathrm {B} \}^{\beta }}}}
Therefore,
{\displaystyle \left({\frac {dG}{d\xi }}\right)_{T,p}=\Delta _{\mathrm {r} }G_{T,p}=\Delta _{\mathrm {r} }G^{\ominus }+RT\ln Q_{\mathrm {r} }}
At equilibrium:
{\displaystyle \left({\frac {dG}{d\xi }}\right)_{T,p}=\Delta _{\mathrm {r} }G_{T,p}=0}
leading to:
{\displaystyle 0=\Delta _{\mathrm {r} }G^{\ominus }+RT\ln K_{\mathrm {eq} }}
and
{\displaystyle \Delta _{\mathrm {r} }G^{\ominus }=-RT\ln K_{\mathrm {eq} }}
Obtaining the value of the standard Gibbs energy change allows the calculation of the equilibrium constant.
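As a numerical sketch of this relation (the values of Keq and ΔrG⊖ below are hypothetical, not taken from tables):

import math

R = 8.314462618      # J/(mol*K), universal gas constant
T = 298.15           # K

Keq = 1.0e5          # hypothetical equilibrium constant
dG0 = -R * T * math.log(Keq)
print(f"Delta_r G standard = {dG0/1000:.1f} kJ/mol")    # about -28.5 kJ/mol

# and in reverse, Keq from a (hypothetical) standard Gibbs energy change
dG0 = -20.0e3        # J/mol
print(f"Keq = {math.exp(-dG0 / (R * T)):.3g}")          # about 3.2e3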

Addition of reactants or products

For a reaction system at equilibrium: Qr = Keq; ξ = ξeq.
  • If the activities of the constituents are modified, the value of the reaction quotient changes and becomes different from the equilibrium constant: Qr ≠ Keq
{\displaystyle \left({\frac {dG}{d\xi }}\right)_{T,p}=\Delta _{\mathrm {r} }G^{\ominus }+RT\ln Q_{\mathrm {r} }~}
and
{\displaystyle \Delta _{\mathrm {r} }G^{\ominus }=-RT\ln K_{eq}~}
then
{\displaystyle \left({\frac {dG}{d\xi }}\right)_{T,p}=RT\ln \left({\frac {Q_{\mathrm {r} }}{K_{\mathrm {eq} }}}\right)~}
  • If the activity of a reagent i increases, the reaction quotient
{\displaystyle Q_{\mathrm {r} }={\frac {\prod (a_{j})^{\nu _{j}}}{\prod (a_{i})^{\nu _{i}}}}~}
decreases.
then
{\displaystyle Q_{\mathrm {r} }<K_{\mathrm {eq} }~}     and     \left({\frac {dG}{d\xi }}\right)_{T,p}<0~
The reaction will shift to the right (i.e. in the forward direction, and thus more products will form).
  • If activity of a product j increases
then
{\displaystyle Q_{\mathrm {r} }>K_{\mathrm {eq} }~}     and     \left({\frac {dG}{d\xi }}\right)_{T,p}>0~
The reaction will shift to the left (i.e. in the reverse direction, and thus less product will form).
Note that activities and equilibrium constants are dimensionless numbers.
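The sign of this derivative predicts the direction in which the system moves toward equilibrium. A minimal sketch, with hypothetical values of Qr and Keq:

import math

R, T = 8.314462618, 298.15    # J/(mol*K), K

def dG_dxi(Qr, Keq):
    """(dG/dxi) at constant T and p, from RT ln(Qr/Keq)."""
    return R * T * math.log(Qr / Keq)

# Qr < Keq: derivative negative, reaction shifts right (forward)
print(dG_dxi(Qr=0.5, Keq=10.0))    # about -7400 J/mol
# Qr > Keq: derivative positive, reaction shifts left (reverse)
print(dG_dxi(Qr=50.0, Keq=10.0))   # about +4000 J/mol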

Treatment of activity

The expression for the equilibrium constant can be rewritten as the product of a concentration quotient, Kc and an activity coefficient quotient, Γ.
{\displaystyle K={\frac {[\mathrm {S} ]^{\sigma }[\mathrm {T} ]^{\tau }...}{[\mathrm {A} ]^{\alpha }[\mathrm {B} ]^{\beta }...}}\times {\frac {{\gamma _{\mathrm {S} }}^{\sigma }{\gamma _{\mathrm {T} }}^{\tau }...}{{\gamma _{\mathrm {A} }}^{\alpha }{\gamma _{\mathrm {B} }}^{\beta }...}}=K_{\mathrm {c} }\Gamma }
[A] is the concentration of reagent A, etc. It is possible in principle to obtain values of the activity coefficients, γ. For solutions, equations such as the Debye–Hückel equation, or extensions such as the Davies equation,[7] specific ion interaction theory or Pitzer equations,[8] may be used (see Software, below). However, this is not always possible. It is common practice to assume that Γ is a constant, and to use the concentration quotient in place of the thermodynamic equilibrium constant. It is also general practice to use the term equilibrium constant instead of the more accurate concentration quotient. This practice will be followed here.

For reactions in the gas phase partial pressure is used in place of concentration and fugacity coefficient in place of activity coefficient. In the real world, for example, when making ammonia in industry, fugacity coefficients must be taken into account. Fugacity, f, is the product of partial pressure and fugacity coefficient. The chemical potential of a species in the gas phase is given by
{\displaystyle \mu =\mu ^{\ominus }+RT\ln \left({\frac {f}{\mathrm {bar} }}\right)=\mu ^{\ominus }+RT\ln \left({\frac {p}{\mathrm {bar} }}\right)+RT\ln \gamma }
so the general expression defining an equilibrium constant is valid for both solution and gas phases.
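As a sketch of this bookkeeping (the partial pressure and fugacity coefficient below are assumed illustrative values, not data for a real gas):

import math

R, T = 8.314462618, 500.0    # J/(mol*K); 500 K, hypothetical

p = 150.0       # partial pressure, bar (hypothetical)
gamma = 0.90    # fugacity coefficient (hypothetical; 1 for an ideal gas)

f = gamma * p                          # fugacity, bar
correction = R * T * math.log(gamma)   # RT ln(gamma), the non-ideality term
print(f"f = {f:.1f} bar")
print(f"RT ln(gamma) = {correction:.0f} J/mol")   # about -438 J/mol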

Concentration quotients

In aqueous solution, equilibrium constants are usually determined in the presence of an "inert" electrolyte such as sodium nitrate NaNO3 or potassium perchlorate KClO4. The ionic strength of a solution is given by
{\displaystyle I={\frac {1}{2}}\sum _{i=1}^{N}c_{i}z_{i}^{2}}
where ci and zi stand for the concentration and ionic charge of ion type i, and the sum is taken over all the N types of charged species in solution. When the concentration of dissolved salt is much higher than the analytical concentrations of the reagents, the ions originating from the dissolved salt determine the ionic strength, and the ionic strength is effectively constant. Since activity coefficients depend on ionic strength, the activity coefficients of the species are effectively independent of concentration. Thus, the assumption that Γ is constant is justified. The concentration quotient is a simple multiple of the equilibrium constant.[9]
{\displaystyle K_{\mathrm {c} }={\frac {K}{\Gamma }}}
However, Kc will vary with ionic strength. If it is measured at a series of different ionic strengths, the value can be extrapolated to zero ionic strength.[8] The concentration quotient obtained in this manner is known, paradoxically, as a thermodynamic equilibrium constant.
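A minimal sketch of this calculation (the electrolyte concentrations are illustrative):

def ionic_strength(ions):
    """ions: iterable of (concentration in mol/L, ionic charge) pairs.
    Returns I = (1/2) * sum(c_i * z_i**2)."""
    return 0.5 * sum(c * z**2 for c, z in ions)

# 0.10 M NaNO3 (a 1:1 "inert" electrolyte): I = 0.10 mol/L
print(ionic_strength([(0.10, +1), (0.10, -1)]))
# 0.10 M CaCl2 (a 2:1 electrolyte): I = 0.5*(0.10*4 + 0.20*1) = 0.30 mol/L
print(ionic_strength([(0.10, +2), (0.20, -1)]))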

To use a published value of an equilibrium constant in conditions of ionic strength different from the conditions used in its determination, the value should be adjusted (see Software, below).

Metastable mixtures

A mixture may appear to have no tendency to change, though it is not at equilibrium. For example, a mixture of SO2 and O2 is metastable as there is a kinetic barrier to formation of the product, SO3.
2 SO2 + O2 ⇌ 2 SO3
The barrier can be overcome when a catalyst is also present in the mixture as in the contact process, but the catalyst does not affect the equilibrium concentrations.

Likewise, the formation of bicarbonate from carbon dioxide and water is very slow under normal conditions
CO2 + 2 H2O ⇌ HCO3− + H3O+
but almost instantaneous in the presence of the catalytic enzyme carbonic anhydrase.

Pure substances

When pure substances (liquids or solids) are involved in equilibria their activities do not appear in the equilibrium constant[10] because their numerical values are considered one.

Applying the general formula for an equilibrium constant to the specific case of a dilute solution of acetic acid in water one obtains
CH3CO2H + H2O ⇌ CH3CO2− + H3O+
{\displaystyle K_{\mathrm {c} }={\frac {\mathrm {[{CH_{3}CO_{2}}^{-}][{H_{3}O}^{+}]} }{\mathrm {[{CH_{3}CO_{2}H}][{H_{2}O}]} }}}
For all but very concentrated solutions, the water can be considered a "pure" liquid, and therefore it has an activity of one. The equilibrium constant expression is therefore usually written as
{\displaystyle K={\frac {\mathrm {[{CH_{3}CO_{2}}^{-}][{H_{3}O}^{+}]} }{\mathrm {[{CH_{3}CO_{2}H}]} }}=K_{\mathrm {c} }}.
A particular case is the self-ionization of water itself
2 H2O ⇌ H3O+ + OH−
Because water is the solvent, and has an activity of one, the self-ionization constant of water is defined as
{\displaystyle K_{\mathrm {w} }=\mathrm {[H^{+}][OH^{-}]} }
It is perfectly legitimate to write [H+] for the hydronium ion concentration, since the state of solvation of the proton is constant (in dilute solutions) and so does not affect the equilibrium concentrations. Kw varies with ionic strength and temperature.

The concentrations of H+ and OH− are not independent quantities. Most commonly [OH−] is replaced by Kw[H+]−1 in equilibrium constant expressions which would otherwise include hydroxide ion.
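A small worked sketch of this substitution, taking Kw ≈ 10−14 (the approximate value at 25 °C) and a hypothetical hydrogen-ion concentration:

Kw = 1.0e-14       # self-ionization constant of water at 25 C (approximate)
H = 1.0e-9         # hypothetical [H+] for a basic solution, mol/L
OH = Kw / H        # [OH-] is fixed by Kw; it is not an independent quantity
print(OH)          # 1e-05 mol/L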

Solids also do not appear in the equilibrium constant expression, if they are considered to be pure and thus their activities taken to be one. An example is the Boudouard reaction:[10]
2 CO ⇌ CO2 + C
for which the equation (without solid carbon) is written as:
{\displaystyle K_{\mathrm {c} }={\frac {\mathrm {[CO_{2}]} }{\mathrm {[CO]^{2}} }}}

Multiple equilibria

Consider the case of a dibasic acid H2A. When dissolved in water, the mixture will contain H2A, HA and A2−. This equilibrium can be split into two steps in each of which one proton is liberated.
{\displaystyle {\begin{array}{rl}{\ce {H2A<=>{HA^{-}}+{H+}}}:&K_{1}={\frac {\ce {[HA-][H+]}}{\ce {[H2A]}}}\\{\ce {HA-<=>{A^{2-}}+{H+}}}:&K_{2}={\frac {\ce {[A^{2-}][H+]}}{\ce {[HA-]}}}\end{array}}}
K1 and K2 are examples of stepwise equilibrium constants. The overall equilibrium constant, βD, is the product of the stepwise constants.
{\displaystyle {\ce {{H2A}<=>{A^{2-}}+{2H+}}}}:     {\displaystyle \beta _{\ce {D}}={\frac {\ce {[A^{2-}][H^{+}]^{2}}}{\ce {[H_{2}A]}}}=K_{1}K_{2}}
Note that these constants are dissociation constants because the products on the right hand side of the equilibrium expression are dissociation products. In many systems, it is preferable to use association constants.
{\displaystyle {\begin{array}{ll}{\ce {{A^{2-}}+{H+}<=>HA-}}:&\beta _{1}={\frac {\ce {[HA^{-}]}}{\ce {[A^{2-}][H+]}}}\\{\ce {{A^{2-}}+{2H+}<=>H2A}}:&\beta _{2}={\frac {\ce {[H2A]}}{\ce {[A^{2-}][H+]^{2}}}}\end{array}}}
β1 and β2 are examples of association constants. Clearly β1 = 1/K2 and β2 = 1/βD; log β1 = pK2 and log β2 = pK2 + pK1.[11] For multiple equilibrium systems, see also the theory of response reactions.
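A numerical sketch of these relations, using approximate literature pKa values for carbonic acid (pK1 ≈ 6.35, pK2 ≈ 10.33):

import math

pK1, pK2 = 6.35, 10.33            # approximate values for carbonic acid
K1, K2 = 10**-pK1, 10**-pK2       # stepwise dissociation constants

beta_D = K1 * K2                  # overall dissociation constant
beta_1 = 1 / K2                   # stepwise association constant
beta_2 = 1 / beta_D               # overall association constant

print(f"log beta1 = {math.log10(beta_1):.2f}")   # = pK2 = 10.33
print(f"log beta2 = {math.log10(beta_2):.2f}")   # = pK1 + pK2 = 16.68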

Effect of temperature

The effect of changing temperature on an equilibrium constant is given by the van 't Hoff equation
{\displaystyle {\frac {d\ln K}{dT}}={\frac {\Delta H_{\mathrm {m} }^{\ominus }}{RT^{2}}}}
Thus, for exothermic reactions (ΔH is negative), K decreases with an increase in temperature, but for endothermic reactions (ΔH is positive), K increases with an increase in temperature. An alternative formulation is
{\displaystyle {\frac {d\ln K}{d(T^{-1})}}=-{\frac {\Delta H_{\mathrm {m} }^{\ominus }}{R}}}
At first sight this appears to offer a means of obtaining the standard molar enthalpy of the reaction by studying the variation of K with temperature. In practice, however, the method is unreliable because error propagation almost always gives very large errors on the values calculated in this way.
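Integrating the van 't Hoff equation between two temperatures, on the assumption that ΔH⊖ is constant over the range, gives ln(K2/K1) = −(ΔH⊖/R)(1/T2 − 1/T1). A sketch with hypothetical values:

import math

R = 8.314462618        # J/(mol*K)

def K_at_T2(K1, T1, T2, dH):
    """Integrated van 't Hoff relation: assumes the standard enthalpy
    dH (J/mol) is constant between T1 and T2 (kelvin)."""
    return K1 * math.exp(-(dH / R) * (1.0/T2 - 1.0/T1))

# Exothermic reaction (dH < 0): K decreases on heating
print(K_at_T2(K1=100.0, T1=298.15, T2=350.0, dH=-50e3))   # about 5
# Endothermic reaction (dH > 0): K increases on heating
print(K_at_T2(K1=100.0, T1=298.15, T2=350.0, dH=+50e3))   # about 2000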

Effect of electric and magnetic fields

The effect of electric field on equilibrium has been studied by Manfred Eigen[citation needed] among others.

Types of equilibrium

For example, in the heterogeneously catalysed synthesis of ammonia (the Haber–Bosch process), the overall gas-phase equilibrium is established through a sequence of adsorption equilibria at the catalyst surface:
  1. N2 (g) ⇌ N2 (adsorbed)
  2. N2 (adsorbed) ⇌ 2 N (adsorbed)
  3. H2 (g) ⇌ H2 (adsorbed)
  4. H2 (adsorbed) ⇌ 2 H (adsorbed)
  5. N (adsorbed) + 3 H (adsorbed) ⇌ NH3 (adsorbed)
  6. NH3 (adsorbed) ⇌ NH3 (g)
In these and related applications, terms such as stability constant, formation constant, binding constant, affinity constant, and association/dissociation constant are used. In biochemistry, it is common to give units for binding constants, which serve to define the concentration units used when the constant's value was determined.

Composition of a mixture

When the only equilibrium is the formation of a 1:1 adduct, there are any number of ways that the composition of the mixture can be calculated. For example, see ICE table for a traditional method of calculating the pH of a solution of a weak acid.
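A minimal sketch of that ICE-table calculation for a weak monoprotic acid HA (the Ka and initial concentration are illustrative values, close to those of acetic acid):

import math

Ka = 1.8e-5        # acid dissociation constant (approximately acetic acid)
C = 0.10           # initial (analytical) acid concentration, mol/L

# ICE table: HA <=> H+ + A-; at equilibrium [H+] = [A-] = x, [HA] = C - x
# Ka = x^2 / (C - x)  =>  x^2 + Ka*x - Ka*C = 0
x = (-Ka + math.sqrt(Ka**2 + 4*Ka*C)) / 2
print(f"[H+] = {x:.3e} mol/L, pH = {-math.log10(x):.2f}")   # pH about 2.87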

There are three approaches to the general calculation of the composition of a mixture at equilibrium.
  1. The most basic approach is to manipulate the various equilibrium constants until the desired concentrations are expressed in terms of measured equilibrium constants (equivalent to measuring chemical potentials) and initial conditions.
  2. Minimize the Gibbs energy of the system.[13][14]
  3. Satisfy the equation of mass balance. The equations of mass balance are simply statements that demonstrate that the total concentration of each reactant must be constant by the law of conservation of mass.

Mass-balance equations

In general, the calculations are rather complicated. For instance, in the case of a dibasic acid, H2A, dissolved in water the two reactants can be specified as the conjugate base, A2−, and the proton, H+. The following equations of mass-balance could apply equally well to a base such as 1,2-diaminoethane, in which case the base itself is designated as the reactant A:
{\displaystyle T_{\mathrm {A} }=\mathrm {[A]+[HA]+[H_{2}A]} \,}
{\displaystyle T_{\mathrm {H} }=\mathrm {[H]+[HA]+2[H_{2}A]-[OH]} \,}
where TA is the total concentration of species A. Note that it is customary to omit the ionic charges when writing and using these equations.

When the equilibrium constants are known and the total concentrations are specified, there are two equations in the two unknown "free concentrations" [A] and [H]. This follows from the fact that [HA] = β1[A][H], [H2A] = β2[A][H]2 and [OH] = Kw[H]−1:
{\displaystyle T_{\mathrm {A} }=\mathrm {[A]} +\beta _{1}\mathrm {[A][H]} +\beta _{2}\mathrm {[A][H]} ^{2}\,}
{\displaystyle T_{\mathrm {H} }=\mathrm {[H]} +\beta _{1}\mathrm {[A][H]} +2\beta _{2}\mathrm {[A][H]} ^{2}-K_{w}[\mathrm {H} ]^{-1}\,}
so the concentrations of the "complexes" are calculated from the free concentrations and the equilibrium constants. General expressions applicable to all systems with two reagents, A and B would be
{\displaystyle T_{\mathrm {A} }=[\mathrm {A} ]+\sum _{i}p_{i}\beta _{i}[\mathrm {A} ]^{p_{i}}[\mathrm {B} ]^{q_{i}}}
{\displaystyle T_{\mathrm {B} }=[\mathrm {B} ]+\sum _{i}q_{i}\beta _{i}[\mathrm {A} ]^{p_{i}}[\mathrm {B} ]^{q_{i}}}
It is easy to see how this can be extended to three or more reagents.
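A minimal numerical sketch of this approach (the association constants and total concentrations are hypothetical): the two mass-balance equations are solved for the free concentrations [A] and [H] with a general-purpose root finder, working in log10 units so the concentrations stay positive.

import numpy as np
from scipy.optimize import fsolve

beta1, beta2 = 1.0e10, 1.0e16    # association constants (hypothetical)
Kw = 1.0e-14                     # self-ionization constant of water
TA, TH = 1.0e-3, 1.5e-3          # total concentrations, mol/L (hypothetical)

def mass_balance(logc):
    A, H = 10.0**logc[0], 10.0**logc[1]      # free concentrations
    fA = A + beta1*A*H + beta2*A*H**2 - TA
    fH = H + beta1*A*H + 2*beta2*A*H**2 - Kw/H - TH
    return [fA, fH]

logA, logH = fsolve(mass_balance, x0=[-8.0, -6.0])
A, H = 10.0**logA, 10.0**logH
print(f"[A] = {A:.2e}, [H] = {H:.2e}, p[H] = {-np.log10(H):.2f}")
print(f"[HA] = {beta1*A*H:.2e}, [H2A] = {beta2*A*H**2:.2e}")
# expected: [A] about 5e-8, [H] about 1e-6, [HA] and [H2A] about 5e-4 each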

Polybasic acids

Species concentrations during the hydrolysis of aluminium.

The composition of solutions containing reactants A and H is easy to calculate as a function of p[H]. When [H] is known, the free concentration [A] is calculated from the mass-balance equation in A.

The diagram alongside shows an example of the hydrolysis of the aluminium Lewis acid Al3+(aq):[15] it shows the species concentrations for a 5 × 10−6 M solution of an aluminium salt as a function of pH. Each concentration is shown as a percentage of the total aluminium.

Solution and precipitation

The diagram above illustrates the point that a precipitate that is not one of the main species in the solution equilibrium may be formed. At pH just below 5.5 the main species present in a 5 μM solution of Al3+ are the aluminium hydroxides Al(OH)²⁺, Al(OH)₂⁺ and Al₁₃(OH)₃₂⁷⁺, but on raising the pH Al(OH)3 precipitates from the solution. This occurs because Al(OH)3 has a very large lattice energy. As the pH rises more and more Al(OH)3 comes out of solution. This is an example of Le Châtelier's principle in action: increasing the concentration of the hydroxide ion causes more aluminium hydroxide to precipitate, which removes hydroxide from the solution. When the hydroxide concentration becomes sufficiently high the soluble aluminate, Al(OH)₄⁻, is formed.

Another common instance where precipitation occurs is when a metal cation interacts with an anionic ligand to form an electrically neutral complex. If the complex is hydrophobic, it will precipitate out of water. This occurs with the nickel ion Ni2+ and dimethylglyoxime (dmgH2): in this case the lattice energy of the solid is not particularly large, but it greatly exceeds the energy of solvation of the molecule Ni(dmgH)2.

Minimization of Gibbs energy

At equilibrium, at a specified temperature and pressure, the Gibbs energy G is at a minimum:
dG=\sum _{j=1}^{m}\mu _{j}\,dN_{j}=0
For a closed system, no particles may enter or leave, although they may combine in various ways. The total number of atoms of each element will remain constant. This means that the minimization above must be subjected to the constraints:
\sum _{j=1}^{m}a_{ij}N_{j}=b_{i}^{0}
where aij is the number of atoms of element i in molecule j and b0i is the total number of atoms of element i, which is a constant, since the system is closed. If there are a total of k types of atoms in the system, then there will be k such equations. If ions are involved, an additional row is added to the aij matrix specifying the respective charge on each molecule; these charges will sum to zero.

This is a standard problem in optimisation, known as constrained minimisation. The most common method of solving it is using the method of Lagrange multipliers, also known as undetermined multipliers (though other methods may be used).

Define:
{\mathcal {G}}=G+\sum _{i=1}^{k}\lambda _{i}\left(\sum _{j=1}^{m}a_{ij}N_{j}-b_{i}^{0}\right)
where the λi are the Lagrange multipliers, one for each element. This allows each of the Nj and λi to be treated independently, and it can be shown using the tools of multivariate calculus that the equilibrium condition is given by
{\displaystyle 0={\frac {\partial {\mathcal {G}}}{\partial N_{j}}}=\mu _{j}+\sum _{i=1}^{k}\lambda _{i}a_{ij}}
{\displaystyle 0={\frac {\partial {\mathcal {G}}}{\partial \lambda _{i}}}=\sum _{j=1}^{m}a_{ij}N_{j}-b_{i}^{0}}
(For proof see Lagrange multipliers.) This is a set of (m + k) equations in (m + k) unknowns (the Nj and the λi) and may, therefore, be solved for the equilibrium concentrations Nj as long as the chemical potentials are known as functions of the concentrations at the given temperature and pressure. (See Thermodynamic databases for pure substances.) Note that the second equation is just the initial constraints for minimization.

This method of calculating equilibrium chemical concentrations is useful for systems with a large number of different molecules. The use of k atomic element conservation equations for the mass constraint is straightforward, and replaces the use of stoichiometric coefficient equations.[12] The results are consistent with those specified by chemical equations. For example, if equilibrium is specified by a single chemical equation:[16]
{\displaystyle \sum _{j=0}^{m}\nu _{j}R_{j}=0}
where νj is the stoichiometric coefficient for the j th molecule (negative for reactants, positive for products) and Rj is the symbol for the j th molecule, a properly balanced equation will obey:
{\displaystyle \sum _{j=1}^{m}a_{ij}\nu _{j}=0}
Multiplying the first equilibrium condition by νj yields
{\displaystyle 0=\sum _{j=1}^{m}\nu _{j}\mu _{j}+\sum _{j=1}^{m}\sum _{i=1}^{k}\nu _{j}\lambda _{i}a_{ij}=\sum _{j=1}^{m}\nu _{j}\mu _{j}}
As above, defining ΔG
{\displaystyle \Delta G=\sum _{j=1}^{m}\nu _{j}\mu _{j}=\sum _{j=1}^{m}\nu _{j}(\mu _{j}^{\ominus }+RT\ln(\{R_{j}\}))=\Delta G^{\ominus }+RT\ln \left(\prod _{j=1}^{m}\{R_{j}\}^{\nu _{j}}\right)=\Delta G^{\ominus }+RT\ln(K_{eq})}
which will be zero at equilibrium.
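A sketch of this constrained minimisation for a small ideal-gas system (the species set, the approximate ΔfG⊖ values at 1000 K and the conditions are illustrative; a real calculation would use tabulated data for the actual system). Here a numerical minimiser enforces the element-balance constraints directly, which is equivalent to solving the Lagrange-multiplier equations:

import numpy as np
from scipy.optimize import minimize

R, T = 8.314462618, 1000.0             # J/(mol*K); 1000 K

# Water-gas shift: CO + H2O <=> CO2 + H2 (illustrative system)
species = ["CO", "H2O", "CO2", "H2"]
mu0 = np.array([-200.3e3, -192.6e3, -395.9e3, 0.0])   # approx Delta_f G at 1000 K, J/mol

# a[i][j] = atoms of element i (rows: C, O, H) in molecule j
A = np.array([[1, 0, 1, 0],
              [1, 1, 2, 0],
              [0, 2, 0, 2]])
N0 = np.array([1.0, 1.0, 0.0, 0.0])    # start from 1 mol CO + 1 mol H2O
b = A @ N0                             # conserved element totals

def gibbs(N):
    # G = sum_j N_j mu_j with mu_j = mu0_j + RT ln(N_j/Ntot) at p = 1 bar
    Ntot = N.sum()
    N = np.maximum(N, 1e-12)           # guard the logarithm
    return np.sum(N * (mu0 + R * T * np.log(N / Ntot)))

res = minimize(gibbs, x0=np.full(4, 0.5), method="SLSQP",
               bounds=[(1e-10, None)] * 4,
               constraints={"type": "eq", "fun": lambda N: A @ N - b})
for name, n in zip(species, res.x):
    print(f"{name}: {n:.3f} mol")      # roughly 0.45 / 0.45 / 0.55 / 0.55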
