Original Article

A New Brief Instrument for Assessing Decisional Capacity for Clinical Research

Dilip V. Jeste, MD; Barton W. Palmer, PhD; Paul S. Appelbaum, MD; Shahrokh Golshan, PhD; Danielle Glorioso, BS; Laura B. Dunn, MD; Kathleen Kim, MD, MPH; Thomas Meeks, MD; Helena C. Kraemer, PhD

Author Affiliations: Department of Psychiatry, University of California, San Diego, La Jolla (Drs Jeste, Palmer, Golshan, Dunn, Kim, and Meeks and Ms Glorioso); Veterans Affairs San Diego Healthcare System, San Diego, California (Drs Jeste and Kim); Division of Psychiatry, Law, and Ethics, Department of Psychiatry, Columbia University College of Physicians and Surgeons, New York, New York (Dr Appelbaum); and Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, Stanford, California (Dr Kraemer).


Arch Gen Psychiatry. 2007;64(8):966-974. doi:10.1001/archpsyc.64.8.966.

Context  There is a critical need for practical measures for screening and documenting decisional capacity in people participating in different types of clinical research. However, there are few reliable and validated brief tools that could be used routinely to evaluate individuals' capacity to consent to a research protocol.

Objective  To describe the development, testing, and proposed use of a new practical instrument to assess decision-making capacity: the University of California, San Diego Brief Assessment of Capacity to Consent (UBACC). The UBACC is intended to help investigators identify research participants who warrant more thorough decisional capacity assessment and/or remediation efforts prior to enrollment.

Design, Setting, and Participants  We developed the UBACC as a 10-item scale that included questions focusing on understanding and appreciation of the information concerning a research protocol. It was developed and tested among middle-aged and older outpatients with schizophrenia and healthy comparison subjects participating in research on informed consent. In an investigation of reliability and validity, we studied 127 outpatients with schizophrenia or schizoaffective disorder and 30 healthy comparison subjects who received information about a simulated clinical drug trial. Internal consistency, interrater reliability, and concurrent (criterion) validity (including correlations with an established instrument as well as sensitivity and specificity relative to 2 potential “gold standard” criteria) were measured.

Main Outcome Measures  Reliability and validity of the UBACC.

Results  The UBACC was found to have good internal consistency, interrater reliability, concurrent validity, high sensitivity, and acceptable specificity. It typically took less than 5 minutes to administer, was easy to use and reliably score, and could be used to identify subjects with questionable capacity to consent to the specific research project.

Conclusion  The UBACC is a potentially useful instrument for screening large numbers of subjects to identify those needing more comprehensive decisional capacity assessment and/or remediation efforts.

Investigators have an ethical responsibility not simply to disclose information to potential participants but also to ensure that the participant has the capacity to reach a decision on the basis of the information provided. How should researchers meet this ethical responsibility? Clinical assessments of decisional capacity have shown poor interrater reliability,1 so there is a clear need for structured tools to evaluate capacity to consent to research.2,3

In a recent review of such instruments,2 we identified 10 published scales for assessing capacity to consent for research; among the currently available instruments, the best general choice for measuring capacity to consent to research is the MacArthur Competency Assessment Tool for Clinical Research (MacCAT-CR),4 for which there has been considerable evidence for reliability and construct validity.2,3 However, even the MacCAT-CR has limitations for routine use, including administration time of 15 to 20 minutes and a need for substantial training for valid administration and interpretation. Thus, there remains a need for brief and easy-to-use yet reliable and validated methods by which investigators may document that at least a basic level of comprehension of key elements was present prior to enrollment and may identify individuals for whom a more thorough assessment of decisional capacity and/or remediation via enhanced consent procedures is warranted.

We previously described the utility of a 3-item questionnaire that examined participants' comprehension of the purpose, risks, and benefits of a research protocol5; however, we recognized that an ideal capacity screening tool would need to include assessment and documentation of additional essential elements such as comprehension of protocol procedures, appreciation of the potential significance of study risks, and the voluntary nature of participation. Thus, the aim of the present project was to develop and validate a decisional capacity screening tool that partially retains the advantage of the 3-item questionnaire in terms of brevity (fostering routine use in applied situations) while providing for more comprehensive evaluation of understanding, appreciation, and reasoning about protocol elements that are key to meaningful consent. The University of California, San Diego Brief Assessment of Capacity to Consent (UBACC) items were chosen based on consensus among a variety of stakeholders, including our center community advisory board as well as experts in bioethics and decisional capacity. Specifically, our goal was to develop an instrument for screening and basic documentation of decisional capacity with the following characteristics: (1) brief administration time so that it could be routinely applied with minimal added burden; (2) standardized procedures for administration, scoring, and interpretation so that it could be used by trained research staff without advanced degrees; and (3) satisfactory psychometric properties. The result of our efforts, the UBACC, is a 10-item scale that can be administered by bachelor's degree–level research staff and typically requires less than 5 minutes to administer. After describing the development and instructions for using the UBACC, we present information about its reliability and validity as applied to 1 specific context and population, ie, a simulated clinical drug trial involving middle-aged and older outpatients with schizophrenia.

STEPS IN UBACC DEVELOPMENT
Input From Various Stakeholders

To help ensure content validity of the UBACC, we consulted with several experts in the areas of competence, empirical bioethics, psychometrics, statistics, and scale development. Consistent with the notion of community equipoise,5 an integral part of our National Institute of Mental Health–funded Advanced Center for Interventions and Services Research, and especially its Bioethics Unit, is the community advisory board.6 This board, comprising representative research subjects, caregivers, community health care clinicians, and investigators, was consulted at various stages of the development of the UBACC.

Item Generation and Selection

Based on the input from various stakeholders, an examination of individual MacCAT-CR items from our existing database, a review of the published instruments,2 and the recent review by Sturman,3 we identified an initial pool of more than 25 potential items.

Through discussion among our bioethics and methodologic experts, we narrowed the UBACC from 25 to 10 items, considering the clarity and general applicability of the item wording, content overlap, and range. The final version included 4 items for examining understanding, 5 for appreciation, and 1 for reasoning. We chose not to assess the expression of choice because this aspect is usually not impaired in most neuropsychiatric populations7,8 and a deficit in this area tends to be obvious during the consent process without a specific formal evaluation. We have previously found that persons with schizophrenia have greater difficulty with open-ended questions than with those requiring answers of the yes or no type.9 This difference may reflect, in part, patients' difficulty articulating responses to open-ended queries (which puts them at an unnecessary disadvantage), whereas yes or no questions carry a 50% chance of a correct answer without comprehension of the topic (which may lead to missed opportunities to clarify initially misunderstood information). For these reasons, we decided to use a combination of open-ended and true or false questions. With either type of question, responses may be probed with follow-up questions to determine whether apparently incorrect answers reflect an underlying difficulty. The resulting 10-item scale is provided in Figure 1.

Figure 1.

UCSD indicates University of California, San Diego. Note that all of the responses given in parentheses apply to the specific hypothetical protocol described in the text of this article. Before the initiation of a study, the principal investigator must prepare a list of answers for his or her specific study that will receive a score of 2 on each item. A deviation from such an answer will be scored as 1 or 0. The local institutional review board may need to approve the principal investigator's list of answers. If an answer to a question is ambiguous or uncertain, the subject should be asked follow-up questions to clarify. If a subject does not get a score of 2, the information on the items missed may be repeated and the specific question(s) asked again. This may be done for a total of 3 trials. During the second and third trials, it is important to ensure that the subject does not merely parrot back information but shows clear understanding of the issue.

UBACC STRUCTURE, ADMINISTRATION, AND SCORING
Structure

Based on consideration of the content each item appeared to measure, we initially defined 3 UBACC subscales: understanding (items 1, 3, 7, and 8), appreciation (items 4, 5, 6, 9, and 10), and reasoning (item 2). As described later, we subsequently modified these groupings based on the results from the principal components analysis.
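
The a priori groupings above can be summarized, for illustration, as a simple item-to-subscale map. The short sketch below is a hypothetical illustration only (the item numbers come from the text; the example response pattern is invented), assuming Python as the implementation language.

# A priori UBACC subscale groupings described above (items numbered 1-10).
# The example item scores are hypothetical, not study data.
A_PRIORI_SUBSCALES = {
    "understanding": [1, 3, 7, 8],
    "appreciation": [4, 5, 6, 9, 10],
    "reasoning": [2],
}

def subscale_scores(item_scores):
    """Sum item scores (0-2 each) within each a priori subscale."""
    return {name: sum(item_scores[i] for i in items)
            for name, items in A_PRIORI_SUBSCALES.items()}

example = {i: 2 for i in range(1, 11)}  # hypothetical, fully capable response pattern
example[7] = 1                          # partial credit on one understanding item
print(subscale_scores(example))         # {'understanding': 7, 'appreciation': 10, 'reasoning': 2}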

Administration

The UBACC typically takes less than 5 minutes to administer. The scale is designed to be administered by a bachelor's degree–level research assistant (RA). After the participant has reviewed the consent form in detail, the RA explains that he or she is going to ask a few brief questions about the study. Potential participants generally have a copy of the consent form available to them during the consent process and therefore do not have to rely solely on their ability to memorize the protocol details when giving consent to enroll. Consequently, potential subjects being screened with the UBACC should be permitted to refer to the study consent form when answering the UBACC questions. At the same time, comprehension is not synonymous with the ability to read the words on a consent form. Therefore, it is important that the participants be encouraged to explain the information relevant to each item in their own words to ensure that the subjects' responses reflect genuine comprehension, not merely an ability to read or repeat the consent text.

Low scores on the UBACC should reflect difficulty with understanding, appreciation, or reasoning about information related to a study protocol, not difficulty understanding the wording of the UBACC questions themselves. When a participant seems to be having difficulty understanding the wording of a UBACC question, the RA should attempt to rephrase the question in language appropriate to the individual (if a participant requires frequent rephrasing, this should be noted on the UBACC form).

Scoring

Each item is scored on a scale of 0 to 2 points, with 0 reflecting a clearly incapable response and 2 indicating a clearly capable response. An intermediate score of 1 may be used for partially appropriate responses or for continued uncertainty even after reexplanation and additional probing; paralleling the scoring procedures for MacCAT-CR items, the UBACC includes this intermediate value to reflect the degree of confidence about a lack of adequate capacity and, possibly, the amount of effort that would be needed to improve capacity. A score of 1 or 0 on any particular item may alert investigators to the need for further assessment of decisional capacity and/or efforts at remediation. Total scores can thus range from 0 to 20. Prior to the initiation of the study, the principal investigator must do the following: (1) examine the UBACC questions and determine which of the 10 are essential for consent to the specific protocol, and (2) prepare a list of answers for his or her specific study that will receive a score of 1 or 2 on each item.
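
To make the scoring rules above concrete, here is a minimal sketch, assuming Python; the set of "essential" items is protocol specific and chosen by the investigator, so the set used in the example is hypothetical.

# Minimal sketch of the UBACC scoring logic described above. The "essential"
# item set below is a hypothetical example; investigators define their own.
def score_ubacc(item_scores, essential_items):
    """item_scores: ten values, each 0, 1, or 2 (index 0 corresponds to item 1)."""
    assert len(item_scores) == 10 and all(s in (0, 1, 2) for s in item_scores)
    total = sum(item_scores)  # possible range 0-20
    flagged = [i + 1 for i, s in enumerate(item_scores)
               if (i + 1) in essential_items and s < 2]
    return {"total": total, "items_needing_followup": flagged}

print(score_ubacc([2, 2, 2, 1, 2, 2, 0, 2, 2, 2], essential_items={1, 2, 3, 7, 8}))
# {'total': 17, 'items_needing_followup': [7]}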

Interpretation and Follow-up

The UBACC serves 2 purposes: (1) documenting for each participant that he or she manifested at least a basic level of comprehension of the study protocol prior to enrollment, and (2) screening for participants who may lack adequate capacity to consent. Those subjects who manifest inadequate comprehension on the UBACC may then be further evaluated with a more comprehensive method, such as the MacCAT-CR,4 to more fully document the scope and nature of the capacity deficits and/or be offered remediation efforts to improve understanding, appreciation, and reasoning with respect to the disclosed information.

Training of the Research Staff

The RAs should be familiar with the study protocol and with the UBACC scoring before they administer the scale. In the event of uncertainty in scoring, the RA should probe the participant to collect as much information as possible and discuss the items in question with the principal investigator to determine an appropriate score.

RELIABILITY AND VALIDITY STUDY

We conducted a reliability and validity study in the context of a larger investigation of informed consent among middle-aged and older persons with schizophrenia (D.V.J., B.W.P., P.S.A., S.G., D.G., L.B.D., T.M., H.C.K., Lisa T. Eyler, PhD, and Ian Fellows, MS, unpublished data, 2006). We administered the UBACC to 127 patients with schizophrenia or schizoaffective disorder as well as 30 healthy comparison subjects (HCs) older than 40 years in the larger informed consent study. Demographic, cognitive, and clinical characteristics of the sample are described in Table 1.

Table 1. Demographic, Cognitive, and Clinical Characteristics of the 2 Participant Groups

The consent study, including incorporation of the UBACC, was reviewed and approved by the University of California, San Diego Human Research Protections Program (institutional review board). All of the research participants signed a written, institutional review board–approved informed consent form. As the consent study itself was considered a minimal-risk protocol, no formal assessment of capacity was required for it.

Each participant underwent a simulated consent process for a hypothetical, randomized, double-blind, placebo-controlled trial of an investigational cognition-enhancing drug that could be appropriate for people with schizophrenia or for middle-aged and older HCs. The simulated protocol was modeled after phase 3 studies commonly conducted for regulatory approval and specified potential risks such as cardiac arrhythmia and abnormal liver function. Immediately after the simulated consent process, each participant's capacity to consent to the presented clinical trial was evaluated with the UBACC and then with the MacCAT-CR.4 The participant's responses to individual items on both of these scales were recorded.

Another RA (kept unaware of the UBACC and MacCAT-CR responses) administered standardized assessments of psychopathology (Positive and Negative Syndrome Scale10 and the 17-item version of the Hamilton Depression Rating Scale11) and a brief cognitive evaluation (Repeatable Battery for the Assessment of Neuropsychological Status12).

Reliability and Consistency Analyses

Internal consistency was evaluated in terms of Cronbach α.13 Because of the reexplanations of material that are an inherent part of standard MacCAT-CR administration (ie, the MacCAT-CR procedures would likely influence UBACC scores in a manner that does not mirror the way informed consent is typically provided in clinical research), we did not readminister the UBACC and therefore did not evaluate its test-retest reliability.
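
For readers who wish to reproduce this type of analysis, the following is a minimal sketch of the standard Cronbach α computation, assuming Python with NumPy; the data are random placeholders and will not reproduce the values reported later in this article.

# Cronbach alpha = k/(k-1) * (1 - sum of item variances / variance of total scores).
# Placeholder data only; not the study data.
import numpy as np

def cronbach_alpha(scores):
    """scores: subjects x items matrix of item scores."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
fake_scores = rng.integers(0, 3, size=(127, 10))  # 127 subjects x 10 items, scores 0-2
print(round(cronbach_alpha(fake_scores), 2))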

Interrater reliability was assessed in 2 different ways involving subsets of the total sample (this was not done in the entire sample owing to staff time requirements). With method A, the project coordinator (D.G.) administered the UBACC to 29 patients as well as 4 HCs while an experienced RA (Jorge Gutierrez, BA, and Ruth Rodriguez, AA) observed and scored independently. With method B, the project coordinator randomly selected UBACC paperwork on 19 participants (this random selection resulted in 17 patients with schizophrenia and 2 HCs). All of the identifying information as well as rating scale scores were removed from the UBACC sheets, and only the written record of the subject's verbatim responses (as well as any prompts or follow-up clarification questions asked by the interviewer) to individual questions on the UBACC were provided to 2 experienced RAs (Jorge Gutierrez, BA, and Ruth Rodriguez, AA). These RAs were asked first to score the UBACC items based on the responses documented and then to determine whether the participant was capable or not capable.
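
The article does not specify which intraclass correlation variant was used, so the following sketch (assuming Python with NumPy) implements one common choice, the one-way random-effects ICC(1,1); the two raters' total scores shown are invented for illustration.

# ICC(1,1) from a one-way random-effects ANOVA; an illustrative assumption,
# not necessarily the variant used in the study. Ratings are fake data.
import numpy as np

def icc_oneway(ratings):
    """ratings: subjects x raters matrix of total scores."""
    n, k = ratings.shape
    grand_mean = ratings.mean()
    ms_between = k * ((ratings.mean(axis=1) - grand_mean) ** 2).sum() / (n - 1)
    ms_within = ((ratings - ratings.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

rater_a = np.array([18, 20, 14, 9, 17, 20, 12, 16])
rater_b = np.array([17, 20, 15, 10, 17, 19, 12, 16])
print(round(icc_oneway(np.column_stack([rater_a, rater_b])), 2))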

Validity Analyses

To evaluate the validity of our initial subscale grouping of the UBACC items, we examined bivariate correlations among the UBACC items as well as between UBACC items and MacCAT-CR items and subscales using Spearman ρ. We also computed correlations among the UBACC subscale scores and MacCAT-CR subscale scores. Next, we used principal components analysis to examine the underlying structure of the UBACC items. The focus of the UBACC is on detection of deficits in decisional capacity rather than on differentiation among levels of intact capacity. Given the expected narrow range of variance seen among the HC sample, our examination of the factor structure of the UBACC was limited to the patient sample (see the studies by Delis et al14 and Larrabee15).
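
A hedged sketch of these two analyses, assuming Python with SciPy and scikit-learn: Spearman ρ among the items and an unrotated principal components analysis with the Kaiser (eigenvalue > 1) retention rule. The study additionally applied a varimax rotation, which this sketch omits, and the data here are random placeholders.

# Item intercorrelations (Spearman rho) and unrotated PCA on standardized items.
# Random placeholder data; results will not match those reported below.
import numpy as np
from scipy.stats import spearmanr
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
items = rng.integers(0, 3, size=(127, 10)).astype(float)  # 127 patients x 10 UBACC items

rho, pval = spearmanr(items, axis=0)  # 10 x 10 Spearman rho matrix among items
z = (items - items.mean(axis=0)) / items.std(axis=0, ddof=1)  # standardize so eigenvalues refer to the correlation matrix
pca = PCA().fit(z)
n_retained = int((pca.explained_variance_ > 1).sum())  # Kaiser criterion: eigenvalue > 1
print(rho.shape, n_retained, round(pca.explained_variance_ratio_[:n_retained].sum(), 2))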

In part, the earlier described analyses also speak to the concurrent validity of the UBACC in that we evaluated the relationship of UBACC scores as continuous variables relative to the MacCAT-CR. We also evaluated the association between UBACC scores and neuropsychological performance as well as measures of severity of psychiatric symptoms. Convergent validity would be supported by evidence of strong associations between the UBACC scores and MacCAT-CR subscale scores as well as by an overall tendency for the capacity scores to correlate with cognitive performance on the Repeatable Battery for the Assessment of Neuropsychological Status.16-20 In terms of divergent validity, UBACC scores should not be strongly associated with severity of positive symptoms.8,18-20

UTILITY

As the UBACC is intended to be a screening instrument, we do not recommend that participants be deemed decisionally incapable based solely on their total UBACC scores. Nonetheless, because a part of the utility of the UBACC as a screening tool is its sensitivity to impaired decisional capacity (avoiding false-negative errors—ie, categorizing a participant as decisionally capable when he or she is in fact incapable) and its specificity (avoiding false-positive errors—ie, categorizing a participant as decisionally incapable when he or she is in fact capable), we evaluated the performance on the UBACC in making such categorical determinations using the following procedures.

First, to determine an effective cut score for the UBACC in terms of the specific hypothetical clinical trial, we had a board-certified clinical psychiatrist (either K.K. or T.M.) independently interview 24 subjects (through individual interviews) and judge each as capable or incapable of consenting to the hypothetical protocol. The psychiatrists were kept blind to the participant's diagnostic status. To assess interrater reliability between these 2 psychiatrists, each was also asked to review the other's notes and determine the participant's capacity status based on the recorded responses from that clinical interview. A 100% agreement rate was found in the latter determinations (these judges were kept unaware of the content of the UBACC). We then used receiver operating characteristic curve analyses to determine the cut score providing the best balance of sensitivity and specificity.
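
The cut-score search can be illustrated with a short sketch, assuming Python, where "positive" means flagged as potentially incapable (UBACC total at or below the candidate cut score); the scores and interview judgments below are hypothetical, not study data.

# Scan candidate cut scores and report sensitivity (flagging incapable
# participants) and specificity (passing capable participants). Fake data.
def sens_spec(totals, incapable, cut):
    tp = sum(t <= cut and inc for t, inc in zip(totals, incapable))
    fn = sum(t > cut and inc for t, inc in zip(totals, incapable))
    tn = sum(t > cut and not inc for t, inc in zip(totals, incapable))
    fp = sum(t <= cut and not inc for t, inc in zip(totals, incapable))
    return tp / (tp + fn), tn / (tn + fp)

totals = [8, 10, 12, 13, 14, 15, 16, 17, 18, 19, 20, 20]
incapable = [True, True, True, True, False, False, False, False, False, False, False, False]
for cut in (10.5, 12.5, 14.5, 16.5):
    sens, spec = sens_spec(totals, incapable, cut)
    print(cut, round(sens, 2), round(spec, 2))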

Next, using the UBACC cut score associated with the best balance of sensitivity and specificity determined through the receiver operating characteristic curve analyses, we compared the categorization rates achieved by the UBACC relative to the MacCAT-CR as a second gold standard. There is no single cut score or algorithm for defining capable vs incapable status in the original MacCAT-CR. However, such a cutoff score was used in the Clinical Antipsychotic Trials of Intervention Effectiveness (CATIE) schizophrenia study sponsored by the National Institute of Mental Health21—the largest published applied use of the MacCAT-CR. Specifically, in that trial, a potential participant had to obtain a MacCAT-CR understanding subscale score of 16 or more points (of 26 possible points) to be independently able to consent to participate.20 This cut score was not based on empirical data but rather on the determination by a group of experts of what would be appropriate in the context of that particular study. Nonetheless, the CATIE MacCAT-CR criterion is the only one that has been described in a large-scale applied context. Thus, we used it as a proxy gold standard for capable status on the MacCAT-CR to evaluate the utility of the UBACC in accurately classifying participants as capable or incapable.

RELIABILITY ANALYSES

Internal consistency in terms of the Cronbach α for the 127 patients with schizophrenia was 0.77 (a similar value was found among the 30 HCs, among whom the Cronbach α was 0.76). With regard to interrater reliability, both methods A (independent scoring done during the same interview) and B (independent scoring from written UBACC protocols) provided evidence of good interrater reliability, with intraclass correlation coefficients of 0.84 and 0.98, respectively.

VALIDITY ANALYSES

All of the items on the a priori understanding subscale were significantly intercorrelated, with correlation coefficients ranging from 0.18 to 0.53 (all P < .05). These items were also significantly correlated with the UBACC total (correlation coefficients, 0.44-0.73) and UBACC understanding subscale (correlation coefficients, 0.51-0.76) scores (all P < .05) and, as shown in Table 2, with the MacCAT-CR total (correlation coefficients, 0.29-0.55) and MacCAT-CR understanding subscale (correlation coefficients, 0.32-0.52) scores.

Except for the correlations of item 9 with items 4 and 5, the appreciation items were significantly intercorrelated, with correlation coefficients ranging from 0.23 to 0.42 (all P < .05). The appreciation items were also significantly correlated with the UBACC total (correlation coefficients, 0.26-0.60), UBACC appreciation subscale (correlation coefficients, 0.27-0.73), MacCAT-CR total (correlation coefficients, 0.23-0.48), and MacCAT-CR appreciation subscale (correlation coefficients, 0.20-0.33) scores.

Table 2. Intercorrelations Among the University of California, San Diego Brief Assessment of Capacity to Consent Items and A Priori Subscale Scores and the MacArthur Competency Assessment Tool for Clinical Research Subscale Scores for the Group of Patients With Schizophrenia
PRINCIPAL COMPONENTS ANALYSIS

Principal components analysis with varimax rotation was used to examine the underlying structure of the UBACC items among the patients. Three factors were generated with eigenvalues greater than 1, and they explained more than 56% of the variance. The first factor explained 34% of the variance and included items 1, 8, 7, and 2 (3 of which had been part of our a priori–defined UBACC understanding subscale). The second factor explained an additional 12% of the variance and included items 9, 10, and 6 (3 of which had been part of our a priori–defined UBACC appreciation subscale). The third factor explained an additional 10% of the variance and included items 4, 5, and 3 (items 4 and 5 had been on the a priori–defined UBACC appreciation subscale, whereas item 3 had been on the UBACC understanding subscale).

The first factor was significantly correlated with the MacCAT-CR total (r = 0.67), understanding (r = 0.68), appreciation (r = 0.34), and reasoning (r = 0.34) scores and the first a priori factor (r = 0.91). The second factor was significantly correlated with the MacCAT-CR total (r = 0.68), understanding (r = 0.68), appreciation (r = 0.41), and reasoning (r = 0.34) scores and the second a priori factor (r = 0.96). The third factor was significantly correlated with the MacCAT-CR total (r = 0.45), understanding (r = 0.40), appreciation (r = 0.38), and reasoning (r = 0.29) scores and the third a priori factor (r = 0.29). Although the correlation coefficients of the calculated factors with the MacCAT-CR scores were higher than those with the original UBACC factors, these analyses confirm the structure of the a priori factors (Table 3).

Table 3. Factor Analysis of the University of California, San Diego Brief Assessment of Capacity to Consent Items
UTILITY IN MAKING CATEGORICAL DETERMINATIONS

Based on the receiver operating characteristic curve analysis of the UBACC total score against an experienced psychiatrist's (T.M.) judgment, a cut point of 14.5 was selected (Figure 2). The false-positive rate remained zero for cut scores up to 14.5, whereas sensitivity continued to increase as the cut score rose toward 14.5. At cut scores greater than 14.5, the false-positive rate began to rise above zero with only marginal gains in sensitivity. With this cut score of 14.5, the UBACC correctly detected as incapable 8 of the 9 patients who, via expert interview, were deemed incapable of consenting to the simulated clinical trial. Of the 15 persons categorized as capable by the expert interviewer, the UBACC (with a cut score of 14.5) correctly identified all 15 as capable; sensitivity was 89% and specificity was 100%. Item analysis of the 15 participants categorized as capable by both the expert interviewer and the UBACC cut point of 14.5 showed that item 1 (purpose) was missed by 4 participants; item 6 (things they will be asked to do) by 1 participant; item 7 (risks) by 8 participants; item 8 (possible benefits) by 2 participants; and item 9 (no benefit possibility) by 1 participant.
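
The reported rates follow directly from the counts given above, as the arithmetic check below (in Python) shows; it uses only the 9 incapable and 15 capable participants identified by expert interview.

# Check of the classification rates at the 14.5 cut score, using counts from the text.
true_positives, false_negatives = 8, 1    # incapable participants flagged / missed by the UBACC
true_negatives, false_positives = 15, 0   # capable participants passed / flagged by the UBACC
sensitivity = true_positives / (true_positives + false_negatives)  # 8/9
specificity = true_negatives / (true_negatives + false_positives)  # 15/15
print(f"sensitivity = {sensitivity:.0%}, specificity = {specificity:.0%}")  # 89%, 100%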

Figure 2.

Sensitivity and 1 − specificity of each potential cut score for the University of California, San Diego Brief Assessment of Capacity to Consent (UBACC). Sensitivity (true-positive rate) is the probability that a person was categorized via the UBACC as not capable when that person was not capable per clinical interview. Specificity (true-negative rate) is the probability that a person was categorized via the UBACC as capable when that person was capable per the clinical interview. 1 − Specificity (false-positive rate) is the probability that a person was categorized by the UBACC as not capable when that person was found capable per clinical interview. The values within the box represent the UBACC scores associated with the sensitivity and 1 − specificity rates that intersect at that point. The dashed diagonal line represents the results that would be expected by chance.


When the MacCAT-CR CATIE study criterion for being capable (MacCAT-CR understanding score ≥ 16)20 and the UBACC cut score (UBACC total score > 14.5) were applied to the full sample, 20 participants were labeled incapable by the CATIE study criterion; of these, the UBACC identified 19 as incapable. Likewise, 137 participants were labeled capable by the CATIE study criterion; of these, 102 were also identified as capable by the UBACC. Thus, the 2 methods led to discrepant results for 23% of the sample. Ninety-seven percent (35 of 36) of these discrepancies were instances wherein the CATIE criterion categorized an individual as capable, whereas the UBACC categorized him or her as having inadequate responses. This tendency for the UBACC to classify more persons as potentially incapable than the more lenient criterion used in the CATIE study is appropriate, given that the UBACC will generally be used to identify individuals who may need further capacity assessment and/or remediation efforts.
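
The agreement figures above can likewise be verified from the stated counts (20 incapable and 137 capable by the CATIE criterion, of whom 19 and 102, respectively, were classified the same way by the UBACC); a short check in Python:

# Check of the discrepancy figures reported above, using counts from the text.
catie_incapable, ubacc_agrees_incapable = 20, 19
catie_capable, ubacc_agrees_capable = 137, 102

discrepancies = (catie_incapable - ubacc_agrees_incapable) + (catie_capable - ubacc_agrees_capable)
total = catie_incapable + catie_capable
print(discrepancies, round(discrepancies / total, 2))  # 36 discrepant cases, about 23% of 157
print(round((catie_capable - ubacc_agrees_capable) / discrepancies, 2))  # about 0.97 flagged only by the UBACC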

In this article, we have described the development of the UBACC, a 10-item instrument, along with guidelines for its administration and scoring. The UBACC is designed to screen for participants who may lack capacity to consent (at least under routine consent conditions) and to document for each participant that he or she manifested at least a basic level of comprehension of the study protocol prior to enrollment. We also demonstrated the reliability and validity of this instrument within a specific application, ie, evaluating capacity to consent to a particular (hypothetical) clinical trial for middle-aged and older patients with schizophrenia along with a smaller group of HCs. The results suggest that this assessment instrument has good internal consistency, interrater reliability, and concurrent validity as well as high sensitivity and acceptable specificity (relative to the clinical interview). Moreover, the UBACC takes less than 5 minutes to administer.

The present reliability and validity study has several limitations related to the selection of subjects, protocol, scale items, and statistical tests. Although our study included a small sample of HCs, the UBACC was used in this study primarily to assess the capacity to consent to a treatment research protocol among middle-aged and older outpatients with schizophrenia or schizoaffective disorder. A sample including inpatients with more acute illness might yield different results, not only in the numbers of persons deemed incapable but perhaps also in the specific psychometric characteristics, as might application to other research populations or protocol types. The use of a simulated protocol is common in decisional capacity research22-29 and was desirable in that it ensured that the protocol included typical clinical trial elements (some of which may be absent from any 1 particular trial) and permitted evaluation of the UBACC in a common context with a large number of patients. Nonetheless, it is conceivable that some responses, particularly those related to the appreciation component of decisional capacity, might differ when subjects face an actual (rather than hypothetical) decision about whether to consent to a research study.

We originally intended item 2 (“What makes you want to consider participating in this study?”) as an assessment of participants' reasoning, but in the principal components analyses, this item loaded heavily with the understanding items. Item 2 was scored on the basis of the specific reasons offered, as we felt it might be inappropriate to enroll a participant in a study if his or her stated reasons conflicted with the likely benefits of the protocol (such as a response indicating a therapeutic misconception30).31,32 In contrast, most efforts to operationalize the construct of reasoning have focused on subjects' ability to generate reasons for behavior rather than on any set of correct responses. In hindsight, as scored here, item 2 may be best viewed as another potential indication of whether the participant misunderstood the intention of the protocol and so may in fact be appropriately viewed as an assessment of understanding. In short, the UBACC's primary focus is on understanding and appreciation. In contrast to the high correlations reported between MacCAT-CR total scores and cognitive measures, the observed correlations of individual UBACC items with the Repeatable Battery for the Assessment of Neuropsychological Status total index score, although significant, were not particularly strong. This may reflect the fact that, per classic psychometric theory, scale totals (comprising multiple items) tend to be more reliable than individual items. Finally, in terms of statistical methods, there is a possibility of type I error due to multiple analyses.

As with any psychometric instrument, and as with consent itself, validation and refinement should be viewed as ongoing processes. Decisional capacity is not a context-free trait33,34 but rather the outcome of an interaction between a person's skills and deficits and various environmental factors such as the complexity of the protocol being considered, the quality of the consent form and consent procedures, and the manner in which that information is communicated, explained, and reexplained when needed. As the present reliability and validity data were obtained in 1 specific context, further data from a variety of medical research populations and protocol types would be helpful to delineate the psychometric properties and utility of the UBACC across a wide array of potential applications.

The MacCAT-CR criterion examined in the present study may not appropriately generalize beyond its initial use in the CATIE study,21 and even the judgment of experts is subject to error and can be unreliable.1 Thus, additional validation relative to a range of potential gold standards is needed. For instance, an examination of the degree of convergence with the opinions of a wider range of potential experts or stakeholders (eg, patients, family members, and legal or regulatory authorities) may be desirable.35 Numerical data alone cannot answer ethical questions, but having these data would represent substantive steps in the desired direction. Further development of additional items or revised scoring to measure the reasoning dimension may be useful, particularly for applications wherein the UBACC is to be used with populations at risk for impaired reasoning with relatively intact understanding and appreciation.

We used receiver operating characteristic curve analyses to identify a cut score that we could apply to further evaluate the sensitivity and specificity of the UBACC. However, item analysis among the participants deemed capable indicated that some of these individuals provided initially incorrect answers to such basic questions as the purpose, procedures, or risks of the protocol. We therefore believe that it may not be appropriate to use a standard cut score in applied settings where the focus is on individual participants. Instead, we recommend that the investigators decide a priori which items are essential to consenting to the specific protocol. If, even after appropriate attempts at reexplanation, the participant cannot give full-credit responses to each of the essential items, then more comprehensive assessment of decisional capacity and/or more focused efforts to remediate the deficits through educational or enhanced consent procedures are warranted.

Although additional research such as that suggested earlier is needed to fully validate and further refine the UBACC as a screening and documentation tool, we believe the UBACC as presently configured and validated can serve as a valuable addition to the overall consent process in a wide variety of research settings. As intended, the UBACC partially retains the advantage of brevity of our previously described 3-item questionnaire,5 enabling more widespread use than has been practical with lengthier instruments. At the same time, as an expansion of the 3-item questionnaire, the UBACC provides for assessment and documentation of participants' understanding and appreciation of additional elements essential to meaningful consent, such as study procedures and the voluntary nature of participation. Thus, the UBACC fills a void between the 3-item questionnaire and more comprehensive but time-consuming capacity evaluations such as the MacCAT-CR.

To apply the UBACC to a specific study, the investigators will need to identify which of its 10 items are essential to valid consent for the specific protocol (in most cases this may be all of the 10 items, but some of the items may not apply to certain types of studies) and to identify the responses required for full credit (2 points) on these items. The UBACC can thereby be readily added to the consent process for most clinical research projects wherein a proportion of potential participants may be expected to lack adequate understanding or appreciation of the study information provided.

Correspondence: Dilip V. Jeste, MD, University of California, San Diego, Veterans Affairs San Diego Healthcare System, Bldg 13, Fourth Floor, 3350 La Jolla Village Dr, San Diego, CA 92161 (djeste@ucsd.edu).

Submitted for Publication: August 18, 2006; final revision received December 13, 2006; accepted January 5, 2007.

Financial Disclosure: None reported.

Funding/Support: This work was supported in part by grants R01 MH67002, P30 MH66248, and R01 MH64722 from the National Institute of Mental Health and R01 AG028827-01 from the National Institute on Aging as well as by the Veterans Affairs San Diego Healthcare System.

Additional Contributions: Ruth Rodriguez, AA, and Jorge Gutierrez, BA, assisted in data collection, Tia Thrasher, BA, and Margaret Thompson, BA, helped with collection of interscorer reliability data, and Rebecca Daly assisted with data analysis. A number of consumers and other stakeholders were involved in issues related to decisional capacity and provided useful advice and feedback throughout the process of developing and testing the University of California, San Diego Brief Assessment of Capacity to Consent.

REFERENCES

1. Marson DC, McInturff B, Hawkins L, Bartolucci A, Harrell LE. Consistency of physician judgments of capacity to consent in mild Alzheimer's disease. J Am Geriatr Soc. 1997;45(4):453-457.
2. Dunn LB, Nowrangi MA, Palmer BW, Jeste DV, Saks ER. Assessing decisional capacity for clinical research or treatment: a review of instruments. Am J Psychiatry. 2006;163(8):1323-1334.
3. Sturman ED. The capacity to consent to treatment and research: a review of standardized assessment tools. Clin Psychol Rev. 2005;25(7):954-974.
4. Appelbaum PS, Grisso T. MacCAT-CR: MacArthur Competence Assessment Tool for Clinical Research. Sarasota, FL: Professional Resource Press; 2001.
5. Karlawish JH, Lantos J. Community equipoise and the architecture of clinical research. Camb Q Healthc Ethics. 1997;6(4):385-396.
6. Jeste DV, Dunn LB, Palmer BW, Saks E, Halpain M, Cook A, Appelbaum P, Schneiderman L. A collaborative model for research on decision capacity and informed consent in older patients with schizophrenia: bioethics unit of a geriatric psychiatry intervention research center. Psychopharmacology (Berl). 2003;171(1):68-74.
7. Jeste DV, Depp CA, Palmer BW. Magnitude of impairment in decisional capacity in people with schizophrenia compared to normal subjects: an overview. Schizophr Bull. 2006;32(1):121-128.
8. Dunn LB, Candilis PJ, Roberts LW. Emerging empirical evidence on the ethics of schizophrenia research. Schizophr Bull. 2006;32(1):47-68.
9. Dunn LB, Jeste DV. Problem areas in the understanding of informed consent: study of middle-aged and older patients with psychotic disorders. Psychopharmacology (Berl). 2003;171(1):81-85.
10. Kay SR, Fiszbein A, Opler LA. The Positive and Negative Syndrome Scale (PANSS) for schizophrenia. Schizophr Bull. 1987;13(2):261-276.
11. Hamilton M. Development of a rating scale for primary depressive illness. Br J Soc Clin Psychol. 1967;6(4):278-296.
12. Randolph C. RBANS Manual: Repeatable Battery for the Assessment of Neuropsychological Status. San Antonio, TX: Psychological Corp; 1998.
13. Anastasi A, Urbina S. Psychological Testing. 7th ed. Upper Saddle River, NJ: Prentice Hall; 1997.
14. Delis DC, Jacobson M, Bondi MW, Hamilton JM, Salmon DP. The myth of testing construct validity using factor analysis or correlations with normal or mixed clinical populations: lessons from memory assessment. J Int Neuropsychol Soc. 2003;9(6):936-946.
15. Larrabee GJ. Lessons on measuring construct validity: a commentary on Delis, Jacobson, Bondi, Hamilton, and Salmon. J Int Neuropsychol Soc. 2003;9(6):947-953.
16. Carpenter WT, Gold JM, Lahti AC, Queern CA, Conley RR, Bartko JJ, Kovnick J, Appelbaum PS. Decisional capacity for informed consent in schizophrenia research. Arch Gen Psychiatry. 2000;57(6):533-538.
17. Moser DJ, Schultz SK, Arndt S, Benjamin ML, Fleming FW, Brems CS, Paulsen JS, Appelbaum PS, Andreasen NC. Capacity to provide informed consent for participation in schizophrenia and HIV research. Am J Psychiatry. 2002;159(7):1201-1207.
18. Palmer BW, Dunn LB, Appelbaum PS, Jeste DV. Correlates of treatment-related decision-making capacity among middle-aged and older patients with schizophrenia. Arch Gen Psychiatry. 2004;61(3):230-236.
19. Palmer BW, Jeste DV. Relationship of individual cognitive abilities to specific components of decisional capacity among middle-aged and older patients with schizophrenia. Schizophr Bull. 2006;32(1):98-106.
20. Stroup S, Appelbaum P, Swartz M, Patel M, Davis S, Jeste DV, Kim S, Keefe R, Manschreck T, McEvoy J, Lieberman J. Decision-making capacity for research participation among individuals in the CATIE schizophrenia trial. Schizophr Res. 2005;80(1):1-8.
21. Stroup TS, McEvoy JP, Swartz MS, Byerly MJ, Glick ID, Canive JM, McGee MF, Simpson GM, Stevens MC, Lieberman JA. The National Institute of Mental Health Clinical Antipsychotic Trials of Intervention Effectiveness (CATIE) project: schizophrenia trial design and protocol development. Schizophr Bull. 2003;29(1):15-31.
22. Palmer BW, Dunn LB, Appelbaum PS, Mudaliar S, Thal L, Henry R, Golshan S, Jeste DV. Assessment of capacity to consent to research among older persons with schizophrenia, Alzheimer disease, or diabetes mellitus: comparison of a 3-item questionnaire with a comprehensive standardized capacity instrument. Arch Gen Psychiatry. 2005;62(7):726-733.
23. Saks ER, Dunn LB, Marshall BJ, Nayak GV, Golshan S, Jeste DV. The California Scale of Appreciation: a new instrument to measure the appreciation component of capacity to consent to research. Am J Geriatr Psychiatry. 2002;10(2):166-174.
24. Dunn LB, Palmer BW, Keehan M. Understanding of placebo controls among older people with schizophrenia. Schizophr Bull. 2006;32(1):137-146.
25. Kim SY, Caine ED, Currier GW, Leibovici A, Ryan JM. Assessing the competence of persons with Alzheimer's disease in providing informed consent for participation in research. Am J Psychiatry. 2001;158(5):712-717.
26. Kim SYH, Cox C, Caine ED. Impaired decision-making ability in subjects with Alzheimer's disease and willingness to participate in research. Am J Psychiatry. 2002;159(5):797-802.
27. Moser DJ, Reese RL, Hey CT, Schultz SK, Arndt S, Beglinger LJ, Duff KM, Andreasen NC. Using a brief intervention to improve decisional capacity in schizophrenia research. Schizophr Bull. 2006;32(1):116-120.
28. Karlawish JH, Casarett DJ, James BD. Alzheimer's disease patients' and caregivers' capacity, competency, and reasons to enroll in an early-phase Alzheimer's disease clinical trial. J Am Geriatr Soc. 2002;50(12):2019-2024.
29. Schmand B, Gouwenberg B, Smit JH, Jonker C. Assessment of mental competency in community-dwelling elderly. Alzheimer Dis Assoc Disord. 1999;13(2):80-87.
30. Lidz CW, Appelbaum PS. The therapeutic misconception: problems and solutions. Med Care. 2002;40(9)(suppl):V55-V63.
31. Candilis PJ, Geppert CM, Fletcher KE, Lidz CW, Appelbaum PS. Willingness of subjects with thought disorder to participate in research. Schizophr Bull. 2006;32(1):159-165.
32. Dunn LB, Palmer BW, Keehan M, Jeste DV, Appelbaum PS. Assessment of therapeutic misconception in older schizophrenia patients with a brief instrument. Am J Psychiatry. 2006;163(3):500-506.
33. Appelbaum PS, Roth LH. Competency to consent to research: a psychiatric overview. Arch Gen Psychiatry. 1982;39(8):951-958.
34. Grisso T, Appelbaum PS. Assessing Competence to Consent to Treatment: A Guide for Physicians and Other Health Professionals. New York, NY: Oxford University Press; 1988.
35. Roberts LW, Hammond KA, Warner TD, Lewis R. Influence of ethical safeguards on research participation: comparison of perspectives of people with schizophrenia and psychiatrists. Am J Psychiatry. 2004;161(12):2309-2311.
