
Stated Preference for Cancer Screening: A Systematic Review of the Literature, 1990–2013


Carol Mansfield, PhD; Florence K. L. Tangka, PhD; Donatus U. Ekwueme, PhD; Judith Lee Smith, PhD; Gery P. Guy Jr, PhD; Chunyu Li, MD, PhD; A. Brett Hauber, PhD

Suggested citation for this article: Mansfield C, Tangka FK, Ekwueme DU, Smith JL, Guy GP Jr, Li C, et al. Stated Preference for Cancer Screening: A Systematic Review of the Literature, 1990–2013. Prev Chronic Dis 2016;13:150433. DOI: http://dx.doi.org/10.5888/pcd13.150433.


Stated-preference methods provide a systematic approach to quantitatively assess the relative preferences for features of cancer screening tests. We reviewed stated-preference studies for breast, cervical, and colorectal cancer screening to identify the types of attributes included, the use of questions to assess uptake, and whether gaps exist in these areas. The goal of our review is to inform research on the design and promotion of public health programs to increase cancer screening.
Using the PubMed and EconLit databases, we identified studies published in English from January 1990 through July 2013 that measured preferences for breast, cervical, and colorectal cancer screening test attributes using conjoint analysis or a discrete-choice experiment. We extracted data on study characteristics and results. We categorized studies by whether attributes evaluated included screening test, health care delivery characteristics, or both.
Twenty-two studies met the search criteria. Colorectal cancer was the most commonly studied cancer of the 3. Fifteen studies examined only screening test attributes (efficacy, process, test characteristics, and cost). Two studies included only health care delivery attributes (information provided, staff characteristics, waiting time, and distance to facility). Five studies examined both screening test and health care delivery attributes. Overall, cancer screening test attributes had a significant effect on a patient’s selection of a cancer screening test, and health care delivery attributes had mixed effects on choice.
A growing number of studies examine preferences for cancer screening tests. These studies consistently find that screening test attributes, such as efficacy, process, and cost, are significant determinants of choice. Fewer studies have examined the effect of health care delivery attributes on choice, and the results from these studies are mixed. There is a need for additional studies on the barriers to cancer screening uptake, including health care delivery attributes, and the effect of education materials on preferences.


Screening for certain cancers may increase the identification of early-stage disease and likelihood of successful treatment and survival (1). Screening for breast, cervical, and colorectal cancer is recommended by the US Preventive Services Task Force (USPSTF) (2). Recent analysis of the 2013 National Health Interview Survey indicates that the percentages of the population screened for breast, cervical, and colorectal cancer were 72.6%, 80.7%, and 58.2%, respectively (3), below the Healthy People 2020 recommended targets of 81.1%, 93.0%, and 70.5%, respectively (4).
Research that leads to an understanding of how patients value the attributes of health care interventions is critical to the design, development, and implementation of effective programs. Incorporating patient values in the decision-making process may result in operational policies and programs that enhance the effectiveness of health care interventions by improving the uptake of and adherence to recommended preventive health care services (5).
Stated-preference (SP) methods systematically assess the relative preferences for screening tests or the features of screening tests using questions that present hypothetical trade-offs. Furthermore, SP studies can incorporate questions to assess the factors that affect reported likelihood of uptake for cancer screening (5). Previous reviews of SP studies indicate that people have identifiable preferences for the features of cancer screening tests (6–8).
This article reviews SP studies of preferences for USPSTF-recommended breast, cervical, and colorectal cancer screening tests in which preferences were elicited using conjoint analysis (CA) and discrete-choice experiments (DCEs). CA and DCEs describe tests (or other goods) using a set of attributes (features) with varying levels and allow estimation of relative preferences for different attributes. The goal of the review was to assess the types of cancer screening test attributes researchers have considered, differentiating between attributes of the screening tests themselves and attributes that capture other elements of the patient experience. We also reviewed the use of questions to determine reported likelihood of uptake. Understanding how test attributes affect reported likelihood of uptake may help improve public health programs to increase cancer screening.


Stated-preference techniques

Researchers have developed several approaches consistent with economic theory to measure preferences for market and nonmarket goods, interventions, and policies (5). Revealed-preference methods use information from actual behavior or purchases to infer individuals’ preferences; SP methods use surveys or experimental methods with hypothetical scenarios to elicit preferences. SP methods vary and include contingent valuation, time trade-off, standard gamble, and other variations. The Medical Device Innovation Consortium provides more information on SP methods in health care research (9).
This review focuses on CA and DCE studies, a type of SP study in which the good or policy is defined by a set of attributes with varying levels (for a general discussion, see Hensher et al [10]). These surveys allow researchers to identify and quantify the relative effect of changes in different attributes on choices. Good practice suggests limiting the number of attributes, depending on their nature, and making deliberate decisions about which attributes to include and exclude (5). Researchers use their research question, findings from previous studies, and pretesting to select attributes that respondents find relevant. To examine reported likelihood of uptake and the attributes that influence it, researchers can include a fixed alternative in the choice question (usually a reference test representing the standard of care or the option of not getting a test) or a follow-up question asking whether the respondent would get the hypothetical test they selected in the choice question. CA and DCE approaches have been used for decades in marketing, transportation, environmental policy, and health care.
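As a minimal sketch of the choice mechanics underlying these surveys, the example below computes multinomial logit choice probabilities for one hypothetical colorectal screening choice task with a no-test option. All part-worth utilities and attribute levels here are invented for illustration; they are not drawn from any study in this review, where such values would be estimated from respondents' answers to the choice questions.

```python
import math

# Hypothetical part-worth utilities, invented for illustration only.
# In an actual DCE, these would be estimated (e.g., by conditional logit)
# from respondents' choices across many choice tasks.
PART_WORTHS = {
    "sensitivity_per_10pct": 0.50,  # utility per 10-point gain in sensitivity
    "cost_per_100": -0.40,          # utility per $100 of out-of-pocket cost
    "prep_required": -0.60,         # disutility of requiring bowel preparation
    "opt_out": -1.20,               # alternative-specific constant for "no test"
}

def utility(sensitivity_pct, cost_dollars, prep_required):
    """Total utility of a screening-test profile from its attribute levels."""
    return (PART_WORTHS["sensitivity_per_10pct"] * sensitivity_pct / 10
            + PART_WORTHS["cost_per_100"] * cost_dollars / 100
            + PART_WORTHS["prep_required"] * (1 if prep_required else 0))

def choice_probabilities(utilities):
    """Multinomial logit: P(i) = exp(U_i) / sum_j exp(U_j)."""
    exps = [math.exp(u) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# One choice task: a stool-based test, an endoscopic test, or no test.
u_a = utility(sensitivity_pct=70, cost_dollars=50, prep_required=False)
u_b = utility(sensitivity_pct=95, cost_dollars=300, prep_required=True)
probs = choice_probabilities([u_a, u_b, PART_WORTHS["opt_out"]])
print([round(p, 3) for p in probs])  # → [0.583, 0.411, 0.006]
```

Under these made-up values, the cheaper, lower-preparation test is chosen most often despite its lower sensitivity, which is the kind of trade-off these surveys are designed to quantify.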

Data sources and literature review strategies

Studies eligible for this systematic review met the following criteria: was a CA or DCE study; examined patient preferences for breast, colorectal, or cervical cancer screening recommended by the USPSTF; had the full-text article available in English; and was published from January 1990 through July 2013. We excluded studies that examined cancer treatment, cancer therapy, pharmaceuticals, healthy behaviors, or cancer prevention strategies not recommended by the USPSTF. We also excluded studies that included only physicians in their sample (Table 1).
We used the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines (11) to design and perform the literature review. Database searches were conducted in PubMed and EconLit. Search terms for PubMed were (“neoplasms”[mesh] or “cancer”) and (“conjoint analysis” or “conjoint analyses” or “conjoint-analysis” or “conjoint-analyses” or “discrete choice” or “discrete-choice” or “discrete ranking” or “discrete rank”). Search terms for EconLit were (“cancer”) and (“conjoint analysis” or “conjoint analyses” or “conjoint-analysis” or “conjoint-analyses” or “discrete choice” or “discrete-choice” or “discrete ranking” or “discrete rank”).

Study selection and data extraction

We identified 157 articles, 7 of which were duplicates. We screened 150 articles for inclusion, 114 of which were eliminated. We then screened the full text of 36 articles for eligibility; 22 articles remained for inclusion in the qualitative synthesis (Figure).
Figure. Identification and selection of articles for review. Abbreviations: CA, conjoint analysis; DCE, discrete-choice experiment; HPV, human papillomavirus; USPSTF, US Preventive Services Task Force.
We abstracted the following data items from the selected studies: author(s), year, sample size and population, cancer type, purpose of the study, attributes studied, significant attributes (defined as categorical attributes in which at least 2 levels were significantly different from each other or a continuous attribute with a significant coefficient [P ≤ .10]), whether the design included a no-test option, and predicted uptake as reported in the articles.
The review focused on the types of attributes included in the studies. To provide more focus for the review, the studies were categorized as studies that focused on screening test attributes only, health care delivery attributes only, or a combination of both. The categories were defined as follows:
Screening test attributes: attributes of the tests independent of the patient’s characteristics. These included efficacy (sensitivity, expected reduction in cancer rates or mortality, specificity), test features (type of test, preparation before the test, length of test, pain during test, complication risk), recommended frequency, where the test was administered, how soon results were available, whether a follow-up test was needed to address abnormal findings, and cost.
Health care delivery attributes: attributes related to the patient experience in the health care setting in which the screening was offered that are unrelated to the attributes of the test. These included attributes such as information provided to patients, how information was delivered, characteristics of the doctor and health care staff, waiting time for appointments, and distance to facility.
Studies were qualitatively assessed to identify common results.


Of the 22 studies, 15 included only screening test attributes, 2 included only health care delivery attributes, and 5 were a mixture of the 2. Tables 2 and 3 summarize the study characteristics and results.

Studies with only screening test attributes

Fifteen studies included only screening test attributes for breast cancer screening (15), cervical cancer screening (12,13,24,26), or colorectal cancer screening (14,16–23,25). Among the studies that examined preferences for colorectal cancer screening, 2 looked only at the fecal occult blood test (FOBT) (16,19) and 1 compared preferences for computed tomography colonography and colonoscopy (20). The rest included attributes defining a range of screening tests. Most studies surveyed the general population; however, many studies included respondents with screening experience or at higher risk of developing cancer (13,14,18–20,25).
DCE and CA studies can be set up as a forced choice, in which respondents must pick between tests, or they can include a no-test option that respondents can select instead of the hypothetical tests posed in the choice question. Two-thirds of the studies included a no-test option. In addition, 1 study included a separate question asking about preferences for specific unlabeled tests assigned the characteristics of existing tests, with the option of no test (23). Four studies provided predictions of uptake for tests with specific characteristics. Gyrd-Hansen (15) found that predicted uptake for a hypothetical program screening people aged 50 to 69 years every second year, with features drawn from the literature and a program in Denmark (80%–88%), was similar to estimates of actual uptake (88%). Hol et al (18) predicted 77% uptake of colonoscopy for screening-naive respondents in their sample in the Netherlands, based on what the authors defined as realistic assumptions for the attribute levels after reviewing the clinical literature. Marshall et al (21) estimated that total uptake for all types of colorectal cancer screening would be at most 42% if all currently available tests were offered to their sample in Canada. Van Dam et al (25) used risk reductions taken from the clinical literature to estimate uptake in their sample from the Netherlands: 75% for biennial FOBT screening, 80% for flexible sigmoidoscopy every 5 years, and 71% for colonoscopy every 10 years.
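Uptake predictions of this kind generally follow the same logic: once utilities have been estimated, predicted uptake is the logit probability of choosing a realistic test profile over the opt-out alternative. The sketch below illustrates that calculation; both utility values are invented for illustration and are not reproduced from any of the cited studies.

```python
import math

# Illustrative utilities only; the cited studies estimated comparable
# quantities from their own survey data before predicting uptake.
u_test = 0.9      # utility of a screening-test profile with realistic attribute levels
u_no_test = -0.3  # utility of declining screening (the opt-out alternative)

# Binary logit share: predicted uptake is the probability of choosing
# the test over the opt-out.
uptake = math.exp(u_test) / (math.exp(u_test) + math.exp(u_no_test))
print(round(uptake, 2))  # → 0.77
```

Because such predictions inherit the hypothetical framing of the survey, they are best read as relative comparisons between test profiles rather than forecasts of realized screening rates, which is why validation against actual uptake (as in Gyrd-Hansen [15]) is informative.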
Another feature that distinguished the studies was whether the screening test was identified by the process or name of the procedure. This feature was most relevant for colorectal cancer screening, in which available tests range from stool samples to colonoscopies. De Bekker-Grob et al (14) compared an unlabeled design with a labeled design. Howard et al (20) used a labeled design. Four studies included an attribute that identified the type of colorectal cancer screening test by name or through the process (17,18,21,22). The rest of the studies described the tests through attributes related to efficacy and process without mentioning the type of test.
All studies included some kind of efficacy attribute. Forty percent defined efficacy as the accuracy of the test (the probability that the test found cancer or precancerous growths); the rest presented the reduction in risk of cancer mortality. The efficacy attributes were significant in every study. Forty-seven percent of the studies also included specificity (the risk of false-positive results) as an attribute, which was significant in every study except one (16).
Test experience attributes included preparation before the test, discomfort during the test, waiting time for results, whether a follow-up visit was needed if results were abnormal, complication risk, duration of the screening procedure, recommended test frequency, out-of-pocket cost, and type of facility where the test was conducted. The attributes that were always significant were preparation before the test (included in 47% of the studies), discomfort or pain during the test (40%), complication risk (27%), cost (67%), and the type of facility where the test was performed (13%). Waiting time to get test results was not significant in 1 of the 4 studies that included it (12), location of the test in 1 of 2 (14), test frequency in 2 of 11 (15,16), and whether a follow-up test was needed to confirm abnormal results in 1 of 4 (22).
The primary purpose of most studies was to examine preferences for screening test features; however, 3 of the studies investigated questions about DCE or CA methods. De Bekker-Grob et al (14) looked at the effect of a labeled versus unlabeled design. Pignone et al (23) compared choice-based CAs with rating and ranking. Howard and Salkeld (19) examined the effect of attribute framing (whether sensitivity and specificity were presented as cancers found or cancers missed).

Studies with only health care delivery attributes

Only 2 studies, which looked at preferences for genetic counseling, included exclusively what we termed health care delivery attributes (27,28). Griffith et al (27) looked at preferences for genetic testing among women with a low, moderate, or high risk of breast cancer. Peacock et al (28) examined preferences for the type of information received during counseling for women at high risk of carrying the BRCA1 or BRCA2 genetic mutations, which are associated with a higher risk for breast and ovarian cancer.
The attributes in Griffith et al (27) were related to the appointment and were all significant, except whether the screening test was available only for high-risk women (versus the entire population), which was not significant to high-risk women, and the length of the appointment, which was not significant to low-risk women. The attributes in Peacock et al (28) included 4 topics that could be discussed during counseling; all were significant.

Studies with attributes of both a screening test and health care delivery

Five studies combined screening test attributes and health care delivery attributes, examining screening for colorectal cancer (31–33), cervical cancer (29), or breast cancer (30). Nayaradou et al (31) and Salkeld et al (32) did not include a no-test option, whereas the other studies did. Gerard et al (30) designed questions with a single scenario for screening, and women were asked whether they would attend.
Nayaradou et al (31) and Salkeld et al (33) surveyed average risk or general population samples. Fiebig et al (29) compared women with and without screening histories, Gerard et al (30) sampled from women with a history of screening, and Salkeld et al (32) surveyed individuals who had used an at-home FOBT (bowel screening) kit.
Four studies included sensitivity of the screening test, reduction in cancer mortality, or both, and 4 included the chance of a false-positive result (specificity). These attributes were significant in all the studies, except specificity, defined as the rate of unnecessary colonoscopy in Nayaradou et al (31). Cost was included in 3 of the studies and was consistently significant (29,31,32).
The health care delivery attributes were more diverse and context specific, and many were nonsignificant. Whether a person would be notified of negative test results was significant in Salkeld et al (33) and nonsignificant in Salkeld et al (32). Whether the doctor was paid an incentive was nonsignificant in Fiebig et al (29), but other attributes related to the doctor or general practitioner were significant. Who proposed the screening or where the respondent was told they learned about the screening was nonsignificant in Gerard et al (30) and Nayaradou et al (31). Gerard et al (30) examined many features related to the appointment: some were significant (travel time to the appointment, a private changing area, and the length of the screening), and some were nonsignificant (waiting time for an appointment and the results, a choice of hours for appointments in the evening or Saturday, and whether the staff at the clinic was welcoming or reserved).


Overall, the studies suggest that respondents valued improvements in attributes related to the characteristics of cancer screening tests, including sensitivity, process, and cost. The significance of the health care delivery attributes was uneven across studies, especially in studies combining test and health care delivery attributes. More than half of the studies included only screening test attributes. Thirteen included some type of opt-out option, but only 4 calculated predicted uptake for specific tests.
Three similar reviews of cancer screening tests have been published. Phillips et al (6), which reviewed SP contingent valuation, CA, and DCE studies published through May 2005 for any type of cancer screening test, identified 8 studies of patient preferences. Marshall et al (7) reviewed 6 SP studies for colorectal cancer screening published between 1990 and May 2009. Ghanouni et al (8) reviewed 7 CA studies of colorectal cancer screening tests to assess the quality of the research and results. With a larger sample of 22 studies, we confirmed the findings of the earlier reviews: patients had preferences across multiple attributes, and sensitivity was an important feature. This review included articles published through July 2013. Since this review was completed, several additional CA studies, not included in this review, have been published, including 8 more on colorectal cancer screening and 1 on breast cancer screening (34–42). Three of these more recent studies included health care delivery attributes such as travel time to breast screening appointment and the sex of staff members conducting breast screening (35,39,41). As with the 2 previous reviews (6,7), we found that most of the studies were administered to the general population at average risk of cancer; however, there are now more studies of populations at high risk of cancer or with screening histories. Several of the new studies focused on specific populations including older adults and Hispanics (34,35,39), and 1 study was conducted in Japan (41).
There are many ways in which these results from SP studies can aid in the design of future research and be applied to public health programs designed to increase screening. For example, in the United States, physicians may be more likely to recommend colonoscopy than other tests (43,44); however, the DCE and CA studies suggest that preparation, discomfort, and cost are important to patients and that some patients may prefer a stool test. In countries where stool tests are the standard of care, offering colonoscopies could improve uptake among people who have strong preferences for high sensitivity.
Health care delivery variables were sometimes nonsignificant. In SP surveys, process variables such as waiting time for an appointment may be nonsignificant relative to variables such as sensitivity, but these process factors may be important in determining whether people get screened. If an acceptable test exists, then process factors related to making appointments, getting the test, and getting the results may have a large influence on uptake for that test. Our understanding of preferences and uptake could be improved by additional research on the best way to include attributes associated with health care delivery. Health behavior theory, which has been used to develop and evaluate public health interventions (45), could provide a useful structure to develop attributes or other supporting questions related to attitude, environmental, or social factors influencing uptake (see Tsunematsu et al [41] for an example).
The hypothetical nature of SP surveys makes it challenging to accurately predict uptake. Nonetheless, adding a no-test option and providing estimates of uptake for specific tests when appropriate will provide more information on preferences and predicted uptake.
The issue of labeled versus unlabeled designs can affect predictions of uptake. De Bekker-Grob et al (14) found that choices differed based on whether labels were included. They concluded that respondents were less attentive to the attributes when labels were provided and that labeled designs may be more appropriate for respondents who were familiar with the labels and for studies interested in predicting uptake. It is unknown whether including test names as attributes is similar to using a labeled design.
We focused on patient preferences; however, studies have been done with physicians or comparing patients and physicians (12,22,29,46). Studies on physician preferences are important, because patients often rely on their physicians for advice (7). If patients and physicians value attributes differently, patient-preference surveys provide an opportunity for physicians and patients to identify differences in perspective, which could improve communication and shared decision making.
CA and DCE surveys could also be used more extensively to test the effect of messages on preferences and willingness of different populations, including underserved populations, to be screened. The results could help shape strategies for public health communication, especially because studies have found that the type of information provided can affect preferences for screening tests (7,38).
Our review has limitations. We reported attribute significance; however, the significance or lack of significance of attributes should be viewed as conditional on the set of attributes included and the range of levels. An attribute may be more or less important depending on the other attributes included in the survey. In general, best practice suggests that researchers include attributes that are important to respondents, implying that most attributes should be significant. However, even with careful pretesting, attributes that are important in isolation may not be important when included in a wider set of attributes. The surveys differed in objectives and format, limiting our ability to compare findings across studies. Furthermore, few studies were conducted in the same country, which limits the generalizability of findings, because national health policies vary widely among countries. For example, although many studies focused on colorectal cancer screening, only 3 were conducted in the United States.
A growing number of studies examine preferences for cancer screening tests. These studies consistently find that screening test attributes such as efficacy, process, and cost are significant determinants of choice. Fewer studies have examined the effect of health care delivery attributes on choice, and the results from these studies are mixed. Going forward, there is a need for studies on the barriers to cancer screening uptake, the impact of education materials on preferences, and the role of preference studies in patient and physician communication. Patient-preference studies may become more important as patient-centered care gains more prominence.


Funding was provided by the Centers for Disease Control and Prevention (Contract No. 200-2008-27958, Task order 0025); we have no financial disclosures. We thank Linda Chamiec-Case for her help in assembling the data for this study.

Author Information

Corresponding Author: Florence K. L. Tangka, PhD, Centers for Disease Control and Prevention, 4770 Buford Hwy, NE, MS F-76, Atlanta, GA 30341. Telephone: 770-488-1183. Email: ftangka@cdc.gov.
Author Affiliations: Carol Mansfield, A. Brett Hauber, RTI Health Solutions, RTI International, Research Triangle Park, North Carolina; Donatus U. Ekwueme, Judith Lee Smith, Gery P. Guy, Jr, Chunyu Li, Centers for Disease Control and Prevention, Atlanta, Georgia.


  1. National Institutes of Health. Cancer screening overview — for health professionals. Bethesda (MD): National Cancer Institute; 2015. http://www.cancer.gov/cancertopics/pdq/screening/overview/HealthProfessional#Section_25. Accessed August 27, 2015.
  2. Agency for Healthcare Research and Quality, US Preventive Services Task Force. The guide to clinical preventive services 2014: recommendations of the US Preventive Services Task Force; 2014. http://www.uspreventiveservicestaskforce.org/Page/Name/tools-and-resources-for-better-preventive-care. Accessed August 27, 2015.
  3. Sabatino SA, White MC, Thompson TD, Klabunde CN; Centers for Disease Control and Prevention (CDC). Cancer screening test use — United States, 2013. MMWR Morb Mortal Wkly Rep 2015;64(17):464–8. PubMed
  4. US Department of Health and Human Services. Healthy people 2020. Disparities in clinical preventive services. Office of Disease Prevention and Health Promotion; 2015. http://www.healthypeople.gov/2020/topics-objectives/topic/cancer/objectives. Accessed August 28, 2015.
  5. Bridges JF, Hauber AB, Marshall D, Lloyd A, Prosser LA, Regier DA, et al. Conjoint analysis applications in health — a checklist: a report of the ISPOR Good Research Practices for Conjoint Analysis Task Force. Value Health 2011;14(4):403–13. CrossRef PubMed
  6. Phillips KA, Van Bebber S, Marshall D, Walsh J, Thabane L. A review of studies examining stated preferences for cancer screening. Prev Chronic Dis 2006;3(3):A75. PubMed
  7. Marshall D, McGregor SE, Currie G. Measuring preferences for colorectal cancer screening: what are the implications for moving forward? Patient 2010;3(2):79–89. CrossRef PubMed
  8. Ghanouni A, Smith SG, Halligan S, Plumb A, Boone D, Yao GL, et al. Public preferences for colorectal cancer screening tests: a review of conjoint analysis studies. Expert Rev Med Devices 2013;10(4):489–99. CrossRef PubMed
  9. Medical Device Innovation Consortium. 2015. http://mdic.org/wp-content/uploads/2015/05/MDIC_PCBR_Framework_Web1.pdf. Accessed January 6, 2016.
  10. Hensher DA, Rose JM, Greene WH. Applied choice analysis: a primer. Cambridge (MA): Cambridge University Press; 2005.
  11. Moher D, Liberati A, Tetzlaff J, Altman DG; PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med 2009;6(7):e1000097.CrossRef PubMed
  12. Araña JE, León CJ, Quevedo JL. The effect of medical experience on the economic evaluation of health policies. A discrete choice experiment. Soc Sci Med 2006;63(2):512–24. CrossRefPubMed
  13. Basen-Engquist K, Fouladi RT, Cantor SB, Shinn E, Sui D, Sharman M, et al. Patient assessment of tests to detect cervical cancer. Int J Technol Assess Health Care 2007;23(2):240–7. CrossRefPubMed
  14. de Bekker-Grob EW, Hol L, Donkers B, van Dam L, Habbema JD, van Leerdam ME, et al. Labeled versus unlabeled discrete choice experiments in health economics: an application to colorectal cancer screening. Value Health 2010;13(2):315–23. CrossRef PubMed
  15. Gyrd-Hansen D. Cost-benefit analysis of mammography screening in Denmark based on discrete ranking data. Int J Technol Assess Health Care 2000;16(3):811–21. CrossRef PubMed
  16. Gyrd-Hansen D, Søgaard J. Analysing public preferences for cancer screening programmes. Health Econ 2001;10(7):617–34. CrossRef PubMed
  17. Hawley ST, Volk RJ, Krishnamurthy P, Jibaja-Weiss M, Vernon SW, Kneuper S. Preferences for colorectal cancer screening among racially/ethnically diverse primary care patients. Med Care 2008;46(9, Suppl 1):S10–6. CrossRef PubMed
  18. Hol L, de Bekker-Grob EW, van Dam L, Donkers B, Kuipers EJ, Habbema JD, et al. Preferences for colorectal cancer screening strategies: a discrete choice experiment. Br J Cancer 2010;102(6):972–80. CrossRef PubMed
  19. Howard K, Salkeld G. Does attribute framing in discrete choice experiments influence willingness to pay? Results from a discrete choice experiment in screening for colorectal cancer. Value Health 2009;12(2):354–63. CrossRef PubMed
  20. Howard K, Salkeld G, Pignone M, Hewett P, Cheung P, Olsen J, et al. Preferences for CT colonography and colonoscopy as diagnostic tests for colorectal cancer: a discrete choice experiment. Value Health 2011;14(8):1146–52. CrossRef PubMed
  21. Marshall DA, Johnson FR, Phillips KA, Marshall JK, Thabane L, Kulin NA. Measuring patient preferences for colorectal cancer screening using a choice-format survey. Value Health 2007;10(5):415–30. CrossRef PubMed
  22. Marshall DA, Johnson FR, Kulin NA, Ozdemir S, Walsh JME, Marshall JK, et al. How do physician assessments of patient preferences for colorectal cancer screening tests differ from actual preferences? A comparison in Canada and the United States using a stated-choice survey. Health Econ 2009;18(12):1420–39. CrossRef PubMed
  23. Pignone MP, Brenner AT, Hawley S, Sheridan SL, Lewis CL, Jonas DE, et al. Conjoint analysis versus rating and ranking for values elicitation and clarification in colorectal cancer screening. J Gen Intern Med 2012;27(1):45–50. CrossRef PubMed
  24. Ryan M, Skåtun D. Modelling non-demanders in choice experiments. Health Econ 2004;13(4):397–402. CrossRef PubMed
  25. van Dam L, Hol L, de Bekker-Grob EW, Steyerberg EW, Kuipers EJ, Habbema JD, et al. What determines individuals’ preferences for colorectal cancer screening programmes? A discrete choice experiment. Eur J Cancer 2010;46(1):150–9. CrossRef PubMed
  26. Wordsworth S, Ryan M, Skåtun D, Waugh N. Women’s preferences for cervical cancer screening: a study using a discrete choice experiment. Int J Technol Assess Health Care 2006;22(3):344–50. CrossRef PubMed
  27. Griffith GL, Edwards RT, Williams JM, Gray J, Morrison V, Wilkinson C, et al. Patient preferences and National Health Service costs: a cost-consequences analysis of cancer genetic services. Fam Cancer 2009;8(4):265–75. CrossRef PubMed
  28. Peacock S, Apicella C, Andrews L, Tucker K, Bankier A, Daly MB, et al. A discrete choice experiment of preferences for genetic counselling among Jewish women seeking cancer genetics services. Br J Cancer 2006;95(10):1448–53. CrossRef PubMed
  29. Fiebig DG, Haas M, Hossain I, Street DJ, Viney R. Decisions about Pap tests: what influences women and providers? Soc Sci Med 2009;68(10):1766–74. CrossRef PubMed
  30. Gerard K, Shanahan M, Louviere J. Using stated preference discrete choice modelling to inform health care decision-making: a pilot study of breast screening participation. Appl Econ 2003;35(9):1073–85. CrossRef
  31. Nayaradou M, Berchi C, Dejardin O, Launoy G. Eliciting population preferences for mass colorectal cancer screening organization. Med Decis Making 2010;30(2):224–33. CrossRef PubMed
  32. Salkeld G, Ryan M, Short L. The veil of experience: do consumers prefer what they know best? Health Econ 2000;9(3):267–70. CrossRef PubMed
  33. Salkeld G, Solomon M, Short L, Ryan M, Ward JE. Evidence-based consumer choice: a case study in colorectal cancer screening. Aust N Z J Public Health 2003;27(4):449–55. CrossRef PubMed
  34. Kistler CE, Hess TM, Howard K, Pignone MP, Crutchfield TM, Hawley ST, et al. Older adults’ preferences for colorectal cancer-screening test attributes and test choice. Patient Prefer Adherence 2015;9:1005–16. PubMed
  35. Martens CE, Crutchfield TM, Laping JL, Perreras L, Reuland DS, Cubillos L, et al. Why wait until our community gets cancer? Exploring CRC screening barriers and facilitators in the Spanish-speaking community in North Carolina. J Cancer Educ 2015 Aug 13. CrossRef PubMed
  36. Benning TM, Dellaert BG, Dirksen CD, Severens JL. Preferences for potential innovations in non-invasive colorectal cancer screening: a labeled discrete choice experiment for a Dutch screening campaign. Acta Oncol 2014;53(7):898–908. CrossRef PubMed
  37. Ghanouni A, Halligan S, Taylor SA, Boone D, Plumb A, Stoffel S, et al. Quantifying public preferences for different bowel preparation options prior to screening CT colonography: a discrete choice experiment. BMJ Open 2014;4(4):e004327. CrossRef PubMed
  38. Benning TM, Dellaert BG, Severens JL, Dirksen CD. The effect of presenting information about invasive follow-up testing on individuals’ noninvasive colorectal cancer screening participation decision: results from a discrete choice experiment. Value Health 2014;17(5):578–87. CrossRef PubMed
  39. Pignone MP, Crutchfield TM, Brown PM, Hawley ST, Laping JL, Lewis CL, et al. Using a discrete choice experiment to inform the design of programs to promote colon cancer screening for vulnerable populations in North Carolina. BMC Health Serv Res 2014;14(1):611. CrossRef PubMed
  40. Boone D, Mallett S, Zhu S, Yao GL, Bell N, Ghanouni A, et al. Patients’ & healthcare professionals’ values regarding true- & false-positive diagnosis when colorectal cancer screening by CT colonography: discrete choice experiment. PLoS One 2013;8(12):e80767. CrossRef PubMed
  41. Tsunematsu M, Kawasaki H, Masuoka Y, Kakehashi M. Factors affecting breast cancer screening behavior in Japan — assessment using the health belief model and conjoint analysis. Asian Pac J Cancer Prev 2013;14(10):6041–8. CrossRef PubMed
  42. Brenner A, Howard K, Lewis C, Sheridan S, Crutchfield T, Hawley S, et al. Comparing 3 values clarification methods for colorectal cancer screening decision-making: a randomized trial in the US and Australia. J Gen Intern Med 2014;29(3):507–13. CrossRef PubMed
  43. Zapka J, Klabunde CN, Taplin S, Yuan G, Ransohoff D, Kobrin S. Screening colonoscopy in the US: attitudes and practices of primary care physicians. J Gen Intern Med 2012;27(9):1150–8. CrossRef PubMed
  44. Lafata JE, Cooper GS, Divine G, Flocke SA, Oja-Tebbe N, Stange KC, et al. Patient-physician colorectal cancer screening discussions: delivery of the 5A’s in practice. Am J Prev Med 2011;41(5):480–6. CrossRef PubMed
  45. Glanz K, Bishop DB. The role of behavioral science theory in development and implementation of public health interventions. Annu Rev Public Health 2010;31(1):399–418. CrossRef PubMed
  46. Berchi C, Dupuis JM, Launoy G. The reasons of general practitioners for promoting colorectal cancer mass screening in France. Eur J Health Econ 2006;7(2):91–8. CrossRef PubMed