Publications

Hierarchies in the Decentralized Welfare State: Prioritization in the Housing Choice Voucher Program

Social provision in the United States is highly decentralized. Significant federal and state funding flows to local organizational actors, who are granted discretion over how to allocate resources to people in need. In welfare states where many programs are underfunded and decoupled from local need, how does decentralization shape who gets what? This article identifies forces that shape how local actors classify help-seekers when they ration scarce resources, focusing on the case of prioritization in the Housing Choice Voucher Program. We use network methods to represent and analyze 1,398 local prioritization policies. Our results reveal two patterns that challenge expectations from past literature. First, we observe classificatory restraint: many organizations choose not to draw fine distinctions among applicants when deciding whom to prioritize. Second, when organizations do institute priority categories, policies often advantage applicants who are formally institutionally connected to the local community. Interviews with officials, in turn, reveal how prioritization schemes reflect housing agencies’ position within a matrix of intra-organizational, inter-organizational, and vertical forces that structure the meaning and cost of classifying help-seekers. These findings illustrate how local organizations’ use of classification to solve on-the-ground organizational problems and manage scarce resources can generate additional forms of exclusion.
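
For readers curious how such policies can be rendered as a network, the sketch below shows one generic bipartite representation (agencies linked to the priority criteria they use), projected onto criteria to see which criteria co-occur. The agency names, criteria, and projection step are hypothetical illustrations, not the article’s actual coding scheme.

```python
# Hypothetical sketch: represent agencies and the priority criteria in their
# policies as a bipartite network, then project onto criteria to see which
# criteria co-occur in the same policies. Names and data are invented.
import networkx as nx
from networkx.algorithms import bipartite

policies = {
    "agency_A": {"local_resident", "veteran"},
    "agency_B": {"local_resident", "homeless"},
    "agency_C": set(),  # classificatory restraint: no priority categories at all
}

G = nx.Graph()
criteria = {c for cs in policies.values() for c in cs}
G.add_nodes_from(policies, bipartite=0)   # agencies
G.add_nodes_from(criteria, bipartite=1)   # priority criteria
G.add_edges_from((a, c) for a, cs in policies.items() for c in cs)

# Edge weights in the projection count how many agencies use both criteria.
criterion_net = bipartite.weighted_projected_graph(G, criteria)
print(criterion_net.edges(data=True))
```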

What is the Bureaucratic Counterfactual? Categorical versus Algorithmic Prioritization in U.S. Social Policy

There is growing concern about governments’ use of algorithms to make high-stakes decisions. While an early wave of research focused on algorithms that predict risk to allocate punishment and suspicion, a newer wave of research studies algorithms that predict “need” or “benefit” to target beneficial resources, such as ranking those experiencing homelessness by their need for housing. The present paper argues that existing research on the role of algorithms in social policy could benefit from a counterfactual perspective that asks: given that a social service bureaucracy needs to make some decision about whom to help, what status quo prioritization method would algorithms replace? While a large body of research contrasts human with algorithmic decision-making, social service bureaucracies do not target help by giving street-level bureaucrats full discretion. Instead, they primarily target help through pre-algorithmic, rule-based methods. In this paper, we outline social policy’s current status quo method, categorical prioritization, in which decision-makers (1) manually decide which attributes of help-seekers should confer priority, (2) simplify any continuous measures of need into categories (e.g., household income falls below a threshold), and (3) choose the decision rules that map categories to priority levels. We draw on novel data and quantitative and qualitative social science methods to describe categorical prioritization in two case studies of United States social policy: waitlists for scarce housing vouchers and K-12 school finance formulas. We outline three main differences between categorical and algorithmic prioritization: whether the basis for prioritization is formalized; what role power plays in prioritization; and whether decision rules for priority are manually chosen or inductively derived from a predictive model. Concluding, we show how the counterfactual perspective underscores both the understudied costs of categorical prioritization in social policy and the understudied potential of predictive algorithms to narrow inequalities.
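
As a rough illustration of the contrast drawn above, the sketch below pairs a hand-written categorical rule with a ranking score from a predictive model. The attributes, threshold, tiers, and scikit-learn-style model interface are hypothetical and do not reproduce any agency’s actual rules or the paper’s models.

```python
# Hypothetical sketch of the two approaches; the attributes, threshold, tiers,
# and model interface are invented for illustration, not any agency's rules.
from dataclasses import dataclass

@dataclass
class Applicant:
    income: float            # annual household income
    homeless: bool
    lives_in_jurisdiction: bool

def categorical_priority(a: Applicant, income_threshold: float = 20_000) -> int:
    """Categorical prioritization: manually chosen categories and decision rules."""
    low_income = a.income < income_threshold   # continuous need collapsed into a category
    if low_income and a.homeless:
        return 1                               # highest priority tier
    if low_income or a.lives_in_jurisdiction:
        return 2
    return 3                                   # lowest priority tier

def algorithmic_priority(a: Applicant, model) -> float:
    """Algorithmic prioritization: rank applicants by a predicted need/benefit score."""
    x = [[a.income, int(a.homeless), int(a.lives_in_jurisdiction)]]
    return float(model.predict_proba(x)[0, 1])  # any scikit-learn-style classifier
```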

Polygenic Scores for Plasticity: A New Tool for Studying Gene–Environment Interplay

Fertility, health, education, and other outcomes of interest to demographers are the product of an individual’s genetic makeup and their social environment. Yet, gene × environment (G×E) research deploys a limited toolkit on the genetic side to study gene–environment interplay, relying on polygenic scores (PGSs) that reflect the influence of genetics on the level of an outcome. In this article, we develop a genetic summary measure better suited for G×E research: variance polygenic scores (vPGSs), which are PGSs that reflect genetic contributions to plasticity in outcomes. First, we use the UK Biobank (N ∼ 408,000 in the analytic sample) and the Health and Retirement Study (N ∼ 5,700 in the analytic sample) to compare four approaches to constructing PGSs for plasticity. The results show that widely used methods for discovering which genetic variants affect outcome variability fail to serve as distinctive new tools for G×E research. Second, using the PGSs that do capture distinctive genetic contributions to plasticity, we analyze heterogeneous effects of a UK education reform on health and educational attainment. Together, the results illustrate the properties of a useful new tool for population scientists studying the interplay of nature and nurture and for population-based studies that are releasing PGSs to applied researchers.
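
For readers less familiar with the distinction, the display below gives one common way a variance PGS can be written down, via a double generalized linear model; the notation is our illustration and not necessarily the article’s exact specification.

```latex
% Illustrative only: one common way to define a variance PGS, via a double
% generalized linear model; not necessarily the article's specification.
\begin{align*}
  y_i &= \beta_0 + \textstyle\sum_j \beta_j g_{ij} + \varepsilon_i,
        \qquad \varepsilon_i \sim N(0, \sigma_i^2), \\
  \log \sigma_i^2 &= \gamma_0 + \textstyle\sum_j \gamma_j g_{ij}, \\
  \mathrm{PGS}_i  &= \textstyle\sum_j \hat{\beta}_j\, g_{ij}
        \quad \text{(genetic contribution to the level of the outcome)}, \\
  \mathrm{vPGS}_i &= \textstyle\sum_j \hat{\gamma}_j\, g_{ij}
        \quad \text{(genetic contribution to its variability, i.e., plasticity)}.
\end{align*}
```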

A cluster randomized controlled trial of a modified vaccination clinical reminder for primary care providers

Objective: Adult vaccination rates in the United States fall short of national goals, and rates are particularly low for Black Americans. We tested a provider-focused vaccination uptake intervention: a modified electronic health record clinical reminder that bundled together three adult vaccination reminders, presented patient vaccination history, and included talking points for providers to address vaccine hesitancy. Method: Primary care teams at the Atlanta Veterans Affairs Medical Center, which collectively saw 28,941 patients during the study period, were randomly assigned to receive either the modified clinical reminder (N = 44 teams) or the status quo (N = 40 teams). Results: Uptake of influenza and other adult vaccinations was 1.6 percentage points higher in the intervention group, a difference that was not statistically significant (CI [-1.3, 4.4], p = .28). The intervention had similar effects on Black and White patients and did not reduce the disparity in vaccination rates between these groups. Conclusion: Provider-focused interventions are a promising way to address vaccine hesitancy, but they may need to be more intensive than a modified clinical reminder to have appreciable effects on vaccination uptake.

Public attitudes toward genetic risk scoring in medicine and beyond

Advances in genomics research have led to the development of polygenic risk scores, which numerically summarize genetic predispositions for a wide array of human outcomes. Initially developed to characterize disease risk, polygenic risk scores can now be calculated for many non-disease traits and social outcomes, with the potential to be used not only in health care but also in other institutional domains. In this study, we draw on a nationally representative survey of U.S. adults to examine three sets of lay attitudes toward the deployment of genetic risk scores in a variety of medical and non-medical domains: (1) abstract beliefs about whether people should be judged on the basis of genetic predispositions; (2) concrete attitudes about whether various institutions should be permitted to use genetic information; and (3) personal willingness to provide genetic information to various institutions. Results demonstrate two striking differences across these three sets of attitudes. First, despite almost universal agreement that people should not be judged based on genetics, there is support, albeit varied, for institutions being permitted to use genetic information, with support highest for disease outcomes and for reproductive decision-making. We further find significant variation in personal willingness to provide such information, with a majority of respondents expressing willingness to provide information to health care providers and relative-finder services, but less than a quarter expressing willingness to do so for an array of other institutions and services. Second, while there are no demographic differences in respondents’ abstract beliefs about judging based on genetics, demographic differences emerge in permissibility ratings and personal willingness. Our results should inform debates about the deployment of polygenic scores in domains within and beyond medicine.

What is your estimand? Defining the target quantity connects statistical evidence to theory

We make only one point in this article. Every quantitative study must be able to answer the question: what is your estimand? The estimand is the target quantity—the purpose of the statistical analysis. Much attention is already placed on how to do estimation; a similar degree of care should be given to defining the thing we are estimating. We advocate that authors state the central quantity of each analysis—the theoretical estimand—in precise terms that exist outside of any statistical model. In our framework, researchers do three things: (1) set a theoretical estimand, clearly connecting this quantity to theory; (2) link to an empirical estimand, which is informative about the theoretical estimand under some identification assumptions; and (3) learn from data. Adding precise estimands to research practice expands the space of theoretical questions, clarifies how evidence can speak to those questions, and unlocks new tools for estimation. By grounding all three steps in a precise statement of the target quantity, our framework connects statistical evidence to theory.
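
As a stock illustration of the three steps, consider an average treatment effect; this worked example is ours, and the framework accommodates many other estimands.

```latex
% Illustrative example (ours): the three steps applied to an average treatment effect.
% (1) Theoretical estimand, defined outside any statistical model:
\tau = \frac{1}{N}\sum_{i=1}^{N}\bigl[\, Y_i(1) - Y_i(0) \,\bigr]
% (2) Empirical estimand, informative about \tau under identification assumptions
%     such as conditional ignorability and positivity:
\tau = \mathbb{E}_{X}\bigl[\, \mathbb{E}[Y \mid T = 1, X] - \mathbb{E}[Y \mid T = 0, X] \,\bigr]
% (3) Learn from data: estimate these conditional expectations with any suitable
%     estimator (regression, matching, weighting, or machine learning).
```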

Tool for surveillance or spotlight on inequality? Big data and the law

The rise of big data and machine learning is a polarizing force among those studying inequality and the law. Big data and tools like predictive modeling may amplify inequalities in the law, subjecting vulnerable individuals to enhanced surveillance. But these data and tools may also serve an opposite function, shining a spotlight on inequality and subjecting powerful institutions to enhanced oversight. We begin with a typology of the role of big data in inequality and the law. The typology asks questions—Which type of individual or institutional actor holds the data? What problem is the actor trying to use the data to solve?—that help situate the use of big data within existing scholarship on law and inequality. We then highlight the dual uses of big data and computational methods—data for surveillance and data as a spotlight—in three areas of law: rental housing, child welfare, and opioid prescribing. Our review highlights asymmetries where the lack of data infrastructure to measure basic facts about inequality within the law has impeded the spotlight function.

Using machine learning to help vulnerable tenants in New York City

To keep housing affordable, the City of New York has implemented rent-stabilization policies to restrict the rate at which the rent of certain units can be increased every year. However, some landlords of these rent-stabilized units try to illegally force their tenants out in order to circumvent rent-stabilization laws and greatly increase the rent they can charge. To identify and help tenants who are vulnerable to such landlord harassment, the New York City Public Engagement Unit (NYC PEU) conducts targeted outreach to tenants to inform them of their rights and to assist them with serious housing challenges. In this paper, we collaborated with NYC PEU to develop machine learning models to better prioritize outreach and help to vulnerable tenants. Our best-performing model can potentially help the PEU’s Tenant Support Unit (TSU) find 59% more buildings where tenants face landlord harassment than the current outreach method, while using the same resources. The results also highlight the factors that help predict the risk of tenant harassment and provide a data-driven, comprehensive approach to improving the city’s proactive outreach to vulnerable tenants.
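
The sketch below shows the general shape of such a prioritization pipeline: score buildings with a fitted classifier and direct limited outreach capacity to the highest-risk ones. The feature names, toy data, and choice of learner are hypothetical illustrations, not the features or models used in the paper.

```python
# Hypothetical sketch: score buildings by predicted risk of tenant harassment
# and direct limited outreach capacity to the highest-risk buildings.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Toy schema: one row per rent-stabilized building, with a historical label
# indicating whether harassment was found during past outreach (invented data).
buildings = pd.DataFrame({
    "num_units": [12, 48, 6, 30],
    "open_hpd_violations": [3, 0, 9, 1],
    "recent_ownership_change": [1, 0, 1, 0],
    "harassment_found": [1, 0, 1, 0],
})

X = buildings.drop(columns="harassment_found")
y = buildings["harassment_found"]

# In practice the model would be trained on past outreach and evaluated on
# held-out data; here we fit on the toy rows just to show the pipeline shape.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Rank buildings by predicted risk and take the top k that outreach can reach.
buildings["risk_score"] = model.predict_proba(X)[:, 1]
outreach_budget = 2
targets = buildings.sort_values("risk_score", ascending=False).head(outreach_budget)
print(targets[["risk_score"]])
```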

A sibling method for identifying vQTLs

The propensity of a trait to vary within a population may have evolutionary, ecological, or clinical significance. In the present study we deploy sibling models to offer a novel and unbiased way to ascertain loci associated with the extent to which phenotypes vary (variance-controlling quantitative trait loci, or vQTLs). Previous methods for vQTL mapping either exclude genetically related individuals or treat genetic relatedness among individuals as a complicating factor addressed by adjusting estimates for non-independence in phenotypes. The present method uses genetic relatedness as a tool to obtain unbiased estimates of variance effects rather than as a nuisance. The family-based approach, which utilizes random variation between siblings in minor allele counts at a locus, also allows controls for parental genotype, mean effects, and non-linear (dominance) effects that may spuriously appear to generate variation. Simulations show that the approach performs as well as two existing methods (the squared Z-score and DGLM approaches) in controlling type I error rates when there is no unobserved confounding, and performs significantly better than these methods in the presence of small degrees of confounding. Using height and BMI as empirical applications, we investigate SNPs that alter within-family variation in height and BMI, as well as pathways that appear to be enriched. One significant SNP for BMI variability, in the MAST4 gene, replicated across datasets. Pathway analysis revealed one gene set, encoding members of several signaling pathways related to gap junction function, which appears significantly enriched for associations with within-family height variation in both datasets (while not enriched in analyses of mean levels). We recommend approximating laboratory random assignment of genotype using family data and paying more careful attention to the possible conflation of mean and variance effects.

Identifiable Characteristics and Potentially Malleable Beliefs Predict Stigmatizing Attributions Toward Persons With Alzheimer’s Disease Dementia: Results of a Survey of the U.S. General Public

The general public’s views can influence whether people with Alzheimer’s disease (AD) experience stigma. The purpose of this study was to understand what characteristics in the general public are associated with stigmatizing attributions. A random sample of adults from the general population read a vignette about a man with mild Alzheimer’s disease dementia and completed a modified Family Stigma in Alzheimer’s Disease Scale (FS-ADS). Multivariable ordered logistic regressions were used to examine relationships between personal characteristics and FS-ADS ratings. Older respondents expected that persons with AD would receive less support (OR = 0.82, p = .001), have social interactions limited by others (OR = 1.13, p = .04), and face institutional discrimination (OR = 1.13, p = .04). Females reported stronger feelings of pity (OR = 1.57, p = .03) and weaker reactions to negative aesthetic features (OR = 0.67, p = .05). Those who believed strongly that AD was a mental illness rated symptoms more severely (OR = 1.78, p = .007). Identifiable characteristics and beliefs in the general public are related to stigmatizing attributions toward AD. To reduce AD stigma, public health messaging campaigns can tailor information to subpopulations, recognizable by their age, gender, and beliefs.
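
For readers unfamiliar with the method, the sketch below shows how an ordered logistic regression of this general kind could be fit; the variable names, toy data, and use of statsmodels’ OrderedModel are our illustrative assumptions, not the study’s actual analysis code.

```python
# Hypothetical sketch of an ordered logistic regression relating respondent
# characteristics to an ordinal stigma rating (invented data and variable names).
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.DataFrame({
    # Ordinal FS-ADS-style rating (1 = low, 3 = high), stored as ordered categories.
    "fs_ads_pity": pd.Categorical([1, 3, 2, 2, 3, 1, 2, 1, 3, 2, 3, 2], ordered=True),
    "age": [34, 67, 52, 45, 71, 64, 58, 39, 41, 55, 68, 49],
    "female": [0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1],
    "believes_ad_mental_illness": [1, 4, 3, 2, 5, 2, 3, 3, 4, 3, 2, 2],
})

model = OrderedModel(
    df["fs_ads_pity"],
    df[["age", "female", "believes_ad_mental_illness"]],
    distr="logit",
)
res = model.fit(method="bfgs", disp=False)
print(res.summary())

# Odds ratios for a one-unit change in each predictor.
print(np.exp(res.params[["age", "female", "believes_ad_mental_illness"]]))
```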

Diagnosing, Disclosing, and Documenting Borderline Personality Disorder: A Survey of Psychiatrists' Practices

Borderline personality disorder (BPD) is a valid and reliable diagnosis with effective treatments. However, data suggest many patients remain unaware they carry the diagnosis, even when they are actively engaged in outpatient psychiatric treatment. The authors conducted a survey of 134 psychiatrists practicing in the United States to examine whether they had ever withheld and/or not documented their patients’ BPD diagnosis. Fifty-seven percent indicated that at some point during their careers they had not disclosed BPD; 37 percent said they had not documented the diagnosis. Among respondents with a history of not disclosing or documenting BPD, most agreed that stigma or diagnostic uncertainty played a role in their decisions. The findings highlight the need for clinical training programs to address these issues and invite further research to identify other reasons why psychiatrists are hesitant to be fully open about the diagnosis of BPD.

Risks of phase I research with healthy participants: A systematic review

Background/aims: Tragedies suggest that phase I trials in healthy participants may be highly risky. This possibility raises concern that phase I trials may exploit healthy participants to develop new therapies, making the translation of scientific discoveries ethically worrisome. Yet, few systematic data evaluate this concern. This article systematically reviews the risks of published phase I trials in healthy participants and evaluates trial features associated with increased risks. Methods: Data on adverse events and trial characteristics were extracted from all phase I trials published in PubMed, Embase, Cochrane, Scopus, and PsycINFO (1 January 2008–1 October 2012). Inclusion criteria were phase I studies that enrolled healthy participants of any age, provided quantitative adverse event data, and documented the number of participants enrolled. Exclusion criteria included (1) adverse event data not in English, (2) a “challenge” study in which participants were administered a pathogen, and (3) no quantitative information about serious adverse events. Data on the incidence of adverse events, duration of adverse event monitoring, trial agent tested, participant demographics, and trial location were extracted. Results: In 475 trials enrolling 27,185 participants, there was a median of zero serious adverse events (interquartile range = 0–0) and a median of zero severe adverse events (interquartile range = 0–0) per 1000 treatment group participants/day of monitoring. The rate of mild and moderate adverse events was a median of 1147.19 per 1000 participants (interquartile range = 651.52–1730.9) and 46.07 per 1000 participants/adverse event monitoring day (interquartile range = 17.80–77.19). Conclusion: We conclude that phase I trials do cause mild and moderate harms but pose low risks of severe harm. To ensure that this conclusion also applies to unpublished trials, it is important to increase trial transparency.
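
For clarity about the units in these figures, the two reported rates can be read as follows (our notation, shown only to make the denominators explicit).

```latex
% Our notation, added only to clarify the denominators of the reported rates.
\text{rate per 1000 participants} =
  \frac{\text{number of adverse events}}{\text{number of participants}} \times 1000
\qquad
\text{rate per 1000 participant-days} =
  \frac{\text{number of adverse events}}{\text{participants} \times \text{days of monitoring}} \times 1000
```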

When clinical care is like research: the need for review and consent

The prevailing “segregated model” for understanding clinical research sharply separates it from clinical care and subjects it to extensive regulations and guidelines. This approach is based on the fact that clinical research relies on procedures and methods—research biopsies, blinding, randomization, fixed treatment protocols, placebos—that pose risks and burdens to participants in order to collect data that might benefit all patients. Reliance on these methods raises the potential for exploitation and unfairness, and thus points to the need for independent ethical review and more extensive informed consent. In contrast, it is widely assumed that clinical care does not raise these ethical concerns because it is designed to promote the best interests of individual patients. The segregation of clinical research from clinical care has been largely effective at protecting research participants. At the same time, this approach ignores the fact that several aspects of standard clinical care, such as clinician training and scheduling, also pose some risks and burdens to present patients for the benefit of all patients. We argue that recently proposed learning health care systems offer a way to address this concern, and better protect patients, by developing integrated review and consent procedures. Specifically, current approaches base the need for independent ethical review and more extensive informed consent on whether an activity is categorized as clinical research or clinical care. An ethically sounder approach, which could be incorporated into learning health care systems, would be to base the need for independent ethical review and more extensive informed consent on the extent to which an activity poses risks to present patients for the benefit of all patients.

The relative contributions of disease label and disease prognosis to Alzheimer's stigma: A vignette-based experiment

Background: The classification of Alzheimer’s disease is undergoing a significant transformation. Researchers have created the category of “preclinical Alzheimer’s,” characterized by biomarker pathology rather than observable symptoms. Diagnosis and treatment at this stage could make it possible to prevent the cognitive decline of Alzheimer’s. While many commentators have worried that persons given a preclinical Alzheimer’s label will be subject to stigma, little research exists to inform whether the stigma attached to the label of clinical Alzheimer’s will extend to a preclinical disorder that carries the label “Alzheimer’s” but lacks the symptoms or expected prognosis of the clinical form. Research questions: The present study sought to address this gap by examining the foundations of stigma directed at Alzheimer’s. It asked: do people form stigmatizing reactions to the label “Alzheimer’s disease” itself or to the condition’s observable impairments? How does the condition’s prognosis modify these reactions? Methods: Data were collected through a web-based experiment with N = 789 adult members of the U.S. general population (median age = 49, interquartile range = 32–60, range = 18–90). Participants were randomized within a 3 × 3 design to read one of nine vignettes depicting signs and symptoms of mild-stage dementia; the vignettes varied the disease label (“Alzheimer’s” vs. “traumatic brain injury” vs. no label) and the prognosis (symptoms expected to improve, remain static, or worsen). Four stigma outcomes were assessed: discrimination, negative cognitive attributions, negative emotions, and social distance. Results: The study found that the Alzheimer’s disease label was generally not associated with more stigmatizing reactions. In contrast, expecting the symptoms to worsen, regardless of which disease label those symptoms received, resulted in higher levels of perceived structural discrimination, greater pity, and greater social distance. Conclusion: These findings suggest that stigma surrounding preclinical Alzheimer’s categories will depend heavily on the expected prognosis attached to the label. They also highlight the need for models of Alzheimer’s-directed stigma that incorporate attributions about the condition’s mutability.

A review of ethical issues in dementia

Dementia raises many ethical issues. The present review, noting that different stages of dementia raise distinct ethical issues, focuses on three issues associated with the disease’s progression: (1) how the emergence of preclinical and asymptomatic but at-risk categories for dementia creates complex questions about preventive measures, risk disclosure, and protection from stigma and discrimination; (2) how, despite efforts at dementia prevention, important research continues to investigate ways to alleviate clinical dementia’s symptoms and requires additional human subjects protections to ethically enroll persons with dementia; and (3) how, in spite of research and prevention efforts, persons must still live with dementia. This review highlights two major themes. First is how expanding the boundaries of dementias such as Alzheimer’s to include asymptomatic but at-risk persons generates new ethical questions. One promising way to address these questions is to take an integrated approach to dementia ethics, which can include incorporating ethics-related data collection into the design of a dementia research study itself. Second is the interdisciplinary nature of ethical questions related to dementia, from health policy questions about insurance coverage for long-term care, to political questions about voting, driving, and other civic rights and privileges, to economic questions about balancing an employer’s right to a safe and productive workforce with an employee’s right to avoid discrimination on the basis of their dementia risk. The review closes by highlighting these themes and emerging ethical issues in dementia.

Views about responsibility for alcohol addiction and negative evaluations of naltrexone

Moral philosophers have debated the extent to which persons are individually responsible for the onset of and recovery from addiction. Empirical investigators have begun to explore counselors’ attitudes on these questions. Meanwhile, a separate literature has investigated counselors’ negative attitudes towards naltrexone, an important element of medication-assisted treatment for alcohol addiction. The present study bridges the literature on counselor views about responsibility for addiction with the literature on attitudes towards naltrexone. It investigates the extent to which a counselor’s views of individual responsibility for alcohol addiction are related to that counselor’s views of naltrexone.

Revision and Representation: The Controversial Case of DSM-5

Challenging the Sanctity of Donorism: Patient Tissue Providers as Payment-Worthy Contributors

Many research projects rely on human biological materials, and some of these projects generate revenue. Recently, it has been argued that investigators have a moral claim to share in the revenue generated by these projects, whereas persons who provide the biological material have no such claim (Truog, Kesselheim, and Joffe 2012). In this paper, we critically analyze this view and offer a positive proposal for why tissue providers have a moral claim to benefit. Focusing on payment as a form of benefit, we argue that research is a joint project and propose a contribution principle for paying participants in those joint projects. We distinguish between contributions that shape a project’s revenue generating properties, grounding a claim to payment, and contributions that fail to ground such a claim. We conclude, contrary to existing arguments and practices, that some tissue providers have a moral claim to payment beyond compensation for risk and burden. This conclusion suggests that investigators, institutions, and sponsors should reconsider the fairness of their current practices.

US state variation in autism insurance mandates: Balancing access and fairness

This article examines how nations split decision-making about health services between federal and sub-federal levels, creating variation between states or provinces. When is this variation ethically acceptable? We identify three sources of ethical acceptability (procedural fairness, value pluralism, and substantive fairness) and examine these sources with respect to a case study: the fact that only 30 out of 51 US states or territories passed mandates requiring private insurers to offer extensive coverage of autism behavioral therapies, creating variation for privately insured children living in different US states. Is this variation ethically acceptable? To address this question, we need to analyze whether mandates go to more or less needy states and whether the mandates reflect value pluralism between states regarding government’s role in health care. Using time-series logistic regressions and data from the National Survey of Children with Special Health Care Needs, the Individuals with Disabilities Education Act, the political composition of state legislatures, and American Board of Pediatrics workforce data, we find that the states that have passed mandates are less needy than the states that have not, a pattern we call a cumulative advantage outcome that increases between-state disparities rather than a compensatory outcome that decreases them. Concluding, we discuss the implications of our analysis for broader discussions of variation in health services provision.

The Tarasoff Rule: The Implications of Interstate Variation and Gaps in Professional Training

Recent events have revived questions about the circumstances that ought to trigger therapists’ duty to warn or protect. There is extensive interstate variation in the statutes enacted and rulings made regarding the duty to warn or protect in the wake of the California Tarasoff ruling. These duties may be codified in legislative statutes, established in common law through court rulings, or remain unspecified. Furthermore, the duty to warn or protect is not only variable between states but also has been dynamic across time. In this article, we review the implications of this variability and dynamism, focusing on three sets of questions. First, what legal and ethics-related challenges do therapists in each of the three broad categories of states (states that mandate therapists to warn or protect, states that permit therapists to breach confidentiality to warn but impose no mandate, and states that give no guidance) face in handling threats of violence? Second, what training do therapists and other professionals involved in handling violent threats receive, and is this training adequate for the tasks these professionals are charged with? Third, how have recent court cases changed the scope of the duty? We conclude by pointing to gaps in the empirical and conceptual scholarship surrounding the duty to warn or protect.