
Elior Cohen’s “Housing the Homeless” Study (2024) Requires Some Important Caveats
Access to homelessness data is a researcher’s white whale. Local “Continuums of Care” (the regional networks that run homeless services) keep Homeless Management Information Systems (HMIS) that log every shelter night, motel voucher, or case-management visit. But once someone signs a lease (through a Housing First program), those systems usually stop tracking—so we only see people again if they re-enter the homeless system. That’s why Cohen’s recent study, “Housing the Homeless,” is notable: by linking the City of Los Angeles’ intake records to HMIS and other county files, he follows thousands of single adults from their first contact into housing and then observes whether they come back. It’s a powerful window—and it also means that a reported “return” is specifically a return to the homeless system. It does not reflect every episode of housing instability. Because of this, the report requires some important caveats.
Cohen reports significant improvements when people get into housing quickly. Being placed in a housing program within a month lowers the chance of coming back to the homeless system by about 21 percentage points over 10 months and 26 points over 20 months—roughly a 70 percent and 61 percent drop relative to the baseline rates in the data. He also finds fewer law-enforcement records: about a 6–7 percentage-point reduction in having any booking/charge/probation record in both the 10- and 20-month windows. Because the average rate of these events is small in the full sample, the paper describes this as a 75–85 percent reduction (and even “100%” in the short run), but the more policy-relevant comparison for the group actually moved by the design (the “compliers”) is a 20–30 percent drop. That’s still meaningful, just less sensational. Finally, on health-care use, the estimates are imprecise and hover near zero (slightly positive at 10 months and slightly negative at 20 months), so there’s no clear health effect in these windows.
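To make that arithmetic concrete, here is a minimal sketch of why the same percentage-point effect can be presented as either a dramatic or a modest percentage reduction. The baseline rates below are illustrative assumptions chosen to fall within the paper's reported ranges, not its exact figures:

```python
# Minimal arithmetic sketch: the same percentage-point effect reads very
# differently depending on which baseline rate it is divided by.
# Baseline values are illustrative assumptions, not the paper's exact figures.

def relative_drop(pp_effect: float, baseline: float) -> float:
    """Relative reduction implied by an absolute percentage-point effect."""
    return pp_effect / baseline

pp_effect = 0.065  # ~6.5 percentage-point reduction in any justice record

# Dividing by a small full-sample mean yields a dramatic headline number...
print(f"vs. ~8% full-sample mean:   {relative_drop(pp_effect, 0.08):.0%} reduction")
# ...while dividing by a higher complier baseline yields a more modest one.
print(f"vs. ~25% complier baseline: {relative_drop(pp_effect, 0.25):.0%} reduction")
```

Both statements describe the same 6.5-point effect; only the denominator changes.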
These results are encouraging, but they don’t license blanket claims that “Housing First” works for all forms of homelessness or that it reduces crime across the entire homeless population. They describe what happened for single adults in Los Angeles whose placement hinged on which caseworker they met, over 10–20 months, with outcomes defined as re-entry to the homeless system and recorded justice contacts. That’s still important data—but narrower than headline policy claims.
Before asking what the results mean for policy, we should be clear about what the study can—and cannot—see in the data.
HMIS is a service log, not a population register. It records who touches the homeless-services system and which services they receive, but it has significant limitations. Coverage varies across providers, many fields are self-reported, and once someone finds housing outside the system, the record usually goes quiet. Cohen’s linkage greatly improves what most studies can do, yet the paper could do more to foreground these limits because they shape the two headline outcomes. “Return to homelessness” in this setup means re-entering the homeless-services system (not every episode of housing instability, couch-surfing, or informal eviction), and only within L.A. (a move into a neighboring county’s system goes unobserved). As for the “criminal record” data, the outcome reflects recorded justice contacts (bookings/charges/probation) rather than all offending or victimization, and it can move with changes in enforcement exposure as much as with behavior. Read that way, the estimates are best interpreted as reductions in recorded system touchpoints—not a direct measure of all homelessness or all crime—and the paper should say so plainly.
The data do not capture every “return to homelessness,” only every “return to the homeless-services system.” That’s a useful policy metric, since it tells researchers whether people need to come back for help, but it is not a census of instability. It can miss quiet losses of housing (couch-surfing, car camping), exits into providers outside HMIS, or outreach that never gets logged. Two practical fixes would make it easier to read the findings correctly. First, label the outcome as “system re-entry” throughout and show its components (new intake vs. shelter vs. outreach) so readers see what’s driving the change. Second, add either (a) a simple bounding exercise for plausible undercount (e.g., “if x percent of unstable episodes go unobserved, the effect ranges from … to …”) or (b) triangulation with independent stability signals: program exits to permanent addresses, sustained lease length, or any county administrative address anchors. With those clarifications, the result stands as strong evidence of reduced demand on the homeless system; whether it also reflects a reduction in true homelessness remains an open (and testable) question.
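To illustrate what the bounding exercise in (a) might look like, here is a minimal sketch under an assumed worst-case undercount model. The rates are illustrative (roughly consistent with a 21-point effect off a 30 percent baseline), not the paper's estimates:

```python
# A minimal bounding sketch with illustrative rates, not the paper's estimates.

observed_control = 0.30   # observed system re-entry if not housed quickly
observed_treated = 0.09   # observed system re-entry if housed quickly

# Worst case for the finding: every unobserved unstable episode sits in the
# treated group (housed people lose housing quietly and never re-enter HMIS).
for undercount in (0.00, 0.05, 0.10):
    adjusted_effect = observed_control - (observed_treated + undercount)
    print(f"undercount {undercount:.0%}: effect shrinks from "
          f"{(observed_control - observed_treated) * 100:.0f} to "
          f"{adjusted_effect * 100:.0f} percentage points")
```

Even under a fairly pessimistic 10 percent undercount, the headline effect shrinks but does not vanish; that is the kind of statement the paper could make directly.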
The crime result also requires caveats. The study measures “law-enforcement contact” using two county sources: Sheriff’s Department records (charges and jail bookings) and probation records (supervision and violations). That’s a meaningful slice of the justice system—it captures serious events that reach county systems—but it does not include LAPD incident/arrest data, stops, citations, or 911 calls that never lead to a county booking. Because a great deal of unsheltered homelessness is in the City of Los Angeles, readers need to know how many LAPD arrests flow into the county jail data and how much city-level contact never shows up here. Two simple clarifications would right-size the interpretation: (i) decompose the “any law-enforcement record” outcome into its components (bookings, charges, probation) and show event-time plots with pre-period checks; and (ii) report coverage tests—e.g., confirm similar effects in areas where the Sheriff is the primary agency, or document the share of LAPD arrests that appear as county bookings (if that’s available). Framed this way, the paper shows fewer serious justice touchpoints; whether overall police contact fell is left for future work.
The study narrows to non-veteran, single adults whose first recorded contact with L.A.’s homeless-services system occurred in 2016–2017 and for whom the data allow 10 or 20 months of follow-up. It also excludes small sites (those that had months with only one caseworker working) and very low-volume caseworkers to build a stable instrument. Those choices are sensible for the design, but they tilt the sample toward larger providers and toward clients who engage early enough to be followed for the full window. Two additions would make the sample restrictions easier to assess: (1) show how much of the original intake universe is dropped at each step and whether excluded people look different on observables (a simple CONSORT-style flow plus a one-row table of differences), and (2) surface the appendix checks that relax these filters (include veterans, older adults, multiple intakes, single-caseworker months) so readers can see that the main results don’t hinge on the tightest sample.
Still, researchers must be careful comparing the 10- and 20-month results: they’re not estimates on the same people. The 20-month window necessarily drops later entrants (anyone who can’t be followed that long), so its 7,584 cases are an earlier cohort than the 15,990 used for the 10-month window. If earlier entrants differ in mix (needs, providers, neighborhoods) or if caseworker assignment patterns shifted over time, the “medium-run” number reflects composition as well as duration. The clean takeaway is that the two windows describe two overlapping but different groups.
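A small sketch makes the mechanics concrete. The data-end date below is an assumption chosen to mimic a 2016–2017 intake window; the point is only that the longer window mechanically keeps an earlier cohort:

```python
import pandas as pd

# Minimal sketch of the follow-up mechanics (dates are assumptions).
# Suppose intakes span 2016-2017 and the linked data end in mid-2018.
intakes = pd.Series(pd.date_range("2016-01-01", "2017-12-31", freq="MS"))
data_end = pd.Timestamp("2018-08-31")

months_observable = (data_end - intakes).dt.days // 30
in_10m_sample = months_observable >= 10   # keeps most 2016-2017 entrants
in_20m_sample = months_observable >= 20   # keeps only the earlier entrants

print(f"10-month window keeps intakes through {intakes[in_10m_sample].max():%Y-%m}")
print(f"20-month window keeps intakes through {intakes[in_20m_sample].max():%Y-%m}")
```

Under these assumed dates, the 20-month sample contains only clients who entered in roughly the first year, which is exactly why its composition can differ from the 10-month sample's.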
One more scope note: “treatment” here is defined as being placed in housing within 30 days of intake. So, the estimates tell researchers about the benefit of getting housed fast for people whose odds of fast placement were pushed up by the caseworker they saw. They do not speak to slower placements or to clients who never secure a unit. Because delays can come from document readiness, unit availability, or provider capacity—not just caseworker effort—this timing cutoff is important for proper interpretation. The effect here concerns rapid placement, but Cohen does relax the cut-off and finds similar results.
The next piece to consider is the study’s method. Cohen uses an instrumental-variables approach that leans on a simple idea: some caseworkers are more prone to place clients into housing quickly than others. Because clients are effectively assigned to caseworkers, that difference in “placement tendency” functions like a natural push—almost like a coin flip—that shifts who gets housed. The design then asks: among otherwise similar clients, how much does being placed (thanks to that push) change later outcomes? This lets the study speak causally about the effect of placement for the people whose chances were swayed by the caseworker they saw.
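For readers who want the mechanics, here is a stylized sketch of this examiner-style design on simulated data. The leave-one-out instrument construction is standard for such designs, but the variable names, data, and estimator below are hypothetical, not Cohen's actual code:

```python
import numpy as np
import pandas as pd

# Stylized examiner-style ("caseworker leniency") IV on simulated data.
# df is a hypothetical client-level table:
#   caseworker : assigned caseworker id
#   housed     : 1 if placed in housing within 30 days of intake
#   returned   : 1 if the client later re-entered the homeless system
rng = np.random.default_rng(0)
n = 5_000
df = pd.DataFrame({"caseworker": rng.integers(0, 50, n)})
leniency = rng.uniform(0.2, 0.6, 50)[df["caseworker"]]
df["housed"] = (rng.uniform(size=n) < leniency).astype(int)
# True simulated effect of fast placement on returns: -0.20.
df["returned"] = (rng.uniform(size=n) < 0.30 - 0.20 * df["housed"]).astype(int)

# Leave-one-out leniency: each client's instrument is their caseworker's
# placement rate computed over that caseworker's *other* clients.
grp = df.groupby("caseworker")["housed"]
df["z"] = (grp.transform("sum") - df["housed"]) / (grp.transform("count") - 1)

# IV estimate as the ratio of reduced form to first stage (Wald/2SLS logic).
z, d, y = df["z"], df["housed"], df["returned"]
first_stage = np.cov(z, d)[0, 1] / np.var(z, ddof=1)   # leniency -> placement
reduced_form = np.cov(z, y)[0, 1] / np.var(z, ddof=1)  # leniency -> returns
print(f"IV estimate (complier effect of fast placement): {reduced_form / first_stage:.3f}")
```

The estimate recovers the simulated -0.20 effect for clients whose placement depended on the caseworker draw, which is exactly the "complier" population the next paragraph describes.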
By design, the study estimates effects for people whose chances of getting housed quickly were nudged by which caseworker they happened to meet—the “compliers.” It does not tell researchers about (1) clients who would have been housed quickly no matter who saw them, or (2) clients who would not be housed quickly under any caseworker because of refusal, documentation hurdles, unit scarcity, or program fit. That scope isn’t a flaw; it’s how this method works. But it does mean the paper can’t claim that Housing First works for everyone. The clean way forward is to label the target group throughout (“rapid-placement compliers”), briefly profile who they are, and state plainly that the results don’t speak to people who decline housing or can’t be placed under current capacity.
The design’s strongest assumption (strongest in the sense of hardest to accept; in econometric terms, the exclusion restriction) is that the only way caseworker assignment changes outcomes is by changing whether (and how quickly) someone gets housed. Cohen is transparent that caseworkers can also shape program type, time in housing, and access to other services, and that the data do not track follow-up interactions or full benefit uptake (e.g., Medicaid/SNAP). All of these limits force that assumption to do real work.
He partially addresses this with an “augmented IV” that adds caseworkers’ tendencies to place clients into emergency or non-housing services; the estimates look similar, which is reassuring but cannot rule out unobserved post-placement help. Given that federal rules expect case managers to arrange services and connect clients to mainstream benefits, it is plausible that caseworker assignment moves outcomes through non-housing channels, too. Two simple additions would right-size the claim: (1) show “placebo-instrument” results using non-housing placement propensities (these should not move the main outcomes if housing is the channel), and (2) test an outcome that housing should not affect but benefit linkage might, to rule out benefit uptake driving the results (or bound the effect under reasonable assumptions about unobserved uptake).
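Continuing the simulated example above, a placebo-instrument check of the kind suggested in (1) might look like the following sketch. In real data the question is whether this coefficient stays near zero; here it does by construction, since the simulated service tendency is unrelated to outcomes:

```python
# Placebo-instrument sketch, continuing the hypothetical simulation above.
# Give each caseworker a tendency to refer clients to NON-housing services;
# in this simulation it is unrelated to outcomes by construction.
service_tendency = rng.uniform(0.1, 0.5, 50)[df["caseworker"]]
df["referred"] = (rng.uniform(size=n) < service_tendency).astype(int)
grp_s = df.groupby("caseworker")["referred"]
df["z_services"] = (grp_s.transform("sum") - df["referred"]) / (grp_s.transform("count") - 1)

# Reduced form: if housing placement is the only channel, the non-housing
# propensity should not predict returns (coefficient near zero here).
zs = df["z_services"]
slope = np.cov(zs, df["returned"])[0, 1] / np.var(zs, ddof=1)
print(f"placebo reduced-form slope: {slope:.4f}")
```

If the analogous coefficient in the real data were meaningfully different from zero, that would be direct evidence that caseworkers move outcomes through channels other than housing.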
What should policymakers and advocates take from this? The study offers credible evidence that getting system-engaged single adults housed quickly reduces later returns to the homeless system and lowers serious justice-system touchpoints over the following 10–20 months. That’s valuable—and actionable for efforts that speed up placements (document readiness, landlord engagement, unit search, case-manager bandwidth). But the estimates do not show that “Housing First works for everyone,” they don’t capture all housing instability, and they say little about people who decline services or can’t be placed under current capacity. Health and benefit-uptake effects are unclear in these data.
The right reading is this: evidence for rapid-placement benefits exists in this context; it is not a universal verdict. The next step is targeted scaling with better measurement—link HMIS to health and city-police data, track lease stability beyond system re-entry, and embed simple prospective tests (e.g., processing-time pilots) so we learn what works for whom and why. Let me add a brief note on the replication package Cohen provides, since reproducing major studies is crucial given the social sciences’ replication crisis. The code repository is helpful, but the underlying microdata sit in UCLA’s California Policy Lab enclave (the Homelessness Research Accelerator) and are not publicly shareable. American Economic Association journals allow restricted data, but they expect precise documentation, clear access instructions, and materials sufficient for independent reproduction of the published tables. In practice, access to L.A. HMIS requires approvals and can be slow, so the paper should enable computational reproducibility now—even without raw HMIS—through a fuller package. Specifically, this paper should ship synthetic data, complete construction code, and aggregation files that exactly rebuild the published tables, so independent researchers can audit the analysis today rather than be asked to negotiate their own Memorandum of Understanding with the data sources.
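As a sketch of what the synthetic-data piece could look like, the snippet below fabricates an HMIS-shaped table so the construction and table code can run end to end. Every column name and value here is invented for illustration; it mirrors no real records:

```python
import numpy as np
import pandas as pd

# Minimal sketch of a synthetic-data stub for a replication package.
# Schema and values are invented; they mirror no real HMIS records.
rng = np.random.default_rng(42)
n = 1_000
synthetic = pd.DataFrame({
    "client_id": np.arange(n),
    "intake_date": pd.to_datetime("2016-01-01")
                   + pd.to_timedelta(rng.integers(0, 730, n), unit="D"),
    "caseworker_id": rng.integers(0, 50, n),
    "housed_within_30d": rng.integers(0, 2, n),
    "returned_10m": rng.integers(0, 2, n),
})
synthetic.to_csv("synthetic_hmis.csv", index=False)
# The package's construction and aggregation code can then run against this
# file, letting auditors verify the pipeline without the restricted microdata.
```

A stub like this will not reproduce the paper's numbers, but it proves the code runs, documents the expected schema, and lets reviewers check the logic while access requests are pending.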
