Technology

Why the “Best Places to Live” Usually Aren’t

The fascinating flaws in the models that rank cities like Fishers, Indiana, at the top.

Photo illustration by Slate. Photo by iStock/Thinkstock.

The best place to live in America, according to Money’s 2017 ranking, is Fishers, Indiana (population: 90,000). The little-known Indianapolis suburb, we’re told, is not only “safe and quaint and full of young families,” it also boasts a growing economy and a farmers market that was recently voted “one of Indiana’s best.” Are you convinced yet?

Not so fast. According to 24/7 Wall Street’s rankings, Money got it wrong—albeit by only a few miles. That outlet crowned Fishers’ next-door neighbor Carmel, Indiana (population: 91,000), the best American city to live in, while Fishers rated nary a mention. But at least we can all agree that the Greater Indianapolis area is America’s garden spot, right?

As it turns out, we cannot. A ranking by Livability, developed in conjunction with New York University and famed urbanist Richard Florida, claims to be the most scientific of all—and it puts Rochester, Minnesota, at the top of the list. No place in Indiana cracks its top 50, though our old friend Fishers makes a cameo at No. 99. U.S. News’ top 50 is also notably Indiana-less. And as for Rochester, U.S. News & World Report doesn’t even rank it as the best Rochester in America: That honor goes to Rochester, New York, which checks in at No. 39 on its list. (Austin is its big winner.)

If you’re beginning to suspect that perhaps these city rankings are rather arbitrary, well, that would be one reasonable takeaway. But before you dismiss them altogether, it’s worth considering what we can learn from their discrepancies—not only about U.S. cities, but about ranking algorithms in general, and how they can mislead us about all kinds of other topics.

First, a point in defense of the city rankings. The wild variation from one “best cities” list to the next might seem like an indictment because it implies that each ranking reflects its peculiar methodology more than any real-world consensus. Yet it would actually be more suspicious if they all agreed with each other.

Take, for example, the broad agreement exhibited by popular magazine rankings of the best colleges in the United States. With few exceptions, they inevitably place such famous institutions as Harvard, Princeton, Stanford, Yale, and MIT near the top. This gives them a veneer of plausibility—which is exactly the point. As critics have long pointed out, rankings pioneer U.S. News knew its list had to reflect conventional wisdom to be taken seriously, so it developed a methodology that would achieve that. The publication’s rankings have since become a sort of self-fulfilling prophecy.

Let’s count it as a virtue, then, that city rankings tend to exhibit more diversity in the specific data they consider and the weights they assign them: It implies that the lists’ creators probably haven’t reverse-engineered their models to match a preconceived output. It also means that they often shine a light on places that would otherwise pass unknown or underappreciated. Rochester really does have a lot going for it!

That said, it should give readers pause that the same cities they find at the top of one magazine’s list are often nowhere to be found on another—even as they both purport to be prioritizing roughly the same traits. So why can’t our professional list-makers agree on what makes for the optimal city?

One factor is relatively boring, but bears mentioning anyway. It’s that even more than the college rankings, the city rankings tend to shuffle their criteria in significant ways on a yearly basis. Money, for instance, confined its 2017 list to cities with populations between 10,000 and 100,000, whereas in 2016 it considered cities ranging from 50,000 to 300,000. Livability set the upper bound for its 2017 ranking at 350,000, while U.S. News and 24/7 Wall Street imposed no upper bound but ruled out cities smaller than 50,000 and 65,000, respectively. These kinds of constraints might detract from the rankings’ comprehensiveness—and belie the unqualified superlatives in their titles—but there is some sense in them: As Malcolm Gladwell once pointed out in a New Yorker critique of college rankings, the more diverse the entities you’re trying to compare, the less meaningful your comparison will be.
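To get a feel for how much those eligibility rules alone shape the field, here is a minimal sketch in Python. The cutoffs are the ones reported above; the city populations are rough, illustrative figures rather than official counts.

    # Rough, illustrative populations (not official census figures).
    CITIES = {
        "Fishers, IN": 90_000,
        "Carmel, IN": 91_000,
        "Rochester, MN": 115_000,
        "Austin, TX": 950_000,
    }

    # (lower bound, upper bound) for each 2017 list, as described above.
    # None means the publication imposed no bound on that side.
    CUTOFFS = {
        "Money": (10_000, 100_000),
        "Livability": (None, 350_000),
        "U.S. News": (50_000, None),
        "24/7 Wall Street": (65_000, None),
    }

    def eligible(pop, bounds):
        lo, hi = bounds
        return (lo is None or pop >= lo) and (hi is None or pop <= hi)

    for outlet, bounds in CUTOFFS.items():
        pool = [city for city, pop in CITIES.items() if eligible(pop, bounds)]
        print(outlet, "->", pool)

Run that and Money’s pool contains neither Rochester nor Austin, the winners of two rival lists, while U.S. News and 24/7 Wall Street consider all four.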

Yet those population cutoffs hardly begin to explain the huge disparities in outcome from one magazine to the next. Fishers, for instance, qualified for all four of the lists mentioned above, but made the top 50 in just one of them. And here is where we come to the more interesting questions of methodology.

The four “best places to live” lists mentioned above—Money, 24/7 Wall Street, Livability, and U.S. News—all include metrics designed to represent cost of living, quality of life, and economic opportunity, among other criteria. But the thing to remember is that the data they’re feeding into their formulas is, in most cases, only a proxy for what they’re really trying to measure. (This is true of many forms of data.) And it isn’t always a good proxy.
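To make the mechanics concrete, here is a toy version of such a formula. It is not any outlet’s actual model, just a hypothetical weighted sum of already-normalized proxy scores, and it shows how the choice of weights alone can flip the final order.

    # A toy composite score: each metric is a proxy (e.g., a metro crime rate
    # standing in for "safety"), normalized to a 0-to-1 scale, then combined
    # with whatever weights the list-maker happens to choose.
    def composite(metrics, weights):
        return sum(weights[k] * metrics[k] for k in weights)

    # Hypothetical, already-normalized proxy values (higher is "better").
    cities = {
        "City A": {"cost": 0.9, "jobs": 0.5, "safety": 0.7},
        "City B": {"cost": 0.4, "jobs": 0.9, "safety": 0.8},
    }

    schemes = {
        "cost-heavy": {"cost": 0.6, "jobs": 0.2, "safety": 0.2},
        "jobs-heavy": {"cost": 0.2, "jobs": 0.6, "safety": 0.2},
    }

    for label, weights in schemes.items():
        ranked = sorted(cities, key=lambda c: composite(cities[c], weights), reverse=True)
        print(label, "->", ranked)  # cost-heavy favors City A; jobs-heavy favors City B

The underlying numbers never change between the two runs; only the list-maker’s priorities do.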

For instance, U.S. News uses a metropolitan area’s per-capita crime rate as a proxy for how safe its residents are. That sounds reasonable enough, until you consider that metropolitan areas have more or less arbitrary boundaries; that crime is not evenly distributed within them; and that the methodologies behind crime statistics vary from place to place. You might live in a perfectly safe neighborhood, yet your city might suffer in U.S. News’ rankings because of a high-crime pocket 20 miles away that happens to fall within the same metropolitan area. (The complexity of crime scores was one of several issues raised in a 2015 report by the Chicago Council on Global Affairs on how to interpret city rankings.)
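A quick back-of-the-envelope calculation shows how a single metro-wide rate can misrepresent the street-level picture. The neighborhoods and figures below are invented for illustration.

    # Invented neighborhoods within one metro area: (population, incidents per year).
    neighborhoods = {
        "Quiet suburb": (40_000, 40),        # 1 incident per 1,000 residents
        "Downtown core": (30_000, 60),       # 2 per 1,000
        "High-crime pocket": (10_000, 400),  # 40 per 1,000, 20 miles away
    }

    metro_pop = sum(pop for pop, _ in neighborhoods.values())
    metro_incidents = sum(inc for _, inc in neighborhoods.values())
    print("Metro-wide rate:", round(1_000 * metro_incidents / metro_pop, 1), "per 1,000")

    for name, (pop, inc) in neighborhoods.items():
        print(name + ":", round(1_000 * inc / pop, 1), "per 1,000")

The metro-wide figure a ranking would ingest works out to roughly 6 incidents per 1,000 residents, six times what people in the quiet suburb actually experience.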

Money, for its part, nods to consideration of ethnic diversity in its 2017 “Best Places to Live” list. Yet diversity doesn’t factor into the rankings in quite the way you might expect. In compiling the list of cities to evaluate, Money says it eliminated “any place that had … a lack of ethnic diversity.” In other words, it set an arbitrary threshold for what counts as “diverse.” But as long as a city cleared that undisclosed bar, Money didn’t seem to care whether it was extremely diverse, somewhat diverse, or just a tiny bit diverse. Hence the victory for Fishers, which, per the 2010 census, is 86 percent white.
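The difference between that kind of screening threshold and an actual weighted criterion is easy to see in a few lines of code. The diversity indices and the cutoff below are invented, since Money doesn’t disclose its bar.

    # Invented diversity indices on a 0-to-1 scale, and an invented cutoff.
    cities = {"Barely diverse": 0.21, "Somewhat diverse": 0.45, "Very diverse": 0.80}
    CUTOFF = 0.20

    # Screening threshold, as Money's approach is described above: pass/fail,
    # so every city that clears the bar is treated identically from here on.
    for name, index in cities.items():
        print(name, "passes" if index >= CUTOFF else "fails")

    # Weighted alternative: diversity contributes to the composite score in
    # proportion to how diverse the city actually is. (0.3 is an arbitrary weight.)
    scores = {name: round(0.3 * index, 3) for name, index in cities.items()}
    print(scores)

Under the screen, a city that just barely clears the bar is indistinguishable from one that is genuinely diverse, which is how an 86-percent-white suburb can win a list that claims to value diversity.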

Cultural amenities, which Money’s list and others also claim to consider, are notoriously hard to quantify. Money tries to do it by counting the “number of leisure activities in the town and surrounding area, including bars, restaurants, museums, sports complexes, and green spaces.” It’s nice to have three small museums nearby, sure—but is it better than having one great one? And does history matter to a city’s culture at all? Not according to this methodology: Fishers barely existed as recently as 1990, when it had just 7,500 people. It has since boomed to more than 10 times that size, thanks to a flood of new development.

Livability’s list makes perhaps the most earnest effort to weigh the things that real people—at least, some relatively affluent subset of real people—find appealing in cities. It developed its scoring system with the help of New York University’s Initiative for Creativity and Innovation in Cities, directed by urbanists Richard Florida and Steven Pedigo, and used data from the economic data consultancy Emsi. As such, it appears tilted toward the sorts of factors that might appeal to Florida’s “creative class”: a highly ranked school system, ethnic diversity, a “thriving arts scene,” and even the prevalence of farmers markets.

That sort of open bias can be a blessing in a list like this. All such models represent normative judgments, and it’s better for readers to know upfront what biases are encoded in them. Yet even the most thoughtful attempts to quantify something as subjective as “livability” are bound to end up quantifying something rather different. Often, if you look closely enough at a list and its attendant methodology, you can discern a crude pattern in the rankings that belies the complexity of the model that produced them.

For instance, Livability’s rankings weigh 40 different data points, yet they turn out to be thoroughly dominated by smallish cities that are home to disproportionately large colleges, hospitals, and/or tech-company headquarters (but especially colleges and hospitals). Fifth-ranked Charlottesville, Virginia, third-ranked Ann Arbor, Michigan, and second-ranked Iowa City are all best known for their major public universities; top-ranked Rochester is home to the famous Mayo Clinic, along with the University of Minnesota–Rochester and several other colleges. Seventh-ranked Palo Alto, California, and eighth-ranked Madison host Stanford University and the University of Wisconsin, respectively. Expedia and T-Mobile have headquarters in sixth-ranked Bellevue, Washington; ninth-ranked Overland Park, Kansas, is Sprint’s home base; and so on. All fine places to make one’s home, no doubt. Together, however, they embody a version of “livability” that has more to do with the kinds of data you can readily plug into a spreadsheet than the experience of living there.

An even more glaring pattern jumps out from Money’s 2017 rankings. Recall that Fishers, its overall winner, is essentially brand-new. Well, it isn’t the only one. Second-ranked Allen, Texas, has boomed from about 18,000 people in 1990 to nearly 100,000 today. Fourth-ranked Franklin, Tennessee, has gone from 20,000 to 75,000 in that same period. Fifth-ranked Olive Branch, Mississippi, grew from just 3,600 people in 1990 to nearly 10 times that size by 2010, earning a Bloomberg profile for being the “fastest-growing city in the U.S.” And so on. This is a list of boomtowns, not best places.

What you’re beginning to get a feel for when you find these commonalities is what the model is actually measuring, as opposed to what it purports to measure. Livability went looking for dynamic places to live and found a bunch of company towns where dynamic people happen to live (not always by choice). Money sought small cities with job growth, but its algorithm spat out a list of new-build suburbs and exurbs where jobs can’t help but grow (because there were none before). These lists, in the end, are best regarded as the outputs of various more-or-less arbitrary ways of slicing and dicing the specific types of data that tend to be available for the majority of cities.

Such exercises in number-crunching can have their value, provided one understands that this is what they are. Unfortunately, the editors who explain and frame these lists to readers usually go out of their way to pretend otherwise, penning jaunty ex post facto justifications for each city’s inclusion. These are often unintentionally funny: Did you know that Money’s No. 5 place to live, Olive Branch, is home to a popular bonsai nursery? Or that its residents “no longer have to leave the town’s borders now for basic services and shopping”? Back up the moving truck!

The blatant mismatch between the model’s output and reality is what makes these lists an instructive case study. They’re textbook examples of how ranking algorithms can end up producing outputs far afield from their purported goals. That’s crucial to understand in an age like ours, when weighted models of this kind underpin not only special issues of otherwise irrelevant newsmagazines, but things like credit scores and no-fly lists that directly affect people’s lives. (These weightier applications are the sort catalogued in Cathy O’Neil’s trenchant Weapons of Math Destruction.)

It’s a cruel irony that the methodologies behind patently silly city rankings are often more accessible to the public than those that go into the scores used to deny people loans or set their prison sentences. But then, it’s probably no accident that such models are kept opaque by the people who create and use them. Otherwise, we could all see for ourselves just how arbitrary they are.

Home page photo of Fishers, Indiana, by Scott Morris via Flickr/Creative Commons.