Online valuation tools like the Zestimate get Cheverly wrong for reasons that are structural, not accidental -- shared zip codes, city names rooted in decades-old postal assignments, non-contiguous town parcels, and an 80-year-old housing stock where condition drives value more than size. Six tools recently produced a $112,000 spread on a single Cheverly home. And AI won't fix any of this: a smarter model trained on bad source data produces a more confidently wrong answer. The solution is a practitioner who knows which database to use, how to read it, and what the algorithm can't see.
Before almost every listing appointment, I already know what the seller has looked up. They've checked Zillow. Maybe Redfin. They have a number in their head -- specific, confident-looking, with a little accuracy meter and a green checkmark. For Cheverly, that number is frequently wrong by a margin that actually matters.
It's not a calibration issue. It's structural -- and it starts long before the algorithm runs. These tools are called AVMs: Automated Valuation Models. The name is more honest than the marketing. They're models. They automate. What they don't do is understand Cheverly.
The Geography Problem
Cheverly is an incorporated town in Prince George's County, with real municipal boundaries on file with the state. But online valuation tools don't use municipal boundaries. They use postal data: address, city name, zip code. And Cheverly's postal geography is a mess.
1. Two zip codes, both shared

Cheverly straddles 20784 and 20785 -- but both zip codes also include surrounding communities that sell at different prices. When an AVM pulls "comparable sales in 20785," it's mixing Cheverly with Landover, Seat Pleasant, and others. The comparison pool is polluted from the first step -- there's a short sketch of that pollution right after this list.

2. Non-contiguous parcels

Cheverly has three or four pieces of town land that don't physically connect to the main body. An AVM can't know those parcels belong to the same market -- or that a property just outside the town line belongs to a different one.

3. Three city names, one neighborhood

Cheverly's zip codes have been served by different post offices at different times over the decades. The city name that ended up in the tax record depended on which post office was assigned at the moment the record was created -- which is why the same town shows up as Cheverly, Landover, and Hyattsville in the public record. Agents who pull from tax data inherit whatever name was current when that record was filed, sometimes 30 or 40 years ago. An AVM working from the same records has the same problem -- and no way to know that all three names refer to the same market.
"The AVM isn't doing a neighborhood analysis. It's doing a zip code smear with cosmetic labeling on top."
The Housing Stock Problem
Even if an AVM got the geography right, it would still face Cheverly's toughest valuation problem: an 80-year-old housing stock where condition matters more than size. Two 1,100 sq ft Cape Cods on the same block -- one fully renovated, one untouched since 1962 -- can sell $100,000 apart. An AVM sees two similarly sized homes, averages them, and produces a number that's wrong for both. It isn't accounting for condition; it's papering over it.
My own analysis of every Cheverly closed sale -- rated by condition from distressed to fully renovated -- shows that by 2025, renovating no longer reliably produced a higher sale price. The market got harder to read. AVMs did not.
What these tools actually see: square footage, bedroom count, age of home, the zip code average -- and little else. They can't see condition, can't tell town boundaries from postal boundaries, and can't sort out decades of inconsistent city-name entries in their source data.
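Reduced to a toy example, the blindness looks like this. The two records below are made up, but they mirror the Cape Cod scenario above: identical on every field an AVM can ingest, roughly $100,000 apart in what they would actually sell for.

```python
# Two invented records that are identical on every field a typical AVM sees.
renovated = {"sqft": 1100, "beds": 3, "year_built": 1945, "zip": "20785"}
untouched = {"sqft": 1100, "beds": 3, "year_built": 1945, "zip": "20785"}

def toy_avm(home, zip_avg_price_per_sqft=340):
    """A deliberately crude stand-in for an AVM: size times a zip-level average."""
    return home["sqft"] * zip_avg_price_per_sqft

print(toy_avm(renovated))   # 374000
print(toy_avm(untouched))   # 374000 -- the same number, even when real sale prices sit $100k apart
```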
Why Don't They Just Use Municipal Boundaries?
The maps exist. Maryland maintains them. The answer comes down to cost, complexity, and incentives.
Using actual town boundaries would mean matching every property record to a boundary map and keeping that match current. For a town like Cheverly with disconnected parcels, it would mean recognizing that land sitting a quarter mile from the main body is still part of the same market. That's a real technical investment -- and Zillow invests where its revenue is. If their Cheverly estimates are off by $40,000, that error is too small to show up in their national accuracy numbers. The fix costs money. The mistake costs them nothing.
There's also a simpler possibility: they may not know Cheverly is a town. If half their Cheverly sales say "Landover" in the city field, the model may have no idea Cheverly is a distinct market at all.
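For what it's worth, the technical piece itself isn't exotic. Here's a rough sketch, using the shapely geometry library and placeholder coordinates -- not Cheverly's real boundary -- of the kind of check this would take: does a geocoded sale fall inside the municipal boundary, including the disconnected parcels? The code is the easy part. Sourcing the boundary file, geocoding every record, and keeping both current is the investment.

```python
from shapely.geometry import Point, Polygon, MultiPolygon

# Placeholder geometry -- NOT Cheverly's actual boundary. A real implementation
# would load the municipal boundary Maryland publishes, which for a town with
# disconnected parcels comes out as a multi-part polygon like this one.
main_body = Polygon([(0, 0), (0, 10), (10, 10), (10, 0)])
detached_parcel = Polygon([(14, 2), (14, 4), (16, 4), (16, 2)])
town_boundary = MultiPolygon([main_body, detached_parcel])

def in_town(lon, lat):
    """True if the geocoded point falls inside any piece of the town."""
    return town_boundary.contains(Point(lon, lat))

print(in_town(5, 5))    # True  -- main body
print(in_town(15, 3))   # True  -- disconnected parcel, still the same market
print(in_town(12, 3))   # False -- between the pieces, outside the town line
```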
Six Tools, One House, $112,000 of Disagreement
Here's what this looks like in practice: six tools, one Cheverly home I was recently evaluating.
That $112,000 spread isn't a rounding error. Zillow and Redfin land low -- almost certainly pulling sales from the surrounding zip code rather than the town. Realtor.com comes in $100,000 higher. Homes.com is the only one that shows a range instead of a single number -- which is at least honest about the uncertainty.
If these tools were actually measuring something real about this property, they wouldn't disagree by $112,000. The spread is the argument.
The Deeper Issue: The Number Isn't Trying to Be Right
The Zestimate isn't a valuation tool that happens to have accuracy problems. It's a tool designed to keep you on the website -- one that happens to show a number. The number's job is to make you curious enough to keep clicking, which sells ads and captures leads.
A seller who thinks "that seems low, let me look around" is doing exactly what the product is designed to make them do. A seller who closes the tab is a problem for the platform.
"You're asking: what is my house worth? The site is asking: how do we keep this person on our platform? Those are different questions with different answers."
Look at the design choices: a single dollar figure, decimal-point precision, a confidence meter, a green checkmark. Those are marketing decisions, not mathematical ones. A range with an honest margin of error might be more accurate -- but it would also send more people elsewhere to find out what their home is actually worth.
"Won't AI Eventually Fix This?"
AI doesn't fix bad source data. It scales it. A smarter model trained on 30 years of Cheverly sales filed as "Landover" produces a more confident wrong answer, faster. The data problem exists before the algorithm ever runs. Better math can't recover a town boundary that was never in the records.
"Garbage in, garbage out -- just with a fancier interface and a higher confidence score."
The real solution is working from a database that actually captures town listings -- not postal-area averages -- and knowing how to use it. That means checking city fields against actual town boundaries. Reading listing history, not just sale prices. Knowing that a sale filed as "Landover" might be your best Cheverly comparison, and that one filed as "Cheverly" might sit outside the town line. That's local knowledge. No algorithm can substitute for what isn't in the data to begin with.
What to Do With This Information
If you've looked up your Cheverly home on one of these sites, I'm not saying ignore the number. I'm saying understand what it is: an estimate built from a hodge-podge of records by a system that doesn't know Cheverly's boundaries, can't see the condition of your home, and has no particular reason to get it right.
If you're thinking about selling, one question is worth asking any agent before you sign: do you know that Bright MLS defaults to the postal/tax city for this address, and do you know to change it to "Cheverly"? The answer tells you something.
Pricing a Cheverly home accurately means knowing which sales to use and which to set aside -- by town boundary, not zip code. It means adjusting for condition, not just square footage. It means knowing each comparable's full history. And it means a listing that says "Cheverly," so the buyers who are specifically looking for this town can find you.
That's the work I do. If you want a number that reflects the actual market -- I'm here.

