AI Usage in Appraisals: Trust but Verify
by Jo Traut, McKissock Learning
Artificial intelligence (AI) has arrived, and it’s brought some genuinely impressive capabilities across various sectors, including real estate appraisal. From analyzing comparable sales and assessing property condition and quality from photographs to generating market trend analyses, AI tools are transforming workflow efficiency in ways that would have seemed like science fiction just a few years ago. Today, some appraisers are already leveraging AI to enhance their practice, which is not only acceptable but rapidly becoming essential for maintaining productivity.
The truth is, AI serves as a tool rather than a professional peer. It lacks certification, is not required to adhere to USPAP, and it won’t be available to clarify your approach to a review appraiser or stand before a state board to justify your work.
The question isn’t whether to embrace AI technology. The question is how to deploy it responsibly by adhering to USPAP and regulations while safeguarding your professional accountability standards.
Understanding AI’s Limitations
AI is like a top student with an eidetic memory but lacking practical judgment. AI can absorb massive amounts of data, compile property statistics, and generate polished text at an impressive speed. However, when faced with a real appraisal challenge that needs market insight, context-based reasoning, or the ability to discern why one comparison stands out over another, it may struggle.
I’ve seen AI fixate on gross living area, even though lakefront footage is the primary value driver in that market. I’ve watched it cite Wikipedia as an authoritative source or present opinion editorials as peer-reviewed research—delivered with a confident, unquestioning tone. I’ve seen it confuse mass appraisal methodologies with single-property valuation, treating fundamentally different approaches as interchangeable.
AI knows facts like that student with the photographic memory, but it may miss the deeper meaning. It may lack practical market experience to judge what is truly realistic and sometimes can’t tell the difference between a trustworthy source and a random blog post. In appraisal work, understanding the “why” behind the data and knowing which facts matter is everything.
The Prompting Paradox
When you point out these potential flaws, AI proponents have ready responses: your prompts weren’t specific enough, you didn’t ask for source citations, you need to train the model better, or you forgot to tell it to “think like a market participant.”
Fair enough. Prompting technique does matter, and AI is inherently iterative, so you can refine outputs through multiple rounds of feedback, adjusting your prompts to get closer to what you need. Ask it to reconsider, provide more details, or approach the problem differently, and it will generate new responses.
But here’s the Catch-22: effective iteration requires you to recognize when the output is wrong or incomplete. If you don’t already know the correct answer or understand the underlying principles well enough to spot problems, how would you know what to refine? How would you recognize that the polished, confident response is potentially flawed? You wouldn’t. You’d accept it as accurate because it sounds authoritative and reads well, ending the iteration before it ever addresses the real problem.
The fundamental issue isn’t solely about perfecting your prompts. It’s about understanding that every professional tool has inherent limitations, and it’s your responsibility to know what those limitations are before you rely on that tool. You wouldn’t use a laser measuring device in bright sunlight without understanding its reduced accuracy. You wouldn’t trust a measuring wheel on steep, rocky terrain without verification. So why would you trust AI-generated content without the same professional skepticism?
The Credibility Trap
If I asked AI about King Henry VIII’s wife Jane Seymour and it started describing the actress who starred in “Live and Let Die” (1973), you’d catch that error immediately. Wrong Jane Seymour, wrong century, and absurdly wrong context.
But AI doesn’t give you such obvious red flags in appraisal work. It presents information about cap rates, adjustment factors, and market conditions in the same confident, authoritative tone, regardless of whether the underlying data is solid or weak. A questionable adjustment may sound just as professional as a well-supported one.
That polished output creates a dangerous psychology. It looks credible, so we assume it is credible. We skip the verification steps we’d never skip with a human assistant’s work. We don’t question the sources. We don’t test the logic. We copy, paste, and move on, and that’s exactly where the compliance risk lives.
Let’s talk about the elephant in the room: using AI in appraisal work comes with real responsibilities. You can’t just plug numbers into some black box algorithm, get the output, and call it a day.
USPAP Compliance
First off, Standards Rule 1-1(b) makes it clear that you can’t commit a substantial error that significantly impacts your appraisal. In addition, Advisory Opinion 37 (AO-37), “Computer Assisted Valuation Tools,” offers valuable guidance on this topic. Keep in mind that Advisory Opinions are not officially part of USPAP, but they provide practical examples of how USPAP applies in certain scenarios and offer recommendations for addressing appraisal issues and challenges.
In summary, AO-37 states that when you’re using computer-assisted tools, whether regression analysis software or an AI program, you need to understand what the tool is doing. You don’t need to recreate the algorithm or reproduce the data, but you do need to understand the overall process and the selection parameters being used, if applicable.
If you can’t explain the reasoning behind AI’s output to a client or peer reviewer, or if you can’t provide independent support for that conclusion, you shouldn’t be using it. AI technology should enhance the credibility of your work, not undermine it.
Practical Steps for Verifying Output
You don’t need to recreate everything AI generates, but you do need a verification strategy. Here is one practical approach to verify AI output:
1. Use Deep Research Mode When Available
Many AI platforms now offer “deep research” or “research mode” features that go beyond standard responses. Instead of relying solely on training data, these tools actively search the web, gather information from multiple sources, and provide citations.
Deep research typically provides clickable sources, publication dates, and more transparent methodology. You can see where the information originated and verify it directly. Check the sources it provides and confirm that they’re credible and current.
2. Require Sources in Your Prompts
Don’t wait to wonder where the information came from. Include source requirements in your initial prompt. If AI gives you data without sources, ask directly: “What sources did you use for this cap rate?” or “Where did this adjustment factor come from?” If it can’t provide specific, verifiable sources, the information isn’t usable.
3. Ask AI to Show Its Work
Just like your math teacher required you to show your calculations, you can (and should) require AI to do the same. Prompt it to “Show your step-by-step calculations for this adjustment” or “Show me how you calculated the depreciation amount.”
When AI must show its work, you can more easily spot errors in the math or the logic and identify any missing steps. Seeing the work not only helps you verify accuracy but also lets you judge whether the approach itself is sound.
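To illustrate what “showing the work” looks like for the depreciation example, here is a minimal sketch of the standard age-life method. All numbers are hypothetical; the point is that each intermediate step should be visible so you can check it against your own inspection data.

```python
# Sketch of the age-life depreciation math an AI tool should be able to show
# step by step. The figures below are illustrative, not from a real appraisal.

cost_new = 400_000       # replacement cost new of the improvements
effective_age = 12       # years, based on your inspection observations
economic_life = 60       # total economic life, years

depreciation_pct = effective_age / economic_life   # ratio of life used up
depreciation = cost_new * depreciation_pct         # dollar depreciation
depreciated_value = cost_new - depreciation        # contributory value remaining

print(f"Depreciation: {depreciation_pct:.0%} = ${depreciation:,.0f}")
print(f"Depreciated value of improvements: ${depreciated_value:,.0f}")
```

If the AI’s stated depreciation doesn’t reproduce from its own inputs this way, that’s your signal to dig deeper before relying on the figure.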
4. Test Against Your Market Knowledge and Experience
You know your market. Does the AI output pass the smell test? Does the market area characterization match what you’ve observed? Are the trend statements consistent with your recent analysis? Does the condition and quality assessment align with your inspection observations and match the applicable definitions? If something feels off, it probably is. Your professional judgment isn’t optional just because AI suggested something different.
5. Verify Key Facts Independently
Don’t trust AI for critical data points. For instance, you should independently confirm relevant property details of comparable sales and transaction information by using your MLS, public records, parties involved, or other trusted sources.
6. Check for Internal Consistency
Does the AI-generated narrative match the data? If AI says “the market is improving,” do the comparable sales you use support that? Does the reconciliation align with the weight given to each approach, or to the comparable sales within the sales comparison approach? Inconsistencies may signal that AI is generating content without understanding context.
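One way to make this consistency check concrete: compare a narrative claim like “the market is improving” against the price trend in your own comparables. The sketch below uses entirely hypothetical sale data and a simple least-squares slope; it is not a market analysis method, just an illustration of testing narrative against data.

```python
# Minimal sketch: does an AI claim of "improving market" agree with the
# trend in your own comparable sales? Data below is illustrative only.
from statistics import mean

# (months ago converted to a timeline index, sale price) — hypothetical comps
comps = [(1, 310_000), (3, 318_000), (5, 322_000), (8, 335_000), (11, 341_000)]

def price_slope(sales):
    """Average price change per month via a simple least-squares slope."""
    mx = mean(m for m, _ in sales)
    my = mean(p for _, p in sales)
    num = sum((m - mx) * (p - my) for m, p in sales)
    den = sum((m - mx) ** 2 for m, _ in sales)
    return num / den

slope = price_slope(comps)
ai_claims_improving = True                      # what the AI narrative asserts
consistent = (slope > 0) == ai_claims_improving # does the data agree?

print(f"Trend: about ${slope:,.0f}/month; consistent with claim: {consistent}")
```

If the slope and the narrative disagree, either the comps or the narrative needs rework before the report goes out.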
The Reality Check
Ask yourself whether you could defend this report to a review appraiser or in front of a state board. Could you explain where the data came from and articulate the reasoning behind the conclusions? If your answer is “the AI tool told me so,” you haven’t verified enough.
The goal here isn’t to eliminate AI from your appraisal workflow or return to purely manual processes. AI can genuinely enhance productivity and help you deliver better service to clients. The goal is to integrate these tools strategically while maintaining the professional accountability that makes your work credible. AI is your assistant, not your replacement. You’re still the appraiser. You’re still the one signing the certification.
Verification doesn’t mean recreating everything from scratch. It means applying professional judgment to AI output, which is something you do quickly because you know your market. Spot-checking calculations takes minutes, not hours. Testing output against your market knowledge is nearly instantaneous. Asking AI to show its work or cite sources adds seconds to your workflow, not days.
AI handles the time-consuming grunt work of compiling data, drafting descriptions, and formatting narratives. You handle what you do best: professional judgment, market expertise, and quality control. That combination is incredibly powerful. You’re producing higher quality reports in less time while maintaining the professional standards that protect your license and reputation. Used responsibly, AI isn’t a compliance risk. It’s a competitive advantage.
Sharpen your appraisal skills and keep up with the latest regulatory changes with continuing education courses from McKissock Learning. Gain access to all our CE courses, including the latest national USPAP class and our premier suite of URAR training courses—all for one discounted price—when you become a McKissock CE Member.
About the Author
Jo Traut is the Director of Appraisal Course & Curriculum at McKissock, and a Certified Residential Appraiser licensed in Illinois and Wisconsin. As an appraiser since 1997, Jo specializes in appraising luxury homes, valuations for lending, appraisal review, and collateral compliance. She authors and teaches appraisal courses designed to simplify complex topics with practical, real world insights. Jo holds the CDEI designation and is an AQB Certified USPAP Instructor. Previously, Jo served as Residential Chief Appraiser for the fifth largest bank in the United States.




