The three ways are: 1) pick using good judgment; 2) draw a random sample; or 3) use all relevant sales.

Editor’s Note: This is part 2 of the series ‘Three Ways to Pick Comps’.

When I became an appraiser, good judgment ruled. Commercial data was not available, and the MLS comp book came out every three months. Phone calls on listings and queries to agents were the way. We called this “confirming” a sale. “Trust me, I know a good comp when I see one!”

Subjective and convenient.  But it was the best method available in the realm of paper data.

As appraisal education changed, especially through the Appraisal Institute, the random sample was never suggested as a good way to select comps. Yet much of the (then new) “advanced education” curriculum taught inferential statistics, which assumes a random sample: p-values, t-tests, confidence intervals, hypothesis testing, standard errors, Type I and Type II errors, null and alternative hypotheses. All are ways of judging how well a random sample represents a population.

Clever, sophisticated, and “advanced.” Unfortunately, useless for appraisal work. If you have all the data, you just use it! No need to take samples, neither judgment samples nor random samples.

In appraisal, we should use all relevant sales.  Three reasons:

  1. USPAP requires it. “An appraiser must collect, verify, and analyze all information necessary” and “must analyze such comparable sales data as are available.”  [Emphases added].
  2. The analytic ideal. Comparable similarity is a core part of modern analysis (data science).  There is a trade-off between too little data and too much data, and in between lies an ideal, best-sized, relevant, comparable data set.  With too little data you get bias, analytic bias; with too much data you get variance, analytic noise.  Data scientists call this the bias-variance tradeoff.  (A brief sketch follows this list.)
  3. Adjustments can’t be objectively calculated or estimated from a subjective, biased, or uncertain data set. They cannot!  [Remember our Rule #1:  You can’t get objective results from subjective data.]

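To make the “ideal, best-sized data set” point concrete, here is a minimal simulation sketch. Everything in it is a hypothetical assumption made up for illustration (the -$5/sq.ft.-per-mile drift, the $20/sq.ft. sale noise, the `one_trial` helper), not a real market or a prescribed method. It estimates a subject’s value as the average of its k most similar (nearest) sales and compares a tiny comp set, a mid-sized relevant set, and indiscriminate use of every sale on file.

```python
import numpy as np

rng = np.random.default_rng(42)

# --- Hypothetical market, purely for illustration (not real data) ---
TRUE_VALUE = 300.0   # true $/sq.ft. at the subject property
TREND = -5.0         # assumed drift: -$5/sq.ft. per mile away from the subject
NOISE_SD = 20.0      # assumed sale-to-sale idiosyncratic noise, $/sq.ft.
N_SALES = 500        # size of the full sales file

def one_trial(k):
    """Estimate the subject's value as the mean price of its k nearest sales."""
    distance = rng.uniform(0, 10, N_SALES)                       # miles from subject
    price = TRUE_VALUE + TREND * distance + rng.normal(0, NOISE_SD, N_SALES)
    nearest = np.argsort(distance)[:k]                           # k most similar sales
    return price[nearest].mean()

for k in (3, 25, N_SALES):   # tiny comp set, mid-sized relevant set, "use everything"
    estimates = np.array([one_trial(k) for _ in range(2000)])
    typical_error = np.sqrt(np.mean((estimates - TRUE_VALUE) ** 2))
    print(f"k = {k:3d} comps -> typical error of about ${typical_error:4.1f}/sq.ft.")
```

With these assumed numbers, the tiny comp set is erratic, the use-everything set is systematically off, and the mid-sized relevant set comes closest. The exact figures do not matter; only the shape of the trade-off does.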
You ask:  “So how do I get ‘all necessary’ and ‘available’ data, as required by USPAP?  How do I get near that bias-versus-variance ideal?”

The answer lies in a basic principle that is also a major focus of data science.  The starting principle is called “reduction.”  Simply put, you take a larger problem, reduce it into smaller pieces, analyze the pieces, then synthesize (put them back together).  In the Community of Asset Analysts (CAA), we call these pieces the five dimensions of similarity, as follows:

  • Property type/rights
  • Transaction/contract
  • Time and price indexing
  • Spatial location elements
  • Preference characteristics

We call these “dimensions” because each has a different model/algorithm solution.  While AVMs (Automated Valuation Models) use some of these methods, the ideal valuation (and risk analysis) solution involves the refined judgment of an expert in the field: the appraiser.
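As a toy illustration of reduction, the sketch below walks a sales file through each dimension in turn and then synthesizes a simple indicator from what remains. Every field name, threshold, and the 0.4%-per-month trend is a hypothetical assumption made up for this example; it is not CAA methodology or a prescribed workflow.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Sale:
    prop_type: str       # 1. property type/rights
    arms_length: bool    # 2. transaction/contract
    sale_date: date      # 3. time (for price indexing)
    price: float
    miles_away: float    # 4. spatial location
    sqft: float          # 5. a preference characteristic (one of many)

MONTHLY_TREND = 0.004    # assumed 0.4%/month market trend, for illustration only

def reduce_to_relevant(sales, subject_sqft, as_of):
    """Reduce the sales file one dimension at a time, then synthesize."""
    step1 = [s for s in sales if s.prop_type == "SFR"]        # type/rights
    step2 = [s for s in step1 if s.arms_length]               # transaction/contract
    step3 = []                                                # time: index each price to the as-of date
    for s in step2:
        months = (as_of.year - s.sale_date.year) * 12 + (as_of.month - s.sale_date.month)
        step3.append((s, s.price * (1 + MONTHLY_TREND) ** months))
    step4 = [(s, p) for s, p in step3 if s.miles_away <= 1.0]                 # location
    step5 = [(s, p) for s, p in step4 if abs(s.sqft - subject_sqft) <= 400]   # preferences
    # Synthesis: here just a mean of the indexed prices; real work would model each dimension.
    return sum(p for _, p in step5) / len(step5) if step5 else None
```

In practice each dimension gets its own model or algorithm (a fitted market trend for time, a submarket or distance model for location, and so on) rather than the fixed cut-offs used here; the point is only the reduce-analyze-synthesize shape of the work.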

Appraisal is “believe me.”  Statistics is theoretical.  Data science explicitly recognizes the need for refined, informed judgment.  Quality, trained analyst judgment.

Let’s do science — systematic study through observation and experiment.