How do you measure “close enough” to a market value?  We have standards, you know!

Editor’s Note: This is Standards, part 3.10 of George Dell’s series on How Do I Move to EBV? Links to the earlier posts are here.

Valuation is numbers.  Some numbers are direct measurements, like building SqFt.  Other numbers are exact, like Booleans (0 or 1, true or false).  Most other numbers are approximate, like appraised values.

So how do appraisals measure “closeness,” or “certainty”?  Nothing is certain.  Why, even the common definitions of value include the words “most probable,” recognizing the uncertainty, the variation, the deviation of possible answers.

We have prices for comps, and measures of features.  Yet we still have uncertainty.  How close is close enough?

So, who cares?  Appraisers deliver a probable point value.  Supported by, er …  well, support.

But how certain is the probable value?  Is it for sure, or is it a really good guess?  Perhaps we could measure the certainty!  That would be nice.  So how can we measure it?

One way is by explaining it.  Legacy appraisal covers this quite well.  It recognizes that the “three approaches” will vary.  They may vary a lot, or a little.  One approach may be better, so it should be given more weight.  Easy.  We call this “reconciliation.”  We explain in words, in rhetoric, why the three approaches disagree.  By a little or by a lot.  “Justify and explain,” I was taught.
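To make the idea concrete, here is a minimal Python sketch of reconciliation as an explicit weighted average.  The approach values and weights are hypothetical, invented purely for illustration; the point is only that the weighting, and the spread among the approaches, can be stated as numbers rather than buried in rhetoric.

```python
# A minimal sketch of reconciliation as a weighted average.
# All values and weights are hypothetical, for illustration only.

approaches = {
    "sales_comparison": 412_000,
    "cost": 431_000,
    "income": 398_000,
}

# Weights reflect how much reliance is placed on each approach (they sum to 1).
weights = {
    "sales_comparison": 0.6,
    "cost": 0.1,
    "income": 0.3,
}

reconciled = sum(approaches[k] * weights[k] for k in approaches)
spread = max(approaches.values()) - min(approaches.values())

print(f"Reconciled value: ${reconciled:,.0f}")
print(f"Spread across approaches: ${spread:,.0f}")
```

The weighting itself is easy arithmetic.  The harder question, and the subject of this post, is how sure we are about the result.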

Even if there is great discrepancy (deviation), we can arrive at a ‘most probable’ number.  Hmmm.  Probability is defined as how likely something is to happen.  Perhaps we can pick one of these (a rough sketch of how to put numbers behind the labels follows the list):

  • Very sure;
  • Pretty sure;
  • Not so sure.
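One way to take the mush out of those labels is to tie each one to a number, such as the relative spread (coefficient of variation) of the candidate values.  The cutoffs and values below are hypothetical, a sketch of the idea rather than any published standard.

```python
import statistics

def sureness_label(values, very_sure=0.03, pretty_sure=0.08):
    """Map the relative spread of candidate values to a rough sureness label.
    The cutoffs are hypothetical, not a published standard."""
    cv = statistics.stdev(values) / statistics.mean(values)
    if cv <= very_sure:
        return "Very sure", cv
    if cv <= pretty_sure:
        return "Pretty sure", cv
    return "Not so sure", cv

# Hypothetical value indications from three approaches (or three models).
label, cv = sureness_label([412_000, 431_000, 398_000])
print(f"{label} (relative spread: {cv:.1%})")
```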

OK, this is better.  Most appraisals I have seen dance around sureness.  It’s best to avoid really definitive words like the above.  Avoid any ‘confession’ that your result is not so sure.  (And the client assumes you just didn’t do a good enough job!)  Better to use vague weasel-words or hide behind giving different weight to each of the three anointed approaches.  “Close enough.”  (Worse yet, you may be judged to be in violation of USPAP, as being not “worthy of belief”.)

We need to do better.  We do want to be as certain as possible.  How do we do that?  In order to improve this level of certainty, we must first measure it!  So how do we measure uncertainty?

There are only two ways.  For the first way, we must know the ‘true’ value to compare against.  Perhaps an actual sale price?  Alternatively, I am told we can measure the reliability of the process itself.  For valuation, that process comprises these parts (a sketch of the first way follows the list):

  • Appropriateness: Is the right question asked, and what are the assumptions?
  • Data: Is the process of reducing the data to the best, most complete information ideal?
  • Prediction: Is one of the three predictive algorithms applied?
  • Delivery: Is the delivered data stream reproducible?
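The first way lends itself to plain arithmetic.  Here is a hedged sketch: hypothetical appraised values paired with later sale prices, from which we can compute both the average error (trueness) and the scatter of the errors (sureness).  All numbers are invented for illustration.

```python
import statistics

# Hypothetical pairs of (appraised value, subsequent sale price).
pairs = [
    (405_000, 412_000),
    (388_000, 380_000),
    (512_000, 525_000),
    (299_000, 305_000),
    (450_000, 447_000),
]

# Percentage error of each appraisal, measured against the observed sale price.
errors = [(appraised - sold) / sold for appraised, sold in pairs]

bias = statistics.mean(errors)      # trueness: systematic over- or under-valuation
scatter = statistics.stdev(errors)  # sureness: how widely the errors vary

print(f"Mean error (bias): {bias:+.1%}")
print(f"Std. dev. of errors (precision): {scatter:.1%}")
```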

In every other known numerical discipline or scientific field, measurable sureness (precision) and analytical unbiasedness (trueness) are as important as, or more important than, the actual result.  Who cares what your opinion is if it might be biased, or just way off?
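To see why, consider two hypothetical valuation processes, both scored against later sale prices.  The error figures are invented: one process is unbiased on average but scattered; the other runs slightly high but is tight.  A single point value from either one reveals none of this.

```python
import statistics

# Hypothetical percentage errors (appraisal vs. later sale price) for two processes.
process_a = [-0.15, 0.18, -0.12, 0.14, -0.05]  # unbiased on average, but scattered
process_b = [0.02, 0.03, 0.01, 0.02, 0.02]     # runs a little high, but tightly grouped

for name, errs in (("A", process_a), ("B", process_b)):
    print(f"Process {name}: bias {statistics.mean(errs):+.1%}, "
          f"precision (std. dev.) {statistics.stdev(errs):.1%}")
```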

The future of the profession rests on attention to sureness as well as trueness.  We must become comfortable with uncertainty.  We must reflect on it, measure it, and report it.