Editor’s Note: This is Part 1 of a new series on Bias.

Easy – Just don’t be biased.  Don’t even think about it!

Alternatively, we can have the government regulate and require it!

To know what ‘unbiasedness’ is, and how to achieve it, we have to be able to measure it.  We cannot enforce anything unless we can tell when bias approaches zero.

To intelligently consider the options, we need to know what bias is.  As we have discussed here before, there are two kinds:  1)  personal/human bias; and 2) analytic bias (data or algorithm).

I first learned the word “unbiased” in my econometrics and statistics classes.  I had not heard the word used in the world of prejudice or unacceptable discrimination.

Unbiased is clearly applicable to the practice of valuation.  Whatever the goal, or problem to be solved, the models and algorithms should lead to that result without bias.  In the world of statistics, econometrics, and psychology, we can reduce this analysis further by asking the question: how do we avoid bias?  In valuation, and for other analytic purposes, this can be salami-sliced into six parts:  accuracy, precision, validity, sufficiency, appropriateness, and reliability.

  • Reliability. This combines accuracy, precision, and validity – plus consistency across varying situations.  Yet reliability itself has to be measured in human terms.  Is the result ‘sufficient’ for practical use in decision-making?  We need to add this into our methodological soup.
  • Accuracy. A measure of how close a single measurement is to the true or accepted value.  The problem here is: how do you know the ‘true’ value to compare against?  Hmmm?
  • Precision. How close are repeated measurements to each other?  No problem here; the deviations can be quantified by measures of variation, such as the standard deviation (sd) – see the sketch just after this list.
  • Validity. Does the study measure what it is intended to measure?  The study model or data cannot answer that question.  It takes a human.
  • Sufficiency. Is the result sufficiently accurate and precise (and valid)?  This is also a judgment call.  If a human is making a decision, that human must evaluate all of the above.
  • Appropriateness. This is the hardest of all!  How do we know that the right question was asked in the first place, even if we have great validity?  Does the human know what question to ask?  Did we pick the right human?  And is that human question-biased?
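
To make the accuracy/precision distinction concrete, here is a minimal sketch in Python.  The “true” value and the repeated estimates below are hypothetical numbers for illustration only – and, as noted above, in practice the true value is exactly the thing we rarely know.  Bias measures how far the center of repeated estimates sits from that true value; the standard deviation measures how tightly the estimates cluster.

```python
# Minimal sketch: accuracy (bias) vs. precision (spread), standard library only.
# The "true" value and the repeated estimates are hypothetical.
import statistics

true_value = 500_000                       # assumed known here; the hard part in practice
estimates = [488_000, 501_000, 494_000, 506_000, 491_000]  # repeated value estimates

mean_estimate = statistics.mean(estimates)
bias = mean_estimate - true_value          # accuracy: distance of the center from "true"
sd = statistics.stdev(estimates)           # precision: how tightly the estimates cluster

print(f"mean estimate: {mean_estimate:,.0f}")
print(f"bias (accuracy): {bias:+,.0f}")
print(f"standard deviation (precision): {sd:,.0f}")
```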
Watch George Dell and Peter Christensen discuss How to Avoid Being Accused of Bias – ways to avoid accusations of bias in appraisal, a replay of one of George’s Things You Need to Know webinars from 2021.

Wow!  I had no idea that being right is so hard!

In valuation, analytic bias can come from two primary types of error.  One is data selection bias, and the other is algorithmic selection bias.

Data selection involves both the subject property and the comparable data.  An error in subject measurement or categorization results in a model bias of exactly the amount of the error.  An error in any one ‘comparable’ is moderated (or averaged out) by the other comparables.  If there are three competitive sales in the data set, a $100,000 error in one of them results in a direct bias of about $33,000.  If ten competitive sales are included, the expected bias is only about $10,000!

And if that $100,000 error in a single comp is random (an isolated slip rather than a systematic one), then the probable overall error across ten competitive sales is dramatically less, perhaps around $2,000 – $3,000, because the random errors in the other comps tend to partially offset it.  Better still, the errant sale is now a visible data outlier, easily identified, wrangled, and fixed by the expert modeler – the appraiser.
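
To see the dilution in numbers, here is a minimal Python sketch.  It assumes the simplest possible case – an estimate that is just the average of the comparable sale prices, with no adjustments (the post does not prescribe any particular adjustment model) – and all figures are hypothetical.

```python
# Sketch: how one bad comp's error is diluted by the others, assuming the
# estimate is a simple average of comparable sale prices. Hypothetical figures.
import statistics

def bias_from_one_bad_comp(n_comps, true_price=500_000, error=100_000):
    """Estimate = simple average of n comps; one comp carries the full error."""
    comps = [true_price] * n_comps
    comps[0] += error                              # one comp is off by the full amount
    return statistics.mean(comps) - true_price     # resulting bias in the estimate

print(bias_from_one_bad_comp(3))    # ~33,333: the error is diluted across three sales
print(bias_from_one_bad_comp(10))   # 10,000: diluted further across ten sales

# With more comps the bad record also becomes easier to spot. A crude screen:
comps = [495_000, 510_000, 600_000, 498_000, 505_000, 492_000,
         507_000, 501_000, 499_000, 503_000]            # one suspect sale
med = statistics.median(comps)
mad = statistics.median(abs(c - med) for c in comps)    # robust measure of spread
outliers = [c for c in comps if abs(c - med) > 5 * mad]
print(outliers)                                         # [600000] flagged for review
```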

Lesson #1, then, is that (analytically) more data is better than less data (usually).  The science of data and Evidence Based Valuation (EBV)© emphasize the use of the right data in the right amount.

More unbiased bias issues coming in future blogs.  Stay tuned.