We’ve previously blogged about using algorithms for employee recruitment and about the effects of confirmation bias on decision making.
A recent article by Quentin Hardy on Bits, The New York Times technology blog, titled “Using Algorithms to Determine Character,” takes this to a new level.
He highlights a few companies that are using algorithms to judge our character. And these algorithms are proving significantly more accurate than most of us are when we rely on intuition or other methods that are often subject to selection bias.
- The lender Upstart serves borrowers who might otherwise be deemed “risky” because their credit histories are thin: recent graduates with no mortgages, car payments or credit card settlements.
- Another lender, ZestFinance, writes loans to subprime and “near prime” borrowers.
How do they do it? How do they justify taking risks that other financial institutions won’t?
Upstart looks at applicants’ SAT scores, the colleges they attended, their majors and their grade-point averages. Those with higher GPAs are more likely to repay their debt. One factor ZestFinance has found that predicts one subprime borrower will be riskier than another: having given up a prepaid wireless phone number at some point, which makes the borrower harder to find.
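To make the idea concrete, here is a minimal, purely illustrative sketch of this kind of feature-based credit scoring. It is not Upstart’s or ZestFinance’s actual model; the features, the made-up training data and the library choice (scikit-learn’s logistic regression) are all assumptions for demonstration.

```python
# Illustrative only -- not any lender's real model or data.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical applicant features: [GPA, SAT score / 100, gave_up_prepaid_phone]
X = np.array([
    [3.8, 14.2, 0],
    [2.1, 10.5, 1],
    [3.4, 12.8, 0],
    [2.5, 11.0, 1],
    [3.0, 12.0, 0],
    [2.2, 10.8, 1],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = repaid, 0 = defaulted (fabricated labels)

model = LogisticRegression()
model.fit(X, y)

# Score a new applicant who has little traditional credit history
applicant = np.array([[3.6, 13.5, 0]])
print(model.predict_proba(applicant)[0, 1])  # estimated probability of repayment
```

The point of the sketch is simply that non-traditional signals like a GPA or an abandoned phone number can enter a standard statistical model as ordinary features; the lenders’ real systems are, of course, far larger and proprietary.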
Other examples of algorithms going one better than human decision-making:
- Workday, a company offering cloud-based personnel software, predicts whether an employee is likely to quit and suggests incentives that are more likely to make that kind of person stay.
- A study by Stanford’s Jure Leskovec compared algorithmic predictions with judges’ decisions at bail hearings and found the data-driven analysis about 30 percent better at predicting crime (a toy version of this kind of model-versus-human comparison is sketched below).
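For flavor, here is a toy version of that kind of comparison: train a model on half of the cases, then measure its held-out accuracy against simulated human decisions on the same cases. Nothing here comes from the Stanford study; the features, outcomes and “human” baseline are all fabricated purely to show the shape of the evaluation.

```python
# Toy model-vs-human comparison -- NOT the Stanford study's data or method.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                       # hypothetical case features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)       # fabricated true outcomes

# Simulated human decisions: attend to one feature, with some noise
human = (X[:, 0] + rng.normal(scale=1.0, size=200) > 0).astype(int)

model = LogisticRegression().fit(X[:100], y[:100])  # train on the first half
pred = model.predict(X[100:])                       # predict on the held-out half

print("model accuracy:", accuracy_score(y[100:], pred))
print("human accuracy:", accuracy_score(y[100:], human[100:]))
```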
However, Hardy points out that algorithms are written by human beings, so “even if the facts aren’t biased, design can be, and we could end up with a flawed belief that math is always truth.”
Some notable quotes from the article:
“‘Character’ is a loaded term, but there is an important difference between ability to pay and willingness to pay.”
“Familiarity is a crude form of risk management, since we know what to expect. But that doesn’t make it fair.”
“Algorithms aren’t subjective. Bias comes from people.”