How badly do humans misjudge AIs?


We study how humans form expectations about the performance of artificial intelligence (AI) and consequences for AI adoption. Our main hypothesis is that people project human-relevant problem features onto AI. People then over-infer from AI failures on human-easy tasks, and from AI successes on human-difficult tasks. Lab experiments provide strong evidence for projection of human difficulty onto AI, predictably distorting subjects’ expectations. Resulting adoption can be sub-optimal, as failing human-easy tasks need not imply poor overall performance in the case of AI. A field experiment with an AI giving parenting advice shows evidence for projection of human textual similarity. Users strongly infer from answers that are equally uninformative but less humanly-similar to expected answers, significantly reducing trust and engagement. Results suggest AI “anthropomorphism” can backfire by increasing projection and de-aligning human expectations and AI performance.

That is from a new paper by Raphael Raux, a job market candidate from Harvard, co-authored with Bnaya Dreyfuss.

The post How badly do humans misjudge AIs? appeared first on Marginal REVOLUTION.

