Insurers must test AI for bias, New York State says

Insurers must verify that their use of consumer data and artificial intelligence isn’t discriminatory, according to a state Department of Financial Services memo dated July 11, and will be held responsible if external tools produce biased underwriting or pricing decisions.

Insurers shouldn’t incorporate external data or AI systems unless a comprehensive assessment has shown their use is not “unfairly discriminating between similarly situated individuals,” the department wrote in the memo, released last week. If the assessment shows a disproportionate impact on a demographic group, insurers must try to find a less biased alternative, the document says. The circular letter — a government document outlining regulatory expectations — applies to the roughly 1,960 companies supervised by DFS, including life, property and casualty insurers, as well as some health insurers.

DFS Superintendent Adrienne Harris said the guidance will ensure “that the implementation of AI in insurance does not perpetuate or amplify systemic biases that have resulted in unlawful or unfair discrimination, while safeguarding the stability of the marketplace.”

While traditional analysis methods can also be discriminatory, the risks are exacerbated when firms deploy opaque formulas to analyze enormous amounts of data and automate decisions, the consumer rights group Consumer Reports said. AI may be fed partial, inaccurate or unrepresentative data — or information shaped by defunct, biased practices.

For instance, life insurers have acknowledged considering customers’ body mass index — a ratio of weight to height — but the American Medical Association has advised against relying solely on BMI because it was developed almost exclusively using data from white patients. Insurers may now need to re-evaluate how they factor in BMI, said Chuck Bell, advocacy programs director at Consumer Reports.

“If you’re using a data variable and there are questions about whether it has a discriminatory impact, it’s important for the insurance company to consider … and to have appropriately tested and vetted the algorithm,” Bell said.

The DFS vision seems to be appropriately adaptable, said Eric Linzer, president and CEO of the New York Health Plan Association, which represents health insurers. AI is not commonly used in underwriting and pricing health plans, but has been used to assess claims for fraud and identify gaps in care, Linzer said.

“We certainly support a flexible framework that allows plans to develop appropriate tools … that seek to augment, not replace, human decision making and expertise,” Linzer said. “AI has the potential to make the health care system work better and cost less.”
