Artificial intelligence [AI] is the new and challenging frontier in the administration of workers’ compensation benefits. While deep learning and machine-driven decision making promise cost savings, serious ethical concerns are coming to the forefront as the technology evolves.
Deep machine learning is complicated, and it is an ambitious goal. The result may afford a good prediction, but in many instances it lacks an explanation of "why."
The potential to decrease costs in both the administration and payment of workers’ compensation benefits has been touted in software vendors’ advertisements. Vendors are already claiming reductions including: a 5% reduction in claims cost; a 50% decrease in the cost of medical-only claims; and a 25% to 60% reduction in attorney involvement.
AI programs transfer the decision-making role from a human being to a computer algorithm. In other words, the plight of the injured worker is left not to the judgment of an adjuster, but to a computer applying algorithms. Those algorithms can embed bias along racial, demographic, genetic, gender, economic, and/or religious lines. AI can be utilized to admit or deny claims, restrict temporary disability benefits, and direct medical care.
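A minimal sketch can illustrate how such bias arises without any protected characteristic appearing in the data. The model, feature names, ZIP codes, and rates below are entirely hypothetical, not drawn from any vendor's actual system: a scoring rule that never looks at race or income directly can still inherit bias through a proxy feature, here a ZIP code's historical denial rate.

```python
# Hypothetical illustration only: a claim-scoring rule with no protected
# attributes as inputs, where a proxy feature (ZIP code) carries forward
# whatever bias shaped earlier human denial decisions.

# Assumed historical denial rates by ZIP code, "learned" from past claims.
HISTORICAL_DENIAL_RATE = {"07001": 0.62, "07470": 0.18}

def denial_score(claim: dict) -> float:
    """Score a claim for denial; a higher score means more likely denied."""
    score = 0.0
    if claim["days_to_report"] > 7:  # late reporting raises the score
        score += 0.2
    # The proxy term: the claimant's ZIP code stands in for demographics.
    score += HISTORICAL_DENIAL_RATE.get(claim["zip"], 0.30)
    return round(score, 2)

# Two claims identical in every medical respect, differing only by ZIP code:
claim_a = {"days_to_report": 3, "zip": "07001"}
claim_b = {"days_to_report": 3, "zip": "07470"}
print(denial_score(claim_a))  # 0.62
print(denial_score(claim_b))  # 0.18
```

The point of the sketch is that removing explicit demographic fields does not remove bias when the training data reflects biased past decisions.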
Privacy continues to erode as vast amounts of data flow into machine learning programs. Some of the data is distributed without consent and without transparency. How much of this data is used by AI programs remains unknown; the process is unexplainable.
Computer-based learning systems have available vast amounts of data, from unknown sources, that form a gold mine of information. Insurance carriers and employers can use this data to reduce claims costs and ultimate payouts. The data grows daily from multiple sources, both within the workers’ compensation community and from collateral information resources.
The deployment of artificial intelligence programs that involve deep machine learning raises significant questions about the explainability of the decision-making process. Explainable artificial intelligence [XAI], including scrutiny of the algorithms employed in decision making, remains problematic. An overriding question is who is responsible for the potential harm, since an individual cannot sue a computer.
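The explainability gap described above can be sketched with a toy example. The feature names and weights below are illustrative assumptions, not any real claims model: a simple additive model can report how much each factor contributed to a score, whereas a black-box model returns only the final number, leaving no answer to "why."

```python
# Hypothetical sketch of explainability: an additive scoring model whose
# per-feature contributions can be itemized. Weights and features are
# invented for illustration, not taken from any actual system.

WEIGHTS = {"claim_age_days": 0.01, "prior_claims": 0.15, "attorney_involved": 0.30}

def score_with_explanation(features: dict):
    """Return the total score plus each feature's contribution to it."""
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    return sum(contributions.values()), contributions

total, why = score_with_explanation(
    {"claim_age_days": 30, "prior_claims": 2, "attorney_involved": 1}
)
print(round(total, 2))                    # 0.9
print(round(why["prior_claims"], 2))      # 0.3
```

An adjuster, regulator, or court reviewing the itemized contributions can at least ask whether each factor is a legitimate basis for the decision; a deep learning model offering only the total forecloses that inquiry.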
The ethical dilemma is that algorithms are difficult to regulate. The federal government has taken the lead on this challenging issue. The Defense Advanced Research Projects Agency [DARPA] assesses how the components of AI can be explained and applied in a responsible manner. The components include: how rich, complex, and subtle information is perceived; how the machine learns the information within an environment; how the information is abstracted to create new meanings; and how artificial intelligence can reason to plan and decide.
The integrity of workers’ compensation is being challenged by AI systems that lack explainability. The goal of employers and/or insurance companies in utilizing AI for cost-claim reduction is noble. But the playing field must remain balanced, and the right to “due process” in workers’ compensation programs needs to be preserved. Oversight through governance, policy, and rules concerning XAI should be utilized to maintain the integrity of workers’ compensation programs.
Jon L. Gelman of Wayne, NJ is the author of NJ Workers’ Compensation Law (West-Thomson-Reuters) and co-author of the national treatise, Modern Workers’ Compensation Law (West-Thomson-Reuters). For over four decades the Law Offices of Jon L. Gelman have been representing injured workers and their families who have suffered occupational accidents and illnesses.