John Denslinger explores the growing role of artificial intelligence in business environments and offers an opinion on the recently published Blueprint for an AI Bill of Rights
AI is largely desired for its data analytics, but it generally arrives with embedded algorithms, particularly in customer, employment and productivity software packages, often unbeknownst to procurement teams. Likewise, large enterprise platforms updated by outside vendors may introduce additional or modified AI functions, likely enhanced by cloud-based machine-learning systems. There can be no doubt: AI is becoming ubiquitous as a super-automated, decision-making tool. Coupled with machine learning, AI’s overall capability is nothing less than scary powerful.
Algorithms are generally thought to be benign, but biases have surfaced in consumer- and employment-related areas. Just the perception of bias or privacy invasion could cripple a company and its brand. The distinction between benign and biased might be small, but an unintended error in judgment could be enormously consequential. Software recipients, beware. Executives may find staying on top of algorithmic unknowns an endless task.
So, what constitutes responsible use of AI in business? Simple question, right? Perhaps the basic assumption is that every company already has executive-level governance with well-defined operational checks and controls in place. Dare I say, most companies have yet to reach this level of awareness, let alone operational competence.
One of the first publicly voiced ethics concerns came in 2015 via an open letter on artificial intelligence signed by Stephen Hawking, Elon Musk and other notable professionals. It weighed short- and long-term benefits against the unintended consequences of misapplied AI. In the time since, talk of responsible AI has become commonplace, but as adoption penetrates more deeply into daily society, AI touches on some sensitive areas. Questions of bias, discrimination and inequity have appeared in both business and political circles.
Local, federal and international bodies have weighed in, proposing rules and, in some cases, passing laws regulating the use of artificial intelligence. At the federal level, the FTC, NSF, EEOC and, more recently, Congress have looked to strengthen AI safeguards on civil rights, privacy, consumer deception and discrimination. Not to be outdone, the White House just issued a major document on 4 October called Blueprint for an AI Bill of Rights. This policy paper identifies five tenets: (1) safe and effective automated systems; (2) algorithmic discrimination protections; (3) data privacy; (4) notice and explanation of how AI is used; and (5) opt-out alternatives.
The Blueprint for an AI Bill of Rights has two problems:
• It’s a non-binding white paper aimed at how the federal government and its agencies acquire and deploy AI technology
• It intentionally avoids regulating the tech companies that power machine learning and AI deployment
Developers remain free to create algorithms that best suit their business intentions, provided they give due consideration to current privacy and discrimination laws. One point is remarkably interesting: the government’s opt-out protection requires a human alternative as the remedy. If this tenet were forced on business, it might be costly. Nevertheless, the signal is clear: big-brother oversight is coming, and with it more unknowns for executives to oversee.
Companies need data. It’s data that fuels prosperous ones, and AI delivers extraordinary results. But ethics matter more. It’s ethics that sustain the mission, whether your company is a buyer, developer or provider of AI technology. Executive governance is an absolute must. Organizations need well-defined operational checks and controls to avoid even hints of bias, discrimination and inequity.
It would appear that, for now, AI’s unknowns outweigh its knowns.