John Denslinger observes that as AI moves into the realms of policing, hiring, promotion and social services, the question of censorship and restriction is next to be addressed.
Thirty years ago, the first speech recognition product powered by algorithms came to market. Designed and produced by Dragon Systems, it was named Dragon Dictate. State-of-the-art at the time, this single-function offering may have ushered in the age of AI. It demonstrated that a machine, using algorithms, could convert sound into an electronic signal and back into intelligible sound again. It was an astonishing breakthrough in technology. It would be twenty years before Apple introduced Siri, the now famous digital personal assistant for iPhones and other Apple devices. In 2014, Amazon released Echo, a voice-controlled digital personal assistant for home environments featuring Alexa. Apple and Amazon still dominate the voice-activated digital assistant space today.
The technology behind voice-based algorithms is but one facet of the AI evolution, yet nearly every AI solution since has incorporated some form of voice capability. Language translation and speech understanding became obvious extensions of the original concept. Virtual assistants and smart robots designed to interface with, communicate with and protect humans emerged next. With the march towards smart homes and smart cities, the need for AI will only grow.
In late 2018, McKinsey &amp; Company offered one of the best definitions of AI: the ability of a machine to perform cognitive functions associated with human minds, such as perceiving, reasoning and learning. The thought of machines mimicking human activity seemed distant then, but rapid advances in computational accelerators, memory, storage and networks have made AI ubiquitous now.
New markets for AI are everywhere. Cybersecurity seeks solutions to antivirus and antimalware intrusions. Data banks require secure access and fool-proof facial recognition systems that satisfy privacy concerns. Medical imaging and surgical tools need deep learning models and computer vision for life-critical decisions. Defense systems depend on smart drones and similar mechanisms for precision deployments and battlefield advantage. But when assessing all the potential applications, it’s connected and autonomous vehicles that require the most advanced AI of all: instantaneous decisions relying on vast amounts of data gathered from connected devices, image recognition systems, deep learning and neural networks capable of unsupervised learning, unstructured data and predictive analysis. Since the safety of individuals is at significant risk, these systems must perform flawlessly under any operating condition.
Hi-tech applications aside, AI has long been deployed in the world of social media, advertising, financial services, insurance and more. The experience has been mixed, often raising questions of fairness and bias. Perhaps that explains a recent call for regulation of AI, specifically ‘the algorithms’. In fact, three cities (New York, London and Barcelona) are setting rules for AI use, citing best practices:
1: Fairness and non-discrimination
2: Transparency and openness
3: Safety and cybersecurity
4: Privacy protection
The main focus seems to be traffic management, policing, hiring, promotion and other social services within the cities’ current domain of responsibility. Not unexpectedly, facial recognition was the first technology to be censored, severely restricted or, in some cases, completely banned.
It’s difficult to argue against these best practices and their potential benefit to society. Of course, the big question is: who decides? Cities are political and bureaucratic entities, hardly the ideal algorithmic decision makers. That might be acceptable for internal practices, but complex technologies like smart cities and autonomous vehicles will dominate future daily life. Will the cities’ regulatory scope snare the hi-tech realm as well? Knowing governments, it’s hard to fathom otherwise.