What Is Human-Centric AI?

As AI slowly evolves to better personalize services to us, we have to keep asking the tough questions. What really is human-centric AI and its design?

We are heading toward a bottleneck in AI’s potential to impact society, where its weaponization, the automation of jobs and the optimization of services are all likely to occur at the same time.

Human-centered AI learns from human input and collaboration, focusing on algorithms that exist within a larger, human-based system. It is defined by systems that continuously improve because of human input while providing an effective experience between human and machine.

Retail and the workplace of the future both talk about an AI-human hybrid model of business, but how human is such an environment?

In a world of algorithms impacting our social media decisions, deep fakes, and NLP agents writing as if they were human, is that a more human-centric world?

MIT defines human-centric AI as “the design, development, and deployment of (information) systems that learn from and collaborate with humans in a deep, meaningful way.”

Is Microsoft a human-centric AI company? I’m sure some employees at such companies believe they are. But looking at the evolution of Facebook, Google or Amazon, one has to wonder whether their AI has had a net positive impact on people at all.

A human-centric AI world should not just be about corporate profits. Stakeholder capitalism requires a paradigm change in how we monetize AI and the rules we place on it.

But a lot of the PR in the “AI for good” narrative is fake. So software developers and CEOs must continually ask: what is human-centric AI, really?

Is the future of business and technology so deeply intertwined that it leaves virtually no scope in the future for the vagaries of human intelligence and behavior? What responsibility do major companies have in making the world a better place? This starts to become a question of corporate social responsibility (CSR) at the intersection of AI.

From a business standpoint, human-centered AI solutions leverage human science and qualitatively thick data to understand the deeper needs, aspirations and drivers that underlie customer behaviors in your market.

We know AI will monetize new fields of health technology. But what will be the moral, spiritual and privacy costs? If DeepMind can predict my outcome as a patient, is that a more human-centric world where my care and experience will be more personalized?

How will humans actually work with AI in the future of society? AI’s purpose is to help humans, but without human input and understanding, it can only help so much. Taking a human-centric AI approach puts some of the computational heavy lifting on the shoulders of technology while still leveraging emotional and cognitive input from human beings.

Giving AI too much control or impact in our systems could be dangerous. Facebook as a manipulator of digital dopamine for an entire generation is a perfect example.

We want to work in companies with human-centric AI ideals but where are they? Not likely in Silicon Valley. Not likely in state-controlled China.

I would argue that a true human-centric AI ideology has yet to emerge, because human beings still don’t understand what they are creating with increasingly algorithmic, data-driven and AI-centric institutions, lifestyles and experiences.

Artificial intelligence—software that appears to mimic or exceed human reasoning, rather than simply automating repetitive tasks—is already reshaping business and society.

But morality and oversight are clearly not its strong points, as engineers and product people focus blindly on monetization without seeing the big picture. Three forces are converging at once:

  • Weaponization: new exploits to manipulate people and conduct warfare (e.g., cybersecurity attacks)
  • Corporate monetization: automation and the resulting loss of jobs
  • Human-centric AI

Without more female leaders in machine learning, AI and corporations generally, I’m afraid human-centric AI is being left behind. The “AI for good” campaign cannot succeed without a major shift toward equality in boardrooms, the C-suite and companies at large. By all accounts, Covid-19 has set us back ten or twenty years in this regard.

So what if human-centric AI is immature? What are the consequences to society? Many large AI projects fail to deliver, and the AI algorithms embedded in ubiquitous digital technology have been proven to encode societal biases, spread rumors and “fake news” disinformation, amplify echo chambers of public opinion, hijack our attention and impair our mental well-being. TikTok or Instagram can ensnare us, but then what?

According to Wired, the ability of AI applications to automate tasks associated with human tacit knowledge is rapidly progressing, but it may take a generation for humanity to figure out a better approach to human-centric AI.

What new problems will technology have created while solving convenience, basic needs and more efficiency in the illusion of the unending growth of global capitalism?

AI as the future virtual assistant is quickly approaching. Think of all the applications: facial recognition, sensing emotions, driving cars, interpreting spoken language, reading text, writing reports, grading student papers, even setting people up on dates. AI will know our mental health and our relationship preferences. Is that a more human-centric world? Think about it.

At the Last Futurist, we’re always thinking about the future of AI and its impact on society.

