At the Last Futurist, we’ve been thinking a lot since 2021 about the regulation of AI in this wild-west era of data collection, privacy invasion and surveillance capitalism, where companies like Google and Amazon, and states like China, are rewriting the rules. Do human rights matter on an internet and in a world increasingly shaped by AI, and will they in the future? That gravely remains to be seen.
The European Commission recently published a proposal for a regulation on artificial intelligence (AI). This is the first document of its kind to attempt to tame the multi-tentacled beast that is artificial intelligence. However, there’s no global consensus on how to regulate AI, or on what legal framework and independent regulatory jurisdiction such a body could have.
Big corporations in America and China don’t really care what the EU believes about the rules surrounding the development of artificial intelligence. China’s entire framework of surveillance isn’t just about commercial interests, but about citizen control. Should a social credit system even be legal, or is it a violation of human rights? China lives in a parallel universe of AI ethics, on its own internet.
The European Union does not have jurisdiction over the United Nations, which is governed by international law. In some ways the EU and UN are international bodies with less global impact than they once had. Meanwhile, artificial intelligence technologies are increasingly used by the United Nations itself.
Think about it: United Nations agencies have used biometric identification to manage humanitarian logistics and refugee claims.
There appears to be a lack of coherent global leadership on the regulation of AI and the ethics around artificial intelligence. Given the current landscape of AI research and development, it’s most likely the “winner” of AI will be the one to regulate it. In the 2020s, this increasingly looks like it will be China.
But can the EU, the UN and the rest of the world trust China to regulate something it itself appears to be abusing? Can an authoritarian state regulate something as vast and ubiquitous as artificial intelligence? If so, what biases will it build in to preserve its own supremacy, leaving its ‘police state’ surveillance in the domain unchallenged?
In parallel, the United Nations has partnered with private companies that provide analytical services. A notable example is the World Food Programme, which in 2019 signed a contract worth US$45 million with Palantir, an American firm specializing in data collection and artificial intelligence modelling.
The UN thus appears more interested in harnessing the powers of AI than in helping to regulate it. It’s still unclear how the EU’s new rules could spur global regulation of artificial intelligence.
As new companies involved in big-data analytics and other aspects of AI come into being, how do the global community and organizations like the UN and EU integrate or attempt to regulate them? In 2014, United States Immigration and Customs Enforcement (ICE) awarded a multimillion-dollar contract to Palantir to track undocumented immigrants in the U.S., especially family members of children who had crossed the border alone. It appears beyond the scope of global bodies to regulate the commercial interests of private or even public companies.
This is very problematic because there’s no legal regulation of how these huge organizations use AI in their activities. For instance, several human rights watchdogs, including Amnesty International, have raised concerns about Palantir over human rights violations. Like most AI initiatives developed in recent years, this work has happened largely without regulatory oversight.
There have been many attempts to set up ethical modes of operation, such as the Office for the Co-ordination of Humanitarian Affairs’ Peer Review Framework, which sets out a method for overseeing the technical development and implementation of AI models.
The UN does not appear to have any intention of helping to regulate AI or curb its human rights abuses. The EU can come up with broad rules, but how does it expect to enforce them outside its jurisdiction? In the absence of regulation, tools such as these, without legal backing, are merely best practices with no means of enforcement.
Big Tech firms have AI ethics councils that appear to be mostly PR gestures, and in the case of Google they have clearly failed on a repeated basis. Microsoft has some fluff around AI for good, but has returned to the monopolistic practices of its antitrust days in its renewed push for digital transformation.
In the European Commission’s AI regulation proposal, developers of high-risk systems must go through an authorization process before going to market, just like a new drug or car. They are required to put together a detailed package before the AI is available for use, including a description of the models and data used, along with an explanation of how accuracy, privacy and discriminatory impacts will be addressed.
Is that the future of AI development and deployment? Will China and America even agree to such a process? If they do not, does the EU then become simply the last bastion of privacy rights and freedom online while the rest of the world moves closer to dystopia and an apartheid under AI control? These are the moral challenges and questions of our times, living in the wild west of artificial intelligence development and deployment.
The UN does not appear to have strong leadership on the matter. During the pandemic, an organization like the WHO did not even appear objective or highly trustworthy. The AI applications in question include biometric identification, categorization, and evaluation of people’s eligibility for public assistance benefits and services.
They may also be used to dispatch emergency first-response services; all of these are current uses of AI by the United Nations. Is AI simply a tool with no rules for how best to utilize it? Will we continue to live in a world with no oversight, legal frameworks or ethical guidelines for practices involving AI? Amid all the hype around AI, we forget that, at heart, tools, machines and machine intelligence must benefit humanity directly.
Perhaps the UN is an outdated institution that has become corrupt. Conversely, the lack of regulation at the United Nations can be seen as a challenge for agencies seeking to adopt more effective and novel technologies. Trust in AI is difficult to earn, particularly in United Nations work, which is highly political and affects very vulnerable populations. Organizations like the UN and the WHO appear vulnerable to coercion, bribery, manipulation and corruption. This is a pity given how important AI is becoming at these organizations.
We have to imagine that eventually AI will be the overseer and regulator, since human beings appear incapable of doing this themselves. A regulatory framework like the one proposed by the European Commission would take the pressure off data scientists in the humanitarian sector to individually justify their activities.
Instead, agencies or research labs who wanted to develop an AI solution would work within a regulated system with built-in accountability. This would produce more effective, safer and more just applications and uses of AI technology.
It’s hard to trust a world or an internet where machine learning, algorithms and AI are not properly regulated. It’s hard to trust a government or institution that uses AI without the rule of law. Humanity will have to decide how important it is to regulate AI, or face dire consequences. This is the future sin we have been talking about since the inception of the Last Futurist.