With automation and artificial intelligence (AI) taking the limelight in recent technological developments, is there still a way to unite the building of smart machines with morality? For proponents of human-centered AI, it might just be possible to ensure that building intelligent machines goes hand in hand with building democracy, human rights, and trust. How can this be possible?

AI as a concept has been surrounded by quite a number of theories and speculations – and, being subjugated by machine overlords aside, the discussion of AI's impact on the enjoyment of human rights and democratic governance is extremely important.

Such were the points discussed by Eileen Donahoe, executive director of Stanford's Global Digital Policy Incubator (GDPi), during a one-day conference. She began by explaining that humanity has reached an inflection point in the discussion of how society will see AI's impact on it.

It’s important to understand that mankind has slowly shifted to a data-driven society, where many dimensions of our lives have become intertwined with AI-based applications. AI now influences much of humanity’s interaction with information, with government, with technology, and even with one another.

Perhaps it’s worth dwelling on the strange turning point at which computers now ask us to prove that we’re not robots – a development that cuts in both negative and positive directions. The potential of AI is like that of electricity or fire: both destructive and helpful. The ways AI can help society are vast, but they come with a lot of risks.

Donahoe focused in particular on the ramifications of widespread AI deployment, especially for the enjoyment of human rights and democratic governance. Mankind now needs to find better ways of capitalizing on AI’s huge potential while protecting humanity and human beings from its various risks. As such, questions have to be answered – including how much we should rely on AI in governance decisions, how democracy can withstand disinformation and the erosion of trust in digital information, how AI can reinforce discrimination and bias, and what the future of work looks like.

However, perhaps the three most compelling questions are these: what is human-centered AI, how does ethics relate to the concept, and how can the existing “human rights” framework guide better development and application of artificial intelligence?

Human-centered AI: Donahoe defines human-centered AI as an extremely important principle for the application and development of AI. Stanford has pegged it as the new priority in developing AI systems, but unfortunately the term can mean a lot of things.

  • When it comes to the “AI” portion, the term already covers a lot of ground. Some people use AI to refer to the full range of data-driven intelligence and processes – algorithmic and automated decision-making; unsupervised deep learning and machine learning that mimic biological neural networks; supervised AI in specific fields; and even artificial general intelligence (AGI).
  • Humanity’s focus, however, should be on the “human-centered” portion of the concept. It’s important to have a unified stance on just what this concept means when it comes to reinforcing a human-centered approach to the application and development of AI.
  • As per Fei-Fei Li, Stanford Associate Professor of Computer Science, three goals should guide the creation of human-centered AI: it should be guided by concern for its effects on humanity, it should enhance rather than replace human capabilities, and it should reflect the depth of human intelligence.
  • For Donahoe, concern for its effects on humans should be the central focus when building human-centered AI. It’s important to remember that making AI “human-like” won’t necessarily make it human-centered, just as making AI “democratized” or “friendlier” can still be dangerous if its development hasn’t taken its effects on humans into account.
  • It’s also important to remember that while AI can do a lot of things for humans, matters of governance, accountability, and responsibility must be addressed. Humans shouldn’t be removed from the organization of society even when AI systems are used, which also means finding new ways to keep humans accountable and responsible for the AIs they develop.

Ethics and Human-centered AI: Donahoe also emphasized the need for dialogue about how the current field of ethics fits with the idea of human-centered AI, as there’s an ongoing ethical debate on the matter. For Donahoe, three questions serve as entry points to this debate.

  • The first question is whether AI should be granted “legal personhood,” or treated as an ethical agent. This means answering whether AI should even be allowed to make decisions for itself in the first place.
  • The second question is whether humans should be assigned ethical responsibilities when it comes to robots and artificial intelligence.
  • The third is exactly what kinds of ethical responsibilities the people who design, develop, and apply AI should have.
  • Luckily, a lot of ethics-based initiatives have started popping up around AI, and technologists have begun formulating exactly what responsibilities the people who design, develop, and deploy AI have for their policies, processes, and products.

These various initiatives have their own terms for and variations on how “human values” should be used to create AIs that benefit humanity. But right now, many of these initiatives remain very abstract.

Beneficial AI Development and Application with the current Human Rights framework: According to Donahoe, there’s no need to retreat to vague ideas or reinvent frameworks when thinking about how to guide AI. In fact, our current human rights framework can be used to reflect on where AI development should be going.

  • Many human rights frameworks already exist based on international human rights law – law that has been negotiated and recognized internationally. Human rights norms are well-suited to humanity’s current digitized and globalized context, primarily because they can be applied universally.
  • A universal human rights framework also provides a practical and rich basis for evaluating the effects of a particular AI on humanity. These range from how AI affects the human right to work, to how AI should respect the human right to privacy, to other rights such as free expression, access to information, equal protection, nondiscrimination, and fairness.

To close her discussion, Donahoe said that human-centered AI shouldn’t be treated as solely a technology challenge. It requires insight from fields outside computer science, and it’s a cross-disciplinary, cross-sector, and cross-cultural endeavor.
