Realizing Beneficial AI, Keynote by Tak Lo, Founder of Zeroth.ai

AI (Artificial Intelligence) is transforming society and has the potential to change the world. Its rise has also fuelled a growing discussion of ethics in AI. In this keynote at Hello Tomorrow Singapore 2019, Tak Lo talks about ethics and how it leads to beneficial AI, arguing strongly that we have to pay more attention to ethics in AI.




 
Tak Lo, Founder of Zeroth.ai

Coercion and Corruption in Business

He speaks about two ethical issues that arise in business: coercion and corruption. Coercion refers to the fact that a transaction between a buyer and a seller is not always fair; oftentimes, it is the seller who coerces and the buyer who buys. Corruption refers to what happens when something is bought, sold, or commoditised: if an item that is not meant to be sold (like pride or naming rights) is sold anyway, it corrupts the very idea of what that item is.

Examples of Ethical Issues: Insurance, Facebook, and Google

Life insurance companies pay out in the event of death. When the concept first appeared, life insurance was seen as morally reprehensible, yet many companies packaged it very nicely. In a way, they coerced the elderly into signing life insurance policies and had someone else pay the premium, so that when the elderly person passed away, that person could collect the pay-out. The initial perception, then, was that you would want the person to die sooner so you could collect the pay-out.

For Facebook, he talks about how they incentivise your clicks. They optimise the click flows, artwork, and algorithms to compel people to open certain things. He highlights that this is not a fair transaction.

For Google, the original motto was 'Don't be Evil', but after a while it was removed. Along the way, Tak Lo feels, the nature of 'Don't be Evil' has been corrupted, and it's all very different now.

We didn't aim for that outcome - it was just a side effect of what we are doing.

He shares the view of one of the founders of the Future of Life Institute: the problem with AI is that human extinction may be a side effect. The argument parallels how, as human beings spread across the world, animal species started going extinct. It suggests that the use of AI may unintentionally bring about harm to human beings.

It is important for all of us to spare a thought for ethics and AI. To that end, Zeroth.ai presents a response to the ethical problems surrounding AI: it is going to set up an AI ethics board, similar to Google DeepMind's, with experts from around the globe to judge the ethical implications of technology and AI.

The point is not that a single solution exists; rather, it is about realizing the importance of the ethical implications of AI and coming together to start a conversation to address them.
