Realizing Beneficial AI, Keynote by Tak Lo, Founder of Zeroth.ai

AI (Artificial Intelligence) is transforming society and has the potential to change the world. The rise of AI has also led to a growing discussion of ethics in AI. In this keynote at Hello Tomorrow Singapore 2019, Tak Lo talks about ethics and how it leads to beneficial AI. He strongly believes that we have to pay more attention to ethics in AI.

Tak Lo, Founder of Zeroth.ai

Coercion and Corruption in Business

He speaks about two parts of ethics that come into business: coercion and corruption. In terms of coercion, whenever there is a transaction between a buyer and a seller, it is not always fair; often, the seller coerces and the buyer buys. In terms of corruption, when something is bought, sold, or commoditised, the sale can corrupt the nature of the item itself. If an item is not meant to be sold (like pride or naming rights) but is sold anyway, selling it corrupts the idea of what that object is.

Examples of Ethical Issues Regarding Insurance, Facebook and Google

Life insurance companies pay out in the event of death. When this concept first emerged, life insurance was seen as morally reprehensible. However, many companies packaged the concept very attractively. In a way, they coerced the elderly into signing life insurance policies while having someone else pay the premium, so that when the elderly person passed away, that person could collect the pay-out. The initial worry was that the payer would want the insured person to die sooner in order to collect the pay-out.

For Facebook, he talks about how they incentivise your clicks. They optimise the clicks, artwork, and algorithms to compel people to open certain things. He highlights that this is not a fair transaction.

For Google, the original slogan was 'Don't be Evil', but after a while it was removed. Tak Lo feels that through this shift, the spirit of 'Don't be Evil' has been corrupted and things are very different now.

We didn't aim for that outcome - it was just a side effect of what we are doing.

He shares the view of one of the founders of the Future of Life Institute: the problem with AI is that human extinction may come about as a side effect. The argument parallels how, as human beings spread across the world, animal species started going extinct. This suggests that the use of AI may unintentionally bring harm to human beings.

It is important for all of us to spare a thought for ethics and AI. Tak Lo therefore presents a solution to the ethical problems revolving around AI: setting up an AI ethics board, similar to Google DeepMind's, that brings together experts from around the globe to judge the ethical implications of technology and AI.

The point is not that there is a ready-made solution; rather, it is about realizing the importance of the ethical implications of AI and coming together to start a conversation to resolve them.

