As part of the 38th HKTDC Hong Kong Electronics Fair (Autumn Edition), the Symposium on Innovation & Technology was themed “AI Empowerment – Grow without Limits”. The event featured Ms. Jessie Lin, Vice President of SenseTime, who shared ways AI can empower industries and addressed job concerns relating to AI.
We think research drives product evolution … that product evolution drives the revenue.
Founded in 2014 by the head of the Multimedia Lab at the Chinese University of Hong Kong, SenseTime places heavy emphasis on basic research. This has enabled the business to introduce new products and services and to drive revenue growth, making it the most valuable AI company in the world.
The business covers a broad range of key verticals into which “AI+” is integrated: smart retail, fintech, autonomous driving, smartphones, and smart cities. Ms. Lin picked the top three examples to show the empowerment AI can provide.
SenseTime provides three kinds of help to retailers like Walmart and IKEA: store management, customer profiling, and enhancing the customer experience. Using facial recognition, the system can track how long customers linger at each section and the order in which they move through the store, and can identify VIP customers. Combined with big data and shopping-pattern analysis, this helps store owners pinpoint customer needs more quickly.
To improve the shopping experience, SenseTime is also working on unmanned stores, where cameras mounted on the shelves automatically track the items taken and the amount to be paid. At checkout, customers’ faces are scanned and permission is requested to deduct the purchase from their e-wallet; once granted, the transaction is completed. To date, this is mostly seen in the Chinese mainland.
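The checkout flow described above can be sketched as a short sequence. This is only an illustration of the idea, not SenseTime’s actual system; the function name, wallet lookup, and consent flag are all hypothetical:

```python
# Hypothetical sketch of an unmanned-store checkout: shelf sensors have
# already accumulated a cart, then a face scan plus explicit consent
# triggers an e-wallet deduction.

def checkout(cart: dict[str, float], face_id: str,
             wallets: dict[str, float], consent: bool) -> bool:
    """Deduct the cart total from the shopper's e-wallet if they consent."""
    total = sum(cart.values())
    if not consent:                      # permission must be granted first
        return False
    balance = wallets.get(face_id, 0.0)
    if balance < total:                  # insufficient funds: no transaction
        return False
    wallets[face_id] = balance - total   # complete the deduction
    return True

wallets = {"customer-42": 100.0}
cart = {"milk": 12.5, "bread": 8.0}
ok = checkout(cart, "customer-42", wallets, consent=True)
```

The key design point mirrored from the article is that the deduction only happens after the shopper explicitly grants permission at the face scan.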
“The financial industry places great emphasis on security.” Banks can use this help during account opening or transactions, applying facial recognition with an error rate of one in 100 million. This means customers can open accounts remotely, without having to visit banks or securities firms.
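Face-based verification of this kind typically reduces to comparing two face embeddings against a threshold. A minimal sketch of that comparison follows; the embeddings and the threshold value are invented for illustration, whereas in a real deployment the threshold would be calibrated so the false-accept rate meets a target such as the one-in-100-million figure quoted above:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def verify(live: list[float], enrolled: list[float],
           threshold: float = 0.92) -> bool:
    """Accept the applicant only if the live scan matches the enrolled face."""
    return cosine_similarity(live, enrolled) >= threshold

# Toy 3-dimensional embeddings (real systems use hundreds of dimensions).
same_person = verify([0.2, 0.8, 0.1], [0.21, 0.79, 0.12])
stranger = verify([0.2, 0.8, 0.1], [0.9, 0.1, 0.4])
```

Raising the threshold lowers the false-accept rate at the cost of more false rejects, which is the trade-off banks tune for remote account opening.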
To enable autonomous driving, the first mission is to teach the car how to see.
In contrast to the radar and sensor approaches of the past, image recognition is now used to judge the distance between cars and to spot pedestrians crossing the road, even in more extreme conditions such as night-time or heavy fog.
The industry grades autonomous vehicles on a scale up to Level 5, the most autonomous. Ms. Lin feels that most cars today are at Level 2, also known as assisted driving: the system can send an alert when the car drifts out of its lane. This is especially attractive to insurance companies and to logistics companies running long-distance truck services, as it helps them monitor their drivers better.
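The lane-drift alert Ms. Lin describes can be reduced to a simple geometric check once the lane edges have been detected by the vision system. The sketch below is a hypothetical simplification with made-up distances, not the actual product logic:

```python
# Minimal sketch of a Level-2 lane-departure alert: given the detected
# lane edges and the vehicle's lateral position (all in metres), warn
# when the car drifts within a safety margin of either edge.

def lane_departure_alert(left_edge_m: float, right_edge_m: float,
                         vehicle_center_m: float,
                         margin_m: float = 0.4) -> bool:
    """Return True when the vehicle is within margin_m of a lane edge."""
    return (vehicle_center_m - left_edge_m < margin_m or
            right_edge_m - vehicle_center_m < margin_m)

# Lane spanning 0.0 m to 3.5 m: a centred car is fine, a drifting one is not.
centred = lane_departure_alert(0.0, 3.5, 1.75)
drifting = lane_departure_alert(0.0, 3.5, 3.3)
```

The hard part in practice is the perception step that produces the lane-edge positions from camera frames; the alert itself, as here, is a threshold on lateral offset.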
This is a general-purpose technology, just like the steam engine and the internet. Going forward, every industry will incorporate AI in one way or another.
Computers and neural networks were invented about 80 and 40 years ago respectively, yet deep learning, which stacks many neural-network layers, was only applied to image recognition about four years ago. It has three main components: algorithms, computing power and data. The volume of data collected today is what enables AI: for instance, about 1.7 megabytes of data is generated per second for every human being, and 300 hours of video are uploaded to YouTube every minute.
AI today is able to complete specific tasks, but it is hard for it to do so alone, without human intervention.
The challenge for us is – how to work with AI just as we work in a team today.
Ms. Lin envisions the future as human plus AI. “It’s not the beginning of the end, but the end of the beginning. The end that we are finally passing the baby steps of AI, finally growing into a teenager.”