On 10 October, the World Summit AI took place in Amsterdam with more than 4,000 attendees, double last year's number. This annual event brings together practitioners, influencers and users of applied artificial intelligence.

Kernix had the opportunity to participate again this year thanks to our contributions to the City AI association. One of the hot topics of the event was ethics in AI, which encouraged us to think about the different ethical problems around AI and how to handle them. In this article we focus on transparency, fairness, and the spread of AI.


Sarah Porter from @Inspired Minds introducing World Summit AI 2018

1. AI transparency

About data

AI adoption has increased in recent years: for example, the voice assistants offered by Google, Apple and Amazon are used by more and more people around the world. An AI algorithm processes the voice input, and people tell these assistants precise information about their lives: their agenda, their contacts, the places they have visited, their music preferences, and so on. A critical question arises about how this personal data is handled: is it anonymized? Is it sold to other companies? What are our rights regarding our personal data?

The European Union has made some progress on this subject with the GDPR, a regulation that states how personal data must be processed in order to preserve the privacy of all individuals. However, it only applies to European data, so questions remain for companies elsewhere in the world before we reach full transparency.

About algorithms

In most cases the underlying software remains a black box: is it acceptable to rely on systems whose decisions cannot be explained? How can we make sure that no programming flaws, corrupted data or silent errors have affected a decision if no real diagnosis is possible?

Influencers from Alibaba, Google, Yandex and Microsoft at a panel discussion about a leadership perspective of AI

This subject was discussed by leaders from major AI companies (Alibaba, Google, Yandex and Microsoft). They argued that people should be able to trust these devices, but they only said that more transparency is needed, while we were expecting more concrete details on how these companies plan to implement this vision.

We think that AI applications shouldn't be black-box algorithms. One step forward is to embrace open source initiatives and make algorithms public, so anyone can see how they treat data. This is why at Kernix we use open-source technology for our projects, so that our clients can access the code at delivery time. When we implement a machine learning model, we also care about the explainability of its results: we use Python libraries like eli5, which let our clients know which variables the model uses and which ones matter most in explaining its predictions.
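One technique behind such tools (eli5 among them) is permutation importance: shuffle one feature's values and measure how much the model's accuracy drops. As a self-contained illustration of the idea, here is a minimal pure-Python sketch using a toy hand-written model rather than a trained one; all names and data below are hypothetical.

```python
import random

# Toy hand-written "model": a linear scorer over two features.
# In a real project this would be a trained model inspected with eli5.
def predict(row):
    # feature 0 dominates the score; feature 1 barely matters
    return 1 if 0.9 * row[0] + 0.1 * row[1] > 0.5 else 0

def accuracy(rows, labels):
    return sum(predict(r) == y for r, y in zip(rows, labels)) / len(rows)

def permutation_importance(rows, labels, feature_idx, n_repeats=20, seed=0):
    """Average drop in accuracy when one feature's column is shuffled:
    a large drop means the model relies heavily on that feature."""
    rng = random.Random(seed)
    baseline = accuracy(rows, labels)
    drops = []
    for _ in range(n_repeats):
        column = [r[feature_idx] for r in rows]
        rng.shuffle(column)
        perturbed = [list(r) for r in rows]
        for r, v in zip(perturbed, column):
            r[feature_idx] = v
        drops.append(baseline - accuracy(perturbed, labels))
    return sum(drops) / n_repeats

# Tiny synthetic dataset whose label depends on feature 0 only
rows = [(x / 10, y / 10) for x in range(10) for y in range(10)]
labels = [1 if r[0] > 0.55 else 0 for r in rows]

imp0 = permutation_importance(rows, labels, 0)
imp1 = permutation_importance(rows, labels, 1)
print(f"importance of feature 0: {imp0:.2f}, feature 1: {imp1:.2f}")
```

Shuffling feature 0 destroys most of the model's accuracy while shuffling feature 1 barely changes it, which is exactly the kind of summary a client can read without opening the black box.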

2. Fair AI

Many people have criticized the fact that most voice assistants (Alexa, Siri, Google Home) have feminine voices. Dr Stephen Cave and Dr Kanta Dihal from @Leverhulme Centre for the Future of Intelligence argued that AI is the product of the white male imagination, in which the woman is the subordinate assistant who helps the man accomplish most of his tasks, and that this is why those voices are chosen. This example illustrates how human biases can be transferred from designers to AI systems.

In fact, female voice assistants are just the tip of the iceberg of the problem of fairness in AI. Most people see artificial intelligence as more rational and objective than human intelligence. But we live in an imperfect society that can generate data with sexist or racist biases. The problem is that AI is based on data, and biased data lead to biased algorithms.

At Kernix we encourage our clients to think about the issues caused by biases. Each time we develop predictive models, most often based on machine-learning algorithms, we warn our clients about their retraining strategies. Imagine that a client puts such a model in production and uses it to select a subset of their "most promising new customers" in order to focus their activity on them. If they then retrain the model with fresh data, that data will be restricted to this sub-population of "most promising new customers" and will not be representative of the full customer population. The danger is to end up retraining models on biased data. We therefore advise our clients to adopt random sampling strategies and to keep monitoring predictive performance continuously throughout the period of exploitation.
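The random-sampling strategy described above can be sketched in a few lines: alongside the top-scored customers the model exploits, a small random "exploration" sample is drawn from the remaining population, so that future training data keeps covering everyone. The helper below is a hypothetical illustration of this idea, not production code; the function name, fractions and data are all made up.

```python
import random

def select_for_campaign(customers, scores, top_frac=0.2, explore_frac=0.05, seed=42):
    """Mostly target the top-scored customers, plus a small random sample
    from the remainder so future retraining data stays representative of
    the full population, not just the model's own favourites."""
    rng = random.Random(seed)
    ranked = sorted(customers, key=lambda c: scores[c], reverse=True)
    n_top = int(len(ranked) * top_frac)
    top, rest = ranked[:n_top], ranked[n_top:]
    n_explore = min(int(len(customers) * explore_frac), len(rest))
    explore = rng.sample(rest, n_explore)
    return top, explore

# Hypothetical scores from a "most promising customer" model
rng = random.Random(0)
customers = list(range(1000))
scores = {c: rng.random() for c in customers}

top, explore = select_for_campaign(customers, scores)
print(len(top), len(explore))  # 200 exploited, 50 explored at random
```

Labels collected on the `explore` sample come from the whole score range, which is what keeps the next round of training data from collapsing onto the model's preferred sub-population.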

Oscar Celma from @Pandora sharing lessons learned from building a large recommender system. One of them: the machine has no common sense!

3. AI for everyone

Most AI applications are used for business or marketing purposes. For example, Ciaran Jetten from @Heineken presented how they use AI to improve operations, marketing and advertising; and Oscar Celma from @Pandora showed how they combine 70 different AI models to improve music recommendation and increase retention and user satisfaction.

Still, AI is a tool, and as such it can be applied to many fields and help society in general. For example, Derek Haoyang from @Squirrel AI uses AI in education to help children progress in the subjects where they need help.
Concerning health care, at Kernix we try to participate in different data challenges, and we recently won a prize at the JFR data challenge, where we developed a model to detect abnormal knee and thyroid images. But a lot more could be done.


Anand Raman from @Microsoft in a workshop about AI for Good

We believe that AI should be used for a wide range of applications, by private companies but also by public organizations and non-profit associations. For this, it is important that AI be spread across the world.

In one of the panel discussions, Ambassador Amandeep Singh Gill from @United Nations stated that everyone should have the opportunity to apply AI, which is not yet the case. In most cases people either don't have access to the hardware needed to build AI or lack the required coding skills. The United Nations has been pushing initiatives to bring knowledge to every human being, such as the right to internet access.

This issue is being addressed by the globalization of education, and online resources and courses are a good place to start. At Kernix, we believe this is an important issue to address, which is why we regularly give talks to demystify AI: https://www.linkedin.com/feed/update/urn:li:activity:6466252981984600064.


Conclusion

Attending a major conference such as the World Summit AI is always an opportunity for us to reflect on our work and its applications. The fact that ethics was a major topic this year is a good sign for the future of AI. We hope that, with more and more people aware of the limitations and problems of AI, we will be able to find practical solutions together.

DJ Sleeper's incredible animations!

Cristian Perez