Transparency needed to combat biased AI, says analyst


It is a misnomer to call AI biased; it is humans who are biased, and AI learns from data drawn from society. As they say, garbage in, garbage out — or, to put it in ethical AI terms, biased data in, biased analysis out.

Bias is becoming a major headache in the rapidly growing AI market. PwC projects that AI could contribute $15.7 trillion to the global economy by 2030 — a huge opportunity, and still only the first steps; the following decade is likely to be much bigger. But there is rarely a free lunch, and in the case of AI there is a massive risk to society: as its influence grows, so will biased outcomes.

So how do we overcome the issue of biased datasets that have their roots in the very foundations of language and culture, built on thousands of years of data collection?

Dr Nicolai Baldin, the founder of Synthesized, a company that has developed bias mitigation software, says, "We believe the answer to data bias needs a societal solution".

The first step towards solving a problem lies with acknowledging there is a problem.

AI will transform business; it will continue to evolve, taking on more and more tasks from us and making decisions without human oversight. However, ceding certain cognitive controls to a machine is intimidating, and it needs to happen in a controlled and transparent manner.

It needs a strong and universal set of best practices and hard rules, which can only be developed by an independent and impartial third party.

But it's a challenge.

Sarah Burnett, a well-known tech analyst who assesses and advises clients on technology ethics, says: "Many enterprises are new to AI and need as much help as they can get to ensure that they do not create problems with poor practices."

Burnett added: "That said, I think we need to ask tool providers for transparency as well. For example, exactly what it is that they are providing and what their technology practices are like."

Synthesized has responded to the challenge by making FairLens, its tool for discovering hidden biases and ensuring data transparency and fairness, open-source software.
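To give a flavour of the kind of check such a tool performs — this is a generic, hypothetical illustration, not the FairLens API — a simple bias probe might compare positive-outcome rates across groups defined by a sensitive attribute and report the gap:

```python
from collections import defaultdict

def outcome_rates_by_group(records, group_key, outcome_key):
    """Positive-outcome rate for each value of a sensitive attribute."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for row in records:
        group = row[group_key]
        totals[group] += 1
        if row[outcome_key]:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(rates):
    """Difference between highest and lowest group rates; 0 means parity."""
    return max(rates.values()) - min(rates.values())

# Hypothetical loan-approval records
data = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

rates = outcome_rates_by_group(data, "group", "approved")
gap = demographic_parity_gap(rates)  # group A approved at 2/3, group B at 1/3
```

Here the 33-point gap between groups would flag the dataset for closer inspection; dedicated tools apply many such fairness metrics systematically rather than one ad-hoc check.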

Synthesized was recently recognised by analyst firm Gartner as "cool."

Gartner said that "Synthesized is cool because it uses AI and GANs to generate synthetic data, and it systematically identifies and mitigates bias in the resulting dataset."

So it is encouraging that Gartner is applying the cool accolade to a company that focuses on responsible use of AI.

Burnett continued: "The fact that analyst firms are starting to highlight good AI tools and practices is great. It will also increase the pressure on laggards, both on the supply and demand side of the market, to start embedding ethical technology practices."

Synthesized was co-founded in 2017 by Dr Nicolai Baldin, who holds a PhD in Machine Learning and Statistics from the University of Cambridge. Nicolai started Synthesized with a mission to transform the way our society works with data using Artificial Intelligence.

Baldin said: "We have recently open-sourced FairLens as a means of being transparent in how we approach bias and to encourage the broader data science community to contribute to the important work of fighting bias. We are also engaging businesses across all sectors by urging them to use our tool to discover the unfairness in their data."

The future is undoubtedly a digital one, and the importance of objectivity in the years to come cannot be overstated.
