Seth Dobrin: AI is reflecting back at us all the bad decisions we made in the past

AI is here and we are charmed and scared by it. But its evolution depends on the decisions that humans are making now, says Dr. Seth Dobrin, who spoke about AI at the Creativity4Better global conference, organised by IAA Romania in Bucharest this month.

Seth Dobrin is the CEO of Trustwise, President of the Responsible AI Institute and Founder of Qantm AI. He has ideated some of the most innovative AI strategies for a variety of Fortune 500 companies and designed and led cutting-edge technology strategy initiatives at IBM. Trained as a human geneticist, he launched his career in tech preoccupied with the rigor of the scientific method applied to business, combined with his interest in human nature as described through data.

"AI is reflecting back at us all the bad decisions that we made in the past: when it generates hate, when it generates biases, when it propagates inequities. It is a mirror, but it is a mirror that looks back historically, and it looks back for as long as you have data," says Seth Dobrin.

During a break at the conference, we talked with Seth about AI, creativity, the human touch and the need for regulation and responsibility in using the new technologies that are changing the world.

 

Misconceptions about AI

One of the biggest misconceptions - actually not a misconception, but a misimplementation - is that AI is often deployed without a strategy tied to business value: how much money is it making or saving the company, how is it making human lives better, cheaper, faster. That's mistake number one.

Mistake number two is that humans are often an afterthought. People don't consider humans upfront when they are designing and building an AI system. 

And third, they don't consider responsibility and trustworthiness first.

The way they build it does not actually add value to the humans or to the business.

 

The fear of AI

There is actually a reason for the fear of AI. In the hands of people who are not using it responsibly, AI can do a lot of bad things, and even in the hands of people who are trying to use it responsibly, it can be a bad thing. So, without mitigation efforts upfront, AI can amplify existing human biases that are embedded in the data. We need proper controls that can create a fair, better world for society.

When people are using or interacting with AI, it's important to understand how current technologies like DALL-E, the online system that generates images, are built: they are trained on the whole internet. It's not surprising that it learned biases. It's not surprising that it learned hate. You can ask it to generate certain things and it can give you biased images.

A similar one is GPT-3, a large language model. Again, it was trained on the whole internet and it learned a lot of hateful and discriminatory things.

 

A mirror for humanity

AI is reflecting back at us all the bad decisions that we made in the past: when it generates hate, when it generates biases, when it propagates inequities. It is a mirror, but it is a mirror that looks back historically, and it looks back for as long as you have data.

AI is math, and math is not inherently biased. What the math in AI is doing is learning how past decisions were made and helping to carry those decisions forward. It is learning about all the bad decisions we made in the past, and we don't actively monitor or mitigate for those. So we, as professionals, have a decision to make. We can allow AI to propagate that hate and those biases, or we can use AI to make better, more inclusive decisions.

 

Ethics

AI ethics is a very hot topic in the business community. I recently left IBM, where I was Global Chief AI Officer. At IBM, we had an Ethics Board, which creates policies, governance and controls for what we would and wouldn't do in the organisation. It monitored what we were doing internally and made sure that we were staying within our principles.

The organisation that I am joining now, the Responsible AI Institute, helps organisations set up guidelines for governance, establish policies and measure their maturity in this area, as well as that of the AI systems they employ.

 

What can we learn from AI

The goal in most cases is not to get rid of human decision-making. It is to provide insights to humans, or to narrow the near-infinite number of possibilities among which humans are choosing. The decisions presented to them, when AI is used responsibly, have had most of the relevant biases removed or mitigated. We'll never get rid of all biases. But with AI we can get rid of the biases that matter for those decisions, and that's why it is important to start with humans.

 

Can AI be creative?

Absolutely. For example, Refik Anadol, a Turkish artist, is using AI to create art: physical or virtual pieces, installations. So AI can be used for creative purposes. There are AI systems that can help you write. So AI can be creative. Will it ever totally replace creatives? I don't think so. Again, the whole goal is not for AI to replace humans; it is to augment them, to help you do your work better, cheaper or faster.

Recently, the artist I mentioned used an AI system to evaluate the collection of the Museum of Modern Art in New York. What this AI system did was look at works of similar artists and generate works as if one person had made them.

I've played with DALL-E and GPT-3. I think they are great toys. They are tools we should be careful about if we are using them in business, but right now they are just toys. Because they are trained on the whole internet, they have hate, biases, racism and misogyny built into them, and they will propagate these inequities that we are talking about. There need to be some controls around them when they are released.

There will be some cheaper work in the creative industry. There's cheaper work out there today. But good art will never be cheapened. Yes, I could generate things on my own, but they will never be as good as a true creative's.

 

First steps in regulation

We should regulate outcomes, not the use of AI itself. There would be things that are off-limits, and things that are considered high-risk - things that generally impact human health - and those need to be rigorously understood by organisations.

I think that's the way to do it: a risk-based, outcome-focused application of regulation.

 

When AI will rule the world

We've been saying, for 50 years, that general AI is going to be here in 50 years, and we are still saying that it will be here in 50 years. We need to consider it now so that we'll be prepared when it comes. But even now, it is important that AI is transparent and explainable, that we as humans can understand it, that it is safe, robust and secure, and that it doesn't propagate inequities.

Even when we get to that future world of autonomous general AI, hopefully we will have built in the things that help protect us and help create a better society.

There are some tasks that AI will take over from humans, but those are generally things that humans don't want to do: very boring tasks. I think it is important that AI is used as a tool by humans, to help them do what they are doing better, in a more productive and faster manner. What we need to do is make it part of the human process. How we use it as a tool for our human creativity is really what's going to be important.
