
Why Responsible AI Must Disrupt Your Organization From the Ground Up (VB)



Presented by Dataiku


White box AI is getting a lot of attention right now. But what does it mean in practice, and how can companies start moving from black box systems to more explainable AI? Find out why white box AI delivers business benefits and how to make the shift when you take part in this VB Live event.

Register here for free.

Black box AI systems are often blamed for unwanted, even unethical, results. But the conversation is much more complex, says Rumman Chowdhury, managing director at Accenture AI. When technologists or data scientists speak of black box algorithms, they are referring specifically to a class of algorithms whose outputs we cannot always trace back to how they were produced, or to systems that are otherwise unintelligible.

"Just because something is a black box algorithm doesn't necessarily mean that it is irresponsible," says Chowdhury. "There are a number of interesting models that can be used to explain output ̵

1; for example the human brain."

For this reason, black box AI systems actually have an important relationship with responsible AI, she explains. Responsible AI can be used to understand and unpack a black box system, even when explainability is not built into the algorithm itself.

"When people talk about explainability at the receiving end of a model's release, they actually want to understand," says Chowhury. "Understanding is about explaining the output in a way that is beneficial to the user."

Take the Apple Card controversy, in which an allegedly sexist algorithm granted a woman a lower credit limit than her husband: the customer service representatives could only say that they didn't know why it happened, that the algorithm had simply decided it. So it's not just about data scientists understanding the model. It's about a customer service representative being able to explain to a customer how the algorithm reached its conclusion and what that means for them, rather than a general discussion about unpacking a neural network, Chowdhury explains.
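To make that concrete, here is a minimal sketch (not from the article) of what explaining a single prediction can look like in practice. It assumes a feature-attribution approach using the SHAP library with an XGBoost classifier trained on the census income dataset that ships with SHAP as a stand-in for credit data; the tools and dataset are illustrative assumptions, not anything Chowdhury or Dataiku prescribe.

```python
# Illustrative sketch: per-prediction feature attribution with SHAP (assumed toolchain).
# The goal is to show which inputs pushed one applicant's score up or down, the kind of
# output a customer-facing person could translate into a plain-language explanation.
import xgboost
import shap

# Census income data bundled with shap, used here as a stand-in for credit data.
X, y = shap.datasets.adult()

# Train a gradient-boosted classifier: the "black box" we want to unpack.
model = xgboost.XGBClassifier(n_estimators=100, max_depth=4).fit(X, y)

# Compute per-feature contributions for a batch of predictions.
explainer = shap.Explainer(model, X)
shap_values = explainer(X[:100])

# Waterfall plot for the first applicant: each bar shows how much a feature
# moved the model's score above or below the dataset-wide baseline.
shap.plots.waterfall(shap_values[0])
```

The point of an attribution view like this is not the plot itself, but that it turns "the algorithm simply decided" into a ranked list of concrete factors that can be communicated to the person affected.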

] "Correctly done, understandable AI, well explained, means that people can make the right decisions and make the best decisions for themselves."

To capture the benefits of innovation while identifying possible negative consequences, the most important thing companies need is cross-functional governance. Responsible thinking should play a role in every step of the process, from the first brainstorming about a project through development, deployment, and use.

"If we develop and implement AI responsibly, we are not just thinking about what we deliver to our customers, but what we do for ourselves," says Chowhury. "We know that there is no one-size-fits-all approach."

The biggest challenge in implementing responsible or ethical AI is usually that it is a very large and daunting undertaking. Media attention has driven a great deal of concern from the start. But then the more complicated questions arise: What does it actually mean to be responsible or ethical? Does it mean compliance with laws, a change in corporate culture, or something else? Establishing an ethical AI practice helps break this down into four pillars: technical, operational, organizational, and reputational.

Companies most often understand the technical component: How do I unpack the black box? What is the algorithm actually doing?

The operational pillar may be the most important and determines the overall structure of your initiative. It's about creating the right kind of organizational and corporate structure.

This leads into the third pillar, the organizational one: How do you hire the right people, and how do you create cross-functional governance? Finally, the last pillar, reputation, requires thoughtful and strategic thinking about how you talk about your AI systems and how you give your customers the confidence to share their information and interact with your AI.

"This is changing the field of data science in a very important way," says Chowdhury. "To create models that are understandable and explainable, data scientists and customer-facing teams have to be deeply involved. Customer-facing people must be involved in the early development phase. I think data science will grow as a field and develop roles that specialize in algorithmic critique. I'm pretty excited that this is happening."

Learn more about how companies can create a culture of responsible, ethical AI, the challenges of unpacking a black box from the organizational level down to the technology, and how to get your own initiative started at this VB Live event.


Don't miss it!

Register here for free.


Important tips:

  • How to make the data science process collaborative across the entire organization
  • How to build trust from the data to the model
  • How to move your company toward data democratization

Speakers:

  • Rumman Chowdhury, Managing Director, Accenture AI
  • David Fagnan, Director, Applied Science, Zillow Offers
  • Triveni Gandhi, Data Scientist, Dataiku
  • Seth Colaner, AI Editor, VentureBeat
