Is your model biased? Is it deviating from the predictions it ought to make? Has it misunderstood the concept? In the world of artificial intelligence and machine learning, the word "fairness" comes up constantly: it describes the quality of being impartial. Fairness in ML is essential for contemporary businesses. It builds consumer confidence, shows customers that their concerns matter, and helps ensure adherence to guidelines set by regulators, thereby upholding the idea of responsible AI. In this talk, we'll explore how certain sensitive features influence a model and introduce bias into it, and look at how we can make it better.
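As a taste of the kind of analysis the talk covers, here is a minimal sketch (not from the talk itself) of one common way to quantify the influence of a sensitive feature: demographic parity difference, the gap in positive-prediction rates between groups. The function name and the toy data below are illustrative assumptions.

```python
# Minimal sketch: demographic parity difference, i.e. the gap in
# positive-prediction rates across groups of a sensitive attribute.
# All data here is synthetic and for illustration only.

def demographic_parity_difference(y_pred, sensitive):
    """Return the absolute gap in positive-prediction rates between groups."""
    rates = []
    for group in set(sensitive):
        preds = [p for p, s in zip(y_pred, sensitive) if s == group]
        rates.append(sum(preds) / len(preds))
    return max(rates) - min(rates)

# Hypothetical model outputs (1 = favorable outcome) and a binary
# sensitive attribute (group "a" vs. group "b").
y_pred    = [1, 0, 1, 1, 0, 1, 0, 0]
sensitive = ["a", "a", "a", "a", "b", "b", "b", "b"]

# Group "a" gets a favorable outcome 75% of the time, group "b" only 25%,
# so the difference is 0.5 -- a sign the sensitive feature may be biasing
# the model.
print(demographic_parity_difference(y_pred, sensitive))
```

A value near 0 suggests the model treats the groups similarly on this metric; larger gaps flag predictions worth auditing further.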

Nandana Sreeraj

Affiliation: Censius.ai

Nandana is a data scientist at Censius AI. She completed her bachelor's degree at the College Of Engineering, Trivandrum. She previously worked in the e-commerce industry, where she dealt with real-world problem statements including product ranking and recommendation systems. She has published a research paper in the healthcare domain in an international journal. Currently, her research interests lie in the Explainable AI domain.

Visit the speaker at: GitHub