Attacking Differential Privacy Using the Correlation Between the Features

2023-04-19

Learn how differential privacy works by simulating an attack on data protected with this technique.


Continue reading

Are LIME Explanations Useful?

2023-04-18

Don't let black box models hold you back. With LIME, you can interpret the predictions of even the most complex machine learning models.


Continue reading

Explaining AI - The Key Differences Between LIME and SHAP Methods

2023-04-14

When it comes to explainable AI, LIME and SHAP are two popular methods for providing insights into the decisions made by machine learning models. What are the key differences between these methods? In this article, we will help you understand which method may be best for your specific use case.


Continue reading

LIME - Understanding How This Method for Explainable AI Works

2023-04-14

Discover how the LIME method can help you understand the important factors behind your model's predictions in a simple, intuitive way.


Continue reading

SHAP - Understanding How This Method for Explainable AI Works

2023-04-14

Discover how the SHAP method can help you understand the important factors behind your model's predictions in a simple, intuitive way.


Continue reading

KernelShap and TreeShap - The Two Most Popular Variations of the SHAP Method

2023-04-14

Making sense of AI's inner workings with KernelShap and TreeShap, two powerful tools for responsible AI.


Continue reading

LIME Tutorial

2023-04-14

Want to unveil the mysteries of AI decisions? Dive into LIME, the tool that sheds light on the black box.


Continue reading

Evaluation of Interpretability for Explainable AI

2020-11-05

Learn about the evaluation of interpretability in machine learning with this guide. Discover different levels and methods for assessing the explainability of models.


Continue reading