September 20, 2023

Making AI Trustworthy with Capsa

Capsa allows you to assess and mitigate risk.


In April 2019, the European Commission's High-Level Expert Group on AI unveiled a document that would shape the ethical and legal landscape of artificial intelligence. The EU "Ethics Guidelines for Trustworthy Artificial Intelligence" is a comprehensive framework aimed at establishing a solid foundation for AI development and deployment within the European Union and beyond. In fact, the upcoming EU AI Act - the world's first comprehensive legislative framework for AI - is expected to define certain AI systems as "high-risk" and subject them to more stringent regulation and oversight. For those systems, risk assessment and mitigation at scale will be essential to guarantee compliance.

According to the EU Guidelines, trustworthy AI must satisfy three criteria:

1. Laws: Trustworthy AI should operate within the bounds of all relevant laws and regulations. This means AI systems must respect and uphold the legal frameworks that govern their use, ensuring compliance with data protection, intellectual property, and other pertinent laws.

2. Ethics: Beyond legality, AI systems must also be ethical. This implies a commitment to upholding a widely recognized and accepted set of moral principles and values (e.g., promoting fairness, respecting human rights, and ensuring transparency and accountability).

3. Robustness: The third pillar of trustworthy AI is robustness. AI systems need to be reliable and accurate across varied contexts and situations. They should not falter under unexpected conditions but perform consistently and dependably in real-world scenarios. They should also be able to signal uncertainty and low confidence so that humans can take control when needed.

In this ever-evolving landscape of AI development, state-of-the-art tooling has become increasingly vital. Capsa, built by the engineers at Themis AI, stands at the forefront of this technological leap. Capsa provides model-agnostic, quantifiable insight into uncertainty, a property indispensable for the development and deployment of robust AI systems.

Existing approaches for estimating uncertainty in neural networks incur significant computational costs. These methods have traditionally depended on repeatedly running or sampling a network to gauge its confidence. This process consumes both time and memory, making it ill-suited to high-risk, rapid decision-making scenarios.
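To make that cost concrete, consider Monte Carlo dropout, one of the most common sampling-based estimators. The sketch below is generic Keras code (not capsa) illustrating why such methods are expensive: every input requires dozens of full forward passes.

```python
import numpy as np
import tensorflow as tf

# A toy regression network with dropout, used only to illustrate
# sampling-based uncertainty estimation.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.1),
    tf.keras.layers.Dense(1),
])

def mc_dropout_predict(model, x, n_samples=50):
    """Estimate a predictive mean and spread by keeping dropout active
    at inference time and running the network n_samples times."""
    preds = np.stack(
        [model(x, training=True).numpy() for _ in range(n_samples)]
    )
    # Cost: n_samples full forward passes per input batch.
    return preds.mean(axis=0), preds.std(axis=0)

x = np.random.randn(8, 16).astype("float32")
mean, uncertainty = mc_dropout_predict(model, x)
```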

In contrast, Capsa stands out as a model-agnostic, easily implementable framework tailored for uncertainty quantification. Our Python library encompasses a range of methods packaged as easy-to-use "wrappers," making them applicable and scalable across model sizes and types.
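As a sketch of what this wrapper pattern looks like in practice, the snippet below is modeled on examples from capsa's open-source release; the wrapper name, constructor, and output structure are assumptions that may differ across versions.

```python
import numpy as np
import tensorflow as tf
from capsa import MVEWrapper  # wrapper name per capsa's open-source examples

# Any ordinary Keras model can serve as the base.
base_model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),
])

# Toy data, for illustration only.
x_train = np.random.randn(256, 16).astype("float32")
y_train = np.random.randn(256, 1).astype("float32")

# Wrapping converts the model into an uncertainty-aware variant;
# training otherwise follows the usual Keras workflow.
wrapped_model = MVEWrapper(base_model)
wrapped_model.compile(optimizer="adam", loss="mse")
wrapped_model.fit(x_train, y_train, epochs=2)

# Inference now yields an uncertainty estimate alongside each prediction
# (the exact output structure depends on the capsa version installed).
output = wrapped_model(x_train[:8])
```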

At any stage of model development (e.g., annotation, pre-training, training, fine-tuning, inference), Capsa automatically converts models into uncertainty-aware variants that provide uncertainty estimates alongside every output. More specifically, Capsa allows your model to quantify several types of uncertainty:

  1. Vacuitic Uncertainty: Biased training and testing data sets lead to high uncertainty for inputs that are underrepresented. This risk factor is particularly difficult to identify because it relates to latent features, i.e., variables that are not directly and explicitly represented; latent feature imbalance can lead to inaccurate predictions.
  2. Aleatoric Uncertainty: This type of uncertainty comes from noisy, erroneous, or ambiguous data in training sets.
  3. Epistemic Uncertainty: This type of uncertainty arises where the model lacks training data and therefore knowledge of the underlying process (the sketch after this list shows one standard way to separate the aleatoric and epistemic components).
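To make the distinction between the last two concrete, here is a minimal, framework-level sketch (generic Keras code, not capsa's internals) of one standard decomposition: an ensemble of mean-variance networks, where the averaged predicted variance estimates aleatoric noise and the disagreement between members estimates epistemic uncertainty.

```python
import numpy as np
import tensorflow as tf

def make_mve_net(in_dim=16, hidden=64):
    # Each ensemble member predicts a mean and a log-variance; the
    # variance head models data noise (aleatoric uncertainty).
    inp = tf.keras.Input(shape=(in_dim,))
    h = tf.keras.layers.Dense(hidden, activation="relu")(inp)
    mean = tf.keras.layers.Dense(1)(h)
    log_var = tf.keras.layers.Dense(1)(h)
    return tf.keras.Model(inp, [mean, log_var])

# Untrained here for brevity; in practice each member is fit with a
# negative log-likelihood loss on the training data.
ensemble = [make_mve_net() for _ in range(5)]
x = np.random.randn(8, 16).astype("float32")

means, variances = [], []
for net in ensemble:
    mean, log_var = net(x)
    means.append(mean.numpy())
    variances.append(np.exp(log_var.numpy()))
means, variances = np.stack(means), np.stack(variances)

aleatoric = variances.mean(axis=0)  # average predicted data noise
epistemic = means.var(axis=0)       # disagreement between members
```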

Capsa provides several broad classes of wrappers (e.g., Sculpt, Vote, Neo) to quantify these uncertainty types. Each wrapper class encompasses a family of uncertainty estimation algorithms that you can select and tune to your specific needs. Our conversion procedure ensures that the AI model produces uncertainty estimates with minimal computational overhead.
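For a feel of how selecting and tuning works, the snippet below swaps estimators on the same base model. The wrapper names and the `num_members` parameter come from capsa's open-source examples and are assumptions relative to the Sculpt/Vote/Neo families named above.

```python
from capsa import EnsembleWrapper, DropoutWrapper  # names per the open-source release

# The same base model can be wrapped with different estimators; tuning
# hyperparameters such as ensemble size trades compute for fidelity.
ensemble_model = EnsembleWrapper(base_model, num_members=5)  # vote-style: member disagreement
dropout_model = DropoutWrapper(base_model)                   # sampling-based alternative
```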

By measuring uncertainty, we can understand a model's robustness, namely its ability to perform reliably and accurately in diverse scenarios. Capsa empowers organizations and AI developers to take a significant step toward the ultimate goal of ensuring that AI models can be trusted. In alignment with the guidelines of the EU AI Act, Capsa emerges as a tangible, turnkey solution for evaluating and understanding AI systems.

