Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.



If you have a tool or metric that you think should be featured in the Catalogue of AI Tools & Metrics, we would love to hear from you!

This page includes technical metrics and methodologies for measuring and evaluating AI trustworthiness and AI risks. These metrics are often represented through mathematical formulas that assess the technical requirements for achieving trustworthy AI in a particular context. They can help to ensure that a system is fair, accurate, explainable, transparent, robust, safe, or secure.
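As a minimal illustration of the kind of metric described above, the sketch below computes the demographic parity difference, a common fairness metric that compares positive-prediction rates between two groups (values near 0 suggest similar treatment). This example is not drawn from any specific catalogue entry; the function name and data are illustrative only.

```python
# Illustrative sketch (not from the catalogue): demographic parity
# difference. It compares the rate of positive predictions between
# two groups; a value near 0 suggests the model treats the groups
# similarly on this one criterion.

def demographic_parity_difference(predictions, groups):
    """Absolute difference in positive-prediction rates between groups.

    predictions: iterable of 0/1 model outputs
    groups: iterable of group labels (exactly two distinct values expected)
    """
    counts = {}  # group -> (total, positives)
    for pred, group in zip(predictions, groups):
        total, positives = counts.get(group, (0, 0))
        counts[group] = (total + 1, positives + pred)
    (t_a, p_a), (t_b, p_b) = counts.values()
    return abs(p_a / t_a - p_b / t_b)

preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))  # 0.5 (0.75 vs 0.25)
```

In practice such metrics are applied per context: the threshold for an acceptable disparity depends on the domain, the base rates, and the legal framework in which the system operates.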


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.