Tuesday, August 18, 2020

NIST Asks A.I. to Explain Itself | NIST


[Image: the word "explainability" atop a blue background.]

It’s a question that many of us encounter in childhood: “Why did you do that?” As artificial intelligence (AI) begins making more consequential decisions that affect our lives, we also want these machines to be capable of answering that simple yet profound question. After all, why else would we trust AI’s decisions?
This desire for satisfactory explanations has spurred scientists at the National Institute of Standards and Technology (NIST) to propose a set of principles by which we can judge how explainable AI’s decisions are. Their draft publication, Four Principles of Explainable Artificial Intelligence (Draft NISTIR 8312), is intended to stimulate a conversation about what we should expect of our decision-making devices.

Read More
