EXPLANATIONS IN CONTEXTUAL GRAPHS: A SOLUTION TO ACCOUNTABILITY IN KNOWLEDGE BASED SYSTEMS

Title: EXPLANATIONS IN CONTEXTUAL GRAPHS: A SOLUTION TO ACCOUNTABILITY IN KNOWLEDGE BASED SYSTEMS.
Name(s): Sherwell, Brian, Author
Gonzalez, Avelino, Committee Chair
University of Central Florida, Degree Grantor
Type of Resource: text
Date Issued: 2005
Publisher: University of Central Florida
Language(s): English
Abstract/Description: For an intelligent system to be a viable and widely used tool, a user must be able to understand how the system comes to a decision. Without understanding how the system arrived at an answer, a user will be less likely to trust its decision. One way to increase a user's understanding of how the system functions is to employ explanations that account for the output produced. There have been attempts to explain intelligent systems over the past three decades; however, each attempt has had shortcomings that separated the logic used to produce the output from the logic used to produce the explanation. By using the representational paradigm of Contextual Graphs, it is proposed that explanations can be produced that overcome these shortcomings. Two temporal forms of explanation are proposed: a pre-explanation and a post-explanation. The pre-explanation is intended to help the user understand the decision-making process; the post-explanation is intended to help the user understand how the system arrived at its final decision. Both explanations are intended to give the user a greater understanding of the logic used to compute the system's output, thereby enhancing the system's credibility and utility. A prototype system was constructed for use as a decision support tool in a National Science Foundation research program. The researcher spent the preceding year at the NSF collecting the knowledge implemented in the prototype system.
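
The abstract's distinction between the two explanation forms can be made concrete: the pre-explanation lays out the paths the system could take before a decision is made, while the post-explanation traces the single path actually followed. The Python sketch below is a hypothetical illustration under simplified assumptions (Action and ContextNode node types, a dict-based context), not the thesis's implementation.

    # Minimal sketch of pre- and post-explanations over a contextual graph.
    # All names here (Action, ContextNode, pre_explanation, post_explanation)
    # are illustrative assumptions, not the thesis's code.
    from dataclasses import dataclass, field
    from typing import Optional, Union


    @dataclass
    class Action:
        """A step the system performs; `next` is the following node, if any."""
        name: str
        next: Optional["Node"] = None


    @dataclass
    class ContextNode:
        """Branches on the current value of one contextual element."""
        element: str                                   # e.g. "proposal_complete"
        branches: dict = field(default_factory=dict)   # element value -> subgraph


    Node = Union[Action, ContextNode]


    def pre_explanation(node, depth=0):
        """List every path the system could take (shown before deciding)."""
        pad = "  " * depth
        lines = []
        while isinstance(node, Action):                # walk a chain of actions
            lines.append(f"{pad}do: {node.name}")
            node = node.next
        if isinstance(node, ContextNode):              # expand each context branch
            for value, branch in node.branches.items():
                lines.append(f"{pad}if {node.element} == {value!r}:")
                lines.extend(pre_explanation(branch, depth + 1))
        return lines


    def post_explanation(node, context):
        """Trace the one path actually taken for a given context."""
        lines = []
        while node is not None:
            if isinstance(node, Action):
                lines.append(f"performed: {node.name}")
                node = node.next
            else:                                      # record why we branched
                value = context[node.element]
                lines.append(f"because {node.element} was {value!r}")
                node = node.branches[value]
        return lines


    # Hypothetical fragment of an NSF proposal-handling graph.
    graph = ContextNode("proposal_complete", {
        True: Action("assign reviewers", Action("schedule review panel")),
        False: Action("return proposal for revision"),
    })

    print("\n".join(pre_explanation(graph)))
    print("\n".join(post_explanation(graph, {"proposal_complete": False})))

Because both explanation functions walk the same graph the system executes, the explanation logic and the decision logic cannot diverge, which is exactly the separation in earlier explanation facilities that the abstract identifies as a shortcoming.
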
Identifier: CFE0000713 (IID), ucf:46601 (fedora)
Note(s): 2005-08-01
M.S.Cp.E.
Engineering and Computer Science, Department of Electrical and Computer Engineering
Masters
This record was generated from author-submitted information.
Subject(s): Intelligent Systems
Expert System
Explanations
Context
Contextual Graphs
Persistent Link to This Record: http://purl.flvc.org/ucf/fd/CFE0000713
Restrictions on Access: public
Host Institution: UCF
