探索与争鸣 (Exploration and Free Views), 2024, Vol. 1, Issue (6): 80-87.

• Academic Contention •

Can Post-Hoc Explanations Eliminate Epistemic Opacity?

Jia Weihan & Dong Chunyu

  • Online: 2024-06-20  Published: 2024-07-21
  • About the authors: Jia Weihan, Ph.D. candidate, School of Philosophy, Beijing Normal University; Dong Chunyu, Professor, School of Philosophy and Center for Values and Culture, Beijing Normal University. (Beijing 100875)
  • Funding:
    Key Project of the National Social Science Fund of China, "Research on the Ontological Significance and Epistemological Value of Personalized Knowledge in Big Data" (18AZX008); General Project of the National Social Science Fund of China, "A Philosophical Study of Causal Inference Models in Artificial Intelligence" (22ZXB00884); General Project of the National Social Science Fund of China, "The Problem of Epistemic Opacity in Intelligent Machines" (23BZX103).

Abstract:

Artificial intelligence systems based on deep learning models are widely deployed across many domains, yet their opacity has given rise to problems of trust. Computational scientists are trying to develop tools for explaining black-box models in order to ease this tension. Examining these explainability techniques helps to distinguish causal explanation from post hoc explanation: causal explanation requires complete knowledge of a model's mechanism, whereas the explanations that explainability techniques give of black-box models are not always accounts of the models' internal details; they are, rather, a remedy for cases in which causal explanation cannot be obtained, and they retain heuristic epistemological value. The approximation methods used in post hoc explanation are an important part of the philosophy of scientific models, and constructive empiricism likewise supports the epistemic significance, or value, of post hoc explanations with respect to model mechanisms.

Key words:
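The abstract characterizes post hoc explanation as approximating a black-box model rather than exposing its internal mechanism. As an illustration only, the following is a minimal sketch of that idea in the style of LIME-like local surrogates: an interpretable linear model is fitted to a black box's predictions in the neighbourhood of a single input, and its coefficients are read as feature attributions. The dataset, classifier, sampling scheme, and kernel width are hypothetical choices made for this sketch, not anything specified in the article.

```python
# Post hoc, model-agnostic explanation sketch: approximate a black-box
# classifier around one input with a distance-weighted linear surrogate
# and read feature attributions off the surrogate's coefficients.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

X, y = load_breast_cancer(return_X_y=True)
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

def explain_locally(model, x, n_samples=2000, kernel_width=None, seed=0):
    """Fit a local linear surrogate to the black box's output around x."""
    rng = np.random.default_rng(seed)
    scale = X.std(axis=0)                        # per-feature perturbation scale
    Z = x + rng.normal(0.0, scale, size=(n_samples, x.size))
    p = model.predict_proba(Z)[:, 1]             # black-box outputs to be mimicked
    d = np.linalg.norm((Z - x) / scale, axis=1)  # distance in standardized units
    width = kernel_width or 0.75 * np.sqrt(x.size)
    w = np.exp(-(d ** 2) / (width ** 2))         # nearby samples count more
    surrogate = Ridge(alpha=1.0).fit(Z, p, sample_weight=w)
    return surrogate.coef_                       # local feature attributions

attributions = explain_locally(black_box, X[0])
for i in np.argsort(np.abs(attributions))[::-1][:5]:
    print(f"feature {i}: weight {attributions[i]:+.4f}")
```

The surrogate says nothing about the forest's internal decision paths; it only summarizes how the black box behaves near this one input, which is precisely the sense in which such explanations are post hoc approximations rather than causal accounts of the model's mechanism.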