Xue Zhirong is a designer, engineer, and author of several books; founder of the Design Open Source Community and co-founder of MiX Copilot; committed to making the world a better place with design and technology. This knowledge base is updated with AI, HCI, and related content, including news, papers, presentations, and talks.

The researchers found that, although model interpretability is generally believed to improve users' trust in AI systems, neither global nor local explanations produced a stable, significant increase in trust in the experiments. By contrast, feedback (i.e., showing the outcomes of the AI's predictions) had a markedly stronger effect on users' trust in the AI. However, this increased trust did not translate into an equivalent improvement in performance.

Further analysis suggests that feedback can induce users to over-trust (accept the AI's suggestions when it is wrong) or to distrust (ignore the AI's suggestions when it is correct), which may cancel out the benefits of increased trust and produce a "trust-performance paradox". The researchers call for future work on designing explanation strategies that foster appropriate trust and thereby improve the efficiency of human-AI collaboration.
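To make the over-trust/distrust distinction concrete, here is a minimal illustrative sketch; the labels, function name, and thresholds are my own assumptions, not the study's coding scheme. It classifies a single decision by whether the user followed the AI and whether the AI happened to be correct.

```python
def classify_reliance(user_followed_ai: bool, ai_was_correct: bool) -> str:
    """Classify one decision by reliance pattern (illustrative labels only)."""
    if user_followed_ai and ai_was_correct:
        return "appropriate trust"      # accepted a correct suggestion
    if user_followed_ai and not ai_was_correct:
        return "over-trust"             # accepted an incorrect suggestion
    if not user_followed_ai and ai_was_correct:
        return "distrust"               # ignored a correct suggestion
    return "appropriate skepticism"     # ignored an incorrect suggestion


# Example: the AI was wrong but the user still adopted its suggestion
print(classify_reliance(user_followed_ai=True, ai_was_correct=False))  # over-trust
```

Under this framing, the paradox is that feedback shifts more decisions into the "followed the AI" column without improving the user's ability to tell the correct suggestions from the incorrect ones.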

To assess trust more accurately, the researchers used a behavioral measure of trust, Weight of Advice (WoA), which captures the difference between the user's predictions and the AI's recommendations and is independent of the model's accuracy. By comparing WoA across conditions, the researchers could analyze the relationship between trust and performance.
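The section does not spell out the formula; in the advice-taking literature, Weight of Advice is usually defined as how far the user moves from their initial prediction toward the advisor's (here, the AI's) recommendation, normalized by the size of that gap. A minimal Python sketch under that assumption (the function name and example values are illustrative):

```python
def weight_of_advice(initial: float, final: float, advice: float) -> float | None:
    """Weight of Advice (WoA): how far the user shifted from their initial
    prediction toward the AI's recommendation, regardless of whether the
    recommendation was actually correct.

    WoA = (final - initial) / (advice - initial)
    0 -> user ignored the AI entirely
    1 -> user fully adopted the AI's recommendation
    """
    if advice == initial:
        return None  # undefined when the AI agrees with the user's initial estimate
    return (final - initial) / (advice - initial)


# Hypothetical trial: the user first predicts 50, the AI suggests 70,
# and the user's final answer is 65 -> WoA = 0.75
print(weight_of_advice(initial=50, final=65, advice=70))
```

Because WoA only measures movement toward the AI's advice, a high average WoA can coexist with poor task performance, which is exactly the trust-performance gap the study reports.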