DOI: 10.1145/3340531.3411919
Research article

Explainable Recommender Systems via Resolving Learning Representations

Published: 19 October 2020

Abstract

Recommender systems play a fundamental role in web applications, filtering massive amounts of information and matching user interests. While many efforts have been devoted to developing more effective models in various scenarios, exploration of the explainability of recommender systems lags behind. Explanations can help improve user experience and reveal system defects. In this paper, after formally introducing the elements related to model explainability, we propose a novel explainable recommendation model that improves the transparency of the representation learning process. Specifically, to overcome the representation entangling problem in traditional models, we revise traditional graph convolution to discriminate information from different layers. In addition, each representation vector is factorized into several segments, where each segment relates to one semantic aspect in the data. Unlike previous work, our model conducts factor discovery and representation learning simultaneously, and it can incorporate extra attribute information and knowledge. In this way, the proposed model learns interpretable and meaningful representations for users and items. Unlike traditional methods that must trade off explainability against effectiveness, the performance of our proposed explainable model is not negatively affected by considering explainability. Finally, comprehensive experiments validate both the performance of our model and the faithfulness of its explanations.
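The factorized-representation idea in the abstract can be illustrated with a small sketch (this is not the authors' code; the segment count, dimensions, and random embeddings are hypothetical stand-ins for learned representations). If each embedding is split into K segments, one per semantic factor, the usual dot-product score decomposes into per-factor contributions, which is what makes the prediction attributable to individual aspects:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: embeddings are split into K semantic segments of
# dimension d each, so a vector of size K*d has one slice per factor.
K, d = 4, 8                        # number of factors, segment width
user = rng.normal(size=K * d)      # stand-in for a learned user representation
item = rng.normal(size=K * d)      # stand-in for a learned item representation

# Per-segment affinity: the dot product restricted to each factor's slice.
per_factor = (user.reshape(K, d) * item.reshape(K, d)).sum(axis=1)  # shape (K,)

# The overall score is the sum of factor contributions, so each entry of
# per_factor can be read as that semantic aspect's share of the prediction.
score = per_factor.sum()
assert np.isclose(score, user @ item)  # decomposition is exact

top = int(np.argmax(np.abs(per_factor)))
print(f"score={score:.3f}, most influential factor: {top}")
```

Because the decomposition is exact (the segment scores sum to the full dot product), ranking factors by their contribution gives a faithful, rather than post-hoc, explanation signal under this scoring scheme.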

Supplementary Material

MP4 File (3340531.3411919.mp4)



Published In

CIKM '20: Proceedings of the 29th ACM International Conference on Information & Knowledge Management
October 2020, 3619 pages
ISBN: 9781450368599
DOI: 10.1145/3340531

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

  1. explainable artificial intelligence
  2. recommender systems

Conference

CIKM '20

Acceptance Rates

Overall Acceptance Rate: 1,861 of 8,427 submissions, 22%
Article Metrics

  • Downloads (last 12 months): 148
  • Downloads (last 6 weeks): 8

Reflects downloads up to 16 Oct 2024


Cited By

  • (2024) Enhancing Interpretability and Effectiveness in Recommendation with Numerical Features via Learning to Contrast the Counterfactual Samples. Companion Proceedings of the ACM Web Conference 2024, 453-460. DOI: 10.1145/3589335.3648345
  • (2024) Recent Developments in Recommender Systems: A Survey. IEEE Computational Intelligence Magazine, 19(2), 78-95. DOI: 10.1109/MCI.2024.3363984
  • (2023) Using Neural and Graph Neural Recommender Systems to Overcome Choice Overload: Evidence From a Music Education Platform. ACM Transactions on Information Systems, 42(4), 1-26. DOI: 10.1145/3637873
  • (2023) Data-Efficient Graph Learning Meets Ethical Challenges. Proceedings of the Sixteenth ACM International Conference on Web Search and Data Mining, 1218-1219. DOI: 10.1145/3539597.3572988
  • (2023) Interpretable patent recommendation with knowledge graph and deep learning. Scientific Reports, 13(1). DOI: 10.1038/s41598-023-28766-y
  • (2022) A Tag-Based Post-Hoc Framework for Explainable Conversational Recommendation. Proceedings of the 2022 ACM SIGIR International Conference on Theory of Information Retrieval, 232-242. DOI: 10.1145/3539813.3545120
  • (2022) Techno-economic assessment of building energy efficiency systems using behavioral change: A case study of an edge-based micro-moments solution. Journal of Cleaner Production, 331, 129786. DOI: 10.1016/j.jclepro.2021.129786
  • (2022) Causal Disentanglement with Network Information for Debiased Recommendations. Similarity Search and Applications, 265-273. DOI: 10.1007/978-3-031-17849-8_21
  • (2022) Identifying Suitable Tasks for Inductive Transfer Through the Analysis of Feature Attributions. Advances in Information Retrieval, 137-143. DOI: 10.1007/978-3-030-99739-7_16
  • Explaining Recommendation Fairness from a User/Item Perspective. ACM Transactions on Information Systems. DOI: 10.1145/3698877
