Generate What You Prefer: Reshaping Sequential Recommendation via Guided Diffusion

Part of Advances in Neural Information Processing Systems 36 (NeurIPS 2023) Main Conference Track

Authors

Zhengyi Yang, Jiancan Wu, Zhicai Wang, Xiang Wang, Yancheng Yuan, Xiangnan He

Abstract

Sequential recommendation aims to recommend the next item that matches a user's interest, based on the sequence of items he/she interacted with before. Scrutinizing previous studies, we can summarize a common learning-to-classify paradigm: given a positive item, a recommender model performs negative sampling to add negative items and learns to classify whether the user prefers them or not, based on his/her historical interaction sequence. Although effective, we reveal two inherent limitations: (1) it may differ from human behavior in that a user could imagine an oracle item in mind and select potential items matching the oracle; and (2) the classification is limited to the candidate pool, with noisy or easy supervision from negative samples, which dilutes the preference signals towards the oracle item. Yet, generating the oracle item from the historical interaction sequence is mostly unexplored. To bridge the gap, we reshape sequential recommendation as a learning-to-generate paradigm, which is achieved via a guided diffusion model, termed DreamRec. Specifically, for a sequence of historical items, it applies a Transformer encoder to create guidance representations. Noising target items explores the underlying distribution of the item space; then, with the guidance of historical interactions, the denoising process generates an oracle item to recover the positive item, so as to cast off negative sampling and depict the true preference of the user directly. We evaluate the effectiveness of DreamRec through extensive experiments and comparisons with existing methods. Codes and data are open-sourced at https://github.com/YangZhengyi98/DreamRec.
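
To make the learning-to-generate paradigm concrete, below is a minimal PyTorch sketch, not the authors' implementation: the class name, network sizes, noise schedule, and training objective here are illustrative assumptions. It follows the structure the abstract describes: a Transformer encodes the interaction history into a guidance vector, the target-item embedding is noised via a diffusion forward process, and a denoiser is trained to recover the clean embedding conditioned on the guidance, with no negative sampling.

```python
import torch
import torch.nn as nn


class DreamRecSketch(nn.Module):
    """Illustrative sketch of guided diffusion for next-item generation.
    A Transformer turns the history into guidance; an MLP denoiser recovers
    the noised target-item embedding conditioned on that guidance."""

    def __init__(self, n_items, dim=64, n_steps=1000):
        super().__init__()
        self.item_emb = nn.Embedding(n_items, dim)
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        # Denoiser sees [noisy embedding, guidance, normalized step] and predicts the clean embedding.
        self.denoiser = nn.Sequential(nn.Linear(dim * 2 + 1, dim), nn.GELU(), nn.Linear(dim, dim))
        # Simple linear beta schedule; alpha_bar[t] is the cumulative signal fraction at step t.
        self.register_buffer("alpha_bar", torch.cumprod(1.0 - torch.linspace(1e-4, 0.02, n_steps), dim=0))
        self.n_steps = n_steps

    def guidance(self, history):                       # history: (B, L) item ids
        h = self.encoder(self.item_emb(history))       # (B, L, dim); padding mask omitted for brevity
        return h[:, -1]                                 # last position summarizes the sequence

    def loss(self, history, target):                   # target: (B,) positive item ids
        x0 = self.item_emb(target)                      # clean target-item embedding
        t = torch.randint(0, self.n_steps, (x0.size(0),), device=x0.device)
        a = self.alpha_bar[t].unsqueeze(-1)             # (B, 1)
        noise = torch.randn_like(x0)
        xt = a.sqrt() * x0 + (1 - a).sqrt() * noise     # forward (noising) process
        c = self.guidance(history)
        inp = torch.cat([xt, c, t.float().unsqueeze(-1) / self.n_steps], dim=-1)
        x0_hat = self.denoiser(inp)                     # conditional denoising
        return ((x0_hat - x0) ** 2).mean()              # reconstruction loss, no negative sampling
```

At inference time, the paradigm suggests starting from Gaussian noise, denoising it step by step under the guidance of the user's history to obtain an oracle-item embedding, and recommending the items whose embeddings lie closest to it; see the open-sourced repository above for the authors' exact sampling and guidance procedure.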