[DM Paper Reading Notes] Recommender Systems · Attention Mechanisms
Paper Title
Real-time Attention Based Look-alike Model for Recommender System
Basic algorithm and main steps
Basic ideas
RALM is a similarity-based look-alike model consisting of user representation learning and look-alike learning. Novel points: the attention-merge layer, local and global attention, and online asynchronous seeds clustering.
1. Offline Training
1. User Representation Learning
Treat it as a multi-class classification problem: choose an interest item from millions of candidates.
(1) Calculate the probability of picking the $i$-th item as a negative example:
$ p(x_i) = \frac{\log(k+2)-\log(k+1)}{\log(D+1)} $
$ D $: the max rank over all items (ranked by frequency of appearance).
$ k $: the rank of the $i$-th item.
(2) Negative sampling: sample at a positive/negative ratio of 1/10.
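A minimal numpy sketch of this negative-sampling distribution, assuming items are already ranked by frequency; the item count and variable names are illustrative:

```python
import numpy as np

def neg_sampling_prob(k, D):
    """p(x_i) for the item ranked k (0-based), out of D items ranked by frequency."""
    return (np.log(k + 2) - np.log(k + 1)) / np.log(D + 1)

D = 1_000_000                        # max rank over all candidate items
probs = neg_sampling_prob(np.arange(D), D)
# the sum telescopes to log(D+1)/log(D+1) = 1, so probs is already a valid
# distribution; renormalize only to absorb floating-point error
probs /= probs.sum()

# draw 10 negatives per positive example (positive/negative = 1/10)
negatives = np.random.choice(D, size=10, replace=False, p=probs)
```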
(3) Embedding layer
$ P(c=i|U,X_i) = \frac{e^{x_i u}}{\sum \limits_{j \in X}e^{x_j u}} $
the cross-entropy loss: $ L = -\sum \limits_{i \in X} y_i \log P(c=i|U,X_i) $
$ u $: a high-dimensional embedding of the user
$ x_j $: the embedding of item $ j $
$ y_i \in \{0, 1\} $: the label
At convergence, the output is the representation of the user's interests.
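A hedged numpy sketch of this sampled-softmax objective; the embedding size and candidate layout are assumptions, only the two formulas come from the paper:

```python
import numpy as np

def softmax(z):
    z = z - z.max()                  # subtract max for numerical stability
    return np.exp(z) / np.exp(z).sum()

dim = 128
u = np.random.randn(dim)             # user embedding u
X = np.random.randn(11, dim)         # 1 positive + 10 sampled negative items
y = np.zeros(11); y[0] = 1.0         # label: the first candidate was clicked

p = softmax(X @ u)                   # P(c=i | U, X_i) over the sampled set
loss = -np.sum(y * np.log(p))        # cross-entropy loss L
```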
(4) Attention merge layer
Learn user-related weights for multiple fields.
\(n\) fields are each embedded to the same length \(m\) as a vector \(h \in R^m\); stacking them gives a matrix \(H \in R^{n \times m}\). Next, compute the weights:
$ u = \tanh(W_1 H) $
$ a_i = \frac{e^{W_2 u_i^T}}{\sum_{j=1}^{n} e^{W_2 u_j^T}} $
\(W_1 \in R^{k \times n}\) and \(W_2 \in R^k\): weight matrices, \(k\): the size of the attention unit,
\(u \in R^n\): the activation unit for the fields, \(a \in R^n\): the weights of the fields.
Merged vector $ M \in R^m $: $ M = aH $
Then take \(M\) as the input of the MLP layer to get the universal user embedding.
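The stated shapes for $W_1$ and $a$ do not quite line up, so this sketch uses the common per-field attention-pooling reading, applying $W_1$ (assumed shape $k \times m$ here) to each field embedding; a rough illustration, not the paper's exact implementation:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    return np.exp(z) / np.exp(z).sum()

n, m, k = 6, 128, 64        # fields, embedding length, attention unit size
H = np.random.randn(n, m)   # one embedding h_i per feature field
W1 = np.random.randn(k, m)  # assumed shape so the products are well-defined
W2 = np.random.randn(k)

u = np.tanh(H @ W1.T)       # (n, k): activation unit per field
a = softmax(u @ W2)         # (n,): user-specific weight of each field
M = a @ H                   # (m,): merged vector fed into the MLP
```

Unlike plain concatenation, the weights $a$ are recomputed per user, so a field that dominates for one user can be downweighted for another.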
2. Look-alike Learning
(1) Transforming matrix
Project the $n$ user embeddings from $ n \times m $ to $ n \times h $ with a shared transforming matrix.
(2) Local attention
Activate the seeds' local interests with respect to the target user, mining personalized information.
$ E_{local_s} = E_s \, softmax(\tanh(E_s^T W_l E_u)) $
\(W_l \in R^{h \times h}\): the local attention matrix,
\(E_s\): the seed users' embeddings, $ E_u $: the target user's embedding.
Note: first cluster the seed users into \(k\) clusters with the K-means algorithm, and for each cluster take the mean of its seed vectors; attention is then computed over these \(k\) centroids instead of all seeds.
(3) Global attention
$ E_{global_s} = E_s \, softmax(E_s^T \tanh(W_g E_s)) $
\(W_g\): the global attention matrix. Global attention is self-attention among the seeds, extracting the interests shared by the whole seed group.
(4) Calculate the similarity between seeds and target user
$ score_{u,s} = \alpha \cdot cosine(E_u,E_{global_s}) + \beta \cdot cosine(E_u, E_{local_s}) $
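A rough numpy sketch of steps (2)-(4) over the $k$ cluster centroids; the mean-pooling of the global self-attention scores and the $\alpha, \beta$ values are my assumptions, the rest follows the formulas above:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    return np.exp(z) / np.exp(z).sum()

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

h, k = 64, 20                  # transformed embedding size, seed clusters
E_s = np.random.randn(k, h)    # rows: K-means centroids of seed embeddings
E_u = np.random.randn(h)       # target user embedding
W_l = np.random.randn(h, h)    # local attention matrix
W_g = np.random.randn(h, h)    # global attention matrix

# (2) local attention: weight each seed cluster by relevance to this user
E_local = softmax(np.tanh(E_s @ W_l @ E_u)) @ E_s          # (h,)

# (3) global attention: self-attention among seeds; the (k, k) score
# matrix is mean-pooled here -- an assumption, the note leaves this open
S = np.tanh(E_s @ W_g) @ E_s.T                             # (k, k)
E_global = softmax(S.mean(axis=0)) @ E_s                   # (h,)

# (4) similarity score; alpha/beta are placeholder mixture weights
alpha, beta = 0.3, 0.7
score = alpha * cosine(E_u, E_global) + beta * cosine(E_u, E_local)
```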
(5) Iterative training
2. Online Asynchronous Processing
Updates the seeds embedding database in real time; it includes a user feedback monitor and seeds clustering.
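A hedged sketch of the clustering step, using scikit-learn's KMeans as a stand-in for whatever the production system runs; the sizes and refresh cadence are illustrative:

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_seeds(seed_embeddings, k=20):
    """Reduce an item's (possibly millions of) seed embeddings to k
    centroids, so online serving only attends over k vectors."""
    km = KMeans(n_clusters=k, n_init=10).fit(seed_embeddings)
    return km.cluster_centers_        # (k, h), written to the embedding DB

# e.g. refresh one item's centroids whenever enough new clicks arrive
seeds = np.random.randn(5000, 64)     # placeholder seed user embeddings
E_s = cluster_seeds(seeds, k=20)
```

Running K-means asynchronously keeps the expensive clustering off the request path; serving only reads the latest centroids.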
3. Online Serving
At serving time, compute the same look-alike score between the target user and each candidate item's precomputed seed-cluster embeddings:
$ score_{u,s} = \alpha \cdot cosine(E_u,E_{global_s}) + \beta \cdot cosine(E_u, E_{local_s}) $
Motivation
- The "Matthew effect" becomes increasingly evident in recent recommendation systems. Many competitive long-tail contents are
difficult to achieve timely exposure because of lacking behavior
features . - Traditional look-alike models which widely used in on-line
advertising are not suitable for recommender systems because of
the strict requirement of both real-time and effectiveness.
Contribution
- Improve the effectiveness of user representation learning: use attention to capture the various fields of a user's interests.
- Improve the robustness and adaptivity of seeds representation learning: use local and global attention.
- Realize a real-time, high-performance look-alike model.
My own idea
Relations to what I have read
- Method of concatenating feature fields. In other papers about CTR that I have read, different feature fields are concatenated directly. This causes overfitting on strongly-relevant fields (such as interest tags) and underfitting on weakly-relevant fields (such as shopping interests), so the recommendations end up determined by a few strongly-relevant fields. Such models cannot learn comprehensively from multi-field features, and the recommended results lack diversity. This paper instead uses the attention-merge layer to learn effective relations among the different fields of user features.
- Besides, it uses high-order continuous features instead of categorical features. In my opinion, if we use low-order categorical features to express a user group, we can only construct the features by statistical methods, which loses most of the group's information. The high-order continuous features obtained from representation learning, however, contain the intersections of users' various low-order features and can express user information more comprehensively. Moreover, high-order features generalize, avoiding an expression of memorization trapped in historical data.
Shortcomings and potential changes I assume
- This paper seems to use only a few features to learn the representation, which may limit its effectiveness to some extent.