This article covers Section 2.1 of Koren's 2008 paper [1] (the remaining sections will be added later).

Koren's paper uses the Netflix dataset, which is too large: a run on an ordinary PC takes a very long time. Since the main purpose of this article is to introduce and summarize the method, the MovieLens dataset is used instead.

Variables used:

Baseline estimates
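The original post showed this as an image; reconstructed here from Section 2.1 of Koren [1] and the code below, the baseline estimate for a rating by user u on item i is

$$ b_{ui} = \mu + b_u + b_i $$

where μ is the overall average rating on the training set, and b_u and b_i are the observed deviations of user u and item i from that average.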


Objective function:
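Again reconstructed from [1] and the code below (λ₁ corresponds to beta1 = 0.1 in the code), the biases are learned by the regularized least-squares problem

$$ \min_{b_*} \sum_{(u,i) \in K} \left( r_{ui} - \mu - b_u - b_i \right)^2 + \lambda_1 \left( \sum_u b_u^2 + \sum_i b_i^2 \right) $$

where K is the set of (u, i) pairs whose ratings r_ui are known in the training set.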

Gradient updates (stochastic gradient descent is used to drive the objective above down to a minimum within the set number of iterations):
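With prediction error e_ui = r_ui − (μ + b_u + b_i), each training rating triggers the updates implemented in the code below (learning rate α = alpha1):

$$ b_u \leftarrow b_u + \alpha \left( e_{ui} - \lambda_1 b_u \right), \qquad b_i \leftarrow b_i + \alpha \left( e_{ui} - \lambda_1 b_i \right) $$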

Evaluation metric:
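As in the Netflix Prize, prediction quality is measured by the root mean squared error over the test set T:

$$ \mathrm{RMSE} = \sqrt{ \frac{1}{|T|} \sum_{(u,i) \in T} \left( r_{ui} - \hat{r}_{ui} \right)^2 } $$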

Parameter settings:

Number of iterations maxStep = 100; the learning rate alpha1 = 0.002 is decayed by a factor of slowRate = 0.99 after each step; the regularization constant is beta1 = 0.1. The remaining parameter choices follow reference [2].
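Concretely, the learning rate at step t follows the geometric decay schedule used in the code:

$$ \alpha_t = 0.002 \times 0.99^{\,t} $$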

The full code implementation:

'''
Created on Dec 11, 2012

@Author: Dennis Wu
@E-mail: hansel.zh@gmail.com
@Homepage: http://blog.csdn.net/wuzh670

Data set download from: http://www.grouplens.org/system/files/ml-100k.zip
'''
from math import sqrt

def load_data():
    # read the MovieLens 100k train/test split into nested dicts
    # train[userId][itemId] = rating
    train = {}
    test = {}
    filename_train = 'data/ua.base'
    filename_test = 'data/ua.test'
    for line in open(filename_train):
        (userId, itemId, rating, timestamp) = line.strip().split('\t')
        train.setdefault(userId, {})
        train[userId][itemId] = float(rating)
    for line in open(filename_test):
        (userId, itemId, rating, timestamp) = line.strip().split('\t')
        test.setdefault(userId, {})
        test[userId][itemId] = float(rating)
    return train, test

def calMean(train):
    # overall average rating mu over all training ratings
    sta = 0
    num = 0
    for u in train.keys():
        for i in train[u].keys():
            sta += train[u][i]
            num += 1
    mean = sta * 1.0 / num
    return mean

def initialBias(train, userNum, movieNum):
    # initialize bu, bi with shrunk averages as in [2]
    mean = calMean(train)
    bu = {}
    bi = {}
    biNum = {}
    buNum = {}

    # item bias: sum of each item's deviations from mu
    u = 1
    while u < (userNum + 1):
        su = str(u)
        for i in train[su].keys():
            bi.setdefault(i, 0)
            biNum.setdefault(i, 0)
            bi[i] += (train[su][i] - mean)
            biNum[i] += 1
        u += 1
    # shrink toward zero (shrinkage constant 25)
    i = 1
    while i < (movieNum + 1):
        si = str(i)
        biNum.setdefault(si, 0)
        if biNum[si] >= 1:
            bi[si] = bi[si] * 1.0 / (biNum[si] + 25)
        else:
            bi[si] = 0.0
        i += 1

    # user bias: sum of deviations after removing mu and bi
    u = 1
    while u < (userNum + 1):
        su = str(u)
        for i in train[su].keys():
            bu.setdefault(su, 0)
            buNum.setdefault(su, 0)
            bu[su] += (train[su][i] - mean - bi[i])
            buNum[su] += 1
        u += 1
    # shrink toward zero (shrinkage constant 10)
    u = 1
    while u < (userNum + 1):
        su = str(u)
        buNum.setdefault(su, 0)
        if buNum[su] >= 1:
            bu[su] = bu[su] * 1.0 / (buNum[su] + 10)
        else:
            bu[su] = 0.0
        u += 1
    return bu, bi, mean

def sgd(train, test, userNum, movieNum):
    bu, bi, mean = initialBias(train, userNum, movieNum)
    alpha1 = 0.002      # learning rate
    beta1 = 0.1         # regularization constant
    slowRate = 0.99     # learning-rate decay per step
    step = 0
    preRmse = 1000000000.0
    nowRmse = 0.0
    while step < 100:
        rmse = 0.0
        n = 0
        for u in train.keys():
            for i in train[u].keys():
                pui = 1.0 * (mean + bu[u] + bi[i])
                eui = train[u][i] - pui
                rmse += pow(eui, 2)
                n += 1
                # gradient updates for the user and item biases
                bu[u] += alpha1 * (eui - beta1 * bu[u])
                bi[i] += alpha1 * (eui - beta1 * bi[i])
        nowRmse = sqrt(rmse * 1.0 / n)
        print 'step: %d Rmse: %s' % ((step + 1), nowRmse)
        if (nowRmse < preRmse):
            preRmse = nowRmse
        alpha1 *= slowRate
        step += 1
    return bu, bi, mean

def calRmse(test, bu, bi, mean):
    # RMSE of the baseline predictor on the test set
    rmse = 0.0
    n = 0
    for u in test.keys():
        for i in test[u].keys():
            pui = 1.0 * (mean + bu[u] + bi[i])
            eui = pui - test[u][i]
            rmse += pow(eui, 2)
            n += 1
    rmse = sqrt(rmse * 1.0 / n)
    return rmse

if __name__ == "__main__":
    # load data
    train, test = load_data()
    # baseline + stochastic gradient descent
    bu, bi, mean = sgd(train, test, 943, 1682)
    # compute the rmse of the test set
    print 'the Rmse of the test set is: %s' % calRmse(test, bu, bi, mean)
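A usage note: the script expects the MovieLens 100k files ua.base and ua.test to sit in a local data/ directory (both are in the archive linked in the header), and it is written for Python 2; under Python 3 the two print statements would need to become print() calls.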

Experimental results

REFERENCES

[1] Y. Koren. Factorization Meets the Neighborhood: a Multifaceted Collaborative Filtering Model. Proc. 14th ACM SIGKDD Int. Conf. on Knowledge Discovery and Data Mining (KDD'08), pp. 426-434, 2008.

[2] Y. Koren. The BellKor Solution to the Netflix Grand Prize. 2009.
