MachineLearningOnCoursera
Week Six
F Score
The F score is the harmonic mean of precision \(P\) and recall \(R\):
\[\begin{aligned}
F_{1} &= \dfrac{2}{\dfrac{1}{P}+\dfrac{1}{R}}\\
&= 2\,\dfrac{PR}{P+R}
\end{aligned}\]
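As a quick numerical check in Octave (the precision and recall values here are made-up illustrations):

```octave
% F1 score: harmonic mean of precision and recall.
P = 0.8;                      % hypothetical precision
R = 0.4;                      % hypothetical recall
F1 = 2 * (P * R) / (P + R)    % prints F1 = 0.5333
```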
Week Seven
Support Vector Machine
Cost Function
\[\begin{aligned}
&\min_{\theta}\Big[-\dfrac{1}{m}\sum_{y_{i}\in Y,\, x_{i} \in X}\big(y_{i} \log h(\theta^{T}x_{i})+(1-y_{i})\log (1-h(\theta^{T}x_{i}))\big)+\dfrac{\lambda}{2m} \sum_{\theta_{i} \in \theta}{\theta_{i}^{2}}\Big]\\
\Rightarrow\ &\min_{\theta}\Big[-\sum_{y_{i} \in Y,\,x_{i} \in X}\big(y_{i} \log{h(\theta^{T}x_{i})}+(1-y_{i})\log(1-h(\theta^{T}x_{i}))\big)+\dfrac{\lambda}{2}\sum_{\theta_{i} \in \theta }{\theta^2_{i}}\Big]\\
\Rightarrow\ &\min_{\theta}\Big[-C\sum_{y_{i} \in Y,\,x_{i} \in X}\big(y_{i} \log{h(\theta^{T}x_{i})}+(1-y_{i})\log(1-h(\theta^{T}x_{i}))\big)+\dfrac{1}{2}\sum_{\theta_{i} \in \theta }{\theta^2_{i}}\Big]
\end{aligned}\]
Scaling the objective does not change its minimizer: multiplying by \(m\) removes the \(\dfrac{1}{m}\), and multiplying by \(\dfrac{1}{\lambda}\) yields the SVM form with \(C = \dfrac{1}{\lambda}\) (the SVM then replaces the log terms with the hinge-like costs \(cost_{1}\) and \(cost_{0}\)).
- Large C:
  - lower bias, higher variance
- Small C:
  - higher bias, lower variance
- Large \(\sigma^2\): features \(f_{i}\) vary more smoothly (see the kernel sketch after this list).
  - higher bias, lower variance
- Small \(\sigma^2\): features \(f_{i}\) vary more sharply.
  - lower bias, higher variance
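A minimal Octave sketch of the Gaussian kernel feature these bullets refer to (the function name is mine, and the landmark \(l\) and \(\sigma^2\) passed to it are illustrative assumptions):

```octave
% Similarity between example x and landmark l; this is the feature f_i.
% Larger sigma2 => f changes more slowly with distance (higher bias).
function f = gaussianKernel(x, l, sigma2)
  f = exp(-sum((x - l) .^ 2) / (2 * sigma2));
end
```

For example, `gaussianKernel([1; 2], [1; 2], 1)` returns 1, and the value decays toward 0 as x moves away from l; raising sigma2 slows that decay.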
When C is very large, the optimum drives the first term to (approximately) zero, leaving the equivalent large-margin problem:
\[\begin{aligned}
\min_{\theta}\quad & \dfrac{1}{2} \sum_{\theta_{i} \in \theta}{\theta_{i}^2}\\
\text{s.t.}\quad &\theta^{T}x_{i} \geq 1, \quad\text{if } y_{i} = 1\\
&\theta^{T}x_{i} \leq -1, \quad\text{if } y_{i} = 0
\end{aligned}\]
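Why this maximizes the margin: the distance from \(x_{i}\) to the decision boundary \(\theta^{T}x = 0\) is \(|\theta^{T}x_{i}|/\|\theta\|\), so the constraints force every example to lie at least \(1/\|\theta\|\) away from it, and minimizing \(\frac{1}{2}\sum\theta_{i}^{2} = \frac{1}{2}\|\theta\|^{2}\) maximizes that margin:

\[\text{margin} \geq \dfrac{1}{\|\theta\|}, \qquad \min_{\theta}\ \dfrac{1}{2}\|\theta\|^{2} \iff \max_{\theta}\ \dfrac{1}{\|\theta\|}\]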
- This large-C formulation has lower bias and higher variance, as noted above.
PS
If the number of features n is large relative to the number of examples m, use logistic regression or an SVM without a kernel.
If n is small and m is intermediate, use an SVM with a Gaussian kernel.
If n is small and m is large, add more features, then use logistic regression or an SVM without a kernel.
Week Eight
K-means
Cost Function
K-means tries to minimize the distortion
\[\min_{c,\,\mu}{\dfrac{1}{m} \sum_{i=1}^{m} ||x^{(i)} - \mu_{c^{(i)}}||^2}\]
In the cluster-assignment step, the cost is minimized over the assignments \(c^{(i)}\) with the centroids fixed: every \(x\) in the training set is assigned to its nearest centroid. In the move-centroid step, the cost is minimized over the centroids \(\mu\) with the assignments fixed: each centroid is moved to the mean of the points assigned to it.
Initialize
Initialize the centroids randomly: pick K examples from the training set at random and set the centroids to those examples.
K-means can fall into a local minimum, so repeat the random initialization (and rerun the algorithm) until the cost (distortion) is acceptable for your purposes.
K-means always converges, and the cost never increases during training. Using more centroids should decrease the cost; if it does not, the run has fallen into a local minimum, so reinitialize the centroids until the cost actually drops.
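A minimal Octave sketch of a single K-means run with random initialization (`runKMeans` is a hypothetical helper name, not course-provided code):

```octave
function [centroids, c, cost] = runKMeans(X, K, iters)
  % X: m x n training set, K: number of clusters.
  m = size(X, 1);
  perm = randperm(m);                        % random init: pick K examples
  centroids = X(perm(1:K), :);
  for it = 1:iters
    % Cluster-assignment step: nearest centroid for each example.
    dists = zeros(m, K);
    for k = 1:K
      dists(:, k) = sum((X - centroids(k, :)) .^ 2, 2);
    end
    [minDist, c] = min(dists, [], 2);
    % Move-centroid step: mean of the points assigned to each cluster.
    for k = 1:K
      if any(c == k)                         % guard against empty clusters
        centroids(k, :) = mean(X(c == k, :), 1);
      end
    end
    cost = mean(minDist);                    % distortion; never increases
  end
end
```

Calling this several times and keeping the run with the lowest cost is exactly the restart strategy described above.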
PCA (Principal Component Analysis)
Reconstruct x from z so that the inequality below holds, i.e. at least 99% of the variance is retained:
\[1-\dfrac{\dfrac{1}{m} \sum_{i=1}^{m}||x^{(i)}-x^{(i)}_{approximation}||^2}{\dfrac{1}{m} \sum_{i=1}^{m} ||x^{(i)}||^2} \geq 0.99\]
PS:
The inequality is equivalent to the following check computed from the SVD:
\[\begin{aligned}
[U, S, V] &= svd(\Sigma)\\
U_{reduce} &= U(:, 1:k)\\
z &= U_{reduce}' * x\\
x_{approximation} &= U_{reduce} * z\\\\
S &= \left( \begin{array}{cccc}
s_{11}&0&\cdots&0\\
0&s_{22}&\cdots&0\\
\vdots&\vdots&\ddots&\vdots\\
0&0&\cdots&s_{nn}
\end{array} \right)\\\\
\dfrac{\sum_{i=1}^{k}s_{ii}}{\sum_{i=1}^{n} s_{ii}} &\geq 0.99
\end{aligned}\]
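The same recipe as runnable Octave (a sketch assuming X is an m x n matrix that has already been mean-normalized):

```octave
m = size(X, 1);                      % X: m x n, already mean-normalized
Sigma = (X' * X) / m;                % covariance matrix, n x n
[U, S, V] = svd(Sigma);
s = diag(S);
k = find(cumsum(s) / sum(s) >= 0.99, 1);   % smallest k retaining 99% variance
U_reduce = U(:, 1:k);
Z = X * U_reduce;                    % row-wise version of z = U_reduce' * x
X_approx = Z * U_reduce';            % row-wise version of x_approx = U_reduce * z
```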
Week Nine
Anomaly Detection
Gaussian Distribution
The multivariate Gaussian distribution takes the correlations between different variables into account:
\[p(x) = \dfrac{1}{(2\pi)^{\frac{n}{2}}|\Sigma|^{\frac{1}{2}}}e^{-\frac{1}{2}(x-\mu)^{T}\Sigma^{-1}(x-\mu)}\]
The single-variable (univariate) Gaussian model is a special case of the multivariate Gaussian in which \(\Sigma\) is diagonal, i.e. the features are treated as independent:
\[\Sigma = \left(\begin{array}{cccc}
\sigma_{1}^{2}&&&\\
&\sigma_{2}^{2}&&\\
&&\ddots&\\
&&&\sigma_{n}^{2}
\end{array}\right)\]
When fitting the anomaly detection model, the parameters can be estimated by maximum likelihood:
\[\begin{aligned}
\mu &= \dfrac{1}{m} \sum_{i=1}^{m}x^{(i)}\\
\Sigma &= \dfrac{1}{m} \sum_{i=1}^{m} (x^{(i)}-\mu)(x^{(i)}-\mu)^{T}
\end{aligned}\]
The univariate model is computationally much cheaper than the multivariate one, but it may require manually adding features (for example, ratios of existing features) to separate normal from anomalous examples.
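A minimal Octave sketch of the multivariate version, putting the MLE formulas above into code (`epsilon` is a hypothetical threshold that would be chosen on a labeled cross-validation set):

```octave
[m, n] = size(X);                       % X: m x n matrix of normal examples
mu = mean(X, 1)';                       % MLE mean, n x 1
Xc = X - mu';                           % centered examples
Sigma = (Xc' * Xc) / m;                 % MLE covariance, n x n
% p(x) under the multivariate Gaussian, evaluated for every row of X.
p = (2 * pi) ^ (-n / 2) * det(Sigma) ^ (-1 / 2) ...
    * exp(-0.5 * sum((Xc / Sigma) .* Xc, 2));
anomalies = find(p < epsilon);          % flag low-density examples
```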
Recommender System
Cost Function
\[\begin{aligned}
J(X,\Theta) &= \dfrac{1}{2} \sum_{(i,j):r(i,j)=1}((\theta^{(j)})^{T}x^{(i)}-y^{(i,j)})^2 + \dfrac{\lambda}{2}\Big[\sum_{i=1}^{n_{m}}\sum_{k=1}^{n}(x_k^{(i)})^2 + \sum_{j=1}^{n_{u}} \sum_{k=1}^n(\theta_{k}^{(j)})^2\Big]\\
J(X,\Theta) &= \dfrac{1}{2}Sum\{((X\Theta'-Y).*R).^2\} + \dfrac{\lambda}{2}(Sum\{\Theta.^2\} + Sum\{X.^2\})
\end{aligned}\]
\[\begin{aligned}
\dfrac{\partial J}{\partial X} &= ((X\Theta'-Y).*R)\,\Theta + \lambda X\\
\dfrac{\partial J}{\partial \Theta} &= ((X\Theta'-Y).*R)'X + \lambda \Theta
\end{aligned}\]
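These vectorized formulas map directly onto Octave (X: n_m x n, Theta: n_u x n, Y and R: n_m x n_u; a sketch in the spirit of the course's programming exercise, not its official solution):

```octave
% Collaborative filtering cost and gradients.
E = (X * Theta' - Y) .* R;            % errors, zeroed where r(i,j) = 0
J = sum(sum(E .^ 2)) / 2 ...
    + (lambda / 2) * (sum(sum(Theta .^ 2)) + sum(sum(X .^ 2)));
X_grad     = E  * Theta + lambda * X;      % dJ/dX,     n_m x n
Theta_grad = E' * X     + lambda * Theta;  % dJ/dTheta, n_u x n
```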