Week Six

F Score

\[\begin{aligned}
F_{1} &= \dfrac{2}{\dfrac{1}{P}+\dfrac{1}{R}}\\
&= \dfrac{2PR}{P+R}
\end{aligned}\]
where \(P\) is precision and \(R\) is recall; the F score is their harmonic mean.
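For example, a skewed classifier with \(P = 0.1\) and \(R = 0.9\) scores
\[F_{1} = \dfrac{2 \times 0.1 \times 0.9}{0.1 + 0.9} = 0.18,\]
far below the arithmetic mean of \(0.5\): the harmonic mean stays low whenever either metric is poor.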

Week Seven

Support Vector Machine

Cost Function

\[\begin{aligned}
&\min_{\theta}\Big[-\dfrac{1}{m}\sum_{y_{i}\in Y, x_{i} \in X}\big(y_{i} \log h(\theta^{T}x_{i})+(1-y_{i})\log (1-h(\theta^{T}x_{i}))\big)+\dfrac{\lambda}{2m} \sum_{\theta_{j} \in \theta}{\theta_{j}^{2}}\Big]\\
&\Rightarrow \min_{\theta}\Big[-\sum_{y_{i} \in Y,x_{i} \in X}\big(y_{i} \log{h(\theta^{T}x_{i})}+(1-y_{i})\log(1-h(\theta^{T}x_{i}))\big)+\dfrac{\lambda}{2}\sum_{\theta_{j} \in \theta}{\theta^2_{j}}\Big]\\
&\Rightarrow\min_{\theta}\Big[-C\sum_{y_{i} \in Y,x_{i} \in X}\big(y_{i} \log{h(\theta^{T}x_{i})}+(1-y_{i})\log(1-h(\theta^{T}x_{i}))\big)+\dfrac{1}{2}\sum_{\theta_{j} \in \theta}{\theta^2_{j}}\Big]
\end{aligned}\]
The first step scales the objective by \(m\), the second by \(\dfrac{1}{\lambda}\), so \(C\) plays the role of \(\dfrac{1}{\lambda}\).

  • Large C:
    • lower bias, higher variance
  • Small C:
    • higher bias, lower variance
  • Large \(\sigma^2\): features \(f_{i}\) vary more smoothly (see the kernel sketch after this list).
    • higher bias, lower variance
  • Small \(\sigma^2\): features \(f_{i}\) vary more sharply.
    • lower bias, higher variance

When \(C\) is very large, the misclassification term must be driven to (nearly) zero, so the optimization reduces to the large-margin form
\[\begin{aligned}
\min_{\theta}\ & \dfrac{1}{2} \sum_{\theta_{j} \in \theta}{\theta_{j}^2}&\\
\text{s.t. } &\theta^{T}x_{i} \geq 1, &\text{if } y_{i} = 1\\
&\theta^{T}x_{i} \leq -1, &\text{if } y_{i} = 0
\end{aligned}\]
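A minimal Octave sketch of the Gaussian kernel feature that the \(\sigma^2\) bullets describe; the function name and variables are illustrative, not from the course code:

    function sim = gaussianKernel(x1, x2, sigma)
      % Similarity between example x1 and landmark x2:
      % close to 1 when the points nearly coincide, close to 0 when far apart.
      d = x1(:) - x2(:);
      sim = exp(-(d' * d) / (2 * sigma^2));
    end

A larger sigma^2 makes the similarity decay more slowly with distance, which is exactly why the features vary more smoothly and the model has higher bias.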

PS

If the number of features \(n\) is large relative to the number of examples \(m\), use logistic regression or an SVM without a kernel.

If \(n\) is small and \(m\) is intermediate, use an SVM with a Gaussian kernel.

If \(n\) is small and \(m\) is large, create more features, then use logistic regression or an SVM without a kernel.

Week Eight

K-means

Cost Function

K-means tries to minimize
\[\min_{c^{(1)},\dots,c^{(m)},\,\mu_{1},\dots,\mu_{K}}{\dfrac{1}{m} \sum_{i=1}^{m} ||x^{(i)} - \mu_{c^{(i)}}||^2}\]
In the cluster-assignment step, it minimizes the cost by varying the assignments \(c^{(i)}\) with the centroids fixed; in the move-centroid step, it minimizes the cost by moving each centroid \(\mu_{k}\) to the mean of the training examples assigned to it.
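A minimal Octave sketch of one iteration, assuming X is an m-by-n data matrix and centroids is K-by-n (variable names are illustrative):

    % Cluster-assignment step: nearest centroid for every example.
    m = size(X, 1);
    K = size(centroids, 1);
    idx = zeros(m, 1);
    for i = 1:m
      dists = sum((centroids - X(i, :)).^2, 2);  % squared distances to all K centroids
      [~, idx(i)] = min(dists);
    end

    % Move-centroid step: each centroid becomes the mean of its points.
    % (A centroid that received no points would need re-initialization, omitted here.)
    for k = 1:K
      centroids(k, :) = mean(X(idx == k, :), 1);
    end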

Initialization

Initialize the centroids randomly: select K samples from the training set at random and set the centroids to those samples.

K-means can fall into a local minimum, so repeat the random initialization until the cost (distortion) is acceptable for your purposes.

K-means always converges, and the cost never increases during training. Adding more centroids should decrease the cost; if it does not, K-means has fallen into a local minimum, so reinitialize the centroids until the cost drops.
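A sketch of random initialization with restarts, keeping the solution with the lowest distortion; runKMeans is a hypothetical helper that runs the two steps above until convergence:

    bestCost = Inf;
    for t = 1:100
      % Pick K distinct training examples as the initial centroids.
      init = X(randperm(size(X, 1), K), :);
      [centroids, cost] = runKMeans(X, init);  % hypothetical helper
      if cost < bestCost
        bestCost = cost;
        bestCentroids = centroids;
      end
    end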

PCA (Principal Component Analysis)

Reconstruct \(x\) from \(z\) so that the following inequality holds, i.e., at least 99% of the variance is retained:
\[1-\dfrac{\dfrac{1}{m} \sum_{i=1}^{m}||x^{(i)}-x^{(i)}_{approximation}||^2}{\dfrac{1}{m} \sum_{i=1}^{m} ||x^{(i)}||^2} \geq 0.99\]
PS:
the inequality is equivalent to the check below
\[\begin{aligned}
Sigma &= \dfrac{1}{m} X^{T}X\\
[U, S, V] &= svd(Sigma)\\
U_{reduce} &= U(:, 1:k)\\
z &= U_{reduce}' * x\\
x_{approximation} &= U_{reduce} * z\\\\
S &= \left( \begin{array}{cccc}
s_{11}&0&\cdots&0\\
0&s_{22}&\cdots&0\\
\vdots&\vdots&\ddots&\vdots\\
0&0&\cdots&s_{nn}
\end{array} \right)\\\\
\dfrac{\sum_{i=1}^{k}s_{ii}}{\sum_{i=1}^{n} s_{ii}} &\geq 0.99
\end{aligned}\]
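A minimal Octave sketch of the whole pipeline, assuming the columns of X are already mean-normalized (variable names are illustrative):

    m = size(X, 1);
    Sigma = (X' * X) / m;          % n x n covariance matrix
    [U, S, V] = svd(Sigma);
    s = diag(S);
    k = find(cumsum(s) / sum(s) >= 0.99, 1);  % smallest k retaining 99% of variance
    U_reduce = U(:, 1:k);
    Z = X * U_reduce;              % project all examples: m x k
    X_approx = Z * U_reduce';      % reconstruct: m x n

For a single example stored as a column vector x, the projection is z = U_reduce' * x, matching the notes above.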

Week Nine

Anomaly Detection

Gaussian Distribution

The multivariate Gaussian distribution takes the correlations between different features into account
\[p(x) = \dfrac{1}{(2\pi)^{\frac{n}{2}}|\Sigma|^{\frac{1}{2}}}e^{-\frac{1}{2}(x-\mu)^{T}\Sigma^{-1}(x-\mu)}\]
The univariate Gaussian model is a special case of the multivariate Gaussian in which the off-diagonal covariances are zero:
\[\Sigma = \left(\begin{array}{cccc}
\sigma_{1}^{2}&&&\\
&\sigma_{2}^{2}&&\\
&&\ddots&\\
&&&\sigma_{n}^{2}
\end{array}\right)\]
When training the anomaly detector, we can use maximum likelihood estimation:
\[\begin{aligned}
\mu &= \dfrac{1}{m} \sum_{i=1}^{m}x^{(i)}\\
\Sigma &= \dfrac{1}{m} \sum_{i=1}^{m} (x^{(i)}-\mu)(x^{(i)}-\mu)^{T}
\end{aligned}\]
The univariate model is computationally much cheaper than the multivariate one, but it may need extra hand-crafted features to separate normal from anomalous examples.
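A minimal Octave sketch of fitting the parameters and evaluating \(p(x)\) for one example (variable names are illustrative):

    % Maximum likelihood estimates from the training set X (m x n).
    [m, n] = size(X);
    mu = mean(X, 1)';                        % n x 1
    Xc = X - mu';                            % centered examples
    Sigma = (Xc' * Xc) / m;                  % n x n covariance

    % Density of a single example x (n x 1 column vector).
    p = exp(-0.5 * (x - mu)' * inv(Sigma) * (x - mu)) ...
        / ((2 * pi)^(n / 2) * sqrt(det(Sigma)));
    % Flag x as anomalous when p < epsilon, with epsilon chosen on a labeled CV set.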

Recommender System

Cost Function

\[\begin{aligned}
J(X,\Theta) &= \dfrac{1}{2} \sum_{(i,j):r(i,j)=1}((\theta^{(j)})^{T}x^{(i)}-y^{(i,j)})^2 + \dfrac{\lambda}{2}\Big[\sum_{i=1}^{n_{m}}\sum_{k=1}^{n}(x_k^{(i)})^2 + \sum_{j=1}^{n_{u}} \sum_{k=1}^n(\theta_{k}^{(j)})^2\Big]\\
J(X,\Theta) &= \dfrac{1}{2}Sum\{((X\Theta'-Y).*R).^2\} + \dfrac{\lambda}{2}(Sum\{\Theta.^2\} + Sum\{X.^2\})
\end{aligned}\]
\[\begin{aligned}
\dfrac{\partial J}{\partial X} &= ((X\Theta'-Y).*R)\,\Theta + \lambda X\\
\dfrac{\partial J}{\partial \Theta} &= ((X\Theta'-Y).*R)'X + \lambda \Theta
\end{aligned}\]
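A minimal Octave sketch of the vectorized cost and gradients above, assuming X is \(n_m \times n\), Theta is \(n_u \times n\), and Y and R are \(n_m \times n_u\):

    E = (X * Theta' - Y) .* R;                % errors, nonzero only where r(i,j) = 1
    J = 0.5 * sum(sum(E .^ 2)) ...
        + (lambda / 2) * (sum(sum(Theta .^ 2)) + sum(sum(X .^ 2)));

    X_grad = E * Theta + lambda * X;          % dJ/dX,     n_m x n
    Theta_grad = E' * X + lambda * Theta;     % dJ/dTheta, n_u x n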
