Monte Carlo Integration

  1. Find the antiderivative and evaluate it directly.
  2. When no antiderivative is available, use Monte Carlo (MC) integration.

Assume that we can generate \(U_1, \ldots, U_n \sim \text{Uniform}(0, 1)\), and define \(\hat \theta_n = \frac{1}{n} \sum_{i=1}^{n} g(U_i)\).

By the law of large numbers, if \(\int_{0}^{1}|g(u)|du < \infty\), we have \(\hat \theta_n \rightarrow \theta := \mathbb{E}g(U) = \int_0^1 g(u)du \quad a.s. \quad \text{as } n \rightarrow \infty\).

Here a.s. means almost surely: the set of possible exceptions may be non-empty, but it has probability 0.

Bounded Interval

Eg. Calculate \(\theta = \int_0^1 x^2 dx\)

n <- 100
set.seed(1)
x <- runif(n)
theta.hat <- mean(x^2)
theta.hat

Goal: compute \(\theta = \int_a^{b} g(t) dt\).

Use \(\int_a^bg(t)dt = (b-a)\int_a^b g(t)\frac{1}{b-a}dt = (b-a)\mathbb{E}g(Y)\), where \(Y \sim \text{Uniform}(a,b)\).

So we can:

  1. Generate \(Y_1,...,Y_n \sim Uniform(a,b)\)
  2. Compute \(\hat \theta_n = (b-a) \cdot \frac{1}{n} \sum_{i=1}^n g(Y_i)\)

Eg. Calculate \(\theta = \int_1^4 x^2dx\)

n <- 100
set.seed(1)
x <- runif(n, min = 1, max = 4)
theta.hat <- (4 - 1) * mean(x^2)
theta.hat

Unbounded Interval

\[\Phi(x) = \int_{-\infty}^x \frac{1}{\sqrt{2\pi}}e^\frac{-t^2}{2}dt
\]

Note that $$\Phi(x) = \mathbb{P}(Z \le x) = \mathbb{E}[I_{Z \le x}], \text{where } Z \sim N(0,1)$$

Therefore, we can estimate

\[\widehat{\Phi(x)}=\frac{1}{n} \sum_{i=1}^n \mathbb{I}\left(Z_i \leq x\right)=\frac{\#\left\{i: Z_i \leq x\right\}}{n}
\]

The probability of an event equals the expectation of its indicator function.

Eg. Calculate \(\Phi(x) = \int_{-\infty}^x \frac{1}{\sqrt{2\pi}}e^{-\frac{t^2}{2}}dt\) (1) when x = 0.3; (2) when \(x = (x_1, ..., x_T)\) is a vector.

# (1)
x <- 0.3
n <- 100
set.seed(1)
z <- rnorm(n)
mean(z <= x) # estimate
pnorm(x)     # true value by pnorm()

# (2)
Phi.hat <- function(x, n = 1000){
  z <- rnorm(n)
  return(mean(z <= x))
}
x <- seq(0, 2.5, len = 5)
Phi.hat(x) # problem: the output is a single value
# Reason: when a vector is compared with a scalar, every element of the
# vector is compared with the scalar; but when two vectors are compared,
# the comparison is element-wise (with recycling). Be careful!
round(pnorm(x), digits = 3) # true values

# (2) fix: apply Phi.hat to one entry of x at a time
probab <- numeric(5)
for(i in 1:5){
  probab[i] <- Phi.hat(x[i])
}
probab

For General Distributions

Calculate \(\theta = \int_A g(x)f(x) dx\)

  1. Choose a suitable density \(f(x)\).
  2. Generate \(X_1,...,X_n\) from \(f(x)\) (using the methods from the previous lecture).
  3. Compute \(\hat \theta_n = \frac{1}{n} \sum_{i=1}^n g(X_i)\).

Eg. Calculate \(\int_0^{\infty} \frac{e^{-x}}{1+x^2}dx\)

We can regard \(e^{-x}\) as the density of the Exponential distribution with mean 1, and use it to generate X.
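A minimal R sketch of this example (the sample size and seed are arbitrary choices):

```r
n <- 10000
set.seed(1)
x <- rexp(n, rate = 1)           # X_i ~ Exp(1), density e^{-x} on (0, Inf)
theta.hat <- mean(1 / (1 + x^2)) # g(x) = 1/(1 + x^2)
theta.hat                        # close to the true value, about 0.6214
```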

Law of Large Numbers

Let \(\xi_1,\xi_2,...\) be a sequence of i.i.d. random variables with finite expected value, denoted by \(\mu\). Let

\[\bar{\xi}_n=\frac{1}{n} \sum_{i=1}^n \xi_i .
\]

Then,

  • Weak law of large numbers
\[\bar{\xi}_n \stackrel{P}{\rightarrow} \mu \text { as } n \rightarrow \infty .
\]
  • Strong law of large numbers
\[\bar{\xi}_n \stackrel{\text { a.s. }}{\longrightarrow} \mu \text { as } n \rightarrow \infty .
\]

Weak law of large numbers

Definition (Convergence in probability)

  • Let \(\left(\bar{\xi}_n\right)_{n \geq 1}\) be a sequence of real-valued random variables defined on a probability space \((\Omega, \mathcal{F}, P)\).
  • If, for any positive number \(\varepsilon\),
\[\lim _{n \rightarrow \infty} P\left(\left\{\omega \in \Omega:\left|\bar{\xi}_n(\omega)-\mu\right|>\varepsilon\right\}\right)=0 .
\]
  • Then, we say \(\bar{\xi}_n\) converges in probability to \(\mu\), denoted by
\[\bar{\xi}_n \stackrel{P}{\longrightarrow} \mu .
\]

See the lecturer's slides for the proof.

M <- 100000
n <- 100
eps <- 0.1
xi.bar <- numeric(M) # xi.bar[i] will record the sample mean of trial i
for (i in 1:M){
  set.seed(i)
  xi.vec <- sample(1:6, size = n, replace = TRUE)
  xi.bar[i] <- mean(xi.vec)
}
mu <- 3.5
mean(abs(xi.bar - mu) > eps)

Note that the bound depends only on n; M is just the number of repeated trials.

Standard Error

We now estimate the standard error of MC estimates of integrals of this form.

If an exact or numerical solution is available, use it; when the distribution is intractable, MC can be used to estimate the integral.

\[\theta = \int_{-\infty}^{\infty} g(x)f(x)dx
\]

where \(f(x)\) is a p.d.f.

\[\hat \theta_n = \frac{1}{n}\sum_{i = 1}^ng(X_i)
\]

where \(X_i \sim f(x)\).

\[\mathbb{E}[\hat\theta_n] = \theta
\]

We now derive \(\operatorname{Var}(\hat\theta_n)\), whose square root is called the standard error.

  • The variance can be calculated by
\[\operatorname{Var}\left(\hat{\theta}_n\right)=\frac{\sigma^2}{n}=:(\text { standard error of } \hat{\theta})^2=\left(\operatorname{se}\left(\hat{\theta}_n\right)\right)^2,
\]

where

\[\sigma^2=\operatorname{Var}(g(X))=\int_{-\infty}^{\infty}(g(x)-\theta)^2 f(x) d x .
\]
  • However, \(\sigma^2\) is not known in general.
  • We can use the estimated variance:
\[\hat{\sigma}_n^2=\frac{1}{n-1} \sum_{i=1}^n\left(g\left(X_i\right)-\hat{\theta}_n\right)^2 .
\]
  • The estimated variance of \(\hat{\theta}_n\) is
\[\widehat{\operatorname{se}}\left(\hat{\theta}_n\right)^2:=\frac{\hat{\sigma}_n^2}{n}=\frac{1}{n(n-1)} \sum_{i=1}^n\left(g\left(X_i\right)-\hat{\theta}_n\right)^2 .
\]

Here \(f(x)\) is chosen by us, \(\{X_i\}\) are generated from \(f(x)\), and n is the number of MC samples.

From the discussion above,

\[\hat{\theta}_n=\frac{1}{n} \sum_{i=1}^n g\left(X_i\right) .
\]

Eg. Calculate \(\int_0^1 xdx\)

Choose \(f(x)=1\) on \((0,1)\); then \(g(x) = x\), and we can plug into the formulas above.
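A short R sketch of this example, reporting the estimate together with its estimated standard error (the sample size is an arbitrary choice):

```r
n <- 1000
set.seed(1)
x <- runif(n)             # X_i ~ f(x) = 1 on (0, 1)
g <- x                    # g(x) = x
theta.hat <- mean(g)      # estimate of theta = 1/2
se.hat <- sd(g) / sqrt(n) # estimated standard error, roughly 1/sqrt(12*n)
c(theta.hat, se.hat)
```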

Central limit theorems

Repeatedly generate \(\hat \theta_n\), say 10000 times (each replication rolls the die 100 times, so a great many rolls in total). When n is large enough, $$P\left(\frac{\hat\theta_n-\theta}{\sigma/\sqrt{n}} \le x\right) \rightarrow \Phi(x).$$

Eg. Variance of Dice Rolling

sigma <- sqrt(35/12)
xi.bar.standardized <- (xi.bar - mu)/(sigma/sqrt(n))
hist(xi.bar.standardized, probability = TRUE)

By the CLT (Central Limit Theorem), $$ P\left(-1.96 \le \frac{\hat \theta_n - \theta}{\sigma/\sqrt{n}} \le 1.96\right) \approx 0.95, $$ so, with probability about 0.95, $$ \hat{\theta}_n \in\left[\theta-1.96 \times \frac{\sigma}{\sqrt{n}}, \theta+1.96 \times \frac{\sigma}{\sqrt{n}}\right] . $$

But \(\sigma\) is usually unknown; replacing it with an estimate leads to the t-distribution.

  • As \(\sigma^2\) is unknown in general, people usually use \(\hat{\sigma}^2\) to replace it in practice. In this case,
\[\frac{\hat{\theta}_n-\theta}{\hat{\sigma}_n / \sqrt{n}}=\frac{\hat{\theta}_n-\theta}{\hat{\operatorname{se}}\left(\hat{\theta}_n\right)} \Longrightarrow t_{n-1} .
\]

So, by the limiting distribution,

\[P\left(\theta-t_{0.025}(n-1) \frac{\hat{\sigma}_n}{\sqrt{n}} \leq \hat{\theta}_n \leq \theta+t_{0.025}(n-1) \frac{\hat{\sigma}_n}{\sqrt{n}}\right) \approx 0.95
\]
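As a sketch, such an interval can be computed with qt(); the example \(\int_0^1 x^2 dx\) and its sample size here are my own choices:

```r
n <- 1000
set.seed(1)
x <- runif(n)
g <- x^2
theta.hat <- mean(g)
se.hat <- sd(g) / sqrt(n)
t.crit <- qt(0.975, df = n - 1) # t_{0.025}(n - 1)
c(theta.hat - t.crit * se.hat,
  theta.hat + t.crit * se.hat)  # approximate 95% interval around 1/3
```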

Eg. Dice Rolling

# student's t-distribution
M <- 100
xi.student <- numeric(M)
for (i in 1:M){
  set.seed(i)
  xi.vec <- sample(1:6, size = n, replace = TRUE)
  xi.student[i] <- (mean(xi.vec) - 3.5) / (sd(xi.vec) / sqrt(n))
}

Note: as n grows, the t-distribution looks more and more like the normal distribution.

# MC estimate of the integral of sin(u)^4 * exp(-u) over (0, 1)
n <- 200
set.seed(1)
u <- runif(n)
g <- sin(u)^4 * exp(-u)
theta <- mean(g)
theta

Variance and Efficiency

Definition (Efficiency)

Let \(\hat{\theta}_1\) and \(\hat{\theta}_2\) be two estimators of \(\theta\). We say \(\hat{\theta}_1\) is more efficient (in a statistical sense) than \(\hat{\theta}_2\) if

\[\frac{\operatorname{Var}\left(\hat{\theta}_1\right)}{\operatorname{Var}\left(\hat{\theta}_2\right)}<1
\]

Remark.

If the variances of estimators \(\hat{\theta}_1\) and \(\hat{\theta}_2\) are unknown, we can estimate efficiency by substituting a sample estimate of the variance for each estimator.

Antithetic Variables

If we can generate two negatively correlated estimators, we can combine them into an estimator with smaller variance.

  • We can also use
\[\hat{\theta}_{1,2}=\frac{1}{2}\left(\hat{\theta}_1+\hat{\theta}_2\right)
\]

to estimate \(\theta\).

  • Note that
\[\operatorname{Var}\left(\hat{\theta}_{1,2}\right)=\operatorname{Var}\left(\frac{\hat{\theta}_1+\hat{\theta}_2}{2}\right)=\frac{1}{4}\left(\operatorname{Var}\left(\hat{\theta}_1\right)+\operatorname{Var}\left(\hat{\theta}_2\right)+2 \operatorname{Cov}\left(\hat{\theta}_1, \hat{\theta}_2\right)\right) \text {. }
\]
  • If \(\hat{\theta}_1\) and \(\hat{\theta}_2\) are negatively correlated, then
\[\operatorname{Cov}\left(\hat{\theta}_1, \hat{\theta}_2\right)<0 \text {, }
\]

and hence,

\[\operatorname{Var}\left(\hat{\theta}_{1,2}\right) \leq \frac{1}{4}\left(\operatorname{Var}\left(\hat{\theta}_1\right)+\operatorname{Var}\left(\hat{\theta}_2\right)\right)
\]

We want to construct \(\hat\theta_1\) and \(\hat\theta_2\) so that they are negatively correlated. The \(X_i\) and \(\tilde X_i\) constructed below achieve this.

  • If $$ \hat{\theta}_1=\frac{1}{n} \sum_{i=1}^n g\left(X_i\right), $$

    where
\[X_i \sim F_X(x)
\]
  • Assume that \(X_i\) is generated from the inverse transform method, then
\[X_i=F^{-1}\left(U_i\right) \stackrel{d}{=} \tilde{X}_i=F^{-1}\left(1-U_i\right) .
\]
  • If the function \(g\) is a monotone function, then
\[\operatorname{Cov}\left(g\left(X_i\right), g\left(\tilde{X}_i\right)\right) \leq 0
\]
  • Therefore,
\[\hat{\theta}_2=\frac{1}{n} \sum_{i=1}^n g\left(\tilde{X}_i\right)
\]

is also an estimator of \(\theta\)

Eg. \(\int_0^1 x^2 dx\)

set.seed(1)
u <- runif(100)
theta1 <- mean(u^2)
theta2 <- mean((1 - u)^2)
theta <- (theta1 + theta2) / 2 # note: mean(theta1, theta2) is wrong; mean()'s 2nd argument is trim
theta
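As a check on the variance reduction, one can repeat both estimators many times and compare their empirical variances, using the same total number of evaluations of \(g\) (the replication counts below are arbitrary choices):

```r
set.seed(1)
M <- 1000   # number of repeated experiments
n <- 100    # evaluations of g per experiment
simple <- antithetic <- numeric(M)
for (m in 1:M){
  u <- runif(n)
  simple[m] <- mean(u^2)
  v <- runif(n / 2)  # n/2 uniforms give n evaluations via the pairs (v, 1 - v)
  antithetic[m] <- mean((v^2 + (1 - v)^2) / 2)
}
var(antithetic) / var(simple) # fraction of variance remaining, about 1/8 here
```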

Eg. Normal distribution function

x <- 1
n <- 100
set.seed(1)
V <- runif(n, 0, 1)
V.tilde <- 1 - V
theta1 <- mean(exp(-V^2/2) * (1/sqrt(2*pi)))
theta2 <- mean(exp(-V.tilde^2/2) * (1/sqrt(2*pi)))
theta <- 0.5 + (theta1 + theta2) / 2 # Phi(1) = 1/2 + integral of the density over (0, 1)
theta
pnorm(x) # true value

The amount of variance reduction depends on \(g(x)\), i.e., on the choice of \(f(x)\); in good cases the variance can be brought below 90% of the original.

Control variates

  • For a control function \(f\) with known mean \(\mu=\mathbb{E}f(X)\), the variance of \(\hat{\theta}_c=\frac{1}{n} \sum_{i=1}^n g\left(X_i\right)+c \cdot \frac{1}{n} \sum_{i=1}^n\left(f\left(X_i\right)-\mu\right)\) is minimized by choosing $$ c^*=-\frac{\operatorname{Cov}(g(X), f(X))}{\operatorname{Var}(f(X))} $$
  • Therefore, the improved estimator is defined as
\[\hat{\theta}^*=\frac{1}{n} \sum_{i=1}^n g\left(X_i\right)+\hat{c}^* \cdot \frac{1}{n} \sum_{i=1}^n\left(f\left(X_i\right)-\mu\right),
\]

where

\[\hat{c}^*=-\frac{\text { sample covariance }}{\text { sample variance }} \text {. }
\]

Eg. \(\theta = \int_0^1 e^u du\)

set.seed(1)
u <- runif(100)
gu <- exp(u)
theta.hat <- mean(gu) # simple MC estimate
fu <- u               # control variate f(u) = u
mu <- 1/2             # known mean E[f(U)] = 1/2
sample.cov <- cov(gu, fu)
sample.var <- var(fu)
c.star <- -sample.cov / sample.var
theta.star <- mean(gu) + c.star * mean(fu - mu)
print(theta.hat)
print(theta.star)

Here we take \(f(u)=u\) as the control variate; its mean under Uniform(0,1) is 1/2.
