Programming a Hearthstone agent using Monte Carlo Tree Search (chapter one)
Markus Heikki Andersson
Håkon Helgesen Hesselberg
Master of Science in Computer Science
Submission date: June 2016
Supervisor: Helge Langseth, IDI
Norwegian University of Science and Technology
Department of Computer and Information Science
Abstract
This thesis describes the effort of adapting Monte Carlo Tree Search (MCTS) to the game of Hearthstone, a card game with hidden information and stochastic elements. The focus is on discovering the suitability of MCTS for this environment, as well as which domain-specific adaptations are needed.
An MCTS agent is developed for a Hearthstone simulator, which is used to conduct experiments to measure the agent's performance against both human and computer players. The implementation includes determinizations to work around hidden information, and introduces action chains to handle multiple actions within a turn. The results are analyzed and possible future directions of research are proposed.
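To make the first of these adaptations more concrete, the sketch below illustrates the general idea of a determinization: sampling one consistent assignment of the hidden information (the opponent's hand and draw pile) so the resulting state can be searched as a perfect-information game. This is a minimal sketch only; the attribute and method names are hypothetical stand-ins and are not taken from the thesis' simulator.

```python
import random

def determinize(state, rng=None):
    """Return a copy of `state` with hidden information replaced by one
    random but consistent assignment (a 'determinization').

    The interface used here (copy, unseen_cards, opponent_hand_size,
    opponent_hand, opponent_deck) is hypothetical and only illustrates
    the idea.
    """
    rng = rng or random.Random()
    det = state.copy()

    # Pool of cards the searching player cannot observe.
    unseen = list(state.unseen_cards())
    rng.shuffle(unseen)

    # Deal a plausible opponent hand from the unseen pool ...
    det.opponent_hand = unseen[:state.opponent_hand_size]
    # ... and fix the remaining cards as the opponent's draw pile.
    det.opponent_deck = unseen[state.opponent_hand_size:]
    return det
```

Several determinizations are typically searched and their results aggregated, since any single sample may misrepresent the opponent's actual hand.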
Introduction
This chapter is inspired by the work done in the specialization project [Andersson and
Hesselberg (2015)].
1.1 Background and Motivation
Video games are one of the fastest-growing forms of entertainment and a popular pastime across the world, representing a billion-dollar industry in the US alone [Newzoo (2014)].
Games often include time constraints on a player's actions. Creating an AI to play these games therefore becomes a challenge, as it must be able to find solutions to problems quickly. This is why games are ideal testbeds for real-time AI research [Buro and Furtak (2003)].
This is true not only today, but has been since the beginning of the computer era; many computer science pioneers have used games like chess, checkers and others to research algorithms. Such luminaries include Alan Turing, John von Neumann, Claude Shannon, Herbert Simon, Allen Newell, John McCarthy, Arthur Samuel, Donald Knuth, Donald Michie, and Ken Thompson [Billings (1995)].
Board games, card games, other mathematical games and their digital representations pose a number of difficult problems, and there are many reasons why their study is desirable. They usually have some or all of the following properties:
- Well-defined rules and concise logistics – As games are usually well-defined it can be relatively simple to create a complete player, which makes it possible to spend more time and effort on what is actually the topic of scientific interest.
- Complex strategies – Some of the hardest problems known in computational complexity and theoretical computer science can be found in games.
- Specific and clear goals – Games will often have an unambiguous definition of success, so that efforts can be focused on achieving that goal.
- Measurable results – By measuring either degree of success in playing against other opponents, or in solutions for related sub-tasks, it is possible to see how well an AI is performing.
Since a game can be viewed as a simplified situation from the real world, techniques for creating strong computer players for these games may also have potential when applied to real-world problems [Laird and VanLent (2001)].
In many games, computer players will either cheat by using access to the game engine to obtain additional information, resources or units, or they will be no match for an expert-level human player. However, players usually enjoy games more when they are presented with a challenge appropriate to their skill [Malone (1980); Whitehouse (2014)]. Player enjoyment is easier to achieve when players perceive the game as being played on equal terms, instead of fighting an inferior computer opponent who is receiving an unfair advantage.
Among the hardest problems known in theoretical computer science is the class of stochastic, imperfect-information games with partial observability. This class includes many problems which are computationally undecidable, yet easy to express [Shi and Littman (2001); Gilpin and Sandholm (2006)].
A game which fits the above description well is Hearthstone: Heroes of Warcraft. It is
a digital strategic card game developed by Blizzard Entertainment. Two players compete
against each other with self-made decks, using cards specifically created for Hearthstone.
The game is easy to grasp, but complexity quickly arises from the interactions between cards – each card has the potential to subtly change the rules of the game. It is because of this complexity that Hearthstone is an interesting area for AI research, especially for search in imperfect information domains and general game playing.
1.2 Research Questions
In the specialization project, Hearthstone was analyzed and compared to other games in order to understand which techniques could be used to create a strong computer player [Andersson and Hesselberg (2015)].
It was discovered that the key points to consider when creating an AI for Hearthstone were the large game tree, the difficulty of node evaluation and the way the game is constantly evolving. Monte Carlo Tree Search proved to be a viable solution: it can handle large game trees by using random sampling, and since it can be run without a heuristic function it can deal with new elements being introduced.
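To illustrate why no heuristic function is required, the following minimal UCT-style sketch values leaf nodes purely by random playouts. It assumes a generic game-state interface (legal_actions, apply, is_terminal, reward_for, current_player) that is a hypothetical stand-in for illustration, not the simulator used in this thesis.

```python
import math
import random

class Node:
    """One node in the search tree; stores visit count and accumulated reward."""
    def __init__(self, state, parent=None, action=None):
        self.state, self.parent, self.action = state, parent, action
        self.children = []
        self.visits = 0
        self.total_reward = 0.0
        self.untried = list(state.legal_actions())

    def ucb1(self, c=1.41):
        # Standard UCT value: average reward plus an exploration bonus.
        return (self.total_reward / self.visits
                + c * math.sqrt(math.log(self.parent.visits) / self.visits))

def mcts(root_state, iterations=1000):
    root = Node(root_state)
    player = root_state.current_player()
    for _ in range(iterations):
        node = root
        # 1. Selection: walk down fully expanded nodes by UCB1.
        while not node.untried and node.children:
            node = max(node.children, key=lambda n: n.ucb1())
        # 2. Expansion: add one previously untried child.
        if node.untried:
            action = node.untried.pop()
            child = Node(node.state.apply(action), parent=node, action=action)
            node.children.append(child)
            node = child
        # 3. Simulation: play out randomly -- no heuristic evaluation needed.
        state = node.state
        while not state.is_terminal():
            state = state.apply(random.choice(state.legal_actions()))
        # Simplification: reward is always measured from the root player's view.
        reward = state.reward_for(player)
        # 4. Backpropagation: update statistics along the path to the root.
        while node is not None:
            node.visits += 1
            node.total_reward += reward
            node = node.parent
    # Recommend the most-visited action at the root.
    return max(root.children, key=lambda n: n.visits).action
```

In the agent developed in this thesis, such a loop would additionally have to operate on determinized states and on chains of actions within a single turn, as noted in the abstract.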
This project is focused on further examining Monte Carlo Tree Search, implementing it to create a Hearthstone computer player, and analyzing its playing strength. As such, the high-level research questions are:
1. Can Monte Carlo Tree Search be implemented to create a strong Hearthstone computer
player?
2. Which parameters and implementation choices are important for the agent’s performance?
1.3 Research Method
The research method for this project is to acquire a deep understanding of the Monte Carlo
Tree Search algorithm and implement it to create a strong Hearthstone computer player.
The work can be divided into three parts:
1. Research and understand the underlying principles and theories behind MCTS.
2. Implement MCTS specific to Hearthstone.
3. Perform experiments to figure out which parameter values give the best results.
1.4 Thesis Structure
Chapter two describes Hearthstone in detail and gives a formal description of the game.
Chapter three explains the theories leading up to Monte Carlo Tree Search together with
related research, and describes how MCTS works.
Chapter four describes the implementation of MCTS, which choices have been made and
why, as well as the simulator being used.
Chapter five describes the agents and parameters used and sets up experiments to test the
MCTS algorithm.
Chapter six presents the results of the experiments described in the previous chapter and analyses them.
Chapter seven concludes the thesis.
Chapter eight describes future work.