Programming a Hearthstone agent using Monte Carlo Tree Search (chapter one)
Markus Heikki Andersson
Håkon Helgesen Hesselberg
Master of Science in Computer Science
Submission date: June 2016
Supervisor: Helge Langseth, IDI
Norwegian University of Science and Technology
Department of Computer and Information Science
Abstract
This thesis describes the effort of adapting Monte Carlo Tree Search (MCTS) to the game
of Hearthstone, a card game with hidden information and stochastic elements. The focus
is on discovering the suitability of MCTS for this environment, as well as which domain-specific
adaptations are needed.
An MCTS agent is developed for a Hearthstone simulator, which is used to conduct experiments
to measure the agent’s performance both against human and computer players.
The implementation includes determinizations to work around hidden information, and
introduces action chains to handle multiple actions within a turn. The results are analyzed
and possible future directions of research are proposed.
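As a rough illustration of the two adaptations named in the abstract, the sketch below shows one possible shape of a determinization and of a randomly sampled action chain in Python. Every class and method name used here (`copy`, `collect_unseen_cards`, `deal_unseen`, `turn_is_over`, `legal_actions`, `apply`) is a hypothetical placeholder rather than the interface of the simulator used in this thesis.

```python
import random

def determinize(state, rng=random):
    """Return a copy of the game state in which the hidden zones (the
    opponent's hand and both decks) are replaced by one random sample
    that is consistent with the information visible to the agent."""
    det = state.copy()
    unseen = det.collect_unseen_cards()   # cards whose identity is hidden
    rng.shuffle(unseen)
    det.deal_unseen(unseen)               # redistribute the shuffled cards
    return det

def random_action_chain(state, rng=random):
    """Sample one action chain: the sequence of actions a player takes
    before ending the turn, treated as a single unit by the search."""
    chain = []
    while not state.turn_is_over():
        action = rng.choice(state.legal_actions())
        chain.append(action)
        state = state.apply(action)
    return chain, state
```

The idea, as described above, is that the search then reasons over whole turns rather than over isolated actions.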
Introduction
This chapter is inspired by the work done in the specialization project [Andersson and
Hesselberg (2015)].
1.1 Background and Motivation
Video games are one of the fastest-growing forms of entertainment and a popular
pastime across the world, representing a billion-dollar industry in the US alone [Newzoo
(2014)].
Games often place time constraints on a player's actions. Creating an AI to play
these games is therefore a challenge, as it must be able to find solutions to problems
quickly. This is why games are ideal testbeds for real-time AI research
[Buro and Furtak (2003)].
This has been true since the beginning of the computer era: many computer
science pioneers have used games such as chess and checkers to research algorithms.
Examples of such luminaries include Alan Turing, John von
Neumann, Claude Shannon, Herbert Simon, Allen Newell, John McCarthy, Arthur Samuel,
Donald Knuth, Donald Michie, and Ken Thompson [Billings (1995)].
There are a number of difficult problems represented in board games, card games, other
mathematical games and their digital representations, and there are many reasons why
their study is desirable. Usually they have all or some of these properties:
- Well-defined rules and concise logistics – As games are usually well-defined it can be relatively simple to create a complete player, which makes it possible to spend more time and effort on what is actually the topic of scientific interest.
- Complex strategies – Some of the hardest problems known in computational complexity and theoretical computer science can be found in games.
- Specific and clear goals – Games will often have an unambiguous definition of success, so that efforts can be focused on achieving that goal.
- Measurable results – By measuring either degree of success in playing against other opponents, or in solutions for related sub-tasks, it is possible to see how well an AI is performing.
Since a game can be viewed as a simplified version of a real-world situation, techniques used to create strong
computer players for these games may also have potential when applied to real-world problems
[Laird and VanLent (2001)].
In many games, computer players either cheat by using access to the
game engine to obtain additional information, resources or units, or they are no
match for an expert-level human player. However, players usually enjoy games more when
they are presented with a challenge appropriate to their skill [Malone (1980); Whitehouse
(2014)]. Player enjoyment is easier to achieve when players perceive the game as being played on
equal terms, rather than fighting an inferior computer opponent that receives an unfair
advantage.
Among the hardest problems known in theoretical computer science is the class of
stochastic, imperfect-information games with partial observability. This class includes
many problems that are computationally undecidable, yet easy to express [Shi
and Littman (2001); Gilpin and Sandholm (2006)].
A game which fits the above description well is Hearthstone: Heroes of Warcraft. It is
a digital strategic card game developed by Blizzard Entertainment. Two players compete
against each other with self-made decks, using cards specifically created for Hearthstone.
The game is easy to grasp, but complexity quickly arises from the interactions between
cards – each card has the potential to subtly change the rules of the game. It is because
of this complexity that Hearthstone is an interesting area for AI research, especially for
search in imperfect-information domains and general game playing.
1.2 Research Questions
In the specialization project Hearthstone was analyzed and compared to other games to
understand which techniques could be used to create a strong computer player [Andersson
and Hesselberg (2015)].
It was discovered that the key points to consider when creating an AI for Hearthstone
were the large game tree, the difficulty of node evaluation and the way the game is
constantly evolving. Monte Carlo Tree Search proved to be a viable solution: it can handle
large game trees through random sampling, can be run without a heuristic evaluation function, and as
such can deal with new elements being introduced.
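To make the algorithm concrete before the detailed treatment in chapter three, here is a minimal, generic sketch of the four MCTS phases (selection, expansion, simulation, backpropagation) with UCT selection. The `GameState` interface it assumes (`legal_actions`, `apply`, `is_terminal`, `result`) is a hypothetical stand-in for a game simulator, not the implementation developed in this thesis, and the sketch ignores Hearthstone-specific issues such as hidden information.

```python
import math
import random

class Node:
    """One node of the search tree: a game state plus visit statistics."""
    def __init__(self, state, parent=None, action=None):
        self.state = state                  # state this node represents
        self.parent = parent
        self.action = action                # action that led here from parent
        self.children = []
        self.untried = list(state.legal_actions())
        self.visits = 0
        self.total_reward = 0.0

    def uct_child(self, c=1.4):
        """Pick the child that maximizes the UCT value."""
        return max(self.children,
                   key=lambda ch: ch.total_reward / ch.visits
                   + c * math.sqrt(math.log(self.visits) / ch.visits))

def mcts(root_state, iterations=1000):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend while the node is fully expanded.
        while not node.untried and node.children:
            node = node.uct_child()
        # 2. Expansion: add one child for a previously untried action.
        if node.untried:
            action = node.untried.pop()
            child = Node(node.state.apply(action), parent=node, action=action)
            node.children.append(child)
            node = child
        # 3. Simulation: play random actions until the game ends.
        state = node.state
        while not state.is_terminal():
            state = state.apply(random.choice(state.legal_actions()))
        reward = state.result()             # e.g. 1 for a win, 0 for a loss
        # 4. Backpropagation: update statistics along the path to the root.
        #    (A two-player implementation would credit the reward from the
        #    point of view of the player who moved at each node.)
        while node is not None:
            node.visits += 1
            node.total_reward += reward
            node = node.parent
    # Recommend the action of the most visited child of the root.
    return max(root.children, key=lambda ch: ch.visits).action
```

The exploration constant `c` and the iteration budget are examples of the parameters and implementation choices referred to in the research questions below.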
This project focuses on further examining Monte Carlo Tree Search, implementing it to
create a Hearthstone computer player, and analyzing its playing strength. As such, the high-level
research questions are:
1. Can Monte Carlo Tree Search be implemented to create a strong Hearthstone computer
player?
2. Which parameters and implementation choices are important for the agent’s performance?
1.3 Research Method
The research method for this project is to acquire a deep understanding of the Monte Carlo
Tree Search algorithm and implement it to create a strong Hearthstone computer player.
The work can be divided into three parts:
- Research and understand the underlying principles and theories behind MCTS.
- Implement MCTS specifically for Hearthstone.
- Perform experiments to determine which parameter values give the best results (a sketch of such an experiment harness is given after this list).
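As an indication of what the experimental part could look like, here is a minimal sketch of a match harness that plays many games between two agents and reports a win rate; sweeping it over candidate parameter values, such as the UCT exploration constant, is one way to obtain such comparisons. The agent classes and the `new_hearthstone_game` entry point in the commented usage are hypothetical placeholders, not the actual experiment code used in this thesis.

```python
def play_match(make_agent_a, make_agent_b, games, new_game):
    """Play `games` games between two agent factories and return the
    fraction of games won by agent A (a draw counts as half a win)."""
    score = 0.0
    for g in range(games):
        state = new_game(seed=g)                 # fresh game each round
        agents = (make_agent_a(), make_agent_b())
        while not state.is_terminal():
            mover = state.current_player()       # 0 for A, 1 for B
            state = state.apply(agents[mover].choose_action(state))
        winner = state.winner()                  # 0, 1, or None for a draw
        score += 1.0 if winner == 0 else (0.5 if winner is None else 0.0)
    return score / games

# Hypothetical usage, sweeping the UCT exploration constant against a
# fixed baseline (all names below are placeholders):
#
#   for c in (0.5, 1.0, 1.4, 2.0):
#       rate = play_match(lambda: MCTSAgent(exploration=c, iterations=1000),
#                         lambda: RandomAgent(), games=100,
#                         new_game=new_hearthstone_game)
#       print(f"exploration={c}: win rate {rate:.2f}")
```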
1.4 Thesis Structure
Chapter two describes Hearthstone in detail and gives a formal description of the game.
Chapter three explains the theories leading up to Monte Carlo Tree Search together with
related research, and describes how MCTS works.
Chapter four describes the implementation of MCTS, which choices have been made and
why, as well as the simulator being used.
Chapter five describes the agents and parameters used and sets up experiments to test the
MCTS algorithm.
Chapter six presents the results of the experiments set up in the previous chapter and
analyses them.
Chapter seven concludes the thesis.
Chapter eight describes future work.