Programming a Hearthstone agent using Monte Carlo Tree Search (Chapter One)
Markus Heikki Andersson
Håkon Helgesen Hesselberg
Master of Science in Computer Science
Submission date: June 2016
Supervisor: Helge Langseth, IDI
Norwegian University of Science and Technology
Department of Computer and Information Science
Abstract
This thesis describes the effort of adapting Monte Carlo Tree Search (MCTS) to the game
of Hearthstone, a card game with hidden information and stochastic elements. The focus
is on discovering the suitability of MCTS for this environment, as well as which domain-specific
adaptations are needed.
An MCTS agent is developed for a Hearthstone simulator, which is used to conduct experiments
measuring the agent's performance against both human and computer players.
The implementation includes determinizations to work around hidden information, and
introduces action chains to handle multiple actions within a turn. The results are analyzed
and possible future directions of research are proposed.
Introduction
This chapter is inspired by the work done in the specialization project [Andersson and
Hesselberg (2015)].
1.1 Background and Motivation
Video games are one of the fastest-growing forms of entertainment and a popular
pastime across the world, representing a billion-dollar industry in the US alone [Newzoo
(2014)].
Games often impose time constraints on a player's actions. Creating an AI to play
these games therefore becomes a challenge, as the AI must be able to find solutions to problems
quickly. This is why games are ideal for testing applications of real-time AI research
[Buro and Furtak (2003)].
This has been true not only recently, but from the beginning of the computer era; many computer
science pioneers have used games like chess, checkers and others to research algorithms.
Examples of such luminaries include Alan Turing, John von
Neumann, Claude Shannon, Herbert Simon, Allen Newell, John McCarthy, Arthur Samuel,
Donald Knuth, Donald Michie, and Ken Thompson [Billings (1995)].
There are a number of difficult problems represented in board games, card games, other
mathematical games and their digital representations, and there are many reasons why
their study is desirable. Usually they have all or some of these properties:
- Well-defined rules and concise logistics – As games are usually well-defined, it can be relatively simple to create a complete player, which makes it possible to spend more time and effort on what is actually the topic of scientific interest.
- Complex strategies – Some of the hardest problems known in computational complexity and theoretical computer science can be found in games.
- Specific and clear goals – Games will often have an unambiguous definition of success, so that efforts can be focused on achieving that goal.
- Measurable results – By measuring either degree of success in playing against other opponents, or in solutions for related sub-tasks, it is possible to see how well an AI is performing.
Since a game can be seen as a simplified situation from the real world, the techniques behind strong
computer players for these games may also transfer to real-world problems
[Laird and VanLent (2001)].
In many games, computer players either cheat, using access to the
game engine to obtain additional information, resources or units, or are no
match for an expert-level human player. However, players usually enjoy games more when
they are presented with a challenge appropriate to their skill [Malone (1980); Whitehouse
(2014)]. Player enjoyment is easier to achieve when players perceive that the game is played on
equal terms, instead of against an inferior computer opponent that receives an unfair
advantage.
Among the hardest problems known in theoretical computer science is the class of
stochastic, imperfect-information games with partial observability. This class includes
many problems that are computationally undecidable yet easy to express [Shi
and Littman (2001); Gilpin and Sandholm (2006)].
A game which fits the above description well is Hearthstone: Heroes of Warcraft. It is
a digital strategy card game developed by Blizzard Entertainment. Two players compete
against each other with self-made decks, using cards created specifically for Hearthstone.
The game is easy to grasp, but complexity quickly arises from the interaction between
cards – each card has the potential to subtly change the rules of the game. It is because
of this complexity that Hearthstone is an interesting area for AI research, especially for
search in imperfect-information domains and general game playing.
1.2 Research Questions
In the specialization project Hearthstone was analyzed and compared to other games to
understand which techniques could be used to create a strong computer player [Andersson
and Hesselberg (2015)].
It was discovered that the key points to consider when creating an AI for Hearthstone
were the large game tree, the difficulty of node evaluation and the way the game is
constantly evolving. Monte Carlo Tree Search proved to be a viable solution: it can handle
large game trees by using random sampling, can be run without a heuristic function, and as
such can deal with new elements being introduced.
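To make these properties concrete, the four phases of MCTS (selection, expansion, simulation, backpropagation) can be sketched in a few dozen lines. The sketch below applies a generic UCT-based MCTS to a toy game of Nim rather than Hearthstone; the `Node` and `mcts` names, the exploration constant, and the Nim rules are illustrative assumptions, not the implementation described in later chapters.

```python
import math
import random

# A minimal sketch of the four MCTS phases on a toy game of Nim:
# players alternately remove 1-3 stones, and whoever takes the last
# stone wins. Illustrative only, not the thesis implementation.

class Node:
    def __init__(self, stones, player, parent=None, move=None):
        self.stones = stones                  # stones remaining (game state)
        self.player = player                  # player to move: 0 or 1
        self.parent = parent
        self.move = move                      # move that led to this node
        self.children = []
        self.untried = [m for m in (1, 2, 3) if m <= stones]
        self.visits = 0
        self.wins = 0.0                       # wins for the player who moved here

    def uct_child(self, c=1.4):
        # Selection policy: balance exploitation and exploration (UCT).
        return max(self.children, key=lambda ch:
                   ch.wins / ch.visits
                   + c * math.sqrt(math.log(self.visits) / ch.visits))

def mcts(stones, player, iterations=2000, seed=0):
    rng = random.Random(seed)
    root = Node(stones, player)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend while the node is fully expanded.
        while not node.untried and node.children:
            node = node.uct_child()
        # 2. Expansion: add one previously untried move.
        if node.untried:
            m = rng.choice(node.untried)
            node.untried.remove(m)
            child = Node(node.stones - m, 1 - node.player, parent=node, move=m)
            node.children.append(child)
            node = child
        # 3. Simulation: play random moves until the game ends.
        s, p = node.stones, node.player
        while s > 0:
            s -= rng.randint(1, min(3, s))
            p = 1 - p
        winner = 1 - p  # the player who took the last stone
        # 4. Backpropagation: update statistics along the path to the root.
        while node is not None:
            node.visits += 1
            if node.parent is not None and winner == node.parent.player:
                node.wins += 1  # credit the player who made the move into node
            node = node.parent
    # Recommend the most-visited move (the usual robust-child criterion).
    return max(root.children, key=lambda ch: ch.visits).move
```

Note that no heuristic evaluation function appears anywhere: terminal outcomes of random playouts are the only state evaluations, which is exactly the property that lets MCTS cope with new cards being introduced.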
This project is focused on further examining Monte Carlo Tree Search, implementing it to
create a Hearthstone computer player and analyzing its playing strength. As such, the high-level
research questions are:
1. Can Monte Carlo Tree Search be implemented to create a strong Hearthstone computer
player?
2. Which parameters and implementation choices are important for the agent’s performance?
1.3 Research Method
The research method for this project is to acquire a deep understanding of the Monte Carlo
Tree Search algorithm and implement it to create a strong Hearthstone computer player.
The work can be divided into three parts:
1. Research and understand the underlying principles and theories behind MCTS.
2. Implement MCTS specifically for Hearthstone.
3. Perform experiments to determine which parameter values give the best results.
1.4 Thesis Structure
Chapter two describes Hearthstone in detail and defines the game formally.
Chapter three explains the theories leading up to Monte Carlo Tree Search together with
related research, and describes how MCTS works.
Chapter four describes the implementation of MCTS, which choices have been made and
why, as well as the simulator being used.
Chapter five describes the agents and parameters used and sets up experiments to test the
MCTS algorithm.
Chapter six presents the results of the experiments set up in the previous chapter and
analyzes them.
Chapter seven concludes the thesis.
Chapter eight describes future work.