
Introduction

In part 3, we scale our application and enable load-balancing. To do this, we must go one level up in the hierarchy of a distributed application: the service.

  • Stack
  • Services (you are here)
  • Container (covered in part 2)

About services

In a distributed application, different pieces of the app are called “services.”

For example, if you imagine a video sharing site, it probably includes a service for storing application data in a database, a service for video transcoding in the background after a user uploads something, a service for the front-end, and so on.

Services are really just “containers in production.”

A service only runs one image, but it codifies the way that image runs—what ports it should use, how many replicas of the container should run so the service has the capacity it needs, and so on.

Scaling a service changes the number of container instances running that piece of software, assigning more computing resources to the service in the process.

Luckily, it's very easy to define, run, and scale services with the Docker platform: just write a docker-compose.yml file.

Your first docker-compose.yml file

A docker-compose.yml file is a YAML file that defines how Docker containers should behave in production.

docker-compose.yml

Save this file as docker-compose.yml wherever you want. Be sure you have pushed the image you created in Part 2 to a registry, and update this .yml by replacing username/repo:tag with your image details.

version: "3"
services:
  web:
    # replace username/repo:tag with your name and image details
    image: username/repo:tag
    deploy:
      replicas: 5
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
      restart_policy:
        condition: on-failure
    ports:
      - "4000:80"
    networks:
      - webnet
networks:
  webnet:

 

This docker-compose.yml file tells Docker to do the following:

  • Pull the image we uploaded in step 2 from the registry.

  • Run 5 instances of that image as a service called web, limiting each one to use, at most, 10% of the CPU (across all cores), and 50MB of RAM.

  • Immediately restart containers if one fails.

  • Map port 4000 on the host to web’s port 80.

  • Instruct web’s containers to share port 80 via a load-balanced network called webnet. (Internally, the containers themselves publish to web’s port 80 at an ephemeral port.)

  • Define the webnet network with the default settings (which is a load-balanced overlay network).
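Before deploying, it can help to sanity-check the file. If you also have the docker-compose CLI installed (it isn't required for the swarm workflow in this tutorial), one way to validate the YAML and print the fully resolved configuration is the following sketch, assuming docker-compose.yml sits in the current directory:

docker-compose -f docker-compose.yml config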

Run your new load-balanced app

Before we can use the docker stack deploy command, we first run:

docker swarm init

 Note: We get into the meaning of that command in part 4. If you don’t run docker swarm init you get an error that “this node is not a swarm manager.” 
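On a host with more than one network interface, docker swarm init may ask you to pick the address that other nodes should use to reach this manager. A minimal sketch, using a hypothetical host IP of 192.168.99.100 with the --advertise-addr flag:

docker swarm init --advertise-addr 192.168.99.100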

Now let’s run it. You need to give your app a name. Here, it is set to getstartedlab:

docker stack deploy -c docker-compose.yml getstartedlab

Our single service stack is running 5 container instances of our deployed image on one host. Let’s investigate.

Get the service ID for the one service in our application:

docker service ls

Look for output for the web service, prepended with your app name. If you named it the same as shown in this example, the name is getstartedlab_web.

The service ID is listed as well, along with the number of replicas, image name, and exposed ports.
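For a more detailed, human-readable summary of the same service (replica count, resource limits, published ports), docker service inspect with the --pretty flag is also an option, assuming the getstartedlab name used in this example:

docker service inspect --pretty getstartedlab_web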

A single container running in a service is called a task. Tasks are given unique IDs that numerically increment, up to the number of replicas you defined in docker-compose.yml. List the tasks for your service:

docker service ps getstartedlab_web

  

Tasks also show up if you just list all the containers on your system, though that is not filtered by service:

docker container ls -q
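If you do want to narrow that listing to this one service, filtering by name is an option; a sketch, assuming the getstartedlab_web service name from this example:

docker container ls --filter "name=getstartedlab_web"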

 

You can run curl -4 http://localhost:4000 several times in a row, or go to that URL in your browser and hit refresh a few times. Either way, the container ID changes, demonstrating the load-balancing; with each request, one of the 5 tasks is chosen, in a round-robin fashion, to respond. The container IDs match your output from the previous command (docker container ls -q).
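To watch the round-robin behavior from a shell, a short loop is enough; a sketch, assuming a bash-like shell and the 4000:80 port mapping from the docker-compose.yml above:

# each response should report a different container ID as requests rotate across the 5 tasks
for i in $(seq 5); do curl -4 http://localhost:4000; done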

Running Windows 10?

Windows 10 PowerShell should already have curl available; if not, you can grab a Linux terminal emulator like Git BASH, or download wget for Windows, which is very similar.

Slow response times?

Depending on your environment’s networking configuration, it may take up to 30 seconds for the containers to respond to HTTP requests. This is not indicative of Docker or swarm performance, but rather an unmet Redis dependency that we address later in the tutorial. For now, the visitor counter isn’t working for the same reason; we haven’t yet added a service to persist data.

Scale the app

You can scale the app by changing the replicas value in docker-compose.yml, saving the change, and re-running the docker stack deploy command:

docker stack deploy -c docker-compose.yml getstartedlab

  

Docker performs an in-place update; there is no need to tear the stack down first or kill any containers.

Now, re-run docker container ls -q to see the deployed instances reconfigured.

If you scaled up the replicas, more tasks, and hence more containers, are started.
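For example, to go from 5 replicas to 10 (an arbitrary illustrative value), you would only change the replicas line in docker-compose.yml and redeploy; a sketch:

    deploy:
      replicas: 10    # was 5

docker stack deploy -c docker-compose.yml getstartedlab
docker service ps getstartedlab_web    # the task list should now show 10 replicas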

 

Take down the app and the swarm

  • Take the app down with docker stack rm:

    docker stack rm getstartedlab
  • Take down the swarm.

    docker swarm leave --force
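To confirm the teardown, docker container ls -q should eventually return nothing once the tasks have shut down (assuming nothing else is running on this host), and docker stack ls, if run before leaving the swarm, should no longer list getstartedlab:

docker container ls -q    # empty output once all tasks have stopped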

It’s as easy as that to stand up and scale your app with Docker.

You’ve taken a huge step towards learning how to run containers in production.

Up next, you learn how to run this app as a bona fide swarm on a cluster of Docker machines.

Note: Compose files like this are used to define applications with Docker, and can be uploaded to cloud providers using Docker Cloud, or on any hardware or cloud provider you choose with Docker Enterprise Edition.

Recap and cheat sheet (optional)


To recap, while typing docker run is simple enough, the true implementation of a container in production is running it as a service.

Services codify a container’s behavior in a Compose file, and this file can be used to scale, limit, and redeploy our app.

Changes to the service can be applied in place, as it runs, using the same command that launched the service: docker stack deploy.

Some commands to explore at this stage:

docker stack ls                                 # List stacks or apps
docker stack deploy -c <composefile> <appname>  # Run the specified Compose file
docker service ls                               # List running services associated with an app
docker service ps <service>                     # List tasks associated with an app
docker inspect <task or container>              # Inspect task or container
docker container ls -q                          # List container IDs
docker stack rm <appname>                       # Tear down an application
docker swarm leave --force                      # Take down a single node swarm from the manager

  

 
