Connecting Elixir Nodes with libcluster, locally and on Kubernetes
Transcript
In the last few articles we saw how to make our Phoenix chat app distributed: first with Redis, and then with distributed Elixir, connecting the nodes together.
We had just one problem: we had to manually connect the nodes in the IEx console, which is an issue in production. In this video we will see how to automatically cluster the Phoenix chat nodes using the libcluster library, both locally and on a Kubernetes cluster with a dynamic number of nodes.
Let’s download the code of the Phoenix chat example from my GitHub account, poeticoding, using the pubsub_pg2 branch. Let’s clone the repository and check out the pubsub_pg2 branch.
$ git clone https://github.com/poeticoding/phoenix_chat_example.git
...
$ cd phoenix_chat_example
$ git checkout pubsub_pg2
$ mix deps.get
Let’s download the dependencies and run the app locally, passing the port as an environment variable: the first node, a, runs on port 4000, and we start another Phoenix server, node b, on port 4001.
# Node A
$ PORT=4000 iex --sname a -S mix phx.server
# Node B
$ PORT=4001 iex --sname b -S mix phx.server
Okay, great. Let’s now connect node A to node B, and check that the two nodes are connected correctly. Then let’s try the chat app with two browsers.
iex(a@mbp)> Node.connect :b@mbp
true
iex(b@mbp)> Node.list
[:a@mbp]
So, let’s connect one tab to port 4000 (node A), and the other tab to port 4001 (node B). We see that the messages are propagated correctly.
libcluster
We had to manually connect the nodes using the connect/1 function in the Node module. Let’s see how to use libcluster to automatically connect the nodes.
First, we need to add the libcluster library as a dependency.
# mix.exs
defp deps do
  [
    ...
    {:libcluster, "~> 3.0"}
  ]
end
# lib/chat.ex
defmodule Chat do
  use Application

  def start(_type, _args) do
    import Supervisor.Spec, warn: false

    topologies = [
      chat: [
        strategy: Cluster.Strategy.Gossip
      ]
    ]

    children = [
      {Cluster.Supervisor, [topologies, [name: Chat.ClusterSupervisor]]},
      supervisor(Chat.Endpoint, [])
    ]

    opts = [strategy: :one_for_one, name: Chat.Supervisor]
    Supervisor.start_link(children, opts)
  end
end
We then need to start a Cluster.Supervisor, which is part of the libcluster library, with a list of topologies. We use the Gossip strategy, which uses multicast UDP to gossip node names to the other nodes in the network.
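The Gossip strategy needs no configuration to work, but it does accept options. As a minimal sketch (the values below are, to my knowledge, libcluster’s defaults, shown only to make the knobs visible; treat them as an assumption rather than something this example requires):

# lib/chat.ex - optional Gossip settings (a sketch; assumed defaults)
topologies = [
  chat: [
    strategy: Cluster.Strategy.Gossip,
    config: [
      port: 45_892,                   # UDP port used for gossiping
      if_addr: "0.0.0.0",             # interface to bind to
      multicast_addr: "230.1.1.251",  # multicast group
      multicast_ttl: 1                # keep the gossip on the local network
    ]
  ]
]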
# Node A
$ PORT=4000 iex --sname a -S mix phx.server
# Node B
$ PORT=4001 iex --sname b -S mix phx.server
# Node C
$ PORT=4002 iex --sname c -S mix phx.server
Three Phoenix nodes connected
Great, and it should work straight away. As before, we start node A on port 4000 and node B on port 4001, and with Node.list we see that node A is now connected to node B and vice versa: we didn’t have to connect them manually. The same happens if we add another node, C, on port 4002: they all connect automatically.
Kubernetes
Let’s now see how to deploy this distributed application on Kubernetes, making the clustering of the Elixir nodes automatic with libcluster.

We are going to deploy multiple chat nodes on my local Kubernetes setup, but what I’m going to show you should work without any radical change on any cloud provider. We’ll deploy our chat nodes with a Kubernetes Deployment and connect them together automatically thanks to libcluster and a Kubernetes headless service, which we’ll see in a moment. We will then create a load balancer, which spreads the connections from different browsers across the chat nodes.
So, what is a headless service? Let’s see with a simple Nginx deployment, defined in the nginx_kube_test.yaml file. You can find all this code under the libcluster branch.
# nginx_kube_test.yaml
kind: Service
apiVersion: v1
metadata:
  name: nginx-nodes
  namespace: default
spec:
  clusterIP: None
  selector:
    app: nginx
  ports:
    - name: http
      port: 80
It’s a service, but we specify clusterIP: None. Its DNS name will be nginx-nodes, under the default namespace, and the targeted port, in this case, is 80.
The same file also contains a plain Nginx deployment with 4 replicas, whose pods the service selects through the app: nginx label.
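The deployment part isn’t shown in this transcript; a minimal sketch, assuming the standard nginx image, could look like this:

# nginx_kube_test.yaml (continued) - a sketch of the deployment part
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nginx
  namespace: default
spec:
  replicas: 4
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80

Let’s create the service and the deployment.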
$ kubectl apply -f nginx_kube_test.yaml
We then start an Ubuntu container, installing dnsutils and curl.
$ kubectl run bash --rm -it --image ubuntu -- bash
# apt-get update && apt-get install dnsutils curl -y
# nslookup nginx-nodes
We see that using this DNS name we are able to list all the nginx pods. If we scale out, adding more replicas, and launch nslookup nginx-nodes again, we see that the new pods are all present in the list.
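For example, to scale out we could use kubectl scale (assuming the deployment is named nginx, as in the sketch above):

$ kubectl scale deployment nginx --replicas=6

Launching nslookup nginx-nodes again from the Ubuntu container, the new pod IPs appear in the answer.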
Let’s start by changing the topology. We now use the Cluster.Strategy.Kubernetes.DNS strategy, which relies on the headless service we’re going to create.
# lib/chat.ex
topologies = [
  k8s_chat: [
    strategy: Cluster.Strategy.Kubernetes.DNS,
    config: [
      service: "chat-nodes",
      application_name: "chat"
    ]
  ]
]
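This strategy periodically resolves the headless service’s DNS name and connects to the nodes it finds, expecting each node to be named application_name@pod_ip, which is exactly how we’ll start the Erlang VM in each pod. Inside a pod we would then see something like this (the pod IPs below are made up):

iex(chat@10.1.0.23)> Node.list
[:"chat@10.1.0.24", :"chat@10.1.0.25", :"chat@10.1.0.26"]

To see the cluster from the browser, we also show the current node and the list of connected nodes on the chat page: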
# web/controllers/page_controller.ex
defmodule Chat.PageController do
  use Chat.Web, :controller

  def index(conn, _params) do
    self_node = inspect(node())
    nodes = inspect(Node.list())
    render(conn, "index.html", %{node: self_node, nodes: nodes})
  end
end
# web/templates/page/index.html.eex
<div>
  <p>nodes: <%= @nodes %></p>
  <p>self: <%= @node %></p>
</div>
<div id="messages" class="container">
</div>
...
So the application is now ready, and we need to build a Docker image. But before building it, let’s first look at the headless service Kubernetes file.
kind: Service
apiVersion: v1
metadata:
  name: chat-nodes
  namespace: default
spec:
  clusterIP: None
  selector:
    app: chat
  ports:
    - name: epmd
      port: 4369
We expose the EPMD port (4369), and the DNS name is chat-nodes. We also create a chat load balancer.
kind: Service
apiVersion: v1
metadata:
  name: chat
  namespace: default
spec:
  type: LoadBalancer
  selector:
    app: chat
  ports:
    - name: http
      port: 8000
      targetPort: 4000
Let’s see the deployment.
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: chat
  namespace: default
spec:
  replicas: 4
  selector:
    matchLabels:
      app: chat
  template:
    metadata:
      labels:
        app: chat
    spec:
      containers:
        - name: phoenix-chat
          image: chat:libcluster # alvises/phoenix-chat-example:libcluster-kube
          ports:
            - containerPort: 4000
          env:
            - name: PORT
              value: "4000"
            - name: PHOENIX_CHAT_HOST
              value: "localhost"
            - name: ERLANG_COOKIE
              value: "secret"
            - name: MY_POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          command: ["elixir"]
          args: [
            "--name", "chat@$(MY_POD_IP)",
            "--cookie", "$(ERLANG_COOKIE)",
            "--no-halt",
            "-S", "mix", "phx.server"
          ]
At first, we create 4 replicas. We are going to build our own image, but you can use the one I’ve published on DockerHub: alvises/phoenix-chat-example:libcluster-kube.
The exposed container port is 4000. We also need to set the same Erlang cookie on each node (in production it’s better to use Kubernetes Secrets).
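As a sketch of the Secrets approach (the secret name chat-secrets and its key erlang-cookie are hypothetical), the cookie could be injected like this instead:

# hypothetical Secret named "chat-secrets" with an "erlang-cookie" key
- name: ERLANG_COOKIE
  valueFrom:
    secretKeyRef:
      name: chat-secrets
      key: erlang-cookie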
The important part is the MY_POD_IP environment variable, where we read the IP of each pod from the downward API. We then use this variable when we start the server, specifying the node name and cookie.
elixir --name chat@$(MY_POD_IP) --cookie $(ERLANG_COOKIE) --no-halt -S mix phx.server
Building the Docker image is pretty simple.
$ docker image build -t chat:libcluster .
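The Dockerfile itself isn’t shown in this transcript; a rough sketch of what it could look like (the Elixir image tag and the build steps are assumptions, the real file is in the repository’s libcluster branch):

# Dockerfile - a minimal sketch, not the repository's actual file
FROM elixir:1.8
WORKDIR /app
RUN mix local.hex --force && mix local.rebar --force
COPY . .
RUN mix deps.get && mix compile
EXPOSE 4000
# node name, cookie and start command are overridden by the
# deployment's command/args seen above
CMD ["mix", "phx.server"]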
Let’s create the chat deployment and services in Kubernetes.
$ kubectl create -f kube_chat_deploy_and_svc.yaml
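We can check that everything is up before testing (pod names and IPs will differ on your cluster):

$ kubectl get pods -l app=chat
$ kubectl get svc chat chat-nodes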
We then connect through the load balancer on our local port 8000. We see the node list, and that the nodes connect automatically. If we add new replicas, we will see the new nodes appear in the node list almost immediately, as shown below.
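For example, scaling the deployment to 6 replicas (reusing kubectl scale, as in the nginx test earlier) should make two new chat nodes show up in the page’s node list:

$ kubectl scale deployment chat --replicas=6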
Wrap up
We saw how easy it is with libcluster to connect the nodes together, and to deploy a distributed Phoenix chat application, also on Kubernetes.
If you have a question or something wasn’t clear, please post a comment in the comment section below, and subscribe to be updated with new articles and screencasts. See you next week!