Reposted from: https://www.poeticoding.com/distributed-phoenix-chat-with-pubsub-pg2-adapter/

In this article we’ll see how to cluster the Phoenix Chat nodes, using a really powerful functionality embedded in the BEAM (the Elixir/Erlang VM) that lets Elixir nodes communicate easily. We’ll then see how pg2 works and inspect how Phoenix efficiently broadcasts messages in a distributed chat app.

We previously saw, in Distributed Phoenix Chat using Redis PubSub, how to distribute multiple Phoenix Chat nodes and broadcast the messages using Redis. It worked well and it’s really easy to set up, especially in a Kubernetes cluster. Each Chat node just needs to know the internal Redis server DNS name and port to connect to.

This approach is easy but has some drawbacks:

The Redis server acts as a single point of failure: if Redis goes down, the whole service goes down, since there is no way for the nodes to broadcast messages to clients connected to other nodes.

Single point of failure

We then also need to maintain a Redis server, or a whole new cluster of Redis servers. With Docker and Kubernetes it’s really easy to spawn new services in the cluster, but we need to keep in mind that maintaining a new server in production doesn’t come for free, especially under heavy load.

Clustering Elixir nodes

Distributed Phoenix

At first we need to fully connect each node to the other nodes, using the communication protocol embedded in the Erlang VM. I’ve briefly shown in Running Elixir in Docker Containers how to connect multiple Elixir nodes using Docker.

Let’s quickly see how to manually connect two Elixir nodes using iex in two separate terminals. We need to start the two iex sessions, setting the node name and IP with the --name option.

# Terminal 1
$ iex --name a@127.0.0.1
iex(a@127.0.0.1)>

# Terminal 2
$ iex --name b@127.0.0.1
iex(b@127.0.0.1)> Node.connect :"a@127.0.0.1"
true
iex(b@127.0.0.1)> Node.list
[:"a@127.0.0.1"]

Connecting two Elixir nodes

Using the Node.connect/1 function we’ve created a cluster made of two nodes: a@127.0.0.1 and b@127.0.0.1. Once the nodes are connected we can start sending messages to remote processes, just as we would on a single node.

# Terminal 1
iex(a@127.0.0.1)> Agent.start(
  fn -> %{hello: "world"} end,
  name: {:global, GlobalAgent}
)
{:ok, #PID<0.116.0>}

# Terminal 2
iex(b@127.0.0.1)> Agent.get {:global, GlobalAgent}, &(&1)
%{hello: "world"}

In a@127.0.0.1 we start an Agent process, registering it under the GlobalAgent name in the global registry. The node b@127.0.0.1 then sends a message to GlobalAgent running on a@127.0.0.1 and gets its state.

Sending messages to a remote Agent process
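As a quick aside: since the Agent is registered in the :global registry, any connected node can also resolve its PID explicitly with Erlang’s :global module. A minimal sketch:

# Terminal 2 - resolve the globally registered name, then query the remote Agent
iex(b@127.0.0.1)> pid = :global.whereis_name GlobalAgent
iex(b@127.0.0.1)> Agent.get pid, &(&1)
%{hello: "world"}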

We can easily configure Phoenix to leverage this powerful functionality to broadcast messages to remote nodes.

PG2 Module

Before configuring our Phoenix Chat with the PG2 PubSub adapter, let’s dig a bit into understanding what PG2 is and how it works.

pg2 is an Erlang module which implements process groups. Process groups are useful when we need to group processes distributed over multiple nodes, so we can easily monitor them and send them messages.

This module implements process groups. Each message can be sent to one, some, or all group members.

http://erlang.org/doc/man/pg2.html

Let’s see in practice how pg2 works, starting three different Elixir nodes: a@127.0.0.1, b@127.0.0.1 and c@127.0.0.1.

# Terminal 3
iex(c@127.0.0.1)> Node.connect :"a@127.0.0.1"
iex(c@127.0.0.1)> Node.connect :"b@127.0.0.1"
iex(c@127.0.0.1)> Node.list
[:"a@127.0.0.1", :"b@127.0.0.1"]
iex(c@127.0.0.1)> :pg2.create :agents_group

In the c@127.0.0.1 node, we form the cluster by connecting c to the other two nodes. We then create the :agents_group process group with the :pg2.create/1 function.

Creation of a distributed process group

Each node runs a local pg2 process, which monitors the processes in the group and holds their PIDs in the local :pg2_table ETS table. Without going deeper into the pg2 implementation itself, let’s start an agent on each node and add it to the :agents_group we have just created.

# TERMINAL 1
iex(a@127.0.0.1)> {:ok, a_pid} = Agent.start fn -> :agent_a end
{:ok, #PID<0.121.0>}
iex(a@127.0.0.1)> :pg2.join :agents_group, a_pid
iex(a@127.0.0.1)> :pg2.get_members :agents_group
[#PID<0.121.0>]

# TERMINAL 2
iex(b@127.0.0.1)> {:ok, b_pid} = Agent.start fn -> :agent_b end
{:ok, #PID<0.126.0>}
iex(b@127.0.0.1)> :pg2.join :agents_group, b_pid
iex(b@127.0.0.1)> :pg2.get_members :agents_group
[#PID<10547.121.0>, #PID<0.126.0>]

# TERMINAL 3
iex(c@127.0.0.1)> {:ok, c_pid} = Agent.start fn -> :agent_c end
{:ok, #PID<0.126.0>}
iex(c@127.0.0.1)> :pg2.join :agents_group, c_pid
iex(c@127.0.0.1)> :pg2.get_members :agents_group
[#PID<10631.121.0>, #PID<10715.126.0>, #PID<0.126.0>]

Agent processes join a pg2 group
pg2 monitors the processes in the group

We start an Agent process in each node, each one holding its own state. We add them to the :agents_group with the function :pg2.join(:agents_group, agent_pid).
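As an aside, since the local pg2 process keeps the memberships in the :pg2_table ETS table mentioned above, we can peek at it from any node. The entries are an OTP-internal detail, so their exact shape may vary between releases:

# Inspect pg2's internal membership table (OTP-internal, format may change)
iex(a@127.0.0.1)> :ets.tab2list :pg2_table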

Once the agents are added to the group, pg2 starts to monitor them: if a process exits, it is immediately removed from the group. We’ve seen that it’s quite easy to make multiple processes part of a group, but how can we send a message to the group’s members?

iex(c)> :pg2.get_members(:agents_group) \
|> Enum.map( &Agent.get(&1, fn s -> s end) )
[:agent_a, :agent_b, :agent_c]

Broadcasting a message

The pg2 module doesn’t offer a broadcast or a send function to send a message to all the members. We need to enumerate the PIDs given by the :pg2.get_members(:agents_group) function and send them a message one by one. This actually gives us the freedom to selectively send a message to just a subset of the group’s members. We’ll see later how this freedom comes in handy.
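For example, a tiny helper module can wrap this pattern. The module below is just a sketch of ours, not part of pg2, and it assumes the group already exists:

defmodule GroupBroadcast do
  # Send `message` to every member of `group`, one PID at a time.
  def broadcast(group, message) do
    group
    |> :pg2.get_members()
    |> Enum.each(&send(&1, message))
  end

  # Same idea, restricted to the members running on the local node,
  # showing the freedom to target just a subset of the group.
  def broadcast_local(group, message) do
    group
    |> :pg2.get_local_members()
    |> Enum.each(&send(&1, message))
  end
end

pg2 also exposes :pg2.get_closest_pid/1, which picks a single member and prefers a local one: handy when any member can handle the request.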

# TERMINAL 1
iex(a)> Process.exit a_pid, :halt
iex(a)> :pg2.get_members :agents_group
[#PID<10547.126.0>, #PID<10546.126.0>]

# TERMINAL 2
iex(b)> :pg2.get_members :agents_group
[#PID<0.126.0>, #PID<10546.126.0>]

pg2 monitors the processes that joined the group. When we halt one of the agents, we see that its process is immediately removed from the group on all the nodes.

PubSub.PG2 adapter

Now I’m going to use the code in the poeticoding/phoenix_chat_example GitHub repository, under the pubsub_pg2 branch.

When we create a new Phoenix app, it comes with a PubSub PG2 adapter configured by default.

# config/config.exs
config :chat, ChatWeb.Endpoint,
  ...
  pubsub: [name: Chat.PubSub, adapter: Phoenix.PubSub.PG2]
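For comparison, on the pubsub_redis branch the same entry pointed at the Redis adapter. The sketch below is indicative only: the option names and values depend on the phoenix_pubsub_redis version, so check its docs for the exact set.

# config/config.exs (pubsub_redis branch - indicative sketch, options may differ)
config :chat, ChatWeb.Endpoint,
  ...
  pubsub: [name: Chat.PubSub, adapter: Phoenix.PubSub.Redis,
           host: "redis", port: 6379, node_name: System.get_env("NODE")]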

So, coming from the previous version on the pubsub_redis branch, we just need to change the pubsub configuration in the config/config.exs file. Let’s start two iex nodes, each one running a chat server, on ports 4000 and 4001.

# NODE a
$ PORT=4000 iex --name a@127.0.0.1 -S mix phx.server
iex(a@127.0.0.1)>

# NODE b
$ PORT=4001 iex --name b@127.0.0.1 -S mix phx.server
iex(b@127.0.0.1)>

If we connect a browser to port 4000 and another browser to port 4001, we see that the messages are not propagated: the two nodes are not connected yet, so we need to cluster them.

# NODE a
iex(a@127.0.0.1)> Node.connect :"b@127.0.0.1"

Once the nodes are connected, we see that the messages are correctly broadcast from one browser to the other. It works and we don’t need any other configuration. I find it interesting to hack around a bit though, inspecting the Phoenix.PubSub.PG2 adapter to understand how it works under the hood.

Each Phoenix node starts its own local PubSub.PG2Server and registers it in a pg2 group named {:phx, Chat.PubSub}.

iex(a)> :pg2.which_groups
[phx: Chat.PubSub]
iex(a)> :pg2.get_members {:phx, Chat.PubSub}
[#PID<25838.1566.0>, #PID<0.1820.0>]

The important thing to see here is that the members of the pg2 group are the PIDs of the PubSub.PG2Server running on each node. If we spawned and connected another Phoenix node to the cluster, we would see its PubSub.PG2Server PID as the third member.

The members are not the users’ connection processes: that would be highly inefficient given how pg2 is built, since a single node would then have to send each message separately to every user connected to every other node.

How Phoenix uses PubSub with pg2

Let’s see instead how Phoenix handles a broadcast over multiple nodes.

  • We connect a browser to the HTTP server on the b@127.0.0.1 node, port 4001.
  • We send a message to the chat room. This message is sent to node b via the WebSocket connection. The PubSub.PG2Server, running locally on the node, broadcasts the message to all the browsers connected to that same node.
  • The PubSub.PG2Server in b then forwards the message to the remote PubSub.PG2Server running in a@127.0.0.1.
  • The PubSub.PG2Server in the a node then broadcasts the message to all the browsers connected to that node.

In this way the message travels over the cluster network just once, instead of once per connected user!
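To make the flow concrete, here is a rough, simplified sketch of that fan-out logic. The module and function names are ours, and the real Phoenix.PubSub.PG2Server does more work (for example the fastlane optimization); it is only meant to mirror the steps above, using the :forward_to_local message we’ll look at next.

defmodule FanOutSketch do
  # Simplified sketch: deliver `msg` on `topic` to the local subscribers,
  # then forward it once to each remote PG2Server, which performs its own
  # local fan-out to the sockets connected to its node.
  def broadcast(pubsub_name, fastlane, from_pid, topic, msg) do
    group = {:phx, pubsub_name}

    local_fanout(topic, msg)

    remote_servers =
      :pg2.get_members(group) -- :pg2.get_local_members(group)

    Enum.each(remote_servers, fn server ->
      send(server, {:forward_to_local, fastlane, from_pid, topic, msg})
    end)
  end

  # Placeholder for the node-local broadcast to the subscribed channel processes.
  defp local_fanout(_topic, _msg), do: :ok
end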

Let’s try to manually send the broadcast message from node b to the PubSub.PG2Server running on node a. The message looks like this:

# {:forward_to_local, fastlane, from_pid, topic, msg}
forward_msg = {
  :forward_to_local, Phoenix.Channel.Server,
  :none, "rooms:lobby",
  %Phoenix.Socket.Broadcast{
    event: "new:msg",
    payload: %{body: "message from node b", user: "user_b"},
    topic: "rooms:lobby"
  }
}

Sending a message to PG2Server

First we need to get the PID of the remote PubSub.PG2Server, which is part of the {:phx, Chat.PubSub} pg2 group.

iex(b@127.0.0.1)> [a_server_pid] = \
:pg2.get_members({:phx, Chat.PubSub}) -- \
:pg2.get_local_members({:phx, Chat.PubSub})

With :pg2.get_members we get all the members of the group, which are the PubSub.PG2Server running locally in b and the remote one running in a. :pg2.get_local_members returns only the processes running locally, in this case on node b. Subtracting the two lists leaves just the remote server’s PID.

Let’s connect a browser to the HTTP server of node a (port 4000) and see what happens when we forward a message to the PG2Server running in a.

iex(b@127.0.0.1)> send a_server_pid, forward_msg

We see how the message is correctly broadcast by the PubSub.PG2Server process to the open connections.
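Of course, in application code we wouldn’t craft this tuple by hand: the usual way is to go through the endpoint (or Phoenix.PubSub directly), which builds and forwards this kind of message for us. A minimal example, assuming the ChatWeb.Endpoint module from the config above:

# Broadcast "new:msg" on the "rooms:lobby" topic to every subscriber, on every node
ChatWeb.Endpoint.broadcast("rooms:lobby", "new:msg", %{
  body: "message from node b",
  user: "user_b"
})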

Wrap Up

We’ve seen how pg2 works and how Phoenix efficiently handles messages in a distributed PubSub. So far we’ve always connected the nodes manually, which is an issue when we want to deploy our app to production on a Kubernetes cluster. We’ll see in further articles how to use tools like libcluster to automatically cluster the nodes and easily scale out, using the Kubernetes DNS for node auto-discovery.

 
 
 
 
