Back-end architecture is driven by front-end requirements. Building a mobile app imposes different demands than building a web app: mobile apps need strong real-time behavior (mobile users have no patience), mobile networks are unstable (transfers must survive interruptions and resume), and data plans are limited (payloads from the back end should be as small as possible).

The main idea of Facebook's architecture is tiered data storage: hot data lives in a queue with per-consumer pointers (held in memory), cold data lives in MySQL (on flash/SSD), and the coldest data lives on spinning disk. This article is best read alongside the talk "MySQL for Messaging" from the Data track at @Scale 2014.

Facebook: Building Mobile-First Infrastructure for Messenger

Messages have been part of Facebook for many years, beginning as direct messaging similar to email (available in your inbox the next time you visited the site) and then eventually evolving into a real-time messaging platform that provides access to your
messages on a number of mobile apps or in a browser. But until recently the back-end systems hadn't evolved much from early iterations, and Messenger's performance and data usage started to lag behind — especially on networks with costly data plans and limited
bandwidth. To fix this, we needed to completely re-imagine how data is synchronized to the device and change how data is processed in the back end to support our new synchronization protocol.

The version of Messenger we released at the end of last year was the first taste of a “mobile first” experience for Facebook Messenger. For the past year, while our app developer teammates have been improving the UI and expanding Messenger's feature set,
the Messaging infrastructure team has been working to make the platform more reliable on the back end and use less data. As a result, we created a new Messenger sync protocol that decreased non-media data usage by 40% and developed a new service called Iris
to power it. By reducing congestion on the network, we've seen an approximately 20% decrease in the number of people who experience errors when trying to send a message.

The clients

The original protocol for getting data down to Messenger apps was pull-based. When receiving a message, the app first received a lightweight push notification indicating new data was available. This triggered the app to send the server a complicated HTTPS
query and receive a very large JSON response with the updated conversation view.

Instead of this model, we decided to move to a push-based snapshot + delta model. In this model, the client retrieves an initial snapshot of their messages (typically the only HTTPS pull ever made) and then subscribes to delta updates, which are immediately pushed to the app through MQTT (a low-power, low-bandwidth protocol) as messages are received. When an update is pushed, the client simply applies it to its local copy of the snapshot. As a result, without ever making an HTTPS request, the app can quickly display an up-to-date view.
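To make the model concrete, here is a minimal sketch of the client-side logic. The delta format, field names, and sequence numbers are my own assumptions for illustration, not Messenger's actual protocol, and the MQTT plumbing is left out:

```python
# Minimal sketch of snapshot + delta sync on the client.
# The delta format and field names here are hypothetical.

class ConversationView:
    def __init__(self, snapshot, last_seq):
        # One-time snapshot fetched over HTTPS.
        self.messages = dict(snapshot)   # message_id -> message body
        self.last_seq = last_seq         # last update applied

    def apply_delta(self, delta):
        """Apply one pushed update to the local snapshot."""
        if delta["seq"] <= self.last_seq:
            return  # duplicate or stale push; ignore
        if delta["op"] == "new_message":
            self.messages[delta["id"]] = delta["body"]
        elif delta["op"] == "mark_read":
            self.messages[delta["id"]]["read"] = True
        self.last_seq = delta["seq"]

# Deltas would arrive over an MQTT subscription; each one is applied
# immediately, so the view stays current without further HTTPS pulls.
view = ConversationView(snapshot={}, last_seq=0)
view.apply_delta({"seq": 1, "op": "new_message", "id": "m1",
                  "body": {"text": "hi", "read": False}})
```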

We further optimized this flow by moving away from JSON encoding for the messages and delta updates. JSON is great if you need a flexible, human-readable format for transferring data without a lot of developer overhead. However, JSON is not very efficient
on the wire. Compression helps some, but it doesn’t entirely compensate for the inherent inefficiencies of JSON’s wire format. We evaluated several possibilities for replacing JSON and ultimately decided to use Thrift. Switching to Thrift from JSON allowed
us to reduce our payload size on the wire by roughly 50%.
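The post doesn't publish the Thrift schemas, but the size difference is easy to demonstrate. The sketch below compares JSON to a fixed binary layout built with Python's struct module, purely to illustrate the kind of overhead a compact binary protocol removes; it is not Thrift itself, and the field names are hypothetical:

```python
import json
import struct

# A hypothetical "message delta" with a sender id, timestamp, and text.
sender_id, timestamp, text = 12345678901, 1417392000, "See you at 6"

# JSON spells out every field name and encodes numbers as digit strings.
as_json = json.dumps({"senderId": sender_id,
                      "timestamp": timestamp,
                      "text": text}).encode("utf-8")

# A binary layout carries only field values: two fixed-width integers
# plus a length-prefixed string. Field names live in the schema instead.
body = text.encode("utf-8")
as_binary = struct.pack(f">qiH{len(body)}s", sender_id, timestamp,
                        len(body), body)

print(len(as_json), len(as_binary))  # 74 vs. 26 bytes
```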

The server

Messaging data has traditionally been stored on spinning disks. In the pull-based model, we'd write to disk before sending a trigger to Messenger to read from disk. Thus, this giant storage tier served real-time message data as well as the full conversation history. But one large storage tier doesn't scale well for synchronizing recent messages to the app in real time. So in order to support this new, faster sync protocol and maintain consistency between the Messenger app and long-term storage, we needed to be able to stream the same sequence of updates in real time to Messenger and to the storage tier in parallel, on a per-user basis.

Iris is a totally ordered queue of messaging updates (new messages, state changes for messages read, etc.) with separate pointers into the queue indicating the last update sent to your Messenger app and to the traditional storage tier. When a message is successfully sent to disk or to your phone, the corresponding pointer is advanced. When your phone is offline, or there is a disk outage, the pointer stays in place while new messages can still be enqueued and other pointers advanced. As a result, long disk write latencies don't hinder Messenger's real-time communication, and we can keep Messenger and the traditional storage tier in sync at independent rates.
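As a rough model of this design (the names and structure below are my own, not Iris internals), here is a totally ordered log with one independently advancing pointer per consumer; a slow or offline consumer holds its pointer in place without blocking enqueues or the other consumer:

```python
# Toy model of a totally ordered update queue with per-consumer pointers.
# "phone" and "disk" stand in for the Messenger app and the storage tier.

class UpdateQueue:
    def __init__(self, consumers):
        self.log = []                              # totally ordered updates
        self.pointers = {c: 0 for c in consumers}  # next index per consumer

    def enqueue(self, update):
        # Enqueueing never waits on any consumer.
        self.log.append(update)

    def pending(self, consumer):
        # Everything this consumer has not yet acknowledged.
        return self.log[self.pointers[consumer]:]

    def ack(self, consumer, count):
        # Advance only this consumer's pointer; others are unaffected.
        self.pointers[consumer] += count

q = UpdateQueue(["phone", "disk"])
q.enqueue("msg 1")
q.enqueue("msg 2")
q.ack("disk", 2)            # disk write succeeded
print(q.pending("phone"))   # phone was offline: ['msg 1', 'msg 2']
```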


Effectively, this queue allows a tiered storage model based on recency (a read-path sketch follows the list):

  • The most recent messages are immediately sent to online apps and to the disk storage tier from Iris's memory

  • A week's worth of messages are served by the queue's backing store in the case of disk outage or the Messenger app being offline for a while

  • Older conversation history and full inbox snapshot fetches are served from the traditional disk storage tier
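A hypothetical read path over these tiers might look like the sketch below. The one-week boundary comes from the article; the tier names, the online-push shortcut, and its 60-second window are illustrative assumptions:

```python
WEEK_SECONDS = 7 * 24 * 3600

def pick_tier(age_seconds, app_online):
    """Choose which tier serves an update of the given age.

    The one-week boundary follows the article; everything else here
    is an illustrative simplification, not the real routing logic.
    """
    if app_online and age_seconds < 60:
        return "iris-memory"          # pushed straight from the queue
    if age_seconds <= WEEK_SECONDS:
        return "queue-backing-store"  # MySQL on flash
    return "disk-storage-tier"        # full conversation history

print(pick_tier(30, app_online=True))               # iris-memory
print(pick_tier(3 * 24 * 3600, app_online=False))   # queue-backing-store
print(pick_tier(30 * 24 * 3600, app_online=False))  # disk-storage-tier
```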

We looked at several existing technologies to support the queue's backing store, but couldn't find anything that met our needs in terms of scale, reliability, speed, and flexibility. Ultimately, we opted to build the queue storage on top of MySQL and flash.
For MySQL we decided to use semi-sync replication, which can give you durability across multiple servers. By leveraging this technology, we can handle database hardware failures in under 30 seconds, and the latency for enqueueing a new message is an order
of magnitude less than writing to the traditional disk storage. Since we enqueue a message once to MySQL and then push it to apps and disk in parallel, Messenger receives messages faster and more reliably.[1]
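The enqueue-once, fan-out-in-parallel flow could be modeled as follows. The semi-sync MySQL write is abstracted as a single durable call, and every name here is a placeholder rather than an Iris internal:

```python
from concurrent.futures import ThreadPoolExecutor

def durable_enqueue(update):
    # Stand-in for the semi-sync MySQL write: the real call returns
    # only after at least one replica has acknowledged the update.
    print(f"enqueued: {update}")

def push_to_app(update):
    print(f"pushed over MQTT: {update}")

def write_to_disk_tier(update):
    print(f"written to disk tier: {update}")

def deliver(update):
    # One durable write up front...
    durable_enqueue(update)
    # ...then fan out to the app and the disk tier in parallel, so a
    # slow disk write does not sit in front of the real-time push.
    with ThreadPoolExecutor(max_workers=2) as pool:
        pool.submit(push_to_app, update)
        pool.submit(write_to_disk_tier, update)

deliver("new message from Alice")
```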

Results

The benefits of this new infrastructure are quite remarkable. The new sync protocol reduces Messenger's non-media data usage by about 40%. Additionally, reducing congestion on the network leads to roughly a 20% decrease in the number of people who experience
errors when trying to send a message.

Lessons learned

When building a high-quality real-time mobile application, it's important to remember that the network is a scarce resource that must be used as efficiently as possible. Every byte wasted has a very real impact on the experience of the application. By sending less data and reducing HTTPS fetches, apps receive updates with lower latency and higher reliability. Extending desktop-focused infrastructure to a mobile world can work, but building new mobile-first infrastructure, with protocols designed for pushable devices, offers even better experiences.

Footnotes

[1] For more information on the improvements required in MySQL availability and performance to have it serve this new protocol, please see Harrison Fisk's presentation at the @Scale 2014 conference.

Thanks to all the engineers who have contributed to this project, including Andrew Lutsenko, Andy Chen, Brian Tang, Changle Wang, Domas Mituzas, Harrison Fisk, Jeff Ferland, Olivia Bishop, Pierre-Luc Bertrand, Sachin Kulkarni, Thomas Georgiou, and Ting
Yang.

The original link may be blocked in mainland China, so the article is mirrored here.

Original post: http://whosmall.com
