Futures is a framework for expressing asynchronous code in C++ using the Promise/Future pattern.

Overview

Folly Futures is an async C++ framework inspired by Twitter's Futures implementation in Scala (see also Future.scala, Promise.scala, and friends), and loosely builds upon the existing but anemic Futures code found in the C++11 standard (std::future) and boost::future (especially >= 1.53.0). Although inspired by the C++11 std::future interface, it is not a drop-in replacement because some ideas don't translate well enough to maintain API compatibility.

The primary difference from std::future is that you can attach callbacks to Futures (with then()), under the control of an executor to manage where work runs, which enables sequential and parallel composition of Futures for cleaner asynchronous code.

Brief Synopsis

#include <folly/futures/Future.h>
#include <folly/executors/ThreadedExecutor.h>
using namespace folly;
using namespace std;

void foo(int x) {
  // do something with x
  cout << "foo(" << x << ")" << endl;
}

// ...

folly::ThreadedExecutor executor;
cout << "making Promise" << endl;
Promise<int> p;
Future<int> f = p.getSemiFuture().via(&executor);
auto f2 = f.then(foo);
cout << "Future chain made" << endl;

// ... now perhaps in another event callback

cout << "fulfilling Promise" << endl;
p.setValue(42);
f2.get();
cout << "Promise fulfilled" << endl;

This would print:

making Promise
Future chain made
fulfilling Promise
foo(42)
Promise fulfilled

Blog Post

In addition to this document, there is a blog post on code.facebook.com (June 2015).

Brief guide

This brief guide covers the basics. For more in-depth coverage, skip ahead to the appropriate section.

Let's begin with an example using an imaginary simplified Memcache client interface:

using std::string;

class MemcacheClient {
 public:
  struct GetReply {
    enum class Result {
      FOUND,
      NOT_FOUND,
      SERVER_ERROR,
    };

    Result result;
    // The value when result is FOUND,
    // the error message when result is SERVER_ERROR or CLIENT_ERROR,
    // undefined otherwise
    string value;
  };

  GetReply get(string key);
};

This API is synchronous, i.e. when you call get() you have to wait for the result. That is simple, but it also makes it very easy to write slow code with a synchronous API.
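
For example, fetching many keys with the synchronous API serializes the network round trips (a sketch; keys is assumed to be a collection of string keys):

MemcacheClient mc;
vector<MemcacheClient::GetReply> replies;
for (auto& key : keys) {
  // Each call blocks for a full round trip before the next one can start.
  replies.push_back(mc.get(key));
}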

Now, consider this traditional asynchronous signature for the same operation:

int async_get(string key, std::function<void(GetReply)> callback);

When you call async_get(), your asynchronous operation begins and when it finishes your callback will be called with the result. Very performant code can be written with an API like this, but for nontrivial applications the code devolves into a special kind of spaghetti code affectionately referred to as "callback hell".
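
For instance, two dependent lookups already force one callback to be nested inside another (a sketch using the async_get() above):

async_get("foo", [](GetReply r1) {
  // decide what to fetch next based on r1 ...
  async_get("bar", [](GetReply r2) {
    // ... and each further step adds another level of nesting
  });
});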

The Future-based API looks like this:

SemiFuture<GetReply> future_get(string key);

SemiFuture<GetReply> or Future<GetReply> is a placeholder for the GetReply that we will eventually get. For most of the descriptive text below, Future can refer to either folly::SemiFuture or folly::Future as the former is a safe subset of the latter. A Future usually starts life out "unfulfilled", or incomplete, i.e.:

fut.isReady() == false
fut.value() // will throw an exception because the Future is not ready

At some point in the future, the Future will have been fulfilled, and we can access its value.

fut.isReady() == true
GetReply& reply = fut.value();

Futures support exceptions. If the asynchronous producer fails with an exception, your Future may represent an exception instead of a value. In that case:

fut.isReady() == true
fut.value() // will rethrow the exception

Just what is exceptional depends on the API. In our example we have chosen not to raise exceptions for SERVER_ERROR, but represent this explicitly in the GetReply object. On the other hand, an astute Memcache veteran would notice that we left CLIENT_ERROR out of GetReply::Result, and perhaps a CLIENT_ERROR would have been raised as an exception, because CLIENT_ERROR means there's a bug in the library and this would be truly exceptional. These decisions are judgement calls by the API designer. The important thing is that exceptional conditions (including and especially spurious exceptions that nobody expects) get captured and can be handled higher up the "stack".

So far we have described a way to initiate an asynchronous operation via an API that returns a Future, and then sometime later after it is fulfilled, we get its value. This is slightly more useful than a synchronous API, but it's not yet ideal. There are two more very important pieces to the puzzle.

First, we can aggregate Futures, to define a new Future that completes after some or all of the aggregated Futures complete. Consider two examples: fetching a batch of requests and waiting for all of them, and fetching a group of requests and waiting for only one of them.

MemcacheClient mc;

// Fetch a batch of requests and wait for all of them:
vector<SemiFuture<GetReply>> futs;
for (auto& key : keys) {
  futs.push_back(mc.future_get(key));
}
auto all = collectAll(futs.begin(), futs.end());

// Fetch a batch of requests and wait for just one of them:
vector<SemiFuture<GetReply>> futs;
for (auto& key : keys) {
  futs.push_back(mc.future_get(key));
}
auto any = collectAny(futs.begin(), futs.end());

all and any are Futures (for the exact type and usage see the header files). They will be complete when all/one of futs are complete, respectively. (There is also collectN() for when you need some.)
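
For example, to proceed once any two of the lookups have finished (a sketch; the result pairs each completed Future's index in futs with its Try<GetReply>):

auto some = collectN(futs.begin(), futs.end(), 2);
// `some` completes once two of the futures in `futs` are complete.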

Second, we can associate a Future with an executor. An executor specifies where work will run, and we detail this more later. In summary, given an executor we can convert a SemiFuture to a Future with an executor, or a Future on one executor to a Future on another executor.

For example:

folly::ThreadedExecutor executor;
SemiFuture<GetReply> semiFut = mc.future_get("foo");
Future<GetReply> fut1 = semiFut.via(&executor);

Once an executor is attached, a Future allows continuations to be attached and chained together monadically. An example will clarify:

SemiFuture<GetReply> semiFut = mc.future_get("foo");
Future<GetReply> fut1 = semiFut.via(&executor);

Future<string> fut2 = fut1.then(
  [](GetReply reply) {
    if (reply.result == MemcacheClient::GetReply::Result::FOUND)
      return reply.value;
    throw SomeException("No value");
  });

Future<Unit> fut3 = fut2
  .then([](string str) {
    cout << str << endl;
  })
  .onError([](std::exception const& e) {
    cerr << e.what() << endl;
  });

That example is a little contrived but the idea is that you can transform a result from one type to another, potentially in a chain, and unhandled errors propagate. Of course, the intermediate variables are optional.
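
For instance, the same chain can be written without naming the intermediate Futures (a sketch):

mc.future_get("foo")
  .via(&executor)
  .then([](GetReply reply) {
    if (reply.result == MemcacheClient::GetReply::Result::FOUND)
      return reply.value;
    throw SomeException("No value");
  })
  .then([](string str) { cout << str << endl; })
  .onError([](std::exception const& e) { cerr << e.what() << endl; });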

Using .then to add callbacks is idiomatic. It brings all the code into one place, which avoids callback hell.

Up to this point we have skirted around the matter of waiting for Futures. You may never need to wait for a Future, because your code is event-driven and all follow-up action happens in a then-block. But if you want a batch workflow, where you initiate a batch of asynchronous operations and then wait for them all to finish at a synchronization point, then you will want to wait for a Future. Futures have a blocking method called wait() that does exactly that and optionally takes a timeout.
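
For example, a batch workflow might block at a synchronization point like this (a sketch reusing the futs and collectAll() from the aggregation example above):

auto all = collectAll(futs.begin(), futs.end());
all.wait();                           // block until every lookup has completed
// all.wait(std::chrono::seconds(1)); // or give up waiting after a timeout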

Futures are partially threadsafe. A Promise or Future can migrate between threads as long as there's a full memory barrier of some sort. Future::then and Promise::setValue (and all variants that boil down to those two calls) can be called from different threads. But, be warned that you might be surprised about which thread your callback executes on. Let's consider an example, where we take a future straight from a promise, without going via the safer SemiFuture, and where we therefore have a Future that does not carry an executor. This is in general something to avoid.

// Thread A
Promise<Unit> p;
auto f = p.getFuture();

// Thread B
f.then(x).then(y).then(z);

// Thread A
p.setValue();

This is legal and technically threadsafe. However, it is important to realize that you do not know in which thread x, y, and/or z will execute. Maybe they will execute in Thread A when p.setValue() is called. Or, maybe they will execute in Thread B when f.then is called. Or, maybe x will execute in Thread A, but y and/or z will execute in Thread B. There's a race between setValue and then—whichever runs last will execute the callback. The only guarantee is that one of them will run the callback.

For safety, .via should be preferred. We can chain .via operations to give very strong control over where callbacks run:

aFuture
.then(x)
.via(e1).then(y1).then(y2)
.via(e2).then(z);

x will execute in the context of the executor associated with aFuture, y1 and y2 will execute in the context of e1, and z will execute in the context of e2. If after z you want to get back to the original context, you need to get there with a call to via passing the original executor. Another way to express this is using an overload of then that takes an Executor:

aFuture
.then(x)
.then(e1, y1, y2)
.then(e2, z);

Either way, there is no ambiguity about which executor will run y1, y2, or z.

You can still have a race after via if you break it into multiple statements, e.g. in this counterexample:

f2 = f.via(e1).then(y1).then(y2); // nothing racy here
f2.then(y3); // racy

You make me Promises, Promises

If you are wrapping an asynchronous operation, or providing an asynchronous API to users, then you will want to make Promises. Every Future has a corresponding Promise (except Futures that spring into existence already completed, with makeFuture()). Promises are simple: you make one, you extract the Future, and you fulfill it with a value or an exception. Example:

Promise<int> p;
SemiFuture<int> f = p.getSemiFuture();

f.isReady() == false

p.setValue(42);

f.isReady() == true
f.value() == 42

and an exception example:

Promise<int> p;
SemiFuture<int> f = p.getSemiFuture();

f.isReady() == false

p.setException(std::runtime_error("Fail"));

f.isReady() == true
f.value() // throws the exception

It's good practice to use setWith, which takes a function and automatically captures exceptions, e.g.

Promise<int> p;
p.setWith([]{
  try {
    // do stuff that may throw
    return 42;
  } catch (MySpecialException const& e) {
    // handle it
    return 0;
  }
  // Any exceptions that we didn't catch will be caught for us
});
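
Putting it together, here is a minimal sketch of how the callback-based async_get() from earlier could be wrapped to provide the Future-based future_get() API (the shared_ptr bookkeeping is just one way to keep the Promise alive until the callback fires):

SemiFuture<GetReply> future_get(string key) {
  auto p = std::make_shared<Promise<GetReply>>();
  auto f = p->getSemiFuture();
  async_get(std::move(key), [p](GetReply reply) {
    // Fulfill the Promise when the underlying callback fires.
    p->setValue(std::move(reply));
  });
  return f;
}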
