
Elastic Load Balancing with Sticky Sessions

by SHLOMO SWIDLER on APRIL 8, 2010

At long last, the most oft-requested feature for EC2’s Elastic Load Balancer is here: session affinity, also known as “sticky sessions”. What is session affinity? Why is this feature in such high demand? How can it be used with existing applications? Let’s take a look at these questions. But first, let’s explore what a session is – then we’ll cover why we want it to be sticky, and what ELB’s sticky session limitations are. [To skip directly to an explanation of how to use ELB sticky sessions, go toward the bottom of the article.]

What is a Session?

A session is a way to get your application involved in a long-lasting conversation with a particular client. Without a session, a conversation between your application and a client would look like something straight out of the movie Memento. It would go something like this:

Life Without Sessions

Client: Hi, I’d like to see /products/awesomeDoohickey.html

Application: I don’t know who you are. Please go here to login first: /login

Client: OK, I’d like to see /login

Application: Here it is: “…”

Client: Thanks. Here’s the filled in login form.

Application: Thanks for logging in. Where do you want to go?

Client: I’d like to see /products/awesomeDoohickey.html

Application: I don’t know who you are. Please go here to login first: /login

Client: >Sigh< OK, I’d like to see /login

Application: Happily! Here it is: “…”

Client: Here’s the filled in login form.

Application: Thanks for logging in. Where do you want to go?

Client: Show me /products/awesomeDoohickey.html already!

Application: I don’t know who you are. Please go here to login first: /login

Client: *$#%& this!

The application can’t remember who the client is – it has no context to process each request as part of a conversation. The client gets so frustrated he starts thinking he’s living in an Adam Sandler movie.

On a technical level: Each HTTP request-response pair between the client and application happens (most often) on a different TCP connection. This is especially true when a load balancer sits between the client and the application. So the application can’t use the TCP connection as a way to remember the conversational context. And, HTTP itself is stateless: any request can be sent at any time, in any sequence, regardless of the preceding requests. Sure, the application may demand a particular pattern of interaction – like logging in before accessing certain resources – but that application-level state is enforced by the application, not by HTTP. So HTTP cannot be relied on to maintain conversational context between the client and the application.

There are two ways to solve this problem of forgetting the context. The first is for the client to remind the application of the context every time he requests something: “My name is Henry Whatsisface, I have these items in my shopping cart (…), I got here via this affiliate (…), yada yada yada… and I’d like to see /products/awesomeDoohickey.html”. No sane client would ever agree to interact with an application that needed to be sent the entire context at every stage of the conversation. It’s burdensome for the client, it’s difficult to maintain for the application, and it’s expensive (in bandwidth) for both of them. Besides, the application usually maintains the conversational state, not the client. So it’s wrong to require the client to send the entire conversation context along with each request.

The accepted solution is to have the application remember the context by creating an associated memento. This memento is given to the client and returned to the application on subsequent requests. Upon receiving the memento the application looks for the associated context, and – voila – discovers it. Thus, the conversation is preserved.

One way of providing a memento is by putting it into the URL. It looks really ugly when you do this: http://www.example.com/products/awesomeDoohickey.html?sessionID=0123456789ABCDEFGH

More commonly, mementos are provided via cookies, which all browsers these days support. Cookies are placed within the HTTP request so they can be discovered by the application even if a load balancer intervenes.
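To make that concrete, here is a simplified sketch of the headers involved (the cookie name and value are invented for illustration, matching the sessionID example above).

The application’s response hands the client the memento:

    HTTP/1.1 200 OK
    Set-Cookie: sessionID=0123456789ABCDEFGH; Path=/

The client’s next request returns it:

    GET /products/awesomeDoohickey.html HTTP/1.1
    Host: www.example.com
    Cookie: sessionID=0123456789ABCDEFGH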

Here’s what that conversation looks like with cookies:

Life With Sessions, Take 1

Client: Hi, I’d like to see /products/awesomeDoohickey.html

Application: I don’t know who you are. Please go here to login first: /login

Client: OK, I’d like to see /login

Application: Here it is: “…”

Client: Thanks. Here’s the filled in login form.

Application: Thanks for logging in. Here’s a cookie. Where do you want to go?

Client: I’d like to see /products/awesomeDoohickey.html and here’s my cookie.

Application: I know you – I’d recognize that cookie anywhere! Great, here’s that page: “…”

Client: I’d like to buy 5000 units. Here’s my cookie.

Much improved, yes?

A side point: most modern applications will provide a cookie earlier in the conversation. This allows the following, more efficient, conversation:

Life With Sessions, Take 2

Client: Hi, I’d like to see /products/awesomeDoohickey.html

Application: I don’t know who you are. Here’s a cookie. Take this login page and fill it out: “…”

Client: OK. Here’s the filled in login form. And here’s my cookie.

Application: I know you – I’d recognize that cookie anywhere! Thanks for logging in. I recall you wanted to see /products/awesomeDoohickey.html. Here it is: “…”

Client: I’d like to buy 5000 units. Here’s my cookie.

That’s about as optimized a conversation as you can have. Cookies make it possible.

What is Session Affinity (Sticky Sessions)? Why is it in High Demand?

When you only have one application server talking to your clients, life is easy: all the session contexts can be stored in that application server’s memory for fast retrieval. But in the world of highly available and scalable applications there’s likely to be more than one application server fulfilling requests, behind a load balancer. The load balancer routes the first request to an application server, which stores the session context in its own memory and gives the client back a cookie. The next request from the same client will contain the cookie – and, if the same application server gets the request again, the application will rediscover the session context. But what happens if that client’s next request instead gets routed to a different application server? That application server will not have the session context in its memory – even though the request contains the cookie, the application can’t discover the context.

If you’re willing to modify your application you can overcome this problem. You can store the session context in a shared location, visible to all application servers: the database or memcached, for example. All application servers will then be able to look up the cookie in the central, shared location and discover the context. Until now, this was the approach you needed to take in order to retain the session context behind an Elastic Load Balancer.
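For illustration only (this is not from the original article), here is a minimal Python sketch of that shared-store approach. It assumes the python-memcached client library and a memcached node reachable by every application server; the address and key scheme are made up:

    import memcache  # python-memcached client library (assumed available)

    # A memcached node visible to every application server (hypothetical address)
    shared_store = memcache.Client(['10.0.0.5:11211'])

    def save_session(session_id, context):
        # Store the conversational context under the cookie's session ID,
        # keeping it for 15 minutes (900 seconds)
        shared_store.set('session:' + session_id, context, time=900)

    def load_session(session_id):
        # Any application server can recover the context, regardless of
        # which server originally created it
        return shared_store.get('session:' + session_id)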

But not all applications can be modified in this way. And not all developers want to modify existing applications. Instead of modifying the application, you need the load balancer to route the same client to the same application server. Once the client’s request has been routed to the correct application server, that application server can look up the cookie in its own memory and recover the conversational context.

That’s what sticky sessions are: the load balancer routing the same client to the same application server. And that’s why they’re so important: If the load balancer supports sticky sessions then you don’t need to modify your application to remember client session context.

How to Use ELB Sticky Sessions with Existing Applications

The key to managing ELB sticky sessions is the duration of the stickiness: how long the client should consistently be routed to the same back-end instance. Too short, and the session context will be lost, forcing the client to log in again. Too long, and the load balancer will not be able to distribute requests equally across the application servers.

Controlling the ELB Stickiness Duration

ELB supports two ways of managing the stickiness duration: either by specifying the duration explicitly, or by indicating that the stickiness expiration should follow the expiration of the application server’s own session cookie.

If your application server has an existing session cookie already, the simplest way to get stickiness is to configure your ELB to use the existing application cookie for determining the stickiness duration. PHP applications usually have a session cookie called PHPSESSID. Java applications usually have a session cookie called JSESSIONID. The expiration of these cookies is controlled by your application, and the stickiness expiration can be set to match as follows. Assuming your load balancer is called myLoadBalancer and it has an HTTP listener on port 80:

elb-create-app-cookie-stickiness-policy myLoadBalancer --cookie-name PHPSESSID --policy-name followPHPPolicy
elb-set-lb-policies-of-listener myLoadBalancer --lb-port 80 --policy-names followPHPPolicy

The above commands create a stickiness policy that says “make the session stickiness last as long as the cookie PHPSESSID does” and sets the load balancer to use that stickiness policy. Behind the scenes, the ELB’s session cookie will have the same lifetime as the PHPSESSID cookie provided by your application.

If your application does not have its own session cookie already, set your own stickiness duration for the load balancer, as follows:

elb-create-lb-cookie-stickiness-policy myLoadBalancer --policy-name fifteenMinutesPolicy --expiration-period 900
elb-set-lb-policies-of-listener myLoadBalancer --lb-port 80 --policy-names fifteenMinutesPolicy

These commands create a stickiness policy that says “make the session stickiness last for fifteen minutes” and sets the load balancer to use that stickiness policy. Behind the scenes, the ELB’s session cookie will have a lifetime of fifteen minutes.
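For reference, the stickiness is implemented by a cookie that the load balancer itself inserts into the response (named AWSELB), so a response passing through the ELB would carry an extra header roughly along these lines, with an opaque value chosen by ELB (the value shown here is made up):

    Set-Cookie: AWSELB=0123456789ABCDEF0123456789ABCDEF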

What Can’t ELB Sticky Sessions Do?

Life is not all roses with ELB’s sticky session support. Here are some things it can’t do.

Update October 2010: ELB now supports SSL termination, and it can provide sticky sessions over HTTPS as well.

HTTPS

Remember how sticky sessions are typically provided via cookies? The cookie is inserted into the HTTP request by the client’s browser, and any server or load balancer that can read the request can recover the cookie. This works great for plain old HTTP-based communications.

With HTTPS connections the entire communications stream is encrypted. Only servers that have the proper decryption credentials can decipher the stream and discover the cookies. If the load balancer has the server’s SSL certificate then it can decrypt the stream. Because it does not have your application’s SSL certificate (and there’s no way to give it your certificate), ELB does not support HTTPS communications. If you need to support sticky sessions and HTTPS in EC2 then you can’t use ELB today. You need to use HAProxy or aiCache or another product that provides load balancing with session affinity and SSL termination.

Scaling-down-proof stickiness

What happens when you add or remove an application server to/from the load balancer? Depending on the stickiness implementation the load balancer may or may not be able to route requests to the same application servers as it did before the scaling event (caused, for example, by an AutoScaling trigger).

When scaling up (adding more application servers) ELB maintains stickiness of existing sessions. Only new connections will be forwarded to the newly-added application servers.

When scaling down (removing application servers), you should expect some of your clients to lose their sessions and require logins again. This is because some of the stored sessions were on the application server that is no longer servicing requests.

If you really want your sessions to persist even through scaling-down events, you need to go back to basics: your application will need to store the sessions independently, as it did before sticky sessions were supported. In this case, sticky session support can provide an added optimization, allowing you to cache the session locally on each application server and only retrieve it from the central session store (the DB?) if it’s not in the local cache. Such cache misses would happen when application servers are removed from the load balancing pool, but otherwise would not impact performance. Note that this technique can be used equally well with ELB and with other load balancers.
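As a rough Python sketch of that cache-plus-fallback pattern (reusing the hypothetical load_session helper from the shared-store sketch above; this is an illustration, not the article’s code):

    local_cache = {}  # per-server, in-memory cache of session contexts

    def get_session(session_id):
        context = local_cache.get(session_id)
        if context is None:
            # Cache miss: for example, a server was removed from the pool and
            # its clients were re-routed here. Fall back to the central store.
            context = load_session(session_id)
            if context is not None:
                local_cache[session_id] = context
        return context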

With the introduction of sticky sessions for ELB, you – the application developer – can avoid modifying your application in order to retain session context behind a load balancer. The technical term for this is “a good thing”. Sticky sessions are, despite their limitations, a very welcome addition to ELB’s features.

Thanks to Jeff Barr of Amazon Web Services for providing feedback on drafts of this article.
