https://groups.google.com/forum/#!topic/golang-nuts/I7a_3B8_9Gw

https://groups.google.com/forum/#!msg/golang-nuts/coc6bAl2kPM/ypNLG3I4mk0J

ask:   -----------------------

Hello,

I'm curious what the proper way to listen on multiple sockets simultaneously is.
Please consider the two implementations I currently have: http://play.golang.org/p/LOd7q3aawd
 
Receive1 is a rather simple implementation, spawning a goroutine per connection and simply blocking on each one via the connection's ReadFromUDP method.
Receive2 is a more C-style approach, using a real select to block until one of the sockets actually receives something, and only then attempting to read.
 
Now, Receive2 is _supposedly_ better, since we rely on the kernel to notify select when one of the file descriptors is ready for reading. That would, under ideal conditions (and perhaps with fewer goroutines), allow the process itself to fall asleep until data starts pouring into the socket.
Receive1 relies on the I/O read blocking until something comes along, and does so for each connection. Or at least that's my understanding; I'm not sure whether, internally, the connection is still using select. Even if it were, I don't think the Go scheduler is smart enough to put the whole process to sleep while two separate goroutines are waiting for I/O. That said, Receive1 looks much better than its counterpart.
 
If there is also a better way than either of these, please share it with me.
 
 
answer1:   ------------------------------------------
Receive1 is better. Go will use asynchronous I/O (equivalent to select) under the covers for you. The Go scheduler is smart enough to "put the whole process to sleep if [all] goroutines are waiting for I/O". Don't worry about it. Just use idiomatic Go.
 
answer2:   ------------------------------------------
Receive1 is certainly the Go way. I wonder, however, why you need to read from two UDP ports. UDP is connectionless, so you can support multiple clients with one open UDP port.

 

That being said, you should know that any goroutine blocking in a system call consumes one kernel thread. This will not be a problem until you need to support thousands of connections. But at that scale the file-descriptor bitmaps used by select become a performance bottleneck as well. In that situation you might want to look at epoll on Linux; on other systems, poll might be an alternative. If you are in this territory, I strongly recommend having a look at Michael Kerrisk's excellent reference "The Linux Programming Interface".

 
answer3:   ------------------------------------------
It's true that a goroutine blocking in a syscall consumes a kernel thread. However, Receive1 will *not* use any kernel threads while waiting in conn.ReadFromUDP, because under the covers, the Go runtime uses nonblocking I/O for all network activity. It's much better just to rely on the runtime implementation of network I/O rather than trying to roll your own. If you don't believe me, try doing syscall traces or profiling to prove it out.
 
answer4:   ------------------------------------------
The Receive2 approach is not portable (due to the raw syscalls) and is more complex. Also, unless profiling can prove otherwise, the efficiency of the approach is speculation.
 
answer5:   ------------------------------------------
Matt, thank you, I learned something. During network access the goroutine is not blocked in a syscall, and Go already uses epoll internally. So unless you know what you are doing, the goroutine approach will work best.
 
 
Example code from this thread:
package main

import (
	"fmt"
	"net"
	"os"
	"syscall"
)

// Receive1: one goroutine per connection, blocking in ReadFromUDP.
func Receive1(conn1, conn2 *net.UDPConn, done chan struct{}) <-chan string {
	res := make(chan string)
	for _, conn := range []*net.UDPConn{conn1, conn2} {
		go func(conn *net.UDPConn) {
			buf := make([]byte, 1024)
			for {
				select {
				case <-done:
					return
				default:
					if n, _, err := conn.ReadFromUDP(buf); err == nil {
						fmt.Println(string(buf[:n]))
						res <- string(buf[:n])
					}
				}
			}
		}(conn)
	}
	return res
}

// Receive2: C-style multiplexing over both sockets via select(2).
func Receive2(conn1, conn2 *net.UDPConn, done chan struct{}) <-chan string {
	res := make(chan string)
	fds := &syscall.FdSet{}
	filemap := map[int]*os.File{}
	maxfd := 0
	for _, conn := range []*net.UDPConn{conn1, conn2} {
		if file, err := conn.File(); err == nil {
			fd := int(file.Fd())
			FD_SET(fds, fd)
			filemap[fd] = file
			if fd > maxfd {
				maxfd = fd
			}
		}
	}
	go func() {
		for {
			select {
			case <-done:
				return
			default:
				fdsetCopy := *fds // select(2) mutates the set, so work on a copy
				tv := syscall.Timeval{Sec: 1, Usec: 0}
				if _, err := syscall.Select(maxfd+1, &fdsetCopy, nil, nil, &tv); err == nil {
					for fd, file := range filemap {
						if !FD_ISSET(&fdsetCopy, fd) {
							continue
						}
						buf := make([]byte, 1024)
						if n, err := file.Read(buf); err == nil {
							fmt.Println(string(buf[:n]))
							res <- string(buf[:n])
						}
					}
				}
			}
		}
	}()
	return res
}

// FD_SET / FD_ISSET equivalents of the C macros (64-bit Linux FdSet layout).
func FD_SET(p *syscall.FdSet, i int) {
	p.Bits[i/64] |= 1 << (uint(i) % 64)
}

func FD_ISSET(p *syscall.FdSet, i int) bool {
	return (p.Bits[i/64] & (1 << (uint(i) % 64))) != 0
}

func main() {
	fmt.Println("Hello, playground")
}
 
 
ask:  -------------------------------
Hello,
It is said that the event-driven nonblocking model is not the preferred programming model in Go, so I use the "one goroutine per client" model. But is it OK to handle millions of concurrent goroutines in a server process?
 
And how can I "select" on millions of channels to see which goroutine has received data? The select statement can only select on a predictable number of channels, not on many unpredictable ones. And how can I "select" on a TCP connection (which is not a channel) to see whether any data has arrived? Are there any "design patterns" for concurrent programming in Go?
Thanks in advance.
 
answer: ----------------------------------

Hello,

> It is said that the event-driven nonblocking model is not the preferred programming model in Go, so I use the "one goroutine per client" model. But is it OK to handle millions of concurrent goroutines in a server process?

A goroutine itself is 4kb. So 1e6 goroutines would require 4gb of base memory, plus whatever your server needs per goroutine on top of that.

Any machine that might be handling 1e6 concurrent connections should have well over 4gb of memory.
 
> And how can I "select" on millions of channels to see which goroutine has received data?

That's not how it works. You just call .Read() in each goroutine, and the select is done under the hood. The select{} statement is for channel communication, specifically.

All I/O in Go is event-driven under the hood, but the code you write looks linear. The Go runtime maintains a single thread that runs epoll or kqueue (or whatever the platform provides) under the hood, and wakes up a goroutine when new data has arrived for it.

> The select statement can only select on a predictable number of channels, not on many unpredictable ones. And how can I "select" on a TCP connection (which is not a channel) to see whether any data has arrived? Are there any "design patterns" for concurrent programming in Go?

These problems you anticipate simply do not exist with Go. Give it a shot!
 
 
 

