## 1. Scenario

I do my day-to-day development and testing on Windows with WSL2, but WSL2 regularly runs into networking problems. For example, today I was testing a project whose core task is syncing Postgres data to ClickHouse with the open-source component synch.

Components required for the test:

  1. postgres
  2. kafka
  3. zookeeper
  4. redis
  5. synch containers

My initial approach was to orchestrate the five services above with docker-compose, using `network_mode: host` for every service: given Kafka's listener/advertised-listener mechanics, host networking means no ports have to be exposed individually.

The docker-compose.yaml file is as follows:

```yaml
version: "3"

services:
  postgres:
    image: failymao/postgres:12.7
    container_name: postgres
    restart: unless-stopped
    privileged: true  # set via docker-compose env file
    command: [ "-c", "config_file=/var/lib/postgresql/postgresql.conf", "-c", "hba_file=/var/lib/postgresql/pg_hba.conf" ]
    volumes:
      - ./config/postgresql.conf:/var/lib/postgresql/postgresql.conf
      - ./config/pg_hba.conf:/var/lib/postgresql/pg_hba.conf
    environment:
      POSTGRES_PASSWORD: abc123
      POSTGRES_USER: postgres
      POSTGRES_PORT: 15432
      POSTGRES_HOST: 127.0.0.1
    healthcheck:
      test: sh -c "sleep 5 && PGPASSWORD=abc123 psql -h 127.0.0.1 -U postgres -p 15432 -c '\q';"
      interval: 30s
      timeout: 10s
      retries: 3
    network_mode: "host"

  zookeeper:
    image: failymao/zookeeper:1.4.0
    container_name: zookeeper
    restart: always
    network_mode: "host"

  kafka:
    image: failymao/kafka:1.4.0
    container_name: kafka
    restart: always
    depends_on:
      - zookeeper
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_ZOOKEEPER_CONNECT: localhost:2181
      KAFKA_LISTENERS: PLAINTEXT://127.0.0.1:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://127.0.0.1:9092
      KAFKA_BROKER_ID: 1
      KAFKA_LOG_RETENTION_HOURS: 24
      KAFKA_LOG_DIRS: /data/kafka-data  # data mount
    network_mode: "host"

  producer:
    depends_on:
      - redis
      - kafka
      - zookeeper
    image: long2ice/synch
    container_name: producer
    command: sh -c "
      sleep 30 &&
      synch --alias pg2ch_test produce"
    volumes:
      - ./synch.yaml:/synch/synch.yaml
    network_mode: "host"

  # one consumer consumes one database
  consumer:
    tty: true
    depends_on:
      - redis
      - kafka
      - zookeeper
    image: long2ice/synch
    container_name: consumer
    command: sh -c
      "sleep 30 &&
      synch --alias pg2ch_test consume --schema pg2ch_test"
    volumes:
      - ./synch.yaml:/synch/synch.yaml
    network_mode: "host"

  redis:
    hostname: redis
    container_name: redis
    image: redis:latest
    volumes:
      - redis:/data
    network_mode: "host"

volumes:
  redis:
  kafka:
  zookeeper:
```
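To bring the stack up and watch the two synch containers, the usual compose commands apply; a quick sketch using the service names defined above:

```bash
# Start everything in the background, then follow the producer/consumer logs
# to confirm synch connects to Postgres, Kafka, and Redis.
docker-compose up -d
docker-compose logs -f producer consumer
```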

During testing I needed the wal2json plugin for Postgres. Installing extra components inside the container by hand proved troublesome, and after several failed attempts I gave up and installed Postgres directly on the (WSL2) host instead, with the synch containers connecting to the host's IP and port.
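For reference, getting wal2json working on the host Postgres mainly means installing the plugin package and enabling logical decoding. A minimal sketch, assuming a Debian/Ubuntu host with the PGDG packages; the package name and config path below are assumptions for Postgres 12:

```bash
# Install the wal2json logical decoding plugin (PGDG package name; assumption).
sudo apt-get install -y postgresql-12-wal2json

# Logical decoding requires wal_level=logical plus a few replication slots.
sudo tee -a /etc/postgresql/12/main/postgresql.conf <<'EOF'
wal_level = logical
max_replication_slots = 4
max_wal_senders = 4
EOF

# Restart so the wal_level change takes effect (it is not reloadable).
sudo systemctl restart postgresql
```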

But after restarting the services, synch refused to come up: the logs showed that Postgres could not be reached. The synch configuration file is below:

```yaml
core:
  debug: true # when set to true, SQL statements are logged
  insert_num: 20000 # how many rows to buffer per submit; 20000 recommended in production
  insert_interval: 60 # how many seconds between submits; 60 recommended in production
  # enabling this will auto-create the `synch` database in ClickHouse and insert monitoring data
  monitoring: true

redis:
  host: redis
  port: 6379
  db: 0
  password:
  prefix: synch
  sentinel: false # enable redis sentinel
  sentinel_hosts: # redis sentinel hosts
    - 127.0.0.1:5000
  sentinel_master: master
  queue_max_len: 200000 # stream max length; redundant entries are deleted FIFO

source_dbs:
  - db_type: postgres
    alias: pg2ch_test
    broker_type: kafka # currently supports redis and kafka
    host: 127.0.0.1
    port: 5433
    user: postgres
    password: abc123
    databases:
      - database: pg2ch_test
        auto_create: true
        tables:
          - table: pgbench_accounts
            auto_full_etl: true
            clickhouse_engine: CollapsingMergeTree
            sign_column: sign
            version_column:
            partition_by:
            settings:

clickhouse:
  # shard hosts when clustered; inserts go to a random shard
  hosts:
    - 127.0.0.1:9000
  user: default
  password: ''
  cluster_name: # cluster mode is enabled when not empty; hosts must then contain more than one entry
  distributed_suffix: _all # suffix for distributed tables, used in cluster mode

kafka:
  servers:
    - 127.0.0.1:9092
  topic_prefix: synch
```

This was strange: Postgres was confirmed to be up and listening on its port (5433 here), yet the connection failed with both localhost and the host's eth0 address.
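The failure is easier to see with a quick probe from inside a container on the same "host" network; a sketch, reusing the Postgres image from the compose file above:

```bash
# On the WSL2 side, Postgres answers locally...
pg_isready -h 127.0.0.1 -p 5433

# ...but the same probe from a host-network container fails. With Docker
# Desktop on Windows, "host" is the Docker VM's network namespace, not the
# WSL2 distro where Postgres now runs, so its 127.0.0.1 is a different loopback.
docker run --rm --network host failymao/postgres:12.7 \
    pg_isready -h 127.0.0.1 -p 5433
```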

## 2. Solution

Googling turned up a highly upvoted Stack Overflow answer that solved the problem. The original answer:

If you are using Docker-for-mac or Docker-for-Windows 18.03+, just connect to your mysql service using the host host.docker.internal (instead of the 127.0.0.1 in your connection string).

If you are using Docker-for-Linux 20.10.0+, you can also use the host host.docker.internal if you started your Docker container with the --add-host host.docker.internal:host-gateway option.

Otherwise, read below

Use `--network="host"` in your docker run command, then 127.0.0.1 in your docker container will point to your docker host.

See the original post (linked in the references) for more details.
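On a Linux engine (20.10+), the `--add-host` variant from the answer looks like this; a sketch using the synch image from this post:

```bash
# Map host.docker.internal to the gateway of the container's network,
# then reach the host's Postgres through that name.
docker run --rm \
    --add-host host.docker.internal:host-gateway \
    long2ice/synch \
    synch --alias pg2ch_test produce
```

In docker-compose the same mapping can be declared per service with `extra_hosts: ["host.docker.internal:host-gateway"]`; with Docker Desktop the name is usually provided out of the box.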

Accessing host services from inside a container under host network mode:

Changing the Postgres address in the synch config to host.docker.internal fixed the error. The host's /etc/hosts file looks like this:


```
root@failymao-NC:/mnt/d/pythonProject/pg_2_ch_demo# cat /etc/hosts
# This file was automatically generated by WSL. To stop automatic generation of this file, add the following entry to /etc/wsl.conf:
# [network]
# generateHosts = false
127.0.0.1       localhost
10.111.130.24   host.docker.internal
```

You can see the mapping between the host IP and the domain name: accessing the name resolves to the host's IP, and from there to the service running on the host.
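A quick check that the name actually resolves and reaches Postgres from inside a container; a sketch, again reusing the Postgres image from the compose file:

```bash
# Resolve the name, then probe the host's Postgres through it.
docker run --rm --network host failymao/postgres:12.7 \
    sh -c "getent hosts host.docker.internal && pg_isready -h host.docker.internal -p 5433"
```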

The final synch configuration that started successfully is as follows:

```yaml
core:
  debug: true # when set to true, SQL statements are logged
  insert_num: 20000 # how many rows to buffer per submit; 20000 recommended in production
  insert_interval: 60 # how many seconds between submits; 60 recommended in production
  # enabling this will auto-create the `synch` database in ClickHouse and insert monitoring data
  monitoring: true

redis:
  host: redis
  port: 6379
  db: 0
  password:
  prefix: synch
  sentinel: false # enable redis sentinel
  sentinel_hosts: # redis sentinel hosts
    - 127.0.0.1:5000
  sentinel_master: master
  queue_max_len: 200000 # stream max length; redundant entries are deleted FIFO

source_dbs:
  - db_type: postgres
    alias: pg2ch_test
    broker_type: kafka # currently supports redis and kafka
    host: host.docker.internal
    port: 5433
    user: postgres
    password: abc123
    databases:
      - database: pg2ch_test
        auto_create: true
        tables:
          - table: pgbench_accounts
            auto_full_etl: true
            clickhouse_engine: CollapsingMergeTree
            sign_column: sign
            version_column:
            partition_by:
            settings:

clickhouse:
  # shard hosts when clustered; inserts go to a random shard
  hosts:
    - 127.0.0.1:9000
  user: default
  password: ''
  cluster_name: # cluster mode is enabled when not empty; hosts must then contain more than one entry
  distributed_suffix: _all # suffix for distributed tables, used in cluster mode

kafka:
  servers:
    - 127.0.0.1:9092
  topic_prefix: synch
```

## 3. Summary

1. When a container is started with `--network="host"` and needs to reach a service running on the host, use `host.docker.internal` instead of 127.0.0.1 as the address.

## 4. References

  1. https://stackoverflow.com/questions/24319662/from-inside-of-a-docker-container-how-do-i-connect-to-the-localhost-of-the-mach
