1. Scenario

I do my day-to-day development and testing on Windows with WSL2, but WSL2 frequently runs into network problems. For example, today I was testing a project whose core job is to sync Postgres data to ClickHouse using the open-source component synch.

Components required for the test:

  1. postgres
  2. kafka
  3. zookeeper
  4. redis
  5. synch containers (producer and consumer)

For the initial test, my approach was to orchestrate the five services above with docker-compose, setting network_mode to host. Given Kafka's listener security mechanism, this network mode avoids having to declare exposed ports one by one.

The docker-compose.yaml file is as follows:

version: "3"

services:
  postgres:
    image: failymao/postgres:12.7
    container_name: postgres
    restart: unless-stopped
    privileged: true # set docker-compose env file
    command: [ "-c", "config_file=/var/lib/postgresql/postgresql.conf", "-c", "hba_file=/var/lib/postgresql/pg_hba.conf" ]
    volumes:
      - ./config/postgresql.conf:/var/lib/postgresql/postgresql.conf
      - ./config/pg_hba.conf:/var/lib/postgresql/pg_hba.conf
    environment:
      POSTGRES_PASSWORD: abc123
      POSTGRES_USER: postgres
      POSTGRES_PORT: 15432
      POSTGRES_HOST: 127.0.0.1
    healthcheck:
      test: sh -c "sleep 5 && PGPASSWORD=abc123 psql -h 127.0.0.1 -U postgres -p 15432 -c '\q';"
      interval: 30s
      timeout: 10s
      retries: 3
    network_mode: "host"

  zookeeper:
    image: failymao/zookeeper:1.4.0
    container_name: zookeeper
    restart: always
    network_mode: "host"

  kafka:
    image: failymao/kafka:1.4.0
    container_name: kafka
    restart: always
    depends_on:
      - zookeeper
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_ZOOKEEPER_CONNECT: localhost:2181
      KAFKA_LISTENERS: PLAINTEXT://127.0.0.1:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://127.0.0.1:9092
      KAFKA_BROKER_ID: 1
      KAFKA_LOG_RETENTION_HOURS: 24
      KAFKA_LOG_DIRS: /data/kafka-data # data mount point
    network_mode: "host"

  producer:
    depends_on:
      - redis
      - kafka
      - zookeeper
    image: long2ice/synch
    container_name: producer
    command: sh -c "sleep 30 && synch --alias pg2ch_test produce"
    volumes:
      - ./synch.yaml:/synch/synch.yaml
    network_mode: "host"

  # one consumer per database
  consumer:
    tty: true
    depends_on:
      - redis
      - kafka
      - zookeeper
    image: long2ice/synch
    container_name: consumer
    command: sh -c "sleep 30 && synch --alias pg2ch_test consume --schema pg2ch_test"
    volumes:
      - ./synch.yaml:/synch/synch.yaml
    network_mode: "host"

  redis:
    hostname: redis
    container_name: redis
    image: redis:latest
    volumes:
      - redis:/data
    network_mode: "host"

volumes:
  redis:
  kafka:
  zookeeper:
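With the compose file in place, the stack is brought up the usual way (standard docker-compose commands, nothing project-specific):

docker-compose up -d

# the postgres healthcheck takes a few cycles to pass; watch container status with
docker ps --format 'table {{.Names}}\t{{.Status}}'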

During testing, Postgres needed the wal2json plugin, and installing extra components inside the container turned out to be painful; after several failed attempts I gave up on that route and instead installed Postgres directly on the host, with the synch services in the containers connecting to the host's IP and port.
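For reference, a minimal sketch of what the host-side setup looks like, assuming Ubuntu with the PGDG apt repository (the package name postgresql-12-wal2json follows the Debian/Ubuntu convention and may differ on other distributions):

# install the wal2json logical-decoding plugin for PostgreSQL 12 (Debian/Ubuntu package name)
sudo apt-get install postgresql-12-wal2json

# wal2json requires logical replication, so in postgresql.conf set:
#   wal_level = logical
sudo systemctl restart postgresql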

But after restarting the stack, the synch service refused to come up; its logs showed that Postgres could not be connected. The synch configuration file was as follows:

core:
  debug: true # when set to true, SQL information is logged
  insert_num: 20000 # how many rows to buffer per submit; 20000 recommended in production
  insert_interval: 60 # how many seconds between submits; 60 recommended in production
  # enabling this auto-creates the `synch` database in ClickHouse and inserts monitoring data
  monitoring: true

redis:
  host: redis
  port: 6379
  db: 0
  password:
  prefix: synch
  sentinel: false # enable redis sentinel
  sentinel_hosts: # redis sentinel hosts
    - 127.0.0.1:5000
  sentinel_master: master
  queue_max_len: 200000 # stream max length; redundant entries are deleted FIFO

source_dbs:
  - db_type: postgres
    alias: pg2ch_test
    broker_type: kafka # currently supports redis and kafka
    host: 127.0.0.1
    port: 5433
    user: postgres
    password: abc123
    databases:
      - database: pg2ch_test
        auto_create: true
        tables:
          - table: pgbench_accounts
            auto_full_etl: true
            clickhouse_engine: CollapsingMergeTree
            sign_column: sign
            version_column:
            partition_by:
            settings:

clickhouse:
  # shard hosts when clustered; inserts go to a random host
  hosts:
    - 127.0.0.1:9000
  user: default
  password: ''
  cluster_name: # cluster mode is enabled when not empty; hosts must then contain more than one entry
  distributed_suffix: _all # distributed tables suffix, used in cluster mode

kafka:
  servers:
    - 127.0.0.1:9092
  topic_prefix: synch

This was puzzling: Postgres was confirmed to be running and listening on its port (5433 here), yet connecting with both localhost and the host's eth0 address failed.
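For reference, these are the kinds of checks that confirmed the Postgres side was healthy (run on the WSL2 host; standard tooling, nothing project-specific):

# confirm something is listening on 5433
ss -ltnp | grep 5433

# succeeds on the host, but the same connection failed from inside the container
PGPASSWORD=abc123 psql -h 127.0.0.1 -p 5433 -U postgres -c 'select 1;'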

2. Solution

Googling turned up a highly upvoted Stack Overflow answer that solved the problem. The original answer:

If you are using Docker-for-mac or Docker-for-Windows 18.03+, just connect to your mysql service using the host host.docker.internal (instead of the 127.0.0.1 in your connection string).

If you are using Docker-for-Linux 20.10.0+, you can also use the host host.docker.internal if you started your Docker container with the --add-host host.docker.internal:host-gateway option.

Otherwise, read below

Use --network="host" in your docker run command, then 127.0.0.1 in your docker container will point to your docker host.

See the original post for more details.
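Applied to this setup, the --add-host variant from the answer would look roughly like this (a sketch; pg_isready is the standard readiness check shipped in the official postgres image):

docker run --rm \
  --add-host host.docker.internal:host-gateway \
  postgres:12 \
  pg_isready -h host.docker.internal -p 5433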

Accessing host services from inside a container in host network mode

Changing the Postgres address in the synch config to host.docker.internal made the error go away. The host's /etc/hosts file looks like this:


root@failymao-NC:/mnt/d/pythonProject/pg_2_ch_demo# cat /etc/hosts
# This file was automatically generated by WSL. To stop automatic generation of this file, add the following entry to /etc/wsl.conf:
# [network]
# generateHosts = false
127.0.0.1       localhost
10.111.130.24   host.docker.internal

You can see the mapping between the host IP and the domain name: the container resolves host.docker.internal to the host IP and can then reach services on the host.
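Because the stack runs with network_mode: host, containers share the host's /etc/hosts, so the name can be checked directly from a throwaway host-mode container (a minimal sketch):

docker run --rm --network host busybox ping -c 1 host.docker.internal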

The final synch configuration that started successfully:

core:
  debug: true # when set to true, SQL information is logged
  insert_num: 20000 # how many rows to buffer per submit; 20000 recommended in production
  insert_interval: 60 # how many seconds between submits; 60 recommended in production
  # enabling this auto-creates the `synch` database in ClickHouse and inserts monitoring data
  monitoring: true

redis:
  host: redis
  port: 6379
  db: 0
  password:
  prefix: synch
  sentinel: false # enable redis sentinel
  sentinel_hosts: # redis sentinel hosts
    - 127.0.0.1:5000
  sentinel_master: master
  queue_max_len: 200000 # stream max length; redundant entries are deleted FIFO

source_dbs:
  - db_type: postgres
    alias: pg2ch_test
    broker_type: kafka # currently supports redis and kafka
    host: host.docker.internal
    port: 5433
    user: postgres
    password: abc123
    databases:
      - database: pg2ch_test
        auto_create: true
        tables:
          - table: pgbench_accounts
            auto_full_etl: true
            clickhouse_engine: CollapsingMergeTree
            sign_column: sign
            version_column:
            partition_by:
            settings:

clickhouse:
  # shard hosts when clustered; inserts go to a random host
  hosts:
    - 127.0.0.1:9000
  user: default
  password: ''
  cluster_name: # cluster mode is enabled when not empty; hosts must then contain more than one entry
  distributed_suffix: _all # distributed tables suffix, used in cluster mode

kafka:
  servers:
    - 127.0.0.1:9092
  topic_prefix: synch
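After editing synch.yaml, only the two synch containers need restarting; tailing their logs confirms the Postgres connection now succeeds (standard docker-compose commands):

docker-compose restart producer consumer
docker-compose logs -f producer consumer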

3. Summary

  1. When a container is started with --network="host" and needs to reach a service running on the host, point it at host.docker.internal instead of the host IP.

4. References

  1. https://stackoverflow.com/questions/24319662/from-inside-of-a-docker-container-how-do-i-connect-to-the-localhost-of-the-mach
