1. Scenario

I use Windows with WSL2 for daily development and testing, but WSL2 frequently runs into network problems. For example, today I was testing a project whose core task is syncing PostgreSQL data to ClickHouse with the open-source tool synch.

Components needed for the test:

  1. postgres
  2. kafka
  3. zookeeper
  4. redis
  5. synch containers

The initial plan was to orchestrate the five services above with docker-compose, using `network_mode: host`. Given Kafka's listener security mechanism, this network mode avoids having to expose ports one by one.

The docker-compose.yaml file:

version: "3"

services:
  postgres:
    image: failymao/postgres:12.7
    container_name: postgres
    restart: unless-stopped
    privileged: true  # set via the docker-compose env file
    command: [ "-c", "config_file=/var/lib/postgresql/postgresql.conf", "-c", "hba_file=/var/lib/postgresql/pg_hba.conf" ]
    volumes:
      - ./config/postgresql.conf:/var/lib/postgresql/postgresql.conf
      - ./config/pg_hba.conf:/var/lib/postgresql/pg_hba.conf
    environment:
      POSTGRES_PASSWORD: abc123
      POSTGRES_USER: postgres
      POSTGRES_PORT: 15432
      POSTGRES_HOST: 127.0.0.1
    healthcheck:
      test: sh -c "sleep 5 && PGPASSWORD=abc123 psql -h 127.0.0.1 -U postgres -p 15432 -c '\q';"
      interval: 30s
      timeout: 10s
      retries: 3
    network_mode: "host"

  zookeeper:
    image: failymao/zookeeper:1.4.0
    container_name: zookeeper
    restart: always
    network_mode: "host"

  kafka:
    image: failymao/kafka:1.4.0
    container_name: kafka
    restart: always
    depends_on:
      - zookeeper
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_ZOOKEEPER_CONNECT: localhost:2181
      KAFKA_LISTENERS: PLAINTEXT://127.0.0.1:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://127.0.0.1:9092
      KAFKA_BROKER_ID: 1
      KAFKA_LOG_RETENTION_HOURS: 24
      KAFKA_LOG_DIRS: /data/kafka-data  # data mount
    network_mode: "host"

  producer:
    depends_on:
      - redis
      - kafka
      - zookeeper
    image: long2ice/synch
    container_name: producer
    command: sh -c "
      sleep 30 &&
      synch --alias pg2ch_test produce"
    volumes:
      - ./synch.yaml:/synch/synch.yaml
    network_mode: "host"

  # one consumer per database
  consumer:
    tty: true
    depends_on:
      - redis
      - kafka
      - zookeeper
    image: long2ice/synch
    container_name: consumer
    command: sh -c
      "sleep 30 &&
      synch --alias pg2ch_test consume --schema pg2ch_test"
    volumes:
      - ./synch.yaml:/synch/synch.yaml
    network_mode: "host"

  redis:
    hostname: redis
    container_name: redis
    image: redis:latest
    volumes:
      - redis:/data
    network_mode: "host"

volumes:
  redis:
  kafka:
  zookeeper:

During testing, postgres needed the wal2json plugin, and installing extra components inside the container proved troublesome; after several failed attempts, I switched to installing postgres directly on the host and pointing the containerized synch services at the host's IP and port.
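For completeness, a host-side postgres must have logical decoding enabled before synch's wal2json-based replication can work. A minimal postgresql.conf sketch (the values are illustrative, not taken from the project's actual config):

```ini
# postgresql.conf on the WSL2 host -- illustrative values
listen_addresses = '*'      # accept connections from the containers
port = 5433
wal_level = logical         # required for logical decoding (wal2json)
max_replication_slots = 4
max_wal_senders = 4
```

On Debian/Ubuntu the plugin is typically packaged as postgresql-12-wal2json; pg_hba.conf also needs an entry permitting connections from the containers' addresses.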

But after restarting everything, the synch service would not come up; the logs showed it could not connect to postgres. The synch config file:

core:
  debug: true  # when true, SQL statements are logged
  insert_num: 20000  # rows to buffer before submitting; 20000 recommended in production
  insert_interval: 60  # seconds between submits; 60 recommended in production
  # enabling this auto-creates database `synch` in ClickHouse and inserts monitor data
  monitoring: true

redis:
  host: redis
  port: 6379
  db: 0
  password:
  prefix: synch
  sentinel: false  # enable redis sentinel
  sentinel_hosts:  # redis sentinel hosts
    - 127.0.0.1:5000
  sentinel_master: master
  queue_max_len: 200000  # stream max length; redundant entries are deleted FIFO

source_dbs:
  - db_type: postgres
    alias: pg2ch_test
    broker_type: kafka  # currently supports redis and kafka
    host: 127.0.0.1
    port: 5433
    user: postgres
    password: abc123
    databases:
      - database: pg2ch_test
        auto_create: true
        tables:
          - table: pgbench_accounts
            auto_full_etl: true
            clickhouse_engine: CollapsingMergeTree
            sign_column: sign
            version_column:
            partition_by:
            settings:

clickhouse:
  # shard hosts in cluster mode; inserts go to a random host
  hosts:
    - 127.0.0.1:9000
  user: default
  password: ''
  cluster_name:  # non-empty enables cluster mode; hosts must then list more than one
  distributed_suffix: _all  # distributed tables suffix, used in cluster mode

kafka:
  servers:
    - 127.0.0.1:9092
  topic_prefix: synch

This was strange: postgres was confirmed to be running and listening on its port (5433 here), yet connecting via both localhost and the host's eth0 address failed.
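To rule out synch itself, a small TCP probe run inside the container shows which addresses are actually routable (this helper is my own debugging sketch, not part of synch):

```python
import socket

def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused, timeout, and unresolvable hostnames
        return False

if __name__ == "__main__":
    # Inside the synch container this distinguishes "postgres is down"
    # from "this address is not routable from the container".
    for candidate in ("127.0.0.1", "host.docker.internal"):
        print(candidate, can_connect(candidate, 5433))
```

Running this inside the container quickly narrows the failure down to address resolution/routing rather than the postgres service.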

2. Solution

Googling turned up a highly upvoted StackOverflow answer that solved the problem. The original answer:

If you are using Docker-for-mac or Docker-for-Windows 18.03+, just connect to your mysql service using the host host.docker.internal (instead of the 127.0.0.1 in your connection string).

If you are using Docker-for-Linux 20.10.0+, you can also use the host host.docker.internal if you started your Docker container with the --add-host host.docker.internal:host-gateway option.

Otherwise, read below

Use `--network="host"` in your docker run command, then 127.0.0.1 in your docker container will point to your docker host.

For more details, see the original thread.
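The `--add-host` variant quoted above has a docker-compose equivalent. A sketch (service definition borrowed from the compose file earlier; useful when you prefer bridge networking over host mode):

```yaml
services:
  producer:
    image: long2ice/synch
    # Maps host.docker.internal to the host gateway, which also works
    # on Docker-for-Linux 20.10+ where the name is not built in.
    extra_hosts:
      - "host.docker.internal:host-gateway"
```

With this mapping in place, a container on the default bridge network can reach host services by name without `network_mode: host`.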

Accessing host services from inside a container in host network mode

Changing the postgres address in the synch config to host.docker.internal resolved the error. The host's /etc/hosts file:


root@failymao-NC:/mnt/d/pythonProject/pg_2_ch_demo# cat /etc/hosts
# This file was automatically generated by WSL. To stop automatic generation of this file, add the following entry to /etc/wsl.conf:
# [network]
# generateHosts = false
127.0.0.1       localhost
10.111.130.24   host.docker.internal

As shown, the host IP is mapped to the host.docker.internal name; connecting to that name resolves to the host IP, which reaches the services running on the host.

The final synch configuration:

core:
  debug: true  # when true, SQL statements are logged
  insert_num: 20000  # rows to buffer before submitting; 20000 recommended in production
  insert_interval: 60  # seconds between submits; 60 recommended in production
  # enabling this auto-creates database `synch` in ClickHouse and inserts monitor data
  monitoring: true

redis:
  host: redis
  port: 6379
  db: 0
  password:
  prefix: synch
  sentinel: false  # enable redis sentinel
  sentinel_hosts:  # redis sentinel hosts
    - 127.0.0.1:5000
  sentinel_master: master
  queue_max_len: 200000  # stream max length; redundant entries are deleted FIFO

source_dbs:
  - db_type: postgres
    alias: pg2ch_test
    broker_type: kafka  # currently supports redis and kafka
    host: host.docker.internal
    port: 5433
    user: postgres
    password: abc123
    databases:
      - database: pg2ch_test
        auto_create: true
        tables:
          - table: pgbench_accounts
            auto_full_etl: true
            clickhouse_engine: CollapsingMergeTree
            sign_column: sign
            version_column:
            partition_by:
            settings:

clickhouse:
  # shard hosts in cluster mode; inserts go to a random host
  hosts:
    - 127.0.0.1:9000
  user: default
  password: ''
  cluster_name:  # non-empty enables cluster mode; hosts must then list more than one
  distributed_suffix: _all  # distributed tables suffix, used in cluster mode

kafka:
  servers:
    - 127.0.0.1:9092
  topic_prefix: synch

3. Summary

  1. When a container is started with `--network="host"` and you want to reach a service on the host from inside the container, use `host.docker.internal` instead of the host IP.

4. References

  1. https://stackoverflow.com/questions/24319662/from-inside-of-a-docker-container-how-do-i-connect-to-the-localhost-of-the-mach
