Neo4j Graph Database Cluster Deployment and Basic Usage

Deployment machines

Name     Spec           IP
server1  8 cores, 16 GB  172.16.0.2
server2  8 cores, 16 GB  172.16.0.3
server3  8 cores, 16 GB  172.16.0.4
server4  8 cores, 16 GB  172.16.0.5

# Create the project directory
mkdir -p /opt/neo4j/

# Environment variables
export USER_ID="$(id -u)"
export GROUP_ID="$(id -g)"
export NEO4J_DOCKER_IMAGE=neo4j:5-enterprise
export NEO4J_EDITION=docker_compose
export EXTENDED_CONF=yes
export NEO4J_ACCEPT_LICENSE_AGREEMENT=yes
export NEO4J_AUTH=neo4j/your_password
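
The compose file shown later mounts per-server data, logs, conf, and import directories under /opt/neo4j. A minimal sketch for creating them up front (not part of the original listing; the directory layout simply mirrors the volume mappings below, and bash brace expansion is assumed):

# Create the per-server directories that the compose file mounts as volumes
for s in server1 server2 server3 server4; do
  mkdir -p /opt/neo4j/{data,logs,conf,import}/"$s"
done
# Let the container user (USER_ID:GROUP_ID exported above) write to them
chown -R "$USER_ID":"$GROUP_ID" /opt/neo4j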

The neo4j.conf configuration file

# Setting that specifies how much memory Neo4j is allowed to use for the page cache.
server.memory.pagecache.size=100M

# Setting that specifies the initial JVM heap size.
server.memory.heap.initial_size=100M

# The behavior of the initial discovery is determined by the parameters
# `dbms.cluster.discovery.resolver_type` and `dbms.cluster.discovery.endpoints`.
# The DNS strategy fetches the IP addresses of the cluster members using the DNS A records.
dbms.cluster.discovery.resolver_type=DNS

# The value of `dbms.cluster.discovery.endpoints` should be set to a single domain name
# and the port of the discovery service.
# The domain name returns an A record for every server in the cluster when a DNS lookup is performed.
# Each A record returned by DNS should contain the IP address of the server in the cluster.
# The configured server uses all the IP addresses from the A records to join or form a cluster.
# The discovery port must be the same on all servers when using this configuration.
dbms.cluster.discovery.endpoints=neo4j-network:5000

# Address (the public hostname/IP address of the machine) and port setting that specifies where
# this instance advertises for discovery protocol messages from other members of the cluster.
server.discovery.advertised_address=$(hostname -i)

# Address (the public hostname/IP address of the machine) and port setting that specifies where
# this instance advertises for Raft messages within the cluster.
server.cluster.raft.advertised_address=$(hostname)

# Address (the public hostname/IP address of the machine) and port setting that specifies where
# this instance advertises for requests for transactions in the transaction-shipping catchup protocol.
server.cluster.advertised_address=$(hostname)

# Enable server-side routing.
dbms.routing.enabled=true

# Use server-side routing for neo4j:// protocol connections.
dbms.routing.default_router=SERVER

# The advertised address for the intra-cluster routing connector.
server.routing.advertised_address=$(hostname)

# HTTP connector: listen on all interfaces so the Browser is reachable from outside the container.
server.default_listen_address=0.0.0.0
server.http.enabled=true
server.http.listen_address=0.0.0.0:7474
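
Once the Compose stack defined below is running, the neo4j-network alias used in dbms.cluster.discovery.endpoints should resolve to every cluster member. A quick sanity check from inside one of the containers (a sketch; it assumes getent is available in the Neo4j image):

docker compose exec server1 getent hosts neo4j-network
# Expected: one line per cluster member, each with a container IP for the neo4j-network alias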

The docker-compose.yml file

version: '3.8'

# Custom top-level network
networks:
  neo4j-internal:

services:

  server1:
    # Docker image to be used
    image: ${NEO4J_DOCKER_IMAGE}

    # Hostname
    hostname: server1

    # Service-level network, which specifies the networks, from the list of the top-level
    # networks (in this case only neo4j-internal), that the server will connect to.
    # Adds a network alias (used in neo4j.conf when configuring the discovery members).
    networks:
      neo4j-internal:
        aliases:
          - neo4j-network

    # The ports that will be accessible from outside the container - HTTP (7474) and Bolt (7687).
    ports:
      - "7474:7474"
      - "7687:7687"

    # Uncomment the volumes to be mounted to make them accessible from outside the container.
    volumes:
      - /opt/neo4j/neo4j.conf:/conf/neo4j.conf   # This is the main configuration file.
      - /opt/neo4j/data/server1:/var/lib/neo4j/data
      - /opt/neo4j/logs/server1:/var/lib/neo4j/logs
      - /opt/neo4j/conf/server1:/var/lib/neo4j/conf
      - /opt/neo4j/import/server1:/var/lib/neo4j/import
      #- /opt/neo4j/metrics/server1:/var/lib/neo4j/metrics
      #- /opt/neo4j/licenses/server1:/var/lib/neo4j/licenses
      #- /opt/neo4j/ssl/server1:/var/lib/neo4j/ssl

    # Passes the following environment variables to the container.
    environment:
      - NEO4J_ACCEPT_LICENSE_AGREEMENT
      - NEO4J_AUTH
      - EXTENDED_CONF
      - NEO4J_EDITION
      - NEO4J_initial_server_mode__constraint=PRIMARY

    # Simple check testing whether port 7474 is open.
    # If so, the instance running inside the container is considered "healthy".
    # This status can be checked using the "docker ps" command.
    healthcheck:
      test: ["CMD-SHELL", "wget --no-verbose --tries=1 --spider localhost:7474 || exit 1"]

    # Set up the user
    user: ${USER_ID}:${GROUP_ID}

  server2:
    image: ${NEO4J_DOCKER_IMAGE}
    hostname: server2
    networks:
      neo4j-internal:
        aliases:
          - neo4j-network
    ports:
      - "7475:7474"
      - "7688:7687"
    volumes:
      - /opt/neo4j/neo4j.conf:/conf/neo4j.conf
      - /opt/neo4j/data/server2:/var/lib/neo4j/data
      - /opt/neo4j/logs/server2:/var/lib/neo4j/logs
      - /opt/neo4j/conf/server2:/var/lib/neo4j/conf
      - /opt/neo4j/import/server2:/var/lib/neo4j/import
      #- /opt/neo4j/metrics/server2:/var/lib/neo4j/metrics
      #- /opt/neo4j/licenses/server2:/var/lib/neo4j/licenses
      #- /opt/neo4j/ssl/server2:/var/lib/neo4j/ssl
    environment:
      - NEO4J_ACCEPT_LICENSE_AGREEMENT
      - NEO4J_AUTH
      - EXTENDED_CONF
      - NEO4J_EDITION
      - NEO4J_initial_server_mode__constraint=PRIMARY
    healthcheck:
      test: ["CMD-SHELL", "wget --no-verbose --tries=1 --spider localhost:7474 || exit 1"]
    user: ${USER_ID}:${GROUP_ID}

  server3:
    image: ${NEO4J_DOCKER_IMAGE}
    hostname: server3
    networks:
      neo4j-internal:
        aliases:
          - neo4j-network
    ports:
      - "7476:7474"
      - "7689:7687"
    volumes:
      - /opt/neo4j/neo4j.conf:/conf/neo4j.conf
      - /opt/neo4j/data/server3:/var/lib/neo4j/data
      - /opt/neo4j/logs/server3:/var/lib/neo4j/logs
      - /opt/neo4j/conf/server3:/var/lib/neo4j/conf
      - /opt/neo4j/import/server3:/var/lib/neo4j/import
      #- /opt/neo4j/metrics/server3:/var/lib/neo4j/metrics
      #- /opt/neo4j/licenses/server3:/var/lib/neo4j/licenses
      #- /opt/neo4j/ssl/server3:/var/lib/neo4j/ssl
    environment:
      - NEO4J_ACCEPT_LICENSE_AGREEMENT
      - NEO4J_AUTH
      - EXTENDED_CONF
      - NEO4J_EDITION
      - NEO4J_initial_server_mode__constraint=PRIMARY
    healthcheck:
      test: ["CMD-SHELL", "wget --no-verbose --tries=1 --spider localhost:7474 || exit 1"]
    user: ${USER_ID}:${GROUP_ID}

  server4:
    image: ${NEO4J_DOCKER_IMAGE}
    hostname: server4
    networks:
      neo4j-internal:
        aliases:
          - neo4j-network
    ports:
      - "7477:7474"
      - "7690:7687"
    volumes:
      - /opt/neo4j/neo4j.conf:/conf/neo4j.conf
      - /opt/neo4j/data/server4:/var/lib/neo4j/data
      - /opt/neo4j/logs/server4:/var/lib/neo4j/logs
      - /opt/neo4j/conf/server4:/var/lib/neo4j/conf
      - /opt/neo4j/import/server4:/var/lib/neo4j/import
      #- /opt/neo4j/metrics/server4:/var/lib/neo4j/metrics
      #- /opt/neo4j/licenses/server4:/var/lib/neo4j/licenses
      #- /opt/neo4j/ssl/server4:/var/lib/neo4j/ssl
    environment:
      - NEO4J_ACCEPT_LICENSE_AGREEMENT
      - NEO4J_AUTH
      - EXTENDED_CONF
      - NEO4J_EDITION
      - NEO4J_initial_server_mode__constraint=SECONDARY
    healthcheck:
      test: ["CMD-SHELL", "wget --no-verbose --tries=1 --spider localhost:7474 || exit 1"]
    user: ${USER_ID}:${GROUP_ID}
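
With neo4j.conf and docker-compose.yml saved under /opt/neo4j, the cluster can be brought up and inspected roughly as follows. This is a sketch, not part of the original listing; it assumes the Docker Compose v2 CLI and that the password matches the NEO4J_AUTH value exported earlier.

cd /opt/neo4j
docker compose up -d

# Health checks: each service should eventually show "healthy"
docker compose ps

# Confirm that all four members have joined the cluster
docker compose exec server1 cypher-shell -u neo4j -p your_password "SHOW SERVERS;"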
As a simpler alternative, a single standalone instance can be run with the Bitnami image:

version: '2'

services:
  neo4j:
    image: docker.io/bitnami/neo4j:5
    ports:
      - '7474:7474'
      - '7473:7473'
      - '7687:7687'
    volumes:
      - 'neo4j_data:/bitnami'
    environment:
      - NEO4J_AUTH=neo4j/bitnami1

volumes:
  neo4j_data:
    driver: local
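
If this standalone variant is kept as its own compose file, it starts the same way; the file name below is only an example, not from the original post:

docker compose -f docker-compose-standalone.yml up -d
docker compose -f docker-compose-standalone.yml ps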

Basic usage of Neo4j

Access via port 7474

Access and configure Neo4j through the externally reachable IP of the host. Pay particular attention to the authentication settings (the NEO4J_AUTH value configured above); only then can the instance be reached via that external IP, for example http://192.168.1.217:7474/.
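
Besides the Browser, statements can also be run from the command line with cypher-shell inside a container. A minimal sketch (the Person label and properties are illustrative only, and the password must match NEO4J_AUTH):

# One-off write and read against server1
docker compose exec server1 cypher-shell -u neo4j -p your_password \
  "CREATE (p:Person {name: 'test'}) RETURN p;"
docker compose exec server1 cypher-shell -u neo4j -p your_password \
  "MATCH (p:Person) RETURN p.name;"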

References

Neo4J Developer Docs
