ZooKeeper

Cluster sync

Download and extract

wget http://apache.fayea.com/zookeeper/stable/zookeeper-3.4.8.tar.gz

tar xvf zookeeper-3.4.8.tar.gz

cd zookeeper-3.4.8

Configure the ZooKeeper configuration file

cp conf/zoo_sample.cfg conf/zoo.cfg

vim conf/zoo.cfg

# Each tick is 2000 ms (2 s) by default

# The number of milliseconds of each tick

tickTime=2000

# Initial synchronization limit in ticks; default 10 (20 s here). Followers that cannot sync within this window are dropped.

# The number of ticks that the initial

# synchronization phase can take

initLimit=10

# The number of ticks that can pass between

# Request/acknowledgement limit in ticks; default 5 (10 s here); followers exceeding it are dropped.

# sending a request and getting an acknowledgement

syncLimit=5

# the directory where the snapshot is stored.

# do not use /tmp for storage, /tmp here is just

# example sakes

# Data directory (changed from the default)

dataDir=/usr/local/zookeeper/data

# Added: transaction log directory

dataLogDir=/usr/local/zookeeper/datalog

# the port at which the clients will connect

# Client connection port

clientPort=2181

# Maximum number of client connections; default 60

# the maximum number of client connections.

# increase this if you need to handle more clients

#maxClientCnxns=60

#

# Be sure to read the maintenance section of the

# administrator guide before turning on autopurge.

#

# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance

#

# The number of snapshots to retain in dataDir

#autopurge.snapRetainCount=3

# Purge task interval in hours

# Set to "0" to disable auto purge feature

#autopurge.purgeInterval=1

Create the directories specified in the configuration file

mkdir /usr/local/zookeeper/data

mkdir /usr/local/zookeeper/datalog
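
The zoo.cfg above runs ZooKeeper in standalone mode. For a real ensemble (as the "Cluster sync" heading implies), every node additionally needs server entries in zoo.cfg and its own myid file under dataDir. A minimal sketch, assuming three hosts with the placeholder hostnames zk1, zk2 and zk3:

# appended to conf/zoo.cfg on every node
server.1=zk1:2888:3888
server.2=zk2:2888:3888
server.3=zk3:2888:3888

# each node writes its own id into dataDir (example: id 1 on zk1)
echo 1 > /usr/local/zookeeper/data/myid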

Start ZooKeeper

cd /usr/local/src/zookeeper-3.4.8/

bin/zkServer.sh start
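
To confirm the server actually came up (standalone mode with the config above), check its status:

bin/zkServer.sh status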

Connection test

bin/zkCli.sh -server 127.0.0.1:2181
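
From inside the CLI, a quick sanity check is to create and read back a znode (the /test name below is arbitrary):

create /test hello
get /test
ls /
delete /test
quit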

Heartbeat

This section tests Heartbeat together with nginx.

In a production environment, make sure shared storage is used.

IP & hostname plan

# Virtual IP (VIP)

vip 192.168.211.134/eth0:0

# Host 1:

cs01:

192.168.211.128/eth0/public

192.168.244.128/eth1/private

# Host 2:

cs02:

192.168.211.135/eth0/public

192.168.244.129/eth1/private

Host 1 setup

hostname cs01

vim /etc/sysconfig/network

HOSTNAME=cs01

Base setup

iptables -F

service iptables save

setenforce 0

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

vi /etc/hosts

192.168.211.128 cs01

192.168.211.135 cs02

yum install -y epel-release

yum install -y heartbeat* libnet nginx

cd /usr/share/doc/heartbeat-3.0.4/

cp authkeys ha.cf haresources /etc/ha.d/

cd /etc/ha.d/

vim authkeys

auth 3

#1 crc

#2 sha1 HI!

3 md5 Hello!

chmod 600 authkeys

vim haresources

cs01 192.168.211.134/24/eth0:0 nginx

vim ha.cf

#

# There are lots of options in this file. All you have to have is a set

# of nodes listed {"node ...} one of {serial, bcast, mcast, or ucast},

# and a value for "auto_failback".

#

# ATTENTION: As the configuration file is read line by line,

# THE ORDER OF DIRECTIVE MATTERS!

#

# In particular, make sure that the udpport, serial baud rate

# etc. are set before the heartbeat media are defined!

# debug and log file directives go into effect when they

# are encountered.

#

# All will be fine if you keep them ordered as in this example.

#

#

# Note on logging:

# If all of debugfile, logfile and logfacility are not defined,

# logging is the same as use_logd yes. In other case, they are

# respectively effective. if detering the logging to syslog,

# logfacility must be "none".

#

# File to write debug messages to

debugfile /var/log/ha-debug

#

#

# File to write other messages to

#

logfile /var/log/ha-log

#

#


# Facility to use for syslog()/logger

#

logfacility local0

#

#

# A note on specifying "how long" times below...

#

# The default time unit is seconds

# 10 means ten seconds

#

# You can also specify them in milliseconds

# 1500ms means 1.5 seconds

#

#

# keepalive: how long between heartbeats?

#

keepalive 2

#

# deadtime: how long-to-declare-host-dead?

#

# If you set this too low you will get the problematic

# split-brain (or cluster partition) problem.

# See the FAQ for how to use warntime to tune deadtime.

#

deadtime 30

#

# warntime: how long before issuing "late heartbeat" warning?

# See the FAQ for how to use warntime to tune deadtime.

#


warntime 10

#

#

# Very first dead time (initdead)

#

# On some machines/OSes, etc. the network takes a while to come up

# and start working right after you've been rebooted. As a result

# we have a separate dead time for when things first come up.

# It should be at least twice the normal dead time.

#

initdead 60

#

#

# What UDP port to use for bcast/ucast communication?

#

udpport 694

#

# Baud rate for serial ports...

#

#baud 19200

#

# serial serialportname ...

#serial /dev/ttyS0 # Linux

#serial /dev/cuaa0 # FreeBSD

#serial /dev/cuad0 # FreeBSD 6.x

#serial /dev/cua/a # Solaris

#

#

# What interfaces to broadcast heartbeats over?


#

#bcast eth0 # Linux

#bcast eth1 eth2 # Linux

#bcast le0 # Solaris

#bcast le1 le2 # Solaris

#

# Set up a multicast heartbeat medium

# mcast [dev] [mcast group] [port] [ttl] [loop]

#

# [dev] device to send/rcv heartbeats on

# [mcast group] multicast group to join (class D multicast address

# 224.0.0.0 - 239.255.255.255)

# [port] udp port to sendto/rcvfrom (set this value to the

# same value as "udpport" above)

# [ttl] the ttl value for outbound heartbeats. this effects

# how far the multicast packet will propagate. (0-255)

# Must be greater than zero.

# [loop] toggles loopback for outbound multicast heartbeats.

# if enabled, an outbound packet will be looped back and

# received by the interface it was sent on. (0 or 1)

# Set this value to zero.

#

#

#mcast eth0 225.0.0.1 694 1 0

#

# Set up a unicast / udp heartbeat medium

# ucast [dev] [peer-ip-addr]

#

# [dev] device to send/rcv heartbeats on


# [peer-ip-addr] IP address of peer to send packets to

#

ucast eth1 192.168.244.129

#

#

# About boolean values...

#

# Any of the following case-insensitive values will work for true:

# true, on, yes, y, 1

# Any of the following case-insensitive values will work for false:

# false, off, no, n, 0

#

#

#

# auto_failback: determines whether a resource will

# automatically fail back to its "primary" node, or remain

# on whatever node is serving it until that node fails, or

# an administrator intervenes.

#

# The possible values for auto_failback are:

# on - enable automatic failbacks

# off - disable automatic failbacks

# legacy - enable automatic failbacks in systems

# where all nodes do not yet support

# the auto_failback option.

#

# auto_failback "on" and "off" are backwards compatible with the old

# "nice_failback on" setting.

#


# See the FAQ for information on how to convert

# from "legacy" to "on" without a flash cut.

# (i.e., using a "rolling upgrade" process)

#

# The default value for auto_failback is "legacy", which

# will issue a warning at startup. So, make sure you put

# an auto_failback directive in your ha.cf file.

# (note: auto_failback can be any boolean or "legacy")

#

auto_failback on

#

#

# Basic STONITH support

# Using this directive assumes that there is one stonith

# device in the cluster. Parameters to this device are

# read from a configuration file. The format of this line is:

#

# stonith <stonith_type> <configfile>

#

# NOTE: it is up to you to maintain this file on each node in the

# cluster!

#

#stonith baytech /etc/ha.d/conf/stonith.baytech

#

# STONITH support

# You can configure multiple stonith devices using this directive.

# The format of the line is:

# stonith_host <hostfrom> <stonith_type> <params...>

# <hostfrom> is the machine the stonith device is attached


# to or * to mean it is accessible from any host.

# <stonith_type> is the type of stonith device (a list of

# supported drives is in /usr/lib/stonith.)

# <params...> are driver specific parameters. To see the

# format for a particular device, run:

# stonith -l -t <stonith_type>

#

#

# Note that if you put your stonith device access information in

# here, and you make this file publically readable, you're asking

# for a denial of service attack ;-)

#

# To get a list of supported stonith devices, run

# stonith -L

# For detailed information on which stonith devices are supported

# and their detailed configuration options, run this command:

# stonith -h

#

#stonith_host * baytech 10.0.0.3 mylogin mysecretpassword

#stonith_host ken3 rps10 /dev/ttyS1 kathy 0

#stonith_host kathy rps10 /dev/ttyS1 ken3 0

#

# Watchdog is the watchdog timer. If our own heart doesn't beat for

# a minute, then our machine will reboot.

# NOTE: If you are using the software watchdog, you very likely

# wish to load the module with the parameter "nowayout=0" or

# compile it without CONFIG_WATCHDOG_NOWAYOUT set. Otherwise even

# an orderly shutdown of heartbeat will trigger a reboot, which is

# very likely NOT what you want.


#

#watchdog /dev/watchdog

#

# Tell what machines are in the cluster

# node nodename ... -- must match uname -n

node cs01

node cs02

#

# Less common options...

#

# Treats 10.10.10.254 as a psuedo-cluster-member

# Used together with ipfail below...

# note: don't use a cluster node as ping node

#

ping 192.168.244.1

#

# Treats 10.10.10.254 and 10.10.10.253 as a psuedo-cluster-member

# called group1. If either 10.10.10.254 or 10.10.10.253 are up

# then group1 is up

# Used together with ipfail below...

#

#ping_group group1 10.10.10.254 10.10.10.253

#

# HBA ping derective for Fiber Channel

# Treats fc-card-name as psudo-cluster-member

# used with ipfail below ...

#

# You can obtain HBAAPI from http://hbaapi.sourceforge.net. You need

# to get the library specific to your HBA directly from the vender


# To install HBAAPI stuff, all You need to do is to compile the common

# part you obtained from the sourceforge. This will produce libHBAAPI.so

# which you need to copy to /usr/lib. You need also copy hbaapi.h to

# /usr/include.

#

# The fc-card-name is the name obtained from the hbaapitest program

# that is part of the hbaapi package. Running hbaapitest will produce

# a verbose output. One of the first line is similar to:

# Apapter number 0 is named: qlogic-qla2200-0

# Here fc-card-name is qlogic-qla2200-0.

#

#hbaping fc-card-name

#

#

# Processes started and stopped with heartbeat. Restarted unless

# they exit with rc=100

#

#respawn userid /path/name/to/run

respawn hacluster /usr/lib64/heartbeat/ipfail

#

# Access control for client api

# default is no access

#

#apiauth client-name gid=gidlist uid=uidlist

#apiauth ipfail gid=haclient uid=hacluster

###########################

#

# Unusual options.


#

###########################

#

# hopfudge maximum hop count minus number of nodes in config

#hopfudge 1

#

# deadping - dead time for ping nodes

#deadping 30

#

# hbgenmethod - Heartbeat generation number creation method

# Normally these are stored on disk and incremented as needed.

#hbgenmethod time

#

# realtime - enable/disable realtime execution (high priority, etc.)

# defaults to on

#realtime off

#

# debug - set debug level

# defaults to zero

#debug 1

#

# API Authentication - replaces the fifo-permissions-based system of the past

#

#

# You can put a uid list and/or a gid list.

# If you put both, then a process is authorized if it qualifies under either

# the uid list, or under the gid list.

#

# The groupname "default" has special meaning. If it is specified, then


# this will be used for authorizing groupless clients, and any client groups

# not otherwise specified.

#

# There is a subtle exception to this. "default" will never be used in the

# following cases (actual default auth directives noted in brackets)

# ipfail (uid=HA_CCMUSER)

# ccm (uid=HA_CCMUSER)

# ping (gid=HA_APIGROUP)

# cl_status (gid=HA_APIGROUP)

#

# This is done to avoid creating a gaping security hole and matches the most

# likely desired configuration.

#

#apiauth ipfail uid=hacluster

#apiauth ccm uid=hacluster

#apiauth cms uid=hacluster

#apiauth ping gid=haclient uid=alanr,root

#apiauth default gid=haclient

# message format in the wire, it can be classic or netstring,

# default: classic

#msgfmt classic/netstring

# Do we use logging daemon?

# If logging daemon is used, logfile/debugfile/logfacility in this file

# are not meaningful any longer. You should check the config file for logging

# daemon (the default is /etc/logd.cf)

# more infomartion can be fould in the man page.

# Setting use_logd to "yes" is recommended

scp authkeys ha.cf haresources cs02:/etc/ha.d/

Host 2 setup

hostname cs02

vim /etc/sysconfig/network

HOSTNAME=cs02

Base setup

iptables -F

service iptables save

setenforce 0

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

vi /etc/hosts

192.168.211.128 cs01

192.168.211.135 cs02

yum install -y epel-release

yum install -y heartbeat* libnet nginx

vim /etc/ha.d/ha.cf

ucast eth1 192.168.244.128

Start heartbeat on the primary first, then on the standby

service heartbeat start    # on cs01

service heartbeat start    # then on cs02

Check and test

Run ifconfig and confirm the eth0:0 alias with the VIP is present

ps aux | grep nginx

Stop the service on the primary

The standby should then take over the VIP and nginx
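
A minimal failover drill, assuming the setup above (VIP 192.168.211.134, nginx managed by heartbeat):

# on cs01 (current master): stop heartbeat to simulate a failure
service heartbeat stop

# on cs02, within the configured deadtime:
ifconfig | grep -A1 'eth0:0'     # the VIP 192.168.211.134 should now appear here
ps aux | grep nginx              # nginx should have been started by heartbeat
curl http://192.168.211.134/     # the service stays reachable through the VIP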

LVS

Load-balancing modes: NAT and DR. Physical deployments should use shared storage.

lvs-nat

IP & hostname plan

#Director

192.168.211.137/eth0

192.168.244.130/eth1

# Host 1:

cs01:

192.168.244.128/eth1, default gateway 192.168.244.130 (the Director)

# Host 2:

cs02:

192.168.244.129/eth1, default gateway 192.168.244.130 (the Director)

Director setup

yum install -y ipvsadm

[root@director ~]# vi /usr/local/sbin/lvs_nat.sh

#!/bin/bash

echo 1 > /proc/sys/net/ipv4/ip_forward

echo 0 > /proc/sys/net/ipv4/conf/all/send_redirects

echo 0 > /proc/sys/net/ipv4/conf/default/send_redirects

echo 0 > /proc/sys/net/ipv4/conf/eth0/send_redirects

echo 0 > /proc/sys/net/ipv4/conf/eth1/send_redirects

iptables -t nat -F

iptables -t nat -X

iptables -t nat -A POSTROUTING -s 192.168.244.0/24 -j MASQUERADE

IPVSADM='/sbin/ipvsadm'

$IPVSADM -C

$IPVSADM -A -t 192.168.211.137:80 -s lc -p 300

$IPVSADM -a -t 192.168.211.137:80 -r 192.168.244.128:80 -m -w 1

$IPVSADM -a -t 192.168.211.137:80 -r 192.168.244.129:80 -m -w 1

/bin/bash /usr/local/sbin/lvs_nat.sh
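
To verify that the virtual service and both real servers were registered, list the rules on the Director:

ipvsadm -Ln
# expected: TCP 192.168.211.137:80 (lc, persistent 300) with
# 192.168.244.128:80 and 192.168.244.129:80 as Masq destinations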

Real server (RS) setup

Install nginx on cs01 and cs02

yum install -y epel-release

yum install -y nginx

Write test data on each RS

echo "rs1rs1" /usr/share/nginx/html/index.html

echo "rs2rs2" /usr/share/nginx/html/index.html

Start the service on each RS

service nginx start
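
For NAT mode to work, each RS must also use the Director's internal address (192.168.244.130 in the plan above) as its default gateway, so reply traffic returns through the Director. A sketch for both RSes:

route add default gw 192.168.244.130
# to persist it on CentOS 6, set GATEWAY=192.168.244.130 in
# /etc/sysconfig/network-scripts/ifcfg-eth1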

Test

curl 192.168.211.137
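
Because the virtual service was created with -p 300, requests from the same client IP stick to one real server for 300 seconds; the connection table on the Director shows this:

curl 192.168.211.137      # returns rs1rs1 or rs2rs2
ipvsadm -Lnc              # lists active and persistent connection entries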

lvs-dr

IP & hostname plan

#Director

192.168.244.130/eth1

192.168.244.131/eth1:1

# Host 1:

cs01:

192.168.244.128/eth1

192.168.244.131/lo:0

# Host 2:

cs02:

192.168.244.129/eth1

192.168.244.131/lo:0

Director setup

vim /usr/local/sbin/lvs_dr.sh

#!/bin/bash

echo 1 > /proc/sys/net/ipv4/ip_forward

ipv=/sbin/ipvsadm

vip=192.168.244.131

rs1=192.168.244.128

rs2=192.168.244.129

ifconfig eth1:1 $vip broadcast $vip netmask 255.255.255.255 up

route add -host $vip dev eth1:1

$ipv -C

$ipv -A -t $vip:80 -s rr

$ipv -a -t $vip:80 -r $rs1:80 -g -w 1

$ipv -a -t $vip:80 -r $rs2:80 -g -w 1

bash /usr/local/sbin/lvs_dr.sh

Setup on both RSes (cs01 and cs02)

vim /usr/local/sbin/lvs_dr_rs.sh

#!/bin/bash

vip=192.168.244.131

ifconfig lo:0 $vip broadcast $vip netmask 255.255.255.255 up

route add -host $vip dev lo:0

echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore

echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce

echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore

echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce

bash /usr/local/sbin/lvs_dr_rs.sh

Test from a Windows client

http://192.168.244.131
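
Since the DR scheduler uses round robin with no persistence, repeated requests should alternate between the two pages; the forwarding method can also be checked on the Director:

curl http://192.168.244.131/     # alternates between rs1rs1 and rs2rs2
ipvsadm -Ln                      # on the Director; both real servers should show Route (DR) forwarding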

LVS with keepalived

# IP & hostname plan:

vip:

192.168.211.139

cs01:

192.168.211.137/eth0

cs02:

192.168.211.138/eth0

# Run on both hosts:

yum install -y epel-release

yum install -y nginx

yum install -y keepalived

echo 1 > /proc/sys/net/ipv4/ip_forward

/etc/init.d/nginx start

vim /usr/local/sbin/lvs_dr_rs.sh

#!/bin/bash

vip=192.168.211.139

ifconfig lo:0 $vip broadcast $vip netmask 255.255.255.255 up

route add -host $vip dev lo:0

echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore

echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce

echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore

echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce

# nginx on cs01

echo "keep1rs1" > /usr/share/nginx/html/index.html

# nginx on cs02

echo "keep2rs2" > /usr/share/nginx/html/index.html

# keepalived on cs01

vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived

#global_defs {

# notification_email {

## acassen@firewall.loc

# failover@firewall.loc

# sysadmin@firewall.loc

# }

# notification_email_from Alexandre.Cassen@firewall.loc

# smtp_server 192.168.200.1

# smtp_connect_timeout 30

# router_id LVS_DEVEL

#}

vrrp_instance VI_1 {

state MASTER

interface eth0

virtual_router_id 51

priority 100

advert_int 1

authentication {

auth_type PASS

auth_pass 1111

}

virtual_ipaddress {

192.168.211.139

}

}

virtual_server 192.168.211.139 80 {

"/etc/keepalived/keepalived.conf" 57L, 1118C 28,5 Top

virtual_server 192.168.211.139 80 {

delay_loop 6

lb_algo wlc

lb_kind DR

# nat_mask 255.255.255.0

persistence_timeout 60

protocol TCP

real_server 192.168.211.137 80 {

weight 100

TCP_CHECK {

connect_timeout 10

nb_get_retry 3

delay_before_retry 3

connect_port 80

}

}

real_server 192.168.211.138 80 {

weight 100

TCP_CHECK {

connect_timeout 10

nb_get_retry 3

delay_before_retry 3

connect_port 80

}

}

}

# keepalived on cs02:

vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived

#global_defs {

# notification_email {

## acassen@firewall.loc

# failover@firewall.loc

# sysadmin@firewall.loc

# }

# notification_email_from Alexandre.Cassen@firewall.loc

# smtp_server 192.168.200.1

# smtp_connect_timeout 30

# router_id LVS_DEVEL

#}

vrrp_instance VI_1 {

state BACKUP

interface eth0

virtual_router_id 51

priority 90

advert_int 1

authentication {

auth_type PASS

auth_pass 1111

}

virtual_ipaddress {

192.168.211.139

}

}


virtual_server 192.168.211.139 80 {

delay_loop 6

lb_algo wlc

lb_kind DR

# nat_mask 255.255.255.0

persistence_timeout 60

protocol TCP

real_server 192.168.211.137 80 {

weight 100

TCP_CHECK {

connect_timeout 10

nb_get_retry 3

delay_before_retry 3

connect_port 80

}

}

real_server 192.168.211.138 80 {

weight 100

TCP_CHECK {

connect_timeout 10

nb_get_retry 3

delay_before_retry 3

connect_port 80

}

}

}

Run the LVS RS script on both RSes:

bash /usr/local/sbin/lvs_dr_rs.sh

Start keepalived on both hosts (cs01 and cs02):

/etc/init.d/keepalived start

Test by accessing the VIP from a client.
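
A simple failover check, using the addresses above: stop keepalived on the MASTER and confirm the BACKUP picks up the VIP.

# on cs01 (MASTER)
/etc/init.d/keepalived stop

# on cs02 (BACKUP), a few advert intervals later:
ip addr show eth0                # 192.168.211.139 should now be assigned here
ipvsadm -Ln                      # the virtual server rules are created by keepalived

# from a client
curl http://192.168.211.139/     # the page is still served while cs01 is down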
