For single-node deployment and how FastDFS works, see the previous article:

https://www.cnblogs.com/you-men/p/12863555.html

Environment

    [Fastdfs-Server]
    OS = CentOS 7.3
    Software =
        fastdfs-nginx-module_v1.16.tar.gz
        FastDFS_v5.05.tar.gz
        libfastcommon-master.zip
        nginx-1.8.0.tar.gz
        ngx_cache_purge-2.3.tar.gz
Node name     IP              Software            Hardware  Network        Notes
Tracker-233   192.168.43.233  per the list above  2C4G      NAT, internal  test environment
Tracker2-234  192.168.43.234  per the list above  2C4G      NAT, internal  test environment
Group1-60     192.168.43.60   per the list above  2C4G      NAT, internal  test environment
Group1-97     192.168.43.97   per the list above  2C4G      NAT, internal  test environment
Group2-24     192.168.43.24   per the list above  2C4G      NAT, internal  test environment
Group2-128    192.168.43.128  per the list above  2C4G      NAT, internal  test environment
Nginx1-220    192.168.43.220  per the list above  2C4G      NAT, internal  test environment
Nginx2-53     192.168.43.53   per the list above  2C4G      NAT, internal  test environment
Install tools and dependencies

On all machines:

    yum -y install unzip gcc gcc-c++ perl make cmake libstdc++-devel

Install the tracker

Unpack, build, and install libfastcommon
    # install libfastcommon: FastDFS 5.x dropped the libevent dependency and depends on libfastcommon instead
    wget https://github.com/happyfish100/libfastcommon/archive/master.zip -O libfastcommon-master.zip
    unzip libfastcommon-master.zip
    cd libfastcommon-master/
    ./make.sh
    ./make.sh install
Download and install FastDFS
    mkdir -p /usr/local/fast
    tar xvf FastDFS_v5.05.tar.gz -C /usr/local/fast/
    cd /usr/local/fast/FastDFS/
    ./make.sh && ./make.sh install
    # create symlinks so the libraries are found in the usual locations
    ln -s /usr/lib64/libfastcommon.so /usr/local/lib/libfastcommon.so
    ln -s /usr/lib64/libfastcommon.so /usr/lib/libfastcommon.so
    ln -s /usr/lib64/libfdfsclient.so /usr/local/lib/libfdfsclient.so
    ln -s /usr/lib64/libfdfsclient.so /usr/lib/libfdfsclient.so
Fix paths in the FastDFS service scripts

The FastDFS service scripts point at /usr/local/bin, but the binaries were actually installed under /usr/bin, so the paths in both service scripts need correcting: /etc/init.d/fdfs_trackerd and /etc/init.d/fdfs_storaged. Open each one (e.g. vim /etc/init.d/fdfs_storaged), run the global substitution :%s+/usr/local/bin+/usr/bin+g, and press Enter.
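If you prefer to script this rather than edit interactively, a sed one-liner is equivalent to the vim substitution above (file names as installed by FastDFS v5.05):

    sed -i 's|/usr/local/bin|/usr/bin|g' /etc/init.d/fdfs_trackerd /etc/init.d/fdfs_storaged
    # confirm nothing still references the old path
    grep -n '/usr/local/bin' /etc/init.d/fdfs_trackerd /etc/init.d/fdfs_storaged || echo "all paths fixed"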

Edit tracker.conf

Two settings change. First, base_path: set it from the default to /fastdfs/tracker. Second, store_lookup: the default is 2 (load-balanced group selection); set it to 0 (round-robin) so the tests below are easier to observe — it can be switched back to 2 afterwards. A value of 1 means uploads always go to one fixed group, and only then does the store_group setting (e.g. store_group=group2) take effect; with 0 or 2 it is ignored.

    mkdir -p /fastdfs/tracker
    cd /etc/fdfs
    cp tracker.conf.sample tracker.conf
    vim /etc/fdfs/tracker.conf
    base_path=/fastdfs/tracker
    store_lookup=0
    # then copy this config to the other tracker (see the sketch below)
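A minimal way to push the file to the second tracker, assuming root SSH access between the nodes:

    scp /etc/fdfs/tracker.conf root@192.168.43.234:/etc/fdfs/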
Start the tracker
    # start the tracker
    fdfs_trackerd /etc/fdfs/tracker.conf start
    # or
    service fdfs_trackerd start
    # verify the port
    lsof -i:22122
    COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
    fdfs_trac 15222 root 5u IPv4 49284 0t0 TCP *:22122 (LISTEN)
    fdfs_trac 15222 root 18u IPv4 49304 0t0 TCP tracker1:22122->192.168.43.60:63056 (ESTABLISHED)
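If firewalld is enabled on these machines, the ports used in this deployment must be opened or the nodes will not reach each other — a sketch, with the port list taken from this article:

    firewall-cmd --permanent --add-port=22122/tcp   # tracker
    firewall-cmd --permanent --add-port=23000/tcp   # storage
    firewall-cmd --permanent --add-port=8888/tcp    # storage-side nginx
    firewall-cmd --permanent --add-port=80/tcp      # proxy nginx
    firewall-cmd --reload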

Deploy the storage nodes

Configure storage.conf
    mkdir -p /fastdfs/storage
    cp /etc/fdfs/storage.conf.sample /etc/fdfs/storage.conf
    vim /etc/fdfs/storage.conf
    base_path=/fastdfs/storage
    store_path0=/fastdfs/storage
    store_path_count=1
    disabled=false
    tracker_server=192.168.43.234:22122
    tracker_server=192.168.43.233:22122
    group_name=group1
    # group1 nodes can use this file as-is; on group2 nodes just change group_name (see the sketch below)
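Distributing the file to the other storage nodes can be scripted; a sketch assuming root SSH access, with IPs from the node table above:

    # the group1 peer gets an identical copy
    scp /etc/fdfs/storage.conf root@192.168.43.97:/etc/fdfs/
    # group2 nodes get the same file with group_name switched
    sed 's/^group_name=group1/group_name=group2/' /etc/fdfs/storage.conf > /tmp/storage.conf.group2
    scp /tmp/storage.conf.group2 root@192.168.43.24:/etc/fdfs/storage.conf
    scp /tmp/storage.conf.group2 root@192.168.43.128:/etc/fdfs/storage.conf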
Start the storage service
    service fdfs_storaged start
    # verify the port (lsof prints 23000 under its service name, inovaport1)
    lsof -i:23000
    COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
    fdfs_stor 14105 root 5u IPv4 44719 0t0 TCP *:inovaport1 (LISTEN)
    fdfs_stor 14105 root 20u IPv4 45629 0t0 TCP group1:inovaport1->192.168.43.60:44823 (ESTABLISHED)
    fdfs_monitor /etc/fdfs/storage.conf |grep ACTIVE
    [2020-07-03 20:35:09] DEBUG - base_path=/fastdfs/storage, connect_timeout=30, network_timeout=60, tracker_server_count=2, anti_steal_token=0, anti_steal_secret_key length=0, use_connection_pool=0, g_connection_pool_max_idle_time=3600s, use_storage_id=0, storage server id count: 0
    ip_addr = 192.168.43.24 ACTIVE
    ip_addr = 192.168.43.60 (group1) ACTIVE
    ip_addr = 192.168.43.128 ACTIVE
    ip_addr = 192.168.43.97 ACTIVE
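For a scripted health check, the ACTIVE count reported by fdfs_monitor should equal the number of storage nodes (four in this deployment); a minimal sketch:

    [ "$(fdfs_monitor /etc/fdfs/storage.conf | grep -c ACTIVE)" -eq 4 ] && echo "all storage nodes ACTIVE"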
Verify the service
    # test the setup by uploading a file, e.g. a picture from /root/
    # configure the upload client
    cp /etc/fdfs/client.conf.sample /etc/fdfs/client.conf
    vim /etc/fdfs/client.conf
    base_path=/fastdfs/tracker
    tracker_server=192.168.43.233:22122
    tracker_server=192.168.43.234:22122
    fdfs_upload_file /etc/fdfs/client.conf /root/1.png
    group2/M00/00/00/wKgrgF7_KL-AKIsEAAztU10n3gA362.png
    # the file now exists on disk on the storage node:
    ls /fastdfs/storage/data/00/00/
    wKgrgF7_KL-AKIsEAAztU10n3gA362.png
    # alternatively:
    fdfs_test /etc/fdfs/client.conf upload /tmp/test.jpg
    example file url: http://192.168.171.140/group1/M00/00/00/wKirjF64N2CAZZODAAGgIaqSzTc877_big.jpg
    # a URL like the last line means the upload succeeded
    # M00 maps to a store path: with one disk there is only M00; with more disks you would also see M01, M02, and so on
    # 00/00 are two directory levels; each level has 256 folders (00 through FF), 256*256 in total
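The returned file ID encodes everything needed to locate the file on disk: the group, the store path index (M00 → store_path0), the two directory levels, and the file name. A small shell sketch that maps an ID back to its path, using the store path from this deployment:

    fid="group2/M00/00/00/wKgrgF7_KL-AKIsEAAztU10n3gA362.png"
    # drop the "group/M00/" prefix, then prepend the store path's data directory
    rel="${fid#*/M00/}"
    echo "/fastdfs/storage/data/${rel}"
    # -> /fastdfs/storage/data/00/00/wKgrgF7_KL-AKIsEAAztU10n3gA362.png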
Configure Nginx on the storage nodes
    # install build tools and dependencies
    yum install gcc-c++ pcre pcre-devel zlib zlib-devel openssl openssl-devel
    # download the nginx source
    wget http://nginx.org/download/nginx-1.14.0.tar.gz
    # unpack it
    tar xvf nginx-1.14.0.tar.gz
    # unpack the fastdfs nginx module
    tar xvf fastdfs-nginx-module_v1.16.tar.gz -C /usr/local/fast/
    # edit the module's config file: drop the local/ component from the include paths
    vim /usr/local/fast/fastdfs-nginx-module/src/config
    CORE_INCS="$CORE_INCS /usr/include/fastdfs /usr/include/fastcommon/"
    # build nginx with the module (configure runs from the nginx source directory)
    cd nginx-1.14.0/
    ./configure --add-module=/usr/local/fast/fastdfs-nginx-module/src/
    make && make install
    cp /usr/local/fast/fastdfs-nginx-module/src/mod_fastdfs.conf /etc/fdfs/
    vim /etc/fdfs/mod_fastdfs.conf
    connect_timeout=12
    tracker_server=192.168.43.233:22122
    tracker_server=192.168.43.234:22122
    # true puts the group name in the URL, so files can be fetched straight from the browser address bar
    url_have_group_name = true
    store_path0=/fastdfs/storage
    # on group2 nodes, change this to group2
    group_name=group1
    group_count = 2
    [group1]
    group_name=group1
    storage_server_port=23000
    store_path_count=1
    store_path0=/fastdfs/storage
    [group2]
    group_name=group2
    storage_server_port=23000
    store_path_count=1
    store_path0=/fastdfs/storage
    cp /usr/local/fast/FastDFS/conf/http.conf /etc/fdfs
    cp /usr/local/fast/FastDFS/conf/mime.types /etc/fdfs/
    ln -s /fastdfs/storage/data/ /fastdfs/storage/data/M00
    vim /usr/local/nginx/conf/nginx.conf
    listen 8888;
    server_name localhost;
    location ~/group([0-9])/M00 {
        ngx_fastdfs_module;
    }
    # start nginx (built from source, so use the binary rather than systemctl)
    /usr/local/nginx/sbin/nginx
    # upload another file
    fdfs_upload_file /etc/fdfs/client.conf /root/1.png
    group1/M00/00/00/wKgrGF7_LISASYwWAAztU10n3gA811.png
    # the file can now be fetched in a browser — see the check below
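A quick check from any machine that can reach a group1 storage node, using the file ID returned above (IP and port per this deployment):

    curl -I http://192.168.43.60:8888/group1/M00/00/00/wKgrGF7_LISASYwWAAztU10n3gA811.png
    # HTTP/1.1 200 OK means the fastdfs module is serving the file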

Configure the tracker reverse proxy

We install nginx on both trackers so that a single, unified IP address can front the whole cluster.

Install nginx
    tar xvf nginx-1.8.0.tar.gz -C /usr/local/
    tar xvf ngx_cache_purge-2.3.tar.gz -C /usr/local/fast
    cd /usr/local/nginx-1.8.0/
    ./configure --add-module=/usr/local/fast/ngx_cache_purge-2.3
    make && make install
Configure nginx
    mkdir -p /fastdfs/cache/nginx/proxy_cache
    mkdir -p /fastdfs/cache/nginx/proxy_cache/tmp
    cat /usr/local/nginx/conf/nginx.conf
    worker_processes 1;
    events {
        worker_connections 1024;
    }
    http {
        include mime.types;
        default_type application/octet-stream;
        sendfile on;
        tcp_nopush on;
        keepalive_timeout 65;
        # cache settings
        server_names_hash_bucket_size 128;
        client_header_buffer_size 32k;
        large_client_header_buffers 4 32k;
        client_max_body_size 300m;
        proxy_redirect off;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_connect_timeout 90;
        proxy_send_timeout 90;
        proxy_read_timeout 90;
        proxy_buffer_size 16k;
        proxy_buffers 4 64k;
        proxy_busy_buffers_size 128k;
        proxy_temp_file_write_size 128k;
        proxy_cache_path /fastdfs/cache/nginx/proxy_cache levels=1:2
                         keys_zone=http-cache:200m max_size=1g inactive=30d;
        proxy_temp_path /fastdfs/cache/nginx/proxy_cache/tmp;
        # upstream pools per group (members must match the node table: group1 = .60/.97, group2 = .24/.128)
        upstream fdfs_group1 {
            server 192.168.43.60:8888 weight=1 max_fails=2 fail_timeout=30s;
            server 192.168.43.97:8888 weight=1 max_fails=2 fail_timeout=30s;
        }
        upstream fdfs_group2 {
            server 192.168.43.24:8888 weight=1 max_fails=2 fail_timeout=30s;
            server 192.168.43.128:8888 weight=1 max_fails=2 fail_timeout=30s;
        }
        server {
            listen 80;
            server_name localhost;
            location /group1/M00 {
                proxy_next_upstream http_502 http_504 error timeout invalid_header;
                proxy_cache http-cache;
                proxy_cache_valid 200 304 12h;
                proxy_cache_key $uri$is_args$args;
                proxy_pass http://fdfs_group1;
                expires 30d;
            }
            location /group2/M00 {
                proxy_next_upstream http_502 http_504 error timeout invalid_header;
                proxy_cache http-cache;
                proxy_cache_valid 200 304 12h;
                proxy_cache_key $uri$is_args$args;
                proxy_pass http://fdfs_group2;
                expires 30d;
            }
            # cache purge endpoint, restricted to localhost and the internal subnet
            location ~/purge(/.*) {
                allow 127.0.0.1;
                allow 192.168.43.0/24;
                deny all;
                proxy_cache_purge http-cache $1$is_args$args;
            }
            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
                root html;
            }
            location / {
                root html;
                index index.html index.htm;
            }
        }
    }
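With ngx_cache_purge compiled in, a cached object can be evicted by requesting the same path under /purge from an allowed address; a sketch against one tracker (the file ID is illustrative):

    # fetch once so the object lands in the cache, then purge it
    curl -s -o /dev/null http://192.168.43.233/group1/M00/00/00/wKgrGF7_LISASYwWAAztU10n3gA811.png
    curl http://192.168.43.233/purge/group1/M00/00/00/wKgrGF7_LISASYwWAAztU10n3gA811.png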
Start and verify the service
    /usr/local/nginx/sbin/nginx
    lsof -i:80
    COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
    nginx 18152 root 6u IPv4 64547 0t0 TCP *:http (LISTEN)
    nginx 18153 nobody 6u IPv4 64547 0t0 TCP *:http (LISTEN)
    # upload a few files to watch the load balancing
    [root@tracker1 ~]# fdfs_upload_file /etc/fdfs/client.conf /root/1.png
    group2/M00/00/00/wKgrYV7_LtKAVzpbAAztU10n3gA079.png
    [root@tracker1 ~]# fdfs_upload_file /etc/fdfs/client.conf /root/1.png
    group1/M00/00/00/wKgrPF7_LtOACpZhAAztU10n3gA410.png
    [root@tracker1 ~]# fdfs_upload_file /etc/fdfs/client.conf /root/1.png
    group2/M00/00/00/wKgrgF7_LtWAQfW7AAztU10n3gA673.png
    [root@tracker1 ~]# fdfs_upload_file /etc/fdfs/client.conf /root/1.png
    group1/M00/00/00/wKgrGF7_LtWAaML9AAztU10n3gA712.png
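The alternation between group2 and group1 above is exactly what store_lookup=0 (round-robin) produces; a compact way to repeat the test:

    for i in $(seq 4); do fdfs_upload_file /etc/fdfs/client.conf /root/1.png; done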

Configure the front-end Nginx reverse proxy

Install nginx
    rpm -ivh nginx-1.16.0-1.el7.ngx.x86_64.rpm
Configure nginx

Add an upstream named fastdfs_tracker to balance across the two trackers.

nginx.conf

    cat /etc/nginx/nginx.conf
    user nginx;
    worker_processes 1;
    error_log /var/log/nginx/error.log warn;
    pid /var/run/nginx.pid;
    events {
        worker_connections 1024;
    }
    http {
        upstream fastdfs_tracker {
            server 192.168.43.234:80 weight=1 max_fails=2 fail_timeout=30s;
            server 192.168.43.233:80 weight=1 max_fails=2 fail_timeout=30s;
        }
        include /etc/nginx/mime.types;
        default_type application/octet-stream;
        log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                        '$status $body_bytes_sent "$http_referer" '
                        '"$http_user_agent" "$http_x_forwarded_for"';
        access_log /var/log/nginx/access.log main;
        sendfile on;
        #tcp_nopush on;
        keepalive_timeout 65;
        #gzip on;
        include /etc/nginx/conf.d/*.conf;
    }

Add a location block that matches request paths containing /fastdfs.

default.conf

    cat /etc/nginx/conf.d/default.conf
    server {
        listen 80;
        server_name localhost;
        location / {
            root /usr/share/nginx/html;
            index index.html index.htm;
        }
        location /fastdfs {
            root html;
            index index.html index.htm;
            proxy_pass http://fastdfs_tracker/;
            proxy_set_header Host $http_host;
            proxy_set_header Cookie $http_cookie;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            client_max_body_size 300m;
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }
    }
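Note the trailing slash in proxy_pass http://fastdfs_tracker/: nginx replaces the matched /fastdfs prefix with /, so a request for /fastdfs/group1/M00/... reaches the tracker nginx as /group1/M00/... (the resulting doubled slash is normalized by the default merge_slashes). A quick check through the front proxy, with an illustrative file ID:

    curl -I http://192.168.43.220/fastdfs/group1/M00/00/00/wKgrGF7_LISASYwWAAztU10n3gA811.png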
Start and verify nginx
    systemctl start nginx
    fdfs_upload_file /etc/fdfs/client.conf /root/1.png
    group2/M00/00/00/wKgrYV7_MB-AQE_wAAztU10n3gA850.png

Configure keepalived for high availability

Install keepalived
    yum -y install keepalived
Configure keepalived

Both instances are declared state BACKUP with nopreempt, so whichever node acquires the VIP keeps it until its check script fails — the address does not flap back when a recovered node rejoins. Make sure interface matches each node's actual NIC name.

Primary node configuration

    cat keepalived.conf
    global_defs {
        router_id nginx_master
    }
    vrrp_script check_nginx {
        script "/etc/keepalived/check_nginx.sh"
        interval 5
    }
    vrrp_instance VI_1 {
        state BACKUP
        nopreempt
        interface ens33
        virtual_router_id 50
        priority 80
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass 1111
        }
        virtual_ipaddress {
            192.168.43.251
        }
        track_script {
            check_nginx
        }
    }

Backup node configuration

    cat keepalived.conf
    global_defs {
        router_id nginx_slave
    }
    vrrp_script check_nginx {
        script "/etc/keepalived/check_nginx.sh"
        interval 5
    }
    vrrp_instance VI_1 {
        state BACKUP
        nopreempt
        interface ens32
        virtual_router_id 50
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass 1111
        }
        virtual_ipaddress {
            192.168.43.251
        }
        track_script {
            check_nginx
        }
    }

check_nginx.sh

    cat /etc/keepalived/check_nginx.sh
    #!/bin/bash
    # if the local nginx stops answering, stop keepalived so the VIP fails over to the peer
    curl -I http://localhost &>/dev/null
    if [ $? -ne 0 ];then
        systemctl stop keepalived
    fi

Make the script executable:

    chmod +x /etc/keepalived/check_nginx.sh
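A common variant, if you would rather have the script try to revive nginx before surrendering the VIP — a sketch, not part of the original setup:

    #!/bin/bash
    # try a restart first; only stop keepalived if nginx still does not answer
    if ! curl -sI http://localhost >/dev/null; then
        systemctl restart nginx
        sleep 2
        if ! curl -sI http://localhost >/dev/null; then
            systemctl stop keepalived
        fi
    fi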
Start and verify the failover
    # start keepalived on both nodes
    systemctl start keepalived
    [root@nginx2 ~]# ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
    2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
        link/ether 00:0c:29:4b:e9:6e brd ff:ff:ff:ff:ff:ff
        inet 192.168.43.53/24 brd 192.168.43.255 scope global dynamic ens32
           valid_lft 2647sec preferred_lft 2647sec
        inet 192.168.43.251/32 scope global ens32
           valid_lft forever preferred_lft forever
    systemctl stop nginx
    # switch to the other machine: the VIP has moved over automatically
    [root@nginx1 ~]# ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
    2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
        link/ether 00:0c:29:17:4a:03 brd ff:ff:ff:ff:ff:ff
        inet 192.168.43.220/24 brd 192.168.43.255 scope global dynamic ens32
           valid_lft 2882sec preferred_lft 2882sec
        inet 192.168.43.251/32 scope global ens32
           valid_lft forever preferred_lft forever

Access from a browser

Even with one nginx node down, the service remains available through the VIP; the check below confirms it end to end.
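An end-to-end check through the VIP, using the file ID from the last upload (path shape per the /fastdfs location above):

    curl -I http://192.168.43.251/fastdfs/group2/M00/00/00/wKgrYV7_MB-AQE_wAAztU10n3gA850.png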
