1. Set the NGINX log format

    [root@zabbix_server ~]# vim /etc/nginx/nginx.conf

    log_format access_json_log '{"@timestamp":"$time_local",'
        '"http_host":"$http_host",'
        '"clientip":"$remote_addr",'
        '"request":"$request",'
        '"status":"$status",'
        '"size":"$body_bytes_sent",'
        '"upstream_addr":"$upstream_addr",'
        '"upstream_status":"$upstream_status",'
        '"upstream_response_time":"$upstream_response_time",'
        '"request_time":"$request_time",'
        '"http_referer":"$http_referer",'
        '"http_user_agent":"$http_user_agent",'
        '"http_x_forwarded_for":"$http_x_forwarded_for"}';

    access_log /var/log/nginx/access.log access_json_log;
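Because this log_format emits one JSON object per line, any consumer can read it with a plain JSON parser. A minimal Python sketch (the sample line below is hypothetical, reusing field values from later in this article); note that nginx does not JSON-escape variables such as $request in this format, so on nginx 1.11.8+ adding the `escape=json` parameter to log_format is worth considering:

```python
import json

# Hypothetical single line as produced by the access_json_log format above.
line = ('{"@timestamp":"21/Nov/2019:10:00:00 +0800",'
        '"http_host":"zabbix.9500.cn",'
        '"clientip":"219.239.8.14",'
        '"status":"200"}')

event = json.loads(line)  # one event per line
print(event["clientip"], event["status"])
```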

2. Download the GeoLite database into the logstash directory

geoip is a Logstash filter plugin that looks an IP address up in a database to obtain its geographic location. (MaxMind has since moved GeoLite2 downloads behind a free license key, so the URL below may no longer work without registering.)

    [root@server- logstash]# wget http://geolite.maxmind.com/download/geoip/database/GeoLite2-City.tar.gz
    Resolving geolite.maxmind.com (geolite.maxmind.com)... 104.17.200.89, 104.17.201.89, ...
    Connecting to geolite.maxmind.com (geolite.maxmind.com)|104.17.200.89|... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 29M [application/gzip]
    Saving to: GeoLite2-City.tar.gz

    (15.0 KB/s) - connection closed mid-transfer after 11m 30s. Retrying.

    Connecting to geolite.maxmind.com (geolite.maxmind.com)|104.17.200.89|... connected.
    HTTP request sent, awaiting response... 206 Partial Content
    Length: 29M, 18M remaining [application/gzip]
    Saving to: GeoLite2-City.tar.gz

    (34.4 KB/s) - GeoLite2-City.tar.gz saved after another 9m 9s

3. Extract the archive

    [root@server- logstash]# tar -zxvf GeoLite2-City.tar.gz
    GeoLite2-City_20191119/
    GeoLite2-City_20191119/LICENSE.txt
    GeoLite2-City_20191119/GeoLite2-City.mmdb
    GeoLite2-City_20191119/COPYRIGHT.txt
    GeoLite2-City_20191119/README.txt
    [root@server- logstash]#

4. Configure Logstash

Create a new config file named nginx.conf under /etc/logstash/conf.d:

    [root@server- conf.d]# vim /etc/logstash/conf.d/nginx.conf

    input {
      beats {
        port => 10001
      }
    }

    filter {
      geoip {
        source => "clientip"
        target => "geoip"
        database => "/etc/logstash/GeoLite2-City_20191119/GeoLite2-City.mmdb"
        add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
        add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
      }
    }

    output {
      stdout {
        codec => rubydebug
      }
    }

source: the event field containing the IP address to look up

target: the field to store the lookup result in; defaults to geoip

database: path to the IP geolocation database file

add_field: add the longitude and latitude as extra fields (adding to the same field twice builds a two-element array)
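The %{[geoip][longitude]} syntax in add_field is a Logstash field reference resolved against the event. As a rough illustration (not Logstash's actual implementation), the substitution works like this:

```python
import re

def resolve(template: str, event: dict) -> str:
    """Resolve Logstash-style %{[a][b]} field references against a nested event dict."""
    def lookup(match):
        value = event
        for key in re.findall(r"\[([^\]]+)\]", match.group(1)):
            value = value[key]
        return str(value)
    return re.sub(r"%\{(\[[^}]+\])\}", lookup, template)

event = {"geoip": {"longitude": 116.3883, "latitude": 39.9289}}
print(resolve("%{[geoip][longitude]}", event))  # → 116.3883
```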

5. Test the config file

    [root@server- conf.d]# logstash -f /etc/logstash/conf.d/nginx.conf
    WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
    Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
    [INFO ] scaffold - Initializing module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
    [INFO ] scaffold - Initializing module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
    [WARN ] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
    [INFO ] runner - Starting Logstash {"logstash.version"=>"6.2.4"}
    [INFO ] agent - Successfully started Logstash API endpoint {:port=>…}
    [INFO ] pipeline - Starting pipeline {:pipeline_id=>"main", ...}
    [INFO ] geoip - Using geoip database {:path=>"/etc/logstash/GeoLite2-City_20191119/GeoLite2-City.mmdb"}
    [INFO ] beats - Beats inputs: Starting input listener {:address=>"0.0.0.0:10001"}
    [INFO ] pipeline - Pipeline started successfully {:pipeline_id=>"main", ...}
    [INFO ] agent - Pipelines running {:count=>1, :pipelines=>["main"]}
    [INFO ] Server - Starting server on port: 10001

Open a new SSH session and check the Java process:

    [root@server- conf.d]# netstat -tunlp | grep java
    tcp6   172.28.18.69:…   :::*   LISTEN   …/java
    tcp6   :::…             :::*   LISTEN   …/java
    tcp6   172.28.18.69:…   :::*   LISTEN   …/java
    tcp6   127.0.0.1:…      :::*   LISTEN   …/java
    tcp6   172.28.18.69:…   :::*   LISTEN   …/java
    tcp6   :::…             :::*   LISTEN   …/java

Port 10001 is now being listened on, so the startup succeeded. After a moment the screen prints the received NGINX log events, like this:

    "http_referer" => "http://zabbix.9500.cn/zabbix.php?action=dashboard.view&ddreset=1",
    "upstream_addr" => "127.0.0.1:9000",
    "clientip" => "219.239.8.14",
    "source" => "/var/log/nginx/access.log",
    "beat" => {
        "name" => "zabbix_server.jinglong",
        "version" => "6.2.4",
        "hostname" => "zabbix_server.jinglong"
    },
    "fields" => {
        "log_topics" => "nginx-172.28.18.75"
    },
    "@version" => "",
    "upstream_status" => "",
    "http_user_agent" => "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.221 Safari/537.36 SE 2.X MetaSr 1.0",
    "offset" => …,
    "prospector" => {
        "type" => "log"
    },
    "request_time" => "0.639",
    "status" => "",
    "host" => "zabbix_server.jinglong"
    }
    {
    "upstream_response_time" => "0.828",
    "@timestamp" => 2019-11-20T09:….368Z,
    "http_host" => "zabbix.9500.cn",
    "tags" => [
        [0] "beats_input_raw_event"
    ],
    "request" => "GET /map.php?sysmapid=8&severity_min=0&sid=126eba41a3be1fb9&curtime=1574242326679&uniqueid=BCYQV&used_in_widget=1 HTTP/1.1",
    "http_x_forwarded_for" => "-",
    "size" => "",
    "geoip" => {
        "ip" => "219.239.8.14",
        "longitude" => 116.3883,
        "country_code2" => "CN",
        "region_code" => "BJ",
        "country_code3" => "CN",
        "continent_code" => "AS",
        "timezone" => "Asia/Shanghai",
        "latitude" => 39.9289,
        "country_name" => "China",
        "region_name" => "Beijing",
        "location" => {
            "lon" => 116.3883,
            "lat" => 39.9289
        }
    },

At this point the geoip data is visible, including longitude and latitude, country code, country name, and region/city name.
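The nested geoip.location object is the shape Kibana's geo_point type consumes; a geo_point can also be written as a "lat,lon" string. A small sketch using the values from the sample event above:

```python
# geoip portion of the sample event above.
geoip = {
    "country_name": "China",
    "region_name": "Beijing",
    "location": {"lon": 116.3883, "lat": 39.9289},
}

# geo_point accepts an object with lat/lon keys or a "lat,lon" string.
as_string = f'{geoip["location"]["lat"]},{geoip["location"]["lon"]}'
print(as_string)  # → 39.9289,116.3883
```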

Modify the config file to keep only the fields we need:

    [root@server- conf.d]# vim nginx.conf

    filter {
      geoip {
        source => "clientip"
        database => "/etc/logstash/GeoLite2-City_20191119/GeoLite2-City.mmdb"
        fields => ["country_name","region_name","longitude","latitude"]
      }
    }

fields: the lookup result fields to keep
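The effect of the fields option can be sketched as a simple filter over the lookup result (a rough illustration, not the plugin's actual code; field names taken from the sample output above):

```python
def keep_fields(geoip: dict, fields: list) -> dict:
    """Mimic the geoip filter's `fields` option: keep only the listed lookup results."""
    return {k: v for k, v in geoip.items() if k in fields}

full = {
    "ip": "219.239.8.14", "country_code2": "CN", "country_name": "China",
    "region_name": "Beijing", "longitude": 116.3883, "latitude": 39.9289,
}
print(keep_fields(full, ["country_name", "region_name", "longitude", "latitude"]))
```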

Save, quit, and restart Logstash with the config file:

    "request" => "POST /elasticsearch/_msearch HTTP/1.1",
    "upstream_status" => "",
    "fields" => {
        "log_topics" => "nginx-172.28.18.75"
    },
    "size" => "",
    "beat" => {
        "name" => "zabbix_server.jinglong",
        "hostname" => "zabbix_server.jinglong",
        "version" => "6.2.4"
    },
    "request_time" => "0.159",
    "offset" => …,
    "@version" => "",
    "upstream_addr" => "172.28.18.69:5601",
    "http_host" => "elk.9500.cn"
    }
    {
    "geoip" => {
        "latitude" => 39.9289,
        "region_name" => "Beijing",
        "longitude" => 116.3883,
        "country_name" => "China"
    },
    "http_user_agent" => "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.221 Safari/537.36 SE 2.X MetaSr 1.0",
    "prospector" => {
        "type" => "log"
    },

Now the geoip block contains only the fields we specified. Next, modify the config to ship the data to Elasticsearch:

    input {
      beats {
        port => 10001
      }
    }

    filter {
      geoip {
        source => "clientip"
        database => "/etc/logstash/GeoLite2-City_20191119/GeoLite2-City.mmdb"
        fields => ["country_name","region_name","longitude","latitude"]
      }
    }

    output {
      elasticsearch {
        hosts => ["172.28.18.69:9200"]
        index => "nginx-172.28.18.75-%{+YYYY.MM.dd}"
      }
    }
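The %{+YYYY.MM.dd} suffix makes the output write to a new index each day; Logstash expands it from the event's @timestamp (in UTC by default). A sketch of that expansion:

```python
from datetime import datetime, timezone

def index_name(prefix: str, ts: datetime) -> str:
    """Expand a %{+YYYY.MM.dd}-style suffix the way the elasticsearch output does."""
    return f"{prefix}-{ts.strftime('%Y.%m.%d')}"

ts = datetime(2019, 11, 21, tzinfo=timezone.utc)
print(index_name("nginx-172.28.18.75", ts))  # → nginx-172.28.18.75-2019.11.21
```

This daily rollover is also why the cleanup commands later in this article delete indices by date or with a wildcard.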

Start Logstash with the nginx.conf config file:

    [root@server- conf.d]# logstash -f /etc/logstash/conf.bak/nginx.conf
    WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
    Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
    [INFO ] runner - Starting Logstash {"logstash.version"=>"6.2.4"}
    [INFO ] pipeline - Starting pipeline {:pipeline_id=>"main", ...}
    [INFO ] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://172.28.18.69:9200/]}}
    [INFO ] elasticsearch - Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://172.28.18.69:9200/, :path=>"/"}
    [WARN ] elasticsearch - Restored connection to ES instance {:url=>"http://172.28.18.69:9200/"}
    [INFO ] elasticsearch - ES Output version determined {:es_version=>6}
    [WARN ] elasticsearch - Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
    [INFO ] elasticsearch - Using mapping template from {:path=>nil}
    [INFO ] elasticsearch - Attempting to install template {:manage_template=>{"template"=>"logstash-*", ..., "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}
    [INFO ] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//172.28.18.69:9200"]}
    [INFO ] geoip - Using geoip database {:path=>"/etc/logstash/GeoLite2-City_20191119/GeoLite2-City.mmdb"}
    [INFO ] beats - Beats inputs: Starting input listener {:address=>"0.0.0.0:10001"}
    [INFO ] pipeline - Pipeline started successfully {:pipeline_id=>"main", ...}
    [INFO ] agent - Pipelines running {:count=>1, :pipelines=>["main"]}
    [INFO ] Server - Starting server on port: 10001

6. Display the data in Kibana

Open Kibana and create an index pattern.

Click "Create index pattern"; once it is created you can see the index's field list, which includes the geoip fields.

In "Discover", create a new search and select the index pattern you just created; the geoip fields are visible there.

Next, display the data on a map.

In "Visualize", click "Create a visualization" and choose "Coordinate Map".

Select the index, then set the bucket type to "Geohash".

At this point an error appears:

It says no field of type geo_point was found, so the Logstash config must be changed to include the location field.

    input {
      beats {
        port => 10001
      }
    }

    filter {
      geoip {
        source => "clientip"
        database => "/etc/logstash/GeoLite2-City_20191119/GeoLite2-City.mmdb"
        fields => ["country_name","region_name","location"]
      }
    }

    output {
      elasticsearch {
        hosts => ["172.28.18.69:9200"]
        index => "nginx-172.28.18.75-%{+YYYY.MM.dd}"
      }
    }

Restart Logstash with the new config, and delete the old Elasticsearch index:

    [root@server- conf.d]# curl -XDELETE http://172.28.18.69:9200/nginx-172.28.18.75-*

Restart Kibana:

    [root@server- conf.d]# systemctl restart kibana

Open Kibana and recreate the index pattern; the geoip.location field is now present.

Rebuilding the coordinate map still fails, though.

It turned out the output index name was the problem: the name must start with logstash- so that Logstash's default Elasticsearch index template (which matches logstash-*) maps the location field as geo_point. So modify the Logstash config again:

    input {
      beats {
        port => 10001
      }
    }

    filter {
      geoip {
        source => "clientip"
        database => "/etc/logstash/GeoLite2-City_20191119/GeoLite2-City.mmdb"
        fields => ["country_name","region_name","location"]
        #add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
        #add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
      }
    }

    output {
      elasticsearch {
        hosts => ["172.28.18.69:9200"]
        index => "logstash-nginx-172.28.18.75-%{+YYYY.MM.dd}"
      }
    }
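Why the rename helps: Logstash 6.x ships a default Elasticsearch index template whose pattern is logstash-* (visible in the "Attempting to install template" log line earlier), and that template is what maps geoip.location as geo_point. Only index names matching the pattern pick it up, which this sketch illustrates:

```python
from fnmatch import fnmatch

TEMPLATE_PATTERN = "logstash-*"  # pattern of Logstash's default ES index template

for index in ("nginx-172.28.18.75-2019.11.21",
              "logstash-nginx-172.28.18.75-2019.11.21"):
    applies = fnmatch(index, TEMPLATE_PATTERN)
    print(index, "->", "template applies" if applies else "dynamic mapping only")
```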

Change the index name to logstash-nginx-172.28.18.75-%{+YYYY.MM.dd}, restart Logstash with the config file, and delete the old index:

    [root@server- conf.d]# logstash -f /etc/logstash/conf.bak/nginx.conf
    curl -XDELETE http://172.28.18.69:9200/nginx-172.28.18.75-2019.11.21

Open Kibana, delete the previous index pattern, and create it again.

The geoip.location field's type is now geo_point. With that solved, rebuild the coordinate map.

The data displays successfully.

7. Use Amap (AutoNavi) tiles to show the map in Chinese

Edit the Kibana config file and append one line at the end:

tilemap.url: 'http://webrd02.is.autonavi.com/appmaptile?lang=zh_cn&size=1&scale=1&style=7&x={x}&y={y}&z={z}'

    [root@server- conf.d]# vim /etc/kibana/kibana.yml

    # The default locale. This locale can be used in certain circumstances to substitute any missing
    # translations.
    #i18n.defaultLocale: "en"

    tilemap.url: 'http://webrd02.is.autonavi.com/appmaptile?lang=zh_cn&size=1&scale=1&style=7&x={x}&y={y}&z={z}'
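When fetching tiles, Kibana substitutes the {x}/{y}/{z} placeholders with tile coordinates at the current zoom level; the substitution itself is just string formatting (the tile numbers below are arbitrary):

```python
tilemap_url = ("http://webrd02.is.autonavi.com/appmaptile"
               "?lang=zh_cn&size=1&scale=1&style=7&x={x}&y={y}&z={z}")

# A hypothetical tile request at zoom level 2.
print(tilemap_url.format(x=3, y=1, z=2))
```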

Restart Kibana:

    [root@server- conf.d]# systemctl restart kibana

Refresh the Kibana page and the map tiles now display in Chinese.
