Deploying the Kubernetes nginx ingress controller

1. Download the ingress-nginx YAML manifest

wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml

The GitHub repository: https://github.com/kubernetes/ingress-nginx

2. Create ingress-nginx

[root@k8s-m1 nginx-ingress]# kubectl apply -f ./mandatory.yaml

namespace "ingress-nginx" created

configmap "nginx-configuration" created

configmap "tcp-services" created

configmap "udp-services" created

serviceaccount "nginx-ingress-serviceaccount" created

clusterrole.rbac.authorization.k8s.io "nginx-ingress-clusterrole" created

role.rbac.authorization.k8s.io "nginx-ingress-role" created

rolebinding.rbac.authorization.k8s.io "nginx-ingress-role-nisa-binding" created

clusterrolebinding.rbac.authorization.k8s.io "nginx-ingress-clusterrole-nisa-binding" created

deployment.apps "nginx-ingress-controller" created

Check the controller pod that was created:

[root@k8s-m1 nginx-ingress]# kubectl get pods -n ingress-nginx

NAME                                        READY     STATUS    RESTARTS   AGE

nginx-ingress-controller-57548b96c8-r7mfr   1/1       Running   0          19m
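Rather than re-running kubectl get pods until the pod comes up, it is possible to block until the controller is actually Ready. A sketch using kubectl wait; the label selector is an assumption based on the pod labels this revision of mandatory.yaml applies (the same labels the Service selector below uses):

```shell
kubectl wait --for=condition=Ready pod \
  -l app.kubernetes.io/name=ingress-nginx \
  -n ingress-nginx --timeout=120s
```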

3. Create the nginx service. Create a file named ingress-nginx-service.yml with the following content:

[root@k8s-m1 nginx-ingress]# cat ingress-nginx-service.yml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app: nginx-ingress-controller
spec:
  type: NodePort
#  externalIPs:
#  - 192.168.4.116
  ports:
  - port: 80
    targetPort: 80
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

Create the service:

[root@k8s-m1 nginx-ingress]# kubectl apply -f ./ingress-nginx-service.yml

service "ingress-nginx" created

Check the created service:

[root@k8s-m1 nginx-ingress]# kubectl get service -n ingress-nginx

NAME            TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE

ingress-nginx   NodePort   10.108.50.183   <none>        80:32721/TCP   12s

[root@k8s-m1 nginx-ingress]# kubectl describe service -n ingress-nginx

Name:                     ingress-nginx

Namespace:                ingress-nginx

Labels:                   app=nginx-ingress-controller

Annotations:              kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"nginx-ingress-controller"},"name":"ingress-nginx","namespace":"ingres...

Selector:                 app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/part-of=ingress-nginx

Type:                     NodePort

IP:                       10.108.50.183

Port:                     <unset>  80/TCP

TargetPort:               80/TCP

NodePort:                 <unset>  32721/TCP

Endpoints:                10.244.2.26:80

Session Affinity:         None

External Traffic Policy:  Cluster

Events:                   <none>
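The Endpoints line above (10.244.2.26:80) is the controller pod's address, confirming that node-port traffic reaching the Service is forwarded to the controller. This can be cross-checked against the pod list (a sketch; both commands only read cluster state):

```shell
kubectl get endpoints ingress-nginx -n ingress-nginx
kubectl get pods -n ingress-nginx -o wide
```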

4. Create the ingress rules

First, check the services that are already installed: guestbook (the frontend service) and nginx (serving as a plain web server):

[root@k8s-m1 nginx-ingress]# kubectl get service

NAME           TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE

frontend       ClusterIP      10.96.97.204    <none>        80/TCP         5d

kubernetes     ClusterIP      10.96.0.1       <none>        443/TCP        7d

nginx          LoadBalancer   10.110.0.86     <pending>     80:31316/TCP   6d

redis-master   ClusterIP      10.97.234.59    <none>        6379/TCP       6d

redis-slave    ClusterIP      10.106.15.249   <none>        6379/TCP       6d

Create the ingress configuration file. The host www.guest.com is routed to the frontend service, and www.nginx.com is routed to the nginx service.

[root@k8s-m1 nginx-ingress]# cat test-nginx-service.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-service-ingress
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: www.guest.com
    http:
      paths:
      - path: /
        backend:
          serviceName: frontend
          servicePort: 80
  - host: www.nginx.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx
          servicePort: 80

Create test-service-ingress:

[root@k8s-m1 nginx-ingress]# kubectl apply -f ./test-nginx-service.yaml

ingress.extensions "test-service-ingress" created
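A note on API versions: the manifest above uses extensions/v1beta1, which matches the cluster used in this walkthrough but was removed in Kubernetes v1.22. On a newer cluster, a sketch of the same two rules against networking.k8s.io/v1 (the ingressClassName value nginx is an assumption about how the controller's IngressClass is named there):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-service-ingress
  namespace: default
spec:
  ingressClassName: nginx
  rules:
  - host: www.guest.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend
            port:
              number: 80
  - host: www.nginx.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80
```

In v1, serviceName/servicePort become a nested service block and each path needs an explicit pathType.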

Check the created ingress rules:

[root@k8s-m1 nginx-ingress]# kubectl get ingress

NAME                   HOSTS                         ADDRESS   PORTS     AGE

test-service-ingress   www.guest.com,www.nginx.com             80        39s

[root@k8s-m1 nginx-ingress]# kubectl describe ingress

Name:             test-service-ingress

Namespace:        default

Address:

Default backend:  default-http-backend:80 (<none>)

Rules:
  Host           Path  Backends
  ----           ----  --------
  www.guest.com
                 /   frontend:80 (<none>)
  www.nginx.com
                 /   nginx:80 (<none>)

Annotations:

kubectl.kubernetes.io/last-applied-configuration:  {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"nginx.ingress.kubernetes.io/ingress.class":"nginx"},"name":"test-service-ingress","namespace":"default"},"spec":{"rules":[{"host":"www.guest.com","http":{"paths":[{"backend":{"serviceName":"frontend","servicePort":80},"path":"/"}]}},{"host":"www.nginx.com","http":{"paths":[{"backend":{"serviceName":"nginx","servicePort":80},"path":"/"}]}}]}}

nginx.ingress.kubernetes.io/ingress.class:  nginx

Events:
  Type    Reason  Age   From                      Message
  ----    ------  ----  ----                      -------
  Normal  CREATE  1m    nginx-ingress-controller  Ingress default/test-service-ingress

[root@k8s-m1 nginx-ingress]#

5. Verify that the ingress routing works

Check the cluster IP of the ingress-nginx service:

[root@k8s-m1 nginx-ingress]# kubectl get service -n ingress-nginx

NAME            TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE

ingress-nginx   NodePort   10.108.50.183   <none>        80:32721/TCP   5m
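Because the service is of type NodePort, the same requests also work from outside the cluster against any node's address on port 32721 instead of the cluster IP. A sketch, assuming a node reachable at 192.168.4.116 (the address from the commented-out externalIPs example above; substitute a real node IP):

```shell
curl -H "Host: www.nginx.com" http://192.168.4.116:32721/
curl -H "Host: www.guest.com" http://192.168.4.116:32721/
```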

Simulate a request to www.nginx.com with curl; it succeeds:

[root@k8s-m1 nginx-ingress]# curl -H "host:www.nginx.com" http://10.108.50.183

<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Simulate a request to www.guest.com with curl; it also succeeds:

[root@k8s-m1 nginx-ingress]# curl -H "host:www.guest.com" http://10.108.50.183

<html ng-app="redis">
  <head>
    <title>Guestbook</title>
    <link rel="stylesheet" href="//netdna.bootstrapcdn.com/bootstrap/3.1.1/css/bootstrap.min.css">
    <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.2.12/angular.min.js"></script>
    <script src="controllers.js"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/angular-ui-bootstrap/0.13.0/ui-bootstrap-tpls.js"></script>
  </head>
  <body ng-controller="RedisCtrl">
    <div style="width: 50%; margin-left: 20px">
      <h2>Guestbook</h2>
      <form>
        <fieldset>
          <input ng-model="msg" placeholder="Messages" class="form-control" type="text" name="input"><br>
          <button type="button" class="btn btn-primary" ng-click="controller.onRedis()">Submit</button>
        </fieldset>
      </form>
      <div>
        <div ng-repeat="msg in messages track by $index">
          {{msg}}
        </div>
      </div>
    </div>
  </body>
</html>

[root@k8s-m1 nginx-ingress]#

6. Inspect the ingress-nginx load-balancing logs

Get the pod name:

[root@k8s-m1 nginx-ingress]# kubectl get pods -n ingress-nginx

NAME                                        READY     STATUS    RESTARTS   AGE

nginx-ingress-controller-57548b96c8-r7mfr   1/1       Running   0          15m

[root@k8s-m1 nginx-ingress]#

Use kubectl logs to inspect the request log. The two requests are visible, forwarded to the upstreams [default-nginx-80] and [default-frontend-80] respectively:

[root@k8s-m1 nginx-ingress]# kubectl logs nginx-ingress-controller-57548b96c8-r7mfr -n ingress-nginx

I0405 13:29:45.667543       5 event.go:221] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"guestbook-ingress", UID:"1ba24d4d-55f7-11e9-997c-005056b66e19", APIVersion:"extensions/v1beta1", ResourceVersion:"827383", FieldPath:""}): type: 'Normal' reason: 'DELETE' Ingress default/guestbook-ingress

I0405 13:29:45.815499       5 controller.go:190] Backend successfully reloaded.

[05/Apr/2019:13:29:45 +0000] TCP 200 0 0 0.000

10.244.0.0 - [10.244.0.0] - - [05/Apr/2019:13:30:59 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" 76 0.001 [default-nginx-80] 10.244.1.7:80 612 0.001 200 325c5a0460a6a96e5b0942c3118531d2

10.244.0.0 - [10.244.0.0] - - [05/Apr/2019:13:31:23 +0000] "GET / HTTP/1.1" 200 921 "-" "curl/7.29.0" 76 0.002 [default-frontend-80] 10.244.2.11:80 921 0.001 200 cb2cc5b9e473741eb626cb1f72300111
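The bracketed field in each request line is $proxy_upstream_name, formatted as namespace-service-port, which identifies the backend pool that served the request. A small sketch of extracting it from a log line with standard shell tools (the sample line is copied from the log above):

```shell
# Sample upstreaminfo access-log line from the controller.
line='10.244.0.0 - [10.244.0.0] - - [05/Apr/2019:13:30:59 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" 76 0.001 [default-nginx-80] 10.244.1.7:80 612 0.001 200 325c5a0460a6a96e5b0942c3118531d2'

# The upstream pool is the only bracketed field shaped like [namespace-service-port];
# the client-IP and timestamp brackets contain dots, slashes or colons and do not match.
upstream=$(printf '%s\n' "$line" | grep -oE '\[[a-z0-9-]+-[0-9]+\]' | tr -d '[]')
echo "$upstream"   # prints default-nginx-80
```

Running the same pipeline over the second log line yields default-frontend-80.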

List the pod IP addresses. The upstream addresses in the log match the pods: 10.244.1.7 is one of the nginx pods and 10.244.2.11 one of the frontend pods:

[root@k8s-m1 nginx-ingress]# kubectl get pods -o wide

NAME                            READY     STATUS    RESTARTS   AGE       IP            NODE

frontend-5c548f4769-jwcnc       1/1       Running   0          6d        10.244.2.12   k8s-n2

frontend-5c548f4769-q7tmq       1/1       Running   0          6d        10.244.1.10   k8s-n1

frontend-5c548f4769-qftlv       1/1       Running   0          6d        10.244.2.11   k8s-n2

nginx-56f766d96f-26ftc          1/1       Running   0          6d        10.244.2.7    k8s-n2

nginx-56f766d96f-9f6ms          1/1       Running   0          6d        10.244.1.8    k8s-n1

nginx-56f766d96f-jmrfr          1/1       Running   0          6d        10.244.2.8    k8s-n2

nginx-56f766d96f-p26ns          1/1       Running   0          6d        10.244.1.7    k8s-n1

redis-master-55db5f7567-wvd9g   1/1       Running   0          6d        10.244.2.9    k8s-n2

redis-slave-584c66c5b5-7p76n    1/1       Running   0          6d        10.244.2.10   k8s-n2

redis-slave-584c66c5b5-cp2bp    1/1       Running   0          6d        10.244.1.9    k8s-n1

Inspect the nginx configuration generated inside the controller; it contains the routing for both hosts, www.guest.com and www.nginx.com:

kubectl -n <namespace> exec <nginx-ingress-controller-pod-name> -- cat /etc/nginx/nginx.conf


[root@k8s-m1 nginx-ingress]# kubectl -n ingress-nginx exec nginx-ingress-controller-57548b96c8-r7mfr  -- cat /etc/nginx/nginx.conf

# Configuration checksum: 8514084035854042481

# setup custom paths that do not require root access
pid /tmp/nginx.pid;
load_module /etc/nginx/modules/ngx_http_modsecurity_module.so;
daemon off;

worker_processes 2;
worker_rlimit_nofile 31744;
worker_shutdown_timeout 10s ;

events {
    multi_accept        on;
    worker_connections  16384;
    use                 epoll;
}

http {
    lua_package_cpath "/usr/local/lib/lua/?.so;/usr/lib/lua-platform-path/lua/5.1/?.so;;";
    lua_package_path "/etc/nginx/lua/?.lua;/etc/nginx/lua/vendor/?.lua;/usr/local/lib/lua/?.lua;;";

    lua_shared_dict configuration_data 5M;
    lua_shared_dict certificate_data 16M;

    init_by_lua_block {
        require("resty.core")
        collectgarbage("collect")

        local lua_resty_waf = require("resty.waf")
        lua_resty_waf.init()

        -- init modules
        local ok, res

        ok, res = pcall(require, "lua_ingress")
        if not ok then
            error("require failed: " .. tostring(res))
        else
            lua_ingress = res
        end

        ok, res = pcall(require, "configuration")
        if not ok then
            error("require failed: " .. tostring(res))
        else
            configuration = res
            configuration.nameservers = { "10.96.0.10" }
        end

        ok, res = pcall(require, "balancer")
        if not ok then
            error("require failed: " .. tostring(res))
        else
            balancer = res
        end

        ok, res = pcall(require, "monitor")
        if not ok then
            error("require failed: " .. tostring(res))
        else
            monitor = res
        end
    }

    init_worker_by_lua_block {
        lua_ingress.init_worker()
        balancer.init_worker()
        monitor.init_worker()
    }

    geoip_country       /etc/nginx/geoip/GeoIP.dat;
    geoip_city          /etc/nginx/geoip/GeoLiteCity.dat;
    geoip_org           /etc/nginx/geoip/GeoIPASNum.dat;
    geoip_proxy_recursive on;

    aio                 threads;
    aio_write           on;

    tcp_nopush          on;
    tcp_nodelay         on;

    log_subrequest      on;

    reset_timedout_connection on;

    keepalive_timeout  75s;
    keepalive_requests 100;

    client_body_temp_path           /tmp/client-body;
    fastcgi_temp_path               /tmp/fastcgi-temp;
    proxy_temp_path                 /tmp/proxy-temp;
    ajp_temp_path                   /tmp/ajp-temp;

    client_header_buffer_size       1k;
    client_header_timeout           60s;
    large_client_header_buffers     4 8k;
    client_body_buffer_size         8k;
    client_body_timeout             60s;

    http2_max_field_size            4k;
    http2_max_header_size           16k;
    http2_max_requests              1000;

    types_hash_max_size             2048;
    server_names_hash_max_size      1024;
    server_names_hash_bucket_size   32;
    map_hash_bucket_size            64;

    proxy_headers_hash_max_size     512;
    proxy_headers_hash_bucket_size  64;

    variables_hash_bucket_size      128;
    variables_hash_max_size         2048;

    underscores_in_headers          off;
    ignore_invalid_headers          on;

    limit_req_status                503;
    limit_conn_status               503;

    include /etc/nginx/mime.types;
    default_type text/html;

    gzip on;
    gzip_comp_level 5;
    gzip_http_version 1.1;
    gzip_min_length 256;
    gzip_types application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component;
    gzip_proxied any;
    gzip_vary on;

    # Custom headers for response

    server_tokens on;

    # disable warnings
    uninitialized_variable_warn off;

    # Additional available variables:
    # $namespace
    # $ingress_name
    # $service_name
    # $service_port

    log_format upstreaminfo '$the_real_ip - [$the_real_ip] - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $request_length $request_time [$proxy_upstream_name] $upstream_addr $upstream_response_length $upstream_response_time $upstream_status $req_id';

    map $request_uri $loggable {
        default 1;
    }

    access_log /var/log/nginx/access.log upstreaminfo  if=$loggable;
    error_log  /var/log/nginx/error.log notice;

    resolver 10.96.0.10 valid=30s;

    # See https://www.nginx.com/blog/websocket-nginx
    map $http_upgrade $connection_upgrade {
        default          upgrade;
        # See http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive
        ''               '';
    }

    # The following is a sneaky way to do "set $the_real_ip $remote_addr"
    # Needed because using set is not allowed outside server blocks.
    map '' $the_real_ip {
        default          $remote_addr;
    }

    map '' $pass_access_scheme {
        default          $scheme;
    }

    map '' $pass_server_port {
        default          $server_port;
    }

    # Obtain best http host
    map $http_host $best_http_host {
        default          $http_host;
        ''               $host;
    }

    # validate $pass_access_scheme and $scheme are http to force a redirect
    map "$scheme:$pass_access_scheme" $redirect_to_https {
        default          0;
        "http:http"      1;
        "https:http"     1;
    }

    map $pass_server_port $pass_port {
        443              443;
        default          $pass_server_port;
    }

    # Reverse proxies can detect if a client provides a X-Request-ID header, and pass it on to the backend server.
    # If no such header is provided, it can provide a random value.
    map $http_x_request_id $req_id {
        default   $http_x_request_id;
        ""        $request_id;
    }

    # Create a variable that contains the literal $ character.
    # This works because the geo module will not resolve variables.
    geo $literal_dollar {
        default "$";
    }

    server_name_in_redirect off;
    port_in_redirect        off;

    ssl_protocols TLSv1.2;

    # turn on session caching to drastically improve performance
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_session_timeout 10m;

    # allow configuring ssl session tickets
    ssl_session_tickets on;

    # slightly reduce the time-to-first-byte
    ssl_buffer_size 4k;

    # allow configuring custom ssl ciphers
    ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';
    ssl_prefer_server_ciphers on;

    ssl_ecdh_curve auto;

    proxy_ssl_session_reuse on;

    upstream upstream_balancer {
        server 0.0.0.1; # placeholder

        balancer_by_lua_block {
            balancer.balance()
        }

        keepalive 32;
        keepalive_timeout  60s;
        keepalive_requests 100;
    }

    # Global filters

    ## start server _
    server {
        server_name _ ;

        listen 80 default_server reuseport backlog=511;
        listen [::]:80 default_server reuseport backlog=511;

        set $proxy_upstream_name "-";

        listen 443  default_server reuseport backlog=511 ssl http2;
        listen [::]:443  default_server reuseport backlog=511 ssl http2;

        # PEM sha: 91dea33a9c35869823040d446b07b26bf9f51813
        ssl_certificate                         /etc/ingress-controller/ssl/default-fake-certificate.pem;
        ssl_certificate_key                     /etc/ingress-controller/ssl/default-fake-certificate.pem;

        location / {
            set $namespace      "";
            set $ingress_name   "";
            set $service_name   "";
            set $service_port   "0";
            set $location_path  "/";

            rewrite_by_lua_block {
                balancer.rewrite()
            }

            header_filter_by_lua_block {
            }

            body_filter_by_lua_block {
            }

            log_by_lua_block {
                balancer.log()
                monitor.call()
            }

            if ($scheme = https) {
                more_set_headers                        "Strict-Transport-Security: max-age=15724800; includeSubDomains";
            }

            access_log off;

            port_in_redirect off;

            set $proxy_upstream_name    "upstream-default-backend";
            set $proxy_host             $proxy_upstream_name;

            client_max_body_size                    1m;

            proxy_set_header Host                   $best_http_host;

            # Pass the extracted client certificate to the backend

            # Allow websocket connections
            proxy_set_header                        Upgrade           $http_upgrade;
            proxy_set_header                        Connection        $connection_upgrade;

            proxy_set_header X-Request-ID           $req_id;
            proxy_set_header X-Real-IP              $the_real_ip;
            proxy_set_header X-Forwarded-For        $the_real_ip;
            proxy_set_header X-Forwarded-Host       $best_http_host;
            proxy_set_header X-Forwarded-Port       $pass_port;
            proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
            proxy_set_header X-Original-URI         $request_uri;
            proxy_set_header X-Scheme               $pass_access_scheme;

            # Pass the original X-Forwarded-For
            proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;

            # mitigate HTTPoxy Vulnerability
            # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
            proxy_set_header Proxy                  "";

            # Custom headers to proxied server

            proxy_connect_timeout                   5s;
            proxy_send_timeout                      60s;
            proxy_read_timeout                      60s;

            proxy_buffering                         off;
            proxy_buffer_size                       4k;
            proxy_buffers                           4 4k;
            proxy_request_buffering                 on;

            proxy_http_version                      1.1;

            proxy_cookie_domain                     off;
            proxy_cookie_path                       off;

            # In case of errors try the next upstream server before returning an error
            proxy_next_upstream                     error timeout;
            proxy_next_upstream_tries               3;

            proxy_pass http://upstream_balancer;
            proxy_redirect                          off;
        }

        # health checks in cloud providers require the use of port 80
        location /healthz {
            access_log off;
            return 200;
        }

        # this is required to avoid error if nginx is being monitored
        # with an external software (like sysdig)
        location /nginx_status {
            allow 127.0.0.1;
            allow ::1;
            deny all;

            access_log off;
            stub_status on;
        }
    }
    ## end server _

    ## start server www.guest.com
    server {
        server_name www.guest.com ;

        listen 80;
        listen [::]:80;

        set $proxy_upstream_name "-";

        location / {
            set $namespace      "default";
            set $ingress_name   "test-service-ingress";
            set $service_name   "frontend";
            set $service_port   "80";
            set $location_path  "/";

            rewrite_by_lua_block {
                balancer.rewrite()
            }

            header_filter_by_lua_block {
            }

            body_filter_by_lua_block {
            }

            log_by_lua_block {
                balancer.log()
                monitor.call()
            }

            port_in_redirect off;

            set $proxy_upstream_name    "default-frontend-80";
            set $proxy_host             $proxy_upstream_name;

            client_max_body_size                    1m;

            proxy_set_header Host                   $best_http_host;

            # Pass the extracted client certificate to the backend

            # Allow websocket connections
            proxy_set_header                        Upgrade           $http_upgrade;
            proxy_set_header                        Connection        $connection_upgrade;

            proxy_set_header X-Request-ID           $req_id;
            proxy_set_header X-Real-IP              $the_real_ip;
            proxy_set_header X-Forwarded-For        $the_real_ip;
            proxy_set_header X-Forwarded-Host       $best_http_host;
            proxy_set_header X-Forwarded-Port       $pass_port;
            proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
            proxy_set_header X-Original-URI         $request_uri;
            proxy_set_header X-Scheme               $pass_access_scheme;

            # Pass the original X-Forwarded-For
            proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;

            # mitigate HTTPoxy Vulnerability
            # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
            proxy_set_header Proxy                  "";

            # Custom headers to proxied server

            proxy_connect_timeout                   5s;
            proxy_send_timeout                      60s;
            proxy_read_timeout                      60s;

            proxy_buffering                         off;
            proxy_buffer_size                       4k;
            proxy_buffers                           4 4k;
            proxy_request_buffering                 on;

            proxy_http_version                      1.1;

            proxy_cookie_domain                     off;
            proxy_cookie_path                       off;

            # In case of errors try the next upstream server before returning an error
            proxy_next_upstream                     error timeout;
            proxy_next_upstream_tries               3;

            proxy_pass http://upstream_balancer;
            proxy_redirect                          off;
        }
    }
    ## end server www.guest.com

    ## start server www.nginx.com
    server {
        server_name www.nginx.com ;

        listen 80;
        listen [::]:80;

        set $proxy_upstream_name "-";

        location / {
            set $namespace      "default";
            set $ingress_name   "test-service-ingress";
            set $service_name   "nginx";
            set $service_port   "80";
            set $location_path  "/";

            rewrite_by_lua_block {
                balancer.rewrite()
            }

            header_filter_by_lua_block {
            }

            body_filter_by_lua_block {
            }

            log_by_lua_block {
                balancer.log()
                monitor.call()
            }

            port_in_redirect off;

            set $proxy_upstream_name    "default-nginx-80";
            set $proxy_host             $proxy_upstream_name;

            client_max_body_size                    1m;

            proxy_set_header Host                   $best_http_host;

            # Pass the extracted client certificate to the backend

            # Allow websocket connections
            proxy_set_header                        Upgrade           $http_upgrade;
            proxy_set_header                        Connection        $connection_upgrade;

            proxy_set_header X-Request-ID           $req_id;
            proxy_set_header X-Real-IP              $the_real_ip;
            proxy_set_header X-Forwarded-For        $the_real_ip;
            proxy_set_header X-Forwarded-Host       $best_http_host;
            proxy_set_header X-Forwarded-Port       $pass_port;
            proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
            proxy_set_header X-Original-URI         $request_uri;
            proxy_set_header X-Scheme               $pass_access_scheme;

            # Pass the original X-Forwarded-For
            proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;

            # mitigate HTTPoxy Vulnerability
            # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
            proxy_set_header Proxy                  "";

            # Custom headers to proxied server

            proxy_connect_timeout                   5s;
            proxy_send_timeout                      60s;
            proxy_read_timeout                      60s;

            proxy_buffering                         off;
            proxy_buffer_size                       4k;
            proxy_buffers                           4 4k;
            proxy_request_buffering                 on;

            proxy_http_version                      1.1;

            proxy_cookie_domain                     off;
            proxy_cookie_path                       off;

            # In case of errors try the next upstream server before returning an error
            proxy_next_upstream                     error timeout;
            proxy_next_upstream_tries               3;

            proxy_pass http://upstream_balancer;
            proxy_redirect                          off;
        }
    }
    ## end server www.nginx.com

    # backend for when default-backend-service is not configured or it does not have endpoints
    server {
        listen 8181 default_server reuseport backlog=511;
        listen [::]:8181 default_server reuseport backlog=511;
        set $proxy_upstream_name "internal";

        access_log off;

        location / {
            return 404;
        }
    }

    # default server, used for NGINX healthcheck and access to nginx stats
    server {
        listen unix:/tmp/nginx-status-server.sock;
        set $proxy_upstream_name "internal";

        keepalive_timeout 0;
        gzip off;

        access_log off;

        location /healthz {
            return 200;
        }

        location /is-dynamic-lb-initialized {
            content_by_lua_block {
                local configuration = require("configuration")
                local backend_data = configuration.get_backends_data()
                if not backend_data then
                    ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
                    return
                end

                ngx.say("OK")
                ngx.exit(ngx.HTTP_OK)
            }
        }

        location /nginx_status {
            stub_status on;
        }

        location /configuration {
            # this should be equals to configuration_data dict
            client_max_body_size                    10m;
            client_body_buffer_size                 10m;
            proxy_buffering                         off;

            content_by_lua_block {
                configuration.call()
            }
        }

        location / {
            content_by_lua_block {
                ngx.exit(ngx.HTTP_NOT_FOUND)
            }
        }
    }
}

stream {
    lua_package_cpath "/usr/local/lib/lua/?.so;/usr/lib/lua-platform-path/lua/5.1/?.so;;";
    lua_package_path "/etc/nginx/lua/?.lua;/etc/nginx/lua/vendor/?.lua;/usr/local/lib/lua/?.lua;;";

    lua_shared_dict tcp_udp_configuration_data 5M;

    init_by_lua_block {
        require("resty.core")
        collectgarbage("collect")

        -- init modules
        local ok, res

        ok, res = pcall(require, "configuration")
        if not ok then
            error("require failed: " .. tostring(res))
        else
            configuration = res
            configuration.nameservers = { "10.96.0.10" }
        end

        ok, res = pcall(require, "tcp_udp_configuration")
        if not ok then
            error("require failed: " .. tostring(res))
        else
            tcp_udp_configuration = res
        end

        ok, res = pcall(require, "tcp_udp_balancer")
        if not ok then
            error("require failed: " .. tostring(res))
        else
            tcp_udp_balancer = res
        end
    }

    init_worker_by_lua_block {
        tcp_udp_balancer.init_worker()
    }

    lua_add_variable $proxy_upstream_name;

    log_format log_stream [$time_local] $protocol $status $bytes_sent $bytes_received $session_time;

    access_log /var/log/nginx/access.log log_stream ;

    error_log  /var/log/nginx/error.log;

    upstream upstream_balancer {
        server 0.0.0.1:1234; # placeholder

        balancer_by_lua_block {
            tcp_udp_balancer.balance()
        }
    }

    server {
        listen unix:/tmp/ingress-stream.sock;

        content_by_lua_block {
            tcp_udp_configuration.call()
        }
    }

    # TCP services

    # UDP services
}
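The full file is long; to jump straight to the per-host routing it is enough to filter the dump for server_name and proxy_upstream_name, which shows both virtual hosts and the backend pools they route to (a sketch; the pod name is the one from the earlier listing):

```shell
kubectl -n ingress-nginx exec nginx-ingress-controller-57548b96c8-r7mfr -- \
  grep -E 'server_name |proxy_upstream_name' /etc/nginx/nginx.conf
```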

 
