1. Installation methods

1. Traditional method: all of the components below run at the system level (installed via yum or rpm packages) as system-level daemons.

2. kubeadm method: the components on the master and the nodes all run as Pod containers; the Kubernetes components themselves run as Pods.

  a) master/nodes

  Install kubelet, kubeadm, and docker

  b) master: kubeadm init (completes cluster initialization)

  c) nodes: kubeadm join

2. Planning

Host         IP

k8smaster    192.168.2.19

k8snode01    192.168.2.21

k8snode02    192.168.2.5

Versions:

docker: 18.06.1.ce-3.el7

OS: CentOS release 7

kubernetes: 1.11.2

kubeadm 1.11.2

kubectl 1.11.2

kubelet 1.11.2

etcdctl: 3.2.15

3. Host setup

3.1 Disable the swap partition

swapoff -a

sed -i 's/.*swap.*/#&/' /etc/fstab
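
To confirm swap is actually off before continuing, a quick check (nothing beyond standard CentOS 7 tools; swapon -s prints nothing when no swap is active):

# Verify no swap is active and the fstab entry is commented out
swapon -s
free -m | grep -i swap
grep swap /etc/fstab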

3.2 Initialize the system

Disable SELinux and the firewall

sed -i '/SELINUX/s/enforcing/disabled/' /etc/selinux/config
setenforce 0
systemctl disable firewalld
systemctl stop firewalld

Enable forwarding in the firewall

iptables -P FORWARD ACCEPT

Install required and commonly used tools

yum install -y epel-release vim vim-enhanced lrzsz unzip ntpdate sysstat dstat wget mlocate mtr lsof iotop bind-tools git net-tools

Other settings

# Time settings
timedatectl set-timezone Asia/Shanghai
ntpdate ntp1.aliyun.com
rm -f /etc/localtime
ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
# Add timestamps to the shell history
systemctl restart rsyslog
echo 'export HISTTIMEFORMAT="%m/%d %T "' >> ~/.bashrc
source ~/.bashrc
#设置连接符
cat >> /etc/security/limits.conf << EOF
* soft nproc
* hard nproc
* soft nofile
* hard nofile
EOF
echo "ulimit -SHn 65535" >> /etc/profile
echo "ulimit -SHn 65535" >> /etc/rc.local
ulimit -SHn
sed -i 's/4096/10240/' /etc/security/limits.d/-nproc.conf
modprobe ip_conntrack
modprobe br_netfilter
cat >> /etc/rc.d/rc.local <<EOF
modprobe ip_conntrack
modprobe br_netfilter
EOF
chmod /etc/rc.d/rc.local
# Kernel parameters
cat <<EOF >> /etc/sysctl.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
vm.swappiness = 0
vm.overcommit_memory=1
vm.panic_on_oom=0
kernel.panic=10
kernel.panic_on_oops=1
kernel.pid_max =
vm.max_map_count =
fs.aio-max-nr =
fs.file-max =
EOF
sysctl -p /etc/sysctl.conf
/sbin/sysctl -p

# vim /etc/sysconfig/kubelet
# Cloud hosts usually have no swap partition; disable the swap check
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
# Set the kube-proxy mode to ipvs
KUBE_PROXY_MODE=ipvs
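
Since kube-proxy is expected to run in ipvs mode, the ipvs kernel modules must be loadable on every host. A minimal sketch, assuming the standard ip_vs module set and the nf_conntrack_ipv4 module name used by CentOS 7 kernels:

# Load the kernel modules required for ipvs mode, then verify they are present
for mod in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do
    modprobe $mod
done
lsmod | grep -E 'ip_vs|nf_conntrack_ipv4'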

4. Configure yum repositories on the master

cd /etc/yum.repos.d/
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Create kubernetes.repo (vim kubernetes.repo) with the following content:

[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
enabled=1

Download and import the GPG keys:

wget https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
wget https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
rpm --import yum-key.gpg
rpm --import rpm-package-key.gpg
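
As a sanity check (a small sketch using only the repo definitions above), confirm the repositories are enabled and that the 1.11.2 packages are visible:

# Confirm the repositories are enabled and the target versions are available
yum repolist enabled | grep -Ei 'kubernetes|docker-ce'
yum list kubelet kubeadm kubectl --showduplicates | grep 1.11.2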

4.1 Copy to the nodes

This is a lab environment; in production, if distribution tools such as SaltStack or Ansible are available, use them to push these files instead.

1. Generate a key pair on the master and distribute the public key to the nodes:

node01:
mkdir /root/.ssh -p
node02:
mkdir /root/.ssh -p
master:
ssh-keygen -t rsa
scp .ssh/id_rsa.pub node01:/root/.ssh/authorized_keys
scp .ssh/id_rsa.pub node02:/root/.ssh/authorized_keys
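
A quick way to confirm passwordless login works before distributing files (the hostnames node01/node02 are assumed to resolve, e.g. via /etc/hosts):

# Should print each node's hostname without prompting for a password
ssh node01 hostname
ssh node02 hostname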

2. Distribute the repo files and GPG keys

scp /etc/yum.repos.d/kubernetes.repo /etc/yum.repos.d/docker-ce.repo node01:/etc/yum.repos.d/
scp /etc/yum.repos.d/kubernetes.repo /etc/yum.repos.d/docker-ce.repo node02:/etc/yum.repos.d/
scp /etc/yum.repos.d/yum-key.gpg /etc/yum.repos.d/rpm-package-key.gpg node01:/root
scp /etc/yum.repos.d/yum-key.gpg /etc/yum.repos.d/rpm-package-key.gpg node02:/root

4.2 Run on the nodes

rpm --import yum-key.gpg

rpm --import rpm-package-key.gpg

5. Install Kubernetes on the master

yum install docker-ce kubelet-1.11.2 kubeadm-1.11.2 kubectl-1.11.2 -y

5.1 Configure image sources

After installation the cluster needs to be initialized, which requires pulling the component images.

Since Google's image registry is not reachable from inside China, there are two workarounds:

1. Pull the images from another registry and re-tag them.

A. Run the following shell script.

#!/bin/bash
images=(kube-proxy-amd64:v1.11.2 kube-scheduler-amd64:v1.11.2 kube-controller-manager-amd64:v1.11.2 kube-apiserver-amd64:v1.11.2
etcd-amd64:3.2.18 coredns:1.1.3 pause-amd64:3.1 kubernetes-dashboard-amd64:v1.8.3 k8s-dns-sidecar-amd64:1.14. k8s-dns-kube-dns-amd64:1.14.
k8s-dns-dnsmasq-nanny-amd64:1.14. )
for imageName in ${images[@]} ; do
# pull from the mirror registry, re-tag to the name kubeadm expects, then drop the mirror tag
docker pull registry.cn-hangzhou.aliyuncs.com/k8sth/$imageName
docker tag registry.cn-hangzhou.aliyuncs.com/k8sth/$imageName k8s.gcr.io/$imageName
docker rmi registry.cn-hangzhou.aliyuncs.com/k8sth/$imageName
done
docker tag da86e6ba6ca1 k8s.gcr.io/pause:3.1
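
After the script finishes, it is worth confirming that the images now carry the k8s.gcr.io names kubeadm expects; on kubeadm 1.11 the expected list can also be printed (the init output later refers to the same "kubeadm config images" subcommand):

# List the re-tagged images and compare with what kubeadm expects
docker images | grep k8s.gcr.io
kubeadm config images list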

B. Pull from Docker Hub mirrors (mirrorgooglecontainers)

docker pull mirrorgooglecontainers/kube-apiserver-amd64:v1.11.2
docker pull mirrorgooglecontainers/kube-controller-manager-amd64:v1.11.2
docker pull mirrorgooglecontainers/kube-scheduler-amd64:v1.11.2
docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.11.2
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd-amd64:3.2.18
docker pull coredns/coredns:1.1.3
docker tag docker.io/mirrorgooglecontainers/kube-proxy-amd64:v1.11.2 k8s.gcr.io/kube-proxy-amd64:v1.11.2
docker tag docker.io/mirrorgooglecontainers/kube-scheduler-amd64:v1.11.2 k8s.gcr.io/kube-scheduler-amd64:v1.11.2
docker tag docker.io/mirrorgooglecontainers/kube-apiserver-amd64:v1.11.2 k8s.gcr.io/kube-apiserver-amd64:v1.11.2
docker tag docker.io/mirrorgooglecontainers/kube-controller-manager-amd64:v1.11.2 k8s.gcr.io/kube-controller-manager-amd64:v1.11.2
docker tag docker.io/mirrorgooglecontainers/etcd-amd64:3.2.18 k8s.gcr.io/etcd-amd64:3.2.18
docker tag docker.io/mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag docker.io/coredns/coredns:1.1.3 k8s.gcr.io/coredns:1.1.3

C. Pull from Alibaba Cloud (note that this example pulls the v1.10.0 images; adjust the tags to match the version being deployed, 1.11.2 here).

docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/kube-apiserver-amd64:v1.10.0
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/kube-scheduler-amd64:v1.10.0
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/kube-controller-manager-amd64:v1.10.0
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/kube-proxy-amd64:v1.10.0
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/k8s-dns-kube-dns-amd64:1.14.
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/k8s-dns-dnsmasq-nanny-amd64:1.14.
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/k8s-dns-sidecar-amd64:1.14.
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/etcd-amd64:3.1.
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/pause-amd64:3.1
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/kube-apiserver-amd64:v1.10.0 k8s.gcr.io/kube-apiserver-amd64:v1.10.0
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/kube-scheduler-amd64:v1.10.0 k8s.gcr.io/kube-scheduler-amd64:v1.10.0
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/kube-controller-manager-amd64:v1.10.0 k8s.gcr.io/kube-controller-manager-amd64:v1.10.0
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/kube-proxy-amd64:v1.10.0 k8s.gcr.io/kube-proxy-amd64:v1.10.0
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/k8s-dns-kube-dns-amd64:1.14. k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/k8s-dns-dnsmasq-nanny-amd64:1.14. k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/k8s-dns-sidecar-amd64:1.14. k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/etcd-amd64:3.1. k8s.gcr.io/etcd-amd64:3.1.
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1

2. Use a proxy server.

vim /usr/lib/systemd/system/docker.service
Add the following under the [Service] section:
Environment="HTTPS_PROXY=$PROXY_SERVER_IP:PORT"
Environment="NO_PROXY=127.0.0.0/8,192.168.0.0/16"

5.2 Configure a registry mirror

Docker and many Chinese cloud providers offer registry mirror (accelerator) services inside China, for example:

If images cannot be pulled after configuring one mirror address, switch to another one.

Every major Chinese cloud provider offers a Docker registry mirror service; pick the one that matches the cloud platform your Docker hosts run on.

Use the Alibaba Cloud mirror or another mirror: https://yeasy.gitbooks.io/docker_practice/content/install/mirror.html

To configure a registry mirror, edit the daemon configuration file /etc/docker/daemon.json:

sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://registry-mirror.qiniu.com","https://zsmigm0p.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
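
After the restart, the mirror configuration can be confirmed from the daemon info (the configured mirrors appear under "Registry Mirrors" in the docker info output):

# The configured mirror URLs should be listed under "Registry Mirrors"
docker info | grep -A 2 'Registry Mirrors'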

5.3 Restart services

systemctl daemon-reload
systemctl restart docker.service

Enable the Kubernetes services at boot

systemctl enable kubelet

systemctl enable docker

5.4 Initialize the master node

kubeadm init --kubernetes-version=v1.11.2 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=all

Summary of the process

[init] using Kubernetes version: v1.11.2
[preflight] running pre-flight checks
I0829 ::00.722283 kernel_validator.go:] Validating kernel version
I0829 ::00.722398 kernel_validator.go:] Validating kernel config
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.06.1-ce. Max validated version: 17.03
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
############ certificates ##############
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.2.19]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [master localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [192.168.2.19 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
############ configuration files ##############
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 43.503085 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node master as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node master as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "master" as an annotation
[bootstraptoken] using token: 0tjvur.56emvwc4k4ghwenz
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
# CoreDNS became the officially adopted default in v1.11
# earlier evolution: SkyDNS ----> kube-dns ----> CoreDNS
[addons] Applied essential addon: kube-proxy
# Runs as an add-on, self-hosted on Kubernetes; it dynamically generates ipvs (or iptables) rules for Service resources. Starting with 1.11, ipvs is used by default, falling back to iptables automatically when ipvs is not supported.
Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube                                        # create the .kube directory in the home directory
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config    # admin.conf contains the credentials kubectl uses to connect to the cluster
sudo chown $(id -u):$(id -g) $HOME/.kube/config             # fix owner and group

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

# run the command below as root on each node to join it to the cluster
kubeadm join 192.168.2.19:6443 --token 0tjvur.56emvwc4k4ghwenz --discovery-token-ca-cert-hash sha256:f768ad7522e7bb5ccf9d0e0590c478b2a7161912949d1be3d5cd3f46ace7f4bf

Notes:
# --discovery-token-ca-cert-hash   # hash used to verify the master's CA during discovery

Follow the prompts and run:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
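
At this point kubectl should be able to reach the API server; the master will report NotReady until a pod network is deployed. A quick check using standard kubectl subcommands:

# Basic cluster health checks
kubectl cluster-info
kubectl get componentstatuses
kubectl get nodes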

Deploy the network add-on on the master

  CNI (Container Network Interface) plugin options:
    flannel        # does not support network policies
    calico
    canal
    kube-router

Note: with a proxy server the add-on can be deployed directly; without one, the nodes must pull the required images first.

nodes:

docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.11.2
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
docker tag docker.io/mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag docker.io/mirrorgooglecontainers/kube-proxy-amd64:v1.11.2 k8s.gcr.io/kube-proxy-amd64:v1.11.2

master:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
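
Once the flannel manifest is applied, the DaemonSet pods should start on each node and the nodes should flip to Ready; a quick way to watch this with standard kubectl queries:

# Watch the flannel pods come up, then check node status
kubectl get pods -n kube-system -o wide | grep flannel
kubectl get nodes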

6. Install Kubernetes on the nodes

Run the following command on every node.

yum install docker-ce kubelet-1.11.2 kubeadm-1.11.2 kubectl-1.11.2 -y

6.1 Start docker and kubelet on the nodes

On the master:

scp /usr/lib/systemd/system/docker.service node01:/usr/lib/systemd/system/docker.service
scp /usr/lib/systemd/system/docker.service node02:/usr/lib/systemd/system/docker.service
scp /etc/sysconfig/kubelet node01:/etc/sysconfig/
scp /etc/sysconfig/kubelet node02:/etc/sysconfig/

On the nodes:

systemctl start docker
systemctl enable docker kubelet

6.2 Join the master

The token below is the one printed at the end of kubeadm init on the master:
kubeadm join 192.168.2.19:6443 --token 0tjvur.56emvwc4k4ghwenz --discovery-token-ca-cert-hash sha256:f768ad7522e7bb5ccf9d0e0590c478b2a7161912949d1be3d5cd3f46ace7f4bf

########### If the token has expired ##############

[root@master ~]# kubeadm token create

0wnvl7.kqxkle57l4adfr53

[root@node02 yum.repos.d]# kubeadm join 192.168.2.19:6443 --token 0wnvl7.kqxkle57l4adfr53 --discovery-token-unsafe-skip-ca-verification
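
Instead of skipping CA verification, the discovery hash can be recomputed on the master from the cluster CA certificate; this openssl pipeline (paths are the kubeadm defaults) produces the value for --discovery-token-ca-cert-hash:

# Compute the sha256 hash of the cluster CA public key on the master
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
  openssl rsa -pubin -outform der 2>/dev/null | \
  openssl dgst -sha256 -hex | sed 's/^.* //'

# Then join with full verification, substituting the new token and the hash printed above
kubeadm join 192.168.2.19:6443 --token <new-token> --discovery-token-ca-cert-hash sha256:<hash>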

7. Verification

On the master, the following command should now show both the master and the nodes in the Ready state.

# kubectl get nodes
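
Beyond node status, it is worth confirming that all control-plane and add-on pods are running (a standard check, not part of the original steps):

# apiserver, controller-manager, scheduler, etcd, coredns, kube-proxy and flannel pods should all be Running
kubectl get pods -n kube-system -o wide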
