Preface

Kubernetes has become the de facto standard for container orchestration. Its body of knowledge is not as large and complex as OpenStack's, but there is still plenty that has to be learned properly, and K8s changes quite a bit every year. For now this article shares some links I have found useful, to help you study and think about the subject systematically.

https://www.youtube.com/watch?v=TlHvYWVUZyc

Update history

2020-07-03 - Added K8s open-source tools and reference articles
2020-05-30 - Updated the hands-on walkthrough of building K8s from scratch with the official docs
2020-04-16 - First draft

Read the original - https://wsgzao.github.io/post/kubernetes/


Building K8s from Scratch with the Official Kubernetes Documentation

This article walks through installing and deploying Kubernetes by following the official documentation. Kubernetes iterates quickly, so books and online tutorials may no longer apply to new versions, but the official docs always do.

What you will get from this article:

  • How to read the official Kubernetes installation guide and set up a Kubernetes environment
  • Things to watch out for during the Kubernetes installation
  • How to avoid the common pitfalls

What you need before reading:

  • Familiarity with Linux commands
  • An idea of what Kubernetes is for (otherwise, why install it? (ಥ_ಥ))
  • Basic knowledge of Docker

Before installing kubeadm

The official K8s docs are very thorough; the concrete steps I actually ran are listed below for reference.

This page shows how to install the kubeadm toolbox. For information how to create a cluster with kubeadm once you have performed this installation process, see the Using kubeadm to Create a Cluster page.

Number of servers or VMs: 2
Machine specs: CPU >= 2, RAM >= 2 GB
Operating system: CentOS 7 in this article
Note: read and understand the official instructions carefully for your particular OS

  • One or more machines running one of:
    • Ubuntu 16.04+
    • Debian 9+
    • CentOS 7
    • Red Hat Enterprise Linux (RHEL) 7
    • Fedora 25+
    • HypriotOS v1.0.1+
    • Container Linux (tested with 1800.6.0)
  • 2 GB or more of RAM per machine (any less will leave little room for your apps)
  • 2 CPUs or more
  • Full network connectivity between all machines in the cluster (public or private network is fine)
  • Unique hostname, MAC address, and product_uuid for every node. See here for more details.
  • Certain ports are open on your machines. See here for more details.
  • Swap disabled. You MUST disable swap in order for the kubelet to work properly.
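The checklist above can be condensed into a quick preflight script. This is a minimal sketch (the function names and the exact thresholds are my own); run it on each machine before proceeding:

```shell
#!/bin/sh
# Preflight checks mirroring the kubeadm requirements above (illustrative sketch).

check_cpus()   { [ "$1" -ge 2 ]; }      # at least 2 CPUs
check_mem_mb() { [ "$1" -ge 1700 ]; }   # roughly 2 GB of RAM (headroom for rounding)
check_swap()   { [ "$1" -eq 0 ]; }      # SwapTotal must be 0, i.e. swap disabled

main() {
    cpus=$(nproc)
    mem_mb=$(awk '/MemTotal/ {print int($2/1024)}' /proc/meminfo)
    swap_kb=$(awk '/SwapTotal/ {print $2}' /proc/meminfo)

    check_cpus "$cpus"     || echo "WARN: need >= 2 CPUs, found $cpus"
    check_mem_mb "$mem_mb" || echo "WARN: need >= 2 GB RAM, found ${mem_mb} MB"
    check_swap "$swap_kb"  || echo "WARN: swap is enabled; run 'swapoff -a'"
}

main
```

Uniqueness of the MAC address and product_uuid still has to be checked by comparing values between machines, as the commands in the next block do.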

The kubeadm installation guide is linked from the official site and is very detailed. If you read English comfortably, use the English docs directly; the Chinese translation is incomplete and lags behind, which can cause problems during installation.

The following steps apply to both k8s-master and k8s-worker

# Set the hostname (run the matching command on each machine)
hostnamectl set-hostname k8s-master
hostnamectl set-hostname k8s-worker

# Add the hosts entries
vim /etc/hosts
10.71.16.32 k8s-master
10.71.16.33 k8s-worker

# Verify that the MAC address and product_uuid are unique on every node
# Check the MAC address
ifconfig -a
# Check the product_uuid
cat /sys/class/dmi/id/product_uuid

# Check the required network ports and the firewall; configure according to your environment.
# For an internal test environment you can simply disable the firewall:
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux
# Persistently (takes effect after reboot):
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
# Temporarily, for the current session:
setenforce 0
# Check the SELinux status:
sestatus
SELinux status: disabled

# Disable swap
# Comment out the swap line; takes effect after reboot:
vim /etc/fstab
#UUID=30e08a6d-75ba-4750-ab6d-2d11f6137c97 swap swap defaults 0 0
# Disable swap temporarily:
swapoff -a

# Letting iptables see bridged traffic
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system

# Make sure that the br_netfilter module is loaded before this step. This can be done by running
lsmod | grep br_netfilter
# To load it explicitly call
sudo modprobe br_netfilter
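To script the verification of these two bridge settings, a small helper like the following can be used (a sketch; the function names are my own):

```shell
#!/bin/sh
# Verify that a sysctl config file sets the two bridge-netfilter keys to 1 (sketch).

conf_has_key() {
    # conf_has_key FILE KEY : succeeds if FILE contains the line "KEY = 1"
    grep -q "^$2 = 1$" "$1"
}

verify_k8s_sysctl() {
    conf_has_key "$1" net.bridge.bridge-nf-call-iptables &&
    conf_has_key "$1" net.bridge.bridge-nf-call-ip6tables
}

# Usage: verify_k8s_sysctl /etc/sysctl.d/k8s.conf && echo "bridge sysctls configured"
```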


Installing runtime

We choose Docker as the runtime. Pay attention to the versions recommended in the official docs; if your network connection is poor, switch to a mirror inside China.


https://docs.docker.com/engine/install/centos/

https://kubernetes.io/docs/setup/production-environment/container-runtimes/#docker

# (Install Docker CE)
## Set up the repository
### Install required packages
yum install -y yum-utils device-mapper-persistent-data lvm2

## Add the Docker repository
yum-config-manager --add-repo \
https://download.docker.com/linux/centos/docker-ce.repo

# Install Docker CE
yum update -y && yum install -y \
containerd.io-1.2.13 \
docker-ce-19.03.8 \
docker-ce-cli-19.03.8

## Create /etc/docker
mkdir /etc/docker

# Set up the Docker daemon
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

mkdir -p /etc/systemd/system/docker.service.d

# Restart Docker
systemctl daemon-reload
systemctl restart docker

# (Optional) Use the registry mirror provided by docker-cn.
# Note: writing daemon.json again would overwrite the settings above,
# so add the mirror key to the same file instead:
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "registry-mirrors": ["https://registry.docker-cn.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker

Installing kubeadm, kubelet and kubectl


One line in the official repo configuration causes trouble: when deploying K8s for the first time, remove the line exclude=kubelet kubeadm kubectl.

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
# Set SELinux in permissive mode (effectively disabling it)
setenforce 0

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

systemctl enable --now kubelet

Configuring a kubeadm mirror for mainland China

# Debian/Ubuntu
apt-get update && apt-get install -y apt-transport-https
curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -

cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF

apt-get update
apt-get install -y kubelet kubeadm kubectl

# CentOS/RHEL/Fedora
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

setenforce 0
yum install -y kubelet kubeadm kubectl
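Whichever repo you install from, kubelet and kubectl should stay within one minor version of the control plane (the Kubernetes version skew policy). A sketch of such a check (the helper names are my own):

```shell
#!/bin/sh
# Compare the minor versions of two Kubernetes version strings, e.g. "v1.18.3" (sketch).

k8s_minor() {
    echo "$1" | sed 's/^v//' | cut -d. -f2
}

# Succeeds if the two versions are within one minor release of each other.
skew_ok() {
    a=$(k8s_minor "$1")
    b=$(k8s_minor "$2")
    d=$((a - b))
    [ "$d" -ge -1 ] && [ "$d" -le 1 ]
}

# Usage on a live node:
#   skew_ok "$(kubeadm version -o short)" "$(kubelet --version | awk '{print $2}')"
```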

Initializing k8s-master

The following steps apply to k8s-master

# Generate the default init configuration
kubeadm config print init-defaults > kubeadm-init.yaml

# Two things need to be changed in this file:
## Change advertiseAddress: 1.2.3.4 to this machine's address
## (On a mainland-China network) change imageRepository: k8s.gcr.io to imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers

# Edit kubeadm-init.yaml
vim kubeadm-init.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.71.16.32
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.18.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}

# Pull the images
kubeadm config images pull --config kubeadm-init.yaml

# Run the initialization
kubeadm init --config kubeadm-init.yaml

Your Kubernetes control-plane has initialized successfully!

# Next, set up the environment so the current user can run kubectl:
To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

# Save the last two lines; kubeadm join ... is the command each worker node runs to join the cluster
Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.71.16.32:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:67e23b74df39cfca4dd0ba3d747139cb0dd4ea5c546a12c60e84b3c9b057fc6e

# Quick test; NotReady here is because the network add-on has not been configured yet
kubectl get node

NAME STATUS ROLES AGE VERSION
k8s-master NotReady master 12m v1.18.3

Configure the network add-on; watch for differences in the latest version of the docs

create-cluster-kubeadm

# Download the manifest
# https://docs.projectcalico.org/releases
# https://docs.projectcalico.org/v3.14/manifests/calico.yaml
wget https://docs.projectcalico.org/manifests/calico.yaml

# Open calico.yaml and change the default 192.168.0.0/16.
# Note: CALICO_IPV4POOL_CIDR is the pod network range; strictly speaking it should match
# the cluster's pod CIDR (podSubnet) rather than the serviceSubnet. This walkthrough
# follows the original setup and reuses 10.96.0.0/12 from kubeadm-init.yaml. Whatever
# value you pick, calico.yaml and kubeadm-init.yaml must stay consistent: either change
# kubeadm-init.yaml before initializing, or change calico.yaml afterwards.
cat kubeadm-init.yaml | grep serviceSubnet:
serviceSubnet: 10.96.0.0/12
vim calico.yaml
# The default IPv4 pool to create on startup if none exists. Pod IPs will be
# chosen from this range. Changing this value after installation will have
# no effect. This should fall within `--cluster-cidr`.
- name: CALICO_IPV4POOL_CIDR
  value: "10.96.0.0/12"

# Apply the network add-on
kubectl apply -f calico.yaml

# Checking the node again, the master's status is now Ready
kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 15m v1.18.3
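When automating, the Ready check can be scripted by parsing the kubectl get node output instead of reading it by eye. A minimal sketch (the helper name is my own):

```shell
#!/bin/sh
# Succeed if the named node reports STATUS "Ready" on stdin (`kubectl get node` format).

node_ready() {
    awk -v node="$1" '$1 == node && $2 == "Ready" { found = 1 } END { exit !found }'
}

# Usage on a live cluster:
#   kubectl get node | node_ready k8s-master && echo "master is Ready"
```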

Install the Dashboard; note that the official docs get updated

Web UI (Dashboard)

# Deploying the Dashboard UI
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml

# After deploying, check the pod status with kubectl get pods --all-namespaces
kubectl get pods --all-namespaces | grep dashboard
kubernetes-dashboard dashboard-metrics-scraper-6b4884c9d5-79rvt 1/1 Running 0 4h5m
kubernetes-dashboard kubernetes-dashboard-7b544877d5-zb4bl 1/1 Running 0 4h5m

Create a user; for convenient login testing you can also create an anonymous account

Creating sample user

Create a user for logging in to the Dashboard. Create a file dashboard-adminuser.yaml with the following content:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

Apply it with kubectl apply -f dashboard-adminuser.yaml

To create an anonymous user for logging in to the Dashboard, create a file dashboard-anonymous.yaml with the following content:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-anonymous
rules:
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["https:kubernetes-dashboard:"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- nonResourceURLs: ["/ui", "/ui/*", "/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/*"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-anonymous
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard-anonymous
subjects:
- kind: User
  name: system:anonymous

Apply it with kubectl apply -f dashboard-anonymous.yaml

Generate a client certificate. This step is optional; generate the certificate and import it on the client machine as needed.

Accessing Dashboard

grep 'client-certificate-data' ~/.kube/config | head -n 1 | awk '{print $2}' | base64 -d >> kubecfg.crt
grep 'client-key-data' ~/.kube/config | head -n 1 | awk '{print $2}' | base64 -d >> kubecfg.key
openssl pkcs12 -export -clcerts -inkey kubecfg.key -in kubecfg.crt -out kubecfg.p12 -name "kubernetes-client"

The third command prompts for an export password when generating the certificate; you can skip it by pressing Enter twice.

kubecfg.p12 is the certificate to import on the client machine. Copy it to the client and import it.

Note: if you skipped the password when generating the certificate, just press Enter when the import prompts for one; don't puzzle over where the password came from.

Now we can log in to the dashboard at:
https://{k8s-master-ip}:6443/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login
The browser will prompt you to select the certificate, and after confirming, ask for a username and password (note: your local machine's credentials).

Logging in to the Dashboard

Run kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') to obtain the token.

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

Name: admin-user-token-wdj28
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: admin-user
kubernetes.io/service-account.uid: f97e69ff-c4aa-45d4-bbb2-5221f3fb43cc

Type: kubernetes.io/service-account-token

Data
====
ca.crt: 1025 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IlMxUGJ5TTR0cUQzd3JSODZfdWNScU14YXdHdmpLakdwSldrdTdhT1UyV2cifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXdkajI4Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJmOTdlNjlmZi1jNGFhLTQ1ZDQtYmJiMi01MjIxZjNmYjQzY2MiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.wTdqAJfK0Z7C43SU0mAwoQw2dCdBcUD6A3uVo03OkW-r8Y630NP6MQqwmpi4IERohYjY6528oW0Ucj_2ukWu8z5eUAabNgL-_BQGXe2Zg1oGRK_LY90JN_L6f8mpYrPDFLfjSnUAdgb3zzCzhzIa2RihiYZW-mGm_ucfK5xt3dpFbDeIeEgePyFjUiX5ZdoMJEuerd6zgee1yeXEctQD4TRRxxtFebcLRgFDWVfOz0xWRNN1qSOB5v1ChkaQ5a6YxvGwjcrwrQaVN8bp73Zueu7FwmbkObT_EpWy0aZ7csTcSuNZ2K8QTpXr6NtN5xIcTpyMmIHmc9qCaskr5uM3qA

Copy the token into the login page and click Sign in.
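Instead of copying the token out of the describe output by hand, it can be extracted directly. A sketch (the helper name is my own; on a live cluster you would pipe the describe command into it):

```shell
#!/bin/sh
# Print only the token value from `kubectl describe secret` output read on stdin (sketch).

extract_token() {
    awk '$1 == "token:" { print $2 }'
}

# Usage on a live cluster:
#   kubectl -n kube-system describe secret \
#     $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') | extract_token
```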

Adding k8s-worker

Add the worker node; the following steps apply to k8s-worker

Repeat every step from "set the hostname" in the preparation section through the network configuration in the Kubernetes installation to initialize a worker machine

# Run the following to join the worker to the cluster; the token and hash come from the master's init output
kubeadm join 10.71.16.32:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:67e23b74df39cfca4dd0ba3d747139cb0dd4ea5c546a12c60e84b3c9b057fc6e
# After joining, check the node status on the master
kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 10h v1.18.3
k8s-worker Ready <none> 96s v1.18.3

https://github.com/kubernetes/dashboard

Kubernetes (1): Building K8s from Scratch with the Official Docs

Deploying Applications on Kubernetes

This article covers how to deploy applications on Kubernetes: deploying an app, understanding the manifest file, and packaging and publishing a local project.

What you will get from this article:

  • How to deploy an application on K8s
  • How to build a Docker image and push it to a private registry

What you need before reading:

  • Familiarity with Linux commands
  • A working Kubernetes environment

Understanding manifest files

First, let's understand manifest files by deploying Nginx on Kubernetes.

Kubernetes typically uses YAML (or JSON) to describe a deployment. Below is a simple manifest, nginx-pod.yaml:

apiVersion: v1        # Kubernetes API version the manifest follows
kind: Pod             # the resource type is a Pod
metadata:
  name: nginx-pod     # name of the pod
  labels:             # labels
    app: nginx-pod
    env: test
spec:
  containers:
  - name: nginx-pod   # container name
    image: nginx:1.18 # image name and tag
    imagePullPolicy: IfNotPresent # pull from the registry only if the image is absent locally
    ports:
    - containerPort: 80 # port the pod exposes
  restartPolicy: Always

Log in to the master node and deploy the application described by this file with kubectl. (Deploying from the dashboard works too.)

kubectl apply -f nginx-pod.yaml
pod/nginx-pod created

kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-pod 1/1 Running 0 39s

kubectl get pods lists the pods (add grep to filter for nginx if the list is long). Nginx has now been deployed, and you can also view the application's state directly in the dashboard.

Note: delete the pod with kubectl delete -f nginx-pod.yaml, or directly from the dashboard.

How to access the service

In the previous section we deployed an Nginx pod, but we cannot reach that Nginx yet.

The simplest way to reach a service inside a pod is port forwarding. Run the following command to bind port 9999 on the host to port 80 of nginx-pod:

kubectl port-forward --address 0.0.0.0 nginx-pod 9999:80
Forwarding from 0.0.0.0:9999 -> 80
Handling connection for 9999

Now we can reach Nginx via port 9999 on the host.

Deploying a local project

To publish a locally developed project to Kubernetes, you need to package it as a Docker image and push that image to a registry (public or private).

First we need a runnable local project; I built a simple web project with spring-boot:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping(value = "/k8s-test")
@SpringBootApplication
public class K8sTestApplication {

    @GetMapping(value = "/timestamp")
    public ResponseEntity<?> getTimestamp() {
        return ResponseEntity.ok(System.currentTimeMillis() + "\n");
    }

    public static void main(String[] args) {
        SpringApplication.run(K8sTestApplication.class, args);
    }
}

Building the Docker image

Dockerfile reference

With the project in hand, we need to package it into a Docker image. The Dockerfile is as follows:

FROM java:8-alpine
COPY ./k8s-test-0.0.1-SNAPSHOT.jar /usr/app/
WORKDIR /usr/app
ENTRYPOINT ["java", "-jar", "k8s-test-0.0.1-SNAPSHOT.jar"]

  • FROM java:8-alpine: base the image on java:8-alpine.
  • COPY ./k8s-test-0.0.1-SNAPSHOT.jar /usr/app/: copy the built jar into the image's /usr/app directory.
  • WORKDIR /usr/app: set the working directory to /usr/app.
  • ENTRYPOINT ["java", "-jar", "k8s-test-0.0.1-SNAPSHOT.jar"]: run java -jar k8s-test-0.0.1-SNAPSHOT.jar when the container starts

From the directory containing the Dockerfile, run docker build -t piaoruiqing/k8s-test . to build the image. Don't omit the trailing . at the end of the command; it denotes the current directory (the build context).

# Build the image
docker build -t piaoruiqing/k8s-test .
# List local images with docker images:
docker images | grep k8s
piaoruiqing/k8s-test latest efe9e9625376 4 minutes ago 174MB

Pushing to a remote registry

(Left blank, to be filled in with Harbor.)

Kubernetes (2): Deploying Applications

Accessing services from outside the cluster

With the previous articles, "Building K8s from Scratch with the Official Docs" and "Deploying Applications", you should now have a reasonable grasp of installing Kubernetes and deploying applications. This article covers how to expose services to the outside world.

What you will get from this article:

  • An understanding of the options for exposing services in Kubernetes, and their trade-offs.

What you need before reading:

  • Familiarity with basic Kubernetes commands
  • A working Kubernetes environment

Ways to expose services to external clients

  • Via port-forward, mentioned in an earlier article: convenient and good for debugging, but not suitable for production.
  • Via NodePort: every node in the cluster listens on the assigned port, so the service can be reached through that port on any node. With many services, however, the large number of open ports becomes hard to maintain.
  • Via a LoadBalancer. Load balancers (LB) are usually provided by the cloud vendor; if the environment offers no LB service, we typically use Ingress directly, or set up our own LB with MetalLB.
  • Via Ingress, which exposes HTTP and HTTPS routes from outside the cluster to services inside it, with traffic routing controlled by rules defined on the Ingress resource. When the cloud vendor offers no LB service, Ingress can expose services directly. (Also, an LB + Ingress setup avoids the cost of running many LBs.)
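As a concrete illustration of the NodePort option, a Service for the nginx-pod deployed earlier might look like the following. This is a sketch: the Service name and the nodePort value (any port in the default 30000-32767 range) are my own choices:

```shell
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport      # illustrative name
spec:
  type: NodePort
  selector:
    app: nginx-pod          # matches the label on nginx-pod
  ports:
  - port: 80                # service port inside the cluster
    targetPort: 80          # container port
    nodePort: 30080         # port opened on every node
EOF

# Afterwards the service is reachable at http://<any-node-ip>:30080
```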

Kubernetes (3): Accessing Services from Outside the Cluster

Recommended open-source K8s management tools

kubeasz

kuboard

KubeSphere

cert-manager

References

Kubernetes Documentation

This tutorial provides a walkthrough of the basics of the Kubernetes cluster orchestration system.
Learn Kubernetes Basics

Kubernetes Explained in 6 Minutes | k8s Architecture

Google Kubernetes Engine 文档

Kubernetes Tutorial for Beginners – Basic Concepts & Examples

云原生资料库

Kubernetes指南

才云开源内部 Kubernetes 学习路径

阿里云和CNCF联合开发 云原生技术公开课

云原生实战 KubeSphere x 尚硅谷

深入剖析Kubernetes

Kuboard for K8S

PHIPPY AND FRIENDS

Table of Contents
  1. Preface
  2. Update history
  3. Building K8s from Scratch with the Official Kubernetes Documentation
    3.1. Before installing kubeadm
    3.2. Installing runtime
    3.3. Installing kubeadm, kubelet and kubectl
    3.4. Initializing k8s-master
    3.5. Adding k8s-worker
  4. Deploying Applications on Kubernetes
    4.1. Understanding manifest files
    4.2. How to access the service
    4.3. Deploying a local project
    4.4. Building the Docker image
    4.5. Pushing to a remote registry
  5. Accessing services from outside the cluster
    5.1. Ways to expose services to external clients
  6. Recommended open-source K8s management tools
  7. References