208 posts found in the category 默认分类.
2021-12-30
Installing Ingress v1.1.0 on k8s (Kubernetes)
An Ingress exposes HTTP and HTTPS routes from outside the cluster to Services inside the cluster; traffic routing is controlled by rules defined on the Ingress resource. After creating a simple Ingress that routes traffic to backend Services, I found that no default HTTP backend was present:

```
[root@hello ~/yaml/nginx]# kubectl describe ingress
Name:             ingress-host-bar
Namespace:        default
Address:
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host             Path    Backends
  ----             ----    --------
  hello.chenby.cn  /       hello-server:8000 (172.20.1.13:9000,172.20.1.14:9000)
  demo.chenby.cn   /nginx  nginx-demo:8000 (172.20.2.14:80,172.20.2.15:80)
Annotations:       <none>
Events:
  Type    Reason  Age   From                      Message
  ----    ------  ----  ----                      -------
  Normal  Sync    43m   nginx-ingress-controller  Scheduled for sync
[root@hello ~/yaml/nginx]#
```

This happens because no default backend was created. Uninstall the previous installation first (using whichever method you used to install it), then write the following manifest and apply it:

```
[root@hello ~/yaml]# vim deploy.yaml
[root@hello ~/yaml]# cat deploy.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
---
# Source: ingress-nginx/templates/controller-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
automountServiceAccountToken: true
---
# Source: ingress-nginx/templates/controller-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  allow-snippet-annotations: 'true'
---
# Source: ingress-nginx/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
      - namespaces
    verbs:
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
---
# Source: ingress-nginx/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: controller name: ingress-nginx namespace: ingress-nginx rules: - apiGroups: - '' resources: - namespaces verbs: - get - apiGroups: - '' resources: - configmaps - pods - secrets - endpoints verbs: - get - list - watch - apiGroups: - '' resources: - services verbs: - get - list - watch - apiGroups: - networking.k8s.io resources: - ingresses verbs: - get - list - watch - apiGroups: - networking.k8s.io resources: - ingresses/status verbs: - update - apiGroups: - networking.k8s.io resources: - ingressclasses verbs: - get - list - watch - apiGroups: - '' resources: - configmaps resourceNames: - ingress-controller-leader verbs: - get - update - apiGroups: - '' resources: - configmaps verbs: - create - apiGroups: - '' resources: - events verbs: - create - patch --- # Source: ingress-nginx/templates/controller-rolebinding.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: labels: helm.sh/chart: ingress-nginx-4.0.10 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 1.1.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: controller name: ingress-nginx namespace: ingress-nginx roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: ingress-nginx subjects: - kind: ServiceAccount name: ingress-nginx namespace: ingress-nginx --- # Source: ingress-nginx/templates/controller-service-webhook.yaml apiVersion: v1 kind: Service metadata: labels: helm.sh/chart: ingress-nginx-4.0.10 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 1.1.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: controller name: ingress-nginx-controller-admission namespace: ingress-nginx spec: type: ClusterIP ports: - name: https-webhook port: 443 targetPort: webhook appProtocol: https selector: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/component: controller --- # Source: ingress-nginx/templates/controller-service.yaml apiVersion: v1 kind: Service metadata: annotations: labels: helm.sh/chart: ingress-nginx-4.0.10 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 1.1.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: controller name: ingress-nginx-controller namespace: ingress-nginx spec: type: NodePort externalTrafficPolicy: Local ipFamilyPolicy: SingleStack ipFamilies: - IPv4 ports: - name: http port: 80 protocol: TCP targetPort: http appProtocol: http - name: https port: 443 protocol: TCP targetPort: https appProtocol: https selector: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/component: controller --- # Source: ingress-nginx/templates/controller-deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: labels: helm.sh/chart: ingress-nginx-4.0.10 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 1.1.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: controller name: ingress-nginx-controller namespace: ingress-nginx spec: selector: matchLabels: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/component: controller revisionHistoryLimit: 10 minReadySeconds: 0 template: metadata: labels: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/component: controller 
spec: dnsPolicy: ClusterFirst containers: - name: controller image: k8s.gcr.io/ingress-nginx/controller:v1.1.0@sha256:f766669fdcf3dc26347ed273a55e754b427eb4411ee075a53f30718b4499076a imagePullPolicy: IfNotPresent lifecycle: preStop: exec: command: - /wait-shutdown args: - /nginx-ingress-controller - --election-id=ingress-controller-leader - --controller-class=k8s.io/ingress-nginx - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller - --validating-webhook=:8443 - --validating-webhook-certificate=/usr/local/certificates/cert - --validating-webhook-key=/usr/local/certificates/key securityContext: capabilities: drop: - ALL add: - NET_BIND_SERVICE runAsUser: 101 allowPrivilegeEscalation: true env: - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - name: LD_PRELOAD value: /usr/local/lib/libmimalloc.so livenessProbe: failureThreshold: 5 httpGet: path: /healthz port: 10254 scheme: HTTP initialDelaySeconds: 10 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 readinessProbe: failureThreshold: 3 httpGet: path: /healthz port: 10254 scheme: HTTP initialDelaySeconds: 10 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 ports: - name: http containerPort: 80 protocol: TCP - name: https containerPort: 443 protocol: TCP - name: webhook containerPort: 8443 protocol: TCP volumeMounts: - name: webhook-cert mountPath: /usr/local/certificates/ readOnly: true resources: requests: cpu: 100m memory: 90Mi nodeSelector: kubernetes.io/os: linux serviceAccountName: ingress-nginx terminationGracePeriodSeconds: 300 volumes: - name: webhook-cert secret: secretName: ingress-nginx-admission --- # Source: ingress-nginx/templates/controller-ingressclass.yaml # We don't support namespaced ingressClass yet # So a ClusterRole and a ClusterRoleBinding is required apiVersion: networking.k8s.io/v1 kind: IngressClass metadata: labels: helm.sh/chart: ingress-nginx-4.0.10 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 1.1.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: controller name: nginx namespace: ingress-nginx spec: controller: k8s.io/ingress-nginx --- # Source: ingress-nginx/templates/admission-webhooks/validating-webhook.yaml # before changing this value, check the required kubernetes version # https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#prerequisites apiVersion: admissionregistration.k8s.io/v1 kind: ValidatingWebhookConfiguration metadata: labels: helm.sh/chart: ingress-nginx-4.0.10 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 1.1.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: admission-webhook name: ingress-nginx-admission webhooks: - name: validate.nginx.ingress.kubernetes.io matchPolicy: Equivalent rules: - apiGroups: - networking.k8s.io apiVersions: - v1 operations: - CREATE - UPDATE resources: - ingresses failurePolicy: Fail sideEffects: None admissionReviewVersions: - v1 clientConfig: service: namespace: ingress-nginx name: ingress-nginx-controller-admission path: /networking/v1/ingresses --- # Source: ingress-nginx/templates/admission-webhooks/job-patch/serviceaccount.yaml apiVersion: v1 kind: ServiceAccount metadata: name: ingress-nginx-admission namespace: ingress-nginx annotations: helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded 
labels: helm.sh/chart: ingress-nginx-4.0.10 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 1.1.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: admission-webhook --- # Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrole.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: ingress-nginx-admission annotations: helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded labels: helm.sh/chart: ingress-nginx-4.0.10 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 1.1.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: admission-webhook rules: - apiGroups: - admissionregistration.k8s.io resources: - validatingwebhookconfigurations verbs: - get - update --- # Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrolebinding.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: ingress-nginx-admission annotations: helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded labels: helm.sh/chart: ingress-nginx-4.0.10 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 1.1.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: admission-webhook roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: ingress-nginx-admission subjects: - kind: ServiceAccount name: ingress-nginx-admission namespace: ingress-nginx --- # Source: ingress-nginx/templates/admission-webhooks/job-patch/role.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: ingress-nginx-admission namespace: ingress-nginx annotations: helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded labels: helm.sh/chart: ingress-nginx-4.0.10 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 1.1.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: admission-webhook rules: - apiGroups: - '' resources: - secrets verbs: - get - create --- # Source: ingress-nginx/templates/admission-webhooks/job-patch/rolebinding.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: ingress-nginx-admission namespace: ingress-nginx annotations: helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded labels: helm.sh/chart: ingress-nginx-4.0.10 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 1.1.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: admission-webhook roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: ingress-nginx-admission subjects: - kind: ServiceAccount name: ingress-nginx-admission namespace: ingress-nginx --- # Source: ingress-nginx/templates/admission-webhooks/job-patch/job-createSecret.yaml apiVersion: batch/v1 kind: Job metadata: name: ingress-nginx-admission-create namespace: ingress-nginx annotations: helm.sh/hook: pre-install,pre-upgrade helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded labels: helm.sh/chart: ingress-nginx-4.0.10 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 1.1.0 
app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: admission-webhook spec: template: metadata: name: ingress-nginx-admission-create labels: helm.sh/chart: ingress-nginx-4.0.10 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 1.1.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: admission-webhook spec: containers: - name: create image: k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1@sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660 imagePullPolicy: IfNotPresent args: - create - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc - --namespace=$(POD_NAMESPACE) - --secret-name=ingress-nginx-admission env: - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace securityContext: allowPrivilegeEscalation: false restartPolicy: OnFailure serviceAccountName: ingress-nginx-admission nodeSelector: kubernetes.io/os: linux securityContext: runAsNonRoot: true runAsUser: 2000 --- # Source: ingress-nginx/templates/admission-webhooks/job-patch/job-patchWebhook.yaml apiVersion: batch/v1 kind: Job metadata: name: ingress-nginx-admission-patch namespace: ingress-nginx annotations: helm.sh/hook: post-install,post-upgrade helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded labels: helm.sh/chart: ingress-nginx-4.0.10 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 1.1.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: admission-webhook spec: template: metadata: name: ingress-nginx-admission-patch labels: helm.sh/chart: ingress-nginx-4.0.10 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 1.1.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: admission-webhook spec: containers: - name: patch image: k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1@sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660 imagePullPolicy: IfNotPresent args: - patch - --webhook-name=ingress-nginx-admission - --namespace=$(POD_NAMESPACE) - --patch-mutating=false - --secret-name=ingress-nginx-admission - --patch-failure-policy=Fail env: - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace securityContext: allowPrivilegeEscalation: false restartPolicy: OnFailure serviceAccountName: ingress-nginx-admission nodeSelector: kubernetes.io/os: linux securityContext: runAsNonRoot: true runAsUser: 2000 [root@hello ~/yaml]#启用后端,写入配置文件执行[root@hello ~/yaml]# vim backend.yaml [root@hello ~/yaml]# cat backend.yaml apiVersion: apps/v1 kind: Deployment metadata: name: default-http-backend labels: app.kubernetes.io/name: default-http-backend namespace: kube-system spec: replicas: 1 selector: matchLabels: app.kubernetes.io/name: default-http-backend template: metadata: labels: app.kubernetes.io/name: default-http-backend spec: terminationGracePeriodSeconds: 60 containers: - name: default-http-backend image: k8s.gcr.io/defaultbackend-amd64:1.5 livenessProbe: httpGet: path: /healthz port: 8080 scheme: HTTP initialDelaySeconds: 30 timeoutSeconds: 5 ports: - containerPort: 8080 resources: limits: cpu: 10m memory: 20Mi requests: cpu: 10m memory: 20Mi --- apiVersion: v1 kind: Service metadata: name: default-http-backend namespace: kube-system labels: app.kubernetes.io/name: default-http-backend spec: ports: - port: 80 targetPort: 8080 selector: app.kubernetes.io/name: default-http-backend 
```

Deploy a test application:

```
[root@hello ~/yaml]# vim ingress-demo-app.yaml
[root@hello ~/yaml]# cat ingress-demo-app.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-server
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-server
  template:
    metadata:
      labels:
        app: hello-server
    spec:
      containers:
      - name: hello-server
        image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/hello-server
        ports:
        - containerPort: 9000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-demo
  name: nginx-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - image: nginx
        name: nginx
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-demo
  name: nginx-demo
spec:
  selector:
    app: nginx-demo
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: hello-server
  name: hello-server
spec:
  selector:
    app: hello-server
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 9000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-host-bar
spec:
  ingressClassName: nginx
  rules:
  - host: "hello.chenby.cn"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: hello-server
            port:
              number: 8000
  - host: "demo.chenby.cn"
    http:
      paths:
      - pathType: Prefix
        path: "/nginx"
        backend:
          service:
            name: nginx-demo
            port:
              number: 8000
[root@hello ~/yaml]#
[root@hello ~/yaml]# kubectl get ingress
NAME               CLASS    HOSTS                            ADDRESS        PORTS   AGE
ingress-demo-app   <none>   app.demo.com                     192.168.1.11   80      20m
ingress-host-bar   nginx    hello.chenby.cn,demo.chenby.cn   192.168.1.11   80      2m17s
[root@hello ~/yaml]#
```

Filter the Service list to find the ingress NodePorts:

```
[root@hello ~/yaml]# kubectl get svc -A | grep ingress
default         ingress-demo-app                     ClusterIP   10.68.231.41   <none>   80/TCP                       51m
ingress-nginx   ingress-nginx-controller             NodePort    10.68.93.71    <none>   80:32746/TCP,443:30538/TCP   32m
ingress-nginx   ingress-nginx-controller-admission   ClusterIP   10.68.146.23   <none>   443/TCP                      32m
[root@hello ~/yaml]#
```
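One way to confirm that both the host routing and the default backend behave as described is to hit the controller's NodePort directly with different Host headers. This is a minimal sketch, not part of the original article; it assumes the node IP 192.168.1.11 and HTTP NodePort 32746 shown in the output above, which will differ in other clusters.

```bash
# Smoke test for the Ingress rules and the default backend.
NODE_IP=192.168.1.11   # assumption: node IP from the article's output
HTTP_PORT=32746        # assumption: HTTP NodePort from the article's output

# A matching Host header should be routed to hello-server.
curl -H "Host: hello.chenby.cn" "http://${NODE_IP}:${HTTP_PORT}/"

# An unmatched Host header should fall through to default-http-backend,
# which answers 404 on / and 200 on /healthz.
curl -s -o /dev/null -w '%{http_code}\n' -H "Host: unknown.example.com" "http://${NODE_IP}:${HTTP_PORT}/"
```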
2021-12-30
Using Loki for log monitoring in k8s (Kubernetes)
Install the Helm environment:

```
[root@hello ~/yaml]# curl https://baltocdn.com/helm/signing.asc | sudo apt-key add -
[root@hello ~/yaml]# sudo apt-get install apt-transport-https --yes
[root@hello ~/yaml]# echo "deb https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
deb https://baltocdn.com/helm/stable/debian/ all main
[root@hello ~/yaml]# sudo apt-get update
[root@hello ~/yaml]# sudo apt-get install helm
[root@hello ~/yaml]#
```

Add the chart repository and pull the chart:

```
[root@hello ~/yaml]# helm repo add loki https://grafana.github.io/loki/charts && helm repo update
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /root/.kube/config
"loki" has been added to your repositories
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /root/.kube/config
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "loki" chart repository
Update Complete. ⎈Happy Helming!⎈
[root@hello ~/yaml]#
[root@hello ~/yaml]# helm pull loki/loki-stack
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /root/.kube/config
[root@hello ~/yaml]# ls
loki-stack-2.1.2.tgz  nfs-storage.yaml  nginx-ingress.yaml
[root@hello ~/yaml]# tar xf loki-stack-2.1.2.tgz
[root@hello ~/yaml]# ls
loki-stack  loki-stack-2.1.2.tgz  nfs-storage.yaml  nginx-ingress.yaml
```

Install the Loki logging stack:

```
[root@hello ~/yaml]# helm install loki loki-stack/
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /root/.kube/config
WARNING: This chart is deprecated
W1203 07:31:04.751065  212245 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
W1203 07:31:04.754254  212245 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
W1203 07:31:04.833003  212245 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
W1203 07:31:04.833003  212245 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
NAME: loki
LAST DEPLOYED: Fri Dec  3 07:31:04 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
The Loki stack has been deployed to your cluster. Loki can now be added as a datasource in Grafana.
See http://docs.grafana.org/features/datasources/loki/ for more detail.
[root@hello ~/yaml]#
```

Check whether the installation completed:

```
[root@hello ~/yaml]# helm list -A
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /root/.kube/config
NAME  NAMESPACE  REVISION  UPDATED                                STATUS    CHART             APP VERSION
loki  default    1         2021-12-03 07:31:04.3324429 +0000 UTC  deployed  loki-stack-2.1.2  v2.0.0
[root@hello ~/yaml]#
[root@hello ~/yaml]# kubectl get pod
NAME                                     READY   STATUS    RESTARTS   AGE
loki-0                                   0/1     Running   0          68s
loki-promtail-79tn8                      1/1     Running   0          68s
loki-promtail-qzjjs                      1/1     Running   0          68s
loki-promtail-zlt7p                      1/1     Running   0          68s
nfs-client-provisioner-dc5789f74-jsrh7   1/1     Running   0          44m
[root@hello ~/yaml]#
```

Check the Service and change its type:

```
[root@hello ~/yaml]# kubectl get svc
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes      ClusterIP   10.68.0.1       <none>        443/TCP    4h44m
loki            ClusterIP   10.68.140.107   <none>        3100/TCP   2m58s
loki-headless   ClusterIP   None            <none>        3100/TCP   2m58s
[root@hello ~/yaml]#
```

Set the loki Service to type: NodePort:

```
[root@hello ~/yaml]# kubectl edit svc loki
service/loki edited
[root@hello ~/yaml]#
[root@hello ~/yaml]# kubectl get svc
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes      ClusterIP   10.68.0.1       <none>        443/TCP          4h46m
loki            NodePort    10.68.140.107   <none>        3100:31089/TCP   4m34s
loki-headless   ClusterIP   None            <none>        3100/TCP         4m34s
[root@hello ~/yaml]#
```

Deploy an nginx application:

```
[root@hello ~/yaml]# vim nginx-app.yaml
[root@hello ~/yaml]# cat nginx-app.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
    jobLabel: nginx
spec:
  ports:
  - name: nginx
    port: 80
    protocol: TCP
  selector:
    app: nginx
  type: NodePort
[root@hello ~/yaml]#
```

Apply it and check the nginx pod:

```
[root@hello ~/yaml]# kubectl apply -f nginx-app.yaml
deployment.apps/nginx created
service/nginx created
[root@hello ~/yaml]# kubectl get pod | grep nginx
nginx-5d59d67564-7fj4b   1/1     Running   0          29s
[root@hello ~/yaml]#
```

Test access by generating some traffic:

```
[root@hello ~/yaml]# kubectl get svc
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes      ClusterIP   10.68.0.1       <none>        443/TCP          4h57m
loki            NodePort    10.68.140.107   <none>        3100:31089/TCP   15m
loki-headless   ClusterIP   None            <none>        3100/TCP         15m
nginx           NodePort    10.68.150.95    <none>        80:31317/TCP     105s
[root@hello ~/yaml]#
[root@hello ~/yaml]# while true; do curl --silent --output /dev/null --write-out '%{http_code}' http://192.168.1.12:31317; sleep 1; echo; done
```

Finally, add Loki as a data source in Grafana, browse the logs, and build dashboards and panels from them.
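Besides checking in Grafana, you can confirm that promtail is actually shipping logs by querying Loki's HTTP API directly. A small sketch, not from the original article; it assumes the NodePort 31089 and node IP 192.168.1.12 shown above, and the label selector depends on how promtail labels your pods.

```bash
# Check Loki health and see which labels have been ingested so far.
LOKI_URL=http://192.168.1.12:31089   # assumption: node IP and NodePort from the article

curl -s "${LOKI_URL}/ready"
curl -s "${LOKI_URL}/loki/api/v1/labels"

# Pull a few recent log lines for the nginx pods (adjust the label to match
# whatever promtail attaches in your cluster).
curl -sG "${LOKI_URL}/loki/api/v1/query_range" \
  --data-urlencode 'query={app="nginx"}' \
  --data-urlencode 'limit=5'
```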
2021-12-30
CentOS 9 network interface configuration
On CentOS 9 the network configuration file format has changed, as shown below:

```
[root@bogon ~]# cat /etc/NetworkManager/system-connections/ens18.nmconnection
[connection]
id=ens18
uuid=8d1ece55-d999-3c97-866b-d2e23832a324
type=ethernet
autoconnect-priority=-999
interface-name=ens18
permissions=
timestamp=1639473429

[ethernet]
mac-address-blacklist=

[ipv4]
address1=192.168.1.92/24,192.168.1.1
dns=8.8.8.8;
dns-search=
method=manual

[ipv6]
addr-gen-mode=eui64
dns-search=
method=auto

[proxy]
[root@bogon ~]#
```

Command syntax:

```
# nmcli connection modify <interface_name> ipv4.address <ip/prefix>
```

Note: to keep the commands short, nmcli lets you abbreviate connection as con and modify as mod.

Assign the IPv4 address 192.168.1.91 to the ens18 interface:

```
[root@chenby ~]# nmcli con mod ens18 ipv4.addresses 192.168.1.91/24;
```

Set the gateway with the following nmcli command:

```
[root@chenby ~]# nmcli con mod ens18 ipv4.gateway 192.168.1.1;
```

Switch to manual configuration (from DHCP to static):

```
[root@chenby ~]# nmcli con mod ens18 ipv4.method manual;
```

Set the DNS server to 8.8.8.8:

```
[root@chenby ~]# nmcli con mod ens18 ipv4.dns "8.8.8.8";
```

To save the changes and reload the connection, run:

```
[root@chenby ~]# nmcli con up ens18
```

Combined into a single line:

```
nmcli con mod ens18 ipv4.addresses 192.168.1.91/24; nmcli con mod ens18 ipv4.gateway 192.168.1.1; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns "8.8.8.8"; nmcli con up ens18
```

To change the IP remotely over SSH (note that the inner quotes must be escaped):

```
ssh root@192.168.1.197 "nmcli con mod ens18 ipv4.addresses 192.168.1.91/24; nmcli con mod ens18 ipv4.gateway 192.168.1.1; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns \"8.8.8.8\"; nmcli con up ens18"
```
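A quick way to confirm the changes took effect is to read the values back from the connection profile and check the kernel's view of the interface. A small sketch, not from the original article; the connection name ens18 is the one used above and may differ on your host.

```bash
# Show the IPv4 settings stored on the NetworkManager connection profile.
nmcli -g ipv4.method,ipv4.addresses,ipv4.gateway,ipv4.dns connection show ens18

# Show what is actually applied on the interface and the default route.
ip -4 addr show dev ens18
ip route show default
```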
2021-12-30
Kubernetes core hands-on (8) --- Service
13. Service: layer-4 load balancing

Create a Deployment:

```
[root@k8s-master-node1 ~/yaml/test]# vim my-app.yaml
[root@k8s-master-node1 ~/yaml/test]# cat my-app.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: my-dep
  name: my-dep
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-dep
  template:
    metadata:
      labels:
        app: my-dep
    spec:
      containers:
      - image: nginx
        name: nginx
[root@k8s-master-node1 ~/yaml/test]# kubectl apply -f my-app.yaml
deployment.apps/my-dep created
[root@k8s-master-node1 ~/yaml/test]#
[root@k8s-master-node1 ~/yaml/test]# kubectl get deployments.apps
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
ingress-demo-app         2/2     2            2           155m
my-dep                   3/3     3            3           71s
nfs-client-provisioner   1/1     1            1           140m
[root@k8s-master-node1 ~/yaml/test]#
```

Find the pods by label:

```
[root@k8s-master-node1 ~/yaml/test]# kubectl get pod --show-labels
NAME                                     READY   STATUS              RESTARTS   AGE     LABELS
hello-27285682--1-q2p6x                  0/1     Completed           0          3m15s   controller-uid=5e700c2e-29b3-4099-be17-b005d8077284,job-name=hello-27285682
hello-27285683--1-b2qgn                  0/1     Completed           0          2m15s   controller-uid=d7455e28-bd37-4fdf-bf47-de7ae6b4b7bb,job-name=hello-27285683
hello-27285684--1-glsmp                  0/1     Completed           0          75s     controller-uid=9cc7f28d-e780-49fb-a23a-ab725413ea8a,job-name=hello-27285684
hello-27285685--1-s7ws5                  0/1     ContainerCreating   0          15s     controller-uid=169e3631-6981-4df8-bfee-6a4f4632b713,job-name=hello-27285685
ingress-demo-app-694bf5d965-8rh7f        1/1     Running             0          157m    app=ingress-demo-app,pod-template-hash=694bf5d965
ingress-demo-app-694bf5d965-swkpb        1/1     Running             0          157m    app=ingress-demo-app,pod-template-hash=694bf5d965
my-dep-5b7868d854-kzflw                  1/1     Running             0          2m34s   app=my-dep,pod-template-hash=5b7868d854
my-dep-5b7868d854-pfhps                  1/1     Running             0          2m34s   app=my-dep,pod-template-hash=5b7868d854
my-dep-5b7868d854-v67ll                  1/1     Running             0          2m34s   app=my-dep,pod-template-hash=5b7868d854
nfs-client-provisioner-dc5789f74-5bznq   1/1     Running             0          141m    app=nfs-client-provisioner,pod-template-hash=dc5789f74
pi--1-k5cbq                              0/1     Completed           0          25m     controller-uid=2ecfcafd-f848-403b-b37f-9c145a0dc8cc,job-name=pi
redis-app-86g4q                          1/1     Running             0          27m     controller-revision-hash=77c8899f5d,name=fluentd-redis,pod-template-generation=1
redis-app-rt92n                          1/1     Running             0          27m     controller-revision-hash=77c8899f5d,name=fluentd-redis,pod-template-generation=1
redis-app-vkzft                          1/1     Running             0          27m     controller-revision-hash=77c8899f5d,name=fluentd-redis,pod-template-generation=1
web-0                                    1/1     Running             0          91m     app=nginx,controller-revision-hash=web-57c5cc66df,statefulset.kubernetes.io/pod-name=web-0
web-1                                    1/1     Running             0          91m     app=nginx,controller-revision-hash=web-57c5cc66df,statefulset.kubernetes.io/pod-name=web-1
web-2                                    1/1     Running             0          90m     app=nginx,controller-revision-hash=web-57c5cc66df,statefulset.kubernetes.io/pod-name=web-2
[root@k8s-master-node1 ~/yaml/test]#
[root@k8s-master-node1 ~/yaml/test]# kubectl get pod -l app=my-dep
NAME                      READY   STATUS    RESTARTS   AGE
my-dep-5b7868d854-kzflw   1/1     Running   0          2m42s
my-dep-5b7868d854-pfhps   1/1     Running   0          2m42s
my-dep-5b7868d854-v67ll   1/1     Running   0          2m42s
[root@k8s-master-node1 ~/yaml/test]#
```

Expose a port from the command line:

```
[root@k8s-master-node1 ~/yaml/test]# kubectl expose deployment my-dep --port=8000 --target-port=80
service/my-dep exposed
[root@k8s-master-node1 ~/yaml/test]#
[root@k8s-master-node1 ~/yaml/test]# kubectl get service
NAME               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
ingress-demo-app   ClusterIP   10.96.145.40    <none>        80/TCP     158m
kubernetes         ClusterIP   10.96.0.1       <none>        443/TCP    160m
my-dep             ClusterIP   10.96.241.162   <none>        8000/TCP   11s
nginx              ClusterIP   None            <none>        80/TCP     92m
[root@k8s-master-node1 ~/yaml/test]#
```

Or expose the port with a YAML manifest:

```
apiVersion: v1
kind: Service
metadata:
  labels:
    app: my-dep
  name: my-dep
spec:
  selector:
    app: my-dep
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 80
```

Exposing with type ClusterIP, from the command line:

```
kubectl expose deployment my-dep --port=8000 --target-port=80 --type=ClusterIP
```

Or with YAML:

```
apiVersion: v1
kind: Service
metadata:
  labels:
    app: my-dep
  name: my-dep
spec:
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 80
  selector:
    app: my-dep
  type: ClusterIP
```

Access test:

```
[root@k8s-master-node1 ~/yaml/test]# curl -I 10.96.241.162:8000
HTTP/1.1 200 OK
Server: nginx/1.21.4
Date: Wed, 17 Nov 2021 09:30:27 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 02 Nov 2021 14:49:22 GMT
Connection: keep-alive
ETag: "61814ff2-267"
Accept-Ranges: bytes

[root@k8s-master-node1 ~/yaml/test]#
```

Exposing with type NodePort, from the command line:

```
kubectl expose deployment my-dep --port=8000 --target-port=80 --type=NodePort
```

Or with YAML:

```
apiVersion: v1
kind: Service
metadata:
  labels:
    app: my-dep
  name: my-dep
spec:
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 80
  selector:
    app: my-dep
  type: NodePort
```

Check the Service:

```
[root@k8s-master-node1 ~/yaml/test]# kubectl get service
NAME               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
ingress-demo-app   ClusterIP   10.96.145.40    <none>        80/TCP           165m
kubernetes         ClusterIP   10.96.0.1       <none>        443/TCP          167m
my-dep             NodePort    10.96.241.162   <none>        8000:32306/TCP   7m13s
nginx              ClusterIP   None            <none>        80/TCP           99m
[root@k8s-master-node1 ~/yaml/test]#
```

Access test: a NodePort Service can be reached through the IP of any Kubernetes node:

```
[root@k8s-master-node1 ~/yaml/test]# curl -I 192.168.1.62:32306
HTTP/1.1 200 OK
Server: nginx/1.21.4
Date: Wed, 17 Nov 2021 09:36:50 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 02 Nov 2021 14:49:22 GMT
Connection: keep-alive
ETag: "61814ff2-267"
Accept-Ranges: bytes

[root@k8s-master-node1 ~/yaml/test]# curl -I 192.168.1.61:32306
HTTP/1.1 200 OK
Server: nginx/1.21.4
Date: Wed, 17 Nov 2021 09:36:53 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 02 Nov 2021 14:49:22 GMT
Connection: keep-alive
ETag: "61814ff2-267"
Accept-Ranges: bytes

[root@k8s-master-node1 ~/yaml/test]# curl -I 192.168.1.63:32306
HTTP/1.1 200 OK
Server: nginx/1.21.4
Date: Wed, 17 Nov 2021 09:36:56 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 02 Nov 2021 14:49:22 GMT
Connection: keep-alive
ETag: "61814ff2-267"
Accept-Ranges: bytes

[root@k8s-master-node1 ~/yaml/test]#
```
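When a Service does not answer, one detail worth checking is whether its label selector actually matches any pods; the Endpoints object makes that visible. A small sketch using the my-dep objects created above (pod names and IPs will of course differ in your cluster); it is not part of the original walkthrough.

```bash
# The endpoints object lists the pod IPs currently selected by app=my-dep.
kubectl get endpoints my-dep
kubectl get pod -l app=my-dep -o wide

# Scaling the Deployment should be reflected in the endpoint list within seconds.
kubectl scale deployment my-dep --replicas=5
kubectl get endpoints my-dep
```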
2021-12-30
Kubernetes core hands-on (9) --- Ingress
14. Ingress

Check whether the ingress controller is already installed:

```
[root@k8s-master-node1 ~/yaml/test]# kubectl get pod,svc -n ingress-nginx
NAME                                           READY   STATUS      RESTARTS   AGE
pod/ingress-nginx-admission-create--1-74mtg    0/1     Completed   0          172m
pod/ingress-nginx-admission-patch--1-5qrct     0/1     Completed   0          172m
pod/ingress-nginx-controller-f97bd58b5-vr8c2   1/1     Running     0          172m

NAME                                         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
service/ingress-nginx-controller             NodePort    10.96.109.80    <none>        80:30127/TCP,443:36903/TCP   172m
service/ingress-nginx-controller-admission   ClusterIP   10.96.215.201   <none>        443/TCP                      172m
[root@k8s-master-node1 ~/yaml/test]#
```

If it is not installed, follow the official documentation: https://kubernetes.github.io/ingress-nginx/deploy/

Create the test environment:

```
[root@k8s-master-node1 ~/yaml/test]# vim ingress.yaml
[root@k8s-master-node1 ~/yaml/test]# cat ingress.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-server
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-server
  template:
    metadata:
      labels:
        app: hello-server
    spec:
      containers:
      - name: hello-server
        image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/hello-server
        ports:
        - containerPort: 9000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-demo
  name: nginx-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - image: nginx
        name: nginx
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-demo
  name: nginx-demo
spec:
  selector:
    app: nginx-demo
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: hello-server
  name: hello-server
spec:
  selector:
    app: hello-server
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 9000
[root@k8s-master-node1 ~/yaml/test]#
[root@k8s-master-node1 ~/yaml/test]# kubectl apply -f ingress.yaml
deployment.apps/hello-server created
deployment.apps/nginx-demo created
service/nginx-demo created
service/hello-server created
[root@k8s-master-node1 ~/yaml/test]#
```

Check the layer-4 Services and test them:

```
[root@k8s-master-node1 ~/yaml/test]# kubectl get service
NAME               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
hello-server       ClusterIP   10.96.246.46    <none>        8000/TCP         21s
ingress-demo-app   ClusterIP   10.96.145.40    <none>        80/TCP           3h1m
kubernetes         ClusterIP   10.96.0.1       <none>        443/TCP          3h3m
my-dep             NodePort    10.96.241.162   <none>        8000:32306/TCP   23m
nginx              ClusterIP   None            <none>        80/TCP           115m
nginx-demo         ClusterIP   10.96.162.193   <none>        8000/TCP         21s
[root@k8s-master-node1 ~/yaml/test]#
[root@k8s-master-node1 ~/yaml/test]# curl -I 10.96.162.193:8000
HTTP/1.1 200 OK
Server: nginx/1.21.4
Date: Wed, 17 Nov 2021 09:49:40 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 02 Nov 2021 14:49:22 GMT
Connection: keep-alive
ETag: "61814ff2-267"
Accept-Ranges: bytes

[root@k8s-master-node1 ~/yaml/test]# curl -I 10.96.246.46:8000
HTTP/1.1 200 OK
Date: Wed, 17 Nov 2021 09:49:52 GMT
Content-Length: 12
Content-Type: text/plain; charset=utf-8

[root@k8s-master-node1 ~/yaml/test]#
```

Create the layer-7 routing rules:

```
[root@k8s-master-node1 ~/yaml/test]# vim ingress-7.yaml
[root@k8s-master-node1 ~/yaml/test]# cat ingress-7.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-host-bar
spec:
  ingressClassName: nginx
  rules:
  - host: "hello.chenby.cn"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: hello-server
            port:
              number: 8000
  - host: "demo.chenby.cn"
    http:
      paths:
      - pathType: Prefix
        path: "/nginx"  # requests are forwarded to the service below; it must be able to handle this path, otherwise a 404 is returned
        backend:
          service:
            name: nginx-demo  # e.g. a Java app; use path rewriting to strip the /nginx prefix
            port:
              number: 8000
[root@k8s-master-node1 ~/yaml/test]#
[root@k8s-master-node1 ~/yaml/test]# kubectl apply -f ingress-7.yaml
ingress.networking.k8s.io/ingress-host-bar created
[root@k8s-master-node1 ~/yaml/test]#
[root@k8s-master-node1 ~/yaml/test]# kubectl get ingress
NAME               CLASS    HOSTS                            ADDRESS        PORTS   AGE
ingress-demo-app   <none>   app.demo.com                     192.168.1.62   80      3h14m
ingress-host-bar   nginx    hello.chenby.cn,demo.chenby.cn   192.168.1.62   80      9m50s
[root@k8s-master-node1 ~/yaml/test]#
[root@k8s-master-node1 ~/yaml/test]# kubectl describe ingress ingress-host-bar
Name:             ingress-host-bar
Namespace:        default
Address:          192.168.1.62
Default backend:  default-http-backend:80 (10.244.2.7:8080)
Rules:
  Host             Path    Backends
  ----             ----    --------
  hello.chenby.cn  /       hello-server:8000 (10.244.2.47:9000,10.244.2.48:9000)
  demo.chenby.cn   /nginx  nginx-demo:8000 (10.244.0.13:80,10.244.1.34:80)
Annotations:       <none>
Events:
  Type    Reason  Age                    From                      Message
  ----    ------  ----                   ----                      -------
  Normal  Sync    6m26s (x2 over 6m50s)  nginx-ingress-controller  Scheduled for sync
[root@k8s-master-node1 ~/yaml/test]#
```

Hard-code the hostnames in the hosts file of your machine:

```
[root@k8s-master-node1 ~/yaml/test]# kubectl get service ingress-demo-app
NAME               TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
ingress-demo-app   ClusterIP   10.96.145.40   <none>        80/TCP    3h15m
[root@k8s-master-node1 ~/yaml/test]#

10.96.145.40 hello.chenby.cn
10.96.145.40 demo.chenby.cn
```

Test access:

```
[root@k8s-master-node1 ~/yaml/test]# curl hello.chenby.cn
Hostname: ingress-demo-app-694bf5d965-8rh7f
IP: 127.0.0.1
IP: 10.244.1.6
RemoteAddr: 192.168.1.61:49809
GET / HTTP/1.1
Host: hello.chenby.cn
User-Agent: curl/7.68.0
Accept: */*

[root@k8s-master-node1 ~/yaml/test]#
[root@k8s-master-node1 ~/yaml/test]# curl demo.chenby.cn
Hostname: ingress-demo-app-694bf5d965-swkpb
IP: 127.0.0.1
IP: 10.244.2.4
RemoteAddr: 192.168.1.61:57797
GET / HTTP/1.1
Host: demo.chenby.cn
User-Agent: curl/7.68.0
Accept: */*

[root@k8s-master-node1 ~/yaml/test]#
```

URL rewriting:

```
[root@k8s-master-node1 ~/yaml/test]# vim ingress-url.yaml
[root@k8s-master-node1 ~/yaml/test]# cat ingress-url.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  name: ingress-host-bar
spec:
  ingressClassName: nginx
  rules:
  - host: "hello.chenby.cn"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: hello-server
            port:
              number: 8000
  - host: "demo.chenby.cn"
    http:
      paths:
      - pathType: Prefix
        path: "/nginx(/|$)(.*)"  # requests are forwarded to the service below; it must be able to handle this path, otherwise a 404 is returned
        backend:
          service:
            name: nginx-demo  # e.g. a Java app; path rewriting strips the /nginx prefix
            port:
              number: 8000
[root@k8s-master-node1 ~/yaml/test]# kubectl apply -f ingress-url.yaml
ingress.networking.k8s.io/ingress-host-bar created
[root@k8s-master-node1 ~/yaml/test]#
[root@k8s-master-node1 ~/yaml/test]# curl hello.chenby.cn
Hostname: ingress-demo-app-694bf5d965-8rh7f
IP: 127.0.0.1
IP: 10.244.1.6
RemoteAddr: 192.168.1.61:42303
GET / HTTP/1.1
Host: hello.chenby.cn
User-Agent: curl/7.68.0
Accept: */*

[root@k8s-master-node1 ~/yaml/test]# curl demo.chenby.cn
Hostname: ingress-demo-app-694bf5d965-swkpb
IP: 127.0.0.1
IP: 10.244.2.4
RemoteAddr: 192.168.1.61:1108
GET / HTTP/1.1
Host: demo.chenby.cn
User-Agent: curl/7.68.0
Accept: */*

[root@k8s-master-node1 ~/yaml/test]#
```

Rate limiting:

```
[root@k8s-master-node1 ~/yaml/test]# vim ingress-limit.yaml
[root@k8s-master-node1 ~/yaml/test]# cat ingress-limit.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-limit-rate
  annotations:
    nginx.ingress.kubernetes.io/limit-rps: "1"
spec:
  ingressClassName: nginx
  rules:
  - host: "haha.chenby.cn"
    http:
      paths:
      - pathType: Exact
        path: "/"
        backend:
          service:
            name: nginx-demo
            port:
              number: 8000
[root@k8s-master-node1 ~/yaml/test]#
[root@k8s-master-node1 ~/yaml/test]# kubectl apply -f ingress-limit.yaml
ingress.networking.k8s.io/ingress-limit-rate created
[root@k8s-master-node1 ~/yaml/test]#
[root@k8s-master-node1 ~/yaml/test]# vim /etc/hosts
[root@k8s-master-node1 ~/yaml/test]#
[root@k8s-master-node1 ~/yaml/test]# curl haha.chenby.cn
Hostname: ingress-demo-app-694bf5d965-8rh7f
IP: 127.0.0.1
IP: 10.244.1.6
RemoteAddr: 192.168.1.61:1676
GET / HTTP/1.1
Host: haha.chenby.cn
User-Agent: curl/7.68.0
Accept: */*

[root@k8s-master-node1 ~/yaml/test]#
```

Note: to make a Service reachable from outside the cluster, change its type to NodePort:

```
[root@k8s-master-node1 ~/yaml/test]# kubectl get service ingress-demo-app
NAME               TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
ingress-demo-app   ClusterIP   10.96.145.40   <none>        80/TCP    6h27m
[root@k8s-master-node1 ~/yaml/test]# kubectl edit service ingress-demo-app
service/ingress-demo-app edited
[root@k8s-master-node1 ~/yaml/test]#
[root@k8s-master-node1 ~/yaml/test]# kubectl get service ingress-demo-app
NAME               TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
ingress-demo-app   NodePort   10.96.145.40   <none>        80:30975/TCP   6h28m
[root@k8s-master-node1 ~/yaml/test]#
```
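To exercise the rewrite and the limit-rps annotation from a shell, something like the following can be used. This is a sketch under the article's assumptions: the hostnames resolve (for example via /etc/hosts) to the ingress address, and ingress-nginx answers requests that exceed the limit with its default 503 status.

```bash
# /nginx/... should be rewritten to /... before reaching nginx-demo.
curl -s -o /dev/null -w '%{http_code}\n' http://demo.chenby.cn/nginx/

# With nginx.ingress.kubernetes.io/limit-rps: "1", a short burst of requests
# should start returning 503 once the allowed rate is exceeded.
for i in $(seq 1 10); do
  curl -s -o /dev/null -w '%{http_code}\n' http://haha.chenby.cn/
done
```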