chenby · 208 posts published · 124 comments received
The search returned 208 posts matching "cby".
2021-12-30
Installing Ingress v1.1.0 on Kubernetes (k8s)
Ingress exposes HTTP and HTTPS routes from outside the cluster to Services inside the cluster. Traffic routing is controlled by rules defined on the Ingress resource. Below is a simple example in which all traffic is sent to the same Service.

After creating an Ingress, we found there was no default HTTP backend:

[root@hello ~/yaml/nginx]# kubectl describe ingress
Name:             ingress-host-bar
Namespace:        default
Address:
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host             Path    Backends
  ----             ----    --------
  hello.chenby.cn  /       hello-server:8000 (172.20.1.13:9000,172.20.1.14:9000)
  demo.chenby.cn   /nginx  nginx-demo:8000 (172.20.2.14:80,172.20.2.15:80)
Annotations:       <none>
Events:
  Type    Reason  Age   From                      Message
  ----    ------  ----  ----                      -------
  Normal  Sync    43m   nginx-ingress-controller  Scheduled for sync
[root@hello ~/yaml/nginx]#

The problem occurs because no default backend was created. Uninstall the existing controller first (with whatever method was used to install it), then write the following manifest and apply it:

[root@hello ~/yaml]# vim deploy.yaml
[root@hello ~/yaml]# cat deploy.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
---
# Source: ingress-nginx/templates/controller-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
automountServiceAccountToken: true
---
# Source: ingress-nginx/templates/controller-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  allow-snippet-annotations: 'true'
---
# Source: ingress-nginx/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
rules:
  - apiGroups: ['']
    resources: [configmaps, endpoints, nodes, pods, secrets, namespaces]
    verbs: [list, watch]
  - apiGroups: ['']
    resources: [nodes]
    verbs: [get]
  - apiGroups: ['']
    resources: [services]
    verbs: [get, list, watch]
  - apiGroups: [networking.k8s.io]
    resources: [ingresses]
    verbs: [get, list, watch]
  - apiGroups: ['']
    resources: [events]
    verbs: [create, patch]
  - apiGroups: [networking.k8s.io]
    resources: [ingresses/status]
    verbs: [update]
  - apiGroups: [networking.k8s.io]
    resources: [ingressclasses]
    verbs: [get, list, watch]
---
# Source: ingress-nginx/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
rules:
  - apiGroups: ['']
    resources: [namespaces]
    verbs: [get]
  - apiGroups: ['']
    resources: [configmaps, pods, secrets, endpoints]
    verbs: [get, list, watch]
  - apiGroups: ['']
    resources: [services]
    verbs: [get, list, watch]
  - apiGroups: [networking.k8s.io]
    resources: [ingresses]
    verbs: [get, list, watch]
  - apiGroups: [networking.k8s.io]
    resources: [ingresses/status]
    verbs: [update]
  - apiGroups: [networking.k8s.io]
    resources: [ingressclasses]
    verbs: [get, list, watch]
  - apiGroups: ['']
    resources: [configmaps]
    resourceNames: [ingress-controller-leader]
    verbs: [get, update]
  - apiGroups: ['']
    resources: [configmaps]
    verbs: [create]
  - apiGroups: ['']
    resources: [events]
    verbs: [create, patch]
---
# Source: ingress-nginx/templates/controller-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-service-webhook.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller-admission
  namespace: ingress-nginx
spec:
  type: ClusterIP
  ports:
    - name: https-webhook
      port: 443
      targetPort: webhook
      appProtocol: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  externalTrafficPolicy: Local
  ipFamilyPolicy: SingleStack
  ipFamilies: [IPv4]
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
      appProtocol: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
      appProtocol: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/component: controller
  revisionHistoryLimit: 10
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/component: controller
    spec:
      dnsPolicy: ClusterFirst
      containers:
        - name: controller
          image: k8s.gcr.io/ingress-nginx/controller:v1.1.0@sha256:f766669fdcf3dc26347ed273a55e754b427eb4411ee075a53f30718b4499076a
          imagePullPolicy: IfNotPresent
          lifecycle:
            preStop:
              exec:
                command: [/wait-shutdown]
          args:
            - /nginx-ingress-controller
            - --election-id=ingress-controller-leader
            - --controller-class=k8s.io/ingress-nginx
            - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
            - --validating-webhook=:8443
            - --validating-webhook-certificate=/usr/local/certificates/cert
            - --validating-webhook-key=/usr/local/certificates/key
          securityContext:
            capabilities:
              drop: [ALL]
              add: [NET_BIND_SERVICE]
            runAsUser: 101
            allowPrivilegeEscalation: true
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: LD_PRELOAD
              value: /usr/local/lib/libmimalloc.so
          livenessProbe:
            failureThreshold: 5
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
            - name: webhook
              containerPort: 8443
              protocol: TCP
          volumeMounts:
            - name: webhook-cert
              mountPath: /usr/local/certificates/
              readOnly: true
          resources:
            requests:
              cpu: 100m
              memory: 90Mi
      nodeSelector:
        kubernetes.io/os: linux
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      volumes:
        - name: webhook-cert
          secret:
            secretName: ingress-nginx-admission
---
# Source: ingress-nginx/templates/controller-ingressclass.yaml
# We don't support namespaced ingressClass yet
# So a ClusterRole and a ClusterRoleBinding is required
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: nginx
  namespace: ingress-nginx
spec:
  controller: k8s.io/ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/validating-webhook.yaml
# before changing this value, check the required kubernetes version
# https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#prerequisites
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  name: ingress-nginx-admission
webhooks:
  - name: validate.nginx.ingress.kubernetes.io
    matchPolicy: Equivalent
    rules:
      - apiGroups: [networking.k8s.io]
        apiVersions: [v1]
        operations: [CREATE, UPDATE]
        resources: [ingresses]
    failurePolicy: Fail
    sideEffects: None
    admissionReviewVersions: [v1]
    clientConfig:
      service:
        namespace: ingress-nginx
        name: ingress-nginx-controller-admission
        path: /networking/v1/ingresses
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
rules:
  - apiGroups: [admissionregistration.k8s.io]
    resources: [validatingwebhookconfigurations]
    verbs: [get, update]
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
rules:
  - apiGroups: ['']
    resources: [secrets]
    verbs: [get, create]
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-createSecret.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-create
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
spec:
  template:
    metadata:
      name: ingress-nginx-admission-create
      labels:
        helm.sh/chart: ingress-nginx-4.0.10
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 1.1.0
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: create
          image: k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1@sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660
          imagePullPolicy: IfNotPresent
          args:
            - create
            - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
            - --namespace=$(POD_NAMESPACE)
            - --secret-name=ingress-nginx-admission
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          securityContext:
            allowPrivilegeEscalation: false
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      nodeSelector:
        kubernetes.io/os: linux
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-patchWebhook.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-patch
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
spec:
  template:
    metadata:
      name: ingress-nginx-admission-patch
      labels:
        helm.sh/chart: ingress-nginx-4.0.10
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 1.1.0
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: patch
          image: k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1@sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660
          imagePullPolicy: IfNotPresent
          args:
            - patch
            - --webhook-name=ingress-nginx-admission
            - --namespace=$(POD_NAMESPACE)
            - --patch-mutating=false
            - --secret-name=ingress-nginx-admission
            - --patch-failure-policy=Fail
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          securityContext:
            allowPrivilegeEscalation: false
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      nodeSelector:
        kubernetes.io/os: linux
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
[root@hello ~/yaml]#

Enable the default backend by writing and applying this manifest:

[root@hello ~/yaml]# vim backend.yaml
[root@hello ~/yaml]# cat backend.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app.kubernetes.io/name: default-http-backend
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: default-http-backend
  template:
    metadata:
      labels:
        app.kubernetes.io/name: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
        - name: default-http-backend
          image: k8s.gcr.io/defaultbackend-amd64:1.5
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 30
            timeoutSeconds: 5
          ports:
            - containerPort: 8080
          resources:
            limits:
              cpu: 10m
              memory: 20Mi
            requests:
              cpu: 10m
              memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: kube-system
  labels:
    app.kubernetes.io/name: default-http-backend
spec:
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app.kubernetes.io/name: default-http-backend
[root@hello ~/yaml]#

Install a test application:

[root@hello ~/yaml]# vim ingress-demo-app.yaml
[root@hello ~/yaml]# cat ingress-demo-app.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-server
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-server
  template:
    metadata:
      labels:
        app: hello-server
    spec:
      containers:
        - name: hello-server
          image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/hello-server
          ports:
            - containerPort: 9000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-demo
  name: nginx-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
        - image: nginx
          name: nginx
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-demo
  name: nginx-demo
spec:
  selector:
    app: nginx-demo
  ports:
    - port: 8000
      protocol: TCP
      targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: hello-server
  name: hello-server
spec:
  selector:
    app: hello-server
  ports:
    - port: 8000
      protocol: TCP
      targetPort: 9000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-host-bar
spec:
  ingressClassName: nginx
  rules:
    - host: "hello.chenby.cn"
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: hello-server
                port:
                  number: 8000
    - host: "demo.chenby.cn"
      http:
        paths:
          - pathType: Prefix
            path: "/nginx"
            backend:
              service:
                name: nginx-demo
                port:
                  number: 8000
[root@hello ~/yaml]#
[root@hello ~/yaml]# kubectl get ingress
NAME               CLASS    HOSTS                            ADDRESS        PORTS   AGE
ingress-demo-app   <none>   app.demo.com                     192.168.1.11   80      20m
ingress-host-bar   nginx    hello.chenby.cn,demo.chenby.cn   192.168.1.11   80      2m17s
[root@hello ~/yaml]#

Filter the services to see the ingress ports:

[root@hello ~/yaml]# kubectl get svc -A | grep ingress
default         ingress-demo-app                     ClusterIP   10.68.231.41   <none>   80/TCP                       51m
ingress-nginx   ingress-nginx-controller             NodePort    10.68.93.71    <none>   80:32746/TCP,443:30538/TCP   32m
ingress-nginx   ingress-nginx-controller-admission   ClusterIP   10.68.146.23   <none>   443/TCP                      32m
[root@hello ~/yaml]#

Syndicated to Zhihu, CSDN, OSChina, SegmentFault, Juejin, Bilibili, and Tencent Cloud:
https://blog.csdn.net/qq_33921750
https://my.oschina.net/u/3981543
https://www.zhihu.com/people/chen-bu-yun-2
https://segmentfault.com/u/hppyvyv6/articles
https://juejin.cn/user/3315782802482007
https://space.bilibili.com/352476552/article
https://cloud.tencent.com/developer/column/93230

This article was published with the Article Sync Assistant (文章同步助手).
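The `kubectl get svc` output above reports NodePort mappings in the PORT(S) column as strings like `80:32746/TCP,443:30538/TCP`. As a small illustration of how to read that field programmatically, here is a sketch of a parser; the helper name `node_ports` is hypothetical, not part of kubectl.

```python
def node_ports(ports_field: str) -> dict:
    """Map each service port to its NodePort from a kubectl PORT(S) field,
    e.g. '80:32746/TCP,443:30538/TCP' -> {80: 32746, 443: 30538}."""
    result = {}
    for entry in ports_field.split(","):
        spec, _, _proto = entry.partition("/")   # drop the protocol suffix
        port, sep, node_port = spec.partition(":")
        if sep:  # only NodePort/LoadBalancer services use the 'port:nodePort' form
            result[int(port)] = int(node_port)
    return result

print(node_ports("80:32746/TCP,443:30538/TCP"))  # → {80: 32746, 443: 30538}
```

A ClusterIP-only field such as `443/TCP` yields an empty mapping, since it carries no NodePort.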
December 30, 2021 · 2,092 reads · 4 comments · 0 likes
2021-12-30
Using Loki for Log Monitoring in Kubernetes (k8s)
Install the helm environment:

[root@hello ~/yaml]# curl https://baltocdn.com/helm/signing.asc | sudo apt-key add -
[root@hello ~/yaml]# sudo apt-get install apt-transport-https --yes
[root@hello ~/yaml]# echo "deb https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
deb https://baltocdn.com/helm/stable/debian/ all main
[root@hello ~/yaml]# sudo apt-get update
[root@hello ~/yaml]# sudo apt-get install helm

Add the chart repository:

[root@hello ~/yaml]# helm repo add loki https://grafana.github.io/loki/charts && helm repo update
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /root/.kube/config
"loki" has been added to your repositories
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "loki" chart repository
Update Complete. ⎈Happy Helming!⎈
[root@hello ~/yaml]#
[root@hello ~/yaml]# helm pull loki/loki-stack
[root@hello ~/yaml]# ls
loki-stack-2.1.2.tgz  nfs-storage.yaml  nginx-ingress.yaml
[root@hello ~/yaml]# tar xf loki-stack-2.1.2.tgz
[root@hello ~/yaml]# ls
loki-stack  loki-stack-2.1.2.tgz  nfs-storage.yaml  nginx-ingress.yaml

Install the Loki log stack:

[root@hello ~/yaml]# helm install loki loki-stack/
WARNING: This chart is deprecated
W1203 07:31:04.751065  212245 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
W1203 07:31:04.754254  212245 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
W1203 07:31:04.833003  212245 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
NAME: loki
LAST DEPLOYED: Fri Dec  3 07:31:04 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
The Loki stack has been deployed to your cluster. Loki can now be added as a datasource in Grafana.
See http://docs.grafana.org/features/datasources/loki/ for more detail.
[root@hello ~/yaml]#

Check that the installation completed:

[root@hello ~/yaml]# helm list -A
NAME  NAMESPACE  REVISION  UPDATED                                STATUS    CHART             APP VERSION
loki  default    1         2021-12-03 07:31:04.3324429 +0000 UTC  deployed  loki-stack-2.1.2  v2.0.0
[root@hello ~/yaml]#
[root@hello ~/yaml]# kubectl get pod
NAME                                     READY   STATUS    RESTARTS   AGE
loki-0                                   0/1     Running   0          68s
loki-promtail-79tn8                      1/1     Running   0          68s
loki-promtail-qzjjs                      1/1     Running   0          68s
loki-promtail-zlt7p                      1/1     Running   0          68s
nfs-client-provisioner-dc5789f74-jsrh7   1/1     Running   0          44m
[root@hello ~/yaml]#

Check the services and change the loki service to type NodePort:

[root@hello ~/yaml]# kubectl get svc
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes      ClusterIP   10.68.0.1       <none>        443/TCP    4h44m
loki            ClusterIP   10.68.140.107   <none>        3100/TCP   2m58s
loki-headless   ClusterIP   None            <none>        3100/TCP   2m58s
[root@hello ~/yaml]# kubectl edit svc loki
service/loki edited
[root@hello ~/yaml]#
[root@hello ~/yaml]# kubectl get svc
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes      ClusterIP   10.68.0.1       <none>        443/TCP          4h46m
loki            NodePort    10.68.140.107   <none>        3100:31089/TCP   4m34s
loki-headless   ClusterIP   None            <none>        3100/TCP         4m34s
[root@hello ~/yaml]#

Deploy an nginx application to generate logs:

[root@hello ~/yaml]# vim nginx-app.yaml
[root@hello ~/yaml]# cat nginx-app.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
    jobLabel: nginx
spec:
  ports:
    - name: nginx
      port: 80
      protocol: TCP
  selector:
    app: nginx
  type: NodePort
[root@hello ~/yaml]#

Apply it and check the nginx pod:

[root@hello ~/yaml]# kubectl apply -f nginx-app.yaml
deployment.apps/nginx created
service/nginx created
[root@hello ~/yaml]# kubectl get pod | grep nginx
nginx-5d59d67564-7fj4b   1/1     Running   0          29s
[root@hello ~/yaml]#

Generate test traffic against the nginx NodePort:

[root@hello ~/yaml]# kubectl get svc
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes      ClusterIP   10.68.0.1       <none>        443/TCP          4h57m
loki            NodePort    10.68.140.107   <none>        3100:31089/TCP   15m
loki-headless   ClusterIP   None            <none>        3100/TCP         15m
nginx           NodePort    10.68.150.95    <none>        80:31317/TCP     105s
[root@hello ~/yaml]#
[root@hello ~/yaml]# while true; do curl --silent --output /dev/null --write-out '%{http_code}' http://192.168.1.12:31317; sleep 1; echo; done

Finally, add Loki as a datasource in Grafana, browse the logs, and add panels.
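With the loki service exposed on a NodePort, its HTTP API can be queried directly. As a sketch, the snippet below builds a request URL for Loki's `/loki/api/v1/query_range` endpoint (documented in the Grafana Loki HTTP API reference); the helper name and the base address reuse the NodePort from the transcript above and are for illustration only.

```python
from urllib.parse import urlencode

def loki_query_url(base: str, logql: str, limit: int = 100) -> str:
    """Build a Loki query_range URL for a LogQL selector such as '{app="nginx"}'.
    The LogQL braces and quotes are percent-encoded by urlencode."""
    return f"{base}/loki/api/v1/query_range?" + urlencode({"query": logql, "limit": limit})

url = loki_query_url("http://192.168.1.12:31089", '{app="nginx"}')
print(url)
```

The resulting URL can be fetched with `curl` to read the same log streams Grafana displays.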
December 30, 2021 · 757 reads · 0 comments · 0 likes
2021-12-30
CentOS 9 NIC Configuration
The NIC configuration file format has changed in CentOS 9, as shown below:

[root@bogon ~]# cat /etc/NetworkManager/system-connections/ens18.nmconnection
[connection]
id=ens18
uuid=8d1ece55-d999-3c97-866b-d2e23832a324
type=ethernet
autoconnect-priority=-999
interface-name=ens18
permissions=
timestamp=1639473429

[ethernet]
mac-address-blacklist=

[ipv4]
address1=192.168.1.92/24,192.168.1.1
dns=8.8.8.8;
dns-search=
method=manual

[ipv6]
addr-gen-mode=eui64
dns-search=
method=auto

[proxy]
[root@bogon ~]#

Command syntax:

# nmcli connection modify <interface_name> ipv4.address <ip/prefix>

Note: to shorten commands, nmcli accepts `con` for `connection` and `mod` for `modify`.

Assign the IPv4 address 192.168.1.91 to the ens18 NIC:

[root@chenby ~]# nmcli con mod ens18 ipv4.addresses 192.168.1.91/24

Set the gateway:

[root@chenby ~]# nmcli con mod ens18 ipv4.gateway 192.168.1.1

Switch to manual configuration (from DHCP to static):

[root@chenby ~]# nmcli con mod ens18 ipv4.method manual

Set the DNS server to 8.8.8.8:

[root@chenby ~]# nmcli con mod ens18 ipv4.dns "8.8.8.8"

Save the changes above and reload the connection:

[root@chenby ~]# nmcli con up ens18

Combined into one line:

nmcli con mod ens18 ipv4.addresses 192.168.1.91/24; nmcli con mod ens18 ipv4.gateway 192.168.1.1; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns "8.8.8.8"; nmcli con up ens18

To change the IP remotely (the command list is single-quoted here so the inner double quotes around the DNS value reach the remote shell intact; note the SSH session will drop once the address changes):

ssh root@192.168.1.197 'nmcli con mod ens18 ipv4.addresses 192.168.1.91/24; nmcli con mod ens18 ipv4.gateway 192.168.1.1; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns "8.8.8.8"; nmcli con up ens18'
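The keyfile's `address1=192.168.1.92/24,192.168.1.1` line packs the address/prefix and the gateway into one value, and the gateway must lie inside the subnet the prefix implies. As a sanity-check sketch (the helper name is made up for illustration), the stdlib `ipaddress` module can verify that:

```python
import ipaddress

def gateway_in_subnet(addr_cidr: str, gateway: str) -> bool:
    """Check that a gateway lies in the network implied by an address1=IP/prefix value."""
    iface = ipaddress.ip_interface(addr_cidr)          # e.g. 192.168.1.91/24
    return ipaddress.ip_address(gateway) in iface.network

print(gateway_in_subnet("192.168.1.91/24", "192.168.1.1"))  # → True
```

Running such a check before `nmcli con up` is a cheap way to avoid locking yourself out with a gateway the host cannot reach.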
December 30, 2021 · 614 reads · 0 comments · 0 likes
2021-12-30
Installing JupyterHub and JupyterLab on a Kubernetes (k8s) Cluster
Background: JupyterHub brings the power of notebooks to groups of users. It gives users access to computing environments and resources without burdening them with installation and maintenance tasks. Users — students, researchers, and data scientists alike — can do their work in their own workspaces while shared resources are managed efficiently by system administrators. JupyterHub runs in the cloud or on your own hardware and can provide a pre-configured data-science environment to any user in the world. It is customizable and scalable, suitable for small and large teams, academic courses, and large infrastructure.

Step 1: Create dynamically provisioned storage; see https://cloud.tencent.com/developer/article/1902519 for reference.

Step 2: Install helm:

root@hello:~# curl https://baltocdn.com/helm/signing.asc | sudo apt-key add -
root@hello:~# sudo apt-get install apt-transport-https --yes
root@hello:~# echo "deb https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
root@hello:~# sudo apt-get update
root@hello:~# sudo apt-get install helm

Step 3: Import images:

root@hello:~# docker load -i pause-3.5.tar
root@hello:~# docker load -i kube-scheduler.tar

Step 4: Install JupyterHub:

helm repo add jupyterhub https://jupyterhub.github.io/helm-chart/
helm repo update
helm upgrade --cleanup-on-fail \
  --install ju jupyterhub/jupyterhub \
  --namespace ju \
  --create-namespace \
  --version=1.2.0 \
  --values config.yaml

Note: config.yaml can be customized; see the comments. The version below enables JupyterLab:

root@hello:~# vim config.yaml
root@hello:~# cat config.yaml
# This file can update the JupyterHub Helm chart's default configuration values.
#
# For reference see the configuration reference and default values, but make
# sure to refer to the Helm chart version of interest to you!
#
# Introduction to YAML: https://www.youtube.com/watch?v=cdLNKUoMc6c
# Chart config reference: https://zero-to-jupyterhub.readthedocs.io/en/stable/resources/reference.html
# Chart default values: https://github.com/jupyterhub/zero-to-jupyterhub-k8s/blob/HEAD/jupyterhub/values.yaml
# Available chart versions: https://jupyterhub.github.io/helm-chart/
#
singleuser:
  defaultUrl: "/lab"
  extraEnv:
    JUPYTERHUB_SINGLEUSER_APP: "jupyter_server.serverapp.ServerApp"
#singleuser:
#  defaultUrl: "/lab"
#  extraEnv:
#    JUPYTERHUB_SINGLEUSER_APP: "notebook.notebookapp.NotebookApp"
root@hello:~#

Step 5: Change the proxy-public service to NodePort:

root@hello:~# kubectl get svc -A
NAMESPACE     NAME                        TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
default       kubernetes                  ClusterIP      10.68.0.1       <none>        443/TCP                  16h
ju            hub                         ClusterIP      10.68.60.16     <none>        8081/TCP                 114s
ju            proxy-api                   ClusterIP      10.68.239.54    <none>        8001/TCP                 114s
ju            proxy-public                LoadBalancer   10.68.62.47     <pending>     80:32070/TCP             114s
kube-system   dashboard-metrics-scraper   ClusterIP      10.68.244.241   <none>        8000/TCP                 16h
kube-system   kube-dns                    ClusterIP      10.68.0.2       <none>        53/UDP,53/TCP,9153/TCP   16h
kube-system   kube-dns-upstream           ClusterIP      10.68.221.104   <none>        53/UDP,53/TCP            16h
kube-system   kubernetes-dashboard        NodePort       10.68.206.196   <none>        443:32143/TCP            16h
kube-system   metrics-server              ClusterIP      10.68.1.149     <none>        443/TCP                  16h
kube-system   node-local-dns              ClusterIP      None            <none>        9253/TCP                 16h
root@hello:~# kubectl edit svc proxy-public -n ju
service/proxy-public edited
root@hello:~#
root@hello:~# kubectl get svc -A
NAMESPACE     NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
default       kubernetes                  ClusterIP   10.68.0.1       <none>        443/TCP                  16h
ju            hub                         ClusterIP   10.68.60.16     <none>        8081/TCP                 2m19s
ju            proxy-api                   ClusterIP   10.68.239.54    <none>        8001/TCP                 2m19s
ju            proxy-public                NodePort    10.68.62.47     <none>        80:32070/TCP             2m19s
kube-system   dashboard-metrics-scraper   ClusterIP   10.68.244.241   <none>        8000/TCP                 16h
kube-system   kube-dns                    ClusterIP   10.68.0.2       <none>        53/UDP,53/TCP,9153/TCP   16h
kube-system   kube-dns-upstream           ClusterIP   10.68.221.104   <none>        53/UDP,53/TCP            16h
kube-system   kubernetes-dashboard        NodePort    10.68.206.196   <none>        443:32143/TCP            16h
kube-system   metrics-server              ClusterIP   10.68.1.149     <none>        443/TCP                  16h
kube-system   node-local-dns              ClusterIP   None            <none>        9253/TCP                 16h
root@hello:~#
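The `--values config.yaml` flag works by recursively overlaying the file's keys onto the chart's default values. As an illustration of that layering (the `"/tree"` default shown here is an assumed placeholder, not taken from the chart), a minimal sketch of the merge:

```python
def deep_merge(base: dict, override: dict) -> dict:
    """Recursively overlay override onto base, the way helm layers --values files
    over a chart's default values; scalars in override win, nested dicts merge."""
    out = dict(base)
    for key, val in override.items():
        if isinstance(val, dict) and isinstance(out.get(key), dict):
            out[key] = deep_merge(out[key], val)
        else:
            out[key] = val
    return out

defaults = {"singleuser": {"defaultUrl": "/tree", "extraEnv": {}}}  # assumed defaults
config = {"singleuser": {"defaultUrl": "/lab",
                         "extraEnv": {"JUPYTERHUB_SINGLEUSER_APP":
                                      "jupyter_server.serverapp.ServerApp"}}}
print(deep_merge(defaults, config))
```

This is why the config.yaml above only needs to state the keys it changes (`defaultUrl`, `extraEnv`) rather than the whole values tree.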
December 30, 2021
1,583 reads
0 comments
0 likes
2021-12-30
Using GPUs in Kubernetes (k8s)
Introduction: Kubernetes includes support, currently experimental, for managing AMD and NVIDIA GPUs (graphics processing units) on nodes.
Modify the Docker configuration file:
root@hello:~# cat /etc/docker/daemon.json
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "data-root": "/var/lib/docker",
    "exec-opts": ["native.cgroupdriver=systemd"],
    "registry-mirrors": [
        "https://docker.mirrors.ustc.edu.cn",
        "http://hub-mirror.c.163.com"
    ],
    "insecure-registries": ["127.0.0.1/8"],
    "max-concurrent-downloads": 10,
    "live-restore": true,
    "log-driver": "json-file",
    "log-level": "warn",
    "log-opts": {
        "max-size": "50m",
        "max-file": "1"
    },
    "storage-driver": "overlay2"
}
root@hello:~#
root@hello:~# systemctl daemon-reload
root@hello:~# systemctl start docker
Add a node label:
root@hello:~# kubectl label nodes 192.168.1.56 nvidia.com/gpu.present=true
root@hello:~# kubectl get nodes -L nvidia.com/gpu.present
NAME           STATUS                     ROLES    AGE    VERSION   GPU.PRESENT
192.168.1.55   Ready,SchedulingDisabled   master   128m   v1.22.2
192.168.1.56   Ready                      node     127m   v1.22.2   true
root@hello:~#
Install Helm and the NVIDIA device plugin:
root@hello:~# curl https://baltocdn.com/helm/signing.asc | sudo apt-key add -
root@hello:~# sudo apt-get install apt-transport-https --yes
root@hello:~# echo "deb https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
root@hello:~# sudo apt-get update
root@hello:~# sudo apt-get install helm
helm install \
    --version=0.10.0 \
    --generate-name \
    nvdp/nvidia-device-plugin
Check whether the node now exposes NVIDIA resources:
root@hello:~# kubectl describe node 192.168.1.56 | grep nv
                    nvidia.com/gpu.present=true
  nvidia.com/gpu:     1
  nvidia.com/gpu:     1
  kube-system   nvidia-device-plugin-1637728448-fgg2d   0 (0%)   0 (0%)   0 (0%)   0 (0%)   50s
  nvidia.com/gpu   0   0
root@hello:~#
Download the image:
root@hello:~# docker pull registry.cn-beijing.aliyuncs.com/ai-samples/tensorflow:1.5.0-devel-gpu
root@hello:~# docker save -o tensorflow-gpu.tar registry.cn-beijing.aliyuncs.com/ai-samples/tensorflow:1.5.0-devel-gpu
root@hello:~# docker load -i tensorflow-gpu.tar
Create a TensorFlow test pod:
root@hello:~# vim gpu-test.yaml
root@hello:~# cat gpu-test.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-gpu
  labels:
    test-gpu: "true"
spec:
  containers:
  - name: training
    image: registry.cn-beijing.aliyuncs.com/ai-samples/tensorflow:1.5.0-devel-gpu
    command:
    - python
    - tensorflow-sample-code/tfjob/docker/mnist/main.py
    - --max_steps=300
    - --data_dir=tensorflow-sample-code/data
    resources:
      limits:
        nvidia.com/gpu: 1
  tolerations:
  - effect: NoSchedule
    operator: Exists
root@hello:~#
root@hello:~# kubectl apply -f gpu-test.yaml
pod/test-gpu created
root@hello:~#
Check the logs:
root@hello:~# kubectl logs test-gpu
WARNING:tensorflow:From tensorflow-sample-code/tfjob/docker/mnist/main.py:120: softmax_cross_entropy_with_logits (from tensorflow.python.ops.nn_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Future major versions of TensorFlow will allow gradients to flow into the labels input on backprop by default.
See tf.nn.softmax_cross_entropy_with_logits_v2.
2021-11-24 04:38:50.846973: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:895] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-11-24 04:38:50.847698: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1105] Found device 0 with properties:
name: Tesla T4 major: 7 minor: 5 memoryClockRate(GHz): 1.59
pciBusID: 0000:00:10.0
totalMemory: 14.75GiB freeMemory: 14.66GiB
2021-11-24 04:38:50.847759: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1195] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: Tesla T4, pci bus id: 0000:00:10.0, compute capability: 7.5)
root@hello:~#
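The test pod above is placed by requesting the nvidia.com/gpu resource that the device plugin advertises. A workload can additionally be pinned to labeled GPU nodes with a nodeSelector keyed on the nvidia.com/gpu.present=true label applied earlier; a minimal sketch (the pod name is a placeholder, and nvidia/cuda:11.0-base is an assumed public image):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pinned                     # hypothetical name for illustration
spec:
  nodeSelector:
    nvidia.com/gpu.present: "true"     # label applied with kubectl label above
  containers:
  - name: cuda-test
    image: nvidia/cuda:11.0-base       # assumed image; prints GPU info and exits
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1              # request one GPU from the device plugin
```

The resource limit alone already restricts scheduling to nodes with free GPUs; the nodeSelector makes the placement explicit and fails fast if no node carries the label.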
December 30, 2021
882 reads
0 comments
0 likes