chenby — 208 posts published, 124 comments received.
Found 208 posts in the category 默认分类 (Default).
2021-12-30
Kubernetes Core Practice (1): namespace
1. Ways to create resources

Resources can be created either from the command line or from a YAML file.

2. Namespaces

A namespace is the organizing mechanism Kubernetes provides for classifying, filtering, and managing groups of objects in a cluster. Every workload added to a Kubernetes cluster must be placed in a namespace.

Namespaces give object names a scope: a name must be unique within a namespace, but the same name can be reused across namespaces. This helps in several scenarios. For example, if you use namespaces to partition application lifecycle environments (development, staging, production), you can maintain a copy of the same objects, under the same names, in each environment.

Namespaces also make it easy to apply policy to a specific part of the cluster. You can control resource usage by defining a ResourceQuota object, which sets per-namespace limits on resource consumption. Similarly, when the cluster runs a CNI (Container Network Interface) plugin that supports network policies, such as Calico or Canal (Calico for policy, Flannel for networking), you can apply a NetworkPolicy to a namespace; its rules define how pods may communicate with each other, and different namespaces can have different policies.

One of the biggest benefits of namespaces is Kubernetes RBAC (role-based access control). RBAC lets you group a list of permissions or capabilities under a single role. A ClusterRole defines cluster-wide usage patterns, while a Role applies to a specific namespace, giving better control and granularity. Once a role is created, a RoleBinding grants the defined capabilities to specific users or groups within a single namespace. In this way, namespaces let cluster operators map the same policies onto organized sets of resources.

Common uses:
- Map namespaces to teams or projects
- Partition lifecycle environments with namespaces
- Isolate different consumers with namespaces

Creating and deleting a namespace:

```shell
[root@k8s-master-node1 ~]# kubectl create namespace cby
namespace/cby created
[root@k8s-master-node1 ~]# kubectl get namespaces
NAME                   STATUS   AGE
cby                    Active   2s
default                Active   21h
ingress-nginx          Active   21h
kube-node-lease        Active   21h
kube-public            Active   21h
kube-system            Active   21h
kubernetes-dashboard   Active   21h
[root@k8s-master-node1 ~]# kubectl delete namespace cby
namespace "cby" deleted
[root@k8s-master-node1 ~]# kubectl get namespaces
NAME                   STATUS   AGE
default                Active   21h
ingress-nginx          Active   21h
kube-node-lease        Active   21h
kube-public            Active   21h
kube-system            Active   21h
kubernetes-dashboard   Active   21h
```

Viewing the YAML representation:

```shell
[root@k8s-master-node1 ~]# kubectl create namespace cby
namespace/cby created
[root@k8s-master-node1 ~]# kubectl get namespaces cby -o yaml
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: "2021-11-17T03:08:10Z"
  labels:
    kubernetes.io/metadata.name: cby
  name: cby
  resourceVersion: "311903"
  uid: 63f2e47d-a2a5-4a67-8fd2-7ca29bfb02be
spec:
  finalizers:
  - kubernetes
status:
  phase: Active
```

Linux运维交流社区 — a WeChat official account for Linux operations: internet news and technical exchange, 57 original articles.
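The article mentions both creation methods (command line and YAML file) and the ResourceQuota mechanism. A minimal sketch combining the two: a namespace defined in a YAML file together with a quota scoped to it. The quota name and the limits below are illustrative, not from the article.

```yaml
# Hypothetical example: the "cby" namespace declared in YAML, plus a
# ResourceQuota capping it at 10 pods, 4 CPUs and 8Gi of requested memory.
apiVersion: v1
kind: Namespace
metadata:
  name: cby
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: cby-quota        # illustrative name
  namespace: cby
spec:
  hard:
    pods: "10"
    requests.cpu: "4"
    requests.memory: 8Gi
```

Applied with `kubectl apply -f cby-quota.yaml`, this is the file-based equivalent of the `kubectl create namespace cby` shown above, with per-namespace limits added.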
Official account (公众号). Also syndicated on Zhihu, CSDN, OSChina (开源中国), SegmentFault (思否), Juejin (掘金), Bilibili, and Tencent Cloud:
https://blog.csdn.net/qq_33921750
https://my.oschina.net/u/3981543
https://www.zhihu.com/people/chen-bu-yun-2
https://segmentfault.com/u/hppyvyv6/articles
https://juejin.cn/user/3315782802482007
https://space.bilibili.com/352476552/article
https://cloud.tencent.com/developer/column/93230
December 30, 2021 · 349 reads · 0 comments · 0 likes
2021-12-30
Install KubeSphere in All-in-One Mode on Linux
Background

KubeSphere is a distributed operating system for cloud-native applications built on top of Kubernetes. It is fully open source, supports multi-cloud and multi-cluster management, provides full-stack IT automation and operations capabilities, and simplifies a company's DevOps workflows. As a full-stack, multi-tenant container platform, KubeSphere offers an operations-friendly, wizard-style interface that helps teams quickly build a powerful, feature-rich container cloud platform.

1. Install Docker

```shell
root@hello:~# curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun
...(output omitted)...
root@hello:~# docker -v
Docker version 20.10.9, build c2ea9bc
```

2. Download and build KubeKey

Build the binary from source:

```shell
root@hello:~# git clone https://github.com/kubesphere/kubekey.git
Cloning into 'kubekey'...
remote: Enumerating objects: 13438, done.
remote: Counting objects: 100% (899/899), done.
remote: Compressing objects: 100% (238/238), done.
remote: Total 13438 (delta 745), reused 662 (delta 661), pack-reused 12539
Receiving objects: 100% (13438/13438), 34.95 MiB | 10.14 MiB/s, done.
Resolving deltas: 100% (5424/5424), done.
root@hello:~# cd kubekey
root@hello:~/kubekey# ./build.sh -p
...(output omitted)...
```

Note: Docker must be installed before building. If you cannot reach https://proxy.golang.org/ (for example, behind a firewall), run build.sh -p.

3. Install the required tools

```shell
root@hello:~# apt install sudo -y
root@hello:~# apt install curl -y
root@hello:~# apt install openssl -y
root@hello:~# apt install ebtables -y
root@hello:~# apt install socat -y
root@hello:~# apt install ipset -y
root@hello:~# apt install conntrack -y
root@hello:~# apt install nfs-common -y
```

4. Create the cluster

Install Kubernetes and KubeSphere at the same time:

```shell
root@hello:~# export KKZONE=cn
root@hello:~# /root/kubekey/output/kk create cluster --with-kubernetes v1.20.4 --with-kubesphere v3.1.1
+-------+------+------+---------+----------+-------+-------+-----------+---------+------------+-------------+------------------+--------------+
| name  | sudo | curl | openssl | ebtables | socat | ipset | conntrack | docker  | nfs client | ceph client | glusterfs client | time         |
+-------+------+------+---------+----------+-------+-------+-----------+---------+------------+-------------+------------------+--------------+
| hello | y    | y    | y       | y        | y     | y     | y         | 20.10.9 | y          |             |                  | UTC 02:50:57 |
+-------+------+------+---------+----------+-------+-------+-----------+---------+------------+-------------+------------------+--------------+

This is a simple check of your environment.
Before installation, you should ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations

Continue this installation? [yes/no]: yes
INFO[02:51:00 UTC] Downloading Installation Files
INFO[02:51:00 UTC] Downloading kubeadm ...
...(output omitted)...
```

5. Verify the installation

```shell
root@hello:~# kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
...(output omitted)...
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################
Console: http://192.168.1.20:30880
Account: admin
Password: P@88w0rd

NOTES:
  1. After you log into the console, please check the
     monitoring status of service components in
     "Cluster Management". If any service is not
     ready, please wait patiently until all components
     are up and running.
  2. Please change the default password after login.

#####################################################
https://kubesphere.io             2021-10-11 03:04:53
#####################################################
```

Note: The output shows the IP address and port of the web console; the default NodePort is 30880. You can now access the console at <NodeIP>:30880 with the default account and password (admin / P@88w0rd).
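The all-in-one install above puts every role on one machine. For a multi-node cluster, KubeKey can instead generate a cluster definition file with `./kk create config` and install from it with `kk create cluster -f <file>`. The sketch below shows the general shape of that file from memory; the API version, field names, and layout can differ between KubeKey releases, and every node name, address, and credential here is a placeholder.

```yaml
# Hypothetical multi-node cluster definition for KubeKey (illustrative values).
apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  # names, addresses and credentials are placeholders
  - {name: node1, address: 192.168.1.21, internalAddress: 192.168.1.21, user: root, password: "changeme"}
  - {name: node2, address: 192.168.1.22, internalAddress: 192.168.1.22, user: root, password: "changeme"}
  roleGroups:
    etcd:
    - node1
    master:
    - node1
    worker:
    - node1
    - node2
  kubernetes:
    version: v1.20.4
```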
December 30, 2021 · 814 reads · 0 comments · 0 likes
2021-12-30
Installing the Latest Harbor on Ubuntu
Install Docker:

```shell
root@hello:~# curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun
```

Configure Docker Compose:

```shell
root@hello:~# sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   633  100   633    0     0   2444      0 --:--:-- --:--:-- --:--:--  2444
100 12.1M  100 12.1M    0     0  10.2M      0  0:00:01  0:00:01 --:--:-- 26.2M
root@hello:~# sudo chmod +x /usr/local/bin/docker-compose
root@hello:~# sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
root@hello:~# docker-compose --version
docker-compose version 1.29.2, build 5becea4c
```

Download the Harbor offline installer:

```shell
root@hello:~# wget https://github.com/goharbor/harbor/releases/download/v2.3.2/harbor-offline-installer-v2.3.2.tgz
```

Unpack it:

```shell
root@hello:~# tar xvf harbor-offline-installer-v2.3.2.tgz -C /usr/local/
harbor/harbor.v2.3.2.tar.gz
harbor/prepare
harbor/LICENSE
harbor/install.sh
harbor/common.sh
harbor/harbor.yml.tmpl
root@hello:~# cd /usr/local/harbor/
```

Generate a certificate:

```shell
root@hello:/usr/local/harbor# mkdir ca
root@hello:/usr/local/harbor# cd ca/
root@hello:/usr/local/harbor/ca# pwd
/usr/local/harbor/ca
root@hello:/usr/local/harbor/ca# openssl genrsa -des3 -out server.key 2048
Generating RSA private key, 2048 bit long modulus (2 primes)
......................................+++++
..........................................+++++
e is 65537 (0x010001)
Enter pass phrase for server.key:
Verifying - Enter pass phrase for server.key:
root@hello:/usr/local/harbor/ca# openssl req -new -key server.key -out server.csr
Enter pass phrase for server.key:
You are about to be asked to enter information that will be incorporated
into your certificate request.
...(press Enter to accept the defaults for the DN prompts)...
root@hello:/usr/local/harbor/ca# cp server.key server.key.org
root@hello:/usr/local/harbor/ca# openssl rsa -in server.key.org -out server.key
Enter pass phrase for server.key.org:
writing RSA key
root@hello:/usr/local/harbor/ca# openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
Signature ok
subject=C = AU, ST = Some-State, O = Internet Widgits Pty Ltd
Getting Private key
```

Edit the configuration file — only the hostname and the certificate paths need to change:

```shell
root@hello:/usr/local/harbor# cp harbor.yml.tmpl harbor.yml
root@hello:/usr/local/harbor# vim harbor.yml
root@hello:/usr/local/harbor# cat harbor.yml
# Configuration file of Harbor

hostname: harbor.chenby.cn

# http related config
http:
  # port for http, default is 80. If https enabled, this port will redirect to https port
  port: 80

# https related config
https:
  # https port for harbor, default is 443
  port: 443
  # The path of cert and key files for nginx
  certificate: /usr/local/harbor/ca/server.crt
  private_key: /usr/local/harbor/ca/server.key

harbor_admin_password: Harbor12345
...(remainder omitted)...
```

Run the installer:

```shell
root@hello:/usr/local/harbor# ./install.sh

[Step 0]: checking if docker is installed ...
Note: docker version: 20.10.8
[Step 1]: checking docker-compose is installed ...
Note: docker-compose version: 1.29.2
[Step 2]: loading Harbor images ...
Loaded image: goharbor/redis-photon:v2.3.2
Loaded image: goharbor/nginx-photon:v2.3.2
Loaded image: goharbor/harbor-portal:v2.3.2
Loaded image: goharbor/trivy-adapter-photon:v2.3.2
Loaded image: goharbor/chartmuseum-photon:v2.3.2
Loaded image: goharbor/notary-signer-photon:v2.3.2
Loaded image: goharbor/harbor-core:v2.3.2
Loaded image: goharbor/harbor-log:v2.3.2
Loaded image: goharbor/harbor-registryctl:v2.3.2
Loaded image: goharbor/harbor-exporter:v2.3.2
Loaded image: goharbor/notary-server-photon:v2.3.2
Loaded image: goharbor/prepare:v2.3.2
Loaded image: goharbor/harbor-db:v2.3.2
Loaded image: goharbor/harbor-jobservice:v2.3.2
Loaded image: goharbor/registry-photon:v2.3.2
[Step 3]: preparing environment ...
[Step 4]: preparing harbor configs ...
prepare base dir is set to /usr/local/harbor
...(generated configuration files omitted)...
Generated configuration file: /compose_location/docker-compose.yml
Clean up the input dir
[Step 5]: starting Harbor ...
Creating network "harbor_harbor" with the default driver
Creating harbor-log ... done
Creating harbor-portal ... done
Creating harbor-db ... done
Creating registryctl ... done
Creating redis ... done
Creating registry ... done
Creating harbor-core ... done
Creating harbor-jobservice ... done
Creating nginx ... done
----Harbor has been installed and started successfully.----
```

Configure DNS resolution for the hostname, or add an entry to the local hosts file (details omitted).

Log in — default account: admin, default password: Harbor12345.

Client usage:

```shell
root@hello:~# vim /etc/docker/daemon.json
root@hello:~# cat /etc/docker/daemon.json
{
  "insecure-registries": ["https://harbor.chenby.cn"]
}
root@hello:~# systemctl daemon-reload
root@hello:~# sudo systemctl restart docker
root@hello:~# docker login https://harbor.chenby.cn/
Username: admin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
```
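The interactive openssl session above (encrypted key, DN prompts, then stripping the passphrase again) can be collapsed into a single non-interactive command. This is a sketch, not what the article ran: it assumes OpenSSL 1.1.1 or newer (for the -addext flag), produces an unencrypted key directly, and adds a subjectAltName entry, which recent Docker and browser TLS validation expects.

```shell
# Generate an unencrypted key and a self-signed certificate in one step,
# in the current directory. The hostname matches the harbor.yml example;
# adjust it for your own registry.
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -keyout server.key -out server.crt \
  -subj "/CN=harbor.chenby.cn" \
  -addext "subjectAltName=DNS:harbor.chenby.cn"
```

Point `certificate:` and `private_key:` in harbor.yml at the resulting files as before.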
December 30, 2021 · 370 reads · 0 comments · 0 likes
2021-12-30
Installing Prometheus + Grafana on Kubernetes
Official website: https://prometheus.io/
GitHub: https://github.com/coreos/kube-prometheus

Components:
- MetricServer: aggregates resource-usage data for the Kubernetes cluster, feeding in-cluster consumers such as kubectl, the HPA, and the scheduler.
- PrometheusOperator: a system monitoring and alerting toolkit, used to store monitoring data.
- NodeExporter: exposes key metric data for each node.
- KubeStateMetrics: collects data on the cluster's resource objects and is used to define alert rules.
- Prometheus: pulls data from the apiserver, scheduler, controller-manager, and kubelet components over HTTP.
- Grafana: a platform for data visualization, statistics, and monitoring.

Installation

Configure a proxy for Docker, since it will pull some images from outside the firewall:

```shell
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo touch /etc/systemd/system/docker.service.d/proxy.conf

[root@k8s-master-node1 ~]# cat /etc/systemd/system/docker.service.d/proxy.conf
[Service]
Environment="HTTP_PROXY=http://192.168.1.6:7890/"
Environment="HTTPS_PROXY=http://192.168.1.6:7890/"
Environment="NO_PROXY=localhost,127.0.0.1,.example.com"
```

The dockerd proxy is a special case: it is really systemd configuration, so systemd must be reloaded and dockerd restarted for it to take effect:

```shell
sudo systemctl daemon-reload
sudo systemctl restart docker
```

Download:

```shell
[root@k8s-master-node1 ~]# git clone https://github.com/coreos/kube-prometheus.git
Cloning into 'kube-prometheus'...
remote: Enumerating objects: 13409, done.
remote: Counting objects: 100% (1908/1908), done.
remote: Compressing objects: 100% (801/801), done.
remote: Total 13409 (delta 1184), reused 1526 (delta 947), pack-reused 11501
Receiving objects: 100% (13409/13409), 6.65 MiB | 5.21 MiB/s, done.
Resolving deltas: 100% (8313/8313), done.
[root@k8s-master-node1 ~]# cd kube-prometheus/manifests
```

Edit grafana-service.yaml to expose Grafana via NodePort:

```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: grafana
    app.kubernetes.io/name: grafana
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 8.1.3
  name: grafana
  namespace: monitoring
spec:
  type: NodePort
  ports:
  - name: http
    port: 3000
    targetPort: http
    nodePort: 31100
  selector:
    app.kubernetes.io/component: grafana
    app.kubernetes.io/name: grafana
    app.kubernetes.io/part-of: kube-prometheus
```

Edit prometheus-service.yaml the same way:

```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: prometheus
    app.kubernetes.io/name: prometheus
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 2.30.0
    prometheus: k8s
  name: prometheus-k8s
  namespace: monitoring
spec:
  type: NodePort
  ports:
  - name: web
    port: 9090
    targetPort: web
    nodePort: 31200
  - name: reloader-web
    port: 8080
    targetPort: reloader-web
    nodePort: 31300
  selector:
    app: prometheus
    app.kubernetes.io/component: prometheus
    app.kubernetes.io/name: prometheus
    app.kubernetes.io/part-of: kube-prometheus
    prometheus: k8s
  sessionAffinity: ClientIP
```

Edit alertmanager-service.yaml the same way:

```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    alertmanager: main
    app.kubernetes.io/component: alert-router
    app.kubernetes.io/name: alertmanager
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 0.23.0
  name: alertmanager-main
  namespace: monitoring
spec:
  type: NodePort
  ports:
  - name: web
    port: 9093
    targetPort: web
    nodePort: 31400
  - name: reloader-web
    port: 8080
    targetPort: reloader-web
    nodePort: 31500
  selector:
    alertmanager: main
    app: alertmanager
    app.kubernetes.io/component: alert-router
    app.kubernetes.io/name: alertmanager
    app.kubernetes.io/part-of: kube-prometheus
  sessionAffinity: ClientIP
```

Create the namespace and CRDs:

```shell
[root@k8s-master-node1 ~/kube-prometheus]# kubectl create -f /root/kube-prometheus/manifests/setup
namespace/monitoring created
customresourcedefinition.apiextensions.k8s.io/alertmanagerconfigs.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/probes.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/thanosrulers.monitoring.coreos.com created
clusterrole.rbac.authorization.k8s.io/prometheus-operator created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-operator created
deployment.apps/prometheus-operator created
service/prometheus-operator created
serviceaccount/prometheus-operator created
```

Once those resources are available, install the rest:

```shell
[root@k8s-master-node1 ~/kube-prometheus]# kubectl create -f /root/kube-prometheus/manifests/
...(output omitted)...
```

Access Prometheus: http://192.168.1.10:31200/targets
Access Grafana: http://192.168.1.10:31100/
Access the AlertManager alerting platform: http://192.168.1.10:31400/#/status
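Instead of hand-editing each full Service manifest, the same NodePort change can be expressed as a small strategic-merge patch applied with `kubectl patch`. This is a sketch, assuming the service name, namespace, and ports from the manifests above; `--patch-file` requires a reasonably recent kubectl.

```yaml
# grafana-nodeport.yaml — a strategic-merge patch (illustrative), applied with:
#   kubectl -n monitoring patch svc grafana --patch-file grafana-nodeport.yaml
spec:
  type: NodePort
  ports:
  - name: http
    port: 3000
    targetPort: http
    nodePort: 31100
```

The same pattern covers prometheus-k8s and alertmanager-main, with their own ports substituted.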
December 30, 2021 · 734 reads · 0 comments · 0 likes
2021-12-30
Configuring the Network on Ubuntu with Netplan
1. Netplan configuration workflow

1) Netplan's default configuration files live in the /etc/netplan directory. You can find them with:

```shell
ls /etc/netplan/
```

This shows the configuration file names.

2) View the contents of the Netplan network configuration file:

```shell
cat /etc/netplan/*.yaml
```

3) Open the configuration file in any editor. Since I use vim, I run:

```shell
vim /etc/netplan/*.yaml
```

Update the file to match your network. For static IP addressing, add the IP address, gateway, and DNS information; for dynamic addressing none of that is needed, since it comes from the DHCP server. Edit the file using the syntax shown in part 2 below.

4) Test the configuration before applying any changes:

```shell
sudo netplan try
```

If there is no problem, it reports that the configuration is accepted. If the configuration fails the test, it reverts to the previous working configuration.

5) Apply the new configuration:

```shell
sudo netplan apply
```

6) After all configuration is applied, restart the network service. On the desktop edition (which uses NetworkManager):

```shell
sudo systemctl restart NetworkManager
```

On Ubuntu Server (which uses systemd-networkd), use this instead:

```shell
sudo systemctl restart systemd-networkd
```

7) Verify the IP address:

```shell
ip a
```

2. Netplan configuration examples

1) DHCP:

```yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    enp3s0:
      dhcp4: true
```

2) Static IP:

```yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    enp3s0:
      addresses:
        - 10.0.0.10/8
      gateway4: 10.0.0.1
      nameservers:
        search: [mydomain, otherdomain]
        addresses: [10.0.0.5, 1.1.1.1]
```

3) Multiple interfaces with DHCP:

```yaml
network:
  version: 2
  ethernets:
    enred:
      dhcp4: yes
      dhcp4-overrides:
        route-metric: 100
    engreen:
      dhcp4: yes
      dhcp4-overrides:
        route-metric: 200
```

4) Open Wi-Fi (no password):

```yaml
network:
  version: 2
  wifis:
    wl0:
      access-points:
        opennetwork: {}
      dhcp4: yes
```

5) WPA-encrypted Wi-Fi:

```yaml
network:
  version: 2
  renderer: networkd
  wifis:
    wlp2s0b1:
      dhcp4: no
      dhcp6: no
      addresses: [10.0.0.10/8]
      gateway4: 10.0.0.1
      nameservers:
        addresses: [10.0.0.5, 8.8.8.8]
      access-points:
        "network_ssid_name":
          password: "**********"
```

6) Multiple IP addresses on one NIC (same subnet):

```yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    enp3s0:
      addresses:
        - 10.0.0.10/8
        - 10.0.0.11/8
      gateway4: 10.0.0.1
```

7) Multiple IP addresses on different subnets on one NIC:

```yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    enp3s0:
      addresses:
        - 9.0.0.9/24
        - 10.0.0.10/24
        - 11.0.0.11/24
      #gateway4:   # unset, since we configure routes below
      routes:
        - to: 0.0.0.0/0
          via: 9.0.0.1
          metric: 100
        - to: 0.0.0.0/0
          via: 10.0.0.1
          metric: 100
        - to: 0.0.0.0/0
          via: 11.0.0.1
          metric: 100
```
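Newer Netplan releases deprecate the `gateway4` key used in the static-IP example above in favour of an explicit default route. A sketch of the equivalent configuration, reusing the article's illustrative interface name and addresses (on older Netplan versions, write `to: 0.0.0.0/0` instead of `to: default`):

```yaml
# Static IP with a default route instead of the deprecated gateway4 key.
network:
  version: 2
  renderer: networkd
  ethernets:
    enp3s0:
      addresses:
        - 10.0.0.10/8
      routes:
        - to: default
          via: 10.0.0.1
      nameservers:
        addresses: [10.0.0.5, 1.1.1.1]
```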
December 30, 2021 · 1,876 reads · 2 comments · 0 likes