chenby — 199 posts published, 144 comments received.
Found 199 posts in the default category.
2021-12-30
Installing Hadoop 2.7.2, Hive 2.3.3, and Spark 3.1.2
Hadoop overview

Hadoop is an open-source Apache framework written in Java that enables distributed processing of large datasets across clusters of computers using simple programming models. Applications built on Hadoop run in an environment that provides distributed storage and computation across the cluster, and the framework is designed to scale from a single server to thousands of machines, each offering local computation and storage.

Hive overview

Apache Hive is a data warehouse built on top of Hadoop. It maps structured data files to database tables and provides simple SQL query capability: SQL statements are translated into MapReduce jobs for execution. Note that Hive is not itself a database. Hive depends on HDFS and MapReduce; its SQL-like dialect for operating on HDFS is called HQL, and it offers a rich set of SQL query forms for analyzing data stored in HDFS. HQL is compiled into MapReduce jobs that query, aggregate, and analyze the data, so even users unfamiliar with MapReduce can conveniently work in SQL, while MapReduce developers can plug in their own mappers and reducers for more complex analysis.

Apache Spark overview

Apache Spark is a distributed, open-source processing system for big-data workloads. It uses in-memory caching and optimized query execution for fast analytic queries against data of any size. It provides development APIs in Java, Scala, Python, and R, and supports code reuse across multiple workloads: batch processing, interactive queries, real-time analytics, machine learning, graph processing, and more.

This article first sets up the base environment (JDK 1.8 + MySQL 5.7), then installs Hadoop 2.7.2, Hive 2.3.3, and Spark 3.1.2. Everything here is a single-node deployment.

1. Create a directory and unpack the JDK

```shell
[root@localhost ~]# mkdir jdk
[root@localhost ~]# cd jdk/
[root@localhost jdk]# ls
jdk-8u202-linux-x64.tar.gz
[root@localhost jdk]# tar xvf jdk-8u202-linux-x64.tar.gz
```

2. Configure environment variables

```shell
[root@localhost ~]# vim /etc/profile
[root@localhost ~]# tail -n 3 /etc/profile
export JAVA_HOME=/root/jdk/jdk1.8.0_202/
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
[root@localhost ~]# source /etc/profile
```

3. Download and install MySQL and enable it at boot

```shell
[root@localhost ~]# mkdir mysql
[root@localhost ~]# cd mysql
[root@localhost mysql]# wget https://downloads.mysql.com/archives/get/p/23/file/mysql-5.7.35-1.el7.x86_64.rpm-bundle.tar
[root@localhost mysql]# tar xvf mysql-5.7.35-1.el7.x86_64.rpm-bundle.tar
[root@localhost mysql]# yum install ./*.rpm
[root@localhost mysql]# systemctl start mysqld.service
[root@localhost mysql]# systemctl enable mysqld.service
```

4. Look up the generated root password, change it, and create a new user allowed to log in remotely

```shell
[root@localhost mysql]# sudo grep 'temporary password' /var/log/mysqld.log
2021-10-18T06:12:35.519726Z 6 [Note] [MY-010454] [Server] A temporary password is generated for root@localhost: eNHu<sXHt3rq
[root@localhost mysql]# mysql -u root -p
Enter password:
```

```sql
mysql> ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'Cby123..';
mysql> use mysql;
mysql> update user set host='%' where user ='root';
mysql> set global validate_password_policy=0;
mysql> set global validate_password_mixed_case_count=0;
mysql> set global validate_password_number_count=3;
mysql> set global validate_password_special_char_count=0;
mysql> set global validate_password_length=3;
mysql> SHOW VARIABLES LIKE 'validate_password%';
+--------------------------------------+-------+
| Variable_name                        | Value |
+--------------------------------------+-------+
| validate_password_check_user_name    | OFF   |
| validate_password_dictionary_file    |       |
| validate_password_length             | 3     |
| validate_password_mixed_case_count   | 0     |
| validate_password_number_count       | 3     |
| validate_password_policy             | LOW   |
| validate_password_special_char_count | 0     |
+--------------------------------------+-------+
mysql> create user 'cby'@'%' identified by 'cby';
mysql> grant all on *.* to 'cby'@'%';
mysql> FLUSH PRIVILEGES;
mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION;
mysql> CREATE DATABASE dss_dev;
mysql> select host,user,plugin from user;
+-----------+---------------+-----------------------+
| host      | user          | plugin                |
+-----------+---------------+-----------------------+
| %         | root          | mysql_native_password |
| localhost | mysql.session | mysql_native_password |
| localhost | mysql.sys     | mysql_native_password |
+-----------+---------------+-----------------------+
```

Note: if the root row above does not show mysql_native_password, change it with:

```sql
update user set plugin='mysql_native_password' where user='root';
```

5. Add a hosts entry and set up passwordless SSH

```shell
[root@localhost ~]# mkdir Hadoop
[root@localhost ~]# vim /etc/hosts
[root@localhost ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
127.0.0.1   namenode
[root@localhost ~]# ssh-keygen
[root@localhost ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@127.0.0.1
```

6. Download and unpack Hadoop and create the required directories

```shell
[root@localhost ~]# cd Hadoop/
[root@localhost Hadoop]# wget https://archive.apache.org/dist/hadoop/core/hadoop-2.7.2/hadoop-2.7.2.tar.gz
[root@localhost Hadoop]# tar xvf hadoop-2.7.2.tar.gz
[root@localhost Hadoop]# mkdir -p /root/Hadoop/hadoop-2.7.2/hadoopinfra/hdfs/namenode
[root@localhost Hadoop]# mkdir -p /root/Hadoop/hadoop-2.7.2/hadoopinfra/hdfs/datanode
```

7. Add the Hadoop environment variables

```shell
[root@localhost ~]# vim /etc/profile
[root@localhost ~]# tail -n 8 /etc/profile
export HADOOP_HOME=/root/Hadoop/hadoop-2.7.2/
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
[root@localhost ~]# source /etc/profile
```

8. Edit the Hadoop core configuration

```shell
[root@localhost ~]# cd Hadoop/hadoop-2.7.2/
[root@localhost hadoop]# vim /root/Hadoop/hadoop-2.7.2/etc/hadoop/core-site.xml
[root@localhost hadoop]# tail /root/Hadoop/hadoop-2.7.2/etc/hadoop/core-site.xml
```

```xml
<!-- Put site-specific property overrides in this file. -->
<configuration>
    <!-- Address of the NameNode in HDFS -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://127.0.0.1:9000</value>
    </property>
    <!-- Directory for files Hadoop generates at runtime -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/hadoop-2.7.2/data/tmp</value>
    </property>
</configuration>
```

9. Configure the HDFS directories

```shell
[root@localhost hadoop]# vim /root/Hadoop/hadoop-2.7.2/etc/hadoop/hdfs-site.xml
[root@localhost hadoop]# tail -n 15 /root/Hadoop/hadoop-2.7.2/etc/hadoop/hdfs-site.xml
```

```xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.name.dir</name>
        <value>/root/Hadoop/hadoop-2.7.2/hadoopinfra/hdfs/namenode</value>
    </property>
    <property>
        <name>dfs.data.dir</name>
        <value>/root/Hadoop/hadoop-2.7.2/hadoopinfra/hdfs/datanode</value>
    </property>
</configuration>
```

10. Configure YARN and MapReduce

```shell
[root@localhost hadoop]# vim /root/Hadoop/hadoop-2.7.2/etc/hadoop/yarn-site.xml
[root@localhost hadoop]# tail -n 6 /root/Hadoop/hadoop-2.7.2/etc/hadoop/yarn-site.xml
```

```xml
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
```

```shell
[root@localhost hadoop]# cp /root/Hadoop/hadoop-2.7.2/etc/hadoop/mapred-site.xml.template /root/Hadoop/hadoop-2.7.2/etc/hadoop/mapred-site.xml
[root@localhost hadoop]# vim /root/Hadoop/hadoop-2.7.2/etc/hadoop/mapred-site.xml
[root@localhost hadoop]# tail /root/Hadoop/hadoop-2.7.2/etc/hadoop/mapred-site.xml
```

```xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
```

11. Edit the Hadoop environment file, then format and start the cluster

In /root/Hadoop/hadoop-2.7.2/etc/hadoop/hadoop-env.sh, change JAVA_HOME:

```shell
export JAVA_HOME=/root/jdk/jdk1.8.0_202/
```

```shell
[root@localhost ~]# hdfs namenode -format
[root@localhost ~]# start-dfs.sh
[root@localhost ~]# start-yarn.sh
```

If you re-format too many times, the clusterIDs stop matching and the DataNode will not start. Delete its VERSION file, then format and start again:

```shell
[root@localhost ~]# rm -rf /root/Hadoop/hadoop-2.7.2/hadoopinfra/hdfs/datanode/current/VERSION
[root@localhost ~]# hadoop namenode -format
[root@localhost ~]# hdfs namenode -format
[root@localhost ~]# start-dfs.sh
[root@localhost ~]# start-yarn.sh
```

Access Hadoop in a browser on its default port, 50070: http://localhost:50070/. The cluster applications view is served on port 8088: http://localhost:8088/

12. Create the hive directory and unpack the release

```shell
[root@localhost ~]# mkdir hive
[root@localhost ~]# cd hive
[root@localhost hive]# wget https://archive.apache.org/dist/hive/hive-2.3.3/apache-hive-2.3.3-bin.tar.gz
[root@localhost hive]# tar xvf apache-hive-2.3.3-bin.tar.gz
```

13. Copy the Hive template configuration files

```shell
[root@localhost hive]# cd /root/hive/apache-hive-2.3.3-bin/conf/
[root@localhost conf]# cp hive-env.sh.template hive-env.sh
[root@localhost conf]# cp hive-default.xml.template hive-site.xml
[root@localhost conf]# cp hive-log4j2.properties.template hive-log4j2.properties
[root@localhost conf]# cp hive-exec-log4j2.properties.template hive-exec-log4j2.properties
```

14. Create Hive's directories in Hadoop and set permissions

```shell
[root@localhost conf]# hadoop fs -mkdir -p /data/hive/warehouse
[root@localhost conf]# hadoop fs -mkdir /data/hive/tmp
[root@localhost conf]# hadoop fs -mkdir /data/hive/log
[root@localhost conf]# hadoop fs -chmod -R 777 /data/hive/warehouse
[root@localhost conf]# hadoop fs -chmod -R 777 /data/hive/tmp
[root@localhost conf]# hadoop fs -chmod -R 777 /data/hive/log
```

Each of these commands prints `WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable`; as the message says, Hadoop falls back to the built-in Java classes, so the commands still succeed.

15. Edit the Hive configuration

Configure hive-site.xml as follows:

```xml
<property>
    <name>hive.exec.scratchdir</name>
    <value>hdfs://127.0.0.1:9000/data/hive/tmp</value>
</property>
<property>
    <name>hive.metastore.warehouse.dir</name>
    <value>hdfs://127.0.0.1:9000/data/hive/warehouse</value>
</property>
<property>
    <name>hive.querylog.location</name>
    <value>hdfs://127.0.0.1:9000/data/hive/log</value>
</property>
<!-- Disable metastore schema version verification; otherwise Spark
     programs fail at startup -->
<property>
    <name>hive.metastore.schema.verification</name>
    <value>false</value>
</property>
<!-- MySQL address, port, and the database that holds the metastore -->
<property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://127.0.0.1:3306/hive?createDatabaseIfNotExist=true</value>
</property>
<!-- MySQL driver class -->
<property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
</property>
<!-- MySQL user name -->
<property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
</property>
<!-- MySQL password -->
<property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>Cby123..</value>
</property>
```

Replace the `system:java.io.tmpdir` and `system:user.name` references in the file with a real directory and user name, or add the following:

```xml
<property>
    <name>system:java.io.tmpdir</name>
    <value>/tmp/hive/java</value>
</property>
<property>
    <name>system:user.name</name>
    <value>${user.name}</value>
</property>
```

Also change the temporary paths:

```xml
<property>
    <name>hive.exec.local.scratchdir</name>
    <value>/root/hive/apache-hive-2.3.3-bin/tmp/${system:user.name}</value>
    <description>Local scratch space for Hive jobs</description>
</property>
<property>
    <name>hive.downloaded.resources.dir</name>
    <value>/root/hive/apache-hive-2.3.3-bin/tmp/${hive.session.id}_resources</value>
    <description>Temporary local directory for added resources in the remote file system.</description>
</property>
<property>
    <name>hive.server2.logging.operation.log.location</name>
    <value>/root/hive/apache-hive-2.3.3-bin/tmp/root/operation_logs</value>
    <description>Top level directory where operation logs are stored if logging functionality is enabled</description>
</property>
```

16. Install the MySQL JDBC driver for Hive

```shell
[root@localhost ~]# cd /root/hive/apache-hive-2.3.3-bin/lib/
[root@localhost lib]# wget https://downloads.mysql.com/archives/get/p/3/file/mysql-connector-java-5.1.49.tar.gz
[root@localhost lib]# tar xvf mysql-connector-java-5.1.49.tar.gz
[root@localhost lib]# cp mysql-connector-java-5.1.49/mysql-connector-java-5.1.49.jar .
[root@localhost lib]# vim /root/hive/apache-hive-2.3.3-bin/conf/hive-env.sh
[root@localhost lib]# tail -n 3 /root/hive/apache-hive-2.3.3-bin/conf/hive-env.sh
export HADOOP_HOME=/root/Hadoop/hadoop-2.7.2/
export HIVE_CONF_DIR=/root/hive/apache-hive-2.3.3-bin/conf
export HIVE_AUX_JARS_PATH=/root/hive/apache-hive-2.3.3-bin/lib
```

17. Configure the Hive environment variables and initialize the metastore

```shell
[root@localhost ~]# vim /etc/profile
[root@localhost ~]# tail -n 6 /etc/profile
export HADOOP_HOME=/root/Hadoop/hadoop-2.7.2/
export HIVE_CONF_DIR=/root/hive/apache-hive-2.3.3-bin/conf
export HIVE_AUX_JARS_PATH=/root/hive/apache-hive-2.3.3-bin/lib
export HIVE_PATH=/root/hive/apache-hive-2.3.3-bin/
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin:$HIVE_PATH/bin
[root@localhost bin]# ./schematool -dbType mysql -initSchema
```

After initialization completes, update the MySQL connection URL in hive-site.xml (the `&` must be written as `&amp;` inside the XML value), then start the metastore and HiveServer2:

```xml
<property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://127.0.0.1:3306/hive?characterEncoding=utf8&amp;useSSL=false</value>
</property>
```

```shell
[root@localhost bin]# nohup hive --service metastore &
[root@localhost bin]# nohup hive --service hiveserver2 &
```

18. Create the spark directory and download the release

```shell
[root@localhost ~]# mkdir spark
[root@localhost ~]# cd spark
[root@localhost spark]# wget https://dlcdn.apache.org/spark/spark-3.1.2/spark-3.1.2-bin-without-hadoop.tgz --no-check-certificate
[root@localhost spark]# tar xvf spark-3.1.2-bin-without-hadoop.tgz
```

19. Configure the Spark environment variables and copy the template files

```shell
[root@localhost ~]# vim /etc/profile
[root@localhost ~]# tail -n 3 /etc/profile
export SPARK_HOME=/root/spark/spark-3.1.2-bin-without-hadoop/
export PATH=$PATH:$SPARK_HOME/bin
[root@localhost spark]# cd /root/spark/spark-3.1.2-bin-without-hadoop/conf/
[root@localhost conf]# cp spark-env.sh.template spark-env.sh
[root@localhost conf]# cp spark-defaults.conf.template spark-defaults.conf
[root@localhost conf]# cp metrics.properties.template metrics.properties
```

20. Configure the runtime environment

```shell
[root@localhost conf]# cp workers.template workers
[root@localhost conf]# vim spark-env.sh
export JAVA_HOME=/root/jdk/jdk1.8.0_202
export HADOOP_HOME=/root/Hadoop/hadoop-2.7.2
export HADOOP_CONF_DIR=/root/Hadoop/hadoop-2.7.2/etc/hadoop
export SPARK_DIST_CLASSPATH=$(/root/Hadoop/hadoop-2.7.2/bin/hadoop classpath)
export SPARK_MASTER_HOST=127.0.0.1
export SPARK_MASTER_PORT=7077
export SPARK_HISTORY_OPTS="-Dspark.history.ui.port=18080 -Dspark.history.retainedApplications=50 -Dspark.history.fs.logDirectory=hdfs://127.0.0.1:9000/spark-eventlog"
```

21. Edit the defaults file

```shell
[root@localhost conf]# vim spark-defaults.conf
spark.master                     spark://127.0.0.1:7077
spark.eventLog.enabled           true
spark.eventLog.dir               hdfs://127.0.0.1:9000/spark-eventlog
spark.eventLog.compress          true
spark.serializer                 org.apache.spark.serializer.KryoSerializer
spark.driver.memory              3g
```

22. Configure the worker nodes and start the cluster

```shell
[root@localhost conf]# vim workers
[root@localhost conf]# cat workers
127.0.0.1
[root@localhost sbin]# /root/spark/spark-3.1.2-bin-without-hadoop/sbin/start-all.sh
```

To verify, browse to the Spark master web UI on its default port, 8080: http://localhost:8080/
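Steps 8 and 15 hard-code the NameNode address `hdfs://127.0.0.1:9000` in several XML files. As a small convenience, the property block can be generated from one place; `gen_defaultfs` is a hypothetical helper written for this post, not part of Hadoop:

```shell
# Hypothetical helper: emit the fs.defaultFS property for core-site.xml so
# the NameNode address only has to be edited in one place.
gen_defaultfs() {
  local host="$1" port="${2:-9000}"   # port defaults to 9000, as in step 8
  cat <<EOF
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://${host}:${port}</value>
</property>
EOF
}

gen_defaultfs 127.0.0.1 9000
```

The output can be pasted into core-site.xml, and the same idea extends to the `hdfs://...` URLs in the Hive configuration of step 15.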
2021-12-30
Installing Istio with istioctl
Download Istio

Go to the Istio release page and download the installation file for your operating system, or fetch and extract the latest release automatically (Linux or macOS):

```shell
[root@k8s-master-node1 ~]# curl -L https://istio.io/downloadIstio | sh -
```

If the download fails, you can paste the script into a file and run it by hand. Script contents:

```shell
#!/bin/sh

# Copyright Istio Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# This file will be fetched as: curl -L https://git.io/getLatestIstio | sh -
# so it should be pure bourne shell, not bash (and not reference other scripts)

# The script fetches the latest Istio release candidate and untars it.
# You can pass variables on the command line to download a specific version
# or to override the processor architecture. For example, to download
# Istio 1.6.8 for the x86_64 architecture,
# run curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.6.8 TARGET_ARCH=x86_64 sh -.
set -e

# Determines the operating system.
OS="$(uname)"
if [ "x${OS}" = "xDarwin" ] ; then
  OSEXT="osx"
else
  OSEXT="linux"
fi

# Determine the latest Istio version by version number ignoring alpha, beta, and rc versions.
if [ "x${ISTIO_VERSION}" = "x" ] ; then
  ISTIO_VERSION="$(curl -sL https://github.com/istio/istio/releases | \
                  grep -o 'releases/[0-9]*.[0-9]*.[0-9]*/' | sort -V | \
                  tail -1 | awk -F'/' '{ print $2}')"
  ISTIO_VERSION="${ISTIO_VERSION##*/}"
fi

LOCAL_ARCH=$(uname -m)
if [ "${TARGET_ARCH}" ]; then
  LOCAL_ARCH=${TARGET_ARCH}
fi

case "${LOCAL_ARCH}" in
  x86_64)
    ISTIO_ARCH=amd64
    ;;
  armv8*)
    ISTIO_ARCH=arm64
    ;;
  aarch64*)
    ISTIO_ARCH=arm64
    ;;
  armv*)
    ISTIO_ARCH=armv7
    ;;
  amd64|arm64)
    ISTIO_ARCH=${LOCAL_ARCH}
    ;;
  *)
    echo "This system's architecture, ${LOCAL_ARCH}, isn't supported"
    exit 1
    ;;
esac

if [ "x${ISTIO_VERSION}" = "x" ] ; then
  printf "Unable to get latest Istio version. Set ISTIO_VERSION env var and re-run. For example: export ISTIO_VERSION=1.0.4"
  exit 1;
fi

NAME="istio-$ISTIO_VERSION"
URL="https://github.com/istio/istio/releases/download/${ISTIO_VERSION}/istio-${ISTIO_VERSION}-${OSEXT}.tar.gz"
ARCH_URL="https://github.com/istio/istio/releases/download/${ISTIO_VERSION}/istio-${ISTIO_VERSION}-${OSEXT}-${ISTIO_ARCH}.tar.gz"

with_arch() {
  printf "\nDownloading %s from %s ...\n" "$NAME" "$ARCH_URL"
  if ! curl -o /dev/null -sIf "$ARCH_URL"; then
    printf "\n%s is not found, please specify a valid ISTIO_VERSION and TARGET_ARCH\n" "$ARCH_URL"
    exit 1
  fi
  curl -fsLO "$ARCH_URL"
  filename="istio-${ISTIO_VERSION}-${OSEXT}-${ISTIO_ARCH}.tar.gz"
  tar -xzf "${filename}"
  rm "${filename}"
}

without_arch() {
  printf "\nDownloading %s from %s ..." "$NAME" "$URL"
  if ! curl -o /dev/null -sIf "$URL"; then
    printf "\n%s is not found, please specify a valid ISTIO_VERSION\n" "$URL"
    exit 1
  fi
  curl -fsLO "$URL"
  filename="istio-${ISTIO_VERSION}-${OSEXT}.tar.gz"
  tar -xzf "${filename}"
  rm "${filename}"
}

# Istio 1.6 and above support arch
# Istio 1.5 and below do not have arch support
ARCH_SUPPORTED="1.6"

if [ "${OS}" = "Linux" ] ; then
  # This checks if ISTIO_VERSION is less than ARCH_SUPPORTED (version-sort's before it)
  if [ "$(printf '%s\n%s' "${ARCH_SUPPORTED}" "${ISTIO_VERSION}" | sort -V | head -n 1)" = "${ISTIO_VERSION}" ]; then
    without_arch
  else
    with_arch
  fi
elif [ "x${OS}" = "xDarwin" ] ; then
  without_arch
else
  printf "\n\n"
  printf "Unable to download Istio %s at this moment!\n" "$ISTIO_VERSION"
  printf "Please verify the version you are trying to download.\n\n"
  exit 1
fi

printf ""
printf "\nIstio %s Download Complete!\n" "$ISTIO_VERSION"
printf "\n"
printf "Istio has been successfully downloaded into the %s folder on your system.\n" "$NAME"
printf "\n"
BINDIR="$(cd "$NAME/bin" && pwd)"
printf "Next Steps:\n"
printf "See https://istio.io/latest/docs/setup/install/ to add Istio to your Kubernetes cluster.\n"
printf "\n"
printf "To configure the istioctl client tool for your workstation,\n"
printf "add the %s directory to your environment path variable with:\n" "$BINDIR"
printf "\t export PATH=\"\$PATH:%s\"\n" "$BINDIR"
printf "\n"
printf "Begin the Istio pre-installation check by running:\n"
printf "\t istioctl x precheck \n"
printf "\n"
printf "Need more information? Visit https://istio.io/latest/docs/setup/install/ \n"
```

```shell
[root@k8s-master-node1 ~]# bash istio.sh
```

Move into the Istio package directory. For example, if the package is istio-1.11.4:

```shell
[root@k8s-master-node1 ~]# cd istio-1.11.4/
[root@k8s-master-node1 ~/istio-1.11.4]# ll
total 28
drwxr-x---.  2 root root    22 Oct 13 22:50 bin
-rw-r--r--.  1 root root 11348 Oct 13 22:50 LICENSE
drwxr-xr-x.  5 root root    52 Oct 13 22:50 manifests
-rw-r-----.  1 root root   854 Oct 13 22:50 manifest.yaml
-rw-r--r--.  1 root root  5866 Oct 13 22:50 README.md
drwxr-xr-x. 21 root root  4096 Oct 13 22:50 samples
drwxr-xr-x.  3 root root    57 Oct 13 22:50 tools
```

The installation directory contains the sample applications under samples/ and the istioctl client binary under bin/. Add istioctl to your search path (Linux or macOS), and persist it in /etc/profile:

```shell
[root@k8s-master-node1 ~/istio-1.11.4]# export PATH=$PWD/bin:$PATH
[root@k8s-master-node1 ~/istio-1.11.4]# vim /etc/profile
[root@k8s-master-node1 ~/istio-1.11.4]# tail -n 2 /etc/profile
export PATH=/root/istio-1.11.4/bin:$PATH
```

Install Istio with a configuration profile

The simplest option is to install Istio with one of the built-in configuration profiles; the command below uses the demo profile:

```shell
[root@k8s-master-node1 ~]# istioctl version
no running Istio pods in "istio-system"
1.11.4
[root@k8s-master-node1 ~]# istioctl install --set profile=demo -y
✔ Istio core installed
✔ Istiod installed
✔ Egress gateways installed
✔ Ingress gateways installed
✔ Installation complete
Thank you for installing Istio 1.11.  Please take a few minutes to tell us about your install/upgrade experience!  https://forms.gle/kWULBRjUv7hHci7T6
```

Check that the Istio namespace and pods were created correctly:

```shell
[root@k8s-master-node1 ~]# kubectl get pods -n istio-system
NAME                                    READY   STATUS    RESTARTS   AGE
istio-egressgateway-756d4db566-wh949    1/1     Running   0          2m
istio-ingressgateway-8577c57fb6-2vrtg   1/1     Running   0          2m
istiod-5847c59c69-l2dt2                 1/1     Running   0          2m39s
```

Check the Istio CRDs and API resources:

```shell
[root@k8s-master-node1 ~]# kubectl get crd | grep istio
authorizationpolicies.security.istio.io    2021-11-01T09:43:55Z
destinationrules.networking.istio.io       2021-11-01T09:43:55Z
envoyfilters.networking.istio.io           2021-11-01T09:43:55Z
gateways.networking.istio.io               2021-11-01T09:43:55Z
istiooperators.install.istio.io            2021-11-01T09:43:55Z
peerauthentications.security.istio.io      2021-11-01T09:43:55Z
requestauthentications.security.istio.io   2021-11-01T09:43:55Z
serviceentries.networking.istio.io         2021-11-01T09:43:55Z
sidecars.networking.istio.io               2021-11-01T09:43:55Z
telemetries.telemetry.istio.io             2021-11-01T09:43:55Z
virtualservices.networking.istio.io        2021-11-01T09:43:55Z
workloadentries.networking.istio.io        2021-11-01T09:43:55Z
workloadgroups.networking.istio.io         2021-11-01T09:43:55Z
[root@k8s-master-node1 ~]# kubectl api-resources | grep istio
istiooperators           iop,io      install.istio.io/v1alpha1      true   IstioOperator
destinationrules         dr          networking.istio.io/v1beta1    true   DestinationRule
envoyfilters                         networking.istio.io/v1alpha3   true   EnvoyFilter
gateways                 gw          networking.istio.io/v1beta1    true   Gateway
serviceentries           se          networking.istio.io/v1beta1    true   ServiceEntry
sidecars                             networking.istio.io/v1beta1    true   Sidecar
virtualservices          vs          networking.istio.io/v1beta1    true   VirtualService
workloadentries          we          networking.istio.io/v1beta1    true   WorkloadEntry
workloadgroups           wg          networking.istio.io/v1alpha3   true   WorkloadGroup
authorizationpolicies                security.istio.io/v1beta1      true   AuthorizationPolicy
peerauthentications      pa          security.istio.io/v1beta1      true   PeerAuthentication
requestauthentications   ra          security.istio.io/v1beta1      true   RequestAuthentication
telemetries              telemetry   telemetry.istio.io/v1alpha1    true   Telemetry
```

Install the dashboard add-ons:

```shell
[root@k8s-master-node1 ~]# kubectl apply -f /root/istio-1.11.4/samples/addons/ -n istio-system
serviceaccount/grafana created
configmap/grafana created
service/grafana created
deployment.apps/grafana created
configmap/istio-grafana-dashboards created
configmap/istio-services-grafana-dashboards created
deployment.apps/jaeger created
service/tracing created
service/zipkin created
service/jaeger-collector created
serviceaccount/kiali created
configmap/kiali created
clusterrole.rbac.authorization.k8s.io/kiali-viewer created
clusterrole.rbac.authorization.k8s.io/kiali created
clusterrolebinding.rbac.authorization.k8s.io/kiali created
role.rbac.authorization.k8s.io/kiali-controlplane created
rolebinding.rbac.authorization.k8s.io/kiali-controlplane created
service/kiali created
deployment.apps/kiali created
serviceaccount/prometheus created
configmap/prometheus created
clusterrole.rbac.authorization.k8s.io/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
service/prometheus created
deployment.apps/prometheus created
[root@k8s-master-node1 ~]# kubectl get pods -n istio-system
NAME                                    READY   STATUS    RESTARTS   AGE
grafana-68cc7d6d78-792cw                1/1     Running   0          88s
istio-egressgateway-756d4db566-wh949    1/1     Running   0          6m9s
istio-ingressgateway-8577c57fb6-2vrtg   1/1     Running   0          6m9s
istiod-5847c59c69-l2dt2                 1/1     Running   0          6m48s
jaeger-5d44bc5c5d-n6zjq                 1/1     Running   0          88s
kiali-fd9f88575-svz7g                   1/1     Running   0          87s
prometheus-77b49cb997-7d4s9             2/2     Running   0          86s
```

Change istio-ingressgateway to a NodePort service for easier access:

```shell
[root@k8s-master-node1 ~]# kubectl patch service istio-ingressgateway -n istio-system -p '{"spec":{"type":"NodePort"}}'
service/istio-ingressgateway patched
```
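The download script above decides between the per-architecture tarball and the plain one with a `sort -V` comparison. That check can be isolated into a small predicate (a sketch written for this post, assuming `sort -V` is available, as it is in GNU coreutils):

```shell
# Istio 1.6 and above publish per-architecture tarballs; older releases
# do not. Mirror the script's version-sort check as a reusable predicate.
ARCH_SUPPORTED="1.6"

supports_arch() {
  # True when $1 does NOT version-sort before ARCH_SUPPORTED,
  # i.e. the release is new enough to have per-arch downloads.
  [ "$(printf '%s\n%s' "${ARCH_SUPPORTED}" "$1" | sort -V | head -n 1)" != "$1" ]
}

for v in 1.5.10 1.11.4; do
  if supports_arch "$v"; then
    echo "$v: per-arch tarball"
  else
    echo "$v: single tarball"
  fi
done
```

This is why the script calls `without_arch` for 1.5.x releases and `with_arch` for 1.11.4.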
2021-12-30
KubeSphere Upgrade && Enabling Pluggable Components After Installation
Upgrading KubeSphere

```shell
root@master1:~# export KKZONE=cn
root@master1:~# kk upgrade --with-kubernetes v1.22.1 --with-kubesphere v3.2.0 -f sample.yaml
```

Enabling pluggable components

Users can view and operate on resources through the KubeSphere web console, and enabling pluggable components after installation only takes minor adjustments there. For anyone used to the Kubernetes command-line tool kubectl, KubeSphere is easy to pick up because the tool is integrated into the console.

1. Log in to the console as admin, click Platform in the upper-left corner, and select Cluster Management.
2. Click CRDs and enter clusterconfiguration in the search bar, then click the result to open its detail page.
3. Edit the configuration file: for each component you want to install, change enabled from false to true. When you are done, click Update to save the configuration.

My configuration:

```yaml
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  labels:
    version: v3.2.0
  name: ks-installer
  namespace: kubesphere-system
spec:
  alerting:
    enabled: true
  auditing:
    enabled: true
  authentication:
    jwtSecret: ''
  common:
    core:
      console:
        enableMultiLogin: true
        port: 30880
        type: NodePort
    es:
      basicAuth:
        enabled: true
        password: ''
        username: ''
      data:
        volumeSize: 20Gi
      elkPrefix: logstash
      externalElasticsearchPort: ''
      externalElasticsearchUrl: ''
      logMaxAge: 7
      master:
        volumeSize: 4Gi
    gpu:
      kinds:
        - default: true
          resourceName: nvidia.com/gpu
          resourceType: GPU
    minio:
      volumeSize: 20Gi
    monitoring:
      GPUMonitoring:
        enabled: true
      endpoint: 'http://prometheus-operated.kubesphere-monitoring-system.svc:9090'
    openldap:
      enabled: true
    redis:
      enabled: true
  devops:
    enabled: true
    jenkinsJavaOpts_MaxRAM: 2g
    jenkinsJavaOpts_Xms: 512m
    jenkinsJavaOpts_Xmx: 512m
    jenkinsMemoryLim: 2Gi
    jenkinsMemoryReq: 1500Mi
    jenkinsVolumeSize: 8Gi
  etcd:
    endpointIps: 192.168.1.10
    monitoring: false
    port: 2379
    tlsEnable: true
  events:
    enabled: true
  kubeedge:
    cloudCore:
      cloudHub:
        advertiseAddress:
          - ''
        nodeLimit: '100'
      cloudhubHttpsPort: '10002'
      cloudhubPort: '10000'
      cloudhubQuicPort: '10001'
      cloudstreamPort: '10003'
      nodeSelector:
        node-role.kubernetes.io/worker: ''
      service:
        cloudhubHttpsNodePort: '30002'
        cloudhubNodePort: '30000'
        cloudhubQuicNodePort: '30001'
        cloudstreamNodePort: '30003'
        tunnelNodePort: '30004'
      tolerations: []
      tunnelPort: '10004'
    edgeWatcher:
      edgeWatcherAgent:
        nodeSelector:
          node-role.kubernetes.io/worker: ''
        tolerations: []
      nodeSelector:
        node-role.kubernetes.io/worker: ''
      tolerations: []
    enabled: true
  logging:
    containerruntime: docker
    enabled: true
    logsidecar:
      enabled: true
      replicas: 2
  metrics_server:
    enabled: true
  monitoring:
    gpu:
      nvidia_dcgm_exporter:
        enabled: true
    storageClass: ''
  multicluster:
    clusterRole: none
  network:
    ippool:
      type: none
    networkpolicy:
      enabled: true
    topology:
      type: none
  openpitrix:
    store:
      enabled: true
  persistence:
    storageClass: ''
  servicemesh:
    enabled: true
```

4. Check the installation process with the web kubectl:

```shell
root@master1:~# kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
```

If the components install successfully, the output ends with the following message:

```
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://192.168.0.2:30880
Account: admin
Password: P@88w0rd

NOTES:
  1. After you log into the console, please check the
     monitoring status of service components in
     "Cluster Management". If any service is not
     ready, please wait patiently until all components
     are up and running.
  2. Please change the default password after login.

#####################################################
https://kubesphere.io             20xx-xx-xx xx:xx:xx
#####################################################
```

Log in to the KubeSphere console; the status of each component can be viewed under System Components.
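Instead of flipping `enabled: false` to `true` in the console editor, the same change can be scripted against a saved copy of the ClusterConfiguration; `enable_component` below is a hypothetical helper written for this post (not a KubeSphere command), and it assumes the two-space indentation of top-level spec components shown above:

```shell
# Hypothetical helper: set `enabled: true` for one top-level component
# (e.g. devops, alerting) in a local copy of the ClusterConfiguration.
enable_component() {
  local file="$1" comp="$2"
  # Limit the substitution to the lines between the component's key and
  # its first `enabled:` field, so other components are left untouched.
  sed -i "/^  ${comp}:/,/enabled:/ s/enabled: false/enabled: true/" "$file"
}

# Example: enable devops in a local copy before re-applying it.
# enable_component cluster-configuration.yaml devops
```

After editing the copy you would re-apply it, e.g. with `kubectl apply -f cluster-configuration.yaml`, and then follow the installer logs as shown above.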
2021-12-30
Python AI: Cloning Your Voice from 5 Seconds of Audio
Introduction

This Python deep-learning voice-cloning / voice-mimicking project is a three-stage deep learning framework that builds a digital representation of a voice from a few seconds of audio and uses it to condition a text-to-speech model trained to generalize to new voices.

Environment preparation

- Original English version: https://github.com/CorentinJ/Real-Time-Voice-Cloning
- Chinese fork (used in this article): https://github.com/babysor/MockingBird
- PyCharm download: https://www.jetbrains.com/pycharm/download/#section=windows
- conda virtual environments: https://www.anaconda.com/products/individual
- FFmpeg: https://github.com/BtbN/FFmpeg-Builds/releases
- Model files: https://pan.baidu.com/s/1PI-hM3sn5wbeChRryX-RCQ (extraction code: 2021)

Install FFmpeg on the system

Download the zip archive from https://github.com/BtbN/FFmpeg-Builds/releases/download/autobuild-2021-11-09-12-23/ffmpeg-N-104488-ga13646639f-win64-gpl.zip, extract it to a directory, and add that directory to the system's PATH environment variable. Then open a new cmd window and check that it is installed:

```shell
ffmpeg -version
```

Usage

Open the project directory and create a conda virtual environment with Python 3.9. Once it is created, list the existing environments in cmd and activate the new one:

```shell
conda env list
activate pythonProject1
```

Inside the environment, install the pip dependencies (using a domestic mirror to speed up the download), then install PyTorch:

```shell
pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install torch -i https://pypi.tuna.tsinghua.edu.cn/simple
```

Back in PyCharm, copy the model directory into the project. Then change one line in synthesizer/utils/symbols.py to (note the escaped quote inside the string):

```python
_characters = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz12340!\'(),-.:;? '
```

Finally, launch the toolbox from the terminal and use the audio synthesis toolbox.
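Because the toolbox fails in confusing ways when FFmpeg is missing from PATH, a quick pre-flight check before launching it can save time; `require_tool` is a small helper written for this post, not part of the project:

```shell
# Helper written for this post: return success when the named tool is
# reachable on PATH, so the caller can fail fast with a clear message.
require_tool() {
  command -v "$1" >/dev/null 2>&1
}

if require_tool ffmpeg; then
  ffmpeg -version | head -n 1
else
  echo "ffmpeg not found on PATH; re-check the environment variable setup" >&2
fi
```

The same check works for any other external dependency, e.g. `require_tool conda`.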
December 30, 2021
570 views
0 comments
0 likes
2021-12-30
Installing Command-Line Auto-Completion for Kubernetes (k8s)
Install on Ubuntu:
root@master1:~# apt install -y bash-completion
Reading package lists... Done
Building dependency tree
Reading state information... Done
bash-completion is already the newest version (1:2.10-1ubuntu1).
0 upgraded, 0 newly installed, 0 to remove and 29 not upgraded.
Install on CentOS:
[root@dss ~]# yum install bash-completion -y
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * epel: mirrors.tuna.tsinghua.edu.cn
Package 1:bash-completion-2.1-8.el7.noarch already installed and latest version
Nothing to do
[root@dss ~]#
Locate the bash_completion script:
root@master1:~# locate bash_completion
/etc/bash_completion
/etc/bash_completion.d
/etc/bash_completion.d/apport_completion
/etc/bash_completion.d/git-prompt
/etc/profile.d/bash_completion.sh
/snap/core18/2128/etc/bash_completion
/snap/core18/2128/usr/share/bash-completion/bash_completion
/snap/core18/2128/usr/share/doc/bash/README.md.bash_completion.gz
/snap/core18/2128/usr/share/perl5/Debian/Debhelper/Sequence/bash_completion.pm
/snap/lxd/21029/etc/bash_completion.d
/snap/lxd/21029/etc/bash_completion.d/snap.lxd.lxc
/usr/share/bash-completion/bash_completion
/usr/share/doc/bash/README.md.bash_completion.gz
/usr/share/perl5/Debian/Debhelper/Sequence/bash_completion.pm
/var/lib/docker/overlay2/0f27e9d2ca7fbe8a3b764a525f1c58990345512fa6dfe4162aba3e05ccff5b56/diff/etc/bash_completion.d
/var/lib/docker/overlay2/5eb1b0cb946881e1081bfa7a608b6fa85dbf2cb7e67f84b038f3b8a85bd13196/diff/usr/local/lib/node_modules/npm/node_modules/dashdash/etc/dashdash.bash_completion.in
/var/lib/docker/overlay2/76c41c1d1eb6eaa7b9259bd822a4bffebf180717a24319d2ffec3b4dcae0e66a/merged/etc/bash_completion.d
/var/lib/docker/overlay2/78b8ab76c0e0ad7ee873daab9ab3987a366ec32fda68a4bb56a218c7f8806a58/merged/etc/profile.d/bash_completion.sh
/var/lib/docker/overlay2/78b8ab76c0e0ad7ee873daab9ab3987a366ec32fda68a4bb56a218c7f8806a58/merged/usr/share/bash-completion/bash_completion
/var/lib/docker/overlay2/802133f75f62596a2c173f1b57231efbe210eddd7a43770a62ca94c86ce2ca56/merged/usr/local/lib/node_modules/npm/node_modules/dashdash/etc/dashdash.bash_completion.in
/var/lib/docker/overlay2/ee672bdd0bf0fdf590f9234a8a784ca12c262c47a0ac8ab91acc0942dfafc339/diff/etc/profile.d/bash_completion.sh
/var/lib/docker/overlay2/ee672bdd0bf0fdf590f9234a8a784ca12c262c47a0ac8ab91acc0942dfafc339/diff/usr/share/bash-completion/bash_completion
Enable completion in the current shell (temporary):
root@master1:~# source /usr/share/bash-completion/bash_completion
root@master1:~# source <(kubectl completion bash)
root@master1:~# kubectl <Tab><Tab>
annotate auth config delete exec kustomize plugin run uncordon
api-resources autoscale cordon describe explain label port-forward scale version
api-versions certificate cp diff expose logs proxy set wait
apply cluster-info create drain get options replace taint attach
completion debug edit help patch rollout top
Write it permanently into the shell configuration file:
root@master1:~# echo "source <(kubectl completion bash)" >> ~/.bashrc
root@master1:~# cat ~/.bashrc
---- (snip) ----
# some more ls aliases
alias ll='ls -alF'
alias la='ls -A'
alias l='ls -CF'

# Alias definitions.
# You may want to put all your additions into a separate file like
# ~/.bash_aliases, instead of adding them here directly.
# See /usr/share/doc/bash-doc/examples in the bash-doc package.

if [ -f ~/.bash_aliases ]; then
    . ~/.bash_aliases
fi

# enable programmable completion features (you don't need to enable
# this, if it's already enabled in /etc/bash.bashrc and /etc/profile
# sources /etc/bash.bashrc).
#if [ -f /etc/bash_completion ] && ! shopt -oq posix; then
#  . /etc/bash_completion
#fi
source <(kubectl completion bash)
root@master1:~#
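What `kubectl completion bash` ultimately does is register a completion handler through bash's `complete` builtin. The sketch below is a minimal hand-rolled illustration of that mechanism using a static word list (the real generated script registers a dynamic function, `__start_kubectl`, instead):

```shell
#!/usr/bin/env bash
# Register a static word list as tab-completions for the command name `kubectl`.
# This only affects completion, not the command itself, so it is safe to run
# even on a machine without kubectl installed.
complete -W "get describe apply delete logs exec explain" kubectl

# Print what is currently registered for kubectl:
complete -p kubectl
```

Pressing Tab after `kubectl ` in an interactive shell would then offer only those seven words; the generated script goes further by querying the API server so that resource names complete as well.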
December 30, 2021
465 views
0 comments
0 likes