199 posts found in the category 默认分类 (Default Category)
2021-12-30
Prometheus + Grafana Monitoring System
Prometheus vs Zabbix

The Zabbix agent mostly just pushes reports (push mode), whereas a Prometheus client also stores monitoring data locally and the server pulls the data it wants on a schedule. The Zabbix agent can conveniently run scripts that read databases, log files and other files on the machine and report them. Prometheus reporting clients come in two forms: SDKs for different languages and exporters for different purposes. If you want to monitor machine state, MySQL performance and so on, there are plenty of mature exporters that work out of the box and expose their metrics over HTTP for the server to pull.

Install Prometheus

Official download page: https://prometheus.io/download/ . Download the version you want, then install and use it:

cby@cby-Inspiron-7577:~$ wget https://github.com/prometheus/prometheus/releases/download/v2.21.0/prometheus-2.21.0.linux-amd64.tar.gz
cby@cby-Inspiron-7577:~$ tar xvf prometheus-2.21.0.linux-amd64.tar.gz
prometheus-2.21.0.linux-amd64/
prometheus-2.21.0.linux-amd64/LICENSE
prometheus-2.21.0.linux-amd64/prometheus
prometheus-2.21.0.linux-amd64/promtool
prometheus-2.21.0.linux-amd64/prometheus.yml
prometheus-2.21.0.linux-amd64/NOTICE
prometheus-2.21.0.linux-amd64/console_libraries/
prometheus-2.21.0.linux-amd64/console_libraries/menu.lib
prometheus-2.21.0.linux-amd64/console_libraries/prom.lib
prometheus-2.21.0.linux-amd64/consoles/
prometheus-2.21.0.linux-amd64/consoles/node-overview.html
prometheus-2.21.0.linux-amd64/consoles/node.html
prometheus-2.21.0.linux-amd64/consoles/prometheus.html
prometheus-2.21.0.linux-amd64/consoles/node-cpu.html
prometheus-2.21.0.linux-amd64/consoles/index.html.example
prometheus-2.21.0.linux-amd64/consoles/prometheus-overview.html
prometheus-2.21.0.linux-amd64/consoles/node-disk.html

After unpacking, enter the directory; the program can be used as-is:

cby@cby-Inspiron-7577:~/prometheus-2.21.0.linux-amd64$ ll
总用量 161140
drwxr-xr-x  4 cby cby     4096 9月  11 21:30 ./
drwxr-xr-x 22 cby cby     4096 10月  5 00:18 ../
drwxr-xr-x  2 cby cby     4096 9月  11 21:29 console_libraries/
drwxr-xr-x  2 cby cby     4096 9月  11 21:29 consoles/
-rw-r--r--  1 cby cby    11357 9月  11 21:29 LICENSE
-rw-r--r--  1 cby cby     3420 9月  11 21:29 NOTICE
-rwxr-xr-x  1 cby cby 88471209 9月  11 19:37 prometheus*
-rw-r--r--  1 cby cby      926 9月  11 21:29 prometheus.yml
-rwxr-xr-x  1 cby cby 76493104 9月  11 19:39 promtool*

Check the version:

cby@cby-Inspiron-7577:~/prometheus-2.21.0.linux-amd64$ ./prometheus --version
prometheus, version 2.21.0 (branch: HEAD, revision: e83ef207b6c2398919b69cd87d2693cfc2fb4127)
  build user:       root@a4d9bea8479e
  build date:       20200911-11:35:02
  go version:       go1.15.2

View the server configuration file:

cby@cby-Inspiron-7577:~/prometheus-2.21.0.linux-amd64$ cat prometheus.yml
# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
  - static_configs:
    - targets:
      # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
    - targets: ['localhost:9090']

The file has roughly four parts:
- global: global configuration; scrape_interval is how often data is scraped and evaluation_interval is how often the alerting rules are evaluated.
- alerting: the Alertmanager configuration (Alertmanager is not installed yet).
- rule_files: which alerting rule files to load.
- scrape_configs: the targets to scrape. Each job_name is one target, and its targets entry lists the IP and port to collect from. By default Prometheus monitors itself; edit this entry to change the port it scrapes. Every Prometheus exporter becomes a target and reports its own kind of data, such as machine state or MySQL performance, and language SDKs are also targets that report your custom business metrics.

Start the server:

cby@cby-Inspiron-7577:~/prometheus-2.21.0.linux-amd64$ ./prometheus --config.file=prometheus.yml

Once it is running you can reach it on the default port 9090. If you cannot, check whether a firewall is in the way; if not, check that the process started and is listening on the port.
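Whenever prometheus.yml is edited later on (for example to add the node_exporter target described next), the bundled promtool can validate the file before a restart, and a running server will reload it on SIGHUP. This is a minimal sketch, assuming the extraction directory used above:

cd ~/prometheus-2.21.0.linux-amd64
# Syntax-check the configuration before (re)starting Prometheus.
./promtool check config prometheus.yml
# A running Prometheus re-reads prometheus.yml on SIGHUP without losing its data.
pkill -HUP -x prometheus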
Add a machine monitor

On the official download page you can also find the node_exporter tarball. This exporter collects basic hardware metrics such as CPU, memory and disk usage, and node_exporter is itself an HTTP service that can be queried directly.

Download the latest release, unpack it and run it:

cby@cby-Inspiron-7577:~$ wget https://github.com/prometheus/node_exporter/releases/download/v1.0.1/node_exporter-1.0.1.linux-amd64.tar.gz
cby@cby-Inspiron-7577:~$ cd node_exporter-1.0.1.linux-amd64/
cby@cby-Inspiron-7577:~/node_exporter-1.0.1.linux-amd64$ ls
LICENSE  node_exporter  NOTICE
cby@cby-Inspiron-7577:~/node_exporter-1.0.1.linux-amd64$ ./node_exporter
level=info ts=2020-10-04T16:31:41.858Z caller=node_exporter.go:177 msg="Starting node_exporter" version="(version=1.0.1, branch=HEAD, revision=3715be6ae899f2a9b9dbfd9c39f3e09a7bd4559f)"
level=info ts=2020-10-04T16:31:41.858Z caller=node_exporter.go:178 msg="Build context" build_context="(go=go1.14.4, user=root@1f76dbbcfa55, date=20200616-12:44:12)"
level=info ts=2020-10-04T16:31:41.859Z caller=node_exporter.go:105 msg="Enabled collectors"

Use curl to check that it started correctly:

cby@cby-Inspiron-7577:~$ curl http://localhost:9100/metrics

If it responds, add a target for it in prometheus.yml:

# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
  - static_configs:
    - targets:
      # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
    - targets: ['localhost:9090']
  - job_name: 'server'
    static_configs:
    - targets: ['localhost:9100']

Under Status --> Targets in the web UI you can now see both targets.

Install Grafana

cby@cby-Inspiron-7577:~$ sudo apt-get install -y adduser libfontconfig1
cby@cby-Inspiron-7577:~$ wget https://dl.grafana.com/oss/release/grafana_7.2.0_amd64.deb
cby@cby-Inspiron-7577:~$ sudo dpkg -i grafana_7.2.0_amd64.deb
正在选中未选择的软件包 grafana。
(正在读取数据库 ... 系统当前共安装有 211277 个文件和目录。)
准备解压 grafana_7.2.0_amd64.deb ...
正在解压 grafana (7.2.0) ...
正在设置 grafana (7.2.0) ...
正在添加系统用户"grafana" (UID 130)...
正在将新用户"grafana" (UID 130)添加到组"grafana"...
无法创建主目录"/usr/share/grafana"。
### NOT starting on installation, please execute the following statements to configure grafana to start automatically using systemd
 sudo /bin/systemctl daemon-reload
 sudo /bin/systemctl enable grafana-server
### You can start grafana-server by executing
 sudo /bin/systemctl start grafana-server
正在处理用于 systemd (245.4-4ubuntu3.2) 的触发器 ...

After the installation completes, start it:

cby@cby-Inspiron-7577:~$ sudo systemctl start grafana-server.service
cby@cby-Inspiron-7577:~$ sudo systemctl status grafana-server.service
● grafana-server.service - Grafana instance
     Loaded: loaded (/lib/systemd/system/grafana-server.service; disabled; vendor preset: enabled)
     Active: active (running) since Mon 2020-10-05 00:02:59 CST; 40min ago
       Docs: http://docs.grafana.org
   Main PID: 1521572 (grafana-server)
      Tasks: 14 (limit: 18689)
     Memory: 25.3M
     CGroup: /system.slice/grafana-server.service
             └─1521572 /usr/sbin/grafana-server --config=/etc/grafana/grafana.ini --pidfile=/var/run/grafa

Grafana listens on port 3000 by default; open it with the server IP plus that port. The default username and password are both admin, and after logging in you will see the home page. Add Prometheus as a data source in the settings.

After adding the data source, import a monitoring dashboard, or build your own if you are feeling industrious. You can pick a dashboard you like from the official gallery at https://grafana.com/dashboards, download its JSON and import it. Once it is imported, the colourful panels appear.

With the dashboard in place you will inevitably want alerting; one option is onealert: https://caweb.aiops.com/#/Application/newBuild/grafana/0

At this point the environment is fully configured.
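As an aside on the rule_files entry explained earlier, Prometheus can also evaluate alerting rules itself. The sketch below is only an illustration and not part of the article's onealert setup; the file name and the rule are assumptions, and actually delivering notifications would additionally require Alertmanager, which is not installed here.

cat > first_rules.yml <<'EOF'
groups:
  - name: node-alerts
    rules:
      - alert: InstanceDown
        # Fires when a scrape target has been unreachable for one minute.
        expr: up == 0
        for: 1m
        labels:
          severity: critical
EOF
# Uncomment '- "first_rules.yml"' under rule_files in prometheus.yml, then validate and reload:
./promtool check rules first_rules.yml
pkill -HUP -x prometheus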
December 30, 2021 · 617 reads · 0 comments · 0 likes
2021-12-30
Installing Python on Linux by Compiling from Source
You can use the Python 3 that ships with Ubuntu, but you cannot freely control the version and you still have to install pip3 separately; if you then want to upgrade pip3, some unpleasant usage problems appear. On CentOS only Python 2 is available by default, and installing Python 3 through yum runs into the same outdated version and pip3 issues. If you do not compile and install it yourself, how else can you keep running the latest version? Unless you use Windows.

Dependencies needed to build Python 3 on CentOS:

sudo yum install zlib-devel bzip2-devel openssl-devel ncurses-devel sqlite-devel readline-devel tk-devel libffi-devel expat-devel gdbm-devel xz-devel db4-devel libpcap-devel make

Dependencies needed to build Python 3 on Ubuntu:

$ sudo apt install libreadline-gplv2-dev libncursesw5-dev libssl-dev libsqlite3-dev tk-dev libgdbm-dev libbz2-dev zlib1g-dev libffi-dev liblzma-dev

Install GCC

The minimal CentOS image and Ubuntu do not ship with gcc preinstalled; if you are using either of them, make sure a gcc compiler is available. To install and check gcc:

$ sudo yum install gcc   # install gcc in centos
$ sudo apt install gcc   # install gcc in ubuntu
$ which gcc              # check if gcc is there
$ gcc --version          # check gcc version

Download the Python 3 source code and unpack it

The official source download page is https://www.python.org/downloads/. Download it with curl or wget, then unpack it:

wget https://www.python.org/ftp/python/3.9.2/Python-3.9.2.tgz
tar xvf Python-3.9.2.tgz

Run configure

Enter the directory unpacked in the previous step and run configure:

$ cd Python-3.9.2
$ ./configure --prefix=/usr/local/python-3.9.2

make and install

Finally, run make and install:

$ make && sudo make install

sudo is required for make install because the installation prefix chosen at configure time is a system path, not the user's /home/user path.

ln -s /usr/local/python-3.9.2/bin/python3.9 /usr/bin/python3
ln -s /usr/local/python-3.9.2/bin/pip3.9 /usr/bin/pip3
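A quick sanity check once the symlinks are in place confirms that the optional modules enabled by the development packages above were actually built in. This is only a sketch and assumes the /usr/local/python-3.9.2 prefix used above:

python3 --version   # expect: Python 3.9.2
pip3 --version
# ssl, sqlite3, lzma, bz2 and readline only build when the corresponding -devel/-dev
# packages listed earlier were present at configure time.
python3 -c "import ssl, sqlite3, lzma, bz2, readline; print('optional modules OK')"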
December 30, 2021 · 565 reads · 0 comments · 0 likes
2021-12-30
Server Intrusion: an Abnormal Process with a Random Name That Cannot Be Killed
The story: one day at a dinner, a friend told me that there was an abnormal process on his server that kept running at full CPU. After a round of polite modesty I agreed to log in to his server and take a look when I had time.

That day happened to be May 1st, the annual Labour Day. I was at home with nothing to do, watching a variety show, when my phone buzzed with a WeChat message: he had sent me a couple of screenshots, which immediately made me curious.

In the screenshots, the file that exe points to under the process's /proc directory had already been deleted. Seeing that, I suspected the process was being hidden. I asked my friend for the root password, logged in, ran top, and found a strange process running. I killed it with the kill command, waited ten minutes or so, and it did not come back, so I told my friend it was dealt with. He asked whether I had simply killed it; I said yes, and he added that the process would come back after a while. I asked how long that usually took, and he said he was not sure, but certainly within a day. That worried me: if it only restarted within a day, I would not see it again until tomorrow, so there was nothing more to do for now. I went back to my show.

Not long after, I checked again and found that the process had started again under a different name, once more saturating the CPU. While examining its executable I found that the process connects to a server in South Korea; visiting that IP showed a normal-looking website with nothing unusual.

Looking at the working directory, I found another problem: the command file it had been started from was gone and the working directory itself had been deleted. I was stuck, until I remembered a script that ran on a schedule. The script turned out to be base64 encoded; after decoding it with an online tool I could finally read the full script.

Roughly, the script writes a temporary file, gives it execute permission, runs it, and deletes it afterwards, which is why the files in the execution directory showed up as missing earlier. The nastiest part is the key piece: it concatenates a URL and downloads the malware. Through a series of steps it checks the local IP, the current user, the machine architecture, the hostname, and all IPs on the local network interfaces, and, most importantly, turns that network information into an md5sum. Finally it reads the cron jobs and encodes them as a base64 string.

Further down it downloads a script, executes it and adds a cron job. Amusingly, the script dates from 2017 and is still in use. In the end I removed all of its permissions, renamed it, and deleted the cron job. With that, the malware was cleaned up.
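For readers who want to reproduce the investigation, the commands below are a hedged sketch of the same triage steps; they are illustrative, not taken from the incident itself, and the base64 string is a placeholder:

# Find processes whose on-disk binary has been deleted, as seen in the screenshots.
ls -l /proc/[0-9]*/exe 2>/dev/null | grep '(deleted)'
# Dump every user's cron entries; persistence for this kind of malware usually hides here.
for u in $(cut -d: -f1 /etc/passwd); do crontab -l -u "$u" 2>/dev/null | sed "s/^/$u: /"; done
# Also check the system-wide cron locations.
cat /etc/crontab; ls /etc/cron.d/ /var/spool/cron/ 2>/dev/null
# Decode a suspicious base64 blob found in one of those entries.
echo 'BASE64_STRING_HERE' | base64 -d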
December 30, 2021 · 780 reads · 0 comments · 0 likes
2021-12-30
Getting a List of China IP Addresses from APNIC
About APNIC

The global IP address blocks are allocated by IANA (Internet Assigned Numbers Authority) to the three major regional IP address registries: ARIN (American Registry for Internet Numbers) handles address allocation for North America, South America, the Caribbean and sub-Saharan Africa, and also allocates addresses to global NSPs (Network Service Providers); RIPE (Reseaux IP Europeens) covers Europe, the Middle East, North Africa and parts of western Asia (the former Soviet Union); APNIC (Asia Pacific Network Information Center) covers Asia and the Pacific region.

Getting the APNIC allocation table

APNIC publishes a daily-updated table of IPv4, IPv6 and AS number delegations for the Asia-Pacific region: http://ftp.apnic.net/apnic/stats/apnic/delegated-apnic-latest. The format and contents of the file are described in ftp://ftp.apnic.net/pub/apnic/stats/apnic/README.TXT. From this file we can work out how the IPv4 address space under APNIC has been delegated.

Script to extract the IP addresses. Note that for IPv4 records the fifth field is the number of addresses, so the prefix length is 32 - log2(count), while for IPv6 records the fifth field is already the prefix length:

#!/bin/bash
wget -c http://ftp.apnic.net/stats/apnic/delegated-apnic-latest
awk -F '|' '/CN/&&/ipv4/ {print $4 "/" 32-log($5)/log(2)}' delegated-apnic-latest > ipv4.txt
awk -F '|' '/CN/&&/ipv6/ {print $4 "/" $5}' delegated-apnic-latest > ipv6.txt
awk -F '|' '/HK/&&/ipv4/ {print $4 "/" 32-log($5)/log(2)}' delegated-apnic-latest > ipv4-hk.txt
awk -F '|' '/HK/&&/ipv6/ {print $4 "/" $5}' delegated-apnic-latest > ipv6-hk.txt

Run the script:

[root@cby cby]# ./ip.sh
--2021-04-29 12:17:13-- http://ftp.apnic.net/stats/apnic/delegated-apnic-latest
Resolving ftp.apnic.net (ftp.apnic.net)... 203.119.102.40, 2001:dd8:8:701::40
Connecting to ftp.apnic.net (ftp.apnic.net)|203.119.102.40|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3352151 (3.2M) [text/plain]
Saving to: ‘delegated-apnic-latest’
delegated-apnic-latest 100%[=============================================================>] 3.20M 61.3KB/s in 44s
2021-04-29 12:17:58 (74.0 KB/s) - ‘delegated-apnic-latest’ saved [3352151/3352151]
[root@cby cby]# ls
delegated-apnic-latest index.html ip.sh ipv4-hk.txt ipv4.txt ipv6-hk.txt ipv6.txt

The lists are regenerated every day at 00:10; if you need the IP addresses, you can fetch them from http://aliyun.chenby.cn/. The cron jobs:

[root@cby cby]# crontab -l
10 0 * * * /www/server/cron/3ab48c27ec99cb9787749c362afae517 >> /www/server/cron/3ab48c27ec99cb9787749c362afae517.log 2>&1
10 0 * * * rm -rf /www/wwwroot/www.chenby.cn/cby/ipv4.txt /www/wwwroot/www.chenby.cn/cby/ipv4-hk.txt /www/wwwroot/www.chenby.cn/cby/ipv6.txt /www/wwwroot/www.chenby.cn/cby/ipv6-hk.txt /www/wwwroot/www.chenby.cn/cby/delegated-apnic-latest
11 0 * * * /www/wwwroot/www.chenby.cn/cby/ip.sh >> /home/ip.txt
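To make the prefix arithmetic concrete, and as one possible way to consume the generated list, here is a hedged sketch; the ipset name and the firewall policy are assumptions, not something the article sets up:

# Spot-check the CIDR math on the first Chinese IPv4 record:
# a delegation of 256 addresses corresponds to a /24, since 32 - log2(256) = 24.
awk -F'|' '$2=="CN" && $3=="ipv4" {print $4 "/" 32-log($5)/log(2); exit}' delegated-apnic-latest

# Load ipv4.txt into an ipset and match it in iptables (illustrative policy only).
ipset create cn_ipv4 hash:net -exist
while read -r cidr; do ipset add cn_ipv4 "$cidr" -exist; done < ipv4.txt
iptables -I INPUT -m set --match-set cn_ipv4 src -j ACCEPT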
December 30, 2021 · 414 reads · 0 comments · 0 likes
2021-12-30
Huawei Atlas A800-9000 AI Server: Offline Installation of the NPU Driver, CANN, MindSpore, and TensorFlow
Contents
- Install the driver on the A800-9000 physical server: configure a local apt source from the ISO image; create a regular user and set a password; install the driver and firmware; verify the installation
- Deploy the CANN development environment: install the environment and dependencies; check versions after installation; install Python 3.7.5; install the pip dependency packages with Python 3.7.5; install the development toolkit packages
- Deploy the CANN training environment: notes; install the training packages
- Install MindSpore: install the whl packages; configure environment variables; test that it works
- Install MindInsight: install the whl packages; configure environment variables; start and use it
- Install TensorFlow: compile hdf5; configure environment variables and symlinks; install the whl packages
- Install PyTorch

Background

The Atlas 800 training server (model 9000) is an AI training server based on the Huawei Kunpeng 920 and Ascend 910 processors, with very high compute density, energy efficiency and network bandwidth. It is widely used for developing and training deep-learning models in compute-hungry fields such as smart cities, smart healthcare, astronomy and oil exploration. Link: https://e.huawei.com/cn/products/cloud-computing-dc/atlas/atlas-800-training-9000

CANN (Compute Architecture for Neural Networks) is Huawei's heterogeneous computing architecture for AI scenarios; through multi-level programming interfaces it lets users quickly build AI applications and services on the Ascend platform. Link: https://e.huawei.com/cn/products/cloud-computing-dc/atlas/cann

MindSpore is a new-generation open-source AI computing framework. Its programming paradigm makes it easier for AI scientists and engineers to use and encourages open innovation; it covers device, edge and cloud scenarios, protects data privacy better, and is open source with a broad application ecosystem. Link: https://www.mindspore.cn/

TensorFlow was originally developed by the Google Brain team for research and production at Google and was released under the Apache 2.0 open-source license on November 9, 2015. Link: https://www.tensorflow.org/

Install the driver on the A800-9000 physical server

Configure a local apt source from the ISO image:

root@ubuntu:/etc/apt# mkdir /media/cdrom
root@ubuntu:/etc/apt# mount /home/cby/ubuntu-18.04.5-server-arm64.iso /media/cdrom
mount: /media/cdrom: WARNING: device write-protected, mounted read-only.
root@ubuntu:/etc/apt# apt-cdrom -m -d=/media/cdrom/ add
root@ubuntu:/etc/apt# cat /etc/apt/sources.list

Create a regular user and set a password:

root@ubuntu:~# groupadd HwHiAiUser
root@ubuntu:~# useradd -g HwHiAiUser -d /home/HwHiAiUser -m HwHiAiUser
root@ubuntu:~# passwd HwHiAiUser
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully

Install the driver and firmware:

root@ubuntu:~# cd /home/cby/
root@ubuntu:/home/cby# ll
total 98324
drwxr-xr-x 4 cby cby 4096 Apr 21 21:41 ./
drwxr-xr-x 4 root root 4096 Apr 21 21:44 ../
-rw-r--r-- 1 cby cby 99728721 Apr 21 21:41 A800-9000-npu-driver_20.2.0_ubuntu18.04-aarch64.run
-rw-r--r-- 1 cby cby 912335 Apr 21 21:41 A800-9000-npu-firmware_1.76.22.3.220.run
root@ubuntu:/home/cby# chmod +x *.run
root@ubuntu:/home/cby# ll
total 98324
drwxr-xr-x 4 cby cby 4096 Apr 21 21:41 ./
drwxr-xr-x 4 root root 4096 Apr 21 21:44 ../
-rwxr-xr-x 1 cby cby 99728721 Apr 21 21:41 A800-9000-npu-driver_20.2.0_ubuntu18.04-aarch64.run*
-rwxr-xr-x 1 cby cby 912335 Apr 21 21:41 A800-9000-npu-firmware_1.76.22.3.220.run*
root@ubuntu:/home/cby# apt install gcc
root@ubuntu:/home/cby# apt install make
root@ubuntu:/home/cby# ./A800-9000-npu-driver_20.2.0_ubuntu18.04-aarch64.run --run
root@ubuntu:/home/cby# ./A800-9000-npu-firmware_1.76.22.3.220.run --run

Note: the server must be rebooted after the installation completes.

Verify the installation:

root@ubuntu:/home/cby# npu-smi info

Deploy the CANN development environment

Install the environment and dependencies:

root@ubuntu:/home/cby# apt install g++
root@ubuntu:/home/cby# cd cmake/
root@ubuntu:/home/cby/cmake# ll
total 4356
drwxr-xr-x 2 root root 4096 Apr 21 23:48 ./
drwxr-xr-x 7 cby cby 4096 Apr 21 23:48 ../
-rw-r--r-- 1 cby cby 2971248 Apr 21 23:45 cmake_3.10.2-1ubuntu2.18.04.1_arm64.deb
-rw-r--r-- 1 cby cby 1331524 Apr 21 23:45 cmake-data_3.10.2-1ubuntu2.18.04.1_all.deb
-rw-r--r-- 1 cby cby 69166 Apr 21 23:47 libjsoncpp1_1.7.4-3_arm64.deb
-rw-r--r-- 1 cby cby 71788 Apr 21 23:48 librhash0_1.3.6-2_arm64.deb
root@ubuntu:/home/cby/cmake# apt install ./*
root@ubuntu:/home/cby/cmake# make --version
root@ubuntu:/home/cby# apt install ./zlib1g-dev_1%3a1.2.11.dfsg-0ubuntu2_arm64.deb
root@ubuntu:/home/cby# apt install ./libbz2-dev_1.0.6-8.1ubuntu0.2_arm64.deb
root@ubuntu:/home/cby# apt install ./libsqlite3-dev_3.22.0-1ubuntu0.4_arm64.deb
root@ubuntu:/home/cby# cd libssl-dev/
root@ubuntu:/home/cby/libssl-dev# apt install ./*
root@ubuntu:/home/cby# cd libxslt1-dev/
root@ubuntu:/home/cby/libxslt1-dev# ll
total 13596
drwxr-xr-x 2 root root 4096 Apr 22 00:37 ./
drwxr-xr-x 10 cby cby 4096 Apr 22 00:37 ../
-rw-r--r-- 1 cby cby 18528 Apr 22 00:30 gir1.2-harfbuzz-0.0_1.7.2-1ubuntu1_arm64.deb
-rw-r--r-- 1 cby cby 170204 Apr 22 00:27 icu-devtools_60.2-3ubuntu3.1_arm64.deb
-rw-r--r-- 1 cby cby 983364 Apr 22 00:37 libglib2.0-0_2.56.4-0ubuntu0.18.04.8_arm64.deb
-rw-r--r-- 1 cby cby 61832 Apr 22 00:33 libglib2.0-bin_2.56.4-0ubuntu0.18.04.8_arm64.deb
-rw-r--r-- 1 cby cby 1297600 Apr 22 00:31 libglib2.0-dev_2.56.4-0ubuntu0.18.04.8_arm64.deb
-rw-r--r-- 1 cby cby 99676 Apr 22 00:31 libglib2.0-dev-bin_2.56.4-0ubuntu0.18.04.8_arm64.deb
-rw-r--r-- 1 cby cby 14528 Apr 22 00:32 libgraphite2-dev_1.3.11-2_arm64.deb
-rw-r--r-- 1 cby cby 280584 Apr 22 00:28 libharfbuzz-dev_1.7.2-1ubuntu1_arm64.deb
-rw-r--r-- 1 cby cby 12556 Apr 22 00:30 libharfbuzz-gobject0_1.7.2-1ubuntu1_arm64.deb
-rw-r--r-- 1 cby cby 5348 Apr 22 00:29 libharfbuzz-icu0_1.7.2-1ubuntu1_arm64.deb
-rw-r--r-- 1 cby cby 8890124 Apr 22 00:26 libicu-dev_60.2-3ubuntu3.1_arm64.deb
-rw-r--r-- 1 cby cby 14412 Apr 22 00:28 libicu-le-hb0_1.0.3+git161113-4_arm64.deb
-rw-r--r-- 1 cby cby 29760 Apr 22 00:27 libicu-le-hb-dev_1.0.3+git161113-4_arm64.deb
-rw-r--r-- 1 cby cby 18756 Apr 22 00:26 libiculx60_60.2-3ubuntu3.1_arm64.deb
-rw-r--r-- 1 cby cby 120696 Apr 22 00:35 libpcre16-3_2%3a8.39-9_arm64.deb
-rw-r--r-- 1 cby cby 113240 Apr 22 00:35 libpcre32-3_2%3a8.39-9_arm64.deb
-rw-r--r-- 1 cby cby 459316 Apr 22 00:33 libpcre3-dev_2%3a8.39-9_arm64.deb
-rw-r--r-- 1 cby cby 15124 Apr 22 00:35 libpcrecpp0v5_2%3a8.39-9_arm64.deb
-rw-r--r-- 1 cby cby 673384 Apr 22 00:25 libxml2-dev_2.9.4+dfsg1-6.1ubuntu1.3_arm64.deb
-rw-r--r-- 1 cby cby 395564 Apr 22 00:24 libxslt1-dev_1.1.29-5ubuntu0.2_arm64.deb
-rw-r--r-- 1 cby cby 42802 Apr 22 00:33 pkg-config_0.29.1-0ubuntu2_arm64.deb
-rw-r--r-- 1 cby cby 144176 Apr 22 00:37 python3-distutils_3.6.9-1~18.04_all.deb
root@ubuntu:/home/cby/libxslt1-dev# apt install ./*
root@ubuntu:/home/cby# cd libffi-dev/
root@ubuntu:/home/cby/libffi-dev# ls
libffi-dev_3.2.1-8_arm64.deb
root@ubuntu:/home/cby/libffi-dev# apt install ./*
root@ubuntu:/home/cby# apt install unzip
root@ubuntu:/home/cby# apt install ./libblas-dev_3.7.1-4ubuntu1_arm64.deb
root@ubuntu:/home/cby# cd gfortran/
root@ubuntu:/home/cby/gfortran# ll
total 7844
drwxr-xr-x 2 root root 4096 Apr 22 00:50 ./
drwxr-xr-x 12 cby cby 4096 Apr 22 00:50 ../
-rw-r--r-- 1 cby cby 1344 Apr 22 00:48 gfortran_4%3a7.4.0-1ubuntu2.3_arm64.deb
-rw-r--r-- 1 cby cby 7464740 Apr 22 00:48 gfortran-7_7.5.0-3ubuntu1~18.04_arm64.deb
-rw-r--r-- 1 cby cby 248176 Apr 22 00:50 libgfortran4_7.5.0-3ubuntu1~18.04_arm64.deb
-rw-r--r-- 1 cby cby 300500 Apr 22 00:49 libgfortran-7-dev_7.5.0-3ubuntu1~18.04_arm64.deb
root@ubuntu:/home/cby/gfortran# apt install ./*
root@ubuntu:/home/cby# cd libblas3/
root@ubuntu:/home/cby/libblas3# apt install ./libblas3_3.7.1-4ubuntu1_arm64.deb
root@ubuntu:/home/cby# cd libopenblas-dev/
root@ubuntu:/home/cby/libopenblas-dev# ll
total 3412
drwxr-xr-x 2 root root 4096 Apr 22 00:56 ./
drwxr-xr-x 14 cby cby 4096 Apr 22 00:56 ../
-rw-r--r-- 1 cby cby 1813748 Apr 22 00:55 libopenblas-base_0.2.20+ds-4_arm64.deb
-rw-r--r-- 1 cby cby 1668126 Apr 22 00:54 libopenblas-dev_0.2.20+ds-4_arm64.deb
root@ubuntu:/home/cby/libopenblas-dev# apt install ./*

Check versions after installation:

gcc --version
g++ --version
make --version
cmake --version
dpkg -l zlib1g | grep zlib1g | grep ii
dpkg -l zlib1g-dev | grep zlib1g-dev | grep ii
dpkg -l libbz2-dev | grep libbz2-dev | grep ii
dpkg -l libsqlite3-dev | grep libsqlite3-dev | grep ii
dpkg -l openssl | grep openssl | grep ii
dpkg -l libssl-dev | grep libssl-dev | grep ii
dpkg -l libxslt1-dev | grep libxslt1-dev | grep ii
dpkg -l libffi-dev | grep libffi-dev | grep ii
dpkg -l unzip | grep unzip | grep ii
dpkg -l pciutils | grep pciutils | grep ii
dpkg -l net-tools | grep net-tools | grep ii
dpkg -l libblas-dev | grep libblas-dev | grep ii
dpkg -l gfortran | grep gfortran | grep ii
dpkg -l libblas3 | grep libblas3 | grep ii
dpkg -l libopenblas-dev | grep libopenblas-dev | grep ii

Install Python 3.7.5:

root@ubuntu:/home/cby/python# tar xvf Python3.7.5.tar
root@ubuntu:/home/cby/python# cd Python-3.7.5/
root@ubuntu:/home/cby/python/Python-3.7.5# ./configure --prefix=/usr/local/python3.7.5 --enable-loadable-sqlite-extensions --enable-shared
root@ubuntu:/home/cby/python/Python-3.7.5# make
root@ubuntu:/home/cby/python/Python-3.7.5# make install
root@ubuntu:/home/cby# sudo ln -s /usr/local/python3.7.5/bin/pip3 /usr/local/bin/pip3.7.5
root@ubuntu:/home/cby# sudo ln -s /usr/local/python3.7.5/bin/python3 /usr/local/bin/python3.7.5
root@ubuntu:/home/cby/cann_xunlian# sudo ln -s /usr/local/python3.7.5/bin/python3 /usr/local/bin/python3.7
root@ubuntu:/home/cby/cann_xunlian# sudo ln -s /usr/local/python3.7.5/bin/pip3 /usr/local/bin/pip3.7
root@ubuntu:/home/cby# vim ~/.bashrc
export LD_LIBRARY_PATH=/usr/local/python3.7.5/lib:$LD_LIBRARY_PATH
root@ubuntu:/home/cby# python3.7.5 --version
Python 3.7.5
root@ubuntu:/home/cby# pip3.7.5 --version
pip 19.2.3 from /usr/local/python3.7.5/lib/python3.7/site-packages/pip (python 3.7)

Install the pip dependency packages with Python 3.7.5:

root@ubuntu:/home/cby/pip-pack# tar xvf pip_pack.tar
root@ubuntu:/home/cby/pip-pack/pip_pack# pip3.7.5 install ./attrs-20.3.0-py2.py3-none-any.whl
root@ubuntu:/home/cby/pip-pack/pip_pack# pip3.7.5 install ./numpy-1.17.2-cp37-cp37m-linux_aarch64.whl
root@ubuntu:/home/cby/pip-pack/pip_pack# pip3.7.5 install ./decorator-5.0.6-py3-none-any.whl
root@ubuntu:/home/cby/pip-pack/pip_pack# pip3.7.5 install ./mpmath-1.2.1-py3-none-any.whl
root@ubuntu:/home/cby/pip-pack/pip_pack# pip3.7.5 install ./sympy-1.4-py2.py3-none-any.whl
root@ubuntu:/home/cby/pip-pack/pip_pack# pip3.7.5 install ./pycparser-2.20-py2.py3-none-any.whl
root@ubuntu:/home/cby/pip-pack/pip_pack# pip3.7.5 install ./cffi-1.12.3.tar.gz
root@ubuntu:/home/cby/pip-pack/pip_pack# pip3.7.5 install ./PyYAML-5.3.1.tar.gz
root@ubuntu:/home/cby/pip-pack/pip_pack# pip3.7.5 install ./six-1.15.0-py2.py3-none-any.whl
root@ubuntu:/home/cby/pip-pack/pip_pack# pip3.7.5 install ./pathlib2-2.3.5-py2.py3-none-any.whl
root@ubuntu:/home/cby/pip-pack/pip_pack# pip3.7.5 install ./psutil-5.8.0.tar.gz
root@ubuntu:/home/cby/pip-pack/pip_pack# pip3.7.5 install ./protobuf-3.15.8-py2.py3-none-any.whl
root@ubuntu:/home/cby/pip-pack/pip_pack# pip3.7.5 install ./scipy-1.6.0-cp37-cp37m-linux_aarch64.whl
root@ubuntu:/home/cby/pip-pack/pip_pack# pip3.7.5 install ./chardet-3.0.4-py2.py3-none-any.whl
root@ubuntu:/home/cby/pip-pack/pip_pack# pip3.7.5 install ./idna-2.10-py2.py3-none-any.whl
root@ubuntu:/home/cby/pip-pack/pip_pack# pip3.7.5 install ./urllib3-1.25.10-py2.py3-none-any.whl
root@ubuntu:/home/cby/pip-pack/pip_pack# pip3.7.5 install ./certifi-2020.6.20-py2.py3-none-any.whl
root@ubuntu:/home/cby/pip-pack/pip_pack# pip3.7.5 install ./requests-2.24.0-py2.py3-none-any.whl
root@ubuntu:/home/cby/pip-pack/pip_pack# pip3.7.5 install ./xlrd-1.2.0-py2.py3-none-any.whl
Note: the pip packages above must be installed in exactly this order.

Install the development toolkit packages:

root@ubuntu:/home/cby/cann# ./Ascend-cann-tfplugin_20.2.rc1_linux-aarch64.run --install
root@ubuntu:/home/cby/cann# ./Ascend-cann-toolkit_20.2.rc1_linux-aarch64.run --install

"install success" in the output indicates a successful installation.

Deploy the CANN training environment

Notes: Python 3.7.5, the base environment and the dependencies are installed for the training environment in exactly the same way as for the development environment; refer to the CANN development environment section above. On a machine where the development environment is already set up, only the training package and the toolbox package still need to be installed.

Install the training packages:

root@ubuntu:/home/cby/cann_xunlian# chmod +x ./*.run
root@ubuntu:/home/cby/cann_xunlian# ./Ascend-cann-nnae_20.2.rc1_linux-aarch64.run --install
root@ubuntu:/home/cby/cann_xunlian# ./Ascend-cann-toolbox_20.2.rc1_linux-aarch64.run --install

"install success" in the output indicates a successful installation.

Install MindSpore

Install the whl packages. These whl packages are shipped with the Ascend 910 AI processor software packages and are released together with them; they must be reinstalled after the processor packages are upgraded.

root@ubuntu:/home/cby/mindspore_ascend# pip3.7.5 install /usr/local/Ascend/ascend-toolkit/latest/fwkacllib/lib64/hccl-0.1.0-py3-none-any.whl
root@ubuntu:/home/cby/mindspore_ascend# pip3.7.5 install /usr/local/Ascend/ascend-toolkit/latest/fwkacllib/lib64/te-0.4.0-py3-none-any.whl
root@ubuntu:/home/cby/mindspore_ascend# pip3.7.5 install /usr/local/Ascend/ascend-toolkit/latest/fwkacllib/lib64/topi-0.4.0-py3-none-any.whl
root@ubuntu:/home/cby/mindspore_ascend/pip# pip3.7.5 install easydict-1.9.tar.gz
root@ubuntu:/home/cby/mindspore_ascend/pip# pip3.7.5 install ./wheel-0.36.2-py2.py3-none-any.whl
root@ubuntu:/home/cby/mindspore_ascend/pip# pip3.7.5 install ./astunparse-1.6.3-py2.py3-none-any.whl
root@ubuntu:/home/cby/mindspore_ascend/pip# pip3.7.5 install ./Pillow-8.2.0-cp37-cp37m-linux_aarch64.whl
root@ubuntu:/home/cby/mindspore_ascend/pip# pip3.7.5 install ./asttokens-2.0.4-py2.py3-none-any.whl
root@ubuntu:/home/cby/mindspore_ascend/pip# pip3.7.5 install ./cffi-1.14.5-cp37-cp37m-linux_aarch64.whl
root@ubuntu:/home/cby/mindspore_ascend/pip# pip3.7.5 install ./pyparsing-2.4.7-py2.py3-none-any.whl
root@ubuntu:/home/cby/mindspore_ascend/pip# pip3.7.5 install ./packaging-20.9-py2.py3-none-any.whl
root@ubuntu:/home/cby/mindspore_ascend/pip# pip3.7.5 install ../mindspore_ascend-1.1.1-cp37-cp37m-linux_aarch64.whl

Note: the packages must be installed in this order.

Configure environment variables:

# control log level. 0-DEBUG, 1-INFO, 2-WARNING, 3-ERROR, default level is WARNING.
export GLOG_v=2
# Conda environmental options
LOCAL_ASCEND=/usr/local/Ascend # the root directory of run package
# lib libraries that the run package depends on
export LD_LIBRARY_PATH=${LOCAL_ASCEND}/add-ons/:${LOCAL_ASCEND}/ascend-toolkit/latest/fwkacllib/lib64:${LOCAL_ASCEND}/driver/lib64:${LOCAL_ASCEND}/opp/op_impl/built-in/ai_core/tbe/op_tiling:${LD_LIBRARY_PATH}
# Environment variables that must be configured
export TBE_IMPL_PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/opp/op_impl/built-in/ai_core/tbe # TBE operator implementation tool path
export ASCEND_OPP_PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/opp # OPP path
export PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/fwkacllib/ccec_compiler/bin/:${PATH} # TBE operator compilation tool path
export PYTHONPATH=${TBE_IMPL_PATH}:${PYTHONPATH} # Python library that TBE implementation depends on

Test that it works. Python code:

import numpy as np
from mindspore import Tensor
import mindspore.ops as ops
import mindspore.context as context

context.set_context(device_target="Ascend")
x = Tensor(np.ones([1,3,3,4]).astype(np.float32))
y = Tensor(np.ones([1,3,3,4]).astype(np.float32))
print(ops.tensor_add(x, y))

If you get the following output, installation and deployment are complete:

[[[[2. 2. 2. 2.]
   [2. 2. 2. 2.]
   [2. 2. 2. 2.]]
  [[2. 2. 2. 2.]
   [2. 2. 2. 2.]
   [2. 2. 2. 2.]]
  [[2. 2. 2. 2.]
   [2. 2. 2. 2.]
   [2. 2. 2. 2.]]]]

Install MindInsight

Install the whl packages:

root@ubuntu:/home/cby/mindinsight/Mindinsight# pip3.7.5 install ./itsdangerous-1.1.0-py2.py3-none-any.whl
root@ubuntu:/home/cby/mindinsight/Mindinsight# pip3.7.5 install ./Werkzeug-1.0.1-py2.py3-none-any.whl
root@ubuntu:/home/cby/mindinsight/Mindinsight# pip3.7.5 install ./MarkupSafe-1.1.1-cp37-cp37m-linux_aarch64.whl
root@ubuntu:/home/cby/mindinsight/Mindinsight# pip3.7.5 install ./Jinja2-2.11.3-py2.py3-none-any.whl
root@ubuntu:/home/cby/mindinsight/Mindinsight# pip3.7.5 install ./click-7.1.2-py2.py3-none-any.whl
root@ubuntu:/home/cby/mindinsight/Mindinsight# pip3.7.5 install ./Flask-1.1.2-py2.py3-none-any.whl
root@ubuntu:/home/cby/mindinsight/Mindinsight# pip3.7.5 install ./Flask_Cors-3.0.10-py2.py3-none-any.whl
root@ubuntu:/home/cby/mindinsight/Mindinsight# pip3.7.5 install ./yapf-0.31.0-py2.py3-none-any.whl
root@ubuntu:/home/cby/mindinsight/Mindinsight# pip3.7.5 install ./future-0.18.2.tar.gz
root@ubuntu:/home/cby/mindinsight/Mindinsight# pip3.7.5 install ./treelib-1.6.1.tar.gz
root@ubuntu:/home/cby/mindinsight/Mindinsight# pip3.7.5 install ./grpcio-1.37.0-cp37-cp37m-linux_aarch64.whl
root@ubuntu:/home/cby/mindinsight/Mindinsight# pip3.7.5 install ./google_pasta-0.2.0-py3-none-any.whl
root@ubuntu:/home/cby/mindinsight/Mindinsight# pip3.7.5 install ./pytz-2021.1-py2.py3-none-any.whl
root@ubuntu:/home/cby/mindinsight/Mindinsight# pip3.7.5 install ./python_dateutil-2.8.1-py2.py3-none-any.whl
root@ubuntu:/home/cby/mindinsight/Mindinsight# pip3.7.5 install ./pandas-1.2.3-cp37-cp37m-linux_aarch64.whl
root@ubuntu:/home/cby/mindinsight/Mindinsight# pip3.7.5 install ./gunicorn-20.1.0.tar.gz
root@ubuntu:/home/cby/mindinsight/Mindinsight# pip3.7.5 install ./marshmallow-3.11.1-py2.py3-none-any.whl
root@ubuntu:/home/cby/mindinsight/Mindinsight# pip3.7.5 install ./threadpoolctl-2.1.0-py3-none-any.whl
root@ubuntu:/home/cby/mindinsight/Mindinsight# pip3.7.5 install ./joblib-1.0.1-py3-none-any.whl
root@ubuntu:/home/cby/mindinsight/Mindinsight# pip3.7.5 install ./scikit_learn-0.24.1-cp37-cp37m-linux_aarch64.whl
root@ubuntu:/home/cby/mindinsight/Mindinsight# pip3.7.5 install ./mindinsight-1.1.1-cp37-cp37m-linux_aarch64.whl

Note: the packages must be installed in this order.

Configure environment variables. Add the following to the profile:

PATH=$PATH:/usr/local/python3.7.5/bin/
root@ubuntu:/home/cby# source /etc/profile

Start and use it:

root@ubuntu:/home/cby# mindinsight start
Workspace: /root/mindinsight
Web address: http://127.0.0.1:8080
service start state: success

Once this message appears, the visualisation service has started successfully. If you need to reach it from another machine, reverse-proxy it onto 0.0.0.0, for example with a tool such as frp.

In the directory of a finished training run, the following command starts MindInsight and displays that directory's training data; the debugger parameter accepts true or false:

mindinsight start --summary-base-dir . --port 8080 --enable-debugger True --debugger-port 50051

Start a training run with:

root@ubuntu:/home/cby/lenet/lenet# python3.7.5 lenet.py --device_target=Ascend

Install TensorFlow

Compile hdf5:

root@ubuntu:/home/cby/Tensorflow/Tensorflow# cd hdf5-1.10.5/
root@ubuntu:/home/cby/Tensorflow/Tensorflow/hdf5-1.10.5# ./configure --prefix=/usr/include/hdf5
root@ubuntu:/home/cby/Tensorflow/Tensorflow/hdf5-1.10.5# make
root@ubuntu:/home/cby/Tensorflow/Tensorflow/hdf5-1.10.5# make install

Configure environment variables and symlinks:

export CPATH="/usr/include/hdf5/include/:/usr/include/hdf5/lib/"
root@ubuntu:/home/cby/Tensorflow/Tensorflow/hdf5-1.10.5# ln -s /usr/include/hdf5/lib/libhdf5.so /usr/lib/libhdf5.so
root@ubuntu:/home/cby/Tensorflow/Tensorflow/hdf5-1.10.5# ln -s /usr/include/hdf5/lib/libhdf5_hl.so /usr/lib/libhdf5_hl.so

Install the whl packages:

root@ubuntu:/home/cby/Tensorflow/Tensorflow# pip3.7.5 install ./Cython-0.29.21-py2.py3-none-any.whl
root@ubuntu:/home/cby/Tensorflow/Tensorflow# pip3.7.5 install ./h5py-2.10.0-cp37-cp37m-linux_aarch64.whl
root@ubuntu:/home/cby/Tensorflow/Tensorflow# pip3.7.5 install ./grpcio-1.30.0.tar.gz
root@ubuntu:/home/cby/Tensorflow/Tensorflow# pip3.7.5 install ./gast-0.2.2.tar.gz
root@ubuntu:/home/cby/Tensorflow/Tensorflow# pip3.7.5 install ./opt_einsum-3.3.0-py3-none-any.whl
root@ubuntu:/home/cby/Tensorflow/Tensorflow# pip3.7.5 install ./Keras_Applications-1.0.8-py3-none-any.whl
root@ubuntu:/home/cby/Tensorflow/Tensorflow# pip3.7.5 install ./Keras_Preprocessing-1.1.2-py2.py3-none-any.whl
root@ubuntu:/home/cby/Tensorflow/Tensorflow# pip3.7.5 install ./astor-0.8.1-py2.py3-none-any.whl
root@ubuntu:/home/cby/Tensorflow/Tensorflow# pip3.7.5 install ./typing_extensions-3.7.4.3-py3-none-any.whl
root@ubuntu:/home/cby/Tensorflow/Tensorflow# pip3.7.5 install ./zipp-3.4.1-py3-none-any.whl
root@ubuntu:/home/cby/Tensorflow/Tensorflow# pip3.7.5 install ./importlib_metadata-3.10.1-py3-none-any.whl
root@ubuntu:/home/cby/Tensorflow/Tensorflow# pip3.7.5 install ./Markdown-3.2.2-py3-none-any.whl
root@ubuntu:/home/cby/Tensorflow/Tensorflow# pip3.7.5 install ./tensorboard-1.15.0-py3-none-any.whl
root@ubuntu:/home/cby/Tensorflow/Tensorflow# pip3.7.5 install ./wrapt-1.12.1.tar.gz
root@ubuntu:/home/cby/Tensorflow/Tensorflow# pip3.7.5 install ./tensorflow_estimator-1.15.1-py2.py3-none-any.whl
root@ubuntu:/home/cby/Tensorflow/Tensorflow# pip3.7.5 install ./termcolor-1.1.0.tar.gz
root@ubuntu:/home/cby/Tensorflow/Tensorflow# pip3.7.5 install ./tensorflow-1.15.0-cp37-cp37m-linux_aarch64.whl

Note: the packages must be installed in this order.

Install PyTorch:

root@ubuntu:/home/cby/pytorch/Pytorch# pip3.7.5 install ./apex-0.1+ascend-cp37-cp37m-linux_aarch64.whl
root@ubuntu:/home/cby/pytorch/Pytorch# pip3.7.5 install ./torch-1.5.0+ascend.post2-cp37-cp37m-linux_aarch64.whl
root@ubuntu:/home/cby/pytorch/Pytorch# pip3.7.5 install ./future-0.18.2.tar.gz

All of the software packages used in this article can be obtained by following the WeChat official account and replying "ai".
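After everything above is in place, a few hedged sanity checks (command and package names as used in this article; expected versions are only what the installed wheels suggest) can confirm the stack is usable before launching a real training job:

npu-smi info   # driver and firmware check: the NPUs should be listed
python3.7.5 -c "import mindspore; print(mindspore.__version__)"      # installed wheel was 1.1.1
python3.7.5 -c "import tensorflow as tf; print(tf.__version__)"      # installed wheel was 1.15.0
python3.7.5 -c "import torch; print(torch.__version__)"              # installed wheel was the 1.5.0+ascend build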
December 30, 2021 · 615 reads · 0 comments · 0 likes