Found 208 posts matching "cby"
2021-12-30
HAProxy Installation and Configuration
## About HAProxy

HAProxy is free load-balancing software that runs on most mainstream Linux distributions. It provides both L4 (TCP) and L7 (HTTP) load balancing with a rich feature set, and its community is very active, with a fast release cadence. Most importantly, HAProxy offers performance and stability comparable to commercial load balancers.

### Core features

- Load balancing: L4 and L7 modes, with a rich set of algorithms including RR, static RR, LC, IP hash, URI hash, URL_PARAM hash, and HTTP_HEADER hash
- Health checks: TCP and HTTP modes
- Session persistence: for application clusters without shared sessions, via Insert Cookie / Rewrite Cookie / Prefix Cookie, or any of the hash methods above
- SSL: HAProxy can terminate HTTPS and forward the decrypted HTTP requests to the backends
- HTTP request rewriting and redirection
- Monitoring and statistics: a web-based statistics page showing health status and traffic data, on top of which users can build monitoring programs that watch HAProxy's state

### Key characteristics

Performance:

1. A single-threaded, event-driven, non-blocking model minimizes context-switch overhead; HAProxy can handle hundreds of requests within 1 ms, and each session occupies only a few KB of memory.
2. Many fine-grained optimizations, such as an O(1) event checker, lazy updates, single buffering, and zero-copy forwarding, keep CPU usage very low under moderate load.
3. HAProxy leans heavily on operating-system features; typically only about 15% of processing time is spent in HAProxy itself, with the remaining 85% in the kernel.
4. In a test the author ran in 2009 on version 1.4, a single HAProxy process exceeded 100,000 requests per second and easily saturated a 10 Gbps link.

Stability: as noted above, most of HAProxy's work happens in the kernel, so its stability depends mainly on the operating system. The author recommends a 2.6 or 3.x Linux kernel, carefully tuned sysctl parameters, and enough memory on the host; configured this way, HAProxy can run fully loaded and stable for years.

### Set the hostname

```
root@hello:~# hostnamectl set-hostname haproxy
root@hello:~# bash
root@haproxy:~#
```

### Install HAProxy

```
root@haproxy:~# apt-get install haproxy
root@haproxy:~# cp /etc/haproxy/haproxy.cfg{,.ori}
root@haproxy:~# vim /etc/haproxy/haproxy.cfg
```

### Configuration file

```
root@haproxy:~# cat /etc/haproxy/haproxy.cfg
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
    stats timeout 30s
    user haproxy
    group haproxy
    daemon
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private
    ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
    ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
    ssl-default-bind-options no-sslv3 no-tlsv10 no-tlsv11 no-tls-tickets

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    timeout connect 5000
    timeout client  50000
    timeout server  50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

frontend LOADBALANCER-01
    bind 0.0.0.0:80
    mode http
    default_backend WEBSERVERS-01

backend WEBSERVERS-01
    balance roundrobin
    server node1 192.168.1.10:9200 check inter 2000 rise 3 fall 3 weight 1 maxconn 2000
    server node2 192.168.1.11:9200 check inter 2000 rise 3 fall 3 weight 1 maxconn 2000
    server node3 192.168.1.12:9200 check inter 2000 rise 3 fall 3 weight 1 maxconn 2000
    server node4 192.168.1.13:9200 check inter 2000 rise 3 fall 3 weight 1 maxconn 2000
    server node5 192.168.1.14:9200 check inter 2000 rise 3 fall 3 weight 1 maxconn 2000
    server node6 192.168.1.15:9200 check inter 2000 rise 3 fall 3 weight 1 maxconn 2000
    server node7 192.168.1.16:9200 check inter 2000 rise 3 fall 3 weight 1 maxconn 2000 backup
    option httpchk
```

### Start the service

```
root@haproxy:~# systemctl start haproxy
```

### Enable it at boot

```
root@haproxy:~# systemctl enable haproxy
Synchronizing state of haproxy.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable haproxy
```
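The `balance roundrobin` directive above rotates requests across healthy servers in proportion to each server's `weight`. As a rough illustration of that idea (not HAProxy's actual scheduler, which is considerably more elaborate), here is a toy smooth weighted round-robin picker in Python:

```python
class WeightedRoundRobin:
    """Toy smooth weighted round-robin scheduler, illustrative only."""

    def __init__(self, servers):
        # servers: list of (name, weight) pairs, like the `server ... weight N` lines
        self.servers = [{"name": n, "weight": w, "current": 0} for n, w in servers]

    def pick(self):
        total = sum(s["weight"] for s in self.servers)
        # each server accumulates credit proportional to its weight...
        for s in self.servers:
            s["current"] += s["weight"]
        # ...the server with the most credit wins and pays back the total
        best = max(self.servers, key=lambda s: s["current"])
        best["current"] -= total
        return best["name"]

# hypothetical pool: node3 carries twice the weight of the others
rr = WeightedRoundRobin([("node1", 1), ("node2", 1), ("node3", 2)])
picks = [rr.pick() for _ in range(4)]
```

Over any window of four picks, `node3` is selected twice and the other nodes once each, with the selections spread out rather than bunched together.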
2021-12-30
WebVirtMgr: a Web-Based KVM Management Tool
## 1. WebVirtMgr overview and environment

Note: the KVM steps must be performed on both machines, since both are meant to serve as hypervisor hosts and therefore both need the KVM components.

GitHub: https://github.com/retspen/webvirtmgr

WebVirtMgr is a libvirt-based web interface for managing virtual machines. It lets you create and configure new domains and adjust a domain's resource allocation. A VNC viewer provides a full graphical console to guest domains. KVM is currently the only supported hypervisor.

Check the OS release:

```
[root@webc ~]# cat /etc/redhat-release
CentOS Linux release 7.9.2009 (Core)
```

Kernel version:

```
[root@webc ~]# uname -r
3.10.0-1160.42.2.el7.x86_64
```

Disable SELinux and the firewall:

```
[root@webc ~]# systemctl stop firewalld
[root@webc ~]# systemctl disable firewalld
[root@webc ~]# setenforce 0
setenforce: SELinux is disabled
[root@webc ~]# sed -i '/SELINUX/s/enforcing/disabled/' /etc/selinux/config
```

Update the packages and install the EPEL repository:

```
[root@webc ~]# yum update
[root@webc ~]# yum install epel*
```

Check the Python version:

```
[root@webc ~]# python -V
Python 2.7.5
```

Check that the KVM modules are loaded:

```
[root@webc ~]# lsmod | grep kvm
kvm_intel             188740  0
kvm                   637515  1 kvm_intel
irqbypass              13503  1 kvm
[root@webc ~]# modprobe -a kvm
[root@webc ~]# modprobe -a kvm_intel
```

Set up passwordless SSH:

```
[root@webc ~]# ssh-keygen
[root@webc ~]# ssh-copy-id -i .ssh/id_rsa.pub root@192.168.1.104
```

## 2. Install KVM

Install the KVM dependencies and management tools. KVM itself lives in the kernel and needs no installation, but a few management tool packages are required:

```
[root@webc ~]# yum install qemu-img qemu-kvm qemu-kvm-tools virt-manager virt-viewer virt-v2v virt-top libvirt libvirt-Python libvirt-client python-virtinst bridge-utils tunctl
[root@webc ~]# yum install -y virt-install
[root@webc ~]# systemctl start libvirtd.service
[root@webc ~]# systemctl enable libvirtd.service
[root@webc ~]# cd cby/kvm/
[root@webc kvm]# git clone https://github.com/palli/python-virtinst.git
[root@webc kvm]# cd python-virtinst/
[root@webc python-virtinst]# python setup.py install
[root@webc python-virtinst]# virt-install
[root@webc python-virtinst]# yum install bridge-utils
[root@webc python-virtinst]# vim /etc/sysconfig/network-scripts/ifcfg-br0
```

Bridge configuration:

```
[root@webc python-virtinst]# cat /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=192.168.1.49
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DNS1=192.168.1.1
```

```
[root@webc python-virtinst]# brctl show
bridge name       bridge id           STP enabled   interfaces
br-0d093958d245   8000.0242d5824d14   no
br-2e2d3c481379   8000.0242884030e2   no
br-36a6ad3375a8   8000.0242d7d7f1ef   no
br-66a9675a6dd5   8000.024248a61c72   no
br-b7daf4844ff7   8000.024263dd4715   no
br-deba197eb09e   8000.0242b290e104   no
br0               8000.000000000000   no
docker0           8000.0242858c017c   no            vethe14f7ac
docker_gwbridge   8000.0242588c6db0   no
virbr0            8000.5254009ba65a   yes           virbr0-nic
[root@webc python-virtinst]# ln -s /usr/libexec/qemu-kvm /usr/sbin/
```

## 3. Install WebVirtMgr

Install pip, git, supervisor, and the build dependencies. WebVirtMgr is installed on the management host only:

```
[root@webc ~]# yum -y install git python-pip libvirt-python libxml2-python python-websockify supervisor gcc python-devel
```

Install the NumPy extension library with pip:

```
[root@webc ~]# pip install numpy
```

Clone, configure, and run WebVirtMgr:

```
[root@webc ~]# cd cby/
[root@webc cby]# mkdir kvm
[root@webc cby]# cd kvm
[root@webc kvm]# pwd
/root/cby/kvm
[root@webc kvm]# git clone git://github.com/retspen/webvirtmgr.git
Cloning into 'webvirtmgr'...
remote: Enumerating objects: 5614, done.
remote: Total 5614 (delta 0), reused 0 (delta 0), pack-reused 5614
Receiving objects: 100% (5614/5614), 2.97 MiB | 748.00 KiB/s, done.
Resolving deltas: 100% (3606/3606), done.
[root@webc kvm]# cd webvirtmgr
[root@webc webvirtmgr]# pip install -r requirements.txt
# Initialize the environment
[root@webc webvirtmgr]# ./manage.py syncdb
# Collect the Django static files
[root@webc webvirtmgr]# ./manage.py collectstatic
```

Start WebVirtMgr in the foreground to verify it: by default it runs in debug mode with the log printed to the terminal. Log in with the username and password created during `syncdb`.

Download nginx:

```
[root@webc webvirtmgr]# cd ..
[root@webc kvm]# ls
webvirtmgr
[root@webc kvm]# mkdir nginx
[root@webc kvm]# cd nginx
[root@webc nginx]# wget https://nginx.org/download/nginx-1.20.1.tar.gz
[root@webc nginx]# tar xf nginx-1.20.1.tar.gz
[root@webc nginx]# cd nginx-1.20.1/
```

Edit the nginx configuration:

```
[root@webc conf]# vim nginx.conf
[root@webc conf]# cat nginx.conf
user root;
worker_processes 1;
events {
    worker_connections 1024;
}
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile      on;
    keepalive_timeout 65;
    server {
        listen      90;
        server_name 192.168.1.104;
        #charset koi8-r;
        #access_log logs/host.access.log main;
        location / {
            #root  html;
            #index index.html index.htm;
            proxy_pass http://127.0.0.1:8000;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-for $proxy_add_x_forwarded_for;
            proxy_set_header Host $host:$server_port;
            proxy_set_header X-Forwarded-Proto $remote_addr;
            proxy_connect_timeout 600;
            proxy_read_timeout 600;
            proxy_send_timeout 600;
            client_max_body_size 5120M;
        }
        location /static/ {
            root /root/cby/kvm/webvirtmgr;
            expires max;
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}
```

Build and install nginx:

```
[root@webc nginx-1.20.1]# yum install -y gcc glibc gcc-c++ openssl-devel pcre-devel
[root@webc nginx-1.20.1]# useradd -s /sbin/nologin nginx -M
[root@webc nginx-1.20.1]# ./configure --prefix=/root/cby/kvm/nginx/ --user=nginx --group=nginx --with-http_ssl_module --with-http_stub_status_module
[root@webc nginx-1.20.1]# make && make install
```

Start nginx:

```
[root@webc nginx-1.20.1]# cd /root/cby/kvm/nginx/sbin/
[root@webc sbin]# /root/cby/kvm/nginx/sbin/nginx -t
nginx: the configuration file /root/cby/kvm/nginx//conf/nginx.conf syntax is ok
nginx: configuration file /root/cby/kvm/nginx//conf/nginx.conf test is successful
[root@webc sbin]# /root/cby/kvm/nginx/sbin/nginx
```

Manage the services with supervisor:

```
[root@webc sbin]# cat > /etc/supervisord.d/webvirtmgr.ini << EOF
[program:webvirtmgr]
command=/usr/bin/python /root/cby/kvm/webvirtmgr/manage.py run_gunicorn -c /root/cby/kvm/webvirtmgr/conf/gunicorn.conf.py
directory=/root/cby/kvm/webvirtmgr
autostart=true
autorestart=true
logfile=/var/log/supervisor/webvirtmgr.log
log_stderr=true
user=root

[program:webvirtmgr-console]
command=/usr/bin/python /root/cby/kvm/webvirtmgr/console/webvirtmgr-console
directory=/root/cby/kvm/webvirtmgr
autostart=true
autorestart=true
stdout_logfile=/var/log/supervisor/webvirtmgr-console.log
redirect_stderr=true
user=root
EOF
```

Start supervisor:

```
[root@webc webvirtmgr]# systemctl daemon-reload
[root@webc webvirtmgr]# systemctl stop supervisord
[root@webc webvirtmgr]# systemctl start supervisord
```

Check that both programs started:

```
[root@webc webvirtmgr]# supervisorctl status
webvirtmgr           RUNNING   pid 23783, uptime 0:00:11
webvirtmgr-console   RUNNING   pid 23782, uptime 0:00:11
```

## 4. Configure WebVirtMgr in the web UI

### 4.1 Add a host and set up storage

1. Add Connection: add the hypervisor (i.e., the KVM host)
2. Choose the SSH connection type
3. Label is the hostname; passwordless SSH must be configured for that hostname
4. IP is the hypervisor's IP address
5. Username is the server login user
6. Click Add
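WebVirtMgr drives libvirt, and libvirt describes every VM as a domain XML document. As a rough illustration of what such a document looks like (the names, sizes, and paths below are made up, and real domains need many more devices), a minimal definition can be assembled with Python's standard library:

```python
import xml.etree.ElementTree as ET

def build_domain_xml(name, memory_mib, vcpus, disk_path):
    """Assemble a minimal KVM domain definition of the kind libvirt
    consumes. Illustrative sketch only, not WebVirtMgr's code."""
    dom = ET.Element("domain", type="kvm")
    ET.SubElement(dom, "name").text = name
    ET.SubElement(dom, "memory", unit="MiB").text = str(memory_mib)
    ET.SubElement(dom, "vcpu").text = str(vcpus)
    os_el = ET.SubElement(dom, "os")
    ET.SubElement(os_el, "type", arch="x86_64").text = "hvm"
    devices = ET.SubElement(dom, "devices")
    disk = ET.SubElement(devices, "disk", type="file", device="disk")
    ET.SubElement(disk, "source", file=disk_path)
    ET.SubElement(disk, "target", dev="vda", bus="virtio")
    # a VNC graphics device -- this is what backs WebVirtMgr's web console
    ET.SubElement(devices, "graphics", type="vnc", autoport="yes")
    return ET.tostring(dom, encoding="unicode")

xml = build_domain_xml("guest01", 2048, 2, "/var/lib/libvirt/images/guest01.qcow2")
```

A document like this would be handed to libvirt (e.g., `virsh define`) to create the domain; WebVirtMgr generates and submits the equivalent XML for you through the libvirt API.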
2021-12-30
Setting Up Hadoop 2.7.2, Hive 2.3.3, and Spark 3.1.2
## About Hadoop

Hadoop is an Apache open-source framework written in Java that allows distributed processing of large datasets across clusters of computers using simple programming models. Applications built on Hadoop run in an environment that provides distributed storage and computation across the cluster, and Hadoop is designed to scale from a single server to thousands of machines, each offering local compute and storage.

## About Hive

Apache Hive is a data warehouse built on top of Hadoop. It maps structured data files to database tables and offers simple SQL querying by translating SQL statements into MapReduce jobs. Note that Hive is not itself a database. Hive depends on HDFS and MapReduce; its SQL-like query language, HQL, provides a rich way to analyze data stored in HDFS. HQL compiles into MapReduce jobs that query, summarize, and analyze data, so even users unfamiliar with MapReduce can conveniently work in SQL, while MapReduce developers can plug in their own mappers and reducers for more complex analysis.

## About Apache Spark

Apache Spark is a distributed open-source processing system for big-data workloads. It uses in-memory caching and optimized query execution for fast analytic queries against data of any size. It offers development APIs in Java, Scala, Python, and R, and supports code reuse across multiple workloads: batch processing, interactive queries, real-time analytics, machine learning, and graph processing.

This article first sets up the JDK 1.8 + MySQL 5.7 base environment, then Hadoop 2.7.2, Hive 2.3.3, and Spark 3.1.2, all on a single machine.

### 1. Create a directory and unpack the JDK

```
[root@localhost ~]# mkdir jdk
[root@localhost ~]# cd jdk/
[root@localhost jdk]# ls
jdk-8u202-linux-x64.tar.gz
[root@localhost jdk]# tar xvf jdk-8u202-linux-x64.tar.gz
```

### 2. Configure environment variables

```
[root@localhost ~]# vim /etc/profile
[root@localhost ~]# tail -n 3 /etc/profile
export JAVA_HOME=/root/jdk/jdk1.8.0_202/
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
[root@localhost ~]# source /etc/profile
```

### 3. Download and install MySQL, and enable it at boot

```
[root@localhost ~]# mkdir mysql
[root@localhost ~]# cd mysql
[root@localhost mysql]# wget https://downloads.mysql.com/archives/get/p/23/file/mysql-5.7.35-1.el7.x86_64.rpm-bundle.tar
[root@localhost mysql]# tar xvf mysql-5.7.35-1.el7.x86_64.rpm-bundle.tar
[root@localhost mysql]# yum install ./*.rpm
[root@localhost mysql]# systemctl start mysqld.service
[root@localhost mysql]# systemctl enable mysqld.service
```

### 4. Look up the default MySQL password, change it, and create a new user that may log in remotely

```
[root@localhost mysql]# sudo grep 'temporary password' /var/log/mysqld.log
2021-10-18T06:12:35.519726Z 6 [Note] [MY-010454] [Server] A temporary password is generated for root@localhost: eNHu<sXHt3rq
[root@localhost mysql]# mysql -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.

mysql> ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'Cby123..';
Query OK, 0 rows affected (0.02 sec)
mysql> use mysql;
Database changed
mysql> update user set host='%' where user ='root';
Query OK, 1 row affected (0.01 sec)
Rows matched: 1  Changed: 1  Warnings: 0
mysql> set global validate_password_policy=0;
mysql> set global validate_password_mixed_case_count=0;
mysql> set global validate_password_number_count=3;
mysql> set global validate_password_special_char_count=0;
mysql> set global validate_password_length=3;
mysql> SHOW VARIABLES LIKE 'validate_password%';
+--------------------------------------+-------+
| Variable_name                        | Value |
+--------------------------------------+-------+
| validate_password_check_user_name    | OFF   |
| validate_password_dictionary_file    |       |
| validate_password_length             | 3     |
| validate_password_mixed_case_count   | 0     |
| validate_password_number_count       | 3     |
| validate_password_policy             | LOW   |
| validate_password_special_char_count | 0     |
+--------------------------------------+-------+
7 rows in set (0.00 sec)
mysql> create user 'cby'@'%' identified by 'cby';
mysql> grant all on *.* to 'cby'@'%';
mysql> FLUSH PRIVILEGES;
mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION;
mysql> CREATE DATABASE dss_dev;
Query OK, 1 row affected (0.00 sec)
mysql> select host,user,plugin from user;
+-----------+---------------+-----------------------+
| host      | user          | plugin                |
+-----------+---------------+-----------------------+
| %         | root          | mysql_native_password |
| localhost | mysql.session | mysql_native_password |
| localhost | mysql.sys     | mysql_native_password |
+-----------+---------------+-----------------------+
3 rows in set (0.01 sec)
```

Note: if root's plugin above is not `mysql_native_password`, change it with:

```
update user set plugin='mysql_native_password' where user='root';
```

### 5. Add a hosts entry and set up passwordless SSH

```
[root@localhost ~]# mkdir Hadoop
[root@localhost ~]# vim /etc/hosts
[root@localhost ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
127.0.0.1   namenode
[root@localhost ~]# ssh-keygen
[root@localhost ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@127.0.0.1
```

### 6. Download and unpack Hadoop, then create the required directories

```
[root@localhost ~]# cd Hadoop/
[root@localhost Hadoop]# wget https://archive.apache.org/dist/hadoop/core/hadoop-2.7.2/hadoop-2.7.2.tar.gz
[root@localhost Hadoop]# tar xvf hadoop-2.7.2.tar.gz
[root@localhost Hadoop]# mkdir -p /root/Hadoop/hadoop-2.7.2/hadoopinfra/hdfs/namenode
[root@localhost Hadoop]# mkdir -p /root/Hadoop/hadoop-2.7.2/hadoopinfra/hdfs/datanode
```

### 7. Add the Hadoop environment variables

```
[root@localhost ~]# vim /etc/profile
[root@localhost ~]# tail -n 8 /etc/profile
export HADOOP_HOME=/root/Hadoop/hadoop-2.7.2/
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
[root@localhost ~]# source /etc/profile
```

### 8. Edit the Hadoop core configuration

```
[root@localhost hadoop]# vim /root/Hadoop/hadoop-2.7.2/etc/hadoop/core-site.xml
[root@localhost hadoop]# tail /root/Hadoop/hadoop-2.7.2/etc/hadoop/core-site.xml
<!-- Put site-specific property overrides in this file. -->
<configuration>
    <!-- Address of the HDFS NameNode -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://127.0.0.1:9000</value>
    </property>
    <!-- Directory for files Hadoop generates at runtime -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/hadoop-2.7.2/data/tmp</value>
    </property>
</configuration>
```

### 9. Edit the HDFS directory configuration

```
[root@localhost hadoop]# vim /root/Hadoop/hadoop-2.7.2/etc/hadoop/hdfs-site.xml
[root@localhost hadoop]# tail -n 15 /root/Hadoop/hadoop-2.7.2/etc/hadoop/hdfs-site.xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.name.dir</name>
        <value>/root/Hadoop/hadoop-2.7.2/hadoopinfra/hdfs/namenode</value>
    </property>
    <property>
        <name>dfs.data.dir</name>
        <value>/root/Hadoop/hadoop-2.7.2/hadoopinfra/hdfs/datanode</value>
    </property>
</configuration>
```

### 10. Edit the YARN and MapReduce configuration

```
[root@localhost hadoop]# vim /root/Hadoop/hadoop-2.7.2/etc/hadoop/yarn-site.xml
[root@localhost hadoop]# tail -n 6 /root/Hadoop/hadoop-2.7.2/etc/hadoop/yarn-site.xml
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
[root@localhost hadoop]# cp /root/Hadoop/hadoop-2.7.2/etc/hadoop/mapred-site.xml.template /root/Hadoop/hadoop-2.7.2/etc/hadoop/mapred-site.xml
[root@localhost hadoop]# vim /root/Hadoop/hadoop-2.7.2/etc/hadoop/mapred-site.xml
[root@localhost hadoop]# tail /root/Hadoop/hadoop-2.7.2/etc/hadoop/mapred-site.xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
```

### 11. Edit the Hadoop environment file, format the NameNode, and start the daemons

```
[root@localhost hadoop]# vim /root/Hadoop/hadoop-2.7.2/etc/hadoop/hadoop-env.sh
# change JAVA_HOME to:
export JAVA_HOME=/root/jdk/jdk1.8.0_202/
[root@localhost ~]# hdfs namenode -format
[root@localhost ~]# start-dfs.sh
[root@localhost ~]# start-yarn.sh
```

If the NameNode is reformatted too many times, the clusterIDs no longer match and the DataNode fails to start. Delete the DataNode's VERSION file, then format and start again:

```
[root@localhost ~]# rm -rf /root/Hadoop/hadoop-2.7.2/hadoopinfra/hdfs/datanode/current/VERSION
[root@localhost ~]# hadoop namenode -format
[root@localhost ~]# hdfs namenode -format
[root@localhost ~]# start-dfs.sh
[root@localhost ~]# start-yarn.sh
```

Browse to Hadoop on its default port, 50070: http://localhost:50070/

The cluster applications UI listens on the default port 8088: http://localhost:8088/

### 12. Create the Hive directory and unpack it

```
[root@localhost ~]# mkdir hive
[root@localhost ~]# cd hive
[root@localhost hive]# wget https://archive.apache.org/dist/hive/hive-2.3.3/apache-hive-2.3.3-bin.tar.gz
[root@localhost hive]# tar xvf apache-hive-2.3.3-bin.tar.gz
```

### 13. Back up the Hive configuration files

```
[root@localhost hive]# cd /root/hive/apache-hive-2.3.3-bin/conf/
[root@localhost conf]# cp hive-env.sh.template hive-env.sh
[root@localhost conf]# cp hive-default.xml.template hive-site.xml
[root@localhost conf]# cp hive-log4j2.properties.template hive-log4j2.properties
[root@localhost conf]# cp hive-exec-log4j2.properties.template hive-exec-log4j2.properties
```

### 14. Create the Hive directories in Hadoop and set permissions

```
[root@localhost conf]# hadoop fs -mkdir -p /data/hive/warehouse
21/10/18 14:27:03 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[root@localhost conf]# hadoop fs -mkdir /data/hive/tmp
[root@localhost conf]# hadoop fs -mkdir /data/hive/log
[root@localhost conf]# hadoop fs -chmod -R 777 /data/hive/warehouse
[root@localhost conf]# hadoop fs -chmod -R 777 /data/hive/tmp
[root@localhost conf]# hadoop fs -chmod -R 777 /data/hive/log
```

(The same NativeCodeLoader warning is printed for each of these commands; it is harmless here.)

### 15. Edit the Hive configuration

```
[root@localhost conf]# vim hive-site.xml
```

The Hive settings are as follows:

```
<property>
    <name>hive.exec.scratchdir</name>
    <value>hdfs://127.0.0.1:9000/data/hive/tmp</value>
</property>
<property>
    <name>hive.metastore.warehouse.dir</name>
    <value>hdfs://127.0.0.1:9000/data/hive/warehouse</value>
</property>
<property>
    <name>hive.querylog.location</name>
    <value>hdfs://127.0.0.1:9000/data/hive/log</value>
</property>
<!-- Disable metastore schema version verification; otherwise Spark programs fail at startup -->
<property>
    <name>hive.metastore.schema.verification</name>
    <value>false</value>
</property>
<!-- MySQL host, port, and the database that holds the metastore -->
<property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://127.0.0.1:3306/hive?createDatabaseIfNotExist=true</value>
</property>
<!-- MySQL driver class -->
<property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
</property>
<!-- MySQL username -->
<property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
</property>
<!-- MySQL password -->
<property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>Cby123..</value>
</property>
```

Replace the `system:java.io.tmpdir` and `system:user.name` references in the file with real directories and the actual user, or add the following:

```
<property>
    <name>system:java.io.tmpdir</name>
    <value>/tmp/hive/java</value>
</property>
<property>
    <name>system:user.name</name>
    <value>${user.name}</value>
</property>
```

Also change the temporary paths:

```
<property>
    <name>hive.exec.local.scratchdir</name>
    <value>/root/hive/apache-hive-2.3.3-bin/tmp/${system:user.name}</value>
    <description>Local scratch space for Hive jobs</description>
</property>
<property>
    <name>hive.downloaded.resources.dir</name>
    <value>/root/hive/apache-hive-2.3.3-bin/tmp/${hive.session.id}_resources</value>
    <description>Temporary local directory for added resources in the remote file system.</description>
</property>
<property>
    <name>hive.server2.logging.operation.log.location</name>
    <value>/root/hive/apache-hive-2.3.3-bin/tmp/root/operation_logs</value>
    <description>Top level directory where operation logs are stored if logging functionality is enabled</description>
</property>
```

### 16. Install the MySQL JDBC driver for Hive

```
[root@localhost lib]# cd /root/hive/apache-hive-2.3.3-bin/lib/
[root@localhost lib]# wget https://downloads.mysql.com/archives/get/p/3/file/mysql-connector-java-5.1.49.tar.gz
[root@localhost lib]# tar xvf mysql-connector-java-5.1.49.tar.gz
[root@localhost lib]# cp mysql-connector-java-5.1.49/mysql-connector-java-5.1.49.jar .
[root@localhost bin]# vim /root/hive/apache-hive-2.3.3-bin/conf/hive-env.sh
[root@localhost bin]# tail -n 3 /root/hive/apache-hive-2.3.3-bin/conf/hive-env.sh
export HADOOP_HOME=/root/Hadoop/hadoop-2.7.2/
export HIVE_CONF_DIR=/root/hive/apache-hive-2.3.3-bin/conf
export HIVE_AUX_JARS_PATH=/root/hive/apache-hive-2.3.3-bin/lib
```

### 17. Configure the Hive environment variables and initialize the metastore

```
[root@localhost ~]# vim /etc/profile
[root@localhost ~]# tail -n 6 /etc/profile
export HADOOP_HOME=/root/Hadoop/hadoop-2.7.2/
export HIVE_CONF_DIR=/root/hive/apache-hive-2.3.3-bin/conf
export HIVE_AUX_JARS_PATH=/root/hive/apache-hive-2.3.3-bin/lib
export HIVE_PATH=/root/hive/apache-hive-2.3.3-bin/
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin:$HIVE_PATH/bin
[root@localhost bin]# ./schematool -dbType mysql -initSchema
```

After initialization, update the MySQL connection string (host, port, and metastore database), then start the services:

```
[root@localhost conf]# vim hive-site.xml
<property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://127.0.0.1:3306/hive?characterEncoding=utf8&useSSL=false</value>
</property>
[root@localhost bin]# nohup hive --service metastore &
[root@localhost bin]# nohup hive --service hiveserver2 &
```

### 18. Create the Spark directory and download the required files

```
[root@localhost ~]# mkdir spark
[root@localhost ~]# cd spark
[root@localhost spark]# wget https://dlcdn.apache.org/spark/spark-3.1.2/spark-3.1.2-bin-without-hadoop.tgz --no-check-certificate
[root@localhost spark]# tar xvf spark-3.1.2-bin-without-hadoop.tgz
```

### 19. Configure the Spark environment variables and back up the configuration files

```
[root@localhost ~]# vim /etc/profile
[root@localhost ~]# tail -n 3 /etc/profile
export SPARK_HOME=/root/spark/spark-3.1.2-bin-without-hadoop/
export PATH=$PATH:$SPARK_HOME/bin
[root@localhost spark]# cd /root/spark/spark-3.1.2-bin-without-hadoop/conf/
[root@localhost conf]# cp spark-env.sh.template spark-env.sh
[root@localhost conf]# cp spark-defaults.conf.template spark-defaults.conf
[root@localhost conf]# cp metrics.properties.template metrics.properties
```

### 20. Configure the runtime environment

```
[root@localhost conf]# cp workers.template workers
[root@localhost conf]# vim spark-env.sh
export JAVA_HOME=/root/jdk/jdk1.8.0_202
export HADOOP_HOME=/root/Hadoop/hadoop-2.7.2
export HADOOP_CONF_DIR=/root/Hadoop/hadoop-2.7.2/etc/hadoop
export SPARK_DIST_CLASSPATH=$(/root/Hadoop/hadoop-2.7.2/bin/hadoop classpath)
export SPARK_MASTER_HOST=127.0.0.1
export SPARK_MASTER_PORT=7077
export SPARK_HISTORY_OPTS="-Dspark.history.ui.port=18080 -Dspark.history.retainedApplications=50 -Dspark.history.fs.logDirectory=hdfs://127.0.0.1:9000/spark-eventlog"
```

### 21. Edit the default configuration

```
[root@localhost conf]# vim spark-defaults.conf
spark.master                  spark://127.0.0.1:7077
spark.eventLog.enabled        true
spark.eventLog.dir            hdfs://127.0.0.1:9000/spark-eventlog
spark.eventLog.compress       true
spark.serializer              org.apache.spark.serializer.KryoSerializer
spark.driver.memory           3g
```

### 22. Configure the worker nodes and start the cluster

```
[root@localhost conf]# vim workers
[root@localhost conf]# cat workers
127.0.0.1
[root@localhost sbin]# /root/spark/spark-3.1.2-bin-without-hadoop/sbin/start-all.sh
```

To verify, the Spark application UI listens on the default port 8080: http://localhost:8080/
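The Hive overview above notes that HQL compiles into MapReduce jobs. To make the map/shuffle/reduce model concrete, here is a toy single-process word count in Python (purely illustrative; a real job runs the same three phases distributed across HDFS blocks):

```python
from collections import defaultdict

def mapper(line):
    # map phase: emit a (word, 1) pair for every word in the input split
    for word in line.split():
        yield word.lower(), 1

def shuffle(pairs):
    # shuffle phase: group all values by key, as the framework
    # does between the map and reduce phases
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reducer(key, values):
    # reduce phase: aggregate the grouped values for one key
    return key, sum(values)

lines = ["Hive compiles HQL to MapReduce", "MapReduce scales to large clusters"]
pairs = [kv for line in lines for kv in mapper(line)]
counts = dict(reducer(k, v) for k, v in shuffle(pairs).items())
```

A query like `SELECT word, COUNT(*) FROM docs GROUP BY word` would compile to essentially this job, with the `GROUP BY` realized by the shuffle.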
2021-12-30
Installing and Deploying ELK 7.15.1
ELK简介ELK是Elasticsearch、Logstash、Kibana三大开源框架首字母大写简称(但是后期出现的Filebeat(beats中的一种)可以用来替代Logstash的数据收集功能,比较轻量级)。市面上也被成为Elastic Stack。Filebeat是用于转发和集中日志数据的轻量级传送工具。Filebeat监视您指定的日志文件或位置,收集日志事件,并将它们转发到Elasticsearch或 Logstash进行索引。Filebeat的工作方式如下:启动Filebeat时,它将启动一个或多个输入,这些输入将在为日志数据指定的位置中查找。对于Filebeat所找到的每个日志,Filebeat都会启动收集器。每个收集器都读取单个日志以获取新内容,并将新日志数据发送到libbeat,libbeat将聚集事件,并将聚集的数据发送到为Filebeat配置的输出。Logstash是免费且开放的服务器端数据处理管道,能够从多个来源采集数据,转换数据,然后将数据发送到您最喜欢的“存储库”中。Logstash能够动态地采集、转换和传输数据,不受格式或复杂度的影响。利用Grok从非结构化数据中派生出结构,从IP地址解码出地理坐标,匿名化或排除敏感字段,并简化整体处理过程。Elasticsearch是Elastic Stack核心的分布式搜索和分析引擎,是一个基于Lucene、分布式、通过Restful方式进行交互的近实时搜索平台框架。Elasticsearch为所有类型的数据提供近乎实时的搜索和分析。无论您是结构化文本还是非结构化文本,数字数据或地理空间数据,Elasticsearch都能以支持快速搜索的方式有效地对其进行存储和索引。Kibana是一个针对Elasticsearch的开源分析及可视化平台,用来搜索、查看交互存储在Elasticsearch索引中的数据。使用Kibana,可以通过各种图表进行高级数据分析及展示。并且可以为Logstash和ElasticSearch提供的日志分析友好的 Web 界面,可以汇总、分析和搜索重要数据日志。还可以让海量数据更容易理解。它操作简单,基于浏览器的用户界面可以快速创建仪表板(Dashboard)实时显示Elasticsearch查询动态完整日志系统基本特征收集:能够采集多种来源的日志数据传输:能够稳定的把日志数据解析过滤并传输到存储系统存储:存储日志数据分析:支持UI分析警告:能够提供错误报告,监控机制安装jdk17环境root@elk:~# mkdir jdk root@elk:~# cd jdk root@elk:~/jdk# wget https://download.oracle.com/java/17/latest/jdk-17_linux-x64_bin.tar.gz root@elk:~/jdk# tar xf jdk-17_linux-x64_bin.tar.gz root@elk:~/jdk# cd .. 
root@elk:~# root@elk:~# mv jdk/ / root@elk:~# vim /etc/profile root@elk:~# root@elk:~# root@elk:~# tail -n 4 /etc/profile export JAVA_HOME=/jdk/jdk-17.0.1/ export PATH=$JAVA_HOME/bin:$PATH export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar root@elk:~# root@elk:~# source /etc/profile root@elk:~# chmod -R 777 /jdk/创建elk文件夹,并下载所需包root@elk:~# mkdir elk root@elk:~# cd elk root@elk:~/elk# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.15.1-linux-x86_64.tar.gz root@elk:~/elk# wget https://artifacts.elastic.co/downloads/kibana/kibana-7.15.1-linux-x86_64.tar.gz root@elk:~/elk# wget https://artifacts.elastic.co/downloads/logstash/logstash-7.15.1-linux-x86_64.tar.gz解压安装包root@elk:~/elk# tar xf elasticsearch-7.15.1-linux-x86_64.tar.gz root@elk:~/elk# tar xf kibana-7.15.1-linux-x86_64.tar.gz root@elk:~/elk# tar xf logstash-7.15.1-linux-x86_64.tar.gz root@elk:~/elk# ll total 970288 drwxr-xr-x 5 root root 4096 Oct 20 06:09 ./ drwx------ 7 root root 4096 Oct 20 06:04 ../ drwxr-xr-x 9 root root 4096 Oct 7 22:00 elasticsearch-7.15.1/ -rw-r--r-- 1 root root 340849929 Oct 14 13:28 elasticsearch-7.15.1-linux-x86_64.tar.gz drwxr-xr-x 10 root root 4096 Oct 20 06:09 kibana-7.15.1-linux-x86_64/ -rw-r--r-- 1 root root 283752241 Oct 14 13:34 kibana-7.15.1-linux-x86_64.tar.gz drwxr-xr-x 13 root root 4096 Oct 20 06:09 logstash-7.15.1/ -rw-r--r-- 1 root root 368944379 Oct 14 13:38 logstash-7.15.1-linux-x86_64.tar.gz创建用户并设置权限root@elk:~/elk# cd root@elk:~# useradd elk root@elk:~# mkdir /home/elk root@elk:~# cp -r elk/ /home/elk/ root@elk:~# chown -R elk:elk /home/elk/修改系统配置文件root@elk:~# vim /etc/security/limits.conf root@elk:~# root@elk:~# root@elk:~# tail -n 3 /etc/security/limits.conf * soft nofile 65536 * hard nofile 65536 root@elk:~# root@elk:~# vim /etc/sysctl.conf root@elk:~# root@elk:~# tail -n 2 /etc/sysctl.conf vm.max_map_count=262144 root@elk:~# root@elk:~# sysctl -p vm.max_map_count = 262144 root@elk:~#修改elk配置文件root@elk:~# su - elk $ bash 
elk@elk:~$ cd /elk/elasticsearch-7.15.1/config elk@elk:~/elk/elasticsearch-7.15.1/config$ vim elasticsearch.yml elk@elk:~/elk/elasticsearch-7.15.1/config$ elk@elk:~/elk/elasticsearch-7.15.1/config$ tail -n 20 elasticsearch.yml #设置data存放的路径为/data/es-data path.data: /home/elk/data/ #设置logs日志的路径为/log/es-log path.logs: /home/elk/data/ #设置内存不使用交换分区 bootstrap.memory_lock: false #配置了bootstrap.memory_lock为true时反而会引发9200不会被监听,原因不明 #设置允许所有ip可以连接该elasticsearch network.host: 0.0.0.0 #开启监听的端口为9200 http.port: 9500 #增加新的参数,为了让elasticsearch-head插件可以访问es (5.x版本,如果没有可以自己手动加) http.cors.enabled: true http.cors.allow-origin: "*" cluster.initial_master_nodes: ["elk"] node.name: elk root@elk:~/elk/elasticsearch-7.15.1/config#使用elk用户去启动elasticsearchroot@elk:~# su - elk $ bash elk@elk:~$ elk@elk:~$ mkdir data elk@elk:~/elk/elasticsearch-7.15.1/bin$ cd elk@elk:~$ cd /home/elk/elk/elasticsearch-7.15.1/bin elk@elk:~/elk/elasticsearch-7.15.1/bin$ ./elasticsearch启动之后访问测试:root@elk:~# curl -I http://192.168.1.19:9500/ HTTP/1.1 200 OK X-elastic-product: Elasticsearch Warning: 299 Elasticsearch-7.15.1-83c34f456ae29d60e94d886e455e6a3409bba9ed "Elasticsearch built-in security features are not enabled. Without authentication, your cluster could be accessible to anyone. See https://www.elastic.co/guide/en/elasticsearch/reference/7.15/security-minimal-setup.html to enable security." 
content-type: application/json; charset=UTF-8
content-length: 532
root@elk:~#

Run Elasticsearch in the background

elk@elk:~/elk/elasticsearch-7.15.1/bin$ nohup /home/elk/elk/elasticsearch-7.15.1/bin/elasticsearch >> /home/elk/elk/elasticsearch-7.15.1/output.log 2>&1 &
[1] 8811

Configure Kibana

elk@elk:~$ cd elk/kibana-7.15.1-linux-x86_64/config/
elk@elk:~/elk/kibana-7.15.1-linux-x86_64/config$ vim kibana.yml
elk@elk:~/elk/kibana-7.15.1-linux-x86_64/config$ tail -n 18 kibana.yml
# Listen port
server.port: 5601
# Host address Kibana binds to
server.host: "0.0.0.0"
# Elasticsearch host address
elasticsearch.hosts: ["http://localhost:9500"]
# If Elasticsearch has a username and password configured, set these two items; otherwise leave them commented out
#elasticsearch.username: "user"
#elasticsearch.password: "pass"
elk@elk:~/elk/kibana-7.15.1-linux-x86_64/config$
elk@elk:~$ cd /home/elk/elk/kibana-7.15.1-linux-x86_64/bin
elk@elk:~/elk/kibana-7.15.1-linux-x86_64/bin$ ./kibana

Test access

root@elk:~# curl -I http://192.168.1.19:5601/app/home#/tutorial_directory
HTTP/1.1 200 OK
content-security-policy: script-src 'unsafe-eval' 'self'; worker-src blob: 'self'; style-src 'unsafe-inline' 'self'
x-content-type-options: nosniff
referrer-policy: no-referrer-when-downgrade
kbn-name: elk
kbn-license-sig: aaa69ea6a0792153cde61e88d0cd9bbad7ddcdaec87b613f281dd275e9dbad47
content-type: text/html; charset=utf-8
cache-control: private, no-cache, no-store, must-revalidate
content-length: 144351
vary: accept-encoding
Date: Wed, 20 Oct 2021 07:11:10 GMT
Connection: keep-alive
Keep-Alive: timeout=120
root@elk:~#

Run Kibana in the background

elk@elk:~/elk/kibana-7.15.1-linux-x86_64/bin$ nohup /home/elk/elk/kibana-7.15.1-linux-x86_64/bin/kibana >> /home/elk/elk/kibana-7.15.1-linux-x86_64/output.log 2>&1 &
[2] 9378

Print log events to the screen

elk@elk:~$ cd elk/logstash-7.15.1/bin/
elk@elk:~/elk/logstash-7.15.1/bin$ ./logstash -e 'input {stdin{}} output{stdout{}}'

Type 123 and press Enter, and the resulting event is printed to the screen:

{
          "host" => "elk",
    "@timestamp" => 2021-10-20T07:15:54.230Z,
      "@version" => "1",
       "message" => ""
}
123
{
          "host" => "elk",
    "@timestamp" => 2021-10-20T07:15:56.453Z,
      "@version" => "1",
       "message" => "123"
}

elk@elk:~/elk/logstash-7.15.1/bin$ cd ../config/
elk@elk:~/elk/logstash-7.15.1/config$ vim logstash
elk@elk:~/elk/logstash-7.15.1/config$ cat logstash
input {
  # Read log events from a file
  file {
    path => "/var/log/messages"
    type => "system"
    start_position => "beginning"
  }
}
filter {
}
output {
  # Standard output
  stdout {}
}
elk@elk:~/elk/logstash-7.15.1/config$ mv logstash logstash.conf

Start and test

elk@elk:~/elk/logstash-7.15.1/config$ cd ../bin/
elk@elk:~/elk/logstash-7.15.1/bin$ ./logstash -f ../config/logstash.conf

Start in the background

elk@elk:~$ nohup /home/elk/elk/logstash-7.15.1/bin/logstash -f /home/elk/elk/logstash-7.15.1/config/logstash.conf >> /home/elk/elk/logstash-7.15.1/output.log 2>&1 &
[3] 10177

Set up start on boot

elk@elk:~$ vim startup.sh
elk@elk:~$ cat startup.sh
#!/bin/bash
nohup /home/elk/elk/elasticsearch-7.15.1/bin/elasticsearch >> /home/elk/elk/elasticsearch-7.15.1/output.log 2>&1 &
nohup /home/elk/elk/kibana-7.15.1-linux-x86_64/bin/kibana >> /home/elk/elk/kibana-7.15.1-linux-x86_64/output.log 2>&1 &
nohup /home/elk/elk/logstash-7.15.1/bin/logstash -f /home/elk/elk/logstash-7.15.1/config/logstash.conf >> /home/elk/elk/logstash-7.15.1/output.log 2>&1 &
elk@elk:~$ crontab -e
no crontab for elk - using an empty one

Select an editor. To change later, run 'select-editor'.
  1. /bin/nano        <---- easiest
  2. /usr/bin/vim.basic
  3. /usr/bin/vim.tiny
  4. /bin/ed

Choose 1-4 [1]: 2
crontab: installing new crontab
elk@elk:~$ crontab -l
@reboot /home/elk/startup.sh
elk@elk:~$

Logstash plugins

Logstash's functionality is extended through plugins. Plugin categories:

inputs — input
codecs — encode/decode
filters — filter
outputs — output

The Gemfile records Logstash's installed plugins:

elk@elk:~$ cd elk/logstash-7.15.1
elk@elk:~/elk/logstash-7.15.1$ ls Gemfile
Gemfile
elk@elk:~/elk/logstash-7.15.1$

Plugins can be downloaded from GitHub: https://github.com/logstash-plugins

Using the filter plugin logstash-filter-mutate

elk@elk:~/elk/logstash-7.15.1/config$ vim logstash2.conf
# Create a new configuration file for filtering
input {
  stdin { }
}
filter {
  mutate {
    split => ["message", "|"]
  }
}
output {
  stdout { }
}

When you enter sss|sssni|akok223|23, the message is split on the | separator. The processing flow is: input -> decode -> filter -> encode -> output.

Start the service

Then start the logstash service with the new configuration:

elk@elk:~$ nohup /home/elk/elk/logstash-7.15.1/bin/logstash -f /home/elk/elk/logstash-7.15.1/config/logstash2.conf >> /home/elk/elk/logstash-7.15.1/output.log 2>&1 &
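As a rough illustration (a hypothetical stand-in, not Logstash itself), the tokenization that `mutate { split => ["message", "|"] }` performs on the sample input above can be reproduced with plain shell field splitting:

```shell
# Stand-in for the mutate split filter: break the sample message from
# above into fields on the "|" separator using IFS-based splitting.
message='sss|sssni|akok223|23'
IFS='|' read -r f1 f2 f3 f4 <<EOF
$message
EOF
# One field per line, in order:
printf '%s\n' "$f1" "$f2" "$f3" "$f4"
# prints:
#   sss
#   sssni
#   akok223
#   23
```

Logstash stores the result as an array in the event's message field; the shell version only mirrors the splitting rule itself.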
December 30, 2021
475 reads
0 comments
0 likes
2021-12-30
Exchangis Installation and Setup
Project overview

Exchangis is a lightweight, highly extensible data exchange platform that supports data transfer between structured and unstructured heterogeneous data sources. At the application layer it offers data permission control, high-availability node services, and multi-tenant resource isolation; at the data layer it features a diversified transfer architecture, pluggable modules, and loosely coupled components.

Exchangis's transfer capability relies on the transfer engines aggregated underneath it. The top layer defines a unified parameter model for every type of data source, and each transfer engine maps that parameter model onto its own input model. Every engine that is aggregated adds a class of features to Exchangis, and strengthening an engine's features improves Exchangis as a whole. By default it aggregates and enhances Alibaba's DataX transfer engine.

Core features

Data source management: share your data sources by binding them to projects; set external permissions on data sources to control the inflow and outflow of data.
Multiple transfer engines: transfer engines scale horizontally; the current version fully aggregates the offline batch engine DataX and partially aggregates the big-data batch import/export engine SQOOP.
Near-real-time job control: quickly capture job logs and transfer-rate information and stop jobs in real time; jobs can be dynamically throttled according to bandwidth conditions.
Unstructured transfer support: a reworked DataX framework builds a separate fast channel for binary streams, suited to pure data-synchronization scenarios with no data conversion.
Job status self-checking: monitors long-running jobs and jobs in abnormal states, releases occupied resources promptly, and raises alerts.

Architecture design

Environment preparation

Base software:
MySQL (5.5+): required; the client is optional. If the MySQL client is installed on the Linux server, the deployment script can initialize the database quickly.
JDK (1.8.0_141): required.
Maven (3.6.1+): required.
SQOOP (1.4.6): optional. To use SQOOP as a transfer engine, install SQOOP; it depends on a Hive and Hadoop environment, which is not covered here.
Python (2.x): optional; mainly used by the scheduler to launch the underlying DataX startup script. By default DataX runs as a Java child process; Python can be chosen for custom modifications.

Install the MySQL database

[root@localhost ~]# mkdir mysql
[root@localhost ~]# cd mysql
[root@localhost mysql]# wget https://downloads.mysql.com/archives/get/p/23/file/mysql-5.7.35-1.el7.x86_64.rpm-bundle.tar
[root@localhost mysql]# tar xvf mysql-5.7.35-1.el7.x86_64.rpm-bundle.tar
[root@localhost mysql]# yum install ./*.rpm
[root@localhost mysql]# systemctl start mysqld.service
[root@localhost mysql]# systemctl enable mysqld.service
[root@localhost mysql]# sudo grep 'temporary password' /var/log/mysqld.log
2021-10-25T06:57:46.569037Z 1 [Note] A temporary password is generated for root@localhost: (l5aFfIxfNuu
[root@localhost mysql]# mysql -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.7.35 MySQL Community Server (GPL)

Copyright (c) 2000, 2021, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help.
Type '\c' to clear the current input statement.

mysql> ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'Cby123..';
Query OK, 0 rows affected (0.00 sec)

mysql> use mysql;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> update user set host='%' where user ='root';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

mysql> set global validate_password_policy=0;
Query OK, 0 rows affected (0.00 sec)

mysql> set global validate_password_mixed_case_count=0;
Query OK, 0 rows affected (0.01 sec)

mysql> set global validate_password_number_count=3;
Query OK, 0 rows affected (0.00 sec)

mysql> set global validate_password_special_char_count=0;
Query OK, 0 rows affected (0.00 sec)

mysql> set global validate_password_length=3;
Query OK, 0 rows affected (0.00 sec)

Install the JDK

[root@localhost ~]# mkdir jdk
[root@localhost ~]# cd jdk
[root@localhost jdk]# tar xf jdk-8u141-linux-x64.tar.gz
[root@localhost jdk]# ll
total 181172
drwxr-xr-x. 8 10 143 255 Jul 12 2017 jdk1.8.0_141
-rw-r--r--. 1 root root 185516505 Jul 25 2017 jdk-8u141-linux-x64.tar.gz
[root@localhost jdk]# vim /etc/profile
[root@localhost jdk]# tail -n 3 /etc/profile
export JAVA_HOME=/root/jdk/jdk1.8.0_141/
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
[root@localhost jdk]# source /etc/profile
[root@localhost jdk]# java -version
java version "1.8.0_141"
Java(TM) SE Runtime Environment (build 1.8.0_141-b15)
Java HotSpot(TM) 64-Bit Server VM (build 25.141-b15, mixed mode)
[root@localhost jdk]#

Install Maven

[root@localhost ~]# mkdir maven
[root@localhost ~]# cd maven
[root@localhost maven]# wget https://archive.apache.org/dist/maven/maven-3/3.6.1/binaries/apache-maven-3.6.1-bin.tar.gz
[root@localhost maven]# tar xf apache-maven-3.6.1-bin.tar.gz
[root@localhost maven]# ll
total 8924
drwxr-xr-x. 6 root root 99 Oct 25 15:08 apache-maven-3.6.1
-rw-r--r--. 1 root root 9136463 Sep 4 2019 apache-maven-3.6.1-bin.tar.gz
[root@localhost maven]# vim /etc/profile
[root@localhost maven]# tail -n 3 /etc/profile
export MAVEN_HOME=/root/maven/apache-maven-3.6.1
export PATH=$MAVEN_HOME/bin:$PATH:$HOME/bin
[root@localhost maven]# source /etc/profile
[root@localhost maven]# mvn -version
Apache Maven 3.6.1 (d66c9c0b3152b2e69ee9bac180bb8fcc8e6af555; 2019-04-05T03:00:29+08:00)
Maven home: /root/maven/apache-maven-3.6.1
Java version: 1.8.0_141, vendor: Oracle Corporation, runtime: /root/jdk/jdk1.8.0_141/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "3.10.0-1160.31.1.el7.x86_64", arch: "amd64", family: "unix"
[root@localhost maven]#

Install Exchangis

[root@localhost ~]# mkdir Exchangis
[root@localhost ~]# cd Exchangis
[root@localhost Exchangis]# wget https://github.com/WeBankFinTech/Exchangis/releases/download/release-0.5.0/wedatasphere-exchangis-0.5.0.RELEASE.tar.gz
[root@localhost Exchangis]# ll
total 552904
-rw-r--r--. 1 root root 566172217 Oct 25 15:14 wedatasphere-exchangis-0.5.0.RELEASE.tar.gz
[root@localhost Exchangis]# tar xf wedatasphere-exchangis-0.5.0.RELEASE.tar.gz
[root@localhost Exchangis]# ll
total 552904
drwxr-xr-x. 6 root root 91 Oct 25 15:14 wedatasphere-exchangis-0.5.0.RELEASE
-rw-r--r--. 1 root root 566172217 Oct 25 15:14 wedatasphere-exchangis-0.5.0.RELEASE.tar.gz
[root@localhost Exchangis]# cd wedatasphere-exchangis-0.5.0.RELEASE
[root@localhost wedatasphere-exchangis-0.5.0.RELEASE]# ll
total 20
drwxrwxrwx. 2 root root 120 Oct 29 2020 bin
drwxrwxrwx. 4 root root 32 May 12 2020 docs
drwxrwxrwx. 4 root root 57 May 12 2020 images
-rwxrwxrwx. 1 root root 11357 Oct 29 2020 LICENSE
drwxr-xr-x. 2 root root 198 Oct 25 15:14 packages
-rwxrwxrwx. 1 root root 4582 Oct 29 2020 README.md
[root@localhost wedatasphere-exchangis-0.5.0.RELEASE]# cd bin/
[root@localhost bin]# ./install.sh
2021-10-25 15:16:19.723 [INFO] (12476) Creating directory: [/root/Exchangis/wedatasphere-exchangis-0.5.0.RELEASE/bin/../modules].
2021-10-25 15:16:19.728 [INFO] (12476) ####### Start To Uncompress Packages ######
2021-10-25 15:16:19.730 [INFO] (12476) Uncompressing....
Do you want to decompress this package: [exchangis-eureka_0.5.0.RELEASE_1.tar.gz]? (Y/N)y
2021-10-25 15:16:22.691 [INFO] (12476) Uncompress package: [exchangis-eureka_0.5.0.RELEASE_1.tar.gz] to modules directory
Do you want to decompress this package: [exchangis-executor_0.5.0.RELEASE_1.tar.gz]? (Y/N)y
2021-10-25 15:16:24.798 [INFO] (12476) Uncompress package: [exchangis-executor_0.5.0.RELEASE_1.tar.gz] to modules directory
Do you want to decompress this package: [exchangis-gateway_0.5.0.RELEASE_1.tar.gz]? (Y/N)y
2021-10-25 15:16:31.947 [INFO] (12476) Uncompress package: [exchangis-gateway_0.5.0.RELEASE_1.tar.gz] to modules directory
Do you want to decompress this package: [exchangis-service_0.5.0.RELEASE_1.tar.gz]? (Y/N)y
2021-10-25 15:16:35.029 [INFO] (12476) Uncompress package: [exchangis-service_0.5.0.RELEASE_1.tar.gz] to modules directory
2021-10-25 15:16:36.537 [INFO] (12476) ####### Finish To Umcompress Packages ######
Scan modules directory: [/root/Exchangis/wedatasphere-exchangis-0.5.0.RELEASE/bin/../modules] to find server under exchangis
2021-10-25 15:16:36.542 [INFO] (12476) ####### Start To Install Modules ######
2021-10-25 15:16:36.545 [INFO] (12476) Module servers could be installed: [exchangis-eureka] [exchangis-executor] [exchangis-gateway] [exchangis-service]
Do you want to confiugre and install [exchangis-eureka]? (Y/N)y
2021-10-25 15:16:37.676 [INFO] (12476) Install module server: [exchangis-eureka]
2021-10-25 15:16:37.706 [INFO] (12527) Start to build directory
2021-10-25 15:16:37.709 [INFO] (12527) Creating directory: [/root/Exchangis/wedatasphere-exchangis-0.5.0.RELEASE/modules/exchangis-eureka/bin/../logs].
2021-10-25 15:16:37.779 [INFO] (12527) Directory or file: [/root/Exchangis/wedatasphere-exchangis-0.5.0.RELEASE/modules/exchangis-eureka/bin/../conf] has been exist
2021-10-25 15:16:37.782 [INFO] (12527) Creating directory: [/root/Exchangis/wedatasphere-exchangis-0.5.0.RELEASE/modules/exchangis-eureka/bin/../data].
Do you want to confiugre and install [exchangis-executor]? (Y/N)y
2021-10-25 15:16:38.529 [INFO] (12476) Install module server: [exchangis-executor]
2021-10-25 15:16:38.558 [INFO] (12565) Start to build directory
2021-10-25 15:16:38.561 [INFO] (12565) Creating directory: [/root/Exchangis/wedatasphere-exchangis-0.5.0.RELEASE/modules/exchangis-executor/bin/../logs].
2021-10-25 15:16:38.596 [INFO] (12565) Directory or file: [/root/Exchangis/wedatasphere-exchangis-0.5.0.RELEASE/modules/exchangis-executor/bin/../conf] has been exist
2021-10-25 15:16:38.599 [INFO] (12565) Creating directory: [/root/Exchangis/wedatasphere-exchangis-0.5.0.RELEASE/modules/exchangis-executor/bin/../data].
Do you want to confiugre and install [exchangis-gateway]? (Y/N)y
2021-10-25 15:16:39.291 [INFO] (12476) Install module server: [exchangis-gateway]
2021-10-25 15:16:39.317 [INFO] (12603) Start to build directory
2021-10-25 15:16:39.320 [INFO] (12603) Creating directory: [/root/Exchangis/wedatasphere-exchangis-0.5.0.RELEASE/modules/exchangis-gateway/bin/../logs].
2021-10-25 15:16:39.354 [INFO] (12603) Directory or file: [/root/Exchangis/wedatasphere-exchangis-0.5.0.RELEASE/modules/exchangis-gateway/bin/../conf] has been exist
2021-10-25 15:16:39.356 [INFO] (12603) Creating directory: [/root/Exchangis/wedatasphere-exchangis-0.5.0.RELEASE/modules/exchangis-gateway/bin/../data].
Do you want to confiugre and install [exchangis-service]? (Y/N)y
2021-10-25 15:16:39.991 [INFO] (12476) Install module server: [exchangis-service]
2021-10-25 15:16:40.017 [INFO] (12641) Start to build directory
2021-10-25 15:16:40.020 [INFO] (12641) Creating directory: [/root/Exchangis/wedatasphere-exchangis-0.5.0.RELEASE/modules/exchangis-service/bin/../logs].
2021-10-25 15:16:40.056 [INFO] (12641) Directory or file: [/root/Exchangis/wedatasphere-exchangis-0.5.0.RELEASE/modules/exchangis-service/bin/../conf] has been exist
2021-10-25 15:16:40.059 [INFO] (12641) Creating directory: [/root/Exchangis/wedatasphere-exchangis-0.5.0.RELEASE/modules/exchangis-service/bin/../data].
2021-10-25 15:16:40.099 [INFO] (12641) Scan out mysql command, so begin to initalize the database
Do you want to initalize database with sql: [/root/Exchangis/wedatasphere-exchangis-0.5.0.RELEASE/bin/exchangis-init.sql]? (Y/N)y
Please input the db host(default: 127.0.0.1):
Please input the db port(default: 3306):
Please input the db username(default: root):
Please input the db password(default: ): Cby123..
Please input the db name(default: exchangis)
mysql: [Warning] Using a password on the command line interface can be insecure.
2021-10-25 15:16:55.665 [INFO] (12476) ####### Finish To Install Modules ######
[root@localhost bin]# ./start-all.sh
2021-10-25 15:18:22.181 [INFO] (12691) Try To Start Modules In Order
2021-10-25 15:18:22.189 [INFO] (12699) ####### Begin To Start Module: [exchangis-eureka] ######
2021-10-25 15:18:22.199 [INFO] (12707) load environment variables
2021-10-25 15:18:22.717 [INFO] (12707) /root/jdk/jdk1.8.0_141//bin/java
2021-10-25 15:18:22.721 [INFO] (12707) Waiting EXCHANGIS-EUREKA to start complete ...
2021-10-25 15:18:22.994 [INFO] (12707) EXCHANGIS-EUREKA start success
2021-10-25 15:18:23.003 [INFO] (13009) ####### Begin To Start Module: [exchangis-gateway] ######
2021-10-25 15:18:23.012 [INFO] (13017) load environment variables
2021-10-25 15:18:23.493 [INFO] (13017) /root/jdk/jdk1.8.0_141//bin/java
2021-10-25 15:18:23.497 [INFO] (13017) Waiting EXCHANGIS-GATEWAY to start complete ...
2021-10-25 15:18:24.081 [INFO] (13017) EXCHANGIS-GATEWAY start success
2021-10-25 15:18:24.091 [INFO] (13321) ####### Begin To Start Module: [exchangis-service] ######
2021-10-25 15:18:24.099 [INFO] (13329) load environment variables
2021-10-25 15:18:24.933 [INFO] (13329) /root/jdk/jdk1.8.0_141//bin/java
2021-10-25 15:18:24.936 [INFO] (13329) Waiting EXCHANGIS-SERVICE to start complete ...
2021-10-25 15:18:26.398 [INFO] (13329) EXCHANGIS-SERVICE start success
2021-10-25 15:18:26.410 [INFO] (13634) ####### Begin To Start Module: [exchangis-executor] ######
2021-10-25 15:18:26.423 [INFO] (13643) load environment variables
2021-10-25 15:18:27.677 [INFO] (13643) /root/jdk/jdk1.8.0_141//bin/java
2021-10-25 15:18:27.681 [INFO] (13643) Waiting EXCHANGIS-EXECUTOR to start complete ...
2021-10-25 15:18:28.441 [INFO] (13643) EXCHANGIS-EXECUTOR start success
[root@localhost bin]#

Log in

Registry (Eureka): http://192.168.1.161:8500/
Web console: http://192.168.1.161:9503/
Username: admin
Password: admin
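In the MySQL step earlier, the temporary root password was read out of mysqld.log by eye. As a small sketch, the token itself can be cut out of that log line with POSIX parameter expansion; here the log line is inlined verbatim from the session above rather than read from the file:

```shell
# Sketch: extract just the temporary password from the mysqld.log line
# shown earlier. In practice you would feed in the output of
# grep 'temporary password' /var/log/mysqld.log instead of a literal.
line='2021-10-25T06:57:46.569037Z 1 [Note] A temporary password is generated for root@localhost: (l5aFfIxfNuu'
# Strip the longest prefix ending in ": ", leaving only the password token.
password="${line##*: }"
echo "$password"   # prints (l5aFfIxfNuu
```

This avoids copy/paste mistakes with passwords that contain characters like the leading "(" above, which are easy to drop when retyping.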