Flume Components and Monitoring Big Data Platform Status via Commands

2023-04-24 18:44:32  Source: 博客园
Experiment 1: Flume Component Installation and Configuration

1. Download and Extract Flume

The Flume component installation package can be downloaded from the official archive at the following URL: https://archive.apache.org/dist/flume/1.6.0/

[root@master ~]# ls
anaconda-ks.cfg                jdk-8u152-linux-x64.tar.gz
apache-flume-1.6.0-bin.tar.gz  mysql
apache-hive-2.0.0-bin.tar.gz   mysql-connector-java-5.1.46.jar
derby.log                      sqoop-1.4.7.bin__hadoop-2.6.0.tar.gz
hadoop-2.7.1.tar.gz            zookeeper-3.4.8.tar.gz
hbase-1.2.1-bin.tar.gz
# As root, extract the Flume package into /usr/local/src and rename the extracted directory to flume.
[root@master ~]# tar xf apache-flume-1.6.0-bin.tar.gz -C /usr/local/src/
[root@master ~]# cd /usr/local/src/
[root@master src]# ls
apache-flume-1.6.0-bin  hadoop  hbase  hive  jdk  sqoop  zookeeper
[root@master src]# mv apache-flume-1.6.0-bin flume
[root@master src]# ls
flume  hadoop  hbase  hive  jdk  sqoop  zookeeper
2. Flume Component Deployment

Step 1: As the root user, set the Flume environment variables and make them take effect for all users.


[root@master ~]# vim /etc/profile.d/flume.sh
export FLUME_HOME=/usr/local/src/flume
export PATH=${FLUME_HOME}/bin:$PATH
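To make the variables visible in the current shell without logging out and back in, you can source the new profile script and check FLUME_HOME (a minimal sanity check, assuming the file was saved exactly as shown above):

# Load the new profile script into the current session and verify the variable.
[root@master ~]# source /etc/profile.d/flume.sh
[root@master ~]# echo $FLUME_HOME
/usr/local/src/flume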

Step 2: Modify the relevant Flume configuration files.

First, switch to the hadoop user and change the working directory to Flume's configuration folder.

[root@master ~]# chown -R hadoop.hadoop /usr/local/src/
[root@master ~]# su - hadoop
Last login: Fri Apr 14 16:31:48 CST 2023 on pts/1
[hadoop@master ~]$ ls
derby.log  input  student.java  zookeeper.out
[hadoop@master ~]$ cd /usr/local/src/flume/conf/
[hadoop@master conf]$ ls
flume-conf.properties.template  flume-env.sh.template
flume-env.ps1.template          log4j.properties

Copy flume-env.sh.template and name the copy flume-env.sh.

[hadoop@master conf]$ cp flume-env.sh.template flume-env.sh
[hadoop@master conf]$ ls
flume-conf.properties.template  flume-env.sh           log4j.properties
flume-env.ps1.template          flume-env.sh.template

Step 3: Edit the flume-env.sh file.

Uncomment the JAVA_HOME line and set it to the JDK installation path.

[hadoop@master conf]$ vi flume-env.sh
export JAVA_HOME=/usr/local/src/jdk
#export HBASE_CLASSPATH=/usr/local/src/hadoop/etc/hadoop

Take care to type the path correctly: a typo such as /usr/loocal/src/jdk makes flume-ng fail later with a "java: No such file or directory" error, as seen in Experiment 4.

Run flume-ng version to verify the installation. If the Flume version is reported as 1.6.0, the installation succeeded.

[hadoop@master conf]$ flume-ng version
Error: Could not find or load main class org.apache.flume.tools.GetJavaProperty
Flume 1.6.0
Source code repository: https://git-wip-us.apache.org/repos/asf/flume.git
Revision: 2561a23240a71ba20bf288c7c2cda88f443c2080
Compiled by hshreedharan on Mon May 11 11:15:44 PDT 2015
From source with checksum b29e416802ce9ece3269d34233baf43f
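The GetJavaProperty error does not prevent the version from printing. It commonly appears when HBase is installed on the same node, because flume-ng also pulls in HBase's classpath settings. A frequently suggested workaround (an assumption based on typical installations, not something this lab verifies) is to comment out HBASE_CLASSPATH in HBase's environment file:

# Hypothetical workaround: stop flume-ng from picking up HBASE_CLASSPATH (path assumed from this cluster's layout).
[hadoop@master conf]$ vi /usr/local/src/hbase/conf/hbase-env.sh
#export HBASE_CLASSPATH=/usr/local/src/hadoop/etc/hadoop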
3. Sending and Receiving Data with Flume

Use Flume to transfer data from the web server into HDFS.

Step 1: Create a file named xxx.conf in the Flume installation directory.

[hadoop@master flume]$ vi xxx.conf
# Name the agent a1's source, sink, and channel.
a1.sources=r1
a1.sinks=k1
a1.channels=c1
# Source: watch a spooling directory for new files.
a1.sources.r1.type=spooldir
a1.sources.r1.spoolDir=/usr/local/src/hadoop/logs
a1.sources.r1.fileHeader=true
# Sink: write events to HDFS, rolling files by size (bytes) and interval (seconds).
a1.sinks.k1.type=hdfs
a1.sinks.k1.hdfs.path=hdfs://master:9000/tmp/flume
a1.sinks.k1.hdfs.rollSize=1048760
a1.sinks.k1.hdfs.rollCount=0
a1.sinks.k1.hdfs.rollInterval=900
a1.sinks.k1.hdfs.useLocalTimeStamp=true
# Channel: buffer events in a file-backed channel.
a1.channels.c1.type=file
a1.channels.c1.capacity=1000
a1.channels.c1.transactionCapacity=100
# Bind the source and sink to the channel.
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
[hadoop@master flume]$ ls /usr/local/src/hadoop/
bin  etc      lib      LICENSE.txt  NOTICE.txt  sbin   tmp
dfs  include  libexec  logs         README.txt  share
[hadoop@master flume]$ ls /usr/local/src/hadoop/logs/
hadoop-hadoop-namenode-master.example.com.log
hadoop-hadoop-namenode-master.example.com.out
hadoop-hadoop-namenode-master.example.com.out.1
hadoop-hadoop-namenode-master.example.com.out.2
hadoop-hadoop-namenode-master.example.com.out.3
hadoop-hadoop-namenode-master.example.com.out.4
hadoop-hadoop-namenode-master.example.com.out.5
hadoop-hadoop-secondarynamenode-master.example.com.log
hadoop-hadoop-secondarynamenode-master.example.com.out
hadoop-hadoop-secondarynamenode-master.example.com.out.1
hadoop-hadoop-secondarynamenode-master.example.com.out.2
hadoop-hadoop-secondarynamenode-master.example.com.out.3
hadoop-hadoop-secondarynamenode-master.example.com.out.4
hadoop-hadoop-secondarynamenode-master.example.com.out.5
SecurityAuth-hadoop.audit
yarn-hadoop-resourcemanager-master.example.com.log
yarn-hadoop-resourcemanager-master.example.com.out
yarn-hadoop-resourcemanager-master.example.com.out.1
yarn-hadoop-resourcemanager-master.example.com.out.2
yarn-hadoop-resourcemanager-master.example.com.out.3
yarn-hadoop-resourcemanager-master.example.com.out.4
yarn-hadoop-resourcemanager-master.example.com.out.5

Step 2: Use the flume-ng agent command to load the xxx.conf configuration and start Flume to transfer the data.

[hadoop@master flume]$ flume-ng agent --conf-file xxx.conf --name a1
Warning: No configuration directory set! Use --conf <dir> to override.
Info: Including Hadoop libraries found via (/usr/local/src/hadoop/bin/hadoop) for HDFS access
Info: Excluding /usr/local/src/hadoop/share/hadoop/common/lib/slf4j-api-1.7.10.jar from classpath
Info: Excluding /usr/local/src/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar from classpath
Info: Including HBASE libraries found via (/usr/local/src/hbase/bin/hbase) for HBASE access
Info: Excluding /usr/local/src/hbase/lib/slf4j-api-1.7.7.jar from classpath
Info: Excluding /usr/local/src/hbase/lib/slf4j-log4j12-1.7.5.jar from classpath
Info: Excluding /usr/local/src/hadoop/share/hadoop/common/lib/slf4j-api-1.7.10.jar from classpath
Info: Excluding /usr/local/src/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar from classpath
Info: Including Hive libraries found via (/usr/local/src/hive) for Hive access
...
23/04/21 16:02:35 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: SOURCE, name: r1. src.append.accepted == 0
23/04/21 16:02:35 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: SOURCE, name: r1. src.append.received == 0
23/04/21 16:02:35 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: SOURCE, name: r1. src.events.accepted == 17
23/04/21 16:02:35 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: SOURCE, name: r1. src.events.received == 17
23/04/21 16:02:35 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: SOURCE, name: r1. src.open-connection.count == 0
23/04/21 16:02:35 INFO source.SpoolDirectorySource: SpoolDir source r1 stopped. Metrics: SOURCE:r1{src.events.accepted=17, src.open-connection.count=0, src.append.received=0, src.append-batch.received=1, src.append-batch.accepted=1, src.append.accepted=0, src.events.received=17}
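To avoid the "No configuration directory set" warning and to see the agent's log on the terminal, the agent can also be started with an explicit --conf directory and a console logger (the same style used in Experiment 4 below):

[hadoop@master flume]$ flume-ng agent --conf /usr/local/src/flume/conf --conf-file xxx.conf --name a1 -Dflume.root.logger=INFO,console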

Press Ctrl+C to stop the Flume transfer.

# All cluster services must be started first.
[hadoop@master flume]$ start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [master]
master: starting namenode, logging to /usr/local/src/hadoop/logs/hadoop-hadoop-namenode-master.example.com.out
192.168.88.201: starting datanode, logging to /usr/local/src/hadoop/logs/hadoop-hadoop-datanode-slave2.example.com.out
192.168.88.200: starting datanode, logging to /usr/local/src/hadoop/logs/hadoop-hadoop-datanode-slave1.example.com.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/src/hadoop/logs/hadoop-hadoop-secondarynamenode-master.example.com.out
starting yarn daemons
starting resourcemanager, logging to /usr/local/src/hadoop/logs/yarn-hadoop-resourcemanager-master.example.com.out
192.168.88.200: starting nodemanager, logging to /usr/local/src/hadoop/logs/yarn-hadoop-nodemanager-slave1.example.com.out
192.168.88.201: starting nodemanager, logging to /usr/local/src/hadoop/logs/yarn-hadoop-nodemanager-slave2.example.com.out
[hadoop@master flume]$ ss -antl
State      Recv-Q Send-Q          Local Address:Port    Peer Address:Port
LISTEN     0      128            192.168.88.101:9000               *:*
LISTEN     0      128                         *:50090              *:*
LISTEN     0      128                         *:50070              *:*
LISTEN     0      128                         *:22                 *:*
LISTEN     0      128     ::ffff:192.168.88.101:8030              :::*
LISTEN     0      128     ::ffff:192.168.88.101:8031              :::*
LISTEN     0      128     ::ffff:192.168.88.101:8032              :::*
LISTEN     0      128     ::ffff:192.168.88.101:8033              :::*
LISTEN     0      80                         :::3306              :::*
LISTEN     0      128                        :::22                :::*
LISTEN     0      128     ::ffff:192.168.88.101:8088              :::*

Step 3: Check the files Flume transferred to HDFS. If data files are visible in the /tmp/flume directory on HDFS, the transfer succeeded.

[hadoop@master flume]$ hdfs dfs -ls /
Found 5 items
drwxr-xr-x   - hadoop supergroup          0 2023-04-07 15:20 /hbase
drwxr-xr-x   - hadoop supergroup          0 2023-03-17 17:33 /input
drwxr-xr-x   - hadoop supergroup          0 2023-03-17 18:45 /output
drwx------   - hadoop supergroup          0 2023-04-21 16:02 /tmp
drwxr-xr-x   - hadoop supergroup          0 2023-04-14 20:48 /user
[hadoop@master flume]$ hdfs dfs -ls /tmp/flume
Found 72 items
-rw-r--r--   2 hadoop supergroup       1560 2023-04-21 16:02 /tmp/flume/FlumeData.1682064143693
-rw-r--r--   2 hadoop supergroup       1398 2023-04-21 16:02 /tmp/flume/FlumeData.1682064143694
-rw-r--r--   2 hadoop supergroup       1456 2023-04-21 16:02 /tmp/flume/FlumeData.1682064143695
-rw-r--r--   2 hadoop supergroup       1398 2023-04-21 16:02 /tmp/flume/FlumeData.1682064143696
-rw-r--r--   2 hadoop supergroup       1403 2023-04-21 16:02 /tmp/flume/FlumeData.1682064143697
-rw-r--r--   2 hadoop supergroup       1434 2023-04-21 16:02 /tmp/flume/FlumeData.1682064143698
-rw-r--r--   2 hadoop supergroup       1383 2023-04-21 16:02 /tmp/flume/FlumeData.1682064143699
...
-rw-r--r--   2 hadoop supergroup       1508 2023-04-21 16:02 /tmp/flume/FlumeData.1682064143760
-rw-r--r--   2 hadoop supergroup       1361 2023-04-21 16:02 /tmp/flume/FlumeData.1682064143761
-rw-r--r--   2 hadoop supergroup       1359 2023-04-21 16:02 /tmp/flume/FlumeData.1682064143762
-rw-r--r--   2 hadoop supergroup       1502 2023-04-21 16:02 /tmp/flume/FlumeData.1682064143763
-rw-r--r--   2 hadoop supergroup       1399 2023-04-21 16:02 /tmp/flume/FlumeData.1682064143764
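To spot-check the content of one transferred file (an optional check; the FlumeData file names on your cluster will differ), you can print the first lines of a file:

# Print the beginning of one Flume-written file; pick any name from the listing above.
[hadoop@master flume]$ hdfs dfs -cat /tmp/flume/FlumeData.1682064143693 | head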
Experiment 2: Monitoring Big Data Platform Running Status via Commands

1. Checking Platform Status via Commands

Step 1: View the Linux system information (uname -a)

[root@master ~]# uname -a
Linux master.example.com 3.10.0-862.el7.x86_64 #1 SMP Fri Apr 20 16:44:24 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

The output shows that this Linux node is named master and its kernel release is 3.10.0-862.el7.x86_64.

Step 2: View disk information

(1) List all partitions (fdisk -l)

[root@master ~]# fdisk -l

Disk /dev/sda: 107.4 GB, 107374182400 bytes, 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000af885

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     2099199     1048576   83  Linux
/dev/sda2         2099200   209715199   103808000   8e  Linux LVM

Disk /dev/mapper/centos-root: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/centos-swap: 8455 MB, 8455716864 bytes, 16515072 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/centos-home: 44.1 GB, 44149243904 bytes, 86228992 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

The output shows a disk of 107.4 GB.
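For a compact tree view of the same disks and LVM volumes (an alternative command, not part of the original lab), lsblk can be used:

# Show block devices, their sizes, and mount points as a tree.
[root@master ~]# lsblk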

(2) List all swap areas (swapon -s)

[root@master ~]# swapon -s
Filename                                Type            Size    Used    Priority
/dev/dm-1                               partition       8257532 0       -1

The output shows a swap partition of 8257532 KB (about 8 GB).

(3) View file system usage (df -h)

[root@master ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   50G  4.6G   46G  10% /
devtmpfs                 1.9G     0  1.9G   0% /dev
tmpfs                    1.9G     0  1.9G   0% /dev/shm
tmpfs                    1.9G   12M  1.9G   1% /run
tmpfs                    1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/mapper/centos-home   42G   36M   42G   1% /home
/dev/sda1               1014M  142M  873M  14% /boot
tmpfs                    378M     0  378M   0% /run/user/0

The output shows that the mount point "/" has a capacity of 50 GB, of which 4.6 GB is in use.

Step 3: View the network IP addresses (ip a)

[root@master ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:20:cd:03 brd ff:ff:ff:ff:ff:ff
    inet 192.168.88.101/24 brd 192.168.88.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::75a3:9da2:ede2:5be7/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
    inet6 fe80::1c1b:d0e3:f01a:7c11/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever

The output shows that ens33 has IP address 192.168.88.101 with netmask 255.255.255.0, and the loopback interface has address 127.0.0.1 with netmask 255.0.0.0.

Step 4: List all listening ports (netstat -lntp)

[root@master ~]# netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      908/sshd
tcp6       0      0 :::3306                 :::*                    LISTEN      1053/mysqld
tcp6       0      0 :::22                   :::*                    LISTEN      908/sshd

The output shows that ports 22 (sshd) and 3306 (mysqld) are listening.
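On newer systems, ss largely replaces netstat; the equivalent listing (the same tool already used earlier in this lab) would be:

# -l: listening sockets, -n: numeric ports, -t: TCP only, -p: owning process.
[root@master ~]# ss -lntp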

Step 5: List all established connections (netstat -antp)

[root@master ~]# netstat -antp
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      908/sshd
tcp        0     36 192.168.88.101:22       192.168.88.1:61450      ESTABLISHED 1407/sshd: root@pts
tcp6       0      0 :::3306                 :::*                    LISTEN      1053/mysqld
tcp6       0      0 :::22                   :::*                    LISTEN      908/sshd

The output shows one established connection, on local port 22: the current SSH session from 192.168.88.1.

Step 6: Display process status in real time (top). This command shows, among other things, each process's CPU and memory usage.
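top is interactive; for a one-shot snapshot that can be piped into a file or a report (a small convenience, not part of the original steps), batch mode works:

# -b: batch mode (non-interactive), -n 1: take a single snapshot; head keeps the summary and the top processes.
[root@master ~]# top -b -n 1 | head -15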

Step 7: View CPU information (cat /proc/cpuinfo)

[root@master ~]# cat /proc/cpuinfo
processor       : 0
vendor_id       : AuthenticAMD
cpu family      : 25
model           : 80
model name      : AMD Ryzen 7 5800U with Radeon Graphics
stepping        : 0
cpu MHz         : 1896.438
cache size      : 512 KB
physical id     : 0
siblings        : 2
core id         : 0
cpu cores       : 2
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 16
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc art rep_good nopl tsc_reliable nonstop_tsc extd_apicid eagerfpu pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext retpoline_amd vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero ibpb ibrs arat pku ospke overflow_recov succor
bogomips        : 3792.87
TLB size        : 2560 4K pages
clflush size    : 64
cache_alignment : 64
address sizes   : 45 bits physical, 48 bits virtual
power management:

processor       : 1
vendor_id       : AuthenticAMD
cpu family      : 25
model           : 80
model name      : AMD Ryzen 7 5800U with Radeon Graphics
stepping        : 0
cpu MHz         : 1896.438
cache size      : 512 KB
physical id     : 0
siblings        : 2
core id         : 1
cpu cores       : 2
apicid          : 1
initial apicid  : 1
fpu             : yes
fpu_exception   : yes
cpuid level     : 16
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc art rep_good nopl tsc_reliable nonstop_tsc extd_apicid eagerfpu pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext retpoline_amd vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero ibpb ibrs arat pku ospke overflow_recov succor
bogomips        : 3792.87
TLB size        : 2560 4K pages
clflush size    : 64
cache_alignment : 64
address sizes   : 45 bits physical, 48 bits virtual

Step 8: View memory information (cat /proc/meminfo). This command shows total memory, free memory, and related counters.

[root@master ~]# cat /proc/meminfo
MemTotal:        3863564 kB
MemFree:         2971196 kB
MemAvailable:    3301828 kB
Buffers:            2120 kB
Cached:           529608 kB
SwapCached:            0 kB
Active:           626584 kB
Inactive:         130828 kB
Active(anon):     226348 kB
Inactive(anon):    11152 kB
Active(file):     400236 kB
Inactive(file):   119676 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:       8257532 kB
SwapFree:        8257532 kB
Dirty:                 0 kB
Writeback:             0 kB
AnonPages:        225716 kB
Mapped:            28960 kB
Shmem:             11816 kB
Slab:              59428 kB
SReclaimable:      30680 kB
SUnreclaim:        28748 kB
KernelStack:        4432 kB
PageTables:         3664 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:    10189312 kB
Committed_AS:     773684 kB
VmallocTotal:   34359738367 kB
VmallocUsed:      185376 kB
VmallocChunk:   34359310332 kB
HardwareCorrupted:     0 kB
AnonHugePages:    180224 kB
CmaTotal:              0 kB
CmaFree:               0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:       73536 kB
DirectMap2M:     3072000 kB
DirectMap1G:     3145728 kB
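For a human-readable summary of the same counters (an optional shortcut, not in the original lab):

# -h prints sizes in human-readable units.
[root@master ~]# free -h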
2. Checking Hadoop Status via Commands

Step 1: Switch to the hadoop user

[root@master ~]# su - hadoop
Last login: Fri Apr 21 15:26:05 CST 2023 on pts/0
Last failed login: Fri Apr 21 16:02:08 CST 2023 from slave1 on ssh:notty
There were 4 failed login attempts since the last successful login.

If the current user is root, switch to the hadoop user before proceeding.

Step 2: Change to the Hadoop installation directory

[hadoop@master ~]$ cd /usr/local/src/hadoop/

Step 3: Start Hadoop

[hadoop@master ~]$ start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [master]
master: starting namenode, logging to /usr/local/src/hadoop/logs/hadoop-hadoop-namenode-master.example.com.out
192.168.88.201: starting datanode, logging to /usr/local/src/hadoop/logs/hadoop-hadoop-datanode-slave2.example.com.out
192.168.88.200: starting datanode, logging to /usr/local/src/hadoop/logs/hadoop-hadoop-datanode-slave1.example.com.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/src/hadoop/logs/hadoop-hadoop-secondarynamenode-master.example.com.out
starting yarn daemons
starting resourcemanager, logging to /usr/local/src/hadoop/logs/yarn-hadoop-resourcemanager-master.example.com.out
192.168.88.200: starting nodemanager, logging to /usr/local/src/hadoop/logs/yarn-hadoop-nodemanager-slave1.example.com.out
192.168.88.201: starting nodemanager, logging to /usr/local/src/hadoop/logs/yarn-hadoop-nodemanager-slave2.example.com.out

Step 4: Stop Hadoop

[hadoop@master hadoop]$ stop-all.sh
This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh
Stopping namenodes on [master]
master: stopping namenode
192.168.88.201: stopping datanode
192.168.88.200: stopping datanode
Stopping secondary namenodes [0.0.0.0]
0.0.0.0: stopping secondarynamenode
stopping yarn daemons
stopping resourcemanager
192.168.88.200: stopping nodemanager
192.168.88.201: stopping nodemanager
no proxyserver to stop
Experiment 3: Monitoring Big Data Platform Resource Status via Commands

1. Checking YARN Status via Commands

Step 1: Make sure the working directory is /usr/local/src/hadoop

[hadoop@master hadoop]$ cd /usr/local/src/hadoop/

Step 2: Back at the shell, start ZooKeeper on every node, then run start-all.sh on the master host.

# Start ZooKeeper on the master node.
[hadoop@master hadoop]$ zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/local/src/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
# Start ZooKeeper on slave1.
[root@slave1 ~]# zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/local/src/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
# Start ZooKeeper on slave2.
[root@slave2 ~]# zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/local/src/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
# Then, on the master node:
[hadoop@master hadoop]$ start-all.sh

Step 3: Run the jps command. If the ResourceManager process is present on the master (the NodeManager processes run on the slave nodes), YARN has started.

[hadoop@master hadoop]$ jps

The result below shows that YARN has started.

[hadoop@master hadoop]$ jps
2642 QuorumPeerMain
2994 SecondaryNameNode
3154 ResourceManager
3413 Jps
2795 NameNode
2. Checking HDFS Status via Commands

Step 1: Directory operations

Change to the Hadoop directory by running cd /usr/local/src/hadoop

[hadoop@master hadoop]$ cd /usr/local/src/hadoop/

List the HDFS root directory

[hadoop@master hadoop]$ ./bin/hdfs dfs -ls /
Found 5 items
drwxr-xr-x   - hadoop supergroup          0 2023-04-07 15:20 /hbase
drwxr-xr-x   - hadoop supergroup          0 2023-03-17 17:33 /input
drwxr-xr-x   - hadoop supergroup          0 2023-03-17 18:45 /output
drwx------   - hadoop supergroup          0 2023-04-21 16:02 /tmp
drwxr-xr-x   - hadoop supergroup          0 2023-04-14 20:48 /user

Step 2: View the HDFS report by running bin/hdfs dfsadmin -report

[hadoop@master hadoop]$ bin/hdfs dfsadmin -report
Configured Capacity: 107321753600 (99.95 GB)
Present Capacity: 102855995392 (95.79 GB)
DFS Remaining: 102852079616 (95.79 GB)
DFS Used: 3915776 (3.73 MB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0

-------------------------------------------------
Live datanodes (2):

Name: 192.168.88.201:50010 (slave2)
Hostname: slave2
Decommission Status : Normal
Configured Capacity: 53660876800 (49.98 GB)
DFS Used: 1957888 (1.87 MB)
Non DFS Used: 2232373248 (2.08 GB)
DFS Remaining: 51426545664 (47.89 GB)
DFS Used%: 0.00%
DFS Remaining%: 95.84%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Sun Apr 23 21:56:46 CST 2023

Name: 192.168.88.200:50010 (slave1)
Hostname: slave1
Decommission Status : Normal
Configured Capacity: 53660876800 (49.98 GB)
DFS Used: 1957888 (1.87 MB)
Non DFS Used: 2233384960 (2.08 GB)
DFS Remaining: 51425533952 (47.89 GB)
DFS Used%: 0.00%
DFS Remaining%: 95.83%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Sun Apr 23 21:56:46 CST 2023

Step 3: Check HDFS space usage by running hdfs dfs -df

[hadoop@master hadoop]$ hdfs dfs -df /
Filesystem                  Size     Used     Available  Use%
hdfs://master:9000  107321753600  3915776  102852079616    0%
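The same figures can be printed in human-readable units with the -h flag (an optional variant, not shown in the original output):

[hadoop@master hadoop]$ hdfs dfs -df -h /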
3. Checking HBase Status via Commands

Step 1: Start HBase

Change to the HBase installation directory /usr/local/src/hbase:

[hadoop@master hadoop]$ cd /usr/local/src/hbase/
[hadoop@master hbase]$ hbase version
HBase 1.2.1
Source code repository git://asf-dev/home/busbey/projects/hbase revision=8d8a7107dc4ccbf36a92f64675dc60392f85c015
Compiled by busbey on Wed Mar 30 11:19:21 CDT 2016
From source with checksum f4bb4a14bb4e0b72b46f729dae98a772

The output shows HBase 1.2.1, meaning HBase is available and its version is 1.2.1.

If HBase is not running, start it with start-hbase.sh.

[hadoop@master hbase]$ start-hbase.sh
master: starting zookeeper, logging to /usr/local/src/hbase/logs/hbase-hadoop-zookeeper-master.example.com.out
slave1: starting zookeeper, logging to /usr/local/src/hbase/logs/hbase-hadoop-zookeeper-slave1.example.com.out
slave2: starting zookeeper, logging to /usr/local/src/hbase/logs/hbase-hadoop-zookeeper-slave2.example.com.out
starting master, logging to /usr/local/src/hbase/logs/hbase-hadoop-master-master.example.com.out
slave2: starting regionserver, logging to /usr/local/src/hbase/logs/hbase-hadoop-regionserver-slave2.example.com.out
slave1: starting regionserver, logging to /usr/local/src/hbase/logs/hbase-hadoop-regionserver-slave1.example.com.out

Step 2: View HBase version information

Run hbase shell to enter the HBase interactive shell

[hadoop@master hbase]$ hbase shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/src/hbase/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/src/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter "help<RETURN>" for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.2.1, r8d8a7107dc4ccbf36a92f64675dc60392f85c015, Wed Mar 30 11:19:21 CDT 2016

hbase(main):001:0>

Type version to query the HBase version

hbase(main):001:0> version
1.2.1, r8d8a7107dc4ccbf36a92f64675dc60392f85c015, Wed Mar 30 11:19:21 CDT 2016

The result shows that the HBase version is 1.2.1.

Step 3: Query HBase status by running the status command in the HBase shell

hbase(main):002:0> status
1 active master, 0 backup masters, 2 servers, 0 dead, 1.0000 average load

The result shows 1 active master, 0 backup masters, 2 region servers, 0 dead servers, and an average load of 1.0000.

A condensed view of the HBase status is also available by running status "simple"

hbase(main):003:0> status "simple"
active master:  master:16000 1682258321433
0 backup masters
2 live servers
    slave1:16020 1682258322908
        requestsPerSecond=0.0, numberOfOnlineRegions=2, usedHeapMB=19, maxHeapMB=440, numberOfStores=2, numberOfStorefiles=0, storefileUncompressedSizeMB=0, storefileSizeMB=0, memstoreSizeMB=0, storefileIndexSizeMB=0, readRequestsCount=7, writeRequestsCount=4, rootIndexSizeKB=0, totalStaticIndexSizeKB=0, totalStaticBloomSizeKB=0, totalCompactingKVs=0, currentCompactedKVs=0, compactionProgressPct=NaN, coprocessors=[MultiRowMutationEndpoint]
    slave2:16020 1682258322896
        requestsPerSecond=0.0, numberOfOnlineRegions=0, usedHeapMB=11, maxHeapMB=440, numberOfStores=0, numberOfStorefiles=0, storefileUncompressedSizeMB=0, storefileSizeMB=0, memstoreSizeMB=0, storefileIndexSizeMB=0, readRequestsCount=0, writeRequestsCount=0, rootIndexSizeKB=0, totalStaticIndexSizeKB=0, totalStaticBloomSizeKB=0, totalCompactingKVs=0, currentCompactedKVs=0, compactionProgressPct=NaN, coprocessors=[]
0 dead servers
Aggregate load: 0, regions: 2

This shows more detail about the service ports, request counts, and other statistics for the master, slave1, and slave2 hosts.

For more status options, run help "status"

hbase(main):004:0> help "status"
Show cluster status. Can be "summary", "simple", "detailed", or "replication". The
default is "summary". Examples:

  hbase> status
  hbase> status "simple"
  hbase> status "summary"
  hbase> status "detailed"
  hbase> status "replication"
  hbase> status "replication", "source"
  hbase> status "replication", "sink"

hbase(main):005:0> quit
[hadoop@master hbase]$

The output lists all forms of the status command.

Step 4: Stop the HBase service

To stop the HBase service, run stop-hbase.sh.

[hadoop@master hbase]$ stop-hbase.sh
stopping hbase.................
slave1: no zookeeper to stop because no pid file /tmp/hbase-hadoop-zookeeper.pid
slave2: no zookeeper to stop because no pid file /tmp/hbase-hadoop-zookeeper.pid
master: no zookeeper to stop because no pid file /tmp/hbase-hadoop-zookeeper.pid

When the command returns to the $ prompt with no error messages, the HBase service has stopped.

4. Checking Hive Status via Commands

Step 1: Start Hive

Change to the /usr/local/src/hive directory, type hive, and press Enter.

[hadoop@master hbase]$ cd /usr/local/src/hive/
[hadoop@master hive]$ hive
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/src/hive/lib/hive-jdbc-2.0.0-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/src/hive/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/src/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]

Logging initialized using configuration in jar:file:/usr/local/src/hive/lib/hive-common-2.0.0.jar!/hive-log4j2.properties
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
hive>

When the hive> prompt appears, Hive has started and you are in the Hive shell.

Step 2: Basic Hive commands

Note: every statement on the Hive command line must end with a semicolon.

(1) List databases

hive> show databases;
OK
default
sample
Time taken: 0.973 seconds, Fetched: 2 row(s)

The output lists the default database along with the previously created sample database.

(2) List all tables in the default database

hive> use default;
OK
Time taken: 0.02 seconds
hive> show tables;
OK
test
Time taken: 2.201 seconds, Fetched: 1 row(s)

The output shows that the default database currently contains one table, test.

(3) Create a table stu with an integer id column and a string name column

hive> create table stu(id int,name string);
OK
Time taken: 0.432 seconds

(4) Insert a row into stu with id 001 and name liuyaling (since id is an int, the leading zeros are dropped and the value is stored as 1)

hive> insert into stu values (001,"liuyaling");
WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
Query ID = hadoop_20230423222915_a95e9891-fdf5-4739-a63e-fcadecc85e28
Total jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1682258121749_0001, Tracking URL = http://master:8088/proxy/application_1682258121749_0001/
Kill Command = /usr/local/src/hadoop/bin/hadoop job  -kill job_1682258121749_0001
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2023-04-23 22:31:36,420 Stage-1 map = 0%,  reduce = 0%
2023-04-23 22:31:43,892 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 2.72 sec
MapReduce Total cumulative CPU time: 2 seconds 720 msec
Ended Job = job_1682258121749_0001
Stage-4 is selected by condition resolver.
Stage-3 is filtered out by condition resolver.
Stage-5 is filtered out by condition resolver.
Moving data to: hdfs://master:9000/user/hive/warehouse/stu/.hive-staging_hive_2023-04-23_22-30-32_985_529079703757687911-1/-ext-10000
Loading data to table default.stu
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1   Cumulative CPU: 2.72 sec   HDFS Read: 4135 HDFS Write: 79 SUCCESS
Total MapReduce CPU Time Spent: 2 seconds 720 msec
OK
Time taken: 72.401 seconds

Following the same procedure, insert two more rows: ids 1002 and 1003 with names yanhaoxiang and tnt, as sketched below.
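The two additional insert statements (as recorded in the .hivehistory transcript shown later in this experiment) are:

hive> insert into stu values (1002,"yanhaoxiang");
hive> insert into stu values (1003,"tnt");

Each statement launches a MapReduce job like the one above.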

(5) List tables again after inserting the data

hive> show tables;
OK
stu
test
values__tmp__table__1
values__tmp__table__2
Time taken: 0.026 seconds, Fetched: 4 row(s)

(6) Describe the structure of table stu

hive> desc stu;
OK
id                      int
name                    string
Time taken: 0.041 seconds, Fetched: 2 row(s)

(7) View the contents of table stu

hive> select * from stu;
OK
1       liuyaling
1002    yanhaoxiang
1002    yanhaoxiang
1003    tnt
Time taken: 0.118 seconds, Fetched: 4 row(s)

The 1002 row appears twice because, as the command history below shows, that insert statement was executed twice.

Step 3: View file systems and command history from the Hive CLI

(1) View the local file system by running ! ls /usr/local/src;

hive> ! ls /usr/local/src;
flume
hadoop
hbase
hive
jdk
sqoop
zookeeper

(2) View the HDFS file system by running dfs -ls /;

hive> dfs -ls /;
Found 5 items
drwxr-xr-x   - hadoop supergroup          0 2023-04-23 21:58 /hbase
drwxr-xr-x   - hadoop supergroup          0 2023-03-17 17:33 /input
drwxr-xr-x   - hadoop supergroup          0 2023-03-17 18:45 /output
drwx------   - hadoop supergroup          0 2023-04-21 16:02 /tmp
drwxr-xr-x   - hadoop supergroup          0 2023-04-14 20:48 /user
hive> exit;

(3) View all commands previously entered in Hive

Go to the hadoop user's home directory /home/hadoop and view the .hivehistory file.

[hadoop@master hive]$ cd /home/hadoop/
[hadoop@master ~]$ cat .hivehistory
show databases;
create database sample;
show databases;
use sample;
create table student(number string,name string);
exit
show databases;
exit;
use sample;
select * from student;
sqoop export --connect "jdbc:mysql://master:3306/sample?useUnicode=true&characterEncoding=utf-8" --username root --password Password@123! --table student --input-fields-terminated-by "|" --export-dir /user/hive/warehouse/sample.db/student/*
sqoop export --connect "jdbc:mysql://master:3306/sample?useUnicode=true&characterEncoding=utf-8" --username root --password Password@123! --table student --input-fields-terminated-by "|" --export-dir /user/hive/warehouse/sample.db/student/*;
sqoop export --connect "jdbc:mysql://master3306/sample?useUnicode=true&characterEncoding=utf-8" --username root --password Password@123! --table student --input-fields-terminated-by "|" --export-dir /user/hive/warehouse/sample.db/student/*
sqoop export --connect "jdbc:mysql://master3306/sample?useUnicode=true&characterEncoding=utf-8" --username root --password Password@123! --table student --input-fields-terminated-by "|" --export-dir /user/hive/warehouse/sample.db/student/*;
sqoop export --connect "jdbc:mysql://master3306/sample?useUnicode=true&characterEncoding=utf-8" --username root --password Password@123! --table student --input-fields-terminated-by "|" --export-dir /user/hive/warehouse/sample.db/student/*
exit
show databases;
use sample;
show tables;
use default;
show tables;
quit"
select*from default.test;
quit;
show databases;
use default;
show tables;
insert into stu values (001,"liuyaling");
create table stu(id int,name string);
insert into stu values (001,"liuyaling");
insert into stu values (1002,"yanhaoxiang");
hive
insert into stu values (1003,"tnt");
quit
exit
exit;
quit;
show tables;
insert into stu values (1002,"yanhaoxiang");
insert into stu values (1003,"tnt");
show tables;
desc stu;
select * from stu;
! ls /usr/local/src;
dfs -ls /;
exit;

The file lists every command ever run in the Hive CLI, including erroneous ones, which is useful for maintenance and troubleshooting.

Experiment 4: Monitoring Big Data Platform Service Status via Commands

1. Checking ZooKeeper Status via Commands

Step 1: Check the ZooKeeper status by running zkServer.sh status; the result looks like this:

[hadoop@master ~]$ zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/src/zookeeper/bin/../conf/zoo.cfg
Mode: follower

"Mode: follower" indicates that this node is a follower in the ZooKeeper ensemble.

Step 2: Check the running process

QuorumPeerMain is the entry class that starts a ZooKeeper cluster member; it loads the configuration and launches the QuorumPeer thread.

Run jps to view the processes.

[hadoop@master ~]$ jps
2642 QuorumPeerMain
2994 SecondaryNameNode
3154 ResourceManager
5400 Jps
2795 NameNode

The QuorumPeerMain process is running.

Step 3: With the ZooKeeper service running, connect to it with zkCli.sh.

[hadoop@master ~]$ zkCli.sh
Connecting to localhost:2181
2023-04-23 22:39:17,093 [myid:] - INFO  [main:Environment@100] - Client environment:zookeeper.version=3.4.8--1, built on 02/06/2016 03:18 GMT
2023-04-23 22:39:17,096 [myid:] - INFO  [main:Environment@100] - Client environment:host.name=master
2023-04-23 22:39:17,096 [myid:] - INFO  [main:Environment@100] - Client environment:java.version=1.8.0_152
2023-04-23 22:39:17,097 [myid:] - INFO  [main:Environment@100] - Client environment:java.vendor=Oracle Corporation
2023-04-23 22:39:17,097 [myid:] - INFO  [main:Environment@100] - Client environment:java.home=/usr/local/src/jdk/jre
2023-04-23 22:39:17,097 [myid:] - INFO  [main:Environment@100] - Client environment:java.class.path=/usr/local/src/zookeeper/bin/../build/classes:/usr/local/src/zookeeper/bin/../build/lib/*.jar:/usr/local/src/zookeeper/bin/../lib/slf4j-log4j12-1.6.1.jar:/usr/local/src/zookeeper/bin/../lib/slf4j-api-1.6.1.jar:/usr/local/src/zookeeper/bin/../lib/netty-3.7.0.Final.jar:/usr/local/src/zookeeper/bin/../lib/log4j-1.2.16.jar:/usr/local/src/zookeeper/bin/../lib/jline-0.9.94.jar:/usr/local/src/zookeeper/bin/../zookeeper-3.4.8.jar:/usr/local/src/zookeeper/bin/../src/java/lib/*.jar:/usr/local/src/zookeeper/bin/../conf:/usr/local/src/sqoop/lib:
2023-04-23 22:39:17,097 [myid:] - INFO  [main:Environment@100] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2023-04-23 22:39:17,097 [myid:] - INFO  [main:Environment@100] - Client environment:java.io.tmpdir=/tmp
2023-04-23 22:39:17,097 [myid:] - INFO  [main:Environment@100] - Client environment:java.compiler=<NA>
2023-04-23 22:39:17,097 [myid:] - INFO  [main:Environment@100] - Client environment:os.name=Linux
2023-04-23 22:39:17,097 [myid:] - INFO  [main:Environment@100] - Client environment:os.arch=amd64
2023-04-23 22:39:17,097 [myid:] - INFO  [main:Environment@100] - Client environment:os.version=3.10.0-862.el7.x86_64
2023-04-23 22:39:17,097 [myid:] - INFO  [main:Environment@100] - Client environment:user.name=hadoop
2023-04-23 22:39:17,097 [myid:] - INFO  [main:Environment@100] - Client environment:user.home=/home/hadoop
2023-04-23 22:39:17,097 [myid:] - INFO  [main:Environment@100] - Client environment:user.dir=/home/hadoop
2023-04-23 22:39:17,099 [myid:] - INFO  [main:ZooKeeper@438] - Initiating client connection, connectString=localhost:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@69d0a921
Welcome to ZooKeeper!
2023-04-23 22:39:17,122 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1032] - Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL (unknown error)
JLine support is enabled
2023-04-23 22:39:17,208 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@876] - Socket connection established to localhost/0:0:0:0:0:0:0:1:2181, initiating session
2023-04-23 22:39:17,223 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1299] - Session establishment complete on server localhost/0:0:0:0:0:0:0:1:2181, sessionid = 0x187ae88b8390000, negotiated timeout = 30000

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0]

Step 4: Use a watch to monitor the /hbase znode; as soon as the content of /hbase changes, a notification fires. Register the watch by running get /hbase 1.

[zk: localhost:2181(CONNECTED) 0] get /hbase 1

cZxid = 0x500000002
ctime = Sun Apr 23 21:58:42 CST 2023
mZxid = 0x500000002
mtime = Sun Apr 23 21:58:42 CST 2023
pZxid = 0x500000062
cversion = 18
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 0
numChildren = 14
[zk: localhost:2181(CONNECTED) 1] quit
Quitting...
2023-04-23 22:40:16,816 [myid:] - INFO  [main:ZooKeeper@684] - Session: 0x187ae88b8390001 closed
2023-04-23 22:40:16,817 [myid:] - INFO  [main-EventThread:ClientCnxn$EventThread@519] - EventThread shut down for session: 0x187ae88b8390001
[hadoop@master ~]$

If the command set /hbase value-update is then executed, dataVersion changes from 0 to 1, confirming that /hbase is being watched.
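A minimal way to trigger the watch (a sketch; the transcript above does not include this step, and it assumes a second zkCli.sh session while the watch is active):

# From another zkCli.sh session, write new data to the watched znode; the first session's watch then fires.
[zk: localhost:2181(CONNECTED) 0] set /hbase value-update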

2. Checking Sqoop Status via Commands

Step 1: Query the Sqoop version number to verify that Sqoop works.

First change to the /usr/local/src/sqoop directory and run ./bin/sqoop-version

[hadoop@master ~]$ cd /usr/local/src/sqoop/
[hadoop@master sqoop]$ ./bin/sqoop-version
Warning: /usr/local/src/sqoop/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /usr/local/src/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
23/04/23 22:40:59 INFO sqoop.Sqoop: Running Sqoop version: 1.4.7
Sqoop 1.4.7
git commit id 2328971411f57f0cb683dfb79d19d4d19d185dd8
Compiled by maugli on Thu Dec 21 15:59:58 STD 2017

The line "Sqoop 1.4.7" shows that Sqoop is working and its version is 1.4.7.

Step 2: Test whether Sqoop can connect to the database

In the Sqoop directory, run bin/sqoop list-databases --connect jdbc:mysql://master:3306/ --username root --password Password@123!, where "master:3306" is the database host name and port.

[hadoop@master sqoop]$ bin/sqoop list-databases --connect jdbc:mysql://master:3306/ --username root --password Password@123!
Warning: /usr/local/src/sqoop/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /usr/local/src/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
23/04/23 22:42:16 INFO sqoop.Sqoop: Running Sqoop version: 1.4.7
23/04/23 22:42:16 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
23/04/23 22:42:16 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
Sun Apr 23 22:42:16 CST 2023 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
information_schema
hive
mysql
performance_schema
sample
sys

The output shows that Sqoop can connect to MySQL and list all databases on the master host, including information_schema, hive, mysql, performance_schema, sample, and sys.
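Listing the tables inside one database works the same way (a sketch using the list-tables command from the reference table below; the sample database is one of those listed above):

[hadoop@master sqoop]$ bin/sqoop list-tables --connect jdbc:mysql://master:3306/sample --username root --password Password@123!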

Step 3: Run sqoop help; output like the following means Sqoop started successfully.

[hadoop@master sqoop]$ sqoop help
Warning: /usr/local/src/sqoop/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /usr/local/src/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
23/04/23 22:42:37 INFO sqoop.Sqoop: Running Sqoop version: 1.4.7
usage: sqoop COMMAND [ARGS]

Available commands:
  codegen            Generate code to interact with database records
  create-hive-table  Import a table definition into Hive
  eval               Evaluate a SQL statement and display the results
  export             Export an HDFS directory to a database table
  help               List available commands
  import             Import a table from a database to HDFS
  import-all-tables  Import tables from a database to HDFS
  import-mainframe   Import datasets from a mainframe server to HDFS
  job                Work with saved jobs
  list-databases     List available databases on a server
  list-tables        List available tables in a database
  merge              Merge results of incremental imports
  metastore          Run a standalone Sqoop metastore
  version            Display version information

See 'sqoop help COMMAND' for information on a specific command.

The output lists Sqoop's common commands and their functions, summarized in the table below.

No.  Command            Function
1    import             Import data into the cluster
2    export             Export data from the cluster
3    codegen            Generate code to interact with database records
4    create-hive-table  Create a Hive table (import a table definition into Hive)
5    eval               Evaluate a SQL statement and display the results
6    import-all-tables  Import all tables of a database into HDFS
7    job                Create and work with saved jobs
8    list-databases     List all database names on a server
9    list-tables        List all tables in a database
10   merge              Merge data from different HDFS directories into one target directory
11   metastore          Record metadata for Sqoop jobs; if no metastore instance is started, the default metadata store is ~/.sqoop
12   help               Print Sqoop help information
13   version            Print Sqoop version information
3. Checking Flume Status via Commands

Step 1: Verify the Flume installation by running flume-ng version to view the Flume version.

[hadoop@master sqoop]$ cd /usr/local/src/flume/
[hadoop@master flume]$ flume-ng version
Flume 1.6.0
Source code repository: https://git-wip-us.apache.org/repos/asf/flume.git
Revision: 2561a23240a71ba20bf288c7c2cda88f443c2080
Compiled by hshreedharan on Mon May 11 11:15:44 PDT 2015
From source with checksum b29e416802ce9ece3269d34233baf43f
[hadoop@master flume]$

Step 2: Add example.conf under /usr/local/src/flume

[hadoop@master flume]$ vim /usr/local/src/flume/example.conf
# Name the agent a1's source, sink, and channel.
a1.sources=r1
a1.sinks=k1
a1.channels=c1
# Source: watch the Flume home directory itself as a spooling directory.
a1.sources.r1.type=spooldir
a1.sources.r1.spoolDir=/usr/local/src/flume/
a1.sources.r1.fileHeader=true
# Sink: write events to HDFS under /flume.
a1.sinks.k1.type=hdfs
a1.sinks.k1.hdfs.path=hdfs://master:9000/flume
a1.sinks.k1.hdfs.rollSize=1048760
a1.sinks.k1.hdfs.rollCount=0
a1.sinks.k1.hdfs.rollInterval=900
a1.sinks.k1.hdfs.useLocalTimeStamp=true
# Channel: buffer events in a file-backed channel.
a1.channels.c1.type=file
a1.channels.c1.capacity=1000
a1.channels.c1.transactionCapacity=100
# Bind the source and sink to the channel.
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
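Because the spooling directory is /usr/local/src/flume/ itself, any new file that appears there becomes input for the agent. A quick way to generate a few test events (an illustrative step, not part of the original text) is to drop a small file into that directory before starting the agent:

# Create a test file in the spooling directory; Flume ingests it and renames it to test.log.COMPLETED.
[hadoop@master flume]$ echo "hello flume" > /usr/local/src/flume/test.log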

Step 3: Start Flume agent a1 with the log written to the console

[hadoop@master flume]$ flume-ng agent --conf-file example.conf --name a1 -Dflume.root.logger=INFO,console
Warning: No configuration directory set! Use --conf <dir> to override.
Info: Including Hadoop libraries found via (/usr/local/src/hadoop/bin/hadoop) for HDFS access
Info: Excluding /usr/local/src/hadoop/share/hadoop/common/lib/slf4j-api-1.7.10.jar from classpath
Info: Excluding /usr/local/src/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar from classpath
Info: Including HBASE libraries found via (/usr/local/src/hbase/bin/hbase) for HBASE access
Info: Excluding /usr/local/src/hbase/lib/slf4j-api-1.7.7.jar from classpath
Info: Excluding /usr/local/src/hbase/lib/slf4j-log4j12-1.7.5.jar from classpath
[hadoop@master flume]$ /usr/local/src/flume/bin/flume-ng agent --conf ./conf --conf-file ./example.conf --name a1 -Dflume.root.logger=INFO,console
Info: Sourcing environment configuration script /usr/local/src/flume/conf/flume-env.sh
Info: Including Hadoop libraries found via (/usr/local/src/hadoop/bin/hadoop) for HDFS access
...
.../lib/native:/usr/local/src/hadoop/lib/native org.apache.flume.node.Application --conf-file ./example.conf --name a1
/usr/local/src/flume/bin/flume-ng: line 241: /usr/loocal/src/jdk/bin/java: No such file or directory
...
23/04/23 22:52:56 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: SINK, name: k1. sink.connection.failed.count == 0
23/04/23 22:52:56 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: SINK, name: k1. sink.event.drain.attempt == 1918
23/04/23 22:52:56 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: SINK, name: k1. sink.event.drain.sucess == 1918
[hadoop@master flume]$ ^C

The "/usr/loocal/src/jdk/bin/java: No such file or directory" error above comes from the JAVA_HOME typo noted in Experiment 1; correct the path in flume-env.sh to /usr/local/src/jdk.

Step 4: View the results

[hadoop@master flume]$ hdfs dfs -lsr /flume
lsr: DEPRECATED: Please use 'ls -R' instead.
-rw-r--r--   2 hadoop supergroup       1499 2023-04-23 22:52 /flume/FlumeData.1682261559722
-rw-r--r--   2 hadoop supergroup       1419 2023-04-23 22:52 /flume/FlumeData.1682261559723
-rw-r--r--   2 hadoop supergroup       1468 2023-04-23 22:52 /flume/FlumeData.1682261559724
...
-rw-r--r--   2 hadoop supergroup       1795 2023-04-23 22:52 /flume/FlumeData.1682261559817
-rw-r--r--   2 hadoop supergroup       1841 2023-04-23 22:52 /flume/FlumeData.1682261559818
-rw-r--r--   2 hadoop supergroup       1665 2023-04-23 22:52 /flume/FlumeData.1682261559819
-rw-r--r--   2 hadoop supergroup       1439 2023-04-23 22:52 /flume/FlumeData.1682261559820
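As the deprecation notice says, the modern form of the same recursive listing is:

[hadoop@master flume]$ hdfs dfs -ls -R /flume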
