
This update removes the enforcement of the sysadmin username, allowing any
local LDAP user that is a member of the sudo and sys_protected groups to
run collect.

This change introduced several challenging new failure modes that
necessitated some refactoring of collect's existing fault reporting.
Enhancements were made to the detection, handling and reporting of
failures at 'all levels'; i.e. local host, remote host, subcloud and
subcloud remote host. Specific attention was put towards the most probable
failure modes, including detection and handling of passwordless sudo,
unsupported sudo, unknown usernames, unreachable hosts, invalid passwords,
out-of-space errors, etc. at all levels. Generally, multi-host collect
continues in the presence of remote host collect errors or warnings,
whereas local host failures are typically treated as fatal.

Additionally, reporting of collect timeout cases was improved; instead of
printing only a timeout code number, a string-based timeout cause is now
included.

Improvements were also made in the handling and packaging of the
collect.log file. A collect.log is now included at the main bundle and
subcloud collect levels. Some remote host collect failures are now logged
in these files at the appropriate level. If a host or subcloud is missing
from a bundle, say because it was unreachable or had passwordless sudo
enabled, the collect log will contain a warning message to that effect.

A 5 second yield was added to subcloud collect monitoring to reduce the
cpu load it was inducing. This lines up with the existing 5 second yield
used for host collect monitoring.

In an attempt to improve the collect user experience, the following
additional improvements were made:

- The attempt to source the openrc file and query system inventory is
  moved to after the password prompt. This allows the various password
  checks to be handled early, making the tool feel more responsive.
- The global collect timeout now starts only after the password is input
  and the inventory is read, so these operations don't count towards the
  collection time.
- Improved how collect reports, to the console and the logs, the hosts
  and/or subclouds that 'will be' and 'were successfully' collected from.
- Added expect segment debug logging tied to the --debug option. With
  debug enabled, each function's expect segment logs its execution output
  to /tmp with files of the following form (see the sketch after this
  list):
      /tmp/collect_expect_<username>_<unit>_<function>
- Added a --password option to simplify collect test automation.
- Replaced the subcloud collect verbose option with debug. The verbose
  subcloud collect was known to cause issues.

All the above changes warranted a collect tool up-version to 3.0.
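For illustration, a hypothetical invocation sketch using the options
described above (the exact argument syntax may differ from the tool's
built-in help):

    # With --debug, each expect segment logs its execution output to
    # files of the form /tmp/collect_expect_<username>_<unit>_<function>
    collect all --debug

    # --password is intended for test automation; shown here assuming it
    # accepts the password as an argument rather than prompting for it
    collect all --subcloud --password <password>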
Test Plan:

A full collect regression was performed.

PASS: Verify install and collect testing on the following systems
      - All-In-One SX
      - All-In-One DX
      - Standard DX with 1 worker and 1 storage
      - Simplex DC system with 2 subclouds ; 1 SX and 1 DX

Success Path Handling: both sysadmin and any other username

PASS: Verify collect handling at all levels
PASS: Verify dated collect all for system and subcloud
PASS: Verify all variations of collect host list handling
PASS: Verify collect clean at all levels
PASS: Verify system and subcloud collect --report handling
PASS: Verify collect all --skip-mask
PASS: Verify collect all --timeout
PASS: Verify collect all --inline
PASS: Verify collect all --subcloud
PASS: Verify collect all --verbose
PASS: Verify collect all --verbose --debug
PASS: Verify collect all --version
PASS: Verify collect all --subcloud --inline
PASS: Verify collect all on SX/DX standard and DC systems
PASS: Verify new collect --password option
PASS: Verify collect bundle content between sysadmin and other user
PASS: Verify bundle includes collect.log at bundle and host levels
PASS: Verify collect.log content at each level
PASS: Verify collect from remote host that does not have this update
PASS: Verify collect from subcloud that does not have this update
PASS: Verify system and subcloud collect using account password with
      special character(s)

Failure Path Handling: error response should clearly indicate the issue

PASS: Verify all level collect handling of unknown username
PASS: Verify all level collect handling with passwordless sudo enabled
PASS: Verify all level collect handling of unsupported sudo
PASS: Verify all level collect handling where hosts run out of scratch space
PASS: Verify all level collect handling of a host whose scratch space is
      filled to 75% or more
PASS: Verify a successful collect following the cleanup of a previous
      out-of-space error
PASS: Verify collect handling of all non-active controller cases
PASS: Verify collect handling of an invalid hostname
PASS: Verify collect handling of unreachable remote hosts at all levels
PASS: Verify collect handling of an invalid password at all levels
PASS: Verify host and subcloud collect timeout handling
PASS: Verify collect global timeout handling
PASS: Verify collect handling of failure to get the remote tarball
PASS: Verify collect debug option handling and data

Story: 2010533
Task: 50419

Change-Id: Ibd827e1c72190bcdcf710b32ad7903cfa397c394
Signed-off-by: Eric MacDonald <eric.macdonald@windriver.com>
#! /bin/bash
########################################################################
#
# Copyright (c) 2016-2022, 2024 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
########################################################################

# make these platform.conf variables global.
# values are loaded in source_openrc_if_needed.
export nodetype=""
export subfunction=""
export system_type=""
export security_profile=""
export sdn_enabled=""
export region_config=""
export vswitch_type=""
export system_mode=""
export sw_version=""

# assume this is not the active controller until learned
export ACTIVE=false

#
# Import commands, variables and convenience functions available to
# all collectors ; common and user defined.
#
source /usr/local/sbin/collect_utils
source_openrc_if_needed

#
# parse input parameters
#
COLLECT_NAME="${1}"
DEBUG=${8}
INVENTORY=${9}
set_debug_mode ${DEBUG}
expect_debug=$([ "${DEBUG}" = true ] && echo "-d" || echo "")

# Calling parms
#
# 1 = collect name
# 2 = start date option
# 3 = start date
# 4 = "any" (ignored - no longer used ; kept to support upgrades/downgrades)
# 5 = end date option
# 6 = end date
# 7 = "any" (ignored - no longer used ; kept to support upgrades/downgrades)
# 8 = debug mode
# 9 = inventory
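#
# Illustrative example only (hypothetical values; the option strings
# referenced below come from the sourced environment, i.e.
# ${STARTDATE_OPTION} and ${ENDDATE_OPTION}):
#
#   collect_host <collect-name> ${STARTDATE_OPTION} 20240101 any \
#                               ${ENDDATE_OPTION} 20240131 any true <inventory>
#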
logger -t ${COLLECT_TAG} "${0} ${1} ${2} ${3} ${4} ${5} ${6} ${7} ${8} ${9}"

# parse out the start date/time if it is present
STARTDATE_RANGE=false
STARTDATE="any"
if [ "${2}" == "${STARTDATE_OPTION}" ] ; then
    if [ "${3}" != "any" -a ${#3} -gt 7 ] ; then
        STARTDATE_RANGE=true
        STARTDATE="${3}"
    fi
fi

# parse out the end date/time if it is present
ENDDATE_RANGE=false
ENDDATE="any"
if [ "${5}" == "${ENDDATE_OPTION}" ] ; then
    if [ "${6}" != "any" -a ${#6} -gt 7 ] ; then
        ENDDATE_RANGE=true
        ENDDATE="${6}"
    fi
fi

COLLECT_BASE_DIR="/scratch"
EXTRA="var/extra"
hostname="${HOSTNAME}"
COLLECT_NAME_DIR="${COLLECT_BASE_DIR}/${COLLECT_NAME}"
EXTRA_DIR="${COLLECT_NAME_DIR}/${EXTRA}"
TARBALL="${COLLECT_NAME_DIR}.tgz"
COLLECT_PATH="/etc/collect.d"
RUN_EXCLUDE="/etc/collect/run.exclude"
ETC_EXCLUDE="/etc/collect/etc.exclude"
VAR_LOG_EXCLUDE="/etc/collect/varlog.exclude"
COLLECT_INCLUDE="/var/run /etc /root"
FLIGHT_RECORDER_PATH="var/lib/sm/"
FLIGHT_RECORDER_FILE="sm.eru.v1"
VAR_LOG_INCLUDE_LIST="/tmp/${COLLECT_NAME}.lst"
COLLECT_DIR_USAGE_CMD="df -h ${COLLECT_BASE_DIR}"
COLLECT_DATE="/usr/local/sbin/collect_date"
COLLECT_SYSINV="${COLLECT_PATH}/collect_sysinv"

function log_space()
{
    local msg=${1}

    space="`${COLLECT_DIR_USAGE_CMD}`"
    space1=`echo "${space}" | grep -v Filesystem`
    ilog "${COLLECT_BASE_DIR} ${msg} ${space1}"
}

space_precheck ${HOSTNAME} ${COLLECT_BASE_DIR}

CURR_DIR=`pwd`
mkdir -p ${COLLECT_NAME_DIR}
cd ${COLLECT_NAME_DIR}

# create dump target extra-stuff directory
mkdir -p ${EXTRA_DIR}

RETVAL=0

# Remove any previous collect error log.
# Start this collect with an empty file.
#
# stderr is directed to this log during the collect process.
# By searching this log after collect_host is run we can find
# errors that occurred during collect.
# The only real error that we care about right now is the
#
# "No space left on device" error
#
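#
# Illustrative note: collect_errors (sourced from collect_utils) is assumed
# to scan this log for that error, i.e. something equivalent to:
#
#   grep -q "No space left on device" ${COLLECT_ERROR_LOG}
#
# and to return non-zero so this host's collect is failed when it is found.
#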
rm -f ${COLLECT_ERROR_LOG}
touch ${COLLECT_ERROR_LOG}
chmod 644 ${COLLECT_ERROR_LOG}
echo "`date '+%F %T'` :${COLLECT_NAME_DIR}" > ${COLLECT_ERROR_LOG}

ilog "creating local collect tarball ${COLLECT_NAME_DIR}.tgz"

################################################################################
# Run collect scripts to check system status
################################################################################
function collect_parts()
{
    if [ -d ${COLLECT_PATH} ]; then
        for i in ${COLLECT_PATH}/*; do
            if [ -f $i ]; then
                if [ ${i} = ${COLLECT_SYSINV} ]; then
                    $i ${COLLECT_NAME_DIR} ${EXTRA_DIR} ${hostname} ${INVENTORY}
                else
                    $i ${COLLECT_NAME_DIR} ${EXTRA_DIR} ${hostname}
                fi
            fi
        done
    fi
}


function collect_extra()
{
    # dump process lists
    LOGFILE="${EXTRA_DIR}/process.info"
    echo "${hostname}: Process Info ......: ${LOGFILE}"

    delimiter ${LOGFILE} "ps -e -H -o ..."
    ${PROCESS_DETAIL_CMD} >> ${LOGFILE}

    # Collect process and thread info (tree view)
    delimiter ${LOGFILE} "pstree --arguments --ascii --long --show-pids"
    pstree --arguments --ascii --long --show-pids >> ${LOGFILE}

    # Collect process, thread and scheduling info (worker subfunction only)
    # (also gets process 'affinity' which is useful on workers)
    which ps-sched.sh >/dev/null 2>&1
    if [ $? -eq 0 ]; then
        delimiter ${LOGFILE} "ps-sched.sh"
        ps-sched.sh >> ${LOGFILE}
    fi

    # Collect process, thread, scheduling and elapsed time info.
    # This has everything that ps-sched.sh does except the cpu affinity
    # mask, and adds: stime, etime, time, wchan and tty.
    delimiter ${LOGFILE} "ps -eL -o pid,lwp,ppid,state,class,nice,rtprio,priority,psr,stime,etime,time,wchan:16,tty,comm,command"
    ps -eL -o pid,lwp,ppid,state,class,nice,rtprio,priority,psr,stime,etime,time,wchan:16,tty,comm,command >> ${LOGFILE}

    # Collect per kubernetes container name, QoS, and cpusets per numa node
    delimiter ${LOGFILE} "kube-cpusets"
    kube-cpusets >> ${LOGFILE}

    # Various host attributes
    LOGFILE="${EXTRA_DIR}/host.info"
    echo "${hostname}: Host Info .........: ${LOGFILE}"

    # CGCS build info
    delimiter ${LOGFILE} "${BUILD_INFO_CMD}"
    ${BUILD_INFO_CMD} >> ${LOGFILE}

    delimiter ${LOGFILE} "timedatectl"
    timedatectl >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    delimiter ${LOGFILE} "uptime"
    uptime >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    delimiter ${LOGFILE} "cat /proc/cmdline"
    cat /proc/cmdline >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    delimiter ${LOGFILE} "cat /proc/version"
    cat /proc/version >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    delimiter ${LOGFILE} "lscpu"
    lscpu >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    delimiter ${LOGFILE} "lscpu -e"
    lscpu -e >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    delimiter ${LOGFILE} "cat /proc/cpuinfo"
    cat /proc/cpuinfo >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    delimiter ${LOGFILE} "cat /sys/devices/system/cpu/isolated"
    cat /sys/devices/system/cpu/isolated >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    delimiter ${LOGFILE} "ip addr show"
    ip addr show >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    delimiter ${LOGFILE} "lspci -nn"
    lspci -nn >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    delimiter ${LOGFILE} "find /sys/kernel/iommu_groups/ -type l"
    find /sys/kernel/iommu_groups/ -type l >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    # networking totals
    delimiter ${LOGFILE} "cat /proc/net/dev"
    cat /proc/net/dev >> ${LOGFILE}

    delimiter ${LOGFILE} "dmidecode"
    dmidecode >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    # summary of scheduler tunable settings
    delimiter ${LOGFILE} "cat /proc/sched_debug | head -15"
    cat /proc/sched_debug | head -15 >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    if [ "${SKIP_MASK}" = "true" ]; then
        delimiter ${LOGFILE} "facter (excluding ssh info)"
        facter | grep -iv '^ssh' >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}
    else
        delimiter ${LOGFILE} "facter"
        facter >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}
    fi

    if [[ "$nodetype" == "worker" || "$subfunction" == *"worker"* ]] ; then
        delimiter ${LOGFILE} "topology"
        topology >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}
    fi

    # CPU C-state power info
    delimiter ${LOGFILE} "cpupower monitor"
    cpupower monitor >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    LOGFILE="${EXTRA_DIR}/memory.info"
    echo "${hostname}: Memory Info .......: ${LOGFILE}"

    delimiter ${LOGFILE} "cat /proc/meminfo"
    cat /proc/meminfo >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    delimiter ${LOGFILE} "cat /sys/devices/system/node/node?/meminfo"
    cat /sys/devices/system/node/node?/meminfo >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    delimiter ${LOGFILE} "cat /proc/slabinfo"
    log_slabinfo ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    delimiter ${LOGFILE} "ps -e -o ppid,pid,nlwp,rss:10,vsz:10,cmd --sort=-rss"
    ps -e -o ppid,pid,nlwp,rss:10,vsz:10,cmd --sort=-rss >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    # list open files
    delimiter ${LOGFILE} "lsof -lwX"
    lsof -lwX >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    # hugepages numa mapping
    delimiter ${LOGFILE} "grep huge /proc/*/numa_maps"
    grep -e " huge " /proc/*/numa_maps >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    # rootfs and tmpfs usage
    delimiter ${LOGFILE} "df -h -H -T --local -t rootfs -t tmpfs"
    df -h -H -T --local -t rootfs -t tmpfs >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    LOGFILE="${EXTRA_DIR}/filesystem.info"
    echo "${hostname}: Filesystem Info ...: ${LOGFILE}"

    # rootfs and tmpfs usage
    delimiter ${LOGFILE} "df -h -H -T --local -t rootfs -t tmpfs"
    df -h -H -T --local -t rootfs -t tmpfs >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    # disk space usage
    delimiter ${LOGFILE} "df -h -H -T --local -t ext2 -t ext3 -t ext4 -t xfs --total"
    df -h -H -T --local -t ext2 -t ext3 -t ext4 -t xfs --total >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    # disk inodes usage
    delimiter ${LOGFILE} "df -h -H -T --local -i -t ext2 -t ext3 -t ext4 -t xfs --total"
    df -h -H -T --local -i -t ext2 -t ext3 -t ext4 -t xfs --total >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    # disks by-path values
    delimiter ${LOGFILE} "ls -lR /dev/disk"
    ls -lR /dev/disk >> ${LOGFILE}

    # disk summary (requires sudo/root)
    delimiter ${LOGFILE} "fdisk -l"
    fdisk -l >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    delimiter ${LOGFILE} "cat /proc/scsi/scsi"
    cat /proc/scsi/scsi >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    # Controller specific stuff
    if [ "$nodetype" = "controller" ] ; then

        delimiter ${LOGFILE} "cat /proc/drbd"
        cat /proc/drbd >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

        delimiter ${LOGFILE} "/sbin/drbdadm dump"
        /sbin/drbdadm dump >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    fi

    # LVM summary
    delimiter ${LOGFILE} "/usr/sbin/vgs --version ; /usr/sbin/pvs --version ; /usr/sbin/lvs --version"
    /usr/sbin/vgs --version >> ${LOGFILE}
    /usr/sbin/pvs --version >> ${LOGFILE}
    /usr/sbin/lvs --version >> ${LOGFILE}

    delimiter ${LOGFILE} "/usr/sbin/vgs --all --options all"
    /usr/sbin/vgs --all --options all >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    delimiter ${LOGFILE} "/usr/sbin/pvs --all --options all"
    /usr/sbin/pvs --all --options all >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    delimiter ${LOGFILE} "/usr/sbin/lvs --all --options all"
    /usr/sbin/lvs --all --options all >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    # iSCSI Information
    LOGFILE="${EXTRA_DIR}/iscsi.info"
    echo "${hostname}: iSCSI Info ........: ${LOGFILE}"

    if [ "$nodetype" = "controller" ] ; then
        # Controller - LIO exported initiators summary
        delimiter ${LOGFILE} "targetcli ls"
        targetcli ls >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

        # Controller - LIO sessions
        delimiter ${LOGFILE} "targetcli sessions detail"
        targetcli sessions detail >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    elif [[ "$nodetype" == "worker" || "$subfunction" == *"worker"* ]] ; then
        # Worker - iSCSI initiator information
        collect_dir=${EXTRA_DIR}/iscsi_initiator_info
        mkdir -p ${collect_dir}
        cp -rf /run/iscsi-cache/nodes/* ${collect_dir}
        find ${collect_dir} -type d -exec chmod 750 {} \;

        # Worker - iSCSI initiator active sessions
        delimiter ${LOGFILE} "iscsiadm -m session"
        iscsiadm -m session >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

        # Worker - iSCSI udev created nodes
        delimiter ${LOGFILE} "ls -la /dev/disk/by-path | grep \"iqn\""
        ls -la /dev/disk/by-path | grep "iqn" >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}
    fi

    LOGFILE="${EXTRA_DIR}/history.info"
    echo "${hostname}: Bash History ......: ${LOGFILE}"

    # history
    delimiter ${LOGFILE} "cat /home/sysadmin/.bash_history"
    cat /home/sysadmin/.bash_history >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    LOGFILE="${EXTRA_DIR}/interrupt.info"
    echo "${hostname}: Interrupt Info ....: ${LOGFILE}"

    # interrupts
    delimiter ${LOGFILE} "cat /proc/interrupts"
    cat /proc/interrupts >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    delimiter ${LOGFILE} "cat /proc/softirqs"
    cat /proc/softirqs >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    # Controller specific stuff
    if [ "$nodetype" = "controller" ] ; then
        netstat -pan > ${EXTRA_DIR}/netstat.info
    fi

    LOGFILE="${EXTRA_DIR}/blockdev.info"
    echo "${hostname}: Block Devices Info : ${LOGFILE}"

    # Collect block devices - show all sda and cinder devices, and size
    delimiter ${LOGFILE} "lsblk"
    lsblk >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    # Collect block device topology - show devices and which io-scheduler
    delimiter ${LOGFILE} "lsblk --topology"
    lsblk --topology >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}

    # Collect SCSI devices - show devices and cinder attaches, etc
    delimiter ${LOGFILE} "lsblk --scsi"
    lsblk --scsi >> ${LOGFILE} 2>>${COLLECT_ERROR_LOG}
}

log_space "before collect ......:"

collect_extra
collect_parts

#
# handle collect-after and collect-range here, and then
# handle collect-before in the elif clause
#
VAR_LOG="/var/log"
rm -f ${VAR_LOG_INCLUDE_LIST}

if [ "${STARTDATE_RANGE}" == true ] ; then
    if [ "${ENDDATE_RANGE}" == false ] ; then
        ilog "collecting $VAR_LOG files containing logs after ${STARTDATE}"
        ${COLLECT_DATE} ${STARTDATE} ${ENDDATE} ${VAR_LOG_INCLUDE_LIST} ${DEBUG} ""
    else
        ilog "collecting $VAR_LOG files containing logs between ${STARTDATE} and ${ENDDATE}"
        ${COLLECT_DATE} ${STARTDATE} ${ENDDATE} ${VAR_LOG_INCLUDE_LIST} ${DEBUG} ""
    fi
elif [ "${ENDDATE_RANGE}" == true ] ; then
    STARTDATE="20130101"
    ilog "collecting $VAR_LOG files containing logs before ${ENDDATE}"
    ${COLLECT_DATE} ${STARTDATE} ${ENDDATE} ${VAR_LOG_INCLUDE_LIST} ${DEBUG} ""
else
    ilog "collecting all of $VAR_LOG"
    find $VAR_LOG ! -empty > ${VAR_LOG_INCLUDE_LIST}
fi

# collect the www lighttpd logs if they exist and are not empty.
# note: The lighttpd logs don't have the date in the right place.
#       So there is no point in passing those potentially large
#       files through the dated collect; it would just waste time.
#       Add that path in the raw form afterwards.
WWW_LOG="www/var/log"
if [ -e "/var/${WWW_LOG}" ] ; then
    find "/var/${WWW_LOG}" ! -empty >> ${VAR_LOG_INCLUDE_LIST}
elif [ -e "/${WWW_LOG}" ] ; then
    find "/${WWW_LOG}" ! -empty >> ${VAR_LOG_INCLUDE_LIST}
fi

# Filter any dirs or files in the exclude list from the include list
remove_common_paths "${VAR_LOG_INCLUDE_LIST}" "${VAR_LOG_EXCLUDE}"

# Add VM console.log
for i in /var/lib/nova/instances/*/console.log; do
    if [ -e "$i" ]; then
        tmp=`dirname $i`
        mkdir -p ${COLLECT_NAME_DIR}/$tmp
        cp $i ${COLLECT_NAME_DIR}/$tmp
    fi
done

# Add Rook-ceph logs
for i in /var/lib/ceph/data/rook-ceph/log/*; do
    if [ -e "$i" ]; then
        rook_ceph_log_dir="var/log/rook-ceph"
        mkdir -p ${COLLECT_NAME_DIR}/$rook_ceph_log_dir
        cp $i ${COLLECT_NAME_DIR}/$rook_ceph_log_dir
    fi
done

log_space "before first tar ....:"

(cd ${COLLECT_NAME_DIR} ; ${IONICE_CMD} ${NICE_CMD} ${TAR_CMD} ${COLLECT_NAME_DIR}/${COLLECT_NAME}.tar -T ${VAR_LOG_INCLUDE_LIST} -X ${RUN_EXCLUDE} -X ${ETC_EXCLUDE} -X ${VAR_LOG_EXCLUDE} ${COLLECT_INCLUDE} 2>>${COLLECT_ERROR_LOG} 1>>${COLLECT_ERROR_LOG} )

log_space "after first tar .....:"

(cd ${COLLECT_NAME_DIR} ; ${IONICE_CMD} ${NICE_CMD} ${UNTAR_CMD} ${COLLECT_NAME_DIR}/${COLLECT_NAME}.tar 2>>${COLLECT_ERROR_LOG} 1>>${COLLECT_ERROR_LOG} )

log_space "after first untar ...:"

rm -f ${COLLECT_NAME_DIR}/${COLLECT_NAME}.tar

log_space "after delete tar ....:"

if [ "${SKIP_MASK}" != "true" ]; then
    # Run password masking before final tar
    dlog "running /usr/local/sbin/collect_mask_passwords ${COLLECT_NAME_DIR} ${EXTRA_DIR}"
    /usr/local/sbin/collect_mask_passwords ${COLLECT_NAME_DIR} ${EXTRA_DIR}
    log_space "after passwd masking :"
fi

if [ "${OMIT_CERTS}" != "true" ]; then
    # Collect certificates from the host
    dlog "running /usr/local/sbin/collect_certificates ${EXTRA_DIR}"
    COLLECT_ERROR_LOG="$COLLECT_ERROR_LOG" \
        /usr/local/sbin/collect_certificates ${EXTRA_DIR}
    log_space "after certificates ..:"
fi

(cd ${COLLECT_BASE_DIR} ; ${IONICE_CMD} ${NICE_CMD} ${TAR_ZIP_CMD} ${COLLECT_NAME_DIR}.tgz ${COLLECT_NAME} 2>/dev/null 1>/dev/null )

log_space "after first tarball .:"

mkdir -p ${COLLECT_NAME_DIR}/${FLIGHT_RECORDER_PATH}

(cd /${FLIGHT_RECORDER_PATH} ; ${TAR_ZIP_CMD} ${COLLECT_NAME_DIR}/${FLIGHT_RECORDER_PATH}/${FLIGHT_RECORDER_FILE}.tgz ./${FLIGHT_RECORDER_FILE} 2>>${COLLECT_ERROR_LOG} 1>>${COLLECT_ERROR_LOG})


# save the collect.log file to this host's tarball
cp -a ${COLLECT_ERROR_LOG} ${COLLECT_NAME_DIR}/${COLLECT_LOG}

log_space "with flight data ....:"

(cd ${COLLECT_BASE_DIR} ; ${IONICE_CMD} ${NICE_CMD} ${TAR_ZIP_CMD} ${COLLECT_NAME_DIR}.tgz ${COLLECT_NAME} 2>>${COLLECT_ERROR_LOG} 1>>${COLLECT_ERROR_LOG} )

log_space "after collect .......:"

rm -rf ${COLLECT_NAME_DIR}
rm -f ${VAR_LOG_INCLUDE_LIST}

log_space "after cleanup .......:"

# Check for collect errors.
# Only an out-of-space error is enough to fail this host's collect.
collect_errors "${HOSTNAME}"
RC=${?}

rm -f ${COLLECT_ERROR_LOG}

if [ ${RC} -ne 0 ] ; then
    rm -f ${COLLECT_NAME_DIR}.tgz
    if [ "${REMOTE_HOST}" = true ] ; then
        ilog "${FAIL_OUT_OF_SPACE_REMOTE_STR} ${COLLECT_BASE_DIR}"
    else
        ilog "${FAIL_OUT_OF_SPACE_STR} ${COLLECT_BASE_DIR}"
    fi
else
    ilog "collect of ${COLLECT_NAME_DIR}.tgz succeeded"
    echo "${collect_done}"
fi

dlog "collect_host exit code: ${RC}"
exit ${RC}