Flock. Is there some 'lockfile' my jobs can create to prevent other jobs from running?
Code:

```shell
#!/bin/bash
#
## #######################################################################################
## Cat ALL files from each directory in the list below to /dev/null,
## to look for bad sectors.
## IGNORE the directory BAD_SECTORS and all its contents.
## Also, this script will not report bad sectors that are in the actual
## directory structure, as opposed to files that contain bad sectors.
## #######################################################################################
#
for dirToCheck in / /home /var /tmp /mnt/c_drive /mnt/e_drive /data
do
    if [ ! -d "${dirToCheck}" ]
    then
        echo "$0: Expect >${dirToCheck}< to be a directory that exists."
        exit 1
    fi
    # Use { ...; } rather than ( ... ) here: a subshell's exit would not
    # terminate the script, so the failed cd would be silently ignored.
    cd "${dirToCheck}" || { echo "$0: Failed to cd to >${dirToCheck}<"; exit 2; }
    echo "Checking ${dirToCheck} for bad sectors."
    # Send error output of find to /dev/null, as it does traverse the
    # BAD_SECTORS folder (if any) during its search. Ignore those errors.
    #
    find . -mount -type f 2> /dev/null |
    grep -v 'BAD_SECTORS' |
    while IFS= read -r f
    do
        cat "${f}" > /dev/null || echo "${f} has problems"
    done
done
exit 0
```
Code:

```shell
find . -mount -name BAD_SECTORS -prune -o -type f -print0 |
while IFS= read -r -d '' f
do
    cat "${f}" > /dev/null || echo "${f} has problems"
done
```

Well, the easy way (running the script under flock) does acquire the lock, then runs the script. The problem, for me, is that flock seems to assume I want to run a command once the lock is acquired, and to release the lock when that command completes.
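For reference, that "run a command under the lock" mode looks roughly like this; the lock path and command here are only placeholders:

```shell
#!/bin/bash
# flock takes an exclusive lock on the given file, runs the command,
# and releases the lock when the command exits. With -n it fails
# immediately instead of waiting if the lock is already held.
# /tmp/demo.lock is an illustrative path, not from the original post.
flock -n /tmp/demo.lock echo "got the lock, ran the command" \
    || echo "lock was busy, skipped"
```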
What I wanted was one command to acquire the lock and a separate command to release it, so I could do my own work in between.
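One common way to get that separation is to open the lock file on a spare file descriptor and lock the descriptor; the lock is then held until you unlock it or close the descriptor, not for the duration of any one command. A minimal sketch, assuming a made-up lock path:

```shell
#!/bin/bash
# Hypothetical lock file path; adjust to suit.
LOCKFILE=/tmp/backup.lock

# Open the lock file on file descriptor 9, then lock that descriptor.
# The lock lives as long as fd 9 stays open in this shell.
exec 9>"${LOCKFILE}"
if ! flock -n 9; then
    echo "Another instance holds the lock; exiting." >&2
    exit 1
fi

echo "Lock acquired; doing work..."
# ... backup commands go here ...

# Release explicitly; closing fd 9 would also release it.
flock -u 9
exec 9>&-
echo "Lock released."
```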
Write-lock a marker file before running the backup; read-lock the same file before running the other tasks. Read locks are sociable and will happily share that file; a write lock goes all in or all out.

I do not want other jobs to run whilst this backup is happening, for example.
However, I do have other jobs that I am happy to run in parallel: clean-up scripts, etc.
Well, you can take read locks on one file to allow parallel execution of several tasks while the backup (with its write lock) is not running, and a write lock on another file to prevent multiple instances of the same job running at once. But it makes me think you're over-engineering it. Throw in another layer to ensure the daily won't run at the same time as the weekly, and you're definitely over-engineering it.

The running script would prevent other instances of itself from running (say, monthly and weekly), forcing them to wait until the daily has completed.
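The two-file scheme described above can be sketched as follows, from the side of one of the parallel-friendly jobs. All file names here are invented for illustration; the backup script would take `flock -x` (exclusive) on the shared marker file instead of `flock -s`:

```shell
#!/bin/bash
# Hypothetical marker files for the two-lock scheme.
BACKUP_MARKER=/tmp/backup.marker      # write-locked by the backup, read-locked by others
SELF_LOCK=/tmp/cleanup.instance       # write-locked to stop duplicate instances of this job

# 1. Exclusive lock on our own file: prevents a second copy of this
#    script from running at the same time.
exec 8>"${SELF_LOCK}"
flock -n 8 || { echo "Already running; exiting." >&2; exit 1; }

# 2. Shared (read) lock on the backup marker: many clean-up style jobs
#    can hold this at once, but none can while the backup holds its
#    exclusive (write) lock, and the backup waits while any are held.
exec 9>"${BACKUP_MARKER}"
flock -s 9

echo "No backup in progress; running clean-up..."
# ... clean-up work ...

flock -u 9
flock -u 8
```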