
Currently we have all our elements and library files in a top-level directory and install them into <root>/share/diskimage-builder/[elements|lib] (where <root> is either / or the root of a virtualenv).

The problem with this is that editable/development installs (pip -e) do *not* install data_files. Thus we have no canonical location to look for elements -- leading to the various odd things we do, such as a whole bunch of guessing at the top of disk-image-create, and a special test loader in tests/test_elements.py so we can run Python unit tests on those elements that have them.

data_files is really the wrong thing to use for what are essentially assets of the program. data_files installs work well for things like config files, init.d files, or documentation.

By moving the elements under the diskimage_builder package, we always know where they are relative to where we import from. In fact, pkg_resources has an API for this, which we wrap in the new diskimage_builder/paths.py helper [1]. We use this helper to find the correct path in the couple of places that need the base elements dir, and for the paths used to import the library shell functions.

Elements such as svc-map and pkg-map include Python unit tests, which tests/test_elements.py no longer needs to special-case load; they just get found automatically by the normal subunit loader.

I have a follow-on change (I69ca3d26fede0506a6353c077c69f735c8d84d28) to move disk-image-create to a regular Python entry point. Unfortunately, the elements have to move for this to work with setuptools. You'd think a symlink under diskimage_builder/[elements|lib] would work, but it doesn't.

[1] This API handles things like getting files out of .zip archive modules, which we don't do. Essentially, for us it returns __file__.

Change-Id: I5e3e3c97f385b1a4ff2031a161a55b231895df5b
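The pkg_resources lookup described above can be sketched roughly as follows. This is a hedged illustration, not the actual diskimage_builder/paths.py: the `get_path` name and signature are assumptions, and the demo resolves a stdlib package (`email`) rather than diskimage_builder so the snippet runs anywhere.

```python
# Sketch of a paths.py-style helper, assuming pkg_resources (setuptools).
# get_path and its signature are illustrative, not the real dib API.
import os
import pkg_resources


def get_path(package, component):
    """Return the on-disk path of `component` inside `package`.

    For a plain (non-zipped) package this is effectively
    os.path.join(os.path.dirname(package.__file__), component),
    which is why the commit message says it is "essentially
    returning __file__" for us.
    """
    return pkg_resources.resource_filename(package, component)


# Demonstrate with a stdlib package that is always importable:
# email/mime is a real subdirectory of the email package.
mime_dir = get_path("email", "mime")
print(os.path.isdir(mime_dir))
```

The same call with `("diskimage_builder", "elements")` would resolve the elements directory relative to wherever the package was imported from, which is exactly what an editable install needs.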
#!/bin/bash

# dib-lint: disable=safe_sudo

if [ "${DIB_DEBUG_TRACE:-0}" -gt 0 ]; then
    set -x
fi
set -eu
set -o pipefail

# sanity checks
[ -n "$IMAGE_BLOCK_DEVICE" ] || die "Image block device not set"

# execute sfdisk with the given partitioning schema
sudo sfdisk -uS --force "$IMAGE_BLOCK_DEVICE" <<EOF
$DIB_PARTITIONING_SFDISK_SCHEMA
EOF

sudo partprobe "$IMAGE_BLOCK_DEVICE"

# To ensure no race conditions exist from calling partprobe
sudo udevadm settle

# If the partition isn't under /dev/loop*p1, create it with kpartx
DM=
if [ ! -e "${IMAGE_BLOCK_DEVICE}p1" ]; then
    DM=${IMAGE_BLOCK_DEVICE/#\/dev/\/dev\/mapper}
    # If running inside Docker, make our nodes manually, because udev
    # will not be working.
    if [ -f /.dockerenv ]; then
        # kpartx cannot run in sync mode in docker.
        sudo kpartx -av "$TMP_IMAGE_PATH"
        sudo dmsetup --noudevsync mknodes
    else
        sudo kpartx -asv "$TMP_IMAGE_PATH"
    fi
elif [[ "$ARCH" =~ "ppc" ]]; then
    sudo kpartx -asv "$TMP_IMAGE_PATH"
fi

if [ -n "$DM" ]; then
    echo "IMAGE_BLOCK_DEVICE=${DM}p1"
elif [[ "$ARCH" =~ "ppc" ]]; then
    DM=${IMAGE_BLOCK_DEVICE/#\/dev/\/dev\/mapper}
    echo "IMAGE_BLOCK_DEVICE=${DM}p2"
else
    echo "IMAGE_BLOCK_DEVICE=${IMAGE_BLOCK_DEVICE}p1"
fi