
The download scripts now support downloading content by layer. Valid layer options are 'all', 'compiler', 'distro' and 'flock'.

The current .lst and yum files under the centos-mirror-tools directory have been relocated. Generic package dependencies now live under centos-mirror-tools/config/<os>/<layer>/. Lst entries for compilable content (i.e. those that list tarballs or srpms to be compiled within a given repo) were relocated to the other git repos by prior updates. The original .lst files are deleted to make it easier to identify new content during development.

Layer 'all' builds all layers in a single workspace. Its lst files are identical to the previous content, minus the src.rpm and tarball entries. The other layers download only a subset of packages: the minimum required to build that layer. The 'flock' layer carries additional content to satisfy run time requirements as well as build time requirements.

An upper layer does not need to list rpms known to be provided by a lower layer. Instead, the config file 'required_layer_pkgs.cfg' lists urls of lst files describing lower layer build outputs. These build outputs are generated and published by CENGN for each layer. A second layer config file, 'required_layer_iso_inc.cfg', lists image.inc files for lower layer builds. These, too, are generated and published by CENGN for each layer, summarizing the image.inc files found in the individual git repos. Image.inc files inform the build-iso process, listing rpms that provide services and commands that need to be included in the iso. The transitive list of required rpms need not be listed. Finally, the layer config should include a yum.repos.d directory in which supplementary yum repos are defined to pick up CENGN-built content from lower layers.

To allow a designer to do cross-layer building using local sources rather than those provided by CENGN, there are several options. The designer can modify the urls for lower layer build outputs, as found in the .cfg and .repo files within the config directory 'stx-tools/centos-mirror-tools/config/<distro>/<layer-to-build>', directly within the git. Substitute urls can use the file:/// syntax. Just be sure to remove these changes before submitting.

Alternatively, new arguments have been added to download_mirror.sh, generate-cgcs-centos-repo.sh and related commands that override the normal config. The easiest to use is an argument that substitutes a new config directory, replacing stx-tools/centos-mirror-tools/config. The intent is for the designer to do a recursive copy of that directory into a side location, make changes there outside of git, and provide the path to that directory as an extra argument to download_mirror.sh and generate-cgcs-centos-repo.sh. For simplicity only the 'extra' arguments are listed:

download_mirror.sh -C <my-config-dir> \
                   -l <layer> \
                   ...

generate-cgcs-centos-repo.sh --config-dir=<my-config-dir> \
                             --layer=<layer> \
                             ...

populate_downloads.sh --config-dir=<my-config-dir> \
                      --layer=<layer> \
                      ...

These arguments can also be supplied via the environment. For containerized builds, define them in your localrc, e.g.:

export STX_CONFIG_DIR=<my-config-dir>
export LAYER=<layer>

The final alternative is to override things at a more granular level, replacing a single lst file or image.inc file. Here you can replace a single line found in a required_layer_pkgs.cfg or required_layer_iso_inc.cfg file. For example:
Suppose we are doing a flock build and want to modify the content picked up from the distro layer's rt build, and that content delivers a service we want in the iso. For simplicity only the 'extra' arguments are listed:

./download_mirror.sh -l flock \
    -L distro,rt,file:///<my-distro-workspace>/rt/rpmbuild/RPMS/rpm.lst \
    -I distro,std,file:///<my-distro-workspace>/rt/image.inc \
    ...

generate-cgcs-centos-repo.sh --layer=flock \
    --layer-pkg-url=distro,rt,file:///<my-distro-workspace>/rt/rpmbuild/RPMS/rpm.lst \
    --layer-inc-url=distro,std,file:///<my-distro-workspace>/rt/image.inc \
    ...

NOTE: The triplet syntax for a package list url is <lower-layer>,<build-type>,<url-to-rpm.lst>
    lower-layer: 'compiler', 'distro'
    build-type: 'std', 'rt', 'installer'
Also, if 'file:///' syntax is used, a matching change is made to the yum *.repo file. This assumes that the rpm.lst is co-resident with the repodata directory, as is the norm for our build outputs.

NOTE: The triplet syntax for an image inc url is <lower-layer>,<include-type>,<url-to-image.inc>
    lower-layer: 'compiler', 'distro'
    include-type: 'std', 'dev'

A typical user is likely only working in the flock layer on the master branch, and should be content to use the compiler and distro layer outputs from CENGN. That workflow looks like:

1. Sync code for the flock layer

   $ repo init -u https://opendev.org/starlingx/manifest.git -b master -m flock.xml
   $ repo sync

2. Download rpms for the flock layer and populate a local mirror

   $ LOCAL_MIRROR=/import/mirrors/starlingx
   $ cd stx-tools/centos-mirror-tools
   $ ./download_mirror.sh -n -g -c yum.conf.sample -S -l flock
   $ cp -r output/stx-r1/CentOS/pike/* $LOCAL_MIRROR/

3. Prepare a virtual repo and downloads directory for building

   $ cd ../toCOPY
   $ ./generate-cgcs-centos-repo.sh --layer=flock $LOCAL_MIRROR
   $ ./populate_downloads.sh --layer=flock $LOCAL_MIRROR

4. Build rpm packages and the iso

   $ build-pkgs && build-iso && build-helm-charts.sh

Building all layers in a single workspace is still supported, and looks identical to the previous workflow:

1. Sync code

   $ repo init -u https://opendev.org/starlingx/manifest.git -b master
   $ repo sync

2. Download rpms for all layers and populate a local mirror

   $ LOCAL_MIRROR=/import/mirrors/starlingx
   $ cd stx-tools/centos-mirror-tools
   $ ./download_mirror.sh -n -g -c yum.conf.sample -S
   $ cp -r output/stx-r1/CentOS/pike/* $LOCAL_MIRROR/

3. Create the repo named "StxCentos7Distro" for building

   $ cd ../toCOPY
   $ generate-cgcs-centos-repo.sh $LOCAL_MIRROR
   $ populate_downloads.sh $LOCAL_MIRROR

4. Build rpm packages and the iso

   $ build-pkgs && build-iso && build-helm-charts.sh

Only a cross-layer developer should set up two or three copies of the build environment, one per layer. We suggest using a separate shell for each layer, as the various paths (MY_REPO, MY_WORKSPACE, ...) need to be unique.

Shell 1, compiler layer

   $ LOCAL_MIRROR=/import/mirrors/starlingx
   $ LOCAL_CONFIG=<some-dir>/config
   $ MY_REPO_ROOT_DIR=<some-dir>/layer-compiler
   $ MY_REPO=$MY_REPO_ROOT_DIR/cgcs-root
   $ MY_WORKSPACE=$MY_REPO_ROOT_DIR/workspace
   ...
   $ mkdir -p $MY_REPO_ROOT_DIR
   $ cd $MY_REPO_ROOT_DIR
   $ repo init -u https://opendev.org/starlingx/manifest.git -b master -m compiler.xml
   $ repo sync
   $ cd stx-tools/centos-mirror-tools
   $ cp -r config/* $LOCAL_CONFIG
   ... edit urls in *.cfg and *.repo files under $LOCAL_CONFIG ...
   $ ./download_mirror.sh -n -g -c yum.conf.sample -S -C $LOCAL_CONFIG -l compiler
   $ cp -r output/stx-r1/CentOS/pike/* $LOCAL_MIRROR/
   $ cd ../toCOPY
   $ ./generate-cgcs-centos-repo.sh --config-dir=$LOCAL_CONFIG --layer=compiler $LOCAL_MIRROR
   $ ./populate_downloads.sh --config-dir=$LOCAL_CONFIG --layer=compiler $LOCAL_MIRROR
   $ build-pkgs

Shell 2, distro layer

   $ LOCAL_MIRROR=/import/mirrors/starlingx
   $ LOCAL_CONFIG=<some-dir>/config
   $ MY_REPO_ROOT_DIR=<some-dir>/layer-distro
   $ MY_REPO=$MY_REPO_ROOT_DIR/cgcs-root
   $ MY_WORKSPACE=$MY_REPO_ROOT_DIR/workspace
   ...
   $ mkdir -p $MY_REPO_ROOT_DIR
   $ cd $MY_REPO_ROOT_DIR
   $ repo init -u https://opendev.org/starlingx/manifest.git -b master -m distro.xml
   $ repo sync
   $ cd stx-tools/centos-mirror-tools
   $ ./download_mirror.sh -n -g -c yum.conf.sample -S -C $LOCAL_CONFIG -l distro
   $ cp -r output/stx-r1/CentOS/pike/* $LOCAL_MIRROR/
   $ cd ../toCOPY
   $ ./generate-cgcs-centos-repo.sh --config-dir=$LOCAL_CONFIG --layer=distro $LOCAL_MIRROR
   $ ./populate_downloads.sh --config-dir=$LOCAL_CONFIG --layer=distro $LOCAL_MIRROR
   $ build-pkgs

Shell 3, flock layer

   $ LOCAL_MIRROR=/import/mirrors/starlingx
   $ LOCAL_CONFIG=<some-dir>/config
   $ MY_REPO_ROOT_DIR=<some-dir>/layer-flock
   $ MY_REPO=$MY_REPO_ROOT_DIR/cgcs-root
   $ MY_WORKSPACE=$MY_REPO_ROOT_DIR/workspace
   ...
   $ mkdir -p $MY_REPO_ROOT_DIR
   $ cd $MY_REPO_ROOT_DIR
   $ repo init -u https://opendev.org/starlingx/manifest.git -b master -m flock.xml
   $ repo sync
   $ cd stx-tools/centos-mirror-tools
   $ ./download_mirror.sh -n -g -c yum.conf.sample -S -C $LOCAL_CONFIG -l flock
   $ cp -r output/stx-r1/CentOS/pike/* $LOCAL_MIRROR/
   $ cd ../toCOPY
   $ ./generate-cgcs-centos-repo.sh --config-dir=$LOCAL_CONFIG --layer=flock $LOCAL_MIRROR
   $ ./populate_downloads.sh --config-dir=$LOCAL_CONFIG --layer=flock $LOCAL_MIRROR
   $ build-pkgs && build-iso && build-helm-charts.sh

Story: 2006166
Task: 37103
Depends-On: https://review.opendev.org/698756
Depends-On: https://review.opendev.org/700819
Depends-On: https://review.opendev.org/700821
Change-Id: I088020b81f08656e50aa29b5584bbc1dd1378f12
Signed-off-by: Scott Little <scott.little@windriver.com>
StarlingX Build Tools
The StarlingX build process is tightly tied to CentOS in a number of ways, so doing the build inside a Docker container makes this much easier on other flavors of Linux. Basically, the StarlingX ISO image creation flow involves the following general steps:
- Build the StarlingX docker image.
- Package mirror creation.
- Build packages/ISO creation.
Build the StarlingX docker image
The StarlingX docker image handles all steps related to StarlingX ISO creation. This section describes how to customize the docker image building process.
Container build image customization
You can start by customizing values for the StarlingX docker image build process. There are a pair of useful files that help to do this:

- buildrc
- localrc
The buildrc file is a shell script that is used to set the default configuration values. It is contained in the tbuilder repo and should not need to be modified by users, as it reads a localrc file that will not be overwritten by tbuilder updates. This is where users should alter the default settings. This is a sample of a localrc file:
# tbuilder localrc
MYUNAME=<your user name>
PROJECT=starlingx
HOST_PREFIX=$HOME/starlingx/workspace
HOST_MIRROR_DIR=$HOME/starlingx/mirror
This project contains a Makefile that can be used to automate the build lifecycle of a container. The Makefile will read the contents of the buildrc file.
StarlingX Builder container images are tied to your UID, so image names should include your username.
Build image
Once the configuration files have been customized, it is possible to build the docker image. This process is automated by the tb.sh script.
NOTE:
- Do NOT change the UID to be different from the one you have on your host or things will go poorly; i.e. do not change --build-arg MYUID=$(id -u).
- The Dockerfile needs MYUID and MYUNAME defined; the rest of the configuration is copied in via buildrc/localrc (a hand-run equivalent of the image build is sketched below).
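For reference, a hand-run docker build roughly along these lines may be used. This is an illustrative sketch only: the image tag is arbitrary, passing MYUNAME as a build arg is an assumption based on the note above, and the build context is assumed to be the directory containing the Dockerfile.

docker build \
    --build-arg MYUID=$(id -u) \
    --build-arg MYUNAME=${USER} \
    -t ${USER}-stx-builder \
    .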
Package mirror creation
Once the StarlingX docker image has been built, you must create a mirror before creating the ISO image. Basically, a mirror is a directory that contains a series of packages. The packages are organized to be consumed by the ISO creation scripts.
The HOST_MIRROR_DIR variable provides the path to the mirror. The buildrc file sets the value of this variable unless the localrc file has modified it.
The mirror creation involves a set of scripts and configuration files required to download a group of RPMs, SRPMs, source code packages and so forth. These tools live inside the centos-mirror-tools directory.
$ cd centos-mirror-tools
All items included in this directory must be visible inside the container environment, so the container must be run from the same directory where these tools are stored. Basically, we run a container with the previously created StarlingX docker image, using the following configuration:
$ docker run -it -v $(pwd):/localdisk <your_docker_image_name>:<your_image_version> bash
As /localdisk is defined as the workdir of the container, the same folder name should be used to define the volume. The container will start to run and populate the logs and output folders in this directory.
Download packages
Inside the Docker container, enter the following commands to download the required packages to populate the CentOS mirror repository:
$ cd /localdisk && bash download_mirror.sh
Monitor the download of packages until it is complete. When the download is complete, the following message appears:
totally 17 files are downloaded!
step #3: done successfully
IMPORTANT: The following 3 files are just bootstrap versions. Based on them,
the workable images for StarlingX could be generated by running
"update-pxe-network-installer" command after "build-iso"
  - out/stx-r1/CentOS/pike/Binary/LiveOS/squashfs.img
  - out/stx-r1/CentOS/pike/Binary/images/pxeboot/initrd.img
  - out/stx-r1/CentOS/pike/Binary/images/pxeboot/vmlinuz
Verify packages
Verify no missing or failed packages exist:
$ cat logs/_missing_.log
$ cat logs/_failmove_.log
If missing or failed packages do exist, which is usually caused by network instability (or timeouts), you need to download the packages manually. Doing so ensures you get all RPMs listed in rpms_3rdparties.lst/rpms_centos.lst/rpms_centos3rdparties.lst.
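A hedged illustration of fetching one missed RPM by hand and placing it in the output tree before copying to the mirror; the URL and target directory are placeholders, so match the arch subdirectory that download_mirror.sh created for the other RPMs under output/:

$ curl -LO http://<mirror-host>/<path>/<missing-package>.rpm
$ mv <missing-package>.rpm output/stx-r1/CentOS/pike/Binary/<arch>/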
Copy the files to the mirror
After all downloads are complete, copy the downloaded files to the mirror.
$ find ./output -name "*.i686.rpm" | xargs rm -f
$ chown 751:751 -R ./output
$ cp -rf output/stx-r1/ <your_mirror_folder>/
In this case, <your_mirror_folder> can be whatever folder you want to use as the mirror.
Tweaks in the StarlingX build system
NOTE: You do not need to do the following step if you've synced the latest codebase.
Go into the StarlingX build system (i.e. another container that hosts the cgcs build system) and perform the following steps:
Build packages/ISO creation
StarlingX ISO image creation requires some customized packages. In this step, a set of patches and customizations are applied to the source code to create the RPM packages. We have a script called tb.sh that helps with the process.
The tb.sh script is used to manage the run/stop lifecycle of working containers. Copy it to somewhere on your PATH, say $HOME/bin if you have one, or maybe /usr/local/bin.
The basic workflow is to create a working directory for a particular build, say a specific branch or whatever. Copy the buildrc file from the tbuilder repo to your work directory and create a localrc if you need one. The current working directory is assumed to be this work directory for all tb.sh commands. You switch projects by switching directories.
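As a minimal sketch of that workflow (the directory names and the tbuilder checkout path are illustrative, not prescribed):

mkdir -p $HOME/starlingx/my-build && cd $HOME/starlingx/my-build
cp <path-to-tbuilder-repo>/buildrc .
vi localrc        # optional: override defaults such as MYUNAME or HOST_MIRROR_DIR
tb.sh run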
By default LOCALDISK will be placed under the directory pointed to by HOST_PREFIX, which defaults to $HOME/starlingx.
The tb.sh script uses sub-commands to select the operation:

- run - Runs the container in a shell. It will also create LOCALDISK if it does not exist.
- stop - Kills the running shell.
- exec - Starts a shell inside the container.
You should name your running container with your username. tbuilder does this automatically using the USER environment variable.
tb.sh run will create LOCALDISK if it does not already exist before starting the container.
Set the mirror directory to the shared mirror pointed to by HOST_MIRROR_DIR. The mirror is LARGE; if you are on a shared machine, use the shared mirror. For example, you could set the default value for HOST_MIRROR_DIR to /home/starlingx/mirror and share it.
Running the Container
Start the builder container:
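tb.sh run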
or by hand:
docker run -it --rm \
--name ${TC_CONTAINER_NAME} \
--detach \
-v ${LOCALDISK}:${GUEST_LOCALDISK} \
-v ${HOST_MIRROR_DIR}:/import/mirrors:ro \
-v /sys/fs/cgroup:/sys/fs/cgroup:ro \
-v ~/.ssh:/mySSH:ro \
-e "container=docker" \
--security-opt seccomp=unconfined \
${TC_CONTAINER_TAG}
Running a Shell Inside the Container
Since running the container does not return to a shell prompt, the exec into the container must be done from a different shell:
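tb.sh exec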
or by hand:
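For example, assuming the container name set via TC_CONTAINER_NAME above and your username:

docker exec -it --user=${USER} ${TC_CONTAINER_NAME} bash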
Notes:
- The above will result in a running container in systemd mode. It will have NO login.
- I tend to use tmux to keep a group of shells related to the build container
- --user=${USER} is the default username; set MYUNAME in buildrc to change it.
Stop the Container
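tb.sh stop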
or by hand:
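For example, assuming the container name used above:

docker stop ${TC_CONTAINER_NAME}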
What to do to build from WITHIN the container
To make git cloning less painful
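One option, sketched here as an assumption rather than a requirement, is to load the SSH keys mounted read-only at /mySSH (see the docker run command above) into an agent inside the container; the key filename is a placeholder:

eval $(ssh-agent)
ssh-add /mySSH/<your-key-file>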
To start a fresh source tree
Instructions
Initialize the source tree.
cd $MY_REPO_ROOT_DIR
repo init -u https://opendev.org/starlingx/manifest.git -m default.xml
repo sync
To generate cgcs-centos-repo
The cgcs-centos-repo is a set of symbolic links to the packages in the mirror, plus the mock configuration file. These links need to be created if this is the first build or if the mirror has been updated.
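For example (the exact path depends on where your mirror is mounted; /import/mirrors is the mount point used by the container above, and the subdirectory is a placeholder):

generate-cgcs-centos-repo.sh /import/mirrors/<path-to-your-mirror>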
Where the argument to the script is the path of the mirror.
To build all packages:
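build-pkgs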
To generate cgcs-tis-repo:
The cgcs-tis-repo has the dependency information that sequences the build order. To generate or update this information, the following command needs to be executed after building modified or new packages.
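generate-cgcs-tis-repo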
To make an iso:
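build-iso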
First time build
The entire project builds as a bootable image, which means that the resulting ISO needs the boot files (initrd, vmlinuz, etc.) that are also built by this build system. On a first build those files are not yet available, so even if the build is successful, the ISO will be unable to boot.
For more specific instructions on how to solve this issue, please see the README in the installer folder of the metal repository.
WARNING HACK WARNING
Due to a lack of full udev support in the current build container, you need to do the following:
If you see complaints about udisksctl not being able to set up the loop device, or not being able to mount it, you need to make sure that build-tools/update-efiboot-image is linked to the one in /usr/local/bin.
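A sketch of one way to create that link; the source path and the use of sudo are assumptions, so adjust to match your checkout:

sudo ln -sf $MY_REPO/build-tools/update-efiboot-image /usr/local/bin/update-efiboot-image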
Troubleshooting
If you see:
It's a docker bug. Just kill the container and restart it using a different name.
- I usually switch between -centos-builder and -centos-builder2. It's some kind of timeout (bind?) issue.