Package build deployment with GitLab CI/CD – Part 1

This post is written by Tobias Friede and Christopher Köhler from the Fraunhofer Institute for Wood Research – Wilhelm-Klauditz-Institut WKI in Braunschweig, Germany.

About us

Wood is a traditional raw material with a future. Wood products have outstanding technical properties and exemplary life cycle assessments. In six scientific specialist departments, the Fraunhofer Institute for Wood Research, Wilhelm-Klauditz-Institut WKI in Braunschweig addresses current and future-oriented tasks concerning the use of wood and other renewable resources.

The Institute, founded in 1946 by Dr. Wilhelm Klauditz, is located in Braunschweig, Germany. In 1972 the Institute, which counts among the most significant research institutions for applied wood research in Europe, joined the Fraunhofer-Gesellschaft.

The Fraunhofer WKI works closely and in an application-oriented manner with companies from the wood and furniture industries and their suppliers, as well as with the construction, chemical and automotive industries. Virtually all procedures and materials which result from the research activities of the Institute are used industrially.

The local IT department of six employees is responsible for the IT infrastructure and clients on the joint campus of the Fraunhofer WKI and the Fraunhofer IST in Braunschweig. We manage around 600 clients with opsi, which we have been using since 2012.

Intro

In recent years, we neglected our opsi server and package deployment. The upcoming end-of-support for Microsoft Windows 7 and the "new" Windows 10 brought a big workload for us. We had to create new netboot products, test the opsi UEFI module (which is working pretty well) and rework our existing packages. Until mid-2017, we only wrote the scripts locally on our own devices and copied the files to the workbench, where we built the packages manually. To improve the whole process of creating and maintaining product packages, we took a look at GitLab. With GitLab, and git in particular, we plan to structure our development process. Just a simple branch and let's go. Now we can continue the development of packages that were started by a colleague, and the commit messages let us review the recent changes.
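To give an idea of where this is heading, a build job in GitLab CI/CD can be as small as the following .gitlab-ci.yml sketch. It assumes a runner with the opsi-utils installed and a repository laid out as an opsi product source; the job and stage names are only illustrative, not our final pipeline.

stages:
  - build

build_package:
  stage: build
  script:
    # Build the .opsi package from the product source in this repository
    # (opsi-makeproductfile on opsi 4.0, opsi-makepackage on opsi 4.1).
    - opsi-makepackage
  artifacts:
    paths:
      # Keep the built package so a later stage or a colleague can fetch it.
      - "*.opsi"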


The current state of opsi and Secure Boot

Customers regularly ask for the possibility to install their UEFI Secure Boot clients via opsi. Until now, this was not possible. We started an investigation in the second half of last year. There are various ways to implement a Secure Boot installation via opsi. The first, which is not justifiable, is to deploy a so-called Machine Owner Key (MOK) to every Secure Boot machine by hand. There is currently no way to deploy a MOK to a client in an automated way, which is understandable, as a key added unattended could result in signed malware being trusted. The second way, which we selected, is to use a Microsoft-signed shim which includes a uib public certificate. This avoids the problem of the MOK approach: customers do not have to deploy a key to their systems by hand. With a Microsoft-signed shim, the system verifies the authenticity of the shim file based on the official signature supplied by Microsoft. Every chainloaded binary is then verified against the uib public key embedded inside the shim. We can therefore sign our own binaries and they are accepted by the UEFI Secure Boot mechanism. In the end, we only have to get the shim signed by Microsoft once; everything chainloaded afterwards we can sign ourselves, without relying on third parties. It should be obvious why we chose the second option: it makes deploying a netboot product to a Secure Boot enabled machine much simpler.
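For illustration, signing and verifying a chainloaded binary such as the GRUB image can then be done with the sbsigntools; the key and file names below are placeholders rather than our actual build setup:

# Sign the bootloader with the key whose certificate is embedded in the shim:
sbsign --key uib.key --cert uib.crt --output grubx64.efi.signed grubx64.efi
# Check the signature against the same certificate:
sbverify --cert uib.crt grubx64.efi.signed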

The way to get a Microsoft-signed shim is long and not well documented. Microsoft hosts a semi-automatic platform to sign uploaded files. However, anyone who wants to get a shim signed by Microsoft needs to take additional steps first. Red Hat hosts a shim-review repository on GitHub where the shim to be signed, along with our specific changes, is reviewed by authorized people. Microsoft will only sign the submitted shim if the SHA256 checksums of the uploaded file and the shim-review submission match. While the process requires only a few simple steps, it took months (in our case) to complete because the approval is done by hand. Finally, we have a Microsoft-signed shim binary that we can deploy on UEFI netboot clients without having to deploy a MOK on every client.

Stay tuned to get more information on opsi and Secure Boot, as we will keep you updated.

Short progress overview for Python 3

The last weeks have seen small progress on Python 3. I've been mostly busy with customer work, which always means there is less time for other work. Another big chunk of work has been getting UCS 4.4 to run as a client. This is still ongoing, but the feedback loop on it is rather long, which allows for doing other things while tests are running.

We are continuing on our way to replacing opsiconfd with a new implementation based on Tornado. Things are coming along nicely and I am working on adding those small, special features to the new opsiconfd one after another. The replacement serves requests like a champ and, from a functional standpoint, the API endpoint does what it is supposed to do.

For the upcoming release I'd like to change the API a little bit by replacing backend_searchIdents, getClients_listOfHashes and getClientIds_list with more specialized methods. Read more about this and join the discussion at the forums!

opsi webservice with Tornado progress

Today has been quite pleasant as I've been able to make some important progress with the new webservice.

I've implemented parameter passing for RPC calls as done in opsiconfd. Now I can hook up a Python 2 JSONRPCBackend to the service running with Python 3 and everything works as expected. This means the core functionality we need from the webservice is working properly.
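For context, an RPC call against the webservice is just a JSON-RPC POST to the /rpc endpoint; something along these lines (host and user are made up, -k accepts the self-signed certificate):

curl -k -u adminuser https://opsi.example.org:4447/rpc \
    -d '{"id": 1, "method": "backend_info", "params": []}'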

We still want the new webservice to replace the old one without clients having to alter their behaviour. Another step in this direction is the ability to parse opsiconfd.conf and use its settings. opsiconfd has some options for limiting access that haven't been implemented in the new service yet.

Prior to this I made some changes in python-opsi that fixed authentication through PAM, which I had sadly broken before. Along the way I learned that, of the multiple Python PAM libraries you can find, pam3 on PyPI does not have any releases. Bummer. The world of Python modules appears to be a jungle once you move away from PyPI and try to find the matching package for your Linux distribution. There are quite a few similar-sounding packages, and finding out which version is available under which name isn't a pleasant task.

However, having brought the service this far, it is time for me to create a proper OS package. The other dependencies of opsi-server have already been ported to Python 3 and are already packaged; the only missing package is the webservice. Keep your fingers crossed that all dependencies now match the ones downloaded from PyPI during development!

Introducing opsirc

For many opsi users, opsi-admin is the go-to tool for scripting and even regular work on their system. Usually this means using the -d argument in order not to get asked for a password. But what if you want to go through the webservice instead? Then you don't want to use -d.

One reason for this might be that you want logs of what happened. You can still configure opsi-admin to write logs to a location of your choice, but you have to remember to do so every time.

One thing that will probably bug you when going through the webservice is the requirement to type in a password every time. One option is to use --password, but this means the password may be visible in the process list. Various other means of passing the password to the prompt may help, but they also mean you may have to edit a whole lot of scripts if you ever change the password.

For sure you could whip out your favourite programming language and create scripts for all your tasks, but that might end up less flexible and is sometimes even overkill if one simple call is all you need.

To make your - and our - life a little bit easier, there is now the possibility of providing credentials in a specific file we call opsirc. This file may contain username, password and address, giving you an easy way to benefit from going through the webservice while scripting - or to simply use the webservice by default.

An example might look like this:

username = grace
password = sundown
address = https://opsi.casa.hanson:4447/rpc

Save the contents at ~/.opsi.org/opsirc and opsi-admin (4.1.1.30 or newer) will pick the information up and use it for a connection. You can now run a command like opsi-admin method backend_info without needing to enter a password.

The opsirc file is kept simple so that other tools can make use of it as well. None of the lines are mandatory; a minimal usable example would provide only a password. It is also possible to keep the password in a separate file by using the key password file with the path to that file as its value. This makes it possible to show the opsirc file to others without presenting your password in plain view.
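Such a variant might look like this (the path is only an example); the referenced file then contains nothing but the password:

username = grace
address = https://opsi.casa.hanson:4447/rpc
password file = /home/grace/.opsi.org/opsirc.secret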

I hope this makes everyone's life a little bit easier!

Looking back to 2018

The new year is now a few days old and I'd like to take the opportunity to look back at what happened in my opsi world in 2018.

The year began with getting opsi 4.1 ready for release. Most of the development work was already done, but a proper release needs more than that. Documentation is one of those things: not only did we need to change the documentation in some places, we also worked hard on a reliable and easy-to-follow migration guide. The migration guide is quite large, covering many distributions and mentioning all kinds of changes that could be made, but the steps you actually need to perform come down to just a few.

The release has also led to writing some tools that are in use at our public repositories. Behind the scenes we have tools that help us make releasing new opsi packages easier by automatically collecting the corresponding files and knowing where they need to be placed (localboot/netboot; Linux, Windows or both). Since a lot of the packages run under both opsi 4.0 and opsi 4.1, we wanted them available in both repositories. We solve this with a tool that watches the opsi 4.0 repository directories; once a new file appears, it is linked into the corresponding opsi 4.1 repository. This is accompanied by another tool that watches all our repositories and adjusts the access rights of new files so that retrieving them won't fail because of insufficient rights. This is one of the simpler and still nicer things I'd like to integrate into opsi in the future, because it removes a potential source of error when handling packages.

From my point of view the release of opsi 4.1 went nicely. Problems seemed to be rare and mostly came from diverging from the migration steps. The order was important this time because we changed the way we track which migrations have been run. This will save us some headaches in the future, since we now know which migrations have been run and therefore which are missing.

I enjoy that we can rely on systemd and Python 2.7 with opsi 4.1. This makes a lot of development and maintenance work easier.

The release of opsi 4.1 happened shortly before the great opsiconf. The conference itself was a great joy to me. I got to see new and old faces and talked to many people - and I would have talked to even more if time had allowed it! One thing that I'd like to improve at the next conference is diversity. I think this is something where we really could do better, but I'm unsure how to achieve it. I know that we have a very diverse user base and I'd love to see it represented much more at the next conference!

With the release and conference behind me, it was again time to look towards the future. One of the major points we started working on is the migration to Python 3. Since I've been blogging about this topic constantly, I won't go into detail here. Being able to focus on it wouldn't have been possible without my co-workers giving me the time and taking care of other tasks I'd normally do. I'm very thankful for this, because it allowed for good progress. Lately some more things have come up that I personally had to take care of, and this has shown me even more how valuable that time has been.

Another big change was the overhaul of data collection for opsipxeconfd. It took us quite some time to get to this solution, but I am happy with how it turned out. How long it took is the one thing that doesn't bring a smile to my face, but when you step into new terrain you don't always know what awaits you there.

In between all this there has been the occasional bugfix that led to touching opsi 4.0. This is where the migration from Subversion to git really paid off: maintaining multiple versions and bringing changes from one into the other has become mostly a no-brainer. Because we are already working on a new, larger release of opsi, most repositories have at least three different version branches at the same time. Before we went all in on git we tried several workflows and finally came up with our own, heavily inspired by git flow.

The winter of 2018 saw some work on the monitoring extension that I quite enjoyed, because there was a constant stream of feedback, and even though most changes were rather small, I had the feeling they made a rather large improvement.

And then things slowed down before the year came to an end. And so did the support span for opsi 4.0.

New things are waiting for us in 2019, new things will be coming in 2019. I am looking forward to this and hope you do as well!

Quick tip: Easy weekly backups with systemd

Creating regular backups does not have to be a cumbersome job; it can easily be automated. Instead of using cron, I have settled on using systemd for such tasks. In my eyes the benefit is that I can still see a job's output after it has run, using either systemctl status or journalctl, without having to configure any logging whatsoever. And I have easy access to things like execution time and exit code.
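As a sketch of what this can look like for a weekly opsi backup - the paths and the opsi-backup call are examples to adapt, not a fixed recipe - a small service unit plus a timer unit is all it takes:

# /etc/systemd/system/opsi-backup.service
[Unit]
Description=Create an opsi backup

[Service]
Type=oneshot
ExecStart=/usr/bin/opsi-backup create /var/lib/opsi/backup/weekly.tar.bz2

# /etc/systemd/system/opsi-backup.timer
[Unit]
Description=Run the opsi backup weekly

[Timer]
OnCalendar=weekly
Persistent=true

[Install]
WantedBy=timers.target

After systemctl daemon-reload and systemctl enable --now opsi-backup.timer, the output of each run is available via systemctl status opsi-backup.service or journalctl -u opsi-backup.service.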


First service endpoint running with Python 3

I am really happy with the progress we are making with supporting Python 3. Our internal build server already serves the first packages that run on Python 3 only. There are still errors popping up where we need to port more parts, but that is what I had expected.

The previous weeks saw some feedback-based improvements to the opsi monitoring connector in opsi 4.1. Because I was already working in that codebase and it is comparatively small, I decided that the first endpoint to port to Python 3 would be /monitoring.


New appliance based on Ubuntu 18.04

For a very long time we have been providing a preconfigured opsi server (often referred to as the opsivm) in the form of a virtual machine, which can be run under various virtualisation systems, including VirtualBox and VMware. After the server has been started for the first time, a script is executed to set up the remaining configuration settings. These include, among other things, the name, domain and IP address of the server, the default gateway, the DNS server and the passwords for the preconfigured users. The opsi server is then ready for use and only needs to be filled with the opsi standard packages.

We always try to keep the opsi server up to date, both in terms of the opsi version and the Linux distribution used. The latest version is now based on Ubuntu 18.04 instead of Ubuntu 16.04.

Network configuration

Since Ubuntu 18.04, netplan has been the default mechanism for network configuration, replacing the classic ifupdown setup. After a first attempt to disable netplan and keep the previous configuration procedure, we discarded this approach because it would have required installing additional programs as well as making further adjustments. So the decision was made to use the netplan configuration.

The configuration for netplan can be found in /etc/netplan/config.yaml:

network:
  version: 2
  renderer: networkd
  ethernets:
    enp3s0:
      addresses:
        - 10.10.10.2/24
      gateway4: 10.10.10.1
      nameservers:
          search: [mydomain, otherdomain]
          addresses: [10.10.10.1, 1.1.1.1]

We adjusted our start script so that it uses a template based on the default configuration. The template is patched with the values retrieved during the first startup.

network:
  version: 2
  renderer: networkd
  ethernets:
    enp3s0:
      addresses:
        - @ip@/@cidrnetmask@
      gateway4: @gateway@
      nameservers:
          search: [@domain@]
          addresses: [@dns@]

A particular challenge was the network mask: during the first start it is entered in the traditional dotted format, e.g. 255.255.255.0, but in config.yaml it must be given in CIDR notation appended to the IP address, as in 10.10.10.2/24.
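A small helper along the following lines handles that conversion (a sketch, not the actual code from our start script):

# Convert a dotted netmask such as 255.255.255.0 into a CIDR prefix length.
mask2cidr() {
    local bits=0 octet
    local IFS=.
    for octet in $1; do
        case $octet in
            255) bits=$((bits + 8)) ;;
            254) bits=$((bits + 7)) ;;
            252) bits=$((bits + 6)) ;;
            248) bits=$((bits + 5)) ;;
            240) bits=$((bits + 4)) ;;
            224) bits=$((bits + 3)) ;;
            192) bits=$((bits + 2)) ;;
            128) bits=$((bits + 1)) ;;
            0)   ;;
        esac
    done
    echo "$bits"
}

mask2cidr 255.255.255.0    # prints 24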

Once the template is filled the following call activates the network settings from config.yaml:

netplan apply

Deploying a root shell

Another change concerns the way we make a terminal with root privileges available via a desktop icon.

Most of the tasks on an opsi server can be performed by a dedicated user called adminuser. If system administrator rights are necessary, there is a desktop shortcut to open a shell with root rights. So far this has been done using the gksu tool, which is considered obsolete and is therefore no longer available in the latest versions of Debian and Ubuntu.

For this reason we now start our root shell with the following call:

lxterminal --title "root shell" --command "sudo -s"

Testing the new opsi-appliance

Are you curious? Download the latest version here and then follow the Getting Started.

Have fun