Upcoming: CeBIT 2017

For nearly all of us, this week was dominated by CeBIT, which takes place in Hanover, Germany next week. The technology fair is an important date for the company: we use it not only to talk to users old and new but also to show off new features.

We will be in the Open Source Park in hall 3, booth D35, and we'd love to talk to you!

For presentations we bring along some machines that hold complete demo environments. These environments feature various virtual clients, each with a different operating system, plus a complete Samba 4 AD DC. Of course all of this was set up using opsi! Admittedly it took us some time before we could set up the machines through opsi like this, but we ended up with multiple machines featuring exactly the same installations and data. Setting up another machine like this now takes just a few clicks.

People from the dev team will be present the whole week and there will be a talk about our testing environment on Friday. I've even heard some rumors about opsi swag... ;)

So if you are at CeBIT, come and visit us!

API: new method setupWhereFailed

Sometimes the small things are the ones that make the greatest difference.

So with python-opsi 4.0.7.34 comes a new method, setupWhereFailed, that takes a product ID as parameter and then sets that product to setup on all clients where that product has failed as its action result.

I used this during some development tests and it made things very straightforward.

Here is how you can use it through opsi-admin with the product testproduct:

opsi-admin -d method setupWhereFailed testproduct
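Roughly speaking, the method looks up all product-on-client entries with a failed action result and sets their action request to setup. A minimal Python sketch of that idea (server address and credentials are placeholders, and this is not the actual implementation in python-opsi) could look like this:

# Sketch only: connect to the opsi service and set 'testproduct' to setup
# wherever its last action failed. Address and credentials are placeholders.
from OPSI.Backend.JSONRPC import JSONRPCBackend

backend = JSONRPCBackend(
    address='https://opsiserver.example.org:4447/rpc',
    username='adminuser',
    password='secret'
)
try:
    # One call does the job: backend.setupWhereFailed('testproduct')
    # The manual equivalent looks roughly like this:
    failed = backend.productOnClient_getObjects(
        productId='testproduct',
        actionResult='failed'
    )
    for poc in failed:
        poc.setActionRequest('setup')

    backend.productOnClient_updateObjects(failed)
finally:
    backend.backend_exit()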

Quick tip: show certificate data

A small tip for anyone who needs to check what values their certificate uses. For the certificate renewal in opsi-setup --renew-opsiconfd-cert we want to reuse the values from the existing certificate, and there is some code in python-opsi that we can now leverage for this.

With the following command we are able to load the data from the certificate and print it to our shell:

python -c "import OPSI.Util.Task.Certificate as c; from pprint import pprint; pprint(c.loadConfigurationFromCertificate('/etc/opsi/opsiconfd.pem'))"

On my test system I get an output like the following:

{'commonName': u'nikosserver.test.local',
 'country': u'DE',
 'emailAddress': u'niko@foo.bar',
 'locality': u'Mainz',
 'organization': u'Ev1l C0rp',
 'organizationalUnit': u'Katzen',
 'serialNumber': 12345678901595886952L,
 'state': u'RP'}
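
Since the function returns a plain dictionary (as the output above shows), the values are also easy to reuse in a small script, for example when preparing a renewal. A minimal sketch, using the same certificate path as above:

# Load the certificate configuration and pick out single values from it.
from OPSI.Util.Task.Certificate import loadConfigurationFromCertificate

config = loadConfigurationFromCertificate('/etc/opsi/opsiconfd.pem')
print("Certificate is issued for {0} ({1}, {2})".format(
    config['commonName'], config['organization'], config['country']))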

Improving tests with any()

There are many gems inside the standard library of Python, and I think the slogan batteries included is a great choice. But not all of them are well known, and sometimes functionality gets re-implemented simply because the corresponding stdlib function isn't known.

While refactoring some tests I found code like the following:

found = False
for line in result:
    if line.strip():
        if 'admin users' in line:
            found = True
            break

self.assertTrue(found, 'Missing Admin Users in Share opsi_depot')

That is a lot of code just to check whether a line containing some required text is inside the iterable result.

This brings us to today's gem: any()! If any item of the iterable passed to any() is truthy, the function returns True. If the iterable is empty or no truthy value is found, we get False.

With this knowledge we can shorten the test code to the following:

self.assertTrue(any('admin users' in line for line in result if line.strip()), 'Missing Admin Users in Share opsi_depot')

I even went so far as to throw out the self.assertTrue coming from unittest.TestCase, as this makes a conversion to a different test framework easier and the test is still nice to read.

We can also remove the check whether the line is non-empty (line.strip()), because dropping it cannot introduce a false positive.

Now we end up with this:

assert any('admin users' in line for line in result), 'Missing Admin Users in Share opsi_depot'

And if you ever want to check whether all values in an iterable are truthy, you should check out the function all()!
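
Just as a quick illustration with made-up sample data: any() returns True as soon as one element is truthy, while all() only returns True if every element is.

# Illustration only; the lines below are invented sample data.
lines = ['[opsi_depot]', 'admin users = pcpatch', 'path = /var/lib/opsi/depot']

# any(): at least one line mentions 'admin users'
assert any('admin users' in line for line in lines)

# all(): every line is non-empty...
assert all(line.strip() for line in lines)
# ...but not every line contains a '=' - the section header does not.
assert not all('=' in line for line in lines)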

A Jenkins- and opsi-based test environment

opsi currently runs on 15 different Linux distributions. For a long time we didn't have automated tests; most things were tested manually. Usually Ubuntu and Debian were tested more intensively, because we use these distributions internally.

We then started to build a more complex test environment including functional tests. This test environment uses Jenkins, opsi and some small scripts. Jenkins introduced the pipeline plugin with v2.0; with this plugin it is fairly easy to build complex tests while keeping a good overview. opsi is highly integrated into these tests.

Each test consists of the following build steps:

  • install a Linux distribution
  • install opsi on this fresh machine
  • perform basic tests (logrotate, opsi-convert, etc.)
  • install a Windows netboot product
  • add a client to the new opsi server and install Windows (7, 8.1 and 10)
  • test for basic installed products on the new Windows machine
[Image: /images/testenv.png – graphical stage view of the Jenkins pipeline]

The image above shows the graphical output of each stage of the Jenkins pipeline. Whenever an error occurs, the corresponding step turns red and the pipeline aborts.

To simplify the steps of installing opsi, testing it and then running the Windows installations, we have separate products. l-opsi-server installs opsi on a Linux machine: it detects the distribution and installs the matching opsi packages. The test product checks this machine with opsi commands like opsi-convert, opsi-backup, opsi-set-rights etc. The client product adds a client and installs the Windows netboot products; additionally it fills the installfiles directory and sets this product to setup. A script then starts the virtual client and waits until the netboot product is installed, along with the opsi-client-agent. This step is performed three times, once for each Microsoft Windows version (7, 8.1 and 10).
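
The "wait until the netboot product is installed" part is essentially a polling loop against the opsi backend. A simplified sketch of that idea (the backend object, the IDs and the timings are placeholders, not our actual script):

import time

def wait_for_installation(backend, client_id, product_id,
                          timeout=7200, interval=60):
    # Poll the backend until the product reports 'installed' on the client.
    deadline = time.time() + timeout
    while time.time() < deadline:
        pocs = backend.productOnClient_getObjects(
            clientId=client_id, productId=product_id)
        if pocs and pocs[0].installationStatus == 'installed':
            return
        time.sleep(interval)

    raise RuntimeError("Timed out waiting for {0} on {1}".format(
        product_id, client_id))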

These tests are orchestrated by an opsi server. We have three different stability levels, and therefore three opsi servers with different package versions, matching the current stable, testing and experimental branches. Each opsi server also installs only package versions from its own stability level on its client machines. This way we can check whether our tested products can be released to the next stability level or not.

Please note that our test environment is currently a work in progress and not all released packages are tested with it. We plan to integrate more and more products in the near future.