Work on opsi 4.2 is ongoing, and the switch of Python versions means that we also have to look at the helper scripts that come with our opsi packages. Here I will show a simple pattern I follow to make sure that a package can be used with both opsi 4.1 and 4.2.
First I make sure that the script runs fine with Python 2 and 3.
As the standard library of Python changed, you can use a tool like 2to3 to aid you in making the code work with Python 3.
I won't go into detail about this here because there are a lot of resources on the internet on how to deal with these changes - python3porting is one I like a lot.
Scripts that make use of the OPSI package (provided through python-opsi) usually work without any need to change imports or usage.
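As a minimal sketch, a helper script written to run under both major Python versions might look like this (the output and structure are illustrative, not taken from an actual opsi package):

```python
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Hypothetical helper script written to run under Python 2 and 3 alike;
# the name and output are illustrative only.
from __future__ import print_function, unicode_literals

import sys


def main():
    # print_function and str.format behave identically on both versions
    print("Running under Python {0}.{1}".format(*sys.version_info[:2]))


if __name__ == "__main__":
    main()
```

With the future imports in place, the same file passes both interpreters unchanged, which is what makes the shebang swap described below sufficient.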
Once we have a script that can be run with Python 2 and 3, we change the interpreter it uses.
These helper scripts are executable and specify the interpreter through the shebang line - the first line of the file, which starts with #!.
I will adjust this so our script runs with Python 3 by using the following shebang:

#!/usr/bin/python3
This makes the script run with Python 3 by default. For Python 2 we will modify the file to run with the older version instead.
To achieve this I add the following to the OPSI/postinst of the package:

# Patching scripts to work with opsi 4.1 and opsi 4.2
set +e
python3 -c "import OPSI"
ret=$?
set -e

if [ $ret -eq 0 ]; then
    # Python 3 - opsi 4.2 or later
    echo "Running on opsi 4.2 or later. Nothing to do."
else
    # Python 2 - opsi 4.1
    echo "Running on opsi 4.1. Patching scripts..."
    sed --in-place "s_/usr/bin/python3_/usr/bin/python_" "$CLIENT_DATA_DIR/my_script.py"
fi
Once support for Python 2 is completely dropped we will be able to simply remove this part from the postinst and call it a day.
It's not a secret that we are working on getting a new version of opsi out there.
Previous versions of opsi were mostly developed internally and only released after they had been declared ready for a public release. With opsi 4.2 we are going a different route and have decided to make experimental packages available earlier. For nearly two weeks now we have had public repos for the OS packages on OBS. Be aware that these are still experimental and bugs will occur.
At uib we use these repos for our internal tests, and since we set them up, more and more people have started testing their components against opsi 4.2. This has already helped me tremendously in discovering bugs and problems we need to tackle. We aim to provide an easy way to migrate an up-to-date opsi 4.1 installation to opsi 4.2 with only very few manual steps.
Right now we only support Debian 10 and Ubuntu 18.04 so we can focus on getting opsi 4.2 out of the door. Additional distributions will be added later. On the technical side it is important that they provide Python 3.6 or newer. Lastly, our users' demand for new distributions is important for us to be able to prioritize which distro to add next.
During tests of the new webservice with some more real-life workloads I noticed that whenever a JSONRPCBackend is used to connect to the service each request generates a new session.
But it should instead re-use the session ID provided by the service, so that the session is kept and methods like backend_setOptions work as intended for the currently used session.
Since I've successfully used the session ID with requests and curl before the error had to be somewhere else.
It turns out that the custom HTTP response class we currently use in python-opsi handles headers case-sensitively.
While the old service returned the field as set-cookie, the new one capitalizes it as Set-Cookie.
The solution to this is to follow RFC 7230 and make the headers in our custom class case-insensitive. This should be coming with python-opsi 126.96.36.199.
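A minimal sketch of the idea (not the actual python-opsi implementation): store header field names lower-cased, so lookups succeed regardless of how the server capitalizes them.

```python
class CaseInsensitiveHeaders(dict):
    """Toy header container that treats field names case-insensitively,
    as RFC 7230 requires. Not the actual python-opsi class."""

    def __setitem__(self, key, value):
        dict.__setitem__(self, key.lower(), value)

    def __getitem__(self, key):
        return dict.__getitem__(self, key.lower())

    def get(self, key, default=None):
        return dict.get(self, key.lower(), default)


headers = CaseInsensitiveHeaders()
headers["Set-Cookie"] = "opsi-session=abc123"  # spelling used by the new service

# A lookup using the old lower-case spelling still finds the session cookie
assert headers["set-cookie"] == "opsi-session=abc123"
assert headers.get("SET-COOKIE") == "opsi-session=abc123"
```

With a container like this, the client no longer cares whether the service sends set-cookie or Set-Cookie, which is exactly the bug described above.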
If you are following the development closely you might have noticed that gzip popped up in a few places.
One of the reasons we are moving more towards gzip is that Tornado offers easy support for gzip-compressed data transfer both to and from the server. This makes the move to Tornado faster for us because we do not have to implement deflate compression features for Tornado right away.
The opsiconfd from opsi 4 also supports gzip, and with the latest versions it prefers to return gzip even if a client accepts other encodings as well.
To ease the transition towards gzip, the JSONRPCBackend is now able to use gzip as well.
The parameter deflate and the method setDeflate are now both obsolete.
Instead you should make use of the compression parameter or the setCompression method.
Why not have a look at the documentation?
If you are using the JSONRPCBackend please remember that you can switch to the new methods once you have updated to python-opsi 188.8.131.52 or newer!
As the (de)compression is based around HTTP headers it is simple to make your application work as desired by sending the appropriate headers.
To get a gzip-compressed response from the server, make sure to set your Accept-Encoding HTTP header to gzip. A response that is compressed will have the Content-Encoding header set to gzip.
This post is written by Tobias Friede and Christopher Köhler from the Fraunhofer Institute for Wood Research – Wilhelm-Klauditz-Institut WKI in Braunschweig, Germany.
Wood is a traditional raw material with a future. Wood products have outstanding technical properties and exemplary life cycle assessments. In six scientific specialist departments, the Fraunhofer Institute for Wood Research, Wilhelm-Klauditz-Institut WKI in Braunschweig addresses current and future-oriented tasks concerning the use of wood and other renewable resources.
The Institute, founded in 1946 by Dr. Wilhelm Klauditz, is located in Braunschweig, Germany. In 1972 the Institute, which counts among the most significant research institutions for applied wood research in Europe, joined the Fraunhofer-Gesellschaft.
The Fraunhofer WKI works as closely and as application-oriented with the companies of the wood and furniture industries and the supplier industry as it does with the construction industry, the chemical industry and the automotive industry. Virtually all procedures and materials which result from the research activities of the Institute are used industrially.
The local IT department of six employees is responsible for the IT infrastructure and clients on the joint campus of the Fraunhofer WKI and the Fraunhofer IST in Braunschweig. We manage around 600 clients with opsi, which we have been using since 2012.
In recent years, we neglected our opsi server and package deployment. The upcoming end-of-support for Microsoft Windows 7 and the "new" Windows 10 brought a big workload for us. We had to create new netboot products, test the opsi UEFI module (which is working pretty well) and rework our existing packages. Until mid-2017, we only wrote the scripts locally on our own devices and copied the files to the workbench, where we built the packages manually. To improve the complete process of creating and maintaining product packages, we took a look at GitLab. With GitLab - or rather, Git - we plan to structure our development process. Just a simple branch and let's go. Now we are able to continue the development of packages which were built by another colleague. With the commit messages we are able to review the recent edits.
During the last week I discovered that our new webservice was slowly leaking memory. It was just a little bit after each of my test clients finished, but after hundreds of test runs from my client it became clear that this would be a problem in a production environment.
Customers regularly ask for the possibility to install their UEFI Secure Boot clients via opsi.
Until now this was not possible.
We started an investigation in the second half of last year.
There are various ways to implement a Secure Boot installation via opsi.
The first way, which is not justifiable, is to deploy a so-called machine owner key (MOK) to every Secure Boot machine by hand.
There is currently no way to deploy a MOK to a client in an automated fashion, which is understandable, as a key added unattended could result in harmful signed malware.
The second way, which we selected, is to use a Microsoft-signed shim which includes a uib public certificate. With this choice we do not have the problem of the previously mentioned MOK approach: customers do not have to deploy a key to their systems by hand.
With a Microsoft-signed shim, the system verifies the authenticity of the shim file based on the official signature supplied by Microsoft. Every chainloaded binary is then verified against the uib public key embedded inside the shim. Therefore we can sign our own binaries and they get accepted by the UEFI Secure Boot mechanism.
In the end we only have to get the shim signed by Microsoft once; afterwards we can sign everything that is chainloaded ourselves and don't have to rely on other parties.
It is obvious why we decided on the second possibility: it adds a lot of simplicity to deploying a netboot product to a Secure Boot enabled machine.
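The trust chain described above can be sketched as a toy model. To keep it short, keyed hashes stand in for the X.509 certificates and Authenticode signatures that real UEFI Secure Boot uses; all names and values are illustrative.

```python
import hashlib

# Toy stand-ins for real signatures: sign(key, data) = SHA-256(key || data).
# Real Secure Boot uses Authenticode signatures over X.509 certificates.
def toy_sign(key, data):
    return hashlib.sha256(key + data).hexdigest()

def toy_verify(key, data, signature):
    return toy_sign(key, data) == signature

microsoft_key = b"microsoft-root"   # trusted by the UEFI firmware
uib_key = b"uib-cert"               # embedded inside the shim

shim = b"shim binary" + uib_key     # the shim carries the uib certificate
shim_sig = toy_sign(microsoft_key, shim)   # signed once by Microsoft

bootloader = b"chainloaded bootloader"
bootloader_sig = toy_sign(uib_key, bootloader)  # uib signs its own binaries

# Firmware trusts Microsoft's signature on the shim; the shim then trusts
# anything signed with the embedded uib key.
assert toy_verify(microsoft_key, shim, shim_sig)
assert toy_verify(uib_key, bootloader, bootloader_sig)
```

This is why a single Microsoft signature is enough: every later link in the chain is validated against the key we control.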
The way to get a Microsoft signed shim is long and not well documented.
Microsoft hosts a semi-automatic platform to sign uploaded files. However, if one wants to get a shim signed by Microsoft, additional steps are needed.
RedHat hosts a shim-review repository on GitHub, where the shim to be signed is reviewed by authorized people along with our specific changes. Microsoft will sign the submitted shim if the SHA256 checksums of the file and the shim-review submission match.
While it requires only a few simple steps, it took months (in our case) to complete because approval is done by hand.
Finally we have a Microsoft signed shim binary we can deploy on UEFI netboot clients without the need of deploying a MOK on every client.
Stay tuned to get more information on opsi and Secure Boot, as we will keep you updated.
The last few weeks have seen small progress for Python 3. I've been mostly busy with customer work, which always means there is less time for other tasks. Another big chunk of work has been getting UCS 4.4 to run as a client. This is still ongoing, but the feedback loop is rather long, which allows for doing other things while tests are running.
We are continuing on our way for replacing opsiconfd with a new implementation based on Tornado. Things are coming along nicely and I am working on adding those small, special features to opsiconfd one after another. The replacement serves requests like a champ and from a functional standpoint the API endpoint does what it is supposed to do.
For the upcoming release I'd like to change the API a little bit by replacing getClientIds_list with more specialized methods.
Read more about this and join the discussion at the forums!
Today has been quite pleasant as I've been able to make some important progress with the new webservice.
I've implemented parameter passing for RPC calls as done in opsiconfd. Now I can hook up a Python 2 JSONRPCBackend to the service running under Python 3 and everything works as expected. This means the core functionality we need from the webservice is working properly.
We still want the new webservice to replace the old one without clients having to alter their behaviour.
Another step in this direction is the ability to parse opsiconfd.conf and use the given settings.
opsiconfd has some options for limiting access that haven't yet been implemented.
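As an illustration only: reading settings from an INI-style configuration file with the standard library. The real opsiconfd.conf layout and option names may differ; "port" and "interface" below are assumptions, not actual opsiconfd options.

```python
from configparser import ConfigParser

# Illustrative stand-in for an opsiconfd.conf; the real file's
# sections and option names may differ from this sketch.
SAMPLE_CONF = """
[global]
port = 4447
interface = 0.0.0.0
"""

config = ConfigParser()
config.read_string(SAMPLE_CONF)

port = config.getint("global", "port")
interface = config.get("global", "interface")

assert (port, interface) == (4447, "0.0.0.0")
```

Reading the existing file keeps old installations working: administrators upgrading to the new service don't have to migrate their configuration by hand.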
Prior to this I made some changes in python-opsi that fixed authentication through PAM, which I sadly broke before. Along the way I learned that, of the multiple Python PAM libraries you can find, pam3 on PyPI does not have any releases. Bummer. The world of Python modules appears as a jungle to me once you move away from PyPI and try to find the matching package for your Linux distro. There are quite a few similar-sounding packages, but finding out which version is available under which name isn't a pleasant task.
However, getting the service this far means that it is time for me to create a proper OS package.
Other dependencies of opsi-server have already been ported to Python 3 and packaged, with the only missing package being the webservice.
Keep your fingers crossed that all dependencies are now matching the ones downloaded from PyPI during development!