I am really happy with the progress we are making on supporting Python 3. Our internal build server already serves the first packages that run on Python 3 only. There are still errors popping up where we need to port parts, but that is what I expected.
The previous weeks saw some feedback-based improvements to the opsi monitoring connector in opsi 4.1.
Because I was already working in the codebase and it is comparatively small, I decided that the monitoring endpoint would be the first one I port to Python 3.
Yesterday I made the first successful call to the new monitoring endpoint.
The new endpoint retrieved the data through JSON-RPC from our live instance in the background and I was able to compare the results.
They were the same in my tests - the API is stable and no changes had to be made on the client side.
My experience with Tornado has been very pleasant so far.
Thanks to async and await in Python 3 (more information about this here), the code ends up very readable and is, in my opinion, easy to understand.
Porting the endpoint handler from Twisted to Tornado was easy.
From blocking to async
To leverage the asynchronous potential, calling blocking code should be avoided.
The current backend port in python3-opsi leaves the methods in the existing (blocking) style, but I wanted to call the backend methods asynchronously.
Looking for a solution, I often found the hint that you could simply prefix the methods with async or use a package like aioify to decorate the methods.
Touching hundreds of methods is not only a lot of work, it would also result in breaking all existing code (and user scripts).
This did not seem very reasonable to me.
Simply changing the method definitions to start with async is easy, but with probably blocking code left inside, there isn't much of a benefit until you go through all the methods and change their internals as well.
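The problem can be demonstrated with a short sketch. The method name below is only a stand-in for illustration; the point is that marking a method async while leaving a blocking call inside still stalls the whole event loop:

```python
import asyncio
import time


async def get_objects():  # hypothetical backend method, made "async" in name only
    time.sleep(1)  # blocking call left inside - this stalls the event loop
    return ["client1.example.org"]


async def main():
    start = time.monotonic()
    # Even though both calls are awaitable now, the blocking sleep inside
    # forces them to run one after the other instead of concurrently.
    await asyncio.gather(get_objects(), get_objects())
    return time.monotonic() - start


elapsed = asyncio.run(main())
print(round(elapsed))  # ~2 seconds: the calls ran sequentially, not concurrently
```

With truly asynchronous internals the two calls would overlap and finish in about one second.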
I still want to change all method internals to use asynchronous code by default, but that is work for another time.
So how would I end up being able to make use of awaitable code? I came up with a little wrapper for our existing backends. The wrapper is applied at runtime and wraps every public method of an existing backend so that it becomes awaitable. This makes it possible to use the backend in an asynchronous fashion where required. To avoid blocking when such a wrapped method is called, the execution of the method on the actual backend is handed to a ThreadPoolExecutor, which performs the execution in the background.
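The idea can be sketched roughly like this. The class and method names are assumptions for illustration, not the actual opsi implementation:

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor
from functools import partial


class AsyncBackendWrapper:
    """Wraps the public methods of a blocking backend so they are awaitable.

    This is a simplified sketch of the technique described above: each call
    is handed to a ThreadPoolExecutor so the event loop is not blocked.
    """

    def __init__(self, backend, max_workers=4):
        self._backend = backend
        self._executor = ThreadPoolExecutor(max_workers=max_workers)

    def __getattr__(self, name):
        if name.startswith("_"):
            raise AttributeError(name)
        method = getattr(self._backend, name)

        async def awaitable(*args, **kwargs):
            loop = asyncio.get_running_loop()
            # Execute the blocking method in a worker thread and await the result.
            return await loop.run_in_executor(
                self._executor, partial(method, *args, **kwargs)
            )

        return awaitable


# Stand-in for a blocking backend, just for demonstration:
class DummyBackend:
    def host_getObjects(self):
        return ["client1.example.org"]


ab = AsyncBackendWrapper(DummyBackend())
result = asyncio.run(ab.host_getObjects())
print(result)  # ['client1.example.org']
```

Because the wrapping happens at runtime via attribute lookup, none of the hundreds of existing backend methods have to be touched, and blocking callers keep working against the unwrapped backend.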
How does it look in action?
$ python3
Python 3.5.3 (default, Sep 27 2018, 17:25:39)
[GCC 6.3.0 20170516] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from OPSI.Backend.JSONRPC import JSONRPCBackend
>>> b = JSONRPCBackend('https://niko41.uib.local:4447/rpc', username='moriarty', password='nottodaysherlock')
>>> b.host_getObjects()
[...(truncated)..., <OpsiClient(id='zulip.uib.local')>]
>>> import asyncio
>>> loop = asyncio.get_event_loop()
>>> loop.run_until_complete(b.host_getObjects())
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python3.5/asyncio/base_events.py", line 446, in run_until_complete
    future = tasks.ensure_future(future, loop=self)
  File "/usr/lib/python3.5/asyncio/tasks.py", line 567, in ensure_future
    raise TypeError('A Future, a coroutine or an awaitable is required')
TypeError: A Future, a coroutine or an awaitable is required
>>> from OPSI.Backend._Async import AsyncBackendWrapper
>>> ab = AsyncBackendWrapper(b)
>>> loop.run_until_complete(ab.host_getObjects())
[...(truncated)..., <OpsiClient(id='zulip.uib.local')>]
>>> fut = ab.host_getObjects()
>>> type(fut)
<class 'coroutine'>
>>> loop.run_until_complete(fut)
[...(truncated)..., <OpsiClient(id='zulip.uib.local')>]
Admittedly, this is a bit cumbersome for only a single method call. Since Python 3.7 there is the shortcut asyncio.run. The real benefit comes once multiple calls are made and you no longer have to submit each one by hand. And this is exactly where we are going with this in our webservice.
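To illustrate what multiple concurrent calls buy you, here is a small sketch using asyncio.gather. The two coroutines below are stand-ins for wrapped backend methods that each take about a second:

```python
import asyncio
import time


# Hypothetical stand-ins for wrapped backend calls, each taking ~1 second:
async def fetch_hosts():
    await asyncio.sleep(1)
    return ["client1"]


async def fetch_config_states():
    await asyncio.sleep(1)
    return ["state1"]


async def main():
    start = time.monotonic()
    # Both requests run concurrently; gather collects the results in order.
    hosts, states = await asyncio.gather(fetch_hosts(), fetch_config_states())
    return hosts, states, time.monotonic() - start


hosts, states, elapsed = asyncio.run(main())
print(round(elapsed))  # ~1 second instead of 2: the calls overlapped
```

In a webservice handler this pattern lets several backend queries run side by side while the event loop stays free to serve other requests.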