- I wanted a STOMP client that would be easy to use without getting in the way. For example, spawning listener threads from within the class (as stomp.py does) is easy; however, if you want to exercise some more control over how received messages are handled, it's in the way.
- I wanted a STOMP client that would support a publish-only usage model. Most of my needs to interact with a stomp server from Python have involved writing clients that need to just push messages onto topics/queues (e.g. from python WSGI web applications).
- I wanted to also provide a helpful set of STOMP library utilities for other projects. Specifically, I wanted to flesh out & clean up the Frame classes that I had started implementing for CoilMQ and add in my fixed version of stomper's FrameBuffer that would traffic natively in frames.
- I really wanted a better-documented, better-tested, and generally cleaner, more Pythonic (PEP-8-compliant) codebase.
- And finally, this was really another opportunity to learn more about sockets and multi-threaded application design (& testing) in Python.
Unlike HTTP, the STOMP protocol is not a serial request-response protocol. This actually makes it non-trivial to write a client that can both send and receive messages, since you can't simply sock.send() a frame and then sock.recv() a frame and expect it to be the response to your sent frame. Of course, this makes sense, since a subscribing client also needs to be able to receive message frames from the server (without their being in response to any request). So to have a client that can both send and receive messages, there needs to be some sort of receiver loop constantly running.
I chose to take inspiration from the way that stompy does this and use queues. My approach was a little simpler in that there is simply a listener loop (expected to be run in its own thread) that enqueues any received frames on the appropriate queue (e.g. message frames go on the message queue, receipt frames on the receipt queue, etc.). Very simple, but seems quite effective.
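The queue-per-frame-type dispatch described above might be sketched like this. This is only an illustration, not stompclient's actual implementation: the `Frame` class and `FrameRouter` names here are hypothetical stand-ins.

```python
import queue

# Hypothetical minimal frame; stompclient's real Frame class differs.
class Frame:
    def __init__(self, command, headers=None, body=''):
        self.command = command
        self.headers = headers or {}
        self.body = body

class FrameRouter:
    """Enqueues received frames on a per-type queue, as the listener loop would."""
    def __init__(self):
        self.message_queue = queue.Queue()
        self.receipt_queue = queue.Queue()
        self.error_queue = queue.Queue()

    def dispatch(self, frame):
        # MESSAGE frames go on the message queue, RECEIPT frames on the
        # receipt queue, and anything else (e.g. ERROR) on the error queue.
        if frame.command == 'MESSAGE':
            self.message_queue.put(frame)
        elif frame.command == 'RECEIPT':
            self.receipt_queue.put(frame)
        else:
            self.error_queue.put(frame)

router = FrameRouter()
router.dispatch(Frame('MESSAGE', {'destination': '/queue/example'}, 'hello'))
router.dispatch(Frame('RECEIPT', {'receipt-id': '123'}))
print(router.message_queue.get().body)  # -> hello
```

A real listener loop would just read frames off the socket in a `while` loop and call something like `dispatch()` on each one.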
This approach is also flexible, since it means you could create a pool of worker threads that all pull from the appropriate queue(s) to process messages concurrently. (It probably wouldn't be too difficult to also provide a multiprocessing implementation for the worker pool.)
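A worker pool draining such a queue might look like the sketch below. All names are illustrative (not part of stompclient's API), and the `upper()` call is a stand-in for real message processing.

```python
import queue
import threading

message_queue = queue.Queue()
results = []
results_lock = threading.Lock()

def worker():
    """Pull message bodies off the shared queue until a None sentinel arrives."""
    while True:
        body = message_queue.get()
        if body is None:           # sentinel: shut this worker down
            message_queue.task_done()
            break
        with results_lock:
            results.append(body.upper())  # stand-in for real processing
        message_queue.task_done()

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()

for msg in ['a', 'b', 'c', 'd']:
    message_queue.put(msg)
message_queue.join()               # block until every message is processed

for _ in threads:
    message_queue.put(None)        # one sentinel per worker
for t in threads:
    t.join()

print(sorted(results))             # -> ['A', 'B', 'C', 'D']
```

Because `queue.Queue` is thread-safe, the workers need no extra coordination to pull items; the lock only protects the shared results list.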
Here's the simple publish-only example that pushes some binary content (a pickled python object) onto a queue:
```python
import pickle
from datetime import datetime

from stompclient import PublishClient

client = PublishClient('127.0.0.1', 61613)
client.connect()
payload = {'key': 'value', 'counter': 0, 'list': ['a', 'b', 'c'], 'date': datetime.now()}
client.send('/queue/example', pickle.dumps(payload, protocol=pickle.HIGHEST_PROTOCOL))
client.disconnect()
```

Head on over to the project website for more examples, documentation, and download links.