Network-based IPC using WAMP protocol

Most Linux-based distributions come with DBus pre-installed, which provides language-independent IPC on such systems. DBus is great and has been used extensively for a long time. It is, however, largely designed for use on a single computer, where apps running locally talk to each other. It can be used over TCP, but that may not be suitable, for reasons I state below.

In modern times, and especially with the advent of smartphones, many new app communication paradigms have appeared. With IoT being the new cool kid in town, it is increasingly a requirement for different apps running on a premises to be able to “talk” to each other. The DBus daemon can be accessed over TCP, but a client running in a web browser cannot talk to it, because browsers do not provide direct access to TCP sockets, so writing a browser-side DBus client library isn’t possible. For Android and iOS, talking to a DBus daemon running on a PC would need new implementations.

Much of the above effort could be avoided if we used a more general-purpose protocol: one that supports PubSub and RPCs, is secure (supports end-to-end encryption), is cross-platform and has an ever-growing ecosystem of client libraries. WAMP is one such protocol. It can run over WebSocket, giving “free” browser support, and it also runs over RawSocket (custom framing atop TCP). In principle, WAMP can run over any bi-directional, reliable transport, so the protocol’s future prospects look quite good.
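To give a feel for what this looks like in code, here is a minimal client sketch using autobahn-python; the router URL, realm, topic and procedure names are placeholders for illustration, not anything standardized:

from autobahn.asyncio.component import Component, run

component = Component(
    transports="ws://localhost:8080/ws",  # a WAMP router, e.g. Crossbar
    realm="realm1",
)

@component.on_join
async def joined(session, details):
    # PubSub: subscribe to a topic
    def on_event(msg):
        print("event received:", msg)
    await session.subscribe(on_event, "com.example.on_event")

    # RPC: call a procedure registered by some other client
    result = await session.call("com.example.add2", 2, 3)
    print("RPC result:", result)

if __name__ == "__main__":
    run([component])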

To that end, I have been working on a pet project for the last couple of months, called DeskConn. It uses Crossbar as the WAMP router (the equivalent of the DBus daemon) and couples it with an authentication scheme and service discovery based on python-zeroconf, allowing the daemon running on the desktop/RPi to be discoverable by clients on the local network (WiFi, LAN or other interfaces).
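Client-side discovery then boils down to a standard zeroconf browse. Here is a sketch with python-zeroconf; note that the service type “_deskconn._tcp.local.” is my assumption for illustration, the actual type is defined by the deskconn daemon:

from zeroconf import ServiceBrowser, Zeroconf

class DeskConnListener:
    def add_service(self, zc, type_, name):
        info = zc.get_service_info(type_, name)
        if info:
            print(f"found {name} at {info.parsed_addresses()}, port {info.port}")

    def remove_service(self, zc, type_, name):
        print(f"{name} disappeared")

    def update_service(self, zc, type_, name):
        pass  # required by recent python-zeroconf versions

zc = Zeroconf()
# "_deskconn._tcp.local." is a hypothetical service type for this sketch
browser = ServiceBrowser(zc, "_deskconn._tcp.local.", DeskConnListener())
try:
    input("Browsing for DeskConn daemons, press enter to stop...\n")
finally:
    zc.close()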

With the network layer figured out, writing applications on top is pretty straightforward and can be done with very little code. I’ll come up with some example code in different programming languages in a later blog post. For the curious, the umbrella deskconn project has quite a few sub-projects, meant to run in different environments: https://github.com/deskconn/

Note: I am a core developer at Crossbar.io GmbH, the company that funds the development of Crossbar (the router) and of a few WAMP client library implementations in Java, Python, JS/Node and C++, under the Autobahn project. I am the maintainer of autobahn-java and autobahn-js. DeskConn is a personal project that I have been working on in my free time.

A wider list of implementations, mostly contributed by the community, can be seen here: https://crossbar.io/about/Supported-Languages

My first-ever FOSDEM; it was awesome

I came back from FOSDEM on Tuesday but got busy with my day job at Crossbar.io. Finally, today, when I got around to writing something, I found my Blogspot-based site really uncomfortable to navigate and manage, so I spent the last few hours moving my blog over to WordPress. I also had to update the Planet Ubuntu bzr repository for my new blog to show up on Planet Ubuntu.

Having been part of the Ubuntu community, I have had the chance to travel to different software events, mostly Ubuntu-specific ones. While at Canonical, I travelled for the Ubuntu Developer Summit and for internal Canonical sprints. After the Canonical layoffs in 2017, I didn’t really travel much for conferences, though last year, while visiting Crossbar.io GmbH’s HQ in Erlangen, Germany, I planned my trip so that it coincided with UbuCon Europe in Sintra. That was a great event and I got to meet really great people; the social part of it was on par with, or even better than, the talks and workshops.

So when FOSDEM’s dates were announced, I was yet again excited to travel to a community event. Since it is known as the biggest FOSS conference in Europe, and lots of super-intelligent people from the wider open-source community attend it every year, I knew I had to be there. To that end I applied to the Ubuntu community donations fund and, guess what, I got the nod. The rest is just details.

Talks were great

I attended lots of great talks (lightning talks as well). One of the great, “must watch” ones was from James Bottomley of IBM, titled “The Selfish Contributor Explained”. According to him, to unleash the true potential of an employee, companies should make an effort to figure out what interests them: if a developer is working on something they enjoy, they will likely go out of their way to make things work better.

Looking to the future, something that affects us all is how the web will transform in the coming years; on that topic, Daniel Stenberg (the creator of curl) gave an informative talk about HTTP/3 and the problems it solves. Of course, much of the “heavy lifting” is done by the new underlying transport, QUIC (thanks, Google, for the earlier work).

Behold, HTTP/3 is coming

I gave a talk

DeskConn is a project that I have been working on in my free time for a while, and I wanted to introduce it to a wider audience, so I gave a brief talk on what could potentially be done with it. The DeskConn project enables network-based IPC, allowing different apps, written in different languages, to communicate with each other. Since the technology is based on WebSocket/WAMP/Zeroconf, a client could be written in any programming language that has a WAMP library.

For simplicity’s sake: it is a technology that could enable the creation of projects like KDE Connect, but ones that run on all platforms: Windows, macOS and Linux.

My talk about the DeskConn project

Met old colleagues and friends

FOSDEM gave me the opportunity to meet lots of great people that I truly admire in the Ubuntu community, people I hadn’t seen or talked to for more than 3 years.

I met quite a few people from the Ubuntu desktop team, and it was refreshing to hear how hard they are working on making Ubuntu 20.04 a success. Olivier Tilloy and I had a short discussion about the browser maintenance he does to ensure we have the latest and greatest versions of our two favorite browsers (Firefox and Chromium). Jibel told me about the ZFS installation feature work that he and Didier have been doing; I hope we’ll be able to use that technology in “production” soon.

from left to right: Martin Pitt (from Red Hat), Iain Lane, Jean-Baptiste Lallement and I

Conclusion

My first FOSDEM was a great learning experience; navigating around the ULB campus is a challenge of sorts, but it was all worth it. I’d definitely go back to FOSDEM given the chance, maybe next year 😉

Using Your Ubuntu Server As Telegram Proxy (MTProxy Snap)

Telegram is great, especially because it helps one stay away from the distractions that WhatsApp brings. It is unfortunately blocked in Pakistan, for unknown reasons, though likely not related to censorship, given that WhatsApp, Signal and every other messaging app work just fine.

The good news is that Telegram upstream has its own proxy protocol and an implementation (https://github.com/TelegramMessenger/MTProxy), which seems to work well. I published MTProxy as a snap (https://snapcraft.io/mtproxy) yesterday, so I thought it would make sense to share how others can set up their own proxy. This guide will, of course, serve as a future reference for me as well.

So let’s get started by installing MTProxy:

snap install mtproxy

mtproxy drops privileges (if run as root) by calling setuid(), something a strictly confined snap does not allow for security reasons, so my workaround was to create a new user on the server and run the proxy as that user, so that mtproxy does not try to drop privileges.

So let’s set up a new user and download the proxy configuration from the Telegram servers (more details: https://github.com/TelegramMessenger/MTProxy#running):

sudo useradd mtproxy -m
sudo su - mtproxy
mkdir proxyconfig
curl -s https://core.telegram.org/getProxySecret -o proxyconfig/proxy-secret
curl -s https://core.telegram.org/getProxyConfig -o proxyconfig/proxy-multi.conf

Now let’s exit the mtproxy user’s shell and create a secret, to be used later by the Telegram client apps:

exit
head -c 16 /dev/urandom | xxd -ps

Now we create a systemd service so that our proxy runs in the background and starts automatically whenever the server is restarted. Open the file below for editing with nano (or the editor of your choice) and paste in the configuration underneath.
Note: you must replace “my_secret” in the config below with the random string generated in the previous step.

sudo nano /etc/systemd/system/mtproxy.service
[Unit]
Description=MTProxy
After=network.target

[Service]
Type=simple
User=mtproxy
WorkingDirectory=/home/mtproxy/proxyconfig
ExecStart=/snap/bin/mtproxy -u mtproxy -p 8888 -H 8000 -S my_secret --aes-pwd proxy-secret proxy-multi.conf -M 1
Restart=on-failure

[Install]
WantedBy=multi-user.target

Let’s now enable and start the service:

sudo systemctl enable mtproxy
sudo systemctl start mtproxy

That’s it, we are done. You now have a Telegram proxy set up and (hopefully) working.
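To point a Telegram app at the proxy, you should be able to share a link in Telegram’s standard proxy-link format. Note that 8000 is the client-facing port we passed via -H above (8888, passed via -p, is only the local stats port); replace the placeholders with your server’s IP and the secret you generated earlier:

tg://proxy?server=<server_ip>&port=8000&secret=<my_secret>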

NOTE: This was only tested on a DigitalOcean droplet, so your mileage may vary.

Control GPIO pins on a RaspberryPi 3 running Ubuntu Core 18, remotely (part 1/4)

Ubuntu Core 18 is out, and one of the features it packs is a set of snapd interfaces for accessing the GPIO pins of a Raspberry Pi 2/3 from a fully confined snap. This enables one to simply flash Ubuntu Core 18 onto a microSD card, boot, install a snap (which I author), connect a few interfaces and start controlling relays attached to a Raspberry Pi 2/3.

If you don’t have Ubuntu Core 18 installed already, you can find the install instructions here.

To get started (assuming you have Ubuntu Core 18 installed and working SSH access to the Pi), you need to install a snap that exposes the said functionality over the (local) network:

snap install pigpio

The above command installs the pigpio server, which automatically starts in the background. The server can take as much as 30 seconds to start; you have been warned.

We also need to allow the newly installed snap to access a few GPIO pins:

snap connect pigpio:gpio pi:bcm-gpio-4
snap connect pigpio:gpio pi:bcm-gpio-5
snap connect pigpio:gpio pi:bcm-gpio-6
snap connect pigpio:gpio pi:bcm-gpio-12
snap connect pigpio:gpio pi:bcm-gpio-13
snap connect pigpio:gpio pi:bcm-gpio-17
snap connect pigpio:gpio pi:bcm-gpio-18
snap connect pigpio:gpio pi:bcm-gpio-19
snap connect pigpio:gpio pi:bcm-gpio-20
snap connect pigpio:gpio pi:bcm-gpio-21
snap connect pigpio:gpio pi:bcm-gpio-22
snap connect pigpio:gpio pi:bcm-gpio-23
snap connect pigpio:gpio pi:bcm-gpio-24
snap connect pigpio:gpio pi:bcm-gpio-26

The above pin numbers might look strange, but if you read a bit about the Raspberry Pi 3’s GPIO pinout, you will realize I only selected the “basic” pins; you are, however, free to connect all the GPIO pin interfaces.

The pigpio snap that we installed above exposes the GPIO functionality over the WAMP protocol and over HTTP. The HTTP implementation is very basic: it allows one to “turn on” and “turn off” a GPIO pin and to get the current state(s) of the pins.

Note: the commands below assume you have httpie installed (snap install http).

To get the state of all pins

http POST http://raspberry_pi_ip:5021/call procedure=io.crossbar.pigpio-wamp.get_states

If we only want the state of a specific pin

http POST http://raspberry_pi_ip:5021/call procedure=io.crossbar.pigpio-wamp.get_state args:='[4]'

To “turn on” a pin

http POST http://raspberry_pi_ip:5021/call procedure=io.crossbar.pigpio-wamp.turn_on args:='[4]'

To “turn off” a pin

http POST http://raspberry_pi_ip:5021/call procedure=io.crossbar.pigpio-wamp.turn_off args:='[4]'

To keep this blog post short, I am skipping the WAMP-based API. I must add, though, that the WAMP implementation is much more powerful than the HTTP one, especially because it has “event publishing”: imagine multiple people controlling a single GPIO pin from different clients; we publish an event that can be subscribed to, ensuring all client apps stay in sync. I’ll cover this in a different blog post. In a later post, I will also talk about making the GPIO pins accessible over the internet.
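Just to give a taste, a WAMP subscriber could look something like the sketch below, again using autobahn-python. The realm, WebSocket path and event topic name here are my assumptions for illustration, not the documented pigpio API:

from autobahn.asyncio.component import Component, run

component = Component(
    transports="ws://raspberry_pi_ip:5021/ws",  # path and realm are assumptions
    realm="realm1",
)

@component.on_join
async def joined(session, details):
    def on_state_change(pin, state):
        print(f"pin {pin} turned {'on' if state else 'off'}")
    # hypothetical topic for pin state-change events
    await session.subscribe(on_state_change, "io.crossbar.pigpio-wamp.on_state_change")

if __name__ == "__main__":
    run([component])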

Personally, I have a few projects for home, and one for my co-working space, that I plan to accomplish using this.

The code lives on GitHub

Introducing PySide2 (Qt for Python) Snap Runtime

Lately at Crossbar.io, we have been using PySide2 for an internal project. Last week it reached a milestone, and I am now in the process of code cleanup and refactoring, as we had to rush quite a few things to meet that deadline. We also created a snap package for the project. Our previous approach was to ship the whole PySide2 runtime (170 MB+) with the snap; it worked, but builds were slow, because each new snap build involved downloading PySide2 from PyPI and installing some deb dependencies.

So I decided to play with the content interface and cooked up a new snap that is now published to the Snap Store. This definitely resulted in an overall size reduction of our app snap, but at the same time it opens up a lot of opportunities for app development on the Linux desktop.

I created a “Hello World” snap that is just 8 KB in size, since it doesn’t bundle any dependencies: they are provided by the pyside2 snap. I am currently working on a very simple “sound recorder” app using PySide2 and will publish it to the Snap Store.
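For reference, a PySide2 “Hello World” really is about that small; a minimal sketch (not the exact code in my snap) looks like this:

import sys
from PySide2.QtWidgets import QApplication, QLabel

# create the Qt application and show a single label
app = QApplication(sys.argv)
label = QLabel("Hello World")
label.show()
sys.exit(app.exec_())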

With the pyside2 snap installed, we can probably export a few environment variables to make the runtime available outside of the snap environment, for someone developing an app on their computer.

Software security over convenience

Recently I got inspired (made paranoid?) by my boss, who cares a lot about software security. Previously, I used almost the same password on all the websites I visited and had my passwords synced to Google’s servers (I was a Chrome user). Once I started taking software security seriously, I knew the biggest mistake I was making was having a single password everywhere, so I went a step further and set randomly generated passwords on all my online accounts, storing them in a keystore.

I then enabled 2FA on some important services (Gmail, GitHub, Twitter, DigitalOcean) and adopted a policy of never logging into my browser’s sync features. Having done that, I realize the browser is just a commodity: it doesn’t matter which browser I use, as long as I can log into my online accounts and, of course, the browser actually works.

I am pretty sure there are many things I could still improve in my computing habits, and I will, over time.

Motto: software security over convenience.