Creating a new plugin for the Janus WebRTC server

I am working on a system to support multi-site podcasting using WebRTC, and the Janus server seemed like a good place to start. None of the example plugins does exactly what I want so, rather than modify an existing plugin in place, I decided to create a new one based on an existing one (videoroom). The screen capture shows the result: at this stage the new plugin is identical to the videoroom plugin, hence the identical look of the test. There are a few steps to getting it integrated into the configuration and build system, and there’s no way I will remember them, hence this aide-memoire!

One thing I noticed, which has nothing to do with the new plugin, is that I needed to install gtk-doc-tools before I could compile libnice as described in the dependency section of the README.
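On Ubuntu that’s just:

sudo apt-get install gtk-doc-tools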

Anyway, the janus-gateway repo has a plugins directory that contains the C source (amongst other things) of the various plugins. I decided to base my new plugin on the videoroom plugin, so I copied janus_videoroom.c into rt_podcall.c for the new plugin. Then, using a text editor, I changed all forms of text involving “videoroom” into “podcall”.
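The rename can also be done in one go with sed rather than a text editor; as a sketch, assuming the usual case variants that appear in the Janus source (videoroom, VIDEOROOM and VideoRoom):

cp plugins/janus_videoroom.c plugins/rt_podcall.c
sed -i 's/videoroom/podcall/g; s/VIDEOROOM/PODCALL/g; s/VideoRoom/PodCall/g' plugins/rt_podcall.c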

Once the source is created, it needs to be added to the configure.ac file in the root of the repo. Basically, I copied anything involving “videoroom” and changed the text from “videoroom” to “podcall”. The same needs to be done in Makefile.am.
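As a sketch of the kind of stanza that results in Makefile.am (the exact variable names depend on the Janus version, so the copied videoroom entries are the authoritative template):

if ENABLE_PLUGIN_PODCALL
plugin_LTLIBRARIES += plugins/libjanus_podcall.la
endif
plugins_libjanus_podcall_la_SOURCES = plugins/rt_podcall.c
plugins_libjanus_podcall_la_CFLAGS = $(plugin_cflags)
plugins_libjanus_podcall_la_LDFLAGS = $(plugin_ldflags)
plugins_libjanus_podcall_la_LIBADD = $(plugin_libadd)

The configure.ac side is the matching AC_ARG_ENABLE/AM_CONDITIONAL lines, copied and renamed in the same way.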

It is also necessary to create a configuration file for the new plugin. The repo root has a directory called conf which is where all of the configurations are held. I copied the janus.plugin.videoroom.jcfg.sample into janus.plugin.podcall.jcfg.sample to satisfy that requirement.
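That one is a straight copy:

cp conf/janus.plugin.videoroom.jcfg.sample conf/janus.plugin.podcall.jcfg.sample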

In order to test the plugin, it’s useful to add code into the existing demo system. The repo root has a directory called html that contains the test code. I copied videoroomtest.html and videoroomtest.js into podcall.html and podcall.js and edited the files to fix the references (such as plugin name) from videoroom to podcall.
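Again, sed does most of the work; a sketch (grep the copied files to confirm exactly which identifiers need changing):

cp html/videoroomtest.html html/podcall.html
cp html/videoroomtest.js html/podcall.js
sed -i 's/videoroomtest/podcall/g; s/janus\.plugin\.videoroom/janus.plugin.podcall/g' html/podcall.html html/podcall.js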

To make the test available in the Demos dropdown, edit navbar.html and add the appropriate line in the dropdown menu.
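Assuming the existing entries are the pattern to follow, the new line will look something like:

<li><a href="podcall.html">Pod Call</a></li>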

Once all that’s done, it should be possible to build and install the modified Janus server:

sh autogen.sh
./configure --prefix=/opt/janus
make
sudo make install
sudo make configs
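With the prefix used above, the server ends up under /opt/janus and can be started with:

/opt/janus/bin/janus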

A webserver is needed to serve these test pages to the browser. I used a very simple Python HTTPS server to do this:

from http.server import HTTPServer, SimpleHTTPRequestHandler
import ssl

# Serve the current directory over HTTPS on port 8080
server_address = ('localhost', 8080)
httpd = HTTPServer(server_address, SimpleHTTPRequestHandler)

# Wrap the listening socket with TLS using the sample Janus certificates
httpd.socket = ssl.wrap_socket(httpd.socket,
                               certfile='../certs/mycert.pem',
                               keyfile='../certs/mycert.key',
                               server_side=True)
httpd.serve_forever()

This is run with Python 3 in the html directory and borrows the sample Janus certificates to support SSL. Replace localhost with a real IP address to allow access to this server from outside the local machine.
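Assuming the script is saved as https_server.py (the name is arbitrary), that means:

cd html
python3 https_server.py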

Sending and receiving binary data using JSON encoding, Python and MQTT

I really like using JSON encoding as a way of transferring messages between processes as it is machine- and language-independent. Plus, it is very well suited to stream processing networks (such as rt-ai Edge) since arbitrary fields can be added to existing JSON messages and passed along. Contrast this with compiled IDLs, which typically have no flexibility whatsoever.

One problem though is that binary data cannot be included in JSON messages directly. Typically base64 encoding is used to convert binary data into text. However, this is inefficient, especially in a stream processing network where base64 decoding and encoding might have to be done several times.

There are various binary-capable variations on JSON around, but it is very simple to just frame the message directly: a 4-byte big-endian length header, then the UTF-8 JSON text, then the raw binary data appended on the end. The resulting message can be transferred via MQTT, for example.

In Python, an MQTT message can be published like this:

    import json
    import struct
    ...
    def publish(topic, jsonData, binData = b''):
        # Encode the JSON as UTF-8 and prefix it with its length as a
        # 4-byte big-endian integer, then append the raw binary data
        jsonDump = json.dumps(jsonData).encode('utf-8')
        payload = struct.pack('>I', len(jsonDump)) + jsonDump + binData
        MQTTClient.publish(topic, payload)
        ...

Here, jsonData contains the normal JSON message content (any JSON-serializable object) and binData contains the raw bytes to be sent along with it. Note the default of b'' rather than None so that the concatenation still works when there is no binary part. To receive the message, use something like this:

    import json
    import struct
    ...
    def onMessage(client, userdata, message):
        # The first four bytes give the length of the JSON part
        jsonLength = struct.unpack('>I', message.payload[0:4])[0]
        jsonData = json.loads(message.payload[4:4+jsonLength].decode('utf-8'))
        binData = message.payload[4+jsonLength:]
        ...
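For completeness, here is a minimal sketch of how the two might be wired up with the paho-mqtt client (v1.x API); the broker address and topic are made up for illustration:

    import paho.mqtt.client as mqtt

    MQTTClient = mqtt.Client()
    MQTTClient.on_message = onMessage
    MQTTClient.connect('localhost', 1883)    # assumed broker address
    MQTTClient.subscribe('rt/test')          # hypothetical topic
    MQTTClient.loop_start()

    # JSON header plus some arbitrary binary data
    publish('rt/test', {'type': 'frame', 'width': 640}, b'\x00\x01\x02')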

Speeding up Apache NiFi compilation

The normal way to build Apache NiFi from source on Linux is to use:

mvn -T 2.0C clean install

More info is here incidentally. One issue with this is that it also runs all the tests, which gives rise to a couple of problems. One is that some of the tests take a while and slow down the build process. The other is that, should some arcane test in code that isn’t of interest fail, the whole build aborts. To avoid this, build with the tests turned off:

mvn -T 2.0C clean install -Dmaven.test.skip=true
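Note that -Dmaven.test.skip=true skips compiling the tests as well as running them; if the test code should still be compiled but not executed, -DskipTests does that instead:

mvn -T 2.0C clean install -DskipTests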

Saves quite a bit of time!

Developing Electron apps with Visual Studio Code

I have been trying out Electron as a way of developing some WebRTC apps to work with the Janus gateway. In the end I decided that Visual Studio Code was a good route to take for Javascript code development. One thing that wasn’t at all obvious, though, was how to get breakpoints to work. I found this blog entry that had the answer – there is no way I would have worked it out myself, so go to that link for the original source (reproduced here for my convenience).

First thing is to install the Debugger for Chrome extension for VS Code – instructions are here. Then, the .vscode/launch.json file should look something like this:

{
  // Use IntelliSense to learn about possible Node.js debug attributes.
  // Hover to view descriptions of existing attributes.
  // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
  "version": "0.2.0",
  "configurations": [
    {
      "type": "node",
      "request": "launch",
      "name": "Main Debug",
      "runtimeExecutable": "${workspaceRoot}/node_modules/.bin/electron",
      "windows": {
        "runtimeExecutable": "${workspaceRoot}/node_modules/.bin/electron.cmd"
      },
      "program": "${workspaceRoot}/main.js",
      "protocol": "legacy"
    },
    {
      "name": "Renderer Debug",
      "type": "chrome",
      "request": "launch",
      "runtimeExecutable": "${workspaceRoot}/node_modules/.bin/electron",
      "runtimeArgs": [
        "${workspaceRoot}",
        "--enable-logging",
        "--remote-debugging-port=9222"
      ],
      "sourceMaps": false
    }
  ]
}

Using this launch file, select Main Debug to hit breakpoints in the main process and Renderer Debug to hit breakpoints in the renderer process.

Developing Unity projects for Moverio BT-300 AR glasses on Windows

Since the Moverio BT-300 AR glasses run Android 5.1 on an Atom processor, it is possible to run Unity projects on them. The starting point is the instructions here on setting up Unity for the Android platform. One problem with this is that the android command is apparently no longer included in Android Studio, so Unity builds will fail. To get Unity builds for Android to work, it is necessary to download and unzip the command line tools from the bottom of this page. This creates a directory tree that includes a tools directory, which should be used to replace the original tools directory in the Android Studio install, usually found at:

C:\Users\<username>\AppData\Local\Android\sdk

Incidentally, that is also the path that Unity needs to know in order to perform its builds.

There is a Unity plugin that provides support for 3D on the BT-300. For instructions on how to use the plugin, read:

 Assets > MoverioBT300UnityPlugin > MoverioController > README

The plugin includes a scene called MoverioTutorial that can be used as a starting point. It demonstrates some of the features of the plugin.

After the package name has been set in Player > Other Settings, it should then be possible to build, deploy and run on the BT-300 directly from Unity. I had a few problems with the tutorial with regard to SDK functionality but the Unity part seemed to work well (although I had to set 3D mode and disable the 2D camera manually sometimes). I am sure that I am doing something wrong – I’ll update the post when I work out what is happening.

Connecting a webcam to a VirtualBox guest OS

I am running Ubuntu 16.04 in a VirtualBox VM on a Windows 10 machine and wanted to access the laptop’s webcam from a Python script running in the Ubuntu VM. The trick (as described here) is to enter this line on the host while the VM is running:

VBoxManage controlvm "vmname" webcam attach .0

where vmname is the name of the VM to be modified.

There doesn’t seem to be any need to add a USB filter for the webcam – doing that doesn’t seem to help at all.

The only problem with this is that the change isn’t permanent – it has to be made each time the VM is started. The simplest way to deal with that is to start the VM from a batch file:

cd "c:\Program Files\Oracle\VirtualBox"
VboxManage startvm "vmname"
VboxManage controlvm "vmname" webcam attach .0

Incidentally, this attaches the default webcam. Individual ones can be specified using .1, .2 etc. Use:

VBoxManage list webcams

to get a list of webcams and aliases.
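Having picked one out of that list, it can then be attached by alias in place of the default:

VBoxManage controlvm "vmname" webcam attach .1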

Using zeroconf to ssh into VirtualBox VM instances on Windows

If a VM is started in headless mode with bridged networking and DHCP, it’s kind of tricky to work out what IP address the VM is using in order to ssh into it. The simplest way is to use the zeroconf .local style address (i.e. <hostname>.local) but Windows by default doesn’t support this. However, installing Bonjour Print Services from the Apple Support website solves the problem.
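For example, with a VM whose hostname is ubuntu-vm (a made-up name), once Bonjour is installed it’s just:

ssh user@ubuntu-vm.local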