from @[email protected]

I wanted to host my code without a lot of extra infrastructure, apps, and metadata to operate and maintain. I just wanted a simple way to share my repos, where contributors can easily send patches via email without a whole new login and user system. I saw hg.prosody.im and was inspired by its simplicity.

Given the overwhelming popularity of Git and full-service solutions like GitLab, there are only a couple of helpful, but slightly outdated, guides for Mercurial + Nginx. They were enough to get me up and running, though, and I wanted to document how I got things working.

Mercurial ships with its own web interface, hgweb, which can run via WSGI and be hosted by Nginx. Check the repo for the most up-to-date version of these configs in an Ansible role. The setup is as follows for my code hosted at the subdomain src.nth.io:

Nginx -> UNIX domain socket -> WSGI -> Mercurial repos


This particular setup runs Ubuntu 20.04 with the following packages installed:

  • mercurial
  • nginx
  • uwsgi
  • uwsgi-plugin-python3
  • python3-pygments

Nginx Config

Path: /etc/nginx/sites-enabled/src.nth.io.conf

server {
    listen 80;
    listen [::]:80;
    server_name src.nth.io;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl ipv6only=on;
    server_name src.nth.io;

    ssl_certificate /etc/letsencrypt/live/src.nth.io/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/src.nth.io/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    ssl_ecdh_curve X25519:prime256v1:secp384r1:secp521r1;
    ssl_stapling on;
    ssl_stapling_verify on;

    location / {
        include     uwsgi_params;
        uwsgi_param REMOTE_PORT     $remote_port;
        uwsgi_param SERVER_PORT     $server_port;
        uwsgi_param SERVER_PROTOCOL $server_protocol;
        uwsgi_param UWSGI_SCHEME    $scheme;
        uwsgi_param SCRIPT_NAME     /;
        uwsgi_param AUTH_USER       $remote_user;
        uwsgi_param REMOTE_USER     $remote_user;
        uwsgi_pass  unix:/run/uwsgi/app/hgweb/socket;
    }
}

uWSGI Config

Path: /etc/uwsgi/apps-enabled/hgweb.ini

Note that the socket path matches the path in the uwsgi_pass directive from the Nginx config above; uWSGI takes the bare path, without Nginx's unix: prefix.

[uwsgi]
processes = 2
socket = /run/uwsgi/app/hgweb/socket
chdir = /var/www/src.nth.io
wsgi-file = hgweb.wsgi
uid = www-data
gid = www-data
plugins = python3
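
One detail that is easy to get wrong: Nginx addresses UNIX sockets with a unix: prefix in uwsgi_pass, while uWSGI's socket option takes the bare filesystem path. A small Python sketch of the relationship between the two values, copied from the configs in this post:

```python
# Nginx's uwsgi_pass value and uWSGI's socket value must point at the
# same filesystem path. Nginx prefixes UNIX sockets with "unix:",
# while uWSGI's ini file takes the bare path.
nginx_uwsgi_pass = "unix:/run/uwsgi/app/hgweb/socket"  # from the Nginx config
uwsgi_socket = "/run/uwsgi/app/hgweb/socket"           # from hgweb.ini

def same_socket(nginx_value: str, uwsgi_value: str) -> bool:
    """Strip Nginx's "unix:" prefix and compare the two paths."""
    prefix = "unix:"
    path = nginx_value[len(prefix):] if nginx_value.startswith(prefix) else nginx_value
    return path == uwsgi_value

assert same_socket(nginx_uwsgi_pass, uwsgi_socket)
```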

HgWeb Script

Path: /var/www/src.nth.io/hgweb.wsgi

# Path to the hgweb config described below
config = "/var/www/src.nth.io/hgweb.config"
# Enable Mercurial's lazy module loading before importing hgweb
from mercurial import demandimport; demandimport.enable()
from mercurial.hgweb import hgweb
# hgweb expects the config path as bytes
application = hgweb(config.encode())

HgWeb Config

Path: /var/www/src.nth.io/hgweb.config

Note the actual Mercurial repo path at /var/hg/repos.

[paths]
/ = /var/hg/repos

[web]
deny_push = *
allow_archive = gz bz2 zip
encoding = UTF-8
style = gitweb

[extensions]
hgext.highlight =
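
Since hgweb.config is standard ini syntax, a quick way to catch typos before restarting uWSGI is to parse it with Python's configparser. A minimal sketch with the config embedded inline; the section layout matches what hgweb expects ([paths], [web], and [extensions]):

```python
import configparser

# The hgweb.config contents from this post, embedded for the check.
HGWEB_CONFIG = """
[paths]
/ = /var/hg/repos

[web]
deny_push = *
allow_archive = gz bz2 zip
encoding = UTF-8
style = gitweb

[extensions]
hgext.highlight =
"""

cp = configparser.ConfigParser()
cp.read_string(HGWEB_CONFIG)  # raises on malformed ini

# All repos under /var/hg/repos are served at the site root.
assert cp["paths"]["/"] == "/var/hg/repos"
# Pushes over the web interface are denied.
assert cp["web"]["deny_push"] == "*"
```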


With this all set up, run sudo systemctl start uwsgi and sudo systemctl start nginx. If something doesn't work, check the logs, and ensure the uWSGI user and group have read access to the repo directory.

Both the Nginx and uWSGI Ubuntu packages expect configs to live in the {sites|apps}-available directories and be symlinked into the respective {sites|apps}-enabled directories, as documented on their websites.


Check out the Mercurial docs for instructions on theming the website.

#Mercurial #SelfHosting #Nginx


from @[email protected]

Gmail was founded on the premise that users would get free email if Google could read that email in order to show relevant ads. It seemed like a fair trade and a new way of paying for services. But over time, as businesses optimized the model, it started to get in the way of using those services.

What we watch — My Roku TV started showing popups inviting us to watch the same show on the Roku while we were watching it on our Apple TV. This confirmed stories I have heard that TVs are getting cheaper, sometimes selling for less than the cost of parts and labor, because manufacturers use embedded image recognition to monitor and sell data on what we are watching.

What we write — Following the fall of RSS, I've used Medium to host various blogs for years. It seemed like a good way to get an audience and see stats on readership. But over the course of those years, Medium started showing popups to readers to buy a Medium membership. Medium also started showing notices to me, as a writer, to publish my works behind a paywall.

What we say — More and more, Twitter, my main social network, has started feeling like LinkedIn: a professional work network. It feels less like a community and more like a self-indulgent place for people to flaunt their witticisms and sick burns. On top of this, the platform increasingly struggles to balance moderation with overreaching censorship. To be fair, at their scale, that is a daunting and maybe impossible task.

Over time, these free services shifted: users stopped being the customer, and data brokers and advertisers took their place. We're the product. But we can opt out.

Self hosting services isn't new and is, in fact, how the internet was built. Society moved to hosted services because self hosting makes it hard to discover people and content and to be discovered ourselves. Federation was, and is, the answer to the discoverability problem but now we have new federation technologies available to us.

With the success of Mastodon, federated social networks and services have achieved critical mass. Self hosting doesn't mean isolation anymore, so opting out by leaving centralized networks is once again a viable choice.

#SelfHosting #Fediverse


from @[email protected]

There seem to be two schools of thought in the homelab community regarding whether Raspberry Pi clusters are worth it. Many argue that four Raspberry Pis, clustered, with all the peripherals like hard drives, networking equipment, and power supplies, add up to be more expensive than a single NUC or blade with far better specs. This seems reasonable, and yet I still find myself drawn to the Pi cluster.

Why? Because of resource isolation and balance.

I have four Raspberry Pi 4 B (4 GB) hosts running 64-bit Ubuntu. I can have one dedicated to running a Bitcoin full node and know that any disk or CPU spike will not interrupt my web hosting on a separate host. I have dedicated and isolated units comprising CPU, RAM, NIC, and HDD resources. This, in some ways, is the opposite of running a true cluster with tools like Kubernetes. In a true cluster environment, the system is agnostic to where services are deployed on the bare metal, with some caveats.

Under normal loads, my Raspberry Pis are not maxing out any system resources except for network bandwidth at times. A perfect case for a CPU per NIC. It's also why I won't be adding an 8gb Pi 4 to my homelab any time soon.

Some other reasons people go with a Pi cluster, unverified by me, are lower power usage, substantially lower costs in some cases, and above all else, it's just fun.

#RaspberryPi #SelfHosting #Homelab #Linux


from @[email protected]

TL;DR — I'd like a source code management system with Fediverse ActivityPub integration.

I've been looking for a self-hosted home for my Mercurial repositories. Systems like Heptapod, a Mercurial fork of GitLab, feel heavy for what will ultimately be a single-user, multi-project instance hosted on a Raspberry Pi. Sr.ht has emerged as the top contender: it is simple for small deployments but able to scale up thanks to its Unix principles of small, composable pieces. I haven't deployed it yet due to a lack of ARM binaries for systems like the Raspberry Pi.

Through my evaluation process, I had the idea of a federated source code management system allowing for ActivityPub publications and subscriptions of source code projects and users. I noticed some developers of open source projects like Bitcoin Core posting merge updates on their Fediverse feeds. This informal process could be automated!

While I don't have the time to work on this myself, I've casually contemplated and discussed the idea, with suggestions surfacing like CPub, a “general ActivityPub server built upon Semantic Web ideas”. Ideally, sr.ht would be extended in this direction.

Really, I'm putting the idea out there to see what others think and hopefully inspire someone to build something.

Edit: After posting, Steven Roose pointed out ForgeFed, a project aimed at exactly this purpose. Primarily focused on Git, but Mercurial shouldn't be too much of a departure. Either way, it doesn't currently look ready to deploy for Mercurial.

#ActivityPub #Fediverse #Mercurial #SelfHosting


from @[email protected]

GPU Transcoding with Raspberry Pi

Wyze Cam on Apple HomeKit using the Raspberry Pi 4 hardware accelerated transcoding.

Wyze Cams are inexpensive and awesome web cams. Unfortunately Wyze will not support Apple HomeKit on the current cameras. An alternative is to use Homebridge on a Raspberry Pi to “bridge” the cameras into HomeKit.

NOTE  —  16th of Feb, 2020: A YouTube video by Tech Craft explains how to check an RTSP stream for native HomeKit H.264 support. This is luckily the case for Wyze Cams, so there is no need to transcode the stream with h264_omx as this post originally described. The post has been updated to use the simpler and more performant stream copy technique.

Performance Notes

  • The Wyze Cam RTSP output is H.264 encoded, which is the codec required by HomeKit, so no transcoding is required.
  • The Raspberry Pi 4 has a lot of hardware updates to support higher bandwidth video processing. This may also work on older Raspberry Pis. The RPi 4 supports H.265 (4kp60 decode) and H.264 (1080p60 decode, 1080p30 encode).
  • While live streaming with the setup described here, memory usage peaked at 200 MB and CPU usage peaked at 50% on a single core.


  1. Install the Wyze Cam RTSP firmware
  2. Get a Raspberry Pi 4 B 1GB with Raspbian Buster or newer
  3. Install Homebridge


This is where experimentation was needed to find a successful setup.

All dependencies are available from the Raspbian Buster apt repos, with no custom compiling required.

Install the following packages with the respective package manager:

  • apt: libavahi-compat-libdnssd-dev
  • apt: ffmpeg
  • npm: homebridge-camera-ffmpeg  —  The fork homebridge-camera-ffmpeg-omx is not needed because the stream does not need to be transcoded.

Finally, configure Homebridge for each Wyze Cam. The key here is to use the copy vcodec to pass the native H.264 video stream straight through to the HomeKit stream.

This config also has the combination found to work best with both streaming and snapshotting for HomeKit. Check out the homebridge-camera-ffmpeg docs and defaults before adding unnecessary configuration options.

Example Homebridge config.json:

{
    "bridge": {
        "name": "Homebridge",
        "username": "12:34:56:78:90:AB",
        "port": 51900,
        "pin": "031-45-154"
    },
    "description": "Homebridge",
    "platforms": [
        {
            "platform": "Camera-ffmpeg",
            "cameras": [
                {
                    "name": "Wyze Cam",
                    "videoConfig": {
                        "source": "-i rtsp://username:[email protected]/live",
                        "stillImageSource": "-i rtsp://username:[email protected]/live -vframes 1 -r 1",
                        "vcodec": "copy"
                    }
                }
            ]
        }
    ]
}

At this point, it should be possible to add the accessories to the Home app and see both smooth live streaming and preview snapshots from the cameras.

On the Raspberry Pi, run top or htop to confirm the load is not on the CPU while streaming.

#WyzeCam #HomeKit #RaspberryPi #Homebridge