from @[email protected]



I became more interested in personal privacy after my Roku started spying on what we were watching outside the Roku itself, our mesh WiFi router switched to a subscription model for “AI” and “cloud” features, and our smart home switches required access to “the cloud” just to turn on lights. TVs, WiFi routers, and smart home devices are all driving prices down by supplementing hardware sales revenue with personal data sales.

On top of that, after creating a custom smart lock, I saw firsthand how Google's and Amazon's smart home infrastructures are built around selling cloud services and capturing my personal data, while Apple HomeKit is designed to work without any internet access at all.

Given these considerations, I wanted a more robust router and firewall between my home network and the internet. I wanted to be able to completely block smart home devices from accessing the internet. And I wanted to do everything as cheaply as possible while maintaining compute-resource (CPU, RAM, disk) separation between self-hosted services.


The Ubiquiti EdgeRouter X is the router and firewall, while the mesh WiFi is in “bridge” mode, effectively operating as a switch. IP addresses are assigned in ranges, and firewall rules block all devices from the internet except those that need it (Apple TV, laptops, phones, etc.).
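A sketch of what this looks like in the EdgeOS CLI. The ruleset name, interface, and address range below are illustrative placeholders, not my exact config: smart home devices get addresses in a dedicated range, and a firewall ruleset drops their WAN-bound traffic while allowing everything else through.

```shell
configure

# Address group for devices that should never reach the internet
# (range is a placeholder; match it to your DHCP reservations)
set firewall group address-group IOT_DEVICES address 192.168.1.64-192.168.1.127

# Drop traffic from that group; accept everything else
set firewall name LAN_TO_WAN default-action accept
set firewall name LAN_TO_WAN rule 10 action drop
set firewall name LAN_TO_WAN rule 10 source group address-group IOT_DEVICES

# Apply the ruleset to traffic entering the LAN-side interface
set interfaces ethernet eth1 firewall in name LAN_TO_WAN

commit ; save ; exit
```

Blocked devices can still talk to each other and to the Raspberry Pis on the LAN; only their route to the internet is cut off.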

Four Raspberry Pis host all self-hosted services: public services like code hosting, federated social networks, and a Bitcoin node, and private services like DNS-based ad and tracker blocking. (After using Pi-hole for a while, I switched to AdGuard Home, which is simpler and easier to maintain.)

Finally, Power over Ethernet (PoE) via a PoE switch reduces the number of cords running to the Raspberry Pis.

Custom Racking

A downside of not using a standard rack-mounted host is the non-standard form factors of the Raspberry Pis, hard drives, and so on.

To handle this, I 3D printed a Raspberry Pi 2U rack mount. It's not used in an actual rack, but it's a great way to get easy, uniform, and modular access to the Pis.

Hardware was purchased from McMaster-Carr.

For the hard drives, I designed and 3D printed a custom stand.

Configuration Management

Host configuration is managed with Ansible. The roles are written to be minimally invasive and optimized for low maintenance.

The Ansible roles are open source.


For clarity, my specific Raspberry Pi Ansible playbook is provided below:

- hosts: rpis
  roles:
    - rpi-base
    - apt-cacher/client
    - prometheus/rpi-client

- hosts: admin.local
  roles:
    - adguard-home
    - apt-cacher/server
    - prometheus/server

- hosts: btc.local
  roles:
    - block-device
    - bitcoind
    - lnd
    - bitcoind-prometheus-exporter

- hosts: media.local
  roles:
    - block-device
    - plex
    - transmission
    - homebridge
    - minecraft
    - nginx

- hosts: web.local
  roles:
    - block-device
    - postgresql
    - pleroma/aws-s3-backup
    - pleroma/otp
    - writefreely
    - mercurial/aws-s3-backup
    - mercurial/web
    - oragono
    - prosody
    - nginx


Using a hardwired router as the articulation point between the internet and the rest of a home network is a great way to get privacy, security, and self-hosting without much investment.

#RaspberryPi #SelfHosting #Homelab #Linux #HomeKit


from @[email protected]

I wanted to host my code without a lot of extra infrastructure, apps, and metadata to operate and maintain. I just wanted a simple way to share my repos, where contributors can easily send patches via email without a whole new login and user system. I saw hg.prosody.im and was inspired by its simplicity.

Given the overwhelming popularity of Git and full-service solutions like GitLab, there are only a couple of helpful, but slightly outdated, guides for Mercurial + Nginx. They were enough to get me up and running, though, and I wanted to document how I got things working.

Mercurial ships with hgweb, a web interface that can run via WSGI behind Nginx. Check the repo for the most up-to-date version of these configs in an Ansible role. The setup is as follows for my code hosted at the subdomain src.nth.io:

Nginx -> UNIX domain socket -> uWSGI -> hgweb -> Mercurial repos


This particular setup runs on Ubuntu 20.04 with the following packages installed:

  • mercurial
  • nginx
  • uwsgi
  • uwsgi-plugin-python3
  • python3-pygments
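On Ubuntu, the packages above install straight from the default repositories:

```shell
sudo apt update
sudo apt install mercurial nginx uwsgi uwsgi-plugin-python3 python3-pygments
```

python3-pygments is only needed for the syntax highlighting extension enabled in the hgweb config below; the rest are required.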

Nginx Config

Path: /etc/nginx/sites-enabled/src.nth.io.conf

server {
    listen 80;
    listen [::]:80;
    server_name src.nth.io;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl ipv6only=on;
    server_name src.nth.io;

    ssl_certificate /etc/letsencrypt/live/src.nth.io/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/src.nth.io/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    ssl_ecdh_curve X25519:prime256v1:secp384r1:secp521r1;
    ssl_stapling on;
    ssl_stapling_verify on;

    location / {
        include     uwsgi_params;
        uwsgi_param REMOTE_PORT     $remote_port;
        uwsgi_param SERVER_PORT     $server_port;
        uwsgi_param SERVER_PROTOCOL $server_protocol;
        uwsgi_param UWSGI_SCHEME    $scheme;
        uwsgi_param SCRIPT_NAME     /;
        uwsgi_param AUTH_USER       $remote_user;
        uwsgi_param REMOTE_USER     $remote_user;
        uwsgi_pass  unix:/run/uwsgi/app/hgweb/socket;
    }
}

uWSGI Config

Path: /etc/uwsgi/apps-enabled/hgweb.ini

Note the socket key matches the uwsgi_pass parameter from the Nginx config above.

[uwsgi]
processes = 2
max-requests = 10240
max-requests-delta = 1024
max-worker-lifetime = 604800
socket = /run/uwsgi/app/hgweb/socket
chdir = /var/www/src.nth.io
wsgi-file = hgweb.wsgi
uid = www-data
gid = www-data
plugins = python3

The max-requests settings let the uWSGI server restart its workers periodically, mitigating memory leaks in the hgweb script without incurring downtime.

HgWeb Script

Path: /var/www/src.nth.io/hgweb.wsgi

# Path to the hgweb config (defined below)
config = "/var/www/src.nth.io/hgweb.config"

# Enable Mercurial's demand-loading of modules for faster startup
from mercurial import demandimport; demandimport.enable()

from mercurial.hgweb import hgweb
application = hgweb(config.encode())

HgWeb Config

Path: /var/www/src.nth.io/hgweb.config

Note the actual Mercurial repo path at /var/hg/repos.

[paths]
/ = /var/hg/repos

[web]
deny_push = *
allow_archive = gz bz2 zip
encoding = UTF-8
style = gitweb

[extensions]
hgext.highlight =
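Seeding the served directory looks something like this; "myproject" is a placeholder name. Any repository created under /var/hg/repos is picked up automatically by the "/ = /var/hg/repos" paths entry, so no per-repo config is needed:

```shell
# Create the repo root, owned by the uWSGI user
sudo install -d -o www-data -g www-data /var/hg/repos

# Initialize a repository as that user so permissions stay consistent
sudo -u www-data hg init /var/hg/repos/myproject
```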


With this all set up, run sudo systemctl start uwsgi and sudo systemctl start nginx. If something doesn't work, check the logs, and ensure the uWSGI user and group have read access to the repo directory.

Both the Nginx and uWSGI Ubuntu packages expect configs to live in the {sites|apps}-available directories and be symlinked into the respective {sites|apps}-enabled directories, as documented on their websites.
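Following that convention, enabling the site and app is two symlinks, plus a config check before reloading:

```shell
# Enable the Nginx site and the uWSGI app
sudo ln -s /etc/nginx/sites-available/src.nth.io.conf /etc/nginx/sites-enabled/src.nth.io.conf
sudo ln -s /etc/uwsgi/apps-available/hgweb.ini /etc/uwsgi/apps-enabled/hgweb.ini

# Validate the Nginx config before reloading
sudo nginx -t && sudo systemctl reload nginx
```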


Check out the Mercurial docs for instructions on theming the website.

#Mercurial #SelfHosting #Nginx


from @[email protected]

Gmail was founded on the premise of free email for users in exchange for Google reading user emails to show relevant ads. It seemed like a fair trade and a new way of paying for services. But over time, as businesses optimized the model for revenue, it started to get in the way of using those services.

What we watch — My Roku TV started showing popups inviting us to watch the same show on the Roku when we were watching something on our Apple TV. This confirmed stories I had heard that TVs are becoming cheaper, sometimes less than the cost of parts and labor, because manufacturers embed image recognition to monitor and sell data on what we are watching.

What we write — Following the fall of RSS, I've used Medium to host various blogs for years. It seemed like a good way to get an audience and see stats on readership. But over the course of those years, Medium started showing popups to readers to buy a Medium membership. Medium also started showing notices to me, as a writer, to publish my works behind a paywall.

What we say — More and more, Twitter, my main social network, has started feeling like LinkedIn: a professional work network. It feels less like a community and more like a self-indulgent place for people to flaunt their witticisms and sick burns. On top of this, the platform is increasingly struggling to balance moderation with overreaching censorship; to be fair, at their scale, a daunting and maybe impossible task.

Over time, these free services turned from users being the customer to the data brokers and advertisers being the customer. We're the product. But we can opt out.

Self hosting services isn't new and is, in fact, how the internet was built. Society moved to hosted services because self hosting makes it hard to discover people and content and to be discovered ourselves. Federation was, and is, the answer to the discoverability problem but now we have new federation technologies available to us.

With the success of Mastodon, federated social networks and services have achieved critical mass. Self hosting doesn't mean isolation anymore, so opting out by leaving centralized networks is a viable choice once again.

#SelfHosting #Fediverse


from @[email protected]

There seem to be two schools of thought in the homelab community regarding whether Raspberry Pi clusters are worth it. Many argue that four Raspberry Pis, clustered, with all the peripherals like hard drives, networking equipment, and power supplies, add up to be more expensive than a single NUC or blade with way better specs. This seems reasonable, and yet I still find myself drawn to the Pi cluster.

Why? Because of resource isolation and balance.

I have four Raspberry Pi 4 B (4 GB) hosts running 64-bit Ubuntu. I can have one dedicated to running a Bitcoin full node and know that any disk or CPU spike will not interrupt my web hosting on a separate host. I have dedicated and isolated units comprising CPU, RAM, NIC, and HDD resources. This, in some ways, is the opposite of running a true cluster with tools like Kubernetes, where the system is agnostic to where services are deployed on the bare metal, with some caveats.

Under normal loads, my Raspberry Pis are not maxing out any system resources except for network bandwidth at times. A perfect case for a CPU per NIC. It's also why I won't be adding an 8gb Pi 4 to my homelab any time soon.

Some other reasons people go with a Pi cluster, unverified by me, are lower power usage, substantially lower costs in some cases, and above all else, it's just fun.

#RaspberryPi #SelfHosting #Homelab #Linux


from @[email protected]

TL;DR — I'd like a source code management system with Fediverse ActivityPub integration.

I've been looking for a self-hosted home for my Mercurial repositories. Systems like Heptapod, a Mercurial fork of GitLab, feel heavy for what will ultimately be a single-user, multi-project instance hosted on a Raspberry Pi. Sr.ht has emerged as the top contender: simple for small deployments, but able to scale up thanks to its Unix principles of small, composable pieces. I haven't deployed it yet due to a lack of ARM binaries for systems like the Raspberry Pi.

Through my evaluation process, I had the idea of a federated source code management system allowing for ActivityPub publications and subscriptions of source code projects and users. I noticed some developers of open source projects like Bitcoin Core posting merge updates on their Fediverse feeds. This informal process could be automated!

While I don't have the time to work on this myself, I've casually contemplated it and discussed it with others, with some suggestions surfacing, like CPub, a “general ActivityPub server built upon Semantic Web ideas”. Ideally, sr.ht would be extended in this direction.

Really, I'm putting the idea out there to see what others think and hopefully inspire someone to build something.

Edit: After posting, Steven Roose pointed out ForgeFed, a project aimed at exactly this purpose. Primarily focused on Git, but Mercurial shouldn't be too much of a departure. Either way, it doesn't currently look ready to deploy for Mercurial.

#ActivityPub #Fediverse #Mercurial #SelfHosting