Advanced Namespace Tools blog

15 January 2017

Using ANTS with /net.alt private networking on VPS hosting

Good VPS providers will often have the option to provide a private network between your VMs. This offers an additional layer of security and isolation from public networks, and data transferred on the private network may not count against your total bandwidth usage. Plan 9 was designed with this kind of inside/outside network interface structure in mind.

Architecting a Small Grid

I wanted to set up a small grid for personal use that would be structured in the traditional Plan 9 fashion, with a few ANTS-specific wrinkles. I have a local terminal (drawterm on a laptop), a native Plan 9 cpu server (an ultra-cheap minitower), and as many VPS nodes as I feel like paying for. (I use small nodes with 768MB RAM and 15GB of SSD storage, so it is cheap to run several of them.)

For a personal grid, I decided to set up the Venti archival data server on one vps node, a Fossil disk file server on another vps node, and two vps CPU servers that acquire their root fs via tcp boot from the Fossil. My local terminal connects to my local all-in-one cpu/file server, but mostly uses it as a connection point to the remote CPUs. If I choose, I can drawterm directly to the remote CPUs, or do work within the local machine's fs.


At the beginning of the workflow described in this post, I already had the vps nodes up and running. Their initial install/configuration is as described by this walkthrough for setting up 9front/ANTS with fossil+venti on vultr vps hosting. (That's a referrer link btw.) You can imagine this blog post as starting out with two nodes set up via "build vultrfossil" and two nodes set up via "build vultr". Depending on your plans for backing up and replicating your data, you might want to set up the fossil node to use more of the disk for fossil and not include any Venti partitions. The local machines are a standard 9front drawterm and a current 9front/ANTS all-in-one cpu/fs server.

Using ANTS plan9rc "afterfact" Option to Customize Booting

The plan9rc boot script is ANTS' primary method of booting. (There is also an option to use a modified version of the standard 9front bootrc script, but the setup in this post requires "bootcmd=plan9rc" in plan9.ini.) It is heavily parameterized for maximum flexibility, but this setup will require some further customization. The key to doing so is the environment variable "afterfact", which can be set in plan9.ini to instruct the plan9rc script to invoke an additional outside script or program during early boot - after the boot namespace ramdisk and tools are loaded in and factotum is started, but before the root filesystem is acquired, initskel starts services for the boot namespace, and the main cpurc or termrc is launched. (Incidentally, it is also possible to substitute custom scripts for the "ramskel" and "initskel" portions of ANTS bootup, but that isn't needed in this scenario.)

So, we need to create a small script that will set up the internal private network in /net.alt while leaving the standard /net for the public-facing internet interface. Here is one that is set up for the way the vultr private networks are configured:

# bind the second ethernet device and a second IP stack into /net.alt
bind -b '#l1' /net.alt
bind -b '#I1' /net.alt
# configure the private interface with the address passed in via $altaddr
ip/ipconfig -x /net.alt ether /net.alt/ether1 $altaddr
ip/ipconfig -x /net.alt loopback /dev/null 127.1
# vultr's private network needs a reduced mtu
echo mtu 1450 >/net.alt/ipifc/0/ctl

This is parameterized by an additional variable, "altaddr", which we will also set in plan9.ini. That way we can use the same mini script for each node and just change the plan9.ini vars. This script will be placed in 9fat under the name "altnet" and we will set "afterfact=/x/altnet" in plan9.ini to tell plan9rc to launch the script. (The 9fat is mounted under /x by the ramskel script.)
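On the vultr private network, for example, the plan9.ini additions on each node would look something like this (the 10.99.0.0/16 address and netmask are made-up placeholders - substitute whatever your provider assigns, in the address-and-mask form that ipconfig expects):

```ini
afterfact=/x/altnet
altaddr=10.99.0.10 255.255.0.0
```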

Configuring Boot on each Node

With this script in place, all we need to do is set the correct plan9.ini vars on each system to define its actions and role. Here is the key line from plan9.ini for the venti server, which needs to be first up in the chain:

venti=#S/sdF0/arenas /net.alt/tcp!*!17034

When working out these configs, it is sometimes a good idea to leave bootfile= blank, so that boot stops before loading the kernel. That way, if you get something wrong, it is easy to change the variables before the kernel boot process starts. You can also interrupt boot by spamming keys, but on a vps with a web management interface that is sometimes inconvenient. Something to note about this setup is that the local fossil on the Venti server could be omitted from boot - ANTS is fine with no root server whatsoever, so you could run the Venti server and nothing else on this node if you preferred to do without a standard userspace. Also note the second parameter to venti, which specifies the interface to listen on.

The next node in the sequence is the Fossil file server. Here is an abbreviated version of its plan9.ini, omitting the boilerplate variables such as mouseport.


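A sketch of the non-boilerplate lines, assuming the same made-up 10.99.0.0/16 private addressing as above - the venti server's private address goes in the dial string:

```ini
afterfact=/x/altnet
altaddr=10.99.0.11 255.255.0.0
venti=/net.alt/tcp!10.99.0.10!17034
```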
Note that here, the venti variable is specifying where to dial. The other bit of additional configuration on this server is telling Fossil to listen on the standard port for 9fs, 564. Here is the fossil config, read and written with fossil/conf - the last line is the only addition to the defaults created by the "build vultrfossil" script:

fsys main config /dev/sdF0/fossil
fsys main open -c 3000
srv -p fscons
srv -A fossil
listen /net.alt/tcp!*!564
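For reference, fossil/conf with no flags reads the config out of the fossil partition, and -w writes a new one back; a sketch of adding the listen line (using this install's partition path), which takes effect the next time fossil starts:

```rc
# dump the current config to a file for editing
fossil/conf /dev/sdF0/fossil > /tmp/fconf
# append the listen directive for the private network
echo 'listen /net.alt/tcp!*!564' >> /tmp/fconf
# write the edited config back to the partition
fossil/conf -w /dev/sdF0/fossil /tmp/fconf
```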

The final two servers, the tcp boot cpus, share a plan9.ini configuration (more boilerplate omitted):


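Here, too, only a sketch: the private-network lines match the other nodes (addresses still made up), and the root source - the fossil's 9fs port on /net.alt - is what answers the "root is from" prompt seen in the boot messages below; for the plan9rc variables that pre-answer that prompt, consult the ANTS documentation rather than this sketch:

```ini
afterfact=/x/altnet
altaddr=10.99.0.12 255.255.0.0
```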
Note that cfs is completely optional; on this node I repurposed what was originally the small hjfs partition created during install as a cache file. I also have things set up so that auth is dialed on the standard network interface, but you could certainly have your auth server communicate via the private network on /net.alt as well.

With all of the vps nodes configured, it's time to boot them. The venti goes first:

Plan 9
126 holes free
0x0001a000 0x0009f000 544768
0x00634000 0x05373000 80998400
81543168 bytes free
cpu0: 2400MHz GenuineIntel P6 (AX 000306C1 CX F7FA3203 DX 078BFBFF)
LAPIC: fee00000 0xe0000000
ELCR: 0C00
cpu0: lapic clock at 1000MHz
pcirouting: PCI.0.1.3 at pin 1 link 60 irq 9 -> 10
#l0: virtio: 1000Mbps port 0xC060 irq 11: 5600000f6247
#l1: virtio: 1000Mbps port 0xC0C0 irq 11: 5a00000f6247
768M memory: 83M kernel data, 684M user, 1309M swap
paqfs...bootfs: Sun Jan 15 09:29:17 GMT 2017
fingerprint: 8e839e03ad3588d9a8f5569bce71aa27405d2e46
cpu factotum...
ramsetup ramskel...
ramfs -S ramboot...tar xf /boot/skel.tar...dossrv: serving #s/dos
tar xzf tools.tgz...copying bootes skeleton to glenda
partial root built
setting /dev/sysname to 9venti
after factotum command /x/altnet
starting venti
2017/0115 09:38:42 venti: conf...venti/venti: mem 33,554,432 bcmem 50,331,648 icmem 67,108,864...httpd tcp!127.1!8000...init...icache 67,108,864 bytes = 1,048,576 entries; 14 scache
sync...announce /net.alt/tcp!*!17034...serving.
starting fossil from /dev/sdF0/fossil...
fsys: dialing venti at /net.alt/tcp!127.1!17034
srv -p fscons: srv: already serving 'fscons'
INITSKEL minimal setup for rootless plan 9 environment
binddevs...mntgen...hubfs...creating hubfs /srv/hubfs
aux/listen1 17060 rcpu rexexec...
storing startup configuration to ramdisk in /usr/glenda/tmp/p9cfg
mounting /srv/boot to /root and starting cpurc
starting shell in current namespace

Wasn't that exciting! Okay, now we boot the fossil (only the relevant changes in boot messages will be shown from here on out):

starting fossil from /dev/sdF0/fossil...
fsys: dialing venti at /net.alt/tcp!!17034

And finally we can boot our tcp-root cpus:

after factotum command /x/altnet
root is from (tcp,tls,aan,local)[tcp]: 
srv /net.alt/tcp!!564 to /srv/boot...
cfs: name mismatch
formatting disk
INITSKEL minimal setup for rootless plan 9 environment
binddevs...mntgen...hubfs...creating hubfs /srv/hubfs
formatting inodes
aux/listen1 17060 rcpu rexexec...

Using the Remote Environment

From my non-plan 9 laptop, I use Drawterm to connect to my local cpu server's ANTS boot namespace, like this:

drawterm -B -a -c 'tcp!!17060' -u mycro

I could certainly also drawterm into the standard namespace rooted in the local disk fs, and rcpu out from there. I am doing it this way partly to keep my environments separated and partly for demonstration purposes for this post. The local disk environment is contained in a separate sub-rio within the drawterm session, entered with "rerootwin -f boot", "service=con", ". $home/lib/profile", and "grio -s". Within that environment I also start a persistent hubfs at /srv/crosshub with the command "hub crosshub".

To set up the environment I want for working with the remote servers, I like to adjust a few things in the environment:

mount -c /srv/boot /n/b
bind -c /n/b/usr/mycro /n/u
cpu=vpscpu1		# name of your primary remote cpu in ndb
mount -c /srv/crosshub /n/crosshub	# local disk-rooted hub
grio -s -w 2 -y 0 -c 0x770077ff -x /bin/rcpu

This starts up a nice purple-hued grio. Our primary way to open new windows will be the "/bin/rcpu" command that we added to the menu; those windows will open running on the remote cpu we specified. If we want a window on the secondary remote cpu, we run "rcpu -h vpscpu2" in a standard new window. We can also access the persistent rc session rooted in the local disk by opening a new window with the "Hub" menu option.

If there are any issues with the connections to the remote nodes, we don't lose control of them until they are rebooted - we can "rcpu -h tcp!remote!17060" to access the ANTS boot/admin namespace on any of them, which is independent of the venti->fossil->tcp boot chain.