Disposable Node

Disposable Node lets you AGGREGATE latent resources worldwide (unused disk space, stray Wi-Fi signals, idle night bandwidth, bored retiree labour, etc.). Mix and match resources as nodes EASILY (like assembling toy bricks) without technical knowledge and deploy them rapidly with just ONE COMMAND.

Most people don't know HOW to cook well, but they certainly know WHAT tastes good. Disposable Node enables people without technical skills to improve their Private Cyberspace by simply mixing and matching predefined computing blocks (like toy bricks).

Disposable Node empowers you to import compute to process your data, instead of having to export your data to cloud platforms for processing, by letting you leverage the latent processing power of thousands of community computers worldwide from the comfort of your personal phone.

Disposable Node gives you unprecedented income and impact by aggregating resources from millions of community computers: build ultra-reliable storage, get insights from private data, manage trust in every transaction, and set prices for your contributions.

1. Introduction

Applications in the Home Digital Hub are deployed inside Data Containers so they can be isolated from each other, run efficiently and be managed easily.

This is very different from traditional processing containers, which are based on isolation of PROCESSING resources (processor, RAM etc.):

  1. Application Containers (e.g. Docker) isolate applications to make them easier to deploy and maintain.
  2. Machine Containers (e.g. KVM) isolate computation to make sharing of hardware resources easier.

With Data Containers we use standard System Containers (e.g. LXD) but add a special value-added layer so isolation is based on data type, e.g. private data, shared data, public data.

Flexible Virtualisation

Compute Modules can run applications inside either Machine Containers (e.g. KVM) or System Containers (e.g. LXD).

A Machine Container offers more flexibility but is also more resource hungry; in general, Compute Modules should only use Machine Containers when a System Container is not suitable.

A System Container provides isolation at the operating system level, allowing both traditional applications and newer Application Container (e.g. Docker) applications to run inside it.
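Under LXD, the choice between the two container types comes down to a single flag on the launch command. The sketch below composes the command for either case; the image ("ubuntu:22.04") and node names are illustrative assumptions, not the official tooling:

```shell
# Sketch only: compose the LXD launch command for a node.
launch_cmd() {
  name="$1"; kind="$2"            # kind: "system" or "machine"
  if [ "$kind" = "machine" ]; then
    # Machine Container: a full virtual machine with its own kernel
    echo "lxc launch ubuntu:22.04 $name --vm"
  else
    # System Container: shares the host kernel, much lighter weight
    echo "lxc launch ubuntu:22.04 $name"
  fi
}

launch_cmd demo-node system     # → lxc launch ubuntu:22.04 demo-node
launch_cmd demo-node machine    # → lxc launch ubuntu:22.04 demo-node --vm
```

Both forms launch the same image; only the `--vm` flag changes, which is why a Compute Module can fall back to a Machine Container without repackaging.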

Resource Sharing

To promote resource sharing (resulting in lower costs), a Compute Module puts as many applications as reasonable into one Module (one container). This goes in the OPPOSITE direction to Application Containers (e.g. Docker), which separate applications out.

A Compute Module isolates the technical support layer instead of isolating related applications that do not need to be isolated from each other, so extra layers of protection between applications can be removed.

A Compute Module lowers management costs through advanced monitoring and automation technologies, rather than by isolating applications and giving each its own system resources.

Operation Framework

All Compute Modules share a common Operation Framework (e.g. Infinite Disk for storage and Campus Network for communication) so they can be deployed easily around the world.

Network Isolation

Disposable Node and Personal Console form the two halves of a Private Cyberspace.

Disposable Node provides the backend computing power (e.g. when the required processing resources exceed what's available on a mobile phone or when the information is more securely processed away from the mobile phone), while the Personal Console provides the frontend user interface.


The complete Private Cyberspace can be deployed rapidly with ONE CLICK install of Console and ONE COMMAND install of Node on numerous commodity devices.

2.1. One Command

A Disposable Node can be created with just ONE COMMAND on most computers.

It can share existing computers (from home computers to remote virtual machines) with other applications, or run by itself on dedicated computers (from tiny Raspberry Pis to massive IBM mainframes).


Most computers manufactured within the past 10 years (even laptops with damaged screens and keyboards) can be used.

2.2. Node Types

Disposable Nodes are the basic building blocks of the Private Cyberspace and can be used to deploy an unlimited range of processing, networking and storage systems. They cover the whole digital environment, from the version of the software being used to the size of the storage on a computer, from the room the computer is in to the name of the person walking in to do repairs.


2.3. Scalable Design

Each Disposable Node provides a set of application-specific computing functions by wrapping relevant software and hardware into independently deployable computing bundles that work synergistically together.


It hides the complexity of operating large scale computing resources behind a simple computing abstraction, allowing those resources to collaborate and be shared quickly and safely between members of a community.

Ironically, they create highly reliable systems by being easily disposable themselves.

2. Linux Container

Disposable Nodes support a wide range of virtualisation technologies, including

enabling you to run almost any software in them.

Although you can decide to run any software packages in your Disposable Nodes, the following software packages have been tested and are fully supported worldwide by 88.io for the Private Cyberspace 24.07 release.

Please refer to https://repository.88.io/ for source code and more details.

Currently Disposable Node is based on the Linux Container (LXC) technology.

LXC nodes work best together, as a cluster.

As of 2023-10-01 there are 3 main LXC cluster managers:

  1. LXD
  2. Incus - recent fork of LXD
  3. Proxmox

Only LXD running Ubuntu on x86 machines is supported as of the 20.12 release; support for the other two may be available in future releases.

Traditional LXC cluster managers do not have support for LXC nodes separated by high latency links. Most quote less than 5ms as the limit between nodes.

In Proxmox, link latency of more than 5 ms "may work" in some conditions, e.g. if there are only 3 nodes and they are less than 10 ms apart:

In LXD, the number of database queries is a problem that has not been fixed yet:

In most cases, the limitation seems to be the way the distributed database (Dqlite for LXD, Corosync for Proxmox) is used. We are putting together a fund to sponsor the development of a "campus" feature for LXC clusters, so LXC nodes can operate reliably on links under 30 ms.

Node Sizes

Disposable Nodes are classified based on the size of their memory. Higher memory nodes support all features of lower memory nodes.

Listed below are some suggested minimum node sizes for some applications:

0.5G memory

  • CPU: 1
  • RAM: 0.5 GB
  • SWAP: 1 GB
  • DISK: 16 GB

Virtual Private Mesh

  1. Network Relay Node

1G memory

  • CPU: 1
  • RAM: 1 GB
  • SWAP: 4 GB
  • DISK: 64 GB

Infinite Disk

  1. SMB Server
  2. File Access Node

2G memory

  • CPU: 2
  • RAM: 2 GB
  • SWAP: 2 GB
  • DISK: 64 GB

Fuzzy Blockchain

  1. Chain Audit Node

Infinite Disk

  1. File Storage Node

Home Zone

  1. Home Clients

4G memory

  • CPU: 4
  • RAM: 4 GB
  • SWAP: 4 GB
  • DISK: 256 GB

Home Zone

  1. Home Servers

Shared Computer

Shared Computers are NOT dedicated to running Disposable Nodes; they perform other tasks (e.g. running Personal Console in web browsers, editing office documents on Infinite Disk) alongside running one or more Disposable Nodes.

Shared Computers (Windows, macOS) need at least 4 GB RAM to run Disposable Nodes as Virtual Machines. The lack of RAM means running a second VM to protect the main VM might not be possible.

A shared computer with 8GB RAM is recommended and 16 GB RAM is preferred.

Suggested Configurations

4G RAM Computer

  • 1 x 1GB Disposable Node
  • 1 x 0.5GB Disposable Node

8G RAM Computer

  • 1 x 4GB Disposable Node
  • 1 x 0.5GB Disposable Node

16G RAM Computer

  • 1 x 8GB Disposable Node
  • 1 x 0.5GB Disposable Node
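The suggested configurations above can be encoded as a small helper. This is a sketch that simply restates the table (not an official sizing tool), mapping a shared computer's RAM in GB to the recommended node mix:

```shell
# Sketch: map a shared computer's RAM (GB) to the suggested Disposable Node mix.
suggest_nodes() {
  case "$1" in
    4)  echo "1 x 1GB node + 1 x 0.5GB node" ;;
    8)  echo "1 x 4GB node + 1 x 0.5GB node" ;;
    16) echo "1 x 8GB node + 1 x 0.5GB node" ;;
    *)  echo "no suggestion: shared computers need at least 4GB RAM" ;;
  esac
}

suggest_nodes 8     # → 1 x 4GB node + 1 x 0.5GB node
```

Note the pattern: roughly half the host RAM goes to the main node, leaving headroom for the host's own tasks plus a small 0.5 GB node.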

Node Types

Base Nodes are Virtual Machines (e.g. KVM) or System Containers (e.g. LXC) that provide basic compute resources for other Disposable Nodes to build on.

Base Nodes | Operating Systems
Process Node | Ubuntu, Debian, Red Hat
Mesh Node | OpenWrt

Application Servers are created to take advantage of the services offered by the Base Nodes.

Campus Network Application | Core Software | External Software | Internal Software
Broadcast Server | Mastodon | BigBlueButton, PeerTube, coturn | Gallery, Relation
Operate Server | GLPI | OpenWISP, Zabbix | Database Partition
Storage Server | nbdkit | MinIO | Infinite Disk
Blockchain Server | Bitcoin Core | PKI | Fuzzy Blockchain

Home Network Application | Core Software | External Software | Internal Software
File Application | Nextcloud | Samba |
Home Application | Home Assistant | |
Network Application | mitmproxy | tinc, nmap | IP Rank
Search Application | Elasticsearch | carrot2 |


Kubernetes (K8s) was designed by Google for Google-style computing; not many organisations running K8s have the same applications and resources as Google.

Even for those organisations with thousands of containers, orchestrating them through a single control plane is a big reliability and security hole. The bigger the organisation, the greater the need for compartmentalisation with many control planes, so that a single update, hack or mistake cannot bring them all down.

For most Docker-based applications there is NO need for the difficult migration and complex operation of K8s. However, for those with existing K8s applications, it is actually possible to run K8s inside a Disposable Node to take advantage of advanced Community Cluster features like Dynamic Alias, Infinite Disk etc.

Besides K8s, it is also possible to run other orchestration clusters (e.g. Docker Swarm, HashiCorp Nomad etc.) inside Disposable Nodes.


While installing full K8s inside Disposable Nodes is supported, lightweight K8s distributions (e.g. MicroK8s, K3s) fit Citizen Synergy's distributed control paradigm better.

The default K8s distribution for running inside LXC is MicroK8s. It removes the need to implement high availability (HA) at the LXC level, implementing HA inside the LXC itself instead.
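The usual pattern for this setup can be sketched as follows (container and image names are illustrative; nesting must be enabled so snapd and the container runtime work inside the System Container). The sketch is a dry run that prints each command instead of executing it, so it is safe to run without LXD installed:

```shell
# Dry run: print each step instead of executing it.
# Replace "run" with direct execution for real use.
run() { echo "+ $*"; }

run lxc launch ubuntu:22.04 mk8s -c security.nesting=true  # container with nesting enabled
run lxc exec mk8s -- snap install microk8s --classic       # install MicroK8s via snap
run lxc exec mk8s -- microk8s status --wait-ready          # wait until services are up
```

Depending on the host, additional container configuration (e.g. kernel module and cgroup access) may also be required; consult the MicroK8s and LXD documentation for the exact profile.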

The following video introduces MicroK8s running inside LXD:

Default MicroK8s configuration:

Community Cluster vs Kubernetes

Community Cluster is designed for the home while Kubernetes is designed for the data centre. Although both can be used to manage Application Containers (e.g. Docker), they are very different.

Features | Community Cluster | Kubernetes
Compute Module | Any Commands, Packages, Containers, Machines | Special Containers
Compute Set | Node | Pod
Compute Host | Station with Nodes | Node with Pods
Compute Location | Neighbourhood, Data Centres | Data Centres
Replica | Active, Inactive | Active
Orchestrator | Many | One
Storage | Infinite Disk | Container Storage Interface
Network | Virtual Private Mesh | Container Network Interface

Note: A Compute Set groups Modules together so they can share the same networking and storage.

Replica Sets

Both run many replicas of the same application container in order to scale up application reliability and performance across multiple machines.

Container Groups

Disposable Nodes inside a Community Cluster can be viewed loosely as Pods in Kubernetes, enabling application containers running inside to share common networking and storage.

2. Differences


Kubernetes has ONE orchestrator running across multiple Nodes, centrally processing data handed over by information owners, through centralised control of the application and infrastructure.

Community Cluster has MANY information owners processing their own data by controlling the application and infrastructure independently.

As each owner only needs to manage a small part of Community Cluster that it uses, the complexity is substantially reduced.

Complete Management

Kubernetes does not handle much outside of containers (a computing abstraction). Fiduciary Exchange covers everything with the Disposable Node abstraction (from software to hardware, from support personnel to computer rooms).


All Nodes, whether physical or virtual, follow the same Modular Assist management framework.

Any Site

Kubernetes is designed to run in a few secured and stable data centres with high quality networking. Fiduciary Exchange is designed to run almost anywhere in the world with common internet access.

Universal Storage

Kubernetes has numerous volume types and provisioning methods.

Fiduciary Exchange has only one type (Infinite Disk) that looks and performs like a local disk, to support any application (including emails, documents, videos, databases, search engines etc.).

Bidirectional Network

Kubernetes networking focuses on handling incoming traffic to services provided by the pod (e.g. kube-proxy).

Fiduciary Exchange has Network Nodes controlling both incoming and outgoing traffic from Application Nodes, Station Nodes and Room Nodes.

Your Own App Store

A Disposable Node looks and feels like a real computer, enabling it to run almost any application.

For full control of your Private Cyberspace, the preference is to select from the hundreds of millions of Open Source applications available for free to run inside your Disposable Nodes (GitHub alone hosts more than 400 million repositories).

Existing App Stores

With millions of applications available, it is very difficult for cyberspace owners to decide which to use and to learn how to install them into their Private Cyberspace. This is where private cyberspace app stores come in.

Within those numerous applications, some are designed to run on premises in homes, offices, shops, factories etc.

There are numerous open source App Stores available to install in your own Private Cyberspace, giving you a One Click install experience (similar to traditional mobile App Stores) for open source applications.

Instead of getting your apps from the Apple App Store or Google Play Store, build your own App Store by installing one or more of the following in your Private Cyberspace.

  1. CapRover
  2. CasaOS
  3. Cloudron
  4. Co-op Cloud
  5. Cosmos
  6. Easypanel
  7. Elestio
  8. Ethibox
  9. FreedomBox
  10. HomelabOS
  11. PikaPods
  12. Sandstorm
  13. Tipi
  14. Umbrel
  15. Unraid
  16. Yunohost