Disposable Node

1. Homomorphic Encryption

Sovereign Transaction is the world's first deployment of Homomorphic Encryption at scale, enabling massive amounts of sensitive data (e.g. the daily movements of a whole population) to be processed securely, delivering unprecedented benefits to every citizen.

For the first time in history, individuals have FULL control of their identity:

  • Not their governments
  • Not their banks
  • Not their employers
  • Not their schools
  • Not their parents
  • Not their friends

For the first time, Compute Owners have MORE insight into themselves than any of the above. Yes, not even the Online Social Platforms they use EVERY DAY have more control of their identity than the Compute Owners themselves.

2. Linux Container

Disposable Nodes support a wide range of virtualisation technologies, enabling you to run almost any software in them.

Although you can run any software packages in your Disposable Nodes, the software packages that have been tested and are fully supported worldwide by 88.io for the Private Cyberspace 24.07 release are listed in the repository below.

Please refer to https://repository.88.io/ for source code and more details.

Currently, Disposable Nodes are based on Linux Container (LXC) technology.

LXC nodes work best together, as a cluster.

As of 2023-10-01 there are 3 main LXC cluster managers:

  1. LXD
  2. Incus, a recent fork of LXD
  3. Proxmox

Only LXD running Ubuntu on x86 machines is supported as of the 20.12 release; support for the other two may be added in future releases.

Traditional LXC cluster managers do not support LXC nodes separated by high latency links. Most quote less than 5 ms as the latency limit between nodes.

In Proxmox, link latency of more than 5 ms "may work" under some conditions, e.g. if there are only 3 nodes and they are less than 10 ms apart.

In LXD, the number of database queries over the cluster link is a known problem that has not yet been fixed.

In most cases, the limitation seems to be the way the distributed database (Dqlite for LXD, Corosync for Proxmox) is used. We are putting together a fund to sponsor the development of a "campus" feature for LXC clusters, so LXC nodes can operate reliably on links under 30 ms.
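
As a rough health check before forming a cluster, the link latency to each candidate node can be probed against these limits. A minimal sketch, assuming a Linux host with the standard ping utility; the peer addresses are illustrative only:

  #!/usr/bin/env python3
  """Sketch: probe link latency between LXC cluster candidates."""
  import re
  import subprocess

  NODES = ["10.0.0.2", "10.0.0.3", "10.0.0.4"]  # illustrative peer addresses

  def avg_rtt_ms(host: str, count: int = 5) -> float:
      # Parse the avg value from ping's "rtt min/avg/max/mdev" summary line.
      out = subprocess.run(["ping", "-c", str(count), "-q", host],
                           capture_output=True, text=True, check=True).stdout
      return float(re.search(r"= [\d.]+/([\d.]+)/", out).group(1))

  for node in NODES:
      rtt = avg_rtt_ms(node)
      if rtt >= 30:
          verdict = "too slow even for a campus cluster"
      elif rtt >= 5:
          verdict = "above the usual LXC cluster limit"
      else:
          verdict = "ok"
      print(f"{node}: {rtt:.1f} ms ({verdict})")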

Node Sizes

Disposable Nodes are classified based on the size of their memory. Higher memory nodes support all features of lower memory nodes.

Listed below are suggested minimum node sizes for some applications (a provisioning sketch follows the list):

0.5G memory

  • CPU: 1
  • RAM: 0.5 GB
  • SWAP: 1 GB
  • DISK: 16 GB

Virtual Private Mesh

  1. Network Relay Node

1G memory

  • CPU: 1
  • RAM: 1 GB
  • SWAP: 4 GB
  • DISK: 64 GB

Infinite Disk

  1. SMB Server
  2. File Access Node

2G memory

  • CPU: 2
  • RAM: 2 GB
  • SWAP: 2 GB
  • DISK: 64 GB

Fuzzy Blockchain

  1. Chain Audit Node

Infinite Disk

  1. File Storage Node

Home Zone

  1. Home Clients

4G memory

  • CPU: 4
  • RAM: 4 GB
  • SWAP: 4 GB
  • DISK: 256 GB

Home Zone

  1. Home Servers
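
These sizes map directly onto LXD instance limits. A minimal provisioning sketch for the 0.5G size, assuming LXD is installed and initialised; the node name and image alias are illustrative:

  #!/usr/bin/env python3
  """Sketch: provision a 0.5G Disposable Node as an LXD virtual machine."""
  import subprocess

  NODE = "relay-node-0"   # illustrative node name
  IMAGE = "ubuntu:24.04"  # assumed image alias

  def lxc(*args):
      # Run an lxc CLI command, raising on failure.
      subprocess.run(["lxc", *args], check=True)

  lxc("init", IMAGE, NODE, "--vm")  # create the VM without starting it
  lxc("config", "set", NODE, "limits.cpu", "1")
  lxc("config", "set", NODE, "limits.memory", "512MiB")
  lxc("config", "device", "override", NODE, "root", "size=16GiB")
  lxc("start", NODE)
  # The 1 GB swap for this size is created inside the guest OS
  # (e.g. a swap file), as LXD VM limits do not cover swap.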

Shared Computer

Shared Computers are NOT dedicated to running Disposable Nodes; they perform other tasks (e.g. running the Personal Console in web browsers, editing office documents on Infinite Disk) alongside running one or more Disposable Nodes.

Shared Computers (Windows, macOS) need at least 4 GB of RAM to run Disposable Nodes as Virtual Machines. With so little RAM, running a second VM to protect the main VM may not be possible.

A shared computer with 8 GB RAM is recommended and 16 GB RAM is preferred; a sizing sketch follows the suggested configurations below.

Suggested Configurations

4G RAM Computer

  • 1 x 1GB Disposable Node
  • 1 x 0.5GB Disposable Node

8G RAM Computer

  • 1 x 4GB Disposable Node
  • 1 x 0.5GB Disposable Node

16G RAM Computer

  • 1 x 8GB Disposable Node
  • 1 x 0.5GB Disposable Node
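
A minimal sizing sketch mapping a host's total RAM to the tiers above. The layouts simply mirror the table; the RAM probe reads /proc/meminfo and is therefore Linux-only:

  #!/usr/bin/env python3
  """Sketch: suggest a Disposable Node layout for a Shared Computer."""

  # Suggested node layouts from the table above, keyed by host RAM tier in GB.
  LAYOUTS = {
      4: ["1 x 1GB Disposable Node", "1 x 0.5GB Disposable Node"],
      8: ["1 x 4GB Disposable Node", "1 x 0.5GB Disposable Node"],
      16: ["1 x 8GB Disposable Node", "1 x 0.5GB Disposable Node"],
  }

  def host_ram_gb() -> float:
      # MemTotal is reported in kB; convert to GB.
      with open("/proc/meminfo") as f:
          for line in f:
              if line.startswith("MemTotal:"):
                  return int(line.split()[1]) / 2**20
      raise RuntimeError("MemTotal not found in /proc/meminfo")

  ram = host_ram_gb()
  tiers = [t for t in sorted(LAYOUTS) if t <= ram]
  if not tiers:
      raise SystemExit("At least 4 GB of RAM is needed to run Disposable Node VMs")
  for node in LAYOUTS[tiers[-1]]:
      print(node)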

Node Types

Base Nodes are Virtual Machines (e.g. KVM) or System Containers (e.g. LXC) that provide basic compute resources for other Disposable Nodes to build on.

  Base Node     Operating Systems
  Process Node  Ubuntu, Debian, Red Hat
  Mesh Node     OpenWrt

Application Servers are created to take advantage of the services offered by the Base Nodes.

Campus Network

  Application          Core Software    External Software                Internal Software
  Broadcast Server     Mastodon         BigBlueButton, PeerTube, coturn  Gallery, Relation
  Operate Server       GLPI             OpenWISP, Zabbix                 Database Partition
  Storage Server       nbdkit           MinIO                            Infinite Disk
  Blockchain Server    Bitcoin Core     PKI                              Fuzzy Blockchain

Home Network

  Application          Core Software    External Software                Internal Software
  File Application     Nextcloud        Samba                            -
  Home Application     Home Assistant   -                                -
  Network Application  mitmproxy        tinc, nmap                       IP Rank
  Search Application   Elasticsearch    Carrot2                          -

Kubernetes

Kubernetes (K8s) was designed by Google for Google-style computing; few organisations running K8s have the same applications and resources as Google.

Even for those organisations with thousands of containers, orchestrating them through a single control plane is a big reliability and security hole. The bigger the organisation, the greater the need for compartmentalisation across many control planes, so that one update, one hack, or one mistake cannot bring all of them down.

For most Docker-based applications there is NO need for the difficult migration to and complex operation of K8s. However, for those with existing K8s applications, it is possible to run K8s inside a Disposable Node to take advantage of advanced Community Cluster features like Dynamic Alias, Infinite Disk etc.

Besides K8s, it is also possible to run other orchestration clusters (e.g. Docker Swarm, HashiCorp Nomad etc.) inside Disposable Nodes.

MicroK8s

While installing full K8s inside Disposable Nodes is supported, lightweight K8s distributions (e.g. MicroK8s, K3s) fit Citizen Synergy's distributed control paradigm better.

The default K8s distribution for running inside LXC is MicroK8s. It removes the need to implement high availability (HA) at the LXC level, implementing HA inside the LXC containers themselves.

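A minimal sketch of bringing up MicroK8s inside an LXD container, assuming LXD is installed. The profile and container names are illustrative; the two security keys shown are only the commonly documented minimum, and the official MicroK8s LXD profile adds further raw.lxc and kernel-module settings not reproduced here:

  #!/usr/bin/env python3
  """Sketch: bring up MicroK8s inside an LXD container."""
  import subprocess

  def lxc(*args):
      # Run an lxc CLI command, raising on failure.
      subprocess.run(["lxc", *args], check=True)

  PROFILE = "microk8s"  # illustrative profile name
  NODE = "k8s-node-0"   # illustrative container name

  lxc("profile", "create", PROFILE)
  # MicroK8s runs containerd inside the container, so nesting is required.
  lxc("profile", "set", PROFILE, "security.nesting", "true")
  lxc("profile", "set", PROFILE, "security.privileged", "true")
  lxc("launch", "ubuntu:24.04", NODE, "--profile", "default", "--profile", PROFILE)
  lxc("exec", NODE, "--", "snap", "install", "microk8s", "--classic")
  lxc("exec", NODE, "--", "microk8s", "status", "--wait-ready")

HA can then be implemented inside the containers themselves, e.g. by joining three or more such nodes with microk8s add-node.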

Community Cluster vs Kubernetes

The Community Cluster is designed for the home while Kubernetes is designed for the data centre; although both can be used to manage Application Containers (e.g. Docker), they are very different.

  Feature           Community Cluster                             Kubernetes
  Compute Module    Any Commands, Packages, Containers, Machines  Special Containers
  Compute Set       Node                                          Pod
  Compute Host      Station with Nodes                            Node with Pods
  Compute Location  Neighbourhood, Data Centres                   Data Centres
  Replica           Active, Inactive                              Active
  Orchestrator      Many                                          One
  Storage           Infinite Disk                                 Container Storage Interface
  Network           Virtual Private Mesh                          Container Network Interface

Note: A Compute Set groups Modules together so they can share the same networking and storage.

1. Similarities

Replica Sets

Both run many replicas of the same application container in order to scale up application reliability and performance across multiple machines.

Container Groups

Disposable Nodes inside a Community Cluster can be viewed loosely as Pods in Kubernetes, enabling application containers running inside to share common networking and storage.
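
Loosely speaking, that pod-style sharing can be reproduced with plain Docker inside a single Disposable Node. A minimal sketch (container and volume names are illustrative): the second container joins the first one's network namespace and both mount the same volume, much like containers within one Kubernetes Pod:

  #!/usr/bin/env python3
  """Sketch: pod-style sharing between two containers in one node."""
  import subprocess

  def docker(*args):
      subprocess.run(["docker", *args], check=True)

  docker("volume", "create", "shared-data")  # shared storage
  docker("run", "-d", "--name", "app", "-v", "shared-data:/data", "nginx")
  # Join "app"'s network namespace: the two containers share localhost.
  docker("run", "-d", "--name", "sidecar",
         "--network", "container:app",
         "-v", "shared-data:/data", "alpine", "sleep", "infinity")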

2. Differences

Orchestrator

Kubernetes has ONE orchestrator running across multiple Nodes, centrally processing the data given to it by information owners through centralised control of the application and infrastructure.

Community Cluster has MANY information owners processing their own data by controlling the application and infrastructure independently.

As each owner only needs to manage the small part of the Community Cluster that they use, complexity is substantially reduced.

Complete Management

Kubernetes does not handle much outside of containers (a computing abstraction). Fiduciary Exchange covers everything with the Disposable Node abstraction (from software to hardware, from support personnel to computer rooms).

2halves

All Nodes, whether physical or virtual, follow the same Modular Assist management framework.

Any Site

Kubernetes is designed to be run in a few secured and stable data centres with high-quality networking. Fiduciary Exchange is designed to run almost anywhere in the world with ordinary internet access.

Universal Storage

Kubernetes has numerous volume types and provisioning methods.

Fiduciary Exchange only has one type (Infinite Disk) that looks and performs like a local disk to support any application (including email, documents, videos, databases, search engines etc.).

Bidirectional Network

Kubernetes networking focuses on handling incoming traffic to services provided by the pod (e.g. kube-proxy).

Fiduciary Exchange has Network Nodes controlling both incoming and outgoing traffic from Application Nodes, Station Nodes and Room Nodes.

Your Own App Store

A Disposable Node looks and feels like a real computer, enabling it to run almost any application.

For full control of your Private Cyberspace, the preference is to select from the hundreds of millions of Open Source applications available for free inside your Disposable Nodes (GitHub alone hosts more than 400 million repositories).

Existing App Stores

With millions of applications available, it is very difficult for cyberspace owners to decide which to use and to learn how to install them into their Private Cyberspace; this is where private cyberspace App Stores come in.

Within those numerous applications, some are designed to be run on premises in homes, offices, shops, factories etc.

There are numerous open-source App Stores available for you to install in your own Private Cyberspace so you can have the One-Click install experience (similar to traditional mobile App Stores) for open-source applications.

Instead of getting your apps from the Apple App Store or Google Play Store, build your own App Store by installing one or more of the following App Stores in your Private Cyberspace (an install sketch follows the list).

  1. CapRover
  2. CasaOS
  3. Cloudron
  4. Co-op Cloud
  5. Cosmos
  6. Easypanel
  7. Elestio
  8. Ethibox
  9. FreedomBox
  10. HomelabOS
  11. PikaPods
  12. Sandstorm
  13. Tipi
  14. Umbrel
  15. Unraid
  16. Yunohost
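
A minimal install sketch for one of these, assuming an existing Disposable Node reachable via LXD with curl available inside it. The get.casaos.io one-liner follows the CasaOS project's documented installer and should be verified against their current docs before use:

  #!/usr/bin/env python3
  """Sketch: one-click App Store install inside a Disposable Node."""
  import subprocess

  NODE = "appstore-node"  # illustrative Disposable Node name

  # Run the CasaOS one-line installer inside the node (lxc exec runs as root).
  subprocess.run(
      ["lxc", "exec", NODE, "--", "sh", "-c",
       "curl -fsSL https://get.casaos.io | bash"],
      check=True,
  )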

Disposable Node Storage

Files inside Disposable Nodes are stored in 4 types of Infinite Disk buckets.

1. System Bucket

These are folders for storing the underlying operating system and associated files. They are backed up on Infinite Disk under the System Bucket of that node.

2. Application Folders

Docker Images and their changes (overlayfs)

3. Standard Folders

Docker Volume (real-time backup to the Infinite Disk Cluster)

4. Layered Folders

Mount Volume - Docker Volume (real-time backup to the Infinite Disk Cluster)
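
A minimal sketch of a Standard Folder-style volume, using Docker's local driver to bind-mount a hypothetical Infinite Disk path; the real-time backup is assumed to happen underneath the mount:

  #!/usr/bin/env python3
  """Sketch: back a Standard Folder with an Infinite Disk path."""
  import subprocess

  VOLUME = "standard-docs"                      # illustrative volume name
  BACKING = "/mnt/infinite-disk/standard-docs"  # hypothetical Infinite Disk mount

  def docker(*args):
      subprocess.run(["docker", *args], check=True)

  # Create a local volume that bind-mounts the Infinite Disk path,
  # so writes land where the Infinite Disk Cluster can back them up.
  docker("volume", "create",
         "--driver", "local",
         "--opt", "type=none",
         "--opt", f"device={BACKING}",
         "--opt", "o=bind",
         VOLUME)

  # Any container mounting the volume now stores its data on Infinite Disk.
  docker("run", "--rm", "-v", f"{VOLUME}:/data", "alpine",
         "sh", "-c", "echo hello > /data/hello.txt")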