Team Bootup Project

Team Bootup Project is a worldwide movement to solve global scale problems with global scale compute.

The aim is to aggregate as much global compute power as possible and focus it on improving the four fundamental levels of human ecology.

Although the upper levels are dependent on the lower levels (e.g. it is not possible to have a good Economy without a good Environment), it is important that Bootup Projects deliver solutions at all four levels.

Initially, we are kicking off the Team Bootup Project with one project at each level:

  1. Society
    Compute Dignity - computedignity.com

  2. Health
    Virtual Vaccine - virtualvaccine.me

  3. Economy
    Latent Resource - latentresource.com

  4. Environment
    Culture Interchange - cultureinterchange.com

Sustainable Development Goals

Team Bootup Project addresses a much wider set of problems facing humankind than the United Nations Sustainable Development Goals; nevertheless, it is interesting to see how those 17 goals map onto Bootup Project's four levels (official SDG numbers shown below).

Society

    5. Gender Equality
    10. Reduced Inequalities
    16. Peace, Justice and Strong Institutions
    17. Partnerships for the Goals

Health

    2. Zero Hunger
    3. Good Health and Well-Being
    4. Quality Education

Economy

    1. No Poverty
    7. Affordable and Clean Energy
    8. Decent Work and Economic Growth
    9. Industry, Innovation and Infrastructure

Environment

    6. Clean Water and Sanitation
    11. Sustainable Cities and Communities
    12. Responsible Consumption and Production
    13. Climate Action
    14. Life Below Water
    15. Life on Land

As the mapping above shows, Bootup Projects at any of the four levels contribute directly to achieving the Sustainable Development Goals.

Node Sizes

Disposable Nodes are classified based on the size of their memory. Higher memory nodes support all features of lower memory nodes.

Listed below are suggested minimum node sizes for some applications (a sizing lookup sketch follows the list):

0.5G memory

  • CPU: 1
  • RAM: 0.5 GB
  • SWAP: 1 GB
  • DISK: 16 GB

Virtual Private Mesh

  1. Network Relay Node

1G memory

  • CPU: 1
  • RAM: 1 GB
  • SWAP: 4 GB
  • DISK: 64 GB

Infinite Disk

  1. SMB Server
  2. File Access Node

2G memory

  • CPU: 2
  • RAM: 2 GB
  • SWAP: 2 GB
  • DISK: 64 GB

Fuzzy Blockchain

  1. Chain Audit Node

Infinite Disk

  1. File Storage Node

Home Zone

  1. Home Clients

4G memory

  • CPU: 4
  • RAM: 4 GB
  • SWAP: 4 GB
  • DISK: 256 GB

Home Zone

  1. Home Servers
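
As a quick aid, the sizing table above can be expressed as a lookup, so a host can check what a given application needs before creating a node. This is a minimal sketch in Python; MIN_SIZES, SPECS and required_size are hypothetical helper names that simply restate the table above, not part of any official tooling.

  # Minimal sizing lookup; MIN_SIZES/SPECS restate the table above, and the
  # helper names are hypothetical, not official Bootup Project tooling.

  MIN_SIZES = {
      # application -> minimum memory class (GB of RAM)
      "Network Relay Node": 0.5,
      "SMB Server": 1,
      "File Access Node": 1,
      "Chain Audit Node": 2,
      "File Storage Node": 2,
      "Home Clients": 2,
      "Home Servers": 4,
  }

  SPECS = {
      # memory class -> (CPUs, RAM GB, swap GB, disk GB)
      0.5: (1, 0.5, 1, 16),
      1: (1, 1, 4, 64),
      2: (2, 2, 2, 64),
      4: (4, 4, 4, 256),
  }

  def required_size(app):
      """Return the (CPU, RAM, swap, disk) minimum for an application."""
      return SPECS[MIN_SIZES[app]]

  cpu, ram, swap, disk = required_size("SMB Server")
  print(f"SMB Server needs >= {cpu} CPU, {ram} GB RAM, {swap} GB swap, {disk} GB disk")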

Shared Computer

Shared Computers are NOT dedicated to running Disposable Nodes; alongside one or more Disposable Nodes they perform other tasks, e.g. running Personal Console in a web browser or editing office documents on Infinite Disk.

Shared Computers (Windows, macOS) need at least 4 GB of RAM to run Disposable Nodes as Virtual Machines. With so little RAM, running a second VM to protect the main VM might not be possible.

A shared computer with 8 GB of RAM is recommended and 16 GB is preferred. Suggested configurations are listed below, followed by a RAM-budget sketch.

Suggested Configurations

4G RAM Computer

  • 1 x 1GB Disposable Node
  • 1 x 0.5GB Disposable Node

8G RAM Computer

  • 1 x 4GB Disposable Node
  • 1 x 0.5GB Disposable Node

16G RAM Computer

  • 1 x 8GB Disposable Node
  • 1 x 0.5GB Disposable Node
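
A rough rule can be read off these configurations: the nodes never consume so much RAM that the host is left with less than about 2.5 GB for its own tasks. The sketch below encodes that check; the 2.5 GB reserve is an assumption inferred from the table above, not a stated rule.

  # Sketch: check that planned Disposable Nodes leave enough RAM for the host.
  # ASSUMPTION: the host keeps at least ~2.5 GB for itself, inferred from the
  # suggested configurations above (they leave 2.5, 3.5 and 7.5 GB free).

  HOST_RESERVE_GB = 2.5

  def fits(host_ram_gb, node_sizes_gb):
      """True if the nodes fit while leaving HOST_RESERVE_GB for the host."""
      return host_ram_gb - sum(node_sizes_gb) >= HOST_RESERVE_GB

  print(fits(4, [1, 0.5]))   # True  (suggested 4 GB configuration)
  print(fits(8, [4, 0.5]))   # True  (suggested 8 GB configuration)
  print(fits(16, [8, 0.5]))  # True  (suggested 16 GB configuration)
  print(fits(4, [4]))        # False (no RAM left for the host itself)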

Node Types

Base Nodes are Virtual Machines (e.g. KVM) or System Containers (e.g. LXC) that provide basic compute resources for other Disposable Nodes to build on.

Base Nodes     Operating Systems
Process Node   Ubuntu, Debian, Red Hat
Mesh Node      OpenWrt

Application Servers are created to take advantage of the services offered by the Base Nodes.

Campus Network Application   Core Software    External Software                 Internal Software
Broadcast Server             Mastodon         BigBlueButton, PeerTube, coturn   Gallery, Relation
Operate Server               GLPI             OpenWISP, Zabbix                  Database Partition
Storage Server               nbdkit           MinIO                             Infinite Disk
Blockchain Server            Bitcoin Core     PKI                               Fuzzy Blockchain

Home Network Application     Core Software    External Software                 Internal Software
File Application             Nextcloud        Samba
Home Application             Home Assistant
Network Application          mitmproxy        tinc, nmap                        IP Rank
Search Application           Elasticsearch    carrot2

Kubernetes

Kubernetes (K8s) was designed by Google for Google-style computing; few organisations running K8s have the same applications and resources as Google.

Even for organisations with thousands of containers, orchestrating them all through a single control plane is a big reliability and security hole. The bigger the organisation, the greater the need for compartmentalisation across many control planes, so that a single update, hack or mistake cannot bring everything down.

For most Docker-based applications there is NO need for the difficult migration to and complex operation of K8s. For those with existing K8s applications, however, it is possible to run K8s inside a Disposable Node and take advantage of advanced Community Cluster features like Dynamic Alias, Infinite Disk etc.

Besides K8s, it is also possible to run other orchestration clusters (e.g. Docker Swarm, HashiCorp Nomad etc.) inside Disposable Nodes.

MicroK8s

While installing full K8s inside Disposable Nodes is supported, lightweight K8s distributions (e.g. MicroK8s, K3s etc.) fit Citizen Synergy's distributed control paradigm better.

The default K8s distribution for running inside LXC is MicroK8s. It removes the need to implement high availability (HA) at the LXC level and instead implements HA inside the LXC itself.

The following video introduces MicroK8s running inside LXD:

Default MicroK8s configuration:
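
The project's default configuration listing is not reproduced here. As a rough stand-in, the sketch below drives LXD from Python to launch a container and install MicroK8s in it. The container settings and add-on choices follow common MicroK8s-on-LXD practice and are assumptions, not this project's official defaults.

  # Sketch: MicroK8s inside an LXD container, driven from the host in Python.
  # ASSUMPTIONS: the nesting/privileged settings and the add-on list are common
  # MicroK8s-on-LXD practice, not this project's official defaults.
  import subprocess

  def run(*cmd):
      print("+", " ".join(cmd))
      subprocess.run(cmd, check=True)

  # Launch an Ubuntu container with nesting so snaps and containers work inside.
  run("lxc", "launch", "ubuntu:22.04", "microk8s-node",
      "-c", "security.nesting=true", "-c", "security.privileged=true")
  # Install MicroK8s via snap, then wait until it is ready.
  run("lxc", "exec", "microk8s-node", "--", "snap", "install", "microk8s", "--classic")
  run("lxc", "exec", "microk8s-node", "--", "microk8s", "status", "--wait-ready")
  # Enable a minimal add-on set (assumed; adjust to taste).
  run("lxc", "exec", "microk8s-node", "--", "microk8s", "enable", "dns", "hostpath-storage")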

Community Cluster vs Kubernetes

Community Cluster is designed for the home while Kubernetes is designed for the data centre. Although both can be used to manage Application Containers (e.g. Docker), they are very different.

Features           Community Cluster                               Kubernetes
Compute Module     Any Commands, Packages, Containers, Machines    Special Containers
Compute Set        Node                                            Pod
Compute Host       Station with Nodes                              Node with Pods
Compute Location   Neighbourhood, Data Centres                     Data Centres
Replica            Active, Inactive                                Active
Orchestrator       Many                                            One
Storage            Infinite Disk                                   Container Storage Interface
Network            Virtual Private Mesh                            Container Network Interface

Note: A Compute Set groups Modules together so they can share the same networking and storage.

1. Similarities

Replica Sets

Both run many replicas of the same application container in order to scale up application reliability and performance across multiple machines.

Container Groups

Disposable Nodes inside a Community Cluster can be viewed loosely as Pods in Kubernetes: the application containers running inside one node share common networking and storage, as the sketch below illustrates.
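
To make the Pod analogy concrete, the sketch below starts two Docker containers in one shared network namespace, so the second reaches the first via localhost, just as application containers inside one Disposable Node (or one Pod) would. Container names and images are illustrative only; Docker must be installed on the host.

  # Sketch: two Docker containers sharing one network namespace (Pod-style).
  # Names and images are illustrative; requires Docker on the host.
  import subprocess, time

  def run(*cmd):
      print("+", " ".join(cmd))
      return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

  run("docker", "run", "-d", "--name", "web", "nginx:alpine")  # web server
  time.sleep(2)  # give nginx a moment to start
  # Join web's network namespace; "localhost" now points at the same stack.
  code = run("docker", "run", "--rm", "--network", "container:web",
             "curlimages/curl", "-s", "-o", "/dev/null", "-w", "%{http_code}",
             "http://localhost")
  print("nginx answered with HTTP", code)  # expect 200
  run("docker", "rm", "-f", "web")  # clean up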

2. Differences

Orchestrator

Kubernetes has ONE orchestrator running across multiple Nodes: information owners hand their data to it, and it processes that data through centralised control of the application and infrastructure.

Community Cluster has MANY information owners processing their own data by controlling the application and infrastructure independently.

As each owner only needs to manage the small part of the Community Cluster that it uses, complexity is substantially reduced.

Complete Management

Kubernetes does not handle much outside of containers (a computing abstraction). Fiduciary Exchange covers everything with the Disposable Node abstraction, from software to hardware and from support personnel to computer rooms.

Two Halves

All Nodes, whether physical or virtual, follow the same Modular Assist management framework.

Any Site

Kubernetes is designed to run in a few secured and stable data centres with high quality networking. Fiduciary Exchange is designed to run across the world, almost anywhere with common internet access.

Universal Storage

Kubernetes has numerous volume types and provisioning methods.

Fiduciary Exchange has only one type (Infinite Disk), which looks and performs like a local disk, to support any application (including emails, documents, videos, databases, search engines etc.).

Bidirectional Network

Kubernetes networking focuses on handling incoming traffic to services provided by the pod (e.g. kube-proxy).

Fiduciary Exchange has Network Nodes controlling both incoming and outgoing traffic for Application Nodes, Station Nodes and Room Nodes.

Your Own App Store

A Disposable Node looks and feels like a real computer, enabling it to run almost any application.

For full control of your Private Cyberspace, the preference is to select from the hundreds of millions of Open Source projects available for free to run inside your Disposable Nodes (GitHub alone hosts more than 400 million repositories).

Existing App Stores

With millions of applications available, it is very difficult for cyberspace owners to decide which to use and to learn how to install them into their Private Cyberspace. This is where private cyberspace app stores come in.

Within those numerous applications, some are designed to be run on premises in homes, offices, shops, factories etc.

There are numerous open source App Stores available to install in your own Private Cyberspace, giving you a one-click install experience (similar to traditional mobile App Stores) for open source applications.

Instead of getting your apps from the Apple App Store or Google Play Store, build your own App Store by installing one or more of the following in your Private Cyberspace (an install sketch follows the list):

  1. CapRover
  2. CasaOS
  3. Cloudron
  4. Co-op Cloud
  5. Cosmos
  6. Easypanel
  7. Elestio
  8. Ethibox
  9. FreedomBox
  10. HomelabOS
  11. PikaPods
  12. Sandstorm
  13. Tipi
  14. Umbrel
  15. Unraid
  16. Yunohost
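
As an illustration of the one-click idea, the sketch below installs CasaOS into a Disposable Node from the host via LXD. The node name my-node is hypothetical, and get.casaos.io is the installer URL documented by the CasaOS project at the time of writing; verify it (and review the script) before running anything like this.

  # Sketch: install an open source App Store (CasaOS) inside a Disposable Node.
  # ASSUMPTIONS: "my-node" is a hypothetical LXD container name; get.casaos.io
  # is CasaOS's documented installer URL -- verify and review before use.
  import subprocess

  subprocess.run(
      ["lxc", "exec", "my-node", "--", "bash", "-c",
       "curl -fsSL https://get.casaos.io | bash"],
      check=True,
  )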

Disposable Node Storage

Files inside Disposable Nodes are stored in 4 types of Infinite Disk buckets, summarised in the sketch after the list.

1. System Bucket

These are folders storing the underlying operating system and associated files. They are backed up on Infinite Disk under the System Bucket of that node.

2. Application Folders

These hold Docker images and their changes (overlayfs).

3. Standard Folders

These hold Docker volumes, backed up in real time to the Infinite Disk Cluster.

4. Layered Folders

These hold mount volumes layered over Docker volumes, backed up in real time to the Infinite Disk Cluster.
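
As a compact summary, the sketch below models the four bucket types and their backup behaviour as plain data. The field names are hypothetical; the types and backup targets simply restate the list above.

  # Sketch: the four Disposable Node storage bucket types as plain data.
  # Field names are hypothetical; contents restate the list above.

  BUCKET_TYPES = {
      "System Bucket": {
          "holds": "operating system and associated files",
          "backup": "Infinite Disk, under the node's System Bucket",
      },
      "Application Folders": {
          "holds": "Docker images and their changes (overlayfs)",
          "backup": None,  # no backup target stated above
      },
      "Standard Folders": {
          "holds": "Docker volumes",
          "backup": "real-time backup to the Infinite Disk Cluster",
      },
      "Layered Folders": {
          "holds": "mount volumes layered over Docker volumes",
          "backup": "real-time backup to the Infinite Disk Cluster",
      },
  }

  for name, info in BUCKET_TYPES.items():
      print(f"{name}: holds {info['holds']}; backup: {info['backup']}")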