Minio

Community Storage

Infinite Disk enables the deployment of MinIO in the community (instead of inside data centres), substantially decreasing its costs and increasing its reliability.

1. Server Attached Disks

MinIO disks are bound to a server: when a server goes down, all the disks attached to it also go down.

Infinite Disk separates the disk storage from the MinIO server and puts the "disks" on remote sites as network storage called File Nodes. Should a MinIO server go down, these File Nodes can continue to work independently with OTHER MinIO servers.

The MinIO server only manages the File Nodes underneath it and no longer has any physical disk attached; we call this storage-less server a Cluster Node.
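
Below is a minimal Go sketch of this split; FileNode, ClusterNode and their fields are names we invented for illustration, not actual Infinite Disk or MinIO APIs:

package main

import "fmt"

// FileNode is a remote network "disk" offered to a Cluster Node.
// All names and fields here are illustrative only.
type FileNode struct {
	Endpoint   string // e.g. "https://site1.example.net/fn0" (hypothetical)
	CapacityGB int
	Online     bool
}

// ClusterNode is a storage-less MinIO server: it holds no physical
// disks itself, only references to the File Nodes it manages.
type ClusterNode struct {
	Name      string
	FileNodes []*FileNode
}

// Reattach hands this node's File Nodes over to another Cluster Node,
// modelling how the File Nodes keep working when their server goes down.
func (c *ClusterNode) Reattach(target *ClusterNode) {
	target.FileNodes = append(target.FileNodes, c.FileNodes...)
	c.FileNodes = nil
}

func main() {
	a := &ClusterNode{Name: "cluster-a", FileNodes: []*FileNode{
		{Endpoint: "https://site1.example.net/fn0", CapacityGB: 20, Online: true},
		{Endpoint: "https://site2.example.net/fn1", CapacityGB: 20, Online: true},
	}}
	b := &ClusterNode{Name: "cluster-b"}

	// cluster-a goes down; its File Nodes are taken over by cluster-b.
	a.Reattach(b)
	fmt.Printf("%s now manages %d File Nodes\n", b.Name, len(b.FileNodes))
}

The Reattach method is only there to make the point that, because the storage lives in the File Nodes, nothing is lost when a Cluster Node disappears.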

2. Large Disk Size

MinIO file sizes are limited by the disk size, so large disks (e.g. in the terabyte range) are normally required. When a fault occurs, rebuilding each physical disk takes a long time and a lot of resources.

Infinite Disk uses File Nodes with a fixed 20 GByte capacity, so each disk can be rebuilt rapidly in resource-constrained environments. The small disk size also means people WITHOUT much spare storage can now become Disk Nodes and offer that storage capacity to others.
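
As a rough illustration of the fixed sizing (the 100 GB figure below is an assumption for the example, not a requirement):

package main

import "fmt"

// fileNodeGB is the fixed File Node capacity described above.
const fileNodeGB = 20

// fileNodesFor returns how many fixed-size File Nodes are needed to
// offer the requested capacity, rounding up.
func fileNodesFor(capacityGB int) int {
	return (capacityGB + fileNodeGB - 1) / fileNodeGB
}

func main() {
	// A contributor with 100 GB of spare space can offer 5 File Nodes;
	// rebuilding any one of them only moves 20 GB, never a whole multi-terabyte disk.
	fmt.Println(fileNodesFor(100)) // 5
}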

Inter-Server Communication

MinIO traditionally used HTTP for all inter-server communication, but has been moving some of it to WebSockets since 2023-12:

With this change, the previous tuning for operations over WAN may be affected:


$ git diff
diff --git a/cmd/rpc-common.go b/cmd/rpc-common.go
index 9761787..b6a3f40 100644
--- a/cmd/rpc-common.go
+++ b/cmd/rpc-common.go
@@ -24,7 +24,7 @@ import (
 
 // Allow any RPC call request time should be no more/less than 3 seconds.
 // 3 seconds is chosen arbitrarily.
-const rpcSkewTimeAllowed = 3 * time.Second
+const rpcSkewTimeAllowed = 2 * time.Minute
 
 func isRequestTimeAllowed(requestTime time.Time) bool {
        // Check whether request time is within acceptable skew time.
diff --git a/vendor/github.com/minio/dsync/drwmutex.go b/vendor/github.com/minio/dsync/drwmutex.go
index b15bd4f..96a918b 100644
--- a/vendor/github.com/minio/dsync/drwmutex.go
+++ b/vendor/github.com/minio/dsync/drwmutex.go
@@ -43,7 +43,7 @@ func log(msg ...interface{}) {
 }
 
 // DRWMutexAcquireTimeout - tolerance limit to wait for lock acquisition before.
-const DRWMutexAcquireTimeout = 25 * time.Millisecond // 25ms.
+const DRWMutexAcquireTimeout = 1 * time.Minute // 1min.
 
 // A DRWMutex is a distributed mutual exclusion lock.
 type DRWMutex struct {
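
For context, the skew check guarded by rpcSkewTimeAllowed looks roughly like the sketch below; the exact body of isRequestTimeAllowed in cmd/rpc-common.go may differ between MinIO versions:

package main

import (
	"fmt"
	"time"
)

// Widened from 3 seconds to tolerate operation over WAN (see the diff above).
const rpcSkewTimeAllowed = 2 * time.Minute

// isRequestTimeAllowed reports whether the request time is within the
// acceptable skew of the local clock, in either direction. This body is
// only a sketch of the typical implementation.
func isRequestTimeAllowed(requestTime time.Time) bool {
	utcNow := time.Now().UTC()
	return requestTime.Sub(utcNow) <= rpcSkewTimeAllowed &&
		utcNow.Sub(requestTime) <= rpcSkewTimeAllowed
}

func main() {
	// A request timestamped 90 seconds ago is still accepted under the
	// 2-minute allowance, but would have been rejected under 3 seconds.
	past := time.Now().UTC().Add(-90 * time.Second)
	fmt.Println(isRequestTimeAllowed(past)) // true
}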

Parity Upgrade

MinIO’s parity upgrade is essentially an automatic adjustment in the parity distribution across the remaining available drives within the SAME erasure-coding set when one or more drives go offline.

In a MinIO distributed setup with erasure coding, parity bits are used to protect data. These bits ensure that data can be reconstructed if a drive fails, providing redundancy and reliability. Here’s how MinIO manages parity when drives go offline:

  1. Recalculation of Parity with Fewer Drives: When one or more drives go offline, MinIO dynamically recalculates how parity is distributed across the remaining drives. This recalculation doesn’t physically add new storage from any drive but rather redistributes parity to protect data using the available storage on the remaining drives.

  2. Using Available Storage on Remaining Drives: Instead of having a “dedicated parity drive,” MinIO distributes both data and parity blocks across all drives in the cluster. When a drive fails, the system reallocates data and parity blocks using available space on the remaining drives. The additional storage for this parity comes from the available space on each remaining drive.

  3. Rebalancing When Drives are Restored: If the failed drives are restored, MinIO will again rebalance data and parity distribution to include these drives, ensuring the cluster operates with its full capacity and redundancy level.

MinIO’s “parity upgrade” primarily affects NEW DATA being written after a drive goes offline. The existing (or "old") data that was already written before the drive failure retains its original erasure coding and parity distribution.

When one or more drives go offline:

  1. New Data Protection: Any new data written after the failure will have parity recalculated according to the remaining drives. This ensures that new writes maintain the same level of redundancy across the available storage.

  2. Old Data Vulnerability: Data that was already on disk before the drive went offline remains as it was initially protected. This data doesn’t receive additional parity protection. If it was already protected sufficiently by existing parity, it can still be recovered as long as the number of remaining drives satisfies the original redundancy level. However, the overall resilience for this old data might be lower if the system had only just enough redundancy before the drive failure.

  3. Restoration of Redundancy on Drive Reconnection: Once the offline drive(s) are restored, MinIO typically rebalances and reassigns parity to restore the initial resilience and protection level for both old and new data.

So “parity upgrade” maintains high availability for new writes, but it doesn’t retroactively adjust protection levels for existing data after a drive failure.

Parity-Upgrade only affects drives in the same erasure-coding set.
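
The following Go sketch is a simplified model of the behaviour described above, scoped to a single erasure-coding set; it is an illustration only, not MinIO's actual implementation, and the type names and figures are invented for the example:

package main

import "fmt"

// setLayout describes one erasure-coding set at a point in time.
type setLayout struct {
	TotalDrives  int // drives belonging to the set
	OnlineDrives int // drives currently reachable
	Parity       int // configured parity shards, e.g. EC:4
}

// newWriteShards returns the data/parity split used for a NEW object:
// shards are spread only over the online drives of this one set, so the
// data shard count shrinks while the parity count is retained. Objects
// written BEFORE the failure keep their original split.
func newWriteShards(s setLayout) (data, parity int, ok bool) {
	if s.OnlineDrives <= s.Parity {
		return 0, 0, false // too few drives left to keep the redundancy level
	}
	return s.OnlineDrives - s.Parity, s.Parity, true
}

func main() {
	// A 16-drive set with EC:4 and 2 drives offline (assumed figures).
	s := setLayout{TotalDrives: 16, OnlineDrives: 14, Parity: 4}

	// Old objects written before the failure stay at 12 data + 4 parity.
	fmt.Printf("old objects: %d data + %d parity\n", s.TotalDrives-s.Parity, s.Parity)

	// New objects are recalculated over the 14 online drives: 10 data + 4 parity.
	if data, parity, ok := newWriteShards(s); ok {
		fmt.Printf("new objects: %d data + %d parity\n", data, parity)
	}
}

With these assumed figures, old objects remain at 12 data + 4 parity shards, while new objects are written as 10 data + 4 parity across the 14 online drives of that set.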

Commercial Change

No More Binary

As of 2025-10-24, Minio no longer distributes binary versions (the latest binary still available actually has a security vulnerability, so it should NOT be used).

We have now built our own binary from the source with that vulnerability fixed:

Our Minio binaries for Linux on x86 and Armbian on arm64 are now available:

There are also binaries made by others in the community:

We plan to continue producing binaries from the official Minio source to track Minio releases:

No New Features

It also looks like Minio will no longer develop the open-source branch of their code further:

The overall project is only receiving bug fixes and CVE patches for now; it is not actively being developed for new features.

So we can forget about new features from Minio itself.

Removed Features

Even before the above problems with Minio itself, many Console features had been removed, and the community has forked more feature-rich versions, e.g.

4. Our Repository

We have replicated the following from GitHub

  1. GitHub - minio/minio: MinIO is a high-performance, S3 compatible object store, open sourced under GNU AGPLv3 license.
  2. GitHub - minio/mc: Unix like utilities for object store
  3. GitHub - minio/object-browser: Simple UI for MinIO Object Storage 🧮
  4. GitHub - minio/docs: MinIO Object Storage Documentation

to our own GitLab

pending further developments in the community.

Alternatives

RustFS

Same design as Minio, so it can easily be deployed as a replacement.

RustFS's design is VERY similar to Minio's, almost a port from Go to Rust.

Comparison:

AIStore

Different design from Minio, but it has a lot of Minio's features.

AIStore is quite different to Minio: