AI Trust

AI Teaming

Your Own AI

Your Own Chat

The cheapest chat model on Owner Compute is qwen3.5:4b-instruct; you can compare it with chatgpt4.2

qwen3.5:4b-instruct

chatgpt5.3

The only AI you can trust is your own AI.

  • by "AI" we mean neural-network-based artificial intelligence technology.

Owner Compute gives you control of your AI by
combining multiple models into a team that works together and generates better output than any single model.

Default AI Cluster

The default models run in Ollama. You can run many more models with more features using vLLM, but it is more complex to use.

Supported AI Voices

Multilingual

Each speaker supports a single language. The first letter of a voice name indicates the language, the second its gender (f = female, m = male):
a = American
b = British
f = French
e = Spanish
h = Hindi
i = Italian
p = Portuguese
j = Japanese
z = Mandarin

  "name": "af_alloy"
  "name": "af_aoede"
  "name": "af_bella"
  "name": "af_heart"
  "name": "af_jadzia"
  "name": "af_jessica"
  "name": "af_kore"
  "name": "af_nicole"
  "name": "af_nova"
  "name": "af_river"
  "name": "af_sarah"
  "name": "af_sky"
  "name": "af_v0"
  "name": "af_v0bella"
  "name": "af_v0irulan"
  "name": "af_v0nicole"
  "name": "af_v0sarah"
  "name": "af_v0sky"
  "name": "am_adam"
  "name": "am_echo"
  "name": "am_eric"
  "name": "am_fenrir"
  "name": "am_liam"
  "name": "am_michael"
  "name": "am_onyx"
  "name": "am_puck"
  "name": "am_santa"
  "name": "am_v0adam"
  "name": "am_v0gurney"
  "name": "am_v0michael"
  "name": "bf_alice"
  "name": "bf_emma"
  "name": "bf_lily"
  "name": "bf_v0emma"
  "name": "bf_v0isabella"
  "name": "bm_daniel"
  "name": "bm_fable"
  "name": "bm_george"
  "name": "bm_lewis"
  "name": "bm_v0george"
  "name": "bm_v0lewis"
  "name": "ef_dora"
  "name": "em_alex"
  "name": "em_santa"
  "name": "ff_siwis"
  "name": "hf_alpha"
  "name": "hf_beta"
  "name": "hm_omega"
  "name": "hm_psi"
  "name": "if_sara"
  "name": "im_nicola"
  "name": "jf_alpha"
  "name": "jf_gongitsune"
  "name": "jf_nezumi"
  "name": "jf_tebukuro"
  "name": "jm_kumo"
  "name": "pf_dora"
  "name": "pm_alex"
  "name": "pm_santa"
  "name": "zf_xiaobei"
  "name": "zf_xiaoni"
  "name": "zf_xiaoxiao"
  "name": "zf_xiaoyi"
  "name": "zm_yunjian"
  "name": "zm_yunxi"
  "name": "zm_yunxia"
  "name": "zm_yunyang"

Mandarin 普通话

Mixed Mandarin and English

  "name": "af_maple"
  "name": "af_sol"
  "name": "bf_vale"
  "name": "zf_001"
  "name": "zf_002"
  "name": "zf_003"
  "name": "zf_004"
  "name": "zf_005"
  "name": "zf_006"
  "name": "zf_007"
  "name": "zf_008"
  "name": "zf_017"
  "name": "zf_018"
  "name": "zf_019"
  "name": "zf_021"
  "name": "zf_022"
  "name": "zf_023"
  "name": "zf_024"
  "name": "zf_026"
  "name": "zf_027"
  "name": "zf_028"
  "name": "zf_032"
  "name": "zf_036"
  "name": "zf_038"
  "name": "zf_039"
  "name": "zf_040"
  "name": "zf_042"
  "name": "zf_043"
  "name": "zf_044"
  "name": "zf_046"
  "name": "zf_047"
  "name": "zf_048"
  "name": "zf_049"
  "name": "zf_051"
  "name": "zf_059"
  "name": "zf_060"
  "name": "zf_067"
  "name": "zf_070"
  "name": "zf_071"
  "name": "zf_072"
  "name": "zf_073"
  "name": "zf_074"
  "name": "zf_075"
  "name": "zf_076"
  "name": "zf_077"
  "name": "zf_078"
  "name": "zf_079"
  "name": "zf_083"
  "name": "zf_084"
  "name": "zf_085"
  "name": "zf_086"
  "name": "zf_087"
  "name": "zf_088"
  "name": "zf_090"
  "name": "zf_092"
  "name": "zf_093"
  "name": "zf_094"
  "name": "zf_099"
  "name": "zm_009"
  "name": "zm_010"
  "name": "zm_011"
  "name": "zm_012"
  "name": "zm_013"
  "name": "zm_014"
  "name": "zm_015"
  "name": "zm_016"
  "name": "zm_020"
  "name": "zm_025"
  "name": "zm_029"
  "name": "zm_030"
  "name": "zm_031"
  "name": "zm_033"
  "name": "zm_034"
  "name": "zm_035"
  "name": "zm_037"
  "name": "zm_041"
  "name": "zm_045"
  "name": "zm_050"
  "name": "zm_052"
  "name": "zm_053"
  "name": "zm_054"
  "name": "zm_055"
  "name": "zm_056"
  "name": "zm_057"
  "name": "zm_058"
  "name": "zm_061"
  "name": "zm_062"
  "name": "zm_063"
  "name": "zm_064"
  "name": "zm_065"
  "name": "zm_066"
  "name": "zm_068"
  "name": "zm_069"
  "name": "zm_080"
  "name": "zm_081"
  "name": "zm_082"
  "name": "zm_089"
  "name": "zm_091"
  "name": "zm_095"
  "name": "zm_096"
  "name": "zm_097"
  "name": "zm_098"
  "name": "zm_100"

Cantonese 广东话

Mixed Cantonese and English

  "name": "zf_hg"
  "name": "zf_hm"
  "name": "zm_wl"

Primary Models

3 small vision models from China, France, and the United States working together.

qwen3-vl:4b-instruct-q4_K_M

ministral-3:3b-instruct-2512-q4_K_M

gemma3:4b-it-qat

Any number of models can be added to the three primary models above; the models listed below are just some recommended ones that are normally found in most clusters worldwide.

Vision Models

The first model listed below is recommended.

qwen3-vl:32b-instruct-q4_K_M

llama3.2-vision:11b-instruct-q4_K_M

Thinking Models

The first one listed below is preferred.

qwen3-vl:32b-thinking-q8_0

deepseek-r1:70b-llama-distill-q4_K_M

Self-Attention

Although neural networks have been around for a long time, it is mainly the development of the "self-attention" mechanism that lets models understand relationships within massive amounts of data by mimicking human focus: instead of reading a document word by word sequentially, the model evaluates the relevance of every part of the input to every other part simultaneously.

Self-Attention enables modern AI to understand context by creating dynamic, weighted connections (relationships) between data points rather than treating them all equally.

1. Mechanism

Attention breaks down every input token (word) into three distinct vector representations, plus one scalar, acting like a database lookup system:

  • Query (Q): Your Search Term (e.g. climate change)
    This represents the current word or data point that is "asking" for context.
    A specific piece of data is "looking for" something (e.g., a verb looking for its subject). What am I looking for? What is this word trying to understand?

  • Key (K): Book Names (e.g. the names of all books available)
    This is the "index" or "label" for all other words in the sequence.
    A label representing what information each data point "contains". What do I offer? What information does this word have?

  • Dimension (d): Scaling Factor (e.g. dividing scores by √d so they stay in a stable range)
    This is the "dimensionality" of the Keys and Queries.
    It is a single number (a scalar) that tells the model how "long" the vectors are.
    For example, Query and Key vectors can each contain 64 numbers, so d = 64.

  • Value (V): Open the books and extract the actual text (the "Value") based on the attention percentages.
    This is the actual information content of those words.
    The actual content or meaning that gets passed along if a match is found. What is my actual content? What information gets passed on?

1.1. Scoring

The mechanism performs similarity matching by calculating a score: taking the dot product of the QUERY of one word with the KEYS of all other words (and dividing by √d to keep the scores in a stable range). If a query matches a key, it gets a high score, telling the model that these two words have a strong relationship.
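A minimal sketch of this scoring step, using made-up toy vectors: the key that resembles the query gets the higher score.

```python
import numpy as np

d_k = 4
Q = np.array([[1.0, 0.0, 1.0, 0.0]])        # query vector for one word
K = np.array([[1.0, 0.0, 1.0, 0.0],         # key similar to the query
              [0.0, 1.0, 0.0, 1.0]])        # key unrelated to the query

# Dot product of the query with every key, scaled down by sqrt(d).
scores = (Q @ K.T) / np.sqrt(d_k)
print(scores)   # the matching key scores 1.0, the unrelated key 0.0
```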

1.2. Weighting

After calculating the relevance scores, the scores are converted into WEIGHTS (probabilities that sum to 1) using a softmax function. This tells the model exactly what percentage of attention to pay to every other word; that is, the model assigns "weights" to words.

If the model is processing the word "it," the attention mechanism might give a high weight (0.9) to "cat" and a low weight (0.01) to "the," allowing it to know that "it" = "cat".
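The softmax conversion can be sketched as follows. The raw scores are invented for illustration; what matters is that the outputs are positive, sum to 1, and favour the highest-scoring word.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())   # subtract the max for numerical stability
    return e / e.sum()

# Raw relevance scores for the word "it" against three other words
# (illustrative numbers only).
scores = np.array([4.0, -0.5, 0.1])   # "cat", "the", "sat"
weights = softmax(scores)

print(weights)         # probabilities, heavily favouring "cat"
print(weights.sum())   # 1.0
```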

1.3. Aggregating

The model multiplies these weights by the VALUE of each word. This creates a weighted sum that updates the original data with its new, context-aware meaning.

After we multiply our WEIGHT by the VALUES:

  1. If a word has a high weight (e.g., 0.9), most of its information is passed through.
  2. If a word has a low weight (e.g., 0.01), its information is mostly filtered out.

The output is not just the original word, but a weighted sum of the values of all other relevant words. This creates a context-aware embedding—a rich, numerical representation of the word that includes all its relationships to the surrounding text.
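A toy sketch of this filtering, reusing the high/low weights from the "it" = "cat" example above with made-up 3-dimensional value vectors:

```python
import numpy as np

# Attention weights for the word "it" over ("cat", "the", "mat").
weights = np.array([0.9, 0.01, 0.09])

# Value vectors: each row is one word's content (toy values).
V = np.array([[1.0, 0.0, 0.0],    # "cat"
              [0.0, 1.0, 0.0],    # "the"
              [0.0, 0.0, 1.0]])   # "mat"

# Context-aware output: a weighted sum of all value vectors.
context = weights @ V
print(context)   # [0.9, 0.01, 0.09] -- dominated by "cat"
```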

Note that this output is a one-dimensional context VECTOR for each word, but because we process all words at once, comparing each word with every other word simultaneously, we always end up with a two-dimensional attention MATRIX for the whole sentence.

So in everyday operation involving sentences, we are almost always dealing with matrices (e.g. weight matrix, value matrix etc.).
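Putting the three steps together, the whole mechanism in matrix form fits in one small function. The inputs here are random toy matrices with no learned parameters; a real model would produce Q, K, and V from trained projections.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention for a whole sentence at once."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)          # (seq, seq) score matrix
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = e / e.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V, weights              # context vectors + attention matrix

rng = np.random.default_rng(1)
seq_len, d_k = 5, 4
Q = rng.normal(size=(seq_len, d_k))
K = rng.normal(size=(seq_len, d_k))
V = rng.normal(size=(seq_len, d_k))

out, attn = attention(Q, K, V)
print(out.shape)    # (5, 4): one context vector per word
print(attn.shape)   # (5, 5): the attention matrix for the sentence
```

Each row of the attention matrix sums to 1, matching the weighting step described above.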

2. Advantages

2.1. Parallel Processing

Unlike older AI models (RNNs/LSTMs) that processed data sequentially (step by step) and "forgot" early information, attention computes relationships between all words in a sentence at once. This allows it to:

  • Capture Long-Range Dependencies: A word at the beginning of a paragraph can be directly linked to a word at the end, regardless of distance.
  • Scale Efficiently: Because it can process everything in parallel, it can handle massive datasets, which is why it is used in models like GPT.

That is, it creates a direct "shortcut" between any two points in the data, meaning distance no longer hinders its understanding of complex relationships.

2.2. Multi-Head Attention

The model doesn't just do this once. It uses Multi-Head Attention (Multiple Perspectives), which means it runs the attention process multiple times in parallel (each a "head").

  • Each head learns to focus on different types of relationships.
  • For example, in the sentence "The cat sat on the mat because it was soft," one head might focus on syntax (what is the verb?), while another focuses on semantics (what do the nouns mean?), and a third focuses on coreference (what does "it" refer to?).
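Multi-head attention can be sketched by running the single-head function several times with independent projections and concatenating the results. The projections here are random (untrained) stand-ins; a real model would also pass the concatenated heads through a final learned output projection.

```python
import numpy as np

def softmax_rows(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def multi_head_attention(X, num_heads, d_k, rng):
    """Run several independent attention 'heads' and concatenate them."""
    heads = []
    for _ in range(num_heads):
        # Each head gets its own Q/K/V projections, so it can learn
        # to track a different kind of relationship.
        W_q = rng.normal(size=(X.shape[1], d_k))
        W_k = rng.normal(size=(X.shape[1], d_k))
        W_v = rng.normal(size=(X.shape[1], d_k))
        Q, K, V = X @ W_q, X @ W_k, X @ W_v
        weights = softmax_rows(Q @ K.T / np.sqrt(d_k))
        heads.append(weights @ V)
    return np.concatenate(heads, axis=-1)

rng = np.random.default_rng(2)
X = rng.normal(size=(6, 8))   # 6 tokens, 8-dimensional embeddings
out = multi_head_attention(X, num_heads=2, d_k=4, rng=rng)
print(out.shape)              # (6, 8): 2 heads x 4 dimensions each
```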