Large Language Model

AI Everywhere

Having a standardised, AI-focused, disposable node enables rapid deployment and maintenance of AI infrastructure in the community.

LLM Parameters

Model Parameters

Core

  1. Parameters
    Each parameter is a numerical value, such as a weight or a bias, that is adjusted during training to minimize the error between the predicted output and the actual output.
  2. Layers
    Parameters are arranged into layers; the number of layers determines the depth of the LLM.
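
The two ideas above can be sketched with a toy dense network (all sizes here are made up for illustration): each layer owns a weight matrix and a bias vector, those are the trainable parameters, and the depth is simply how many layers are stacked.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_layer(n_in, n_out):
    # One layer's parameters: a weight matrix W and a bias vector b.
    return {"W": rng.normal(size=(n_in, n_out)), "b": np.zeros(n_out)}

def forward(layers, x):
    # Pass the input through each layer in turn (ReLU activation).
    for layer in layers:
        x = np.maximum(0, x @ layer["W"] + layer["b"])
    return x

# Depth = number of layers; parameter count = all weights + all biases.
layers = [make_layer(8, 16), make_layer(16, 16), make_layer(16, 4)]
n_params = sum(l["W"].size + l["b"].size for l in layers)
print(n_params)  # (8*16+16) + (16*16+16) + (16*4+4) = 484
```

Real LLMs follow the same accounting, just with billions of such values spread across attention and feed-forward layers.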

Inference

  1. Tokens
    Each model has a maximum number of tokens that can be generated in response to an input.
  2. Decoding
    How each model predicts the next output, e.g. picking the most likely word at each step (greedy) or the most likely overall sequence (beam search).
  3. Top-K
    Sampling is restricted to the K most likely next tokens; lower K values lead to more deterministic outputs.
  4. Top-P
    Sampling is restricted to the smallest set of tokens whose cumulative probability reaches P (nucleus sampling); lower P values lead to more deterministic outputs.
  5. Temperature
    Logits are divided by the temperature before sampling; lower temperatures lead to more deterministic outputs.
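
The interaction of temperature, Top-K, and Top-P can be sketched in a small sampler. The logits below are invented for illustration; the function shows the order the filters are typically applied, not any particular model's implementation.

```python
import numpy as np

def sample_next(logits, temperature=1.0, top_k=None, top_p=None, rng=None):
    rng = rng or np.random.default_rng()
    logits = np.asarray(logits, dtype=float) / temperature  # temperature scaling
    probs = np.exp(logits - logits.max())                   # stable softmax
    probs /= probs.sum()
    order = np.argsort(probs)[::-1]                         # most likely first
    if top_k is not None:
        order = order[:top_k]                               # keep K most likely
    if top_p is not None:
        cum = np.cumsum(probs[order])
        # Smallest prefix whose cumulative probability reaches P (nucleus).
        order = order[: int(np.searchsorted(cum, top_p)) + 1]
    p = probs[order] / probs[order].sum()                   # renormalise
    return int(rng.choice(order, p=p))

logits = [2.0, 1.0, 0.5, -1.0]
print(sample_next(logits, top_k=1))          # 0 — K=1 is fully deterministic
print(sample_next(logits, temperature=0.01)) # 0 — near-zero temperature is near-greedy
```

Note how each knob narrows the candidate pool in a different way: temperature reshapes the whole distribution, Top-K cuts it to a fixed size, and Top-P cuts it to a fixed probability mass.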