# Model Formats
As applications of [ML models](models) grow, so does the need to optimise models for specific use-cases. To address performance-cost trade-offs and portability issues, a number of competing model formats have recently emerged.
```{table} Comparison of popular model formats
:name: model-format-table
Feature | [](ONNX) | [](GGML) | [](TensorRT)
--------|------|------|---------
Ease of Use | 🟢 [good](onnx-usage) | 🟡 [moderate](ggml-usage) | 🟡 [moderate](tensorrt-usage)
Integration with Deep Learning Frameworks | 🟢 [most](onnx-support) | 🟡 [growing](ggml-support) | 🟡 [growing](tensorrt-support)
Deployment Tools | 🟢 [yes](onnx-runtime) | 🔴 no | 🟢 [yes](triton-inference)
Interoperability | 🟢 [yes](onnx-interoperability) | 🔴 no | 🔴 [no](tensorrt-interoperability)
Inference Boost | 🟡 moderate | 🟢 good | 🟢 good
Quantisation Support | 🟡 [good](onnx-quantisation) | 🟢 [good](ggml-quantisation) | 🟡 [moderate](tensorrt-quantisation)
Custom Layer Support | 🟢 [yes](onnx-custom-layer) | 🔴 limited | 🟢 [yes](tensorrt-custom-layer)
Maintainer | [LF AI & Data Foundation](https://wiki.lfaidata.foundation) | https://github.com/ggerganov | https://github.com/NVIDIA
```
{{ table_feedback }}
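A concrete sense of how these formats differ can be had by peeking at their on-disk layout. As a minimal sketch (not an official parser), the snippet below reads the fixed header of a GGUF file, the successor container format to GGML used by `llama.cpp`: 4 magic bytes `GGUF`, then a little-endian `uint32` version, a `uint64` tensor count, and a `uint64` metadata key-value count. A synthetic header is built in memory for illustration, since no real model file is assumed.

```python
import struct

GGUF_MAGIC = b"GGUF"  # magic bytes per the GGUF spec (ggml/llama.cpp)

def read_gguf_header(data: bytes) -> tuple[int, int, int]:
    """Parse a GGUF header, returning (version, tensor_count, metadata_kv_count)."""
    if data[:4] != GGUF_MAGIC:
        raise ValueError("not a GGUF file")
    # little-endian: uint32 version, uint64 tensor count, uint64 metadata KV count
    version, n_tensors, n_kv = struct.unpack_from("<IQQ", data, 4)
    return version, n_tensors, n_kv

# Build a minimal synthetic header for illustration: version 3, no tensors, no metadata.
header = GGUF_MAGIC + struct.pack("<IQQ", 3, 0, 0)
print(read_gguf_header(header))
```

By contrast, ONNX files are Protocol Buffers messages (parsed via the `onnx` library rather than a fixed byte layout), which is part of why ONNX trades some of GGUF's simplicity for broader framework interoperability.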