This incremental update introduces Docker Model Runner support and builds on the AI functionality introduced in v0.10.0.
## Docker Model Runner Support
Docker Model Runner is now the default AI model runner backend, simplifying requirements. Ramalama remains available as an alternative.
```sh
# run a model (uses the docker runner by default)
colima model run gemma3

# serve a model; chat interface available at localhost:8080
colima model serve gemma3

# use the ramalama runner instead
colima model run gemma3 --runner ramalama
```
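As a quick check that a served model is reachable, the endpoint on localhost:8080 can be probed from the command line. This is a sketch only: the `/v1/chat/completions` path and JSON payload assume an OpenAI-compatible API, which is an assumption and not something these notes confirm — adjust to whatever `colima model serve` actually exposes.

```shell
# Hypothetical probe of the serve endpoint; the path assumes an
# OpenAI-compatible API (an assumption, not confirmed by these notes).
curl -s http://localhost:8080/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{"model": "gemma3", "messages": [{"role": "user", "content": "Hello"}]}'
```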
The runner can be configured via the `--runner` flag, the `--model-runner` flag, or in the config file.
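In the config file (typically opened with `colima start --edit`), the setting might look like the sketch below. The key name `modelRunner` is a guess rather than something taken from these notes — check the commented template in the generated config for the actual key.

```yaml
# Sketch only: the key name below is hypothetical; consult the
# config file's own comments for the real setting.
modelRunner: docker   # alternative: ramalama
```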
## Other Updates
- Respect for the `DOCKER_CONFIG` environment variable
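For context, `DOCKER_CONFIG` is the standard Docker client variable pointing at the configuration directory (registry credentials, CLI settings) instead of the default `~/.docker`. A minimal sketch of using it — the directory name is chosen purely for illustration:

```shell
# Point DOCKER_CONFIG at an alternate Docker client config directory
# (the ~/.docker-alt name is illustrative).
export DOCKER_CONFIG="$HOME/.docker-alt"
mkdir -p "$DOCKER_CONFIG"
printf '{"auths": {}}\n' > "$DOCKER_CONFIG/config.json"

# colima start   # colima would now pick up this directory
```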
## Upgrading
To upgrade to v0.10.1:
```sh
brew upgrade colima
```
For details on AI features introduced in this release cycle, see the v0.10.0 release notes.
For the full changelog, see the GitHub release.