This release introduces AI support and other updates.
## AI is here

Colima now has AI support.

By leveraging Krunkit and Ramalama, Colima provides an ideal platform for running confined, isolated, and secure GPU-powered AI workloads on Apple Silicon devices.
```sh
# run a model
colima model run gemma3

# serve a model; the chat interface will be available at localhost:8080
colima model serve gemma3

# for more
colima model --help
```
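A served model can also be queried programmatically. The sketch below assumes the server started by `colima model serve` exposes an OpenAI-compatible chat completions endpoint on port 8080, as Ramalama-based servers typically do; the exact endpoint path is an assumption, not confirmed by these notes.

```sh
# Hypothetical sketch: query the served model over an assumed
# OpenAI-compatible API on localhost:8080.
curl -s http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "gemma3",
        "messages": [{"role": "user", "content": "Hello"}]
      }'
```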
## Other Updates
- Addition of `krunkit` virtual machine type with GPU support: `colima start --vm-type krunkit`
- Incus instances are now reachable directly from the host if network address is enabled: `colima start --network-address`
- Containerd runtime (with `nerdctl` command) now inherits `CONTAINERD_*` and `NERDCTL_*` environment variables on the host
- Port forwarding can now be disabled by passing `--port-forwarder=none` to `colima start`
- Volume mounts can now be disabled by passing `--mount=none` to `colima start`
- Download mechanism has been reworked in native Go, eliminating the dependency on `curl` and `shasum` on the host
- New `after-boot` and `ready` provision modes for provision scripts
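The new provision modes might be used in the instance configuration (edited via `colima start --edit`) roughly as sketched below. The script bodies and the descriptions of when each mode runs are illustrative assumptions, not taken from these notes:

```yaml
# Illustrative sketch of provision scripts using the new modes
# (script bodies are placeholders).
provision:
  - mode: system        # existing mode: runs as root during boot
    script: |
      echo "system provisioning"
  - mode: after-boot    # new: presumably runs after the VM has booted
    script: |
      echo "after boot"
  - mode: ready         # new: presumably runs once the runtime is ready
    script: |
      echo "runtime ready"
```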
## Runtime Updates
Note: container runtime versions can be updated manually by running the `colima update` command.
- Docker version updated to v29.2.0
- Nerdctl version updated to v2.2.1
- Incus version updated to v6.21
- K3s version defaults to v1.35.0+k3s1
## Upgrading
To upgrade to v0.10.0:

```sh
brew upgrade colima
```
For the latest updates in this release cycle, see the v0.10.1 release notes.
For the full changelog, see the GitHub release.