Assertion Error When Running llama3.2-vision Model in ollama
Description:
The ollama package fails with an assertion error when attempting to run the llama3.2-vision model. The error message is as follows:
Error: llama runner process has terminated: GGML_ASSERT(ggml_nelements(a) == ne0*ne1*ne2) failed
This appears to match upstream ollama issue https://github.com/ollama/ollama/issues/7590.
Additional info:
Package version(s):
ollama 0.4.7-1, ollama-cuda 0.4.7-1
cuda 12.6.3-1
nvidia-open-dkms 565.57.01-2
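For reference, the versions above can be confirmed with:
pacman -Q ollama ollama-cuda cuda nvidia-open-dkms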
Steps to reproduce:
- Install the ollama and ollama-cuda packages and start the ollama service.
- Run the following command to load the llama3.2-vision model:
ollama run llama3.2-vision
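A fuller end-to-end sketch of the reproduction, assuming the server is run through the packaged ollama.service systemd unit:
sudo pacman -S ollama ollama-cuda           # versions as listed above
sudo systemctl enable --now ollama.service  # start the ollama server
ollama run llama3.2-vision                  # pulls the model, then fails with the GGML_ASSERT error above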