Ollama, the software for running LLMs locally, has released version 0.1.25. The headline change is Windows support, arriving for the first time in this release. The Windows build is still in preview, but it already supports both GPU and CPU acceleration and works with all models in the library. Once installed and running a model, Ollama listens on port 11434 for REST API requests. Earlier versions exposed only Ollama's own API; this release also adds OpenAI-compatible endpoints.
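As a minimal sketch of talking to that local REST API, the snippet below builds a request body for Ollama's `/api/generate` endpoint and posts it to port 11434. The model name `llama2` is just an example; substitute whatever model you have pulled.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # default port Ollama listens on


def build_generate_request(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    # stream=False asks for a single JSON response instead of a token stream.
    return {"model": model, "prompt": prompt, "stream": False}


if __name__ == "__main__":
    body = json.dumps(build_generate_request("llama2", "Why is the sky blue?")).encode()
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])
```

Running the script requires an Ollama instance with the named model already pulled; the payload helper itself works offline.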
TLDR: Ollama now runs LLMs on Windows (in preview), with GPU acceleration and support for all models in the library.