Get the latest version of Local AI service
Learn how to get the latest version of Local AI service
For your convenience, Msty bundles the latest version of the Local AI service (Ollama) available at the time of the app release.

However, if you want to get the latest version of the Local AI service, first try going to Settings > Local AI > Service Version and clicking Check for updates. If a new version is available, it will be downloaded.

If you are unable to download the latest version of the Local AI service this way, follow the steps below to manually download and install it.
On macOS
- Go to the releases page of the Ollama repository: https://github.com/ollama/ollama/releases
- Under Assets, download the latest version of Ollama for macOS by downloading `ollama-darwin` (NOT `ollama-darwin.zip`).
- Once downloaded, copy `ollama-darwin` to `~/Library/Application Support/Msty` and rename it to `msty-local`.
- Open Terminal and run the command shown after this list to make the file executable.
- Restart Msty and verify the version of Local AI service by going to Settings > Local AI Service > Service Version.
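The exact command isn't shown above; here is a minimal example, assuming you copied and renamed the binary to the default Msty data folder described in the steps:

```bash
# Mark the renamed Ollama binary as executable
# (path assumes the default Msty data folder on macOS)
chmod +x "$HOME/Library/Application Support/Msty/msty-local"
```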
On Windows
- Go to the releases page of the Ollama repository: https://github.com/ollama/ollama/releases
- Under Assets, download the latest version of Ollama for Windows by downloading `ollama-windows-amd64.zip`.
- Once downloaded, extract the contents of the zip file, copy `ollama-windows.exe` to `C:\Users\<username>\AppData\Roaming\Msty`, and rename it to `msty-local.exe`.
- Copy the `lib` folder to `C:\Users\<username>\AppData\Roaming\Msty`.
- If you are on an NVIDIA GPU, you can optionally remove the `rocm` folder.
- If you are on an AMD GPU, you can optionally remove the `cuda` folder.
- Restart Msty and verify the version of Local AI service by going to Settings > Local AI Service > Service Version.
Note: There is a small annoyance with the official build of Ollama where, during chatting, it opens a blank Terminal window. You can ignore it or wait for the next release of Msty. We have sent a PR to the Ollama team and are waiting for it to be merged. Please upvote this PR to get it prioritized: https://github.com/ollama/ollama/pull/4287
On Linux
For CUDA users:
- Go to the releases page of the Ollama repository: https://github.com/ollama/ollama/releases
- Under Assets, download the latest version of Ollama for Linux: `ollama-linux-amd64.tgz`.
- Once downloaded, extract the archive and copy `lib` to `~/.config/Msty/`.
- Copy `bin/ollama` to `~/.config/Msty/` and rename it to `msty-local`.
- Open Terminal and run the command shown after this list to make the file executable.
- Restart Msty and verify the version of Local AI service by going to Settings > Local AI Service > Service Version.
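The referenced command isn't included above; a minimal example, assuming the binary was copied to `~/.config/Msty/` as described:

```bash
# Mark the renamed Ollama binary as executable
chmod +x ~/.config/Msty/msty-local
```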
For ROCm (AMD GPU) users:
- Go to the releases page of the Ollama repository: https://github.com/ollama/ollama/releases
- Under Assets, download the ROCm build of Ollama for Linux: `ollama-linux-amd64-rocm.tgz`.
- Once downloaded, extract the archive and copy `lib` to `~/.config/Msty/`.
- Copy `bin/ollama` to `~/.config/Msty/` and rename it to `msty-local`.
- Open Terminal and run the command shown after this list to make the file executable.
- Restart Msty and verify the version of Local AI service by going to Settings > Local AI Service > Service Version.
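As with the CUDA steps, the command isn't shown above; a minimal example, assuming the default `~/.config/Msty/` location (the optional `--version` check assumes the standard Ollama CLI flag):

```bash
# Mark the renamed Ollama binary as executable
chmod +x ~/.config/Msty/msty-local

# Optional sanity check: the renamed binary should still report its Ollama version
~/.config/Msty/msty-local --version
```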