Ollama's new app
278 comments
July 30, 2025
mchiang
I'm one of the maintainers of Ollama. I don't see it as a pivot. We are all developers ourselves, and we use it.
In fact, there were many self-made prototypes before this from different individuals. We were hooked, so we built it for ourselves.
Ollama is made for developers, and our focus is on continually improving Ollama's capabilities.
flux293m
Congratulations on launching the front-end, but I don't see how it can be made for developers and not have a Linux version.
fkyoureadthedoc
I've never used a single Linux GUI app in my 15 years of developing software. No company I've worked for has even given out Linux laptops.
weberer
It's very strange, but they do have a Linux client that they don't mention in their blog post. I have no idea if this is a simple slip-up or if it was intentional for some reason.
smarx007
Whoah, are you telling me that there are devs on Linux who use anything else than a tiled WM? CLI or GTFO /s
ozim
I just updated, and it was a bit annoying that gemma3:4b was selected by default, a model I don't have locally. I guess it would be nicer to default to one of the models that are present.
It was nice that it started downloading it, but there was also no indication beforehand that I didn't have that model, until I opened the drop-down and saw the download buttons.
But of course, nice job, guys.
mchiang
Thanks for the kind words. Sorry about that, we are working out some of the initial experience for Ollama.
btreecat
Congratulations on the next release.
I really like using ollama as a backend to OpenWebUI.
I don't have any Windows machines and I don't work primarily on macOS, but I understand that's where all the paying developers are, in theory.
Did y'all consider a partnership with one of the existing UIs and bundling that, similar to the DuckDB approach?
rpastuszak
I'm just curious because I don't use Ollama and have some spare VRAM: how do you use it and what models do you use?
pmarreck
do you know why Ollama hasn't updated its models in over a month while many fantastic models have been released in that time, most recently GLM 4.5? It's forcing me to use LM Studio, which for whatever reason I absolutely do not prefer.
thank you guys for all your work on it, regardless
_boffin_
You know that if you go to Hugging Face and find a GGUF page, you can click on Deploy and select Ollama. It comes with "run" but whatever, just change it to "pull". Has a jacked name, but works.
Also, if you search Ollama's models, you'll see user-uploaded ones that you can download too.
coder543
GLM 4.5 has a new/modified architecture. From what I understand, MLX was really one of the only frameworks that had support for it as of yesterday. LM Studio supports MLX as one backend. Everyone else was/is still developing support for it.
Ollama has the new 235B and 30B Qwen3 models from this week, so it’s not as if they have done nothing for a month.
mchiang
We work closely with the majority of research labs / model creators directly. Most of the time we will support models on release day. There are times when the release windows for major models are fairly close together - and we just have to elect to support the models we believe will better serve the majority of users.
Nothing out of spite, and purely limited by the amount of effort required to support these models.
We are hopeful too -- users can technically add models to Ollama directly, although there is definitely some learning curve.
fouc
just so you know, you can grab any GGUF from Hugging Face and specify the quant like this:
ollama pull hf.co/bartowski/nvidia_OpenCodeReasoning-Nemotron-7B-GGUF:IQ4_XS
WithinReason
qwen3 was updated less than a day ago: https://ollama.com/library/qwen3
nileshtrivedi
Question since you are here, how long before tool-calling is enabled for Gemma3 models?
whs
Seems that Google intends it to be that way - https://ai.google.dev/gemma/docs/capabilities/function-calli... . I suppose they are saying that the model is good enough that if you put the tool-call format in the prompt, it should be able to handle any format.
I use PetrosStav/gemma3-tools and it seems that it only works half of the time - the rest of the time the model calls the tool but the call doesn't get properly parsed by Ollama.
_boffin_
You can do "tool calling" via Gemma3. The issue is that it all needs to be stuck in the user prompt, as there's no system prompt.
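A rough sketch of what that looks like in practice (the tool description and JSON reply format here are made up for illustration, not any official spec):

  curl http://localhost:11434/api/chat -d '{
    "model": "gemma3:4b",
    "messages": [{
      "role": "user",
      "content": "You can call one tool: get_weather(city). If the question needs it, reply ONLY with JSON like {\"tool\": \"get_weather\", \"city\": \"...\"}. Otherwise answer normally.\n\nQuestion: What is the weather in Lisbon?"
    }]
  }'

You then parse the model's reply yourself, which is exactly the step that tends to fail half the time.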
LudwigNagasena
Are there any plans to improve the observability toolset for developers? There is a myriad of AI chat apps, and there is no clear reason why another one from Ollama would be better. But Ollama is uniquely positioned to provide the best observability experience to its users because it owns the whole server stack; any other observability tool (e.g. Langfuse) can only treat it as yet another API black box.
balloob
Does the new app make it easier for users to expose the Ollama daemon on the network (and mDNS discovery)? It's still trickier than needed for Home Assistant users to get started with Ollama (which tends to run on a different machine).
mchiang
In the app settings now, there is a toggle for "Expose Ollama to the network" - it allows for other devices or services on the network to access Ollama.
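For CLI installs, the equivalent is the documented OLLAMA_HOST setting:

  # listen on all interfaces instead of just localhost
  OLLAMA_HOST=0.0.0.0:11434 ollama serve

Other devices on the network (Home Assistant included) can then reach it at http://<machine-ip>:11434.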
jasonvorhe
There's a simple toggle for just that.
whitehexagon
This caught me out yesterday. I was trying to move models onto an external disk, and it seems to require re-installation? But there was no sign of the simple CLI option that was previously presented, and I gave up.
As a developer feature request, it would be great if Ollama could support more than one location at once, so that it is possible to keep a couple of models 'live' but have the option to plug in an external disk with extra models being picked up auto-magically based on the OLLAMA_MODELS path, please. Or maybe the server could present a simple HTML interface next to the API endpoint?
And just to say thanks for making these models easily accessible. I am agAInst AI generally, but it is nice to be able to have a play with these models locally. I haven't found one that covers Zig, but I appreciate the steady stream of new models to try. Thanks.
mark_l_watson
I just symbolically link the default model directory to a fast and cheap external drive. I agree that it would be nice to support multiple model directories.
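Something like this, assuming the default model path of ~/.ollama/models on macOS/Linux:

  # move the models to the external drive, then link the old path to it
  mv ~/.ollama/models /Volumes/External/ollama-models
  ln -s /Volumes/External/ollama-models ~/.ollama/models

Setting the OLLAMA_MODELS environment variable is the other documented way to point Ollama at a different directory.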
prophesi
I think having a bash script as the Linux installation method is more of a stop-gap measure than truly supporting Linux. And Ollama is FOSS, unlike LM Studio and Msty (this from someone who switched from Ollama to LM Studio; I'm very happy to see the frontend development of Ollama and an easier method of increasing a model's context length).
rubymamis
> One missing feature that the ChatGPT desktop app has and I think is a good idea for these local LLM apps is a shortcut to open a new chat anytime (Alt + Space), with a reduced UI. It is great for quick questions.
This is exactly what I've implemented for my Qt C++ app: https://www.get-vox.com
pzo
This is actually positive even for devs. The more users who have Ollama installed, the easier it is to release a desktop AI app for them without having to bundle additional models in your own app. It's easier to offer such users a free or cheaper subscription because you don't have the additional costs. The latest Qwen 30B models are really powerful.
It would be even better if there were an installer template that checks whether Ollama is installed and, if not, downloads it as a sub-installation, first checking the user's computer specs for enough RAM and a fast enough CPU/GPU. Also an API to prompt the user (asking for permission) to install a specific model if it hasn't been installed.
nocommandline
> It would be even better if there were an installer template that checks whether Ollama is installed and, if not, downloads it as a sub-installation..... Also an API to prompt the user (asking for permission) to install a specific model if it hasn't been installed.
That's actually what we've done for our own app [1]. It checks whether Ollama and other dependencies are installed. No model is bundled with it. We prompt the user to install a model (you pick a model, click a button, and we download it; similar if you wish to remove a model). The aim is to make it quite simple for non-technical folks to use.
AnonC
I'd heard of Msty and briefly tried it before. I checked the website again and it looks quite feature-rich. I hadn't known about LM Studio, and I see that it allows commercial use for free (which Msty doesn't).
How would you compare and contrast between the two? My main use would be to use it as a tool with a chat interface rather than developing applications that talk to models.
Agentlien
I use Msty all the time and I love it. It just works and it's got all the features I want now, including generating alternate responses, swapping models mid-chat, editing both sent messages and responses, ...
I also tried LM Studio a few months back. The interface felt overly complex and I got weird error messages which made it look like I'd have to manually fix errors in the underlying python environment. Would have been fine if it was for work, but I just wanted to play around with LLMs in my spare time so I couldn't be bothered.
hn8726
LM Studio just changed their terms to allow commercial usage for free, without any restrictions or additional actions required.
I've used Msty, but it seems like LM Studio is moving faster, which is kind of important in this space. For example, Msty still doesn't support MCP.
mlukaszek
Have you seen https://pygpt.net/ ? Overloaded interface and a little unfortunate name aside, this seems to be the best one I've tried.
xdennis
> little unfortunate name aside
What's wrong with the name? Are you referring to the GPT trademark? That was rejected.
mlukaszek
I may be too picky, and on reflection I probably shouldn't be - it was just my first thought when I saw what the project actually is for the first time.
What I meant is that the "Py" prefix is typically used for Python APIs/libraries, or Python bindings to libraries in other languages, and sometimes as a prefix for dev-tool names like PyInstaller or PyEnv. It's less often used for standalone apps just to indicate the project was developed in Python.
halifaxbeard
This makes me really wonder about the relationship between Open WebUI & Ollama
underlines
Heads up, there’s a fair bit of pushback (justified or not) on r/LocalLLaMA about Ollama’s tactics:
Vendor lock-in: AFAIK it now uses a proprietary llama.cpp fork and built its own registry on ollama.com in a kind of Docker way (I've heard Docker people are actually behind Ollama), and it's a bit difficult to reuse the model binaries with other inference engines due to their use of hashed filenames on disk, etc.
Closed-source tweaks: many llama.cpp improvements haven't been upstreamed or credited, raising GPL concerns. They have since switched to their own inference backend.
Mixed performance: the same models often run slower or give worse outputs than plain llama.cpp. A tradeoff for convenience - I know.
Opaque model naming: rebrands or filters community models without transparency. The biggest fail was calling the smaller DeepSeek-R1 distills just "DeepSeek-R1", adding to massive confusion on social media and from "AI content creators" claiming that you can run THE DeepSeek-R1 on any potato.
Difficult-to-change context window default: using Ollama as a backend, it is difficult to change the default context window size on the fly, leading to hallucinations and endless loops in the output, especially for agents / thinking models (see the sketch just below).
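For what it's worth, the usual workarounds, assuming Ollama's documented num_ctx option and OLLAMA_CONTEXT_LENGTH variable (model name and size are placeholders):

  # per request, via the API:
  curl http://localhost:11434/api/generate -d '{
    "model": "llama3.1",
    "prompt": "...",
    "options": {"num_ctx": 16384}
  }'
  # or as a server-wide default on newer releases:
  OLLAMA_CONTEXT_LENGTH=16384 ollama serve

But many frontends that treat Ollama as a black box never set this, which is where the silent truncation comes from.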
If you want better (in some cases more open) alternatives:
llama.cpp: Battle-tested C++ engine with minimal deps and faster with many optimizations
ik_llama.cpp: High-perf fork, even faster than default llama.cpp
llama-swap: YAML-driven model swapping for your endpoint.
LM Studio: GUI for any GGUF model - no proprietary formats, with all the llama.cpp optimizations available in a GUI
Open WebUI: Front-end that plugs into llama.cpp, ollama, MPT, etc.
therealpygon
"Justified or not" is certainly a useful caveat when lending the same credence to a few people who complain loudly with mostly inauthentic complaints.
> Vendor lock-in
That is probably the most ridiculous of the statements. Ollama is open source, llama.cpp is open source, and llamafiles are zip files that contain quantized versions of models openly available to be run with numerous other providers. Their llama.cpp changes are primarily for performance and compatibility. Yes, they run a registry on ollama.com for pre-packed, pre-quantized versions of models that are, again, openly available.
> Closed-source tweaks
Oh, so many things wrong in such a short sentence. llama.cpp is MIT-licensed, not GPL-licensed. A proprietary fork is perfectly legitimate use. Also... "proprietary"? The source code is literally available, including the patches, on GitHub in the ollama/ollama project, in the "llama" folder, with a patch file as recent as yesterday.
> Mixed Performance
Yes, almost anything suffers degraded performance when the goal is usability instead of performance. It is why people use C# instead of Assembly or punch cards. Performance isn’t the only metric, which makes this a useless point.
> Opaque model name
Sure, their official models have some ambiguities sometimes. I don't know that it is the "problem" people make it out to be, given that Ollama is designed for average people to run models, so having "ollama run qwen3" resolve to the option most people can run, rather than the absolute maximum best option possible, makes sense. Do you really think it is advantageous or user-friendly, when Tommy wants to try out "DeepSeek-R1" on his potato laptop, for a 671B-parameter model too large to fit on almost any consumer computer to be the right choice, and that the default is instead meant as "deception"? That seems... disingenuous. Not to mention, they are clearly listed as such on ollama.com, where in black and white it says that deepseek-r1 by default refers to the Qwen distill, and that the full model is available as deepseek-r1:671b.
> Context Window
Probably the only fair and legitimate criticism of your entire comment.
I'm not an Ollama defender or champion, I couldn't care less about the company, and I barely use Ollama (mostly just to run qwen3-8b for embedding). It really is just that most of these complaints you're sharing from others seem to have TikTok-level fact-checking.
J_Shelby_J
And llama.cpp has a decent GUI out of the box.
coder543
I am somewhat surprised that this app doesn't seem to offer any way to connect to a remote Ollama instance. The most powerful computer I own isn't necessarily the one I'm running the GUI on.
jart
This. This. A thousand times this. I hate Windows / MacOS but love their desktops. I love Linux / BSD but hate their desktops. So my most expensive most powerful workstation is always a headless Linux machine that I ssh into from a Windows or MacOS toy computer. Unfortunately most developers do not understand this. Every time I run a command in the terminal and it tries to open a browser tab without printing the URL, it makes me want to scream and shout and retire from tech forever to be a plumber.
tux1968
You can replace the xdg-open command (or whichever command is used on your Linux system) with your own. Just program it to fire the URL over to a waiting socket on your Windows box, and have it automatically open there. The details are pretty easy to work out, and the result will be seamless.
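A rough sketch of such a replacement, with the hostname and port as placeholders:

  #!/bin/sh
  # drop-in for xdg-open: ship the URL to a listener on the desktop machine
  echo "$1" | nc desktop.local 9999

Anything on the Windows side that reads a line from that socket and hands it to the local browser completes the loop.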
mbreese
I usually do this with a port forward (IP or Unix socket) over SSH. This way my server just sends data to ~/.tunnel/socket, and my SSH connection handles getting it to my client.
(It's a bit more complicated - starting a listening server on my laptop, making sure the forwarded socket file doesn't already exist, etc. - but this is the basic idea.)
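Roughly this, with the socket path and port as placeholders (OpenSSH's StreamLocalBindUnlink option on the server side takes care of the stale socket file):

  # from the laptop: listen on a Unix socket on the server,
  # forwarding anything sent to it back to localhost:9999 here
  ssh -R /home/me/.tunnel/socket:localhost:9999 me@server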
amelius
I can recommend spending a day finding and configuring a window manager that suits your needs.
VladCuciureanu
Or just display the URL in the terminal. I spent 5 years of my life ricing my Linux machine to get it the way I wanted, only to realise that, at least for my needs and likes, nothing matches macOS's DE, compositor, and font rendering.
Not to bash Linux desktop users, just my experience.
graemep
This was my thought. Given the huge range of desktop environments and window managers available there has to be one that suits you.
Probably one that suits you pretty much out of the box.
Eisenstein
I doubt that she spent the time to create a cross platform C compiler library but didn't bother trying out a few Linux desktops.
hyperbolablabla
Hey Justine! Thank you for all your fantastic work
ethan_smith
You can work around this by using SSH port forwarding (ssh -L 11434:localhost:11434 user@remote) to connect to a remote Ollama instance, though native support would definitely be better.
stavros
Wait, it already connects over the network, it just doesn't let you specify the hostname? That's really surprising to me.
mchiang
This is a feature we are looking into supporting. Thank you for reaffirming the need.
stavros
But it seems like the GUI already connects over the network, no? In that case, why do you need to do user research for adding what is basically a command line option, at its simplest? It would probably take less time to add that than to write the comment.
silentguy
They will have to support auth if they add support for connecting to a remote host. It's not difficult, but it's not as trivial as you suggest.
ddlsmurf
I use ollama via the llm plugin: https://github.com/taketwo/llm-ollama?tab=readme-ov-file#oll...
jay_kyburz
The app does have a function to expose Ollama to the network, so perhaps it's coming?
hodgehog11
It's definitely coming, there is no way they would leave such an important feature on the table. My guess is they are waiting so they can announce connections to their own servers.
accrual
I gave the Ollama UI a try on Windows after using the CLI service for a while.
- I like the simplicity. This would be perfect for setting up a non-technical friend or family member with a local LLM with just a couple clicks
- Multimodal and Markdown support works as expected
- The model dropdown shows both your local models and other popular models available in the registry
I could see using this over Open WebUI for basic use cases where one doesn't need to dial in the prompt or advanced parameters. Maybe those will be exposed later. But for now - I feel the simplicity is a strength.
accrual
Small update: thinking models also work well. I like that it shows the thinking stream in a fainter style while it generates, then hides it to show the final output when it's ready. The thinking output is still available with a click.
Another commenter mentioned not being able to point the new UI to a remote Ollama instance - I agree, that would be super handy for running the UI on a slow machine but inferring on something more powerful.
kroaton
If you like simple, try out Jan as well. https://github.com/menloresearch/jan
IceWreck
Why not Linux? The UI looks to be some kind of Chrome-based thingy - probably Electron - so it should be easy to port to Linux.
Also is there a link to the source?
johncolanduoni
For all of Electron's promise in being cross-platform, "I'll just press this button and ship this Electron app on Linux and everything will be fine" is not the current state of things. A lot of it is papercuts like glibc version aggravation, but GPU support is persistently problematic.
zettabomb
The Element app on Linux is currently broken (if you want to use encryption, so basically for everyone) due to an issue with Electron. Luckily it still works in a regular browser. I'm really baffled by how that can happen.
nguyenkien
> Download Ollama’s new app today on macOS and Windows.
> For pure CLI versions of Ollama, standalone downloads are available on Ollama’s GitHub releases page.
Sounds like closed source. Plus, as I checked, the app seems to be a Tauri app, as it uses the system webview instead of Chromium.
nicce
Electron… I wonder how this can be marketed as native then.
andsoitis
Nowhere on the page does it state "native". The person who submitted the story introduced "native".
nguyenkien
Also, this is not an Electron app. It does use the system webview, though.
pjmlp
At least that is already an improvement, away from ChromeOS Development Platform.
vismit2000
I believe power users and developers can already use this from the CLI on Linux. This new app for Windows and macOS shows it is intended for regular users.
stavros
Not releasing anything for Linux because regular users don't use it is a great way to never have regular users on Linux.
ceroxylon
I am guessing that the Linux version was first (or the announcement was worded strangely), as it is available on their download page:
DarkmSparks
That's just the CLI versions.
This app has a GUI.
ceroxylon
Ah, I missed that detail, thank you for clarifying.
apitman
I've been on something of a quest to find a really good chat interface for LLMs.
The most important feature for me is that I want to be able to chat with local models, remote models on my other machines, and cloud models (OpenAI API compatible). Anything that makes it easier to switch between models or query them simultaneously is important.
Here's what I've learned so far:
* Msty - my current favorite. Can do true simultaneous requests to multiple models. Nice aesthetic. Sadly not open source. Have had some freezing issues on Linux.
* Jan.ai - Can't make requests to multiple models simultaneously
* LM Studio - Not open source. Doesn't support remote/cloud models (maybe there's a plugin?)
* GPT4All - Was getting weird JSON errors with openrouter models. Have to explicitly switch between models, even if you're trying to use them from different chats.
Still to try: Librechat, Open WebUI, AnythingLLM, koboldcpp.
Would love to hear any other suggestions.
ojosilva
I've been in the same quest for a while. Here's my list, not a recommendation or endorsement list, just a list of alternative clients I've considered, tried or am still evaluating:
- chatbox - https://github.com/chatboxai/chatbox - free and OSS, with a paid tier, supports MCP and local/remote, has a local KB, works well so far and looks promising.
- macai - https://github.com/Renset/macai simple client for remote APIs, does not support image pasting or MCP or anything really, very limited, crashes.
- typingmind.com - web, with a downloadable (if paid) version. Not OSS, but a one-time payment from an indie dev. One of the first alt chat clients I ever tried; not using it anymore. Somewhat clunky GUI, but OK. Supports MCP, though I haven't tried it.
- Open WebUI - deployed it for our team so that we could chat through many APIs. Works well for a multi-user web deployment, but image generation hasn't been working. I don't like it as a personal client though; buggy sometimes, but it gets frequent fixes, fortunately.
- jan.ai - it comes with popular models pre-populated, which makes it harder to plug into custom or local model servers. But it supports local model deployment within the app (like what Ollama is announcing), which is good for people who don't want to deal with starting a server. I haven't played with it enough, but I personally prefer to deploy a local server (i.e. Ollama, LiteLLM...) and then just have the chat GUI app give me flexible endpoint configuration for adding custom models.
I'm also wary of evil actors deploying chat GUIs just to farm your API keys. You should be too. Use disposable api keys, watch usage, refresh with new keys once in a while after trying clients.
tpae
I've been building this: https://dinoki.ai/
Works fully locally, privacy-first, and it's a native app (Swift for macOS, WPF for Windows)
gcr
do you have any screenshots? The home page shows a picture of a tamagotchi but none of the actual chat interface, which makes me wonder if I'm outside the target audience.
jonahbenton
OpenWebUI is what you are looking for from a usability perspective. It supports chatting with many models.
doctoboggan
Last I tried OpenWebUI (A few months ago), it was pretty painful to connect non-OpenAI externally hosted models. There was a workaround that involved installing a 3rd party "function" (or was it a "pipeline"?), but it didn't feel smooth.
Is this easier now? Specifically, I would like to easily connect anthropic models just by plugging in my API key.
jerieljan
The trick to this is to run a LiteLLM proxy that has all the connections to whatever you need to connect to and then point Open-WebUI to that.
I've been using this setup for several months now (over a year?) and it's very effective.
The proxy also benefits pretty much any other application you have that recognizes an OpenAI-compatible API. (Or even if it doesn't)
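A minimal sketch of that setup, assuming LiteLLM's documented model_list config format (model name and key are placeholders):

  # litellm-config.yaml
  model_list:
    - model_name: claude-sonnet
      litellm_params:
        model: anthropic/claude-3-7-sonnet-latest
        api_key: os.environ/ANTHROPIC_API_KEY

  # run the proxy, then point Open WebUI at http://localhost:4000/v1
  litellm --config litellm-config.yaml --port 4000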
jwrallie
I tried LibreChat and OpenWebUI, between the two I would recommend OpenWebUI.
It feels a bit less polished but has more functions that run locally and things work better out of the box.
My favorite thing is that I can just type my own questions / requests in markdown so I can get formatting and syntax highlighting.
Eisenstein
OpenWebUI refuses to support MCP and uses an MCP to OpenAPI proxy which often doesn't work. If you don't like or need MCP, then it is a good choice.
pmarreck
> refuses to support MCP
Why is that? It seems the way to go to add tooling to any LLM that is tool-capable.
wkat4242
Another +1 for OpenWebUI. Development is also going really fast <3
teekert
I like WebUI, but it's weird and complicated how you have to set up the different models (via text files in the browser; the instructions contain a lot of confusing terms). LibreChat is nice, but I can't get it to not log me out every 5 minutes, which makes it unusable. I've been told it keeps you logged in when using HTTPS, but I use Tailscale, so that is difficult (when running multiple services on a single host).
khimaros
CherryStudio is a power tool for this use case https://github.com/CherryHQ/cherry-studio -- it has MCP, search, personas, and reasoning support too. I use it heavily with llama.cpp + llama-swap.
Eisenstein
Have fun on their Issues page if you don't read and write Chinese. Documentation pages are written in Chinese as well.
fnordlord
I've been using AnythingLLM for a couple of months now and really like it. You can organize different "Workspaces", which are models + specific prompts, and it supports Ollama along with the major LLM providers. I have it running in a Docker container on a Raspberry Pi, and then I use Tailscale to make it accessible anywhere. It looks good on mobile too, so it's pretty seamless. I use that and Raycast's Claude extension for random questions, and that pretty much does everything I want.
egonschiele
Build your own! It's a great way to learn, keeps you interested in the latest developments. Plus you get to try out cool UX experiments and see what works. I built my own interface back in 2023 and have been slowly adding to it since. I added local models via MLX last month. I'm surprised more devs aren't rolling their own interface, they are easy to make and you learn a lot.
peterb
gptel in Emacs does this. You can run the same prompt against different models in separate Emacs windows (local or via API with keys) at the same time to compare outputs. I highly recommend it. https://github.com/karthink/gptel
hodgehog11
Not surprising; Ollama is set on becoming the standard interface for companies to deploy "open" models. The focus on "local" is incidental, and likely not long term. I'm sure Ollama is going to announce a plan to use "open" models through their own cloud-based API using this app.
grumbelbart2
> The focus on "local" is incidental
Strongly disagree with this. It is the default go-to for companies that cannot use cloud-based services for IP or regulatory reasons (think of defense contractors). Isn't that the main reason to use "open" models, which are still weaker than closed ones?
theshrike79
We are specifically using Ollama, because our stuff CANNOT leave the company internal net.
Any whiff of a cloud service and the lawyers will freak out.
That's why we run models via Ollama on our laptops (M-series is crazy powerful) and a few servers on the intranet for more oomph.
LM Studio changed their license to allow commercial use without "call me" pricing, so we might look into that more too.
diggan
> Ollama is set on becoming the standard interface for companies to deploy "open" models.
That's not what I've been seeing, but obviously my perspective (as anyone's) is limited. What I'm seeing is deployments of vLLM, SGLang, llama.cpp or even HuggingFace's Transformers with their own wrapper, at least for inference with open weight models. Somehow, the only place where I come across recommendations for running Ollama was on HN and before on r/LocalLlama but not even there as of late. The people who used to run Ollama for local inference (+ OpenWebUI) now seem to mostly be running LM Studio, myself included too.
mark_l_watson
I have been happy using Ollama via the command line and via the API, but I am sold on their new UI for coding. I was just using the newly updated qwen3:30b model for coding, and I like the <copy> button in the top right corner of generated code listings - a simple thing, but useful.
thorum
If you’re a power user of these LLMs and have coding experience, I actually recommend just whipping together your own bespoke chat UI that you can customize however you like. Grab any OpenAI compatible endpoint for inference and a frontend component framework (many of which have added standard Chat components) - the rest is almost trivial. I threw one together in a week with Gemini’s assistance and now I use it every day. Is it production ready? Hell no but it works exactly how I want it to and whenever I find myself saying “I wish it could do XYZ…” I just add it.
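The core of it really is one HTTP call to an OpenAI-compatible endpoint; everything else is UI. Roughly (endpoint and model are whatever your setup uses):

  curl http://localhost:11434/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "qwen3:30b", "messages": [{"role": "user", "content": "Hello"}]}'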
bapak
This is the most "just build your own Linux" comment I read this year.
Just download some tool and be productive within seconds, I'd say.
ellg
Kind of odd to be so dismissive of this mindset given this website's title. Whipping up your own chat UI really is not that hard and is a pretty fun exercise. Knowing how your tools work and being able to tweak them to your specific use cases kinda rules!
Petersipoi
There is a big difference between fun exercise and actually creating something that competes with the apps you can download. Building something on par with Claude Desktop, ChatGPT Desktop, etc. would be a lot of work. And I don't think the payoff would be there for most people.
smahs
I only did it once, some 15 years back (a happy memory), using LFS. It took about a week to get to a functional system with basic necessities. A code-finetuned model can write a functional chat UI with all common features and a decent UX in under a minute.
vinhnx
I have been exploring AI and LLMs. I built my own AI chat bot using Python [1], and then another [2] with the AI SDK from Vercel and OpenAI-compatible API endpoints. And eventually built a product around it.
1. VT.ai https://github.com/vinhnx/VT.ai Python
2. VT Chat https://vtchat.io.vn: my own product
n_kr
Yes I do that too. The important bit is the model. Rest is almost trivial. I had posted a Show HN here about the script I've been using which is open source now ( https://github.com/n-k/tinycoder ) ( https://news.ycombinator.com/item?id=44674856 ).
With a bit of help from ChatGPT etc., it was trivial to make, and I use it everyday now. I may add DDG and github search to it soon too.
dwaaa
This is not a coder; it helps with typing instructions. Coding is different - for example, "look at my repository and tell me how to refactor it", "write a new function", etc. In my opinion you must change the name.
FergusArgyll
Yeah, I have one which lets me read a pdf and chat side by side, one which is integrated into my rss feed, one with insanely aggressive memory features (experimental) etc etc :)
theshrike79
Or you could use: https://github.com/open-webui/open-webui
Either directly or use it as a base for your own bespoke experience.
pmarreck
[flagged]
tomhow
> Tell me you're not in charge of young kids without telling me you're not in charge of young kids
Please avoid internet tropes on HN.
Arubis
> I don't know if parenting hits the "developer-tinkerer class" harder than others, but damn.
I sort of suspect so? Devs of parenting age trend towards being neurospicy, and dev work requires sustained attention with huge penalties for interruptions.
n_kr
I have a 1yo too, and I could do it. I used the other tools to make one which I liked.
steve_adams_86
> Tell me you're not in charge of young kids
Yeah, my wife would murder me as our kids yelled at me for various things
syspec
I've been using Open WebUI and have been blown away, it's a better ChatGPT interface than ChatGPT!
https://github.com/open-webui/open-webui
Curious how this compares to that, which has a ton of features and runs great
mindcrime
Likewise. I use Ollama as the API server and CLI interface for local models, and use OpenWebUI when I want a web interface (which TBH, isn't that often) and it's a fine combination. Honestly, the idea of Ollama adding their own chat interface UI never even occurred to me. It feels a little bit... unnecessary?
Still, choices are good, so props to the Ollama team!
apitman
Is the Open WebUI license still OSI-compatible? I saw some drama about this on reddit but I'm not sure about the current state.
wkat4242
I don't really care about that as a user. Maybe for FOSS purists it's important, but copyright is a thing I as a techy care nothing about. I can get it for free and I can see all the source code. I'm not going to build a fork, so the rest doesn't matter.
benatkin
It's a phony BSD license, with an attempt to pass it off as the real thing with some verbiage. It's neither within the letter nor the spirit of the real BSD license.
BeefySwain
No it's not
siggalucci
That's what I came to say. I made a tool for my Mac where I can highlight any text, then hit a hotkey to use that text in a query to an LLM.
Nice because it works on any text. Browser, IDE, email etc.
brabel
Isn’t that exactly how Firefox does it?
rihegher
"Ollama’s new app is now available for macOS and Windows" linux sounds out for now
amelius
Shouldn't the LLM be able to code the Linux version?
SchemaLoad
Never ask an AI hype bro why, after years of coding agents and 10x productivity, software is just as shit as it always has been.
dwaaa
no, this is a scam
permalac
Vibe coding?
1dom
I don't understand this move. A frontend desktop application is the opposite of what I and anyone else I know uses Ollama for. It's a local LLM backend. It's been around long enough now that any long term users have found, created and/or adjusted to their own front end interface.
I'm comfy, but some of the cutting-edge local LLMs have been a little slow to become available recently; maybe this frontend focus is why.
I will now go and look at other options like Ollama that have either been fully UI-integrated since the start, or that are committed to just being a headless backend. If any of them seem better, I'll consider switching; I probably should have done this sooner.
I hope this isn't the first step in Ollama dropping the local CLI focus, offering a subscription and becoming a generic LLM interface like so many of these tools seem to converge on.
mchiang
A rightful worry, and we had the same doubts before we embarked on this. Ollama serves developers; there is no doubt about that. The CLI isn't getting dropped. In fact, what we've learned in building it is that having an interface interacting with Ollama is a great way for us to dogfood Ollama while building it.
There are so many choices for having an interface, and as a developer you should have a choice in selecting the UI you want. It will all continue to work with Ollama. Nothing about that changes.
1dom
Thanks for the response, appreciated. It confirms my feelings though: there are already so many choices for an interface, so why are you - a team of people who built an LLM backend - now spending your time doing front-end work under the same backend product name?
This is sending a very loud message that your focus is drifting away from why I use your product. If it was drifting away into something new and original that supplements my usage of your product, I could see the value, but like you said: there's already so many choices of good interface. Now you're going to have to play catchup against people whose first choice and genuine passion is LLM frontend UIs.
Sorry! I will still use ollama, and thank you so much for all the time and effort put in. I probably wouldn't have had a fraction of the local LLM fun I've had if it wasn't for ollama, even if my main usage is through openwebui. Ultimately, my personal preference is software that does 1 thing and does it well. Others prefer the opposite: tightly integrated all-bells-and-whistles, and I'm sure those people will appreciate this more than me - do what works for you, it's worked so far:)
vorticalbox
> some of the cutting edge local LLMs have been a little bit slow to be available recently
You can pull models directly from Hugging Face:
ollama pull hf.co/google/gemma-3-27b-it
1dom
I know, I often do that, but it's still not enough. E.g. things like SmolLM3, which required some llama.cpp tweaks, wouldn't work via GGUF for the first week after it had been released.
Just checked: https://github.com/ollama/ollama/issues/11340 is still an open issue.
pentagrama
Looks like a big pivot in target audience from developers to regular users, at least on the homepage https://ollama.com/ as a product. Before, it was all about the CLI versions of Ollama for devs; now it's not even mentioned. At the bottom of the blog post it says:
> For pure CLI versions of Ollama, standalone downloads are available on Ollama’s GitHub releases page.
Nothing against that, just an observation.
Previously I tested several local LLM apps, and the 2 best ones to me were LM Studio [1] and Msty [2]. Will check this one out for sure.
One missing feature that the ChatGPT desktop app has and I think is a good idea for these local LLM apps is a shortcut to open a new chat anytime (Alt + Space), with a reduced UI. It is great for quick questions.
[1] https://lmstudio.ai/
[2] https://msty.app/