Ollama’s new app delivers a user-friendly way to interact with large language models on both macOS and Windows

Ollama’s new app, released on July 30, 2025, delivers a user-friendly way to interact with large language models on both macOS and Windows. Here’s an overview of the standout features and capabilities:

Core Features

  • Download and Chat with Models Locally: The app provides an intuitive interface to download, run, and chat with a wide range of AI models, including advanced options like Google DeepMind’s Gemma 3 (a Python sketch of the same flow follows this list).

  • Chat with Files: Users can easily drag and drop text or PDF files into the chat. The app processes the file’s contents, enabling meaningful conversations or question answering about the document. For large documents, you can increase the context length in the app’s settings, though higher values require more system memory (see the script sketch after this list).

  • Multimodal Support: Thanks to Ollama’s new multimodal engine, the app lets you send images to models that can process them, such as Google’s Gemma 3. This enables use cases like image analysis and visual question answering alongside typical text-based interactions (see the sketch after this list). Gemma 3 in particular boasts a context window of up to 128,000 tokens and can handle both text and images in over 140 languages.

  • Documentation Writing and Code Understanding: The app enables you to submit code files for analysis by the models, making it easier to generate documentation or understand complex code snippets. Developers can automate workflows such as summarizing codebases or generating documentation directly from source files (a script-level sketch follows this list).
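
On the scripting side, the same download-and-chat flow is available through the official `ollama` Python package. A minimal sketch, assuming the package is installed and the Ollama server is running locally (the model name `gemma3` is only an example; any model from the Ollama library works):

```python
import ollama

# Download an example model from the Ollama library (any listed model name works).
ollama.pull('gemma3')

# Ask the local model a question and print its reply.
reply = ollama.chat(
    model='gemma3',
    messages=[{'role': 'user', 'content': 'In one sentence, what can you help me with?'}],
)
print(reply['message']['content'])
```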

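As a rough sketch of the file-chat and code-documentation workflows from a script: read a local file (the path below is hypothetical), pass its contents to a model, and raise the context window with the standard `num_ctx` option when the file is large; higher values consume more memory.

```python
import ollama

# Hypothetical path to a source file you want documented.
with open('src/utils.py', encoding='utf-8') as f:
    source = f.read()

response = ollama.chat(
    model='gemma3',  # example model name; any local model works
    messages=[{
        'role': 'user',
        'content': 'Write short reference documentation for this module:\n\n' + source,
    }],
    # Raise the context window for larger files; higher values need more RAM.
    options={'num_ctx': 32768},
)
print(response['message']['content'])
```
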
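For the multimodal case, vision-capable models accept images alongside the text prompt. A minimal sketch with the Python client (the image path is a placeholder):

```python
import ollama

response = ollama.chat(
    model='gemma3',  # example vision-capable model
    messages=[{
        'role': 'user',
        'content': 'Describe what is in this photo.',
        'images': ['./photo.jpg'],  # placeholder path; raw image bytes also work
    }],
)
print(response['message']['content'])
```
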
Additional Improvements

  • Optimized for Desktop: The latest macOS and Windows versions feature improved performance, a reduced installation footprint, and a model directory that users can change to save disk space or use external storage.

  • Network Access & Automation: Ollama can be accessed over the network, allowing headless operation or connecting to the app from other devices. Through its Python library and CLI, users can integrate Ollama-powered AI features into their own workflows or automation scripts, as sketched below.
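
A minimal sketch of remote access, assuming the machine running Ollama has been exposed on the network (for example via the app’s network setting, or by setting the `OLLAMA_HOST` environment variable on that machine); the address below is a placeholder:

```python
from ollama import Client

# Placeholder address of the machine running Ollama; 11434 is the default port.
client = Client(host='http://192.168.1.50:11434')

response = client.chat(
    model='gemma3',  # example model installed on the remote machine
    messages=[{'role': 'user', 'content': 'Hello from another device on the network.'}],
)
print(response['message']['content'])
```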

In summary:

  • Drag-and-drop files: chat with text and PDF files; increase the context length for large documents
  • Multimodal support: send images to vision-capable models such as Gemma 3
  • Documentation writing: analyze code and generate documentation
  • Model downloads: choose from and run a large selection of LLMs locally
  • Network/API access: expose Ollama for remote or automated workflows
