Translate full-length books and documents with Ollama, OpenAI (compatible), Gemini, Mistral, Poe or OpenRouter. Preserves formatting. Resumes where you left off. No file size limits.
ComfyUI-IF_AI_tools is a set of custom nodes for ComfyUI that allows you to generate prompts using a local Large Language Model (LLM) via Ollama. This tool enables you to enhance your image generation workflow by leveraging the power of language models.
RAGLight is a modular framework for Retrieval-Augmented Generation (RAG). It makes it easy to plug in different LLMs, embeddings, and vector stores, and now includes seamless MCP integration to connect external tools and data sources.
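The retrieval half of a RAG pipeline like the one RAGLight wraps can be reduced to a few lines: embed the query, rank stored chunks by similarity, and keep the top hits. The sketch below is illustrative only (it is not RAGLight's API) and uses hand-made two-dimensional vectors in place of real embeddings.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, corpus, k=2):
    # Rank (vector, text) pairs by similarity to the query; keep the top k.
    ranked = sorted(corpus, key=lambda item: cosine(query_vec, item[0]), reverse=True)
    return [text for _, text in ranked[:k]]

# Toy corpus: each entry pairs a pre-computed "embedding" with its text.
corpus = [
    ([1.0, 0.0], "doc about cats"),
    ([0.0, 1.0], "doc about dogs"),
    ([0.9, 0.1], "another cat doc"),
]
print(retrieve([1.0, 0.0], corpus, k=2))  # → ['doc about cats', 'another cat doc']
```

A real framework swaps in a model-produced embedding function and a vector store, but the ranking step is the same idea.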
vMLX - Home of JANG_Q - Continuous batching, prefix caching, paged attention, KV cache quantization, vision-language support - Powers MLX Studio. Image gen/edit, OpenAI/Anth
A persistent local memory for AI, LLMs, or Copilot in VS Code.
Home Assistant LLM integration for local OpenAI-compatible services (llamacpp, vllm, etc)
MESH-API (previously MESH-AI) — Off-grid AI & API router with over 30 API extensions for Meshtastic & MeshCore. Seamlessly connect LM Studio, Ollama, AI providers, third-party APIs, and Home Assistant to your LoRa mesh. Supports custom commands, Twilio SMS, Discord channel routing, GPS emergency alerts via SMS, email, or Discord, and much more.
Python app for LM Studio-enhanced voice conversations with local LLMs. Uses Whisper for speech-to-text and offers a privacy-focused, accessible interface.
DocMind AI is a powerful, open-source Streamlit application leveraging LlamaIndex, LangGraph, and local Large Language Models (LLMs) via Ollama, LMStudio, llama.cpp, or vLLM for advanced document analysis. Analyze, summarize, and extract insights from a wide array of file formats, securely and privately, all offline.
Whisper STT + Orpheus TTS + Gemma 3 using LM Studio to create a virtual assistant.
RetroChat is a powerful command-line interface for interacting with various AI language models. It provides a seamless experience for engaging with different chat providers while offering robust features for managing and customizing your conversations. The code in this repo is 100% AI-generated; nothing has been written by a human.
Local-first meeting transcription and summarization CLI
How to run a local server on LM Studio
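Once a local server is running, LM Studio exposes an OpenAI-compatible chat-completions endpoint, by default on `localhost:1234`. A minimal stdlib-only sketch of talking to it is below; the model name `"local-model"` is a placeholder for whichever model you have loaded, and the request is only printed here, not sent.

```python
import json
import urllib.request

# LM Studio's usual default address for its local OpenAI-compatible server.
BASE_URL = "http://localhost:1234/v1/chat/completions"

def build_chat_request(model, prompt):
    # Assemble an OpenAI-style chat-completions payload.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

def send(payload):
    # POST the payload to the local server and return the parsed reply.
    # (Requires the LM Studio server to actually be running.)
    req = urllib.request.Request(
        BASE_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_chat_request("local-model", "Say hello in one word.")
print(json.dumps(payload, indent=2))
```

Because the wire format matches OpenAI's, most OpenAI client libraries also work by pointing their base URL at the local server.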
PolyCouncil is an open-source multi-model deliberation engine for LM Studio. It runs multiple LLMs in parallel, gathers their answers, scores each response using a shared rubric, and produces a final, consensus-driven result. Designed for testing, comparing, and orchestrating local models with ease.
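The deliberation loop PolyCouncil describes (collect answers from several models, score each against a shared rubric, keep the winner) can be sketched in a few lines. This is a toy illustration under assumed names, not PolyCouncil's actual code: the rubric is a list of (check, points) pairs and the canned answers stand in for real model output.

```python
def score(answer, rubric):
    # Sum the points for each rubric criterion the answer satisfies.
    return sum(points for check, points in rubric if check(answer))

def deliberate(answers, rubric):
    # Score every model's answer against the shared rubric and
    # return the name of the highest-scoring one as the consensus pick.
    return max(answers.items(), key=lambda kv: score(kv[1], rubric))[0]

rubric = [
    (lambda a: "42" in a, 2),    # contains the expected value
    (lambda a: len(a) < 80, 1),  # concise
]
answers = {
    "model-a": "The answer is 42.",
    "model-b": "I am not sure, possibly 41.",
}
print(deliberate(answers, rubric))  # → model-a (scores 3 vs 1)
```

In a real run the answers dict would be filled by parallel requests to each loaded model, and the rubric checks could themselves be judged by an LLM.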
An MCP stdio toolpack for local LLMs
Large language model persistent memory module that lives outside the model itself
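The core idea of memory that "lives outside the model" is just durable key/value storage the model reads from and writes to between sessions. A minimal sketch, assuming a JSON file as the backing store (class and method names are illustrative, not any particular project's API):

```python
import json
import os
import tempfile
from pathlib import Path

class PersistentMemory:
    # Model-external memory: facts stored in a JSON file that
    # survive between sessions and across model swaps.
    def __init__(self, path):
        self.path = Path(path)
        self.facts = json.loads(self.path.read_text()) if self.path.exists() else {}

    def remember(self, key, value):
        self.facts[key] = value
        self.path.write_text(json.dumps(self.facts))

    def recall(self, key, default=None):
        return self.facts.get(key, default)

path = os.path.join(tempfile.mkdtemp(), "memory.json")
PersistentMemory(path).remember("user_name", "Ada")
# A fresh instance reloads the fact from disk.
print(PersistentMemory(path).recall("user_name"))  # → Ada
```

Real implementations add embedding-based lookup and relevance ranking on top, but the persistence layer is this simple at heart.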