
Make your Unity characters hear, think, and talk using real voice AI. Locally. No cloud.

UnityNeuroSpeech is a lightweight, open-source framework for creating fully voice-interactive AI agents inside Unity.
It connects:
- Whisper (STT): converts your speech into text
- Ollama (LLM): generates smart responses
- XTTS (TTS): speaks back with a custom voice and emotions
All locally. All offline.
No subscriptions, no accounts, no OpenAI API keys.
## What can you build with UnityNeuroSpeech?
- AI characters that understand your voice and reply in real time
- NPCs with personality and memory
- Experiments in AI conversation and narrative design
- Voice-driven gameplay mechanics
- Interactive bots with humanlike voice responses
## Core Features

| Feature | Description |
|---|---|
| Voice Input | Uses whisper.unity for accurate speech-to-text |
| AI Brain (LLM) | Easily connects to any local model via Ollama |
| Custom TTS | Supports any voice with Coqui XTTS |
| Emotions | Emotion tags (`<happy>`, `<sad>`, etc.) are parsed automatically from the LLM response |
| Agent API | Subscribe to events like `BeforeTTS()` or access `AgentState` directly |
| Editor Tools | Create, manage, and customize agents inside the Unity Editor |
| No cloud | All models and voices run locally on your machine |
| Multilingual | Works with 15+ languages, including English, Russian, and Chinese |
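As a rough illustration of the Agent API row above, here is a minimal sketch of subscribing to an agent's events from a MonoBehaviour. Only `BeforeTTS` and `AgentState` are named in this README; the `VoiceAgent` type, the event signature, and the field names are hypothetical placeholders, not the framework's actual API.

```csharp
using UnityEngine;

// Illustrative sketch only: "BeforeTTS" and "AgentState" come from the
// feature table above; "VoiceAgent" and the handler signature are
// hypothetical placeholders.
public class AgentListener : MonoBehaviour
{
    // An agent created via the UnityNeuroSpeech editor tools (hypothetical type).
    [SerializeField] private VoiceAgent agent;

    private void OnEnable()
    {
        // Inspect the LLM reply before it is sent to XTTS,
        // e.g. to react to emotion tags like <happy> or <sad>.
        agent.BeforeTTS += OnBeforeTTS;
    }

    private void OnDisable()
    {
        agent.BeforeTTS -= OnBeforeTTS;
    }

    private void OnBeforeTTS(string reply)
    {
        // AgentState can also be read directly, per the Agent API feature.
        Debug.Log($"State: {agent.AgentState}, reply: {reply}");
    }
}
```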
## Built with

- Microsoft.Extensions.AI (Ollama)
- whisper.unity
- Python Flask server (for TTS)
- Coqui XTTS model
- Unity 6
## Get Started

- Quick Start
- Configure Settings
- Create Agent
- Agent API
- Build Your Game
- About Python Server
- FAQ
## Who made this?

UnityNeuroSpeech was created by HardCodeDev, an indie dev from Russia who just wanted to make AI talk in Unity.