
Make your Unity characters hear, think, and talk using real voice AI. Locally. No cloud.


UnityNeuroSpeech is a lightweight and open-source framework for creating fully voice-interactive AI agents inside Unity.
It connects:

  • 🧠 Whisper (STT) – converts your speech into text
  • 💬 Ollama (LLM) – generates smart responses
  • 🗣️ XTTS (TTS) – speaks back with custom voice + emotions

All locally. All offline.
No subscriptions, no accounts, no OpenAI API keys.
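
To picture how these pieces fit together, here is a minimal, hypothetical sketch of one conversational turn. The interface and class names (ISpeechToText, IChatModel, ITextToSpeech, VoiceAgentLoop) are placeholders for illustration, not the actual UnityNeuroSpeech API:

```csharp
using System.Threading.Tasks;
using UnityEngine;

// Hypothetical sketch of one "hear -> think -> talk" turn.
// These interface/class names are placeholders, NOT the actual UnityNeuroSpeech API.
public interface ISpeechToText { Task<string> TranscribeAsync(AudioClip clip); }   // e.g. a whisper.unity wrapper
public interface IChatModel    { Task<string> ChatAsync(string userText); }        // e.g. a local Ollama client
public interface ITextToSpeech { Task<AudioClip> SynthesizeAsync(string text); }   // e.g. an XTTS client

public class VoiceAgentLoop : MonoBehaviour
{
    public AudioSource output;   // plays the synthesized reply
    ISpeechToText stt;
    IChatModel llm;
    ITextToSpeech tts;

    // Runs one full conversational turn entirely on the local machine.
    public async Task RunTurnAsync(AudioClip microphoneClip)
    {
        string userText  = await stt.TranscribeAsync(microphoneClip); // Whisper: speech -> text
        string replyText = await llm.ChatAsync(userText);             // Ollama: text -> response
        AudioClip voice  = await tts.SynthesizeAsync(replyText);      // XTTS: response -> audio
        output.clip = voice;
        output.Play();
    }
}
```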


🚀 What can you build with UnityNeuroSpeech?

  • 🎮 AI characters that understand your voice and reply in real time
  • 🗿 NPCs with personality and memory
  • 🧪 Experiments in AI conversation and narrative design
  • 🕹️ Voice-driven gameplay mechanics
  • 🤖 Interactive bots with humanlike voice responses

✨ Core Features

  • 🎙️ Voice Input – Uses whisper.unity for accurate speech-to-text
  • 🧠 AI Brain (LLM) – Easily connect to any local model via Ollama
  • 🗣️ Custom TTS – Supports any voice with Coqui XTTS
  • 😄 Emotions – Emotion tags (<happy>, <sad>, etc.) are parsed automatically from the LLM reply
  • 🎛️ Agent API – Subscribe to events like BeforeTTS() or access AgentState directly (see the sketch below)
  • 🛠️ Editor Tools – Create, manage, and customize agents inside the Unity Editor
  • 🧱 No cloud – All models and voices run locally on your machine
  • 🌍 Multilingual – Works with 15+ languages, including English, Russian, and Chinese

🧪 Built with: whisper.unity, Ollama, and Coqui XTTS.


📚 Get Started


😎 Who made this?

UnityNeuroSpeech was created by HardCodeDev,
an indie dev from Russia who just wanted to make AI talk in Unity.


โญ Star it on GitHub

👉 github.com/HardCodeDev777/UnityNeuroSpeech