A complete starter project for building voice AI apps with LiveKit Agents for Python and LiveKit Cloud.
The starter project includes:
- A simple transcription agent that converts speech to text in real-time
- Transcription pipeline based on Gladia for speech-to-text with translation capabilities
- Easy integration with your preferred STT provider instead
- Eval suite based on the LiveKit Agents testing & evaluation framework
- LiveKit Turn Detector for contextually-aware speaker detection, with multilingual support
- Background voice cancellation
- Integrated metrics and logging
- A Dockerfile ready for production deployment
This starter app is compatible with any custom web/mobile frontend or SIP-based telephony.
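To orient yourself before diving into `src/agent.py`, here is a minimal sketch of how these pieces typically fit together in LiveKit Agents for Python. It is illustrative rather than this repo's actual code: the `gladia.STT()` settings, the agent instructions, and the exact session wiring are assumptions based on the standard plugin APIs.

```python
from dotenv import load_dotenv

from livekit import agents
from livekit.agents import Agent, AgentSession, RoomInputOptions
from livekit.plugins import gladia, noise_cancellation, silero
from livekit.plugins.turn_detector.multilingual import MultilingualModel

load_dotenv(".env.local")


async def entrypoint(ctx: agents.JobContext):
    # STT-centric pipeline: Gladia transcribes (and can translate) speech,
    # while Silero VAD and the LiveKit turn detector segment the audio.
    session = AgentSession(
        stt=gladia.STT(),  # swap in your preferred STT plugin here
        vad=silero.VAD.load(),
        turn_detection=MultilingualModel(),
    )
    await session.start(
        room=ctx.room,
        agent=Agent(instructions="Transcribe the conversation."),
        room_input_options=RoomInputOptions(
            # Background voice cancellation (a LiveKit Cloud feature)
            noise_cancellation=noise_cancellation.BVC(),
        ),
    )


if __name__ == "__main__":
    agents.cli.run_app(agents.WorkerOptions(entrypoint_fnc=entrypoint))
```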
Clone the repository and install dependencies into a virtual environment:

```bash
cd agent-starter-python
uv sync
```

Set up the environment by creating a `.env.local` file and filling in the required values:
```bash
# LiveKit configuration
LIVEKIT_URL=wss://your-livekit-url.com
LIVEKIT_API_KEY=your-livekit-api-key
LIVEKIT_API_SECRET=your-livekit-api-secret

# Gladia configuration (for speech-to-text transcription)
GLADIA_API_KEY=your-gladia-api-key
```

Required environment variables:
- `LIVEKIT_URL`
- `LIVEKIT_API_KEY`
- `LIVEKIT_API_SECRET`
- `GLADIA_API_KEY`: Get a key for speech-to-text transcription with translation
You can load the LiveKit environment automatically using the LiveKit CLI:
```bash
lk cloud auth
lk app env -w -d .env.local
```

Before your first run, you must download certain models, such as Silero VAD and the LiveKit turn detector:

```bash
uv run python src/agent.py download-files
```
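The downloaded files are then loaded when the worker boots. A common pattern, shown here as a sketch rather than this repo's exact code, is to load Silero VAD once per process via a `prewarm` function so each job starts quickly:

```python
from livekit import agents
from livekit.plugins import silero


def prewarm(proc: agents.JobProcess):
    # Load the downloaded Silero VAD weights once per worker process;
    # entrypoints can then reuse them via ctx.proc.userdata["vad"]
    proc.userdata["vad"] = silero.VAD.load()


# Wire it in alongside your entrypoint (entrypoint defined as usual):
# agents.cli.run_app(agents.WorkerOptions(entrypoint_fnc=entrypoint, prewarm_fnc=prewarm))
```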
Next, run this command to speak to your agent directly in your terminal:

```bash
uv run python src/agent.py console
```
To run the agent for use with a frontend or telephony, use the `dev` command:

```bash
uv run python src/agent.py dev
```
In production, use the `start` command:

```bash
uv run python src/agent.py start
```

Get started quickly with our pre-built frontend starter apps, or add telephony support:
| Platform | Link | Description |
|---|---|---|
| Web | livekit-examples/agent-starter-react | Web voice AI assistant with React & Next.js |
| iOS/macOS | livekit-examples/agent-starter-swift | Native iOS, macOS, and visionOS voice AI assistant |
| Flutter | livekit-examples/agent-starter-flutter | Cross-platform voice AI assistant app |
| React Native | livekit-examples/voice-assistant-react-native | Native mobile app with React Native & Expo |
| Android | livekit-examples/agent-starter-android | Native Android app with Kotlin & Jetpack Compose |
| Web Embed | livekit-examples/agent-starter-embed | Voice AI widget for any website |
| Telephony | 📚 Documentation | Add inbound or outbound calling to your agent |
For advanced customization, see the complete frontend guide.
This project includes a complete suite of evals, based on the LiveKit Agents testing & evaluation framework. To run them, use `pytest`:

```bash
uv run pytest
```
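Each eval is an ordinary `pytest` test that drives the agent through an `AgentSession`. The sketch below follows the shape of the framework's documented examples; the agent, judge model, and intent string are placeholders, not this repo's real tests:

```python
import pytest  # requires pytest-asyncio

from livekit.agents import Agent, AgentSession
from livekit.plugins import openai


@pytest.mark.asyncio
async def test_greeting() -> None:
    # Hypothetical agent under test; real evals target this repo's agent class
    agent = Agent(instructions="You are a friendly assistant.")
    async with (
        openai.LLM(model="gpt-4o-mini") as llm,
        AgentSession(llm=llm) as session,
    ):
        await session.start(agent)
        result = await session.run(user_input="Hello!")

        # Use an LLM judge instead of brittle exact-match assertions
        await result.expect.next_event().is_message(role="assistant").judge(
            llm, intent="Greets the user."
        )
        result.expect.no_more_events()
```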
Once you've started your own project based on this repo, you should:

- **Check in your `uv.lock`**: This file is currently untracked in the template, but you should commit it to your repository for reproducible builds and proper configuration management. (The same applies to `livekit.toml`, if you run your agents in LiveKit Cloud.)
- **Remove the git tracking test**: Delete the "Check files not tracked in git" step from `.github/workflows/tests.yml`, since you'll now want these files to be tracked. These checks are just there for development purposes in the template repo itself.
- **Add your own repository secrets**: You must add secrets for `LIVEKIT_URL`, `LIVEKIT_API_KEY`, and `LIVEKIT_API_SECRET` so that the tests can run in CI.
This project is production-ready and includes a working Dockerfile. To deploy it to LiveKit Cloud or another environment, see the deploying to production guide.
You can also self-host LiveKit instead of using LiveKit Cloud. See the self-hosting guide for more information. If you choose to self-host, you'll also need to use model plugins instead of LiveKit Inference, and you'll need to remove the LiveKit Cloud noise cancellation plugin.
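As a rough sketch of what that change looks like (assuming the same session shape as the example near the top of this README; names are illustrative): a plugin-based STT like `gladia.STT` already runs outside LiveKit Inference, so the main adjustment is starting the session without the BVC option.

```python
from livekit import agents
from livekit.agents import Agent, AgentSession, RoomInputOptions
from livekit.plugins import gladia


async def entrypoint(ctx: agents.JobContext):
    # The Gladia plugin already avoids LiveKit Inference, so STT is unchanged.
    session = AgentSession(stt=gladia.STT())
    await session.start(
        room=ctx.room,
        agent=Agent(instructions="Transcribe the conversation."),
        # No noise_cancellation option: BVC is a LiveKit Cloud feature.
        room_input_options=RoomInputOptions(),
    )
```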
This project is licensed under the MIT License - see the LICENSE file for details.