This extension serves as a powerful coding assistant, enabling developers to research and write code more efficiently and accurately. It supports various common LLM cloud providers such as OpenAI, Gemini, Grok, and Claude, as well as local LLMs like Ollama and LM Studio, helping developers receive support faster while significantly reducing costs.
To activate the CODi Assistant: click the icon in the left side panel (Activity Bar) of the VS Code IDE, then fill in your configuration data to use it. (Remember to click the Save Setting button at the bottom!)
- Multi-LLM Provider Support: Seamlessly switch between multiple LLM providers (OpenAI, Gemini, Grok, Claude).
- Local LLM Integration: Utilize local LLMs like Ollama and LM Studio for offline use and cost savings.
- Code Optimization: Get recommendations for optimizing your code as you write.
- Research Assistance: Effortlessly research coding queries and get instant responses.
- Privacy Focused: Rest assured that this extension does not collect or send any private data over the internet; the only data transmitted is what is sent to your chosen LLM provider to obtain answers.
- Floating menu: A floating menu above a line or code block lets you request code suggestions or ask follow-up questions effortlessly. You can also use this menu to enter your own prompts for further assistance or to fix coding issues quickly.
Generate code with inline prompt

- Visual Studio Code (version 1.97.0 or higher)
- Internet connection for cloud provider features
- An API key for Gemini, OpenAI, or Grok, plus an appropriate model chosen for that provider (obtaining a key and selecting a model involves a few steps on the provider's site)
- Local LLM setup (if using local integrations)
To set up local LLMs, you can follow these guides:
- Ollama: Install Ollama by following the instructions on their official website. After installation, pull a model with `ollama pull qwen2.5-coder:7b`, or pull and run one with `ollama run qwen2.5-coder:7b`. Local API URL (default): `http://localhost:11434/api/generate`
- LM Studio: Download LM Studio from their official site and follow the setup instructions provided there. Local API URL (default): `http://localhost:1234/v1/completions` or `http://localhost:1234/v1/chat/completions`
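As an illustration of the default endpoints above, here is a minimal sketch of a request to Ollama's `/api/generate` endpoint. The model name and helper function are illustrative only and are not part of the extension:

```python
import json
import urllib.request

# Default local Ollama endpoint from the guide above
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_payload(model: str, prompt: str, stream: bool = False) -> bytes:
    """Build the JSON body that Ollama's /api/generate endpoint expects."""
    return json.dumps({"model": model, "prompt": prompt, "stream": stream}).encode("utf-8")

payload = build_generate_payload("qwen2.5-coder:7b", "Explain Python list comprehensions")

# Sending the request requires a running Ollama server, so it is shown but not executed here:
# req = urllib.request.Request(OLLAMA_URL, data=payload,
#                              headers={"Content-Type": "application/json"})
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```

LM Studio exposes OpenAI-compatible routes instead, so the same idea applies with a `messages` array posted to `http://localhost:1234/v1/chat/completions`.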
This extension contributes the following settings:
Check the detail at the end of this file.
- The extension does not collect or transfer any data from the client over the internet.
- When using local LLM services, users might experience performance (response speed) and accuracy issues that depend on the specific model selected to run in Ollama or LM Studio.
- In most cases, using cloud LLM providers like Gemini, OpenAI, or Grok will result in faster response times compared to local API services.
- The response time of local LLM services is significantly influenced by the local computer's hardware (CPU, GPU support).
- You can change the default `Prompt Keyword` to another term that suits you.
- Dynamic settings help you easily switch between LLM models and providers while coding.
- Keyword explanation pops up on mouse hover.
- Review and fix selected code (line, block, or the entire file).
- Floating action menu appears at the top of the code block or selected lines of code.
- Update description: Edit and update any incorrect links and add a 'no data collection' statement to the description.
- Update the description: Update the README file with additional notes and guidelines.
- Auto comment: CODi can understand and add code explanations as comments to the selected code or the line where the cursor is positioned.
- Keyword meaning tooltip: CODi displays a tooltip (popup) when you hover your mouse over a keyword in a line. (Remember: use `Command+Shift+A` on macOS or `Ctrl+Shift+A` on Windows to activate or deactivate this feature.)
- Code syntax highlighting added: cleaner, more readable code syntax rendering in this version.
- Settings UI: changed the settings UI colors to follow the VS Code theme.
- Some optimizations: optimized the code review and fix feature.
For more information on the cloud providers supported by this extension, you can visit the following links:
- OpenAI: OpenAI API Documentation
- Gemini: Gemini Documentation
- Grok: Grok Documentation
- Claude: Claude Documentation
These resources will help you easily find the information you need to get started with each provider.
- Description: Set the keyword that signals when you want CODi to assist you. You can customize it to your preference.
- Description: Enter the URL for your local server API. This is necessary for CODi to connect to your local service.
- Description: Choose your server provider. Options include Ollama, LMStudio, or remote services like OpenAI, Gemini, and Grok AI.
- Description: Specify the name of the AI model you want to use with your local API.
- Description: Decide whether to display the AI response next to your current page or as a single page.
- Description: Input your OpenAI API key to access OpenAI services.
- Description: Select the model you want to use with OpenAI, such as GPT-4 Turbo or others.
- Description: Enter your Gemini API key for access to Gemini services.
- Description: Choose the Gemini model you want to use from the available options.
- Description: Provide your Grok API key to utilize Grok's capabilities.
- Description: Select the Grok model you want to use, with options for different token capacities.
To configure CODi, simply adjust these settings in your preferences to tailor the assistant to your workflow!
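To make the options above concrete, a `settings.json` fragment might look like the sketch below. Note that every key here is hypothetical; the actual setting identifiers are listed at the end of this file and may differ:

```jsonc
{
  // Hypothetical setting IDs -- check the settings list at the end of this file for the real ones
  "codi.promptKeyword": "codi",
  "codi.localApiUrl": "http://localhost:11434/api/generate",
  "codi.serverProvider": "Ollama",
  "codi.localModelName": "qwen2.5-coder:7b",
  "codi.responseDisplay": "beside",
  "codi.openaiApiKey": "<your-openai-key>",
  "codi.openaiModel": "gpt-4-turbo"
}
```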


