A command-line tool that leverages OpenAI's Chat Completion API to document code with the assistance of AI models.
Watch the demo video to see these features in action.
- Source Code Documentation: Automatically generate comments and documentation for your source code.
- Multiple File Processing: Handle one or multiple files in a single command.
- Model Selection: Use the AI model of your choice with the `--model` flag.
- Custom Output: Write the results to a file with the `--output` flag, or display them in the console.
- Stream Output: Stream the LLM response to the command line with the `--stream` flag.
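
Under the hood, documenting a file boils down to a single Chat Completion request. Below is a minimal sketch of that kind of call, not the tool's actual source, assuming the official `openai` npm package; the helper name and system prompt are made up for illustration, while the model and temperature are the defaults this README documents:

```typescript
import OpenAI from 'openai';
import { readFileSync } from 'node:fs';

// Client pointed at any OpenAI-compatible endpoint, using the same
// variables dev-mate-cli reads from .env (see the configuration below).
const client = new OpenAI({
  apiKey: process.env.API_KEY,
  baseURL: process.env.BASE_URL,
});

// Hypothetical helper: send one file's source to the model and
// return the documented version it produces.
async function documentFile(path: string): Promise<string> {
  const source = readFileSync(path, 'utf8');
  const completion = await client.chat.completions.create({
    model: 'google/gemma-2-9b-it:free', // the tool's default model
    temperature: 0.7,                   // the tool's default temperature
    messages: [
      { role: 'system', content: 'Add documentation comments to the following source code.' },
      { role: 'user', content: source },
    ],
  });
  return completion.choices[0].message.content ?? '';
}
```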
Install the tool globally with npm:

```bash
npm install -g dev-mate-cli
```
`dev-mate-cli` needs an `API_KEY` and `BASE_URL` to generate responses. Store these variables in a `.env` file in the current directory, and make sure both come from the same OpenAI-compatible completion API provider:

```env
API_KEY=your_api_key
BASE_URL=https://api.openai.com/v1
```
Popular providers include OpenRouter, Groq, and OpenAI.
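
For illustration, here is a minimal sketch of how these variables can be loaded and validated in Node.js, assuming the `dotenv` package; the error message is made up for the example:

```typescript
import 'dotenv/config'; // loads ./.env into process.env

const { API_KEY, BASE_URL } = process.env;
if (!API_KEY || !BASE_URL) {
  // Fail fast if the .env file is missing either variable.
  throw new Error('API_KEY and BASE_URL must be set in .env');
}
```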
To run the tool, specify one or more source files or folders as input:

```bash
dev-mate-cli ./examples/file.js
```

For processing multiple files:

```bash
dev-mate-cli ./examples/file.js ./examples/file.cpp
```

For processing folders:

```bash
dev-mate-cli ./examples
```
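
Conceptually, folder arguments simply expand into the files they contain before each file is documented. The hypothetical helper below sketches that expansion (it is not taken from the tool's source):

```typescript
import { readdirSync, statSync } from 'node:fs';
import { join } from 'node:path';

// Expand a mix of file and folder paths into a flat list of files,
// recursing into sub-directories so `dev-mate-cli ./examples`
// picks up everything inside the folder.
function collectFiles(paths: string[]): string[] {
  const files: string[] = [];
  for (const p of paths) {
    if (statSync(p).isDirectory()) {
      files.push(...collectFiles(readdirSync(p).map((name) => join(p, name))));
    } else {
      files.push(p);
    }
  }
  return files;
}
```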
- `-m, --model <model-name>`: Choose the AI model to use (default: `google/gemma-2-9b-it:free` from OpenRouter).

  ```bash
  dev-mate-cli file.js -m "openai/gpt-4o-mini"
  ```

- `-o, --output <output-file>`: Write the output to a specified file.

  ```bash
  dev-mate-cli file.js -o output.js
  ```

- `-t, --temperature <value>`: Set the creativity level of the AI model (default: 0.7).

  ```bash
  dev-mate-cli file.js -t 1.1
  ```

- `-u, --token-usage`: Display token usage information.

  ```bash
  dev-mate-cli file.js -u
  ```

- `-s, --stream`: Stream the response to the command line.

  ```bash
  dev-mate-cli file.js -s
  ```
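
As a rough illustration of what `--stream` does, the sketch below (reusing the `client` from the earlier sketch and again assuming the official `openai` package) prints tokens as they arrive instead of waiting for the full response:

```typescript
// Request a streaming completion and write each token as it arrives.
const stream = await client.chat.completions.create({
  model: 'openai/gpt-4o-mini',
  stream: true,
  messages: [{ role: 'user', content: 'Add documentation comments to this code: ...' }],
});
for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? '');
}
```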
- Check Version: To check the current version of the tool, use:

  ```bash
  dev-mate-cli --version
  ```

- Help: Display the help message listing all available options:

  ```bash
  dev-mate-cli --help
  ```
- Document a JavaScript file and save the result:

  ```bash
  dev-mate-cli ./examples/file.js --output file-documented.js --model google/gemini-flash-8b-1.5-exp
  ```

- Process multiple files and print output to the console:

  ```bash
  dev-mate-cli ./examples/file.js ./examples/file.py --model google/gemini-flash-8b-1.5-exp
  ```
To configure the LLM with a file, create a dotfile named `.dev-mate-cli.toml` in your system's home directory (e.g. `~/.dev-mate-cli.toml`):

```toml
model = "gpt-4o"
temperature = "1"
```
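
For illustration, reading such a dotfile takes only a few lines; the sketch below is not the tool's actual loader and assumes the `smol-toml` package for parsing:

```typescript
import { existsSync, readFileSync } from 'node:fs';
import { homedir } from 'node:os';
import { join } from 'node:path';
import { parse } from 'smol-toml';

// Look for ~/.dev-mate-cli.toml and fall back to an empty config.
const configPath = join(homedir(), '.dev-mate-cli.toml');
const config = existsSync(configPath)
  ? parse(readFileSync(configPath, 'utf8')) // e.g. { model: "gpt-4o", temperature: "1" }
  : {};
```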
Contributions are welcome! If you find a bug or have an idea for an improvement, feel free to open an issue or submit a pull request. See the Contribution Guidelines for more details.
This project is licensed under the MIT License.