ProCommit

๐Ÿ“ A Customizable VS Code extension for AI-generated commit messages.


Features

  • Optional emoji in commit messages.
  • Custom generator, endpoint, and API key.
  • Commit message generation in different languages.
  • Multiple generated results to choose from.
  • Extensive customization options.

demo

Requirements

To use this extension, you need an API Key:

  • Obtain an API key from OpenAI (Default endpoint).
  • Alternatively, use the API key for your own custom endpoint (Custom endpoint).

Install

Install (Manually)

  • Download the ProCommit extension from the Direct Link or the VSIX Registry.
  • In Visual Studio Code, open the Extensions view from the Activity Bar, open the Views and More Actions (...) menu, and select Install from VSIX. Choose the ProCommit.vsix file and click Install.
  • You're done!
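
Alternatively, if you prefer the terminal, the VS Code CLI can install a VSIX directly. The file name below is illustrative; use the path of the VSIX you downloaded:

  code --install-extension ProCommit.vsix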

Extension Settings

The ProCommit extension contributes the following settings:

General

  • procommit.general.generator: Generator used to create commit messages. Available options: ChatGPT, Custom.
  • procommit.general.messageApproveMethod: Method used to approve generated commit messages. Available options: Quick pick, Message file.
  • procommit.general.language: Controls which language is used for generated commit messages.
  • procommit.general.showEmoji: Include emojis in commit messages.
  • procommit.general.useMultipleResults: Allow generating multiple commit message results to choose from.
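
A sample settings.json entry for the general options. This is a minimal sketch: the values are illustrative, and the stored option strings may differ from the labels shown in the Settings UI.

  {
    "procommit.general.generator": "ChatGPT",
    "procommit.general.messageApproveMethod": "Quick pick",
    "procommit.general.language": "English",
    "procommit.general.showEmoji": true,
    "procommit.general.useMultipleResults": true
  }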

OpenAI

  • procommit.openAI.apiKey: API Key needed for generating AI commit messages.
  • procommit.openAI.modelVersion: Version of the AI model used.
  • procommit.openAI.customEndpoint: Custom endpoint URL.
  • procommit.openAI.temperature: Controls randomness. Lower values result in less random completions. As the temperature approaches zero, the model becomes deterministic and repetitive.
  • procommit.openAI.maxTokens: The maximum number of tokens to generate. Requests can use up to 2048 tokens shared between prompt and completion.
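
A sample settings.json entry for the OpenAI options. The values are illustrative: the API key and endpoint are placeholders, and the available model versions depend on your build of the extension.

  {
    "procommit.openAI.apiKey": "<your-api-key>",
    "procommit.openAI.modelVersion": "gpt-3.5-turbo",
    "procommit.openAI.customEndpoint": "https://example.com/v1",
    "procommit.openAI.temperature": 0.2,
    "procommit.openAI.maxTokens": 196
  }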

License

Released under the MIT License by @Kochan.

Contributing

If you want more languages to be supported, please open an issue on our GitHub repository.