Custom Prompts for LLMSecretDetector

Custom prompts are a powerful feature of the LLMSecretDetector that lets you tailor the detection process to your specific needs. By providing a custom prompt, you can guide the language model to focus on particular types of sensitive information and to format its output in a desired way.

Purpose of Custom Prompts

Custom prompts enhance the detection process by:

- Focusing the model on the particular types of sensitive information you care about, such as API keys, secrets, tokens, or credentials.
- Controlling the format of the model's output (for example, JSON) so that detected secrets can be parsed reliably.
- Narrowing what the model reports, which helps reduce irrelevant matches.

Creating Custom Prompts

When creating custom prompts, consider the following guidelines:

- Include the {text} placeholder where the text to scan should be inserted.
- State explicitly which kinds of secrets the model should report, such as API keys, tokens, or credentials.
- Specify the output format you expect (for example, JSON) so the response can be parsed.

The example in the next section follows these guidelines.

Example of a Custom Prompt

Here’s an example of a custom prompt used to extract secrets from text:

custom_prompt = (
    "Extract secrets from the following:\n\n"
    "Only include API keys, secrets, tokens, or credentials. "
    "Use the following output format as JSON:\n\n"
    "Text: {text}"
)
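
The {text} marker is standard Python str.format placeholder syntax. As a minimal sketch, assuming the detector substitutes the text to scan via str.format (a reasonable reading of the prompt_format name and the {text} placeholder, though not confirmed on this page), you can preview the exact prompt the model will receive. Note that if you embed a literal JSON schema after the output-format line, its braces must be doubled ({{ and }}) so str.format does not mistake them for placeholders:

# Minimal sketch of rendering a prompt_format string.
# Assumption: LLMSecretDetector fills {text} with str.format; the
# str.format behavior shown here is standard Python.
custom_prompt = (
    "Extract secrets from the following:\n\n"
    "Only include API keys, secrets, tokens, or credentials. "
    "Use the following output format as JSON:\n\n"
    # Illustrative schema only; braces are doubled so str.format keeps them.
    '[{{"type": "<secret type>", "value": "<secret value>"}}]\n\n'
    "Text: {text}"
)

print(custom_prompt.format(text="My API key is sk-XXXX"))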

Using Custom Prompts with LLMSecretDetector

To use a custom prompt with the LLMSecretDetector, pass it as the prompt_format parameter when initializing the detector.

Example Usage

from sentinel import LLMSecretDetector

# Define a custom prompt
custom_prompt = (
    "Extract secrets from the following:\n\n"
    "Only include API keys, secrets, tokens, or credentials. "
    "Use the following output format as JSON:\n\n"
    "Text: {text}"
)

# Initialize the detector with the custom prompt
# ('model' is the language model instance the detector should query)
detector = LLMSecretDetector(model, prompt_format=custom_prompt)

# Use the detector in your LLM pipeline
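
The exact scanning call depends on the detector's API, which this page does not show. Purely as a sketch, with detector.scan() standing in as a hypothetical method name (not the confirmed LLMSecretDetector API), a guard around model output might look like this:

# Hypothetical pipeline hook. 'scan' is an assumed method name used
# for illustration; substitute the real LLMSecretDetector call.
def guard_output(detector, llm_output: str) -> str:
    findings = detector.scan(llm_output)  # assumed API
    if findings:
        # Fail closed: never pass text containing secrets downstream.
        raise ValueError(f"Secrets detected in model output: {findings}")
    return llm_output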

Benefits of Custom Prompts

Custom prompts are a valuable tool when you need precise, reliable detection of sensitive information in your LLM interactions. By tailoring the prompt, you keep the detection process effective and aligned with your requirements.