
[Feature]: Enable to apply custom LLM Guardrails to the model #1242

Open · 2 tasks done

haofeif opened this issue on Sep 4, 2024 · 0 comments

Checklist

  • I've searched for similar issues and couldn't find anything matching
  • I've discussed this feature request in the K8sGPT Slack and got positive feedback

Is this feature request related to a problem?

None

Problem Description

Many customers mandate that default LLM guardrails be deployed as a baseline responsible-AI policy for all LLM usage. For instance, Amazon Bedrock provides a dedicated, standalone ApplyGuardrail API that can screen the input to or the output from an LLM, regardless of which model is being used. Without such a guardrail in place, company security policy prevents the LLM from being used at all.
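
For context, a minimal sketch of what a standalone guardrail check looks like with the AWS SDK for Go v2. The guardrail ID, version, and text are placeholders, and the field and enum names below come from the bedrockruntime package as I understand it; they should be verified against the SDK version in use.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/bedrockruntime"
	"github.com/aws/aws-sdk-go-v2/service/bedrockruntime/types"
)

func main() {
	ctx := context.Background()

	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := bedrockruntime.NewFromConfig(cfg)

	// Screen a model response against a pre-configured guardrail.
	// "gr-example-id" and "1" are placeholders for illustration only.
	out, err := client.ApplyGuardrail(ctx, &bedrockruntime.ApplyGuardrailInput{
		GuardrailIdentifier: aws.String("gr-example-id"),
		GuardrailVersion:    aws.String("1"),
		Source:              types.GuardrailContentSourceOutput, // use ...Input to screen prompts
		Content: []types.GuardrailContentBlock{
			&types.GuardrailContentBlockMemberText{
				Value: types.GuardrailTextBlock{Text: aws.String("model output to screen")},
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	// If the guardrail intervened, the masked/blocked replacement text is returned in Outputs.
	if out.Action == types.GuardrailActionGuardrailIntervened {
		for _, o := range out.Outputs {
			fmt.Println("guardrail replaced content with:", aws.ToString(o.Text))
		}
	} else {
		fmt.Println("content passed the guardrail")
	}
}
```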

Solution Description

Add an option for K8sGPT to plug in and call third-party guardrail APIs (e.g. the Bedrock ApplyGuardrail API) around its LLM requests; one possible plug-in surface is sketched below.
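
A minimal sketch of such a plug-in surface, assuming a hypothetical guardrail.Provider interface (none of these names exist in K8sGPT today). A Bedrock adapter would satisfy Provider by calling ApplyGuardrail; any other policy service could be dropped in the same way.

```go
// Package guardrail is a hypothetical plug-in surface for illustration only.
package guardrail

import "context"

// Decision reports whether guarded content may pass through unchanged.
type Decision struct {
	Allowed     bool   // false if the guardrail intervened
	Replacement string // masked/blocked text to return instead, if any
}

// Provider is the contract a third-party guardrail integration would satisfy,
// e.g. a Bedrock ApplyGuardrail adapter or an in-house policy service.
type Provider interface {
	CheckInput(ctx context.Context, prompt string) (Decision, error)
	CheckOutput(ctx context.Context, completion string) (Decision, error)
}

// Guarded wraps an existing completion function with input and output checks,
// so an AI backend could be decorated without modifying it.
func Guarded(p Provider, complete func(ctx context.Context, prompt string) (string, error)) func(ctx context.Context, prompt string) (string, error) {
	return func(ctx context.Context, prompt string) (string, error) {
		if d, err := p.CheckInput(ctx, prompt); err != nil {
			return "", err
		} else if !d.Allowed {
			return d.Replacement, nil
		}
		out, err := complete(ctx, prompt)
		if err != nil {
			return "", err
		}
		if d, err := p.CheckOutput(ctx, out); err != nil {
			return "", err
		} else if !d.Allowed {
			return d.Replacement, nil
		}
		return out, nil
	}
}
```

Wrapping the completion call like this would keep the guardrail logic independent of any particular AI backend or provider.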

Benefits

This will unblock the adoption of K8sGPT in organizations that mandate company-wide responsible-AI policies.

Potential Drawbacks

Increases the complexity of the solution.

Additional Information

N/A

Labels: none yet
Project status: Proposed