Fetch LLM response based on prompt and cache responses #2201

Open
samshara opened this issue Jul 3, 2024 · 0 comments
samshara commented Jul 3, 2024

  1. Check whether the hashed prompt already exists in the cache (see the sketch after this list):
    • If it exists, return the cached response.
    • If it doesn't exist, send the generated prompt to the LLM service, cache the response received, and return it.
  2. Link filtered extracts and prioritized filtered extracts to the filter response.
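A minimal sketch of the intended hash-and-cache flow, assuming an in-memory dict as the cache and a placeholder `call_llm_service` function; the actual implementation would presumably use the project's cache backend (e.g. Django cache or Redis) and its own LLM client.

```python
import hashlib

# Hypothetical in-memory cache; stands in for whatever cache backend the project uses.
_response_cache: dict[str, str] = {}


def _hash_prompt(prompt: str) -> str:
    """Derive a stable cache key from the generated prompt."""
    return hashlib.sha256(prompt.encode("utf-8")).hexdigest()


def call_llm_service(prompt: str) -> str:
    """Placeholder for the real LLM service call (assumption, not the project's API)."""
    raise NotImplementedError


def fetch_llm_response(prompt: str) -> str:
    key = _hash_prompt(prompt)
    # If the hashed prompt is already cached, return the cached response.
    if key in _response_cache:
        return _response_cache[key]
    # Otherwise, send the generated prompt to the LLM service,
    # cache the response received, and return it.
    response = call_llm_service(prompt)
    _response_cache[key] = response
    return response
```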