fine tuning model? #82

Open
shiffman opened this issue Apr 8, 2024 · 1 comment

Comments

@shiffman
Member

shiffman commented Apr 8, 2024

Just a reminder here for me to experiment with a fine-tuned model rather than an out-of-the-box LLM... though I think RAG is working well for now for custom personality, language, and info.

@dipamsen
Member

dipamsen commented Apr 9, 2024

Currently for RAG, the following prompt is used:

Dan says: {prompt}
Additional context you can use from most relevant to less relevant:
- {context1}
- {context2..10}

Current code in the editor:
```
{currentCode}
```

To better capture "custom personality, language, and info", we could change the prompt to tell the model that the context passages are Dan's own words, and/or ask it to mimic their style. (One thing to note: this additional context is only available during chatting, so code explanation may not inherit these features.)

We could also consider moving the prompt + code below the retrieved context, to see whether the ordering makes any difference — a rough sketch of what that might look like is below.
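
For reference, here is a minimal sketch (TypeScript, with hypothetical names — not the repo's actual code) of how the prompt assembly could be parameterized to try both orderings:

```ts
// Hypothetical sketch of the RAG prompt assembly described above.
// A flag lets us experiment with placing the retrieved context either
// before or after the user prompt + editor code.
const FENCE = "`".repeat(3); // markdown code fence used inside the prompt

function buildPrompt(
  userPrompt: string,
  contexts: string[], // retrieved chunks, most relevant first
  currentCode: string,
  contextFirst = false, // the reordering experiment suggested above
): string {
  const contextBlock = [
    "Additional context you can use from most relevant to less relevant:",
    ...contexts.map((c) => `- ${c}`),
  ].join("\n");

  const promptBlock = [
    `Dan says: ${userPrompt}`,
    "",
    "Current code in the editor:",
    FENCE,
    currentCode,
    FENCE,
  ].join("\n");

  return contextFirst
    ? `${contextBlock}\n\n${promptBlock}`
    : `${promptBlock}\n\n${contextBlock}`;
}
```

Running the same test questions through both orderings would be a quick way to check whether it matters.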
