This model is based on Mistral-7B-v0.1 and was trained on 1,112,000 dialogs. The maximum sequence length during training was 2048 tokens.
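
Because training used a 2048-token window, it can help to check that a prompt fits before generating. A minimal sketch, assuming the tokenizer of the base Mistral-7B-v0.1 checkpoint is compatible with this fine-tune:

```python
from transformers import AutoTokenizer

# Assumption: the fine-tune shares the base model's tokenizer.
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

prompt = "..."  # your assembled narrative + dialog
n_tokens = len(tokenizer.encode(prompt))

# Keep the prompt inside the 2048-token training window.
assert n_tokens <= 2048, f"Prompt is {n_tokens} tokens; trim it below 2048."
```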

The model needs two things from you in the prompt: narrative and dialog.

The dialog is a series of phrases or lines from the individuals involved; you just need to indicate who is saying what.
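
As a rough illustration, a prompt could be assembled like this. The exact template (narrative first, then speaker-prefixed dialog lines) is an assumption made for the sketch, not a documented format; adjust it to match your setup:

```python
# Narrative: free-form scene-setting text.
narrative = (
    "A quiet tavern on the edge of town. Rain drums on the windows "
    "while two travelers share a table near the fire."
)

# Dialog: each entry pairs a speaker with their line.
dialog = [
    ("Mira", "You're sure the road north is closed?"),
    ("Teo", "Snowed in since Tuesday. We wait, or we go around."),
]

# Join narrative and speaker-prefixed lines into one prompt string.
prompt = narrative + "\n\n" + "\n".join(
    f"{speaker}: {line}" for speaker, line in dialog
)
print(prompt)
```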

Other releases:

Dr’s SuperCOT fine-tune

SQLCoder-7B