The model is fine-tuned from Open-Orca/Mistral-7B-OpenOrca on 200k samples drawn from several datasets.

| Model                  | HumanEval (Python) | Java  | JavaScript | tokens/s |
|------------------------|--------------------|-------|------------|----------|
| Phi-1                  | 51.22              | 10.76 | 19.25      |          |
| CodeLlama-34b-Instruct | 50.79              | 41.53 | 45.85      | 15.1     |
| CodeLlama-13b-Instruct | 50.6               | 33.99 | 40.92      | 25.3     |
| Speechless Code 7B     | 46.34              |       |            |          |
| CodeLlama-7b-Instruct  | 45.65              |       |            |          |

Its coding performance in Python is comparable to that of Phi-1.5.
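The HumanEval numbers above are pass@k-style percentages (the fraction of problems for which at least one of k sampled completions passes the unit tests). As a point of reference, pass@k is commonly computed with the unbiased estimator introduced alongside HumanEval (Chen et al., 2021); a minimal sketch, where `n` and `c` are illustrative values, not figures from this evaluation:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: n samples generated per problem,
    c of which pass the tests. Returns the estimated probability
    that at least one of k randomly drawn samples passes."""
    if n - c < k:
        # Fewer failures than k: every size-k draw contains a pass.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Hypothetical example: 200 samples, 95 passing.
# For k=1 the estimator reduces to c/n = 0.475.
print(pass_at_k(200, 95, 1))
```

For k=1 the estimator simplifies to the raw pass rate c/n, which is why pass@1 can also be read as greedy or single-sample accuracy.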