
I haven't had much trouble recently with GPT-3.5 or GPT-4 function calls returning in an undesirable format. I did get a few bad-syntax responses when OpenAI first rolled the feature out, but not in the past few months.

Llama 2 can also pick up the function-call format, given sufficient training data containing function-call responses, though you'll then have to parse the returned object out of the text-based response.
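That parsing step can be as simple as pulling the first balanced JSON object out of the model's text. A minimal sketch (the function name and the GPT-style call shape are illustrative, and it doesn't handle braces inside string values):

```python
import json

def extract_function_call(response_text):
    """Pull the first balanced {...} JSON object out of a free-text response.

    Assumes the fine-tuned model emits a GPT-style call such as
    {"name": "get_weather", "arguments": {"city": "Paris"}},
    possibly surrounded by prose. Naive: a '{' or '}' inside a string
    value would throw off the depth counter.
    """
    start = response_text.find("{")
    if start == -1:
        return None
    depth = 0
    for i, ch in enumerate(response_text[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                try:
                    return json.loads(response_text[start:i + 1])
                except json.JSONDecodeError:
                    return None
    return None

raw = 'Calling the tool now: {"name": "get_weather", "arguments": {"city": "Paris"}}'
call = extract_function_call(raw)
```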


Has anyone actually done such fine-tuning on Llama, though? AFAIK most projects, like llama.cpp, use grammars instead.
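For context, the grammar approach constrains sampling at decode time rather than teaching the format via fine-tuning: llama.cpp accepts a GBNF file and masks out any token that would break the grammar. A minimal sketch of such a grammar (field names and the allowed value types are illustrative, not taken from llama.cpp's bundled examples):

```
# Constrain output to {"name": "...", "arguments": {...}} (illustrative)
root   ::= "{" ws "\"name\"" ws ":" ws string ws "," ws "\"arguments\"" ws ":" ws object ws "}"
object ::= "{" ws ( pair ( ws "," ws pair )* )? ws "}"
pair   ::= string ws ":" ws value
value  ::= string | object
string ::= "\"" [^"]* "\""
ws     ::= [ \t\n]*
```

The upside is that malformed output becomes impossible by construction; the trade-off is that the grammar can't teach the model *when* to call a function, only how the call must be spelled.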


Yep! The linked notebook includes an example of exactly that (fine-tuning a 7b model to match the syntax of GPT-4 function call responses): https://github.com/OpenPipe/OpenPipe/blob/main/examples/clas...


Thanks!



