That's what I was wondering: how does the token overhead of function calling with, say, Instructor compare to a regular chat response that includes few-shot schema examples?
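You could measure it empirically. Here's a minimal sketch of how I'd compare the two, assuming Instructor's `from_openai` wrapper and its `create_with_completion` helper (which returns the raw completion alongside the parsed model); the model name and `User` schema are placeholders:

```python
# Rough comparison of prompt/completion token counts for the two approaches.
# Assumes instructor >= 1.0 and the official OpenAI SDK; the model name
# and User schema are placeholders for illustration.
import instructor
from openai import OpenAI
from pydantic import BaseModel

class User(BaseModel):
    name: str
    age: int

# Approach 1: function calling via Instructor (schema sent as a tool definition).
structured_client = instructor.from_openai(OpenAI())
user, completion = structured_client.chat.completions.create_with_completion(
    model="gpt-4o-mini",
    response_model=User,
    messages=[{"role": "user", "content": "Extract: Jason is 25 years old."}],
)
print("instructor:", completion.usage)

# Approach 2: plain chat with a few-shot schema example in the prompt.
plain_client = OpenAI()
response = plain_client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": 'Reply only with JSON like {"name": "Ada", "age": 36}.'},
        {"role": "user", "content": "Extract: Jason is 25 years old."},
    ],
)
print("few-shot:", response.usage)
```

Both `usage` objects report `prompt_tokens` and `completion_tokens`, so the overhead of the tool-definition JSON schema versus the inline few-shot example falls out directly.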
Maybe Instructor makes the most sense only when you're working with potentially malicious user data.