Hacker News

That's where "rapidly" comes in. LLMs also allow a high degree of customization through the choice of prompt, and it's much quicker to adapt a prompt than to retrain a fine-tuned model. Once the prompt has stabilized, the LLM's outputs could be used to properly fine-tune a custom model for efficient use.
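The "use the LLM's outputs to fine-tune a custom model" step can be as simple as logging prompt/completion pairs into a fine-tuning dataset. A minimal sketch, assuming a sentiment task and the common OpenAI-style chat JSONL format (the field names and file name are that convention's, not anything from this comment; the collected pairs are made up for illustration):

```python
import json

# Hypothetical (input, output) pairs collected from the prompted LLM
# once the prompt has stabilized.
collected = [
    ("The service was slow and rude.", "negative"),
    ("Absolutely loved the new update!", "positive"),
]

SYSTEM_PROMPT = "Classify the sentiment of the text as positive or negative."

# Write one JSON object per line (JSONL), a common fine-tuning format.
with open("finetune.jsonl", "w") as f:
    for text, label in collected:
        record = {"messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": text},
            {"role": "assistant", "content": label},
        ]}
        f.write(json.dumps(record) + "\n")
```

Each line is a complete training example, so the dataset can be grown incrementally as the prompted LLM runs in production.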

As for sentiment, even embeddings can do a good job at it.
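The embeddings approach usually means: embed each text, then fit a lightweight linear classifier on top. A toy sketch of that idea, faking the embedding step with a tiny hand-made word-vector table (in practice you'd call an embeddings API or a sentence-embedding model; the vectors and examples here are invented for illustration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy stand-in for a real embedding model: each known word maps to a
# 2-d vector, and a text embeds as the mean of its word vectors.
WORD_VECS = {
    "great": [1.0, 0.2], "love": [0.9, 0.1], "excellent": [1.0, 0.0],
    "terrible": [-1.0, 0.1], "hate": [-0.9, 0.2], "awful": [-1.0, 0.0],
}

def embed(text):
    vecs = [WORD_VECS.get(w, [0.0, 0.0]) for w in text.lower().split()]
    return np.mean(vecs, axis=0)

texts = ["great movie", "I love it", "excellent work",
         "terrible movie", "I hate it", "awful work"]
labels = [1, 1, 1, 0, 0, 0]  # 1 = positive, 0 = negative

# A simple linear classifier over the embeddings is often enough
# for sentiment.
clf = LogisticRegression().fit([embed(t) for t in texts], labels)
print(clf.predict([embed("love this excellent movie")])[0])  # → 1
```

With real sentence embeddings the recipe is identical; only `embed` changes, and the classifier stays cheap to train and run.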





