Why use Fine-tuning?
All large language models work on a token system. A token is roughly four characters of English text. Every model has a token limit, so you can't always fit enough data into a single prompt.
For more complex generations, you often need to show the AI enough examples for it to understand what you are trying to achieve. These example sets, or datasets, can range from 50 examples to hundreds of thousands, and more high-quality examples generally mean better output. A well-built fine-tuned model will typically outperform prompting alone.
As most large language model providers charge for the tokens used in a prompt, fine-tuning can also save you a lot of money. Moving those examples into a training dataset means you no longer pay to resend them with every request.
![](https://assets-global.website-files.com/6423051caaa7d12552c2de54/6538db25b61099330ec490a9_%27Fine-Tune%20Riku.AI.jpeg)
Learn About JSONL Datasets
You can fine-tune models directly in Riku with no code. We even have a JSONL dataset builder that takes your outputs and puts them in the right format without the stress. Making fine-tuning accessible is one of our main goals.
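The JSONL format itself is simple: one JSON object per line. As a rough sketch (the exact field names depend on the provider you fine-tune with; prompt/completion pairs are one common convention, and the example texts below are purely illustrative), a small Python script could build such a file:

```python
import json

# Hypothetical example pairs; many fine-tuning pipelines expect
# one {"prompt": ..., "completion": ...} JSON object per line.
examples = [
    {"prompt": "Write a tagline for a coffee shop:",
     "completion": " Brewed fresh, served with a smile."},
    {"prompt": "Write a tagline for a bookstore:",
     "completion": " Stories worth getting lost in."},
]

# Write one compact JSON object per line -- that is all "JSONL" means.
with open("dataset.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

Tools like Riku's dataset builder do this formatting for you, but it helps to know that the file underneath is just newline-separated JSON records.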