Natural Language Processing | Machine Learning
Exploring the modern wave of machine learning with cutting-edge fine-tuning
Fine-tuning is the process of tailoring a machine learning model to a specific application, which can be vital in achieving consistent, high-quality performance. In this article we'll discuss "Low-Rank Adaptation" (LoRA), one of the most popular fine-tuning strategies. First we'll cover the theory, then we'll use LoRA to fine-tune a language model, improving its question-answering abilities.
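To preview the core idea before the theory section: instead of updating a model's full weight matrix, LoRA learns a small low-rank update alongside the frozen original weights. Here is a minimal NumPy sketch of that idea (the shapes, rank, and variable names are illustrative assumptions, not the article's actual code):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r = 512, 512, 8  # hypothetical layer sizes and a small rank r

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weights
A = rng.standard_normal((r, d_in)) * 0.01   # trainable, initialized small
B = np.zeros((d_out, r))                    # trainable, initialized to zero

def forward(x):
    # Original frozen path plus the low-rank adaptation path B @ A.
    return W @ x + B @ (A @ x)

x = rng.standard_normal(d_in)
# Because B starts at zero, the adapted model initially matches the base model.
assert np.allclose(forward(x), W @ x)

# Only A and B are trained, shrinking trainable parameters dramatically:
full_params = d_out * d_in        # 512 * 512 = 262,144
lora_params = r * (d_in + d_out)  # 8 * 1024  = 8,192
```

Training then updates only `A` and `B`, which is why LoRA is so much cheaper than full fine-tuning; we'll unpack the details below.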
Who is this useful for? Anyone interested in learning state-of-the-art machine learning approaches. We'll be focusing on language modeling in this article, but LoRA is a popular choice in many machine learning applications.
How advanced is this post? This article should be approachable to novice data scientists and enthusiasts, but it covers topics that are critical in advanced applications.
Prerequisites: While not required, a solid working understanding of large language models (LLMs) will probably be useful. Feel free to refer to my article on transformers, a common form of language model, for more information:
You'll also probably want to have an idea of what a gradient is. I have an article on that as well:
If you don't feel confident in either of these topics you can still get a lot from this article, but the linked resources are there if you get confused.