How to Train a Fine-Tuned GPT-3 Model

In the previous blog, we discussed how to create a custom dataset using the Wikipedia library. In this blog, we will discuss how to train a fine-tuned GPT-3 model using OpenAI. By default, OpenAI provides a few base models, or engines, suited to various tasks. However, every now and then you either don't get the desired output, or getting it costs too much. With fine-tuning, you can take one of OpenAI's base models and train a new model on a curated dataset that you provide.




The following libraries are required to fine-tune the GPT-3 model:

1. openai

2. wandb

3. pandas
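Assuming a standard Python environment, all three libraries can be installed in one go with pip:

```shell
# install the OpenAI client, Weights & Biases, and pandas
pip install openai wandb pandas
```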







Before creating a model, we need to create an OPENAI_API_KEY on the OpenAI website; that API key is required to use and tune the model. We can install the openai library with the `pip install openai` command. The overall steps are: load the dataset we saved in the previous blog, merge a few columns to make a two-column data frame with 'prompt' and 'completion' columns, convert the data frame into JSONL format, and submit the dataset to get a fine-tuned GPT-3 model.
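One way to keep the API key out of your source code is to read it from an environment variable; a minimal sketch (the key value below is a placeholder, and `OPENAI_API_KEY` is the variable name the openai library conventionally uses):

```python
import os

# Placeholder for illustration only; in practice, run
#   export OPENAI_API_KEY="sk-..."
# in your shell before starting Python, instead of hard-coding the key.
os.environ.setdefault("OPENAI_API_KEY", "sk-placeholder")

# The openai library would then be configured with:
#   openai.api_key = os.environ["OPENAI_API_KEY"]
api_key = os.environ["OPENAI_API_KEY"]
print(len(api_key) > 0)
```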




Sample code showing how to tune a model.

The following snippet merges several columns into a single column:

# importing pandas library
import pandas as pd

# reading the csv file which was generated in previous blog
df = pd.read_csv('/content/olympics_sections.csv')

# merging few columns into a single column
df['context'] = df.title + "\n" + df.heading + "\n\n" + df.content
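To see what the merge produces, here is a tiny self-contained sketch with made-up rows standing in for the CSV (the values are hypothetical):

```python
import pandas as pd

# Two hypothetical rows standing in for olympics_sections.csv
df = pd.DataFrame({
    "title":   ["2020 Summer Olympics", "2020 Summer Olympics"],
    "heading": ["Summary", "Venues"],
    "content": ["The Games were held in Tokyo.",
                "Events took place at several venues."],
})

# Same merge as above: title and heading on their own lines,
# then a blank line, then the section content
df["context"] = df.title + "\n" + df.heading + "\n\n" + df.content
print(df["context"][0])
```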




We place the title and context columns into new columns named prompt and completion, since this is the standard format OpenAI expects for fine-tuning:



dff = pd.DataFrame(zip(df['title'], df['context']), columns = ['prompt','completion'])



Convert the data frame to the JSONL file format to pass it to the model, as fine-tuning accepts only JSONL:



dff.to_json("abhi.jsonl", orient='records', lines=True)
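Reading one line back from such a file shows the record shape that fine-tuning expects; a small sketch with a made-up row (the filename and values are hypothetical):

```python
import json

import pandas as pd

# One hypothetical prompt/completion pair
dff = pd.DataFrame(
    [("2020 Summer Olympics", "The Games were held in Tokyo.")],
    columns=["prompt", "completion"],
)
dff.to_json("sample.jsonl", orient="records", lines=True)

# Each line of a JSONL file is a standalone JSON object
with open("sample.jsonl") as f:
    record = json.loads(f.readline())
print(record)
```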



Different input formats are supported by OpenAI's data-preparation tool as long as they contain a prompt and a completion column/key. It guides you through a series of suggested changes and then saves the output into a JSONL file ready for fine-tuning. This can be done using the snippet below. By default, the modified output is written to `abhi_prepared.jsonl`.



!openai tools fine_tunes.prepare_data -f abhi.jsonl -q



The command below uploads the file using the files API, creates a fine-tuning job, and streams events until the job is complete. Every fine-tuning job begins with a base model, which is curie by default.



!openai api fine_tunes.create -t "abhi_prepared.jsonl"



Running the above command gives us the name of the fine-tuned model as output. We can use that model to generate completions for other prompts. For example:


import openai

openai.api_key = "your api key"  # the key created earlier

# the model name returned by the fine_tunes.create command above
ft_model = "name of your model from the above code"

sample1 = "2020 Summer Olympics"
res = openai.Completion.create(model=ft_model, prompt=sample1, max_tokens=500, temperature=0)




Sample output:

2020 Summer Olympics opening ceremony
Venue

The ceremony was held at the Olympic Stadium, originally built for the 1988 Summer Olympics. The stadium was renovated for the Olympics and Paralympics, and was the venue for the opening and closing ceremonies of the 2012 and 2016 Summer Olympics.
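The generated text can be pulled out of the response object, which is dict-like; a minimal sketch using a hypothetical stand-in for the response:

```python
# Hypothetical stand-in for the Completion.create response object;
# the generated text lives under choices[0]["text"]
res = {
    "choices": [
        {"text": "2020 Summer Olympics opening ceremony\nVenue\n\n"
                 "The ceremony was held at the Olympic Stadium."}
    ]
}
answer = res["choices"][0]["text"]
print(answer.splitlines()[0])  # first line of the completion
```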




Read – Manual to implement GPT-3 Model



Also, read About GPT-3 Model

