How To Fine-Tune ChatGPT 3.5 Turbo

This article outlines how you can fine-tune your GPT-3.5 Turbo models by preparing your data, uploading your files, and then creating a fine-tuning job through the OpenAI API.




 

In case you hadn’t already heard, OpenAI recently announced that fine-tuning for GPT-3.5 Turbo is now available. Fine-tuning for GPT-4 is expected to follow later this fall. For developers in particular, this has been most welcome news. 

But why precisely was this such an important announcement? In short, it’s because fine-tuning a GPT-3.5 Turbo model offers several important benefits. While we’ll explore these benefits later in this article, in essence, fine-tuning enables developers to more effectively manage their projects and shorten their prompts (sometimes by up to 90%) by embedding instructions into the model itself. 

With a fine-tuned version of GPT-3.5 Turbo, it’s possible to exceed the capabilities of the base GPT-3.5 model for certain tasks. Let’s explore how you can fine-tune your GPT-3.5 Turbo models in greater depth. 

 

Preparing Data for Fine-tuning

 

The first step in fine-tuning GPT-3.5 Turbo is to format your data into the JSONL structure the API expects. Each line in your JSONL file will have a messages key with three different kinds of messages:

  • Your input message (also called the user message)
  • The context of the message (also called the system message)
  • The model response (also called the assistant message)

Here is an example with all three of these types of messages:

{
  "messages": [
    { "role": "system", "content": "You are an experienced JavaScript developer adept at correcting mistakes" },
    { "role": "user", "content": "Find the issues in the following code." },
    { "role": "assistant", "content": "The provided code has several aspects that could be improved upon." }
  ]
}

 

Once your data has been prepared, save it as a JSONL file. 
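
If you are assembling the training file programmatically, here is a minimal Python sketch of how that might look (the file name train_data.jsonl is just an example):

import json

# Each training example uses the chat format shown above:
# a system message, a user message, and the ideal assistant response.
training_examples = [
    {
        "messages": [
            {"role": "system", "content": "You are an experienced JavaScript developer adept at correcting mistakes"},
            {"role": "user", "content": "Find the issues in the following code."},
            {"role": "assistant", "content": "The provided code has several aspects that could be improved upon."},
        ]
    },
    # ...add more examples (OpenAI requires at least 10 for a fine-tuning job)
]

# JSONL means one JSON object per line
with open("train_data.jsonl", "w") as f:
    for example in training_examples:
        f.write(json.dumps(example) + "\n")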

 

Uploading Files for Fine-tuning

 

Once you have created and saved your dataset as shown above, it’s time to upload the file so it can be used for fine-tuning. 

Here is an example of how you can do this with a curl command provided by OpenAI:

curl https://api.openai.com/v1/files \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -F "purpose=fine-tune" \
  -F "file=@path_to_your_file" 

 

Creating a Fine-tuning Job

 

Now the time has come to finally execute the fine-tuning. Again, OpenAI provides an example of how you can do this:

curl https://api.openai.com/v1/fine_tuning/jobs \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
  "training_file": "TRAINING_FILE_ID",
  "model": "gpt-3.5-turbo-0613"
}'

 

If you prefer working in Python rather than curl, the upload step above corresponds to the openai.File.create method in the OpenAI Python library. Either way, remember to save the file ID returned by the upload, as it is the TRAINING_FILE_ID you will need when creating the fine-tuning job. 
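
Here is a rough Python sketch of the upload and job-creation steps using the openai package (the 0.x API; the file name and key placeholder are assumptions):

import openai

openai.api_key = "YOUR_OPENAI_API_KEY"  # or set the OPENAI_API_KEY environment variable

# Upload the JSONL training file (equivalent of the curl upload above)
upload = openai.File.create(
    file=open("train_data.jsonl", "rb"),
    purpose="fine-tune",
)
training_file_id = upload["id"]  # save this; it is the TRAINING_FILE_ID used below

# Create the fine-tuning job (equivalent of the curl request above)
job = openai.FineTuningJob.create(
    training_file=training_file_id,
    model="gpt-3.5-turbo-0613",
)

# Check on the job; once it finishes, it reports the fine-tuned model's name
status = openai.FineTuningJob.retrieve(job["id"])
print(status["status"], status.get("fine_tuned_model"))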

 

Utilizing the Fine-tuned Model

 

Now the time has come to interact with the fine-tuned model. You can do this in the OpenAI Playground or directly through the API. 

Note the OpenAI example below:

curl https://api.openai.com/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
  "model": "ft:gpt-3.5-turbo:org_id",
  "messages": [
    {
      "role": "system",
      "content": "You are an experienced JavaScript developer adept at correcting mistakes"
    },
    {
      "role": "user",
      "content": "Hello! Can you review this code I wrote?"
    }
  ]
}'

 

This is also a good opportunity for comparing the new fine-tuned model with the original GPT-3.5 Turbo model. 
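
One simple way to do that comparison is to send the same prompt to both models and look at the responses side by side. Here is a rough Python sketch (again using the 0.x openai package; the fine-tuned model name is a placeholder from the example above):

import openai

openai.api_key = "YOUR_OPENAI_API_KEY"

messages = [
    {"role": "system", "content": "You are an experienced JavaScript developer adept at correcting mistakes"},
    {"role": "user", "content": "Hello! Can you review this code I wrote?"},
]

# Query the base model and the fine-tuned model with the same prompt.
# Replace "ft:gpt-3.5-turbo:org_id" with the fine_tuned_model name
# reported by your completed fine-tuning job.
for model in ["gpt-3.5-turbo", "ft:gpt-3.5-turbo:org_id"]:
    response = openai.ChatCompletion.create(model=model, messages=messages)
    print(f"--- {model} ---")
    print(response["choices"][0]["message"]["content"])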

 

Advantages of Fine-tuning

 

Fine-tuning your GPT-3.5 Turbo models offers three primary advantages for improving model quality and performance. 

 

Improved Steerability

 

In practice, this means fine-tuning permits developers to ensure their customized models follow specific instructions more closely. For example, if you’d like your model to respond in a different language (such as Italian or Spanish), fine-tuning enables you to do that. 
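
For instance, a hypothetical training example like the one below (in the same JSONL format as before) nudges the model toward always answering in Italian:

{
  "messages": [
    { "role": "system", "content": "You are a helpful assistant that always replies in Italian." },
    { "role": "user", "content": "What time does the store open?" },
    { "role": "assistant", "content": "Il negozio apre alle 9:00." }
  ]
}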

The same goes for keeping outputs shorter or having the model respond in a certain way. Speaking of outputs…

 

More Reliable Output Formatting 

 

Thanks to fine-tuning, a model can improve its ability to format responses in a consistent way. This is very important for any application that requires a specific format, such as coding. Specifically, developers can fine-tune their models so that user prompts are converted into JSON snippets, which can then be incorporated into larger data workflows later on. 
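
As a hypothetical illustration, a training example like the following teaches the model to answer with nothing but a JSON snippet:

{
  "messages": [
    { "role": "system", "content": "Extract the product and quantity from the user's message and reply only with JSON." },
    { "role": "user", "content": "I'd like to order two large pizzas, please." },
    { "role": "assistant", "content": "{\"product\": \"large pizza\", \"quantity\": 2}" }
  ]
}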

 

Customized Tone

 

If a business needs to ensure that the output generated by its AI models carries a specific tone, fine-tuning is the most efficient way to ensure that. Many businesses need their content and marketing materials to match their brand voice or carry a certain tone as a means to better connect with customers. 

If a business has a recognizable brand voice, it can bake that voice into its GPT-3.5 Turbo models when preparing the data for fine-tuning. In practice, this means writing the ‘system message’ and example ‘assistant message’ entries discussed above in that voice. When done properly, the generated messages will reflect the company’s brand voice, while also significantly reducing the time needed to edit everything from social media copy to whitepapers. 
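
For example, a company with a friendly, plain-spoken voice (the brand "Acme" below is purely hypothetical) might include training examples along these lines:

{
  "messages": [
    { "role": "system", "content": "You write for Acme: friendly, concise, and free of jargon." },
    { "role": "user", "content": "Draft a one-sentence announcement for our new running shoe." },
    { "role": "assistant", "content": "Say hello to the Acme Glide: a lighter, comfier run starts today." }
  ]
}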

 

Future Enhancements

 

As noted above, OpenAI is also expected to release fine-tuning for GPT-4 soon. Beyond that, the company is expected to release features such as support for function calling and the ability to fine-tune through a UI. The latter will make fine-tuning more accessible to novice users. 

These developments in fine-tuning are important not just for developers but for businesses as well. For instance, many of the most promising startups in the tech and developer space, such as Sweep or SeekOut, rely on AI to deliver their services. Businesses such as these will find good use for the ability to fine-tune their GPT models. 

 

Conclusion

 

Thanks to this new ability to fine-tune GPT-3.5 Turbo, businesses and developers alike can now more effectively supervise their models to ensure that they perform in a manner better suited to their applications.
 

Nahla Davies is a software developer and tech writer. Before devoting her work full time to technical writing, she managed—among other intriguing things—to serve as a lead programmer at an Inc. 5,000 experiential branding organization whose clients include Samsung, Time Warner, Netflix, and Sony.