You’ve probably heard of ChatGPT, the natural language processing tool that can generate human-like responses and save time when writing code. However, the data used to generate ChatGPT’s responses is gleaned from all over the internet, making it difficult to influence which sources the model will use to produce a response. This can be a problem when using ChatGPT for a specific task, such as in a CLI application built for a particular purpose. We have a solution! In this article, you’ll build a CLI tool to respond with an example code block that we will define. Once a model is fine-tuned, it can work anywhere the ChatGPT API is used. You’ll tune the model to use Bitovi’s Docker to AWS EC2 action when asking ChatGPT for an example of a GitHub Action.
Here’s the code block:
<pre><code>name: Basic deploy
on:
  push:
    branches: [ main ]

jobs:
  EC2-Deploy:
    runs-on: ubuntu-latest
    steps:
      - id: deploy
        uses: bitovi/github-actions-deploy-docker-to-ec2@v0.5.0
        with:
          aws_access_key_id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws_secret_access_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws_default_region: us-east-1
          dot_env: ${{ secrets.DOT_ENV }}</code></pre>
Now let’s review the steps needed to train ChatGPT with this code block.
Creating the Data
To fine-tune a model, you must first upload training data in a JSONL file. JSONL is similar to regular JSON, but it uses the newline character (\n) instead of commas to separate each record. Call your file data.jsonl. It will contain a number of prompt-completion objects, which have the following shape:
{"prompt": "<prompt text>", "completion": "<ideal generated text>"}
{"prompt": "<prompt text>", "completion": "<ideal generated text>"}
Since you want a block of code as a response, you’ll need to replace newlines and tab characters with their escaped character values. So, the example YAML code from the previous section would look like this:
name: Basic deploy\non:\n  push:\n    branches: [ main ]\n\njobs:\n  EC2-Deploy:\n    runs-on: ubuntu-latest\n    steps:\n      - id: deploy\n        uses: bitovi/github-actions-deploy-docker-to-ec2@v0.5.0\n        with:\n          aws_access_key_id: ${{ secrets.AWS_ACCESS_KEY_ID }}\n          aws_secret_access_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}\n          aws_default_region: us-east-1\n          dot_env: ${{ secrets.DOT_ENV }}
Now that you have your expected response, you need to come up with the prompts to fine-tune the model. We were able to get a successful fine-tuned model with as few as 40 different prompts.
Combining both the prompt and completion, the file should contain all of your prompt objects like this:
{"prompt": "bitovi github actions ", "completion": "name: Basic deploy\non:\n  push:\n    branches: [ main ]\n\njobs:\n  EC2-Deploy:\n    runs-on: ubuntu-latest\n    steps:\n      - id: deploy\n        uses: bitovi/github-actions-deploy-docker-to-ec2@v0.5.0\n        with:\n          aws_access_key_id: ${{ secrets.AWS_ACCESS_KEY_ID }}\n          aws_secret_access_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}\n          aws_default_region: us-east-1\n          dot_env: ${{ secrets.DOT_ENV }}"}
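Escaping the YAML by hand is error-prone. As a sketch, you could generate data.jsonl with a short Node script instead, letting JSON.stringify handle the \n escaping for you. The prompt strings below (other than the first) are placeholders of our own, not from the article:

```javascript
import fs from "fs";

// Keep the completion readable as a template literal; JSON.stringify
// will escape the newlines when serializing. The `${{ ... }}` sequences
// are escaped so JavaScript doesn't treat them as interpolation.
const completion = `name: Basic deploy
on:
  push:
    branches: [ main ]

jobs:
  EC2-Deploy:
    runs-on: ubuntu-latest
    steps:
      - id: deploy
        uses: bitovi/github-actions-deploy-docker-to-ec2@v0.5.0
        with:
          aws_access_key_id: \${{ secrets.AWS_ACCESS_KEY_ID }}
          aws_secret_access_key: \${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws_default_region: us-east-1
          dot_env: \${{ secrets.DOT_ENV }}`;

// Illustrative prompts; in practice you'd list all ~40 variations here.
const prompts = ["bitovi github actions ", "docker to ec2 github action "];

// One JSON object per line, newline-separated: the JSONL format.
const lines = prompts
  .map((prompt) => JSON.stringify({ prompt, completion }))
  .join("\n");

fs.writeFileSync("data.jsonl", lines + "\n");
```

Because JSON.stringify does the escaping, the file on disk contains the literal \n sequences shown above.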
Writing the Code
Next, you’ll implement the code to tune a ChatGPT model. In this section, you’ll connect to the CLI with an API key, write some configuration functions, upload the data file, and finally perform a few tasks for fine-tuning.
API Key and Setting Up the CLI
In order to create an API key, you’ll need an account with OpenAI. Set up an account on the ChatGPT website. After setting up an account, you can access the API keys. Create a new secret key, then copy and save it as an environment variable called OPENAI_API_KEY. If this is your first time setting up an environment variable, look for a tutorial for your operating system and shell environment. On macOS, for example, you might use a .bash_profile, a .profile, or a .zprofile depending on your setup.
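For example, on macOS with zsh you might append the key to your profile. The key value below is a placeholder, not a real key:

```shell
# Added to ~/.zprofile (or whichever profile your shell loads)
export OPENAI_API_KEY="sk-your-key-here"
```

After editing the profile, open a new terminal (or source the file) so the variable is available to the CLI.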
As this isn’t a CLI tutorial, you can find the complete code for this app here. You can see that you’ll be using the gpt keyword to call the app’s functions.
Writing the ChatGPT Functions
The next step is to write the ChatGPT functions. This imports the API and creates a configuration you can fine-tune. To begin, import Configuration and OpenAIApi from the openai package. Then you’ll set the Configuration using the OPENAI_API_KEY (which you set in a .env file). Next, you need to create a new instance of Conf to store the file and model names. Finally, you’ll create a new instance of the GPT SDK with the configuration object. When done correctly, your code will look like this:
import { Configuration, OpenAIApi } from "openai";
import Conf from "conf";
import fs from "fs";
import * as dotenv from "dotenv";

dotenv.config();

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});

const conf = new Conf({ projectName: "ChatGPT-CLI" });
const openai = new OpenAIApi(configuration);
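The .env file that dotenv.config() loads sits in the project root (an assumed layout) and contains a single line; the key value is a placeholder:

```
OPENAI_API_KEY=sk-your-key-here
```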
Upload the Data File
Now that you have a configuration to edit, upload the data file you created. Call the createFile() function and pass in the contents of data.jsonl, then save the file ID to memory. Your code should look like this:
export async function upload() {
  try {
    const response = await openai.createFile(
      fs.createReadStream("src/data.jsonl"),
      "fine-tune"
    );
    conf.set("fileId", response.data.id);
    console.log(`The file with ID: ${response.data.id} has been uploaded`);
    return response.data.id;
  } catch (err) {
    console.log("err: ", err);
  }
}
Initiate a Fine-Tuning of the Model
Once the data file has been uploaded, you can create the fine-tuned model by calling createFineTune() and passing in the file ID.
export async function createFineTuneModel() {
  const fileId = conf.get("fileId");
  try {
    const response = await openai.createFineTune({ training_file: fileId });
    console.log(`The model with file ${fileId} is being created`);
  } catch (err) {
    console.log("err: ", err);
  }
}
Get a List of All Fine-Tuned Models &amp; Get the Status of the Last Model Created
It can take some time for your model to be fine-tuned. You can check whether your model is ready by listing all available fine-tuned models and then picking the last model created. Do this by invoking listFineTunes().
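The article’s snippet for this step was cut off, so the following is a sketch of what a listFineTunes() function could look like under the v3 SDK. In the real CLI, openai and conf come from the setup shown earlier; here they are stubbed with placeholder data purely so the sketch is self-contained, and we assume the list is returned oldest-first:

```javascript
// Stand-ins for the real `conf` store and v3 `openai` instance from earlier.
const store = {};
const conf = { set: (key, value) => { store[key] = value; } };
const openai = {
  // Stub mimicking the v3 SDK's listFineTunes() response shape.
  listFineTunes: async () => ({
    data: {
      data: [
        { id: "ft-example", status: "succeeded", fine_tuned_model: "curie:ft-demo" },
      ],
    },
  }),
};

export async function listFineTunes() {
  try {
    const response = await openai.listFineTunes();
    const models = response.data.data;
    // Pick the most recently created fine-tune.
    const latest = models[models.length - 1];
    console.log(`Model ${latest.id} status: ${latest.status}`);
    if (latest.status === "succeeded") {
      // Save the model name so later completion calls can reference it.
      conf.set("modelName", latest.fine_tuned_model);
    }
    return models;
  } catch (err) {
    console.log("err: ", err);
  }
}
```

Once the status reaches "succeeded", the saved fine_tuned_model name can be passed as the model when calling the completion endpoint.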