Brief Review — SELF-INSTRUCT: Aligning Language Models with Self-Generated Instructions

SELF-INSTRUCT, Semi-Automatic Approach

Sik-Ho Tsang
5 min read · Sep 9, 2023

SELF-INSTRUCT, by University of Washington, Tehran Polytechnic, Arizona State University, Johns Hopkins University, and Allen Institute for AI
2023 ACL, Over 200 Citations (Sik-Ho Tsang @ Medium)

LM Tuning / Prompting
2020 [Human Feedback Model] 2021 [T5+LM, Prompt Tuning] 2022 [GPT-3.5, InstructGPT] [LoRA] [Chain-of-Thought Prompting] [T0] [FLAN] [UL2R, U-PaLM] [Flan-PaLM] [Tk-INSTRUCT] 2023 [LIMA]
==== My Other Paper Readings Are Also Over Here ====

  • Human-written instruction data is often limited in quantity, diversity, and creativity.
  • In this paper, SELF-INSTRUCT is proposed: a pipeline that generates instruction, input, and output samples from a language model, then filters invalid or similar ones before using them to finetune the original model.

Outline

  1. SELF-INSTRUCT
  2. SELF-INSTRUCT Data from GPT-3
  3. Results

1. SELF-INSTRUCT


SELF-INSTRUCT is a pipeline of generating tasks with a vanilla pretrained language model (LM) itself, filtering the generated data, and then conducting instruction tuning with this generated data in order to align the LM to follow instructions better.

1.1. Defining Instruction Data

  • The instruction data we want is a set of instructions {I_t}, each of which defines a task t in natural language. Task t has n_t ≥ 1 input-output instances {(X_{t,i}, Y_{t,i})}, where i ranges from 1 to n_t.
  • A model M is expected to produce the output, given the task instruction and the corresponding input: M(I_t, X_{t,i}) = Y_{t,i}.
  • The instruction and the instance input do not have a strict boundary in many cases. For example, “write an essay about school safety” can be a valid instruction that we expect models to respond to directly, while it can also be formulated as “write an essay about the following topic” as the instruction, and “school safety” as an instance input.
  • To encourage the diversity of the data format, instructions are allowed not to require additional input (i.e., X is empty).
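The data format defined above can be sketched as a simple structure (a hypothetical Python sketch; the names `Task`, `instruction`, and `instances` are illustrative, not from the paper):

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """One task t: an instruction I_t plus n_t >= 1 input-output instances."""
    instruction: str  # I_t, the natural-language task definition
    instances: list = field(default_factory=list)  # [(X, Y), ...]; X may be "" when no input is needed

# The same essay task phrased both ways, as in the example above:
direct = Task("Write an essay about school safety.",
              [("", "Schools should ...")])
split = Task("Write an essay about the following topic.",
             [("school safety", "Schools should ...")])
```

Note that `direct` has an empty instance input, illustrating the "X is empty" case allowed for diversity.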

1.2. Automatic Instruction Data Generation

  • The pipeline consists of multiple steps (1.2.1. to 1.2.5.), described below.

1.2.1. Instruction Generation

The task pool is initialized with 175 seed tasks (1 instruction and 1 instance for each task).

  • For every step, 8 task instructions are sampled from this pool as in-context examples. 6 are from the human-written tasks, and 2 are from the model-generated tasks in previous steps to promote diversity.
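The sampling step above can be sketched as follows (a minimal Python sketch; the function and variable names are illustrative, not from the paper):

```python
import random

def sample_incontext(seed_tasks, generated_tasks, rng=random):
    """Sample 8 in-context task instructions per generation step:
    6 from the human-written seed tasks and 2 from previously
    model-generated tasks (fewer if not enough exist yet),
    to promote diversity."""
    n_generated = min(2, len(generated_tasks))
    examples = rng.sample(seed_tasks, 8 - n_generated)
    if n_generated:
        examples += rng.sample(generated_tasks, n_generated)
    return examples
```

Early on, when the pool contains no model-generated tasks yet, all 8 examples come from the 175 seed tasks.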

1.2.2. Classification Task Identification

  • Since instance generation differs for classification and non-classification tasks (Section 1.2.3), the pipeline next needs to identify whether a generated instruction represents a classification task or not.

The LM is prompted in a few-shot way to determine this, using 12 classification instructions and 19 non-classification instructions from the seed tasks.
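One way such a few-shot prompt might be assembled is sketched below (the template wording is illustrative; the paper's actual prompt text differs):

```python
def build_clf_prompt(clf_examples, non_clf_examples, new_instruction):
    """Build a few-shot prompt asking the LM whether an instruction is a
    classification task. The paper uses 12 classification and 19
    non-classification seed instructions as the in-context examples."""
    lines = ["Can the following task be regarded as a classification task "
             "with finite output labels?", ""]
    for ins in clf_examples:
        lines += [f"Task: {ins}", "Is it classification? Yes", ""]
    for ins in non_clf_examples:
        lines += [f"Task: {ins}", "Is it classification? No", ""]
    lines += [f"Task: {new_instruction}", "Is it classification?"]
    return "\n".join(lines)
```

The LM's completion (“Yes” or “No”) then decides which instance-generation strategy is used next.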

1.2.3. Instance Generation

Given the instructions and their task type, instances for each instruction are generated independently.

  • Input-first Approach for Non-Classification Tasks: the LM is asked to come up with the input fields first based on the instruction, and then produce the corresponding output.
  • Output-first Approach for Classification Tasks: the possible class labels are generated first, and then the input generation is conditioned on each class label.
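The two approaches can be sketched as a single routing function (illustrative; `lm` stands for any prompt-to-completion callable, and the prompt strings are assumptions, not the paper's templates):

```python
def generate_instances(instruction, is_classification, lm):
    """Route instance generation by task type.
    - Classification: output-first (enumerate labels, then generate an
      input conditioned on each label).
    - Non-classification: input-first (generate the input, then the output).
    `lm` is a hypothetical callable mapping a prompt string to a completion."""
    if is_classification:
        labels = lm(f"Instruction: {instruction}\n"
                    "List the possible class labels:").split("\n")
        return [(lm(f"Instruction: {instruction}\n"
                    f"Write an input whose label is {label}:"), label)
                for label in labels if label]
    x = lm(f"Instruction: {instruction}\nWrite an input for this task:")
    y = lm(f"Instruction: {instruction}\nInput: {x}\nOutput:")
    return [(x, y)]
```

The output-first order for classification avoids the LM defaulting to inputs of only one label.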

1.2.4. Filtering and Postprocessing

To encourage diversity, a new instruction is added to the task pool only when its ROUGE-L similarity with any existing instruction is less than 0.7.

  • Instructions that contain certain keywords (e.g., image, picture, graph) that usually cannot be processed by LMs are also excluded.
  • Instances that are exactly the same, or that share the same input but have different outputs, are filtered out as well.
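The instruction-level filtering rules above can be sketched as follows (a minimal LCS-based ROUGE-L and a filter function; the names and exact blocklist handling are illustrative assumptions):

```python
def rouge_l(a, b):
    """ROUGE-L F1 over whitespace tokens (minimal LCS-based implementation)."""
    xs, ys = a.lower().split(), b.lower().split()
    # Longest common subsequence via dynamic programming.
    dp = [[0] * (len(ys) + 1) for _ in range(len(xs) + 1)]
    for i, x in enumerate(xs):
        for j, y in enumerate(ys):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1],
                                                               dp[i + 1][j])
    lcs = dp[-1][-1]
    if lcs == 0:
        return 0.0
    p, r = lcs / len(ys), lcs / len(xs)
    return 2 * p * r / (p + r)

# Keywords whose tasks LMs usually cannot process (from the text).
BLOCKLIST = ("image", "picture", "graph")

def keep_instruction(candidate, pool, threshold=0.7):
    """Accept a new instruction only if it contains no blocked keyword and
    its ROUGE-L similarity with every existing instruction is below 0.7."""
    if any(word in candidate.lower() for word in BLOCKLIST):
        return False
    return all(rouge_l(candidate, existing) < threshold for existing in pool)
```

Production pipelines would typically use an established ROUGE implementation rather than this minimal one.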

1.2.5. Finetuning the LM to Follow Instructions

After the large-scale instruction data is created, it is used to finetune the original LM (yielding the SELF-INSTRUCT model).

  • To do this, the instruction and instance input are concatenated as a prompt and the model is trained to generate the instance output in a standard supervised way.
  • To make the model robust to different formats, multiple templates are used to encode the instruction and instance input together. For example, the instruction can be prefixed with “Task:” or not, the input can be prefixed with “Input:” or not, “Output:” can be appended at the end of the prompt or not, and different numbers of break lines can be put in the middle, etc.
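The template randomization can be sketched like this (illustrative; the concrete prefix and separator choices follow the examples in the text, not the paper's full template set):

```python
import random

def format_example(instruction, inp, rng=random):
    """Randomly vary the prompt template so the finetuned model becomes
    robust to different formats: optional "Task:" / "Input:" / "Output:"
    prefixes and a variable number of line breaks."""
    task_prefix = rng.choice(["Task: ", ""])
    input_prefix = rng.choice(["Input: ", ""]) if inp else ""
    sep = rng.choice(["\n", "\n\n"])
    output_prefix = rng.choice(["Output:", ""])
    prompt = task_prefix + instruction
    if inp:
        prompt += sep + input_prefix + inp
    return prompt + sep + output_prefix
```

During finetuning, each (instruction, input) pair is encoded with one such randomly chosen template, and the model is trained to generate the instance output that follows.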

2. SELF-INSTRUCT Data from GPT-3

Statistics of Generated Data for GPT-3
  • The largest GPT-3 LM (“davinci” engine) is accessed through the OpenAI API.

After filtering, a total of over 52K instructions is generated, along with more than 82K instances corresponding to these instructions.

Other Statistics

Figure 4: The low ROUGE-L scores indicate that a decent number of new instructions were generated, which do not have much overlap with the seeds.

Figure 5: There is also diversity in the length of the instructions, instance inputs, and instance outputs.

2.3. Data Quality


Most of the generated instructions are meaningful, while the generated instances may contain more noise (to a reasonable extent).

3. Results

3.1. Unseen Tasks


SELF-INSTRUCT boosts the instruction-following ability of GPT-3 by a large margin.

The vanilla GPT-3 model basically cannot follow human instructions at all. Upon manual analysis, it is found that it usually generates irrelevant and repetitive text, and does not know when to stop generation.

  • SELF-INSTRUCT still brings in additional gains when combined with the SUPERNI training set, proving its value as complementary data.

3.2. Comparisons With InstructGPT


GPT-3 SELF-INST (i.e., GPT-3 model finetuned with SELF-INSTRUCT) outperforms those counterparts trained on T0 or SUPERNI data by a large margin, demonstrating the value of the generated data despite the noise.

3.3. Data Size and Quality


Overall, we see consistent improvement as the data size grows. However, this improvement almost plateaus after 16K instructions.

  • InstructGPT003 (the best available general-purpose model) is used to regenerate the output field of all the instances, given the instruction and input; this improved version of the data is then used to finetune GPT-3.

The resulting model outperforms the counterpart trained with the original data by 10%, which suggests ample room for future work on improving data quality.


Sik-Ho Tsang

PhD, Researcher. I share what I learn. :) Linktree: https://linktr.ee/shtsang for Twitter, LinkedIn, etc.