Text classification task

In this tutorial, we will show a standard workflow for a text classification task, in this case, using SetFit and Argilla.

We will follow these steps:

  • Configure the Argilla dataset
  • Add initial model suggestions
  • Evaluate with Argilla
  • Train your model
  • Update the suggestions with the new model

Getting started

Run the Argilla server

If you have already deployed Argilla Server, you can skip this step. Otherwise, you can quickly deploy it in two different ways:

  • Remotely using an HF Space. ⚠️ If persistent storage is not enabled, you will lose your data when the server is stopped.

Note

As this is a release candidate version, you'll need to manually change the version in the HF Space Files > Dockerfile to argilla/argilla-quickstart:v2.0.0rc1.

  • Locally using Docker: docker run -d --name quickstart -p 6900:6900 argilla/argilla-quickstart:v2.0.0rc1

Set up the environment

To complete this tutorial, you need to install the Argilla SDK and a few third-party libraries via pip.

!pip install argilla --pre
!pip install setfit==1.0.3 transformers==4.40.2

Let's make the required imports:

import argilla as rg

from datasets import load_dataset, Dataset
from setfit import SetFitModel, Trainer, get_templated_dataset, sample_dataset

You also need to connect to the Argilla server using the api_url and api_key.

# Replace api_url with your url if using Docker
# Replace api_key if you configured a custom API key
# Uncomment the last line and set your HF_TOKEN if your space is private
client = rg.Argilla(
    api_url="https://[your-owner-name]-[your_space_name].hf.space",
    api_key="owner.apikey"
    # extra_headers={"Authorization": f"Bearer {HF_TOKEN}"}
)
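
If the connection works, you can interact with the server right away, for instance by listing the workspaces visible to your user (a minimal sanity check; it assumes the workspaces collection exposed by the SDK client):

# Quick connection check: list the available workspaces
for workspace in client.workspaces:
    print(workspace.name)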

Configure and create the Argilla dataset

Now, we will need to configure the dataset. In the settings, we can specify the guidelines, fields, and questions. If needed, you can also add metadata and vectors. However, for our use case, we just need a text field and a label question.

Note

Check this how-to guide to learn more about configuring and creating a dataset.

labels = ["positive", "negative"]

settings = rg.Settings(
    guidelines="Classify the reviews as positive or negative.",
    fields=[
        rg.TextField(
            name="review",
            title="Text from the review",
            use_markdown=False,
        ),
    ],
    questions=[
        rg.LabelQuestion(
            name="sentiment_label",
            title="What is the sentiment of the review?",
            labels=labels,
        )
    ],
)
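
If you later need metadata filtering or semantic search, the same settings can be extended with metadata properties and vector fields. The snippet below is only a sketch: the property name, vector name, and dimensions are illustrative choices, not part of this tutorial's dataset.

# Illustrative extension of the settings (names and dimensions are assumptions)
settings_with_extras = rg.Settings(
    guidelines="Classify the reviews as positive or negative.",
    fields=[rg.TextField(name="review")],
    questions=[rg.LabelQuestion(name="sentiment_label", labels=labels)],
    metadata=[rg.TermsMetadataProperty(name="source")],
    vectors=[rg.VectorField(name="review_embedding", dimensions=384)],
)

For this tutorial, we will stick to the simpler settings defined above.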

Let's create the dataset with the name and the defined settings:

dataset = rg.Dataset(
    name="text_classification_dataset",
    settings=settings,
)
dataset.create()
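
If you come back to this dataset in a later session, you can retrieve it from the server by name instead of recreating it, just as we will do further down in this tutorial:

# Retrieve the existing dataset by name
dataset = client.datasets("text_classification_dataset")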

Add records

Even though we have created the dataset, it still lacks the information to be annotated (you can verify this in the UI). We will use the imdb dataset from the Hugging Face Hub. Specifically, we will use 100 samples from the train split.

hf_dataset = load_dataset("imdb", split="train[:100]")

We can easily add them to the dataset using log and a mapping, where we indicate that the column text should be added to the field review.

dataset.records.log(records=hf_dataset, mapping={"text": "review"})
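
If you prefer not to rely on the mapping, you can build the records explicitly; here is a short sketch of the equivalent call, assuming rg.Record takes a fields dictionary keyed by the dataset's field names:

# Equivalent explicit version of the log call above
records = [rg.Record(fields={"review": row["text"]}) for row in hf_dataset]
dataset.records.log(records=records)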

Add initial model suggestions

The next step is to add suggestions to the dataset. In our case, we will generate them using a zero-shot SetFit model. However, you can use a framework or technique of your choice.

We will start by defining an example training set with the required labels: positive and negative. Using get_templated_dataset will create sentences from the default template: "This sentence is {label}."

zero_ds = get_templated_dataset(
    candidate_labels=labels,
    sample_size=8,
)
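
You can inspect the result to see what the synthetic examples look like. With two labels and a sample size of 8 per label, we expect 16 rows; the details in the comments are what the default template typically produces, so treat them as an assumption:

# Peek at the synthetic training set
print(zero_ds)     # expected: a Dataset with 'text' and 'label' columns, 16 rows
print(zero_ds[0])  # e.g. a sentence like "This sentence is positive" with its label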

Now, we will prepare a function to train the SetFit model.

Note

For further customization, you can check the SetFit documentation.

def train_model(model_name, dataset):
    # Load the base model and fine-tune it on the given dataset with SetFit
    model = SetFitModel.from_pretrained(model_name)

    trainer = Trainer(
        model=model,
        train_dataset=dataset,
    )
    trainer.train()

    return model

Let's train the model. We will use TaylorAI/bge-micro-v2, available in the Hugging Face Hub.

model = train_model(model_name="TaylorAI/bge-micro-v2", dataset=zero_ds)

You can save the model locally or push it to the Hub, and then load it from there.

# Save and load locally
# model.save_pretrained("text_classification_model")
# model = SetFitModel.from_pretrained("text_classification_model")

# Push and load in HF
# model.push_to_hub("[username]/text_classification_model")
# model = SetFitModel.from_pretrained("[username]/text_classification_model")

It's time to make the predictions! We will define a function that uses the predict method to get the suggested label. The model will infer the label from the text.

def predict(model, text, labels):
    # Constrain the model to our label names and predict on a single text
    model.labels = labels
    prediction = model.predict([text])
    return prediction[0]
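
As a quick check, we can call it on a single example (the review text below is made up for illustration):

# Example call: should return either "positive" or "negative"
predict(model, "A beautifully shot film with a genuinely moving story.", labels)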

To update the records, we will need to retrieve them from the server and update them with the new suggestions. The id must always be provided, as it identifies the record to update and prevents creating a new one.

data = dataset.records.to_list(flatten=True)
updated_data = [
    {
        "sentiment_label": predict(model, sample["review"], labels),
        "id": sample["id"],
    }
    for sample in data
]
dataset.records.log(records=updated_data)

Voilà! We have added the suggestions to the dataset, and they will appear in the UI marked with a ✨.

Evaluate with Argilla

Now, we can start the annotation process. Just open the dataset in the Argilla UI and start annotating the records. If the suggestions are correct, you can just click on Submit. Otherwise, you can select the correct label.

Note

Check this how-to guide to learn more about annotating in the UI.

Train your model

After the annotation, we will have a robust dataset to train the main model. In our case, we will fine-tune using SetFit. However, you can select the framework that best fits your requirements. So, let's start by retrieving the annotated records.

Note

Check this how-to guide to learn more about filtering and querying in Argilla.

dataset = client.datasets("text_classification_dataset")
status_filter = rg.Query(filter=rg.Filter(("status", "==", "submitted")))
submitted = list(dataset.records(status_filter))
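
Before building the training set, it helps to take a quick look at what the filter returned; the field and response accessors are the same ones used in the next snippet:

# Inspect the annotated records returned by the filter
print(f"Submitted records: {len(submitted)}")
print(submitted[0].fields["review"][:80])
print(submitted[0].responses.sentiment_label[0].value)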

As we have a single response per record, we can retrieve the selected label directly and create a training set with 8 samples per label, which gives us a balanced dataset for few-shot learning.

train_records = [
    {
        "text": r.fields["review"],
        "label": r.responses.sentiment_label[0].value,
    }
    for r in submitted
]
train_dataset = Dataset.from_list(train_records)
train_dataset = sample_dataset(train_dataset, label_column="label", num_samples=8)
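
A quick look at the label distribution confirms that the training set is balanced before we fine-tune:

from collections import Counter

# With two labels and num_samples=8, we expect roughly 8 examples per label
print(Counter(train_dataset["label"]))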

We can train the model using our previous function, but this time with a high-quality human-annotated training set.

model = train_model(model_name="TaylorAI/bge-micro-v2", dataset=train_dataset)
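
Optionally, you can run a rough sanity evaluation of the fine-tuned model on held-out data. The sketch below scores a small shuffled slice of the imdb test split with a plain accuracy computation; the sample size and seed are arbitrary choices:

# Rough accuracy check on a small held-out sample
test_ds = load_dataset("imdb", split="test").shuffle(seed=42).select(range(100))
predictions = [predict(model, row["text"], labels) for row in test_ds]
# In the imdb dataset, label 1 corresponds to positive and 0 to negative
references = ["positive" if row["label"] == 1 else "negative" for row in test_ds]
accuracy = sum(p == r for p, r in zip(predictions, references)) / len(references)
print(f"Accuracy on the sample: {accuracy:.2f}")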

As the training data is of better quality, we can expect a better model. So, we can update the remaining non-annotated records with the new model's suggestions.

data = dataset.records.to_list(flatten=True)
updated_data = [
    {
        "sentiment_label": predict(model, sample["review"], labels),
        "id": sample["id"],
    }
    for sample in data
]
dataset.records.log(records=updated_data)
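
The update above refreshes the suggestions for every record. If you prefer to touch only the records that have not been submitted yet, you can reuse the status filter pattern from before; this is a sketch, assuming the "pending" status value:

# Update suggestions only for records that are still pending
pending_filter = rg.Query(filter=rg.Filter(("status", "==", "pending")))
pending = list(dataset.records(pending_filter))
updated_pending = [
    {"sentiment_label": predict(model, record.fields["review"], labels), "id": record.id}
    for record in pending
]
dataset.records.log(records=updated_pending)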

Conclusions

In this tutorial, we presented an end-to-end example of a text classification task. This serves as a base workflow, but it can be run iteratively and integrated seamlessly into your pipeline to ensure high-quality curation of your data and improved results.

We started by configuring the dataset, adding records, and training a zero-shot SetFit model to generate initial suggestions. After the annotation process, we trained a new model with the annotated data and updated the remaining records with the new suggestions.