rg.Dataset.records

Usage Examples

In most cases, you will not need to create a DatasetRecords object directly. Instead, you can access it via the Dataset object:

dataset.records

For users familiar with legacy approaches

  1. The Dataset.records object is used to interact with the records in a dataset. It fetches records from the server in batches rather than keeping a local copy of the records.
  2. The log method of Dataset.records is used to both add and update records in a dataset. If a record includes a known id field, the record will be updated; if it does not, the record will be added.
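
The examples on this page assume you already have a Dataset object. A minimal sketch of obtaining one is shown below; the server URL, API key, and dataset name are placeholders, and the exact retrieval call may differ in your version of the SDK.

import argilla_sdk as rg

# Connect to the Argilla server (URL and API key are placeholders)
client = rg.Argilla(api_url="http://localhost:6900", api_key="my_api_key")

# Retrieve an existing dataset by name ("my_dataset" is an assumed name)
dataset = client.datasets("my_dataset")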

Adding records to a dataset

To add records to a dataset, use the log method. Records can be added as dictionaries or as Record objects; a single record can also be passed on its own as a dictionary or a Record.

You can add records to a dataset by initializing Record objects directly.

records = [
    rg.Record(
        fields={
            "question": "Do you need oxygen to breathe?",
            "answer": "Yes"
        },
    ),
    rg.Record(
        fields={
            "question": "What is the boiling point of water?",
            "answer": "100 degrees Celsius"
        },
    ),
] # (1)

dataset.records.log(records)
  1. This is an illustrative example. In a real-world scenario, you would iterate over a data structure and create a Record object for each item.
You can also add records as a list of dictionaries whose keys match the dataset's fields and questions.

data = [
    {
        "question": "Do you need oxygen to breathe?",
        "answer": "Yes",
    },
    {
        "question": "What is the boiling point of water?",
        "answer": "100 degrees Celsius",
    },
] # (1)

dataset.records.log(data)
  1. The data structure's keys must match the fields or questions in the Argilla dataset. In this case, there are fields named question and answer.
If your data structure uses different keys, you can provide a mapping from those keys to the dataset's fields and questions.

data = [
    {
        "query": "Do you need oxygen to breathe?",
        "response": "Yes",
    },
    {
        "query": "What is the boiling point of water?",
        "response": "100 degrees Celsius",
    },
] # (1)
dataset.records.log(
    records=data, 
    mapping={"query": "question", "response": "answer"} # (2)
)
  1. When a mapping is provided, the data structure's keys do not need to match the fields or questions in the Argilla dataset. In this case, the Argilla dataset has fields named question and answer.
  2. The data structure has keys query and response, while the Argilla dataset has question and answer. You can use the mapping parameter to map the keys in the data structure to the fields or questions in the Argilla dataset, as shown in the sketch below.
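
The mapping values can also target a question's suggestion or response using dot notation, as handled by the _infer_record_from_mapping method shown in the class reference below. A hedged sketch, assuming a dataset with a field named text and a question named label (these names are assumptions):

data = [
    {
        "review": "Great movie, would watch again.",
        "model_output": "positive",
        "model_score": 0.9,
    },
]

dataset.records.log(
    records=data,
    mapping={
        "review": "text",                        # field
        "model_output": "label.suggestion",      # suggestion value for the label question
        "model_score": "label.suggestion.score", # score attached to that suggestion
    },
)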

You can also add records to a dataset using a Hugging Face dataset. This is useful when you want to use a dataset from the Hugging Face Hub and add it to your Argilla dataset.

You can add a dataset whose column names correspond to the names of fields, questions, metadata, or vectors in the Argilla dataset.

If the dataset's column names do not match your Argilla dataset's names, you can use a mapping to indicate which columns in the dataset correspond to which Argilla dataset fields.

from datasets import load_dataset

hf_dataset = load_dataset("imdb", split="train[:100]") # (1)

dataset.records.log(records=hf_dataset)
  1. In this example, the Hugging Face dataset matches the Argilla dataset schema. If that is not the case, you could use the .map method of the datasets library to prepare the data before adding it to the Argilla dataset, as sketched below.
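
For example, a minimal sketch of preparing the data with .map before logging; it assumes the Argilla dataset expects string labels while imdb stores integers, and the label names are assumptions:

id2label = {0: "negative", 1: "positive"}  # assumed label names

# Convert integer labels to the strings expected by the Argilla dataset
hf_dataset = hf_dataset.map(lambda row: {"label": id2label[row["label"]]})

dataset.records.log(records=hf_dataset)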

Here we use the mapping parameter to specify the relationship between the Hugging Face dataset and the Argilla dataset.

dataset.records.log(records=hf_dataset, mapping={"txt": "text", "y": "label"}) # (1)
  1. In this case, the txt key in the Hugging Face dataset corresponds to the text field in the Argilla dataset, and the y key in the Hugging Face dataset corresponds to the label field in the Argilla dataset.

Updating records in a dataset

Records can also be updated using the log method with records that contain an id to identify the records to be updated. As above, records can be added as dictionaries or as Record objects.

You can update records in a dataset by initializing a Record object directly and providing the id field.

records = [
    rg.Record(
        metadata={"department": "toys"},
        id="2" # (1)
    ),
]

dataset.records.log(records)
  1. The id field is required to identify the record to be updated. The id field must be unique for each record in the dataset. If the id field is not provided, the record will be added as a new record.

You can also update records in a dataset by providing the id field in the data structure.

data = [
    {
        "metadata": {"department": "toys"},
        "id": "2" # (1)
    },
]

dataset.records.log(data)
  1. The id field is required to identify the record to be updated. The id field must be unique for each record in the dataset. If the id field is not provided, the record will be added as a new record.
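
A common pattern is to fetch existing records from the server, reuse their ids, and log the updates back. A minimal sketch, assuming the dataset defines a metadata property named department:

updates = [
    {"id": record.id, "metadata": {"department": "toys"}}
    for record in dataset.records
]

dataset.records.log(updates)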

You can also update records in a dataset by providing the id field in the data structure and using a mapping to map the keys in the data structure to the fields in the dataset.

data = [
    {
        "metadata": {"department": "toys"},
        "my_id": "2" # (1)
    },
]

dataset.records.log(
    records=data, 
    mapping={"my_id": "id"} # (2)
)
  1. The id field is required to identify the record to be updated. The id field must be unique for each record in the dataset. If the id field is not provided, the record will be added as a new record.
  2. Let's say that your data structure has the key my_id instead of id. You can use the mapping parameter to map the keys in the data structure to the fields in the dataset.

You can also update records in an Argilla dataset using a Hugging Face dataset. To update records, the Hugging Face dataset must contain an id field to identify the records to be updated, or you can use a mapping to map the column names in the Hugging Face dataset to the fields in the Argilla dataset.

from datasets import load_dataset

hf_dataset = load_dataset("imdb", split="train[:100]") # (1)

dataset.records.log(records=hf_dataset, mapping={"uuid": "id"}) # (2)
  1. In this example, the Hugging Face dataset matches the Argilla dataset schema.
  2. The uuid key in the Hugging Face dataset corresponds to the id field in the Argilla dataset.

Iterating over records in a dataset

Dataset.records can be used to iterate over records in a dataset from the server. The records will be fetched in batches from the server:

for record in dataset.records:
    print(record)

# Fetch records with suggestions and responses
for record in dataset.records(with_suggestions=True, with_responses=True):
    print(record.suggestions)
    print(record.responses)

# Filter records by a query and fetch records with vectors
for record in dataset.records(query="capital", with_vectors=True):
    print(record.vectors)
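
You can also control how records are fetched from the server. A short sketch using the batch_size and start_offset parameters (the values are arbitrary):

# Fetch records in batches of 100, skipping the first 200 records
for record in dataset.records(batch_size=100, start_offset=200):
    print(record.id)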

Check out the rg.Record class reference for more information on the properties and methods available on a record and the rg.Query class reference for more information on the query syntax.


Class Reference

rg.Dataset.records

Bases: Iterable[Record], LoggingMixin

This class is used to work with records from a dataset and is accessed via Dataset.records. The responsibility of this class is to provide an interface to interact with records in a dataset, by adding, updating, fetching, querying, deleting, and exporting records.

Attributes:

  client (Argilla): The Argilla client object.
  dataset (Dataset): The dataset object.

Source code in src/argilla_sdk/records/_dataset_records.py
class DatasetRecords(Iterable[Record], LoggingMixin):
    """This class is used to work with records from a dataset and is accessed via `Dataset.records`.
    The responsibility of this class is to provide an interface to interact with records in a dataset,
    by adding, updating, fetching, querying, deleting, and exporting records.

    Attributes:
        client (Argilla): The Argilla client object.
        dataset (Dataset): The dataset object.
    """

    _api: RecordsAPI

    DEFAULT_BATCH_SIZE = 256

    def __init__(self, client: "Argilla", dataset: "Dataset"):
        """Initializes a DatasetRecords object with a client and a dataset.
        Args:
            client: An Argilla client object.
            dataset: A Dataset object.
        """
        self.__client = client
        self.__dataset = dataset
        self._api = self.__client.api.records

    def __iter__(self):
        return DatasetRecordsIterator(self.__dataset, self.__client)

    def __call__(
        self,
        query: Optional[Union[str, Query]] = None,
        batch_size: Optional[int] = DEFAULT_BATCH_SIZE,
        start_offset: int = 0,
        with_suggestions: bool = True,
        with_responses: bool = True,
        with_vectors: Optional[Union[List, bool, str]] = None,
    ) -> DatasetRecordsIterator:
        """Returns an iterator over the records in the dataset on the server.

        Parameters:
            query: A string or a Query object to filter the records.
            batch_size: The number of records to fetch in each batch. The default is 256.
            start_offset: The offset from which to start fetching records. The default is 0.
            with_suggestions: Whether to include suggestions in the records. The default is True.
            with_responses: Whether to include responses in the records. The default is True.
            with_vectors: A list of vector names to include in the records. The default is None.
                If a list is provided, only the specified vectors will be included.
                If True is provided, all vectors will be included.

        Returns:
            An iterator over the records in the dataset on the server.

        """
        if query and isinstance(query, str):
            query = Query(query=query)

        if with_vectors:
            self._validate_vector_names(vector_names=with_vectors)

        return DatasetRecordsIterator(
            self.__dataset,
            self.__client,
            query=query,
            batch_size=batch_size,
            start_offset=start_offset,
            with_suggestions=with_suggestions,
            with_responses=with_responses,
            with_vectors=with_vectors,
        )

    def __repr__(self) -> str:
        return f"{self.__class__.__name__}({self.__dataset})"

    ############################
    # Public methods
    ############################

    def log(
        self,
        records: Union[List[dict], List[Record], HFDataset],
        mapping: Optional[Dict[str, str]] = None,
        user_id: Optional[UUID] = None,
        batch_size: int = DEFAULT_BATCH_SIZE,
    ) -> List[Record]:
        """Add or update records in a dataset on the server using the provided records.
        If the record includes a known `id` field, the record will be updated.
        If the record does not include a known `id` field, the record will be added as a new record.
        See `rg.Record` for more information on the record definition.

        Parameters:
            records: A list of `Record` objects, a Hugging Face Dataset, or a list of dictionaries representing the records.
                     If records are defined as a dictionaries or a dataset, the keys/ column names should correspond to the
                     fields in the Argilla dataset's fields and questions. `id` should be provided to identify the records when updating.
            mapping: A dictionary that maps the keys/ column names in the records to the fields or questions in the Argilla dataset.
            user_id: The user id to be associated with the records' response. If not provided, the current user id is used.
            batch_size: The number of records to send in each batch. The default is 256.

        Returns:
            A list of Record objects representing the updated records.

        """
        record_models = self._ingest_records(records=records, mapping=mapping, user_id=user_id or self.__client.me.id)
        batch_size = self._normalize_batch_size(
            batch_size=batch_size,
            records_length=len(record_models),
            max_value=self._api.MAX_RECORDS_PER_UPSERT_BULK,
        )

        created_or_updated = []
        records_updated = 0
        for batch in range(0, len(records), batch_size):
            self._log_message(message=f"Sending records from {batch} to {batch + batch_size}.")
            batch_records = record_models[batch : batch + batch_size]
            models, updated = self._api.bulk_upsert(dataset_id=self.__dataset.id, records=batch_records)
            created_or_updated.extend([Record.from_model(model=model, dataset=self.__dataset) for model in models])
            records_updated += updated

        records_created = len(created_or_updated) - records_updated
        self._log_message(
            message=f"Updated {records_updated} records and added {records_created} records to dataset {self.__dataset.name}",
            level="info",
        )

        return created_or_updated

    def to_dict(self, flatten: bool = False, orient: str = "names") -> Dict[str, Any]:
        """
        Return the records as a dictionary. This is a convenient shortcut for dataset.records(...).to_dict().

        Parameters:
            flatten (bool): The structure of the exported dictionary.
                - True: The record fields, metadata, suggestions and responses will be flattened.
                - False: The record fields, metadata, suggestions and responses will be nested.
            orient (str): The orientation of the exported dictionary.
                - "names": The keys of the dictionary will be the names of the fields, metadata, suggestions and responses.
                - "index": The keys of the dictionary will be the id of the records.
        Returns:
            A dictionary of records.

        """
        records = list(self(with_suggestions=True, with_responses=True))
        data = GenericIO.to_dict(records=records, flatten=flatten, orient=orient)
        return data

    def to_list(self, flatten: bool = False) -> List[Dict[str, Any]]:
        """
        Return the records as a list of dictionaries. This is a convenient shortcut for dataset.records(...).to_list().

        Parameters:
            flatten (bool): Whether to flatten the dictionary and use dot notation for nested keys like suggestions and responses.

        Returns:
            A list of dictionaries of records.
        """
        records = list(self(with_suggestions=True, with_responses=True))
        data = GenericIO.to_list(records=records, flatten=flatten)
        return data

    def to_json(self, path: Union[Path, str]) -> Path:
        """
        Export the records to a file on disk.

        Parameters:
            path (str): The path to the file to save the records.

        Returns:
            The path to the file where the records were saved.

        """
        records = list(self(with_suggestions=True, with_responses=True))
        return JsonIO.to_json(records=records, path=path)

    def from_json(self, path: Union[Path, str]) -> List[Record]:
        """Creates a DatasetRecords object from a disk path to a JSON file.
            The JSON file should be defined by `DatasetRecords.to_json`.

        Args:
            path (str): The path to the file containing the records.

        Returns:
            DatasetRecords: The DatasetRecords object created from the disk path.

        """
        records = JsonIO._records_from_json(path=path)
        return self.log(records=records)

    def to_datasets(self) -> HFDataset:
        """
        Export the records to a HFDataset.

        Returns:
            The dataset containing the records.

        """
        records = list(self(with_suggestions=True, with_responses=True))
        return HFDatasetsIO.to_datasets(records=records)

    ############################
    # Private methods
    ############################

    def _ingest_records(
        self,
        records: Union[List[Dict[str, Any]], Dict[str, Any], List[Record], Record, HFDataset],
        mapping: Optional[Dict[str, str]] = None,
        user_id: Optional[UUID] = None,
    ) -> List[RecordModel]:
        if len(records) == 0:
            raise ValueError("No records provided to ingest.")
        if HFDatasetsIO._is_hf_dataset(dataset=records):
            records = HFDatasetsIO._record_dicts_from_datasets(dataset=records)
        if all(map(lambda r: isinstance(r, dict), records)):
            # Records as flat dicts of values to be matched to questions as suggestion or response
            records = [self._infer_record_from_mapping(data=r, mapping=mapping, user_id=user_id) for r in records]  # type: ignore
        elif all(map(lambda r: isinstance(r, Record), records)):
            for record in records:
                record.dataset = self.__dataset
        else:
            raise ValueError(
                "Records should be a a list Record instances, "
                "a Hugging Face Dataset, or a list of dictionaries representing the records."
            )
        return [record.api_model() for record in records]

    def _normalize_batch_size(self, batch_size: int, records_length, max_value: int):
        norm_batch_size = min(batch_size, records_length, max_value)

        if batch_size != norm_batch_size:
            self._log_message(
                message=f"The provided batch size {batch_size} was normalized. Using value {norm_batch_size}.",
                level="warning",
            )

        return norm_batch_size

    def _validate_vector_names(self, vector_names: Union[List[str], str]) -> None:
        if not isinstance(vector_names, list):
            vector_names = [vector_names]
        for vector_name in vector_names:
            if isinstance(vector_name, bool):
                continue
            if vector_name not in self.__dataset.schema:
                raise ValueError(f"Vector field {vector_name} not found in dataset schema.")

    def _infer_record_from_mapping(
        self,
        data: dict,
        mapping: Optional[Dict[str, str]] = None,
        user_id: Optional[UUID] = None,
    ) -> "Record":
        """Converts a mapped record dictionary to a Record object for use by the add or update methods.
        Args:
            dataset: The dataset object to which the record belongs.
            data: A dictionary representing the record.
            mapping: A dictionary mapping source data keys to Argilla fields, questions, and ids.
            user_id: The user id to associate with the record responses.
        Returns:
            A Record object.
        """
        fields: Dict[str, str] = {}
        responses: List[Response] = []
        record_id: Optional[str] = None
        suggestion_values = defaultdict(dict)
        vectors: List[Vector] = []
        metadata: Dict[str, MetadataValue] = {}

        schema = self.__dataset.schema

        for attribute, value in data.items():
            schema_item = schema.get(attribute)
            attribute_type = None
            sub_attribute = None

            # Map source data keys using the mapping
            if mapping and attribute in mapping:
                attribute_mapping = mapping.get(attribute)
                attribute_mapping = attribute_mapping.split(".")
                attribute = attribute_mapping[0]
                schema_item = schema.get(attribute)
                if len(attribute_mapping) > 1:
                    attribute_type = attribute_mapping[1]
                if len(attribute_mapping) > 2:
                    sub_attribute = attribute_mapping[2]
            elif schema_item is mapping is None and attribute != "id":
                warnings.warn(
                    message=f"""Record attribute {attribute} is not in the schema so skipping.
                        Define a mapping to map source data fields to Argilla Fields, Questions, and ids
                        """
                )
                continue

            if attribute == "id":
                record_id = value
                continue

            # Add suggestion values to the suggestions
            if attribute_type == "suggestion":
                if sub_attribute in ["score", "agent"]:
                    suggestion_values[attribute][sub_attribute] = value

                elif sub_attribute is None:
                    suggestion_values[attribute].update(
                        {"value": value, "question_name": attribute, "question_id": schema_item.id}
                    )
                else:
                    warnings.warn(
                        message=f"Record attribute {sub_attribute} is not a valid suggestion sub_attribute so skipping."
                    )
                continue

            # Assign the value to question, field, or response based on schema item
            if isinstance(schema_item, TextField):
                fields[attribute] = value
            elif isinstance(schema_item, QuestionPropertyBase) and attribute_type == "response":
                responses.append(Response(question_name=attribute, value=value, user_id=user_id))
            elif isinstance(schema_item, QuestionPropertyBase) and attribute_type is None:
                suggestion_values[attribute].update(
                    {"value": value, "question_name": attribute, "question_id": schema_item.id}
                )
            elif isinstance(schema_item, VectorField):
                vectors.append(Vector(name=attribute, values=value))
            elif isinstance(schema_item, MetadataPropertyBase):
                metadata[attribute] = value
            else:
                warnings.warn(message=f"Record attribute {attribute} is not in the schema or mapping so skipping.")
                continue

        suggestions = [Suggestion(**suggestion_dict) for suggestion_dict in suggestion_values.values()]

        return Record(
            id=record_id,
            fields=fields,
            suggestions=suggestions,
            responses=responses,
            vectors=vectors,
            metadata=metadata,
            _dataset=self.__dataset,
        )

__call__(query=None, batch_size=DEFAULT_BATCH_SIZE, start_offset=0, with_suggestions=True, with_responses=True, with_vectors=None)

Returns an iterator over the records in the dataset on the server.

Parameters:

  query (Optional[Union[str, Query]], default None): A string or a Query object to filter the records.
  batch_size (Optional[int], default DEFAULT_BATCH_SIZE): The number of records to fetch in each batch. The default is 256.
  start_offset (int, default 0): The offset from which to start fetching records.
  with_suggestions (bool, default True): Whether to include suggestions in the records.
  with_responses (bool, default True): Whether to include responses in the records.
  with_vectors (Optional[Union[List, bool, str]], default None): A list of vector names to include in the records. If a list is provided, only the specified vectors will be included. If True is provided, all vectors will be included.

Returns:

  DatasetRecordsIterator: An iterator over the records in the dataset on the server.

Source code in src/argilla_sdk/records/_dataset_records.py
def __call__(
    self,
    query: Optional[Union[str, Query]] = None,
    batch_size: Optional[int] = DEFAULT_BATCH_SIZE,
    start_offset: int = 0,
    with_suggestions: bool = True,
    with_responses: bool = True,
    with_vectors: Optional[Union[List, bool, str]] = None,
) -> DatasetRecordsIterator:
    """Returns an iterator over the records in the dataset on the server.

    Parameters:
        query: A string or a Query object to filter the records.
        batch_size: The number of records to fetch in each batch. The default is 256.
        start_offset: The offset from which to start fetching records. The default is 0.
        with_suggestions: Whether to include suggestions in the records. The default is True.
        with_responses: Whether to include responses in the records. The default is True.
        with_vectors: A list of vector names to include in the records. The default is None.
            If a list is provided, only the specified vectors will be included.
            If True is provided, all vectors will be included.

    Returns:
        An iterator over the records in the dataset on the server.

    """
    if query and isinstance(query, str):
        query = Query(query=query)

    if with_vectors:
        self._validate_vector_names(vector_names=with_vectors)

    return DatasetRecordsIterator(
        self.__dataset,
        self.__client,
        query=query,
        batch_size=batch_size,
        start_offset=start_offset,
        with_suggestions=with_suggestions,
        with_responses=with_responses,
        with_vectors=with_vectors,
    )

__init__(client, dataset)

Initializes a DatasetRecords object with a client and a dataset.

Parameters:

  client (Argilla): An Argilla client object.
  dataset (Dataset): A Dataset object.

Source code in src/argilla_sdk/records/_dataset_records.py
def __init__(self, client: "Argilla", dataset: "Dataset"):
    """Initializes a DatasetRecords object with a client and a dataset.
    Args:
        client: An Argilla client object.
        dataset: A Dataset object.
    """
    self.__client = client
    self.__dataset = dataset
    self._api = self.__client.api.records

from_json(path)

Creates a DatasetRecords object from a disk path to a JSON file. The JSON file should be defined by DatasetRecords.to_json.

Parameters:

  path (str, required): The path to the file containing the records.

Returns:

  DatasetRecords (List[Record]): The DatasetRecords object created from the disk path.

Source code in src/argilla_sdk/records/_dataset_records.py
def from_json(self, path: Union[Path, str]) -> List[Record]:
    """Creates a DatasetRecords object from a disk path to a JSON file.
        The JSON file should be defined by `DatasetRecords.to_json`.

    Args:
        path (str): The path to the file containing the records.

    Returns:
        DatasetRecords: The DatasetRecords object created from the disk path.

    """
    records = JsonIO._records_from_json(path=path)
    return self.log(records=records)
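
A brief usage sketch, assuming the file was previously written with DatasetRecords.to_json (the path is a placeholder):

# The records are logged back to the dataset and returned
records = dataset.records.from_json("my_records.json")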

log(records, mapping=None, user_id=None, batch_size=DEFAULT_BATCH_SIZE)

Add or update records in a dataset on the server using the provided records. If the record includes a known id field, the record will be updated. If the record does not include a known id field, the record will be added as a new record. See rg.Record for more information on the record definition.

Parameters:

  records (Union[List[dict], List[Record], HFDataset], required): A list of Record objects, a Hugging Face Dataset, or a list of dictionaries representing the records. If records are defined as dictionaries or as a dataset, the keys/column names should correspond to the fields and questions in the Argilla dataset. id should be provided to identify the records when updating.
  mapping (Optional[Dict[str, str]], default None): A dictionary that maps the keys/column names in the records to the fields or questions in the Argilla dataset.
  user_id (Optional[UUID], default None): The user id to be associated with the records' responses. If not provided, the current user id is used.
  batch_size (int, default DEFAULT_BATCH_SIZE): The number of records to send in each batch. The default is 256.

Returns:

  List[Record]: A list of Record objects representing the updated records.

Source code in src/argilla_sdk/records/_dataset_records.py
def log(
    self,
    records: Union[List[dict], List[Record], HFDataset],
    mapping: Optional[Dict[str, str]] = None,
    user_id: Optional[UUID] = None,
    batch_size: int = DEFAULT_BATCH_SIZE,
) -> List[Record]:
    """Add or update records in a dataset on the server using the provided records.
    If the record includes a known `id` field, the record will be updated.
    If the record does not include a known `id` field, the record will be added as a new record.
    See `rg.Record` for more information on the record definition.

    Parameters:
        records: A list of `Record` objects, a Hugging Face Dataset, or a list of dictionaries representing the records.
                 If records are defined as a dictionaries or a dataset, the keys/ column names should correspond to the
                 fields in the Argilla dataset's fields and questions. `id` should be provided to identify the records when updating.
        mapping: A dictionary that maps the keys/ column names in the records to the fields or questions in the Argilla dataset.
        user_id: The user id to be associated with the records' response. If not provided, the current user id is used.
        batch_size: The number of records to send in each batch. The default is 256.

    Returns:
        A list of Record objects representing the updated records.

    """
    record_models = self._ingest_records(records=records, mapping=mapping, user_id=user_id or self.__client.me.id)
    batch_size = self._normalize_batch_size(
        batch_size=batch_size,
        records_length=len(record_models),
        max_value=self._api.MAX_RECORDS_PER_UPSERT_BULK,
    )

    created_or_updated = []
    records_updated = 0
    for batch in range(0, len(records), batch_size):
        self._log_message(message=f"Sending records from {batch} to {batch + batch_size}.")
        batch_records = record_models[batch : batch + batch_size]
        models, updated = self._api.bulk_upsert(dataset_id=self.__dataset.id, records=batch_records)
        created_or_updated.extend([Record.from_model(model=model, dataset=self.__dataset) for model in models])
        records_updated += updated

    records_created = len(created_or_updated) - records_updated
    self._log_message(
        message=f"Updated {records_updated} records and added {records_created} records to dataset {self.__dataset.name}",
        level="info",
    )

    return created_or_updated
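
A brief usage sketch, reusing the data and mapping from the usage examples above and associating responses with the current user (client is the Argilla client object):

dataset.records.log(
    records=data,
    mapping={"query": "question", "response": "answer"},
    user_id=client.me.id,  # associate any responses with the current user
)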

to_datasets()

Export the records to a HFDataset.

Returns:

  HFDataset: The dataset containing the records.

Source code in src/argilla_sdk/records/_dataset_records.py
def to_datasets(self) -> HFDataset:
    """
    Export the records to a HFDataset.

    Returns:
        The dataset containing the records.

    """
    records = list(self(with_suggestions=True, with_responses=True))
    return HFDatasetsIO.to_datasets(records=records)
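
A brief usage sketch; pushing the exported dataset to the Hugging Face Hub is an optional extra step using the datasets library (the repository name is a placeholder):

hf_dataset = dataset.records.to_datasets()

# Optionally share the exported records on the Hugging Face Hub
hf_dataset.push_to_hub("my-org/my-annotated-dataset")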

to_dict(flatten=False, orient='names')

Return the records as a dictionary. This is a convenient shortcut for dataset.records(...).to_dict().

Parameters:

  flatten (bool, default False): The structure of the exported dictionary. True: the record fields, metadata, suggestions and responses will be flattened. False: they will be nested.
  orient (str, default "names"): The orientation of the exported dictionary. "names": the keys of the dictionary will be the names of the fields, metadata, suggestions and responses. "index": the keys of the dictionary will be the id of the records.

Returns:

  Dict[str, Any]: A dictionary of records.

Source code in src/argilla_sdk/records/_dataset_records.py
def to_dict(self, flatten: bool = False, orient: str = "names") -> Dict[str, Any]:
    """
    Return the records as a dictionary. This is a convenient shortcut for dataset.records(...).to_dict().

    Parameters:
        flatten (bool): The structure of the exported dictionary.
            - True: The record fields, metadata, suggestions and responses will be flattened.
            - False: The record fields, metadata, suggestions and responses will be nested.
        orient (str): The orientation of the exported dictionary.
            - "names": The keys of the dictionary will be the names of the fields, metadata, suggestions and responses.
            - "index": The keys of the dictionary will be the id of the records.
    Returns:
        A dictionary of records.

    """
    records = list(self(with_suggestions=True, with_responses=True))
    data = GenericIO.to_dict(records=records, flatten=flatten, orient=orient)
    return data
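
A brief usage sketch showing both orientations:

by_name = dataset.records.to_dict(flatten=True, orient="names")  # keys are field/question names
by_id = dataset.records.to_dict(orient="index")                  # keys are record ids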

to_json(path)

Export the records to a file on disk.

Parameters:

  path (str, required): The path to the file to save the records.

Returns:

  Path: The path to the file where the records were saved.

Source code in src/argilla_sdk/records/_dataset_records.py
def to_json(self, path: Union[Path, str]) -> Path:
    """
    Export the records to a file on disk.

    Parameters:
        path (str): The path to the file to save the records.

    Returns:
        The path to the file where the records were saved.

    """
    records = list(self(with_suggestions=True, with_responses=True))
    return JsonIO.to_json(records=records, path=path)
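
A brief usage sketch (the file name is a placeholder):

path = dataset.records.to_json("my_records.json")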

to_list(flatten=False)

Return the records as a list of dictionaries. This is a convenient shortcut for dataset.records(...).to_list().

Parameters:

  flatten (bool, default False): Whether to flatten the dictionary and use dot notation for nested keys like suggestions and responses.

Returns:

  List[Dict[str, Any]]: A list of dictionaries of records.

Source code in src/argilla_sdk/records/_dataset_records.py
def to_list(self, flatten: bool = False) -> List[Dict[str, Any]]:
    """
    Return the records as a list of dictionaries. This is a convenient shortcut for dataset.records(...).to_list().

    Parameters:
        flatten (bool): Whether to flatten the dictionary and use dot notation for nested keys like suggestions and responses.

    Returns:
        A list of dictionaries of records.
    """
    records = list(self(with_suggestions=True, with_responses=True))
    data = GenericIO.to_list(records=records, flatten=flatten)
    return data
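
A brief usage sketch, flattening nested keys into dot notation:

rows = dataset.records.to_list(flatten=True)
print(rows[0])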