6. Training a classification model with ResNet

In this tutorial we will download a public model from the AI Library, run inference with it, and train it on custom data locally. Here we will use a ResNet50 model.

Start by installing the following packages if you don’t have them installed already; the model adapter will use them later: torch, torchvision, imgaug, and scikit-image<0.18.

Then, import the modules required for the scripts in this tutorial.

# !pip install torch torchvision imgaug "scikit-image<0.18"
import matplotlib.pyplot as plt
from PIL import Image
import numpy as np
import json
import dtlpy as dl
from dtlpy.ml import train_utils

6.1. Clone the Public Model Into Your Project

First, we’ll clone the Model entity to our project. You can view the public models in the public Dataloop GitHub, or list all publicly available models by using a Filter. Here we will use a ResNet50 model pretrained on the ImageNet dataset.

filters = dl.Filters(resource=dl.FiltersResource.MODEL, use_defaults=False)
filters.add(field='scope', values='public')
# list all publicly available models
dl.models.list(filters=filters).print()
# get the public model
public_model = dl.models.get(model_name='pretrained-resnet50')
# clone it into your project
project = dl.projects.get(project_name='<my project name>')
model = public_model.clone(model_name='my-model',
                           project_id=project.id)

6.1.1. Run a pretrained model

We will then “build” a model adapter to get the package code locally and create an instance of the ModelAdapter class. Then we will load the pretrained model and weights into the model adapter.

package = dl.packages.get(package_id=model.package_id)
adapter = package.build(module_name='model-adapter')
# call the wrapper function to load the pretrained model and weights
adapter.load_from_model(model_entity=model)

6.1.2. Predict on an item

Now we can get an item, run inference on it with the predict method, and upload the resulting annotations. If you would like to see the item and its predictions, you can view them locally, or you can open the item on the platform and edit it directly there.

item = dl.items.get(item_id='611e174e4c09acc3c5bb81d3')
annotations = adapter.predict_items([item], with_upload=True)
# view the item and its prediction locally
image = np.asarray(Image.open(item.download()))
plt.imshow(item.annotations.show(image, thickness=5))
print('Classification: {}'.format(annotations[0][0].label))

6.2. Train on new dataset

Here we will use a public dataset of sheep faces. We create a project and a dataset, and upload the data with four sheep-breed labels. NOTE: You might need to change the location of the items, which currently points to the root of the documentation repository. If you downloaded the dtlpy documentation repo locally, this should work as is.

project = dl.projects.create('Sheep Face - Model Mgmt')
dataset = project.datasets.create('Sheep Face')
_ = dataset.items.upload(local_path='../../../../assets/sample_datasets/SheepFace/items/*')
dataset.add_labels(label_list=['Merino', 'Poll Dorset', 'Suffolk', 'White Suffolk'])

Now we’ll run the “prepare_dataset” method. This will clone and freeze the dataset so that we can reproduce the training with the same copy of the data. The cloned dataset will be split into subsets, either filtered using DQL or as percentages. In this example, we’ll use an 80/20 train/validation split.

pages = dataset.items.list()
num_items = pages.items_count

train_proportion = 0.8
val_proportion = 0.2
train_partitions = [0] * round(train_proportion * num_items)
val_partitions = [1] * round(val_proportion * num_items)
partitions = train_partitions + val_partitions

# move each item into a /train or /val directory according to its partition
item_count = 0
for item in pages.all():
    if partitions[item_count] == 0:
        item.move(new_path='/train')
    elif partitions[item_count] == 1:
        item.move(new_path='/val')
    item_count += 1

subsets = {'train': dl.Filters(field='dir', values='/train'),
           'validation': dl.Filters(field='dir', values='/val')}
dataset.metadata['system']['subsets'] = {
    'train': json.dumps(dl.Filters(field='dir', values='/train').prepare()),
    'validation': json.dumps(dl.Filters(field='dir', values='/val').prepare()),
}
dataset.update(system_metadata=True)

cloned_dataset = train_utils.prepare_dataset(dataset=dataset,
                                             filters=None,
                                             subsets=subsets)

After partitioning and cloning the data, we will clone the pretrained model to get a starting point for the fine-tuning, and create an artifact where we can save the model weights. The model’s configuration determines runtime settings such as the number of epochs; in this tutorial we will train for only 2 epochs.

new_model = model.clone(model_name='sheep-soft-augmentations',
                        dataset_id=cloned_dataset.id,
                        project_id=project.id,
                        configuration={'batch_size': 16,
                                       'start_epoch': 0,
                                       'num_epochs': 2,
                                       'input_size': 256,
                                       'id_to_label_map': {(v - 1): k for k, v in
                                                           dataset.instance_map.items()}
                                       })

We’ll load the new, untrained model into the adapter and prepare the local dataset to be used for training.

# load the new, untrained model into the adapter
adapter.load_from_model(model_entity=new_model)
# download and convert the frozen dataset for local training
root_path, data_path, output_path = adapter.prepare_training(root_path='<local_path_to_store_data_locally>')

6.3. Start the training

The package, model, and data are now prepared. We are ready to train!

print("Training {!r} with model {!r} on data {!r}".format(package.name, new_model.id, data_path))

6.4. Save the Model

We will save the locally-trained model and upload the trained weights to the Artifact Item. This ensures that everything is on the Dataloop platform and allows other developers to use our trained model.
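For example, a minimal save step, assuming the adapter built above exposes the dtlpy model adapter’s save_to_model wrapper:

# save the trained weights locally and upload them to the model's Artifact Item
adapter.save_to_model(local_path=output_path,
                      replace=True)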


We can also list all Artifacts associated with this Package, and add more files that are needed to load or run the model.
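For example, using the artifacts repository on the Package entity (the file path below is a hypothetical placeholder):

# list all Artifacts associated with this package
package.artifacts.list()
# upload an additional file the model needs at load or run time (hypothetical path)
package.artifacts.upload(filepath='<local_path_to_an_extra_file>')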


6.5. Predict on our newly trained model

With everything in place, we will load our model and view an item’s prediction.

# load the newly trained model into the adapter
adapter.load_from_model(model_entity=new_model)
item = dl.items.get(item_id='62b327f0da0d04bc7201e48a')
annotations = adapter.predict_items([item], with_upload=True)
image = np.asarray(Image.open(item.download()))
plt.imshow(item.annotations.show(image, thickness=5))
print('Classification: {}'.format(annotations[0][0].label))