🧑🏻‍💻 Samuel Andrey · Posted on 12 Jul 2025
How to Convert YOLO to CoreML
Introduction
After successfully training my YOLO model, I wanted to use it in an iOS app. To do that, I needed to convert it into CoreML format. In this article, I will explain the exact steps I took to convert a YOLOv11 model into a .mlmodel file ready to run on Apple devices.
The process includes installing libraries, preparing the dataset, creating the YAML configuration file, training the model, and exporting it to CoreML. Every step I describe is based on what I did personally.
1. Installing Required Libraries
The first thing I did was install the necessary libraries. I used ultralytics, the official library for YOLOv11, which can be installed via pip. I also imported some standard Python modules like os, shutil, random, and yaml to handle files and folder structures.
Once installed, I imported all libraries so they would be available for the entire workflow. This helped me avoid runtime errors during data preparation or training.
!pip install ultralytics -q  # install the Ultralytics package (YOLOv11) quietly

import os
import shutil
import random
import yaml
from ultralytics import YOLO
from IPython.display import Image, display  # for showing images inline in the notebook
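As an optional sanity check (my addition, not part of the original steps), ultralytics ships a small diagnostics helper that prints the installed version, the PyTorch build, and whether a GPU is visible. It is handy on Kaggle before starting a long training run:
from ultralytics import checks
checks()  # prints ultralytics/torch versions and the detected GPU or CPU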
2. Preparing the Dataset
I defined the dataset path and set up a directory structure that matched the YOLO format. The dataset contained images (.jpg, .png), label files (.txt), and a class list (classes.txt). I created separate folders for training and validation data.
Keeping the folder structure clean and consistent made the training process smoother and ensured that the model could read the data properly.
source_dir = '/kaggle/input/indonesian-coin-rupiah-2025'
source_images_dir = os.path.join(source_dir, 'images')
source_labels_dir = os.path.join(source_dir, 'labels')
source_classes_file = os.path.join(source_dir, 'classes.txt')
base_output_dir = '/kaggle/working/dataset'
train_images_dir = os.path.join(base_output_dir, 'images', 'train')
val_images_dir = os.path.join(base_output_dir, 'images', 'val')
train_labels_dir = os.path.join(base_output_dir, 'labels', 'train')
val_labels_dir = os.path.join(base_output_dir, 'labels', 'val')
os.makedirs(train_images_dir, exist_ok=True)
os.makedirs(val_images_dir, exist_ok=True)
os.makedirs(train_labels_dir, exist_ok=True)
os.makedirs(val_labels_dir, exist_ok=True)
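Before splitting, it is worth verifying that every image has a matching label file, since the copy step in the next section assumes the pair exists. This is an optional check I added for illustration, not part of the original workflow:
# optional: make sure every image has a corresponding .txt label file
missing = []
for filename in os.listdir(source_images_dir):
    basename, ext = os.path.splitext(filename)
    if ext.lower() in ('.png', '.jpg', '.jpeg'):
        if not os.path.exists(os.path.join(source_labels_dir, f"{basename}.txt")):
            missing.append(filename)
print(f"Images without labels: {len(missing)}")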
3. Splitting the Dataset
I split the dataset into two parts: train and val. I shuffled the image files and used about 90% for training and 10% for validation. Then, I copied the corresponding image and label files into their respective folders.
This split matters: the model learns from the training set and is evaluated on images it never saw, which gives an honest measure of its performance.
all_images = [f for f in os.listdir(source_images_dir) if f.lower().endswith(('.png', '.jpg', '.jpeg'))]
random.shuffle(all_images)

split_ratio = 0.10  # fraction of images reserved for validation
split_index = int(len(all_images) * split_ratio)
val_files = all_images[:split_index]
train_files = all_images[split_index:]

def copy_files(file_list, dest_img_dir, dest_lbl_dir):
    # copy each image and its matching label file into the destination folders
    for filename in file_list:
        basename, _ = os.path.splitext(filename)
        label_filename = f"{basename}.txt"
        shutil.copy(os.path.join(source_images_dir, filename), os.path.join(dest_img_dir, filename))
        shutil.copy(os.path.join(source_labels_dir, label_filename), os.path.join(dest_lbl_dir, label_filename))

copy_files(train_files, train_images_dir, train_labels_dir)
copy_files(val_files, val_images_dir, val_labels_dir)

print(f" - {len(train_files)} Training files.")
print(f" - {len(val_files)} Validation files.")
4. Creating the Dataset Config YAML
I created a data.yaml config file required by YOLOv11. This file contains the paths to the dataset folders and the list of class names. I wrote it using Python for simplicity and automation.
This YAML file is necessary for training. Without it, the model won’t know where to find the data or how to map class indices.
with open(source_classes_file, 'r') as f:
    class_names = [line.strip() for line in f.readlines()]

yaml_config = {
    'path': base_output_dir,
    'train': 'images/train',
    'val': 'images/val',
    'names': class_names
}

yaml_file_path = os.path.join('/kaggle/working', 'data.yaml')
with open(yaml_file_path, 'w') as f:
    yaml.dump(yaml_config, f, sort_keys=False)
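To double-check the config, you can print the generated file back. The class names in the example output below are illustrative, since the real ones come from your classes.txt:
# print the generated config to verify paths and class names
with open(yaml_file_path) as f:
    print(f.read())

# Example output (class names are illustrative):
# path: /kaggle/working/dataset
# train: images/train
# val: images/val
# names:
# - 100_rupiah
# - 200_rupiah
# - 500_rupiah
# - 1000_rupiah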
5. Training the YOLO Model
I loaded the large YOLOv11 checkpoint with YOLO('yolo11l.pt') and started training with the train() method, specifying parameters like the number of epochs, image size, batch size, and the path to the YAML file.
YOLOv11 offers multiple model sizes:
- n (nano): very lightweight, suitable for edge devices.
- s (small): slightly better accuracy, still fast.
- m (medium): a good balance between speed and accuracy.
- l (large): higher accuracy, more resource-intensive.
- x (extra large): best accuracy, requires powerful GPU and memory.
I chose the l version for better accuracy, even though it required more memory and computing power.
model = YOLO('yolo11l.pt')

results = model.train(
    data=yaml_file_path,
    epochs=80,
    imgsz=640,
    batch=16,
    name='yolo11_rupiah_coin_model',
)
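Before exporting, a quick prediction confirms the trained weights actually detect coins. The snippet below is a minimal sketch: 'sample.jpg' is a placeholder path, and it uses the Image and display imports from step 1 to show the annotated result inline:
# run the freshly trained model on one sample image ('sample.jpg' is a placeholder)
pred = model.predict('sample.jpg', conf=0.5)
pred[0].save(filename='prediction.jpg')  # write the annotated image to disk
display(Image('prediction.jpg'))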
6. Converting the YOLOv11 Model to CoreML
Once training was complete, I exported the model to CoreML format using the export(format='mlmodel') method. I loaded the best checkpoint that the training run saved (best.pt) and exported it, which wrote the .mlmodel file to my working directory. Passing nms=True bakes non-maximum suppression into the exported model, so it returns final detections rather than raw boxes.
This CoreML file can now be integrated into an iOS app using Xcode and run locally on Apple devices without needing an internet connection.
# path to the best weights saved by the training run above
best_model_path = 'runs/detect/yolo11_rupiah_coin_model/weights/best.pt'

best_model = YOLO(best_model_path)
coreml_path = best_model.export(format='mlmodel', nms=True)  # nms=True embeds non-maximum suppression
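As a final check, the exported file can be loaded back with coremltools (which ultralytics installs during export) to inspect its input and output description. A minimal sketch, assuming the export above succeeded:
import coremltools as ct

# load the exported CoreML model and print its input/output description
mlmodel = ct.models.MLModel(coreml_path)
print(mlmodel)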