Title: Comparative analysis of lightweight computer vision models for detection and classification of tomato plant canopies for edge deployment
Authors: Wickramasinghe, H.K.N.C.; Siyambalapitiya, R.
Type: Article
Language: English (en)
Dates: issued 2025-11-07 (accessioned 2025-11-06; available 2025-11-06)
Citation: Proceedings of the Postgraduate Institute of Science Research Congress (RESCON)-2025, University of Peradeniya, p. 79.
ISSN: 3051-4622
URI: https://ir.lib.pdn.ac.lk/handle/20.500.14444/6012
Keywords: Cable-driven parallel robots; Canopy detection; Computer vision; Edge devices; Tomato plants

Abstract:
This research investigates the use of lightweight computer vision models for detecting tomato plant canopies and classifying them as complete or incomplete, with a focus on deployment on edge devices for practical use in agricultural environments. The main objective was a comparative evaluation of three lightweight models: You Only Look Once version 8 nano (YOLOv8n), Single Shot MultiBox Detector MobileNet-v2 Feature Pyramid Network Lite 320 (SSD-MobileNet-v2-FPNLite-320), and EfficientDet-D0. The models were trained on a custom dataset collected at the Horticultural Crop Research and Development Institute, supplemented with additional internet-sourced images. Images were preprocessed by auto-orienting and resizing to a resolution of 224 × 224 pixels. To improve generalisation, data augmentation was applied, including horizontal and vertical flips, 90° rotations, brightness variation (±15%), and light blurring (up to 1 px), generating three augmented samples per training image. Model evaluation was based on precision, recall, and mean average precision at an Intersection over Union threshold of 0.5 (mAP50). YOLOv8n achieved the highest detection accuracy, with an mAP50 of 93.0%, outperforming SSD-MobileNet-v2 (60.4%) and EfficientDet-D0 (46.9%). For edge deployment, the YOLOv8n model was optimised using TensorFlow Lite and deployed on a Raspberry Pi 3 Model B (ARM Cortex-A53 CPU @ 1.2 GHz, 1 GB RAM) equipped with an OV5647 camera module (62° field of view). The model ran in the TensorFlow Lite runtime with single-threaded CPU inference, without GPU delegates or hardware acceleration, and achieved real-time detection performance of 4.4 frames per second (FPS). This work lays the groundwork for future research on autonomous path planning in constrained agricultural environments such as polytunnels and greenhouses. The system supports point-to-point navigation, enabling Cable-Driven Parallel Robots (CDPRs) or gantries with three degrees of freedom to adjust their position until a complete canopy view is obtained, after which the system proceeds along the optimised path. This targeted navigation improves coverage efficiency and enhances monitoring consistency.
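
The abstract specifies the preprocessing and augmentation settings (flips, 90° rotations, ±15% brightness, light blur, 224 × 224 resize, three augmented samples per training image) but not the tooling used. The sketch below reproduces those settings with the Albumentations library; the library choice, probabilities, and the blur kernel size are assumptions, not details from the paper.

```python
# Minimal sketch of an augmentation pipeline matching the settings described in the
# abstract. Albumentations is an assumed tool; the authors' actual pipeline is not stated.
import albumentations as A
import cv2

augment = A.Compose(
    [
        A.HorizontalFlip(p=0.5),
        A.VerticalFlip(p=0.5),
        A.RandomRotate90(p=0.5),
        A.RandomBrightnessContrast(brightness_limit=0.15, contrast_limit=0.0, p=0.5),
        A.Blur(blur_limit=3, p=0.3),   # mild blur; kernel size approximates "up to 1 px"
        A.Resize(224, 224),            # final resolution used for training
    ],
    # bounding boxes in normalised YOLO format, with one class label per box
    bbox_params=A.BboxParams(format="yolo", label_fields=["class_labels"]),
)

def make_augmented_samples(image_path, bboxes, class_labels, n_samples=3):
    """Generate n_samples augmented copies of one training image (three per image in the paper)."""
    image = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2RGB)
    samples = []
    for _ in range(n_samples):
        out = augment(image=image, bboxes=bboxes, class_labels=class_labels)
        samples.append((out["image"], out["bboxes"], out["class_labels"]))
    return samples
```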
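
For reference, the reported metrics follow their standard definitions; the block below states them generically (standard formulas, not taken from the paper), with TP, FP, and FN counted at an IoU threshold of 0.5 for mAP50.

```latex
\[
\text{Precision} = \frac{TP}{TP + FP}, \qquad
\text{Recall} = \frac{TP}{TP + FN}, \qquad
\mathrm{IoU}(A,B) = \frac{|A \cap B|}{|A \cup B|}
\]
\[
\text{mAP}_{50} = \frac{1}{N}\sum_{c=1}^{N} \mathrm{AP}_{c}\Big|_{\mathrm{IoU}=0.5}
\]
```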
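
The abstract states that the trained YOLOv8n model was optimised with TensorFlow Lite for edge deployment. One way to do this is via the Ultralytics export API, sketched below; the weights filename and image size are illustrative assumptions rather than values from the paper.

```python
# Sketch: export trained YOLOv8n weights to TensorFlow Lite for edge deployment.
# "best.pt" and imgsz=224 are illustrative assumptions.
from ultralytics import YOLO

model = YOLO("best.pt")                   # trained YOLOv8n checkpoint (hypothetical filename)
model.export(format="tflite", imgsz=224)  # writes a .tflite model next to the weights
```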
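
On the Raspberry Pi 3, the abstract reports single-threaded CPU inference in the TensorFlow Lite runtime, with no GPU delegate, at 4.4 FPS. The sketch below shows how such a measurement could be taken with tflite_runtime; the model filename, float input scaling, and the synthetic timing frames are assumptions, and decoding/NMS of the raw detections is omitted.

```python
# Sketch: single-threaded TFLite CPU inference and a crude FPS estimate on the Pi.
# "best_float32.tflite" is an assumed filename; frames here are synthetic and used
# only to time the forward pass.
import time
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="best_float32.tflite", num_threads=1)  # single CPU thread
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def infer(frame_rgb):
    """Run one forward pass on a 224x224 RGB frame (assumes a float32 NHWC input tensor)."""
    x = (frame_rgb.astype(np.float32) / 255.0)[None, ...]
    interpreter.set_tensor(inp["index"], x)
    interpreter.invoke()
    return interpreter.get_tensor(out["index"])   # raw detections, still to be decoded/NMS-ed

frames = [np.zeros((224, 224, 3), dtype=np.uint8) for _ in range(50)]
t0 = time.time()
for frame in frames:
    infer(frame)
print(f"~{len(frames) / (time.time() - t0):.1f} FPS")
```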