Real-Time Object Detection for Road Safety: A YOLO-Based Approach for Sustainable Cities

Authors

  • Fatimah OYEWUSI Lead City University, Ibadan, Oyo State, Nigeria
  • Ismail AJAGBE Lead City University, Ibadan, Oyo State, Nigeria
  • Ummul-Kulthum OSENI Lead City University, Ibadan, Oyo State, Nigeria

Keywords:

Object Detection, YOLOv8, Road Safety, Sustainable Cities, Deep Learning

Abstract

This paper presents the development and deployment of a high-performance road safety
detection system using the YOLOv8 architecture to enhance urban traffic safety, directly
supporting Sustainable Development Goal 11 (Sustainable Cities and Communities). The
system provides real-time detection of vehicles, pedestrians, and traffic infrastructure from
road scene images. A comparative analysis of YOLOv8 model variants (Nano, Large, and
Extra-Large) was conducted to determine the optimal balance between inference speed and
detection accuracy. The YOLOv8x model was selected for the final deployment, achieving an
average inference time of 1,227 ms while detecting an average of 32 objects per scene.
However, a critical analysis revealed a significant challenge in pedestrian detection: only
57.1% of pedestrians (8 of 14) were detected with high confidence, meaning the remaining
42.9% require human verification. This finding highlights the limitations of current computer
vision technology for safety-critical applications. The system was deployed as a web
application using Streamlit and hosted
on Hugging Face Spaces, demonstrating a modern MLOps workflow. This research provides
valuable insights into the practical application of deep learning for road safety, emphasizing
the ethical considerations and the need for multi-modal sensor fusion to overcome the
limitations of purely vision-based systems.
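The 57.1% high-confidence rate reported above amounts to thresholding detection confidences and taking the fraction that clears the cutoff. The following sketch illustrates that calculation; the individual confidence scores and the 0.5 threshold are illustrative assumptions (the paper reports only the 8-of-14 ratio), not values from the study.

```python
# Hypothetical sketch of the high-confidence rate calculation described in
# the abstract. Scores and the 0.5 threshold are illustrative assumptions.

def high_confidence_rate(confidences, threshold=0.5):
    """Fraction of detections at or above the confidence threshold."""
    high = [c for c in confidences if c >= threshold]
    return len(high) / len(confidences)

# 14 pedestrian detections, 8 of which clear the threshold, matching the
# reported 57.1% rate (individual scores are made up for illustration).
pedestrian_scores = [0.92, 0.88, 0.81, 0.77, 0.69, 0.64, 0.58, 0.53,
                     0.47, 0.41, 0.36, 0.30, 0.24, 0.18]

rate = high_confidence_rate(pedestrian_scores)
print(f"{rate:.1%} high-confidence")  # 57.1% high-confidence
```

The complement of this rate (42.9%) is the share of detections the abstract flags as requiring human verification before any safety-critical action is taken.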

Published

2025-08-05