KEYWORDS: Object detection, Education and training, Data modeling, Video, Performance modeling, Visual process modeling, Feature extraction, Video surveillance, Deep learning
Various applications such as urban monitoring, security, and autonomous systems rely heavily on object classification in video imagery. In this paper we present a backbone for an object detection model that uses ConvNeXt architectures with transfer learning, focusing specifically on vehicle classification. We adapt the ConvNeXtBase and ConvNeXtXLarge models and train them on the “Car Object Detection” dataset, which consists of numerous videos captured in different environmental settings, including varying traffic densities, weather conditions, and light intensities. To improve vehicle classification, specialized convolutional and fully connected layers are incorporated into these adaptations; this transfer learning approach helps the models produce the distinctive features needed for accurate detection. Both models are systematically evaluated using standard performance metrics: ConvNeXtBase achieves 97.91% accuracy with a validation accuracy of 97.82%, while ConvNeXtXLarge achieves 98.34% accuracy with a validation accuracy of 98.11%. These results not only outperform numerous baseline models but also indicate that the models are effective in real-world scenarios. The findings of this study contribute to the development of intelligent transportation systems and provide a solid foundation for future work on object classification via transfer learning, particularly in video surveillance, one of many applications where transfer learning and deep learning techniques can be applied for more efficient outcomes.
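To illustrate the kind of setup the abstract describes, the following is a minimal sketch of a ConvNeXtBase transfer-learning classifier in TensorFlow/Keras. It is not the authors' exact architecture: the number of classes, head layer sizes, dropout rate, and optimizer settings are assumptions for illustration only, and the paper's custom convolutional layers are approximated here by a simple pooled dense head.

```python
# Hedged sketch: ConvNeXtBase backbone (ImageNet weights) frozen for transfer
# learning, with a small custom classification head for vehicle classes.
# NUM_CLASSES, head sizes, and hyperparameters below are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 2                 # hypothetical, e.g. "vehicle" vs. "background"
INPUT_SHAPE = (224, 224, 3)

backbone = tf.keras.applications.ConvNeXtBase(
    include_top=False,          # drop the ImageNet classifier head
    weights="imagenet",         # reuse ImageNet features (transfer learning)
    input_shape=INPUT_SHAPE,
)
backbone.trainable = False      # freeze backbone features for initial training

model = models.Sequential([
    backbone,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),   # custom fully connected layer (size assumed)
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
```

Swapping `ConvNeXtBase` for `ConvNeXtXLarge` yields the larger variant reported in the abstract; the backbone can later be unfrozen for fine-tuning at a lower learning rate.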