Cut tobacco width plays an essential role in cigarette manufacturing. Cut tobacco is typically produced at one of three widths: 0.9 mm, 0.95 mm, or 1.05 mm. As the cut width increases, the proportion of medium and long filaments in the final product rises while the proportion of short and broken shreds falls, improving the structural distribution stability of the final product. However, if the cut leaf is too wide, the rolled cigarettes tend to be loosely filled, resulting in low flammability and slow ignition. Building an online closed-loop control system for cutting machines, based on high-speed and accurate measurement of cut tobacco width, is therefore a valuable but challenging problem in the modern cigarette industry. This paper proposes a purely nondestructive width measurement method that applies machine vision and machine learning techniques to this industrial application. First, the skeleton method and Hough line detection are integrated to achieve sub-pixel measurement of the cut tobacco. Then a global calibration method is applied to map the measurement result from image coordinates to a real-world distance. Our experimental results demonstrate that the proposed tobacco width measurement method achieves both a low error rate and real-time execution efficiency.
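The global calibration step described in the abstract can be illustrated with a minimal sketch: a reference target of known physical width is imaged once to obtain a millimetres-per-pixel scale, which then maps pixel-space width measurements to real-world distances. All function names and values below are illustrative assumptions, not the authors' actual code.

```python
def calibrate_scale(target_width_mm: float, target_width_px: float) -> float:
    """Return the mm-per-pixel scale from a single calibration target
    of known physical width (hypothetical calibration routine)."""
    return target_width_mm / target_width_px

def px_to_mm(width_px: float, scale_mm_per_px: float) -> float:
    """Map a width measured in image coordinates to millimetres."""
    return width_px * scale_mm_per_px

# Example: a 10 mm target spans 400 px, giving 0.025 mm/px,
# so a shred measured at 38 px corresponds to 0.95 mm.
scale = calibrate_scale(10.0, 400.0)
print(px_to_mm(38.0, scale))  # 0.95
```

In practice the paper's calibration is global (covering the whole field of view rather than a single scale factor), but the pixel-to-millimetre mapping above captures the core idea.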
There has been rapid progress in the development of object detection methods. A persistent problem with deep-neural-network-based object detection algorithms is that they rely heavily on abundant training samples. In scenarios where samples are scarce, the accuracy of these models can degrade sharply, hampering their final efficacy. Recently, few-shot detection methods have emerged to address this issue. The majority of existing research concentrates on building few-shot detection models on two-stage object detectors, owing to their high capacity and compatibility with auxiliary structures. As a result, these methods can deliver high-performance object detection with a limited number of training samples. However, two-stage backbones often imply lower inference efficiency and limited applicability for deployment on edge devices. To realize high-speed object detection in computation-resource-constrained scenarios, we propose a one-stage few-shot object detector that integrates a meta-learning structure with the YOLOv3 model. Experimental results demonstrate that our few-shot object detector achieves detection accuracy comparable to existing two-stage detectors under the same training conditions, while significantly improving inference speed and enabling real-time object detection in videos.
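One common way to graft a meta-learning structure onto a one-stage detector, used by feature-reweighting few-shot detectors, is to distil the few support examples of a novel class into a per-channel vector that modulates the detector's feature map before the detection head. The NumPy sketch below shows only this channel-wise reweighting operation; the shapes and names are assumptions for illustration, not the paper's architecture.

```python
import numpy as np

def reweight_features(feature_map: np.ndarray, class_vector: np.ndarray) -> np.ndarray:
    """Channel-wise scaling of a (C, H, W) feature map by a (C,) class vector,
    as in feature-reweighting meta-learning detectors (illustrative only)."""
    return feature_map * class_vector[:, None, None]

C, H, W = 4, 8, 8
features = np.random.rand(C, H, W)   # backbone features for a query image
support_vec = np.random.rand(C)      # vector produced from support images of one class
out = reweight_features(features, support_vec)
print(out.shape)  # (4, 8, 8)
```

At inference time, one such vector is produced per novel class, so a single forward pass of the backbone can be reweighted once per class without retraining the detector.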
In construction, coal mining, tobacco manufacturing and other industries, wearing a helmet is a crucial safety measure for workers, and monitoring helmet wearing plays a significant role in maintaining production safety. However, manual monitoring demands substantial human, material and financial resources, suffers from low efficiency, and is error-prone. We therefore propose a lightweight real-time deep-learning-based detection framework, called YOLO-H, for automatic helmet-wearing detection. Our YOLO-H model builds on YOLOv5-n by introducing state-of-the-art techniques such as re-parameterization, a decoupled head, an improved label assignment strategy, and a refined loss function. The proposed YOLO-H performs more efficiently and accurately: on a private dataset, it achieved 94.5% mAP@0.5 and 65.2% mAP@0.5:0.95 at 82 FPS (frames per second), surpassing YOLOv5 by a large margin. Compared to other methods, our framework also showed superior performance in both speed and accuracy. Moreover, the developed framework can be applied to other object detection scenarios.
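Re-parameterization, one of the techniques the abstract names, trains a block with several parallel branches (e.g. a 3x3 convolution, a 1x1 convolution, and an identity shortcut, as in RepVGG-style designs) and then fuses them into a single 3x3 convolution for fast inference. The single-channel NumPy sketch below demonstrates this fusion under simplifying assumptions (no bias, no batch normalization); it is not the YOLO-H implementation.

```python
import numpy as np

def conv3x3(x: np.ndarray, k: np.ndarray) -> np.ndarray:
    """Naive single-channel 3x3 convolution (cross-correlation) with zero padding."""
    H, W = x.shape
    xp = np.pad(x, 1)
    out = np.zeros_like(x)
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(xp[i:i + 3, j:j + 3] * k)
    return out

# Training-time branches: a 3x3 conv, a 1x1 conv, and an identity shortcut.
k3 = np.random.rand(3, 3)
k1 = np.random.rand(1, 1)
x = np.random.rand(6, 6)
train_out = conv3x3(x, k3) + x * k1[0, 0] + x

# Inference-time re-parameterization: fold the 1x1 and identity branches
# into the centre of the 3x3 kernel, then run a single convolution.
k_fused = k3.copy()
k_fused[1, 1] += k1[0, 0] + 1.0
infer_out = conv3x3(x, k_fused)

print(np.allclose(train_out, infer_out))  # True
```

The fused kernel is mathematically identical to the three-branch block, so inference keeps the accuracy of the multi-branch training structure at the cost of a single convolution.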