Journal of Vibration Testing and System Dynamics

Pawel Olejnik (editor)

Lodz University of Technology, Poland

Email: pawel.olejnik@p.lodz.pl

C. Steve Suh (editor)

Texas A&M University, USA

Email: ssuh@tamu.edu

Xianguo Tuo (editor)

Sichuan University of Science and Engineering, China

Email: tuoxianguo@suse.edu.cn


Coal Flow Detection of Belt Conveyor Based on FPN

Journal of Vibration Testing and System Dynamics 10(3) (2026) 283--296 | DOI:10.5890/JVTSD.2026.09.006

Jinyang Zhang$^{1,2}$, Yilin Bei$^{3}$, Hao Wu$^{1,2}$

$^{1}$ School of Automation and Information Engineering, Sichuan University of Science and Engineering, Yibin, 644000, China

$^{2}$ Intelligent Perception and Control Key Laboratory of Sichuan Province, Yibin, 644000, China

$^{3}$ Taishan College Institute of Information Science and Technology, Taian, 271000, China


Abstract

In coal mine production, real-time monitoring of coal flow on conveyor belts is of great significance for ensuring production safety and improving transportation efficiency. Existing coal flow detection methods rely primarily on parameters such as coal width, belt speed, and coal flow depth, employing volume modeling or mass estimation; they face challenges such as complex sensor deployment and high cost. This paper therefore proposes a coal flow detection method based on a Feature Pyramid Network (FPN), which models the dynamic changes in coal flow on the conveyor belt and determines their duration, achieving intelligent coal flow detection. The backbone network incorporates Partitioned Multi-head Self-Attention (PMSA) to enhance local modeling capability. The FPN structure includes Adaptive Fine-Grained Channel Attention (FCA) modules and a Convolutional Block Attention Module (CBAM), effectively preventing information loss and enhancing responsiveness to critical spatiotemporal information. Experimental results demonstrate that the method achieves good detection performance across various scenarios, providing a decision-making basis for intelligent speed control of conveyor belts.
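As a rough illustration of the FPN fusion the abstract refers to, the sketch below shows the standard top-down pathway with lateral connections (Lin et al. [17]): each pyramid level is projected to a common channel width, then coarser levels are upsampled and added into finer ones. This is a minimal NumPy sketch only; the function and variable names (`fpn_topdown`, `lateral`, `upsample2x`) are illustrative, not the authors' implementation, and the paper's attention modules (PMSA, FCA, CBAM) are omitted.

```python
import numpy as np

def lateral(feat, out_ch, rng):
    # 1x1 convolution as a channel-mixing matrix: (C_in, H, W) -> (out_ch, H, W)
    w = rng.standard_normal((out_ch, feat.shape[0])) * 0.01
    return np.tensordot(w, feat, axes=([1], [0]))

def upsample2x(feat):
    # Nearest-neighbour 2x upsampling along H and W
    return feat.repeat(2, axis=1).repeat(2, axis=2)

def fpn_topdown(features, out_ch=8, seed=0):
    """Fuse a fine-to-coarse list of (C, H, W) feature maps FPN-style."""
    rng = np.random.default_rng(seed)
    feats = [lateral(f, out_ch, rng) for f in features]  # align channel widths
    fused = [feats[-1]]                                  # start from coarsest level
    for f in reversed(feats[:-1]):
        fused.append(f + upsample2x(fused[-1]))          # top-down add
    return fused[::-1]                                   # finest level first

# Toy pyramid over a small input: spatial sizes 8x8, 4x4, 2x2
pyr = [np.ones((4, 8, 8)), np.ones((8, 4, 4)), np.ones((16, 2, 2))]
out = fpn_topdown(pyr)
print([o.shape for o in out])  # [(8, 8, 8), (8, 4, 4), (8, 2, 2)]
```

Every output level carries the same channel count, so a shared detection head can run on each scale; in the paper's setting the levels would come from a video backbone rather than the toy constant tensors used here.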

References

[1] Bortnowski, P., Kawalec, W., Król, R., and Ozdoba, M. (2022), Types and causes of damage to the conveyor belt—review, classification and mutual relations, Engineering Failure Analysis, 140, 106520.
[2] Soofastaei, A., Karimpour, E., Knights, P., and Kizil, M. (2017), Energy-efficient loading and hauling operations, in Energy Efficiency in the Minerals Industry: Best Practices and Research Directions, Cham: Springer International Publishing, 121-146.
[3] Wang, G. and Ye, L. (2022), Design and research of belt conveyor energy-saving control system based on coal flow recognition, Coal Mine Machinery, 44, 14-17.
[4] Zhang, K., Kang, L., Chen, X., et al. (2022), A review of intelligent unmanned mining: current situation and development trend, Energies, 15(2), 513.
[5] Wang, W., Tian, B., Feng, H., et al. (2020), Research on multi-parameter detection methods for mine belt conveyors based on laser ranging, Coal Science and Technology, 48(8), 131-138.
[6] Orlowska-Kowalska, T. and Kaminski, M. (2011), FPGA implementation of the multilayer neural network for the speed estimation of the two-mass drive system, IEEE Transactions on Industrial Informatics, 7(3), 436-445.
[7] Wang, G., Li, X., and Yang, L. (2021), Dynamic coal quantity detection and classification of permanent magnet direct drive belt conveyor based on machine vision and deep learning, International Journal of Pattern Recognition and Artificial Intelligence, 35(11), 2152017.
[8] Mihuţ, N.M. (2015), Designing a system for measuring the flow of material transported on belts using ultrasonic sensors, IOP Conference Series: Materials Science and Engineering, 95(1), 012089.
[9] Hao, H., Wang, K., and Ding, W. (2023), Dynamic coal quantity detection system for conveyor belts based on ultrasonic arrays, Journal of Mine Automation, 49(4), 120-127.
[10] Xianguo, L., Lifang, S., Zixu, M., et al. (2018), Laser-based on-line machine vision detection for longitudinal rip of conveyor belt, Optik, 168, 360-369.
[11] Wang, L. (2022), Research on three-dimensional reconstruction methods for laser scanning monitoring of coal release quantity in longwall mining areas, Coal Engineering, 54(5), 125-130.
[12] Liu, H. (2018), Control system for coal mine belt conveyors based on load detection, Journal of Mine Automation, 44(10), 81-84.
[13] Miao, D., Wang, Y., Yang, L., et al. (2023), Coal flow detection of belt conveyor based on the two-dimensional laser, IEEE Access, 11, 82294-82301.
[14] Wen, L., Liang, B., Zhang, L., et al. (2023), Research on coal volume detection and energy-saving optimization intelligent control method of belt conveyor based on laser and binocular visual fusion, IEEE Access, 12, 75238-75248.
[15] Wang, G., Li, X., and Yang, L. (2021), Dynamic coal quantity detection and classification of permanent magnet direct drive belt conveyor based on machine vision and deep learning, International Journal of Pattern Recognition and Artificial Intelligence, 35(11), 2152017.
[16] Hou, C., Qiao, T., Dong, H., et al. (2024), Coal flow volume detection method for conveyor belt based on TOF vision, Measurement, 229, 114468.
[17] Lin, T.Y., Dollár, P., Girshick, R., et al. (2017), Feature pyramid networks for object detection, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2117-2125.
[18] Zhou, X. and Zhang, L. (2022), SA-FPN: An effective feature pyramid network for crowded human detection, Applied Intelligence, 52(11), 12556-12568.
[19] Chen, L., An, S., Zhao, S., et al. (2023), MS-FPN-based pavement defect identification algorithm, IEEE Access, 11, 124797-124807.
[20] Liu, M., Chen, J., Liu, P., et al. (2024), Dual SIE-FPN: semantic and spatial information enhancement for multiscale object detection, IEEE Transactions on Industrial Informatics, 20, 14164-14173.
[21] Carreira, J. and Zisserman, A. (2017), Quo vadis, action recognition? A new model and the Kinetics dataset, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 6299-6308.
[22] Qian, Z. (2025), ECViT: Efficient convolutional vision transformer with local-attention and multi-scale stages, International Joint Conference on Neural Networks (IJCNN), 1-8.
[23] Sun, H., Wen, Y., Feng, H., Zheng, Y., Mei, Q., Ren, D., and Yu, M. (2024), Unsupervised bidirectional contrastive reconstruction and adaptive fine-grained channel attention networks for image dehazing, Neural Networks, 176, 106314.
[24] Woo, S., Park, J., Lee, J.Y., and Kweon, I.S. (2018), CBAM: Convolutional block attention module, Proceedings of the European Conference on Computer Vision (ECCV), 3-19.
[25] Zeng, R., Huang, W., Tan, M., Rong, Y., Zhao, P., Huang, J., and Gan, C. (2019), Graph convolutional networks for temporal action localization, Proceedings of the IEEE/CVF International Conference on Computer Vision, 7094-7103.
[26] Shi, D., Zhong, Y., Cao, Q., Zhang, J., Ma, L., Li, J., and Tao, D. (2022), ReAct: Temporal action detection with relational queries, in European Conference on Computer Vision, Cham: Springer Nature Switzerland, 105-121.
[27] Lin, T., Liu, X., Li, X., Ding, E., and Wen, S. (2019), BMN: Boundary-matching network for temporal action proposal generation, Proceedings of the IEEE International Conference on Computer Vision, 3888-3897.
[28] Lin, C., Xu, C., Luo, D., Wang, Y., Tai, Y., Wang, C., Li, J., Huang, F., and Fu, Y. (2021), Learning salient boundary feature for anchor-free temporal action localization, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3320-3329.
[29] Dai, R., Das, S., Kahatapitiya, K., Ryoo, M.S., and Brémond, F. (2022), MS-TCT: Multi-scale temporal ConvTransformer for action detection, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 20041-20051.
[30] Yang, L., Zheng, Z., Han, Y., Cheng, H., Song, S., Huang, G., and Li, F. (2024), DyFADet: Dynamic feature aggregation for temporal action detection, in European Conference on Computer Vision, Cham: Springer Nature Switzerland, 305-322.
[31] Shi, D., Zhong, Y., Cao, Q., Ma, L., Li, J., and Tao, D. (2023), TriDet: Temporal action detection with relative boundary modeling, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 18857-18866.
[32] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., and Polosukhin, I. (2017), Attention is all you need, Advances in Neural Information Processing Systems, 30.
[33] Lin, H., Cheng, X., Wu, X., and Shen, D. (2022), CAT: Cross attention in vision transformer, 2022 IEEE International Conference on Multimedia and Expo (ICME), IEEE, 1-6.
[34] Si, Y., Xu, H., Zhu, X., Zhang, W., Dong, Y., Chen, Y., and Li, H. (2025), SCSA: Exploring the synergistic effects between spatial and channel attention, Neurocomputing, 634, 129866.