Research on a Multi-Target Real-Time Forest Fire Detection Method Based on FFR-YOLO

Journal: Advances in Computer and Autonomous Intelligence Research
DOI: 10.12238/acair.v3i2.13541

罗佳慧

Dalian Minzu University

Abstract

This study proposes the FFR-YOLO model to address the accuracy and real-time requirements of forest fire detection over complex terrain. A structured anchor-free module and a dynamic non-maximum suppression (NMS) algorithm significantly reduce the computational load and shorten NMS latency. A non-centered local receptive field and a stepwise downsampling strategy strengthen the model's ability to capture flame features at multiple scales. An improved AF module, comprising an encoder and a Fusion structure, enhances cross-scale feature fusion. Experiments show that FFR-YOLO reaches an mAP@0.5 of 97.4%, a 5.7% improvement over YOLOv8, while sustaining real-time inference at 131 FPS. The work offers an efficient solution for real-time forest fire detection in complex environments, with both ecological and socio-economic value.
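The abstract refers to a dynamic non-maximum suppression step, but does not specify the rule it uses. The sketch below is only a minimal illustration of greedy NMS in which the IoU threshold is relaxed for larger boxes; the helper `dynamic_iou_threshold` and its size heuristic are assumptions for illustration, not the FFR-YOLO implementation.

```python
# Minimal greedy NMS sketch with a box-size-dependent IoU threshold.
# NOTE: the threshold rule below is a hypothetical illustration, not the
# dynamic NMS actually used in FFR-YOLO (the abstract does not specify it).
import numpy as np

def dynamic_iou_threshold(box, base=0.5, large_area=96 * 96, bonus=0.15):
    """Hypothetical rule: relax the IoU threshold for large boxes."""
    area = (box[2] - box[0]) * (box[3] - box[1])
    return base + bonus if area > large_area else base

def iou(box, boxes):
    """IoU between one box and an array of boxes, all in (x1, y1, x2, y2)."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter + 1e-9)

def nms_dynamic(boxes, scores):
    """Greedy NMS that suppresses neighbours above a per-box IoU threshold."""
    order = np.argsort(scores)[::-1]  # indices sorted by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        if order.size == 1:
            break
        rest = order[1:]
        thr = dynamic_iou_threshold(boxes[i])
        overlaps = iou(boxes[i], boxes[rest])
        order = rest[overlaps <= thr]  # drop boxes overlapping the kept one
    return keep

if __name__ == "__main__":
    boxes = np.array([[10, 10, 110, 110], [20, 20, 120, 120], [200, 200, 230, 230]], float)
    scores = np.array([0.9, 0.8, 0.7])
    print(nms_dynamic(boxes, scores))  # -> [0, 2]: the overlapping box 1 is suppressed
```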

Keywords

deep learning; forest fire detection; YOLOv8; attention mechanism


Copyright © 2025 罗佳慧

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.