CN114936718A - Parking lot fire prediction method - Google Patents

Parking lot fire prediction method

Info

Publication number
CN114936718A
CN114936718A (application CN202210716112.1A)
Authority
CN
China
Prior art keywords
fire
parking lot
information
network
segmentation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210716112.1A
Other languages
Chinese (zh)
Inventor
刘寒松
王永
王国强
刘瑞
焦安健
翟贵乾
谭连胜
李贤超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sonli Holdings Group Co Ltd
Original Assignee
Sonli Holdings Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sonli Holdings Group Co Ltd filed Critical Sonli Holdings Group Co Ltd
Priority to CN202210716112.1A priority Critical patent/CN114936718A/en
Publication of CN114936718A publication Critical patent/CN114936718A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q10/04: Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G06N3/02: Neural networks
    • G06N3/045: Combinations of networks
    • G06N3/08: Learning methods
    • G06V10/22: Image preprocessing by selection of a specific region containing or referencing a pattern; locating or processing of specific regions to guide the detection or recognition
    • G06V10/26: Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
    • G06V10/764: Image or video recognition or understanding using classification, e.g. of video objects
    • G06V10/774: Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V10/803: Fusion of input or preprocessed data at the sensor, preprocessing, feature extraction or classification level
    • G06V10/82: Image or video recognition or understanding using neural networks
    • G06V20/70: Labelling scene content, e.g. deriving syntactic or semantic representations

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Computational Linguistics (AREA)
  • Business, Economics & Management (AREA)
  • Data Mining & Analysis (AREA)
  • Human Resources & Organizations (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Strategic Management (AREA)
  • Economics (AREA)
  • Mathematical Physics (AREA)
  • Game Theory and Decision Science (AREA)
  • Development Economics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention belongs to the technical field of fire prediction and relates to a parking lot fire prediction method. A fire area is located through class activation mapping based on semantic features; using the multi-scale features of the network's middle layers and the results continuously output during training, a cross-generation feature aggregation module aggregates the iterative outputs. Meanwhile, the original RGB input pictures are ablated according to the initial coarse segmentation result and fed back into the classification network, and the resulting secondary fine segmentation reduces the false alarm rate. The method is not influenced by the surrounding environment and has a low data requirement; it can be used not only for parking lot fire detection but also in other fields where data sets are scarce and precision must be improved in a weakly supervised manner.

Description

Parking lot fire prediction method
Technical Field
The invention belongs to the technical field of fire prediction, relates to a parking lot fire prediction method, and particularly relates to a parking lot fire prediction method based on weak supervision cross-iterative feature relation mutual learning.
Background
With economic development and social progress, the number of vehicles is steadily increasing, which places higher demands on large parking lots; the arrangement of their safety facilities is especially important. If a fire breaks out in a large parking lot and is not extinguished in time, the fuel and flammable materials in the vehicles can ignite the entire parking lot almost instantly, causing great loss of life and property.
At present, sensors can provide warnings based on data such as temperature and smoke at ignition, but these physical devices depend heavily on environmental conditions, frequently raise false alarms, and can only alarm once a specified threshold is reached, which does not meet the practical need to discover a fire in time and take measures. Fire recognition algorithms based on machine learning provide a more reliable basis for forecasting than physical devices alone. With the rapid development of computer vision in particular, deep-learning-based fire early-warning algorithms have achieved very high accuracy and are gradually being popularized. Existing fire early-warning algorithms are usually trained on large amounts of data to obtain a high-precision prediction model; however, this data-driven approach has a major drawback: the generalization ability of the network model is very limited, and when the model is applied to a new environment its accuracy can fall off a cliff, threatening the safety of people's lives and property.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a parking lot fire prediction method based on weak supervision and cross-iteration feature relation mutual learning. It addresses the unreliability of existing fire detection that depends on physical sensor components, as well as the problems of visual flame detection methods, which require large amounts of training data, generalize poorly, and can predict only the position of flames but not their shape.
In order to achieve this purpose, the specific process for realizing parking lot fire prediction comprises the following steps:
(1) collecting pictures of a parking lot with a fire and pictures of the parking lot without a fire, labelling them with the corresponding category labels, and dividing the fire and no-fire picture data sets into a training set, a validation set and a test set;
(2) converting the category information into a class activation map carrying positioning information by means of class activation mapping, using only the fire/no-fire category label to locate the fire, and coarsely locating the fire position in the parking lot to obtain semantic information;
(3) fire area fine positioning based on multi-scale feature aggregation: inputting the coarsely located picture from step (2) into a deep learning network and taking the features output at the network's side outputs as flame multi-scale information;
(4) outputting iteration results during the training of the deep learning network in step (3), and using the intermediate iteration outputs as weights for the fire commonality information;
(5) fusing the semantic information, the multi-scale information and the intermediate iteration outputs, dividing the coarse positioning information into foreground flame information and background information with the segmentation algorithm CRF, applying the foreground region to the input fire detection picture to form a locally ablated fire detection picture, continuing to find undiscovered fire regions through a second classification pass, and finally obtaining an accurate segmentation region of the parking lot fire;
(6) training an end-to-end parking lot fire segmentation network by transfer learning based on the accurate segmentation regions obtained in step (5), and finely segmenting the fire areas in the parking lot;
(7) using the constructed fire/no-fire picture training data sets as the input of a fire classification network, predicting whether a fire occurs, back-propagating the error, and thereby training the fire prediction network; meanwhile, training a fire position segmentation network using the output of the fire prediction network;
(8) loading the trained segmentation network model into the segmentation network and outputting the parking lot flame segmentation result.
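The data preparation in step (1) can be sketched as follows; this is a minimal Python sketch, and the file names, split ratios and random seed are illustrative assumptions rather than values from the patent.

```python
import random

def split_dataset(samples, train=0.7, val=0.15, seed=0):
    """Shuffle labelled samples and split them into train/val/test sets,
    as in step (1). Ratios and seed are illustrative."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * train)
    n_val = int(n * val)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

# labelled as in step (1): 1 = fire, 0 = no fire (hypothetical file names)
data = [("img_%03d.jpg" % i, i % 2) for i in range(100)]
train_set, val_set, test_set = split_dataset(data)
```

With 100 samples and the 70/15/15 split above, the three subsets partition the data without overlap.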
According to the invention, a coarse localization of the parking lot fire position is generated as semantic information by the classification network's class activation mapping mechanism, each convolution block of the network produces multi-scale information, and each iteration of network training produces semantic class activation information about the fire, so the complementary advantages of the three can be fully exploited.
Compared with the prior art, the invention has the beneficial effects that:
Based on a weakly supervised fire classification network, a fire area is located through class activation mapping of semantic features; using the multi-scale features of the network's middle layers and the results continuously output during training, a cross-generation feature aggregation module aggregates the iterative outputs. Meanwhile, the original RGB input pictures are ablated according to the initial coarse segmentation result and fed back into the classification network, yielding a secondary fine segmentation. This reduces the false alarm rate, avoids influence from the surrounding environment, and keeps the data requirement low; the method can be used not only for parking lot fire detection but also in other fields where data sets are scarce and precision must be improved in a weakly supervised manner.
Drawings
Fig. 1 is a structure diagram of a cross-iterative feature relationship mutual learning module in a fire prediction network structure framework according to the present invention.
Fig. 2 is a diagram illustrating a fire prediction network according to the present invention.
Fig. 3 is a flow chart of the present invention.
Detailed Description
The invention is further described below by way of example with reference to the accompanying drawings, without limiting the scope of the invention in any way.
Example (b):
in this embodiment, the network structure shown in fig. 1 and fig. 2 is used, following the flow of fig. 3, to predict a fire in the parking lot; the specific steps are as follows:
(1) constructing a fire detection data set:
collecting pictures of the parking lot with a fire and without a fire, dividing them into two categories, fire (labelled 1) and no fire (labelled 0), and splitting the fire and no-fire data sets into three sub-data-sets: a training set, a validation set and a test set;
(2) fire weak supervision coarse positioning based on semantic feature aggregation
At present, the labelling of the parking lot data set only records whether a fire occurs (0 or 1) and nothing else, so the fire position must be located using only this category label. Class Activation Mapping is therefore used to convert the category information into a class activation map carrying positioning information, thereby locating the fire position L (the fire area) in the parking lot, specifically:
inputting the picture I into the deep learning network to output the semantic-layer feature F_4, then obtaining the aggregated semantic feature F_cam based on the CAM mechanism, defined as:

F_cam = Cov( Σ_{i=1..n} softmax(FC(GAP(F_i))) · F_i )

where Cov denotes a convolution operation, softmax the normalization operation, FC the fully connected layer, F_1, ..., F_n the feature layers, and GAP the global average pooling layer;
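As a rough illustration of the class-activation-mapping idea above, here is a NumPy sketch under assumed array shapes with toy feature maps and FC weights; it is not the patent's actual network.

```python
import numpy as np

def class_activation_map(features, fc_weights, target_class):
    """Compute a CAM: weight each feature channel by the FC weight
    connecting its global-average-pooled value to the target class.

    features:   (C, H, W) feature maps from the last conv layer
    fc_weights: (num_classes, C) weights of the FC layer after GAP
    """
    w = fc_weights[target_class]             # (C,) class weight vector
    cam = np.tensordot(w, features, axes=1)  # (H, W): sum_c w_c * F_c
    cam = np.maximum(cam, 0)                 # keep positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()                # normalize to [0, 1]
    return cam

# toy example: channel 0 fires in the top-left corner of a 4x4 map
feats = np.zeros((2, 4, 4))
feats[0, 0, 0] = 5.0
w_fc = np.array([[1.0, 0.0],   # class 0 ("fire") reads channel 0
                 [0.0, 1.0]])
cam = class_activation_map(feats, w_fc, target_class=0)
```

The resulting map peaks exactly where the class-relevant channel was active, which is the coarse localization the patent relies on.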
(3) fire area fine positioning based on multi-scale feature aggregation
Although the approximate fire location L can be obtained through class activation mapping, this localization is very coarse and may even fail in a parking lot under complex conditions, especially when vehicles occlude the flames. The side-output features of the deep learning network (VggNet), Conv2 to Conv4, can therefore serve as multi-scale information to strengthen the perception of objects at different scales; since flames also occur in many shapes and sizes, multi-scale information is effective for fire detection. The side-output features F_2, F_3, F_4 are taken as flame multi-scale information and, after down-sampling and up-sampling operations ψ(·), aggregated into the multi-scale feature:

F_ms = Cov( Concat( ψ(F_2), ψ(F_3), ψ(F_4) ) )

where Cov denotes a convolution operation and Concat denotes a concatenation operation;
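The resize-concatenate-convolve aggregation can be sketched as follows; nearest-neighbour resizing and a uniform channel mix stand in for the up-/down-sampling and Cov operators, and all shapes and weights are illustrative assumptions.

```python
import numpy as np

def resize_nearest(fmap, out_h, out_w):
    """Nearest-neighbour resize of a (H, W) map, a stand-in for the
    up-/down-sampling operator in the aggregation step."""
    h, w = fmap.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return fmap[np.ix_(rows, cols)]

def aggregate_multiscale(feature_maps, out_h, out_w):
    """Resize every side-output map to a common resolution, stack them
    along a channel axis (Concat), then mix channels with a
    1x1-convolution-like weighted sum (Cov)."""
    resized = [resize_nearest(f, out_h, out_w) for f in feature_maps]
    stacked = np.stack(resized)                 # (n_scales, H, W)
    mix = np.full(len(resized), 1.0 / len(resized))
    return np.tensordot(mix, stacked, axes=1)   # (H, W)

# toy side outputs at three scales, like F_2, F_3, F_4
f2 = np.ones((8, 8))
f3 = np.ones((4, 4)) * 3.0
f4 = np.ones((2, 2)) * 5.0
agg = aggregate_multiscale([f2, f3, f4], 8, 8)
```

Each constant map contributes equally, so the aggregate is the mean of the three scale values at every pixel.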
(4) cross-iteration feature aggregation
The deep learning network (VggNet) continuously produces intermediate output results during training. These results are usually discarded because they contain a great deal of background noise, yet each is an intermediate attempt by the network, carrying positive-gain information for correct predictions and negative-gain information for incorrect ones. For weakly supervised segmentation the commonality information must be strengthened, so data should circulate between iterations and positive gains should be reinforced; the iteration results produced during training are therefore used to weight the fire commonality information. Cross-iteration feature aggregation is defined as:

A_t = SF( RE(F_{t-1}) ⊗ RE(F_t) ),   F_cross = (1/N) Σ_{t=1..N} RE( A_t ⊗ RE(F_t) )

where SF denotes softmax normalization, RE a reshape operation, ⊗ matrix cross multiplication, N the total number of iterations, and t the t-th iteration;
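The SF/RE/cross-multiplication step can be illustrated with a small non-local-style affinity computation. The exact form of the patent's aggregation is not recoverable from the text, so this NumPy sketch is an assumed instantiation of the named operators.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along an axis (the SF operator)."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_iteration_aggregate(prev_map, curr_map):
    """Affinity between two iterations' activation maps: reshape (RE)
    each (H, W) map to a vector, form the pairwise affinity by outer
    product (matrix cross multiplication), normalize rows with softmax
    (SF), and re-weight the current map by the affinity."""
    h, w = curr_map.shape
    q = prev_map.reshape(-1)                    # RE: (H*W,)
    k = curr_map.reshape(-1)                    # RE: (H*W,)
    affinity = softmax(np.outer(q, k), axis=1)  # (H*W, H*W), rows sum to 1
    out = affinity @ k                          # re-weighted current values
    return out.reshape(h, w)

prev = np.array([[1.0, 0.0], [0.0, 1.0]])
curr = np.array([[2.0, 0.0], [0.0, 2.0]])
fused = cross_iteration_aggregate(prev, curr)
```

Because each affinity row is a convex combination, every output value stays within the range of the current map's activations.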
(5) fire area segmentation based on a region ablation algorithm
After the parking lot fire location information is obtained, the semantic information F_cam, the multi-scale information F_ms and the intermediate iteration output information F_cross are fused to obtain an initial segmentation result Mask, and the segmentation algorithm CRF then divides the coarse localization into foreground flame information and background information. Although the segmented flame result covers most of the flame region, some regions are still excluded from the current flame region even though they contribute to predicting whether a fire exists. The foreground region is therefore applied to the input fire detection picture I to form a locally ablated fire detection picture, and a second classification pass continues to find the fire regions not yet discovered:

ŷ = Φ( Abl(I, Mask) )

where Φ denotes the classification network, ŷ its output result, and Abl(I, Mask) the input to the classification network, obtained by using Mask to ablate I;
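The region ablation loop can be sketched as follows; the toy image, mask and stand-in classifier are assumptions for illustration, not the patent's networks.

```python
import numpy as np

def ablate(image, mask):
    """Zero out the already-found fire region (Mask) in the RGB input,
    so a second classification pass must rely on fire evidence the
    first segmentation missed."""
    return image * (1.0 - mask[..., None])   # broadcast mask over RGB

def region_ablation_round(image, mask, classifier):
    """One round of the ablation loop: hide the current fire region and
    ask the classifier whether fire evidence remains."""
    ablated = ablate(image, mask)
    return classifier(ablated), ablated

# toy 4x4 RGB image whose left two columns are "fire" (bright red channel)
img = np.zeros((4, 4, 3))
img[:, :2, 0] = 1.0
mask = np.zeros((4, 4))
mask[:, 0] = 1.0                     # first pass only found column 0

# stand-in classifier: "fire" iff any bright red pixel remains
still_fire, ablated = region_ablation_round(
    img, mask, classifier=lambda x: bool((x[..., 0] > 0.5).any()))
```

Here the classifier still reports fire after ablation, signalling that column 1 belongs to the flame region and should be added in the next round.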
(6) fire prediction network training based on the fire segmentation regions
An accurate segmentation region of the parking lot fire can be obtained through the classification network. Based on these regions, an end-to-end parking lot fire segmentation network (SegmentationNet) can then be trained; the two networks share the same goal of finely segmenting the fire areas in the parking lot. The learning process resembles transfer learning (Transfer Learning): the refined knowledge of the classification network is transferred to the segmentation network. The resulting segmentation network, however, works completely independently of the classification network, which greatly speeds up fire segmentation at test time;
(7) parking lot fire prediction network training
The constructed fire picture training data set is used as the input of the fire classification network Φ; whether a fire occurs is predicted and the error is back-propagated, thereby training the classification network.
The refined result is then used as a pseudo label to train the segmentation network SegmentationNet, again back-propagating the error to train it;
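Generating the pseudo label from the refined activation map might look like this; the 0.5 threshold and the toy map are assumptions, not values from the patent.

```python
import numpy as np

def make_pseudo_label(refined_map, threshold=0.5):
    """Binarize the refined activation map into a pseudo segmentation
    label for training SegmentationNet (threshold is an assumption)."""
    return (refined_map >= threshold).astype(np.uint8)

# toy refined map: left column is confidently flame
refined = np.array([[0.9, 0.2],
                    [0.6, 0.1]])
pseudo = make_pseudo_label(refined)
```

The binary map then plays the role of a ground-truth mask in the segmentation network's loss, despite never having been hand-annotated.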
(8) parking lot fire prediction network test
The trained segmentation network model SegmentationNet is loaded into the segmentation network, the parking lot flame segmentation result is output, and a fire-fighting scheme is drawn up according to it.
Based on a weakly supervised fire classification network, a fire area is located through class activation mapping of semantic features; using the multi-scale features of the network's middle layers and the results continuously output during training, a cross-generation feature aggregation module aggregates the iterative outputs; meanwhile, the original RGB input pictures are ablated according to the initial coarse segmentation result and fed back into the classification network, refining the segmentation result a second time.
It is noted that the networks, algorithms and processes not described in detail herein are all common in the art. The embodiments are disclosed to assist further understanding of the invention, but those skilled in the art will appreciate that various substitutions and modifications are possible without departing from the spirit and scope of the invention and the appended claims. Therefore, the invention should not be limited to the disclosed embodiments; the scope of the invention is defined by the appended claims.

Claims (1)

1. A parking lot fire prediction method, characterized by comprising the following specific process:
(1) collecting pictures of a parking lot with a fire and pictures of the parking lot without a fire, labelling them with the corresponding category labels, and dividing the fire and no-fire picture data sets into a training set, a validation set and a test set;
(2) converting the category information into a class activation map carrying positioning information by means of class activation mapping, using the fire/no-fire category label to locate the fire, and coarsely locating the fire position in the parking lot to obtain semantic information;
(3) fire area fine positioning based on multi-scale feature aggregation: inputting the coarsely located picture from step (2) into a deep learning network and taking the features output at the network's side outputs as flame multi-scale information;
(4) outputting iteration results during the training of the deep learning network in step (3), and using the intermediate iteration outputs as weights for the fire commonality information;
(5) fusing the semantic information, the multi-scale information and the intermediate iteration outputs, dividing the coarse positioning information into foreground flame information and background information with the segmentation algorithm CRF, applying the foreground region to the input fire detection picture to form a locally ablated fire detection picture, continuing to find undiscovered fire regions through a second classification pass, and finally obtaining an accurate segmentation region of the parking lot fire;
(6) training an end-to-end parking lot fire segmentation network by transfer learning based on the accurate segmentation regions obtained in step (5), and finely segmenting the fire areas in the parking lot;
(7) using the constructed fire/no-fire picture training data sets as the input of a fire classification network, predicting whether a fire occurs, back-propagating the error, and thereby training the fire prediction network; meanwhile, training a fire position segmentation network using the output of the fire prediction network;
(8) loading the trained segmentation network model into the segmentation network and outputting the parking lot flame segmentation result.
CN202210716112.1A 2022-06-23 2022-06-23 Parking lot fire prediction method Pending CN114936718A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210716112.1A CN114936718A (en) 2022-06-23 2022-06-23 Parking lot fire prediction method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210716112.1A CN114936718A (en) 2022-06-23 2022-06-23 Parking lot fire prediction method

Publications (1)

Publication Number Publication Date
CN114936718A true CN114936718A (en) 2022-08-23

Family

ID=82867882

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210716112.1A Pending CN114936718A (en) 2022-06-23 2022-06-23 Parking lot fire prediction method

Country Status (1)

Country Link
CN (1) CN114936718A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115393660A (en) * 2022-10-28 2022-11-25 松立控股集团股份有限公司 Parking lot fire detection method based on weak supervision collaborative sparse relationship ranking mechanism
CN115393660B (en) * 2022-10-28 2023-02-24 松立控股集团股份有限公司 Parking lot fire detection method based on weak supervision collaborative sparse relationship ranking mechanism

Similar Documents

Publication Publication Date Title
US11783594B2 (en) Method of segmenting pedestrians in roadside image by using convolutional network fusing features at different scales
Bibi et al. Edge AI‐Based Automated Detection and Classification of Road Anomalies in VANET Using Deep Learning
CN110348437B (en) Target detection method based on weak supervised learning and occlusion perception
CN108596053A (en) A kind of vehicle checking method and system based on SSD and vehicle attitude classification
CN111127449B (en) Automatic crack detection method based on encoder-decoder
CN106557579B (en) Vehicle model retrieval system and method based on convolutional neural network
CN112488025B (en) Double-temporal remote sensing image semantic change detection method based on multi-modal feature fusion
CN111611861B (en) Image change detection method based on multi-scale feature association
CN108960074B (en) Small-size pedestrian target detection method based on deep learning
CN111815576B (en) Method, device, equipment and storage medium for detecting corrosion condition of metal part
CN114936718A (en) Parking lot fire prediction method
CN114360239A (en) Traffic prediction method and system for multilayer space-time traffic knowledge map reconstruction
CN114120280A (en) Traffic sign detection method based on small target feature enhancement
CN113177528B (en) License plate recognition method and system based on multi-task learning strategy training network model
Liang et al. Car detection and classification using cascade model
CN113012107B (en) Power grid defect detection method and system
CN115393660B (en) Parking lot fire detection method based on weak supervision collaborative sparse relationship ranking mechanism
CN113343123A (en) Training method and detection method for generating confrontation multiple relation graph network
CN112597996A (en) Task-driven natural scene-based traffic sign significance detection method
Zhang et al. Pre-locate net for object detection in high-resolution images
Zhang et al. Deep learning for large-scale point cloud segmentation in tunnels considering causal inference
CN111738324B (en) Multi-frequency and multi-scale fusion automatic crack detection method based on frequency division convolution
CN115527098A (en) Infrared small target detection method based on global mean contrast space attention
CN115240163A (en) Traffic sign detection method and system based on one-stage detection network
CN114219989A (en) Foggy scene ship instance segmentation method based on interference suppression and dynamic contour

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination