CN113435286A - Transfer learning method for unmanned aerial vehicle small target monitoring - Google Patents

Transfer learning method for unmanned aerial vehicle small target monitoring

Info

Publication number
CN113435286A
CN113435286A
Authority
CN
China
Prior art keywords
aerial vehicle
unmanned aerial
domain network
training
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110683465.1A
Other languages
Chinese (zh)
Inventor
陈怀新 (Chen Huaixin)
刘壁源 (Liu Biyuan)
黄周 (Huang Zhou)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China
Priority to CN202110683465.1A
Publication of CN113435286A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a transfer learning method for unmanned aerial vehicle small target monitoring, which comprises: taking open-source ground image data acquired by an unmanned aerial vehicle as training samples; traversing the training samples with an initialized RetinaNet network and training on two different image sizes to obtain the corresponding feature maps; aligning the corresponding feature maps by feature-level upsampling; obtaining a matching-point matrix for the aligned feature maps by adaptive feature-map matching based on anchor-box intersection-over-union (IoU); calculating the residual between key feature points of the aligned feature maps; and minimizing the residual by gradient backpropagation. Once the residual is minimized, the deep neural network has completed transfer training from the source domain to the target domain and can be used for unmanned aerial vehicle small target monitoring. The method converges well, improves small-target detection precision, reduces data preparation cost, and avoids an excessive increase in computation.

Description

Transfer learning method for unmanned aerial vehicle small target monitoring
Technical Field
The invention relates to the field of image processing, in particular to a transfer learning method for unmanned aerial vehicle small target monitoring.
Background
In recent years, target detection based on unmanned aerial vehicle remote sensing platforms has found wide application in fields such as ground target monitoring and city patrol, owing to the high real-time performance, wide coverage, strong maneuverability, and low cost of unmanned aerial vehicle platforms. However, while current mainstream target detection algorithms achieve high precision on general-purpose benchmark datasets such as COCO and ImageNet, their detection precision on unmanned aerial vehicle datasets lags far behind. This is because the imaging conditions of datasets such as COCO and ImageNet are good: the natural-image objects they contain are generally large, have discriminative features, and appear against simple backgrounds. In unmanned aerial vehicle images, by contrast, targets are denser and smaller and the background is more complex; small targets yield few discriminative features, which greatly increases the difficulty of high-precision target recognition.
To improve the detection precision of small targets, a multi-scale image pyramid is often used to enhance their feature representation. However, this method is computationally expensive: enlarging the image length and width by a factor of S increases the computation by a factor of S × S (for example, doubling each dimension quadruples the computation), which makes it difficult to apply under real engineering conditions.
Disclosure of Invention
Aiming at the above defects in the prior art, the transfer learning method for unmanned aerial vehicle small target monitoring provided by the invention solves the problems that traditional methods are computationally expensive and achieve low monitoring precision.
To achieve the purpose of the invention, the following technical scheme is adopted:
A transfer learning method for unmanned aerial vehicle small target monitoring is provided, comprising the following steps:
s1, taking the open source ground image data acquired by the unmanned aerial vehicle as a training sample;
s2, initializing a RetinaNet network with ResNet-50 as a backbone structure;
s3, traversing training samples through the initialized RetinaNet network, performing iterative training on the training samples by adopting the image size of 512 multiplied by 512, determining initial parameters of the target domain network, forming a characteristic diagram of the target domain network, performing iterative training on the training samples by adopting the image size of 1024 multiplied by 1024, determining final parameters of the source domain network, and forming the characteristic diagram of the source domain network;
s4, aligning the feature graph of the target domain network with the feature graph of the source domain network by utilizing a feature level up-sampling alignment mode;
s5, matching the level characteristic graphs of the aligned target domain network and the source domain network by using an adaptive characteristic graph matching mode based on anchor frame intersection and comparison to obtain a matching point matrix;
s6, calculating normalized local L2 norm loss, namely the residual error from the source domain network to the corresponding key feature point of the target domain network according to the aligned feature map of the target domain network and the feature map of the source domain network and the matching point matrix thereof;
s7, minimizing residual errors by utilizing gradient back propagation, and reducing the difference between the source domain network characteristic diagram and the target domain network characteristic diagram until the minimized residual errors are completed, namely completing deep neural network data migration training learning from the source domain to the target domain, and the method is used for migration learning of unmanned aerial vehicle small target monitoring.
Further: in step S1, the open-source ground image data is the VisDrone2018-DET dataset, with 6471 training samples.
Further, the specific method of step S2 comprises the following sub-steps:
S2-1, pre-training the ResNet-50 structure in the RetinaNet network on COCO data of more than 330,000 pictures, the ResNet-50 structure being a convolutional feature-extraction network;
S2-2, setting the training parameters of the RetinaNet network for stochastic gradient descent, with the initial learning rate, momentum, and weight decay factor set to 0.001, 0.9, and 0.01 respectively.
Further: the traversal in step S3 runs for 20 rounds, and the input of each iteration is a tensor batched from 4 images, i.e. the number of iterations per round is the number of training samples divided by 4. Across the 20 rounds, the learning rate of the first 1000 iterations rises linearly from 0.001 to 0.01; thereafter each iteration keeps the learning rate of the previous one, except that at the 6th and 14th traversal rounds it drops to 1/5 of the previous round's value.
Further: the feature-level upsampling alignment in step S4 uses nearest-neighbor interpolation.
Further, the specific method of step S5 comprises the following steps:
S5-1, extracting the anchor boxes and target boxes of the hierarchical feature map, both rectangular boxes represented by two vertex coordinates;
S5-2, dividing the intersection area of the two boxes by their union area to obtain their intersection-over-union (IoU);
S5-3, forming from these IoU values a matrix of the same size as the hierarchical feature map, i.e. the IoU distribution map, and calculating its mean and variance;
S5-4, adding the mean and variance of the IoU distribution map to obtain an adaptive threshold;
S5-5, judging whether each feature point of the hierarchical feature map exceeds the adaptive threshold: if so, it is a matching position with value 1; otherwise it is an ignored point with value 0;
S5-6, assembling the judgment results into a new matrix, the matching-point matrix.
Further, the specific method of step S6, which uses the feature-point matching loss as the cost function to calculate the L2 norm loss, is as follows:
according to the formula

$\mathrm{loss}_{\mathrm{trans}} = \frac{1}{M \times N} \sum_{i=1}^{M} \sum_{j=1}^{N} \mathrm{mask}(i,j) \cdot \big( S(i,j) - T(i,j) \big)^{2}$

the normalized local L2 norm loss loss_trans is obtained, where S(i, j) is the value at row i, column j of the aligned source domain network feature map, T(i, j) is the value at row i, column j of the aligned target domain network feature map, mask(i, j) is the matching-point matrix, M × N is the feature map size, i ∈ (1, 2, ..., M), and j ∈ (1, 2, ..., N).
The invention has the following beneficial effects: transfer learning is applied to sample data of the same source but different sizes; a pre-trained model is obtained from large-size images and used, via transfer learning, to guide the training of the small-scale model; combined with feature-map upsampling alignment and adaptive feature-map matching, this effectively improves the small-target detection precision of the small-scale model while avoiding an excessive increase in computation. Because the multi-scale images are obtained directly by sampling, data preparation cost is reduced while good performance is maintained. Pre-training on a large public image dataset and fine-tuning the model on a small amount of target data reduces the data volume required for convergence, so the model converges faster and achieves better detection results on the target dataset.
Drawings
FIG. 1 is a general flow chart of the present invention;
FIG. 2 is a schematic diagram of the algorithm architecture of the present invention;
FIG. 3 is a schematic diagram of the feature map matching in step S5 according to the present invention.
Detailed Description
The following description of the embodiments of the present invention is provided to help those skilled in the art understand the invention, but it should be understood that the invention is not limited to the scope of these embodiments. To those skilled in the art, various changes are possible without departing from the spirit and scope of the invention as defined in the appended claims, and everything produced using the inventive concept falls under the protection of the invention.
As shown in FIG. 1 and FIG. 2, the transfer learning method for unmanned aerial vehicle small target monitoring comprises the following steps:
S1, taking open-source ground image data acquired by an unmanned aerial vehicle as training samples;
S2, initializing a RetinaNet network with ResNet-50 as the backbone;
S3, traversing the training samples with the initialized RetinaNet network: iteratively training on images of size 512 × 512 to determine the initial parameters of the target domain network and form its feature map, and iteratively training on images of size 1024 × 1024 to determine the final parameters of the source domain network and form its feature map;
S4, aligning the feature map of the target domain network with the feature map of the source domain network by feature-level upsampling;
S5, matching the aligned hierarchical feature maps of the target domain network and the source domain network by adaptive feature-map matching based on anchor-box intersection-over-union (IoU) to obtain a matching-point matrix;
S6, from the aligned feature maps of the target domain network and the source domain network and their matching-point matrix, calculating the normalized local L2 norm loss, i.e. the residual between corresponding key feature points of the source domain network and the target domain network;
S7, minimizing the residual by gradient backpropagation to reduce the difference between the source domain and target domain feature maps; once the residual is minimized, the deep neural network transfer training from the source domain to the target domain is complete and the network can be used for unmanned aerial vehicle small target monitoring.
In step S1, the open-source ground image data is the VisDrone2018-DET dataset, with 6471 training samples.
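For concreteness, a minimal sketch of step S1 in PyTorch follows; the directory layout (an `images/` folder of .jpg files) and the omission of annotation parsing are assumptions for illustration, not details specified by the patent.

```python
import os

from PIL import Image
from torch.utils.data import Dataset

class VisDroneImages(Dataset):
    """Wraps the VisDrone2018-DET training images (6471 samples) as a
    dataset. Annotation parsing is omitted; the directory layout below
    is a hypothetical local arrangement."""

    def __init__(self, root, transform=None):
        img_dir = os.path.join(root, "images")  # assumed layout
        self.paths = sorted(
            os.path.join(img_dir, f)
            for f in os.listdir(img_dir) if f.endswith(".jpg"))
        self.transform = transform

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        img = Image.open(self.paths[idx]).convert("RGB")
        return self.transform(img) if self.transform else img
```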
The specific method of step S2 comprises the following sub-steps:
S2-1, pre-training the ResNet-50 structure in the RetinaNet network on COCO data of more than 330,000 pictures, the ResNet-50 structure being a convolutional feature-extraction network;
S2-2, setting the training parameters of the RetinaNet network for stochastic gradient descent, with the initial learning rate, momentum, and weight decay factor set to 0.001, 0.9, and 0.01 respectively.
The traversal in step S3 runs for 20 rounds, and the input of each iteration is a tensor batched from 4 images, i.e. the number of iterations per round is the number of training samples divided by 4. Across the 20 rounds, the learning rate of the first 1000 iterations rises linearly from 0.001 to 0.01; thereafter each iteration keeps the learning rate of the previous one, except that at the 6th and 14th traversal rounds it drops to 1/5 of the previous round's value.
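The schedule above can be expressed as a small helper; a sketch under the stated settings (warm-up over the first 1000 iterations, a 1/5 drop at the 6th and 14th rounds, with epochs counted from zero):

```python
def learning_rate(global_iter: int, epoch: int) -> float:
    """Per-iteration learning rate for the schedule described above."""
    if global_iter < 1000:
        # linear warm-up from 0.001 to 0.01 over the first 1000 iterations
        return 0.001 + (0.01 - 0.001) * global_iter / 1000
    lr = 0.01
    if epoch >= 5:    # from the 6th traversal round: 1/5 of the previous value
        lr /= 5
    if epoch >= 13:   # from the 14th traversal round: 1/5 again
        lr /= 5
    return lr

# With 6471 training samples and batches of 4 images,
# each traversal round has ceil(6471 / 4) = 1618 iterations.
ITERS_PER_EPOCH = -(-6471 // 4)
```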
The feature-level upsampling alignment in step S4 uses nearest-neighbor interpolation.
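Step S4 then reduces to a single interpolation call; a sketch (tensor and function names are illustrative):

```python
import torch
import torch.nn.functional as F

def align_feature_maps(target_fmap: torch.Tensor,
                       source_fmap: torch.Tensor) -> torch.Tensor:
    """S4: upsample the target-domain feature map (from the 512 x 512
    input) to the spatial size of the source-domain feature map (from
    the 1024 x 1024 input) by nearest-neighbor interpolation."""
    return F.interpolate(target_fmap,
                         size=source_fmap.shape[-2:],
                         mode="nearest")
```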
As shown in FIG. 3, the specific method of step S5 comprises the following steps:
S5-1, extracting the anchor boxes and target boxes of the hierarchical feature map, both rectangular boxes represented by two vertex coordinates;
S5-2, dividing the intersection area of the two boxes by their union area to obtain their intersection-over-union (IoU);
S5-3, forming from these IoU values a matrix of the same size as the hierarchical feature map, i.e. the IoU distribution map, and calculating its mean and variance;
S5-4, adding the mean and variance of the IoU distribution map to obtain an adaptive threshold;
S5-5, judging whether each feature point of the hierarchical feature map exceeds the adaptive threshold: if so, it is a matching position with value 1; otherwise it is an ignored point with value 0;
S5-6, assembling the judgment results into a new matrix, the matching-point matrix.
In FIG. 3, the real label is the target box, IOU calculation is the intersection-over-union calculation, Mean denotes the average of the IoU distribution map, std denotes its variance, λ_thresh denotes the adaptive threshold, and match_points denotes the matching-point matrix.
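A sketch of steps S5-1 through S5-6 follows; `box_iou` is torchvision's intersection-over-union, and mapping one anchor to each feature-map location in row-major order is an assumption made for illustration.

```python
import torch
from torchvision.ops import box_iou

def matching_point_matrix(anchor_boxes: torch.Tensor,
                          target_boxes: torch.Tensor,
                          fmap_h: int, fmap_w: int) -> torch.Tensor:
    # S5-1/S5-2: both box sets are (N, 4) tensors of two-corner
    # coordinates (x1, y1, x2, y2); box_iou divides each intersection
    # area by the corresponding union area.
    ious = box_iou(anchor_boxes, target_boxes)  # (num_anchors, num_targets)

    # S5-3: IoU distribution map of the same size as the hierarchical
    # feature map -- best IoU per anchor, one anchor per grid location.
    iou_map = ious.max(dim=1).values.reshape(fmap_h, fmap_w)

    # S5-4: adaptive threshold lambda_thresh = Mean + std of the map
    # (the steps above refer to the second term as the variance).
    lambda_thresh = iou_map.mean() + iou_map.std()

    # S5-5/S5-6: value 1 at matching positions, 0 at ignored points.
    return (iou_map > lambda_thresh).float()
```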
The specific method of step S6, which uses the feature-point matching loss as the cost function to calculate the L2 norm loss, is as follows:
according to the formula

$\mathrm{loss}_{\mathrm{trans}} = \frac{1}{M \times N} \sum_{i=1}^{M} \sum_{j=1}^{N} \mathrm{mask}(i,j) \cdot \big( S(i,j) - T(i,j) \big)^{2}$

the normalized local L2 norm loss loss_trans is obtained, where S(i, j) is the value at row i, column j of the aligned source domain network feature map, T(i, j) is the value at row i, column j of the aligned target domain network feature map, mask(i, j) is the matching-point matrix, M × N is the feature map size, i ∈ (1, 2, ..., M), and j ∈ (1, 2, ..., N).
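The loss itself is a few lines; a sketch matching the formula above, with S, T, and mask as 2-D tensors:

```python
import torch

def transfer_loss(S: torch.Tensor, T: torch.Tensor,
                  mask: torch.Tensor) -> torch.Tensor:
    """S6: normalized local L2 norm loss -- the masked squared residual
    between the aligned source-domain feature map S and the aligned
    target-domain feature map T, normalized by the M x N map size."""
    M, N = S.shape[-2:]
    return (mask * (S - T) ** 2).sum() / (M * N)
```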
In a specific embodiment of the present invention, experimental data show that, for the image pyramid method of improving small-target detection with large-scale images, a model trained on 1024 × 1024 images improves small-target precision by 339.2% relative to a model trained on 512 × 512 images, but the computation grows to 4 times the original (an increase of 300%), because the image pyramid method upsamples the image input by a factor of 4 in area. In contrast, the transfer training method provided by the invention upsamples the target-domain feature map by a factor of 4 and aligns it to the source-domain network, improving small-target detection precision by 114.2% with only an 8.9% increase in computation. The precision-computation gain ratio (small-target precision improvement ratio divided by computation increase ratio) of this patent is 12.83, versus 1.13 for the image pyramid method.
The method applies transfer learning to sample data of the same source but different sizes: a pre-trained model is obtained from large-size images and used, via transfer learning, to guide the training of the small-scale model, combined with feature-map upsampling alignment and adaptive feature-map matching, which effectively improves the small-target detection precision of the small-scale model while avoiding an excessive increase in computation.
Because the multi-scale images are obtained directly by sampling, data preparation cost is reduced while good performance is maintained. Pre-training on a large public image dataset and fine-tuning the model on a small amount of target data reduces the data volume required for convergence, so the model converges faster and achieves better detection results on the target dataset.
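Pulling the sketches above together, one transfer-training step (S3 through S7) might look as follows; `backbone_features` is a hypothetical helper returning one hierarchical feature map, and the source-domain network is held fixed while the residual is backpropagated through the target-domain network:

```python
import torch

def transfer_step(source_net, target_net, images_1024, images_512,
                  anchor_boxes, target_boxes, optimizer):
    # Source-domain feature map (1024 x 1024 input); no gradients needed.
    with torch.no_grad():
        S = backbone_features(source_net, images_1024)

    # Target-domain feature map (512 x 512 input).
    T = backbone_features(target_net, images_512)

    T_aligned = align_feature_maps(T, S)              # S4: 4x nearest upsampling
    mask = matching_point_matrix(anchor_boxes, target_boxes,
                                 S.shape[-2], S.shape[-1])  # S5
    loss = transfer_loss(S, T_aligned, mask)          # S6: normalized local L2 loss

    optimizer.zero_grad()
    loss.backward()                                   # S7: gradient backpropagation
    optimizer.step()                                  # minimize the residual
    return loss.item()
```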

Claims (7)

1. A transfer learning method for unmanned aerial vehicle small target monitoring, characterized by comprising the following steps:
S1, taking open-source ground image data acquired by an unmanned aerial vehicle as training samples;
S2, initializing a RetinaNet network with ResNet-50 as the backbone;
S3, traversing the training samples with the initialized RetinaNet network: iteratively training on images of size 512 × 512 to determine the initial parameters of the target domain network and form its feature map, and iteratively training on images of size 1024 × 1024 to determine the final parameters of the source domain network and form its feature map;
S4, aligning the feature map of the target domain network with the feature map of the source domain network by feature-level upsampling;
S5, matching the aligned hierarchical feature maps of the target domain network and the source domain network by adaptive feature-map matching based on anchor-box intersection-over-union (IoU) to obtain a matching-point matrix;
S6, from the aligned feature maps of the target domain network and the source domain network and their matching-point matrix, calculating the normalized local L2 norm loss, i.e. the residual between corresponding key feature points of the source domain network and the target domain network;
S7, minimizing the residual by gradient backpropagation to reduce the difference between the source domain and target domain feature maps; once the residual is minimized, the deep neural network transfer training from the source domain to the target domain is complete and the network can be used for unmanned aerial vehicle small target monitoring.
2. The transfer learning method for unmanned aerial vehicle small target monitoring according to claim 1, characterized in that: in step S1, the acquired open-source ground image data is the VisDrone2018-DET dataset, with 6471 training samples.
3. The transfer learning method for unmanned aerial vehicle small target monitoring according to claim 1, characterized in that the specific method of step S2 comprises the following sub-steps:
S2-1, pre-training the ResNet-50 structure in the RetinaNet network on COCO data of more than 330,000 pictures, the ResNet-50 structure being a convolutional feature-extraction network;
S2-2, setting the training parameters of the RetinaNet network for stochastic gradient descent, with the initial learning rate, momentum, and weight decay factor set to 0.001, 0.9, and 0.01 respectively.
4. The transfer learning method for unmanned aerial vehicle small target monitoring according to claim 1, characterized in that: the traversal in step S3 runs for 20 rounds, and the input of each iteration is a tensor batched from 4 images, i.e. the number of iterations per round is the number of training samples divided by 4; across the 20 rounds, the learning rate of the first 1000 iterations rises linearly from 0.001 to 0.01, and thereafter each iteration keeps the learning rate of the previous one, except that at the 6th and 14th traversal rounds it drops to 1/5 of the previous round's value.
5. The transfer learning method for unmanned aerial vehicle small target monitoring according to claim 1, characterized in that: the feature-level upsampling alignment in step S4 uses nearest-neighbor interpolation.
6. The transfer learning method for unmanned aerial vehicle small target monitoring according to claim 1, characterized in that the specific method of step S5 comprises the following steps:
S5-1, extracting the anchor boxes and target boxes of the hierarchical feature map, both rectangular boxes represented by two vertex coordinates;
S5-2, dividing the intersection area of the two boxes by their union area to obtain their intersection-over-union (IoU);
S5-3, forming from these IoU values a matrix of the same size as the hierarchical feature map, i.e. the IoU distribution map, and calculating its mean and variance;
S5-4, adding the mean and variance of the IoU distribution map to obtain an adaptive threshold;
S5-5, judging whether each feature point of the hierarchical feature map exceeds the adaptive threshold: if so, it is a matching position with value 1; otherwise it is an ignored point with value 0;
S5-6, assembling the judgment results into a new matrix, the matching-point matrix.
7. The transfer learning method for unmanned aerial vehicle small target monitoring according to claim 1, characterized in that the specific method of step S6, which uses the feature-point matching loss as the cost function to calculate the L2 norm loss, is as follows:
according to the formula

$\mathrm{loss}_{\mathrm{trans}} = \frac{1}{M \times N} \sum_{i=1}^{M} \sum_{j=1}^{N} \mathrm{mask}(i,j) \cdot \big( S(i,j) - T(i,j) \big)^{2}$

the normalized local L2 norm loss loss_trans is obtained, where S(i, j) is the value at row i, column j of the aligned source domain network feature map, T(i, j) is the value at row i, column j of the aligned target domain network feature map, mask(i, j) is the matching-point matrix, M × N is the feature map size, i ∈ (1, 2, ..., M), and j ∈ (1, 2, ..., N).
CN202110683465.1A 2021-06-21 2021-06-21 Transfer learning method for unmanned aerial vehicle small target monitoring Pending CN113435286A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110683465.1A CN113435286A (en) 2021-06-21 2021-06-21 Transfer learning method for unmanned aerial vehicle small target monitoring

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110683465.1A CN113435286A (en) 2021-06-21 2021-06-21 Transfer learning method for unmanned aerial vehicle small target monitoring

Publications (1)

Publication Number Publication Date
CN113435286A (en) 2021-09-24

Family

ID=77756685

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110683465.1A Pending CN113435286A (en) 2021-06-21 2021-06-21 Transfer learning method for unmanned aerial vehicle small target monitoring

Country Status (1)

Country Link
CN (1) CN113435286A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118037078A (en) * 2024-04-12 2024-05-14 国网浙江省电力有限公司湖州供电公司 Substation carbon emission calculation data migration method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111860236A (en) * 2020-07-06 2020-10-30 中国科学院空天信息创新研究院 Small sample remote sensing target detection method and system based on transfer learning
CN112926547A (en) * 2021-04-13 2021-06-08 北京航空航天大学 Small sample transfer learning method for classifying and identifying aircraft electric signals

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
BI-YUAN LIU et al., "ZoomInNet: A Novel Small Object Detector in Drone Images with Cross-Scale Knowledge Distillation", Remote Sensing *

Similar Documents

Publication Title
CN110232394B (en) Multi-scale image semantic segmentation method
CN109800628B (en) Network structure for enhancing detection performance of SSD small-target pedestrians and detection method
CN111860386B (en) Video semantic segmentation method based on ConvLSTM convolutional neural network
CN111523546B (en) Image semantic segmentation method, system and computer storage medium
CN110349087B (en) RGB-D image high-quality grid generation method based on adaptive convolution
CN110956126A (en) Small target detection method combined with super-resolution reconstruction
CN113076871A (en) Fish shoal automatic detection method based on target shielding compensation
CN106203625A (en) A kind of deep-neural-network training method based on multiple pre-training
Chen et al. Single image super-resolution using deep CNN with dense skip connections and inception-resnet
CN112837320B (en) Remote sensing image semantic segmentation method based on parallel hole convolution
CN110245587B (en) Optical remote sensing image target detection method based on Bayesian transfer learning
CN110991444A (en) Complex scene-oriented license plate recognition method and device
CN110363160A (en) A kind of Multi-lane Lines recognition methods and device
CN110555461A (en) scene classification method and system based on multi-structure convolutional neural network feature fusion
CN115578615A (en) Night traffic sign image detection model establishing method based on deep learning
CN116524189A (en) High-resolution remote sensing image semantic segmentation method based on coding and decoding indexing edge characterization
CN113435286A (en) Transfer learning method for unmanned aerial vehicle small target monitoring
CN113989287A (en) Urban road remote sensing image segmentation method and device, electronic equipment and storage medium
CN117496158A (en) Semi-supervised scene fusion improved MBI contrast learning and semantic segmentation method
CN108961270A (en) A kind of Bridge Crack Image Segmentation Model based on semantic segmentation
CN107564013A (en) Merge the scene cut modification method and system of local message
CN116363610A (en) Improved YOLOv 5-based aerial vehicle rotating target detection method
CN113256528B (en) Low-illumination video enhancement method based on multi-scale cascade depth residual error network
CN113591614B (en) Remote sensing image road extraction method based on close-proximity spatial feature learning
CN112598663B (en) Grain pest detection method and device based on visual saliency

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination