CN115346094B - Camouflage target detection method based on main body region guidance - Google Patents

Camouflage target detection method based on main body region guidance

Info

Publication number
CN115346094B
CN115346094B (application CN202211037831.7A)
Authority
CN
China
Prior art date
Legal status
Active
Application number
CN202211037831.7A
Other languages
Chinese (zh)
Other versions
CN115346094A (en)
Inventor
吴智聪
周晓飞
张继勇
李世锋
周振
何帆
Current Assignee
China Power Data Service Co ltd
Hangzhou Dianzi University
Original Assignee
China Power Data Service Co ltd
Hangzhou Dianzi University
Priority date
Filing date
Publication date
Application filed by China Power Data Service Co ltd and Hangzhou Dianzi University
Priority claimed from application CN202211037831.7A
Publication of CN115346094A
Application granted
Publication of CN115346094B
Legal status: Active

Classifications

    • G06V10/806 — Fusion, i.e. combining data from various sources at the sensor, preprocessing, feature extraction or classification level, of extracted features
    • G06N3/08 — Computing arrangements based on biological models; neural networks; learning methods
    • G06V10/774 — Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V10/82 — Image or video recognition or understanding using neural networks
    • G06V2201/07 — Indexing scheme relating to image or video recognition or understanding; target detection
    • Y02T10/40 — Engine management systems


Abstract

The invention relates to a camouflage target detection method based on main body region guidance, which comprises three steps: training image preprocessing, camouflage target prediction network construction, and camouflage target prediction network training. First, data augmentation is performed during training image preprocessing; then the camouflage target prediction network is built; finally, the network is trained on the training set images. The proposed network model can fully and effectively exploit the main body region guidance information to achieve accurate prediction of camouflage targets in natural images.

Description

Camouflage target detection method based on main body region guidance
Technical Field
The invention relates to a camouflage target detection method based on main body region guidance, and belongs to the technical field of computer vision.
Background
With the continuous progress of deep learning technology, many tasks in the field of computer vision have seen new developments. Camouflage Object Detection (COD) is a popular research direction in computer vision; the task is to accurately detect the position of camouflage targets contained in an image and to produce a binary segmentation of the target region. Camouflage target detection differs considerably from the conventional object detection task: the camouflage target is often well hidden in the image, its appearance overlaps strongly with the background environment, and it does not immediately capture human visual attention. Camouflage target detection has been widely applied in many related research fields, such as camouflaged military target detection, agricultural crop pest classification, and medical image segmentation, and the task has attracted increasing attention.
Traditional camouflage target detection models mostly adopt detection methods based on hand-crafted features, such as methods based on color and texture features, methods based on frequency-domain features, and methods based on geometric gradient features. However, these methods have obvious accuracy shortcomings, their detection accuracy is easily affected by factors such as noise and illumination, and their overall capability is inferior to that of deep learning based methods.
In the prior art, some deep learning based camouflage target detection models simply transplant the detection model of the conventional object detection task into the camouflage target detection task, without adaptations suited to the color characteristics of camouflage targets; their ability to localize camouflage targets is weak, and they frequently confuse salient targets with camouflage targets.
The Encoder-Decoder structure in neural network models was first proposed for the medical image segmentation task; owing to its excellent segmentation performance it has been widely adopted and has become the network structure most commonly used in image segmentation tasks.
Disclosure of Invention
The invention aims to provide a camouflage target detection method based on main body region guidance that addresses the defects of existing methods.
In order to achieve the above purpose, the technical scheme of the invention is as follows:
a camouflage target detection method based on main body region guidance comprises the following steps:
step one, training image preprocessing: the training data set adopts the COD10K data set and the CAMO data set; random flipping and random cropping operations are performed on the input training images, and a distance transform algorithm is used to generate the label map of the main body region, which serves as a supervision label in the subsequent network training;
step two, building the camouflage target prediction network: the camouflage target prediction network adopts an Encoder-Decoder structure and comprises an encoding part based on a Res2Net-50 backbone network, a decoding part comprising a main body region analysis module and a feature fusion module, and a prediction supervision part;
in the encoding part, the preprocessed image is input into the Res2Net-50 backbone network to obtain the convolutional feature map of each encoding level, with differing channel numbers and sizes; each encoding-level convolutional feature map is input into a convolution block to compress the channel dimension, and after compression the feature maps of all levels, having the same number of channels, are passed to the decoding part;
in the decoding part, the network mainly comprises the main body region analysis module and the feature fusion module: the main body region analysis module receives the convolutional feature map output by the encoding part and the feature map output by the previous-level feature fusion module, and predicts the main body region of the camouflage target by residual fusion; the feature fusion module receives the feature map output by the main body region analysis module, realizes feature fusion via self-attention, and predicts the camouflage target;
in the prediction supervision part, the feature maps output by the main body region analysis module and the feature fusion module of the decoding part are input into a convolutional layer, and the final prediction map sequence is obtained by upsampling and Softmax operations;
step three, training of the camouflage target prediction network: the prediction supervision part of the network outputs a sequence comprising 5 main body region prediction maps and 5 camouflage target prediction maps, which are supervised during training by the main body region label map and the target label map; the supervision of the main body region prediction maps against the label map adopts a BCE loss function, and the supervision of the camouflage target prediction maps against the label map adopts a mixed BCE and IOU loss function.
The network adopts a step-decay learning rate strategy; the initial learning rate and the decay coefficient are 0.0001 and 0.5 respectively, and the batch size is set to 8. The network is optimized using a stochastic gradient descent algorithm with the momentum coefficient set to 0.9.
Compared with the prior art, the invention has the beneficial effects that:
the method has the main advantages that: a main body region analysis module and a feature fusion module of the decoding part. The method designs a main body region analysis module for effectively extracting aiming at a shallow characteristic diagram of which the coding part contains rich information, and guides the detection of a camouflage target by utilizing the prediction information of the main body region through a characteristic fusion module. The network model provided by the method can fully and effectively utilize the main body region guiding information to realize accurate prediction of the camouflage target in the natural image.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions of the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings described below show only some embodiments of the invention, and that a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of the network architecture of the camouflage target detection method based on main body region guidance according to the present invention;
Fig. 2 is a schematic diagram of a prediction result of the camouflage target detection method based on main body region guidance according to the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
As shown in fig. 1, a camouflage target detection method based on main body region guidance includes the following steps:
step one, training image preprocessing: the training data set adopts the COD10K data set and the CAMO data set; random flipping and random cropping operations are performed on the input training images, and a distance transform (Distance Transformation) algorithm is used to generate the label map of the main body region, which serves as a supervision label in the subsequent network training;
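The distance-transform labelling described in step one can be sketched as follows. This is an illustrative sketch only: the patent does not specify the exact transform variant or the cut-off used to delimit the main body region, so the brute-force Euclidean transform and the 0.5 threshold below are assumptions (in practice a library routine such as scipy's distance transform would be used).

```python
import numpy as np

def distance_transform(mask: np.ndarray) -> np.ndarray:
    """Brute-force Euclidean distance from each foreground pixel to the
    nearest background pixel (adequate for small illustrative masks)."""
    h, w = mask.shape
    bg = np.argwhere(mask == 0)                  # background coordinates
    dist = np.zeros((h, w), dtype=np.float64)
    for y, x in np.argwhere(mask == 1):
        d = np.sqrt(((bg - (y, x)) ** 2).sum(axis=1))
        dist[y, x] = d.min() if len(bg) else 0.0
    return dist

def body_region_label(mask: np.ndarray, thresh: float = 0.5) -> np.ndarray:
    """Keep only the 'body' of the target: pixels whose normalised distance
    to the background exceeds `thresh` (an assumed cut-off)."""
    dist = distance_transform(mask)
    if dist.max() > 0:
        dist = dist / dist.max()
    return (dist > thresh).astype(np.uint8)

# A 7x7 square target inside a 9x9 image: the body label shrinks it
# toward its centre, giving the "main body region" supervision signal.
mask = np.zeros((9, 9), dtype=np.uint8)
mask[1:8, 1:8] = 1
body = body_region_label(mask)
```

The body label is always a subset of the full target mask, which is what lets it guide the network toward the most reliable interior evidence of the camouflaged object.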
step two, building the camouflage target prediction network: the camouflage target prediction network adopts an Encoder-Decoder (encoding-decoding) structure and comprises an encoding part based on a Res2Net-50 backbone network, a decoding part comprising a main body region analysis module and a feature fusion module, and a prediction supervision part;
in the encoding part, the preprocessed image is input into the Res2Net-50 backbone network to obtain the convolutional feature map of each encoding level, with differing channel numbers and sizes; each encoding-level convolutional feature map is input into a convolution block to compress the channel dimension, and after compression the feature maps of all levels, having the same number of channels, are passed to the decoding part;
IF_i = ReLU(BN(Conv_{1+3}(Encoder_i)))

wherein Encoder_i represents the feature map of the i-th level encoding block, IF_i represents the compressed feature map of the i-th level encoding block, Conv_{1+3}() represents a 1×1 convolutional layer followed by a 3×3 convolutional layer, BN() represents a batch normalization operation, and ReLU() represents the ReLU activation function; after compression, the feature maps of all levels, having the same number of channels, are passed to the decoding part.
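The channel-compression block IF_i = ReLU(BN(Conv_{1+3}(Encoder_i))) can be sketched numerically as follows; the channel sizes, the random weights, and the 1×1-then-3×3 ordering are illustrative assumptions, not the patent's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1x1(x, w):                       # x: (C_in, H, W), w: (C_out, C_in)
    return np.einsum('oc,chw->ohw', w, x)

def conv3x3(x, w):                       # w: (C_out, C_in, 3, 3), padding 1
    c_in, h, wd = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros((w.shape[0], h, wd))
    for i in range(3):                   # accumulate the 9 kernel taps
        for j in range(3):
            out += np.einsum('oc,chw->ohw', w[:, :, i, j],
                             xp[:, i:i + h, j:j + wd])
    return out

def bn_relu(x, eps=1e-5):                # per-channel normalization + ReLU
    mu = x.mean(axis=(1, 2), keepdims=True)
    var = x.var(axis=(1, 2), keepdims=True)
    return np.maximum((x - mu) / np.sqrt(var + eps), 0.0)

# Compress an assumed 256-channel encoder feature map to 64 channels,
# so every decoding level receives maps with the same channel count.
enc = rng.standard_normal((256, 16, 16))
w1 = rng.standard_normal((64, 256)) * 0.01
w3 = rng.standard_normal((64, 64, 3, 3)) * 0.01
feat = bn_relu(conv3x3(conv1x1(enc, w1), w3))
```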
In the decoding part, the network mainly comprises the main body region analysis module and the feature fusion module. The main body region analysis module receives the convolutional feature map output by the encoding part and the feature map output by the previous-level feature fusion module, and predicts the main body region of the camouflage target by residual fusion, defined as follows:

BF_i = Conv_3(Cat(IF_i, UP(PF_{i-1}))) + IF_i

wherein PF_{i-1} represents the feature map output by the (i-1)-th level feature fusion module, BF_i represents the feature map output by the i-th level main body region analysis module, Conv_3() represents a 3×3 convolutional layer, UP() represents a bilinear interpolation upsampling operation, and Cat() represents a concatenation (Concat) operation; each convolutional layer is followed by a batch normalization operation and a ReLU activation function.
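The residual-fusion idea of the main body region analysis module can be sketched as below; the exact layer arrangement inside the module is not fully specified in the text, so this minimal version (nearest-neighbour upsampling standing in for bilinear UP(), a single 1×1 convolution with random weights) is an assumption.

```python
import numpy as np

def upsample2x(x):                       # nearest-neighbour stand-in for
    return x.repeat(2, axis=1).repeat(2, axis=2)   # the bilinear UP()

def conv1x1(x, w):                       # x: (C_in, H, W), w: (C_out, C_in)
    return np.einsum('oc,chw->ohw', w, x)

rng = np.random.default_rng(1)
IF_i = rng.standard_normal((64, 16, 16))       # compressed encoder feature
PF_prev = rng.standard_normal((64, 8, 8))      # previous-level fusion output

# Concatenate the upsampled deeper prediction with the current-level
# feature, convolve, and add the current feature back as a residual.
cat = np.concatenate([IF_i, upsample2x(PF_prev)], axis=0)   # (128, 16, 16)
w = rng.standard_normal((64, 128)) * 0.01
BF_i = conv1x1(cat, w) + IF_i                  # residual fusion
```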
The feature fusion module receives the feature map output by the main body region analysis module, realizes feature fusion via self-attention, and accurately predicts the camouflage target, defined as follows:
PF_i = SA(Conv_3(BF_i))
where SA () represents a self-attention operation and contains a batch normalization operation and a ReLU activation function after the convolutional layer.
Finally, in the prediction supervision part, the feature maps output by the main body region analysis module and the feature fusion module of the decoding part are input into a convolutional layer, and the final prediction map sequence is obtained by upsampling and Softmax operations, as shown in fig. 2.
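The prediction supervision head (convolution, upsampling, Softmax) can be sketched as follows; the two-class output and the layer shapes are illustrative assumptions.

```python
import numpy as np

def conv1x1(x, w):                       # x: (C_in, H, W), w: (C_out, C_in)
    return np.einsum('oc,chw->ohw', w, x)

def upsample2x(x):                       # nearest-neighbour upsampling
    return x.repeat(2, axis=1).repeat(2, axis=2)

def softmax_channels(x):                 # Softmax across the class channels
    e = np.exp(x - x.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)

rng = np.random.default_rng(3)
feat = rng.standard_normal((64, 8, 8))        # a decoder feature map
w = rng.standard_normal((2, 64)) * 0.1        # 2 classes: target / background
pred = softmax_channels(upsample2x(conv1x1(feat, w)))   # (2, 16, 16)
```

Each spatial position of `pred` is a probability distribution over the two classes, so thresholding the target channel yields the binary segmentation.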
Step three, training of the camouflage target prediction network: the prediction supervision part of the network outputs a sequence comprising 5 main body region prediction maps and 5 camouflage target prediction maps, which are supervised during training by the main body region label map and the target label map; the supervision of the main body region prediction maps against the label map adopts a BCE loss function, and the supervision of the camouflage target prediction maps against the label map adopts a mixed BCE and IOU loss function, defined as follows:

Loss = Σ_{i=1}^{5} [Loss_BCE(BP_i, BL) + Loss_{BCE+IOU}(CP_i, GT)]

wherein BP_i represents the main body region prediction map output by the i-th level main body region analysis module corresponding to the prediction supervision part, CP_i represents the camouflage target prediction map output by the i-th level feature fusion module corresponding to the prediction supervision part, BL and GT represent the main body region label map and the target label map respectively, Loss_BCE() represents the BCE loss function, Loss_{BCE+IOU}() represents the mixed BCE and IOU loss function, and Loss represents the aggregate loss value.
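The BCE and mixed BCE + IOU supervision losses can be sketched as follows; equal weighting of the two terms in the mixture is an assumption, as the text does not give mixing coefficients.

```python
import numpy as np

def bce_loss(pred, gt, eps=1e-7):
    """Binary cross-entropy over a probability map."""
    pred = np.clip(pred, eps, 1 - eps)
    return -(gt * np.log(pred) + (1 - gt) * np.log(1 - pred)).mean()

def iou_loss(pred, gt, eps=1e-7):
    """1 - soft IoU between the prediction and the label map."""
    inter = (pred * gt).sum()
    union = (pred + gt - pred * gt).sum()
    return 1.0 - (inter + eps) / (union + eps)

def bce_iou_loss(pred, gt):
    # Equal weighting of the two terms is an assumption.
    return bce_loss(pred, gt) + iou_loss(pred, gt)

# A toy label map with a confident, mostly-correct prediction and a
# confidently wrong one: the mixed loss should rank them accordingly.
gt = np.zeros((8, 8)); gt[2:6, 2:6] = 1.0
good = np.where(gt == 1, 0.9, 0.1)
bad = np.where(gt == 1, 0.1, 0.9)
```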
The network adopts a step-decay learning rate strategy; the initial learning rate and the decay coefficient are 0.0001 and 0.5 respectively, and the batch size is set to 8. The network is optimized using a stochastic gradient descent algorithm with the momentum coefficient set to 0.9.
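The stated step-decay schedule (initial learning rate 0.0001, decay coefficient 0.5) can be sketched as below; the decay interval is not given in the text, so the 20-epoch step is an assumption.

```python
def step_decay_lr(epoch: int, base_lr: float = 1e-4,
                  gamma: float = 0.5, step: int = 20) -> float:
    """Multiply the base learning rate by `gamma` every `step` epochs
    (the step interval is an assumed value)."""
    return base_lr * (gamma ** (epoch // step))

# Learning rate at epochs 0, 19, 20 and 40 under this schedule:
lrs = [step_decay_lr(e) for e in (0, 19, 20, 40)]
```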
The main body region analysis module predicts the main body region by residual connection, using the convolutional feature map output by the encoding part and the feature map output by the previous-level feature fusion module.
The feature fusion module receives the feature map output by the main body region analysis module, realizes feature fusion via self-attention, and predicts the camouflage target.
The embodiments of the present invention have been described in detail above with reference to the accompanying drawings, but the present invention is not limited to the described embodiments. It will be apparent to those skilled in the art that various changes, modifications, substitutions and alterations can be made to these embodiments without departing from the principles and spirit of the invention, and yet fall within the scope of the invention.

Claims (4)

1. A camouflage target detection method based on main body region guidance, characterized in that the method comprises the following steps:
step one, training image preprocessing: the training data set adopts the COD10K data set and the CAMO data set; random flipping and random cropping operations are performed on the input training images, and a distance transform algorithm is used to generate the label map of the main body region, which serves as a supervision label in the subsequent network training;
step two, building the camouflage target prediction network: the camouflage target prediction network adopts an Encoder-Decoder structure and comprises an encoding part based on a Res2Net-50 backbone network, a decoding part comprising a main body region analysis module and a feature fusion module, and a prediction supervision part;
in the encoding part, the preprocessed image is input into the Res2Net-50 backbone network to obtain the convolutional feature map of each encoding level, with differing channel numbers and sizes; each encoding-level convolutional feature map is input into a convolution block to compress the channel dimension, and after compression the feature maps of all levels, having the same number of channels, are passed to the decoding part;
IF_i = ReLU(BN(Conv_{1+3}(Encoder_i)))

wherein Encoder_i represents the feature map of the i-th level encoding block, IF_i represents the compressed feature map of the i-th level encoding block, Conv_{1+3}() represents a 1×1 convolutional layer followed by a 3×3 convolutional layer, BN() represents a batch normalization operation, and ReLU() represents the ReLU activation function; after compression, the feature maps of all levels, having the same number of channels, are passed to the decoding part;
in the decoding part, the network mainly comprises the main body region analysis module and the feature fusion module; the main body region analysis module receives the convolutional feature map output by the encoding part and the feature map output by the previous-level feature fusion module, and predicts the main body region of the camouflage target by residual fusion, defined as follows:

BF_i = Conv_3(Cat(IF_i, UP(PF_{i-1}))) + IF_i

wherein PF_{i-1} represents the feature map output by the (i-1)-th level feature fusion module, BF_i represents the feature map output by the i-th level main body region analysis module, Conv_3() represents a 3×3 convolutional layer, UP() represents a bilinear interpolation upsampling operation, and Cat() represents a concatenation operation; each convolutional layer is followed by a batch normalization operation and a ReLU activation function;
the feature fusion module receives the feature map output by the main body region analysis module, realizes feature fusion via self-attention, and accurately predicts the camouflage target, defined as follows:
PF_i = SA(Conv_3(BF_i))
where SA () represents a self-attention operation and contains a bulk normalization operation and a ReLU activation function after the convolutional layer;
finally, in the prediction supervision part, the feature maps output by the main body region analysis module and the feature fusion module of the decoding part are input into a convolutional layer, and the final prediction map sequence is obtained by upsampling and Softmax operations;
step three, training of the camouflage target prediction network: the prediction supervision part of the network outputs a sequence comprising 5 main body region prediction maps and 5 camouflage target prediction maps, which are supervised during training by the main body region label map and the target label map; the supervision of the main body region prediction maps against the label map adopts a BCE loss function, and the supervision of the camouflage target prediction maps against the label map adopts a mixed BCE and IOU loss function;
the network adopts a learning rate strategy of step attenuation, the initial learning rate and the attenuation coefficient are respectively 0.0001 and 0.5, and the batch processing parameter is set to 8; the network was optimized using a random gradient descent algorithm with a momentum coefficient set to 0.9.
2. The method for detecting a camouflage target based on main body region guidance as claimed in claim 1, wherein step three specifically comprises the following:
the supervision of the main body region prediction maps against the label map adopts a BCE loss function, and the supervision of the camouflage target prediction maps against the label map adopts a mixed BCE and IOU loss function, defined as:

Loss = Σ_{i=1}^{5} [Loss_BCE(BP_i, BL) + Loss_{BCE+IOU}(CP_i, GT)]

wherein BP_i represents the main body region prediction map output by the i-th level main body region analysis module corresponding to the prediction supervision part, CP_i represents the camouflage target prediction map output by the i-th level feature fusion module corresponding to the prediction supervision part, BL and GT represent the main body region label map and the target label map respectively, Loss_BCE() represents the BCE loss function, Loss_{BCE+IOU}() represents the mixed BCE and IOU loss function, and Loss represents the aggregate loss value.
3. The method for detecting a camouflage target based on main body region guidance as claimed in claim 1, wherein: the main body region analysis module predicts the main body region by residual connection, using the convolutional feature map output by the encoding part and the feature map output by the previous-level feature fusion module.
4. The method for detecting a camouflage target based on main body region guidance as claimed in claim 1, wherein: the feature fusion module receives the feature map output by the main body region analysis module, realizes feature fusion via self-attention, and predicts the camouflage target.
CN202211037831.7A 2022-08-25 2022-08-25 Camouflage target detection method based on main body region guidance Active CN115346094B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211037831.7A CN115346094B (en) 2022-08-25 2022-08-25 Camouflage target detection method based on main body region guidance


Publications (2)

Publication Number Publication Date
CN115346094A CN115346094A (en) 2022-11-15
CN115346094B (en) 2023-08-22

Family

ID=83954859


Country Status (1)

Country Link
CN (1) CN115346094B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117593517B (en) * 2024-01-19 2024-04-16 南京信息工程大学 Camouflage target detection method based on complementary perception cross-view fusion network
CN118115729A (en) * 2024-04-26 2024-05-31 齐鲁工业大学(山东省科学院) Image fake region identification method and system with multi-level and multi-scale feature interaction

Citations (7)

Publication number Priority date Publication date Assignee Title
CN102779326A (en) * 2012-06-12 2012-11-14 浙江大学 Generating method for digital disguise image
CN105844245A (en) * 2016-03-25 2016-08-10 广州市浩云安防科技股份有限公司 Fake face detecting method and system for realizing same
CN111368712A (en) * 2020-03-02 2020-07-03 四川九洲电器集团有限责任公司 Hyperspectral image disguised target detection method based on deep learning
CN113536973A (en) * 2021-06-28 2021-10-22 杭州电子科技大学 Traffic sign detection method based on significance
CN114067188A (en) * 2021-11-24 2022-02-18 江苏科技大学 Infrared polarization image fusion method for camouflage target
CN114565655A (en) * 2022-02-28 2022-05-31 上海应用技术大学 Depth estimation method and device based on pyramid segmentation attention
CN114581752A (en) * 2022-05-09 2022-06-03 华北理工大学 Camouflage target detection method based on context sensing and boundary refining

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US20140063054A1 (en) * 2010-02-28 2014-03-06 Osterhout Group, Inc. Ar glasses specific control interface based on a connected external device type


Non-Patent Citations (1)

Title
Research progress on camouflaged object detection and segmentation; He Linyan et al.; Software Guide (软件导刊); Vol. 21, No. 3; pp. 237-243 *

Also Published As

Publication number Publication date
CN115346094A (en) 2022-11-15


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant