CN112884773A - Target segmentation model based on target attention consistency under background transformation - Google Patents

Target segmentation model based on target attention consistency under background transformation

Info

Publication number
CN112884773A
CN112884773A (application CN202110028899.8A)
Authority
CN
China
Prior art keywords
attention
target
image
consistency
generator
Prior art date
Legal status
Granted
Application number
CN202110028899.8A
Other languages
Chinese (zh)
Other versions
CN112884773B (en)
Inventor
Li Donghui
Liu Xinyu
Gao Long
Liang Ningyi
Current Assignee
Tianjin University
Original Assignee
Tianjin University
Priority date
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN202110028899.8A priority Critical patent/CN112884773B/en
Publication of CN112884773A publication Critical patent/CN112884773A/en
Application granted granted Critical
Publication of CN112884773B publication Critical patent/CN112884773B/en
Expired - Fee Related
Anticipated expiration

Classifications

    • G06T 7/11 Region-based segmentation
    • G06T 7/194 Segmentation involving foreground-background segmentation
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods
    • G06T 2207/10004 Still image; Photographic image
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a target segmentation model based on target attention consistency under background transformation, implemented on the basis of the pix2pix model. The generator of the target segmentation model consists of three branch encoding networks, an attention module, and a decoder. The discriminator also contains an attention module; the discriminator receives the generated image and the true segmentation result, and its attention module distinguishes real images from generated images and guides the generator to focus on the areas that need improvement. The loss functions include the adversarial loss, the L1 loss, the classification losses of the generator and the discriminator, and the target attention consistency loss. The invention takes the background-removed image as a representative of a set of images with the same target and different backgrounds, and constrains the attention consistency, in the target region, between each image in the set and their common background-removed image, thereby enhancing the attention consistency of all the images under different backgrounds and improving both the attention to the target region and the accuracy of target segmentation.

Description

Target segmentation model based on target attention consistency under background transformation
Technical Field
The invention belongs to the technical field of computer vision, and relates to a target segmentation model, in particular to a target segmentation model based on target attention consistency under background transformation.
Background
In computer vision, accurate object segmentation differs from salient object detection of generic objects: a specific object must be segmented from the background with higher precision. Examples include portrait segmentation for scene-replacement tasks and organ segmentation prior to medical diagnosis. Although deep neural networks have significantly improved the performance of object segmentation, accurate segmentation in complex scenes remains very difficult due to background interference.
Many visual tasks derive from object segmentation, such as salient object detection, image matting, portrait segmentation, and medical organ segmentation. These tasks share much in common: their purpose is to accurately separate the target from the background, and they are often disturbed by complex and varied backgrounds. Improvements in these tasks focus on balancing high-level and low-level features by designing deep learning models that approximate the true segmentation boundaries. For salient object detection, many methods have been proposed to obtain more accurate features, such as model fusion techniques, stage-wise refinement techniques, and attention mechanisms. Image matting and portrait segmentation place higher demands on the accuracy of segmentation results, and recent methods attempt to refine boundary details. Furthermore, in the medical field, U-Net first adopted the encoder-decoder architecture with skip connections, becoming the progenitor of many medical segmentation models, such as 3D U-Net, Res-UNet, MultiRes-UNet, and Attention U-Net. However, all of the above techniques directly locate the target or detect boundaries in the complex scene, so interference from the background cannot be avoided, hindering further improvement in segmentation accuracy.
Researchers have found that improving the model's visual perception of the target helps solve this problem. Given multiple images with the same target but different backgrounds, a human can always accurately locate their common target, because humans maintain a consistent visual perception of the same target across different backgrounds. In fact, semantic data augmentation (SDA) already implicitly exploits this consistency of visual perception to improve the model's perception of objects. SDA expands a dataset by replacing backgrounds and assigns the same ground-truth segmentation to images of the same target with different backgrounds; however, SDA applies perceptual consistency at the output end of the model, where the high-level features have lost global semantic information and cannot maintain consistency well.
Experiments show that the attention map reflects both the region the model attends to and the strength of the model's perception in different regions. Even if the dataset is expanded by assigning the same ground-truth segmentation to images of the same target with different backgrounds, the trained target segmentation model still cannot maintain consistent attention to the target across backgrounds. As shown in the middle row of FIG. 1, the attention over the portrait region differs significantly across images with different backgrounds, and some attention even spreads to background regions unrelated to the target segmentation. Therefore, how to use target attention consistency as a guide to obtain more accurate attention to the target when the background changes, and thereby obtain an accurate target segmentation result, is the problem to be solved.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a target segmentation model based on target attention consistency under background transformation, solving the problem that current target segmentation models produce inaccurate segmentations when disturbed by complex backgrounds.
The technical problem to be solved by the invention is realized by adopting the following technical scheme:
a target segmentation model based on target attention consistency under background transformation comprises a generator, a discriminator and a loss function,
the generator is composed of three branch encoding networks E_1, E_2, E_3, an attention module C_g, and a decoder; the three branch encoding networks receive the original image set X_As with the same target and different backgrounds, the background-removed image X_C = C(X_As), and the true segmentation result X_B, and obtain the corresponding feature maps F_As, F_B, and F_C through downsampling and residual blocks; the three feature maps are pooled by a global average pooling layer and a global max pooling layer and fed into a fully connected layer with weights W for classification, and the attention module C_g computes a classification value by weighting the pooled feature maps; the class activation maps linearly combine the feature maps by channel-wise multiplication and sum along the channel dimension to extract the attention maps M(X_As) and M(C(X_As)) of the original image set X_As and the background-removed image C(X_As), respectively; the attention module C_g guides the encoder to extract target features and pass them to the decoder to produce the segmentation result G(X_As);
the discriminator contains an attention module C_d; the discriminator receives the generated image G(X_As) and the true segmentation result X_B, and distinguishes real images from generated images, thereby guiding the generator to focus on the areas that need improvement;
the loss functions include the adversarial loss L_gan for generating realistic images, the L_1 loss for keeping generation stable, the classification losses L_cls^g and L_cls^d of the auxiliary classifiers of the generator and the discriminator, and the target attention consistency loss L_att.
Further, the attention module in the generator classifies the original image set X_As and the background-removed image X_C into the same class, and the true segmentation result X_B into another class.
Further, target attention consistency means that the attention maps M(X_As) and M(C(X_As)) of the original image and the background-removed image are equal under the same background transformation.
Further, the loss function adopts the least squares GAN as the optimization function for pix2pix.
Further, the adversarial loss function L_gan is:

L_gan = E_{X_As,X_B}[(D(X_As, X_B) - 1)^2] + E_{X_As}[D(X_As, G(X_As))^2]

the L1 loss function is:

L_1 = ||G(X_As) - X_B||_1

the classification loss functions L_cls^g and L_cls^d of the auxiliary classifiers of the generator and the discriminator are respectively:

L_cls^g = -E_{X_As}[log C_g(X_As)] - E_{X_C}[log C_g(X_C)] - E_{X_B}[log(1 - C_g(X_B))]

L_cls^d = -E_{X_B}[log C_d(X_B)] - E_{X_As}[log(1 - C_d(G(X_As)))]

the target attention consistency loss L_att is:

L_att = ||C(M(X_As)) - M(C(X_As))||_1 = ||M'_As - M_C||_1

integrating the above loss functions into the optimized objective function for pix2pix training gives:

L = μ_1·L_gan + μ_2·L_1 + μ_3·(L_cls^g + L_cls^d) + μ_4·L_att

where μ_1 = 1, μ_2 = 1000, μ_3 = 10, μ_4 = 10.
The invention has the advantages and positive effects that:
the method is reasonable in design, a group of image sets with the same target and different backgrounds are synthesized by replacing the backgrounds, the image with the removed background is used as a typical example of the group of images, and the attention consistency of each image and the image with the removed background is respectively restricted, so that the attention consistency of all images under different backgrounds is indirectly enhanced, the visual perception capability of a target segmentation model on the target is improved, and the accuracy of target segmentation is further improved.
Drawings
FIG. 1 shows attention maps for two examples, (1) and (2), comparing a prior-art segmentation model (middle row) with the target segmentation model proposed by the invention (last row) when segmenting the same target under four different backgrounds;
FIG. 2 is a schematic diagram of the generator of the target segmentation model proposed by the invention;
FIG. 3 is a schematic diagram of the discriminator of the target segmentation model proposed by the invention;
FIG. 4 shows the results of the proposed target segmentation model on images from the PFCN dataset.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
The invention imposes visual perception consistency on the attention map and enhances attention to the target through target attention consistency (OAC) under background transformation. Target attention consistency holds that a target segmentation model should attend consistently to the target region of images with the same target and different backgrounds. It requires that if the background of the input image changes, the attention map should change in the same way. Changing the background of the image as shown in the first row of FIG. 1 yields four images of the same target with different backgrounds; the backgrounds of their attention maps change correspondingly, as shown in the last row of FIG. 1, so attention over the target region stays consistent and provides an accurate target location for segmentation.
The design idea of the invention is as follows: by replacing the background, a set of images with the same target and different backgrounds is synthesized. Considering the diversity of possible backgrounds, the background-removed image is taken as a representative example of this set, and the attention consistency between each image and the background-removed image is constrained separately, thereby indirectly enhancing the attention consistency of all the images across different backgrounds.
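To make the background-replacement step concrete, the sketch below composites one segmented target onto several new backgrounds using its ground-truth mask. This is a minimal NumPy illustration assuming float images and a soft mask in [0, 1]; the function name and array shapes are chosen here for illustration, not taken from the patent.

    import numpy as np

    def replace_background(image, mask, backgrounds):
        """Synthesize the set X_As by compositing one target onto new backgrounds.

        image:       (H, W, 3) float array, original image X_A
        mask:        (H, W) float array in [0, 1], ground-truth target mask
        backgrounds: list of (H, W, 3) float arrays, replacement backgrounds
        Returns the synthesized set X_As and the background-removed image X_C.
        """
        m = mask[..., None]                          # broadcast mask over channels
        x_c = image * m                              # background-removed image X_C
        x_as = [x_c + bg * (1.0 - m) for bg in backgrounds]
        return x_as, x_c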
Based on this design idea, the invention adopts pix2pix as the segmentation model and uses a three-branch network as the pix2pix encoder, whose three inputs are the image set sharing a common target but differing in background, the common background-removed image of that set, and the common true segmentation result. In the middle of the generator's encoder-decoder structure, a Class Activation Map (CAM) is introduced as an attention module. The attention module divides the three inputs into two classes according to their difference in the target region: the images in the original image set and their background-removed image form one class, and the true segmentation result forms the other. Meanwhile, the attention maps of the original images and the background-removed image are computed, and their difference in the target region is defined as the target attention consistency loss. The consistency loss and the classification loss in the attention module jointly improve the consistency of target attention under background changes. In addition, an attention module is also used in the pix2pix discriminator; it improves the discrimination effect by distinguishing the generated segmentation result from the true segmentation result, thereby promoting pix2pix to generate better segmentations. The structure of the generator of the proposed target segmentation model is shown in FIG. 2.
The image translation model pix2pix consists of a generator, a discriminator, an adversarial loss, and an L1 loss. On this basis, the invention introduces target attention consistency under background changes to improve the performance of the target segmentation model:
the generator is used for receiving the original image sets to be segmented of different backgrounds and generating corresponding target segmentation results. An attention module is added in the generator for enhancing the attention of the target; the classification loss and attention consistency loss of the attention module are added to the generator.
The discriminator of the present invention receives the generated segmented image and the correct segmentation result and tries to distinguish the difference between them to facilitate the generator to produce a more accurate segmentation result. An attention module is added in the discriminator for enhancing the attention of inaccurate areas in the current generated result.
The loss function of the present invention comprises the confrontational loss and the L1 loss of the pix2pix image translation model.
For convenience of explanation, the following symbols are defined first: X_A denotes the original image to be segmented, without any data augmentation; X_B denotes its true segmentation result; X_C denotes the image after removing the background of X_A; and X_As denotes the image set synthesized by replacing the background of X_A. The images to be segmented, X_A and X_As, form the source domain of pix2pix, X_B is the target domain, and X_C serves only to increase attention to the target.
The following describes the respective parts of the present invention:
as shown in FIG. 2, the generator is encoded by three branches E1、E2、E3Attention module CgAnd a decoder. Original image set XAsBackground removal image XC=C(XAs) And true segmentation result XBRespectively inputting the data into three branch coding networks sharing all parameters, and obtaining corresponding characteristic graphs F through down sampling and residual blocksAs、FBAnd FC. Then, the feature maps are pooled by a global average pooling layer (GAP) and a global maximum pooling layer (GMP) and input into a fully connected layer weighted by W for classification, attention module CgThe classification value is calculated by weighting the pooled feature maps. In the present invention, in order to increase the attention to the target region, focus is on distinguishing XAsAnd XBDifference in target region, attention Module class XAsAnd XCFor the same class, the real segmentation result XBIs another class.
Meanwhile, the class activation map (CAM) linearly combines the feature maps by channel-wise multiplication and sums along the channel dimension to extract the attention maps of X_As and C(X_As), denoted M(X_As) and M(C(X_As)), where M(·) denotes the process of computing an attention map with the CAM. According to the target attention consistency proposed by the invention, the attention maps M(X_As) and M(C(X_As)) of the original image and the background-removed image should be equal under the same background transformation (background removal), which can be expressed as:

C(M(X_As)) = M(C(X_As))    (1)
the classification loss and consistency loss of the attention module together guide the encoder of pix2pix to extract the target features and pass them to the decoder to produce a segmentation result G (X)As)。
As shown in FIG. 3, an attention module is also used in the discriminator to promote generation of realistic segmentation results. The discriminator receives the generated image G(X_As) and the true segmentation result X_B as inputs, and the attention module C_d distinguishes real images from generated images, thereby guiding the generator to focus on the areas that need improvement. In addition, the adversarial role of the discriminator itself also pushes the generator toward realistic segmentations.
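One plausible realization, shown below, is a conditional PatchGAN-style discriminator whose last feature map feeds both a real/fake head and the CAMAttentionModule sketched earlier as C_d; the layer widths and depth are assumptions, not the patent's specification.

    import torch
    import torch.nn as nn

    class AttentionDiscriminator(nn.Module):
        """Conditional discriminator: the last feature map feeds a patch-wise
        real/fake head and a CAM attention classifier C_d."""

        def __init__(self, in_ch: int = 6, base: int = 64):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(in_ch, base, 4, 2, 1), nn.LeakyReLU(0.2, True),
                nn.Conv2d(base, base * 2, 4, 2, 1), nn.LeakyReLU(0.2, True),
                nn.Conv2d(base * 2, base * 4, 4, 2, 1), nn.LeakyReLU(0.2, True),
            )
            self.real_fake = nn.Conv2d(base * 4, 1, 4, 1, 1)   # patch scores
            self.attention = CAMAttentionModule(base * 4)      # C_d from the sketch above

        def forward(self, src: torch.Tensor, seg: torch.Tensor):
            feat = self.features(torch.cat([src, seg], dim=1))  # conditional input pair
            return self.real_fake(feat), self.attention(feat)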
Loss function: the overall objective function of the proposed model consists of four parts: the adversarial loss L_gan for generating realistic images, the L_1 loss for keeping generation stable, the classification losses L_cls^g and L_cls^d of the auxiliary classifiers of the generator and the discriminator, and the target attention consistency loss L_att. To keep training stable, the least squares GAN is used as the optimization function.
The adversarial loss is used to match the distribution of the generated segmentation with that of the real segmentation result:

L_gan = E_{X_As,X_B}[(D(X_As, X_B) - 1)^2] + E_{X_As}[D(X_As, G(X_As))^2]    (2)

L1 loss function: following the pix2pix model, the L_1 loss between the generated image and the true segmentation result is introduced to avoid model collapse and ensure stable generation:

L_1 = ||G(X_As) - X_B||_1    (3)
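Under the least-squares formulation, equations (2) and (3) might be computed as below; passing the source image together with the segmentation to the discriminator follows conditional pix2pix, and the discriminator returning a (score, attention-logit) pair matches the sketch above; both are assumptions.

    import torch
    import torch.nn.functional as F

    def gan_and_l1_losses(D, x_as, x_b, fake_seg):
        """LSGAN adversarial terms (eq. 2) and the L1 term (eq. 3)."""
        real_score, _ = D(x_as, x_b)                 # real pairs should score 1
        fake_score, _ = D(x_as, fake_seg.detach())   # generated pairs should score 0
        d_loss = F.mse_loss(real_score, torch.ones_like(real_score)) \
               + F.mse_loss(fake_score, torch.zeros_like(fake_score))
        g_score, _ = D(x_as, fake_seg)               # generator wants fakes scored 1
        g_adv = F.mse_loss(g_score, torch.ones_like(g_score))
        l1 = F.l1_loss(fake_seg, x_b)                # L1 = ||G(X_As) - X_B||_1
        return d_loss, g_adv, l1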
classification loss function in CAM in order to explore the difference between image sets of the same target and different backgrounds and the real segmentation result of the image sets on a target area, the CAM of a generator is used for carrying out image set XAsAnd background removal image XCDividing into the same class, and classifying the real result XBClassified into another class:
Figure BDA0002891281080000054
to focus on the areas that need improvement in the current state, the images and true results generated by the CAM classification of the discriminators are not of the same class:
Figure BDA0002891281080000055
wherein
Figure BDA0002891281080000056
And
Figure BDA0002891281080000057
the invention is an attention classifier of a generator and an arbiter respectively, and the cross entropy loss function is adopted for classification.
Loss function of target attention consistency: to further increase attention to the target, target attention consistency requires that the attention maps of the original image set and of the background-removed image be equal under the same background-removal transform. The consistency loss is defined using the absolute error as follows:

L_att = ||C(M(X_As)) - M(C(X_As))||_1 = ||M'_As - M_C||_1    (6)

where M(X_As) is the attention map of X_As, and C(X_As) and M(C(X_As)) are the background-removed image and its attention map. The target attention consistency loss is a strong constraint on the target attention.
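Equation (6) reduces to an L1 distance once the background-removal transform C(·) applied to an attention map is read as multiplication by the target mask; that reading is an assumption made for this sketch.

    import torch
    import torch.nn.functional as F

    def attention_consistency_loss(att_as, att_c, mask):
        """Eq. (6): L_att = ||C(M(X_As)) - M(C(X_As))||_1.

        att_as: (B, 1, H, W) attention map M(X_As) of an original image
        att_c:  (B, 1, H, W) attention map M(C(X_As)) of the background-removed image
        mask:   (B, 1, H, W) target mask realizing the transform C(.)
        """
        masked = att_as * mask          # C(M(X_As)), i.e. M'_As
        return F.l1_loss(masked, att_c)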
Finally, the above loss functions are integrated into the optimized objective function used for pix2pix training:

L = μ_1·L_gan + μ_2·L_1 + μ_3·(L_cls^g + L_cls^d) + μ_4·L_att    (7)

where μ_1 = 1, μ_2 = 1000, μ_3 = 10, μ_4 = 10.
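Assembled under equation (7), the generator-side objective might be combined as below; attributing the single weight μ_3 to both classification terms follows the reading above and is an assumption.

    MU1, MU2, MU3, MU4 = 1.0, 1000.0, 10.0, 10.0  # μ1..μ4 from the patent

    def total_objective(g_adv, l1, cls_g, cls_d, att):
        """Eq. (7): L = μ1·L_gan + μ2·L1 + μ3·(L_cls^g + L_cls^d) + μ4·L_att."""
        return MU1 * g_adv + MU2 * l1 + MU3 * (cls_g + cls_d) + MU4 * att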
It should be emphasized that the embodiments described herein are illustrative rather than restrictive, and thus the present invention is not limited to the embodiments described in the detailed description, but also includes other embodiments that can be derived from the technical solutions of the present invention by those skilled in the art.

Claims (5)

1. A target segmentation model based on target attention consistency under background transformation, implemented on the basis of the pix2pix model and comprising a generator, a discriminator, and a loss function, characterized in that:
the generator is composed of three branch coding networks E1、E2、E3Attention module CgAnd a decoder; three branch coding networks respectively receive the original image sets X of the same target and different backgroundsAsTheir common background removal image XC=C(XAs) And true segmentation result XBObtaining a corresponding feature map F by downsampling and residual blockAsFeature diagram FBAnd feature map FC(ii) a The three feature maps are pooled through a global average pooling layer and a global maximum pooling layer and input into a full-link layer with W as a weight for classification, and an attention module CgCalculating a classification value by weighting the pooled feature maps; class activation graph linear combination through channel-by-channel multiplicationFeature maps are summed along the dimension of the combined feature maps to respectively extract an original image set XAsAnd background removal image C (X)As) Attention diagram M (X)As) And attention-seeking drawing M (C (X)As) ); attention Module CgTogether direct the encoder of pix2pix to extract the target features and pass them to the decoder to produce a segmentation result G (X)As);
the discriminator contains an attention module C_d; the discriminator receives the generated image G(X_As) and the true segmentation result X_B, and distinguishes real images from generated images, thereby guiding the generator to focus on the areas that need improvement;
the loss functions include the adversarial loss L_gan for generating realistic images, the L_1 loss for keeping generation stable, the classification losses L_cls^g and L_cls^d of the auxiliary classifiers of the generator and the discriminator, and the target attention consistency loss L_att.
2. The target segmentation model based on target attention consistency under background transformation as claimed in claim 1, characterized in that: the attention module in the generator classifies the original image set X_As and the background-removed image X_C into the same class, and the true segmentation result X_B into another class.
3. The target segmentation model based on target attention consistency under background transformation as claimed in claim 1, characterized in that: target attention consistency means that the attention maps M(X_As) and M(C(X_As)) of the original image and the background-removed image are equal under the same background transformation.
4. The target segmentation model based on target attention consistency under background transformation as claimed in claim 1, characterized in that: the loss function adopts the least squares GAN as the optimization function.
5. The target segmentation model based on target attention consistency under background transformation as claimed in claim 1, characterized in that: the adversarial loss function L_gan is:

L_gan = E_{X_As,X_B}[(D(X_As, X_B) - 1)^2] + E_{X_As}[D(X_As, G(X_As))^2]

the L1 loss function is:

L_1 = ||G(X_As) - X_B||_1

the classification loss functions L_cls^g and L_cls^d of the auxiliary classifiers of the generator and the discriminator are respectively:

L_cls^g = -E_{X_As}[log C_g(X_As)] - E_{X_C}[log C_g(X_C)] - E_{X_B}[log(1 - C_g(X_B))]

L_cls^d = -E_{X_B}[log C_d(X_B)] - E_{X_As}[log(1 - C_d(G(X_As)))]

the target attention consistency loss L_att is:

L_att = ||C(M(X_As)) - M(C(X_As))||_1 = ||M'_As - M_C||_1

and integrating the above loss functions into the optimized objective function for pix2pix training gives:

L = μ_1·L_gan + μ_2·L_1 + μ_3·(L_cls^g + L_cls^d) + μ_4·L_att

where μ_1 = 1, μ_2 = 1000, μ_3 = 10, μ_4 = 10.
CN202110028899.8A 2021-01-11 2021-01-11 Target segmentation model based on target attention consistency under background transformation Expired - Fee Related CN112884773B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110028899.8A CN112884773B (en) 2021-01-11 2021-01-11 Target segmentation model based on target attention consistency under background transformation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110028899.8A CN112884773B (en) 2021-01-11 2021-01-11 Target segmentation model based on target attention consistency under background transformation

Publications (2)

Publication Number Publication Date
CN112884773A true CN112884773A (en) 2021-06-01
CN112884773B CN112884773B (en) 2022-03-04

Family

ID=76047673

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110028899.8A Expired - Fee Related CN112884773B (en) 2021-01-11 2021-01-11 Target segmentation model based on target attention consistency under background transformation

Country Status (1)

Country Link
CN (1) CN112884773B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114240950A (en) * 2021-11-23 2022-03-25 电子科技大学 Brain tumor image generation and segmentation method based on deep neural network

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109033095A (en) * 2018-08-01 2018-12-18 苏州科技大学 Object transformation method based on attention mechanism
WO2019090213A1 (en) * 2017-11-03 2019-05-09 Siemens Aktiengesellschaft Segmenting and denoising depth images for recognition applications using generative adversarial neural networks
CN110853051A (en) * 2019-10-24 2020-02-28 北京航空航天大学 Cerebrovascular image segmentation method based on multi-attention dense connection generation countermeasure network
CN111259906A (en) * 2020-01-17 2020-06-09 陕西师范大学 Method for generating and resisting remote sensing image target segmentation under condition containing multilevel channel attention

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019090213A1 (en) * 2017-11-03 2019-05-09 Siemens Aktiengesellschaft Segmenting and denoising depth images for recognition applications using generative adversarial neural networks
CN109033095A (en) * 2018-08-01 2018-12-18 苏州科技大学 Object transformation method based on attention mechanism
CN110853051A (en) * 2019-10-24 2020-02-28 北京航空航天大学 Cerebrovascular image segmentation method based on multi-attention dense connection generation countermeasure network
CN111259906A (en) * 2020-01-17 2020-06-09 陕西师范大学 Method for generating and resisting remote sensing image target segmentation under condition containing multilevel channel attention

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
PHILLIP ISOLA et al.: "Image-to-Image Translation with Conditional Adversarial Networks", arXiv *
LIN Zhenfeng et al.: "A Survey of Image Translation Based on Conditional Generative Adversarial Networks", Journal of Chinese Computer Systems *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114240950A (en) * 2021-11-23 2022-03-25 电子科技大学 Brain tumor image generation and segmentation method based on deep neural network
CN114240950B (en) * 2021-11-23 2023-04-07 电子科技大学 Brain tumor image generation and segmentation method based on deep neural network

Also Published As

Publication number Publication date
CN112884773B (en) 2022-03-04

Similar Documents

Publication Publication Date Title
CN111462126B (en) Semantic image segmentation method and system based on edge enhancement
Pan et al. Loss functions of generative adversarial networks (GANs): Opportunities and challenges
CN108875935B (en) Natural image target material visual characteristic mapping method based on generation countermeasure network
CN112818862B (en) Face tampering detection method and system based on multi-source clues and mixed attention
CN113221639B (en) Micro-expression recognition method for representative AU (AU) region extraction based on multi-task learning
CN111476805A (en) Cross-source unsupervised domain adaptive segmentation model based on multiple constraints
CN111242238A (en) Method for acquiring RGB-D image saliency target
CN113762138B (en) Identification method, device, computer equipment and storage medium for fake face pictures
CN113792641B (en) High-resolution lightweight human body posture estimation method combined with multispectral attention mechanism
CN110738663A (en) Double-domain adaptive module pyramid network and unsupervised domain adaptive image segmentation method
CN110852199A (en) Foreground extraction method based on double-frame coding and decoding model
CN112884773B (en) Target segmentation model based on target attention consistency under background transformation
CN116012255A (en) Low-light image enhancement method for generating countermeasure network based on cyclic consistency
CN113763300B (en) Multi-focusing image fusion method combining depth context and convolution conditional random field
Liu et al. Action recognition based on features fusion and 3D convolutional neural networks
CN114677372A (en) Depth forged image detection method and system integrating noise perception
CN112541566B (en) Image translation method based on reconstruction loss
CN112016592B (en) Domain adaptive semantic segmentation method and device based on cross domain category perception
Yuan et al. Explore double-opponency and skin color for saliency detection
Weligampola et al. A retinex based gan pipeline to utilize paired and unpaired datasets for enhancing low light images
CN115841438A (en) Infrared image and visible light image fusion method based on improved GAN network
CN112950615B (en) Thyroid nodule invasiveness prediction method based on deep learning segmentation network
Zhang Image recognition methods based on deep learning
Fan et al. Attention-modulated triplet network for face sketch recognition
Khan et al. Face recognition via multi-level 3D-GAN colorization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20220304