CN110443813B - Segmentation method, device and equipment for blood vessel and fundus image and readable storage medium - Google Patents


Info

Publication number
CN110443813B
CN110443813B (application CN201910690945.3A / CN201910690945A)
Authority
CN
China
Prior art keywords
blood vessel
characteristic information
information
segmentation
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910690945.3A
Other languages
Chinese (zh)
Other versions
CN110443813A (en)
Inventor
任文婷
余双
马锴
郑冶枫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Healthcare Shenzhen Co Ltd
Original Assignee
Tencent Healthcare Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Healthcare Shenzhen Co Ltd filed Critical Tencent Healthcare Shenzhen Co Ltd
Priority to CN201910690945.3A priority Critical patent/CN110443813B/en
Publication of CN110443813A publication Critical patent/CN110443813A/en
Application granted granted Critical
Publication of CN110443813B publication Critical patent/CN110443813B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/28 Determining representative reference patterns, e.g. by averaging or distorting; Generating dictionaries
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00 Image coding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20021 Dividing image into blocks, subimages or windows
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30041 Eye; Retina; Ophthalmic
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30101 Blood vessel; Artery; Vein; Vascular
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03 Recognition of patterns in medical or anatomical images

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The embodiments of the application disclose a segmentation method, apparatus, and device for blood-vessel and fundus images, together with a readable storage medium, relating to the computer vision technology of artificial intelligence. Specifically, a blood vessel image to be segmented, such as a fundus image, may be acquired; features are extracted from the image to obtain high-level feature information; dictionary learning is performed on the high-level feature information based on a preset dictionary to obtain a corresponding dictionary representation; a plurality of channels of the high-level feature information are selected according to the dictionary representation to obtain target feature information; the target feature information is fused with the high-level feature information to obtain channel attention feature information; and the blood vessels in the image are segmented according to the channel attention feature information to obtain a vessel segmentation result. This scheme avoids the loss of global information from the features of blood vessel images such as fundus images, and greatly improves the segmentation accuracy for such images.

Description

Segmentation method, device and equipment for blood vessel and fundus image and readable storage medium
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a segmentation method, a segmentation device, segmentation equipment and a readable storage medium for blood vessels and fundus images.
Background
The structural state of the fundus blood vessels is closely related to the systems of the whole body and can help in the early prevention and treatment of certain diseases. For example, arteriosclerosis, hypertension, diabetes, cardiovascular disease, and age-related macular degeneration affect the morphology of the fundus blood vessels, changing the width and tortuosity of arteries and veins. In actual practice, owing to illumination differences and individual variation, it is difficult for doctors to obtain blood vessel information manually, and diagnostic results are subjective and error-prone. Automatic segmentation and classification of fundus arteries and veins is therefore of great clinical significance.
With the development of artificial intelligence (AI), AI applications in the medical field are becoming more widespread, especially in medical image segmentation; for example, AI technology may be applied to segment blood vessels from fundus images. Current vessel segmentation schemes are mainly based on deep learning: a deep learning network capable of segmenting blood vessels is trained, the fundus image to be segmented is input into the trained network for feature extraction, and vessel segmentation is performed based on the extracted features to obtain a vessel segmentation result, such as a vessel segmentation image.
In the course of research and practice on the prior art, the inventors of the present invention found that the segmentation accuracy is not high, because a deep learning network easily loses the global information of the image when extracting features.
Disclosure of Invention
The embodiment of the application provides a segmentation method, a segmentation device, segmentation equipment and a readable storage medium for blood vessels and fundus images, which can improve segmentation accuracy.
The embodiment of the application provides a segmentation method of a blood vessel image, which comprises the following steps:
acquiring a blood vessel image to be segmented;
extracting the characteristics of the blood vessel image to obtain high-level characteristic information;
performing dictionary learning on the high-level characteristic information based on a preset dictionary to obtain dictionary representation corresponding to the high-level characteristic information;
selecting a plurality of channels of the high-level characteristic information according to the dictionary representation to obtain target characteristic information;
fusing the target characteristic information and the high-level characteristic information to obtain channel attention characteristic information;
and segmenting the blood vessels in the blood vessel image according to the channel attention characteristic information to obtain a blood vessel segmentation result.
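The claimed flow, from dictionary-based residual encoding to channel attention, can be illustrated with a minimal numeric sketch. This is not the patent's implementation: the scalar features, the two-atom dictionary, and the function names (`residual_encode`, `channel_attention`) are all illustrative assumptions.

```python
import math

def residual_encode(features, dictionary, smoothing=1.0):
    # Soft-assign each (scalar) feature to the dictionary atoms and
    # aggregate the weighted residuals -- a dictionary-learning style code.
    encoded = [0.0] * len(dictionary)
    for f in features:
        residuals = [f - atom for atom in dictionary]
        weights = [math.exp(-smoothing * r * r) for r in residuals]
        total = sum(weights)
        for k, r in enumerate(residuals):
            encoded[k] += (weights[k] / total) * r
    return encoded

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def channel_attention(high_level, dictionary, smoothing=1.0):
    # high_level: one list of scalar responses per feature channel.
    out = []
    for channel in high_level:
        code = residual_encode(channel, dictionary, smoothing)
        gate = sigmoid(sum(code))  # per-channel selection weight
        # "target feature" = gated channel; fusing it with the original
        # channel yields the channel attention feature information.
        out.append([gate * v + v for v in channel])
    return out
```

A segmentation head would then consume the fused channels; here the gate lies in (0, 1), so it re-weights each channel while the residual addition preserves the original responses.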
The embodiment of the application also provides a blood vessel segmentation method of the fundus image, which comprises the following steps:
acquiring a fundus image to be segmented;
extracting features of the fundus image to obtain high-level feature information;
performing dictionary learning on the high-level characteristic information based on a preset dictionary to obtain dictionary representation corresponding to the high-level characteristic information;
selecting a plurality of channels of the high-level characteristic information according to the dictionary representation to obtain target characteristic information;
fusing the target characteristic information and the high-level characteristic information to obtain channel attention characteristic information;
and dividing the blood vessel in the fundus image according to the channel attention characteristic information to obtain a blood vessel division result.
Correspondingly, the embodiment of the application also provides a blood vessel segmentation device, which comprises:
an acquisition unit configured to acquire a blood vessel image to be segmented;
the feature extraction unit is used for extracting features of the blood vessel image to obtain high-level feature information;
the learning unit is used for carrying out dictionary learning on the high-level characteristic information based on a preset dictionary to obtain dictionary representation corresponding to the high-level characteristic information;
the selection unit is used for selecting a plurality of channels of the high-level characteristic information according to the dictionary representation to obtain target characteristic information;
the fusion unit is used for fusing the target characteristic information with the high-level characteristic information to obtain channel attention characteristic information;
and the segmentation unit is used for segmenting the blood vessels in the blood vessel image according to the channel attention characteristic information to obtain a blood vessel segmentation result.
In some embodiments, the selection unit comprises:
a residual sub-unit, configured to obtain residual information between the dictionary representation and the high-level feature information;
the coding subunit is used for coding the residual information to obtain residual coding information of the high-level characteristic information;
and the selecting subunit is used for selecting a plurality of channels of the high-level characteristic information according to the residual error coding information to obtain target characteristic information.
In some embodiments, the encoding subunit is configured to:
performing a regularization operation on the residual information according to a smoothing factor to obtain the selection weight of the residual information;
and weighting the residual information according to its selection weight to obtain the residual coding information of the high-level characteristic information.
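The smoothing-factor regularization described above can be read as a softmax-style normalization over the per-atom residuals. The sketch below, with hypothetical names, shows one such weighting (each atom k may carry its own smoothing factor s_k).

```python
import math

def residual_weights(residual_norms, smoothing):
    # w_k = exp(-s_k * ||r_k||^2) / sum_j exp(-s_j * ||r_j||^2):
    # atoms whose residual is small (i.e. atoms close to the feature)
    # receive the larger selection weight.
    scores = [math.exp(-s * n * n) for s, n in zip(smoothing, residual_norms)]
    total = sum(scores)
    return [score / total for score in scores]
```

Weighting the residuals by these normalized scores and summing would then yield the residual coding information of the high-level features.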
In some embodiments, the selection subunit is configured to:
activating the residual coding information based on an activation function;
and selecting a plurality of channels of the high-level characteristic information according to the activated residual coding information to obtain target characteristic information.
In some embodiments, the partitioning unit comprises:
the segmentation subunit is used for segmenting blood vessels in the blood vessel image according to the channel attention characteristic information to obtain a blood vessel segmentation image;
and the classifying subunit is used for classifying the blood vessels in the blood vessel image according to the channel attention characteristic information to obtain a blood vessel classified image.
In some embodiments, the acquiring unit is configured to: dividing an original blood vessel image into a plurality of blood vessel image blocks; taking the blood vessel image block as a blood vessel image to be segmented;
the segmentation unit is further used for aggregating the blood vessel segmentation results of the blood vessel image blocks to obtain blood vessel segmentation results of the original blood vessel image.
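Block-wise processing of the kind this embodiment describes can be sketched as follows; the helper names and the assumption that the block size divides the image evenly are illustrative only.

```python
def split_into_blocks(image, block):
    # Tile a 2-D image (a list of rows) into block-by-block patches,
    # keyed by their top-left corner.
    h, w = len(image), len(image[0])
    return {(r, c): [row[c:c + block] for row in image[r:r + block]]
            for r in range(0, h, block) for c in range(0, w, block)}

def aggregate_blocks(results, h, w):
    # Stitch per-block segmentation results back into a full-size map.
    out = [[0] * w for _ in range(h)]
    for (r, c), patch in results.items():
        for i, row in enumerate(patch):
            out[r + i][c:c + len(row)] = row
    return out
```

Each patch would be segmented independently before aggregation; round-tripping without any segmentation reproduces the original image, which is a convenient sanity check.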
In some embodiments, the segmentation unit is configured to segment the blood vessels in the blood vessel image according to the channel attention characteristic information by using the vessel segmentation network in a trained generative adversarial network.
In some embodiments, the vessel image segmentation apparatus may further include:
the sample acquisition unit, configured to acquire a target-domain blood vessel image and a source-domain blood vessel image annotated with blood vessel information;
the sample feature extraction unit, configured to perform feature extraction on the target-domain and source-domain blood vessel images with a preset generative adversarial network, obtaining source-domain high-level and middle-level feature information of the source-domain blood vessel image, and target-domain high-level and middle-level feature information of the target-domain blood vessel image;
the sample segmentation unit, configured to segment the blood vessels in the source-domain blood vessel image based on a vessel segmentation network and the source-domain high-level feature information, obtaining a vessel segmentation prediction result;
the discrimination unit, configured to discriminate the domain of origin of the source-domain middle-level feature information and the target-domain middle-level feature information using a discrimination network in the preset generative adversarial network, obtaining a discrimination prediction result;
and the training unit, configured to train the generative adversarial network according to the vessel segmentation prediction result and the discrimination prediction result, obtaining a trained generative adversarial network.
In some embodiments, the training unit comprises:
the first loss acquisition subunit, configured to obtain the vessel segmentation loss according to the vessel segmentation prediction result and the vessel annotation result of the source-domain blood vessel image;
the second loss acquisition subunit, configured to obtain the discrimination loss of the discrimination network according to the true domains of the source-domain and target-domain middle-level feature information and the discrimination prediction result;
and the training subunit, configured to train the generative adversarial network according to the discrimination loss and the vessel segmentation loss, obtaining a trained generative adversarial network.
In some embodiments, the training subunit is configured to:
constructing the minimized adversarial loss of the generative adversarial network according to the discrimination loss and the vessel segmentation loss;
constructing the maximized adversarial loss of the generative adversarial network according to the discrimination loss;
and iteratively training the generative adversarial network based on the minimized and maximized adversarial losses, obtaining a trained generative adversarial network.
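The minimized/maximized adversarial losses can be sketched with binary cross-entropy terms; the weighting factor and all function names are assumptions for illustration, not the patent's formulation.

```python
import math

def bce(pred, label):
    # Binary cross-entropy on a single probability, clamped for stability.
    eps = 1e-7
    p = min(max(pred, eps), 1.0 - eps)
    return -(label * math.log(p) + (1.0 - label) * math.log(1.0 - p))

def segmentation_side_loss(seg_preds, seg_labels, disc_on_target, adv_weight=0.01):
    # Minimized objective: source-domain segmentation loss plus a term
    # that rewards target-domain features the discriminator mistakes
    # for source-domain ones (label 1).
    seg = sum(bce(p, y) for p, y in zip(seg_preds, seg_labels)) / len(seg_preds)
    adv = sum(bce(d, 1.0) for d in disc_on_target) / len(disc_on_target)
    return seg + adv_weight * adv

def discriminator_loss(disc_on_source, disc_on_target):
    # Maximized side, written as a loss to minimize: source features
    # should be classified as 1, target features as 0.
    src = sum(bce(d, 1.0) for d in disc_on_source) / len(disc_on_source)
    tgt = sum(bce(d, 0.0) for d in disc_on_target) / len(disc_on_target)
    return 0.5 * (src + tgt)
```

Iterative training alternates the two objectives: a confident discriminator (e.g. scoring 0.9 on source and 0.1 on target) incurs a lower loss than an undecided one.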
In some embodiments, the discrimination network includes a feature fusion module and a discrimination subnetwork;
the discrimination unit is configured to:
perform feature extraction on the source-domain and target-domain middle-level feature information respectively with the feature fusion module, obtaining feature information of the blood vessel category in the target-domain blood vessel image and in the source-domain blood vessel image;
fuse, with the feature fusion module, the feature information of the blood vessel category in the target-domain blood vessel image with the target-domain middle-level feature information, obtaining first fused feature information corresponding to the target-domain blood vessel image;
fuse, with the feature fusion module, the feature information of the blood vessel category in the source-domain blood vessel image with the source-domain middle-level feature information, obtaining second fused feature information corresponding to the source-domain blood vessel image;
and discriminate the domains of origin of the first and second fused feature information with the discrimination subnetwork, obtaining the discrimination prediction result.
In some embodiments, the sample segmentation unit is configured to:
performing dictionary learning on the source-domain high-level feature information of the source-domain blood vessel image using the vessel segmentation network, obtaining a sample dictionary corresponding to that information;
selecting a plurality of channels of the source-domain high-level feature information according to the sample dictionary using the vessel segmentation network, obtaining source-domain channel attention feature information;
and segmenting the blood vessels in the source-domain blood vessel image according to the source-domain channel attention feature information using the vessel segmentation network, obtaining the vessel segmentation prediction result of the source-domain blood vessel image.
Correspondingly, the application also provides computer equipment, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the steps in any segmentation method of the blood vessel image provided by the embodiment of the application are realized when the processor executes the program.
Furthermore, the embodiment of the present application also provides a computer readable storage medium having a computer program stored thereon, wherein the computer program when executed by a processor implements the steps in any of the segmentation methods for blood vessel images provided in the embodiment of the present application.
In the embodiment of the application, a blood vessel image to be segmented, such as a fundus image, can be acquired; features are extracted from the image to obtain high-level feature information; dictionary learning is performed on the high-level feature information based on a preset dictionary to obtain a corresponding dictionary representation; a plurality of channels of the high-level feature information are selected according to the dictionary representation to obtain target feature information; the target feature information is fused with the high-level feature information to obtain channel attention feature information; and the blood vessels in the image are segmented according to the channel attention feature information to obtain a vessel segmentation result. Because this scheme uses dictionary learning to screen, from the channels of the high-level feature information, the feature channels that are more useful for segmentation, obtains channel attention feature information, and performs segmentation on that basis, it can effectively enlarge the receptive field of feature extraction, so that the features retain the global information of blood vessel images such as fundus images; the loss of global information is thereby avoided, and the segmentation accuracy for such images is greatly improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic view of a scenario of the blood vessel image segmentation method provided in an embodiment of the present application;
Fig. 2a is a flowchart of the segmentation method of a blood vessel image provided in an embodiment of the present application;
Fig. 2b is a schematic diagram of a segmentation network framework provided by an embodiment of the present application;
Fig. 2c is a schematic structural diagram of a U-Net network according to an embodiment of the present application;
Fig. 2d is a dictionary learning and residual coding flow chart provided by an embodiment of the present application;
Fig. 3a is a schematic flow chart of a training method according to an embodiment of the present application;
Fig. 3b is a schematic view of fundus image segmentation provided in an embodiment of the present application;
Fig. 3c is a schematic diagram of feature fusion provided by an embodiment of the present application;
Fig. 4 is another flow chart of fundus image segmentation provided in an embodiment of the present application;
Fig. 5a is a schematic structural diagram of a segmentation apparatus for blood vessel images according to an embodiment of the present application;
Fig. 5b is another schematic structural diagram of the segmentation apparatus for blood vessel images according to an embodiment of the present application;
Fig. 5c is another schematic structural diagram of the segmentation apparatus for blood vessel images according to an embodiment of the present application;
Fig. 5d is another schematic structural diagram of the segmentation apparatus for blood vessel images according to an embodiment of the present application;
Fig. 5e is another schematic structural diagram of the segmentation apparatus for blood vessel images according to an embodiment of the present application;
Fig. 6 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
The following description of the technical solutions in the embodiments of the present application will be made clearly and completely with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to fall within the scope of the invention.
Embodiments of the present application provide a segmentation method, apparatus, computer device, and computer-readable storage medium for blood vessel and fundus images. The fundus image segmentation apparatus may be integrated in a computer device, which may be a server or a terminal.
The segmentation scheme of the blood vessel image and the fundus image provided by the embodiment of the application relates to Computer Vision technology (CV) of artificial intelligence. The fundus image segmentation can be realized through an artificial intelligence computer vision technology, and a segmentation result is obtained.
Computer Vision (CV) is a science that studies how to make machines "see": cameras and computers are used in place of human eyes to identify and measure targets, and the captured images are further processed so that they are better suited for human observation or for transmission to instruments for detection. As a scientific discipline, computer vision studies the theory and technology for building artificial intelligence systems that can acquire information from images or multidimensional data. Computer vision techniques typically include image processing, image recognition, image segmentation, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D techniques, virtual reality, augmented reality, and simultaneous localization and mapping, as well as common biometric techniques such as face recognition and fingerprint recognition.
In the present embodiment, image segmentation refers to the computer vision technique and process of dividing an image into a plurality of specific regions with distinctive properties and extracting objects of interest. In the embodiment of the present application, it mainly refers to segmenting a blood vessel image such as a fundus image to find the desired target object, for example segmenting a vessel image or an arteriovenous vessel image from a blood vessel image such as a fundus image. The segmented target object may then be analyzed by a healthcare worker or other medical professional for further operations.
For example, referring to fig. 1, taking an example in which the segmentation means of the blood vessel image is integrated in a computing device, the computer device may acquire a blood vessel image to be segmented, such as a fundus image; extracting features of blood vessel images such as fundus images to obtain high-level feature information; performing dictionary learning on the high-level characteristic information based on a preset dictionary to obtain dictionary representation corresponding to the high-level characteristic information; selecting a plurality of channels of the high-level characteristic information according to the dictionary representation to obtain target characteristic information; fusing the target characteristic information with the high-level characteristic information to obtain channel attention characteristic information; and dividing blood vessels in the blood vessel image such as fundus image according to the channel attention characteristic information to obtain a blood vessel division result. For example, a blood vessel segmentation image, an arterial blood vessel segmentation image, a venous blood vessel segmentation image, or the like can be obtained.
A detailed description follows. The order in which the embodiments are described below is not intended to indicate a preferred order of embodiments.
The present embodiment will be described from the perspective of a blood vessel image segmentation apparatus, which may be integrated in a computer device; the computer device may be a server, a terminal, or the like, where the terminal may include a tablet, a notebook, a personal computer (PC), a mini processing box, or other devices.
As shown in fig. 2a, the specific flow of the segmentation method of the blood vessel image may be as follows:
201. Acquire a blood vessel image to be segmented.
A blood vessel image is an image containing blood vessels, such as a medical image of vasculature, and can be acquired by medical imaging equipment.
The blood vessel image is an image of blood vessels in some part of a living body, such as a fundus image. A fundus image is an image of the tissue at the back of the eyeball, i.e. of the inner membrane of the eyeball, including the retina, optic disc (papilla), macula, and central retinal artery and vein. Fundus images can be acquired by medical image acquisition equipment such as a fundus camera. In some scenarios, the fundus image may include a retinal image.
In an embodiment, in order to improve segmentation efficiency and accuracy, a blood vessel image such as a fundus image may be divided into a plurality of image blocks; a blood vessel is then segmented from each image block using the method provided by the embodiment of the present invention, and finally the vessel segmentation results of the image blocks are aggregated to obtain the vessel segmentation result of the whole blood vessel image such as a fundus image. In this case, the blood vessel image in this step may be an image block of a blood vessel image such as a fundus image.
Specifically, the step of "acquiring a blood vessel image to be segmented such as a fundus image" may include: dividing an original blood vessel image such as a fundus image into a plurality of blood vessel image blocks such as fundus image blocks; and taking the blood vessel image block as a blood vessel image to be segmented.
For example, referring to fig. 2b, a fundus image to be segmented may be acquired and divided into a plurality of fundus image blocks (patches).
202. Extract features of the blood vessel image to obtain high-level feature information.
In the embodiment of the present application, high-level feature information of a blood vessel image such as a fundus image can be extracted by performing feature extraction on the image with a convolutional neural network, for example by applying convolution or full-convolution operations to the image through the convolutional layers of the network.
In an embodiment, in order to improve the feature extraction efficiency and accuracy, the feature extraction of the blood vessel image, such as the fundus image, can be performed through a feature extraction network based on the U-Net architecture.
The U-Net model is an improved FCN (Fully Convolutional Network) structure, named for its U-shaped architecture, and can be applied to semantic segmentation of medical images. Referring to fig. 2c, it is composed of a left compression path (contracting path) and a right expansion path (expanding path). The compression path is a typical convolutional neural network structure, which repeatedly adopts a structure of 2 convolutional layers followed by 1 max pooling layer, and the number of channels of the feature map is doubled after each pooling operation. In the expansion path, 1 deconvolution operation is first performed to halve the number of channels of the feature map, the result is then concatenated with the cropped feature map from the corresponding level of the compression path to reconstruct a feature map of twice the size, feature extraction is performed with 2 convolutional layers, and this structure is repeated. At the final output layer, the 64-channel feature map is mapped by a convolutional layer into a 2-channel output map.
The U-Net model is an encoding-decoding structure: the compression path is an encoder that extracts the features of the image layer by layer, and the expansion path is a decoder that restores the position information of the image. Each hidden layer of the U-Net model has a large number of feature channels, which helps the model learn more diverse and comprehensive features.
Referring to fig. 2c, the encoder includes several encoding layers, each of which may be composed of a convolutional layer and a pooling layer, and performs convolution operations on the input image to extract image features. The decoder includes decoding layers corresponding to the encoding layers; each decoding layer of the U-Net network concatenates the feature information from the previous layer with the feature information from the corresponding encoding layer, and then performs feature extraction through the convolutional layer.
The high-level feature information, also referred to as high-order feature information, may contain category-related information, high-level abstractions, and the like, while middle-layer feature information generally includes image details such as edges and textures. In a convolutional neural network (CNN), the high-level feature information may be represented as a high-level feature map, and the middle-layer feature information may be represented as a middle-layer feature map. In one embodiment, the middle-layer features are at a lower level than the high-level features but at a higher level relative to the low-level features; therefore, in some scenarios, the middle-layer features may also be referred to as high-level features, such as a high-level feature map.
In a scenario where a convolutional neural network is adopted to extract image features, the feature information finally output by the convolutional neural network is the high-level feature information, such as a feature map; the feature information output by the middle layers located between the first feature extraction layer and the last feature extraction layer is the middle-layer feature information, such as a feature map.
For example, referring to fig. 2c, the higher layer feature information refers to the feature information that is finally output by the U-Net network, such as a feature map, i.e., the feature information output by the last decoding layer; the middle layer feature information may refer to feature information such as a feature map output from other decoding layers in the decoder except the first decoding layer and the last decoding layer.
Referring to fig. 2b, a frame diagram of a vessel segmentation network is provided, including a U-Net network, a convolution layer, a channel attention module, a vessel segmentation branch, a vessel classification branch, and so forth; the embodiment of the application can divide the fundus image into a plurality of fundus image blocks (Patches); and carrying out feature extraction on each fundus image block by adopting a U-Net network to obtain a high-level feature map of each fundus image block.
203. And performing dictionary learning on the high-level characteristic information based on a preset dictionary to obtain dictionary representation corresponding to the high-level characteristic information.
In order to reduce the dimension of the high-level features, learn the best (most essential) features, reduce computational complexity, and improve segmentation efficiency and accuracy, dictionary learning (Dictionary Learning) can be performed on the high-level feature information, and the high-level feature information can be represented by a dictionary, that is, sparsely represented.
For example, sparse dictionary learning (sparse representation learning) may be performed on the high-level feature information based on a preset sparse dictionary, so as to obtain a sparse dictionary representation of the high-level feature information.
For example, assume that the input high-order feature map is X = {x1, ..., xM}. First, dictionary learning is performed on X according to the following formula to obtain the sparse dictionary representation D:

min ||X − DX′||²
Referring to fig. 2b, after the U-Net extracts the high-level feature map, the high-level feature information is convolved multiple times to output high-level features of a predetermined size (for example, 32×64×64); the high-level feature information of the predetermined size is input to the channel attention module, and the channel attention module then performs sparse dictionary learning on the input high-level feature map based on a preset sparse dictionary to obtain a sparse dictionary representation of the high-level features.
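For intuition, the dictionary objective above can be illustrated in a hard (1-sparse) special case, where each feature vector is represented by its single nearest atom. The toy Python below is our simplified illustration, not the patent's actual optimization:

```python
# Toy illustration: represent each feature vector by its nearest dictionary
# atom, a 1-sparse special case of minimizing ||X - DX'||^2.
def nearest_atom(x, dictionary):
    """Return the index of the atom d_j minimizing ||x - d_j||^2."""
    best_j, best_err = 0, float("inf")
    for j, d in enumerate(dictionary):
        err = sum((xi - di) ** 2 for xi, di in zip(x, d))
        if err < best_err:
            best_j, best_err = j, err
    return best_j

D = [[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]]   # N = 3 dictionary atoms
X = [[0.1, -0.1], [0.9, 1.2], [1.8, 0.1]]  # M = 3 feature vectors
codes = [nearest_atom(x, D) for x in X]
```

In practice the dictionary is learned jointly with the codes (and, as noted later, can be optimized end-to-end by back propagation); the nearest-atom assignment here only conveys the geometry of the objective.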
204. And selecting a plurality of channels of the high-level characteristic information according to the dictionary representation to obtain the target characteristic information.
In the embodiment of the application, feature extraction may be performed on the blood vessel image, such as the fundus image, to obtain high-level feature information over a plurality of feature channels. In order to enlarge the receptive field of the network and preserve global texture information, the embodiment of the application can select the more meaningful channels from the plurality of feature channels based on the dictionary representation, such as the feature channels favorable to the classification result, and segment the blood vessels in the image accordingly.
The target feature information may be the high-level feature information of the selected feature channels. For example, feature extraction yields high-level feature information over a plurality of feature channels (dimensions); the target feature channels that are more favorable to the classification result may then be determined from the plurality of feature channels based on the dictionary representation, and the high-level feature information of the target feature channels is taken as the target feature information.
In one embodiment, to highlight feature channels that facilitate classification or segmentation, dictionary learning and residual coding may be combined to assist in screening feature channels of high-level feature information. Specifically, the step of selecting a plurality of channels of the high-level feature information according to the dictionary representation to obtain the target feature information may include:
acquiring residual information between dictionary representation and high-level characteristic information;
coding the residual information to obtain residual coding information of high-level characteristic information;
and selecting a plurality of channels of the high-level characteristic information according to the residual error coding information to obtain target characteristic information.
The residual information may be encoded in various manners. For example, in order to improve the accuracy and efficiency of channel selection, the residual information may be weighted based on a selection weight, where the selection weight is the selection weight of the feature channel corresponding to the residual information. Specifically, the step of "encoding residual information to obtain residual encoded information of high-level feature information" may include:
performing a regularization operation on the residual information according to the smoothing factor to obtain the selection weight of the residual information;
and carrying out weighting processing on the residual information according to the selection weight of the residual information to obtain residual codes of the high-level characteristic information.
For example, after the high-order feature map X = {x1, ..., xM} is extracted by U-Net, dictionary learning and residual coding are performed on the high-order feature map; the specific flow is shown in fig. 2d:
First, the residuals between the sparse dictionary representation and the feature vectors in the high-level feature map are calculated according to the following formula:
r_ij = x_i − d_j,  i = 1, 2, ..., M,  j = 1, 2, ..., N
The residual coding E = {e1, ..., eN} of the high-order feature map can be expressed as:

e_j = Σ_{i=1}^{M} w_ij · r_ij

where w_ij represents the selection weight. The weights may be obtained by a regularization operation according to the following formula, with S = {s1, ..., sN} representing the smoothing factors:

w_ij = exp(−s_j · ||r_ij||²) / Σ_{k=1}^{N} exp(−s_k · ||r_ik||²)
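A toy Python reading of this residual-encoding step, following the soft-assignment form used in residual encoders, is sketched below; the smoothing factors are fixed here for illustration, whereas in the network they would be learned:

```python
import math

# Toy sketch of residual encoding with soft-assignment weights (our reading
# of the formulas above, not the patent's exact implementation).
def residual_encoding(X, D, S):
    """X: M feature vectors; D: N dictionary atoms; S: N smoothing factors.
    Returns E = [e_1, ..., e_N] with e_j = sum_i w_ij * r_ij, where
    w_ij = exp(-s_j * ||r_ij||^2) / sum_k exp(-s_k * ||r_ik||^2)."""
    dim = len(X[0])
    E = [[0.0] * dim for _ in D]
    for x in X:
        residuals = [[xc - dc for xc, dc in zip(x, d)] for d in D]
        scores = [math.exp(-s * sum(rc * rc for rc in r))
                  for r, s in zip(residuals, S)]
        total = sum(scores)
        for j, r in enumerate(residuals):
            w = scores[j] / total          # soft-assignment weight w_ij
            for c in range(dim):
                E[j][c] += w * r[c]
    return E

# One feature vector, two atoms: the nearer atom gets most of the weight.
E = residual_encoding(X=[[1.0, 0.0]], D=[[1.0, 0.0], [0.0, 1.0]], S=[1.0, 1.0])
```

Smaller residuals receive exponentially larger weights, which is what later allows the low-residual portions of E to dominate the channel-selection signal.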
in an embodiment, in order to activate a portion with smaller residual error in residual error coding, to improve accuracy of channel selection, activation processing may be further performed on residual error coding, and feature channels may be screened based on the activated residual error coding, for example, the step of selecting multiple channels of high-level feature information according to residual error coding information to obtain target feature information may include:
activating the residual coding information based on an activation function;
And selecting a plurality of channels of the high-level characteristic information according to the activated residual error coding information to obtain target characteristic information.
For example, after the residual code E is obtained by the above steps, the portion of the residual code E having smaller residuals may be activated as a discrimination vector for selecting the channels of the output vector Y = {y1, ..., yM}. Specifically, the residual code may be subjected to activation processing by the sigmoid activation function.
205. And fusing the target characteristic information with the high-level characteristic information to obtain the channel attention characteristic information.
The feature fusion method can be various, for example, feature addition method can be adopted for fusion.
For example, referring to fig. 2d, after the residual code E is obtained, the portion of the residual code E with smaller residuals may be activated as a discrimination vector for selecting the channels of the output vector Y = {y1, ..., yM}. Specifically, the residual code can be activated by the sigmoid activation function, a channel selection is then performed based on the activated residual code by means of matrix multiplication (mul) to obtain the selected features, and the selected feature information is then fused with the high-level feature map (High-level feature map) to obtain the channel attention feature information. For example, activation and fusion may follow the formula:

Y = X + X ⊗ σ(E)

where σ denotes the sigmoid function and ⊗ denotes channel-wise multiplication.
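A toy numeric sketch of this activate-select-fuse step, under our reading (gate each channel with sigmoid of its code, then add the gated features back to the input), is given below; it is an illustration, not the patent's implementation:

```python
import math

# Minimal channel-attention fusion sketch: y_i = x_i + sigmoid(e_i) * x_i,
# i.e., channel-wise gating followed by a residual (additive) fusion.
def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def channel_attention_fuse(X, e):
    """X: list of per-channel feature vectors; e: one scalar code per channel."""
    out = []
    for x_i, e_i in zip(X, e):
        g = sigmoid(e_i)                       # gate in (0, 1)
        out.append([(1.0 + g) * v for v in x_i])
    return out

# Channel 0 has a neutral code (gate 0.5); channel 1 a very large code (gate ~1).
Y = channel_attention_fuse(X=[[2.0, 4.0], [2.0, 4.0]], e=[0.0, 100.0])
```

The additive term guarantees that no channel is fully suppressed, matching the feature-addition fusion mentioned above.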
For another example, referring to fig. 2b, after U-Net extracts the high-level feature map in the vessel segmentation network, convolution processing is performed on the high-level feature information multiple times to output high-level features of a predetermined size (for example, 32×64×64); the high-level feature information of the predetermined size is input to the channel attention module, and the channel attention module then extracts the channel attention feature information in the manner described above, fuses it with the high-level feature map, and outputs the target feature information.
In an embodiment, the dictionary and the coding process can also serve as independent network layers in deep learning, and the resulting coding dictionary can be continuously optimized through end-to-end back propagation, thereby improving segmentation accuracy.
206. And segmenting the blood vessels in the blood vessel image according to the channel attention characteristic information to obtain a blood vessel segmentation result.
In the embodiment of the application, the blood vessel segmentation is to segment the blood vessel region in the blood vessel image such as the fundus image according to the channel attention characteristic information, that is, to identify or detect the blood vessel region in the blood vessel image such as the fundus image.
In an embodiment, when the blood vessel image is a segmented blood vessel image block, the blood vessel segmentation result of each blood vessel image block can be obtained through the above steps, and the blood vessel segmentation results of the blood vessel image blocks are then aggregated to obtain the blood vessel segmentation result of the original blood vessel image. Specifically, the stitching and combination may be performed in the order in which the blood vessel image was sliced.
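The aggregation step above is the inverse of the earlier slicing; a hypothetical sketch (our own function names and toy data) is:

```python
# Hypothetical sketch: re-assemble per-patch segmentation results into the
# full-image result, using the offsets recorded when the image was sliced.
def stitch_patches(patches, image_h, image_w):
    """patches: list of (row_offset, col_offset, patch) tuples."""
    out = [[0 for _ in range(image_w)] for _ in range(image_h)]
    for top, left, patch in patches:
        for r, row in enumerate(patch):
            for c, v in enumerate(row):
                out[top + r][left + c] = v
    return out

# Four 2x2 per-patch results stitched back into a 4x4 segmentation map.
tiles = [(0, 0, [[1, 1], [1, 1]]), (0, 2, [[2, 2], [2, 2]]),
         (2, 0, [[3, 3], [3, 3]]), (2, 2, [[4, 4], [4, 4]])]
full = stitch_patches(tiles, 4, 4)
```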
In the embodiment of the present application, the segmentation of the blood vessel is to segment the blood vessel from the blood vessel image, for example, segment all blood vessels, or segment blood vessels of various categories such as arteriovenous blood vessels; that is, the present step of vessel segmentation may include vessel segmentation and classification.
For example, the step of "segmenting a blood vessel in a blood vessel image according to channel attention feature information to obtain a blood vessel segmentation result" may include:
dividing blood vessels in the blood vessel image according to the channel attention characteristic information to obtain a blood vessel division image;
classifying the blood vessels in the blood vessel image according to the channel attention characteristic information to obtain a blood vessel classification image, for example, classifying the blood vessel regions (or blood vessel pixels, etc.) in the blood vessel image.
For example, referring to fig. 2b, after the channel attention module outputs the channel attention profile information, it may be input to the vessel segmentation branch and the vessel classification branch, respectively.
In the blood vessel segmentation branch, blood vessels in the fundus image blocks (Patch) can be segmented based on the channel attention characteristic information, so that blood vessel segmentation images of each fundus image block (Patch) are obtained; then, the blood vessel division images of each fundus image block are stitched in a predetermined order, so that a blood vessel division image of the entire fundus image can be obtained.
Wherein the vessel segmentation branch may comprise a convolutional layer and a Sigmoid layer; the channel attention feature information is convolved through the convolutional layer to obtain channel attention feature information of a preset size (such as 1×64×64), the blood vessels (vessel) are then segmented from the fundus image block in the Sigmoid layer based on the channel attention feature information of the preset size to obtain the blood vessel segmentation image of the fundus image block, and finally the blood vessel segmentation images of the fundus image blocks are stitched to obtain the final blood vessel segmentation image of the fundus image.
In the blood vessel classification branch, blood vessels in the fundus image block (Patch) can be classified based on the channel attention characteristic information, and a blood vessel classification image such as an arteriovenous blood vessel image of each fundus image block (Patch) is obtained. Then, the blood vessel classification images of each fundus image block are spliced in a predetermined order, so that a blood vessel classification image such as an arteriovenous blood vessel image of the whole fundus image can be obtained.
Wherein the vessel classification branch may comprise a convolutional layer and a Sigmoid layer; the channel attention feature information is convolved through the convolutional layer to obtain channel attention feature information of a preset size (such as 2×64×64), the fundus image block blood vessels (vessel) are then classified in the Sigmoid layer based on the channel attention feature information of the preset size to obtain the artery (artery) blood vessel image and vein (vein) blood vessel image of the fundus image block, and finally the artery blood vessel images of the fundus image blocks are stitched and the vein blood vessel images of the fundus image blocks are stitched to obtain the final artery blood vessel image and vein blood vessel image of the fundus image.
As can be seen from the above, the embodiment of the present application screens, based on dictionary learning, the feature channels that are more favorable for segmentation from the channels of the high-level feature information to obtain the channel attention feature information, and performs segmentation based on the channel attention feature information, thereby improving segmentation accuracy.
Specifically, the channel attention module provided by the embodiment of the application combines the characteristics of dictionary learning and residual error coding, effectively acquires the region information of the image by coding the high-order features of the input image, and utilizes the obtained coding dictionary to assist in screening more meaningful channels (the feature channels which are more favorable for classification results) in the feature image, so that the accuracy and sensitivity of vessel segmentation, the stability of a vessel segmentation network model and the expressive capacity on a test set can be improved.
The vessel segmentation method described above may be implemented by an AI deep learning network. For example, after a vessel image (or vessel image block) to be segmented is acquired, vessel segmentation may be implemented by a deep-learning vessel segmentation network; in particular, the vessel segmentation network may perform the above steps 202-206, and its framework may refer to fig. 2b.
The vessel segmentation network may be a vessel segmentation network trained on a large number of vessel image datasets. Specific training modes are various, for example, the blood vessel segmentation network can be trained by a back propagation mode based on a large number of marked blood vessel image data sets such as fundus image data sets; specifically, a vessel segmentation loss (e.g., calculated by a loss function) may be calculated from a vessel segmentation result and a vessel labeling result of the network, and the network may be trained based on the vessel segmentation loss.
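For concreteness, a pixel-wise binary cross-entropy is one common choice for such a segmentation loss; the patent does not fix the loss function, so the toy Python below is our illustration only:

```python
import math

# Toy pixel-wise binary cross-entropy between predicted vessel probabilities
# and 0/1 vessel labels, averaged over pixels.
def bce_loss(predictions, labels, eps=1e-7):
    """predictions: per-pixel vessel probabilities; labels: 0/1 ground truth."""
    total = 0.0
    for p, y in zip(predictions, labels):
        p = min(max(p, eps), 1.0 - eps)   # clamp to avoid log(0)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(predictions)

# Confident, correct predictions give a small loss; wrong ones a large loss.
good = bce_loss([0.9, 0.1], [1, 0])
bad = bce_loss([0.1, 0.9], [1, 0])
```

In training, this scalar would be back-propagated through the segmentation network, as described above.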
It is further noted that, because vessel image datasets are typically scanned by different fundus cameras, the inter-domain differences between datasets are large, severely impeding the generalization performance of the deep network.
In order to ensure that the algorithm can actually play a role in assisting diagnosis in clinic, the generalization performance of the model needs to be further improved. Meanwhile, the supervised training method needs to provide additional blood vessel labeling, is time-consuming and expensive, and is impractical for large-scale blood vessel classification and segmentation in eye disease diagnosis. Therefore, there is also a need in the clinic to design an unsupervised training method that does not require additional labeling.
Aiming at the problems, the embodiment of the application provides a vascular segmentation network training method adopting an unsupervised condition field adaptive technology, which can learn the characteristic structure on the existing marked data set and transfer knowledge to the new data set, provide more accurate vascular segmentation such as fundus blood vessel, artery and vein classification results and the like for the unmarked new data set, and effectively improve the generalization performance of a deep network (such as a vascular segmentation network) on other data sets.
According to the unsupervised conditional domain-adaptive training method, a generation countermeasure network (generative adversarial network) including the blood vessel segmentation network (serving as the generation network) can be trained in a domain-adversarial manner; then, the blood vessel segmentation network in the trained generation countermeasure network is adopted to segment, classify, and so on, the blood vessels in unlabeled blood vessel images.
As shown in fig. 3a, a specific flow of a training method for domain adaptation is as follows:
301. and acquiring a target domain blood vessel image and a source domain blood vessel image marked with blood vessel information.
The source domain blood vessel image is a blood vessel image marked with blood vessel information, such as a fundus image, and can be obtained from a data set marked with blood vessel information, and the data set can be called a source domain. For example, an annotated source field fundus image may be acquired from the source field.
The target domain blood vessel image may be an unlabeled blood vessel image to be segmented (or classified), such as a fundus image, which may be acquired from a data set to be classified, which may be referred to as a target domain. For example, a target domain fundus image is acquired from a target domain.
Assuming that the fully labeled dataset is the source domain and the dataset to be classified is the target domain, data augmentation operations (slicing, up-down and left-right flipping, brightness changes, and the like) are first performed on the initial blood vessel images, such as fundus color images, in the source domain and the target domain.
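The named augmentation operations can be sketched on a toy 2-D image as follows (function names are our own; slicing was shown separately in step 201):

```python
import random

# Sketch of the augmentation operations named above: left-right flip,
# up-down flip, and an additive brightness change.
def flip_lr(image):
    return [row[::-1] for row in image]

def flip_ud(image):
    return image[::-1]

def adjust_brightness(image, delta):
    return [[v + delta for v in row] for row in image]

def random_augment(image, rng):
    """Apply each flip with probability 0.5, then a random brightness shift."""
    if rng.random() < 0.5:
        image = flip_lr(image)
    if rng.random() < 0.5:
        image = flip_ud(image)
    return adjust_brightness(image, rng.uniform(-0.1, 0.1))

img = [[1, 2], [3, 4]]
augmented = random_augment(img, random.Random(0))
```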
Referring to fig. 3b, taking a blood vessel image as an example of a fundus image, a Source field fundus image (Source) is extracted from a Source field, and a Target field fundus image (Target) is extracted from a Target field.
In an embodiment, in order to improve training efficiency and segmentation effect, the fundus image in the source domain and the fundus image in the target domain may be segmented to obtain a fundus image block (patch) of the fundus image in the source domain and a fundus image block of the fundus image in the target domain. At this time, the target field fundus image is a target field fundus image block, and the source field fundus image is a source field fundus image block.
302. And adopting a blood vessel segmentation network in the generation countermeasure network to respectively perform feature extraction on the target domain blood vessel image and the source domain blood vessel image, so as to obtain source domain high-level feature information and source domain middle-layer feature information of the source domain blood vessel image, and target domain high-level feature information and target domain middle-layer feature information of the target domain blood vessel image.
Referring to fig. 2b and 3b above, the vessel segmentation network may include a feature extraction sub-network, which may be a weight-shared feature extraction network (e.g., a twin network), such as a U-Net network. The Source domain fundus image (Source) and the Target domain fundus image (Target) are respectively input into the feature extraction sub-network of the blood vessel segmentation network, and the feature extraction sub-network respectively performs feature extraction on the target domain fundus image and the source domain fundus image, for example, by convolution operations, so as to obtain the source domain high-level feature map and source domain middle-layer feature map f_S of the source domain fundus image, and the target domain high-level feature map and target domain middle-layer feature map f_T of the target domain fundus image.
The description of the high-level feature information and the middle-layer feature information may refer to the above embodiments. For example, when the feature extraction sub-network is a U-Net network, the high-level feature information is the feature information finally output by the U-Net network, such as a feature map, and the middle-layer feature information is the feature information, such as a feature map, output by the penultimate decoding layer in the decoder of the U-Net network.
303. And dividing the blood vessel in the source domain blood vessel image based on the blood vessel division network and the source domain high-level characteristic information to obtain a blood vessel division prediction result.
Specifically, the vessel segmentation process may refer to the description of the above embodiment. For example, referring to fig. 2b, after the feature extraction sub-network in the vessel segmentation network extracts the source domain high-level feature information, the channel attention module may screen the channels of the feature information and output the channel attention feature information, and the channel attention feature information is input to the blood vessel segmentation branch and the blood vessel classification branch, so as to obtain the blood vessel segmentation image, blood vessel classification image, and so on.
304. And adopting a discrimination network in the preset generation countermeasure network to perform source-domain discrimination on the source domain middle-layer feature information and the target domain middle-layer feature information, so as to obtain a discrimination prediction result.
The source-domain discrimination of a feature refers to discriminating whether the feature input to the discrimination network comes from the source domain or the target domain. In the initial stage of training, the discrimination network can accurately discriminate whether an input feature comes from the source domain or the target domain. However, in order to improve the performance and segmentation accuracy of the network, the embodiment of the application continuously improves the performance of both the segmentation network and the discrimination network through adversarial training, so that the discrimination network eventually cannot determine whether an input fused feature comes from the source domain or the target domain. This drives the target domain to continuously learn the feature distribution of the source domain, and when the network converges, the obtained network has the best performance and high segmentation accuracy.
Because the conventional domain-adaptive training method cannot capture the multi-modal structure of complex data and focuses only on the overall distribution of data features while neglecting the correlation among categories, domain classification is prone to errors and the knowledge transfer effect is not good enough.
Therefore, for the fundus blood vessel segmentation and classification (such as arteriovenous) multi-task model, the embodiment of the application provides a domain-adaptive method based on a conditional adversarial network. In particular, in order to describe the relation between features and categories, the embodiment of the application provides a feature fusion scheme: the feature information of the blood vessel categories can be extracted from the feature information (such as a high-order feature map) in the source domain and the target domain, and the feature information of the blood vessel categories is then fused with the input middle-layer features, so that the discrimination network discriminates the source domain of the input fused feature information. This drives the target domain to continuously learn the feature distribution of the source domain, improving the network performance and transfer effect.
For example, the discrimination network in the generation countermeasure network may include a feature fusion module and a discrimination sub-network, which in one embodiment may also be considered a discriminator. The step of adopting a discrimination network in the preset generation countermeasure network to perform source-domain discrimination on the source domain middle-layer feature information and the target domain middle-layer feature information to obtain a discrimination prediction result includes:
adopting the feature fusion module to respectively perform feature extraction on the source domain middle-layer feature information and the target domain middle-layer feature information, so as to obtain the feature information of the blood vessel categories in the target domain blood vessel image and the feature information of the blood vessel categories in the source domain blood vessel image;
adopting the feature fusion module to fuse the feature information of the blood vessel categories in the target domain blood vessel image with the target domain middle-layer feature information, so as to obtain first fused feature information corresponding to the target domain blood vessel image;
adopting the feature fusion module to fuse the feature information of the blood vessel categories in the source domain blood vessel image with the source domain middle-layer feature information, so as to obtain second fused feature information corresponding to the source domain blood vessel image;
and adopting a discrimination sub-network to discriminate the source domain of the first fused characteristic information and the second fused characteristic information, and obtaining a discrimination prediction result.
The blood vessel category may be classified according to actual requirements, for example, classified according to medical fields, and may include venous blood vessels, arterial blood vessels, and the like. Of course, in some scenarios, there may be other divisions, such as coarse blood vessels, fine blood vessels, and the like.
The feature information of the blood vessel categories can be obtained by performing convolution and up-sampling operations on the middle-layer feature information in the target domain.
In an embodiment, in order to better highlight the blood vessel categories and describe the relation between the features and the blood vessel categories, so as to improve the performance of the network, the type information of the blood vessel categories can be fused to obtain multi-modal feature information containing blood vessel category information, and the multi-modal feature information is then fused with the input feature information. For example, the step of "adopting a feature fusion module to fuse the feature information of the blood vessel categories in the target domain blood vessel image with the target domain middle-layer feature information to obtain first fused feature information corresponding to the target domain blood vessel image" may include:
adopting a feature fusion module to mutually fuse the feature information of the blood vessel category in the target domain blood vessel image to obtain multi-mode feature information containing the blood vessel category information;
And fusing the multi-mode characteristic information with the middle-layer characteristic information in the target domain by adopting a characteristic fusion module to obtain first fused characteristic information corresponding to the target domain blood vessel image.
Likewise, the step of "fusing, by using a feature fusion module, feature information of a blood vessel category in a source domain blood vessel image and feature information of a source domain layer to obtain second fused feature information corresponding to the source domain blood vessel image" may include:
adopting a feature fusion module to mutually fuse the feature information of the blood vessel category in the source domain blood vessel image to obtain multi-mode feature information containing the blood vessel category information;
and fusing the multi-mode characteristic information with the source domain layer characteristic information by adopting a characteristic fusion module to obtain second fused characteristic information corresponding to the source domain blood vessel image.
The feature information of the blood vessel categories can be fused in various ways; for example, fusion may be performed by feature stitching (concatenation), or by matrix multiplication of feature vectors.
The multi-mode feature information and the middle-layer feature information may also be fused in various ways. For example, a feature mapping approach may be adopted: the extracted middle-layer feature information and the multi-mode features containing blood vessel category information (such as arteriovenous category information) are mapped to each other to obtain the final fused features.
For example, referring to FIG. 3c, a schematic diagram of feature fusion is shown. In FIG. 3c, the network obtains a high-order feature map X ∈ R^(C×W×H) and performs convolution and up-sampling operations on it to obtain a feature map f of the arterial information and a feature map g of the venous information. For example, a convolution operation (Conv) with a 3×3 kernel is performed on the high-order feature map X, an up-sampling operation (Upsampling) is then performed on the resulting feature map, and a convolution operation (Conv) with a 1×1 kernel is finally applied to the up-sampled feature map, so as to obtain an artery feature map (Artery feature map) and a vein feature map (Vein feature map).
Referring to fig. 3c, for ease of calculation, the two maps may be denoted as f(x) = W_f·x and g(x) = W_g·x, where W_f and W_g represent the weight matrices resulting from the multi-layer convolutions. To accommodate matrix multiplication, the spatial dimensions of each feature map are flattened into an N-dimensional vector (N = W×H), and the f matrix is multiplied by the g matrix to obtain an output matrix S:
s_ij = f(x_i)^T · g(x_j)
Then, referring to fig. 3c, the S matrix is normalized by a Sigmoid function to obtain a matrix β, where β_ij represents the degree of feature fusion between region i and position j of the model:
Referring to fig. 3c, the final fused feature output, obtained by mutually mapping the initial high-order feature map X and the multi-modal features containing the artery and vein category information, is:
According to the embodiment of the invention, the joint-condition-based domain adaptation network can effectively capture the multi-modal features of the artery and vein categories in a blood vessel image such as a fundus image, and can also discover the dependency relationships within the features. By fusing the category features and the high-order features through mutual mapping, the network better matches the multi-modal distribution of a complex domain, and the migration capability of the network model is improved.
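As an illustrative sketch of the mutual-mapping fusion described above (the exact β and output formulas appear in FIG. 3c and are not reproduced in the text), the following NumPy code computes f(x) = W_f·x and g(x) = W_g·x, forms S with s_ij = f(x_i)^T·g(x_j), normalizes it with a Sigmoid, and combines the result with X. Collapsing the conv + upsampling stages into single weight matrices, and adding the mapped features back residually, are simplifying assumptions rather than the patent's exact formulation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mutual_mapping_fusion(X, W_f, W_g):
    """Sketch of the mutual-mapping feature fusion.

    X   : (C, W, H) high-order feature map.
    W_f : (C2, C) weights standing in for the conv + upsampling path
          that produces the artery feature map f (simplifying assumption).
    W_g : (C2, C) weights for the vein feature map g (same assumption).
    """
    C, Wd, H = X.shape
    Xf = X.reshape(C, Wd * H)        # flatten spatial dims: N = W * H
    f = W_f @ Xf                     # f(x) = W_f x, shape (C2, N)
    g = W_g @ Xf                     # g(x) = W_g x, shape (C2, N)
    S = f.T @ g                      # (N, N): s_ij = f(x_i)^T . g(x_j)
    beta = sigmoid(S)                # Sigmoid-normalized fusion matrix
    fused = Xf @ beta                # map class context back onto X (assumed form)
    return (Xf + fused).reshape(C, Wd, H)   # residual combination (assumed form)
```

The N×N matrix grows quadratically with patch size, which is why the fusion is applied to small feature-map patches rather than whole images.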
305. Training the generated countermeasure network according to the blood vessel segmentation prediction result and the discrimination prediction result to obtain the trained generated countermeasure network.
Specifically, according to a blood vessel segmentation prediction result and a blood vessel labeling result of a source domain blood vessel image, obtaining blood vessel segmentation loss;
acquiring the discrimination loss of the discrimination network according to the real source domains of the source-domain middle-layer characteristic information and the target-domain middle-layer characteristic information and the discrimination prediction result;
and training the generation countermeasure network according to the discrimination loss and the blood vessel segmentation loss, so as to obtain the trained generation countermeasure network.
For example, referring to fig. 3b, lseg (segmentation loss) and Ladv (discrimination loss) may be calculated, and then the generation of the countermeasure network is iteratively trained based on both.
In one embodiment, to enhance network performance, a maximum-minimization (max-min) training scheme similar to that of a generative adversarial network may be employed. Specifically, the step of training the generation countermeasure network according to the discrimination loss and the blood vessel segmentation loss to obtain the trained generation countermeasure network includes:
constructing a minimized countermeasure loss of the generation countermeasure network according to the discrimination loss and the vessel segmentation loss;
constructing a maximized countermeasure loss of the generation countermeasure network according to the discrimination loss;
and performing iterative training on the generated countermeasure network based on minimizing the countermeasure loss and maximizing the countermeasure loss, so as to obtain the trained generated countermeasure network.
For example, iterative training of the generated antagonism network is based on minimizing and maximizing antagonism loss.
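The max-min objectives above can be sketched in NumPy as follows. The discriminator output patches, the domain labels (source = 1, target = 0), and the weight α are modeled in a simplified, assumed form rather than reproducing the patent's exact formulas:

```python
import numpy as np

def bce(pred, label, eps=1e-7):
    """Binary cross-entropy averaged over a discriminator output patch."""
    pred = np.clip(pred, eps, 1 - eps)
    return float(-np.mean(label * np.log(pred) + (1 - label) * np.log(1 - pred)))

def generator_loss(seg_loss, d_on_target, alpha=0.001):
    # Minimized objective: segmentation loss plus an adversarial term that
    # rewards target-domain features the discriminator labels as source (1).
    return seg_loss + alpha * bce(d_on_target, np.ones_like(d_on_target))

def discriminator_loss(d_on_source, d_on_target):
    # Maximized objective, written as a minimized BCE over the true domains:
    # source features are labeled 1, target features are labeled 0.
    return bce(d_on_source, np.ones_like(d_on_source)) + \
           bce(d_on_target, np.zeros_like(d_on_target))
```

In each training iteration, the segmentation network (generator) is updated against `generator_loss` while the discriminator is updated against `discriminator_loss`, alternating between the two.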
In one embodiment, to improve network segmentation accuracy, the segmentation loss may employ a cross-entropy loss or the like.
For example, the minimization of the countermeasures loss can be calculated by the following loss function:
wherein Lseg is the cross-entropy loss term of the segmentation, and the latter term in the formula is the discrimination loss of the discriminator. The goal of this loss function is to optimize the segmentation cross-entropy term while encouraging the network to extract target-domain features that are closer to the source-domain features.
For another example, the maximized countering loss can be calculated by the following loss function:
where m and n represent the length and width of the output patch, and α represents a weight parameter of the network. L_seg, the training loss function of the segmentation network, is mainly composed of a binary cross-entropy loss and an L2 regularization loss on the network parameters, and Lseg can be obtained through the following formula:
where θ represents the parameters of the network; λ represents the weight of the regularization term; c indexes the output channels; and the weight μ_c assigned to each class is: all blood vessels 3/7, arteries 2/7, veins 2/7.
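The weighted segmentation loss just described (a per-class binary cross-entropy weighted by μ_c = 3/7, 2/7, 2/7, plus an L2 term on the parameters θ) might be sketched as follows; the dictionary-based interface and the default λ value are assumptions made for illustration:

```python
import numpy as np

MU = {"vessel": 3 / 7, "artery": 2 / 7, "vein": 2 / 7}  # class weights from the text

def segmentation_loss(preds, labels, params=(), lam=1e-4, eps=1e-7):
    """preds/labels: dicts mapping class name -> probability / ground-truth maps.

    params: iterable of weight arrays theta for the L2 term; lam is an assumed default.
    """
    loss = 0.0
    for c, mu_c in MU.items():
        p = np.clip(preds[c], eps, 1 - eps)
        y = labels[c]
        loss -= mu_c * np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))  # weighted BCE
    loss += lam * sum(float(np.sum(w ** 2)) for w in params)             # L2 on theta
    return float(loss)
```

Giving the all-vessel channel a slightly larger weight than the artery and vein channels reflects that the binary segmentation target supports the two classification targets.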
From the above, the embodiments of the present application propose to segment and classify fundus blood vessels using an unsupervised conditional domain adaptation technique. By introducing the fused-feature alignment method, more category-related multi-mode features can be retained, effectively improving the generalization performance of the model on other datasets.
In order to describe the relationship between the high-order features and the category features, the embodiment of the application designs a novel feature fusion method that preserves the important components of the features through mutual mapping, so that the network better matches the multimodal distribution of the complex domain. In addition, by applying this scheme, the model can obtain the optimal domain-migration generalization performance on the new public dataset HRF and the existing public dataset INSPIRE.
In addition, in order to verify the effect of the segmentation scheme provided in the embodiment of the present application, experimental tests were also performed on the scheme provided in the embodiment of the present application in each data set:
table 1 shows the results of a comparative experiment of the channel attention module on the AV-DRIVE dataset: table 1 lists whether the performance of the model by adding a channel attention module based on dictionary learning and residual coding, taking the AV-DRIVE dataset as an example, it can be seen that the Accuracy (ACC) of the arteriovenous classification can be improved by 0.7% and the Sensitivity (SE) can be improved by 1.82% when the channel attention module (CA) is adopted. The performance is also improved for the simple task of vessel segmentation, where SP (specificity) is specificity and AUC (Area Under Curve) is ROC curve area.
TABLE 1
Table 2 shows the results of ablation experiments on arteriovenous classification on the HRF dataset. Table 2 lists the model performance of the domain adaptation method designed by the embodiments of the present application under different module combinations. The source domain is the AV-DRIVE dataset, and the target domain is the HRF dataset. Baseline represents the model trained on the AV-DRIVE dataset and tested on the HRF dataset; DA represents adaptive alignment of only the high-order features, and JCDA represents adaptive alignment based on the fused features. It can be seen that the fusion-feature-based conditional domain adaptation method improves the accuracy of the arteriovenous classification by 5.81% and the sensitivity by 12.3%, and introducing the channel attention mechanism (CA) further improves the accuracy by 1.14% and the sensitivity by 3.63%.
TABLE 2
Table 3 shows the arteriovenous classification results of the proposed domain adaptation method and other domain adaptation methods on the HRF and INSPIRE datasets. Table 3 compares the method proposed in the embodiment of the present application with other domain adaptation methods, where the source domain is the AV-DRIVE dataset and the target domains are the HRF and INSPIRE datasets, respectively. It can be seen that the proposed method achieves the optimal vessel classification results on both datasets.
TABLE 3
The method described in the previous examples is described in further detail below by way of example.
In this embodiment, a case where the blood vessel dividing apparatus is specifically integrated in a computer device and a blood vessel image is a fundus image will be described.
First, training against the network is generated.
The generation countermeasure network may include a blood vessel segmentation network and a discrimination network, the blood vessel segmentation network serving as the generator and the discrimination network serving as the discriminator. The structure of the vessel segmentation network may refer to the description of the above embodiment; for example, the vessel segmentation network includes a U-Net feature extraction network, a channel attention module, a vessel segmentation branch, a vessel classification branch, a feature fusion module, and the like. The vessel segmentation network can be a twin vessel segmentation network with shared weights.
Firstly, a computer device can acquire a source domain data set (source domain for short) and a target domain data set (target domain for short), wherein the source domain data set comprises a sample fundus image with blood vessel information completely marked; the target field includes an unlabeled fundus image.
The computer device performs data augmentation operations such as segmentation, rotation, and brightness adjustment on the fundus images in the target domain and the source domain. After this, the source domain includes a plurality of labeled fundus image blocks (patches), and the target domain includes a plurality of unlabeled fundus image blocks.
Then, referring to fig. 3b, the computer device inputs fundus image blocks from the target domain and the source domain into the segmentation networks with shared weights, extracts the high-order features f_S and f_T respectively, extracts the category-related multi-mode features g_S and g_T from the network, and fuses the two kinds of features of the source domain and the target domain according to the designed feature fusion mode. The segmentation network includes a channel attention module based on dictionary learning and residual coding, which assists the feature selection process.
After obtaining the discrimination prediction result and the segmentation prediction result, the computer device may generate a maximum countermeasure loss and a minimum countermeasure loss (e.g., calculated according to the loss function described above) of the countermeasure network, and perform iterative training on the segmentation network and the discrimination network through the maximum countermeasure loss and the minimum countermeasure loss, so as to obtain a trained generated countermeasure network. In particular, the training manner of generating the countermeasure network can be referred to the description of the above embodiments.
And secondly, carrying out segmentation processing on blood vessels in the fundus image to be segmented through a blood vessel segmentation network in the training generation countermeasure network.
As shown in fig. 4, a blood vessel segmentation method of fundus images specifically includes the following steps:
401. the computer equipment performs slicing processing on the fundus image to be segmented to obtain a plurality of fundus image blocks.
Referring to fig. 2b, the computer device acquires a Fundus image (Fundus image) to be segmented from the target domain data set, and segments the Fundus image into a plurality of Fundus image blocks (Patches).
402. The computer equipment adopts a feature extraction network in the blood vessel segmentation network to perform feature extraction on the fundus image block to obtain high-level feature information.
For example, fig. 2b may input fundus image tiles to a vessel segmentation network, through which a high-level feature map of fundus image tiles is extracted.
403. The computer equipment adopts the channel attention module of the blood vessel segmentation network to perform dictionary learning and residual coding on the high-level characteristic information, so as to obtain residual coding information.
The specific dictionary learning and residual coding processes are described with reference to fig. 2d and the related description above.
404. The computer equipment selects a plurality of channels of the high-level characteristic information according to the residual error coding information through the channel attention module, and fuses the selected target characteristic information with the high-level characteristic information to obtain the channel attention characteristic information.
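Steps 403 and 404 can be illustrated with a minimal NumPy sketch of residual encoding followed by channel selection. The exact layer shapes, the pooling over codewords, and the Sigmoid gating are assumed details for illustration, not the patent's precise formulation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(X, codewords, smoothing):
    """Residual-encoding channel attention sketch.

    X         : (C, N) high-level features at N spatial positions.
    codewords : (K, C) learned dictionary entries d_k.
    smoothing : (K,) smoothing factors s_k for the soft assignment.
    """
    # residuals r_ik = x_i - d_k, shape (N, K, C)
    r = X.T[:, None, :] - codewords[None, :, :]
    # soft-assignment weights: softmax over codewords of -s_k * ||r_ik||^2
    logits = -smoothing[None, :] * np.sum(r ** 2, axis=2)        # (N, K)
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    # aggregate the weighted residuals into one encoding per channel (assumed pooling)
    e = np.sum(w[:, :, None] * r, axis=(0, 1))                   # (C,)
    gate = sigmoid(e)                # channel selection weights (assumed gating)
    return X * gate[:, None]         # re-weighted features fused with the input
```

The gate acts per channel, so channels whose residual encoding is large are amplified while the spatial layout of the features is left untouched.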
405. The computer device inputs the channel attention characteristic information to a vessel segmentation branch and a vessel classification branch of the vessel segmentation network, respectively.
406. The computer equipment segments the blood vessels in the fundus image through the blood vessel segmentation branch to obtain blood vessel segmentation image blocks of the fundus image blocks; and the computer equipment splices the blood vessel segmentation image blocks of each image block to obtain a complete blood vessel segmentation image of the fundus image.
The structure of the vessel segmentation branch may be as described in fig. 2b and above.
407. The computer equipment classifies the blood vessels in the fundus image through the blood vessel classification branch to obtain blood vessel classification image blocks; and the computer equipment splices the blood vessel classification image blocks of each image block to obtain a complete blood vessel classification map of the fundus image.
The structure of the vessel classification branch may be as described in fig. 2b and above.
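Steps 401, 406 and 407 rely on slicing the fundus image into blocks and splicing the per-block predictions back together. A minimal sketch, assuming non-overlapping tiles whose size divides the image dimensions, is:

```python
import numpy as np

def slice_patches(image, size):
    """Slice an H x W image into non-overlapping size x size blocks.

    Assumes H and W are multiples of `size`, for brevity.
    """
    H, W = image.shape[:2]
    return [(y, x, image[y:y + size, x:x + size])
            for y in range(0, H, size)
            for x in range(0, W, size)]

def stitch_patches(patches, shape):
    """Splice per-block segmentation outputs back into the full-image map."""
    out = np.zeros(shape, dtype=patches[0][2].dtype)
    for y, x, p in patches:
        out[y:y + p.shape[0], x:x + p.shape[1]] = p
    return out
```

In practice, overlapping patches with averaged predictions in the overlap regions are often used to suppress seam artifacts; the non-overlapping variant above keeps the sketch short.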
As can be seen from the above, the segmentation scheme provided by the embodiment of the application can, based on the features of dictionary learning and residual coding, better utilize the regional knowledge carried by the coding layer to assist the selection and judgment of the output features, preserving global texture information and solving the problem that texture information is easily lost after a small target sample is down-sampled many times. Meanwhile, the feature-fusion-based conditional domain adaptation method also provided by the embodiment of the application can learn the feature structure on an existing labeled dataset and transfer that knowledge to a new dataset, thereby providing more accurate fundus blood vessel artery and vein classification results for a new, unlabeled dataset.
The scheme provided by the embodiment of the application can provide support for related researches such as clinical researches on fundus blood vessel and systemic diseases, biomarkers of cardiovascular and cerebrovascular diseases and the like. Meanwhile, the scheme can help to quantify and predict the development progress of systemic diseases and predict the risk factors of cardiovascular and cerebrovascular diseases by learning and transferring the distribution knowledge of various data.
In order to better implement the above method, the embodiment of the present application further provides a blood vessel segmentation apparatus, which may be integrated in a computer device, such as a server or a terminal.
For example, as shown in fig. 5a, the blood vessel segmentation apparatus may include an acquisition unit 501, a feature extraction unit 502, a learning unit 503, a selection unit 504, a fusion unit 505, a segmentation unit 506, and the like, as follows:
an acquisition unit 501 for acquiring a blood vessel image to be segmented;
the feature extraction unit 502 is configured to perform feature extraction on the blood vessel image to obtain high-level feature information;
a learning unit 503, configured to perform dictionary learning on the high-level feature information based on a preset dictionary, so as to obtain a dictionary representation corresponding to the high-level feature information;
a selecting unit 504, configured to select a plurality of channels of the high-level feature information according to the dictionary representation, so as to obtain target feature information;
A fusion unit 505, configured to fuse the target feature information with the high-level feature information to obtain channel attention feature information;
and a segmentation unit 506, configured to segment the blood vessel in the blood vessel image according to the channel attention feature information, so as to obtain a blood vessel segmentation result.
In some embodiments, referring to fig. 5b, the selecting unit 504 includes:
a residual sub-unit 5041 for obtaining residual information between the dictionary representation and the high-level feature information;
a coding subunit 5042, configured to code the residual information to obtain residual coding information of the high-level feature information;
a selecting subunit 5043, configured to select, according to the residual coding information, a plurality of channels of the high-level feature information, so as to obtain target feature information.
In some embodiments, the encoding subunit 5042 is configured to:
regularization operation is carried out on the residual coding information according to the smoothing factor, so that the selection weight of the residual information is obtained;
and weighting the residual information according to the selection weight of the residual information to obtain residual codes of the high-level characteristic information.
In some embodiments, the selection subunit 5043 is configured to:
Activating the residual coding information based on an activation function;
and selecting a plurality of channels of the high-level characteristic information according to the activated residual error coding information to obtain target characteristic information.
In some embodiments, referring to fig. 5c, the dividing unit 506 includes:
a segmentation subunit 5061, configured to segment a blood vessel in the blood vessel image according to the channel attention feature information, so as to obtain a blood vessel segmentation image;
and a classifying subunit 5062, configured to classify the blood vessels in the blood vessel image according to the channel attention feature information, so as to obtain a blood vessel classification image.
In some embodiments, the obtaining unit 501 is configured to: dividing an original blood vessel image into a plurality of blood vessel image blocks; taking the blood vessel image block as a blood vessel image to be segmented;
the segmentation unit 506 is further configured to aggregate the vessel segmentation result of the vessel image block, to obtain a vessel segmentation result of the original vessel image.
In some embodiments, the segmentation unit 506 is configured to segment the blood vessel in the blood vessel image according to the channel attention feature information by using a blood vessel segmentation network in the training generation countermeasure network.
In some embodiments, referring to fig. 5d, the vessel segmentation device may further comprise:
a sample acquiring unit 507, configured to acquire a target domain blood vessel image and a source domain blood vessel image labeled with blood vessel information;
the sample feature extraction unit 508 is configured to perform feature extraction on the target domain blood vessel image and the source domain blood vessel image by using a preset generation countermeasure network to obtain source domain high-level feature information and source domain medium-level feature information of the source domain blood vessel image, and target domain high-level feature information and target domain medium-level feature information of the target domain blood vessel image;
the sample segmentation unit 509 is configured to segment a blood vessel in the source domain blood vessel image based on a blood vessel segmentation network and the source domain high-level feature information, so as to obtain a blood vessel segmentation prediction result;
the judging unit 510 is configured to judge a source domain of the layer feature information in the source domain and the layer feature information in the target domain by adopting a judging network in a preset generation countermeasure network, so as to obtain a judging prediction result;
and a training unit 511, configured to train the generation countermeasure network according to the vessel segmentation prediction result and the discrimination prediction result, so as to obtain the trained generation countermeasure network.
In some embodiments, referring to fig. 5e, the training unit 511 comprises:
a first loss obtaining subunit 5111, configured to obtain a blood vessel segmentation loss according to the blood vessel segmentation prediction result and the blood vessel labeling result of the source domain blood vessel image;
a second loss obtaining subunit 5112, configured to obtain a discrimination loss of the discrimination network according to the real source domain of the source domain layer feature information and the real source domain of the target domain layer feature information and the discrimination prediction result;
and a training subunit 5113, configured to train the generation countermeasure network according to the discrimination loss and the vessel segmentation loss, so as to obtain the trained generation countermeasure network.
In some embodiments, the training subunit 5113 is configured to:
constructing a minimized countermeasure loss of a countermeasure network according to the discrimination loss and the vessel segmentation loss;
constructing a maximized countermeasure loss of the generation countermeasure network according to the discrimination loss;
and performing iterative training on the generation countermeasure network based on the minimized countermeasure loss and the maximized countermeasure loss, so as to obtain the trained generation countermeasure network.
In some embodiments, the discrimination network includes a feature fusion module and a discrimination subnetwork;
The discriminating unit 510 is configured to judge the source domains of the source-domain middle-layer feature information and the target-domain middle-layer feature information by using the discrimination network in the preset generation countermeasure network to obtain a discrimination prediction result, specifically by:
adopting a feature fusion module to respectively perform feature extraction on the source-domain middle-layer feature information and the target-domain middle-layer feature information to obtain the feature information of the blood vessel category in the target domain blood vessel image and the feature information of the blood vessel category in the source domain blood vessel image;
adopting a feature fusion module to fuse the feature information of the blood vessel category in the target domain blood vessel image with the feature information of the middle layer in the target domain to obtain first fused feature information corresponding to the target domain blood vessel image;
adopting a feature fusion module to fuse the feature information of the blood vessel category in the source domain blood vessel image with the feature information of the source domain layer to obtain second fused feature information corresponding to the source domain blood vessel image;
and judging the source domain of the first fused characteristic information and the source domain of the second fused characteristic information by adopting the judging sub-network to obtain a judging and predicting result.
In some embodiments, the sample segmentation unit 509 is configured to:
Performing dictionary learning on source domain high-level characteristic information of a source domain blood vessel image by adopting the blood vessel segmentation network to obtain a sample dictionary corresponding to the source domain high-level characteristic information;
selecting a plurality of channels of the source domain high-level characteristic information by adopting the blood vessel segmentation network according to the sample dictionary to obtain source domain channel attention characteristic information;
and dividing the blood vessels in the source domain blood vessel image according to the source domain channel attention characteristic information by adopting the blood vessel division network to obtain a blood vessel division prediction result of the source domain blood vessel image.
In the implementation, each unit may be implemented as an independent entity, or may be implemented as the same entity or several entities in any combination, and the implementation of each unit may be referred to the foregoing method embodiment, which is not described herein again.
As can be seen from the above, in the embodiment of the present application, a blood vessel image to be segmented, such as a fundus image, can be acquired by the acquisition unit 501; feature extraction is performed on the blood vessel image such as a fundus image by the feature extraction unit 502 to obtain high-level feature information; dictionary learning is performed on the high-level feature information by the learning unit 503 based on a preset dictionary, so as to obtain a dictionary representation corresponding to the high-level feature information; a plurality of channels of the high-level feature information are selected according to the dictionary representation by the selecting unit 504 to obtain target feature information; the target feature information and the high-level feature information are fused by the fusion unit 505 to obtain channel attention feature information; and the blood vessels in the blood vessel image, such as a fundus image, are segmented by the segmentation unit 506 according to the channel attention feature information to obtain a blood vessel segmentation result. In this scheme, feature channels that are more favorable for segmentation are screened from the channels of the high-level feature information based on dictionary learning to obtain the channel attention feature information, and segmentation is performed based on the channel attention feature information. This effectively enlarges the receptive field of feature extraction, so that the features retain the global information of blood vessel images such as fundus images, the loss of global information in the features of blood vessel images such as fundus images is avoided, and the segmentation accuracy of blood vessel images such as fundus images is greatly improved.
The embodiment of the application further provides a computer device, as shown in fig. 6, which shows a schematic structural diagram of the computer device according to the embodiment of the application, specifically:
The computer device may include a processor 601 with one or more processing cores, a memory 602 of one or more computer-readable storage media, a power supply 603, an input unit 604, and other components. Those skilled in the art will appreciate that the computer device structure shown in FIG. 6 does not limit the computer device, and the device may include more or fewer components than shown, combine certain components, or use a different arrangement of components. Wherein:
processor 601 is the control center of the computer device and uses various interfaces and lines to connect the various parts of the overall computer device, perform various functions of the computer device and process data by running or executing software programs and/or modules stored in memory 602, and invoking data stored in memory 602. Optionally, the processor 601 may include one or more processing cores; preferably, the processor 601 may integrate an application processor and a modem processor, wherein the application processor primarily handles operating systems, user interfaces, applications, etc., and the modem processor primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 601.
The memory 602 may be used to store software programs and modules, and the processor 601 may execute various functional applications and data processing by executing the software programs and modules stored in the memory 602. The memory 602 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required for at least one function, and the like; the storage data area may store data created according to the use of the computer device, etc. In addition, the memory 602 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device. Accordingly, the memory 602 may also include a memory controller to provide access to the memory 602 by the processor 601.
The computer device further includes a power supply 603 for powering the various components, preferably, the power supply 603 can be logically coupled to the processor 601 through a power management system, such that functions of managing charging, discharging, and power consumption are performed by the power management system. The power supply 603 may also include one or more of any components, such as a direct current or alternating current power supply, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
The computer device may also include an input unit 604, which input unit 604 may be used to receive entered numerical or character information and to generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control.
Although not shown, the computer device may further include a display unit or the like, which is not described herein. In particular, in this embodiment, the processor 601 in the computer device loads executable files corresponding to the processes of one or more application programs into the memory 602 according to the following instructions, and the processor 601 executes the application programs stored in the memory 602, so as to implement various functions as follows:
acquiring a blood vessel image to be segmented, such as a fundus image; extracting features of the blood vessel image, such as the fundus image, to obtain high-level feature information; performing dictionary learning on the high-level feature information based on a preset dictionary to obtain a dictionary representation corresponding to the high-level feature information; selecting a plurality of channels of the high-level feature information according to the dictionary representation to obtain target feature information; fusing the target feature information with the high-level feature information to obtain channel attention feature information; and segmenting the blood vessels in the blood vessel image, such as the fundus image, according to the channel attention feature information to obtain a blood vessel segmentation result.
The above operations may be specifically referred to the foregoing embodiments, and are not described herein in detail.
As can be seen from the above, the computer device of this embodiment acquires a blood vessel image to be segmented, such as a fundus image; extracts features of the blood vessel image to obtain high-level feature information; performs dictionary learning on the high-level feature information based on a preset dictionary to obtain a dictionary representation corresponding to the high-level feature information; selects a plurality of channels of the high-level feature information according to the dictionary representation to obtain target feature information; fuses the target feature information with the high-level feature information to obtain channel attention feature information; and segments the blood vessels in the blood vessel image according to the channel attention feature information to obtain a blood vessel segmentation result. In this scheme, the feature channels that are more favorable for segmentation are screened from the channels of the high-level feature information based on dictionary learning to obtain the channel attention feature information, and segmentation is performed based on the channel attention feature information. This effectively enlarges the receptive field of feature extraction, so that the features retain the global information of the blood vessel image, such as the fundus image, avoiding the loss of that global information and greatly improving the segmentation accuracy for blood vessel images such as fundus images.
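As a rough illustration of the pipeline summarized above, the dictionary-based channel attention step can be sketched as follows. This is a minimal NumPy sketch only, not the patented implementation: the function name, the codebook and smoothing-factor shapes, the soft-assignment form, and the sigmoid gating are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def dictionary_channel_attention(feat, codebook, smoothing):
    """Illustrative sketch: dictionary-learning based channel attention.

    feat:      (C, H, W) high-level feature map
    codebook:  (K, C) preset dictionary of K codewords (assumed given)
    smoothing: (K,) smoothing factors for the soft-assignment weights
    """
    C, H, W = feat.shape
    x = feat.reshape(C, -1).T                      # (N, C) descriptors, N = H*W
    resid = x[:, None, :] - codebook[None, :, :]   # (N, K, C) residual information
    dist = (resid ** 2).sum(-1)                    # (N, K) squared residual norms
    logits = -smoothing[None, :] * dist
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    a = np.exp(logits)
    a /= a.sum(axis=1, keepdims=True)              # soft-assignment ("selection") weights
    enc = (a[:, :, None] * resid).sum(axis=(0, 1)) # (C,) residual coding per channel
    gate = 1.0 / (1.0 + np.exp(-enc))              # sigmoid channel-selection weights
    target = feat * gate[:, None, None]            # target (channel-selected) features
    return target + feat                           # fuse with the original features

feat = rng.standard_normal((8, 4, 4))
codebook = rng.standard_normal((16, 8))
smoothing = np.ones(16)
out = dictionary_channel_attention(feat, codebook, smoothing)
print(out.shape)  # (8, 4, 4)
```

The output keeps the input feature-map shape, so a segmentation head (for example, the decoder of a segmentation network) could consume it directly.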
Those of ordinary skill in the art will appreciate that all or part of the steps of the various methods of the above embodiments may be completed by a program instructing the relevant hardware, and the program may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, embodiments of the present application provide a computer-readable storage medium having stored therein a computer program that can be loaded by a processor to perform the steps in any of the blood vessel segmentation methods provided by the embodiments of the present application. For example, the computer program may perform the following steps:
acquiring a blood vessel image to be segmented, such as a fundus image; extracting features of the blood vessel image, such as the fundus image, to obtain high-level feature information; performing dictionary learning on the high-level feature information based on a preset dictionary to obtain a dictionary representation corresponding to the high-level feature information; selecting a plurality of channels of the high-level feature information according to the dictionary representation to obtain target feature information; fusing the target feature information with the high-level feature information to obtain channel attention feature information; and segmenting the blood vessels in the blood vessel image, such as the fundus image, according to the channel attention feature information to obtain a blood vessel segmentation result.
The specific implementation of each operation above may be referred to the previous embodiments, and will not be described herein.
The computer-readable storage medium may include a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.
Because the instructions stored in the computer readable storage medium may execute the steps in any of the blood vessel segmentation methods provided in the embodiments of the present application, the beneficial effects that any of the blood vessel segmentation methods provided in the embodiments of the present application may be achieved are detailed in the previous embodiments, and are not described herein.
The foregoing has described in detail the blood vessel segmentation method, apparatus, computer device, and computer-readable storage medium for blood vessel images and fundus images provided in the embodiments of the present application. Specific examples have been applied herein to illustrate the principles and embodiments of the present invention, and the above description of the embodiments is only intended to aid understanding of the method and core idea of the present invention. Meanwhile, those skilled in the art may make variations to the specific embodiments and the application scope in light of the ideas of the present invention; in summary, the content of this description should not be construed as limiting the present invention.

Claims (16)

1. A method of segmenting a blood vessel image, comprising:
acquiring a blood vessel image to be segmented;
extracting the characteristics of the blood vessel image to obtain high-level characteristic information;
performing dictionary learning on the high-level characteristic information based on a preset dictionary to obtain dictionary representation corresponding to the high-level characteristic information;
selecting a plurality of channels of the high-level characteristic information according to the dictionary representation to obtain target characteristic information, wherein the method comprises the following steps: acquiring residual information between the dictionary representation and the high-level characteristic information; coding the residual information to obtain residual coding information of high-level characteristic information; selecting a plurality of channels of the high-level characteristic information according to the residual error coding information to obtain target characteristic information;
fusing the target characteristic information and the high-level characteristic information to obtain channel attention characteristic information;
and segmenting the blood vessels in the blood vessel image according to the channel attention characteristic information to obtain a blood vessel segmentation result.
2. The segmentation method of a blood vessel image according to claim 1, wherein encoding the residual information to obtain the residual coding information of the high-level characteristic information comprises:
performing a regularization operation on the residual information according to a smoothing factor to obtain a selection weight of the residual information;
and weighting the residual information according to the selection weight of the residual information to obtain the residual coding of the high-level characteristic information.
3. The method for segmenting a blood vessel image according to claim 1, wherein selecting a plurality of channels of the high-level feature information according to the residual coding information to obtain target feature information comprises:
activating the residual coding information based on an activation function;
and selecting a plurality of channels of the high-level characteristic information according to the activated residual error coding information to obtain target characteristic information.
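Read together, claims 1-3 describe a residual-encoding attention step of the same general form as soft-assignment dictionary encoding. Under that reading, and with the caveat that the symbols below (such as the smoothing factor $s_k$ and the activation $\sigma$) are notational assumptions rather than quantities named by the patent, the computation can be written as:

```latex
% x_i: i-th descriptor of the high-level characteristic information,
% d_k: k-th codeword of the preset dictionary,
% r_{ik} = x_i - d_k: the residual information of claim 1.
% Selection weights regularized by smoothing factors s_k (claim 2):
a_{ik} = \frac{\exp\!\left(-s_k \lVert r_{ik} \rVert^2\right)}
              {\sum_{j=1}^{K} \exp\!\left(-s_j \lVert r_{ij} \rVert^2\right)}
% Residual coding of the high-level characteristic information (claim 2):
e_k = \sum_{i=1}^{N} a_{ik}\, r_{ik}
% Channel selection via an activation function \sigma, e.g. a sigmoid (claim 3):
w = \sigma\!\left(\textstyle\sum_{k=1}^{K} e_k\right), \qquad
\text{target characteristic information} = w \odot \text{high-level characteristic information}
```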
4. The segmentation method of a blood vessel image according to claim 1, wherein segmenting the blood vessels in the blood vessel image according to the channel attention characteristic information to obtain a blood vessel segmentation result comprises:
segmenting the blood vessels in the blood vessel image according to the channel attention characteristic information to obtain a blood vessel segmentation image;
and classifying the blood vessels in the blood vessel image according to the channel attention characteristic information to obtain a blood vessel classification image.
5. The segmentation method of a blood vessel image as set forth in claim 1, wherein acquiring the blood vessel image to be segmented comprises:
dividing an original blood vessel image into a plurality of blood vessel image blocks;
taking the blood vessel image block as a blood vessel image to be segmented;
the method further comprises the steps of: and aggregating the blood vessel segmentation results of the blood vessel image blocks to obtain blood vessel segmentation results of the original blood vessel image.
6. The segmentation method of a blood vessel image as set forth in claim 1, wherein segmenting a blood vessel in the blood vessel image according to the channel attention feature information comprises:
segmenting the blood vessels in the blood vessel image according to the channel attention characteristic information by using a blood vessel segmentation network in a trained generative adversarial network.
7. The method of segmentation of a blood vessel image as set forth in claim 6, further comprising:
acquiring a target domain blood vessel image and a source domain blood vessel image marked with blood vessel information;
the method comprises the steps of adopting a preset generation countermeasure network to perform feature extraction on a target domain blood vessel image and a source domain blood vessel image respectively to obtain source domain high-level feature information and source domain medium-level feature information of the source domain blood vessel image, and target domain high-level feature information and target domain medium-level feature information of the target domain blood vessel image;
performing blood vessel segmentation on the source domain blood vessel image based on a blood vessel segmentation network and the source domain high-level characteristic information, to obtain a blood vessel segmentation prediction result;
discriminating the domain origin of the source domain mid-level characteristic information and the target domain mid-level characteristic information by using a preset discrimination network in the generative adversarial network, to obtain a discrimination prediction result;
and training the generative adversarial network according to the blood vessel segmentation prediction result and the discrimination prediction result, to obtain a trained generative adversarial network.
8. The method of segmenting a blood vessel image according to claim 7, wherein training the generative adversarial network according to the blood vessel segmentation prediction result and the discrimination prediction result to obtain the trained generative adversarial network comprises:
acquiring a blood vessel segmentation loss according to the blood vessel segmentation prediction result and a blood vessel labeling result of the source domain blood vessel image;
acquiring a discrimination loss of the discrimination network according to the true domain origin of the source domain mid-level characteristic information and the target domain mid-level characteristic information, and the discrimination prediction result;
and training the generative adversarial network according to the discrimination loss and the blood vessel segmentation loss, to obtain the trained generative adversarial network.
9. The method of segmenting a blood vessel image according to claim 8, wherein training the generative adversarial network according to the discrimination loss and the blood vessel segmentation loss to obtain the trained generative adversarial network comprises:
constructing a minimization adversarial loss of the generative adversarial network according to the discrimination loss and the blood vessel segmentation loss;
constructing a maximization adversarial loss of the generative adversarial network according to the discrimination loss;
and iteratively training the generative adversarial network based on the minimization adversarial loss and the maximization adversarial loss, to obtain the trained generative adversarial network.
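The two objectives in claim 9 can be sketched as scalar loss computations. This is a hedged illustration only: the weighting factor `lam`, the source/target domain labels 1/0, and the binary cross-entropy form of the discrimination loss are assumptions, and in a real system both losses would be computed over network outputs and optimized alternately.

```python
import numpy as np

def bce(pred, label):
    """Binary cross-entropy on discriminator probabilities."""
    pred = np.clip(pred, 1e-7, 1 - 1e-7)
    return float(-(label * np.log(pred) + (1 - label) * np.log(1 - pred)).mean())

def adversarial_losses(seg_loss, d_on_source, d_on_target, lam=0.01):
    """Segmentation side minimizes: the blood vessel segmentation loss plus a
    term pushing the discriminator to label target-domain features as source (1).
    Discriminator side maximizes its classification objective, written here as
    a negated BCE so that both returned values are to be minimized/tracked."""
    min_loss = seg_loss + lam * bce(d_on_target, 1.0)            # minimization adversarial loss
    max_loss = -(bce(d_on_source, 1.0) + bce(d_on_target, 0.0))  # maximization adversarial loss (negated)
    return min_loss, max_loss

# one illustrative step: discriminator is fairly sure about both domains
min_l, max_l = adversarial_losses(0.5, np.array([0.9]), np.array([0.2]))
```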
10. The segmentation method of a blood vessel image according to claim 8, wherein the discrimination network comprises a feature fusion module and a discrimination sub-network;
discriminating the domain origin of the source domain mid-level characteristic information and the target domain mid-level characteristic information by using the preset discrimination network in the generative adversarial network to obtain the discrimination prediction result comprises:
performing feature extraction on the source domain mid-level characteristic information and the target domain mid-level characteristic information respectively by using the feature fusion module, to obtain characteristic information of blood vessel categories in the target domain blood vessel image and characteristic information of the blood vessel categories in the source domain blood vessel image;
fusing, by using the feature fusion module, the characteristic information of the blood vessel categories in the target domain blood vessel image with the target domain mid-level characteristic information, to obtain first fused characteristic information corresponding to the target domain blood vessel image;
fusing, by using the feature fusion module, the characteristic information of the blood vessel categories in the source domain blood vessel image with the source domain mid-level characteristic information, to obtain second fused characteristic information corresponding to the source domain blood vessel image;
and discriminating the domain origin of the first fused characteristic information and the second fused characteristic information by using the discrimination sub-network, to obtain the discrimination prediction result.
11. The method for segmenting a blood vessel image according to claim 10, wherein fusing, by using the feature fusion module, the characteristic information of the blood vessel categories in the target domain blood vessel image with the target domain mid-level characteristic information to obtain the first fused characteristic information corresponding to the target domain blood vessel image comprises:
fusing the characteristic information of the blood vessel categories in the target domain blood vessel image with one another by using the feature fusion module, to obtain multi-mode characteristic information containing the blood vessel category information;
and fusing the multi-mode characteristic information with the target domain mid-level characteristic information by using the feature fusion module, to obtain the first fused characteristic information corresponding to the target domain blood vessel image.
12. The segmentation method of a blood vessel image according to claim 8, wherein performing blood vessel segmentation on the source domain blood vessel image based on the blood vessel segmentation network and the source domain high-level characteristic information to obtain the blood vessel segmentation prediction result comprises:
performing dictionary learning on the source domain high-level characteristic information of the source domain blood vessel image by using the blood vessel segmentation network, to obtain a sample dictionary corresponding to the source domain high-level characteristic information;
selecting a plurality of channels of the source domain high-level characteristic information according to the sample dictionary by using the blood vessel segmentation network, to obtain source domain channel attention characteristic information;
and performing blood vessel segmentation on the source domain blood vessel image according to the source domain channel attention characteristic information by using the blood vessel segmentation network, to obtain the blood vessel segmentation prediction result of the source domain blood vessel image.
13. A blood vessel segmentation method of a fundus image, comprising:
acquiring a fundus image to be segmented;
extracting features of the fundus image to obtain high-level feature information;
performing dictionary learning on the high-level characteristic information based on a preset dictionary to obtain dictionary representation corresponding to the high-level characteristic information;
selecting a plurality of channels of the high-level characteristic information according to the dictionary representation to obtain target characteristic information, wherein the method comprises the following steps: acquiring residual information between the dictionary representation and the high-level characteristic information; coding the residual information to obtain residual coding information of high-level characteristic information; selecting a plurality of channels of the high-level characteristic information according to the residual error coding information to obtain target characteristic information;
fusing the target characteristic information with the high-level characteristic information to obtain channel attention characteristic information;
and segmenting the blood vessels in the fundus image according to the channel attention characteristic information to obtain a blood vessel segmentation result.
14. A segmentation apparatus for a blood vessel image, comprising:
an acquisition unit configured to acquire a blood vessel image to be segmented;
the feature extraction unit is used for extracting features of the blood vessel image to obtain high-level feature information;
the learning unit is used for carrying out dictionary learning on the high-level characteristic information based on a preset dictionary to obtain dictionary representation corresponding to the high-level characteristic information;
the selecting unit is configured to select, according to the dictionary representation, a plurality of channels of the high-level feature information to obtain target feature information, and includes: acquiring residual information between the dictionary representation and the high-level characteristic information; coding the residual information to obtain residual coding information of high-level characteristic information; selecting a plurality of channels of the high-level characteristic information according to the residual error coding information to obtain target characteristic information;
the fusion unit is used for fusing the target characteristic information with the high-level characteristic information to obtain channel attention characteristic information;
and the segmentation unit is used for segmenting the blood vessels in the blood vessel image according to the channel attention characteristic information to obtain a blood vessel segmentation result.
15. A computer readable storage medium having stored thereon a computer program, wherein the computer program when executed by a processor realizes the steps of the method according to any of claims 1-13.
16. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the method of any of claims 1-13 when the program is executed.
CN201910690945.3A 2019-07-29 2019-07-29 Segmentation method, device and equipment for blood vessel and fundus image and readable storage medium Active CN110443813B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910690945.3A CN110443813B (en) 2019-07-29 2019-07-29 Segmentation method, device and equipment for blood vessel and fundus image and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910690945.3A CN110443813B (en) 2019-07-29 2019-07-29 Segmentation method, device and equipment for blood vessel and fundus image and readable storage medium

Publications (2)

Publication Number Publication Date
CN110443813A CN110443813A (en) 2019-11-12
CN110443813B true CN110443813B (en) 2024-02-27

Family

ID=68432105

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910690945.3A Active CN110443813B (en) 2019-07-29 2019-07-29 Segmentation method, device and equipment for blood vessel and fundus image and readable storage medium

Country Status (1)

Country Link
CN (1) CN110443813B (en)

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110969632B (en) * 2019-11-28 2020-09-08 北京推想科技有限公司 Deep learning model training method, image processing method and device
CN111161278B (en) * 2019-12-12 2023-04-18 西安交通大学 Deep network aggregation-based fundus image focus segmentation method
CN111161240B (en) * 2019-12-27 2024-03-05 上海联影智能医疗科技有限公司 Blood vessel classification method, apparatus, computer device, and readable storage medium
CN111096767A (en) * 2020-01-08 2020-05-05 南京市第一医院 Deep learning-based mediastinal lymph node ultrasound elastic image segmentation and classification method
CN111445428B (en) * 2020-03-04 2023-05-16 清华大学深圳国际研究生院 Method and device for enhancing biological living body blood vessel imaging data based on unsupervised learning
CN111753825A (en) * 2020-03-27 2020-10-09 北京京东尚科信息技术有限公司 Image description generation method, device, system, medium and electronic equipment
US11282193B2 (en) * 2020-03-31 2022-03-22 Ping An Technology (Shenzhen) Co., Ltd. Systems and methods for tumor characterization
CN113538604B (en) * 2020-04-21 2024-03-19 中移(成都)信息通信科技有限公司 Image generation method, device, equipment and medium
CN111915571A (en) * 2020-07-10 2020-11-10 云南电网有限责任公司带电作业分公司 Image change detection method, device, storage medium and equipment fusing residual error network and U-Net network
CN111951280B (en) * 2020-08-10 2022-03-15 中国科学院深圳先进技术研究院 Image segmentation method, device, equipment and storage medium
CN112215285B (en) * 2020-10-13 2022-10-25 电子科技大学 Cross-media-characteristic-based automatic fundus image labeling method
CN112634279B (en) * 2020-12-02 2023-04-07 四川大学华西医院 Medical image semantic segmentation method based on attention Unet model
CN112890764B (en) * 2021-01-18 2022-12-13 哈尔滨工业大学 Unmanned low-cost portable eye ground disease detection system
CN113129309B (en) * 2021-03-04 2023-04-07 同济大学 Medical image semi-supervised segmentation system based on object context consistency constraint
CN113706440A (en) * 2021-03-12 2021-11-26 腾讯科技(深圳)有限公司 Image processing method, image processing device, computer equipment and storage medium
CN113240021B (en) * 2021-05-19 2021-12-10 推想医疗科技股份有限公司 Method, device and equipment for screening target sample and storage medium
CN115409764B (en) * 2021-05-28 2024-01-09 南京博视医疗科技有限公司 Multi-mode fundus blood vessel segmentation method and device based on domain self-adaption
CN113344893A (en) * 2021-06-23 2021-09-03 依未科技(北京)有限公司 High-precision fundus arteriovenous identification method, device, medium and equipment
CN113902757B (en) * 2021-10-09 2022-09-02 天津大学 Blood vessel segmentation method based on self-attention mechanism and convolution neural network hybrid model
CN113989215B (en) * 2021-10-25 2022-12-06 北京百度网讯科技有限公司 Image processing method, device, equipment and storage medium
CN114638964A (en) * 2022-03-07 2022-06-17 厦门大学 Cross-domain three-dimensional point cloud segmentation method based on deep learning and storage medium
CN114913592B (en) * 2022-05-18 2024-06-21 重庆邮电大学 Fundus image classification method based on convolutional neural network
CN114862878B (en) * 2022-05-30 2023-06-06 北京百度网讯科技有限公司 Image segmentation model generation method and device, and image segmentation method and device
CN115984952B (en) * 2023-03-20 2023-11-24 杭州叶蓁科技有限公司 Eye movement tracking system and method based on bulbar conjunctiva blood vessel image recognition
CN116993762B (en) * 2023-09-26 2024-01-19 腾讯科技(深圳)有限公司 Image segmentation method, device, electronic equipment and storage medium

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104298974A (en) * 2014-10-10 2015-01-21 北京工业大学 Human body behavior recognition method based on depth video sequence
CN104361363A (en) * 2014-11-25 2015-02-18 中国科学院自动化研究所 Deep deconvolution feature learning network, generating method thereof and image classifying method
CN105224942A (en) * 2015-07-09 2016-01-06 华南农业大学 A kind of RGB-D image classification method and system
CN107506761A (en) * 2017-08-30 2017-12-22 山东大学 Brain image dividing method and system based on notable inquiry learning convolutional neural networks
CN108304765A (en) * 2017-12-11 2018-07-20 中国科学院自动化研究所 Multitask detection device for face key point location and semantic segmentation
CN109118495A (en) * 2018-08-01 2019-01-01 沈阳东软医疗系统有限公司 A kind of Segmentation Method of Retinal Blood Vessels and device
CN109448006A (en) * 2018-11-01 2019-03-08 江西理工大学 A kind of U-shaped intensive connection Segmentation Method of Retinal Blood Vessels of attention mechanism
CN109584167A (en) * 2018-10-24 2019-04-05 深圳市旭东数字医学影像技术有限公司 Blood vessel enhancing and dividing method and system in CT image liver based on second order feature
CN109685813A (en) * 2018-12-27 2019-04-26 江西理工大学 A kind of U-shaped Segmentation Method of Retinal Blood Vessels of adaptive scale information
CN109801293A (en) * 2019-01-08 2019-05-24 平安科技(深圳)有限公司 Remote Sensing Image Segmentation, device and storage medium, server
CN109872364A (en) * 2019-01-28 2019-06-11 腾讯科技(深圳)有限公司 Image-region localization method, device, storage medium and medical image processing equipment
CN109872306A (en) * 2019-01-28 2019-06-11 腾讯科技(深圳)有限公司 Medical image cutting method, device and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014152919A1 (en) * 2013-03-14 2014-09-25 Arizona Board Of Regents, A Body Corporate Of The State Of Arizona For And On Behalf Of Arizona State University Kernel sparse models for automated tumor segmentation

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104298974A (en) * 2014-10-10 2015-01-21 北京工业大学 Human body behavior recognition method based on depth video sequence
CN104361363A (en) * 2014-11-25 2015-02-18 中国科学院自动化研究所 Deep deconvolution feature learning network, generating method thereof and image classifying method
CN105224942A (en) * 2015-07-09 2016-01-06 华南农业大学 A kind of RGB-D image classification method and system
CN107506761A (en) * 2017-08-30 2017-12-22 山东大学 Brain image dividing method and system based on notable inquiry learning convolutional neural networks
CN108304765A (en) * 2017-12-11 2018-07-20 中国科学院自动化研究所 Multitask detection device for face key point location and semantic segmentation
CN109118495A (en) * 2018-08-01 2019-01-01 沈阳东软医疗系统有限公司 A kind of Segmentation Method of Retinal Blood Vessels and device
CN109584167A (en) * 2018-10-24 2019-04-05 深圳市旭东数字医学影像技术有限公司 Blood vessel enhancing and dividing method and system in CT image liver based on second order feature
CN109448006A (en) * 2018-11-01 2019-03-08 江西理工大学 A kind of U-shaped intensive connection Segmentation Method of Retinal Blood Vessels of attention mechanism
CN109685813A (en) * 2018-12-27 2019-04-26 江西理工大学 A kind of U-shaped Segmentation Method of Retinal Blood Vessels of adaptive scale information
CN109801293A (en) * 2019-01-08 2019-05-24 平安科技(深圳)有限公司 Remote Sensing Image Segmentation, device and storage medium, server
CN109872364A (en) * 2019-01-28 2019-06-11 腾讯科技(深圳)有限公司 Image-region localization method, device, storage medium and medical image processing equipment
CN109872306A (en) * 2019-01-28 2019-06-11 腾讯科技(深圳)有限公司 Medical image cutting method, device and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"Deep Supervision Adversarial Learning Network for Retinal Vessel Segmentation";Yuhan Dong et al;《2019 12th International Congress on Image and Signal Processing》;全文 *
"Pyramid Attention Network for Semantic Segmentation";Hanchao LI et al;《arXive》;全文 *
"基于双字典学习的眼底图像血管分割";杨艳等;《光电子.激光》;全文 *

Also Published As

Publication number Publication date
CN110443813A (en) 2019-11-12

Similar Documents

Publication Publication Date Title
CN110443813B (en) Segmentation method, device and equipment for blood vessel and fundus image and readable storage medium
Costa et al. End-to-end adversarial retinal image synthesis
CN111199550B (en) Training method, segmentation method, device and storage medium of image segmentation network
US10706333B2 (en) Medical image analysis method, medical image analysis system and storage medium
CN111340819B (en) Image segmentation method, device and storage medium
Zuo et al. R2AU‐Net: attention recurrent residual convolutional neural network for multimodal medical image segmentation
CN109919928B (en) Medical image detection method and device and storage medium
EP3730040A1 (en) Method and apparatus for assisting in diagnosis of cardiovascular disease
Wang et al. Automated interpretation of congenital heart disease from multi-view echocardiograms
Tian et al. Multi-path convolutional neural network in fundus segmentation of blood vessels
CN110309849A (en) Blood-vessel image processing method, device, equipment and storage medium
JP7250166B2 (en) Image segmentation method and device, image segmentation model training method and device
Xie et al. Optic disc and cup image segmentation utilizing contour-based transformation and sequence labeling networks
Abbas et al. Machine learning methods for diagnosis of eye-related diseases: a systematic review study based on ophthalmic imaging modalities
CN113469981B (en) Image processing method, device and storage medium
Uddin et al. Machine learning based diabetes detection model for false negative reduction
Tsivgoulis et al. An improved SqueezeNet model for the diagnosis of lung cancer in CT scans
CN113129316A (en) Heart MRI image multi-task segmentation method based on multi-mode complementary information exploration
Zhang et al. TiM‐Net: Transformer in M‐Net for Retinal Vessel Segmentation
Zhang et al. Attention-guided feature extraction and multiscale feature fusion 3d resnet for automated pulmonary nodule detection
Lin et al. Blu-gan: Bi-directional convlstm u-net with generative adversarial training for retinal vessel segmentation
Tariq et al. Diabetic retinopathy detection using transfer and reinforcement learning with effective image preprocessing and data augmentation techniques
CN117373138A (en) Cross-modal living fusion detection method and device, storage medium and computer equipment
CN113705595A (en) Method, device and storage medium for predicting degree of abnormal cell metastasis
CN116092667A (en) Disease detection method, system, device and storage medium based on multi-mode images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant