CN112001928A - Retinal vessel segmentation method and system - Google Patents

Retinal vessel segmentation method and system

Info

Publication number
CN112001928A
CN112001928A
Authority
CN
China
Legal status
Granted
Application number
CN202010688015.7A
Other languages
Chinese (zh)
Other versions
CN112001928B (en)
Inventor
李瑞瑞 (Li Ruirui)
李明鸣 (Li Mingming)
Current Assignee
Beijing University of Chemical Technology
Original Assignee
Beijing University of Chemical Technology
Priority date
Application filed by Beijing University of Chemical Technology
Priority to CN202010688015.7A
Publication of CN112001928A
Application granted
Publication of CN112001928B
Status: Active

Classifications

    • G06T7/11 — Image analysis; segmentation; region-based segmentation
    • G06N3/045 — Neural networks; architecture; combinations of networks
    • G06T2207/30041 — Biomedical image processing; eye; retina; ophthalmic
    • G06T2207/30101 — Biomedical image processing; blood vessel; artery; vein; vascular
    • Y02T10/40 — Engine management systems (climate-change mitigation tagging)


Abstract

The invention discloses a retinal vessel segmentation method comprising the following steps: preprocessing a retinal fundus image to be processed to obtain a first image; performing skeleton extraction on the first image through a first UNet network to obtain a second image; and merging the first image and the second image to obtain a third image, then performing vessel segmentation on the third image through a second UNet network to obtain a retinal vessel segmentation result. The invention also discloses a retinal vessel segmentation system. Beneficial effects of the invention: by using skeleton information to assist vessel segmentation, a complete vessel topological structure can be extracted.

Description

Retinal vessel segmentation method and system
Technical Field
The invention relates to the technical field of medical image processing, in particular to a retinal blood vessel segmentation method and a retinal blood vessel segmentation system.
Background
In the related art, retinal blood vessels are mainly segmented end to end by a fully convolutional network. However, the segmented vessels exhibit a large number of breaks and missing segments, so the complete topological structure cannot be maintained, and training is unstable because the various vessel features differ in how difficult they are to learn.
Disclosure of Invention
In order to solve the above problems, an object of the present invention is to provide a retinal vessel segmentation method and system which use skeleton information to assist vessel segmentation and can thereby extract a complete vessel topological structure.
The invention provides a retinal vessel segmentation method, which comprises the following steps:
preprocessing a retina fundus image to be processed to obtain a first image;
performing skeleton extraction on the first image through a first UNet network to obtain a second image;
and merging the first image and the second image to obtain a third image, and performing blood vessel segmentation processing on the third image through a second UNet network to obtain a retinal blood vessel segmentation result.
As a further improvement of the present invention, the method further comprises: training the first UNet network and the second UNet network with a training set;
wherein the training set comprises retinal fundus images, retinal blood vessel labeling images, and retinal blood vessel skeleton labeling images.
As a further improvement of the present invention, an original data set includes the retinal fundus images and the corresponding retinal blood vessel labeling images;
a retinal blood vessel skeleton labeling image is extracted from each retinal blood vessel labeling image in the original data set;
data augmentation is performed on each extracted retinal blood vessel skeleton labeling image and on each retinal fundus image and retinal blood vessel labeling image in the original data set;
and pixel-value standardization is performed on all augmented retinal fundus images, retinal blood vessel labeling images, and retinal blood vessel skeleton labeling images to obtain the retinal fundus images, retinal blood vessel labeling images, and retinal blood vessel skeleton labeling images of the training set.
As a further improvement of the present invention, the extracting of the retinal blood vessel skeleton image from each retinal blood vessel labeled image in the original data set includes:
respectively carrying out binarization processing on each retinal vessel labeled image in the original data set to obtain each binary image;
and extracting the blood vessel center line of each binary image, and taking the extracted blood vessel center line binary image as a retina blood vessel skeleton labeling image.
As a further improvement of the present invention, training the first UNet network and the second UNet network through a training set includes:
using the retinal fundus image in the training set as an input image of the first UNet network;
and combining the output image of the last layer of the first UNet network and the input image of the first UNet network to be used as the input image of the second UNet network.
As a further improvement of the present invention, training the first UNet network and the second UNet network through a training set includes:
the output image of the second-to-last layer of the first UNet network is up-sampled, and a first loss function is used for the up-sampled image and the retinal blood vessel skeleton labeling image corresponding to the retinal fundus image;
using a second loss function for the output image of the last layer of the first UNet network and the retinal blood vessel skeleton labeling image corresponding to the retinal fundus image;
using a third loss function for the output image of the last layer of the second UNet network and the retinal blood vessel labeling image corresponding to the retinal fundus image;
wherein the first loss function and the second loss function are not the same.
As a further improvement of the present invention, the first loss function is a weighted cross-entropy loss function, the second loss function is a standard cross-entropy loss function, and the third loss function is a standard cross-entropy loss function.
As a further improvement of the present invention, training the first UNet network and the second UNet network through a training set includes:
adding the first loss function, the second loss function and the third loss function to obtain a target loss function;
determining a minimum value of the target loss function;
performing parameter optimization on the loss of the first UNet network and the second UNet network based on the minimum value of the target loss function.
As a further refinement of the invention, determining the minimum value of the target loss function comprises:
calculating a gradient of the target loss function by adopting back propagation;
and determining the minimum value of the target loss function by adopting a random gradient descent algorithm.
As a further improvement of the present invention, the last layer of the first UNet network and the last layer of the second UNet network use a sigmoid activation function;
the first UNet network and the second UNet network adopt normal distribution initialization parameters.
The present invention also provides a retinal vessel segmentation system, the system comprising:
the pre-processing module is used for pre-processing the retinal fundus image to be processed to obtain a first image;
the skeleton extraction module is used for carrying out skeleton extraction on the first image through a first UNet network to obtain a second image;
and the blood vessel segmentation module is used for merging the first image and the second image to obtain a third image, and performing blood vessel segmentation processing on the third image through a second UNet network to obtain a retina blood vessel segmentation result.
As a further improvement of the present invention, the system further comprises:
a training module to train the first UNet network and the second UNet network through a training set;
wherein the training set comprises retinal fundus images, retinal blood vessel labeling images, and retinal blood vessel skeleton labeling images.
As a further improvement of the present invention, an original data set includes the retinal fundus images and the corresponding retinal blood vessel labeling images;
a retinal blood vessel skeleton labeling image is extracted from each retinal blood vessel labeling image in the original data set;
data augmentation is performed on each extracted retinal blood vessel skeleton labeling image and on each retinal fundus image and retinal blood vessel labeling image in the original data set;
and pixel-value standardization is performed on all augmented retinal fundus images, retinal blood vessel labeling images, and retinal blood vessel skeleton labeling images to obtain the retinal fundus images, retinal blood vessel labeling images, and retinal blood vessel skeleton labeling images of the training set.
As a further improvement of the present invention, the extracting of the retinal blood vessel skeleton image from each retinal blood vessel labeled image in the original data set includes:
respectively carrying out binarization processing on each retinal vessel labeled image in the original data set to obtain each binary image;
and extracting the blood vessel center line of each binary image, and taking the extracted blood vessel center line binary image as a retina blood vessel skeleton labeling image.
As a further refinement of the invention, the training module is configured to:
using the retinal fundus image in the training set as an input image of the first UNet network;
and combining the output image of the last layer of the first UNet network and the input image of the first UNet network to be used as the input image of the second UNet network.
As a further refinement of the invention, the training module is configured to:
the output image of the second-to-last layer of the first UNet network is up-sampled, and a first loss function is used for the up-sampled image and the retinal blood vessel skeleton labeling image corresponding to the retinal fundus image;
using a second loss function for the output image of the last layer of the first UNet network and the retinal blood vessel skeleton labeling image corresponding to the retinal fundus image;
using a third loss function for the output image of the last layer of the second UNet network and the retinal blood vessel labeling image corresponding to the retinal fundus image;
wherein the first loss function and the second loss function are not the same.
As a further improvement of the present invention, the first loss function is a weighted cross-entropy loss function, the second loss function is a standard cross-entropy loss function, and the third loss function is a standard cross-entropy loss function.
As a further refinement of the invention, the training module is configured to:
adding the first loss function, the second loss function and the third loss function to obtain a target loss function;
determining a minimum value of the target loss function;
performing parameter optimization on the loss of the first UNet network and the second UNet network based on the minimum value of the target loss function.
As a further refinement of the invention, determining the minimum value of the target loss function comprises:
calculating a gradient of the target loss function by adopting back propagation;
and determining the minimum value of the target loss function by adopting a random gradient descent algorithm.
As a further improvement of the present invention, the last layer of the first UNet network and the last layer of the second UNet network use a sigmoid activation function;
the first UNet network and the second UNet network adopt normal distribution initialization parameters.
The invention has the beneficial effects that:
the retinal vessel segmentation is divided into two parts of skeleton extraction and vessel segmentation, which are respectively realized through a coding and decoding network. The retinal vessel is divided into two stages for study, wherein the characteristics difficult to learn and the characteristics easy to learn are learned separately, so that the study condition of each stage can be observed in real time, and the good fitting effect on the samples easy to learn is reduced in the process of learning the samples difficult to learn.
The skeleton extraction network and the blood vessel segmentation network adopt a deep convolutional network (UNet network) with powerful feature extraction, a large amount of data can be fitted, and relevant information is extracted from the data.
The framework extraction network adopts a deep supervision method, and in the framework extraction process, different loss functions are adopted for the outputs of different decoding layers, so that the vessel frameworks with different scales can be learned by different weights, the network training can be more stable, and the framework extraction is favorably completed. On the basis of extracting the skeleton, the skeleton information is further utilized to assist the blood vessel segmentation, the blood vessel segmentation can be realized more efficiently, and the blood vessel structure obtained by segmentation is more complete.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
Fig. 1 is a schematic flow chart of a retinal vessel segmentation method according to an exemplary embodiment of the present invention;
FIG. 2 is a diagram of a network framework for a retinal vessel segmentation method according to an exemplary embodiment of the present invention;
FIG. 3 is a schematic diagram of an extracted retinal vascular skeleton according to an exemplary embodiment of the present invention;
FIG. 4 is a diagram illustrating retinal vessel segmentation results according to an exemplary embodiment of the present invention;
fig. 5 is a schematic network training flow diagram of a retinal vessel segmentation method according to an exemplary embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that, if directional indications (such as up, down, left, right, front, and back) are involved in the embodiments of the present invention, they are only used to explain the relative positional relationship, movement, and the like of components in a specific posture (as shown in the drawings); if the specific posture changes, the directional indication changes accordingly.
In addition, the terms used in the description of the present invention are for illustrative purposes only and are not intended to limit its scope. The terms "comprises" and/or "comprising" specify the presence of stated elements, steps, operations, and/or components, but do not preclude the presence or addition of one or more other elements, steps, operations, and/or components. The terms "first," "second," and the like may describe various elements but do not imply order and do not limit those elements; they are only used to distinguish one element from another. In the description of the present invention, "a plurality" means two or more unless otherwise specified. These and/or other aspects will become apparent to those of ordinary skill in the art from the following drawings and the description of the embodiments. The drawings are only for purposes of illustrating the described embodiments. One skilled in the art will readily recognize that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles described in the present application.
As shown in fig. 1, a retinal blood vessel segmentation method according to an embodiment of the present invention includes:
s1, preprocessing the retina fundus image to be processed to obtain a first image;
s2, performing skeleton extraction on the first image through a first UNet network to obtain a second image;
and S3, merging the first image and the second image to obtain a third image, and performing blood vessel segmentation processing on the third image through a second UNet network to obtain a retina blood vessel segmentation result.
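The three steps S1 to S3 can be sketched as follows; this is a minimal illustration in which the two stub functions stand in for the trained UNet networks (they are assumptions for demonstration, not the patented networks):

```python
import numpy as np

def preprocess(rgb):
    # S1: stand-in preprocessing; the text mentions resolution unification
    # and grayscale processing, here we only rescale pixel values to [0, 1]
    return rgb.astype(np.float32) / 255.0

def segment_retina(rgb, skeleton_net, vessel_net):
    first = preprocess(rgb)                           # first image (H, W, 3)
    second = skeleton_net(first)                      # S2: skeleton map (H, W, 1)
    third = np.concatenate([first, second], axis=-1)  # S3: merged 4-channel image
    return vessel_net(third)                          # vessel probability map

# placeholder "networks" so the sketch runs end to end
skeleton_net = lambda x: x.mean(axis=-1, keepdims=True)
vessel_net = lambda x: x.mean(axis=-1, keepdims=True)
result = segment_retina(np.zeros((64, 64, 3), dtype=np.uint8),
                        skeleton_net, vessel_net)
```

In a real pipeline the two stubs would be replaced by the trained skeleton extraction and vessel segmentation networks, but the data flow (preprocess, extract skeleton, merge, segment) is exactly the one described above.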
As shown in fig. 2, in the network framework of the present invention the first UNet network serves as the skeleton extraction network and the second UNet network serves as the vessel segmentation network. The preprocessing (for example, resolution unification and grayscale processing) prepares the input retinal image (an RGB image) to be processed so that it can serve as input to the first UNet network, improving segmentation efficiency. Fig. 3 and 4 show, respectively, the extracted retinal vessel skeleton (the second image) and the obtained retinal vessel segmentation result.
In the prior art, a single segmentation network generally learns various features such as vessel edges, vessel topology, and vessel widths all at once. However, features such as edges may be blurred by position, brightness, and so on, and are therefore not easily distinguished from non-vessel regions; such features are difficult to learn. Mixing these hard-to-learn features with the easy-to-learn ones hinders network learning. The method of the invention divides retinal vessel segmentation into two parts, skeleton extraction and vessel segmentation, each realized by an encoder-decoder network (a UNet network). The first UNet network learns easy-to-learn features such as the vessel topology, and its result serves as a structural prior and as input to the second UNet network to assist vessel segmentation. This staged retinal vessel segmentation method decomposes the vessel features, and skeleton extraction is less affected by erroneous labels than direct vessel segmentation, which benefits the extraction of the vessel topology.
The UNet network consists of a contracting path and an expanding path. The contracting path follows a typical convolutional network structure: repeated blocks of two 3×3 convolutions (unpadded, i.e. "valid" convolutions), each followed by a rectified linear unit (ReLU) activation, and a 2×2 max pooling operation with stride 2 for downsampling; the number of feature channels is doubled at each downsampling step. In the expanding path, each step upsamples the feature map, then applies a 2×2 up-convolution that halves the number of feature channels, then concatenates the correspondingly cropped feature map from the contracting path, and finally applies two 3×3 convolutions, each followed by a ReLU activation. In the last layer, a 1×1 convolution maps each 64-dimensional feature vector to the output layer of the network. In total, the network has 23 convolutional layers.
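As a quick check of the unpadded-convolution arithmetic above (each 3×3 valid convolution shrinks the feature map by 2 pixels, and each 2×2 stride-2 max pooling halves it), the spatial size along the contracting path can be computed. The input size 572 and the four downsampling steps are the values of the original UNet architecture, used here for illustration; they are not stated in this patent:

```python
def down_step(size):
    # two unpadded 3x3 convolutions: size - 2 - 2,
    # then 2x2 max pooling with stride 2: halve the result
    return (size - 4) // 2

size = 572          # canonical UNet input size (assumed for illustration)
sizes = [size]
for _ in range(4):  # four downsampling steps in the classic UNet
    size = down_step(size)
    sizes.append(size)
# sizes traces the feature-map side length down the contracting path
```

This kind of bookkeeping explains why the cropped contracting-path feature maps must be concatenated in the expanding path: valid convolutions make the encoder maps slightly larger than the corresponding decoder maps.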
In an alternative embodiment, the method further comprises: training the first UNet network and the second UNet network with a training set;
wherein the training set comprises: the retina fundus images, the retina blood vessel labeling images and the retina blood vessel skeleton labeling images are obtained through image analysis.
The retinal fundus images, retinal blood vessel labeling images, and retinal blood vessel skeleton labeling images are in one-to-one correspondence.
In an alternative embodiment, an original data set includes the retinal fundus images and the corresponding retinal blood vessel labeling images;
a retinal blood vessel skeleton labeling image is extracted from each retinal blood vessel labeling image in the original data set;
data augmentation is performed on each extracted retinal blood vessel skeleton labeling image and on each retinal fundus image and retinal blood vessel labeling image in the original data set;
and pixel-value standardization is performed on all augmented images to obtain the retinal fundus training images, retinal blood vessel labeling training images, and retinal blood vessel skeleton labeling training images of the training set.
The method of the invention trains each network separately to obtain optimal network parameters, so skeleton extraction and vessel segmentation can each be performed well on the retinal fundus image to be processed. The original data set includes retinal fundus images and corresponding retinal blood vessel labeling images, taken for example from the DRIVE and STARE data sets. The DRIVE data set comprises 40 images with vessel labels: 7 show early diabetic retinopathy and 33 show no diabetic retinopathy; each image has a resolution of 565 × 584 and corresponds to the manual segmentations of 2 experts. The STARE data set includes 20 images with vessel labels, 10 with lesions and 10 without; each image has a resolution of 605 × 700 and corresponds to the manual segmentations of 2 experts.
Each image is augmented by rotation, flipping, brightness changes, and the like, and the augmented images undergo pixel-value standardization, which makes network training more stable and yields an extended data set. The augmented data set is partitioned into a training set, a test set, and a validation set for training, testing, and validating the networks.
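A minimal numpy sketch of the augmentation and standardization described above; the rotation angle and the brightness factor are illustrative choices, since the text does not fix specific values:

```python
import numpy as np

def augment(img):
    # rotation, flipping, and brightness change, as named in the text
    return [
        img,
        np.rot90(img),                                  # 90-degree rotation
        np.fliplr(img),                                 # horizontal flip
        np.clip(img.astype(np.float32) * 1.2, 0, 255),  # brightness change
    ]

def standardize(img):
    # per-image pixel-value standardization (zero mean, unit variance)
    img = img.astype(np.float32)
    return (img - img.mean()) / (img.std() + 1e-8)
```

The same geometric transforms must of course be applied consistently to a fundus image and its labeling and skeleton-labeling images, so that the one-to-one correspondence of the training set is preserved.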
In an optional embodiment, the extracting a retinal blood vessel skeleton image from each retinal blood vessel labeled image in the original data set includes:
respectively carrying out binarization processing on each retinal vessel labeled image in the original data set to obtain each binary image;
and extracting the blood vessel center line of each binary image, and taking the extracted blood vessel center line binary image as a retina blood vessel skeleton labeling image.
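The two steps above can be sketched in numpy. Zhang–Suen thinning is used here as one common centerline-extraction algorithm; the patent does not name a specific thinning method, and the threshold 127 is likewise an illustrative choice:

```python
import numpy as np

def binarize(label_img, threshold=127):
    # step 1: binarization of the vessel labeling image
    return (label_img > threshold).astype(np.uint8)

def zhang_suen_thin(img):
    # step 2: iterative thinning towards the 1-pixel-wide centerline
    img = img.copy()
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_delete = []
            for y in range(1, img.shape[0] - 1):
                for x in range(1, img.shape[1] - 1):
                    if img[y, x] == 0:
                        continue
                    # neighbours P2..P9, clockwise starting from north
                    p = [img[y-1, x], img[y-1, x+1], img[y, x+1],
                         img[y+1, x+1], img[y+1, x], img[y+1, x-1],
                         img[y, x-1], img[y-1, x-1]]
                    b = sum(p)                               # neighbour count
                    a = sum(p[i] == 0 and p[(i+1) % 8] == 1  # 0 -> 1 transitions
                            for i in range(8))
                    if not (2 <= b <= 6 and a == 1):
                        continue
                    if step == 0 and p[0]*p[2]*p[4] == 0 and p[2]*p[4]*p[6] == 0:
                        to_delete.append((y, x))
                    if step == 1 and p[0]*p[2]*p[6] == 0 and p[0]*p[4]*p[6] == 0:
                        to_delete.append((y, x))
            for y, x in to_delete:
                img[y, x] = 0
                changed = True
    return img
```

The thinned result is a binary centerline image, which then serves as the retinal blood vessel skeleton labeling image.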
In the network training process, the method uses the skeleton information extracted by the first UNet network to assist the vessel segmentation of the second UNet network, so as to obtain a more complete vessel segmentation result. Because the augmented data set contains plenty of data, both networks need to be trained sufficiently so that they can work cooperatively.
In an alternative embodiment, the last layer of the first UNet network and the last layer of the second UNet network use a sigmoid activation function, and the parameters of both networks are initialized from a normal distribution.
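A small sketch of the two choices above. The standard deviation 0.05 and the kernel shape are illustrative assumptions, since the text only specifies a sigmoid output activation and normal-distribution initialization:

```python
import numpy as np

def sigmoid(x):
    # last-layer activation: maps logits to per-pixel probabilities in (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(seed=0)
# normal-distribution initialization of a hypothetical 3x3 conv kernel
# with 4 input channels and 64 output channels (shape and std assumed)
w = rng.normal(loc=0.0, scale=0.05, size=(64, 4, 3, 3))
```

The sigmoid probabilities can be thresholded (e.g. at 0.5) to obtain the binary skeleton or vessel mask.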
An alternative embodiment, training the first UNet network and the second UNet network through a training set, comprising:
using the retinal fundus image in the training set as an input image of the first UNet network;
and combining the output image of the last layer of the first UNet network and the input image of the first UNet network to be used as the input image of the second UNet network.
An alternative embodiment, training the first UNet network and the second UNet network through a training set, comprising:
the output image of the second-to-last layer of the first UNet network is up-sampled, and a first loss function is used for the up-sampled image and the retinal blood vessel skeleton labeling image corresponding to the retinal fundus image;
using a second loss function for the output image of the last layer of the first UNet network and the retinal blood vessel skeleton labeling image corresponding to the retinal fundus image;
using a third loss function for the output image of the last layer of the second UNet network and the retinal blood vessel labeling image corresponding to the retinal fundus image;
wherein the first loss function and the second loss function are not the same.
In an alternative embodiment, a weighted cross entropy loss function may be used, for example, as the first loss function between the up-sampled image and the retinal blood vessel skeleton labeling image corresponding to the retinal fundus image; a standard cross entropy loss function may be used as the second loss function between the output image of the last layer of the first UNet network and that skeleton labeling image; and a standard cross entropy loss function may be used as the third loss function between the output image of the last layer of the second UNet network and the retinal blood vessel labeling image corresponding to the retinal fundus image.
By analyzing the features of different scales in the hidden layers of the first UNet network and computing the loss of different layers with different loss functions, the method makes the learning process of the hidden layers more direct and transparent. The extracted vessel centerline binary image serves as the retinal vessel skeleton labeling image representing the vessel structure. Because thin vessels have a small width, their skeleton is usually the thin vessel itself; for thick vessels, small deviations of the centerline do not affect the characterization of the vessel structure, and predictions showing such deviations are false positive samples. To reduce the penalty on such false positive samples, the invention applies a weighted cross entropy loss function to the output of the second-to-last layer of the first UNet network. Owing to the information filtering inside the first UNet network, the second-to-last layer usually retains only the thick-vessel information and filters out the thin-vessel information; hence the weighted cross entropy used on this layer reduces the loss only for thick-vessel skeleton offsets.
An alternative embodiment, training the first UNet network and the second UNet network through a training set, comprising:
adding the first loss function, the second loss function and the third loss function to obtain a target loss function;
determining a minimum value of the target loss function;
performing parameter optimization on the loss of the first UNet network and the second UNet network based on the minimum value of the target loss function.
In an alternative embodiment, determining the minimum value of the objective loss function comprises:
calculating a gradient of the target loss function by adopting back propagation;
and determining the minimum value of the target loss function by adopting a random gradient descent algorithm.
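As a toy sketch of this optimization scheme (sum the three losses, compute the gradient, then descend toward the minimum), the code below minimizes a stand-in quadratic objective with plain gradient descent. The quadratic terms are only illustrative placeholders for the three cross-entropy losses, and the gradient is taken numerically here, whereas the actual method backpropagates through both UNet networks.

```python
import numpy as np

def target_loss(w):
    # Stand-ins for the first, second, and third loss terms; the real
    # losses are cross entropies computed on the network outputs.
    l1 = (w[0] - 1.0) ** 2
    l2 = (w[1] + 2.0) ** 2
    l3 = 0.5 * (w[0] - w[1] - 3.0) ** 2
    return l1 + l2 + l3  # target loss = sum of the three losses

def numeric_grad(f, w, h=1e-5):
    """Central-difference gradient, standing in for backpropagation."""
    g = np.zeros_like(w)
    for i in range(len(w)):
        e = np.zeros_like(w)
        e[i] = h
        g[i] = (f(w + e) - f(w - e)) / (2 * h)
    return g

w = np.zeros(2)
for _ in range(500):                        # gradient descent steps
    w -= 0.1 * numeric_grad(target_loss, w) # move against the gradient
```

Plain gradient descent is used for clarity; stochastic gradient descent differs only in that each step uses the gradient computed on a random mini-batch.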
The invention takes a retinal fundus image (three-channel RGB image) from the training set as the input of the first UNet network for forward propagation and adopts a deep supervision method: the output of the second-to-last layer of the first UNet network is upsampled, and its loss against the corresponding skeleton label is computed with weighted cross entropy, while the loss between the output of the last layer of the first UNet network and the corresponding skeleton label is computed with standard cross entropy. The output of the last layer of the first UNet network and the input of the first UNet network are then combined into four channels and used as the input of the second UNet network; after forward propagation a segmentation result is obtained, and its loss against the corresponding segmentation label is computed with standard cross entropy.
The forward propagation is given by formula (1), and the weighted cross entropy by formula (2) (when α = 1, it reduces to the standard cross entropy).
Seg=UNet2(combinate(RGB,UNet1(RGB))) (1)
Loss = -Σ_i [ y_i·log(y_i') + α·(1 - y_i)·log(1 - y_i') ]   (2)
In formula (1), UNet2 denotes the second UNet network, RGB denotes the retinal fundus image, and UNet1 denotes the first UNet network. In formula (2), y_i denotes the labeling image and y_i' denotes the output image.
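The weighted cross entropy of formula (2) can be sketched in NumPy as follows. The function name and the convention that α weights the background term (thereby reducing the penalty on false positives when α < 1) are illustrative assumptions, not the patent's exact implementation.

```python
import numpy as np

def weighted_cross_entropy(y_true, y_pred, alpha=1.0, eps=1e-7):
    """Weighted cross entropy: alpha scales the background term, so
    alpha < 1 reduces the penalty on false positives; alpha = 1
    recovers the standard cross entropy."""
    y_pred = np.clip(y_pred, eps, 1.0 - eps)  # avoid log(0)
    return -np.sum(y_true * np.log(y_pred)
                   + alpha * (1.0 - y_true) * np.log(1.0 - y_pred))

# With alpha = 1 the weighted loss equals the standard cross entropy.
y = np.array([1.0, 0.0, 1.0])          # labeling image (flattened)
p = np.array([0.9, 0.2, 0.8])          # network output (flattened)
standard = -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
```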
During network training, as shown in fig. 5, the method extracts the blood vessel skeleton from the input retinal fundus image using the skeleton extraction network, combines the extracted skeleton with the retinal image, and feeds the combination into the blood vessel segmentation network for segmentation. During training, all data in the training set are iterated over for several epochs. After each epoch, the network can be validated with the data in the validation set, the loss is computed, and the presence of overfitting is checked. After training, the network can be tested with the data in the test set, and the test results can be evaluated with metrics such as recall, precision, and F1 score to quantitatively analyze how well the network has learned.
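The evaluation metrics mentioned above (recall, precision, F1) can be computed pixelwise from binary masks, for example as follows; this is a generic sketch, not code from the patent.

```python
import numpy as np

def precision_recall_f1(pred, truth):
    """Pixelwise precision, recall, and F1 for binary vessel masks."""
    tp = np.sum((pred == 1) & (truth == 1))  # true positives
    fp = np.sum((pred == 1) & (truth == 0))  # false positives
    fn = np.sum((pred == 0) & (truth == 1))  # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

truth = np.array([[1, 1, 0, 0]])
pred  = np.array([[1, 0, 1, 0]])   # one TP, one FN, one FP
p, r, f = precision_recall_f1(pred, truth)
```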
The method of the invention divides retinal vessel segmentation into two stages, skeleton extraction and blood vessel segmentation, each implemented by an encoder-decoder network. Studying the retinal vessels in two stages lets the hard-to-learn and easy-to-learn features be learned separately, so that the learning progress of each stage can be observed in real time, and the good fit already achieved on easy samples is not degraded while hard samples are being learned. Both the skeleton extraction network and the blood vessel segmentation network adopt a deep convolutional network (UNet network) with powerful feature extraction, which can fit a large amount of data and extract the relevant information from it. The skeleton extraction network adopts a deep supervision method: during skeleton extraction, different loss functions are applied to the outputs of different decoding layers, so that vessel skeletons at different scales are learned with different weights, which makes network training more stable and benefits the skeleton extraction. On the basis of the extracted skeleton, the skeleton information is further used to assist the blood vessel segmentation, so that segmentation is performed more efficiently and the segmented vessel structure is more complete.
The retinal vessel segmentation system of the embodiment of the invention comprises:
the pre-processing module is used for pre-processing the retinal fundus image to be processed to obtain a first image;
the skeleton extraction module is used for carrying out skeleton extraction on the first image through a first UNet network to obtain a second image;
and the blood vessel segmentation module is used for merging the first image and the second image to obtain a third image, and performing blood vessel segmentation processing on the third image through a second UNet network to obtain a retina blood vessel segmentation result.
The network framework adopted by the system is shown in fig. 2, in which the first UNet network serves as the skeleton extraction network and the second UNet network serves as the blood vessel segmentation network. The preprocessing module preprocesses the input retinal image (RGB image) to be processed by resolution unification, grayscale processing, and the like, and the processed retinal image is then used as the input of the first UNet network, which improves segmentation efficiency. Fig. 3 and fig. 4 show the extracted retinal blood vessel skeleton (second image) and the resulting retinal blood vessel segmentation result, respectively.
In the prior art, a single segmentation network is generally used, and features such as blood vessel edges, vessel topology, and vessel width are mixed together for network learning. However, features such as edges may be blurred by position, brightness, and other factors, making them hard to distinguish from non-vessel regions and thus difficult to learn. Mixing these hard-to-learn features with easy-to-learn ones is unfavorable for network learning. The system of the invention divides retinal vessel segmentation into two stages, skeleton extraction and blood vessel segmentation, each implemented by an encoder-decoder network (UNet network). The first UNet network learns easy-to-learn features such as the vessel topology, and its result serves as a structural prior that is fed to the second UNet network to assist the blood vessel segmentation. This staged retinal vessel segmentation method decomposes the vessel features, and skeleton extraction is less affected by erroneous labeling than direct vessel segmentation, which benefits extraction of the vessel topology.
The UNet network consists of a contracting path and an expanding path. The contracting path follows a typical convolutional network structure: each step consists of two repeated 3×3 convolutions (unpadded convolutions), each followed by a rectified linear unit (ReLU) activation, and a 2×2 max pooling operation with stride 2 for downsampling; the number of feature channels is doubled at each downsampling step. In the expanding path, each step upsamples the feature map, then applies a 2×2 up-convolution that halves the number of feature channels, concatenates the correspondingly cropped feature map from the contracting path, and again applies two 3×3 convolutions, each followed by a ReLU activation. In the final layer, a 1×1 convolution maps each 64-dimensional feature vector to the output layer of the network. In total, the network has 23 convolutional layers.
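A small sketch of the channel and resolution bookkeeping described above, assuming 64 initial channels, four downsampling steps, and a square input; the unpadded 3×3 convolutions, which would also trim a few pixels per step, are ignored for simplicity.

```python
def contracting_path_shapes(in_size, in_ch=64, steps=4):
    """Track (channels, height, width) along the UNet contracting path:
    each step doubles the feature channels, and the 2x2, stride-2 max
    pooling halves the spatial resolution."""
    shapes = [(in_ch, in_size, in_size)]
    ch, size = in_ch, in_size
    for _ in range(steps):
        size //= 2      # 2x2 max pooling with stride 2
        ch *= 2         # feature channels double at each step
        shapes.append((ch, size, size))
    return shapes

shapes = contracting_path_shapes(256)
```

For a 256-pixel input, this yields 64 channels at 256×256 down to 1024 channels at 16×16, which is the doubling/halving pattern the text describes.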
In an alternative embodiment, the system further comprises:
a training module to train the first UNet network and the second UNet network through a training set;
wherein the training set comprises: the retina fundus images, the retina blood vessel labeling images and the retina blood vessel skeleton labeling images are obtained through image analysis.
Wherein, the retina fundus image, the retina blood vessel marking image and the retina blood vessel skeleton marking image have one-to-one correspondence relationship.
In an alternative embodiment, the raw data set includes respective retinal fundus images and respective retinal vessel annotation images;
respectively extracting a retinal blood vessel skeleton image from each retinal blood vessel label image in the original data set;
respectively performing data augmentation on each extracted retinal blood vessel skeleton labeling image, each retinal fundus image and each retinal blood vessel labeling image in the original data set;
and respectively carrying out pixel value standardization processing on all the expanded retina fundus images, retina blood vessel marking images and retina blood vessel skeleton marking images to obtain a plurality of fundus retina training images, a plurality of retina blood vessel marking training images and a plurality of retina blood vessel skeleton marking training images in the training set.
The system provided by the invention trains each network separately to obtain the optimal network parameters, so that skeleton extraction and blood vessel segmentation can each be performed well on the retinal fundus image to be processed. The raw data set includes the retinal fundus images and the corresponding retinal vessel labeling images, which come, for example, from the DRIVE data set and the STARE data set. The DRIVE data set comprises 40 images with vessel labels, of which 7 show early diabetic retinopathy and 33 are fundus images without diabetic retinopathy; each image has a resolution of 565 × 584 and corresponds to manual segmentations by 2 experts. The STARE data set comprises 20 images with vessel labels, 10 with lesions and 10 without; each image has a resolution of 605 × 700 and corresponds to manual segmentations by 2 experts.
Each image is augmented by rotation, flipping, brightness changes, and the like, and the augmented images are standardized in pixel value, which makes network training more stable and yields an extended data set. The augmented data set is partitioned into a training set, a test set, and a validation set for training, testing, and validating the network.
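The augmentation and standardization steps can be sketched as below; the specific parameter ranges (90-degree rotations, a 0.8 to 1.2 brightness factor) are illustrative choices, not values from the patent.

```python
import numpy as np

def augment(img, rng):
    """Randomly rotate (multiples of 90 degrees), flip, and scale the
    brightness: the augmentation operations named in the text."""
    img = np.rot90(img, k=rng.integers(4), axes=(0, 1))
    if rng.random() < 0.5:
        img = img[:, ::-1]                 # horizontal flip
    return img * rng.uniform(0.8, 1.2)     # brightness change

def standardize(img):
    """Pixel-value standardization: zero mean, unit variance."""
    return (img - img.mean()) / (img.std() + 1e-8)

rng = np.random.default_rng(0)
img = rng.random((32, 32, 3))              # stand-in fundus image
out = standardize(augment(img, rng))
```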
In an optional embodiment, the extracting a retinal blood vessel skeleton image from each retinal blood vessel labeled image in the original data set includes:
respectively carrying out binarization processing on each retinal vessel labeled image in the original data set to obtain each binary image;
and extracting the blood vessel center line of each binary image, and taking the extracted blood vessel center line binary image as a retina blood vessel skeleton labeling image.
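The two steps above can be sketched as follows. A real implementation would extract the centerline with a morphological thinning algorithm (for example `skimage.morphology.skeletonize`); the crude column-wise midpoint used here only illustrates the idea on a roughly horizontal synthetic vessel.

```python
import numpy as np

def binarize(label, thresh=128):
    """Binarize a grayscale vessel labeling image."""
    return (np.asarray(label) >= thresh).astype(np.uint8)

def crude_centerline(binary):
    """Keep the middle vessel pixel of each column: a stand-in for
    true morphological thinning, valid only for a single roughly
    horizontal vessel."""
    skel = np.zeros_like(binary)
    for col in range(binary.shape[1]):
        rows = np.flatnonzero(binary[:, col])
        if rows.size:
            skel[rows[rows.size // 2], col] = 1
    return skel

# A 3-pixel-thick horizontal "vessel" reduces to its middle row.
label = np.zeros((7, 5), dtype=np.uint8)
label[2:5, :] = 255
skel = crude_centerline(binarize(label))
```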
During network training, the system also uses the skeleton information extracted by the first UNet network to assist the blood vessel segmentation of the second UNet network, thereby obtaining a more complete segmentation result. Because the augmented data set contains abundant data, both networks can be trained sufficiently so that they work cooperatively.
In an alternative embodiment, the last layer of the first UNet network and the last layer of the second UNet network use a sigmoid activation function; the first UNet network and the second UNet network adopt normal distribution initialization parameters.
In an alternative embodiment, the training module is further configured to:
using the retinal fundus image in the training set as an input image of the first UNet network;
and combining the output image of the last layer of the first UNet network and the input image of the first UNet network to be used as the input image of the second UNet network.
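The channel merge described above can be sketched in NumPy; the channels-first layout and the image size are assumptions for illustration, and the function name mirrors the `combinate` operator of formula (1).

```python
import numpy as np

def combinate(rgb, skeleton):
    """Combine the 3-channel fundus image with the 1-channel skeleton
    output of the first UNet network into a 4-channel input for the
    second UNet network (channels-first layout assumed)."""
    return np.concatenate([rgb, skeleton[None, ...]], axis=0)

rgb = np.zeros((3, 64, 64), dtype=np.float32)   # fundus image, H = W = 64 assumed
skeleton = np.ones((64, 64), dtype=np.float32)  # first-UNet output
x = combinate(rgb, skeleton)
```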
In an alternative embodiment, the training module is further configured to:
the output image of the second last layer of the first UNet network is up-sampled, and a first loss function is used for the up-sampled image and a retinal blood vessel skeleton labeling image corresponding to the retinal fundus image;
using a second loss function for the output image of the last layer of the first UNet network and the retinal blood vessel skeleton labeling image corresponding to the retinal fundus image;
using a third loss function for the output image of the last layer of the second UNet network and the retinal blood vessel labeling image corresponding to the retinal fundus image;
wherein the first loss function and the second loss function are not the same.
In an alternative embodiment, a weighted cross-entropy loss function may be used as the first loss function for the upsampled image and the retinal blood vessel skeleton labeling image corresponding to the retinal fundus image; a standard cross-entropy loss function may be used as the second loss function for the output image of the last layer of the first UNet network and the retinal blood vessel skeleton labeling image corresponding to the retinal fundus image; and a standard cross-entropy loss function may be used as the third loss function for the output image of the last layer of the second UNet network and the retinal blood vessel labeling image corresponding to the retinal fundus image.
The system analyzes the features at different scales in the hidden layers of the first UNet network and computes the loss with a different loss function for each layer, which makes the learning process of the network's hidden layers more direct and transparent. The extracted vessel-centerline binary image serves as the retinal blood vessel skeleton labeling image and represents the structure of the vessels; because thin vessels have small width, the skeleton of a thin vessel is usually the thin vessel itself. For thick vessels, small deviations of the centerline do not affect the characterization of the vessel structure; such deviations are treated as false positive samples. To reduce the penalty on these false positive samples, the invention applies a weighted cross-entropy loss function to the output of the second-to-last layer of the first UNet network. Because information is filtered as it passes through the layers of the first UNet network, the second-to-last layer usually retains only the information of the thick vessels and filters out that of the thin vessels. Using a weighted cross-entropy loss at the second-to-last layer therefore reduces the loss only for offsets of the thick-vessel skeleton.
In an alternative embodiment, the training module is further configured to:
adding the first loss function, the second loss function and the third loss function to obtain a target loss function;
determining a minimum value of the target loss function;
performing parameter optimization on the loss of the first UNet network and the second UNet network based on the minimum value of the target loss function.
In an alternative embodiment, determining the minimum value of the objective loss function comprises:
calculating a gradient of the target loss function by adopting back propagation;
and determining the minimum value of the target loss function by adopting a random gradient descent algorithm.
The invention takes a retinal fundus image (three-channel RGB image) from the training set as the input of the first UNet network for forward propagation and adopts a deep supervision method: the output of the second-to-last layer of the first UNet network is upsampled, and its loss against the corresponding skeleton label is computed with weighted cross entropy, while the loss between the output of the last layer of the first UNet network and the corresponding skeleton label is computed with standard cross entropy. The output of the last layer of the first UNet network and the input of the first UNet network are then combined into four channels and used as the input of the second UNet network; after forward propagation a segmentation result is obtained, and its loss against the corresponding segmentation label is computed with standard cross entropy.
The forward propagation is given by formula (1), and the weighted cross entropy by formula (2) (when α = 1, it reduces to the standard cross entropy).
Seg=UNet2(combinate(RGB,UNet1(RGB))) (1)
Loss = -Σ_i [ y_i·log(y_i') + α·(1 - y_i)·log(1 - y_i') ]   (2)
In formula (1), UNet2 denotes the second UNet network, RGB denotes the retinal fundus image, and UNet1 denotes the first UNet network. In formula (2), y_i denotes the labeling image and y_i' denotes the output image.
In the system of the invention, during network training the training module extracts the blood vessel skeleton from the input retinal fundus image using the skeleton extraction network, combines the extracted skeleton with the retinal image, and feeds the combination into the blood vessel segmentation network for segmentation. During training, all data in the training set are iterated over for several epochs. After each epoch, the network can be validated with the data in the validation set, the loss is computed, and the presence of overfitting is checked. After training, the network can be tested with the data in the test set, and the test results can be evaluated with metrics such as recall, precision, and F1 score to quantitatively analyze how well the network has learned.
The system of the invention divides retinal vessel segmentation into two stages, skeleton extraction and blood vessel segmentation, each implemented by an encoder-decoder network. Studying the retinal vessels in two stages lets the hard-to-learn and easy-to-learn features be learned separately, so that the learning progress of each stage can be observed in real time, and the good fit already achieved on easy samples is not degraded while hard samples are being learned. Both the skeleton extraction network and the blood vessel segmentation network adopt a deep convolutional network (UNet network) with powerful feature extraction, which can fit a large amount of data and extract the relevant information from it. The skeleton extraction network adopts a deep supervision method: during skeleton extraction, different loss functions are applied to the outputs of different decoding layers, so that vessel skeletons at different scales are learned with different weights, which makes network training more stable and benefits the skeleton extraction. On the basis of the extracted skeleton, the skeleton information is further used to assist the blood vessel segmentation, so that segmentation is performed more efficiently and the segmented vessel structure is more complete.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Furthermore, those of ordinary skill in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
It will be understood by those skilled in the art that while the present invention has been described with reference to exemplary embodiments, various changes may be made and equivalents may be substituted for elements thereof without departing from the scope of the invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the invention without departing from the essential scope thereof. Therefore, it is intended that the invention not be limited to the particular embodiment disclosed, but that the invention will include all embodiments falling within the scope of the appended claims.

Claims (10)

1. A retinal vessel segmentation method, characterized in that the method comprises:
preprocessing a retina fundus image to be processed to obtain a first image;
performing skeleton extraction on the first image through a first UNet network to obtain a second image;
and merging the first image and the second image to obtain a third image, and performing blood vessel segmentation processing on the third image through a second UNet network to obtain a retinal blood vessel segmentation result.
2. The method of claim 1, wherein the method further comprises: training the first UNet network and the second UNet network with a training set;
wherein the training set comprises: the retina fundus images, the retina blood vessel labeling images and the retina blood vessel skeleton labeling images are obtained through image analysis.
3. The method of claim 2, wherein the raw data set includes respective retinal fundus images and respective retinal vessel annotation images;
respectively extracting a retinal blood vessel skeleton image from each retinal blood vessel label image in the original data set;
respectively performing data augmentation on each extracted retinal blood vessel skeleton labeling image, each retinal fundus image and each retinal blood vessel labeling image in the original data set;
and respectively carrying out pixel value standardization processing on all the expanded retina fundus images, retina blood vessel labeling images and retina blood vessel skeleton labeling images to obtain a plurality of fundus retina images, a plurality of retina blood vessel labeling images and a plurality of retina blood vessel skeleton labeling images in the training set.
4. The method of claim 3, wherein extracting a retinal blood vessel skeleton image for each retinal blood vessel labeling image in the raw data set comprises:
respectively carrying out binarization processing on each retinal vessel labeled image in the original data set to obtain each binary image;
and extracting the blood vessel center line of each binary image, and taking the extracted blood vessel center line binary image as a retina blood vessel skeleton labeling image.
5. The method of claim 2, wherein training the first and second UNet networks through a training set comprises:
using the retinal fundus image in the training set as an input image of the first UNet network;
and combining the output image of the last layer of the first UNet network and the input image of the first UNet network to be used as the input image of the second UNet network.
6. The method of claim 2, wherein training the first and second UNet networks through a training set comprises:
the output image of the second last layer of the first UNet network is up-sampled, and a first loss function is used for the up-sampled image and a retinal blood vessel skeleton labeling image corresponding to the retinal fundus image;
using a second loss function for the output image of the last layer of the first UNet network and the retinal blood vessel skeleton labeling image corresponding to the retinal fundus image;
using a third loss function for the output image of the last layer of the second UNet network and the retinal blood vessel labeling image corresponding to the retinal fundus image;
wherein the first loss function and the second loss function are not the same.
7. The method of claim 6, wherein the first loss function is a weighted cross-entropy loss function, the second loss function is a standard cross-entropy loss function, and the third loss function is a standard cross-entropy loss function.
8. The method of claim 7, wherein training the first and second UNet networks through a training set comprises:
adding the first loss function, the second loss function and the third loss function to obtain a target loss function;
determining a minimum value of the target loss function;
performing parameter optimization on the loss of the first UNet network and the second UNet network based on the minimum value of the target loss function.
9. The method as recited in claim 8, wherein determining the minimum value of the objective loss function comprises:
calculating a gradient of the target loss function by adopting back propagation;
and determining the minimum value of the target loss function by adopting a random gradient descent algorithm.
10. A retinal vessel segmentation system, the system comprising:
the pre-processing module is used for pre-processing the retinal fundus image to be processed to obtain a first image;
the skeleton extraction module is used for carrying out skeleton extraction on the first image through a first UNet network to obtain a second image;
and the blood vessel segmentation module is used for merging the first image and the second image to obtain a third image, and performing blood vessel segmentation processing on the third image through a second UNet network to obtain a retina blood vessel segmentation result.
CN202010688015.7A 2020-07-16 2020-07-16 Retina blood vessel segmentation method and system Active CN112001928B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010688015.7A CN112001928B (en) 2020-07-16 2020-07-16 Retina blood vessel segmentation method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010688015.7A CN112001928B (en) 2020-07-16 2020-07-16 Retina blood vessel segmentation method and system

Publications (2)

Publication Number Publication Date
CN112001928A true CN112001928A (en) 2020-11-27
CN112001928B CN112001928B (en) 2023-12-15

Family

ID=73468037

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010688015.7A Active CN112001928B (en) 2020-07-16 2020-07-16 Retina blood vessel segmentation method and system

Country Status (1)

Country Link
CN (1) CN112001928B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112288794A (en) * 2020-09-04 2021-01-29 深圳硅基智能科技有限公司 Method and device for measuring blood vessel diameter of fundus image
CN113344842A (en) * 2021-03-24 2021-09-03 同济大学 Blood vessel labeling method of ultrasonic image
CN113658104A (en) * 2021-07-21 2021-11-16 南方科技大学 Blood vessel image processing method, electronic device and computer-readable storage medium
CN114565620A (en) * 2022-03-01 2022-05-31 电子科技大学 Fundus image blood vessel segmentation method based on skeleton prior and contrast loss
CN116797794A (en) * 2023-07-10 2023-09-22 北京透彻未来科技有限公司 Intestinal cancer pathology parting system based on deep learning
WO2023240319A1 (en) * 2022-06-16 2023-12-21 Eyetelligence Limited Fundus image analysis system

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109087302A (en) * 2018-08-06 2018-12-25 北京大恒普信医疗技术有限公司 A kind of eye fundus image blood vessel segmentation method and apparatus
CN109118495A (en) * 2018-08-01 2019-01-01 沈阳东软医疗系统有限公司 A kind of Segmentation Method of Retinal Blood Vessels and device
CN109658422A (en) * 2018-12-04 2019-04-19 大连理工大学 A kind of retinal images blood vessel segmentation method based on multiple dimensioned deep supervision network
CN109949302A (en) * 2019-03-27 2019-06-28 天津工业大学 Retinal feature Structural Techniques based on pixel
CN110197493A (en) * 2019-05-24 2019-09-03 清华大学深圳研究生院 Eye fundus image blood vessel segmentation method
CN110443815A (en) * 2019-08-07 2019-11-12 中山大学 In conjunction with the semi-supervised retina OCT image layer dividing method for generating confrontation network
CN110652312A (en) * 2019-07-19 2020-01-07 慧影医疗科技(北京)有限公司 Blood vessel CTA intelligent analysis system and application
CN110689526A (en) * 2019-09-09 2020-01-14 北京航空航天大学 Retinal blood vessel segmentation method and system based on retinal fundus image


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
RUIRUI LI et al.: "Connection Sensitive Attention U-NET for Accurate Retinal Vessel Segmentation", ARXIV *


Also Published As

Publication number Publication date
CN112001928B (en) 2023-12-15

Similar Documents

Publication Publication Date Title
CN112001928A (en) Retinal vessel segmentation method and system
CN111815574B (en) Fundus retina blood vessel image segmentation method based on rough set neural network
CN110992270A (en) Multi-scale residual attention network image super-resolution reconstruction method based on attention
CN105825235B (en) A kind of image-recognizing method based on multi-characteristic deep learning
CN111127482B (en) CT image lung and trachea segmentation method and system based on deep learning
CN110599500B (en) Tumor region segmentation method and system of liver CT image based on cascaded full convolution network
CN107437092A (en) The sorting algorithm of retina OCT image based on Three dimensional convolution neutral net
CN111091573B (en) CT image pulmonary vessel segmentation method and system based on deep learning
CN109035172B (en) Non-local mean ultrasonic image denoising method based on deep learning
CN105825509A (en) Cerebral vessel segmentation method based on 3D convolutional neural network
CN112132817A (en) Retina blood vessel segmentation method for fundus image based on mixed attention mechanism
CN112258488A (en) Medical image focus segmentation method
CN111161287A (en) Retinal vessel segmentation method based on symmetric bidirectional cascade network deep learning
CN113393446B (en) Convolutional neural network medical image key point detection method based on attention mechanism
Saleh et al. An efficient algorithm for retinal blood vessel segmentation using h-maxima transform and multilevel thresholding
CN112734755A (en) Lung lobe segmentation method based on 3D full convolution neural network and multitask learning
CN114266794B (en) Pathological section image cancer region segmentation system based on full convolution neural network
CN114565620B (en) Fundus image blood vessel segmentation method based on skeleton prior and contrast loss
CN112598031A (en) Vegetable disease detection method and system
CN112734748A (en) Image segmentation system for hepatobiliary and biliary calculi
CN114581434A (en) Pathological image processing method based on deep learning segmentation model and electronic equipment
CN115294075A (en) OCTA image retinal vessel segmentation method based on attention mechanism
CN112700460A (en) Image segmentation method and system
CN116503431A (en) Codec medical image segmentation system and method based on boundary guiding attention
CN117392153B (en) Pancreas segmentation method based on local compensation and multi-scale adaptive deformation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant