CN116152285A - Image segmentation system based on deep learning and gray information - Google Patents

Image segmentation system based on deep learning and gray information

Info

Publication number
CN116152285A
CN116152285A (application CN202310117871.0A)
Authority
CN
China
Prior art keywords
module
convolution block
segmentation
image
layer
Prior art date
Legal status
Granted
Application number
CN202310117871.0A
Other languages
Chinese (zh)
Other versions
CN116152285B (en)
Inventor
王宽全 (Wang Kuanquan)
刘亚淑 (Liu Yashu)
骆功宁 (Luo Gongning)
王玮 (Wang Wei)
Current Assignee
Harbin Institute of Technology
Original Assignee
Harbin Institute of Technology
Priority date
Filing date
Publication date
Application filed by Harbin Institute of Technology filed Critical Harbin Institute of Technology
Priority to CN202310117871.0A priority Critical patent/CN116152285B/en
Publication of CN116152285A publication Critical patent/CN116152285A/en
Application granted granted Critical
Publication of CN116152285B publication Critical patent/CN116152285B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/7715 Feature extraction, e.g. by transforming the feature space, e.g. multi-dimensional scaling [MDS]; Mappings, e.g. subspace methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10088 Magnetic resonance imaging [MRI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/30 Assessment of water resources
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Quality & Reliability (AREA)
  • Radiology & Medical Imaging (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The image segmentation system comprises a coding module, a spatial attention module, a gray correction module, a segmentation module and a loss module. The coding module is connected to the spatial attention module, the gray correction module and the segmentation module respectively; the spatial attention module is connected to the gray correction module; and the loss module is connected to the gray correction module and the segmentation module respectively. The invention combines deep learning with gray bias correction, the two tasks being carried out cooperatively by means of the deep learning technique. The invention belongs to the field of medical image segmentation.

Description

Image segmentation system based on deep learning and gray information
Technical Field
The invention relates to a segmentation system, in particular to a segmentation system based on deep learning and gray information for small-sample nuclear magnetic resonance image data, and belongs to the field of medical image segmentation.
Background
Clinical diagnosis and treatment of many diseases require analyzing the patient's anatomical structures and related functional indexes by means of nuclear magnetic resonance images. Atrial fibrillation is one of the most common arrhythmias and an important cause of high-mortality, high-disability diseases such as stroke and myocardial infarction, and extracting the atria from nuclear magnetic resonance images is the basis of such analysis. Ideally, the same tissue has the same gray distribution in an image, while gray distributions differ between tissues. In practice, owing to the diversity of human anatomy and problems with image acquisition equipment, the gray values of the same tissue in a nuclear magnetic resonance image may be distributed inconsistently; that is, an offset (bias) exists in the image, and the process of removing this inconsistency is called offset correction. Unlike the case of natural images, gray distribution is an important basis for segmenting medical images, and the presence of offset further increases the difficulty of automatic segmentation methods.
Existing automatic medical image segmentation techniques mainly comprise methods based on active contour models, methods based on deep learning, and the like. Methods based on active contour models are unsupervised and place no great demand on data volume, but they rely mainly on gray-distribution differences between tissues to achieve segmentation, so their accuracy is low on nuclear magnetic resonance images with uneven gray distribution. Segmentation methods based on deep learning greatly improve the efficiency of automatic segmentation, but they depend on large amounts of data and generalize poorly, which seriously hinders their development and popularization for nuclear magnetic resonance image segmentation; moreover, uneven gray distribution in nuclear magnetic resonance images can cause incomplete segmentation, discontinuous segmentation targets, and similar defects.
Disclosure of Invention
The invention aims to solve the problems that existing automatic medical image segmentation techniques, when applied to nuclear magnetic resonance images, depend on large amounts of image data, generalize poorly, achieve low accuracy on images with uneven gray distribution, or produce incomplete segmentation and discontinuous segmentation targets, and accordingly provides an image segmentation system based on deep learning and gray information.
The system comprises a coding module, a spatial attention module, a gray correction module, a segmentation module and a loss module;
the coding module is respectively connected with the spatial attention module, the gray correction module and the segmentation module, the spatial attention module is connected with the gray correction module, and the loss module is respectively connected with the gray correction module and the segmentation module;
the coding module is used for receiving the nuclear magnetic resonance image, coding the nuclear magnetic resonance image, extracting shallow layer characteristics of the nuclear magnetic resonance image and respectively transmitting the shallow layer characteristics to the spatial attention module, the gray correction module and the segmentation module; the system is also used for receiving the gradient returned by the spatial attention module, the gradient returned by the gray correction module and the gradient returned by the segmentation module;
the coding module sequentially comprises four groups of structures and a convolution block, and each group of structures sequentially comprises a convolution block and a maximum pooling layer;
the spatial attention module is used for receiving shallow features output by the convolution blocks in the four groups of structures in the coding module, weighting each shallow feature to obtain weighted shallow features, and sending the weighted shallow features to the gray correction module; the system is also used for receiving the gradient returned by the gray correction module and transmitting the gradient and the gradient of the module to the coding module together;
the gray correction module is used for receiving the shallow features output by the encoding module and the weighted shallow features output by the spatial attention module, carrying out gray offset correction on the nuclear magnetic resonance image based on the features, and outputting an unbiased image and an offset image of the nuclear magnetic resonance image; the system is also used for receiving the gradient returned by the loss module and sending the gradient and the gradient of the module to the spatial attention module and the coding module together;
the segmentation module is used for receiving the shallow features output by the coding module and the gradient returned by the gray correction module via the coding module, segmenting the nuclear magnetic resonance image based on the received features and gradient, and outputting the distribution map of the target region and the background region; it is also used for receiving the gradient returned by the loss module and sending that gradient, together with its own gradient, to the coding module;
the loss module is used for receiving the nuclear magnetic resonance image, the unbiased image and the offset image output by the gray correction module, the distribution diagram of the target area and the background area output by the segmentation module, calculating segmentation loss, offset correction loss, segmentation and offset correction joint optimization loss and joint loss based on the images, and sending the calculated loss result to the gray correction module and the segmentation module.
Further, each convolution block in the encoding module includes, in order, a convolution layer, a ReLU activation layer, and a group regularization layer.
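By way of illustration only, a minimal PyTorch sketch of this convolution block and of the encoder layout described above; the channel counts, kernel size, and number of normalization groups are assumptions not given in the patent, and the sketch is 2-D whereas the patent's MR volumes would use the Conv3d/MaxPool3d counterparts:

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """Convolution block of the encoding module: convolution -> ReLU -> group normalization."""
    def __init__(self, in_ch, out_ch, groups=8):  # kernel size and group count are assumed
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.GroupNorm(groups, out_ch),  # out_ch must be divisible by groups
        )

    def forward(self, x):
        return self.body(x)

class Encoder(nn.Module):
    """Four (conv block + max pooling) groups followed by one final conv block."""
    def __init__(self, in_ch=1, base=16):  # assumed channel progression
        super().__init__()
        chs = [base, base * 2, base * 4, base * 8]
        blocks, pools, prev = [], [], in_ch
        for c in chs:
            blocks.append(ConvBlock(prev, c))
            pools.append(nn.MaxPool2d(2))
            prev = c
        self.blocks = nn.ModuleList(blocks)
        self.pools = nn.ModuleList(pools)
        self.final = ConvBlock(prev, base * 16)  # fifth convolution block, no pooling after it

    def forward(self, x):
        feats = []  # shallow features F_1^a ... F_5^a
        for blk, pool in zip(self.blocks, self.pools):
            x = blk(x)
            feats.append(x)
            x = pool(x)
        feats.append(self.final(x))
        return feats
```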
Further, the spatial attention module comprises four groups of structures in sequence, each group of structures comprising one convolution layer and two nonlinear activation layers in sequence.
Further, the gray correction module sequentially comprises four groups of structures, a convolution block and a Sigmoid activation layer, and each group of structures sequentially comprises a convolution block and a linear interpolation up-sampling layer.
Further, each convolution block in the gray correction module sequentially comprises a convolution layer, a nonlinear activation layer and a batch regularization layer.
Further, the segmentation module sequentially comprises four groups of structures, a convolution block and a Softmax activation layer, and each group of structures sequentially comprises a convolution block and a deconvolution up-sampling layer.
Further, each convolution block in the segmentation module comprises a convolution layer, a nonlinear activation layer and a group regularization layer in sequence.
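Again purely illustrative, a sketch of one decoder group as described for the gray correction and segmentation modules above; per the preceding paragraphs the two differ in the upsampling operator (linear interpolation versus deconvolution), the normalization (batch versus group), and the final activation (Sigmoid versus Softmax). Channel counts are assumed:

```python
import torch.nn as nn

def decoder_group(in_ch, out_ch, transpose=False):
    """One decoder group: convolution block followed by a 2x upsampling layer.

    transpose=False -> linear-interpolation upsampling + batch norm (gray correction module);
    transpose=True  -> deconvolution upsampling + group norm (segmentation module).
    Channel counts are assumptions; out_ch must be divisible by 8 for GroupNorm.
    """
    norm = nn.GroupNorm(8, out_ch) if transpose else nn.BatchNorm2d(out_ch)
    up = (nn.ConvTranspose2d(out_ch, out_ch, kernel_size=2, stride=2) if transpose
          else nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False))
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),  # convolution layer
        nn.ReLU(inplace=True),                               # nonlinear activation layer
        norm,                                                # batch or group regularization layer
        up,                                                  # upsampling layer (doubles feature size)
    )
```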
Further, an image segmentation method based on deep learning and gray information, comprising the steps of:
S1, randomly dividing the nuclear magnetic resonance images into a training set $D_t=\{(x_i,y_i)\mid i=1,\dots,N\}$ and a validation set $D_v=\{(x_j,y_j)\mid j=1,\dots,M\}$, wherein $x$ represents a nuclear magnetic resonance image, $y$ represents the label corresponding to the image, $i$ indexes the $i$-th nuclear magnetic resonance image in the training set, $N$ represents the total number of images in the training set, $j$ indexes the $j$-th image in the validation set, and $M$ represents the total number of images in the validation set;
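A minimal sketch of this random split, assuming images and labels are held in parallel lists; the 0.8 training fraction is an assumption, since the patent does not specify one:

```python
import random

def split_dataset(images, labels, train_fraction=0.8, seed=0):
    """Randomly partition (image, label) pairs into training set D_t and validation set D_v."""
    pairs = list(zip(images, labels))
    random.Random(seed).shuffle(pairs)
    n_train = int(train_fraction * len(pairs))
    return pairs[:n_train], pairs[n_train:]  # D_t (N pairs), D_v (M pairs)
```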
S2, inputting a nuclear magnetic resonance image from the training set $D_t$ into the encoding module: a.convolution block 1 (the first convolution block of the a encoding module) outputs the shallow feature $F_1^a$; $F_1^a$ is input into the max-pooling layer connected to a.convolution block 1, which outputs the feature $\hat F_1^a$; $\hat F_1^a$ is input into a.convolution block 2 (the second convolution block of the a encoding module), which outputs the shallow feature $F_2^a$; the above operations are repeated until a.convolution block 5 (the fifth convolution block of the a encoding module) outputs the shallow feature $F_5^a$, yielding the shallow features $F_s^a$, $s=1,2,3,4,5$, output by each convolution block of the encoding module, i.e. the shallow feature set $F^a=\{F_1^a,F_2^a,F_3^a,F_4^a,F_5^a\}$;
S3, inputting the shallow features $F_1^a$ to $F_4^a$ output by the first four convolution blocks of the encoding module into the corresponding first four convolution blocks of the spatial attention module: the shallow feature $F_1^a$ output by a.convolution block 1 is input into b.convolution block 1 (the first convolution block of the b spatial attention module), which outputs the feature $F_1^b$; $F_1^b$ is input into the b.nonlinear activation layers connected to b.convolution block 1, which output the feature $\hat F_1^b$; the shallow feature $F_2^a$ output by a.convolution block 2 is input into b.convolution block 2 (the second convolution block of the b spatial attention module), which outputs the feature $F_2^b$; $F_2^b$ is input into the b.nonlinear activation layers connected to b.convolution block 2, which output the feature $\hat F_2^b$; the above operations are repeated until the feature $\hat F_4^b$ is output, yielding the weighted features $\hat F^b=\{\hat F_1^b,\hat F_2^b,\hat F_3^b,\hat F_4^b\}$;
S4, inputting the shallow feature $F_5^a$ output by the fifth convolution block of the encoding module directly into the fifth convolution block of the gray correction module, which outputs the feature $F_5^c$; $F_5^c$ is input into the linear interpolation upsampling layer connected to the fifth convolution block of the gray correction module, which outputs the feature $\hat F_5^c$; the shallow feature $F_4^a$ output by the fourth convolution block of the encoding module, the weighted feature $\hat F_4^b$ output by the fourth convolution block of the spatial attention module, and the feature $\hat F_5^c$ are input into the fourth convolution block of the gray correction module, which outputs the feature $F_4^c$; $F_4^c$ is input into the linear interpolation upsampling layer connected to the fourth convolution block of the gray correction module, which outputs the feature $\hat F_4^c$; proceeding as for the fourth convolution block yields the feature $F_1^c$ output by the first convolution block of the gray correction module; $F_1^c$ is input into the Sigmoid activation layer, which outputs the unbiased image $J$ and the offset image $B$ of the nuclear magnetic resonance image;
S5, inputting the shallow feature $F_5^a$ output by the fifth convolution block of the encoding module directly into the fifth convolution block of the segmentation module, which outputs the feature $F_5^d$; $F_5^d$ is input into the deconvolution upsampling layer connected to the fifth convolution block of the segmentation module, which outputs the feature $\hat F_5^d$; the shallow feature $F_4^a$ output by the fourth convolution block of the encoding module and the feature $\hat F_5^d$ are input into the fourth convolution block of the segmentation module, which outputs the feature $F_4^d$; after the deconvolution upsampling connected to the fourth convolution block of the segmentation module, the feature $\hat F_4^d$ is obtained; proceeding as for the fourth convolution block yields the feature $F_1^d$ output by the first convolution block of the segmentation module; $F_1^d$ is input into the Softmax activation layer, which outputs the distribution map $p$ of the target region and the background region;
s6, respectively calculating segmentation loss, offset correction loss, segmentation and offset correction joint optimization loss and joint loss according to the obtained offset image B, the unbiased image J, the distribution map p of the target area and the background area, the input nuclear magnetic resonance image x and the corresponding label y;
s7, carrying out iterative training on all training data according to S2-S6, calculating the gradient of each module after the corresponding results are output by the modules in each iteration, returning the gradient of each module to the module connected with the module, correcting the learning direction of the next iteration, and stopping training until the iteration number reaches the set maximum iteration number to obtain a trained segmentation system;
S8, inputting the nuclear magnetic resonance images of the validation set $D_v$ into the trained segmentation system to obtain the corresponding segmentation results.
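For illustration, a high-level PyTorch sketch of the S2 to S8 training loop; the four module objects, the optimizer choice (Adam), and the learning rate are assumptions, since the patent only specifies that per-iteration gradients are back-propagated between connected modules until a maximum iteration count is reached:

```python
import torch

def train(encoder, attention, correction, segmentation, joint_loss_fn,
          train_loader, max_iters, lr=1e-4):  # lr is an assumed value
    params = (list(encoder.parameters()) + list(attention.parameters()) +
              list(correction.parameters()) + list(segmentation.parameters()))
    optimizer = torch.optim.Adam(params, lr=lr)
    it = 0
    while it < max_iters:
        for x, y in train_loader:
            feats = encoder(x)                   # F_1^a ... F_5^a (S2)
            weighted = attention(feats[:4])      # weighted shallow features (S3)
            J, B = correction(feats, weighted)   # unbiased image J, offset image B (S4)
            p = segmentation(feats)              # target/background distribution map (S5)
            loss = joint_loss_fn(x, y, J, B, p)  # joint loss L (S6)
            optimizer.zero_grad()
            loss.backward()                      # gradients returned module to module (S7)
            optimizer.step()
            it += 1
            if it >= max_iters:
                break
    return encoder, attention, correction, segmentation
```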
Further, the segmentation loss in S6, $L_{seg}(p,y)$, is given by a formula rendered only as an image in the source; in it, $\lambda$ represents the weight used to balance the two losses.
The offset correction loss $L_{lsf}(x,B,J,p)$ is likewise rendered only as an image in the source. In it, $p_i$ represents the distribution map of the target region and the background region of the $i$-th nuclear magnetic resonance image; $\nabla$ denotes the gradient of the corresponding image; $c_i$ represents the gray mean of the $i$-th nuclear magnetic resonance image; $TV(B)$ represents the total variation function, which measures the smoothness of the offset image and is obtained by summing the partial derivatives of each pixel $v$ of the offset image $B$ in the X, Y, and Z directions; $V$ represents the total number of pixels of the current nuclear magnetic resonance image $x$.
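A sketch of the total-variation term $TV(B)$ as just described, assuming a forward-difference discretization (the patent does not specify one):

```python
import torch

def total_variation_3d(B):
    """TV(B): sum of |dB/dx| + |dB/dy| + |dB/dz| over an offset volume of shape (X, Y, Z),
    using forward differences (an assumed discretization)."""
    dx = (B[1:, :, :] - B[:-1, :, :]).abs().sum()
    dy = (B[:, 1:, :] - B[:, :-1, :]).abs().sum()
    dz = (B[:, :, 1:] - B[:, :, :-1]).abs().sum()
    return dx + dy + dz
```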
Further, the segmentation and offset correction joint optimization loss in S6:

$$L_{wce}(J,p,y) = -e^{(1-g(J))}\, y \log p$$

wherein $g(J)$ represents a boundary indication function (its definition is rendered only as an image in the source);

the joint loss:

$$L = L_{seg} + \gamma_1 L_{lsf} + \gamma_2 L_{wce}$$

wherein $\gamma_1$ and $\gamma_2$ represent the weights.
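As an illustrative reconstruction, the collaborative term and the weighted combination in code; `g` stands for the boundary indication function, whose exact form (like those of $L_{seg}$ and $L_{lsf}$) is given only as an image in the source:

```python
import torch

def collaborative_loss(J, p, y, g):
    """L_wce(J, p, y) = -e^(1 - g(J)) * y * log p, averaged over pixels.
    g is the boundary indication function (exact form not recoverable from the source)."""
    eps = 1e-8  # numerical safety, an implementation detail
    return -(torch.exp(1.0 - g(J)) * y * torch.log(p + eps)).mean()

def joint_loss(seg_loss, lsf_loss, wce_loss, gamma1, gamma2):
    """L = L_seg + gamma_1 * L_lsf + gamma_2 * L_wce."""
    return seg_loss + gamma1 * lsf_loss + gamma2 * wce_loss
```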
The beneficial effects are that:
the invention discloses an image segmentation system which comprises a coding module, a spatial attention module, a gray correction module, a segmentation module and a loss module, wherein the coding module, the spatial attention module, the gray correction module and the segmentation module are all constructed by deep learning, the coding module is respectively connected with the spatial attention module, the gray correction module and the segmentation module, the spatial attention module is connected with the gray correction module, and the loss module is respectively connected with the gray correction module and the segmentation module. The coding module is used for extracting shallow layer characteristics of the nuclear magnetic resonance image; the spatial attention module is used for weighting the shallow features to obtain weighted shallow features; the gray correction module is used for obtaining an unbiased image and a biased image of the nuclear magnetic resonance image according to the shallow features and the weighted shallow features; the segmentation module is used for segmenting the original nuclear magnetic resonance image according to the shallow layer characteristics and the unbiased image and the offset image of the nuclear magnetic resonance image and outputting distribution diagrams of the target area and the background area; the loss module is used for calculating segmentation loss, bias correction loss, segmentation and bias correction joint optimization loss (cooperative loss) and joint loss according to distribution diagrams of the nuclear magnetic resonance image, the unbiased image and the bias image, and the target area and the background area.
The segmentation system constructed by the invention employs deep learning, gray offset correction, gray-distribution attention weighting, and related techniques. When segmenting small-sample nuclear magnetic resonance images, the two tasks of gray offset correction and segmentation are carried out cooperatively using deep learning. The gray correction module gives the system the ability to perceive gray-distribution information by constructing the energy functional required by the active contour system, and outputs the offset image and unbiased image of the original nuclear magnetic resonance image. The segmentation module separates target from background by constructing an optimization function and, exploiting the homogeneity of the unbiased image, links the unbiased image with the segmentation result, which improves the generalization and accuracy of the segmentation task. By constructing a collaborative loss function for the two tasks, they promote and optimize each other, yielding a segmentation system with good segmentation performance trained on a small-sample training set. This automatic extraction method fused with gray offset information is of great significance for improving the efficiency of personalized diagnosis and treatment of atrial fibrillation.
The offset image in the gray correction module guides the segmentation method to learn the individual characteristics of each nuclear magnetic resonance image, so that segmentation is not disturbed by uneven gray distribution; this improves segmentation accuracy and effectively alleviates incomplete segmentation and discontinuous segmentation targets. Meanwhile, the collaborative loss function built from the unbiased image and the segmentation result guides the segmentation method to learn the more common characteristics of the target distribution, so that the segmentation task focuses on the structural information of the target, improving its robustness and generalization ability; satisfactory performance can therefore be achieved with a small number of samples. This overcomes the difficulty that deep learning systems need large amounts of training data, and also eases the problems of difficult and costly data and label acquisition.
Drawings
FIG. 1 is a schematic illustration of the present invention;
Detailed Description
The first embodiment is as follows: referring to fig. 1, an image segmentation system based on deep learning and gray information according to the present embodiment includes a coding module, a spatial attention module, a gray correction module, a segmentation module, and a loss module, where the coding module is connected to the spatial attention module, the gray correction module, and the segmentation module, the spatial attention module is connected to the gray correction module, and the loss module is connected to the gray correction module and the segmentation module, respectively.
a) The coding module is used for receiving the nuclear magnetic resonance image, coding the nuclear magnetic resonance image, extracting shallow layer characteristics of the nuclear magnetic resonance image by convolution and other operations, and respectively transmitting the shallow layer characteristics into the b space attention module, the c gray correction module and the d segmentation module; and the system is also used for receiving the gradient returned by the b space attention module (obtained through gradient return), the gradient returned by the c gray correction module and the gradient returned by the d segmentation module.
The coding module sequentially comprises four groups of structures and a convolution block, wherein each group of structures sequentially comprises a convolution block and a maximum pooling layer (Max-pooling layer), and the convolution blocks are connected with the maximum pooling layer. Each convolution block includes, in order, a convolution layer (Convolution layer), a ReLU activation layer (ReLU activation layer), and a group regularization layer (Group normalization layer).
The maximum pooling layer halves the feature size: assuming the length and width of the input nuclear magnetic resonance image are 8, they become 4 after one pass through the max-pooling layer and 2 after a second pass. This is equivalent to a downscaling operation and reduces the amount of computation to a certain extent.
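This halving can be checked directly in PyTorch (tensor shapes assumed for illustration):

```python
import torch
import torch.nn as nn

pool = nn.MaxPool2d(kernel_size=2)
x = torch.randn(1, 1, 8, 8)   # an 8x8 feature map
print(pool(x).shape)          # torch.Size([1, 1, 4, 4])
print(pool(pool(x)).shape)    # torch.Size([1, 1, 2, 2])
```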
b) The spatial attention module is used for receiving shallow features output by the convolution blocks in the four groups of structures in the a coding module, weighting each shallow feature to obtain weighted shallow features, and sending the weighted shallow features to the c gray correction module; and the gradient receiving module is also used for receiving the gradient returned by the c gray correction module and sending the gradient and the gradient of the module to the a coding module.
The spatial attention module comprises four groups of structures in sequence, each group of structures comprising one convolution layer and two nonlinear activation layers in sequence, the nonlinear activation layers being b.nonlinear activation layers 1 and b.nonlinear activation layers 2 in fig. 1.
c) The gray correction module is used for receiving the shallow characteristics output by each convolution block in the coding module a and the weighted shallow characteristics output by the spatial attention module b, decoding the characteristics, realizing gray offset correction on the nuclear magnetic resonance image based on the characteristics, and outputting an unbiased image and an offset image of the nuclear magnetic resonance image; and the system is also used for receiving the gradient returned by the loss module and sending the gradient and the gradient of the module to the b space attention module and the a coding module. The gray correction module receives the gradient returned by the bias correction loss, the segmentation and the bias correction joint optimization loss (cooperative loss) in the loss module.
The gray correction module sequentially comprises four groups of structures, a convolution block and a Sigmoid activation layer, wherein each group of structures sequentially comprises a convolution block and a linear interpolation up-sampling layer (Linear interpolation upsampling layer), and the convolution blocks are connected with the linear interpolation up-sampling layer. Each convolution block includes, in order, a convolution layer, a nonlinear activation layer, and a batch regularization layer (Batch normalization layer). The linear interpolation upsampling layer doubles the feature size.
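As an illustration of the doubling, a minimal PyTorch check; 2-D bilinear interpolation stands in for the patent's linear interpolation, and a 3-D volume would use mode='trilinear':

```python
import torch
import torch.nn as nn

up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False)
x = torch.randn(1, 16, 4, 4)  # channel count assumed
print(up(x).shape)            # torch.Size([1, 16, 8, 8]) -- feature size doubled
```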
d) The segmentation module is used for receiving shallow features output by each convolution block in the a coding module and gradients returned by the c gray correction module received by the a coding module, decoding the shallow features, and combining the gradients returned by each module received by the a coding module to realize the accurate segmentation of the nuclear magnetic resonance image and output distribution diagrams of a target area and a background area; and the gradient receiving module is also used for receiving the gradient returned by the loss module and sending the gradient and the gradient of the module to the a coding module. The segmentation module receives the gradient returned by each loss in the loss module.
The segmentation module sequentially comprises four groups of structures, a convolution block and a Softmax activation layer, wherein each group of structures sequentially comprises a convolution block and a deconvolution up-sampling layer (Transpose convolution layer), and the convolution block is connected with the deconvolution up-sampling layer. Each convolution block comprises a convolution layer, a nonlinear activation layer and a group regularization layer in sequence. The deconvolution layer doubles the feature size.
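And the deconvolution (transposed-convolution) counterpart used by the segmentation module, with an assumed channel count:

```python
import torch
import torch.nn as nn

up = nn.ConvTranspose2d(in_channels=16, out_channels=16, kernel_size=2, stride=2)
x = torch.randn(1, 16, 4, 4)
print(up(x).shape)  # torch.Size([1, 16, 8, 8]) -- feature size doubled
```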
e) The loss module is used for receiving the nuclear magnetic resonance image, the unbiased image and the offset image output by the c gray correction module, the distribution diagram of the target area and the background area output by the d segmentation module, calculating segmentation loss, offset correction loss, segmentation and offset correction joint optimization loss (cooperative loss) and joint loss based on the images, and sending calculated loss results to the c gray correction module and the d segmentation module.
The second embodiment is as follows: the image segmentation method based on deep learning and gray information according to the present embodiment will be described with reference to fig. 1, and includes the following steps:
step 1, randomly dividing the nuclear magnetic resonance image into a training set D t ={(x i ,y i ) I=1,..n } and verification set D v ={(x j ,y j ) I j=1..m }, wherein x represents the nuclear magnetic resonance image, y represents the label corresponding to the nuclear magnetic resonance image, the label comprises a foreground Zuo Xin room and a background, the left atrium pixel value is 1, the background pixel value is 0, i represents the ith nuclear magnetic resonance data in the training set, N represents the total number of nuclear magnetic resonance data in the training set, j represents the jth nuclear magnetic resonance data in the verification set, and M represents the total number of nuclear magnetic resonance data in the verification set.
Step 2, a nuclear magnetic resonance image from the training set $D_t$ is input into the a encoding module to extract its shallow information, obtaining the shallow features output by each convolution block of the a encoding module, i.e. the shallow feature set $F^a=\{F_1^a,F_2^a,F_3^a,F_4^a,F_5^a\}$.
As shown in fig. 1, in the a encoding module, after the nuclear magnetic resonance image is input into a.convolution block 1 (the first convolution block of the a encoding module), a.convolution block 1 outputs the shallow feature $F_1^a$ of the current whole image; $F_1^a$ is input into the max-pooling layer connected to a.convolution block 1, which outputs the feature $\hat F_1^a$; $\hat F_1^a$ is input into a.convolution block 2 (the second convolution block of the a encoding module), which outputs the shallow feature $F_2^a$; the above operations are repeated until a.convolution block 5 (the fifth convolution block of the a encoding module) outputs the shallow feature $F_5^a$, yielding the shallow features $F_s^a$, $s=1,2,3,4,5$, output by each convolution block of the a encoding module.
Step 3, the shallow features $F_1^a$ to $F_4^a$ output by the first four convolution blocks of the a encoding module are input into the corresponding first four convolution blocks of the b spatial attention module, which output the features $\hat F_1^b$ to $\hat F_4^b$.
As shown in fig. 1, in the b spatial attention module, the shallow feature $F_1^a$ output by a.convolution block 1 is input into b.convolution block 1 (the first convolution block of the b spatial attention module), which outputs the feature $F_1^b$; $F_1^b$ is input into the b.nonlinear activation layers connected to b.convolution block 1 and, after processing by the two nonlinear activation layers, the feature $\hat F_1^b$ is output; the shallow feature $F_2^a$ output by a.convolution block 2 is input into b.convolution block 2 (the second convolution block of the b spatial attention module), which outputs the feature $F_2^b$; $F_2^b$ is input into the b.nonlinear activation layers connected to b.convolution block 2, which output the feature $\hat F_2^b$; the above steps are repeated to obtain the corresponding features $\hat F_3^b$ and $\hat F_4^b$.
Step 4, as shown in fig. 1, in the c gray correction module, the shallow feature $F_5^a$ output by the fifth convolution block of the a encoding module (a.convolution block 5) is input directly into the fifth convolution block of the c gray correction module (c.convolution block 5), which outputs the feature $F_5^c$; $F_5^c$ is input into the linear interpolation upsampling layer connected to c.convolution block 5, which outputs the feature $\hat F_5^c$; the shallow feature $F_4^a$ output by the fourth convolution block of the a encoding module (a.convolution block 4), the feature $\hat F_4^b$ output by the fourth convolution block of the b spatial attention module (b.convolution block 4), and the feature $\hat F_5^c$ are input into the fourth convolution block of the c gray correction module (c.convolution block 4), which outputs the feature $F_4^c$; $F_4^c$ is input into the linear interpolation upsampling layer connected to c.convolution block 4, which outputs the feature $\hat F_4^c$; proceeding in the same manner as for the fourth convolution block yields the feature $F_1^c$ output by the first convolution block of the c gray correction module (c.convolution block 1); $F_1^c$ is input into the Sigmoid activation layer, which outputs the unbiased image $J$ and the offset image $B$, i.e. the offset correction result.
Step 5, as shown in fig. 1, in the d segmentation module, the shallow feature $F_5^a$ output by the fifth convolution block of the a encoding module (a.convolution block 5) is input directly into the fifth convolution block of the d segmentation module (d.convolution block 5), which outputs the feature $F_5^d$; $F_5^d$ is input into the deconvolution upsampling layer connected to d.convolution block 5, which outputs the feature $\hat F_5^d$; the shallow feature $F_4^a$ output by the fourth convolution block of the a encoding module (a.convolution block 4) and the feature $\hat F_5^d$ are input into the fourth convolution block of the d segmentation module (d.convolution block 4), which outputs the feature $F_4^d$; after the deconvolution upsampling connected to d.convolution block 4, the feature $\hat F_4^d$ is obtained; following the overall processing procedure of the fourth convolution block, the feature $F_1^d$ output by the first convolution block of the d segmentation module (d.convolution block 1) is obtained; $F_1^d$ is input into the Softmax activation layer, which outputs the distribution map of the target region (the left atrium) and the background region, giving the segmentation result $p(la,bg)=\mathrm{Softmax}(F_1^d)$, where $la$ and $bg$ denote the segmentation results corresponding to the left atrium and the background, respectively.
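In PyTorch terms, the Softmax step can be sketched as follows; the tensor shape and channel order are assumptions for illustration:

```python
import torch

F1d = torch.randn(1, 2, 8, 8)  # logits; channel order (bg, la) is an assumption
p = torch.softmax(F1d, dim=1)  # distribution map p over background and left atrium
mask = p.argmax(dim=1)         # per-pixel label: 1 = left atrium, 0 = background
```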
Step 6, from the obtained offset image $B$, the unbiased image $J$, the distribution map $p$ of the target region and the background region, the input nuclear magnetic resonance image $x$, and the corresponding label $y$, the segmentation loss $L_{seg}(p,y)$, the offset correction loss $L_{lsf}(x,B,J,p)$, the segmentation and offset correction joint optimization loss (collaborative loss) $L_{wce}(J,p,y)$, and the joint loss are calculated.
The formula for $L_{seg}(p,y)$ is given only as an image in the source; in it, $\lambda$ represents the weight used to balance the two losses.
The formula for $L_{lsf}(x,B,J,p)$ is likewise given only as an image in the source. In it, $p_i$ represents the segmentation result (the distribution map of the target region and the background region) of the $i$-th nuclear magnetic resonance image, i.e. 1 in the left atrium and 0 in the background; $\nabla$ denotes the gradient of the corresponding image; $c_i$ represents the gray mean of the $i$-th nuclear magnetic resonance image; $TV(B)$ represents the total variation function, which measures the smoothness of the offset image by summing the partial derivatives of each pixel $v$ of the offset image $B$ in the X, Y, and Z directions; $V$ represents the total number of pixels of the current nuclear magnetic resonance image $x$.

$$L_{wce}(J,p,y) = -e^{(1-g(J))}\, y \log p$$

where $g(J)$ represents a boundary indication function (its definition is given only as an image in the source).
Joint loss:

$$L = L_{seg} + \gamma_1 L_{lsf} + \gamma_2 L_{wce}$$
Wherein $\gamma_1$ and $\gamma_2$ represent the weights. After the losses are calculated, they are propagated to the corresponding modules according to the error back-propagation algorithm, the gradients of the corresponding modules are calculated, and these gradients are used to correct the learning direction of the next iteration of the segmentation system. The propagation direction is indicated by the dashed arrows in fig. 1.
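In a deep learning framework this dashed-arrow propagation corresponds to a single backward pass; a toy sketch in which the parameter and the loss are stand-ins, not the patent's own quantities:

```python
import torch

w = torch.randn(3, requires_grad=True)  # stand-in for a module's parameters
opt = torch.optim.SGD([w], lr=0.1)
loss = (w ** 2).sum()                   # stand-in for the joint loss L
opt.zero_grad()
loss.backward()                         # error back-propagation: dL/dw computed
opt.step()                              # learning direction corrected for the next iteration
```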
Step 7, iterative training is carried out on all training data according to steps 2-6; in each iteration, after the modules output their corresponding results, the gradient of each module is calculated and returned to the module connected with it to correct the learning direction of the next iteration; training stops when the number of iterations reaches the set maximum, yielding the trained segmentation system.
Step 8, the nuclear magnetic resonance images of the validation set $D_v$ are input into the trained segmentation system to obtain the segmentation results for the left atrium and the background.

Claims (10)

1. An image segmentation system based on deep learning and gray information, characterized in that: the system comprises a coding module, a spatial attention module, a gray correction module, a segmentation module and a loss module;
the coding module is respectively connected with the spatial attention module, the gray correction module and the segmentation module, the spatial attention module is connected with the gray correction module, and the loss module is respectively connected with the gray correction module and the segmentation module;
the coding module is used for receiving the nuclear magnetic resonance image, coding the nuclear magnetic resonance image, extracting shallow layer characteristics of the nuclear magnetic resonance image and respectively transmitting the shallow layer characteristics to the spatial attention module, the gray correction module and the segmentation module; the system is also used for receiving the gradient returned by the spatial attention module, the gradient returned by the gray correction module and the gradient returned by the segmentation module;
the coding module sequentially comprises four groups of structures and a convolution block, and each group of structures sequentially comprises a convolution block and a maximum pooling layer;
the spatial attention module is used for receiving shallow features output by the convolution blocks in the four groups of structures in the coding module, weighting each shallow feature to obtain weighted shallow features, and sending the weighted shallow features to the gray correction module; the system is also used for receiving the gradient returned by the gray correction module and transmitting the gradient and the gradient of the module to the coding module together;
the gray correction module is used for receiving the shallow features output by the encoding module and the weighted shallow features output by the spatial attention module, carrying out gray offset correction on the nuclear magnetic resonance image based on the features, and outputting an unbiased image and an offset image of the nuclear magnetic resonance image; the system is also used for receiving the gradient returned by the loss module and sending the gradient and the gradient of the module to the spatial attention module and the coding module together;
the segmentation module is used for receiving the shallow features output by the coding module and the gradient returned by the gray correction module via the coding module, segmenting the nuclear magnetic resonance image based on the received features and gradient, and outputting the distribution map of the target region and the background region; it is also used for receiving the gradient returned by the loss module and sending that gradient, together with its own gradient, to the coding module;
the loss module is used for receiving the nuclear magnetic resonance image, the unbiased image and the offset image output by the gray correction module, the distribution diagram of the target area and the background area output by the segmentation module, calculating segmentation loss, offset correction loss, segmentation and offset correction joint optimization loss and joint loss based on the images, and sending the calculated loss result to the gray correction module and the segmentation module.
2. The image segmentation system based on deep learning and gray information as set forth in claim 1, wherein: each convolution block in the coding module comprises a convolution layer, a ReLU activation layer and a group regularization layer in sequence.
3. The image segmentation system based on deep learning and gray information as set forth in claim 2, wherein: the spatial attention module comprises four groups of structures in sequence, and each group of structures comprises a convolution layer and two nonlinear activation layers in sequence.
4. The image segmentation system based on deep learning and gray information as set forth in claim 3, wherein: the gray correction module sequentially comprises four groups of structures, a convolution block and a Sigmoid activation layer, and each group of structures sequentially comprises a convolution block and a linear interpolation up-sampling layer.
5. The image segmentation system based on deep learning and gray information as set forth in claim 4, wherein: each convolution block in the gray correction module comprises a convolution layer, a nonlinear activation layer and a batch regularization layer in sequence.
6. The image segmentation system based on deep learning and gray information as set forth in claim 5, wherein: the segmentation module sequentially comprises four groups of structures, a convolution block and a Softmax activation layer, and each group of structures sequentially comprises a convolution block and a deconvolution up-sampling layer.
7. The image segmentation system based on deep learning and gray information as set forth in claim 6, wherein: each convolution block in the segmentation module comprises a convolution layer, a nonlinear activation layer and a group regularization layer in sequence.
8. An image segmentation method using the image segmentation system based on deep learning and gray information as set forth in claim 1, characterized by comprising the following steps:
S1, randomly dividing the nuclear magnetic resonance images into a training set $D_t=\{(x_i,y_i)\mid i=1,\dots,N\}$ and a validation set $D_v=\{(x_j,y_j)\mid j=1,\dots,M\}$, wherein $x$ represents a nuclear magnetic resonance image, $y$ represents the label corresponding to the image, $i$ indexes the $i$-th nuclear magnetic resonance image in the training set, $N$ represents the total number of images in the training set, $j$ indexes the $j$-th image in the validation set, and $M$ represents the total number of images in the validation set;
S2, inputting a nuclear magnetic resonance image from the training set $D_t$ into the encoding module: a.convolution block 1 (the first convolution block of the a encoding module) outputs the shallow feature $F_1^a$; $F_1^a$ is input into the max-pooling layer connected to a.convolution block 1, which outputs the feature $\hat F_1^a$; $\hat F_1^a$ is input into a.convolution block 2 (the second convolution block of the a encoding module), which outputs the shallow feature $F_2^a$; the above operations are repeated until a.convolution block 5 (the fifth convolution block of the a encoding module) outputs the shallow feature $F_5^a$, yielding the shallow features $F_s^a$, $s=1,2,3,4,5$, output by each convolution block of the encoding module, i.e. the shallow feature set $F^a=\{F_1^a,F_2^a,F_3^a,F_4^a,F_5^a\}$;
S3, inputting the shallow features $F_1^a$ to $F_4^a$ output by the first four convolution blocks of the encoding module into the corresponding first four convolution blocks of the spatial attention module: the shallow feature $F_1^a$ output by a.convolution block 1 is input into b.convolution block 1 (the first convolution block of the b spatial attention module), which outputs the feature $F_1^b$; $F_1^b$ is input into the b.nonlinear activation layers connected to b.convolution block 1, which output the feature $\hat F_1^b$; the shallow feature $F_2^a$ output by a.convolution block 2 is input into b.convolution block 2 (the second convolution block of the b spatial attention module), which outputs the feature $F_2^b$; $F_2^b$ is input into the b.nonlinear activation layers connected to b.convolution block 2, which output the feature $\hat F_2^b$; the above operations are repeated until the feature $\hat F_4^b$ is output, yielding the weighted features $\hat F^b=\{\hat F_1^b,\hat F_2^b,\hat F_3^b,\hat F_4^b\}$;
S4, inputting the shallow feature $F_5^a$ output by the fifth convolution block of the encoding module directly into the fifth convolution block of the gray correction module, which outputs the feature $F_5^c$; $F_5^c$ is input into the linear interpolation upsampling layer connected to the fifth convolution block of the gray correction module, which outputs the feature $\hat F_5^c$; the shallow feature $F_4^a$ output by the fourth convolution block of the encoding module, the weighted feature $\hat F_4^b$ output by the fourth convolution block of the spatial attention module, and the feature $\hat F_5^c$ are input into the fourth convolution block of the gray correction module, which outputs the feature $F_4^c$; $F_4^c$ is input into the linear interpolation upsampling layer connected to the fourth convolution block of the gray correction module, which outputs the feature $\hat F_4^c$; proceeding as for the fourth convolution block yields the feature $F_1^c$ output by the first convolution block of the gray correction module; $F_1^c$ is input into the Sigmoid activation layer, which outputs the unbiased image $J$ and the offset image $B$ of the nuclear magnetic resonance image;
S5, inputting the shallow feature $F_5^a$ output by the fifth convolution block of the encoding module directly into the fifth convolution block of the segmentation module, which outputs the feature $F_5^d$; $F_5^d$ is input into the deconvolution upsampling layer connected to the fifth convolution block of the segmentation module, which outputs the feature $\hat F_5^d$; the shallow feature $F_4^a$ output by the fourth convolution block of the encoding module and the feature $\hat F_5^d$ are input into the fourth convolution block of the segmentation module, which outputs the feature $F_4^d$; after the deconvolution upsampling connected to the fourth convolution block of the segmentation module, the feature $\hat F_4^d$ is obtained; proceeding as for the fourth convolution block yields the feature $F_1^d$ output by the first convolution block of the segmentation module; $F_1^d$ is input into the Softmax activation layer, which outputs the distribution map $p$ of the target region and the background region;
s6, respectively calculating segmentation loss, offset correction loss, segmentation and offset correction joint optimization loss and joint loss according to the obtained offset image B, the unbiased image J, the distribution map p of the target area and the background area, the input nuclear magnetic resonance image x and the corresponding label y;
s7, carrying out iterative training on all training data according to S2-S6, calculating the gradient of each module after the corresponding results are output by the modules in each iteration, returning the gradient of each module to the module connected with the module, correcting the learning direction of the next iteration, and stopping training until the iteration number reaches the set maximum iteration number to obtain a trained segmentation system;
S8, inputting the nuclear magnetic resonance images of the validation set $D_v$ into the trained segmentation system to obtain the corresponding segmentation results.
9. The image segmentation method based on deep learning and gray information as set forth in claim 8, characterized in that: the segmentation loss $L_{seg}(p,y)$ in S6 is given by a formula rendered only as an image in the source, wherein $\lambda$ represents the weight used to balance the two losses;
the offset correction loss $L_{lsf}(x,B,J,p)$ is likewise rendered only as an image in the source, wherein $p_i$ represents the distribution map of the target region and the background region of the $i$-th nuclear magnetic resonance image; $\nabla$ denotes the gradient of the corresponding image; $c_i$ represents the gray mean of the $i$-th nuclear magnetic resonance image; $TV(B)$ represents the total variation function, which measures the smoothness of the offset image and is obtained by summing the partial derivatives of each pixel $v$ of the offset image $B$ in the X, Y, and Z directions; $V$ represents the total number of pixels of the current nuclear magnetic resonance image $x$.
10. The image segmentation method based on deep learning and gray information as set forth in claim 9, characterized in that: the segmentation and offset correction joint optimization loss in S6:

$$L_{wce}(J,p,y) = -e^{(1-g(J))}\, y \log p$$

wherein $g(J)$ represents a boundary indication function (its definition is rendered only as an image in the source);

the joint loss:

$$L = L_{seg} + \gamma_1 L_{lsf} + \gamma_2 L_{wce}$$

wherein $\gamma_1$ and $\gamma_2$ represent the weights.
CN202310117871.0A 2023-02-15 2023-02-15 Image segmentation system based on deep learning and gray information Active CN116152285B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310117871.0A CN116152285B (en) 2023-02-15 2023-02-15 Image segmentation system based on deep learning and gray information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310117871.0A CN116152285B (en) 2023-02-15 2023-02-15 Image segmentation system based on deep learning and gray information

Publications (2)

Publication Number Publication Date
CN116152285A (en) 2023-05-23
CN116152285B (en) 2023-08-18

Family

ID=86338665

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310117871.0A Active CN116152285B (en) 2023-02-15 2023-02-15 Image segmentation system based on deep learning and gray information

Country Status (1)

Country Link
CN (1) CN116152285B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112233135A (en) * 2020-11-11 2021-01-15 清华大学深圳国际研究生院 Retinal vessel segmentation method in fundus image and computer-readable storage medium
CN114066866A (en) * 2021-11-23 2022-02-18 湖南科技大学 Medical image automatic segmentation method based on deep learning
CN114359310A (en) * 2022-01-13 2022-04-15 浙江大学 3D ventricle nuclear magnetic resonance video segmentation optimization system based on deep learning


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
TIANXIANG OUYANG, ET AL.: "Rethinking U-Net from an Attention Perspective with Transformers for Osteosarcoma MRI Image Segmentation", Computational Intelligence and Neuroscience *
YASHU LIU, ET AL.: "Uncertainty-guided symmetric multilevel supervision network for 3D left atrium segmentation in late gadolinium-enhanced MRI", Medical Physics *
LUO GONGNING: "Ventricular Magnetic Resonance Image Segmentation and Quantitative Analysis Combined with Domain Knowledge", China Doctoral Dissertations Full-text Database, Medicine & Health Sciences *

Also Published As

Publication number Publication date
CN116152285B (en) 2023-08-18

Similar Documents

Publication Publication Date Title
CN110739070A (en) brain disease diagnosis method based on 3D convolutional neural network
CN111667445B (en) Image compressed sensing reconstruction method based on Attention multi-feature fusion
CN111932550A (en) 3D ventricle nuclear magnetic resonance video segmentation system based on deep learning
CN112132878B (en) End-to-end brain nuclear magnetic resonance image registration method based on convolutional neural network
CN110648331B (en) Detection method for medical image segmentation, medical image segmentation method and device
CN111487573B (en) Enhanced residual error cascade network model for magnetic resonance undersampling imaging
CN112819831B (en) Segmentation model generation method and device based on convolution Lstm and multi-model fusion
Hu et al. Hyperspectral image super resolution based on multiscale feature fusion and aggregation network with 3-D convolution
CN110853048A (en) MRI image segmentation method, device and storage medium based on rough training and fine training
CN113744136A (en) Image super-resolution reconstruction method and system based on channel constraint multi-feature fusion
CN114565594A (en) Image anomaly detection method based on soft mask contrast loss
CN115457020A (en) 2D medical image registration method fusing residual image information
CN117078692B (en) Medical ultrasonic image segmentation method and system based on self-adaptive feature fusion
CN113436237B (en) High-efficient measurement system of complicated curved surface based on gaussian process migration learning
CN114677263A (en) Cross-mode conversion method and device for CT image and MRI image
CN116843679B (en) PET image partial volume correction method based on depth image prior frame
CN116152285B (en) Image segmentation system based on deep learning and gray information
CN117173412A (en) Medical image segmentation method based on CNN and Transformer fusion network
CN115641263A (en) Single-power equipment infrared image super-resolution reconstruction method based on deep learning
CN113298827B (en) Image segmentation method based on DP-Net network
CN114529519A (en) Image compressed sensing reconstruction method and system based on multi-scale depth cavity residual error network
CN114022362A (en) Image super-resolution method based on pyramid attention mechanism and symmetric network
CN115861762B (en) Plug-and-play infinite deformation fusion feature extraction method and application thereof
CN113269815A (en) Deep learning-based medical image registration method and terminal
CN114693753B (en) Three-dimensional ultrasonic elastic registration method and device based on texture retention constraint

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant