CN112472136A - Cooperative analysis method based on twin neural network - Google Patents

Cooperative analysis method based on twin neural network

Info

Publication number
CN112472136A
CN112472136A (application CN202011448450.9A; granted as CN112472136B)
Authority
CN
China
Prior art keywords
twin
network module
layer
metric learning
analysis method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011448450.9A
Other languages
Chinese (zh)
Other versions
CN112472136B (en)
Inventor
陈芳 (Chen Fang)
叶浩然 (Ye Haoran)
谢彦廷 (Xie Yanting)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics filed Critical Nanjing University of Aeronautics and Astronautics
Priority to CN202011448450.9A
Publication of CN112472136A
Application granted
Publication of CN112472136B
Legal status: Active

Classifications

    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00 - Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/08 - Detecting organic movements or changes, e.g. tumours, cysts, swellings
    • A61B 8/0833 - Detecting organic movements or changes, e.g. tumours, cysts, swellings, involving detecting or locating foreign bodies or organic structures
    • A61B 8/085 - Detecting organic movements or changes involving detecting or locating foreign bodies or organic structures, for locating body or organic structures, e.g. tumours, calculi, blood vessels, nodules
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00 - Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/52 - Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/5215 - Devices involving processing of medical diagnostic data
    • A61B 8/5223 - Devices involving processing of medical diagnostic data for extracting a diagnostic or physiological parameter from medical diagnostic data
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks

Abstract

The invention discloses a cooperative analysis method based on a twin (Siamese) neural network, comprising a twin encoding-decoding network module, a twin metric learning network module and a decision network module. A group of picture pairs is input to the twin encoding-decoding network module, which extracts the features of the picture pair and passes them to the twin metric learning network module. The twin metric learning network module vectorizes the features of the picture pair, calculates the distance between the two vectors, and inputs the result to the decision network module, which judges whether the picture pair is of the same class and transmits the result back to the twin encoding-decoding network module. Tests prove that the method analyzes ultrasonic sequence images more effectively and has broad application prospects.

Description

Cooperative analysis method based on twin neural network
Technical Field
The invention belongs to the technical field of ultrasonic sequence image analysis, and particularly relates to a cooperative analysis method based on a twin (Siamese) neural network.
Background
Ultrasonic sequence images play an important role in clinical medical diagnosis; they have greatly changed the mode of clinical diagnosis and promoted the development of clinical medicine. As shown in fig. 1, ultrasound sequence images are acquired by scanning the same target from multiple free directions, or as sequence images obtained in a continuous scanning mode. Sequence images are not only suitable for examining congenital heart disease, thrombus, internal tumors and other diseases, but also make the diagnosis of visceral calculi simple, convenient and accurate. They likewise reveal space-occupying lesions or trauma of the pancreas, kidney, spleen, bladder, adrenal gland and other organs. Ultrasonic sequence image analysis technology is therefore especially important for the future development of clinical medicine.
Since the data sets of ultrasound images are acquired continuously, or from different angles of the same object, correlation exists between the images. However, most existing image analysis methods rely on the information of a single image and do not consider the correlation between pictures. A cooperative analysis method is therefore provided: collaborative analysis is applied to ultrasound images, and the correlated information between ultrasound sequence images is fully exploited to analyze the images more effectively.
Disclosure of Invention
The invention provides a cooperative analysis method based on a twin neural network, which aims to solve the problems in the prior art.
In order to achieve the purpose, the invention adopts the following technical scheme:
a cooperative analysis method based on a twin neural network comprises a twin coding and decoding network module, a twin metric learning network module and a decision network module, wherein a group of picture pairs (a pair of ultrasonic sequence images) are input to the twin coding and decoding network module, the twin coding and decoding network module extracts the characteristics of the picture pairs and inputs the characteristics of the picture pairs into the twin metric learning network module, after the twin metric learning network module vectorizes the characteristics of the picture pairs, the distance between two vectors is calculated, the result is input into the decision network module, and the decision network module judges whether the picture pairs are in the same class or not and transmits the result to the twin coding and decoding network module.
Further, the twin encoding-decoding network module comprises a five-layer encoder and a five-layer decoder. The encoder is a feature extraction convolutional neural network based on VGG16; each layer comprises 2 to 3 convolutional layers and a max-pooling layer with a 2x2 kernel, and the encoder converts an input picture of 224x224x3 (length 224, width 224, depth 3) into a semantic feature map of 7x7x512 (length 7, width 7, depth 512), which serves as the input of the decoder. The decoder comprises five layers, each containing 2 to 3 transposed convolutional layers and 1 nearest-neighbor interpolation layer. Nearest-neighbor interpolation enlarges the feature map but also increases the error of picture feature analysis, so transposed convolutional layers are added after each nearest-neighbor interpolation to reduce this error. A ReLU operation is performed once at the end of the transposed convolutional layers in each layer except the 5th; the 5th layer converts the feature map into a co-segmentation prediction result M through a Sigmoid function.
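For illustration, the following is a minimal PyTorch sketch of one branch of the twin encoding-decoding network; the two pictures of a pair pass through the same branch with shared weights. The text fixes only the layer counts, the 2x2 pooling, the input/output shapes, and the nearest-neighbor interpolation followed by transposed convolutions; the 3x3 kernel sizes and VGG16 channel widths are assumptions.

```python
import torch
import torch.nn as nn

def enc_stage(c_in, c_out, n_convs):
    # n_convs 3x3 convolutions followed by 2x2 max pooling, as in VGG16.
    layers = []
    for i in range(n_convs):
        layers += [nn.Conv2d(c_in if i == 0 else c_out, c_out, 3, padding=1),
                   nn.ReLU(inplace=True)]
    layers.append(nn.MaxPool2d(2))
    return nn.Sequential(*layers)

def dec_stage(c_in, c_out, n_convs, last=False):
    # Nearest-neighbor upsampling, then transposed convolutions to reduce the
    # interpolation error, per the text; one ReLU per stage except the last.
    layers = [nn.Upsample(scale_factor=2, mode='nearest')]
    for i in range(n_convs):
        layers.append(nn.ConvTranspose2d(c_in if i == 0 else c_out, c_out, 3, padding=1))
    if not last:
        layers.append(nn.ReLU(inplace=True))
    return nn.Sequential(*layers)

class TwinBranch(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: 224x224x3 -> 7x7x512 (VGG16 stage layout: 2,2,3,3,3 convs).
        self.encoder = nn.Sequential(
            enc_stage(3, 64, 2), enc_stage(64, 128, 2),
            enc_stage(128, 256, 3), enc_stage(256, 512, 3), enc_stage(512, 512, 3))
        # Decoder: 7x7x512 -> 224x224x1 co-segmentation prediction M.
        self.decoder = nn.Sequential(
            dec_stage(512, 512, 3), dec_stage(512, 256, 3),
            dec_stage(256, 128, 3), dec_stage(128, 64, 2),
            dec_stage(64, 1, 2, last=True))

    def forward(self, x):
        f = self.encoder(x)                 # semantic feature map, 7x7x512
        m = torch.sigmoid(self.decoder(f))  # prediction M in [0, 1] (5th-layer Sigmoid)
        return f, m
```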
Furthermore, the twin metric learning network module comprises two fully connected layers, of 128 and 64 dimensions respectively, with a nonlinear excitation function ReLU after the first layer. The twin metric learning network module first applies global average pooling to a pair of feature maps output by the deconvolution layer cT9 of the twin encoding-decoding network module to obtain a pair of vectors f_u1 and f_u2, inputs f_u1 and f_u2 into the two fully connected layers, and outputs a pair of vectors f_s1, f_s2 to the decision network module.
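A corresponding sketch of the twin metric learning head follows. The 128-d and 64-d fully connected layers and the ReLU after the first follow the text; the input channel width (the text does not state the width of the cT9 feature maps) is an assumption.

```python
import torch.nn as nn

class MetricHead(nn.Module):
    # Twin metric learning head: global average pooling, then FC-128 + ReLU
    # and FC-64, with the same weights applied to both inputs.
    def __init__(self, in_channels=64):  # in_channels is an assumption
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(in_channels, 128), nn.ReLU(inplace=True),  # first FC layer, 128-d
            nn.Linear(128, 64))                                  # second FC layer, 64-d

    def forward(self, feat1, feat2):
        fu1 = feat1.mean(dim=(2, 3))       # global average pooling -> f_u1
        fu2 = feat2.mean(dim=(2, 3))       # global average pooling -> f_u2
        return self.fc(fu1), self.fc(fu2)  # embeddings f_s1, f_s2
```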
Further, when the decision network module judges that the picture pair is of the same class, the judgment result is 1; when it judges that the picture pair is not of the same class, the judgment result is 0.
Furthermore, the decision network module obtains a 128-dimensional vector as input by splicing the two vectors output by the twin metric learning network module. This vector passes through two fully connected layers in turn: the first fully connected layer is 32-dimensional and is followed by the nonlinear activation function ReLU; the second fully connected layer is 1-dimensional and is followed by a Sigmoid function that converts the prediction into a probability between 0 and 1, from which it is inferred whether the group of pictures is of the same class. A probability value close to 1 indicates the same class; otherwise, not.
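The decision network can be sketched the same way; only the stated dimensions (128 spliced input, FC-32 + ReLU, FC-1, Sigmoid) come from the text, and all names are illustrative.

```python
import torch
import torch.nn as nn

class DecisionNet(nn.Module):
    # Decision network: splice the two 64-d embeddings into a 128-d vector,
    # then FC-32 + ReLU and FC-1 with a Sigmoid same-class probability.
    def __init__(self):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(128, 32), nn.ReLU(inplace=True),  # first FC layer, 32-d
            nn.Linear(32, 1))                           # second FC layer, 1-d

    def forward(self, fs1, fs2):
        x = torch.cat([fs1, fs2], dim=1)   # splice the pair into a 128-d vector
        return torch.sigmoid(self.fc(x))   # value close to 1 => same class
```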
Further, in performing the segmentation task, the loss function used is:
L_final = w_1*L_1 + w_2*L_2 + w_3*L_3
wherein: l isfinalIs the total loss of the model, w1、w2、w3Is a weight, L1The binary cross entropy is a loss function for training the twin encoding and decoding network; l is2Is triplet loss, which is a loss function for training twin metric learning networks;
L_2 = [equation rendered as image BDA0002825770320000021 in the original; exact form not recoverable]
wherein: a is the margin (error value); f_s1 and f_s2 (rendered as image BDA0002825770320000022) are the two input vectors; y indicates whether the labels of the two vectors are the same: y = 1 when they are the same, y = -1 when they are different;
the loss function L_3 of the decision network is the BCE loss, whose standard form is
L_3 = -(1/N) * sum_i [ y_r*log(p_i) + (1 - y_r)*log(1 - p_i) ]
wherein: i indexes the samples, N is the number of samples, p_i is the predicted same-class probability for sample i, and y_r equal to 0 or 1 indicates whether the input sample pair is of the same class;
when the detection task is executed, L_3 is changed to the smooth L1 loss, whose standard form is
L_3 = sum_i smooth_L1(x_i), with
smooth_L1(x) = 0.5*x^2 if |x| < 1, and |x| - 0.5 otherwise,
wherein: x is the element-wise difference between the prediction box and the ground truth.
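A hedged sketch of the combined loss follows. The BCE and smooth L1 terms use their standard forms; since the exact formula of the metric loss L_2 survives only as an image, the contrastive pairwise form below, with margin a and labels y in {+1, -1}, is an assumption consistent with the surrounding description.

```python
import torch
import torch.nn.functional as F

def metric_loss(fs1, fs2, y, a=1.0):
    # Assumed contrastive form: pull same-label pairs (y = +1) together and
    # push different-label pairs (y = -1) at least margin a apart.
    d = F.pairwise_distance(fs1, fs2)
    same = (y > 0).float()
    return (same * d.pow(2) + (1.0 - same) * F.relu(a - d).pow(2)).mean()

def total_loss(m_pred, m_gt, fs1, fs2, y_metric, p_same, y_same,
               w1=1.0, w2=1.0, w3=1.0):
    l1 = F.binary_cross_entropy(m_pred, m_gt)    # L_1: segmentation BCE
    l2 = metric_loss(fs1, fs2, y_metric)         # L_2: metric loss (assumed form)
    l3 = F.binary_cross_entropy(p_same, y_same)  # L_3: decision BCE
    return w1 * l1 + w2 * l2 + w3 * l3

# For the detection task, L_3 is replaced by the smooth L1 loss over the boxes:
#   l3 = F.smooth_l1_loss(pred_box, gt_box)
```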
Compared with the prior art, the invention has the following beneficial effects:
the invention can more effectively analyze the ultrasonic sequence image and has wide future application prospect.
Drawings
FIG. 1 is a diagram of ultrasound sequence image acquisition;
FIG. 2 is a framework flow diagram of the present invention;
FIG. 3 is a block diagram of twin encoding and decoding network modules in the present invention;
FIG. 4 is a block diagram of a twin metric learning network module of the present invention;
FIG. 5 is a block diagram of a decision network module in the present invention;
FIG. 6 is the network flow diagram of the present invention;
FIG. 7 is an ultrasound image of a bone;
FIG. 8 is a segmentation result;
FIG. 9 shows the results of the detection.
Detailed Description
The present invention will be further described with reference to the following examples.
Example 1
In order to realize the cooperative analysis of ultrasonic sequence images, the invention provides a cooperative analysis method based on a twin neural network, comprising a twin encoding-decoding network module, a twin metric learning network module and a decision network module. As shown in fig. 2, a group of picture pairs (a pair of ultrasonic sequence images) is input to the twin encoding-decoding network module, which extracts the features of the picture pair and inputs them into the twin metric learning network module. The twin metric learning network module vectorizes the features of the picture pair, calculates the distance between the two vectors, and inputs the result into the decision network module, which judges whether the picture pair is of the same class and transmits the result back to the twin encoding-decoding network module.
The twin encoding-decoding network module is shown in fig. 3, in which: I_1 is the input picture, f_1 is the picture feature extracted by the encoder, and M_1 is the prediction result. The twin encoding-decoding network module comprises a five-layer encoder and a five-layer decoder. The encoder is a feature extraction convolutional neural network based on VGG16; each layer comprises 2 to 3 convolutional layers and a max-pooling layer with a 2x2 kernel, and the encoder converts an input picture of 224x224x3 (length 224, width 224, depth 3) into a semantic feature map of 7x7x512 (length 7, width 7, depth 512), which serves as the input of the decoder. The decoder comprises five layers, each containing 2 to 3 transposed convolutional layers and 1 nearest-neighbor interpolation layer. Nearest-neighbor interpolation enlarges the feature map but also increases the error of picture feature analysis, so transposed convolutional layers are added after each nearest-neighbor interpolation to reduce this error. A ReLU operation is performed once at the end of the transposed convolutional layers in each layer except the 5th; the 5th layer converts the feature map into a co-segmentation prediction result M through a Sigmoid function.
The twin metric learning network module is shown in fig. 4 and comprises two fully connected layers, of 128 and 64 dimensions respectively, with a nonlinear excitation function ReLU after the first layer. The module first applies global average pooling to a pair of feature maps output by the deconvolution layer cT9 of the twin encoding-decoding network module to obtain a pair of vectors f_u1 and f_u2, inputs f_u1 and f_u2 into the two fully connected layers, and outputs a pair of vectors f_s1, f_s2 to the decision network module.
The decision network module is shown in fig. 5 and is configured to determine whether a group of input pictures is of the same class: when it judges the picture pair to be of the same class, the judgment result is 1; otherwise, the judgment result is 0. Specifically, the decision network module obtains a 128-dimensional vector as input by splicing the two vectors output by the twin metric learning network module. This vector passes through two fully connected layers in turn: the first fully connected layer is 32-dimensional and is followed by the nonlinear activation function ReLU; the second fully connected layer is 1-dimensional and is followed by a Sigmoid function that converts the prediction into a probability between 0 and 1, from which it is inferred whether the group of pictures is of the same class. A probability value close to 1 indicates the same class; otherwise, not.
The loss function used by the network differs according to the analysis task. When the segmentation task is executed, the loss function used is:
L_final = w_1*L_1 + w_2*L_2 + w_3*L_3
wherein: l isfinalIs the total loss of the model, w1、w2、w3Is a weight, L1The binary cross entropy is a loss function for training the twin encoding and decoding network; l is2Is triplet loss, which is a loss function for training twin metric learning networks;
L_2 = [equation rendered as image BDA0002825770320000041 in the original; exact form not recoverable]
wherein: a is the margin (error value); f_s1 and f_s2 (rendered as image BDA0002825770320000042) are the two input vectors; y indicates whether the labels of the two vectors are the same: y = 1 when they are the same, y = -1 when they are different;
the loss function L_3 of the decision network is the BCE loss, whose standard form is
L_3 = -(1/N) * sum_i [ y_r*log(p_i) + (1 - y_r)*log(1 - p_i) ]
wherein: i indexes the samples, N is the number of samples, p_i is the predicted same-class probability for sample i, and y_r equal to 0 or 1 indicates whether the input sample pair is of the same class;
when the detection task is executed, L_3 is changed to the smooth L1 loss, whose standard form is
L_3 = sum_i smooth_L1(x_i), with
smooth_L1(x) = 0.5*x^2 if |x| < 1, and |x| - 0.5 otherwise,
wherein: x is the element-wise difference between the prediction box and the ground truth.
The cooperative analysis method of the whole network is shown in fig. 6. The input to the network is a group of image pairs (a pair of ultrasound sequence images I_1, I_2), which are passed through the twin encoding-decoding networks sharing weights. The feature maps obtained during decoding are input into the twin metric learning network, vectorized, and transmitted to the decision network, which judges whether the pair are samples of the same class. For samples of the same class, the complete network is trained and outputs the prediction results M_1 and M_2.
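Putting the pieces together, an illustrative end-to-end pass over one image pair might look as follows, assuming the TwinBranch, MetricHead and DecisionNet sketches given earlier. For simplicity the sketch pools the 7x7x512 encoder feature map, whereas the text pools the output of the deconvolution layer cT9.

```python
import torch

# Assumes the TwinBranch, MetricHead and DecisionNet sketches defined above.
branch = TwinBranch()
head = MetricHead(in_channels=512)  # pools the encoder feature (simplification)
decide = DecisionNet()

i1 = torch.randn(1, 3, 224, 224)    # ultrasound sequence image I_1
i2 = torch.randn(1, 3, 224, 224)    # ultrasound sequence image I_2

f1, m1 = branch(i1)                 # shared-weight twin branch: features + prediction M_1
f2, m2 = branch(i2)                 # same branch for I_2: features + prediction M_2
fs1, fs2 = head(f1, f2)             # vectorized pair of embeddings f_s1, f_s2
p_same = decide(fs1, fs2)           # probability near 1 => same class
```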
Using this network, the image shown in fig. 7 is segmented, and the expected segmentation result is shown in fig. 8; when the network is used for the detection task, the predicted analysis result is shown in fig. 9.
The above description covers only the preferred embodiments of the present invention. It should be noted that those skilled in the art can make various modifications and refinements without departing from the principles of the invention, and these should also be regarded as falling within the scope of protection of the invention.

Claims (6)

1. A cooperative analysis method based on a twin neural network, characterized in that: the cooperative analysis method comprises a twin encoding-decoding network module, a twin metric learning network module and a decision network module; a group of picture pairs is input to the twin encoding-decoding network module, which extracts the features of the picture pair and inputs them into the twin metric learning network module; the twin metric learning network module vectorizes the features of the picture pair, calculates the distance between the two vectors, and inputs the result into the decision network module; and the decision network module judges whether the picture pair is of the same class and transmits the result back to the twin encoding-decoding network module.
2. The twin neural network-based cooperative analysis method according to claim 1, characterized in that: the twin encoding-decoding network module comprises a five-layer encoder and a five-layer decoder; the encoder is a feature extraction convolutional neural network based on VGG16, each layer comprising 2 to 3 convolutional layers and a max-pooling layer with a 2x2 kernel, and the encoder converts an input picture of 224x224x3 into a semantic feature map of 7x7x512, which serves as the input of the decoder; the decoder comprises five layers, each containing 2 to 3 transposed convolutional layers and 1 nearest-neighbor interpolation layer, a transposed convolutional layer being added after each nearest-neighbor interpolation; a ReLU operation is performed once at the end of the transposed convolutional layers in each layer except the 5th, and the 5th layer converts the feature map into a co-segmentation prediction result M through a Sigmoid function.
3. The twin neural network-based cooperative analysis method according to claim 1, characterized in that: the twin metric learning network module comprises two fully connected layers, of 128 and 64 dimensions respectively, with a nonlinear excitation function ReLU after the first layer; the twin metric learning network module first applies global average pooling to a pair of feature maps output by the deconvolution layer cT9 of the twin encoding-decoding network module to obtain a pair of vectors f_u1 and f_u2, inputs f_u1 and f_u2 into the two fully connected layers, and outputs a pair of vectors f_s1, f_s2 to the decision network module.
4. The twin neural network-based cooperative analysis method according to claim 1, characterized in that: when the decision network module judges that the picture pair is of the same class, the judgment result is 1; when the decision network module judges that the picture pair is not of the same class, the judgment result is 0.
5. The twin neural network-based cooperative analysis method according to claim 1, characterized in that: the decision network module obtains a 128-dimensional vector as input by splicing the two vectors output by the twin metric learning network module; the 128-dimensional vector passes through two fully connected layers in turn, the first fully connected layer being 32-dimensional and followed by a nonlinear activation function ReLU, and the second fully connected layer being 1-dimensional and followed by a Sigmoid function that converts the prediction into a probability between 0 and 1, from which it is inferred whether the group of pictures is of the same class; a probability value close to 1 indicates the same class, otherwise not.
6. The twin neural network-based cooperative analysis method according to claim 1, characterized in that: in performing the segmentation task, the loss function used is:
L_final = w_1*L_1 + w_2*L_2 + w_3*L_3
wherein: l isfinalIs the total loss of the model, w1、w2、w3Is a weight, L1The binary cross entropy is a loss function for training the twin encoding and decoding network; l is2Is triplet loss, which is a loss function for training twin metric learning networks;
L_2 = [equation rendered as image FDA0002825770310000021 in the original; exact form not recoverable]
wherein: a is the margin (error value); f_s1 and f_s2 (rendered as image FDA0002825770310000025) are the two input vectors; y indicates whether the labels of the two vectors are the same: y = 1 when they are the same, y = -1 when they are different;
the loss function L_3 of the decision network is the BCE loss, whose standard form is
L_3 = -(1/N) * sum_i [ y_r*log(p_i) + (1 - y_r)*log(1 - p_i) ]
wherein: i indexes the samples, N is the number of samples, p_i is the predicted same-class probability for sample i, and y_r equal to 0 or 1 indicates whether the input sample pair is of the same class;
when the detection task is executed, L_3 is changed to the smooth L1 loss, whose standard form is
L_3 = sum_i smooth_L1(x_i), with
smooth_L1(x) = 0.5*x^2 if |x| < 1, and |x| - 0.5 otherwise,
wherein: x is the element-wise difference between the prediction box and the ground truth.
CN202011448450.9A 2020-12-09 2020-12-09 Cooperative analysis method based on twin neural network Active CN112472136B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011448450.9A CN112472136B (en) 2020-12-09 2020-12-09 Cooperative analysis method based on twin neural network

Publications (2)

Publication Number Publication Date
CN112472136A true CN112472136A (en) 2021-03-12
CN112472136B CN112472136B (en) 2022-06-17

Family

ID=74941561

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011448450.9A Active CN112472136B (en) 2020-12-09 2020-12-09 Cooperative analysis method based on twin neural network

Country Status (1)

Country Link
CN (1) CN112472136B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200250436A1 (en) * 2018-04-10 2020-08-06 Adobe Inc. Video object segmentation by reference-guided mask propagation
CN110136101A (en) * 2019-04-17 2019-08-16 杭州数据点金科技有限公司 A kind of tire X-ray defect detection method compared based on twin distance
CN111127503A (en) * 2019-12-31 2020-05-08 上海眼控科技股份有限公司 Method, device and storage medium for detecting the pattern of a vehicle tyre
CN111242173A (en) * 2019-12-31 2020-06-05 四川大学 RGBD salient object detection method based on twin network
CN111368729A (en) * 2020-03-03 2020-07-03 河海大学常州校区 Vehicle identity discrimination method based on twin neural network
CN111797716A (en) * 2020-06-16 2020-10-20 电子科技大学 Single target tracking method based on Siamese network
CN111833334A (en) * 2020-07-16 2020-10-27 上海志唐健康科技有限公司 Fundus image feature processing and analyzing method based on twin network architecture

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HSIN-TZU WANG et al.: "Deep-Learning-Based Block Similarity Evaluation for Image Forensics", 2020 IEEE International Conference on Consumer Electronics *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113792653A (en) * 2021-09-13 2021-12-14 山东交通学院 Method, system, equipment and storage medium for cloud detection of remote sensing image
CN113792653B (en) * 2021-09-13 2023-10-20 山东交通学院 Method, system, equipment and storage medium for cloud detection of remote sensing image

Also Published As

Publication number Publication date
CN112472136B (en) 2022-06-17

Similar Documents

Publication Publication Date Title
CN110889852B Liver segmentation method based on residual-attention deep neural network
CN110889853A Tumor segmentation method based on residual-attention deep neural network
CN111739051B Multi-sequence MRI image segmentation method based on residual network
CN111597946A (en) Processing method of image generator, image generation method and device
CN112735570A (en) Image-driven brain atlas construction method, device, equipment and storage medium
CN115908253A (en) Knowledge distillation-based cross-domain medical image segmentation method and device
CN111260639A (en) Multi-view information-collaborative breast benign and malignant tumor classification method
CN114299006A (en) Self-adaptive multi-channel graph convolution network for joint graph comparison learning
CN112472136B (en) Cooperative analysis method based on twin neural network
CN111275103A (en) Multi-view information cooperation type kidney benign and malignant tumor classification method
CN112529886A (en) Attention DenseUNet-based MRI glioma segmentation method
CN115327544B Few-sample space target ISAR defocus compensation method based on self-supervised learning
CN115587967B (en) Fundus image optic disk detection method based on HA-UNet network
CN113706546B (en) Medical image segmentation method and device based on lightweight twin network
CN116958154A (en) Image segmentation method and device, storage medium and electronic equipment
CN112686912B (en) Acute stroke lesion segmentation method based on gradual learning and mixed samples
CN109064403A Fingerprint image super-resolution method based on classified coupled dictionary sparse representation
Bongini et al. GADA: Generative adversarial data augmentation for image quality assessment
CN114283301A (en) Self-adaptive medical image classification method and system based on Transformer
CN109919162B (en) Model for outputting MR image feature point description vector symbol and establishing method thereof
CN112133366A Face type prediction method based on gene data and a generative adversarial convolutional neural network
CN112085718B (en) NAFLD ultrasonic video diagnosis system based on twin attention network
Li et al. A deep learning feature fusion algorithm based on Lensless cell detection system
CN117351003B (en) Multi-model integrated multi-phase MRI tumor classification method based on video actions
CN117911427A Method, system, computer equipment and storage medium for Transformer-based medical image segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant