CN117952964B - Fundus medical image analysis method based on computer vision technology - Google Patents


Info

Publication number
CN117952964B
Authority
CN
China
Prior art keywords
fundus
feature
fusion
feature map
color
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410343063.0A
Other languages
Chinese (zh)
Other versions
CN117952964A (en)
Inventor
Guo Jinhong (郭劲宏)
Zou Yuanyuan (邹媛媛)
Wang Yong (王勇)
Guo Jiuchuan (郭九川)
Li Xiaosong (李小松)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shaoxing Keqiao Medical Laboratory Technology Research Center Of Chongqing Medical University
Original Assignee
Shaoxing Keqiao Medical Laboratory Technology Research Center Of Chongqing Medical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shaoxing Keqiao Medical Laboratory Technology Research Center Of Chongqing Medical University
Priority to CN202410343063.0A
Publication of CN117952964A
Application granted
Publication of CN117952964B
Legal status: Active
Anticipated expiration

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/0016 Operational features thereof
    • A61B3/0025 Operational features thereof characterised by electronic signal processing, e.g. eye models
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/12 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for looking at the eye fundus, e.g. ophthalmoscopes
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/14 Arrangements specially adapted for eye photography
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30041 Eye; Retina; Ophthalmic


Abstract

The application discloses a fundus medical image analysis method based on computer vision technology. A medical image of a patient's fundus is acquired, and image processing and analysis algorithms based on computer vision and artificial intelligence are introduced at the back end to analyze the fundus medical image, integrating features such as the shape, color, and structure of the patient's fundus so that fundus lesions such as macular degeneration, diabetic retinopathy, and glaucoma are detected and identified more accurately. In this way, fundus lesions of the patient can be automatically identified and detected on the basis of computer vision technology, providing strong support for their early diagnosis and treatment.

Description

Fundus medical image analysis method based on computer vision technology
Technical Field
The application relates to the field of computer vision, and more particularly, to a fundus medical image analysis method based on computer vision technology.
Background
Fundus medical imaging is a very useful diagnostic tool that helps doctors detect and diagnose a variety of ocular diseases such as macular degeneration, diabetic retinopathy, and glaucoma. However, conventional fundus medical image analysis generally requires doctors to rely on extensive experience and expertise to analyze the images and diagnose lesions, and different doctors may reach different conclusions; this inconsistency and subjectivity can affect accurate diagnosis and treatment. Moreover, since fundus medical images typically contain a large amount of detailed information, a doctor must spend considerable time carefully analyzing each image, which limits diagnostic efficiency and speed; such time consumption can delay diagnosis and treatment, particularly in busy clinical environments. In addition, human vision and cognition have limits, and tiny fundus lesions or features may be overlooked or misjudged, reducing diagnostic capability and accuracy.
Accordingly, a fundus medical image analysis scheme based on computer vision techniques is desired.
Disclosure of Invention
The present application has been made to solve the above technical problems. Embodiments of the application provide a fundus medical image analysis method based on computer vision technology: a medical image of the patient's fundus is collected, and image processing and analysis algorithms based on computer vision and artificial intelligence are introduced at the back end to analyze the fundus medical image, integrating features such as the shape, color, and structure of the patient's fundus so that fundus lesions such as macular degeneration, diabetic retinopathy, and glaucoma are detected and identified more accurately. In this way, fundus lesions of the patient can be automatically identified and detected on the basis of computer vision technology, providing strong support for their early diagnosis and treatment.
According to one aspect of the present application, there is provided a fundus medical image analysis method based on computer vision technology, comprising:
acquiring a fundus medical image to be analyzed;
Extracting shape semantic features of the fundus medical image to be analyzed by a shape feature extractor based on a deep neural network model to obtain a fundus shape feature map;
Performing color semantic feature extraction on the fundus medical image to be analyzed through a color feature extractor based on a deep neural network model to obtain a fundus color feature map;
Extracting structural semantic features of the fundus medical image to be analyzed by a structural feature extractor based on a deep neural network model to obtain a fundus structural feature map;
Passing the fundus color feature map and the fundus shape feature map through a multi-channel feature fusion module to obtain a fundus color-shape fusion feature map;
The fundus color-shape fusion feature map and the fundus structural feature map pass through a cross-order feature fusion device based on an attention fusion mechanism network to obtain a fundus multi-scale fusion feature map as a fundus multi-scale fusion feature;
Based on the fundus multiscale fusion feature, determining whether a fundus lesion exists.
Compared with the prior art, the fundus medical image analysis method based on computer vision technology provided by the application collects a medical image of the patient's fundus and introduces, at the back end, image processing and analysis algorithms based on computer vision and artificial intelligence to analyze the image, integrating features such as the shape, color, and structure of the fundus so that fundus lesions such as macular degeneration, diabetic retinopathy, and glaucoma are detected and identified more accurately. In this way, fundus lesions of the patient can be automatically identified and detected on the basis of computer vision technology, providing strong support for their early diagnosis and treatment.
Drawings
The above and other objects, features, and advantages of the present application will become more apparent from the detailed description of its embodiments with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of embodiments of the application; they are incorporated in and constitute a part of this specification, illustrate the application together with its embodiments, and do not constitute a limitation of the application. In the drawings, like reference numerals generally refer to like parts or steps.
Fig. 1 is a flowchart of a fundus medical image analysis method based on a computer vision technique according to an embodiment of the present application;
fig. 2 is a system architecture diagram of a fundus medical image analysis method based on a computer vision technique according to an embodiment of the present application;
Fig. 3 is a flowchart of a training phase of a fundus medical image analysis method based on a computer vision technique according to an embodiment of the present application.
Detailed Description
Hereinafter, exemplary embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only some embodiments of the present application and not all embodiments of the present application, and it should be understood that the present application is not limited by the example embodiments described herein.
As used in the specification and the claims, the terms "a," "an," and/or "the" do not necessarily denote the singular and may include the plural, unless the context clearly dictates otherwise. In general, the terms "comprises" and "comprising" merely indicate the inclusion of explicitly identified steps and elements, which do not constitute an exclusive list; a method or apparatus may also include other steps or elements.
Although the present application makes various references to certain modules in a system according to embodiments of the present application, any number of different modules may be used and run on a user terminal and/or server. The modules are merely illustrative, and different aspects of the systems and methods may use different modules.
A flowchart is used in the present application to describe the operations performed by a system according to embodiments of the present application. It should be understood that the operations are not necessarily performed precisely in the order shown; rather, the various steps may be processed in reverse order or simultaneously, as needed, and other operations may be added to or removed from these processes.
Traditional fundus medical image analysis generally requires doctors to rely on extensive experience and expertise to analyze the images and diagnose lesions, and different doctors may reach different conclusions; this inconsistency and subjectivity can affect accurate diagnosis and treatment. Moreover, since fundus medical images typically contain a large amount of detailed information, a doctor must spend considerable time carefully analyzing each image, which limits diagnostic efficiency and speed; such time consumption can delay diagnosis and treatment, particularly in busy clinical environments. In addition, human vision and cognition have limits, and tiny fundus lesions or features may be overlooked or misjudged, reducing diagnostic capability and accuracy. With the continuous progress of computer vision technology, fundus medical image analysis methods based on computer vision are increasingly used in clinical diagnosis. By performing high-precision image analysis with machine vision, doctors can more accurately identify and evaluate fundus lesions such as macular degeneration, diabetic retinopathy, and glaucoma.
In the technical scheme of the application, a fundus medical image analysis method based on a computer vision technology is provided. Fig. 1 is a flowchart of a fundus medical image analysis method based on a computer vision technique according to an embodiment of the present application. Fig. 2 is a system architecture diagram of a fundus medical image analysis method based on a computer vision technique according to an embodiment of the present application. As shown in fig. 1 and 2, a fundus medical image analysis method based on a computer vision technique according to an embodiment of the present application includes the steps of: s1, acquiring a fundus medical image to be analyzed; s2, extracting shape semantic features of the fundus medical image to be analyzed through a shape feature extractor based on a deep neural network model to obtain a fundus shape feature map; s3, extracting color semantic features of the fundus medical image to be analyzed through a color feature extractor based on a deep neural network model to obtain a fundus color feature map; s4, extracting structural semantic features of the fundus medical image to be analyzed through a structural feature extractor based on a deep neural network model to obtain a fundus structural feature map; s5, enabling the fundus color feature map and the fundus shape feature map to pass through a multi-channel feature fusion module to obtain a fundus color-shape fusion feature map; s6, enabling the fundus color-shape fusion feature map and the fundus structural feature map to pass through a cross-order feature fusion device based on an attention fusion mechanism network to obtain a fundus multi-scale fusion feature map as a fundus multi-scale fusion feature; s7, determining whether fundus lesions exist or not based on the fundus multi-scale fusion characteristics.
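For orientation, the following is a minimal end-to-end sketch of steps S1 to S7 in PyTorch-style Python. The function and module names are illustrative placeholders rather than terms from the patent, and the concrete modules are sketched step by step in the sections below.

```python
import torch

def analyze_fundus_image(image: torch.Tensor,
                         shape_net, color_net, structure_net,
                         multichannel_fusion, cross_order_fusion,
                         lesion_classifier) -> torch.Tensor:
    # S2-S4: three parallel deep-network feature extractors over the same image
    shape_map = shape_net(image)          # fundus shape feature map
    color_map = color_net(image)          # fundus color feature map
    structure_map = structure_net(image)  # fundus structural feature map
    # S5: multi-channel fusion of the color and shape feature maps
    color_shape_map = multichannel_fusion(color_map, shape_map)
    # S6: attention-based cross-order fusion with the structural feature map
    multiscale_map = cross_order_fusion(color_shape_map, structure_map)
    # S7: classify the multi-scale fusion feature (lesion present / absent)
    return lesion_classifier(multiscale_map)
```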
In particular, in S1, a fundus medical image to be analyzed is acquired. Fundus medical imaging is an imaging technique for diagnosing and monitoring ocular diseases: by photographing the back of the eye (the fundus), it provides detailed information about ocular structures and lesions.
In particular, in S2, shape semantic features of the fundus medical image to be analyzed are extracted by a shape feature extractor based on a deep neural network model to obtain a fundus shape feature map. Fundus medical images contain rich information, such as the shape of the vascular network and of the optic disc. Therefore, in order to capture the shape semantics in the fundus medical image, the technical scheme of the application performs feature mining on the fundus medical image to be analyzed with a shape feature extractor based on a first convolutional neural network model, extracting the fundus shape semantic feature information to obtain the fundus shape feature map. Extracting fundus shape features provides doctors with shape semantics of the fundus, and since some lesions change the fundus shape, this helps improve the accuracy and reliability of disease diagnosis. Specifically, passing the fundus medical image to be analyzed through the shape feature extractor based on the first convolutional neural network model to obtain the fundus shape feature map includes: each layer of the shape feature extractor, in its forward pass, performs on the input data: convolution processing to obtain a convolution feature map; pooling of the convolution feature map based on local feature matrices to obtain a pooled feature map; and nonlinear activation of the pooled feature map to obtain an activated feature map. The output of the last layer of the shape feature extractor is the fundus shape feature map, and the input of its first layer is the fundus medical image to be analyzed.
A convolutional neural network (Convolutional Neural Network, CNN) is a type of deep learning model particularly suited to processing data with a grid structure, such as images and video. A typical convolutional neural network model is organized as follows. Input layer: accepts input data, typically images, audio, or text. Convolution layer: one of the core components of a CNN; it extracts local features from the input by applying a series of filters (also called convolution kernels), where the convolution operation multiplies the filter and the input element by element and sums the results to generate a feature map. Activation function: after the convolution layer, a nonlinear activation function such as ReLU is usually applied to introduce nonlinearity. Pooling layer: reduces the spatial size of the feature map while preserving the most important features; common pooling operations are max pooling and average pooling. Fully connected layer: connects the outputs of the pooling layer to one or more fully connected layers that map the features to the final output categories or regression values, with each neuron connected to all neurons of the previous layer. Output layer: selects an activation function suited to the task, such as a Softmax function for multi-class classification or a linear activation for regression. Loss function: chosen according to the task, such as cross-entropy loss for classification and mean squared error loss for regression. Back propagation and optimization: the gradients of the model parameters with respect to the loss are computed by the back-propagation algorithm, and a gradient descent optimizer updates the parameters to minimize the loss.
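As a concrete illustration of the layer recipe just described (convolution, pooling based on local feature matrices, then nonlinear activation), the following is a minimal PyTorch sketch of one of the three feature extractors. The channel counts, kernel sizes, and depth are illustrative assumptions, not parameters given in the patent.

```python
import torch.nn as nn

class FundusFeatureExtractor(nn.Module):
    """Stacked conv -> pool -> activation blocks, as in the per-layer recipe."""
    def __init__(self, in_channels: int = 3, base: int = 32, depth: int = 3):
        super().__init__()
        layers, c_in = [], in_channels
        for i in range(depth):
            c_out = base * (2 ** i)
            layers += [
                nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),  # convolution
                nn.MaxPool2d(kernel_size=2),                       # local pooling
                nn.ReLU(inplace=True),                             # nonlinear activation
            ]
            c_in = c_out
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        # The output of the last layer is the feature map (fundus shape,
        # color, or structural map, depending on the instance).
        return self.body(x)
```

Under this sketch, the shape, color, and structure extractors would be three separately parameterized instances of the same skeleton.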
In particular, in S3, color semantic features of the fundus medical image to be analyzed are extracted by a color feature extractor based on a deep neural network model to obtain a fundus color feature map. Color information in fundus medical images is critical for diagnosing ocular diseases: different diseases exhibit different color characteristics in fundus images, such as bleeding, exudation, and pigmentation, so extracting the color features of the fundus image helps doctors better identify and distinguish different types of ocular disease. Therefore, in the technical scheme of the application, feature mining is further performed on the fundus medical image to be analyzed by a color feature extractor based on a second convolutional neural network model, extracting the color semantic feature information of the patient's fundus to obtain the fundus color feature map. Specifically, passing the fundus medical image to be analyzed through the color feature extractor based on the second convolutional neural network model to obtain the fundus color feature map includes: each layer of the color feature extractor, in its forward pass, performs on the input data: convolution processing to obtain a convolution feature map; pooling of the convolution feature map based on local feature matrices to obtain a pooled feature map; and nonlinear activation of the pooled feature map to obtain an activated feature map. The output of the last layer of the color feature extractor is the fundus color feature map, and the input of its first layer is the fundus medical image to be analyzed.
Specifically, in S4, structural semantic features of the fundus medical image to be analyzed are extracted by a structural feature extractor based on a deep neural network model to obtain a fundus structural feature map. Structural information in fundus medical images is critical for diagnosing ocular diseases: different diseases exhibit different structural features in fundus images, such as the density of the vascular network and the morphology of the optic disc. Therefore, to help doctors analyze fundus images more comprehensively and identify ocular diseases more accurately, the technical scheme of the application further processes the fundus medical image to be analyzed with a structural feature extractor based on a third convolutional neural network model to obtain the fundus structural feature map. Extracting fundus structural features helps doctors understand the condition of ocular disease more comprehensively and improves diagnostic accuracy and reliability. Specifically, passing the fundus medical image to be analyzed through the structural feature extractor based on the third convolutional neural network model to obtain the fundus structural feature map includes: each layer of the structural feature extractor, in its forward pass, performs on the input data: convolution processing to obtain a convolution feature map; pooling of the convolution feature map based on local feature matrices to obtain a pooled feature map; and nonlinear activation of the pooled feature map to obtain an activated feature map. The output of the last layer of the structural feature extractor is the fundus structural feature map, and the input of its first layer is the fundus medical image to be analyzed.
In particular, in S5, the fundus color feature map and the fundus shape feature map are passed through a multi-channel feature fusion module to obtain a fundus color-shape fusion feature map. It should be appreciated that both the color semantics and the shape semantics of a fundus medical image contain important diagnostic cues: color features generally reflect color changes in fundus lesion areas, while shape features reflect the morphology of the eye, and using either kind of information alone may not provide enough evidence to diagnose an ocular disease accurately. Therefore, to better understand the semantics of the fundus medical image and provide more comprehensive, richer feature information about fundus color and shape, the fundus color feature map and the fundus shape feature map need to be fused; the two kinds of information complement each other and together provide more complete support for an accurate diagnosis. However, feature expression differs between fundus medical image features of different levels, and fusing them with simple operations such as pixel-wise summation or channel concatenation tends to ignore the inconsistency between semantic information and detail features. Hence, for the network to focus more fully and accurately on the fused semantics shared by the color and shape features, high-level features should supply semantic information to guide the multi-channel feature fusion, thereby producing more discriminative fused features. Specifically, in the technical scheme of the application, the fundus color feature map and the fundus shape feature map are passed through a multi-channel feature fusion module to obtain the fundus color-shape fusion feature map; this processing combines the color and shape semantics of the fundus medical image so that the fused map carries richer semantic information, facilitating the identification and diagnosis of fundus lesions. Specifically, passing the fundus color feature map and the fundus shape feature map through the multi-channel feature fusion module to obtain the fundus color-shape fusion feature map includes: processing the fundus color feature map and the fundus shape feature map with the multi-channel feature fusion module according to the following multi-channel feature fusion formula to obtain the fundus color-shape fusion feature map; the multi-channel feature fusion formula is as follows:
$$F_f = \mathrm{ReLU}\left(\mathrm{BN}\left(\mathrm{Conv}\left(\left[F_1, F_2\right]\right)\right)\right)$$

$$w = \mathrm{Sigmoid}\left(\mathrm{GAP}\left(F_f\right)\right)$$

$$F_w = w \otimes F_f$$

$$F_{cs} = F_w \oplus F_f$$

wherein $F_1$ and $F_2$ are the fundus color feature map and the fundus shape feature map, respectively; $[\cdot,\cdot]$ denotes the splicing operation; $\mathrm{Conv}(\cdot)$ denotes the convolution operation; $\mathrm{BN}(\cdot)$ denotes batch normalization; $\mathrm{ReLU}$ is the activation function; $F_f$ is the fusion characterization feature map; $\mathrm{GAP}(\cdot)$ denotes global average pooling; $\mathrm{Sigmoid}$ is the activation function; $w$ is the weight vector; $F_w$ is the weighted feature map, with $\otimes$ denoting channel-wise multiplication; and $F_{cs}$ is the fundus color-shape fusion feature map, with $\oplus$ denoting element-wise addition.
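Read together with the formula above, a minimal PyTorch sketch of such a multi-channel fusion module might look as follows; the channel count, kernel size, and the residual combination $F_{cs} = F_w \oplus F_f$ are assumptions.

```python
import torch
import torch.nn as nn

class MultiChannelFusion(nn.Module):
    def __init__(self, channels: int = 128):
        super().__init__()
        self.conv = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm2d(channels)
        self.gap = nn.AdaptiveAvgPool2d(1)   # global average pooling

    def forward(self, f1: torch.Tensor, f2: torch.Tensor) -> torch.Tensor:
        # F_f = ReLU(BN(Conv([F1, F2]))): splice along channels, then convolve
        f_fuse = torch.relu(self.bn(self.conv(torch.cat([f1, f2], dim=1))))
        # w = Sigmoid(GAP(F_f)): one semantic weight per channel
        w = torch.sigmoid(self.gap(f_fuse))
        # F_w = w (x) F_f; F_cs = F_w (+) F_f (residual form, an assumption)
        return w * f_fuse + f_fuse
```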
In particular, in S6, the fundus color-shape fusion feature map and the fundus structural feature map are passed through a cross-order feature fusion device based on an attention fusion mechanism network to obtain a fundus multi-scale fusion feature map as the fundus multi-scale fusion feature. The fundus color-shape fusion feature map and the fundus structural feature map respectively represent the fused semantics of fundus color and shape and the deep semantics of fundus structure, and fundus features at different levels are all significant for identifying and detecting fundus lesions. Therefore, to combine fundus feature information from different levels, in the technical scheme of the application the two feature maps are further processed by the cross-order feature fusion device based on an attention fusion mechanism network to obtain the fundus multi-scale fusion feature map. Specifically, after global average pooling along the channel dimension, each two-dimensional feature matrix of the high-level fundus structural feature map is compressed into a single real number, which carries a global receptive field over the corresponding feature. These values are then multiplied, as attention information, with the different features of the fundus color-shape fusion feature map, guiding the positional information in the low-level fundus color-shape fusion feature map to restore fundus states and lesion types; finally, the resulting multi-scale fundus features of different orders are fused to obtain a fundus multi-scale fusion feature map with richer semantic information. Processing by the cross-order feature fusion device thus effectively fuses the low-level color-shape features with the high-level structural features, integrating fundus feature information of different orders and improving feature diversity and expressive power. Specifically, passing the fundus color-shape fusion feature map and the fundus structural feature map through the cross-order feature fusion device based on an attention fusion mechanism network to obtain the fundus multi-scale fusion feature map as the fundus multi-scale fusion feature includes the following steps: carrying out global average pooling on each feature matrix along the channel dimension of the fundus structural feature map to obtain a fundus structure deep-layer pooling feature vector; passing the fundus structure deep-layer pooling feature vector through a full-connection layer for full-connection encoding to obtain a fundus structure deep-layer semantic full-connection feature vector; carrying out channel-dimension weighted fusion of the fundus structure deep-layer semantic full-connection feature vector with the fundus color-shape fusion feature map to obtain a fusion feature map; and fusing the fusion feature map with the fundus structural feature map to obtain the fundus multi-scale fusion feature map.
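A minimal sketch of the cross-order feature fusion device described above, assuming the two inputs share channel count and spatial size; the Sigmoid gate after the fully connected layer and the additive final fusion are assumptions where the text leaves the operation unspecified.

```python
import torch
import torch.nn as nn

class CrossOrderFusion(nn.Module):
    def __init__(self, channels: int = 128):
        super().__init__()
        self.gap = nn.AdaptiveAvgPool2d(1)       # squeeze each channel map to one real number
        self.fc = nn.Linear(channels, channels)  # full-connection encoding

    def forward(self, color_shape: torch.Tensor, structure: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = structure.shape
        pooled = self.gap(structure).view(b, c)          # deep pooled feature vector
        attn = torch.sigmoid(self.fc(pooled)).view(b, c, 1, 1)
        weighted = color_shape * attn                    # channel-wise weighted fusion
        return weighted + structure                      # fuse with the structural map
```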
In particular, in S7, whether a fundus lesion exists is determined based on the fundus multi-scale fusion feature. In one specific example of the application, the fundus multi-scale fusion feature map is passed through a classifier-based fundus lesion identifier to obtain a recognition result indicating whether a fundus lesion is present. That is, the multi-scale, multi-order fusion features of the fundus are used to detect and identify fundus lesions, so that lesions of the patient such as macular degeneration, diabetic retinopathy, and glaucoma are detected and identified more accurately. Specifically, passing the fundus multi-scale fusion feature map through the classifier-based fundus lesion identifier to obtain the recognition result includes: expanding the fundus multi-scale fusion feature map into a classification feature vector by row vectors or column vectors; performing fully connected encoding on the classification feature vector with several fully connected layers of the classifier to obtain an encoded classification feature vector; and passing the encoded classification feature vector through the Softmax classification function of the classifier to obtain the classification result.
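A minimal sketch of such a classifier-based fundus lesion identifier (flatten, fully connected encoding, Softmax over the two labels); the hidden width is an assumption.

```python
import torch.nn as nn
import torch.nn.functional as F

class FundusLesionClassifier(nn.Module):
    def __init__(self, in_features: int, hidden: int = 256):
        super().__init__()
        self.head = nn.Sequential(
            nn.Flatten(),                    # expand the fusion map into a classification vector
            nn.Linear(in_features, hidden),  # fully connected encoding
            nn.ReLU(inplace=True),
            nn.Linear(hidden, 2),            # logits for the two labels
        )

    def forward(self, x):
        # Softmax maps the logits to probabilities p1 (lesion present) and
        # p2 (lesion absent), with p1 + p2 = 1, as described below.
        return F.softmax(self.head(x), dim=1)
```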
That is, in the technical solution of the present application, the labels of the classifier include the presence of a fundus lesion (a first label) and the absence of a fundus lesion (a second label), and the classifier determines, through a Softmax function, to which classification label the fundus multi-scale fusion feature map belongs. It should be noted that the first label p1 and the second label p2 do not carry any manually set concept; in fact, during training the computer model has no notion of "whether a fundus lesion exists". They are simply two classification labels, and the model outputs the probability that the features belong to each of them, with the sum of p1 and p2 equal to one. The classification result of whether a fundus lesion exists is therefore actually converted, through the classification labels, into a classified probability distribution conforming to natural law; what is used is essentially the physical meaning of the labels' natural probability distribution rather than the linguistic meaning of "whether a fundus lesion exists".
A classifier refers to a machine learning model or algorithm that is used to classify input data into different categories or labels. The classifier is part of supervised learning, which performs classification tasks by learning mappings from input data to output categories.
Fully connected layers are one type of layer commonly found in neural networks. In the fully connected layer, each neuron is connected to all neurons of the upper layer, and each connection has a weight. This means that each neuron in the fully connected layer receives inputs from all neurons in the upper layer, and weights these inputs together, and then passes the result to the next layer.
The Softmax classification function is a commonly used activation function for multi-classification problems. It converts each element of the input vector into a probability value between 0 and 1, and the sum of these probability values equals 1. The Softmax function is commonly used at the output layer of a neural network, and is particularly suited for multi-classification problems, because it can map the network output into probability distributions for individual classes. During the training process, the output of the Softmax function may be used to calculate the loss function and update the network parameters through a back propagation algorithm. Notably, the output of the Softmax function does not change the relative magnitude relationship between elements, but rather normalizes them. Thus, the Softmax function does not change the characteristics of the input vector, but simply converts it into a probability distribution form.
It should be appreciated that the shape feature extractor based on the first convolutional neural network model, the color feature extractor based on the second convolutional neural network model, the structural feature extractor based on the third convolutional neural network model, the multi-channel feature fusion module, the cross-order feature fusion device based on the attention fusion mechanism network, and the classifier-based fundus lesion identifier need to be trained before inference is performed with the above neural network models. That is, the fundus medical image analysis method based on computer vision technology of the present application further includes a training stage for training these six modules.
Fig. 3 is a flowchart of the training phase of the fundus medical image analysis method based on computer vision technology according to an embodiment of the present application. As shown in fig. 3, the method according to an embodiment of the present application includes a training phase comprising: S110, acquiring training data, wherein the training data includes training fundus medical images to be analyzed; S120, passing the training fundus medical image to be analyzed through the shape feature extractor based on the first convolutional neural network model to obtain a training fundus shape feature map; S130, passing the training fundus medical image to be analyzed through the color feature extractor based on the second convolutional neural network model to obtain a training fundus color feature map; S140, passing the training fundus medical image to be analyzed through the structural feature extractor based on the third convolutional neural network model to obtain a training fundus structure feature map; S150, passing the training fundus color feature map and the training fundus shape feature map through the multi-channel feature fusion module to obtain a training fundus color-shape fusion feature map; S160, passing the training fundus color-shape fusion feature map and the training fundus structure feature map through the cross-order feature fusion device based on the attention fusion mechanism network to obtain a training fundus multi-scale fusion feature map; S170, performing cluster optimization on each feature value of the training fundus multi-scale fusion feature map to obtain an optimized training fundus multi-scale fusion feature map; S180, passing the optimized training fundus multi-scale fusion feature map through the classifier-based fundus lesion identifier to obtain a classification loss function value; S190, training the shape feature extractor, the color feature extractor, the structural feature extractor, the multi-channel feature fusion module, the cross-order feature fusion device, and the fundus lesion identifier based on the classification loss function value.
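A minimal sketch of one optimization step covering S170 to S190, assuming the modules from the earlier sketches, a `pipeline` callable that runs steps S120 to S160, and the `cluster_optimize` helper sketched after the cluster-optimization description below; the use of raw logits with `cross_entropy` is an implementation convenience, since that loss applies log-softmax internally.

```python
import torch.nn.functional as F

def training_step(pipeline, classifier, optimizer, image, label):
    multiscale_map = pipeline(image)                  # S120-S160 forward pass
    optimized_map = cluster_optimize(multiscale_map)  # S170 cluster optimization
    logits = classifier.head(optimized_map)           # S180: logits before Softmax
    loss = F.cross_entropy(logits, label)             # classification loss value
    optimizer.zero_grad()
    loss.backward()                                   # S190: joint update of all modules
    optimizer.step()
    return loss.item()
```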
Performing cluster optimization on each feature value of the training fundus multi-scale fusion feature map to obtain the optimized training fundus multi-scale fusion feature map includes the following steps: clustering the feature values of the training fundus multi-scale fusion feature map based on the distances between the feature values to obtain a clustered feature set; and optimizing the training fundus multi-scale fusion feature map based on the clustered intra-class and inter-class features in the clustered feature set to obtain the optimized training fundus multi-scale fusion feature map.
In particular, in the above technical solution, the training fundus shape feature map, the training fundus color feature map, and the training fundus structure feature map respectively express the fundus shape, color, and structure features of the fundus medical image to be analyzed, obtained by convolutional encoding at different depths. However, because the convolutional encoding depths differ, these feature maps have different feature orders and resolutions. To make full use of their individual feature information, the feature associations between them, and the different degrees to which this information contributes to the final class probability regression, the technical scheme of the application first fuses the training fundus color feature map and the training fundus shape feature map with the multi-channel feature fusion module to obtain a training fundus color-shape fusion feature map, and then passes the training fundus color-shape fusion feature map and the training fundus structure feature map through the cross-order feature fusion device based on an attention fusion mechanism network to obtain the training fundus multi-scale fusion feature map. The technical contradiction of this fusion mechanism is that fully preserving the individual feature information of the three maps suppresses the correlation information between the features and the information about their contributions to the final class probability regression, and vice versa; as a result, the training fundus multi-scale fusion feature map as a whole exhibits discreteness in its local feature distribution, making class probability convergence difficult when it is classified by the classifier-based fundus lesion identifier and affecting both the training speed of the identifier and the accuracy of the final classification result. On this basis, the applicant performs cluster optimization on the training fundus multi-scale fusion feature map: its feature values are first clustered, for example based on the distances between them, and the optimization is then carried out on the clustered intra-class and inter-class features, where:
$f_i$ denotes each feature value of the training fundus multi-scale fusion feature map, $N$ is the number of feature sets corresponding to the training fundus multi-scale fusion feature map, $K$ is the number of cluster features, and $\mathcal{C}$ represents the set of clustered features. Specifically, the intra-class and inter-class features of the training fundus multi-scale fusion feature map are treated as different instance roles to perform a class-instance description based on cluster proportion allocation, and a cluster response history based on intra-class and inter-class dynamic context is introduced to maintain a coordinated global view of the intra-class and inter-class distributions of the map's overall features. In this way, the optimized feature clustering operation on the training fundus multi-scale fusion feature map array maintains consistent responses between intra-class and inter-class features, so that the feature-clustering-based class probability convergence path stays consistent throughout classification, improving the class probability convergence of the training fundus multi-scale fusion feature map. Therefore, based on computer vision technology, fundus lesions of the patient can be detected and identified more accurately from semantic features such as the shape, color, and structure of the patient's fundus, providing strong support for their early diagnosis and treatment.
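Since the patent's exact optimization formula is not reproduced in the text above, the following is only an illustrative sketch of the described idea: feature values are clustered by distance, and each value is then nudged toward its intra-class center while the global (inter-class) mean response is preserved. The update rule and all constants are assumptions, and the hard cluster assignment is a non-differentiable simplification.

```python
import torch

def cluster_optimize(feature_map: torch.Tensor, k: int = 4,
                     alpha: float = 0.1, iters: int = 10) -> torch.Tensor:
    values = feature_map.flatten()
    with torch.no_grad():
        # Distance-based clustering of the feature values: simple 1-D k-means,
        # initialized at evenly spaced quantiles of the value distribution.
        v = values.detach()
        centers = torch.quantile(v, torch.linspace(0.0, 1.0, k, device=v.device))
        for _ in range(iters):
            assign = torch.argmin((v[:, None] - centers[None, :]).abs(), dim=1)
            for j in range(k):
                members = v[assign == j]
                if members.numel() > 0:
                    centers[j] = members.mean()
    # Pull each value toward its intra-class center (consistent intra-class
    # response), then restore the global mean (coordinated inter-class view).
    optimized = (1 - alpha) * values + alpha * centers[assign]
    optimized = optimized - optimized.mean() + values.mean()
    return optimized.view_as(feature_map)
```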
In summary, the fundus medical image analysis method based on computer vision technology according to the embodiments of the application has been explained. It analyzes the fundus medical image by collecting a medical image of the patient's fundus and introducing, at the back end, image processing and analysis algorithms based on computer vision and artificial intelligence, integrating features such as the shape, color, and structure of the fundus so as to detect and identify fundus lesions such as macular degeneration, diabetic retinopathy, and glaucoma more accurately. In this way, fundus lesions of the patient can be automatically identified and detected on the basis of computer vision technology, providing strong support for their early diagnosis and treatment.
The foregoing description of the embodiments of the present disclosure has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the improvement of technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (8)

1. A fundus medical image analysis method based on computer vision technology, which is characterized by comprising the following steps:
acquiring a fundus medical image to be analyzed;
Extracting shape semantic features of the fundus medical image to be analyzed by a shape feature extractor based on a deep neural network model to obtain a fundus shape feature map;
Performing color semantic feature extraction on the fundus medical image to be analyzed through a color feature extractor based on a deep neural network model to obtain a fundus color feature map;
Extracting structural semantic features of the fundus medical image to be analyzed by a structural feature extractor based on a deep neural network model to obtain a fundus structural feature map;
Passing the fundus color feature map and the fundus shape feature map through a multi-channel feature fusion module to obtain a fundus color-shape fusion feature map;
The fundus color-shape fusion feature map and the fundus structural feature map pass through a cross-order feature fusion device based on an attention fusion mechanism network to obtain a fundus multi-scale fusion feature map as a fundus multi-scale fusion feature;
Determining whether a fundus lesion exists based on the fundus multi-scale fusion feature;
Wherein, passing the fundus color-shape fusion feature map and the fundus structural feature map through a cross-order feature fusion device based on an attention fusion mechanism network to obtain a fundus multi-scale fusion feature map as a fundus multi-scale fusion feature, comprising:
carrying out global average pooling treatment on each feature matrix along the channel dimension in the fundus structure feature map to obtain a fundus structure deep pooling feature vector;
Passing the fundus structure deep-layer pooling feature vector through a full-connection layer for full-connection encoding to obtain a fundus structure deep-layer semantic full-connection feature vector;
Carrying out weighted fusion based on channel dimensions on the fundus structure deep semantic full-connection feature vector and the fundus color-shape fusion feature map to obtain a fusion feature map;
And fusing the fusion characteristic diagram and the fundus structural characteristic diagram to obtain the fundus multi-scale fusion characteristic diagram.
2. The fundus medical image analysis method based on computer vision technology according to claim 1, wherein extracting shape semantic features of the fundus medical image to be analyzed by a shape feature extractor based on a deep neural network model to obtain a fundus shape feature map comprises: passing the fundus medical image to be analyzed through a shape feature extractor based on a first convolutional neural network model to obtain the fundus shape feature map.
3. The fundus medical image analysis method based on computer vision technology according to claim 2, wherein performing color semantic feature extraction on the fundus medical image to be analyzed through a color feature extractor based on a deep neural network model to obtain a fundus color feature map comprises: passing the fundus medical image to be analyzed through a color feature extractor based on a second convolutional neural network model to obtain the fundus color feature map.
4. The fundus medical image analysis method based on computer vision technology according to claim 3, wherein extracting structural semantic features of the fundus medical image to be analyzed by a structural feature extractor based on a deep neural network model to obtain a fundus structural feature map comprises: passing the fundus medical image to be analyzed through a structural feature extractor based on a third convolutional neural network model to obtain the fundus structural feature map.
5. The fundus medical image analysis method based on computer vision technology according to claim 4, wherein passing the fundus color feature map and the fundus shape feature map through a multi-channel feature fusion module to obtain a fundus color-shape fusion feature map comprises: processing the fundus color feature map and the fundus shape feature map with the multi-channel feature fusion module according to the following multi-channel feature fusion formula to obtain the fundus color-shape fusion feature map;
the multi-channel feature fusion formula is as follows:
$$F_f = \mathrm{ReLU}\left(\mathrm{BN}\left(\mathrm{Conv}\left(\left[F_1, F_2\right]\right)\right)\right)$$

$$w = \mathrm{Sigmoid}\left(\mathrm{GAP}\left(F_f\right)\right)$$

$$F_w = w \otimes F_f$$

$$F_{cs} = F_w \oplus F_f$$

wherein $F_1$ and $F_2$ are the fundus color feature map and the fundus shape feature map, respectively; $[\cdot,\cdot]$ denotes the splicing operation; $\mathrm{Conv}(\cdot)$ denotes the convolution operation; $\mathrm{BN}(\cdot)$ denotes batch normalization; $\mathrm{ReLU}$ is the activation function; $F_f$ is the fusion characterization feature map; $\mathrm{GAP}(\cdot)$ denotes global average pooling; $\mathrm{Sigmoid}$ is the activation function; $w$ is the weight vector; $F_w$ is the weighted feature map, with $\otimes$ denoting channel-wise multiplication; and $F_{cs}$ is the fundus color-shape fusion feature map, with $\oplus$ denoting element-wise addition.
6. The fundus medical image analysis method based on computer vision technology according to claim 5, wherein determining whether a fundus lesion exists based on the fundus multi-scale fusion feature comprises: passing the fundus multi-scale fusion feature map through a classifier-based fundus lesion identifier to obtain a recognition result, wherein the recognition result is used to indicate whether a fundus lesion exists.
7. The fundus medical image analysis method based on computer vision technology according to claim 6, further comprising a training step: training the shape feature extractor based on the first convolutional neural network model, the color feature extractor based on the second convolutional neural network model, the structural feature extractor based on the third convolutional neural network model, the multi-channel feature fusion module, the cross-order feature fusion device based on the attention fusion mechanism network, and the classifier-based fundus lesion identifier;
Wherein the training step comprises:
acquiring training data, wherein the training data comprises training fundus medical images to be analyzed;
passing the training fundus medical image to be analyzed through a shape feature extractor based on a first convolutional neural network model to obtain a training fundus shape feature map;
passing the training fundus medical image to be analyzed through a color feature extractor based on a second convolutional neural network model to obtain a training fundus color feature map;
passing the training fundus medical image to be analyzed through a structural feature extractor based on a third convolutional neural network model to obtain a training fundus structural feature map;
passing the training fundus color feature map and the training fundus shape feature map through a multi-channel feature fusion module to obtain a training fundus color-shape fusion feature map;
passing the training fundus color-shape fusion feature map and the training fundus structural feature map through a cross-order feature fusion device based on an attention fusion mechanism network to obtain a training fundus multi-scale fusion feature map;
performing cluster optimization on each feature value of the training fundus multi-scale fusion feature map to obtain an optimized training fundus multi-scale fusion feature map;
passing the optimized training fundus multi-scale fusion feature map through a classifier-based fundus lesion identifier to obtain a classification loss function value;
and training the shape feature extractor based on the first convolutional neural network model, the color feature extractor based on the second convolutional neural network model, the structural feature extractor based on the third convolutional neural network model, the multi-channel feature fusion module, the cross-order feature fusion device based on the attention fusion mechanism network, and the classifier-based fundus lesion identifier based on the classification loss function value.
8. The fundus medical image analysis method based on computer vision technology according to claim 7, wherein performing cluster optimization on each feature value of the training fundus multi-scale fusion feature map to obtain an optimized training fundus multi-scale fusion feature map comprises:
clustering the feature values of the training fundus multi-scale fusion feature map based on the distances between the feature values to obtain a clustered feature set;
and optimizing the training fundus multi-scale fusion feature map based on the clustered intra-class and inter-class features in the clustered feature set to obtain the optimized training fundus multi-scale fusion feature map.
CN202410343063.0A 2024-03-25 2024-03-25 Fundus medical image analysis method based on computer vision technology Active CN117952964B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410343063.0A CN117952964B (en) 2024-03-25 2024-03-25 Fundus medical image analysis method based on computer vision technology

Publications (2)

Publication Number Publication Date
CN117952964A (en) 2024-04-30
CN117952964B (en) 2024-06-07

Family

ID=90798174

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410343063.0A Active CN117952964B (en) 2024-03-25 2024-03-25 Fundus medical image analysis method based on computer vision technology

Country Status (1)

Country Link
CN (1) CN117952964B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118135407A (en) * 2024-05-07 2024-06-04 中邦生态环境有限公司 Arbor nutrient solution preparation dynamic adjustment system and method based on big data
CN118266854B (en) * 2024-05-30 2024-07-23 美视康健(吉林)医疗设备有限公司 Stereoscopic vision detection control system and method
CN118429339B (en) * 2024-07-03 2024-08-30 吉林大学 Gastric cancer patient identification system and method based on saliva metabolite

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113793301A (en) * 2021-08-19 2021-12-14 首都医科大学附属北京同仁医院 Training method of fundus image analysis model based on dense convolution network model
EP3944185A1 (en) * 2020-07-23 2022-01-26 INESC TEC - Instituto de Engenharia de Sistemas e Computadores, Tecnologia e Ciência Computer-implemented method, system and computer program product for detecting a retinal condition from eye fundus images
CN114219687A (en) * 2021-11-02 2022-03-22 三峡大学 Intelligent identification method for potential construction safety hazards by fusing human-computer vision
CN114287878A (en) * 2021-10-18 2022-04-08 江西财经大学 Diabetic retinopathy focus image identification method based on attention model
WO2022166399A1 (en) * 2021-02-04 2022-08-11 北京邮电大学 Fundus oculi disease auxiliary diagnosis method and apparatus based on bimodal deep learning
WO2023155488A1 (en) * 2022-02-21 2023-08-24 浙江大学 Fundus image quality evaluation method and device based on multi-source multi-scale feature fusion
WO2023181072A1 (en) * 2022-03-24 2023-09-28 Mahathma Centre Of Moving Images Private Limited Digital system and 3d tool for training and medical counselling in ophthalmology
CN117611926A (en) * 2024-01-22 2024-02-27 重庆医科大学绍兴柯桥医学检验技术研究中心 Medical image recognition method and system based on AI model

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111598866B (en) * 2020-05-14 2023-04-11 四川大学 Lens key feature positioning method based on eye B-ultrasonic image

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3944185A1 (en) * 2020-07-23 2022-01-26 INESC TEC - Instituto de Engenharia de Sistemas e Computadores, Tecnologia e Ciência Computer-implemented method, system and computer program product for detecting a retinal condition from eye fundus images
WO2022166399A1 (en) * 2021-02-04 2022-08-11 北京邮电大学 Fundus oculi disease auxiliary diagnosis method and apparatus based on bimodal deep learning
CN113793301A (en) * 2021-08-19 2021-12-14 首都医科大学附属北京同仁医院 Training method of fundus image analysis model based on dense convolution network model
CN114287878A (en) * 2021-10-18 2022-04-08 江西财经大学 Diabetic retinopathy focus image identification method based on attention model
CN114219687A (en) * 2021-11-02 2022-03-22 三峡大学 Intelligent identification method for potential construction safety hazards by fusing human-computer vision
WO2023155488A1 (en) * 2022-02-21 2023-08-24 浙江大学 Fundus image quality evaluation method and device based on multi-source multi-scale feature fusion
WO2023181072A1 (en) * 2022-03-24 2023-09-28 Mahathma Centre Of Moving Images Private Limited Digital system and 3d tool for training and medical counselling in ophthalmology
CN117611926A (en) * 2024-01-22 2024-02-27 重庆医科大学绍兴柯桥医学检验技术研究中心 Medical image recognition method and system based on AI model

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Cao Xinrong; Xue Lanyan; Lin Jiawen; Yu Lun. A new optic disc segmentation method based on visual saliency and rotary scanning. Journal of Biomedical Engineering. 2018, (02), full text. *
Zhao Rongchang; Chen Zailiang; Duan Xuanchu; Chen Qilin; Liu Ke; Zhu Chengzhang. Automatic glaucoma detection by aggregating multi-channel features. Journal of Computer-Aided Design & Computer Graphics. 2017, (06), full text. *

Also Published As

Publication number Publication date
CN117952964A (en) 2024-04-30

Similar Documents

Publication Publication Date Title
CN117952964B (en) Fundus medical image analysis method based on computer vision technology
CN109886273B (en) CMR image segmentation and classification system
CA3046939A1 (en) System and method for iterative classification using neurophysiological signals
CN111080643A (en) Method and device for classifying diabetes and related diseases based on fundus images
CN114724231A (en) Glaucoma multi-modal intelligent recognition system based on transfer learning
WO2023108418A1 (en) Brain atlas construction and neural circuit detection method and related product
CN111461220B (en) Image analysis method, image analysis device, and image analysis system
Aurangzeb et al. An efficient and light weight deep learning model for accurate retinal vessels segmentation
CN113610118A (en) Fundus image classification method, device, equipment and medium based on multitask course learning
CN111028232A (en) Diabetes classification method and equipment based on fundus images
CN113786185A (en) Static brain network feature extraction method and system based on convolutional neural network
CN114781441B (en) EEG motor imagery classification method and multi-space convolution neural network model
Shenavarmasouleh et al. DRDr II: Detecting the severity level of diabetic retinopathy using Mask RCNN and transfer learning
CN111047590A (en) Hypertension classification method and device based on fundus images
Kamal et al. A comprehensive review on the diabetic retinopathy, glaucoma and strabismus detection techniques based on machine learning and deep learning
Nugroho et al. Image dermoscopy skin lesion classification using deep learning method: systematic literature review
Tian et al. Learning discriminative representations for fine-grained diabetic retinopathy grading
Singh et al. A Deep Learning Approach to Analyze Diabetic Retinopathy Lesions using Scant Data
Ahmed et al. An effective deep learning network for detecting and classifying glaucomatous eye.
CN112614092A (en) Spine detection method and device
CN118542639B (en) Fundus image diagnosis analysis system and method based on pattern recognition
Niranjana et al. Enhanced Skin Diseases Prediction using DenseNet-121: Leveraging Dataset Diversity for High Accuracy Classification
Mounika et al. A Deep Hybrid Neural Network Model to Detect Diabetic Retinopathy from Eye Fundus Images
Rinesh et al. Automatic Retinopathic Diabetic Detection: Data Analyses, Approaches and Assessment Measures Using Deep Learning
Garazhian et al. Hypertensive Retinopathy Detection in Fundus Images Using Deep Learning-Based Model-Shallow ConvNet

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant