CN113642537A - Medical image recognition method and device, computer equipment and storage medium - Google Patents

Medical image recognition method and device, computer equipment and storage medium Download PDF

Info

Publication number
CN113642537A
CN113642537A (application CN202111194907.2A; granted publication CN113642537B)
Authority
CN
China
Prior art keywords
image
feature
neural network
network model
white light
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111194907.2A
Other languages
Chinese (zh)
Other versions
CN113642537B (en)
Inventor
于红刚
卢姿桦
姚理文
张丽辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Endoangel Medical Technology Co Ltd
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN202111194907.2A priority Critical patent/CN113642537B/en
Publication of CN113642537A publication Critical patent/CN113642537A/en
Application granted granted Critical
Publication of CN113642537B publication Critical patent/CN113642537B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/243Classification techniques relating to the number of classes
    • G06F18/24323Tree-organised classifiers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Abstract

The embodiment of the application provides a medical image recognition method and apparatus, a computer device, and a storage medium. The method first acquires an original endoscope image and endoscope report information and extracts pathological information from the endoscope report information. The original endoscope image is then classified to obtain a white light image and an NBI image. The white light image is input into a trained first neural network model for feature extraction to obtain a first feature set, and the NBI image is input into a trained second neural network model for feature extraction to obtain a second feature. Finally, a trained machine learning classifier performs recognition according to the pathological information, the first feature set, and the second feature to obtain a recognition result for the endoscope image. The method thus fuses the pathological information with multiple features of the original endoscope image, improving recognition accuracy; because recognition is performed by a machine learning classifier, the result is more objective and the recognition efficiency of medical images is greatly improved.

Description

Medical image recognition method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of medical image processing technologies, and in particular, to a medical image recognition method, apparatus, computer device, and storage medium.
Background
An endoscope image is an image produced by an electronic endoscope, a medical electro-optical instrument that integrates high-precision technologies such as light collection, mechanics, and electronics. The endoscope can be inserted into body cavities and internal organ cavities for direct observation, diagnosis, and treatment, allowing a doctor to conveniently recognize and diagnose lesions from the endoscope images it produces.
Disclosure of Invention
The embodiment of the application provides a medical image identification method, a medical image identification device, computer equipment and a storage medium, and aims to solve the technical problem of low identification efficiency in manual identification.
In one aspect, the present application provides a medical image recognition method, including:
acquiring an original endoscope image and endoscope report information, and extracting pathological information from the endoscope report information;
classifying and identifying the original endoscope image to obtain a white light image and an NBI image;
inputting the white light image into a trained first neural network model for feature extraction to obtain a first feature set, wherein the first feature set comprises a plurality of first features, the first neural network model comprises a plurality of first neural network submodels, and one first neural network submodel corresponds to one first feature;
inputting the NBI image into a trained second neural network model for feature extraction to obtain a second feature;
and identifying in a trained machine learning classifier according to the pathological information, the first characteristic set and the second characteristic to obtain an identification result of the original endoscope image.
In one aspect, the present application provides a medical image recognition apparatus, comprising:
the acquisition module is used for acquiring an original endoscope image and endoscope report information, and extracting pathological information from the endoscope report information;
the classification module is used for classifying and identifying the original endoscope image to obtain a white light image and an NBI image;
the first extraction module is used for inputting the white light image into a trained first neural network model for feature extraction to obtain a first feature set, wherein the first feature set comprises a plurality of first features, the first neural network model comprises a plurality of first neural network submodels, and one first neural network submodel corresponds to one first feature;
the second extraction module is used for inputting the NBI image into a trained second neural network model for feature extraction to obtain a second feature;
and the recognition module is used for recognizing in a trained machine learning classifier according to the pathological information, the first characteristic set and the second characteristic to obtain a recognition result of the original endoscope image.
In one aspect, the present application provides a computer device comprising a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of:
acquiring an original endoscope image and endoscope report information, and extracting pathological information from the endoscope report information;
classifying and identifying the original endoscope image to obtain a white light image and an NBI image;
inputting the white light image into a trained first neural network model for feature extraction to obtain a first feature set, wherein the first feature set comprises a plurality of first features, the first neural network model comprises a plurality of first neural network submodels, and one first neural network submodel corresponds to one first feature;
inputting the NBI image into a trained second neural network model for feature extraction to obtain a second feature;
and identifying in a trained machine learning classifier according to the pathological information, the first characteristic set and the second characteristic to obtain an identification result of the original endoscope image.
In one aspect, the present application provides a computer readable medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of:
acquiring an original endoscope image and endoscope report information, and extracting pathological information from the endoscope report information;
classifying and identifying the original endoscope image to obtain a white light image and an NBI image;
inputting the white light image into a trained first neural network model for feature extraction to obtain a first feature set, wherein the first feature set comprises a plurality of first features, the first neural network model comprises a plurality of first neural network submodels, and one first neural network submodel corresponds to one first feature;
inputting the NBI image into a trained second neural network model for feature extraction to obtain a second feature;
and identifying in a trained machine learning classifier according to the pathological information, the first characteristic set and the second characteristic to obtain an identification result of the original endoscope image.
The embodiment of the application provides a medical image recognition method and apparatus, a computer device, and a storage medium. The method first acquires an original endoscope image and endoscope report information and extracts pathological information from the endoscope report information. The original endoscope image is then classified to obtain a white light image and an NBI image. The white light image is input into a trained first neural network model for feature extraction to obtain a first feature set, and the NBI image is input into a trained second neural network model for feature extraction to obtain a second feature. Finally, a trained machine learning classifier performs recognition according to the pathological information, the first feature set, and the second feature to obtain a recognition result for the endoscope image. The method thus fuses the pathological information with multiple features of the original endoscope image, improving recognition accuracy; because recognition is performed by a machine learning classifier, the result is more objective and the recognition efficiency of medical images is greatly improved.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
Wherein:
FIG. 1 is a flow diagram of a medical image recognition method in one embodiment;
FIG. 2 is a schematic illustration of an original endoscopic image in one embodiment;
FIG. 3 is a flow chart of a medical image recognition method in another embodiment;
FIG. 4 is a flow diagram of a recognition result determination method in one embodiment;
FIG. 5 is a flowchart of a method for classifying and identifying an original endoscopic image according to an embodiment;
FIG. 6 is a block diagram showing the structure of a medical image recognition apparatus according to an embodiment;
FIG. 7 is a block diagram of a computer device in one embodiment.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1, in an embodiment, a medical image recognition method is provided, which can be applied to a terminal or a server; this embodiment takes application to a server as an example. The medical image recognition method specifically comprises the following steps:
and 102, acquiring an original endoscope image and endoscope report information, and extracting pathological information from the endoscope report information.
The endoscopic image refers to an image of a lesion area acquired by an electronic endoscope; in a specific embodiment, the endoscopic image may be an intestinal tumor image. Fig. 2 is a schematic diagram of an original endoscopic image. The endoscope report information refers to a report, generated by a medical examination apparatus such as an electronic endoscope, that records a patient's clinical and pathological information, such as name, age, sex, lesion position, and lesion size. Specifically, the pathological information may be extracted with a keyword extraction method or a structured data extraction method. It can be understood that, since the pathological information provides a useful reference for identifying the degree of lesion abnormality in the original endoscopic image, the pathological information is extracted for further processing.
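The keyword extraction mentioned above can be sketched as a few regular expressions over the report text. This is a minimal illustration only; the field names, patterns, and report format below are assumptions for demonstration, not taken from the patent.

```python
import re

def extract_pathology_info(report_text: str) -> dict:
    """Pull structured pathology fields out of a free-text endoscope report.
    The report is assumed to contain "field: value" fragments; field names
    and patterns here are hypothetical."""
    patterns = {
        "age": r"age\s*[:：]\s*(\d+)",
        "sex": r"sex\s*[:：]\s*(\w+)",
        "lesion_position": r"lesion position\s*[:：]\s*([^\n;]+)",
        "lesion_size_mm": r"lesion size\s*[:：]\s*([\d.]+)",
    }
    info = {}
    for field, pat in patterns.items():
        m = re.search(pat, report_text, re.IGNORECASE)
        if m:
            info[field] = m.group(1).strip()
    return info

report = "Age: 63; Sex: male; Lesion position: sigmoid colon; Lesion size: 12 mm"
print(extract_pathology_info(report))
```

A structured data extraction method would instead read the same fields directly from a database or form template, skipping the pattern matching.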
And 104, classifying and identifying the original endoscope image to obtain a white light image and an NBI image.
The white light image and the NBI (Narrow Band Imaging) image are two images of the original endoscopic image with different color components. The white light image is the image output when endoscopic imaging is performed in the white light illumination mode. Because of the specific properties of light, NBI light penetrates only the shallow layer of the mucosa: blue wavelengths are absorbed by surface blood vessels and green wavelengths are reflected, so NBI images appear green. Specifically, the original endoscopic image can be classified with a hand-crafted feature extraction method, such as extracting color features, combined with a binary classifier such as a Support Vector Machine (SVM), a decision tree, or a random forest; a deep learning method may also be used for classification. Preferably, the RGB color features of the original endoscopic image are extracted for classification, which improves classification speed.
It can be understood that, in this embodiment, by obtaining the white light image and the NBI image, the sample size of the medical image is enriched, so that the sample of the medical image is more comprehensive and accurate, and the subsequent identification accuracy is improved.
And 106, inputting the white light image into a trained first neural network model for feature extraction to obtain a first feature set, wherein the first feature set comprises a plurality of first features, the first neural network model comprises a plurality of first neural network submodels, and one first neural network submodel corresponds to one first feature.
The trained first neural network model comprises a plurality of first neural network submodels for extracting features from the white light image; each first neural network submodel may be a ResNet (Residual Network), a UNet++ network, or a VGG (Visual Geometry Group) network. The first feature set includes a plurality of first features, including but not limited to shape features, color features, brightness features, edge features, and contour features, which can be selected according to the type of the original endoscopic image and are not limited here. Specifically, the white light image is used as the input of the trained first neural network model, and the output of the first neural network model is the first feature set. In this embodiment, extracting a plurality of first features of the white light image makes its representation comprehensive and complete, so that subsequent recognition based on the first feature set can be accurate.
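Step 106 can be sketched as a dictionary of per-feature submodels applied to the same white light image. The stand-in functions below are illustrative placeholders (simple image statistics), not the trained ResNet/UNet++ submodels described in the patent; a real system would load trained network weights here.

```python
import numpy as np

# Stand-ins for the trained first neural network submodels. Each maps the
# white light image to one feature vector; the statistics are illustrative.
def shape_submodel(img):      return np.array([img.shape[0], img.shape[1]], float)
def color_submodel(img):      return img.reshape(-1, 3).mean(axis=0)   # mean RGB
def brightness_submodel(img): return np.array([img.mean()])

FIRST_SUBMODELS = {
    "shape": shape_submodel,
    "color": color_submodel,
    "brightness": brightness_submodel,
}

def extract_first_feature_set(white_light_img):
    """One submodel per first feature, as in step 106: the same white light
    image is fed to every submodel and each output is one first feature."""
    return {name: model(white_light_img) for name, model in FIRST_SUBMODELS.items()}

img = np.random.default_rng(0).random((64, 64, 3))
features = extract_first_feature_set(img)
print({k: v.shape for k, v in features.items()})
```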
And 108, inputting the NBI image into the trained second neural network model for feature extraction to obtain a second feature.
The second feature is a feature of the NBI image, and the second feature may be a color feature, a texture feature, an edge feature, or the like. The trained second neural network model is a deep learning model for extracting features of the NBI image, and includes but is not limited to ResNet network, UNet network, VGG network and the like. Specifically, the NBI image is used as an input of a trained second neural network model, and an output of the trained second neural network model is a second feature.
And step 110, according to the pathological information, the first characteristic set and the second characteristic, adopting a trained machine learning classifier to perform recognition to obtain a recognition result of the endoscope image.
The recognition result is an index representing the degree of lesion abnormality in the original endoscopic image. The trained machine learning classifier is a multi-class classifier. Specifically, the pathological information, the first feature set, and the second feature are input into the trained machine learning classifier for recognition. Fusing the pathological information, the plurality of first features of the white light image, and the second feature of the NBI image makes the information participating in recognition more accurate and comprehensive; at the same time, performing recognition with a machine learning classifier makes the result more objective, improving the objectivity and accuracy of medical image recognition.
The medical image recognition method first acquires an original endoscope image and endoscope report information and extracts pathological information from the endoscope report information. The original endoscope image is then classified to obtain a white light image and an NBI image. The white light image is input into a trained first neural network model for feature extraction to obtain a first feature set comprising a plurality of first features, and the NBI image is input into a trained second neural network model for feature extraction to obtain a second feature. Finally, a trained machine learning classifier performs recognition according to the pathological information, the first feature set, and the second feature to obtain a recognition result for the endoscope image. The method thus fuses the pathological information with multiple features of the original endoscope image, improving recognition accuracy; at the same time, performing recognition with a machine learning classifier makes the result more objective, greatly improving the recognition efficiency of medical images.
As shown in fig. 3, in an embodiment, before the step of inputting the pathology information, the first feature set, and the second feature into the trained machine learning classifier for recognition to obtain the recognition result of the endoscopic image, the method further includes:
step 112, acquiring a training sample set, wherein the training sample set comprises pathological information, a first characteristic set, a second characteristic and an abnormal grade corresponding to an endoscopic image;
and step 114, taking the pathological information, the first characteristic set and the second characteristic of the endoscopic image as the input of the GBDT machine learning classifier, taking the abnormal level of the endoscopic image as the expected output, and training the GBDT machine learning classifier to obtain the trained machine learning classifier.
Compared with other types of machine learning models, a GBDT (Gradient Boosting Decision Tree) classifier offers better interpretability. Specifically, for endoscopic images whose abnormality level has already been determined, the pathological information, the first feature set, and the second feature are used as the input of the GBDT machine learning classifier, and the abnormality level of the endoscopic image is used as the expected output. Training the GBDT machine learning classifier against the expected outputs corresponding to the pathological information, first feature sets, and second features in the training sample set yields the trained machine learning classifier.
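As a hedged sketch of this training step, scikit-learn's `GradientBoostingClassifier` can serve as a stand-in for the patent's GBDT classifier. The feature dimensions, the number of abnormality levels (three), and the synthetic data below are all assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(42)
# Synthetic training set: each row is the fused [pathology, first-feature-set,
# second-feature] vector for one endoscopic image; labels are abnormality levels.
X = rng.random((40, 8))
y = rng.integers(0, 3, size=40)          # three assumed abnormality levels

clf = GradientBoostingClassifier(n_estimators=50, max_depth=2, random_state=0)
clf.fit(X, y)                            # abnormality level is the expected output
pred = clf.predict(X[:5])
print(pred)
```

One reason GBDT is attractive here is the interpretability the paragraph mentions: `clf.feature_importances_` shows how much each fused feature contributed to the learned trees.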
As shown in fig. 4, in an embodiment, the step of obtaining the recognition result of the endoscopic image by using a trained machine learning classifier for recognition according to the pathology information, the first feature set, and the second feature includes:
step 114A, fusing the pathological information, each first feature in the first feature set, and the second feature to obtain abnormal features;
and step 114B, inputting the abnormal features into the trained machine learning model for recognition to obtain the abnormal grade of the endoscope image.
The abnormal feature is the feature obtained by fitting together the pathological information, each first feature in the first feature set, and the second feature. Specifically, these may be fused by weighted summation; the abnormal feature is then used as the input of the trained machine learning model, whose output is the recognition result, namely the abnormality level of the endoscopic image. By fusing the pathological information, the first features, and the second feature, the abnormal feature becomes more accurate and comprehensive; recognizing it with the trained machine learning model improves objectivity, so that, compared with manual recognition, abnormality-level recognition is more convenient and accurate.
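The fusion step can be sketched as follows. The patent mentions weighted summation; the sketch below uses weighted concatenation instead (noted in the comment), since in general the parts have different lengths, and all example vectors are hypothetical.

```python
import numpy as np

def fuse_features(pathology_vec, first_feature_set, second_feature, weights=None):
    """Fuse pathology info, each first feature, and the second feature into a
    single abnormal-feature vector. Fusion here is weighted concatenation; a
    weighted summation (as the patent describes) would additionally require
    projecting every part to a common dimension first."""
    parts = [np.asarray(pathology_vec, float)]
    parts += [np.asarray(f, float) for f in first_feature_set]
    parts.append(np.asarray(second_feature, float))
    if weights is None:
        weights = [1.0] * len(parts)
    return np.concatenate([w * p for w, p in zip(weights, parts)])

abnormal = fuse_features(
    pathology_vec=[63.0, 1.0],                 # e.g. age, coded sex (hypothetical)
    first_feature_set=[[0.2, 0.4], [0.9]],     # e.g. shape, brightness features
    second_feature=[0.1, 0.5, 0.3],            # e.g. texture feature
)
print(abnormal.shape)
```

The resulting vector is what gets passed to the trained machine learning model in step 114B.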
In one embodiment, the step of inputting the white light image into a trained first neural network model for feature extraction to obtain a first feature set includes: and respectively taking the white light image as the input of each first neural network submodel, and obtaining the output of each first neural network submodel as the corresponding first characteristic.
Specifically, the white light image is fed simultaneously into the plurality of first neural network submodels, and the output of each submodel is the corresponding first feature. Extracting the plurality of first features with the plurality of first neural network submodels in parallel improves the extraction efficiency of the first feature set.
In one embodiment, the first feature is one of a shape feature, a color feature, a brightness feature, and a contrast feature of the white light image, and the first neural network submodel is correspondingly one of a first ResNet neural network model, a second ResNet neural network model, a third ResNet neural network model, and a UNet++ neural network model. Taking the white light image as the input of each first neural network submodel and obtaining the output of each submodel as the corresponding first feature comprises: performing feature extraction on the white light image by using the first ResNet neural network model to obtain the shape feature; performing feature extraction on the white light image by using the second ResNet neural network model to obtain the color feature; performing feature extraction on the white light image by using the third ResNet neural network model to obtain the brightness feature; and performing feature extraction on the white light image by using the UNet++ neural network model to obtain the contrast feature.
Specifically, the first feature set comprises shape features, color features, brightness features, and contrast features, and a different first neural network submodel is adopted for each feature, making the first features more accurate. The first neural network submodels are the first ResNet neural network model, the second ResNet neural network model, the third ResNet neural network model, and the UNet++ neural network model; the first, second, and third ResNet models are all ResNet neural networks but differ in model parameters and loss functions.
In one embodiment, the second feature is a texture feature, and the trained second neural network model comprises a ResNet network; inputting the NBI image into a trained second neural network model for feature extraction to obtain a second feature, wherein the feature extraction comprises the following steps: and (5) performing feature extraction on the NBI image by using a ResNet network to obtain texture features.
Specifically, in the application scenario in this embodiment, for an intestinal tumor image, whether there are ducts and vascular structures can be reflected by the texture features of the NBI image, and therefore, the texture features of the NBI image are extracted through the ResNet network, so that subsequent identification is more accurate.
As shown in fig. 5, in one embodiment, the step of performing classification recognition on the original endoscopic image to obtain a white light image and an NBI image comprises:
step 104A, converting each original endoscope image into an RGB image;
step 104B, extracting G component pixel values corresponding to the RGB images;
step 104C, determining the original endoscopic image with the G component pixel value larger than a preset pixel threshold value as an NBI image;
and step 104D, determining the original endoscopic image with the G component pixel value smaller than or equal to the preset pixel threshold value as a white light image.
Specifically, the white light image differs from the NBI image in that the green component of the NBI image is higher. Therefore, in this embodiment, each original endoscopic image is converted into an RGB image, the G component pixel value of the RGB image is extracted, and the images are divided according to that value: an original endoscopic image whose G component pixel value is greater than a preset pixel threshold is determined to be an NBI image, and one whose G component pixel value is less than or equal to the preset pixel threshold is determined to be a white light image.
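Steps 104A through 104D reduce to a mean-G threshold test. The following is a minimal numpy sketch; the threshold value and synthetic images are assumptions, not values from the patent.

```python
import numpy as np

def split_white_light_vs_nbi(images, g_threshold=0.45):
    """Classify original endoscopic images by their mean G-channel value:
    greener images are treated as NBI, the rest as white light.
    The threshold value here is illustrative only."""
    white_light, nbi = [], []
    for img in images:                      # img: H x W x 3 RGB array in [0, 1]
        g_mean = img[..., 1].mean()         # G component pixel value (step 104B)
        (nbi if g_mean > g_threshold else white_light).append(img)
    return white_light, nbi

greenish = np.zeros((8, 8, 3)); greenish[..., 1] = 0.8   # strong green: NBI-like
neutral = np.full((8, 8, 3), 0.3)                        # balanced: white-light-like
wl, nbi = split_white_light_vs_nbi([greenish, neutral])
print(len(wl), len(nbi))
```

In practice the preset pixel threshold would be chosen empirically, e.g. from the G-value distributions of labeled white light and NBI images.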
As shown in fig. 6, in one embodiment, a medical image recognition apparatus is proposed, comprising:
An obtaining module 602, configured to obtain an original endoscope image and endoscope report information, and extract pathological information from the endoscope report information;
a classification module 604, configured to perform classification and identification on the original endoscopic image to obtain a white light image and an NBI image;
a first extraction module 606, configured to input the white light image into a trained first neural network model for feature extraction, so as to obtain a first feature set, where the first feature set includes a plurality of first features, the first neural network model includes a plurality of first neural network submodels, and one first neural network submodel corresponds to one first feature;
a second extraction module 608, configured to input the NBI image into a trained second neural network model for feature extraction, so as to obtain a second feature;
and the identification module 610 is configured to perform identification in a trained machine learning classifier according to the pathological information, the first feature set, and the second feature, so as to obtain an identification result of the original endoscopic image.
In one embodiment, the medical image recognition apparatus further comprises:
the first control unit is used for carrying out screen turning processing on an electronic screen of the display equipment;
or the second control unit is used for controlling the display equipment to display preset reminding contents in a preset area of the electronic screen.
In one embodiment, the medical image recognition apparatus further comprises:
the system comprises a sample acquisition module, a comparison module and a comparison module, wherein the sample acquisition module is used for acquiring a training sample set, and the training sample set comprises pathological information of an endoscope image, the first characteristic set, the second characteristic and an abnormal grade corresponding to the endoscope image;
and the training module is used for taking the pathological information, the first characteristic set and the second characteristic of the endoscopic image as the input of a GBDT machine learning classifier, taking the abnormal level of the endoscopic image as the expected output, and training the GBDT machine learning classifier to obtain the trained machine learning classifier.
In one embodiment, the identification module comprises:
the fusion module is used for fusing the pathological information, each first feature in the first feature set and the second feature to obtain abnormal features;
and inputting the abnormal features into the trained machine learning model for recognition to obtain the abnormal grade of the endoscope image.
In one embodiment, the first extraction module comprises:
The first extraction unit is used for extracting the characteristics of the white light image by using the first ResNet neural network model to obtain the shape characteristics;
the second extraction unit is used for extracting the characteristics of the white light image by using the second ResNet neural network model to obtain the color characteristics;
the third extraction unit is used for extracting the characteristics of the white light image by using the third ResNet neural network model to obtain the brightness characteristics;
and the fourth extraction unit is used for performing feature extraction on the white light image by using the UNet++ neural network model to obtain the contrast feature.
In one embodiment, the second extraction module comprises: the fifth extraction unit is used for extracting the features of the NBI image by using the ResNet network to obtain the texture features.
In one embodiment, the classification module comprises:
the conversion unit is used for converting each original endoscope image into an RGB image;
the pixel extraction unit is used for extracting G component pixel values corresponding to the RGB images;
the first dividing unit is used for determining the original endoscopic image with the G component pixel value larger than a preset pixel threshold value as the NBI image;
and the second dividing unit is used for determining the original endoscope image of which the G component pixel value is less than or equal to the preset pixel threshold value as the white light image.
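The G-component routing performed by the classification module can be sketched as follows. The threshold value is an assumption for illustration (in practice it would be tuned on labelled frames), and the per-image mean of the G channel is used here as a simplification of the per-pixel comparison.

```python
# Sketch of the classification module: each frame is routed to the NBI
# branch or the white-light branch by comparing its G-component value
# against a preset threshold (NBI illumination boosts the green channel).
import numpy as np

G_THRESHOLD = 0.5  # assumed preset pixel threshold, images scaled to [0, 1]

def classify_frame(rgb_image):
    """Return 'NBI' if the mean G-component exceeds the threshold,
    else 'white_light'."""
    g_mean = rgb_image[:, :, 1].mean()
    return "NBI" if g_mean > G_THRESHOLD else "white_light"

greenish = np.zeros((8, 8, 3)); greenish[:, :, 1] = 0.9  # NBI-like frame
warm = np.full((8, 8, 3), 0.4); warm[:, :, 0] = 0.8      # white-light-like frame
```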
FIG. 7 is a diagram illustrating the internal structure of a computer device in one embodiment. The computer device may specifically be a server, including, but not limited to, a high-performance computer and a cluster of high-performance computers. As shown in FIG. 7, the computer device includes a processor, a memory, and a network interface connected by a system bus. The memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system and may also store a computer program which, when executed by the processor, causes the processor to implement the medical image recognition method. The internal memory may also store a computer program which, when executed by the processor, causes the processor to perform the medical image recognition method. Those skilled in the art will appreciate that the structure shown in FIG. 7 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
In one embodiment, the medical image recognition method provided by the present application may be implemented in the form of a computer program that is executable on a computer device as shown in FIG. 7. The memory of the computer device may store the individual program modules that make up the medical image recognition apparatus, for example, an acquisition module 602, a classification module 604, and a recognition module 610.
A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the following steps when executing the computer program: acquiring an original endoscope image and endoscope report information, and extracting pathological information from the endoscope report information; classifying and identifying the original endoscope image to obtain a white light image and an NBI image; inputting the white light image into a trained first neural network model for feature extraction to obtain a first feature set, wherein the first feature set comprises a plurality of first features, the first neural network model comprises a plurality of first neural network submodels, and one first neural network submodel corresponds to one first feature; inputting the NBI image into a trained second neural network model for feature extraction to obtain a second feature; and identifying in a trained machine learning classifier according to the pathological information, the first feature set and the second feature to obtain a recognition result of the original endoscope image.
In one embodiment, before the step of inputting the pathological information, the first feature set and the second feature into the trained machine learning classifier for recognition to obtain the recognition result of the original endoscope image, the method further includes: acquiring a training sample set, wherein the training sample set comprises pathological information of an endoscope image, the first feature set, the second feature and an abnormal grade corresponding to the endoscope image; and taking the pathological information, the first feature set and the second feature of the endoscope image as the input of a GBDT machine learning classifier, taking the abnormal grade of the endoscope image as the expected output, and training the GBDT machine learning classifier to obtain the trained machine learning classifier.
In one embodiment, the step of identifying in the trained machine learning classifier according to the pathological information, the first feature set and the second feature to obtain the recognition result of the original endoscope image includes: fusing the pathological information, each first feature in the first feature set and the second feature to obtain abnormal features; and inputting the abnormal features into the trained machine learning classifier for recognition to obtain the abnormal grade of the endoscope image.
In one embodiment, the step of inputting the white light image into the trained first neural network model for feature extraction to obtain the first feature set includes: respectively taking the white light image as the input of each first neural network submodel, and obtaining the output of each first neural network submodel as the corresponding first feature.
In one embodiment, the first feature is one of a shape feature, a color feature, a brightness feature and a contrast feature of the white light image, and the first neural network submodel is a corresponding one of a first ResNet neural network model, a second ResNet neural network model, a third ResNet neural network model and a UNet + + neural network model; the step of respectively taking the white light image as the input of each first neural network submodel and obtaining the output of each first neural network submodel as the corresponding first feature includes: performing feature extraction on the white light image by using the first ResNet neural network model to obtain the shape feature; performing feature extraction on the white light image by using the second ResNet neural network model to obtain the color feature; performing feature extraction on the white light image by using the third ResNet neural network model to obtain the brightness feature; and performing feature extraction on the white light image by using the UNet + + neural network model to obtain the contrast feature.
In one embodiment, the second feature is a texture feature, and the trained second neural network model comprises a ResNet network; the step of inputting the NBI image into the trained second neural network model for feature extraction to obtain the second feature includes: performing feature extraction on the NBI image by using the ResNet network to obtain the texture feature.
In one embodiment, the step of classifying and identifying the original endoscope image to obtain the white light image and the NBI image includes: converting each original endoscope image into an RGB image; extracting the G component pixel value corresponding to each RGB image; determining an original endoscope image with a G component pixel value larger than a preset pixel threshold value as the NBI image; and determining an original endoscope image with a G component pixel value less than or equal to the preset pixel threshold value as the white light image.
A computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the following steps: acquiring an original endoscope image and endoscope report information, and extracting pathological information from the endoscope report information; classifying and identifying the original endoscope image to obtain a white light image and an NBI image; inputting the white light image into a trained first neural network model for feature extraction to obtain a first feature set, wherein the first feature set comprises a plurality of first features, the first neural network model comprises a plurality of first neural network submodels, and one first neural network submodel corresponds to one first feature; inputting the NBI image into a trained second neural network model for feature extraction to obtain a second feature; and identifying in a trained machine learning classifier according to the pathological information, the first feature set and the second feature to obtain a recognition result of the original endoscope image.
In one embodiment, before the step of inputting the pathological information, the first feature set and the second feature into the trained machine learning classifier for recognition to obtain the recognition result of the original endoscope image, the method further includes: acquiring a training sample set, wherein the training sample set comprises pathological information of an endoscope image, the first feature set, the second feature and an abnormal grade corresponding to the endoscope image; and taking the pathological information, the first feature set and the second feature of the endoscope image as the input of a GBDT machine learning classifier, taking the abnormal grade of the endoscope image as the expected output, and training the GBDT machine learning classifier to obtain the trained machine learning classifier.
In one embodiment, the step of identifying in the trained machine learning classifier according to the pathological information, the first feature set and the second feature to obtain the recognition result of the original endoscope image includes: fusing the pathological information, each first feature in the first feature set and the second feature to obtain abnormal features; and inputting the abnormal features into the trained machine learning classifier for recognition to obtain the abnormal grade of the endoscope image.
In one embodiment, the step of inputting the white light image into the trained first neural network model for feature extraction to obtain the first feature set includes: respectively taking the white light image as the input of each first neural network submodel, and obtaining the output of each first neural network submodel as the corresponding first feature.
In one embodiment, the first feature is one of a shape feature, a color feature, a brightness feature and a contrast feature of the white light image, and the first neural network submodel is a corresponding one of a first ResNet neural network model, a second ResNet neural network model, a third ResNet neural network model and a UNet + + neural network model; the step of respectively taking the white light image as the input of each first neural network submodel and obtaining the output of each first neural network submodel as the corresponding first feature includes: performing feature extraction on the white light image by using the first ResNet neural network model to obtain the shape feature; performing feature extraction on the white light image by using the second ResNet neural network model to obtain the color feature; performing feature extraction on the white light image by using the third ResNet neural network model to obtain the brightness feature; and performing feature extraction on the white light image by using the UNet + + neural network model to obtain the contrast feature.
In one embodiment, the second feature is a texture feature, and the trained second neural network model comprises a ResNet network; the step of inputting the NBI image into the trained second neural network model for feature extraction to obtain the second feature includes: performing feature extraction on the NBI image by using the ResNet network to obtain the texture feature.
In one embodiment, the step of classifying and identifying the original endoscope image to obtain the white light image and the NBI image includes: converting each original endoscope image into an RGB image; extracting the G component pixel value corresponding to each RGB image; determining an original endoscope image with a G component pixel value larger than a preset pixel threshold value as the NBI image; and determining an original endoscope image with a G component pixel value less than or equal to the preset pixel threshold value as the white light image.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program; the program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the above method embodiments. Any reference to memory, storage, database or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, any such combination should be considered within the scope of this specification as long as there is no contradiction in the combination.
The above-mentioned embodiments express only several implementations of the present application, and their description is specific and detailed, but should not therefore be construed as limiting the scope of the present application. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A medical image recognition method, comprising:
acquiring an original endoscope image and endoscope report information, and extracting pathological information from the endoscope report information;
classifying and identifying the original endoscope image to obtain a white light image and an NBI image;
inputting the white light image into a trained first neural network model for feature extraction to obtain a first feature set, wherein the first feature set comprises a plurality of first features, the first neural network model comprises a plurality of first neural network submodels, and one first neural network submodel corresponds to one first feature;
inputting the NBI image into a trained second neural network model for feature extraction to obtain a second feature;
and identifying in a trained machine learning classifier according to the pathological information, the first feature set and the second feature to obtain a recognition result of the original endoscope image.
2. The medical image recognition method according to claim 1, wherein before the step of inputting the pathological information, the first feature set and the second feature into a trained machine learning classifier for recognition to obtain the recognition result of the original endoscopic image, the method further comprises:
acquiring a training sample set, wherein the training sample set comprises pathological information of an endoscope image, the first feature set, the second feature and an abnormal grade corresponding to the endoscope image;
and taking the pathological information, the first feature set and the second feature of the endoscope image as the input of a GBDT machine learning classifier, taking the abnormal grade of the endoscope image as the expected output, and training the GBDT machine learning classifier to obtain the trained machine learning classifier.
3. The medical image recognition method according to claim 2, wherein the step of obtaining the recognition result of the original endoscopic image by using a trained machine learning classifier for recognition according to the pathological information, the first feature set and the second feature comprises:
fusing the pathological information, each first feature in the first feature set and the second feature to obtain abnormal features;
and inputting the abnormal features into the trained machine learning classifier for recognition to obtain the abnormal grade of the endoscope image.
4. The medical image recognition method of claim 1, wherein the step of inputting the white light image into a trained first neural network model for feature extraction to obtain a first feature set comprises:
and respectively taking the white light image as the input of each first neural network submodel, and obtaining the output of each first neural network submodel as the corresponding first feature.
5. The medical image recognition method according to claim 4, wherein the first feature is one of a shape feature, a color feature, a brightness feature and a contrast feature of the white light image, and the first neural network submodel is a corresponding one of a first ResNet neural network model, a second ResNet neural network model, a third ResNet neural network model and a UNet + + neural network model; the step of respectively taking the white light image as the input of each first neural network submodel and obtaining the output of each first neural network submodel as the corresponding first feature comprises:
performing feature extraction on the white light image by using the first ResNet neural network model to obtain the shape feature;
performing feature extraction on the white light image by using the second ResNet neural network model to obtain the color feature;
performing feature extraction on the white light image by using the third ResNet neural network model to obtain the brightness feature;
and performing feature extraction on the white light image by using the UNet + + neural network model to obtain the contrast feature.
6. The medical image recognition method of claim 1, wherein the second feature is a texture feature, and the trained second neural network model comprises a ResNet network; the step of inputting the NBI image into the trained second neural network model for feature extraction to obtain the second feature comprises:
and performing feature extraction on the NBI image by using the ResNet network to obtain the texture feature.
7. The medical image recognition method of claim 4, wherein the step of classifying and recognizing the original endoscopic image to obtain a white light image and an NBI image comprises:
converting each original endoscope image into an RGB image;
extracting the G component pixel value corresponding to each RGB image;
determining the original endoscopic image with the G component pixel value larger than a preset pixel threshold value as the NBI image;
and determining the original endoscopic image with the G component pixel value smaller than or equal to a preset pixel threshold value as the white light image.
8. A medical image recognition apparatus, characterized by comprising:
the acquisition module is used for acquiring an original endoscope image and endoscope report information, and extracting pathological information from the endoscope report information;
the classification module is used for classifying and identifying the original endoscope image to obtain a white light image and an NBI image;
the first extraction module is used for inputting the white light image into a trained first neural network model for feature extraction to obtain a first feature set, wherein the first feature set comprises a plurality of first features, the first neural network model comprises a plurality of first neural network submodels, and one first neural network submodel corresponds to one first feature;
the second extraction module is used for inputting the NBI image into a trained second neural network model for feature extraction to obtain a second feature;
and the recognition module is used for recognizing in a trained machine learning classifier according to the pathological information, the first feature set and the second feature to obtain a recognition result of the original endoscope image.
9. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the medical image recognition method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of a medical image recognition method as claimed in any one of claims 1 to 7.
CN202111194907.2A 2021-10-14 2021-10-14 Medical image recognition method and device, computer equipment and storage medium Active CN113642537B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111194907.2A CN113642537B (en) 2021-10-14 2021-10-14 Medical image recognition method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111194907.2A CN113642537B (en) 2021-10-14 2021-10-14 Medical image recognition method and device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113642537A true CN113642537A (en) 2021-11-12
CN113642537B CN113642537B (en) 2022-01-04

Family

ID=78426731

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111194907.2A Active CN113642537B (en) 2021-10-14 2021-10-14 Medical image recognition method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113642537B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113989125A (en) * 2021-12-27 2022-01-28 武汉楚精灵医疗科技有限公司 Method and device for splicing endoscope images, computer equipment and storage medium
CN113989284A (en) * 2021-12-29 2022-01-28 广州思德医疗科技有限公司 Helicobacter pylori assists detecting system and detection device
CN114241505A (en) * 2021-12-20 2022-03-25 苏州阿尔脉生物科技有限公司 Method and device for extracting chemical structure image, storage medium and electronic equipment
CN114565611A (en) * 2022-04-28 2022-05-31 武汉大学 Medical information acquisition method and related equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109117864A (en) * 2018-07-13 2019-01-01 华南理工大学 Coronary heart disease risk prediction technique, model and system based on heterogeneous characteristic fusion
CN109523532A (en) * 2018-11-13 2019-03-26 腾讯科技(深圳)有限公司 Image processing method, device, computer-readable medium and electronic equipment
CN109670532A (en) * 2018-11-23 2019-04-23 腾讯科技(深圳)有限公司 Abnormality recognition method, the apparatus and system of organism organ-tissue image
CN110495847A (en) * 2019-08-23 2019-11-26 重庆天如生物科技有限公司 Alimentary canal morning cancer assistant diagnosis system and check device based on deep learning
US20210153808A1 (en) * 2018-06-22 2021-05-27 Ai Medical Service Inc. Diagnostic assistance method, diagnostic assistance system, diagnostic assistance program, and computer-readable recording medium storing therein diagnostic assistance program for disease based on endoscopic image of digestive organ
US11024031B1 (en) * 2020-02-13 2021-06-01 Olympus Corporation System and method for diagnosing severity of gastric cancer
CN113496489A (en) * 2021-09-06 2021-10-12 北京字节跳动网络技术有限公司 Training method of endoscope image classification model, image classification method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
BARBARA POPEK et al.: "Clinical experience of narrow band imaging (NBI) usage in diagnosis of laryngeal lesions", Otolaryngologia Polska *
LI Xia, YU Honggang: "New progress in endoscopic diagnosis of early gastric cancer", Journal of Hainan Medical University *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114241505A (en) * 2021-12-20 2022-03-25 苏州阿尔脉生物科技有限公司 Method and device for extracting chemical structure image, storage medium and electronic equipment
CN114241505B (en) * 2021-12-20 2023-04-07 苏州阿尔脉生物科技有限公司 Method and device for extracting chemical structure image, storage medium and electronic equipment
CN113989125A (en) * 2021-12-27 2022-01-28 武汉楚精灵医疗科技有限公司 Method and device for splicing endoscope images, computer equipment and storage medium
CN113989125B (en) * 2021-12-27 2022-04-12 武汉楚精灵医疗科技有限公司 Method and device for splicing endoscope images, computer equipment and storage medium
CN113989284A (en) * 2021-12-29 2022-01-28 广州思德医疗科技有限公司 Helicobacter pylori assists detecting system and detection device
CN114565611A (en) * 2022-04-28 2022-05-31 武汉大学 Medical information acquisition method and related equipment
CN114565611B (en) * 2022-04-28 2022-07-19 武汉大学 Medical information acquisition method and related equipment

Also Published As

Publication number Publication date
CN113642537B (en) 2022-01-04

Similar Documents

Publication Publication Date Title
CN113642537B (en) Medical image recognition method and device, computer equipment and storage medium
JP7085007B2 (en) Image recognition methods, computer devices and programs
CN111161290B (en) Image segmentation model construction method, image segmentation method and image segmentation system
Pogorelov et al. Deep learning and hand-crafted feature based approaches for polyp detection in medical videos
KR102311654B1 (en) Smart skin disease discrimination platform system constituting API engine for discrimination of skin disease using artificial intelligence deep run based on skin image
CN110263755B (en) Eye ground image recognition model training method, eye ground image recognition method and eye ground image recognition device
CN109390053B (en) Fundus image processing method, fundus image processing apparatus, computer device, and storage medium
CN112017185B (en) Focus segmentation method, device and storage medium
WO2019024568A1 (en) Ocular fundus image processing method and apparatus, computer device, and storage medium
CN111653365A (en) Nasopharyngeal carcinoma auxiliary diagnosis model construction and auxiliary diagnosis method and system
WO2023045231A1 (en) Method and apparatus for facial nerve segmentation by decoupling and divide-and-conquer
CN112508010A (en) Method, system, device and medium for identifying digital pathological section target area
JPWO2020022027A1 (en) Learning device and learning method
CN112419295A (en) Medical image processing method, apparatus, computer device and storage medium
CN117274270B (en) Digestive endoscope real-time auxiliary system and method based on artificial intelligence
CN113129293A (en) Medical image classification method, medical image classification device, computer equipment and storage medium
Junayed et al. ScarNet: development and validation of a novel deep CNN model for acne scar classification with a new dataset
CN113781488A (en) Tongue picture image segmentation method, apparatus and medium
Włodarczyk et al. Estimation of preterm birth markers with U-Net segmentation network
CN105869151B (en) Tongue segmentation and tongue fur tongue nature separation method
CN112819834B (en) Method and device for classifying stomach pathological images based on artificial intelligence
CN113705595A (en) Method, device and storage medium for predicting degree of abnormal cell metastasis
CN110276333B (en) Eye ground identity recognition model training method, eye ground identity recognition method and equipment
CN110334575B (en) Fundus picture recognition method, device, equipment and storage medium
CN111814738A (en) Human face recognition method, human face recognition device, computer equipment and medium based on artificial intelligence

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220624

Address after: 430000 No. 03, floor 5, building B10, phase I, Wuhan hi tech medical equipment Park, No. 818, Gaoxin Avenue, Wuhan East Lake New Technology Development Zone, Wuhan, Hubei (Wuhan area, free trade zone)

Patentee after: Wuhan Chujingling Medical Technology Co.,Ltd.

Address before: 430072 no.299 Bayi Road, Luojiashan street, Wuhan City, Hubei Province

Patentee before: WUHAN University
