CN112668668A - Postoperative medical image evaluation method and device, computer equipment and storage medium - Google Patents

Postoperative medical image evaluation method and device, computer equipment and storage medium

Info

Publication number
CN112668668A
CN112668668A (application number CN202110097980.1A)
Authority
CN
China
Prior art keywords
evaluation
image
medical image
rule
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110097980.1A
Other languages
Chinese (zh)
Other versions
CN112668668B (en)
Inventor
张浩曦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Kemosheng Medical Technology Co ltd
Original Assignee
Sichuan Keshijian Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Keshijian Technology Co ltd filed Critical Sichuan Keshijian Technology Co ltd
Priority to CN202110097980.1A priority Critical patent/CN112668668B/en
Priority claimed from CN202110097980.1A external-priority patent/CN112668668B/en
Publication of CN112668668A publication Critical patent/CN112668668A/en
Application granted granted Critical
Publication of CN112668668B publication Critical patent/CN112668668B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)
  • Medical Treatment And Welfare Office Work (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

The invention relates to the technical field of image processing, and discloses a postoperative medical image evaluation method and device, computer equipment and a storage medium. In the method, the same medical image of a target tooth after root canal treatment is input both into a trained rule-embedded convolutional neural network (CNN) model in a rule channel and into a residual network (ResNet) model in an image channel, yielding a corresponding rule feature vector and image feature vector. The vector multiplication result of the two channels is then input into a full connection layer, and finally the output of the full connection layer is input into an output layer adopting a Softmax function, giving the probability of the medical image being classified under each evaluation label.

Description

Postoperative medical image evaluation method and device, computer equipment and storage medium
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a method and a device for evaluating a medical image after surgery, computer equipment and a storage medium.
Background
Deep learning, which requires large amounts of data and computing resources, has become increasingly popular with the development of computing hardware and big data. Deep-learning-based approaches have significantly improved performance on natural language processing (NLP) and computer vision (CV) tasks. For example, the ResNet and GoogLeNet models, which improve upon the AlexNet model, have made great breakthroughs in image recognition. In NLP, BERT achieves state-of-the-art performance on many tasks, and its popularity has made pre-trained models commonplace.
In the field of medical image recognition and analysis (e.g., CT, MRI, X-ray and RGB images, which may relate to various diseases), the most popular methods at present are likewise based on deep learning models, such as convolutional neural networks (CNN). Common tasks in medical image recognition and analysis include classification, detection and segmentation. In classification, a multi-scale CNN has been proposed to classify lung nodules, a 3D CNN to detect Alzheimer's disease, and a CNN + RNN (recurrent neural network) structure to detect whether an eye has a cataract. In detection, fully convolutional networks have been used for real-time detection and localization of the fetus, and fast CNN-based methods with selective data sampling have been successfully applied to the detection of color images. In segmentation, the U-Net neural network has been proposed and applied to the segmentation of biological images.
However, the drawbacks of deep learning have also become apparent: (1) over-dependence on data, so that medical image recognition and analysis for rare diseases is limited in robustness and accuracy by the lack of sufficient samples; for example, in root canal treatment (also called endodontic therapy, a procedure for treating pulp necrosis and root canal infection), if an existing convolutional neural network CNN is used to evaluate X-ray images of post-operative teeth, the accuracy of the evaluation classification results is only 65.4%; (2) lack of interpretability, i.e., the black-box nature of deep learning often leaves doctors doubtful of the evaluation results.
Disclosure of Invention
In order to solve the problems of over-dependence on data and lack of interpretability that existing deep learning methods exhibit in the root canal treatment evaluation task, the invention aims to provide a postoperative medical image evaluation method and device, a computer device and a storage medium that embed postoperative medical image evaluation rules in a neural network, so that the evaluation results are interpretable and robust on small data sets, and the accuracy, precision and recall of the evaluation classification results are significantly improved compared with the convolutional neural network CNN.
In a first aspect, the present invention provides a method for evaluating a medical image after surgery, comprising:
acquiring a medical image of a target tooth after a root canal treatment;
inputting the medical image into a residual error network ResNet model, and extracting an image characteristic vector through the residual error network ResNet model;
inputting the medical image into a trained rule embedded Convolutional Neural Network (CNN) model, and outputting a rule feature vector corresponding to at least one evaluation rule through the rule embedded Convolutional Neural Network (CNN) model, wherein each evaluation rule in the at least one evaluation rule corresponds to a preset evaluation score one by one;
inputting the multiplication result of the image feature vector and the rule feature vector into a full connection layer;
and inputting the output result of the full connection layer into an output layer adopting a Softmax function to obtain the probability of the medical image classification on each evaluation label.
Based on the above invention, a technical scheme is provided for automatically evaluating a medical image of a target tooth after root canal treatment. The same postoperative medical image of the target tooth is input both into the trained rule-embedded convolutional neural network CNN model in the rule channel and into the residual network ResNet model in the image channel to obtain a corresponding rule feature vector and image feature vector; the vector multiplication result of the two channels is then input into a full connection layer, and finally the output of the full connection layer is input into an output layer adopting a Softmax function to obtain the probability of the medical image being classified under each evaluation label. By embedding the postoperative medical image evaluation rules in the neural network in this way, the evaluation results become interpretable and robust on small data sets, and the accuracy, precision and recall of the evaluation classification results are significantly improved compared with the convolutional neural network CNN.
In one possible design, obtaining a medical image of a target tooth after a endodontic procedure includes:
acquiring a dental film image shot after a root canal treatment;
carrying out tooth identification and image segmentation processing on the dental film image to obtain a medical image of at least one tooth;
a medical image of the target tooth is determined from the medical images of the at least one tooth.
In one possible design, the dental film image is subjected to tooth recognition and image segmentation processing to obtain a medical image of at least one tooth, including:
and carrying out tooth identification and image segmentation processing on the dental film image by using a YOLO algorithm to obtain a medical image of the at least one tooth.
In one possible design, the residual network ResNet model includes five residual blocks with 16, 32, 64, 128 and 256 convolution kernels, respectively, wherein each of the residual blocks includes a batch normalization layer and three convolution layers, the size of the convolution kernels is 5 × 5, and the convolution stride is 3.
In one possible design, the optimizer adopted when compiling the residual network ResNet model is the adaptive moment estimation (Adam) optimizer.
In one possible design, before inputting the medical image into the trained rule-embedded convolutional neural network CNN model, the method further includes:
acquiring a plurality of post-operation tooth sample images and evaluation labels in one-to-one correspondence with the post-operation tooth sample images, wherein the evaluation labels are classified and determined based on post-operation evaluation total scores of the corresponding post-operation tooth sample images, and the post-operation evaluation total scores are obtained by scoring the post-operation tooth sample images according to the at least one evaluation rule and corresponding preset evaluation scores;
inputting the plurality of post-operation tooth sample images and the evaluation labels into the rule embedded Convolutional Neural Network (CNN) model for training to generate rule characteristics corresponding to the at least one evaluation rule;
and when the fitting degree of the rule features and the evaluation labels reaches a preset condition, stopping training to obtain the trained rule embedded convolutional neural network CNN model.
In one possible design, the number of iterations of the rule-embedded convolutional neural network CNN model is 100, the batch size is 32, the learning rate is 0.001, and a mean square error (MSE) function is selected as the loss function of the model before training.
In a second aspect, the invention provides a post-operative medical image evaluation device, which comprises an image acquisition module, an image channel module, a rule channel module, a multiplication input module and a result output module;
the image acquisition module is used for acquiring a medical image of a target tooth after a root canal treatment;
the image channel module is in communication connection with the image acquisition module and is used for inputting the medical image into a residual error network ResNet model and extracting an image characteristic vector through the residual error network ResNet model;
the rule channel module is in communication connection with the image acquisition module, and is used for inputting the medical image into a trained rule embedded Convolutional Neural Network (CNN) model and outputting a rule feature vector corresponding to at least one evaluation rule through the rule embedded Convolutional Neural Network (CNN) model, wherein each evaluation rule in the at least one evaluation rule corresponds to a preset evaluation score one by one;
the multiplication input module is in communication connection with the image channel module and the rule channel module, respectively, and is used for inputting a multiplication result of the image feature vector and the rule feature vector into a full connection layer;
the result output module is in communication connection with the multiplication input module and is used for inputting the output result of the full connection layer into an output layer adopting a Softmax function to obtain the probability of the medical image classification on each evaluation label.
In a third aspect, the present invention provides a computer device, comprising a memory and a processor, wherein the memory is used for storing a computer program, and the processor is used for reading the computer program and executing the post-operative medical image evaluation method according to the first aspect or any one of the possible designs of the first aspect.
In a fourth aspect, the present invention provides a storage medium having stored thereon instructions for performing the method for post-operative medical image assessment as described in the first aspect or any one of the possible designs of the first aspect, when the instructions are run on a computer.
In a fifth aspect, the present invention provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method for post-operative medical image assessment as described above in the first aspect or any one of the possible designs of the first aspect.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic flow chart of the method for evaluating medical images after operation according to the present invention.
Fig. 2 is a schematic architecture diagram of an evaluation network model in the post-operation medical image evaluation method according to the present invention.
Fig. 3 is a schematic structural diagram of the postoperative medical image evaluation device provided by the present invention.
Fig. 4 is a schematic structural diagram of a computer device provided by the present invention.
Detailed Description
The invention is further described with reference to the following figures and specific embodiments. It should be noted that the description of the embodiments is provided to help understanding of the present invention, but the present invention is not limited thereto. Specific structural and functional details disclosed herein are merely illustrative of example embodiments of the invention. This invention may, however, be embodied in many alternate forms and should not be construed as limited to the embodiments set forth herein.
It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of example embodiments of the present invention.
It should be understood that, for the term "and/or" as may appear herein, it merely describes an association relationship between associated objects, meaning that three relationships may exist; for example, A and/or B may mean: A exists alone, B exists alone, or A and B exist at the same time. For the term "/and" as may appear herein, it describes another association relationship, meaning that two relationships may exist; for example, A/and B may mean: A exists alone, or A and B exist at the same time. In addition, for the character "/" that may appear herein, it generally means that the former and latter associated objects are in an "or" relationship.
It will be understood that when an element is referred to herein as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may be present. Conversely, if an element is referred to herein as being "directly connected" or "directly coupled" to another element, it is intended that no intervening elements are present. In addition, other words used to describe the relationship between elements should be interpreted in a similar manner (e.g., "between … …" versus "directly between … …", "adjacent" versus "directly adjacent", etc.).
It is to be understood that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments of the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes" and/or "including," when used herein, specify the presence of stated features, quantities, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, quantities, steps, operations, elements, components, and/or groups thereof.
It should also be noted that, in some alternative designs, the functions/acts noted may occur out of the order noted in the figures. For example, two figures shown in succession may, in fact, be executed substantially concurrently, or the figures may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
It should be understood that specific details are provided in the following description to facilitate a thorough understanding of example embodiments. However, it will be understood by those of ordinary skill in the art that the example embodiments may be practiced without these specific details. For example, systems may be shown in block diagrams in order not to obscure the examples in unnecessary detail. In other instances, well-known processes, structures and techniques may be shown without unnecessary detail in order to avoid obscuring example embodiments.
As shown in fig. 1-2, the method for evaluating a postoperative medical image provided in the first aspect of the present embodiment may be, but is not limited to be, applied to a medical image processing device with certain computing resources. The method for evaluating the postoperative medical image may include, but is not limited to, the following steps S101 to S105.
S101, obtaining a medical image of a target tooth after the root canal therapy.
In step S101, since a dental film image obtained by actual shooting contains images of a plurality of teeth, an individual image of each tooth needs to be cut out of the dental film image so that postoperative medical image evaluation and classification can be performed on the target tooth after root canal treatment. That is, obtaining the medical image of the target tooth after root canal treatment includes, but is not limited to: acquiring a dental film image shot after root canal treatment; carrying out tooth identification and image segmentation processing on the dental film image to obtain a medical image of at least one tooth; and determining the medical image of the target tooth from the medical images of the at least one tooth. In the tooth identification and image segmentation process, it is preferable to apply a YOLO algorithm to the dental film image to obtain the medical image of the at least one tooth. YOLO (short for You Only Look Once) is an object detection algorithm named in a 2016 research paper by Redmon et al.; it enables the real-time object detection used in leading-edge applications such as self-driving cars, so with conventional modification and application the medical images of the individual teeth can be identified and cut out of the dental film image. In addition, the medical image of the target tooth may be determined from the medical images of the at least one tooth either by existing image recognition techniques or manually through human-computer interaction.
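As a concrete illustration of this cropping stage, the following minimal Python sketch cuts one per-tooth image out of a dental film image. It assumes a YOLO-style detector has already been trained on dental film images and is wrapped by a hypothetical detect_teeth function returning pixel-coordinate bounding boxes; neither that function nor its training is specified in the patent.

# Minimal sketch of the cropping stage of step S101. Assumption: detect_teeth
# is a hypothetical wrapper around a trained YOLO-style detector returning
# (x1, y1, x2, y2) boxes in pixel coordinates; it is not defined in the patent.
import cv2  # OpenCV, used here only for image reading and cropping


def crop_tooth_images(dental_film_path, detect_teeth):
    """Cut an individual medical image out of the dental film for each detected tooth."""
    film = cv2.imread(dental_film_path, cv2.IMREAD_GRAYSCALE)
    tooth_images = []
    for (x1, y1, x2, y2) in detect_teeth(film):
        # Each crop is one candidate medical image; the target tooth is then
        # chosen from these crops, automatically or via human-computer interaction.
        tooth_images.append(film[y1:y2, x1:x2])
    return tooth_images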
And S102, inputting the medical image into a residual error network ResNet model, and extracting an image feature vector through the residual error network ResNet model.
In step S102, the residual network in the residual network ResNet model is a convolutional neural network proposed by four researchers from Microsoft Research. The residual network is easy to optimize and can improve accuracy with considerably increased depth: because the residual blocks use skip connections, the vanishing gradient problem caused by increasing depth in a deep neural network is alleviated, each residual block consisting mainly of several convolution layers. Therefore, with conventional modification and application, the residual network ResNet model can be obtained and used to extract the image feature vector of the medical image. For example, the residual network ResNet model may include, but is not limited to, five residual blocks with 16, 32, 64, 128 and 256 convolution kernels, respectively, wherein each residual block includes a batch normalization layer and three convolution layers, the convolution kernels have a size of 5 × 5, and the convolution stride is 3. In addition, when the residual network ResNet model is compiled, the optimizer adopted is the adaptive moment estimation (Adam) optimizer.
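The image channel can be sketched in Keras as follows. This is only one plausible reading of the description: the 224 × 224 grayscale input, applying the stride-3 convolution to the first layer of each block only (so the feature map does not collapse), and the 1 × 1 projection on the shortcut are assumptions not stated in the patent.

# One possible Keras sketch of the image-channel ResNet described in step S102.
import tensorflow as tf
from tensorflow.keras import layers


def residual_block(x, filters):
    """Batch normalization followed by three 5x5 convolutions plus a projected shortcut."""
    shortcut = layers.Conv2D(filters, 1, strides=3, padding="same")(x)
    y = layers.BatchNormalization()(x)
    y = layers.Conv2D(filters, 5, strides=3, padding="same", activation="relu")(y)
    y = layers.Conv2D(filters, 5, padding="same", activation="relu")(y)
    y = layers.Conv2D(filters, 5, padding="same")(y)
    return layers.ReLU()(layers.Add()([shortcut, y]))


def build_image_channel(input_shape=(224, 224, 1)):
    inputs = layers.Input(shape=input_shape)
    x = inputs
    for filters in (16, 32, 64, 128, 256):  # the five residual blocks
        x = residual_block(x, filters)
    features = layers.GlobalAveragePooling2D()(x)  # the image feature vector
    model = tf.keras.Model(inputs, features)
    model.compile(optimizer="adam")  # adaptive moment estimation, as stated above
    return model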
S103, inputting the medical image into a trained rule embedded Convolutional Neural Network (CNN) model, and outputting a rule feature vector corresponding to at least one evaluation rule through the rule embedded Convolutional Neural Network (CNN) model, wherein each evaluation rule in the at least one evaluation rule corresponds to a preset evaluation score one by one.
In step S103, the convolutional neural network (CNN) in the rule-embedded convolutional neural network CNN model is a kind of feedforward neural network that includes convolution calculations and has a deep structure, and is one of the representative algorithms of deep learning. Convolutional neural networks have a representation learning capability and can perform shift-invariant classification of input information according to their hierarchical structure, and are therefore also called "shift-invariant artificial neural networks (SIANN)". Prior researchers have combined prior knowledge or rules with neural networks to obtain rule-embedded neural networks, where rule embedding refers to encoding rules into the neural network, so that the neural network can enhance its training effect after obtaining the rules and reduce its dependence on data to a certain extent. Therefore, through routine modification and application, the at least one evaluation rule may be encoded into the convolutional neural network to obtain the rule-embedded convolutional neural network CNN model, as shown in fig. 2, which comprises a conventional CNN module and the rule encoding; the loss between the output of the CNN module and the rule encoding drives the entire rule-embedded convolutional neural network CNN model to output a rule feature vector.
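A sketch of the rule channel follows under the same caveat: the patent does not give the CNN backbone, so the layer sizes below are assumptions. The only constraints taken from the text are that the output is a rule feature vector with one entry per evaluation rule (18 rules per table 1) and that this output is fitted against the rule encoding during training.

# One possible sketch of the rule channel of step S103. The backbone depth and
# filter counts are assumptions; NUM_RULES comes from the 18 rules of table 1.
import tensorflow as tf
from tensorflow.keras import layers

NUM_RULES = 18


def build_rule_channel(input_shape=(224, 224, 1)):
    inputs = layers.Input(shape=input_shape)
    x = inputs
    for filters in (32, 64, 128):  # assumed backbone
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.MaxPooling2D()(x)
    x = layers.Flatten()(x)
    # One unit per evaluation rule; sigmoid keeps each rule feature in [0, 1].
    rule_features = layers.Dense(NUM_RULES, activation="sigmoid")(x)
    return tf.keras.Model(inputs, rule_features)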
Before the step S103, in order to train the rule embedded convolutional neural network CNN model, the following steps S1031 to S1033 may be included, but are not limited to be included.
And S1031, obtaining a plurality of post-operation tooth sample images and evaluation labels in one-to-one correspondence with the post-operation tooth sample images, wherein the evaluation labels are determined in a classified manner based on post-operation evaluation total scores corresponding to the post-operation tooth sample images, and the post-operation evaluation total scores are obtained by scoring the post-operation tooth sample images according to the at least one evaluation rule and corresponding preset evaluation scores.
In step S1031, the at least one evaluation rule may be determined according to the knowledge of dental professionals. For example, the 18 rules shown in table 1 below may be specified to evaluate the quality of a dentist's root canal treatment, and the relationship between each rule and its corresponding preset evaluation score may be as shown in table 1:
TABLE 1 corresponding relationship table of evaluation rule and preset evaluation score
[Table 1 is provided as an image in the original publication; it lists the 18 evaluation rules and their corresponding preset evaluation scores.]
According to table 1 above, post-operation tooth sample images can be scored after root canal treatment, and evaluation labels can be assigned based on the total evaluation score, for example into the following three categories: a total score greater than or equal to 80 is labeled good; a total score between 50 and 80 is labeled average; a total score below 50 is labeled poor. Since the number of dental film images from root canal treatment is very small, the inventor collected 1231 post-operation tooth sample images from a dental clinic and enlarged the data set with common data augmentation methods (such as translation, rotation, mirroring, etc.), finally obtaining 3516 post-operation tooth sample images, of which 1957 are labeled good, 994 are labeled average, and 565 are labeled poor.
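The label assignment just described can be written as a small helper; treating a score of exactly 80 as good and exactly 50 as average is an assumption, since the text does not state how boundary scores are handled.

def label_from_total_score(total_score: float) -> str:
    """Map a post-operation total evaluation score to an evaluation label."""
    if total_score >= 80:
        return "good"
    if total_score >= 50:
        return "average"
    return "poor"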
S1032, inputting the plurality of post-operation tooth sample images and the evaluation labels into the rule embedded Convolutional Neural Network (CNN) model for training to generate rule characteristics corresponding to the at least one evaluation rule;
s1033, when the fitting degree of the rule features and the evaluation labels reaches a preset condition, stopping training to obtain the trained rule embedded convolutional neural network CNN model.
Through the training process described in the foregoing steps S1031 to S1033, the convolutional neural network can be used to preprocess the post-operation tooth sample images in the sample set to obtain a series of rule features; after the rule features and the image features are weighted, the rule features can be fully utilized to obtain the final output. In addition, for example, the number of iterations of the rule-embedded convolutional neural network CNN model is 100, the batch size is 32, the learning rate is 0.001, and a mean square error (MSE) function is selected as the loss function of the model before training.
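A sketch of this training configuration is given below, using the hyperparameters stated above. Here rule_channel, sample_images and rule_score_targets are placeholders for the model of step S103 and the data prepared in steps S1031 to S1032, and using the Adam optimizer for this channel is an assumption (the patent specifies Adam only for the ResNet model).

import tensorflow as tf

# Hyperparameters as stated: 100 iterations, batch size 32, learning rate 0.001, MSE loss.
rule_channel.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),  # optimizer choice assumed
    loss="mse",  # mean square error, as selected before training
)
rule_channel.fit(
    sample_images,       # post-operation tooth sample images
    rule_score_targets,  # encoded per-rule evaluation scores
    epochs=100,
    batch_size=32,
)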
And S104, inputting the multiplication result of the image feature vector and the rule feature vector into a full connection layer.
In step S104, each neuron in the full connection layer is fully connected to all neurons in the previous layer, so as to integrate the class-discriminative local information from the preceding convolutional or pooling layers. To improve network performance, a ReLU function is generally adopted as the activation function of each neuron in the full connection layer.
And S105, inputting the output result of the full connection layer into an output layer adopting a Softmax function to obtain the probability of the medical image classification on each evaluation label.
In step S105, the Softmax function, also called the normalized exponential function, is a generalization of the binary classification sigmoid function to multiple classes, and its purpose is to present the multi-class result in the form of probabilities. Therefore, through the output layer adopting the Softmax function, the probability of the medical image being classified under each evaluation label can be obtained and used as the evaluation result of the postoperative medical image.
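Steps S104 and S105 can then be sketched as the fusion of the two channels below. The element-wise multiplication, the projection of the 18-dimensional rule vector to the width of the image feature vector, and the hidden-layer width are assumptions; the patent states only that the vector multiplication result is fed to a full connection layer and then to a Softmax output over the evaluation labels.

from tensorflow import keras
from tensorflow.keras import layers


def build_evaluation_network(image_channel, rule_channel, num_labels=3):
    image_features = image_channel.output  # e.g. a 256-dimensional image feature vector
    rule_features = rule_channel.output    # the 18-dimensional rule feature vector
    # Project the rule vector to the same width so the element-wise multiply is defined.
    projected_rules = layers.Dense(image_features.shape[-1])(rule_features)
    fused = layers.Multiply()([image_features, projected_rules])
    hidden = layers.Dense(128, activation="relu")(fused)  # the full connection layer
    outputs = layers.Dense(num_labels, activation="softmax")(hidden)  # probability per label
    return keras.Model(inputs=[image_channel.input, rule_channel.input], outputs=outputs)

The resulting model could then be compiled with a categorical cross-entropy loss and trained end to end, although the patent does not describe such joint training explicitly.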
Based on the postoperative medical image evaluation method described in the foregoing steps S101 to S105, not only can the functions of educating doctors and assisting treatment be realized, but an evaluation network model for postoperative root canal treatment medical images can also be constructed by combining the rule-embedded convolutional neural network model with the ResNet model, as shown in fig. 2. Through this evaluation network model, the rules can be fully utilized, and the expressive capacity and learning performance of the neural network can be enhanced. In addition, after testing on the small data sample containing 3516 post-operation tooth sample images, the performance comparison between the evaluation network model and the convolutional neural network CNN model on the evaluation classification results is shown in table 2 below:
table 2 comparison table of performance of evaluation network model and CNN model of the present invention in evaluation of classification results
[Table 2 is provided as an image in the original publication; it compares the accuracy, precision and recall of the evaluation network model and the CNN model on the evaluation classification results.]
As can be seen from table 2, the evaluation network model of the present invention achieves an accuracy of 79.2% on the evaluation classification results, significantly higher than the 65.4% of the CNN model, and its precision and recall on each evaluation label are far better than those of the CNN model. That is, by embedding the post-root-canal-treatment medical image evaluation rules into the neural network, the evaluation results become interpretable and robust on small data sets.
As shown in fig. 3, a second aspect of this embodiment provides a virtual device for implementing the method for evaluating a medical image after surgery as designed in any one of the first aspect or the first aspect, including an image acquisition module, an image channel module, a rule channel module, a multiplication input module, and a result output module;
the image acquisition module is used for acquiring a medical image of a target tooth after a root canal treatment;
the image channel module is in communication connection with the image acquisition module and is used for inputting the medical image into a residual error network ResNet model and extracting an image characteristic vector through the residual error network ResNet model;
the rule channel module is in communication connection with the image acquisition module, and is used for inputting the medical image into a trained rule embedded Convolutional Neural Network (CNN) model and outputting a rule feature vector corresponding to at least one evaluation rule through the rule embedded Convolutional Neural Network (CNN) model, wherein each evaluation rule in the at least one evaluation rule corresponds to a preset evaluation score one by one;
the multiplication input module is in communication connection with the image channel module and the rule channel module, respectively, and is used for inputting a multiplication result of the image feature vector and the rule feature vector into a full connection layer;
the result output module is in communication connection with the multiplication input module and is used for inputting the output result of the full connection layer into an output layer adopting a Softmax function to obtain the probability of the medical image classification on each evaluation label.
For the working process, working details and technical effects of the foregoing device provided in the second aspect of this embodiment, reference may be made to the post-operative medical image evaluation method in the first aspect or any one of the possible designs in the first aspect, which is not described herein again.
As shown in fig. 4, a third aspect of the present embodiment provides a computer device for executing the postoperative medical image evaluation method according to the first aspect or any one of the possible designs of the first aspect, comprising a memory and a processor which are communicatively connected, wherein the memory is used for storing a computer program, and the processor is used for reading the computer program and executing the postoperative medical image evaluation method. For example, the memory may include, but is not limited to, a random-access memory (RAM), a read-only memory (ROM), a flash memory, a first-in first-out memory (FIFO), and/or a first-in last-out memory (FILO); the processor may be, but is not limited to, a microprocessor of the STM32F105 family. In addition, the computer device may also include, but is not limited to, a power module, a display screen, and other necessary components.
For the working process, working details and technical effects of the computer device provided in the third aspect of this embodiment, reference may be made to the first aspect or any one of the possible designs of the post-operation medical image evaluation method in the first aspect, which is not described herein again.
A fourth aspect of the present embodiment provides a storage medium storing instructions for the method for evaluating a post-operative medical image according to any one of the first aspect and the possible designs of the first aspect, that is, the storage medium stores instructions for executing the method for evaluating a post-operative medical image according to any one of the first aspect and the possible designs of the first aspect when the instructions are executed on a computer. The storage medium refers to a carrier for storing data, and may include, but is not limited to, a floppy disk, an optical disk, a hard disk, a flash Memory, a flash disk and/or a Memory Stick (Memory Stick), etc., and the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
For the working process, working details and technical effects of the storage medium provided in the fourth aspect of this embodiment, reference may be made to the first aspect or any one of the possible designs of the method for evaluating a post-operative medical image in the first aspect, which is not described herein again.
A fifth aspect of the present embodiment provides a computer program product comprising instructions which, when executed on a computer, cause the computer to perform the method for post-operative medical image assessment as set forth in the first aspect or any one of the possible designs of the first aspect. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable devices.
The embodiments described above are merely illustrative. Units described as separate components may or may not be physically separate, and components displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment. One of ordinary skill in the art can understand and implement this without inventive effort.
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: modifications may be made to the embodiments described above, or equivalents may be substituted for some of the features described. And such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Finally, it should be noted that the present invention is not limited to the above alternative embodiments, and anyone can obtain various other forms of products in light of the present invention. The above detailed description should not be taken as limiting the scope of the invention, which is defined by the claims; the description may be used to interpret the claims.

Claims (10)

1. A method for post-operative medical image assessment, comprising:
acquiring a medical image of a target tooth after a root canal treatment;
inputting the medical image into a residual error network ResNet model, and extracting an image characteristic vector through the residual error network ResNet model;
inputting the medical image into a trained rule embedded Convolutional Neural Network (CNN) model, and outputting a rule feature vector corresponding to at least one evaluation rule through the rule embedded Convolutional Neural Network (CNN) model, wherein each evaluation rule in the at least one evaluation rule corresponds to a preset evaluation score one by one;
inputting the multiplication result of the image feature vector and the rule feature vector into a full connection layer;
and inputting the output result of the full connection layer into an output layer adopting a Softmax function to obtain the probability of the medical image classification on each evaluation label.
2. The method for post-operative medical image evaluation according to claim 1, wherein obtaining a medical image of the target tooth after the endodontic procedure comprises:
acquiring a dental film image shot after a root canal treatment;
carrying out tooth identification and image segmentation processing on the dental film image to obtain a medical image of at least one tooth;
a medical image of the target tooth is determined from the medical images of the at least one tooth.
3. The method of claim 2, wherein the performing tooth recognition and image segmentation on the dental film image to obtain at least one medical image of a tooth comprises:
and carrying out tooth identification and image segmentation processing on the dental film image by using a YOLO algorithm to obtain a medical image of the at least one tooth.
4. The method of claim 1, wherein the residual network ResNet model comprises five residual blocks with 16, 32, 64, 128 and 256 convolution kernels, respectively, wherein each residual block comprises a batch normalization layer and three convolution layers, the convolution kernels have a size of 5 × 5, and the convolution stride is 3.
5. The method of claim 4, wherein the residual network ResNet model is compiled using an optimizer that is an adaptive moment estimation Adam optimizer.
6. The method for post-operative medical image evaluation according to claim 1, wherein before inputting the medical image into the trained rule-embedded Convolutional Neural Network (CNN) model, the method further comprises:
acquiring a plurality of post-operation tooth sample images and evaluation labels in one-to-one correspondence with the post-operation tooth sample images, wherein the evaluation labels are classified and determined based on post-operation evaluation total scores of the corresponding post-operation tooth sample images, and the post-operation evaluation total scores are obtained by scoring the post-operation tooth sample images according to the at least one evaluation rule and corresponding preset evaluation scores;
inputting the plurality of post-operation tooth sample images and the evaluation labels into the rule embedded Convolutional Neural Network (CNN) model for training to generate rule characteristics corresponding to the at least one evaluation rule;
and when the fitting degree of the rule features and the evaluation labels reaches a preset condition, stopping training to obtain the trained rule embedded convolutional neural network CNN model.
7. The method of claim 6, wherein the rule-embedded Convolutional Neural Network (CNN) model has an iteration number of 100, a batch size of 32 and a learning rate of 0.001, and a Mean Square Error (MSE) function is selected as the loss function of the model before training.
8. A postoperative medical image assessment device, characterized by comprising an image acquisition module, an image channel module, a rule channel module, a multiplication input module and a result output module;
the image acquisition module is used for acquiring a medical image of a target tooth after a root canal treatment;
the image channel module is in communication connection with the image acquisition module and is used for inputting the medical image into a residual error network ResNet model and extracting an image characteristic vector through the residual error network ResNet model;
the rule channel module is in communication connection with the image acquisition module, and is used for inputting the medical image into a trained rule embedded Convolutional Neural Network (CNN) model and outputting a rule feature vector corresponding to at least one evaluation rule through the rule embedded Convolutional Neural Network (CNN) model, wherein each evaluation rule in the at least one evaluation rule corresponds to a preset evaluation score one by one;
the multiplication input module is in communication connection with the image channel module and the rule channel module, respectively, and is used for inputting a multiplication result of the image feature vector and the rule feature vector into a full connection layer;
the result output module is in communication connection with the multiplication input module and is used for inputting the output result of the full connection layer into an output layer adopting a Softmax function to obtain the probability of the medical image classification on each evaluation label.
9. A computer device comprising a memory and a processor, wherein the memory is used for storing a computer program, and the processor is used for reading the computer program and executing the post-operative medical image assessment method according to any one of claims 1 to 7.
10. A storage medium having stored thereon instructions for performing the method of post-operative medical image assessment according to any one of claims 1 to 7 when the instructions are run on a computer.
CN202110097980.1A 2021-01-25 Postoperative medical image evaluation method and device, computer equipment and storage medium Active CN112668668B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110097980.1A CN112668668B (en) 2021-01-25 Postoperative medical image evaluation method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110097980.1A CN112668668B (en) 2021-01-25 Postoperative medical image evaluation method and device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112668668A true CN112668668A (en) 2021-04-16
CN112668668B CN112668668B (en) 2024-07-09


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113469987A (en) * 2021-07-13 2021-10-01 Shandong University Dental X-ray image lesion area positioning system based on deep learning

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019144311A1 (en) * 2018-01-24 2019-08-01 悦享趋势科技(北京)有限责任公司 Rule embedded artificial neural network system and training method thereof
CN111627014A (en) * 2020-05-29 2020-09-04 四川大学 Root canal detection and scoring method and system based on deep learning
CN111783944A (en) * 2020-06-19 2020-10-16 中国人民解放军军事科学院战争研究院 Rule embedded multi-agent reinforcement learning method and device based on combination training

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019144311A1 (en) * 2018-01-24 2019-08-01 悦享趋势科技(北京)有限责任公司 Rule embedded artificial neural network system and training method thereof
CN111627014A (en) * 2020-05-29 2020-09-04 四川大学 Root canal detection and scoring method and system based on deep learning
CN111783944A (en) * 2020-06-19 2020-10-16 中国人民解放军军事科学院战争研究院 Rule embedded multi-agent reinforcement learning method and device based on combination training

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
HU WANG: "ReNN: Rule-embedded Neural Networks", 《ARXIV:1801.09856V2》, 31 August 2018 (2018-08-31), pages 1 - 6 *
THANATHORNWONG BHORNSAWAN等: "Automatic detection of periodontal compromised teeth in digital panoramic radiographs using faster regional convolutional neural networks", 《IMAGING SCIENCE IN DENTISTRY》, vol. 50, no. 2, 1 January 2020 (2020-01-01), pages 169 - 174 *
王栋等: "基于残差神经网络的木马通信流量分析研究", 《计算机应用研究》, vol. 37, no. 2, 31 December 2020 (2020-12-31), pages 250 - 252 *
苟苗: "基于牙齿CT图像数据的分割研究", 《中国优秀硕士学位论文全文数据库_医药卫生科技辑》, 15 July 2020 (2020-07-15), pages 074 - 12 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113469987A (en) * 2021-07-13 2021-10-01 Shandong University Dental X-ray image lesion area positioning system based on deep learning

Similar Documents

Publication Publication Date Title
US10706333B2 (en) Medical image analysis method, medical image analysis system and storage medium
CN111476292B (en) Small sample element learning training method for medical image classification processing artificial intelligence
CN109215013B (en) Automatic bone age prediction method, system, computer device and storage medium
CN109685819B (en) Three-dimensional medical image segmentation method based on feature enhancement
WO2020087960A1 (en) Image recognition method and device, terminal apparatus, and medical system
CN110427486B (en) Body condition text classification method, device and equipment
CN109376663A (en) A kind of human posture recognition method and relevant apparatus
CN111611851B (en) Model generation method, iris detection method and device
CN114494263B (en) Medical image lesion detection method, system and equipment integrating clinical information
CN113240655B (en) Method, storage medium and device for automatically detecting type of fundus image
CN111341437A (en) Digestive tract disease judgment auxiliary system based on tongue image
CN115294075A (en) OCTA image retinal vessel segmentation method based on attention mechanism
CN113555087A (en) Artificial intelligence film reading method based on convolutional neural network algorithm
CN114925320B (en) Data processing method and related device
CN115147640A (en) Brain tumor image classification method based on improved capsule network
CN114491289A (en) Social content depression detection method of bidirectional gated convolutional network
CN116912253B (en) Lung cancer pathological image classification method based on multi-scale mixed neural network
CN116701635A (en) Training video text classification method, training video text classification device, training video text classification equipment and storage medium
CN112668668B (en) Postoperative medical image evaluation method and device, computer equipment and storage medium
CN112668668A (en) Postoperative medical image evaluation method and device, computer equipment and storage medium
CN114649092A (en) Auxiliary diagnosis method and device based on semi-supervised learning and multi-scale feature fusion
CN113779295A (en) Retrieval method, device, equipment and medium for abnormal cell image features
CN110287991A (en) Plant crude drug authenticity verification method, apparatus, computer equipment and storage medium
CN112270347B (en) Medical waste classification detection method based on improved SSD
CN116188879B (en) Image classification and image classification model training method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240604

Address after: 200000 No. 58 Xiangchun Road, Shuxin Town, Chongming District, Shanghai (Shanghai Shuxin Economic Development Zone)

Applicant after: Shanghai Kemosheng Medical Technology Co.,Ltd.

Country or region after: China

Address before: Room 315, 3rd floor, building 19, 169 Haichang Road, Tianfu New District, Chengdu, Sichuan 610000

Applicant before: Sichuan keshijian Technology Co.,Ltd.

Country or region before: China

GR01 Patent grant