CN111445440A - Medical image analysis method, equipment and storage medium - Google Patents

Medical image analysis method, equipment and storage medium

Info

Publication number
CN111445440A
CN111445440A (application CN202010123168.7A; granted as CN111445440B)
Authority
CN
China
Prior art keywords
image analysis
network model
result
neural network
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010123168.7A
Other languages
Chinese (zh)
Other versions
CN111445440B (en)
Inventor
Xue Zhong (薛忠)
Cao Xiaohuan (曹晓欢)
Shi Jun (施俊)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai United Imaging Intelligent Healthcare Co Ltd
Original Assignee
Shanghai United Imaging Intelligent Healthcare Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai United Imaging Intelligent Healthcare Co Ltd filed Critical Shanghai United Imaging Intelligent Healthcare Co Ltd
Priority to CN202010123168.7A priority Critical patent/CN111445440B/en
Publication of CN111445440A publication Critical patent/CN111445440A/en
Application granted granted Critical
Publication of CN111445440B publication Critical patent/CN111445440B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30061Lung
    • G06T2207/30064Lung nodule
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Medical Informatics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a medical image analysis method, device, and storage medium. A manual revision result of an image analysis result is acquired and processed to generate an attention map; the attention map and the medical image are then both input into a first neural network model to obtain manually revised, attention-enhanced feature information. The enhanced feature information and the image analysis result are fused in a second neural network model to obtain a final, manually revised image analysis result. By focusing the model's training attention on the manual revisions, the method improves the accuracy of the model's output; by combining manual revision with machine learning, it improves the accuracy of image analysis and realizes a human-in-the-loop mode of image analysis and processing.

Description

Medical image analysis method, equipment and storage medium
Technical Field
The present invention relates to the field of image analysis, and in particular, to a medical image analysis method, apparatus, and storage medium.
Background
Medical image analysis is the process of partitioning an image into regions based on the similarities or differences between those regions. At present, images of various cells, tissues, and organs are the main objects of such processing.
In recent years, with the development of other emerging disciplines, new image segmentation techniques have appeared, such as methods based on statistics, fuzzy theory, neural networks, or wavelet analysis, and combinatorial optimization models. Although new image analysis methods are continually proposed, the results of analyzing medical images are still far from ideal.
When analyzing medical images, it is difficult for any single image segmentation algorithm to obtain satisfactory results on general images. Although methods have been studied that automatically segment medical images to distinguish organs and tissues of interest or to detect lesion regions, the complexity of human anatomy and the systemic nature of physiological function mean that the accuracy of image analysis tasks performed by computers in the prior art is still insufficient.
Disclosure of Invention
The invention provides a medical image analysis method, device, and storage medium that can improve the accuracy of medical image analysis.
In one aspect, the present invention provides a method of medical image analysis, the method comprising:
acquiring a medical image;
processing the medical image based on an image analysis network model to obtain an image analysis result of the medical image;
acquiring a manual revision result of the image analysis result;
obtaining an attention map related to the manual revision result according to the manual revision result;
based on a first neural network model, performing attention-reinforced learning on the medical image according to the attention map, to obtain manually revised and enhanced feature information of the medical image;
and fusing the image analysis result and the manually revised and enhanced feature information based on a second neural network model to obtain a manually revised image analysis result.
Another aspect provides an electronic device comprising a processor and a memory, where at least one instruction and at least one program are stored in the memory; the at least one instruction and the at least one program are loaded and executed by the processor to implement the medical image analysis method described above.
Another aspect provides a computer storage medium in which at least one instruction and at least one program are stored; the at least one instruction and the at least one program are loaded and executed by a processor to implement the medical image analysis method described above.
The invention provides a medical image analysis method, device, and storage medium. A manual revision result of an image analysis result is acquired and processed to generate an attention map; the attention map and the medical image are then both input into a first neural network model to obtain manually revised, attention-enhanced feature information. The enhanced feature information and the image analysis result are fused in a second neural network model to obtain a final, manually revised image analysis result. By focusing the model's training attention on the manual revisions, the method improves the accuracy of the model's output; by combining manual revision with machine learning, it improves the accuracy of image analysis and realizes a human-in-the-loop mode of image analysis and processing.
Drawings
To more clearly illustrate the technical solutions of the embodiments of the present invention or of the prior art, the drawings required for describing the embodiments or the prior art are briefly introduced below. The drawings described below are only some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic view of an application scenario of a medical image analysis method according to an embodiment of the present invention;
fig. 2 is a flowchart of a medical image analysis method according to an embodiment of the present invention;
fig. 3 is a flowchart of a method for obtaining an image analysis result of a medical image in a medical image analysis method according to an embodiment of the present invention;
FIG. 4 is a flowchart of a method for obtaining a result of manually revising an image segmentation result in a medical image analysis method according to an embodiment of the present invention;
fig. 5 is a flowchart of a method for obtaining a result of manually revising an image detection result in a medical image analysis method according to an embodiment of the present invention;
FIG. 6 is a flowchart of a method for obtaining a manually revised result-related attention map in a medical image analysis method according to an embodiment of the present invention;
FIG. 7 is a flowchart of a method for image segmentation by a human knowledge input network model according to an embodiment of the present invention;
fig. 8 is a schematic model diagram of image detection of a lung nodule in a medical image analysis method according to an embodiment of the present invention;
FIG. 9 is a flowchart of a method for obtaining a manually revised image analysis result in a medical image analysis method according to an embodiment of the present invention;
FIG. 10 is a flowchart of a method for training an image analysis network model in a medical image analysis method according to an embodiment of the present invention;
fig. 11 is a flowchart of a method for training the first neural network model and the second neural network model in a medical image analysis method according to an embodiment of the present invention;
FIG. 12 is a network model of image segmentation of brain tumor in a medical image analysis method according to an embodiment of the present invention;
FIG. 13 is a diagram illustrating manual correction performed by multiple experts in a medical image analysis method according to an embodiment of the present invention;
fig. 14 is a schematic hardware structure diagram of an apparatus for implementing the method provided by the embodiment of the present invention.
Detailed Description
To make the objects, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings. It should be understood that the described embodiments are merely some, not all, of the embodiments of the invention. All other embodiments obtained by a person skilled in the art based on the embodiments of the present invention without creative effort fall within the scope of the present invention.
In the description of the present invention, it should be understood that the terms "first", "second", and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. Moreover, these terms distinguish between similar elements and do not necessarily describe a particular sequence or chronological order; the data so used are interchangeable where appropriate, so that the embodiments of the invention described herein can be practiced in sequences other than those illustrated or described.
Please refer to fig. 1, which shows an application scenario of a medical image analysis method according to an embodiment of the present invention. The application scenario includes a user terminal 110 and a server 120. The user terminal 110 is configured to receive an image analysis result that the server 120 obtains based on an image analysis network model. A user manually revises the image analysis result through the user terminal 110 and sends the revised result back to the server 120. The server 120 converts the manual revision result into an attention map and inputs it into an image feature enhancement network model, which performs feature enhancement on the medical image to obtain manually revised, enhanced feature information. The image analysis result and the enhanced feature information are then fused in a second neural network model to obtain the final, manually revised image analysis result.
In the embodiment of the present invention, the user terminal 110 includes physical devices such as smartphones, desktop computers, tablet computers, notebook computers, digital assistants, and smart wearable devices, and may also include software running on such devices, such as application programs. The operating system running on the network node in the embodiment of the present application may include, but is not limited to, Android, iOS, Linux, Unix, Windows, and the like. The manually revised data is transmitted to the server 120 through an API (Application Programming Interface), and the manually revised image analysis result returned by the server 120 is received.
In the embodiment of the present invention, the server 120 may include a server running independently, or a distributed server, or a server cluster composed of a plurality of servers. The server 120 may include a network communication unit, a processor, a memory, and the like. Specifically, the server 120 may process the input medical image information based on a plurality of image analysis network models and image fusion models.
Referring to fig. 2, a medical image analysis method that can be applied on the server side is shown; the method includes:
s210, acquiring a medical image;
Specifically, the medical image is image information, output by an examination apparatus, that contains an object to be analyzed. For different types of objects to be analyzed, different network segmentation models can be trained to segment them. The input medical image is identified to determine the object to be analyzed, and the object is then routed to the corresponding network analysis model. For example, for a network model that performs image segmentation of brain tumors, the input medical image is a medical image of the brain; for a network model that performs image detection of lung nodules, the input medical image is a medical image of the lungs. Each specific class of condition corresponds to a separately trained image analysis network model.
Because the network segmentation model is designed in a targeted manner, it is specific to one kind of object to be analyzed. This facilitates training of the model, and the correspondence between model and object improves the accuracy of image segmentation for that object.
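The one-model-per-condition routing described above can be pictured as a simple dispatch table. The sketch below is purely illustrative (the patent does not specify an implementation, and the model names and stand-in functions are hypothetical):

```python
# Hypothetical sketch: route each medical image to the analysis model
# trained for its object class (e.g. brain tumor vs. lung nodule).
def make_registry():
    # Map object-to-analyze -> analysis function (stand-ins for trained models)
    return {
        "brain_tumor": lambda image: {"task": "segmentation", "organ": "brain"},
        "lung_nodule": lambda image: {"task": "detection", "organ": "lung"},
    }

def analyze(image, object_class, registry):
    # Identify the object class, then dispatch to its dedicated model
    if object_class not in registry:
        raise ValueError(f"no model trained for {object_class!r}")
    return registry[object_class](image)

result = analyze([[0.1, 0.2]], "lung_nodule", make_registry())
```

Keeping one entry per condition mirrors the specificity argument above: each model only ever sees its own object class.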
S220, processing the medical image based on an image analysis network model to obtain an image analysis result of the medical image;
further, referring to fig. 3, the processing the medical image based on the image analysis network model to obtain the image analysis result of the medical image includes:
s310, based on an image analysis network model, performing target object segmentation on the medical image to obtain an image segmentation result of the medical image;
s320, or based on the image analysis network model, carrying out target object detection on the medical image to obtain an image detection result of the medical image.
Specifically, the image analysis network model may perform image segmentation or image detection, and may be a network model built from an algorithm that performs pixel-level segmentation using semantic information. When image segmentation is required, the object to be analyzed may be segmented directly; for example, the region of a brain tumor is segmented from a medical image of the brain. When image detection is required, the object to be detected is first segmented, and abnormalities are then judged from the segmented object; for example, lung nodules are detected in lung CT images. The semantic segmentation algorithm has an encoder-decoder structure: the encoder is a pre-trained classification network that discriminates pixel information in the medical image, and the decoder maps the low-resolution discriminative features learned by the encoder into the high-resolution pixel space to obtain a dense classification. Semantic segmentation can be performed region-based, with a fully convolutional network, or in a weakly supervised manner.
The region-based semantic segmentation method extracts and describes free-form regions from a medical image and then classifies each region. At test time, region-based predictions are converted into pixel predictions, typically by labeling each pixel according to the highest-scoring region containing it. Region feature extraction can be performed with algorithms such as R-CNN, Fast R-CNN, or Faster R-CNN to obtain the target segmentation region of the medical image, i.e., the image analysis result.
The fully-convolutional-network approach labels pixel information in the medical image with a fully convolutional neural network, upsamples the low-resolution maps produced by the convolutional layers, and finally obtains the features of the target segmentation region in the medical image, thereby producing the image analysis result.
The weakly supervised approach replaces pixel-by-pixel ground-truth annotation with ground truth that is easier to obtain, such as image-level labels or bounding boxes, during model training. The medical image is then input into a model built with this weakly supervised semantic segmentation method for feature extraction to obtain the image analysis result.
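The decoder's job of carrying low-resolution predictions back into the high-resolution pixel space can be illustrated with nearest-neighbor upsampling. This is a toy stand-in for the learned upsampling in a real encoder-decoder network, not the patent's actual architecture:

```python
# Nearest-neighbor upsampling of a low-resolution label map to pixel space,
# a toy stand-in for the decoder in an encoder-decoder segmentation network.
def upsample_nearest(label_map, factor):
    out = []
    for row in label_map:
        expanded = []
        for v in row:
            expanded.extend([v] * factor)   # repeat each column `factor` times
        for _ in range(factor):             # repeat each row `factor` times
            out.append(list(expanded))
    return out

# A 2x2 label map (e.g. encoder output) becomes a dense 4x4 classification.
low_res = [[0, 1],
           [1, 0]]
high_res = upsample_nearest(low_res, 2)
```

Real decoders replace this fixed rule with learned transposed convolutions, but the mapping from coarse semantics to dense per-pixel labels is the same idea.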
S230, acquiring a manual revision result of the image analysis result;
further, referring to fig. 4, the obtaining of the manual revision result of the image analysis result includes:
s410, if the image analysis result is an image segmentation result;
and S420, acquiring a manual revision result for a missed segmentation region or a mis-segmented region of the target object in the image segmentation result.
Further, referring to fig. 5, the obtaining of the manual revision result of the image analysis result includes:
s510, if the image analysis result is an image detection result;
s520, acquiring a manual revision result of the false detection area or the missing detection area of the target object in the image analysis result.
Specifically, manual revision of the image analysis result is realized through human-computer interaction: the image analysis result is manually revised and then input into the first neural network model. The interaction modes include interaction on key points or key regions and interaction on the image analysis result itself. In the key-point or key-region mode, the user can click the position of a key point with the mouse and adjust the size of the image region centered on it. Manual interaction on the image analysis result is accomplished by modifying segmentation edges or clicking key points inside/outside the region through the interface.
When the image analysis result comes from image segmentation, regions that were missed can be added and spurious segmented regions can be removed. For example, when the segmented target is a brain tumor, the user manually locates the parts of the tumor that were not segmented and the redundant parts of the existing segmented region and corrects them, yielding the manual revision result.
When the image analysis result comes from image detection, a falsely detected region may be corrected, or a missed region may be added. For example, if the image analysis network model detects a lung nodule but manual revision finds that no nodule is actually present, a correction is made and the false detection is removed.
When the image analysis result comes from image detection, manual revision may also intervene in the detection result by selecting it and revising its classification, i.e., manually correcting inaccurate information in the auxiliary diagnosis and reclassifying the detection. For example, a detection labeled as a lung tumor that is in fact a lung nodule can be corrected through manual revision.
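The three kinds of manual revision described above (removing a false detection, adding a missed one, reclassifying) could be represented as simple edit records applied to the model's detection list. The record format below is hypothetical, introduced only to make the revision operations concrete:

```python
# Hypothetical revision records: each expert edit either removes a false
# detection, adds a missed one, or reclassifies an existing detection.
def apply_revisions(detections, revisions):
    out = list(detections)
    for rev in revisions:
        if rev["op"] == "remove":
            out = [d for d in out if d["id"] != rev["id"]]
        elif rev["op"] == "add":
            out.append(rev["detection"])
        elif rev["op"] == "reclassify":
            out = [dict(d, label=rev["label"]) if d["id"] == rev["id"] else d
                   for d in out]
    return out

dets = [{"id": 1, "label": "lung_tumor"}, {"id": 2, "label": "lung_nodule"}]
revised = apply_revisions(dets, [
    {"op": "reclassify", "id": 1, "label": "lung_nodule"},  # tumor -> nodule
    {"op": "remove", "id": 2},                               # false detection
])
```

Keeping the revisions as explicit records also makes them easy to convert into the attention map used downstream.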
S240, obtaining an attention map related to the manual revision result according to the manual revision result;
further, referring to fig. 6, obtaining the attention graph related to the manual revision result according to the manual revision result further includes:
s610, if the number of the manual revision results is one, outputting the manual revision results as a degree of attention graph;
and S620, if the number of the manual revision results is multiple, acquiring each manual revision result, and fusing the multiple revision results according to a preset rule to obtain a fused attention degree graph.
Specifically, after manual revision, one manual revision result or several may be obtained. Referring to fig. 7, the agent in fig. 7 is a neural network model into which human knowledge is input through manual revision. Since revisions made by different users may differ, rules for fusing the revision results of different users can be preset, such as per-user loss functions or manually assigned weights based on a ranking of each user's importance. Concretely, one loss function can be designed for the manual revision result of each user and, combined with the policy loss function of the original machine system, the different loss functions can be tuned automatically through parameter adjustment. The manual revision results are then fused based on their corresponding loss functions to obtain the attention map. For example, suppose the image detection results of a lung nodule detection model are manually revised by three users, A, B, and C, whose loss functions are a, b, and c respectively; the revision results of users A, B, and C are weighted according to their loss function values, and the weighted revision results are fused into an attention map. In addition, a reward function and a human-confidence term can be applied to the output of the network model to improve the accuracy of its predictions.
Inputting the manual revision into the network model for self-attention enhancement enables the network model to produce more accurate output based on the revision result, improving the accuracy of the network model's image segmentation.
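The multi-user fusion rule above can be sketched as a weighted average of per-user revision masks. The patent leaves the exact fusion rule open; here the per-user weights are hypothetical stand-ins for the importance derived from each user's loss function:

```python
# Fuse several users' binary revision masks into one attention map using
# per-user weights (stand-ins for importance derived from per-user losses).
def fuse_revisions(masks, weights):
    total = sum(weights)
    fused = []
    for rows in zip(*masks):                      # iterate rows across users
        fused.append([
            sum(w * v for w, v in zip(weights, pixels)) / total
            for pixels in zip(*rows)              # iterate pixels across users
        ])
    return fused

# Three experts' revision masks over a 2x2 image; expert A counts double.
mask_a = [[1, 0], [0, 0]]
mask_b = [[1, 1], [0, 0]]
mask_c = [[1, 0], [0, 1]]
attention = fuse_revisions([mask_a, mask_b, mask_c], weights=[2.0, 1.0, 1.0])
```

Pixels revised by every expert get the highest attention value; pixels no one touched stay at zero, which matches the intent of emphasizing the revised regions.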
And S250, based on a first neural network model, performing attention-reinforced learning on the medical image according to the attention map to obtain the manually revised and enhanced feature information of the medical image.
Specifically, the attention map is input into the first neural network model alongside the medical image, and the first neural network model performs feature enhancement on the medical image. The input of the first neural network model is still the medical image; the attention map tells the model which parts need attention during image analysis, and the model outputs the medical image features enhanced at those parts, i.e., the manually revised and enhanced feature information.
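One simple way to realize the enhancement just described is to scale image features by the attention map so that revised regions contribute more strongly. This is a hedged sketch, not the patent's operator; the gain `alpha` is a hypothetical parameter:

```python
# Enhance features by scaling each pixel with (1 + alpha * attention),
# so manually revised regions are emphasized; alpha is a hypothetical gain.
def enhance(features, attention, alpha=1.0):
    return [
        [f * (1.0 + alpha * a) for f, a in zip(f_row, a_row)]
        for f_row, a_row in zip(features, attention)
    ]

features  = [[0.5, 0.5], [0.5, 0.5]]
attention = [[1.0, 0.0], [0.0, 0.5]]
enhanced = enhance(features, attention, alpha=1.0)
```

With `attention == 0` the feature passes through unchanged, so the attention map only ever adds emphasis and never suppresses the original signal.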
Even after training of the image analysis network model, the first neural network model, and the second neural network model is complete, the accuracy of the final output of the second neural network model can still be improved through the attention map.
In addition, during image detection, training can continue with a generative adversarial network on both the machine's automatic judgments and the physician's diagnoses and corrections, improving network performance. The image detection result can also be fed directly back to the image analysis network model for global network optimization. For example, in the lung nodule detection stage, continued adversarial training on the machine's automatic judgments and the physician's detections and corrections improves network performance and yields the lung nodule detection result.
The advantage of the adversarial network is that it enables semi-supervised learning: only part of the data needs annotation, which saves considerable manpower and material resources in preparing training data and completes model training more efficiently.
In a specific embodiment, referring to fig. 8, fig. 8 shows an image detection model for lung nodules in which the preliminary detection result is manually revised and the revision is fed back to the model's input to obtain the final detection result. When a chest CT image is input into the deep neural network for image detection, the network automatically detects lung nodules in the image to obtain an image detection result. The physician then manually revises the detection result; the revisions are fused into an attention map, which is fed back to the input of the deep neural network. When the network performs image detection again, the human-correction knowledge in the attention map is incorporated as a new rule, enhancing the attention paid to the next chest CT image and producing a better detection result.
And S260, fusing the image analysis result and the manually revised and enhanced feature information based on a second neural network model to obtain a manually revised image analysis result.
Further, referring to fig. 9, the fusing the image analysis result and the manually revised and enhanced feature information based on the second neural network model to obtain the manually revised image analysis result includes:
s910, cascading the image analysis result and the feature information after artificial revision and enhancement;
and S920, inputting the image analysis result after the cascade connection and the characteristic information after the artificial revision and the enhancement into a second neural network model for fusion to obtain an image analysis result after the artificial revision.
Specifically, the second neural network model performs image fusion of the image analysis result and the manually revised and enhanced feature information. Fusing the machine-learning output of the image analysis network model with the manually revised output of the first neural network model yields a comparatively accurate, manually revised image analysis result. The image analysis result and the enhanced feature information can be cascaded with a Concat operation, which concatenates two or more arrays without modifying the existing arrays and returns a copy of the concatenated result.
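The cascading step can be pictured as channel-wise concatenation before fusion. The sketch below stacks the two inputs pixel by pixel; it is an illustration of the idea, not the patent's actual network code:

```python
# Channel-wise concatenation of the image analysis result and the enhanced
# feature map: each pixel becomes a pair (result, enhanced_feature),
# which the second neural network model would then fuse.
def concat_channels(result_map, enhanced_map):
    return [
        [(r, e) for r, e in zip(r_row, e_row)]
        for r_row, e_row in zip(result_map, enhanced_map)
    ]

result_map   = [[0, 1]]      # binary analysis result for a 1x2 image
enhanced_map = [[0.2, 0.9]]  # manually revised, enhanced features
stacked = concat_channels(result_map, enhanced_map)
```

Because concatenation keeps both inputs intact side by side, the fusion network can weigh the machine's result against the human-guided features per pixel.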
During image fusion, the features of the image analysis result and of the manually revised and enhanced feature information can be extracted separately, and the segmentation extent of the image analysis result can be fused with the enhanced feature information to obtain the manually revised image analysis result. This hierarchical multi-task learning across the image analysis network model, the first neural network model, and the second neural network model effectively enhances the stability of the segmentation system.
Further, referring to fig. 10, the method further includes a step of training the image analysis network model, which includes:
S1010, acquiring a first training sample set, wherein the first training sample set comprises medical images with labeling information;
S1020, constructing an initial image analysis network model;
S1030, training the initial image analysis network model based on the first training sample set to obtain a trained image analysis network model;
S1040, wherein the input of the model training comprises the medical images in the first training sample set, and the labeling information of the medical images serves as the training target of the model training.
Specifically, when training the image analysis network model, the medical images with labeling information are used as the first training set. The labeling information marks the region to be segmented in the medical image, which may be an abnormal part, a part determined to be a focus, and the like. When constructing the initial image analysis network model, an initial structure can be built with a convolutional network, and parameter information such as network depth, number of convolutional layers, and loss function can be set to obtain the initial image analysis network model. The initial model is then trained on the first training sample set: medical images are input into the initial image analysis network model, the difference between the model output and the medical images with labeling information is recorded via the loss function, and the model parameters are adjusted accordingly based on this difference, finally yielding the trained image analysis network model.
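The train-by-loss-difference procedure of steps S1010 to S1040 can be sketched as follows. This is a toy illustration under stated assumptions: a per-pixel logistic classifier stands in for the convolutional network, and the random "images" and threshold-derived "labels" stand in for the first training sample set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the first training sample set: tiny "images" with per-pixel
# labels (the labeling information marking the region to be segmented).
images = rng.random((8, 16))            # 8 images, 16 pixels each
labels = (images > 0.5).astype(float)   # hypothetical ground-truth masks

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A minimal per-pixel logistic "model" stands in for the convolutional network;
# the real model would have many layers and parameters.
w, b = 0.0, 0.0
lr = 5.0

for _ in range(500):
    pred = sigmoid(w * images + b)
    grad = pred - labels                 # difference recorded via the loss function
    w -= lr * np.mean(grad * images)     # adjust parameters based on the difference
    b -= lr * np.mean(grad)

accuracy = np.mean((sigmoid(w * images + b) > 0.5) == labels)
```

The loop mirrors step S1030: compute the model output, measure its difference from the labeled target, and adjust the parameters until the trained model is obtained.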
Further, referring to fig. 11, the method further comprises a step of jointly training the first neural network model and the second neural network model, which includes:
S1110, acquiring output information of the image analysis network;
S1120, manually revising the output information of the image analysis network to obtain an attention map;
S1130, acquiring a second training sample set, wherein the second training sample set comprises medical images with labeling information and the attention map;
S1140, constructing an initial first neural network model and an initial second neural network model;
S1150, performing attention enhancement training on the initial first neural network model based on the attention map in the second training sample set to obtain output information of the initial first neural network model;
S1160, performing manual revision fusion training on the initial second neural network model based on the output information of the image analysis network and the output information of the initial first neural network model;
S1170, acquiring the trained first neural network model and the trained second neural network model;
S1180, wherein the input of the model training comprises the medical images and the attention map in the second training sample set, and the labeling information of the medical images serves as the training target of the model training.
Specifically, output information of the trained image analysis network model is obtained; there may be multiple pieces of output information. The output information of the image analysis network model is manually revised at least once, and the manual revision results are fused to obtain an attention map. When training the first and second neural network models, the medical images with labeling information together with the attention map are used as the second training set. The labeling information marks the region to be segmented in the medical image, which may be an abnormal part, a part determined to be a focus, and the like. When constructing the initial first neural network model, an initial structure can be built with a convolutional network, and parameter information such as network depth, number of convolutional layers, and loss function can be set to obtain the initial first neural network model. Based on the second training sample set, the medical image is input into the initial first neural network model, with the attention map serving as an auxiliary input that directs the first neural network model to the region requiring attention during image segmentation; the feature-enhanced medical image, i.e., the output information of the first neural network model, is then output.
The output information of the trained image analysis network model and the output information of the first neural network model are then input into the second neural network model for manual revision fusion training: the output obtained by inputting a medical image into the image analysis network model and the output obtained by inputting the same medical image into the first neural network model are fused within the second neural network model to obtain the output information of the second neural network model.
The difference between the result output by the second neural network model and the medical image with labeling information in the second training sample set is recorded via the loss function, and the parameters of the initial first neural network model and the initial second neural network model are adjusted accordingly based on this difference, finally yielding the trained first neural network model and the trained second neural network model.
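Fusing several manual revision results into a single attention map (step S1120) can be sketched as follows. The binary expert masks are invented for illustration, and averaging is only one possible "preset rule" for the fusion; the patent does not fix a particular rule.

```python
import numpy as np

# Hypothetical manual revision results from three experts: binary masks over
# the same 4x4 image region (1 = pixel marked as belonging to the lesion).
expert_masks = np.array([
    [[0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 1, 0], [0, 0, 0, 0]],
    [[0, 1, 1, 0], [0, 1, 1, 1], [0, 0, 1, 0], [0, 0, 0, 0]],
    [[0, 0, 1, 0], [0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0]],
], dtype=float)

# One possible preset rule: average the revisions, so pixels corrected by more
# experts receive higher attention values in [0, 1].
attention_map = expert_masks.mean(axis=0)
```

Pixels on which all experts agree reach an attention value of 1.0, while pixels no expert marked stay at 0.0, giving the first neural network model a graded map of the regions that require attention.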
In one embodiment, please refer to fig. 12, which shows an image segmentation network model for brain tumors. The fully convolutional network FCN1, i.e., the image segmentation network, corresponds to the image analysis network model; the fully convolutional network FCN2, i.e., the human-in-the-loop enhancement network, corresponds to the first neural network model; the fully convolutional network FCN3, i.e., the hybrid enhancement network, corresponds to the second neural network model; the gold standard is the medical image with labeling information; and the annotation map is the attention map. As shown in fig. 12, the brain medical images to be segmented are input into the FCN1 network and the FCN2 network respectively. The brain tumor segmentation result output by the FCN1 network is manually corrected: a doctor specializing in brain tumors removes over-segmented parts or adds under-segmented parts, as shown in fig. 13. The brain tumor segmentation results corrected by multiple experts are fused into a brain tumor attention map serving as input to the FCN2 network, so that the trained FCN2 network can produce a feature enhancement map of the brain tumor in the application stage. The input of FCN3 is the cascaded brain tumor segmentation result and feature enhancement map of the brain tumor; image fusion is performed in FCN3, and the output of FCN3 is the final brain tumor segmentation result.
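The FCN1/FCN2/FCN3 data flow described above can be sketched as plain functions. These are illustrative stand-ins only: the three function bodies (thresholding, attention-weighted scaling, channel averaging) are invented placeholders for the actual trained fully convolutional networks, chosen so the wiring between the stages is visible.

```python
import numpy as np

def fcn1_segment(image):
    """Stand-in for FCN1, the image segmentation (analysis) network."""
    return (image > image.mean()).astype(float)

def fcn2_enhance(image, attention_map):
    """Stand-in for FCN2, the human-in-the-loop enhancement network:
    amplify the regions the attention map marks as important."""
    return image * (1.0 + attention_map)

def fcn3_fuse(segmentation, enhanced):
    """Stand-in for FCN3, the hybrid enhancement network:
    cascade the two inputs and fuse them into the final result."""
    cascaded = np.stack([segmentation, enhanced])
    return cascaded.mean(axis=0)

image = np.random.rand(8, 8)                 # hypothetical brain image
attention_map = np.zeros((8, 8))
attention_map[2:5, 2:5] = 1.0                # hypothetical expert-fused attention

segmentation = fcn1_segment(image)           # FCN1 output, then manually revised
enhanced = fcn2_enhance(image, attention_map)  # FCN2 feature enhancement map
final = fcn3_fuse(segmentation, enhanced)    # FCN3 final segmentation result
```

Only the connectivity matters here: FCN1 and FCN2 both consume the input image, FCN2 additionally consumes the attention map, and FCN3 consumes the cascaded outputs of the other two.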
The embodiment of the invention provides a medical image analysis method comprising: acquiring a medical image and inputting it into an image analysis network model to obtain an image analysis result of the medical image; acquiring a manual revision result of the image analysis result and processing it to generate an attention map; inputting the attention map, together with the medical image, into the first neural network model to obtain manually revised and enhanced feature information; and fusing the attention-enhanced feature information with the image analysis result in a second neural network model to obtain the final manually revised image analysis result. By enhancing the model's attention to manual revisions during training, the method improves the accuracy of the model output; by combining manual revision with machine learning, it improves the accuracy of image analysis and realizes a human-in-the-loop image analysis and processing paradigm.
The present embodiment also provides a computer-readable storage medium storing computer-executable instructions that are loaded and executed by a processor to perform the medical image analysis method of the present embodiment.
The present embodiment also provides an apparatus comprising a processor and a memory, wherein the memory stores a computer program adapted to be loaded and executed by the processor to perform the medical image analysis method described above.
The device may be a computer terminal, a mobile terminal or a server, and may also participate in forming the apparatus or system provided by the embodiments of the present invention. As shown in fig. 14, the mobile terminal 14 (or computer terminal 14 or server 14) may include one or more processors 1402 (shown here as 1402a, 1402b, …, 1402n; the processor 1402 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA), a memory 1404 for storing data, and a transmitting device 1406 for communication functions. In addition, the device may also include: a display, an input/output interface (I/O interface), a network interface, a power source, and/or a camera. It will be understood by those skilled in the art that the structure shown in fig. 14 is only an illustration and does not limit the structure of the electronic device. For example, the mobile device 14 may also include more or fewer components than shown in fig. 14, or have a different configuration than shown in fig. 14.
It should be noted that the one or more processors 1402 and/or other data processing circuitry described above may be referred to generally herein as "data processing circuitry". The data processing circuitry may be embodied in whole or in part in software, hardware, firmware, or any combination thereof. Further, the data processing circuitry may be a single, stand-alone processing module, or incorporated in whole or in part into any of the other elements in the mobile device 14 (or computer terminal). As referred to in the embodiments of the present application, the data processing circuitry may act as a processor control (e.g., for the selection of a variable-resistance termination path connected to an interface).
The memory 1404 may be used for storing software programs and modules of application software, such as program instructions/data storage devices corresponding to the method described in the embodiment of the present invention. The processor 1402 executes various functional applications and data processing by running the software programs and modules stored in the memory 1404, thereby implementing the medical image analysis method described above. The memory 1404 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 1404 may further include memory located remotely from the processor 1402, which may be connected to the mobile device 14 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmitting device 1406 is used for receiving or sending data via a network. Specific examples of such networks may include wireless networks provided by the communication provider of the mobile terminal 14. In one example, the transmission device 1406 includes a Network adapter (NIC) that can be connected to other Network devices through a base station so as to communicate with the internet. In one example, the transmitting device 1406 can be a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
The display may be, for example, a touch-screen liquid crystal display (LCD) that enables a user to interact with the user interface of the mobile device 14 (or computer terminal).
The present specification provides method steps as described in the examples or flowcharts, but more or fewer steps may be included based on routine or non-inventive labor. The steps and sequences recited in the embodiments are only one of many possible orders of performing the steps and do not represent the only order of execution. In actual system or product execution, the steps may be performed sequentially or in parallel (e.g., in the context of parallel processors or multi-threaded processing) according to the embodiments or the methods shown in the figures.
The configurations shown in the present embodiment are only partial configurations related to the present application and do not limit the devices to which the present application is applied; a specific device may include more or fewer components than shown, combine certain components, or arrange the components differently. It should be understood that the methods, apparatuses, and the like disclosed in the embodiments may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative; the division of the modules is only a division by logical function, and other divisions are possible in actual implementation: a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or unit modules.
Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Those of skill would further appreciate that the various illustrative components and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, the various illustrative components and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A method of medical image analysis, the method comprising:
acquiring a medical image;
processing the medical image based on an image analysis network model to obtain an image analysis result of the medical image;
acquiring a manual revision result of the image analysis result;
obtaining an attention map related to the manual revision result according to the manual revision result;
performing feature enhancement on the medical image according to the attention map based on a first neural network model, to obtain manually revised and enhanced feature information of the medical image;
and fusing the image analysis result and the manually revised and enhanced feature information based on a second neural network model to obtain a manually revised image analysis result.
2. The method of claim 1, wherein the fusing the image analysis result and the manually revised and enhanced feature information based on the second neural network model to obtain the manually revised image analysis result comprises:
cascading the image analysis result and the manually revised and enhanced feature information;
and inputting the cascaded image analysis result and manually revised and enhanced feature information into the second neural network model for fusion to obtain the manually revised image analysis result.
3. The method of claim 1, wherein obtaining the attention map associated with the manual revision result according to the manual revision result further comprises:
if the number of manual revision results is one, outputting the manual revision result as the attention map;
and if the number of manual revision results is multiple, acquiring each manual revision result and fusing the multiple revision results according to a preset rule to obtain a fused attention map.
4. The method according to claim 1, wherein the processing the medical image based on the image analysis network model to obtain the image analysis result of the medical image comprises:
based on an image analysis network model, performing target object segmentation on the medical image to obtain an image segmentation result of the medical image;
or based on an image analysis network model, carrying out target object detection on the medical image to obtain an image detection result of the medical image.
5. The method according to claim 4, wherein the acquiring of the manual revision result of the image analysis result comprises:
when the image analysis result is an image segmentation result,
acquiring a manual revision result of a mis-segmented region or a missed region of the target object in the image segmentation result.
6. The method according to claim 4, wherein the acquiring of the manual revision result of the image analysis result comprises:
when the image analysis result is an image detection result,
acquiring a manual revision result of a falsely detected region or a missed region of the target object in the image detection result.
7. A method of medical image analysis according to claim 1, the method further comprising the step of training the image analysis network model, the training the image analysis network model comprising:
acquiring a first training sample set, wherein the first training sample set is a medical image with labeling information;
constructing an initial image analysis network model;
training the initial image analysis network model based on the first training sample set to obtain a trained image analysis network model;
wherein the input of the model training comprises the medical images in the first training sample set, and the labeling information of the medical images serves as the training target of the model training.
8. The method for medical image analysis according to claim 7, further comprising the step of jointly training the first neural network model and the second neural network model, the joint training comprising:
acquiring output information of the image analysis network;
manually revising the output information of the image analysis network to obtain an attention map;
acquiring a second training sample set, wherein the second training sample set comprises medical images with labeling information and the attention map;
constructing an initial first neural network model and an initial second neural network model;
performing attention enhancement training on the initial first neural network model based on the attention map in the second training sample set to obtain output information of the initial first neural network model;
performing manual revision fusion training on the initial second neural network model based on the output information of the image analysis network and the output information of the initial first neural network model;
acquiring the trained first neural network model and the trained second neural network model;
and wherein the input of the model training comprises the medical images and the attention map in the second training sample set, and the labeling information of the medical images serves as the training target of the model training.
9. An electronic device, comprising a processor and a memory, wherein the memory stores at least one instruction or at least one program, and the at least one instruction or the at least one program is loaded and executed by the processor to implement the medical image analysis method according to any one of claims 1 to 8.
10. A computer storage medium, wherein at least one instruction or at least one program is stored therein, and the at least one instruction or the at least one program is loaded and executed by a processor to perform the medical image analysis method according to any one of claims 1 to 8.
CN202010123168.7A 2020-02-20 2020-02-20 Medical image analysis method, device and storage medium Active CN111445440B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010123168.7A CN111445440B (en) 2020-02-20 2020-02-20 Medical image analysis method, device and storage medium


Publications (2)

Publication Number Publication Date
CN111445440A true CN111445440A (en) 2020-07-24
CN111445440B CN111445440B (en) 2023-10-31

Family

ID=71653965


Country Status (1)

Country Link
CN (1) CN111445440B (en)


Citations (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW201231017A (en) * 2011-01-25 2012-08-01 Univ Nat Yunlin Sci & Tech Semi-automatic knee cartilage MRI image segmentation based on cellular automata
CN107292887A (en) * 2017-06-20 2017-10-24 电子科技大学 A kind of Segmentation Method of Retinal Blood Vessels based on deep learning adaptive weighting
CN107492135A (en) * 2017-08-21 2017-12-19 维沃移动通信有限公司 A kind of image segmentation mask method, device and computer-readable recording medium
CN108021916A (en) * 2017-12-31 2018-05-11 南京航空航天大学 Deep learning diabetic retinopathy sorting technique based on notice mechanism
CN108111757A (en) * 2017-12-21 2018-06-01 广东欧珀移动通信有限公司 Photographic method, device, storage medium and terminal
CN108229490A (en) * 2017-02-23 2018-06-29 北京市商汤科技开发有限公司 Critical point detection method, neural network training method, device and electronic equipment
CN108345890A (en) * 2018-03-01 2018-07-31 腾讯科技(深圳)有限公司 Image processing method, device and relevant device
WO2018236674A1 (en) * 2017-06-23 2018-12-27 Bonsai Al, Inc. For hiearchical decomposition deep reinforcement learning for an artificial intelligence model
CN109543719A (en) * 2018-10-30 2019-03-29 浙江大学 Uterine neck atypia lesion diagnostic model and device based on multi-modal attention model
CN109783062A (en) * 2019-01-14 2019-05-21 中国科学院软件研究所 A kind of machine learning application and development method and system of people in circuit
CN109872306A (en) * 2019-01-28 2019-06-11 腾讯科技(深圳)有限公司 Medical image cutting method, device and storage medium
CN109993735A (en) * 2019-03-29 2019-07-09 成都信息工程大学 Image partition method based on concatenated convolutional
CN110111271A (en) * 2019-04-24 2019-08-09 北京理工大学 A kind of single pixel imaging method based on lateral inhibition network
CN110189334A (en) * 2019-05-28 2019-08-30 南京邮电大学 The medical image cutting method of the full convolutional neural networks of residual error type based on attention mechanism
CN110210487A (en) * 2019-05-30 2019-09-06 上海商汤智能科技有限公司 A kind of image partition method and device, electronic equipment and storage medium
CN110232696A (en) * 2019-06-20 2019-09-13 腾讯科技(深圳)有限公司 A kind of method of image region segmentation, the method and device of model training
US10482603B1 (en) * 2019-06-25 2019-11-19 Artificial Intelligence, Ltd. Medical image segmentation using an integrated edge guidance module and object segmentation network
CN110502654A (en) * 2019-08-26 2019-11-26 长光卫星技术有限公司 A kind of object library generation system suitable for multi-source heterogeneous remotely-sensed data
US20190370587A1 (en) * 2018-05-29 2019-12-05 Sri International Attention-based explanations for artificial intelligence behavior
CN110660480A (en) * 2019-09-25 2020-01-07 上海交通大学 Auxiliary diagnosis method and system for spondylolisthesis
CN110706207A (en) * 2019-09-12 2020-01-17 上海联影智能医疗科技有限公司 Image quantization method, image quantization device, computer equipment and storage medium


Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
CHAITANYA KAUL et al.: "FocusNet: An Attention-Based Fully Convolutional Network for Medical Image Segmentation", 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), pages 455-458 *
EMANUELE PESCE et al.: "Learning to detect chest radiographs containing pulmonary lesions using visual attention networks", pages 26-38 *
JO SCHLEMPER et al.: "Attention gated networks: Learning to leverage salient regions in medical images", pages 197-207 *
NAJI KHOSRAVAN et al.: "Gaze2Segment: A Pilot Study for Integrating Eye-Tracking Technology into Medical Image Segmentation", Medical Computer Vision and Bayesian and Graphical Models for Biomedical Imaging, pages 94-104 *
NAN-NING ZHENG et al.: "Hybrid-augmented intelligence: collaboration and cognition", vol. 18, no. 2, pages 153-180 *
LIU Dingming: "Research on interactive video segmentation technology", China Master's Theses Full-text Database, Information Science and Technology, vol. 2013, no. 3, pages 138-1528 *
AI Lingmei et al.: "Brain tumor magnetic resonance image segmentation based on attention U-Net", pages 1-13 *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112132232A (en) * 2020-10-19 2020-12-25 武汉千屏影像技术有限责任公司 Medical image classification labeling method and system and server
CN112735565A (en) * 2020-10-30 2021-04-30 衡阳市大井医疗器械科技有限公司 Detection result acquisition method, electronic equipment and server
CN112581092A (en) * 2020-12-23 2021-03-30 上海研鼎信息技术有限公司 Laboratory management method, laboratory management equipment and storage medium
CN113077445A (en) * 2021-04-01 2021-07-06 中科院成都信息技术股份有限公司 Data processing method and device, electronic equipment and readable storage medium
CN113273963A (en) * 2021-05-07 2021-08-20 中国人民解放军西部战区总医院 Postoperative wound hemostasis system and method for hepatobiliary pancreatic patient
CN113255756A (en) * 2021-05-20 2021-08-13 联仁健康医疗大数据科技股份有限公司 Image fusion method and device, electronic equipment and storage medium
CN113255756B (en) * 2021-05-20 2024-05-24 联仁健康医疗大数据科技股份有限公司 Image fusion method and device, electronic equipment and storage medium
EP4109463A1 (en) * 2021-06-24 2022-12-28 Siemens Healthcare GmbH Providing a second result dataset
US20230005600A1 (en) * 2021-06-24 2023-01-05 Siemens Healthcare Gmbh Providing a second result dataset
CN114298979A (en) * 2021-12-09 2022-04-08 北京工业大学 Liver nuclear magnetic image sequence generation method guided by focal lesion symptom description
CN114298979B (en) * 2021-12-09 2024-05-31 北京工业大学 Liver nuclear magnetic image sequence generation method guided by focal lesion symptom description

Also Published As

Publication number Publication date
CN111445440B (en) 2023-10-31

Similar Documents

Publication Publication Date Title
CN111445440B (en) Medical image analysis method, device and storage medium
US11423541B2 (en) Assessment of density in mammography
Chen et al. Deep feature learning for medical image analysis with convolutional autoencoder neural network
EP3989119A1 (en) Detection model training method and apparatus, computer device, and storage medium
US20220222932A1 (en) Training method and apparatus for image region segmentation model, and image region segmentation method and apparatus
TWI747120B (en) Method, device and electronic equipment for depth model training and storage medium thereof
Vijayanarasimhan et al. Cost-sensitive active visual category learning
US20220036561A1 (en) Method for image segmentation, method for training image segmentation model
JP7158563B2 (en) Deep model training method and its device, electronic device and storage medium
US11373309B2 (en) Image analysis in pathology
CN110866469B (en) Facial five sense organs identification method, device, equipment and medium
CN111414946A (en) Artificial intelligence-based medical image noise data identification method and related device
Cordeiro et al. Analysis of supervised and semi-supervised GrowCut applied to segmentation of masses in mammography images
CN110796135A (en) Target positioning method and device, computer equipment and computer storage medium
CN113724185B (en) Model processing method, device and storage medium for image classification
CN112102929A (en) Medical image labeling method and device, storage medium and electronic equipment
CN111681247A (en) Lung lobe and lung segment segmentation model training method and device
CN110889437A (en) Image processing method and device, electronic equipment and storage medium
CN112420125A (en) Molecular attribute prediction method and device, intelligent equipment and terminal
CN115345938A (en) Global-to-local-based head shadow mark point positioning method, equipment and medium
CN115393376A (en) Medical image processing method, medical image processing device, computer equipment and storage medium
CN117372416B (en) High-robustness digital pathological section diagnosis system and method for countermeasure training
CN117237351B (en) Ultrasonic image analysis method and related device
CN112801940B (en) Model evaluation method, device, equipment and medium
CN114255219B (en) Symptom identification method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant