CN110738643A - Method for analyzing cerebral hemorrhage, computer device and storage medium - Google Patents


Info

Publication number
CN110738643A
CN110738643A
Authority
CN
China
Prior art keywords
network
feature map
detection frame
bleeding
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910950855.3A
Other languages
Chinese (zh)
Other versions
CN110738643B (en)
Inventor
崔益峰
石峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai United Imaging Intelligent Healthcare Co Ltd
Original Assignee
Shanghai United Imaging Intelligent Healthcare Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai United Imaging Intelligent Healthcare Co Ltd filed Critical Shanghai United Imaging Intelligent Healthcare Co Ltd
Priority to CN201910950855.3A priority Critical patent/CN110738643B/en
Publication of CN110738643A publication Critical patent/CN110738643A/en
Application granted granted Critical
Publication of CN110738643B publication Critical patent/CN110738643B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20092Interactive image processing based on input by user
    • G06T2207/20104Interactive definition of region of interest [ROI]
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Biomedical Technology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Medical Informatics (AREA)
  • Quality & Reliability (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention relates to a method for analyzing cerebral hemorrhage, a computer device and a storage medium. The method comprises: acquiring an image to be analyzed that includes at least one hemorrhage area; inputting the image into a convolutional neural network for feature extraction to obtain a feature map of the image; inputting the feature map into a region-of-interest extraction network to obtain a feature map of the detection frame containing each hemorrhage area; inputting the feature map of the detection frame containing each hemorrhage area into a classification network to obtain a classification result for each hemorrhage area; and inputting the feature map of the detection frame containing each hemorrhage area into a detection-frame regression network to obtain the position of each hemorrhage area.

Description

Method for analyzing cerebral hemorrhage, computer device and storage medium
Technical Field
The present application relates to the technical field of neural network learning, and in particular to a method for analyzing cerebral hemorrhage, a computer device and a storage medium.
Background
Cerebral hemorrhage refers to bleeding caused by intracranial vascular rupture, whether traumatic or non-traumatic. It is characterized by acute onset, a critical and complex disease course, and high mortality and disability rates, and is currently the second leading cause of death worldwide, after ischemic heart disease. Early mortality from cerebral hemorrhage is very high: about half of patients die within days of onset, and most survivors have sequelae of varying degrees. Accurately locating a cerebral hemorrhage at an early stage is therefore particularly important for treatment.
According to the bleeding position, cerebral hemorrhage can be divided into five types: intraparenchymal hemorrhage, intraventricular hemorrhage, subarachnoid hemorrhage, subdural hemorrhage and epidural hemorrhage. At present, type diagnosis of cerebral hemorrhage mainly involves inputting a scanned CT image into a classification network that directly classifies the type of cerebral hemorrhage to obtain a classification result. The classification network is a 2D or 3D convolutional neural network trained in advance on different cerebral hemorrhage case samples.
However, the above method of classifying the cerebral hemorrhage type directly by the classification network has a problem of low classification accuracy.
Disclosure of Invention
In view of the above, there is a need to provide a method for analyzing cerebral hemorrhage, a computer device, and a storage medium that can effectively improve classification accuracy.
In a first aspect, a method for analyzing cerebral hemorrhage is provided, the method comprising:
acquiring an image to be analyzed, wherein the image comprises at least one bleeding area;
inputting the image into a convolutional neural network for feature extraction to obtain a feature map of the image;
inputting the feature map into an interested area extraction network to obtain a feature map of a detection frame containing each bleeding area;
inputting the feature map of the detection frame containing each bleeding area into a classification network to obtain a classification result of each bleeding area, and inputting the feature map of the detection frame containing each bleeding area into a detection frame regression network to obtain the position of each bleeding area.
In one embodiment, the image includes a first bleeding area and a second bleeding area.
In this case, inputting the feature map into the region-of-interest extraction network to obtain a feature map of the detection frame containing each bleeding area comprises:
inputting the feature map into the region-of-interest extraction network to obtain a feature map comprising a first detection box and a second detection box, wherein the first detection box contains the first bleeding area and the second detection box contains the second bleeding area;
and inputting the feature map of the detection frame containing each bleeding area into the classification network to obtain the classification result of each bleeding area, and inputting it into the detection-frame regression network to obtain the position of each bleeding area, comprises:
inputting the feature map comprising the first detection frame and the second detection frame into the classification network to obtain the classification result of the first bleeding area and the classification result of the second bleeding area, and inputting the feature map comprising the first detection frame and the second detection frame into the detection-frame regression network to obtain the position of the first bleeding area and the position of the second bleeding area.
In one embodiment, the region-of-interest extraction network includes a region-of-interest positioning network and a region-of-interest acquisition network, and inputting the feature map into the region-of-interest extraction network to obtain the feature map of the detection box containing each bleeding region comprises:
inputting the characteristic diagram into an interested area positioning network to obtain the position of a detection frame of each bleeding area in the characteristic diagram;
and inputting the positions and the characteristic maps of the detection frames of the bleeding areas into an interested area acquisition network to obtain the characteristic maps of the detection frames containing the bleeding areas.
In one embodiment, the region-of-interest positioning network includes a window classification network and a window scoring network, and inputting the feature map into the region-of-interest positioning network to obtain the positions of the detection boxes of the bleeding regions in the feature map comprises:
selecting a plurality of candidate areas in the feature map according to a preset sliding window;
inputting the feature maps in the candidate regions into a window classification network for foreground and background classification to obtain a classification result of each candidate region;
inputting each classification result into a window scoring network for scoring quantization to obtain a scoring quantization value of each classification result;
and taking the position of the candidate area corresponding to the scoring quantification value larger than the preset threshold value as the position of the detection frame of the bleeding area.
In one embodiment, inputting the position of the detection box of each bleeding area and the feature map into the region-of-interest acquisition network to obtain the feature map of the detection box containing each bleeding area comprises:
inputting the positions of the detection frames of the bleeding areas and the feature map into the region-of-interest acquisition network to obtain the feature map of the box region containing each bleeding area;
and performing interpolation on the feature map of the box region containing each bleeding area by means of a bilinear interpolation algorithm, to obtain a feature map, of a preset size, of the detection frame containing each bleeding area.
In one of these embodiments, training the detection-frame regression network and the classification network includes:
training an initial detection-frame regression network with the smooth L1 loss function (Smooth L1) for detection-frame regression to obtain the detection-frame regression network;
and, after the detection-frame regression network is obtained, training an initial classification network with a cross-entropy loss function to obtain the classification network.
In one of these embodiments, training the detection-frame regression network and the classification network includes:
simultaneously training an initial detection-frame regression network and an initial classification network with a weighted sum of the cross-entropy loss function and the smooth L1 loss function (Smooth L1) for detection-frame regression, to obtain the classification network and the detection-frame regression network.
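Assuming the standard definitions of the two losses named above, the joint objective can be sketched in NumPy as follows; the function names, the `reg_weight` parameter and its default value are illustrative assumptions, not terms from the patent:

```python
import numpy as np

def smooth_l1(pred, target):
    """Smooth L1 loss for detection-frame regression:
    0.5*d^2 when |d| < 1, else |d| - 0.5, summed over box coordinates."""
    d = np.abs(pred - target)
    return np.where(d < 1.0, 0.5 * d ** 2, d - 0.5).sum()

def cross_entropy(logits, label):
    """Cross-entropy loss over the five hemorrhage classes for one detection frame."""
    z = logits - logits.max()                # shift for numerical stability
    log_probs = z - np.log(np.exp(z).sum())  # log-softmax
    return -log_probs[label]

def joint_loss(logits, label, box_pred, box_target, reg_weight=1.0):
    """Weighted sum used when both heads are trained simultaneously."""
    return cross_entropy(logits, label) + reg_weight * smooth_l1(box_pred, box_target)
```

With uniform logits over five classes, the cross-entropy term reduces to log 5, which makes the sketch easy to sanity-check.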
In one of these embodiments, the convolutional neural network is a three-dimensional residual network and the region-of-interest positioning network is a region proposal network (RPN).
In a second aspect, a computer device is provided, comprising a memory storing a computer program and a processor, wherein the processor, when executing the computer program, implements the method for analyzing cerebral hemorrhage according to any embodiment of the first aspect.
In a third aspect, a computer-readable storage medium is provided, having stored thereon a computer program which, when executed by a processor, implements the method for analyzing cerebral hemorrhage according to any embodiment of the first aspect.
With the method for analyzing cerebral hemorrhage, the computer device and the storage medium provided by the application, each bleeding area is classified on the basis of the feature map of the detection frame containing it, which is equivalent to first detecting the positions of the bleeding areas and then classifying each detected area. Compared with directly classifying the whole image, this effectively improves classification accuracy; moreover, an image containing a plurality of bleeding areas yields a classification result for each of them, so multiple bleeding areas in one patient scan can be analyzed at once.
Drawings
FIG. 1 is a schematic diagram of the internal structure of a computer device provided by one embodiment;
FIG. 2 is a flow chart of a method for analyzing cerebral hemorrhage provided by one embodiment;
FIG. 2A is a schematic structural diagram of a classification and detection-frame regression network provided by one embodiment;
FIG. 3 is a flow chart of a method for analyzing cerebral hemorrhage provided by one embodiment;
FIG. 4 is a schematic structural diagram of a region-of-interest (ROI) positioning network provided by one embodiment;
FIG. 5 is a flow chart of another implementation manner of S201 in the embodiment of FIG. 3;
FIG. 6 is a flow chart of another implementation manner of S202 in the embodiment of FIG. 3;
FIG. 7 is a schematic structural diagram of an analysis network for cerebral hemorrhage provided by one embodiment;
FIG. 8 is a schematic structural diagram of a training network provided by one embodiment;
FIG. 9 is a flow chart of a training method provided by one embodiment;
FIG. 10 is a schematic structural diagram of a training network provided by one embodiment;
FIG. 11 is a schematic structural diagram of an analysis device for cerebral hemorrhage provided by one embodiment;
FIG. 12 is a schematic structural diagram of an analysis device for cerebral hemorrhage provided by one embodiment;
FIG. 13 is a schematic structural diagram of an analysis device for cerebral hemorrhage provided by one embodiment.
Detailed Description
To make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the drawings and embodiments.
The method for analyzing cerebral hemorrhage provided by the application can be applied to a computer device as shown in fig. 1, where the computer device can be a terminal whose internal structure is as shown in fig. 1. The computer device comprises a processor, a memory, a network interface, a display screen and an input device connected through a system bus. The processor provides computation and control capabilities. The memory comprises a nonvolatile storage medium and an internal memory; the nonvolatile storage medium stores an operating system and a computer program, and the internal memory provides an environment for running the operating system and the computer program in the nonvolatile storage medium. The network interface is used for communicating with an external terminal through a network connection. The computer program, when executed by the processor, implements the method for analyzing cerebral hemorrhage. The display screen can be a liquid crystal display or an electronic ink display, and the input device can be a touch layer covering the display screen, a key, a trackball or a touch pad arranged on the housing of the computer device, or an external keyboard, touch pad, mouse or the like.
Those skilled in the art will appreciate that the architecture shown in fig. 1 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
The following describes in detail the technical solutions of the present application and how the technical solutions of the present application solve the above technical problems by embodiments and with reference to the drawings. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments.
Fig. 2 is a flowchart of a method for analyzing cerebral hemorrhage provided by one embodiment. The method is executed by the computer device in fig. 1 and relates to the specific process in which the computer device analyzes an image of a cerebral hemorrhage to be analyzed and obtains the classification result and the position of each hemorrhage region. As shown in fig. 2, the method specifically includes the following steps:
S101, acquiring an image to be analyzed, wherein the image comprises at least one bleeding area.
In practical applications, the computer device may scan the brain of a human body through a scanning device to obtain the image to be analyzed. The image to be analyzed may include one bleeding area or a plurality of bleeding areas. Therefore, if images of this kind are used as training samples, the small-data problem of the conventional approach, in which each patient case counts as a single sample, is alleviated: an image containing a plurality of bleeding areas effectively contributes several analysis targets, indirectly increasing the amount of usable data.
Optionally, the computer device may acquire an image including the brain structure of a human body in advance, and then preprocess the acquired image to obtain the image to be analyzed. The preprocessing may include skull removal, image normalization, data organization, and so on. First, the skull removal process: in practical applications, because the skull in a head image is usually a high-brightness signal far higher than the brain tissue, the contrast within the brain tissue is relatively low, which affects detection precision, so skull removal is required. This implementation adopts a simple 3D V-Net network to segment brain tissue and thereby remove the skull from the acquired images. Next, the image normalization process: head CT images can be offset due to movement of the patient's head during imaging or problems with the imaging equipment, and this offset affects the accuracy of network detection to a certain degree; therefore the head deflection angle is calculated by principal component analysis or a similar method, and the whole volume is rotated to correct it, yielding a standardized image to be analyzed. Finally, data organization: because the layer thickness differs between CT acquisitions, each sample is resampled to an in-plane size of 512 × 512 at a resolution of 1 × 1 × 1 mm³. Considering that the CT value range of hemorrhage in CT images lies between 60 and 85, the CT values of all three-dimensional images are clipped to between 0 and 95 to unify the data range across images.
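The CT-value windowing step described above can be sketched in NumPy; the function name and its defaults are illustrative stand-ins, not from the patent:

```python
import numpy as np

def preprocess_ct(volume, low=0, high=95):
    """Clip CT values to the 0-95 window described above, so the hemorrhage
    range (roughly 60-85) is preserved while the data range is unified."""
    return np.clip(volume, low, high).astype(np.float32)
```

A value of -100 (air) is clipped up to 0 and a value of 200 (bone) down to 95, while in-range tissue values pass through unchanged.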
S102, inputting the image into a convolutional neural network for feature extraction to obtain a feature map of the image.
In practical applications, after the computer device acquires the image to be analyzed, it can further input the image into a pre-trained convolutional neural network for feature extraction, so as to obtain a feature map of the image.
S103, inputting the feature map into the interested area extraction network to obtain the feature map of the detection frame containing each bleeding area.
After obtaining the feature map of the input image, the computer device further inputs the feature map into a pre-trained region-of-interest extraction network. The region-of-interest extraction network extracts, from the input feature map, a detection frame for each bleeding region together with the feature image within the detection frame, thereby obtaining the feature map of the detection frame containing each bleeding region; the detection frame is used for locating a bleeding region in the input image.
S104, inputting the feature map of the detection frame containing each bleeding area into a classification network to obtain a classification result of each bleeding area, and inputting the feature map of the detection frame containing each bleeding area into a detection frame regression network to obtain the position of each bleeding area.
The classification result in this embodiment is one of the five types of cerebral hemorrhage. When the image contains a plurality of bleeding areas, the classification result includes a classification result for each of the bleeding areas.
The detection frame regression network is used for adjusting the detection frames of the bleeding areas in the characteristic diagram, repositioning the position coordinates of the detection frames and obtaining the accurate positions of the bleeding areas.
In this embodiment, after the computer device obtains the feature map containing the detection box of each bleeding area, it may further input the feature map into the classification network and the detection-frame regression network, so as to obtain the classification result of each bleeding area and the position of each bleeding area.
The method for analyzing cerebral hemorrhage provided by this embodiment includes: obtaining an image to be analyzed including at least one hemorrhage area; inputting the image into a convolutional neural network for feature extraction to obtain a feature map of the image; inputting the feature map into a region-of-interest extraction network to obtain a feature map of the detection frame containing each hemorrhage area; inputting that feature map into a classification network to obtain the classification result of each hemorrhage area; and inputting it into a detection-frame regression network to obtain the position of each hemorrhage area. Because each hemorrhage area is first located and then classified individually, classification accuracy is effectively improved.
Optionally, the present application further provides a specific embodiment: after the computer device acquires the feature map of the detection box containing each bleeding area based on step S103 in the foregoing embodiment, it may further input the feature map into the classification and detection-frame regression network shown in fig. 2A, so as to obtain the classification result of each bleeding area and the position of each bleeding area respectively.
In this embodiment, the classification and detection-frame regression network first uses a first feature processing layer (comprising a convolution layer, a batch normalization layer and an activation function in the drawing) and a second feature processing layer (likewise comprising a convolution layer, a batch normalization layer and an activation function) to perform feature extraction on the input feature map of the detection frame containing each bleeding area, and then inputs the processed feature map into a fully connected layer whose output is passed to the detection-frame regression network, so as to accurately locate the detection frame of each bleeding area and thus obtain the position of each bleeding area. The position coordinates of each bleeding area are then input into the classification network for bleeding-type analysis, yielding the classification result corresponding to each bleeding area.
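As a rough illustration of the two output branches described above (class scores over the five hemorrhage types and detection-frame offsets), here is a minimal NumPy sketch of fully connected heads applied to a flattened ROI feature vector. All shapes, weights and names are hypothetical stand-ins, not the patent's trained network, and the convolution/batch-normalization stages are omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    """Convert raw class scores to probabilities."""
    z = np.exp(x - x.max())
    return z / z.sum()

def head_forward(feat, w_cls, w_reg):
    """Fully connected heads on a flattened ROI feature vector: one branch
    yields probabilities over the five hemorrhage types, the other yields
    four detection-frame offsets (dx, dy, dw, dh)."""
    class_probs = softmax(w_cls @ feat)
    box_deltas = w_reg @ feat
    return class_probs, box_deltas

feat = rng.standard_normal(256)        # flattened ROI feature (size assumed)
w_cls = rng.standard_normal((5, 256))  # 5 hemorrhage classes
w_reg = rng.standard_normal((4, 256))  # 4 box-offset outputs
probs, deltas = head_forward(feat, w_cls, w_reg)
```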
In one embodiment, the present application provides a specific embodiment of the method for analyzing cerebral hemorrhage, in which the image to be analyzed in the above embodiment includes a first hemorrhage region and a second hemorrhage region.
In this application, the step S103 of inputting the feature map into the region-of-interest extraction network to obtain the feature map containing the detection boxes of the bleeding areas may specifically include: inputting the feature map into the region-of-interest extraction network to obtain a feature map comprising a first detection box and a second detection box, where the first detection box contains the first bleeding area and the second detection box contains the second bleeding area.
The step S104 of inputting the feature map of the detection frame containing each bleeding region into the classification network to obtain the classification result of each bleeding region, and inputting it into the detection-frame regression network to obtain the position of each bleeding region, may specifically include: inputting the feature map comprising the first detection frame and the second detection frame into the classification network to obtain the classification result of the first bleeding region and the classification result of the second bleeding region, and inputting the feature map comprising the first detection frame and the second detection frame into the detection-frame regression network to obtain the position of the first bleeding region and the position of the second bleeding region.
In practical applications, the region-of-interest extraction network in the above embodiment may take various forms. The present application provides one type of region-of-interest extraction network that includes a region-of-interest positioning network and a region-of-interest acquisition network. In this application, the step S103 of "inputting the feature map into the region-of-interest extraction network to obtain the feature map of the detection box containing each bleeding region", as shown in fig. 3, includes:
S201, inputting the feature map into the region-of-interest positioning network to obtain the position of the detection frame of each bleeding area in the feature map.
Specifically, the region-of-interest positioning network may employ a region proposal network (RPN), and the position of a detection box may be represented by coordinates.
S202, inputting the positions and the characteristic diagrams of the detection frames of the bleeding areas into an interested area acquisition network to obtain the characteristic diagrams of the detection frames containing the bleeding areas.
Optionally, the region-of-interest acquisition network may employ an ROI Pooling network, an ROI Align network, or the like. The region-of-interest acquisition network in this embodiment specifically employs the ROI Align network, which avoids the shift of imaging pixels caused by the coordinate quantization in the ROI Pooling network.
In one embodiment, the present application provides a structure for the region-of-interest positioning network. As shown in fig. 4, the network includes a window classification network and a window scoring network, wherein the output end of the window classification network is connected to the input end of the window scoring network.
Based on the structure of the region-of-interest positioning network described in the embodiment of fig. 4, fig. 5 is a flowchart of another implementation manner of S201 in the embodiment of fig. 3. As shown in fig. 5, the above S201, "inputting the feature map into the region-of-interest positioning network to obtain the positions of the detection boxes of the bleeding areas in the feature map", includes:
S301, selecting a plurality of candidate areas in the feature map according to a preset sliding window.
The candidate regions represent candidate detection boxes. The sliding window is used for selecting candidate regions on the feature map, and its attributes, such as its size and sliding stride, can be determined in advance by the computer device according to the actual application requirements. In this embodiment, after the computer device acquires the feature map of the image, it can slide the window across the feature map with the preset stride, thereby determining a plurality of candidate regions from the feature map to serve as candidate detection frames.
S302, inputting the feature maps in the candidate regions into a window classification network for foreground and background classification to obtain a classification result of each candidate region.
In this embodiment, after the computer device determines the plurality of candidate regions, it may further input the feature maps within the candidate regions into the pre-trained window classification network for foreground/background classification, so as to obtain a classification result for each candidate region.
S303, inputting each classification result into the window scoring network for score quantization to obtain the score quantization value of each classification result.
In this embodiment, after the computer device obtains the classification results of the candidate regions, it may further input each classification result into the pre-trained window scoring network to perform score quantization for each candidate region, so as to obtain the score quantization value of each classification result.
S304, taking the position of the candidate area corresponding to the scoring quantization value larger than the preset threshold value as the position of the detection frame of the bleeding area.
Wherein the preset threshold represents an expected score value and is determined by the computer device according to actual requirements. In this embodiment, after the computer device obtains the scoring quantization value of each candidate region, it may compare the scoring quantization value of each candidate region with the preset threshold, determine the scoring quantization values greater than the preset threshold, and then use the position of each candidate region whose scoring quantization value is greater than the preset threshold as the position of a detection frame of a bleeding area.
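Once the window classification and scoring networks have produced a score per candidate, step S304 reduces to a simple threshold filter. The sketch below assumes a hypothetical threshold of 0.5 and hand-written scores in place of a trained window scoring network.

```python
def select_detection_frames(candidates, scores, threshold=0.5):
    """Keep only the candidate regions whose scoring quantization value
    exceeds the preset threshold; their positions become the positions of
    the detection frames of the bleeding areas (S304)."""
    return [box for box, score in zip(candidates, scores) if score > threshold]

candidates = [(0, 0, 7, 7), (2, 2, 9, 9), (4, 0, 11, 7)]
scores = [0.91, 0.30, 0.77]   # illustrative scoring quantization values
frames = select_detection_frames(candidates, scores)
# → [(0, 0, 7, 7), (4, 0, 11, 7)]
```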
Fig. 6 is a flowchart of another implementation manner of S202 in the embodiment of fig. 3. As shown in fig. 6, the aforementioned S202, "inputting the position of the detection frame of each bleeding area and the feature map into the region-of-interest acquisition network to obtain a feature map of the detection frame containing each bleeding area", includes:
S401, inputting the position of the detection frame of each bleeding area and the feature map into the region-of-interest acquisition network to obtain a feature map of the detection frame containing each bleeding area.
After the computer device acquires the feature map and the positions of the detection frames of the bleeding areas on the feature map based on the method described in the foregoing embodiment, it may input the positions of the detection frames of the bleeding areas and the feature map into the region-of-interest acquisition network, so that the feature map within each detection frame is extracted from the feature map, thereby obtaining a feature map containing the detection frame of each bleeding area.
S402, performing interpolation processing on the feature map containing the block diagram of each bleeding area by using a bilinear interpolation algorithm, to obtain a feature map of the detection frame containing each bleeding area with a preset size.
In practical applications, the feature maps obtained based on the step S401 differ in size, which would affect the subsequent classification by the classification network. This embodiment therefore provides a method of unifying the sizes of the feature maps: a bilinear interpolation algorithm is used to interpolate the feature map containing the block diagram of each bleeding area, so as to obtain a feature map of the detection frame containing each bleeding area with a preset size.
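A minimal NumPy sketch of the bilinear interpolation in S402, which maps a detection-frame feature map of arbitrary size onto a preset output size. The 3x3 output used in the example is an arbitrary choice, not a size stated in the patent.

```python
import numpy as np

def bilinear_resize(fmap, out_h, out_w):
    """Resize a 2-D feature map to (out_h, out_w) by bilinear interpolation,
    so that detection-frame feature maps of different sizes share one
    preset size before classification."""
    in_h, in_w = fmap.shape
    ys = np.linspace(0, in_h - 1, out_h)   # sampling rows in input coordinates
    xs = np.linspace(0, in_w - 1, out_w)   # sampling columns
    out = np.empty((out_h, out_w))
    for i, y in enumerate(ys):
        y0 = int(np.floor(y))
        y1 = min(y0 + 1, in_h - 1)
        wy = y - y0
        for j, x in enumerate(xs):
            x0 = int(np.floor(x))
            x1 = min(x0 + 1, in_w - 1)
            wx = x - x0
            # weighted sum of the four neighbouring feature values
            out[i, j] = (fmap[y0, x0] * (1 - wy) * (1 - wx)
                         + fmap[y0, x1] * (1 - wy) * wx
                         + fmap[y1, x0] * wy * (1 - wx)
                         + fmap[y1, x1] * wy * wx)
    return out

roi = np.array([[0.0, 1.0], [2.0, 3.0]])
fixed = bilinear_resize(roi, 3, 3)  # every RoI now shares a 3x3 preset size
```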
In summary, the present application further provides an analysis network for cerebral hemorrhage. As shown in fig. 7, the analysis network includes a convolutional neural network, a region-of-interest positioning network, a region-of-interest acquisition network, a classification network, and a detection frame regression network. The convolutional neural network is used to perform feature extraction on the input image to be analyzed to obtain a feature map of the image. The region-of-interest positioning network is used to locate the detection frame of each bleeding area in the input feature map, obtaining the position of the detection frame of each bleeding area. The region-of-interest acquisition network is used to extract, from the input feature map, the feature map contained in each detection frame according to the input detection frame positions. The classification network is used to analyze and classify the input feature map containing each detection frame, obtaining the classification result of each bleeding area. The detection frame regression network is used to adjust the detection frame of each bleeding area in the input feature map containing the detection frames, repositioning the position coordinates of the detection frames to obtain the accurate position of each bleeding area, that is, the detection result. The analysis network for cerebral hemorrhage corresponds to that described in all the foregoing embodiments; for details, please refer to the foregoing description, which is not repeated here.
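The data flow of the analysis network in fig. 7 can be illustrated with trivial placeholder functions standing in for the five trained sub-networks. Every function body here is a hypothetical stub that only demonstrates how the output of one stage feeds the next; none of it reproduces the patent's actual trained models.

```python
import numpy as np

def backbone(image):                 # stand-in for the convolutional neural network
    return image.mean(axis=2) if image.ndim == 3 else image

def locate_rois(fmap):               # stand-in for the RoI positioning network
    return [(0, 0, 4, 4)]            # one detection-frame position (y1, x1, y2, x2)

def crop_rois(fmap, boxes):          # stand-in for the RoI acquisition network
    return [fmap[y1:y2, x1:x2] for (y1, x1, y2, x2) in boxes]

def classify(roi):                   # stand-in for the classification network
    return "hemorrhage" if roi.mean() > 0.5 else "background"

def regress_box(roi, box):           # stand-in for the detection frame regression network
    return box                       # identity placeholder; a real network refines coords

def analyze(image):
    """Chain the five stages of fig. 7: features -> boxes -> RoIs ->
    (classification result, refined position) per bleeding area."""
    fmap = backbone(image)
    boxes = locate_rois(fmap)
    rois = crop_rois(fmap, boxes)
    return [(classify(r), regress_box(r, b)) for r, b in zip(rois, boxes)]

result = analyze(np.ones((8, 8, 3)))
# → [("hemorrhage", (0, 0, 4, 4))]
```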
In some embodiments, the present application further provides a training network for training the detection frame regression network and the classification network. As shown in fig. 8, the training network includes a convolutional neural network, a region of interest extraction network, a region of interest acquisition network, an initial detection frame regression network, an initial classification network, a first optimization module, and a second optimization module, wherein the initial detection frame regression network and the initial classification network represent the networks to be trained. The convolutional neural network, the region of interest extraction network, the region of interest acquisition network, the initial detection frame regression network, and the initial classification network correspond to the convolutional neural network, the region of interest extraction network, the region of interest acquisition network, the detection frame regression network, and the classification network described in the embodiment of fig. 2; for details, please refer to the description of the embodiment of fig. 2, which is not repeated here.
Based on the training network described in the embodiment of fig. 8, the present application further provides a method for training the detection frame regression network and the classification network. As shown in fig. 9, the method specifically includes:
S501, training an initial detection frame regression network by using a Smooth loss function Smooth L1 of detection frame regression to obtain the detection frame regression network.
This embodiment relates to the process by which the computer device trains the initial detection frame regression network. In this training process, the training network shown in fig. 8 may be used. When the initial detection frame regression network in fig. 8 outputs the detection result (the position of each bleeding area), the detection result is substituted into the Smooth loss function Smooth L1 of the detection frame regression to obtain the value of the Smooth loss function Smooth L1, and the parameters of the initial detection frame regression network are then adjusted according to that value, until the value of the Smooth loss function Smooth L1 meets a preset condition, so that a trained detection frame regression network is obtained for use in the embodiment of fig. 2.
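The Smooth L1 loss named in S501 is not spelled out in the patent; the sketch below uses the standard form (quadratic for residuals below 1, linear above), which is the usual choice for detection frame regression because it damps the gradient contributed by outlier boxes.

```python
import numpy as np

def smooth_l1(pred, target):
    """Smooth L1 loss of detection frame regression: 0.5*d^2 where the
    residual |d| < 1, and |d| - 0.5 otherwise, summed over coordinates."""
    d = np.abs(np.asarray(pred, dtype=float) - np.asarray(target, dtype=float))
    return np.where(d < 1.0, 0.5 * d ** 2, d - 0.5).sum()

# small residual falls on the quadratic branch, large on the linear one
loss_small = smooth_l1([0.5], [0.0])   # 0.5 * 0.5^2 = 0.125
loss_large = smooth_l1([2.0], [0.0])   # 2.0 - 0.5 = 1.5
```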
S502, after the regression network of the detection frame is obtained, training an initial classification network by adopting a cross entropy loss function to obtain a classification network.
This embodiment relates to the process by which the computer device trains the initial classification network. In this training process, the training network shown in fig. 8 may be used. When the initial classification network in fig. 8 outputs the classification result, the classification result is substituted into the cross entropy loss function to obtain the value of the cross entropy loss function, and the parameters of the initial classification network are then adjusted according to that value, until the value of the cross entropy loss function meets a preset condition, so that a trained classification network is obtained for use in the embodiment of fig. 2.
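Written per sample, the cross entropy loss of S502 is simply the negative log probability that the classification network assigns to the true bleeding-type label. A minimal sketch:

```python
import numpy as np

def cross_entropy(probs, label):
    """Per-sample cross entropy: -log of the probability assigned to the
    true class label, given an already-normalized probability vector."""
    return -np.log(np.asarray(probs, dtype=float)[label])

# a confident correct prediction costs little; a 50% one costs log(2)
loss = cross_entropy([0.25, 0.25, 0.5], 2)
```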
In some embodiments, the present application further provides a training network for training the detection frame regression network and the classification network. As shown in fig. 10, the training network includes a convolutional neural network, a region of interest extraction network, a region of interest acquisition network, an initial detection frame regression network, an initial classification network, and a third optimization module, wherein the initial detection frame regression network and the initial classification network represent the networks to be trained. The convolutional neural network, the region of interest extraction network, the region of interest acquisition network, the initial detection frame regression network, and the initial classification network correspond to those described in the embodiment of fig. 2; for details, please refer to the description of the embodiment of fig. 2, which is not repeated here.
Based on the training network described in the embodiment of fig. 10, in practical applications, a method for training the above detection frame regression network and classification network is further provided. The method includes: training the initial detection frame regression network and the initial classification network at the same time by using a weighted accumulation sum function of the cross entropy loss function and the Smooth loss function Smooth L1 of the detection frame regression, so as to obtain the classification network and the detection frame regression network.
The third optimization module in the training network may specifically train the initial classification network and the initial detection frame regression network by using a weighted accumulation sum function of the cross entropy loss function and the Smooth loss function Smooth L1 of the detection frame regression. The specific process is as follows: when the initial classification network in fig. 10 outputs a classification result and the initial detection frame regression network outputs a detection result, the computer device first obtains the weighted accumulation sum function of the cross entropy loss function and the Smooth loss function Smooth L1 of the detection frame regression, then substitutes the classification result and the detection result into the weighted accumulation sum function to obtain its value, and finally adjusts the parameters of the initial classification network and the initial detection frame regression network at the same time according to that value, until the value of the weighted accumulation sum function meets a preset condition, so that a trained classification network and detection frame regression network are obtained for use in the embodiment of fig. 2.
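A sketch of the weighted accumulation sum function used by the third optimization module: the cross entropy term and the Smooth L1 term are combined with weights w_cls and w_reg. The weights are hypothetical hyper-parameters, since the patent does not specify the weighting.

```python
import numpy as np

def joint_loss(cls_probs, label, box_pred, box_target, w_cls=1.0, w_reg=1.0):
    """Weighted accumulation sum of the cross entropy loss (classification)
    and the Smooth L1 loss (detection frame regression), so both networks
    can be optimized at the same time from one scalar value."""
    ce = -np.log(np.asarray(cls_probs, dtype=float)[label])        # classification term
    d = np.abs(np.asarray(box_pred, dtype=float)
               - np.asarray(box_target, dtype=float))
    sl1 = np.where(d < 1.0, 0.5 * d ** 2, d - 0.5).sum()           # regression term
    return w_cls * ce + w_reg * sl1

# 50/50 classification plus a 0.5-pixel box residual
loss = joint_loss([0.5, 0.5], 1, [0.5], [0.0])
```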
It should be understood that, although the steps in the flowcharts of figs. 2-6 and 9 are shown in a sequence indicated by the arrows, these steps are not necessarily performed in that sequence. Unless explicitly stated herein, the order of performance of these steps is not strictly limited, and the steps may be performed in other orders. Furthermore, at least a portion of the steps in figs. 2-6 and 9 may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times, and the order of performance of these sub-steps or stages is not necessarily sequential.
In some embodiments, as shown in fig. 11, an analysis device for cerebral hemorrhage is provided, comprising an acquisition module 11, a feature extraction module 12, a region extraction module 13, and a classification and detection module 14, wherein:
an acquisition module 11 for acquiring an image to be analyzed, the image comprising at least one bleeding area;
the feature extraction module 12 is configured to input the image to a convolutional neural network for feature extraction, so as to obtain a feature map of the image;
the region extraction module 13 is configured to input the feature map into an interested region extraction network, so as to obtain a feature map of a detection frame including each bleeding region;
the classification and detection module 14 is configured to input the feature map of the detection frame including each bleeding area into the classification network to obtain a classification result of each bleeding area, and input the feature map of the detection frame including each bleeding area into the detection frame regression network to obtain a position of each bleeding area.
In some embodiments, as shown in fig. 12, the region extracting module 13 includes:
the positioning unit 131 is configured to input the feature map into the interest area positioning network to obtain the positions of the detection frames of the bleeding areas in the feature map;
the obtaining unit 132 is configured to input the position and the feature map of the detection frame of each bleeding area into the area-of-interest obtaining network, so as to obtain a feature map of the detection frame including each bleeding area.
In some embodiments, the positioning unit 131 is specifically configured to: select a plurality of candidate regions from the feature map according to a preset sliding window; input the feature maps in the candidate regions into a window classification network for foreground and background classification, so as to obtain a classification result of each candidate region; input each classification result into a window scoring network for scoring quantization, so as to obtain a scoring quantization value of each classification result; and use the position of each candidate region whose scoring quantization value is greater than a preset threshold as the position of a detection frame of a bleeding area.
In some embodiments, the obtaining unit 132 is specifically configured to: input the position of the detection frame of each bleeding area and the feature map into the region-of-interest acquisition network to obtain a feature map containing the block diagram of each bleeding area; and perform interpolation processing on the feature map containing the block diagram of each bleeding area by using a bilinear interpolation algorithm, to obtain a feature map of the detection frame containing each bleeding area in a preset size.
In some embodiments, as shown in fig. 13, a training device is provided, comprising a training module 21 of an initial detection frame regression network and a training module 22 of an initial classification network, wherein:
the training module 21 of the initial detection frame regression network is used for training the initial detection frame regression network by using a Smooth loss function Smooth L1 of detection frame regression to obtain a detection frame regression network;
and the training module 22 of the initial classification network is used for training the initial classification network by adopting a cross entropy loss function after the regression network of the detection frame is obtained, so as to obtain the classification network.
In some embodiments, a training device is further provided, which is used for training the initial detection frame regression network and the initial classification network at the same time by adopting a weighted accumulation sum function of a cross entropy loss function and a Smooth loss function Smooth L1 of detection frame regression, so as to obtain the classification network and the detection frame regression network.
For the specific limitations of the analysis apparatus for cerebral hemorrhage, reference may be made to the limitations of the analysis method for cerebral hemorrhage above, which are not described herein again.
In some embodiments, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, wherein the processor, when executing the computer program, implements the following steps:
acquiring an image to be analyzed, wherein the image comprises at least one bleeding area;
inputting the image into a convolutional neural network for feature extraction to obtain a feature map of the image;
inputting the feature map into an interested area extraction network to obtain a feature map of a detection frame containing each bleeding area;
inputting the feature map of the detection frame containing each bleeding area into a classification network to obtain a classification result of each bleeding area, and inputting the feature map of the detection frame containing each bleeding area into a detection frame regression network to obtain the position of each bleeding area.
The implementation principle and technical effect of the computer device provided by the above embodiment are similar to those of the above method embodiments, and are not described herein again.
In some embodiments, a computer-readable storage medium is provided, having stored thereon a computer program which, when executed by a processor, implements the following steps:
acquiring an image to be analyzed, wherein the image comprises at least one bleeding area;
inputting the image into a convolutional neural network for feature extraction to obtain a feature map of the image;
inputting the feature map into an interested area extraction network to obtain a feature map of a detection frame containing each bleeding area;
inputting the feature map of the detection frame containing each bleeding area into a classification network to obtain a classification result of each bleeding area, and inputting the feature map of the detection frame containing each bleeding area into a detection frame regression network to obtain the position of each bleeding area.
The implementation principle and technical effect of the computer-readable storage medium provided by the above embodiment are similar to those of the above method embodiments, and are not described herein again.
It will be understood by those of ordinary skill in the art that all or a portion of the processes of the methods of the embodiments described above may be implemented by a computer program stored in a non-volatile computer-readable storage medium; when executed, the program may include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory.
The technical features of the above embodiments can be combined arbitrarily. For the sake of brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as such combinations involve no contradiction, they should be considered within the scope of this specification.
The above-mentioned embodiments express only several embodiments of the present invention, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the inventive concept, all of which fall within the protection scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A method for analyzing cerebral hemorrhage, the method comprising:
acquiring an image to be analyzed, wherein the image comprises at least one bleeding area;
inputting the image into a convolutional neural network for feature extraction to obtain a feature map of the image;
inputting the feature map into an interested area extraction network to obtain a feature map of a detection frame containing each bleeding area;
inputting the feature map of the detection frame containing each bleeding area into a classification network to obtain a classification result of each bleeding area, and inputting the feature map of the detection frame containing each bleeding area into a detection frame regression network to obtain the position of each bleeding area.
2. The method according to claim 1, wherein the image includes a first bleeding area and a second bleeding area;
inputting the feature map into an interested area extraction network to obtain a feature map of a detection frame containing each bleeding area, wherein the feature map comprises:
inputting the feature map into an interested area extraction network to obtain a feature map comprising a first detection box and a second detection box, wherein the first detection box comprises the first bleeding area, and the second detection box comprises the second bleeding area;
the inputting the feature map of the detection frame containing each bleeding area into a classification network to obtain a classification result of each bleeding area, and inputting the feature map of the detection frame containing each bleeding area into a detection frame regression network to obtain a position of each bleeding area includes:
inputting the feature map comprising the first detection box and the second detection box into the classification network to obtain a classification result comprising the first bleeding area and a classification result comprising the second bleeding area, and inputting the feature map comprising the first detection box and the second detection box into the detection box regression network to obtain a position of the first bleeding area and a position of the second bleeding area.
3. The method according to claim 1 or 2, wherein the region of interest extraction network comprises a region of interest positioning network and a region of interest acquisition network, and the inputting the feature map into the region of interest extraction network to obtain a feature map containing detection boxes of the bleeding areas comprises:
inputting the characteristic diagram into an interested area positioning network to obtain the position of a detection frame of each bleeding area in the characteristic diagram;
and inputting the position of the detection frame of each bleeding area and the characteristic diagram into an interested area acquisition network to obtain the characteristic diagram of the detection frame containing each bleeding area.
4. The method according to claim 3, wherein the region of interest positioning network comprises a window classification network and a window scoring network, and the inputting the feature map into the region of interest positioning network to obtain the position of the detection box of each bleeding region in the feature map comprises:
selecting a plurality of candidate areas in the feature map according to a preset sliding window;
inputting the feature maps in the candidate regions into the window classification network for foreground and background classification to obtain a classification result of each candidate region;
inputting each classification result into the window scoring network for scoring quantization to obtain a scoring quantization value of each classification result;
and taking the position of the candidate area corresponding to the scoring quantification value larger than the preset threshold value as the position of the detection frame of the bleeding area.
5. The method according to claim 3, wherein the inputting the position of the detection box of each bleeding area and the feature map into an area-of-interest acquisition network to obtain the feature map of the detection box containing each bleeding area comprises:
inputting the positions of the detection frames of the bleeding areas and the characteristic diagram into an interested area acquisition network to obtain a characteristic diagram containing a block diagram of each bleeding area;
and performing interpolation processing on the characteristic diagram of the block diagram containing each bleeding area by adopting a bilinear interpolation algorithm to obtain a characteristic diagram of a detection frame containing each bleeding area with a preset size.
6. The method of claim 1, wherein training the detection box regression network and the classification network comprises:
training an initial detection frame regression network by using a Smooth loss function Smooth L1 of detection frame regression to obtain the detection frame regression network;
and after the detection frame regression network is obtained, training an initial classification network by adopting a cross entropy loss function to obtain the classification network.
7. The method of claim 1, wherein training the detection box regression network and the classification network comprises:
and simultaneously training an initial detection frame regression network and an initial classification network by adopting a weighted accumulation sum function of a cross entropy loss function and a Smooth L1 loss function of detection frame regression to obtain the classification network and the detection frame regression network.
8. The method of claim 1, wherein the convolutional neural network is a three-dimensional residual network, and the region-of-interest positioning network is a region proposal network (RPN).
9. A computer device, comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, carries out the steps of the method of any one of claims 1 to 8.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 8.
CN201910950855.3A 2019-10-08 2019-10-08 Analysis method for cerebral hemorrhage, computer device and storage medium Active CN110738643B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910950855.3A CN110738643B (en) 2019-10-08 2019-10-08 Analysis method for cerebral hemorrhage, computer device and storage medium

Publications (2)

Publication Number Publication Date
CN110738643A true CN110738643A (en) 2020-01-31
CN110738643B CN110738643B (en) 2023-07-28


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111402218A (en) * 2020-03-11 2020-07-10 北京深睿博联科技有限责任公司 Cerebral hemorrhage detection method and device
CN111445451A (en) * 2020-03-20 2020-07-24 上海联影智能医疗科技有限公司 Brain image processing method, system, computer device and storage medium
CN111445457A (en) * 2020-03-26 2020-07-24 北京推想科技有限公司 Network model training method and device, network model identification method and device, and electronic equipment
CN111598882A (en) * 2020-05-19 2020-08-28 联想(北京)有限公司 Organ detection method and device and computer equipment
CN114511908A (en) * 2022-01-27 2022-05-17 北京百度网讯科技有限公司 Face living body detection method and device, electronic equipment and storage medium
CN114972255A (en) * 2022-05-26 2022-08-30 深圳市铱硙医疗科技有限公司 Image detection method and device for cerebral microhemorrhage, computer equipment and storage medium

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107274406A (en) * 2017-08-07 2017-10-20 北京深睿博联科技有限责任公司 A kind of method and device of detection sensitizing range
CN107292884A (en) * 2017-08-07 2017-10-24 北京深睿博联科技有限责任公司 The method and device of oedema and hemotoncus in a kind of identification MRI image
CN108369642A (en) * 2015-12-18 2018-08-03 加利福尼亚大学董事会 Acute disease feature is explained and quantified according to head computer tomography
CN108376235A (en) * 2018-01-15 2018-08-07 深圳市易成自动驾驶技术有限公司 Image detecting method, device and computer readable storage medium
US20190065897A1 (en) * 2017-08-28 2019-02-28 Boe Technology Group Co., Ltd. Medical image analysis method, medical image analysis system and storage medium
WO2019051271A1 (en) * 2017-09-08 2019-03-14 The General Hospital Corporation Systems and methods for brain hemorrhage classification in medical images using an artificial intelligence network
WO2019051411A1 (en) * 2017-09-08 2019-03-14 The General Hospital Corporation Method and systems for analyzing medical image data using machine learning
CN109543662A (en) * 2018-12-28 2019-03-29 广州海昇计算机科技有限公司 Object detection method, system, device and the storage medium proposed based on region
CN109584209A (en) * 2018-10-29 2019-04-05 深圳先进技术研究院 Vascular wall patch identifies equipment, system, method and storage medium
CN110096960A (en) * 2019-04-03 2019-08-06 罗克佳华科技集团股份有限公司 Object detection method and device
CN110189318A (en) * 2019-05-30 2019-08-30 江南大学附属医院(无锡市第四人民医院) Pulmonary nodule detection method and system with semantic feature score

Also Published As

Publication number Publication date
CN110738643B (en) 2023-07-28

Similar Documents

Publication Publication Date Title
CN110738643A (en) Method for analyzing cerebral hemorrhage, computer device and storage medium
US10810735B2 (en) Method and apparatus for analyzing medical image
CN110232383B (en) Focus image recognition method and focus image recognition system based on deep learning model
US11633169B2 (en) Apparatus for AI-based automatic ultrasound diagnosis of liver steatosis and remote medical diagnosis method using the same
US11593943B2 (en) RECIST assessment of tumour progression
CN109978037B (en) Image processing method, model training method, device and storage medium
US11380084B2 (en) System and method for surgical guidance and intra-operative pathology through endo-microscopic tissue differentiation
US9480439B2 (en) Segmentation and fracture detection in CT images
US7792339B2 (en) Method and apparatus for intracerebral hemorrhage lesion segmentation
KR20210048523A (en) Image processing method, apparatus, electronic device and computer-readable storage medium
Ikhsan et al. An analysis of x-ray image enhancement methods for vertebral bone segmentation
CN110415792B (en) Image detection method, image detection device, computer equipment and storage medium
EP3998579B1 (en) Medical image processing method, apparatus and device, medium and endoscope
CN111539956B (en) Cerebral hemorrhage automatic detection method based on brain auxiliary image and electronic medium
CN110599421A (en) Model training method, video fuzzy frame conversion method, device and storage medium
US11967181B2 (en) Method and device for retinal image recognition, electronic equipment, and storage medium
WO2021178419A1 (en) Method and system for performing image segmentation
CN115359066B (en) Focus detection method and device for endoscope, electronic device and storage medium
CN111951276A (en) Image segmentation method and device, computer equipment and storage medium
CN114332132A (en) Image segmentation method and device and computer equipment
CN110533120B (en) Image classification method, device, terminal and storage medium for organ nodule
CN113160199B (en) Image recognition method and device, computer equipment and storage medium
US20230237657A1 (en) Information processing device, information processing method, program, model generating method, and training data generating method
CN111640127A (en) Accurate clinical diagnosis navigation method for orthopedics department
CN111160442A (en) Image classification method, computer device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant