CN110738643B - Analysis method for cerebral hemorrhage, computer device and storage medium - Google Patents

Analysis method for cerebral hemorrhage, computer device and storage medium

Info

Publication number
CN110738643B
CN110738643B (application CN201910950855.3A)
Authority
CN
China
Prior art keywords
network
detection frame
bleeding
region
inputting
Prior art date
Legal status
Active
Application number
CN201910950855.3A
Other languages
Chinese (zh)
Other versions
CN110738643A (en)
Inventor
崔益峰
石峰
Current Assignee
Shanghai United Imaging Intelligent Healthcare Co Ltd
Original Assignee
Shanghai United Imaging Intelligent Healthcare Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai United Imaging Intelligent Healthcare Co Ltd filed Critical Shanghai United Imaging Intelligent Healthcare Co Ltd
Priority to CN201910950855.3A priority Critical patent/CN110738643B/en
Publication of CN110738643A publication Critical patent/CN110738643A/en
Application granted granted Critical
Publication of CN110738643B publication Critical patent/CN110738643B/en

Classifications

    • G06T 7/0012 Image analysis; inspection of images, e.g. flaw detection; biomedical image inspection
    • G06F 18/241 Pattern recognition; classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N 3/045 Neural networks; architecture; combinations of networks
    • G06T 2207/20081 Special algorithmic details; training; learning
    • G06T 2207/20084 Special algorithmic details; artificial neural networks [ANN]
    • G06T 2207/20104 Interactive image processing based on input by user; interactive definition of region of interest [ROI]
    • Y02A 90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • Quality & Reliability (AREA)
  • Computational Linguistics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Medical Informatics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention relates to a method, a computer device and a storage medium for analyzing cerebral hemorrhage. The method comprises: acquiring an image to be analyzed that contains at least one bleeding region; inputting the image into a convolutional neural network for feature extraction to obtain a feature map of the image; inputting the feature map into a region of interest extraction network to obtain a feature map containing a detection frame for each bleeding region; inputting that feature map into a classification network to obtain a classification result for each bleeding region; and inputting it into a detection frame regression network to obtain the position of each bleeding region. The method detects and classifies images containing multiple hemorrhage regions at the same time, which improves both the applicability and the classification accuracy of cerebral hemorrhage analysis.

Description

Analysis method for cerebral hemorrhage, computer device and storage medium
Technical Field
The application relates to the technical field of neural network learning, in particular to a cerebral hemorrhage analysis method, computer equipment and a storage medium.
Background
Cerebral hemorrhage refers to bleeding caused by traumatic or non-traumatic rupture of intracranial blood vessels. It is characterized by sudden onset, a critical and complicated course, high mortality and high disability rates, and is the second leading cause of death worldwide after ischemic heart disease. Early mortality is very high: about half of patients die within days of onset, and most survivors are left with sequelae of varying severity. Accurately locating a cerebral hemorrhage at an early stage is therefore particularly important for its treatment.
At present, cerebral hemorrhage is divided by bleeding location into five types: intraparenchymal hemorrhage, intraventricular hemorrhage, subarachnoid hemorrhage, subdural hemorrhage and epidural hemorrhage. The type of a cerebral hemorrhage is currently diagnosed mainly by feeding the CT image scanned by the detection equipment directly into a classification network, which classifies the hemorrhage type and outputs the classification result. The classification network is a 2D or 3D convolutional neural network trained in advance on samples of different cerebral hemorrhage cases.
However, this approach of classifying the hemorrhage type directly with a classification network suffers from low classification accuracy.
Disclosure of Invention
In view of the foregoing, it is desirable to provide an analysis method, a computer device, and a storage medium for cerebral hemorrhage, which can effectively improve classification accuracy.
In a first aspect, a method of analyzing cerebral hemorrhage, the method comprising:
acquiring an image to be analyzed; the image includes at least one bleeding area;
inputting the image into a convolutional neural network for feature extraction to obtain a feature map of the image;
inputting the feature map into a region of interest extraction network to obtain a feature map of a detection frame containing each bleeding region;
inputting the feature images of the detection frames containing the bleeding areas into a classification network to obtain classification results of the bleeding areas, and inputting the feature images of the detection frames containing the bleeding areas into a detection frame regression network to obtain positions of the bleeding areas.
In one embodiment, the image includes a first bleed area and a second bleed area,
inputting the feature map to a region of interest extraction network to obtain a feature map of a detection frame containing each bleeding region, including:
inputting the feature map into a region of interest extraction network to obtain a feature map comprising a first detection frame and a second detection frame; the first detection frame comprises a first bleeding area, and the second detection frame comprises a second bleeding area;
Inputting the feature map of the detection frame containing each bleeding area into a classification network to obtain a classification result of each bleeding area, and inputting the feature map of the detection frame containing each bleeding area into a detection frame regression network to obtain the position of each bleeding area, wherein the method comprises the following steps:
inputting the feature images comprising the first detection frame and the second detection frame into a classification network to obtain a classification result comprising the first bleeding area and a classification result comprising the second bleeding area, and inputting the feature images comprising the first detection frame and the second detection frame into a detection frame regression network to obtain the position of the first bleeding area and the position of the second bleeding area.
In one embodiment, the region of interest extraction network includes a region of interest positioning network and a region of interest acquisition network, and inputting the feature map to the region of interest extraction network to obtain a feature map including detection frames of each bleeding region, including:
inputting the feature map to a region of interest positioning network to obtain the positions of detection frames of all bleeding regions in the feature map;
and inputting the position and the feature map of the detection frame of each bleeding area into a region-of-interest acquisition network to obtain the feature map of the detection frame containing each bleeding area.
In one embodiment, the region of interest positioning network includes a window classification network and a window scoring network, and inputting the feature map to the region of interest positioning network to obtain the positions of the detection frames of the bleeding areas in the feature map, including:
selecting a plurality of candidate areas from the feature map according to a preset sliding window;
inputting the feature images in the multiple candidate areas into a window classification network to classify the foreground and the background, and obtaining classification results of the candidate areas;
inputting each classification result into a window scoring network for scoring and quantifying to obtain a scoring quantification value of each classification result;
and taking the position of the candidate region corresponding to the scoring quantization value larger than the preset threshold as the position of the detection frame of the bleeding region.
In one embodiment, inputting the position and the feature map of the detection frame of each bleeding area into the region of interest acquisition network to obtain a feature map of the detection frame including each bleeding area, including:
inputting the position of the detection frame of each bleeding area and the feature map into a region-of-interest acquisition network to obtain a feature map containing the frame of each bleeding area;
and interpolating the feature map containing the frame of each bleeding area with a bilinear interpolation algorithm to obtain a feature map of preset size containing the detection frame of each bleeding area.
In one embodiment, training the detection box regression network and classification network includes:
training an initial detection frame regression network by adopting the Smooth L1 loss function for detection frame regression to obtain a detection frame regression network;
after obtaining the detection frame regression network, training an initial classification network by adopting a cross entropy loss function to obtain a classification network.
In one embodiment, training the detection box regression network and classification network includes:
and simultaneously training an initial detection frame regression network and an initial classification network by adopting a weighted accumulation sum function of a cross entropy loss function and the Smooth L1 loss function for detection frame regression, to obtain a classification network and a detection frame regression network.
In one embodiment, the convolutional neural network is a three-dimensional residual network, and the region of interest positioning network is a region proposal network (RPN).
In a second aspect, a computer device includes a memory and a processor, where the memory stores a computer program, and the processor implements the method for analyzing cerebral hemorrhage according to any embodiment of the first aspect when executing the computer program.
In a third aspect, a computer readable storage medium has stored thereon a computer program which, when executed by a processor, implements the method for analyzing cerebral hemorrhage according to any one of the embodiments of the first aspect.
With the analysis method, computer device and storage medium for cerebral hemorrhage described above, an image to be analyzed containing at least one bleeding region is acquired; the image is input into a convolutional neural network for feature extraction to obtain a feature map of the image; the feature map is input into a region of interest extraction network to obtain a feature map containing the detection frame of each bleeding region; that feature map is input into a classification network to obtain the classification result of each bleeding region, and into a detection frame regression network to obtain the position of each bleeding region. The method detects and classifies images containing multiple hemorrhage regions at the same time, which improves the applicability of the cerebral hemorrhage analysis method. Moreover, the computer device classifies the bleeding regions in the image based on the feature map of the detection frames that contain them, which amounts to first detecting the positions of the bleeding regions and then classifying at those positions, so the classification is targeted. In addition, because an image to be analyzed in this method may contain several bleeding regions, a sample containing such an image can be regarded as covering several patient cases, which further alleviates the problem of small data volume caused by the limited number of cases patients can provide in practice.
Drawings
FIG. 1 is a schematic diagram of an internal structure of a computer device according to one embodiment;
FIG. 2 is a flow chart of a method for analyzing cerebral hemorrhage according to one embodiment;
FIG. 2A is a schematic diagram of a classification and detection frame regression network according to one embodiment;
FIG. 3 is a flow chart of a method for analyzing cerebral hemorrhage according to one embodiment;
FIG. 4 is a schematic diagram of a region of interest location network according to one embodiment;
FIG. 5 is a flow chart of another implementation of S201 in the embodiment of FIG. 3;
FIG. 6 is a flow chart of another implementation of S202 in the embodiment of FIG. 3;
FIG. 7 is a schematic diagram of a network for analyzing cerebral hemorrhage according to one embodiment;
FIG. 8 is a schematic diagram of a training network according to one embodiment;
FIG. 9 is a flow chart of a training method provided by one embodiment;
FIG. 10 is a schematic diagram of a training network according to one embodiment;
FIG. 11 is a schematic diagram showing a structure of an apparatus for analyzing cerebral hemorrhage according to an embodiment;
FIG. 12 is a schematic diagram showing a structure of an apparatus for analyzing cerebral hemorrhage according to an embodiment;
fig. 13 is a schematic structural diagram of an analysis device for cerebral hemorrhage according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
The analysis method of cerebral hemorrhage provided by the application can be applied to computer equipment shown in figure 1. The computer device may be a terminal, and its internal structure may be as shown in fig. 1. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a method of analyzing cerebral hemorrhage. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, can also be keys, a track ball or a touch pad arranged on the shell of the computer equipment, and can also be an external keyboard, a touch pad or a mouse and the like.
It will be appreciated by those skilled in the art that the structure shown in fig. 1 is merely a block diagram of some of the structures associated with the present application and is not limiting of the computer device to which the present application may be applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
The following will specifically describe the technical solutions of the present application and how the technical solutions of the present application solve the above technical problems by means of examples and with reference to the accompanying drawings. The following embodiments may be combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments.
Fig. 2 is a flowchart of a method for analyzing cerebral hemorrhage. The method is implemented with the computer device of fig. 1 and concerns the specific process by which the computer device detects and analyzes an image of cerebral hemorrhage to be analyzed and obtains the classification result and the position of each hemorrhage area. As shown in fig. 2, the method specifically includes the following steps:
S101, acquiring an image to be analyzed; the image includes at least one bleeding area.
The image to be analyzed is an image containing brain structure on which the position of each cerebral hemorrhage region is to be detected and the hemorrhage type of each region classified. Such images include, but are not limited to, conventional CT images, MRI images and PET-MRI images; this embodiment is not limited in this respect. In practice, the computer device may obtain the image to be analyzed by scanning the subject's head with a connected scanning device, for example acquiring a CT image with a computed tomography scanner. Alternatively, the computer device may obtain an image containing brain structure directly from a database or from the internet, which this embodiment likewise does not limit. Note that the image to be analyzed in this embodiment may contain one or several bleeding regions. If such images are used as training samples, each image can stand in for several patient cases, which indirectly increases the available data volume and alleviates the small-data problem of the traditional approach, where one patient case yields one sample.
Alternatively, the computer device may acquire an image containing the brain structure in advance and preprocess it to obtain the image to be analyzed. Preprocessing may include skull stripping, image normalization, data organization and other image-processing methods. First, skull stripping: in practice the skull appears as a high-intensity signal, much brighter than the brain tissue in a head image, so the contrast within brain tissue is relatively low, which affects detection accuracy; the skull therefore needs to be removed. This implementation segments the brain tissue with a simple 3D V-Net network to strip the skull from the acquired image. Second, image normalization: the head position in a head CT image may be shifted because the patient moved during imaging or because of problems with the imaging equipment. Such offsets affect detection accuracy to some degree, so the deflection angle of the head is estimated with methods such as principal component analysis, and the whole volume is rotated into a standardized orientation. Finally, data organization: because CT acquisitions differ in slice thickness, the samples are resampled to a size of 512 × 512 × 512 mm³ at a resolution of 1 × 1 × 1 mm³. Considering that hemorrhage in CT images lies in the range of 60 to 85 HU, the CT values of all three-dimensional images are clipped to between 0 and 95 to unify the data range across images.
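To make the data-organization step concrete, here is a minimal Python sketch of the resampling and CT-value clipping described above; the skull stripping with 3D V-Net and the PCA-based rotation are omitted, and the function name, the use of scipy, and the final scaling to [0, 1] are illustrative assumptions rather than the patented implementation:

```python
import numpy as np
from scipy import ndimage

def preprocess_ct(volume: np.ndarray, spacing: tuple) -> np.ndarray:
    """Hypothetical preprocessing for a 3D head CT volume.

    volume  -- 3D CT array (z, y, x) in Hounsfield units
    spacing -- voxel spacing in mm along each axis
    """
    # Resample to an isotropic 1 x 1 x 1 mm resolution (order=1: linear).
    target_mm = 1.0
    zoom_factors = [s / target_mm for s in spacing]
    volume = ndimage.zoom(volume, zoom_factors, order=1)

    # Hemorrhage lies roughly in the 60-85 HU range, so clip the whole
    # volume to [0, 95] to unify the data range across images.
    volume = np.clip(volume, 0, 95)

    # Scale to [0, 1] for network input (an assumption; the patent only
    # specifies the clipping range).
    return (volume - volume.min()) / (volume.max() - volume.min() + 1e-8)
```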
S102, inputting the image into a convolutional neural network for feature extraction, and obtaining a feature map of the image.
The convolutional neural network extracts features from the input image to be analyzed. It may take various network structures, for example a CNN structure or a V-Net structure. Optionally, the convolutional neural network in this embodiment is a 50-layer three-dimensional residual network (ResNet-50), whose residual connections alleviate the vanishing-gradient problem of deep networks. In practice, once the computer device has obtained the image to be analyzed, it inputs the image into the pre-trained convolutional neural network for feature extraction to obtain the feature map of the image.
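As an illustration of the residual mechanism this embodiment relies on, below is a minimal 3D residual block in PyTorch. It is a sketch of one building block, not the full 50-layer ResNet-50, and the channel count and layer layout are assumptions:

```python
import torch
import torch.nn as nn

class ResidualBlock3D(nn.Module):
    """Minimal 3D residual block: the skip connection lets gradients
    bypass the convolutions, which is the property the text credits
    with mitigating vanishing gradients in deep backbones."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm3d(channels)
        self.conv2 = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm3d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        identity = x
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + identity)  # residual (skip) connection
```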
S103, inputting the feature map into a region of interest extraction network to obtain a feature map of a detection frame containing each bleeding region.
Wherein the detection frame locates a bleeding region in the input image, and the region of interest extraction network extracts the detection frame of each bleeding region from the input feature map. Specifically, the region of interest extraction network may be a network that identifies the detection frames of bleeding regions in the feature map; optionally, it may also be a network that locates those detection frames. It may include a region proposal network (RPN), a region of interest selection network (ROI Align) and other types of region selection networks. In this embodiment, once the computer device has obtained the feature map of the input image, it inputs the feature map into the pre-trained region of interest extraction network to extract the feature image inside each detection frame, obtaining the feature map containing the detection frame of each bleeding region for later use.
S104, inputting the feature map of the detection frame containing each bleeding area into a classification network to obtain a classification result of each bleeding area, and inputting the feature map of the detection frame containing each bleeding area into a detection frame regression network to obtain the position of each bleeding area.
The classification result indicates the qualitative type of a bleeding region. In practice, according to the hemorrhage location, cerebral hemorrhage is divided into five types: intraparenchymal, intraventricular, subarachnoid, subdural and epidural hemorrhage. The classification result in this embodiment is one of these five types, and when the image contains several bleeding regions it comprises the classification result of each region. The classification network classifies the feature map of the detection frames of the bleeding regions to obtain the classification result of each bleeding region.
The detection frame regression network is used for adjusting the detection frames of the bleeding areas in the feature map, repositioning the position coordinates of the detection frames, and obtaining the accurate positions of the bleeding areas.
In this embodiment, after the computer device obtains the feature map of the detection frame including each bleeding area, the feature map may be further input to the classification network and the detection frame regression network, respectively, to obtain the classification result of each bleeding area and the position of each bleeding area.
With the analysis method for cerebral hemorrhage described above, an image to be analyzed containing at least one bleeding region is acquired; the image is input into a convolutional neural network for feature extraction to obtain a feature map of the image; the feature map is input into a region of interest extraction network to obtain a feature map containing the detection frame of each bleeding region; that feature map is input into a classification network to obtain the classification result of each bleeding region, and into a detection frame regression network to obtain the position of each bleeding region. The method detects and classifies images containing multiple hemorrhage regions at the same time, which improves its applicability. Moreover, the computer device classifies the bleeding regions in the image based on the feature map of the detection frames that contain them, which amounts to first detecting the positions of the bleeding regions and then classifying at those positions, so the classification is targeted. In addition, because an image to be analyzed in this method may contain several bleeding regions, a sample containing such an image can be regarded as covering several patient cases, which further alleviates the problem of small data volume caused by the limited number of cases patients can provide in practice.
Optionally, the present application further provides a specific embodiment, that is, after the computer device obtains the feature map of the detection frame including each bleeding area based on step S103 in the foregoing embodiment, the feature map may be further input into a classification and detection frame regression network as shown in fig. 2A, to obtain a classification result of each bleeding area and a position of each bleeding area respectively.
In this embodiment, the classification and detection frame regression network first applies a first feature processing layer (a convolution layer, a batch normalization layer and an activation function in the figure) and a second feature processing layer (likewise a convolution layer, a batch normalization layer and an activation function) to the input feature map of the detection frames containing the bleeding areas, then feeds the processed feature map to a fully connected layer whose output goes to the detection frame regression network. The regression network accurately locates the detection frame of each bleeding area and thereby yields the position of each bleeding area; the position coordinates are then input to the classification network, which analyzes the hemorrhage type and produces the classification result for each bleeding area. With this method, the computer device first detects the accurate position of each detection frame, i.e. of each bleeding area, with the regression network, and then classifies the hemorrhage type at that accurate position, which makes the classification result more accurate.
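A hedged PyTorch sketch of the Fig. 2A head follows. The channel counts, ROI size, the (x, y, z, w, h, d) box parametrisation and the parallel return of both branches are assumptions; the patent fixes only the layer types (convolution, batch normalization, activation, fully connected) and the two output branches:

```python
import torch
import torch.nn as nn

class BoxClassHead(nn.Module):
    """Sketch of the Fig. 2A head: two conv + batch-norm + activation
    feature processing layers, a fully connected layer, then a
    detection-frame regression branch and a classification branch
    (num_classes = 5 hemorrhage types)."""

    def __init__(self, in_ch: int = 128, roi: int = 7, num_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_ch, in_ch, 3, padding=1), nn.BatchNorm3d(in_ch), nn.ReLU(),
            nn.Conv3d(in_ch, in_ch, 3, padding=1), nn.BatchNorm3d(in_ch), nn.ReLU(),
        )
        self.fc = nn.Linear(in_ch * roi ** 3, 1024)
        self.box_reg = nn.Linear(1024, 6)         # refined frame coordinates
        self.cls = nn.Linear(1024, num_classes)   # hemorrhage type logits

    def forward(self, roi_feats: torch.Tensor):
        x = self.features(roi_feats).flatten(1)   # shared feature layers
        x = torch.relu(self.fc(x))                # fully connected layer
        return self.box_reg(x), self.cls(x)       # positions and types
```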
In one embodiment, the present application provides a specific embodiment of an analysis method for cerebral hemorrhage, that is, the image to be analyzed in the above embodiment includes a first hemorrhage area and a second hemorrhage area.
Under such an application, S103 "input the feature map to the region of interest extraction network to obtain a feature map including a detection frame of each bleeding region" in the above embodiment may specifically include: inputting the feature map into a region of interest extraction network to obtain a feature map comprising a first detection frame and a second detection frame; the first detection frame comprises a first bleeding area, and the second detection frame comprises a second bleeding area.
In the above embodiment, S104 "inputting the feature map of the detection frame including each bleeding area into the classification network to obtain the classification result of each bleeding area, and inputting the feature map of the detection frame including each bleeding area into the detection frame regression network to obtain the position of each bleeding area" may specifically include: inputting the feature images comprising the first detection frame and the second detection frame into a classification network to obtain a classification result comprising the first bleeding area and a classification result comprising the second bleeding area, and inputting the feature images comprising the first detection frame and the second detection frame into a detection frame regression network to obtain the position of the first bleeding area and the position of the second bleeding area.
In practical applications, the region of interest extraction network in the above embodiment may include multiple types of region extraction networks, and the application provides a region of interest extraction network, where the region of interest extraction network includes a region of interest positioning network and a region of interest obtaining network, and in this application, the step of S103 "inputting a feature map into the region of interest extraction network to obtain a feature map of a detection frame including each bleeding region" includes, as shown in fig. 3:
S201, inputting the feature map into a region of interest positioning network to obtain the positions of the detection frames of the bleeding regions in the feature map.
The region of interest positioning network locates the detection frame of each bleeding region in the input feature map. In particular, it may employ a region proposal network (RPN). The position of a detection frame may be expressed in coordinates. In this embodiment, once the computer device has obtained the feature map of the input image, it inputs the feature map into the pre-trained region of interest positioning network to obtain the positions of the detection frames of the bleeding regions in the feature map.
S202, inputting the position of the detection frame of each bleeding area and the feature map into a region of interest acquisition network to obtain the feature map of the detection frame containing each bleeding area.
The region of interest acquisition network extracts, from the feature map, the feature map contained in each detection frame according to the input detection frame positions. Optionally, it may adopt an ROI Align network, an ROI Pooling network or the like; this embodiment specifically adopts ROI Align, which avoids the pixel misalignment caused by the quantization in ROI Pooling. In this embodiment, once the computer device has obtained the feature map of the input image and the positions of the detection frames of the bleeding areas on it, it inputs the feature map and the detection frame positions into the pre-trained region of interest acquisition network to obtain the feature map containing the detection frame of each bleeding area.
In one embodiment, the present application provides an architecture for a region of interest location network, as shown in FIG. 4, that includes a window classification network and a window scoring network. The output end of the window classification network is connected with the input end of the window scoring network. The window classification network is used for classifying the foreground and the background of the input feature images containing the candidate areas to obtain classification results of the candidate areas. The window scoring network is used for scoring and quantifying each classification result output by the window classification network to obtain a scoring and quantifying value of each classification result, and then outputting a feature map contained in a candidate region corresponding to the classification result meeting the scoring requirement.
Based on the structure of the region of interest positioning network according to the embodiment of fig. 4, fig. 5 is a flowchart of another implementation manner of S201 in the embodiment of fig. 3, as shown in fig. 5, S201 "inputs a feature map to the region of interest positioning network to obtain a position of a detection frame of each bleeding region in the feature map", where:
S301, selecting a plurality of candidate areas from the feature map according to a preset sliding window.
Wherein a candidate region is a candidate detection frame. The sliding window selects candidate regions on the feature map, and its attributes, such as the window size and sliding stride, may be determined in advance by the computer device according to the practical application requirements. In this embodiment, once the computer device has obtained the feature map of the image, it slides the preset window across the feature map with the preset stride and thereby determines multiple candidate regions as candidate detection frames.
S302, inputting feature images in a plurality of candidate areas into a window classification network to classify the foreground and the background, and obtaining classification results of the candidate areas.
The window classification network is the window classification network in the embodiment of fig. 4, and the detailed description will be referred to the foregoing description, and the redundant description will not be repeated here. In this embodiment, after the computer device determines a plurality of candidate areas, the feature maps in the plurality of candidate areas may be further input to a pre-trained window classification network to perform foreground and background classification, so as to obtain a classification result of each candidate area.
S303, inputting each classification result into a window scoring network for scoring and quantization to obtain a scoring and quantization value of each classification result.
The window scoring network is the window scoring network in the embodiment of fig. 4, and the detailed description will be referred to the foregoing description, and the redundant description is not repeated here. In this embodiment, after the computer device obtains the classification results of the multiple candidate regions, each classification result may be further input to a pre-trained window scoring network to perform scoring quantization on each candidate region, so as to obtain a scoring quantization value of each classification result.
S304, taking the position of the candidate region corresponding to the scoring quantization value larger than the preset threshold as the position of the detection frame of the bleeding region.
Wherein the preset threshold represents a desired score value, which is determined by the computer device according to the actual requirements. In this embodiment, after the computer device obtains the scoring quantization value of each candidate region, the scoring quantization value of each candidate region may be compared with a preset threshold, specifically, the scoring quantization value greater than the preset threshold is determined, and then the position of the candidate region corresponding to the scoring quantization value greater than the preset threshold is used as the position of the detection frame of the bleeding region.
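The following sketch strings steps S301-S304 together. It is written in 2D for brevity (the embodiment works on 3D feature maps), and the window size, stride, threshold and the two callables standing in for the window classification and window scoring networks are all assumptions:

```python
import torch

def locate_boxes(feature_map, window_cls, window_score,
                 win=7, stride=4, thresh=0.5):
    """Slide a fixed window over the feature map, classify each candidate
    as foreground/background, score the result, and keep candidates whose
    score exceeds the threshold (S301-S304)."""
    _, _, H, W = feature_map.shape
    kept = []
    for y in range(0, H - win + 1, stride):        # S301: candidate regions
        for x in range(0, W - win + 1, stride):
            patch = feature_map[:, :, y:y + win, x:x + win]
            cls = window_cls(patch)                # S302: fg/bg classification
            score = window_score(cls)              # S303: scoring quantization
            if score.item() > thresh:              # S304: threshold test
                kept.append((y, x, win, win))      # detection-frame position
    return kept
```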
Fig. 6 is a flowchart of another implementation of S202 in the embodiment of fig. 3. As shown in fig. 6, S202 "inputting the position of the detection frame of each bleeding area and the feature map into the region of interest acquisition network to obtain the feature map of the detection frame containing each bleeding area" includes:
S401, inputting the position of the detection frame of each bleeding area and the feature map into a region of interest acquisition network to obtain the feature map of the detection frame containing each bleeding area.
When the computer device obtains the positions of the detection frames of the bleeding areas on the feature map and the feature map based on the method described in the foregoing embodiment, the positions of the detection frames of the bleeding areas and the feature map may be input to the region-of-interest obtaining network, so as to extract the feature map of all the detection frames from the feature map, and obtain the feature map including the detection frames of the bleeding areas.
And S402, interpolating the feature map containing the frame of each bleeding area with a bilinear interpolation algorithm to obtain a feature map of preset size containing the detection frame of each bleeding area.
In practical application, the feature maps obtained in step S401 differ in size, which affects their subsequent classification by the classification network to a certain extent. This embodiment therefore unifies them: the feature map containing the frame of each bleeding area is interpolated with a bilinear interpolation algorithm to obtain a feature map of preset size containing the detection frame of each bleeding area. The preset size may be determined in advance according to the practical application requirements, which this embodiment does not limit.
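As a brief illustration of S402, PyTorch's built-in interpolation can resize a cropped feature map to a preset size with bilinear interpolation. The shapes and the 7 × 7 target are assumptions, and for the 3D volumes of this embodiment the trilinear mode would be the analogue:

```python
import torch
import torch.nn.functional as F

# Resize the feature map inside one detection frame to a preset size so
# the classification network always sees inputs of one shape.
crop = torch.randn(1, 256, 11, 5)   # variable-size crop from one box
fixed = F.interpolate(crop, size=(7, 7), mode="bilinear", align_corners=False)
print(fixed.shape)                  # torch.Size([1, 256, 7, 7])
```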
In summary, the present application further provides an analysis network for cerebral hemorrhage, as shown in fig. 7, where the network includes a convolutional neural network, a region of interest positioning network, a region of interest acquisition network, a classification network, and a detection frame regression network. The convolutional neural network is used for extracting characteristics of an input image to be analyzed to obtain a characteristic diagram of the image; the interested region positioning network is used for positioning the detection frames of the bleeding regions in the input feature map to obtain the positions of the detection frames of the bleeding regions; the region of interest acquisition network is used for extracting feature images contained in each detection frame from the input feature images according to the positions of the input detection frames; the classifying network is used for analyzing and classifying the input feature images containing the detection frames to obtain classifying results of the bleeding areas; the detection frame regression network is used for adjusting the detection frames of all bleeding areas in the input feature map containing the detection frames, repositioning the position coordinates of the detection frames, and obtaining accurate positions of all the bleeding areas, namely detection results. The above analysis network for cerebral hemorrhage is applied to the method described in any one of the above embodiments, and the specific content can be seen from the foregoing description, and the detailed description is not repeated here.
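Reading fig. 7 as a data flow, the whole network can be summarized by the sketch below, under the assumption that each stage is a callable with the interface described above:

```python
def analyze(image, backbone, roi_locate, roi_acquire, cls_net, reg_net):
    """Sketch of the Fig. 7 pipeline: feature extraction, detection-frame
    localization, per-frame feature acquisition, then classification and
    detection-frame regression."""
    fmap = backbone(image)                   # feature map of the image
    boxes = roi_locate(fmap)                 # detection-frame positions
    roi_feats = roi_acquire(fmap, boxes)     # per-frame feature maps
    return cls_net(roi_feats), reg_net(roi_feats)  # types, refined positions
```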
In one embodiment, the present application further provides a training network for training the detection frame regression network and the classification network. As shown in fig. 8, the training network comprises: a convolutional neural network, a region of interest extraction network, a region of interest acquisition network, an initial detection frame regression network, an initial classification network, a first optimization module and a second optimization module. The initial detection frame regression network and the initial classification network are the networks to be trained. The convolutional neural network, region of interest extraction network, region of interest acquisition network, initial detection frame regression network and initial classification network correspond to the convolutional neural network, region of interest extraction network, region of interest acquisition network, detection frame regression network and classification network described in the embodiment of fig. 2; for details, please refer to that description, which is not repeated here. The first optimization module optimizes the initial detection frame regression network and may specifically be implemented with the Smooth L1 loss function for detection frame regression. The second optimization module optimizes the initial classification network and may specifically be implemented with the cross entropy loss function.
Based on the training network described in the embodiment of fig. 8, the present application further provides a method for training a detection frame regression network and a classification network, as shown in fig. 9, where the method specifically includes:
S501, training an initial detection frame regression network by adopting the Smooth L1 loss function for detection frame regression to obtain the detection frame regression network.
The present embodiment relates to the process by which the computer device trains the initial detection frame regression network, for which the training network shown in fig. 8 may be used. When the initial detection frame regression network in fig. 8 outputs a detection result (the position of each bleeding area), the result is substituted into the Smooth L1 loss function for detection frame regression to obtain its value, and the parameters of the initial detection frame regression network are adjusted according to that value until the Smooth L1 loss satisfies a preset condition, yielding the trained detection frame regression network used in the embodiment of fig. 2.
S502, training an initial classification network by adopting a cross entropy loss function after obtaining a detection frame regression network, and obtaining the classification network.
The present embodiment relates to a training process in which a computer device trains an initial classification network, in which the present embodiment may train the initial classification network using a training network as shown in fig. 8. When the initial classification network in fig. 8 outputs a classification result, substituting the classification result into the cross entropy loss function to obtain a value of the cross entropy loss function, and then adjusting parameters of the initial classification network according to the value until the value of the cross entropy loss function meets a preset condition, so as to obtain a trained classification network for use in the embodiment of fig. 2.
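A minimal sketch of this two-stage schedule follows, assuming batches of ROI features with matched box targets and type labels are available and standing in for the stopping condition with a fixed epoch count; the data layout and optimizers are illustrative assumptions:

```python
import torch
import torch.nn as nn

smooth_l1 = nn.SmoothL1Loss()
cross_entropy = nn.CrossEntropyLoss()

def train_two_stage(reg_net, cls_net, loader, reg_opt, cls_opt, epochs=10):
    """Stage 1 (S501): fit the box-regression head with Smooth L1;
    stage 2 (S502): then fit the classifier with cross entropy."""
    for _ in range(epochs):                          # stage 1
        for feats, boxes, _ in loader:
            reg_opt.zero_grad()
            loss = smooth_l1(reg_net(feats), boxes)
            loss.backward()
            reg_opt.step()
    for _ in range(epochs):                          # stage 2
        for feats, _, labels in loader:
            cls_opt.zero_grad()
            loss = cross_entropy(cls_net(feats), labels)
            loss.backward()
            cls_opt.step()
```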
In one embodiment, the present application further provides a training network for training the detection frame regression network and the classification network. As shown in fig. 10, the training network comprises: a convolutional neural network, a region of interest extraction network, a region of interest acquisition network, an initial detection frame regression network, an initial classification network and a third optimization module. The initial detection frame regression network and the initial classification network are the networks to be trained; the other networks correspond to those described in the embodiment of fig. 2, and the details are not repeated here. The third optimization module optimizes the initial detection frame regression network and the initial classification network together and may specifically be implemented with a weighted accumulation sum function of the Smooth L1 loss function for detection frame regression and the cross entropy loss function.
Based on the training network described in the embodiment of fig. 10, in practical application there is also a method for training the detection frame regression network and the classification network, which includes: training an initial detection frame regression network and an initial classification network simultaneously with a weighted accumulation sum function of the cross entropy loss function and the Smooth L1 loss function for detection frame regression, to obtain the classification network and the detection frame regression network.
The present embodiment relates to another process by which the computer device trains the networks: the training network shown in fig. 10 trains the initial classification network and the initial detection frame regression network together, with the third optimization module applying the weighted accumulation sum of the cross entropy loss function and the Smooth L1 loss function for detection frame regression. The specific process is as follows: the computer device first forms the weighted accumulation sum of the two losses; then, whenever the initial classification network in fig. 10 outputs a classification result and the initial detection frame regression network outputs a detection result, it substitutes both into the weighted accumulation sum to obtain its value; finally, it adjusts the parameters of the initial classification network and of the initial detection frame regression network according to that value until the weighted accumulation sum satisfies a preset condition, yielding the trained classification network and detection frame regression network used in the embodiment of fig. 2.
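The weighted accumulation sum itself reduces to a few lines; the weights w_cls and w_reg are hyperparameters the patent leaves unspecified, and the function below is an illustrative sketch rather than the patented formulation:

```python
import torch.nn as nn

smooth_l1 = nn.SmoothL1Loss()
cross_entropy = nn.CrossEntropyLoss()

def joint_loss(box_pred, box_target, cls_pred, cls_target,
               w_cls=1.0, w_reg=1.0):
    """Weighted sum of the classification and box-regression losses,
    used to train both heads simultaneously (fig. 10)."""
    return (w_cls * cross_entropy(cls_pred, cls_target)
            + w_reg * smooth_l1(box_pred, box_target))
```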
It should be understood that, although the steps in the flowcharts of figs. 2-6 and 9 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the order of execution is not strictly limited and the steps may be executed in other orders. Moreover, at least some of the steps in figs. 2-6 and 9 may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times, and these sub-steps or stages need not be executed sequentially either.
In one embodiment, as shown in fig. 11, there is provided an analysis apparatus for cerebral hemorrhage, comprising: an acquisition module 11, a feature extraction module 12, a region extraction module 13, and a classification and detection module 14, wherein:
an acquisition module 11 for acquiring an image to be analyzed; the image includes at least one bleeding area;
the feature extraction module 12 is configured to input an image to a convolutional neural network for feature extraction, so as to obtain a feature map of the image;
the region extraction module 13 is configured to input the feature map to a region of interest extraction network to obtain a feature map of a detection frame including each bleeding region;
The classifying and detecting module 14 is configured to input the feature map of the detecting frame including each bleeding area to the classifying network to obtain a classifying result of each bleeding area, and input the feature map of the detecting frame including each bleeding area to the detecting frame regression network to obtain a position of each bleeding area.
In one embodiment, as shown in fig. 12, the region extraction module 13 includes:
the positioning unit 131 is configured to input the feature map to a region of interest positioning network, so as to obtain positions of detection frames of each bleeding region in the feature map;
an obtaining unit 132, configured to input the position and the feature map of the detection frame of each bleeding area to the area of interest obtaining network, and obtain a feature map of the detection frame including each bleeding area.
In one embodiment, the positioning unit 131 is specifically configured to select a plurality of candidate areas from the feature map according to a preset sliding window; inputting the feature images in the multiple candidate areas into a window classification network to classify the foreground and the background, and obtaining classification results of the candidate areas; inputting each classification result into a window scoring network for scoring and quantifying to obtain a scoring quantification value of each classification result; and taking the position of the candidate region corresponding to the scoring quantization value larger than the preset threshold as the position of the detection frame of the bleeding region.
In one embodiment, the acquiring unit 132 is specifically configured to input the position of the detection frame of each bleeding area and the feature map into the region of interest acquisition network to obtain a feature map containing the frame of each bleeding area, and to interpolate that feature map with a bilinear interpolation algorithm to obtain a feature map of preset size containing the detection frame of each bleeding area.
In one embodiment, as shown in fig. 13, there is provided a training device comprising: the initial detection box regresses the training module 21 of the network and the training module 22 of the initial classification network, wherein:
the training module 21 of the initial detection frame regression network is configured to train the initial detection frame regression network with the Smooth L1 loss function for detection frame regression to obtain the detection frame regression network;
the training module 22 of the initial classification network is configured to train the initial classification network with the cross entropy loss function after obtaining the detection frame regression network, to obtain the classification network.
In one embodiment, a training device is further provided, which is configured to train an initial detection frame regression network and an initial classification network simultaneously with a weighted accumulation sum function of the cross entropy loss function and the Smooth L1 loss function for detection frame regression, to obtain the classification network and the detection frame regression network.
For specific limitations of the device for analyzing cerebral hemorrhage, reference may be made to the above limitation of a method for analyzing cerebral hemorrhage, and a detailed description thereof will be omitted. The above-described respective modules in the analysis device for cerebral hemorrhage may be implemented in whole or in part by software, hardware, and a combination thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory storing a computer program, and the processor implementing the following steps when executing the computer program:
acquiring an image to be analyzed; the image includes at least one bleeding area;
inputting the image into a convolutional neural network for feature extraction to obtain a feature map of the image;
inputting the feature map into a region of interest extraction network to obtain a feature map of a detection frame containing each bleeding region;
inputting the feature map of the detection frame containing each bleeding region into a classification network to obtain the classification result of each bleeding region, and inputting the feature map of the detection frame containing each bleeding region into a detection frame regression network to obtain the position of each bleeding region.
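The four steps compose into the following inference sketch; backbone, roi_extractor, cls_net, and reg_net stand for trained instances of the networks named in the steps and are assumptions of this illustration.

```python
import torch

@torch.no_grad()
def analyze_hemorrhage(image, backbone, roi_extractor, cls_net, reg_net):
    feature_map = backbone(image)            # feature extraction from the image
    roi_feats = roi_extractor(feature_map)   # detection-frame features per bleeding region
    cls_results = cls_net(roi_feats)         # classification result of each bleeding region
    positions = reg_net(roi_feats)           # regressed position of each bleeding region
    return cls_results, positions
```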
The computer device provided in the foregoing embodiments has similar implementation principles and technical effects to those of the foregoing method embodiments, and will not be described herein in detail.
In one embodiment, a computer readable storage medium is provided, having a computer program stored thereon which, when executed by a processor, implements the following steps:
acquiring an image to be analyzed; the image includes at least one bleeding area;
inputting the image into a convolutional neural network for feature extraction to obtain a feature map of the image;
inputting the feature map into a region of interest extraction network to obtain a feature map of a detection frame containing each bleeding region;
inputting the feature map of the detection frame containing each bleeding region into a classification network to obtain the classification result of each bleeding region, and inputting the feature map of the detection frame containing each bleeding region into a detection frame regression network to obtain the position of each bleeding region.
The foregoing embodiment provides a computer readable storage medium, which has similar principles and technical effects to those of the foregoing method embodiment, and will not be described herein.
Those skilled in the art will appreciate that all or part of the above-described methods may be implemented by a computer program stored on a non-volatile computer readable storage medium, which, when executed, may include the flows of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. The non-volatile memory may include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. The volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered to be within the scope of this specification.
The above embodiments represent only a few implementations of the present invention, and their description is specific and detailed, but they should not therefore be construed as limiting the scope of the invention. It should be noted that several variations and modifications may be made by those skilled in the art without departing from the concept of the present invention, all of which fall within the protection scope of the present invention. Accordingly, the protection scope of the present invention shall be determined by the appended claims.

Claims (10)

1. A method of analyzing cerebral hemorrhage, the method comprising:
acquiring an image to be analyzed; the image includes at least one bleeding area;
inputting the image into a convolutional neural network for feature extraction to obtain a feature map of the image;
inputting the feature map to a region of interest extraction network to obtain a feature map of a detection frame containing each bleeding region;
inputting the feature map of the detection frame containing each bleeding region into a classification network to obtain a classification result of each bleeding region, and inputting the feature map of the detection frame containing each bleeding region into a detection frame regression network to obtain a position of each bleeding region;
the region of interest extraction network comprises a region of interest positioning network and a region of interest acquisition network, the region of interest positioning network comprises a window classification network and a window scoring network, the feature map is input into the region of interest extraction network to obtain a feature map of a detection frame containing each bleeding region, and the method comprises the following steps:
selecting a plurality of candidate regions from the feature map according to a preset sliding window;
inputting the feature maps within the candidate regions into the window classification network to perform foreground and background classification, obtaining a classification result for each candidate region;
inputting each classification result into the window scoring network for score quantification to obtain a quantified score for each classification result;
taking the position of each candidate region whose quantified score is greater than a preset threshold as the position of the detection frame of a bleeding region;
and inputting the position of the detection frame of each bleeding region and the feature map into the region of interest acquisition network to obtain the feature map of the detection frame containing each bleeding region.
2. The method of claim 1, wherein the image comprises a first bleeding area and a second bleeding area,
inputting the feature map to a region of interest extraction network to obtain a feature map of a detection frame containing each bleeding region, including:
inputting the feature map to a region of interest extraction network to obtain a feature map comprising a first detection frame and a second detection frame; the first detection frame comprises the first bleeding area, and the second detection frame comprises the second bleeding area;
the inputting of the feature map of the detection frame containing each bleeding region into a classification network to obtain the classification result of each bleeding region, and the inputting of the feature map of the detection frame containing each bleeding region into a detection frame regression network to obtain the position of each bleeding region comprises:
inputting the feature map comprising the first detection frame and the second detection frame into the classification network to obtain a classification result comprising the first bleeding area and a classification result comprising the second bleeding area, and inputting the feature map comprising the first detection frame and the second detection frame into the detection frame regression network to obtain the position of the first bleeding area and the position of the second bleeding area.
3. The method of claim 1, wherein inputting the location of the detection frame of each bleeding area and the feature map to a region of interest acquisition network to obtain the feature map of the detection frame including each bleeding area comprises:
inputting the position of the detection frame of each bleeding region and the feature map into the region of interest acquisition network to obtain a feature map containing the box of each bleeding region;
and performing interpolation on the feature map containing the box of each bleeding region using a bilinear interpolation algorithm to obtain a feature map of a preset size containing the detection frame of each bleeding region.
4. The method of claim 1, wherein training the detection box regression network and the classification network comprises:
training an initial detection frame regression network with the Smooth L1 loss function of detection frame regression to obtain the detection frame regression network;
and training an initial classification network by adopting a cross entropy loss function after the detection frame regression network is obtained, so as to obtain the classification network.
5. The method of claim 1, wherein training the detection box regression network and the classification network comprises:
training an initial detection frame regression network and an initial classification network simultaneously using a weighted sum of a cross entropy loss function and the Smooth L1 loss function of detection frame regression, to obtain the classification network and the detection frame regression network.
6. The method of claim 1, wherein the convolutional neural network is a three-dimensional residual network, and the region of interest positioning network is a region proposal network (RPN); see the backbone sketch following the claims.
7. An analysis device for cerebral hemorrhage, the device comprising:
the acquisition module is used for acquiring an image to be analyzed; the image includes at least one bleeding area;
the feature extraction module is used for inputting the image into a convolutional neural network to perform feature extraction to obtain a feature map of the image;
the region extraction module is configured to input the feature map into a region of interest extraction network to obtain the feature map of the detection frame containing each bleeding region;
the classification and detection module is configured to input the feature map of the detection frame containing each bleeding region into a classification network to obtain the classification result of each bleeding region, and to input the feature map of the detection frame containing each bleeding region into a detection frame regression network to obtain the position of each bleeding region;
The region of interest extraction network comprises a region of interest positioning network and a region of interest acquisition network, the region of interest positioning network comprises a window classification network and a window scoring network, and the region extraction module comprises:
the positioning unit is configured to select a plurality of candidate regions from the feature map according to a preset sliding window; input the feature maps within the candidate regions into the window classification network to perform foreground and background classification, obtaining a classification result for each candidate region; input each classification result into the window scoring network for score quantification to obtain a quantified score for each classification result; and take the position of each candidate region whose quantified score is greater than a preset threshold as the position of the detection frame of a bleeding region;
and the acquisition unit is configured to input the position of the detection frame of each bleeding region and the feature map into the region of interest acquisition network to obtain the feature map of the detection frame containing each bleeding region.
8. The apparatus of claim 7, wherein
the acquisition unit is specifically configured to input the position of the detection frame of each bleeding region and the feature map into the region of interest acquisition network to obtain a feature map containing the box of each bleeding region; and to perform interpolation on the feature map containing the box of each bleeding region using a bilinear interpolation algorithm to obtain a feature map of a preset size containing the detection frame of each bleeding region.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any of claims 1 to 6 when the computer program is executed.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 6.
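As noted at claim 6, the backbone is a three-dimensional residual network and the positioning network an RPN. The following is a minimal 3D residual block sketch in PyTorch; the depth, channel width, and normalization choices are illustrative assumptions, not fixed by the claims.

```python
import torch
import torch.nn as nn

class ResBlock3D(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv3d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm3d(channels)
        self.conv2 = nn.Conv3d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm3d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        identity = x
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + identity)   # residual connection

# Example: features = ResBlock3D(32)(torch.randn(1, 32, 16, 64, 64))
```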
CN201910950855.3A 2019-10-08 2019-10-08 Analysis method for cerebral hemorrhage, computer device and storage medium Active CN110738643B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910950855.3A CN110738643B (en) 2019-10-08 2019-10-08 Analysis method for cerebral hemorrhage, computer device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910950855.3A CN110738643B (en) 2019-10-08 2019-10-08 Analysis method for cerebral hemorrhage, computer device and storage medium

Publications (2)

Publication Number Publication Date
CN110738643A (en) 2020-01-31
CN110738643B (en) 2023-07-28

Family

ID=69268529

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910950855.3A Active CN110738643B (en) 2019-10-08 2019-10-08 Analysis method for cerebral hemorrhage, computer device and storage medium

Country Status (1)

Country Link
CN (1) CN110738643B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111402218A (en) * 2020-03-11 2020-07-10 北京深睿博联科技有限责任公司 Cerebral hemorrhage detection method and device
CN111445451B (en) * 2020-03-20 2023-04-25 上海联影智能医疗科技有限公司 Brain image processing method, system, computer device and storage medium
CN111445457B (en) * 2020-03-26 2021-06-22 推想医疗科技股份有限公司 Network model training method and device, network model identification method and device, and electronic equipment
CN111598882B (en) * 2020-05-19 2023-11-24 联想(北京)有限公司 Organ detection method, organ detection device and computer equipment
CN114511908A (en) * 2022-01-27 2022-05-17 北京百度网讯科技有限公司 Face living body detection method and device, electronic equipment and storage medium
CN114972255B (en) * 2022-05-26 2023-05-12 深圳市铱硙医疗科技有限公司 Image detection method and device for cerebral micro-bleeding, computer equipment and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3391284B1 (en) * 2015-12-18 2024-04-17 The Regents of The University of California Interpretation and quantification of emergency features on head computed tomography
CN107274406A (en) * 2017-08-07 2017-10-20 北京深睿博联科技有限责任公司 A kind of method and device of detection sensitizing range
CN107292884B (en) * 2017-08-07 2020-09-29 杭州深睿博联科技有限公司 Method and device for identifying edema and hematoma in MRI (magnetic resonance imaging) image
CN107492099B (en) * 2017-08-28 2021-08-20 京东方科技集团股份有限公司 Medical image analysis method, medical image analysis system, and storage medium
WO2019051271A1 (en) * 2017-09-08 2019-03-14 The General Hospital Corporation Systems and methods for brain hemorrhage classification in medical images using an artificial intelligence network
CN109584209B (en) * 2018-10-29 2023-04-28 深圳先进技术研究院 Vascular wall plaque recognition apparatus, system, method, and storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019051411A1 (en) * 2017-09-08 2019-03-14 The General Hospital Corporation Method and systems for analyzing medical image data using machine learning
CN108376235A (en) * 2018-01-15 2018-08-07 深圳市易成自动驾驶技术有限公司 Image detecting method, device and computer readable storage medium
CN109543662A (en) * 2018-12-28 2019-03-29 广州海昇计算机科技有限公司 Object detection method, system, device and the storage medium proposed based on region
CN110096960A (en) * 2019-04-03 2019-08-06 罗克佳华科技集团股份有限公司 Object detection method and device
CN110189318A (en) * 2019-05-30 2019-08-30 江南大学附属医院(无锡市第四人民医院) Pulmonary nodule detection method and system with semantic feature score

Also Published As

Publication number Publication date
CN110738643A (en) 2020-01-31

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant