CN115187600B - Brain hemorrhage volume calculation method based on neural network - Google Patents
- Publication number
- CN115187600B (application CN202211106689.7A / CN202211106689A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
- A61B6/50—Clinical applications
- A61B6/501—Clinical applications involving diagnosis of head, e.g. neuroimaging, craniography
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
- A61B6/52—Devices using data or image processing specially adapted for radiation diagnosis
- A61B6/5211—Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
- A61B6/5217—Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data extracting a diagnostic or physiological parameter from medical diagnostic data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/187—Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/62—Analysis of geometric attributes of area, perimeter, diameter or volume
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/762—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30016—Brain
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30101—Blood vessel; Artery; Vein; Vascular
Abstract
The invention relates to the technical field of machine learning, and in particular to a neural-network-based method for calculating brain hemorrhage volume, comprising the following steps: obtaining a plurality of brain medical image sequences; establishing a UNet neural network model and pre-training it with unlabeled brain medical image sequences; manually annotating the bleeding regions of brain medical images to obtain sample image sequences; training the UNet neural network model with the sample image sequences; inputting the brain medical image sequence of a patient whose bleeding volume is to be evaluated into the trained UNet neural network model to obtain the bleeding regions; establishing the 3D connected regions of the bleeding regions; and obtaining the bleeding volume from the 3D connected regions. Beneficial technical effects of the invention include: an efficient pre-training mechanism and automatic calculation of a patient's cerebral hemorrhage volume, with high training efficiency and high accuracy in calculating the amount of hemorrhage.
Description
Technical Field
The invention relates to the technical field of machine learning, and in particular to a neural-network-based method for calculating brain hemorrhage volume.
Background
In clinical practice, the decision of whether surgery is needed to treat a cerebral hemorrhage rests mainly on the estimated amount of bleeding. When a patient suffers a cerebral hemorrhage, the first step is a CT scan to locate the bleeding site and estimate the bleeding volume. Locating the bleeding site is a basic skill of neurosurgeons and neurologists, so it is rarely disputed and is not discussed further here; estimates of the bleeding volume, however, often differ. The task is to find the bleeding region in the brain CT images and compute the volume from them. Conventionally, a doctor estimates the amount of bleeding by fitting the bleeding region to an ellipsoid using the multi-field (Tada) formula. The bleeding volume V is calculated as V = a × b × c × π/6, where a is the longest diameter of the hematoma on the slice with the largest hematoma area, b is the longest diameter perpendicular to a on that slice, and c is the number of CT slices on which bleeding appears, with a default slice thickness of 1 cm. This method is convenient but inaccurate: for lesions with uneven image density and irregular shape, naively applying an ellipsoid volume formula easily leads to large errors. A more accurate technique for determining the amount of bleeding is therefore needed.
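The conventional ellipsoid estimate described above can be sketched as follows; this is a minimal illustration, and the function and parameter names are ours rather than anything defined in the patent:

```python
import math

def tada_estimate(a_cm: float, b_cm: float, n_slices: int,
                  slice_thickness_cm: float = 1.0) -> float:
    """Ellipsoid (multi-field / Tada) estimate of hematoma volume, V = a*b*c*pi/6.

    a_cm: longest hematoma diameter on the slice with the largest hematoma area
    b_cm: longest diameter perpendicular to a on the same slice
    n_slices: number of CT slices showing bleeding (c = n_slices * thickness)
    """
    c_cm = n_slices * slice_thickness_cm
    return a_cm * b_cm * c_cm * math.pi / 6
```

For example, a hematoma measuring 6 cm by 4 cm across 5 slices of 1 cm gives 20π ≈ 62.8 cm³; as the text notes, this shortcut can err badly for irregular, unevenly dense lesions.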
For example, Chinese patent CN114299052A, published April 8, 2022, discloses a method, device, apparatus, and medium for determining a bleeding region based on brain images. The method comprises: determining the brain tissue region of each image in a brain CT image sequence; determining the ventricular edge corresponding to each brain tissue region; applying Hough-transform line detection to each ventricular edge to obtain its target line segment; determining a target area from the target line segments, fitting a parabola to the pixels of the target area, and deciding from the fitting result whether a bleeding region exists there. This scheme automatically identifies the bleeding region in brain CT images without manual intervention, with high recognition speed and high accuracy. However, it can only identify the bleeding region; it cannot calculate the patient's bleeding volume.
Disclosure of Invention
The technical problem solved by the invention is the low accuracy of current manual estimates of a patient's cerebral hemorrhage volume. A neural-network-based brain hemorrhage volume calculation method is provided, which automates the calculation of the bleeding volume by means of a neural network model and improves its accuracy.
To solve this problem, the invention adopts the following technical scheme. A neural-network-based brain hemorrhage volume calculation method comprises the following steps:
obtaining a plurality of brain medical image sequences;
establishing a UNet neural network model, and pre-training the UNet neural network model using unlabeled brain medical image sequences;
continuing to train the UNet neural network model using the sample image sequence, i.e. brain medical images whose bleeding regions have been manually annotated;
inputting the brain medical image sequence of a patient whose bleeding volume is to be evaluated into the trained UNet neural network model to obtain the bleeding region of each medical image;
establishing the 3D connected regions of the bleeding regions;
obtaining the bleeding volume from the 3D connected regions.
Preferably, the method for pre-training the UNet neural network model using unlabeled brain medical images comprises the following steps:
masking a partial region of any slice of brain medical image in the unlabeled brain medical image sequence;
having the UNet neural network model attempt to restore the masked region, with the loss function being the difference between the restored image of the region and the original region image.
Preferably, the method for masking a partial region of the brain medical image comprises:
setting the length and width of the mask region to fixed values, and setting a pixel-value threshold interval;
adjusting the position of the mask region so that the number of masked pixels whose values fall within the threshold interval is maximized, or at least meets a preset lower bound.
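A brute-force sketch of this mask-placement step, assuming the images are 2D arrays and the threshold interval picks out tissue-like intensities (all names here are illustrative, not from the patent):

```python
import numpy as np

def place_mask(image: np.ndarray, h: int, w: int,
               lo: float, hi: float, stride: int = 1):
    """Slide an h-by-w mask window over the image and return the position
    covering the most pixels whose values lie in [lo, hi], plus that count."""
    in_range = ((image >= lo) & (image <= hi)).astype(np.int64)
    best_count, best_pos = -1, (0, 0)
    for r in range(0, image.shape[0] - h + 1, stride):
        for c in range(0, image.shape[1] - w + 1, stride):
            count = int(in_range[r:r + h, c:c + w].sum())
            if count > best_count:
                best_count, best_pos = count, (r, c)
    return best_pos, best_count
```

In practice one would stop early once the count meets the preset lower bound mentioned above, and an integral image would make the window sums O(1).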
Preferably, the method for continuing to train the UNet neural network model using the sample image sequence comprises the following steps:
dividing each medical image in the sample image sequence into regions in turn, each region being treated as a masked region;
restoring the image of each region with the UNet neural network model, and comparing the restored image of each region with the corresponding original region of the sample image;
if the difference exceeds a preset threshold, judging that a bleeding region exists there, and taking the region whose difference exceeds the threshold as the bleeding region;
setting the loss function of the UNet neural network model to the difference between the identified bleeding region and the manually annotated bleeding region;
finishing training of the UNet neural network model when the loss function falls below a preset threshold or training reaches a preset number of iterations.
Preferably, the difference between the restored image and the original region image is the sum of the pixel-value differences between the two, computed after aligning them at their top-left pixels.
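A sketch of this per-region difference, assuming grayscale arrays and interpreting "sum of differences" as a sum of absolute pixel differences (the patent does not specify absolute versus signed differences):

```python
import numpy as np

def region_difference(restored: np.ndarray, original: np.ndarray) -> float:
    """Align both regions at the top-left pixel (crop to the common extent)
    and sum the absolute per-pixel differences."""
    h = min(restored.shape[0], original.shape[0])
    w = min(restored.shape[1], original.shape[1])
    a = restored[:h, :w].astype(np.float64)
    b = original[:h, :w].astype(np.float64)
    return float(np.abs(a - b).sum())
```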
Preferably, when training of the UNet neural network model continues on the sample image sequence, the weight parameters of the left (encoder) half of the model are locked during the first n rounds of training, and adjustment of those weight parameters is allowed only after the n rounds.
Preferably, the 3D connected regions of the bleeding region are established using a 3D growing algorithm, which comprises:
starting from a bleeding-region pixel of any slice of brain medical image, searching for bleeding-region pixels among the 8 surrounding pixels in the same slice and the 2 pixels at the same position in the adjacent slices above and below;
in turn taking the 8 surrounding pixels and the 2 vertically adjacent pixels of each newly found bleeding-region pixel as the search range, and continuing the search;
all the bleeding-region pixels found in this way form one 3D connected region of the bleeding region.
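The growing step above amounts to a flood fill over a 10-voxel neighbourhood (8 in-plane neighbours plus the two vertically adjacent voxels). A breadth-first sketch, assuming the per-slice bleeding predictions are stacked into a boolean (slice, row, col) mask; the names are illustrative:

```python
from collections import deque
import numpy as np

# 8 in-plane neighbours plus the same (row, col) in the slices above and below
NEIGHBOURS = [(0, dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
              if (dr, dc) != (0, 0)] + [(-1, 0, 0), (1, 0, 0)]

def grow_region(mask: np.ndarray, seed):
    """Return the set of voxels in the 3D connected region containing seed."""
    region, queue = {seed}, deque([seed])
    while queue:
        z, r, c = queue.popleft()
        for dz, dr, dc in NEIGHBOURS:
            v = (z + dz, r + dr, c + dc)
            if (0 <= v[0] < mask.shape[0] and 0 <= v[1] < mask.shape[1]
                    and 0 <= v[2] < mask.shape[2]
                    and mask[v] and v not in region):
                region.add(v)
                queue.append(v)
    return region
```

Repeating this from every not-yet-visited bleeding pixel partitions the volume into the 3D connected regions used below.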
Preferably, the method for obtaining the bleeding volume from the 3D connected regions comprises:
calculating the bleeding volume V_i of each 3D connected region, where i is the index of the region;
traversing all pairwise combinations of 3D connected regions in turn, the two regions of a pair being denoted V_1 and V_2;
calculating the external-volume intersection ratio V_iou of the two 3D connected regions;
if V_iou exceeds a preset threshold, judging that the two 3D connected regions belong to the same cluster and merging them into a single 3D connected region;
after all pairwise combinations have been traversed, the clustering of the 3D connected regions is complete;
calculating the bleeding volume V_j of each clustered 3D connected region; the sum of all V_j is the final bleeding volume.
Preferably, the bleeding volume V_i of a 3D connected region is calculated as V_i = sum(P_V_i) × SliceThickness × PixelSpacing², where sum(P_V_i) is the total number of pixels in the connected region, SliceThickness is the inter-slice distance, and PixelSpacing is the inter-pixel distance.
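This voxel-counting formula is straightforward to express in code; the units follow the DICOM tags SliceThickness and PixelSpacing (typically millimetres), so the result here is in mm³:

```python
def region_volume(n_voxels: int, slice_thickness: float,
                  pixel_spacing: float) -> float:
    """V_i = (number of voxels in the region) * SliceThickness * PixelSpacing^2."""
    return n_voxels * slice_thickness * pixel_spacing ** 2
```

For instance, 1000 voxels at 5 mm slice thickness and 0.5 mm pixel spacing give 1250 mm³, i.e. 1.25 mL.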
Preferably, the external-volume intersection ratio V_iou is calculated as follows:
V_outter = (xmax − xmin) × (ymax − ymin) × (zmax − zmin),
V_iou = (V_1 ∪ V_2) / V_outter,
where xmax, xmin, ymax, ymin, zmax, and zmin are the maximum and minimum x-, y-, and z-axis coordinates over the pixels contained in the two 3D connected regions, V_outter is the volume of the smallest enclosing box containing both regions, and (V_1 ∪ V_2) is the volume of the union of the two 3D connected regions.
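A sketch of this ratio on voxel sets, with one deliberate deviation flagged as our assumption: we add 1 to each axis extent so the box is measured in whole voxels and a degenerate (single-slice or single-voxel) pair cannot divide by zero, whereas the formula above writes (max − min) directly:

```python
def external_volume_ratio(region_a, region_b) -> float:
    """Union volume of two voxel sets divided by the volume of the smallest
    axis-aligned box enclosing both (all measured in voxels)."""
    union = set(region_a) | set(region_b)
    zs, rs, cs = zip(*union)
    v_outer = ((max(zs) - min(zs) + 1) * (max(rs) - min(rs) + 1)
               * (max(cs) - min(cs) + 1))  # +1 per axis: box in whole voxels
    return len(union) / v_outer
```

Two regions that jointly fill their bounding box score 1.0; widely separated regions score near 0 and are left unmerged.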
Beneficial technical effects of the invention include: an efficient pre-training mechanism that uses a large amount of unlabeled data and the structural characteristics of the U-shaped network to train the neural network model's ability to extract and reconstruct brain image features in advance, combined with a 3D-connected-region merging procedure to compute the patient's cerebral hemorrhage volume, achieving high training efficiency and high accuracy in calculating the amount of hemorrhage.
Other features and advantages of the present invention will be disclosed in more detail in the following detailed description of the invention and the accompanying drawings.
Drawings
The invention is further described with reference to the accompanying drawings:
fig. 1 is a flow chart of a method for calculating a volume of cerebral hemorrhage according to an embodiment of the present invention.
Fig. 2 is a schematic flow chart of a pre-training method according to an embodiment of the present invention.
Fig. 3 is a schematic flow chart of a method for blocking a partial region of a brain medical image according to an embodiment of the present invention.
Fig. 4 is a schematic diagram of a pre-training process according to an embodiment of the present invention.
Fig. 5 is a schematic flow chart of a method for training a UNet neural network model according to an embodiment of the present invention.
Fig. 6 is a flowchart of the method for establishing the 3D connected region of a bleeding region according to an embodiment of the present invention.
Fig. 7 is a flowchart of the procedure for obtaining the bleeding volume from the 3D connected regions according to an embodiment of the present invention.
Fig. 8 is a schematic diagram of a 3D connected region according to an embodiment of the present invention.
In the figures: 1. brain medical image; 2. masked region; 3. restored image; 4. 3D connected region.
Detailed Description
The technical solutions of the embodiments of the invention are explained and illustrated below with reference to the accompanying drawings. The embodiments described are only preferred embodiments of the invention, not all of them; other embodiments obtained by persons skilled in the art, based on these embodiments and without creative effort, fall within the protection scope of the invention.
In the following description, terms indicating orientation or positional relationship, such as "inner", "outer", "upper", "lower", "left", and "right", are used only for convenience and simplicity of description; they do not indicate or imply that the device or element referred to must have a particular orientation or be constructed and operated in a particular orientation, and they are not to be construed as limiting the invention.
A brain hemorrhage volume calculation method based on neural network, please refer to fig. 1, comprising the following steps:
step A01) obtaining a plurality of brain medical image 1 sequences;
step A02) establishing a UNet neural network model, and pre-training the UNet neural network model using unlabeled brain medical image 1 sequences;
step A03) continuing to train the UNet neural network model using the sample image sequence;
step A04) inputting a brain medical image 1 sequence of a patient with a bleeding volume to be evaluated into the trained UNet neural network model to obtain a bleeding area of each medical image;
step A05) establishing a 3D connected area 4 of the bleeding area;
step A06) obtaining the bleeding volume from the 3D connected region 4.
A sequence of brain medical images 1 of a patient is acquired, i.e. a plurality of brain medical images 1 of the patient's brain, typically obtained by CT scan, with adjacent images typically 1 cm apart. This yields a set of brain medical images 1 of the patient at preset intervals. The unlabeled brain medical image 1 sequences used in step A02) are unannotated images without bleeding; the UNet neural network model is first pre-trained on them so that it can restore occluded brain medical images. The sample images used in step A03) are formed by annotating the bleeding regions in brain medical images of cerebral-hemorrhage patients. When part of a patient's brain medical image is occluded by blood, the UNet neural network model identifies the occluded region, which is the bleeding region of the brain medical image 1. Combining the bleeding regions of the brain medical images 1 yields the 3D connected region 4, whose volume is the bleeding volume.
Referring to fig. 2, the method for pre-training the UNet neural network model with unlabeled brain medical images 1 comprises:
step B01) masking a partial region of any slice of brain medical image 1 in the unlabeled brain medical image 1 sequence;
step B02) having the UNet neural network model attempt to restore the masked region 2, with the loss function being the difference between the restored image 3 of the region and the original region image.
Referring to fig. 3, the method for masking a partial region of a brain medical image 1 comprises:
step C01) setting the length and width of the mask region 2 to fixed values, and setting a pixel-value threshold interval;
step C02) adjusting the position of the mask region 2 so that the number of pixels covered by the mask region 2 whose values fall within the threshold interval is maximized, or at least meets a preset lower bound.
Fig. 4 is a schematic diagram of the pre-training process of this embodiment. After a region of the complete brain medical image is covered by the mask, the UNet neural network model is trained to restore the covered region, producing a restored image 3. Pre-training of the UNet neural network model finishes once a preset accuracy is reached. The pre-training process uses no labeled samples, so sample sources are plentiful.
Referring to fig. 5, the method for continuing to train the UNet neural network model using the sample image sequence comprises:
step D01) dividing each medical image in the sample image sequence into regions in turn, each region being treated as a masked region 2;
step D02) restoring the image of each region with the UNet neural network model, and comparing the restored image 3 of each region with the corresponding original region of the sample image;
step D03) if the difference exceeds a preset threshold, judging that a bleeding region exists there, and taking the region whose difference exceeds the threshold as the bleeding region;
step D04) setting the loss function of the UNet neural network model to the difference between the identified bleeding region and the manually annotated bleeding region;
step D05) finishing training of the UNet neural network model when the loss function falls below a preset threshold or training reaches a preset number of iterations.
The difference between the restored image 3 and the original region image is the sum of the pixel-value differences between the two, computed after aligning them at their top-left pixels.
When training of the UNet neural network model continues on the sample image sequence, the weight parameters of the left (encoder) half of the model are locked during the first n rounds of training, and adjustment of those weight parameters is allowed only after the n rounds.
Referring to fig. 6, the 3D connected region 4 of the bleeding region is established using a 3D growing algorithm, which comprises:
step E01) starting from a bleeding-region pixel of any slice of brain medical image 1, searching for bleeding-region pixels among the 8 surrounding pixels in the same slice and the 2 pixels at the same position in the adjacent slices of brain medical image 1 above and below;
step E02) in turn taking the 8 surrounding pixels and the 2 vertically adjacent pixels of each newly found bleeding-region pixel as the search range, and continuing the search;
step E03) all the bleeding-region pixels found in this way form the 3D connected region 4 of the bleeding region.
Referring to fig. 7, the method for obtaining the bleeding volume from the 3D connected regions 4 comprises:
step F01) calculating the bleeding volume V_i of each 3D connected region 4, where i is the index of the region;
step F02) traversing all pairwise combinations of 3D connected regions 4 in turn, the two regions of a pair being denoted V_1 and V_2;
step F03) calculating the external-volume intersection ratio V_iou of the two 3D connected regions 4;
step F04) if V_iou exceeds a preset threshold, judging that the two 3D connected regions 4 belong to the same cluster and merging them into a single 3D connected region 4;
step F05) after all pairwise combinations of 3D connected regions 4 have been traversed, the clustering of the 3D connected regions 4 is complete;
step F06) calculating the bleeding volume V_j of each clustered 3D connected region 4; the sum of all V_j is the final bleeding volume.
Fig. 8 shows a 3D connected region 4 reconstructed in this embodiment. Two 3D connected regions 4 are shown; the sum of their volumes is the total bleeding volume.
The bleeding volume V_i of a 3D connected region 4 is calculated as V_i = sum(P_V_i) × SliceThickness × PixelSpacing², where sum(P_V_i) is the total number of pixels in the connected region, SliceThickness is the inter-slice distance, and PixelSpacing is the inter-pixel distance.
The external-volume intersection ratio V_iou is calculated as V_outter = (xmax − xmin) × (ymax − ymin) × (zmax − zmin) and V_iou = (V_1 ∪ V_2) / V_outter, where xmax, xmin, ymax, ymin, zmax, and zmin are the maximum and minimum x-, y-, and z-axis coordinates over the pixels contained in the two 3D connected regions 4, V_outter is the volume of the smallest enclosing box containing both regions, and (V_1 ∪ V_2) is the volume of the union of the two 3D connected regions 4.
Beneficial technical effects of this embodiment include: an efficient pre-training mechanism that uses a large amount of unlabeled data and the structural characteristics of the U-shaped network to train the neural network model's ability to extract and reconstruct brain image features in advance, combined with the 3D connected region 4 merging procedure to compute the patient's cerebral hemorrhage volume, achieving high training efficiency and high accuracy in calculating the amount of hemorrhage.
While the present invention has been described with reference to the preferred embodiments, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. Any modification which does not depart from the functional and structural principles of the present invention is intended to be included within the scope of the claims.
Claims (7)
1. A brain hemorrhage volume calculation method based on a neural network is characterized in that,
the method comprises the following steps:
obtaining a plurality of brain medical image sequences;
establishing a UNet neural network model, and pre-training the UNet neural network model by using an unlabeled brain medical image sequence;
continuing to train the UNet neural network model by using the sample image sequence;
inputting the brain medical image sequence of the patient with the bleeding volume to be evaluated into the trained UNet neural network model to obtain the bleeding area of each medical image;
establishing a 3D connected region of the bleeding region;
obtaining a bleeding volume from the 3D connected region; the method for pre-training the UNet neural network model by using the unlabeled brain medical images comprises the following steps:
shielding a partial region of any layer of brain medical image in the unlabeled brain medical image sequence;
the UNet neural network model attempts to restore the shielded area, the loss function being the difference between the restored image of the area and the original area image;
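The occlusion step and its reconstruction loss can be sketched as follows (a hedged NumPy illustration; `mask_patch` and `restoration_loss` are hypothetical helper names of ours, and mean absolute difference stands in for whatever difference measure an implementation chooses):

```python
import numpy as np

def mask_patch(image, y, x, h, w, fill=0.0):
    """Occlude a rectangular region of one slice.
    Returns the masked copy and the original patch for the loss."""
    masked = image.copy()
    patch = image[y:y+h, x:x+w].copy()
    masked[y:y+h, x:x+w] = fill
    return masked, patch

def restoration_loss(restored, original_patch, y, x):
    """Pre-training loss: difference between the restored region and the
    original region (here the mean absolute per-pixel difference)."""
    h, w = original_patch.shape
    return float(np.abs(restored[y:y+h, x:x+w] - original_patch).mean())
```

A perfect restoration (the restored slice equals the original) yields a loss of zero.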
the method for continuing to train the UNet neural network model by using the sample image sequence comprises the following steps:
sequentially dividing each medical image in the sample image sequence into areas, wherein each area is regarded as a shielded area;
restoring the image of each region by using a UNet neural network model, and comparing the restored image of the region with the corresponding original region image in the sample image;
if the difference exceeds a preset threshold value, judging that a bleeding area exists in the area, and taking the area with the difference exceeding the preset threshold value as the bleeding area;
setting a UNet neural network model loss function as the difference between the identified bleeding area and the artificially marked bleeding area;
when the loss function is smaller than a preset threshold value or the training reaches a preset number of times, finishing the training of the UNet neural network model;
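The region-wise detection loop of the preceding steps might look like this (a sketch under the assumption that the trained UNet is available as a callable `restore_fn`; the tiling, the zero-fill, and the L1 sum are illustrative choices of ours):

```python
import numpy as np

def detect_bleeding_regions(image, restore_fn, region_size, threshold):
    """Tile the slice into regions, restore each as if it were occluded, and
    flag regions whose restoration error exceeds `threshold` as bleeding.

    restore_fn(masked_image) -> restored image; stands in for the trained UNet.
    Returns a boolean mask of flagged regions at full image resolution.
    """
    h, w = image.shape
    flags = np.zeros((h, w), dtype=bool)
    for y in range(0, h, region_size):
        for x in range(0, w, region_size):
            masked = image.copy()
            masked[y:y+region_size, x:x+region_size] = 0.0  # treat as shielded
            restored = restore_fn(masked)
            diff = np.abs(restored[y:y+region_size, x:x+region_size]
                          - image[y:y+region_size, x:x+region_size]).sum()
            if diff > threshold:  # model cannot reproduce it: bleeding candidate
                flags[y:y+region_size, x:x+region_size] = True
    return flags
```

The intuition from the claim: the model, pre-trained on normal anatomy, restores healthy regions well, so a large restoration error marks an anomalous (bleeding) area.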
the method for obtaining a bleeding volume according to a 3D connected region comprises the following steps:
calculating the bleeding volume V_i of each 3D connected region, wherein i represents the serial number of the 3D connected region;
sequentially traversing pairwise combinations of the 3D connected regions, the two 3D connected regions being denoted V_1 and V_2, respectively;
calculating the external volume intersection ratio V_iou of the two 3D connected regions;
if the external volume intersection ratio V_iou is larger than a preset threshold value, judging that the two 3D connected regions belong to the same cluster, and merging the two 3D connected regions into a single 3D connected region;
traversing pairwise combinations of all the 3D connected regions, thereby completing the clustering of all the 3D connected regions;
and calculating the bleeding volume V_j of each clustered 3D connected region, the sum of all V_j being the final bleeding volume.
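The clustering procedure can be sketched as a greedy pairwise merge (illustrative; `iou_fn` stands in for the external volume intersection ratio, and repeating until no pair merges is our reading of "traversing pairwise combinations of all the 3D connected regions"):

```python
import numpy as np

def cluster_regions(masks, iou_fn, threshold):
    """Merge any two regions whose ratio iou_fn(a, b) exceeds `threshold`
    (voxel-wise OR of their boolean masks); repeat until no pair merges."""
    masks = [m.copy() for m in masks]
    merged = True
    while merged:
        merged = False
        for i in range(len(masks)):
            for j in range(i + 1, len(masks)):
                if iou_fn(masks[i], masks[j]) > threshold:
                    masks[i] |= masks[j]   # fuse the two connected regions
                    del masks[j]
                    merged = True
                    break
            if merged:
                break                      # restart traversal after a merge
    return masks
```

The final bleeding volume is then the sum of the per-region volumes computed over the returned masks.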
2. The neural network-based brain hemorrhage volume calculation method according to claim 1,
the method for shielding partial area of brain medical image comprises the following steps:
setting the length and the width of a shielding area as a fixed value, and setting a pixel value threshold interval;
and adjusting the position of the shielding area so that the number of shielded pixels whose values fall within the pixel value threshold interval is maximized or meets a preset lower limit.
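Claim 2's mask placement can be illustrated with an exhaustive sliding-window search (a sketch; the stride parameter and returning the in-band pixel count are our additions, and the interval [lo, hi] would correspond to, e.g., an intensity range typical of blood):

```python
import numpy as np

def place_mask(image, h, w, lo, hi, stride=1):
    """Slide a fixed h×w shielding window over the slice and return the
    top-left corner covering the most pixels with values in [lo, hi]."""
    in_band = (image >= lo) & (image <= hi)
    best, best_pos = -1, (0, 0)
    for y in range(0, image.shape[0] - h + 1, stride):
        for x in range(0, image.shape[1] - w + 1, stride):
            count = int(in_band[y:y+h, x:x+w].sum())
            if count > best:
                best, best_pos = count, (y, x)
    return best_pos, best
```

A 2D summed-area table would make this search O(1) per position, but the brute-force form matches the claim most directly.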
3. The neural network-based brain hemorrhage volume calculation method according to claim 1 or 2,
the difference between the restored image and the original region image is the sum of the per-pixel value differences after the restored image and the original region image are aligned at their upper-left pixels.
4. The neural network-based brain hemorrhage volume calculation method according to claim 1 or 2,
and when the UNet neural network model continues to be trained by using the sample image sequence, locking the weight parameters of the left half of the UNet neural network model during the first n training iterations, and allowing the weight parameters of the left half of the UNet neural network model to be adjusted after the n training iterations.
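Claim 4's staged unlocking can be illustrated framework-free (a toy gradient step over named parameters; in a real framework this would instead toggle the encoder layers' trainability flags, e.g. `requires_grad` in PyTorch — the names below are ours):

```python
def frozen_set(step, n):
    """Encoder ('left half') weights stay locked while step < n, free after."""
    return {"encoder.w"} if step < n else set()

def sgd_step(params, grads, lr, frozen_names):
    """One gradient step that skips frozen parameters."""
    for name, value in params.items():
        if name not in frozen_names:
            params[name] = value - lr * grads[name]
    return params
```

During the first n iterations only the decoder (right half) adapts to the labeled hemorrhage data, preserving the feature extractor learned in pre-training.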
5. The neural network-based brain hemorrhage volume calculation method according to claim 1 or 2,
establishing a 3D connected region of the hemorrhage region using a 3D growth algorithm, the 3D growth algorithm comprising:
starting from a pixel of a bleeding area of any layer of brain medical image, searching pixels of the bleeding area by taking 8 pixels around the pixel and 2 pixels of corresponding pixel positions of upper and lower adjacent layers of brain medical images as search ranges;
sequentially taking 8 pixels around the searched bleeding area pixels and 2 pixels at corresponding pixel positions of the brain medical images of the upper and lower adjacent layers as search ranges, and continuing searching;
all the searched bleeding area pixels form a 3D connected area of the bleeding area.
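The 3D growth of claim 5 amounts to a breadth-first search over the 8 in-plane neighbours plus the two inter-slice neighbours; a sketch (the boolean bleeding mask input and seed selection are assumptions of ours):

```python
import numpy as np
from collections import deque

def grow_region_3d(bleed_mask, seed):
    """Grow a 3D connected region from one bleeding pixel.

    bleed_mask: boolean (slices, rows, cols) of per-slice bleeding pixels.
    seed: (z, y, x) with bleed_mask[seed] True.
    """
    visited = np.zeros_like(bleed_mask, dtype=bool)
    queue = deque([seed])
    visited[seed] = True
    nz, ny, nx = bleed_mask.shape
    while queue:
        z, y, x = queue.popleft()
        # 8 in-plane neighbours plus the same position one slice up and down.
        neighbours = [(z, y + dy, x + dx)
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                      if (dy, dx) != (0, 0)] + [(z - 1, y, x), (z + 1, y, x)]
        for (z2, y2, x2) in neighbours:
            if (0 <= z2 < nz and 0 <= y2 < ny and 0 <= x2 < nx
                    and bleed_mask[z2, y2, x2] and not visited[z2, y2, x2]):
                visited[z2, y2, x2] = True
                queue.append((z2, y2, x2))
    return visited  # the 3D connected region containing the seed
```

Running this from every unvisited bleeding pixel enumerates all 3D connected regions of the bleeding area.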
6. The neural network-based brain hemorrhage volume calculation method according to claim 1,
the bleeding volume V_i of the 3D connected region is calculated by: V_i = sum(P_V_i) * SliceThickness * PixelSpacing^2, where sum(P_V_i) represents the total number of pixels in the connected region, SliceThickness is the inter-layer distance, and PixelSpacing is the inter-pixel distance.
7. The neural network-based brain hemorrhage volume calculation method according to claim 1,
the calculation method of the external volume intersection ratio V _ iou comprises the following steps:
V_outter=(xmax-xmin)*(ymax-ymin)*(zmax-zmin),
V_iou = (V_1 ∪ V_2) / V_outter,
wherein xmax, xmin, ymax, ymin, zmax, and zmin respectively represent the maximum and minimum x-, y-, and z-axis coordinates of the pixels included in the two 3D connected regions, V_outter represents the volume of the smallest peripheral volume block enclosing the two volume blocks, and V_1 ∪ V_2 represents the volume obtained by merging the two 3D connected regions.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211106689.7A CN115187600B (en) | 2022-09-13 | 2022-09-13 | Brain hemorrhage volume calculation method based on neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115187600A CN115187600A (en) | 2022-10-14 |
CN115187600B true CN115187600B (en) | 2022-12-09 |
Family
ID=83524575
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||