CN114708240A - Flat scan CT-based automated ASPECTS scoring method, computer device, readable storage medium, and program product - Google Patents


Publication number
CN114708240A
CN114708240A (application CN202210406433.1A)
Authority
CN
China
Prior art keywords
image
key frames
boundary
scoring
model
Prior art date
Legal status
Pending
Application number
CN202210406433.1A
Other languages
Chinese (zh)
Inventor
刘凯政
鲁伟
冷晓畅
向建平
Current Assignee
Arteryflow Technology Co ltd
Original Assignee
Arteryflow Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Arteryflow Technology Co ltd filed Critical Arteryflow Technology Co ltd
Priority to CN202210406433.1A priority Critical patent/CN114708240A/en
Publication of CN114708240A publication Critical patent/CN114708240A/en
Pending legal-status Critical Current

Classifications

    • G06T 7/0012: Biomedical image inspection
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods
    • G06T 7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 2207/10081: Computed x-ray tomography [CT]
    • G06T 2207/20081: Training; Learning
    • G06T 2207/30016: Brain


Abstract

The application relates to a flat-scan CT-based automated ASPECTS scoring method, a computer device, a readable storage medium, and a program product. The method comprises: obtaining a flat-scan CT image sequence and screening it to obtain several key frames, including the boundary frame at which the lateral ventricles join; locating the falx cerebri on the key frames, where it appears as a dividing line; performing image registration on the key frames on which the dividing line has been located to obtain each pair of partitions used for ASPECTS scoring; training a deep learning model with training data to obtain several scoring models, where for any one scoring model the training data comprises one pair of partitions from the key frames; and performing ASPECTS scoring on the key frames to be scored with the scoring models to obtain the ASPECTS result. Using each pair of partitions lying on opposite sides of the dividing line as training data for the scoring model improves the model's accuracy.

Description

Flat scan CT-based automated ASPECTS scoring method, computer device, readable storage medium, and program product
Technical Field
The present application relates to the field of medical engineering, and in particular, to an automated ASPECTS scoring method based on flat-scan CT, a computer device, a readable storage medium, and a program product.
Background
Stroke, also known as cerebrovascular accident or apoplexy, is a serious disease that threatens public health and hinders socioeconomic development. Acute ischemic stroke is the most common stroke type, accounting for about 60-80% of all strokes; it is an acute cerebrovascular disease caused by insufficient blood supply to local brain tissue and carries an extremely high risk of death.
Flat-scan CT, i.e. non-contrast CT (NCCT), is the most common imaging technique for diagnosing brain lesions; it is fast, convenient to perform, and relatively inexpensive. The ASPECTS (Alberta Stroke Program Early CT Score) computed on NCCT is an important basis for diagnosing and treating ischemic stroke.
The scoring method divides the middle cerebral artery territory, by importance of blood supply, into 10 regions on the brain NCCT images of an acute stroke patient: at the basal ganglia level, the caudate head (C), lentiform nucleus (L), internal capsule (IC), insular ribbon (I), M1 (anterior MCA cortex), M2 (MCA cortex lateral to the insular ribbon), and M3 (posterior MCA cortex); and at the supraganglionic level, M4, M5, and M6 (the MCA cortices immediately superior to M1, M2, and M3, respectively). The 10 regions are weighted equally at 1 point each, for a total of 10 points. The number of regions showing early ischemic change is subtracted from the total, and the resulting value is the score used as a basis for assessing and treating the condition.
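The scoring arithmetic above can be stated directly in code. Below is a minimal sketch (the function name and input format are ours, not the application's): start from the full 10 points and subtract one for each distinct region showing early ischemic change.

```python
def aspects_score(ischemic_regions):
    """Return 10 minus the number of distinct regions with early
    ischemic change, per the ASPECTS definition above."""
    regions = {"C", "L", "IC", "I", "M1", "M2", "M3", "M4", "M5", "M6"}
    abnormal = set(ischemic_regions)
    unknown = abnormal - regions
    if unknown:
        raise ValueError("unknown region(s): %s" % sorted(unknown))
    return 10 - len(abnormal)

print(aspects_score(["M1", "M2", "I"]))  # 7: three regions affected
```

A score of 10 means no visible early ischemic change; lower scores indicate more extensive involvement.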
In current clinical practice, ASPECTS scoring relies mainly on clinicians reading the images and judging each region manually. On the one hand, manual reading lacks stability because of differences in imaging equipment, differences in patient condition, and reader subjectivity. On the other hand, manual reading is time-consuming, while ischemic stroke is extremely time-critical: every second counts in diagnosing and treating patients before the condition deteriorates rapidly. Scoring patients rapidly, accurately, and stably therefore has significant clinical value.
Disclosure of Invention
In view of the above, there is a need for an automated flat-scan CT-based ASPECTS scoring method.
The automated flat-scan CT-based ASPECTS scoring method comprises the following steps:
obtaining a flat-scan CT image sequence and screening it to obtain several key frames, including the boundary frame at which the lateral ventricles join, where the key frames are obtained by: selecting sample images from the flat-scan CT image sequence, performing two-class deep learning on the sample images with a deep learning model to obtain a trained screening model, and using the trained screening model to screen the image sequence and output the several key frames;
locating the falx cerebri on the several key frames, where the falx cerebri appears as a dividing line;
performing image registration on the key frames on which the dividing line has been located to obtain each pair of partitions used for ASPECTS scoring, and applying circumscribed-rectangle cropping and image interpolation to each pair of partitions;
training a deep learning model with training data to obtain several scoring models, where for any one scoring model the training data comprises one pair of partitions from the several key frames; for any one scoring model, the pair consists of a first-side partition and a second-side partition lying on opposite sides of the dividing line, and the training data includes the mirror image of the first-side partition about the dividing line and/or the mirror image of the second-side partition about the dividing line;
and performing ASPECTS scoring on the several key frames to be scored with the scoring models to obtain the ASPECTS scoring result.
Optionally, obtaining the flat-scan CT-based image sequence includes the following specific steps:
acquiring a flat-scan CT image sequence and deriving an upright image sequence from it;
computing a circumscribed rectangle from the upright image sequence and cropping it to obtain a cropped image sequence;
selecting sample images from the cropped image sequence and performing two-class deep learning on them with a deep learning model to obtain a trained screening model;
and classifying the cropped image sequence with the trained screening model and outputting the several key frames of the image sequence, which include the boundary frame at which the lateral ventricles join.
Optionally, locating the falx cerebri on the several key frames specifically includes:
cropping an ellipse from the several key frames to obtain several elliptical key frames, and locating the falx cerebri on the elliptical key frames.
Optionally, locating the falx cerebri on the elliptical key frames specifically includes: binarizing the elliptical key frames to obtain corresponding binary images and adjusting the HU threshold to a first threshold, so that the dividing line stands out clearly against the binary image as a whole; this dividing line is the located falx cerebri.
Optionally, when performing image registration on the key frames on which the dividing line has been located, the method further includes: rotating the elliptical key frames according to the dividing line until the dividing line is vertical, completing fine tilt correction.
Optionally, the two classes are a first class of images belonging to the supraganglionic level and a second class of images not belonging to the supraganglionic level.
Optionally, for any one scoring model the training data comprises a first data set and a second data set. The first data set comprises a first contrast image and the second-side partition; the second data set comprises a second contrast image and the first-side partition.
The first contrast image is obtained by registering the mirror image of the first-side partition about the dividing line to the second-side partition and taking their difference;
the second contrast image is obtained by registering the mirror image of the second-side partition about the dividing line to the first-side partition and taking their difference.
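The mirror-and-difference construction of the contrast images can be sketched as follows. This is a simplified illustration with an assumed function name; it presumes the two partitions are already registered and of equal size, whereas the described method performs an image registration step before taking the difference.

```python
import numpy as np

def contrast_image(side_a, side_b):
    """Mirror one side's partition across the midline and subtract it
    from the opposite side, so left-right asymmetries (possible early
    ischemic change) stand out in the difference image."""
    mirrored = np.fliplr(side_a)
    # signed difference; a symmetric brain yields values near zero
    return side_b.astype(np.int16) - mirrored.astype(np.int16)

left = np.array([[30, 32], [31, 33]], dtype=np.int16)
right = np.fliplr(left)                   # perfectly symmetric counterpart
print(contrast_image(left, right).sum())  # 0 for perfect symmetry
```

In a healthy, symmetric brain the difference is near zero everywhere; hypodense ischemic tissue on one side shows up as a consistent signed deviation.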
The present application further provides a computer device comprising a memory, a processor and a computer program stored on the memory, wherein the processor executes the computer program to implement the steps of the automated flat-scan CT-based ASPECTS scoring method described herein.
The present application also provides a computer readable storage medium having stored thereon a computer program that, when executed by a processor, performs the steps of the flat-scan CT-based automated ASPECTS scoring method described herein.
The present application further provides a computer program product comprising computer instructions which, when executed by a processor, implement the steps of the automated flat-scan CT-based ASPECTS scoring method described herein.
The automated flat-scan CT-based ASPECTS scoring method has at least the following effects:
the dividing line is located on the several key frames, image registration is performed after locating it, and each pair of partitions on opposite sides of the dividing line then serves as training data for a scoring model. Because the image information on the two sides of the dividing line assists the judgment of cerebral infarction, training data comprising each pair of partitions improves the accuracy of the scoring model and better suits cases in which the brain shape is asymmetric.
Drawings
FIG. 1 is a schematic flow chart illustrating an automated flat-scan CT-based ASPECTS scoring method according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a circumscribed rectangle of an erected image sequence according to an embodiment of the present application;
FIG. 3 is a schematic view of the image of FIG. 2 cropped to 70% of its length and width;
FIG. 4 is a schematic view of an image sequence based on flat-scan CT according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a residual error structure of the ResNet50 model according to an embodiment of the present application;
FIG. 6 is a diagram illustrating an embodiment of a method for converting an image sequence captured by a screening model into an array;
FIG. 7 is a diagram illustrating key frames before an ellipse is truncated according to an embodiment of the present application;
FIG. 8 is a diagram illustrating an elliptical keyframe after an ellipse is cropped according to an embodiment of the present application;
FIG. 9 is the binarized image of FIG. 7;
FIG. 10 is the binarized image of FIG. 8;
FIG. 11 is a schematic diagram of a fine tilt correction process according to an embodiment of the present application;
FIG. 12 is a schematic diagram illustrating a pair of partitions occupying the entire picture in accordance with an embodiment of the present application;
FIG. 13 is the partition on one side of the dividing line after the ROI is cropped and resized to a uniform size in an embodiment of the present application;
FIG. 14 is the partition on the other side of the dividing line after the ROI is cropped and resized to a uniform size in an embodiment of the present application;
FIG. 15 is a schematic illustration of one of the comparison images included in the training data in an embodiment of the present application;
FIG. 16 is a schematic structural diagram of a scoring model according to an embodiment of the present application;
FIG. 17 is a schematic flowchart of scoring model training in an embodiment of the present application;
FIG. 18 is a schematic flow chart illustrating scoring model prediction according to an embodiment of the present application;
fig. 19 is an internal structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
In current clinical application, the ASPECTS scoring method depends mainly on clinicians reading the images manually to judge and evaluate each region. On the one hand, manual reading lacks stability because of differences in imaging equipment, differences in patient condition, and reader subjectivity. On the other hand, manual reading is time-consuming, so scoring patients rapidly, accurately, and stably has important clinical significance.
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
To solve the above technical problem, referring to fig. 1, the present application provides an automated flat-scan CT-based ASPECTS scoring method comprising the following steps S100 to S500, wherein:
Step S100: obtain a flat-scan CT image sequence and screen it to obtain several key frames, including the boundary frame at which the lateral ventricles join. In one embodiment, step S100 includes steps S110 through S140, wherein:
Step S110: obtain a flat-scan CT image sequence and derive an upright image sequence from it.
This step applies, in order, skull stripping, translation, and coarse tilt correction to the flat-scan CT image sequence. Specifically: (1) Skull stripping: the skull is removed by binarization and extraction of the largest connected component, for example with open-source image processing libraries such as OpenCV and skimage. (2) Translation: the image centroid is computed and the intracranial image is translated to the image center. (3) Coarse tilt correction: with the centroid as origin, the image is rotated in 1° steps between -80° and +80°; for each angle, the number of pixels that coincide with their horizontal mirror image and the length of the image's longest vertical axis are computed and given weights α and β respectively; the rotation required for alignment is the angle at which the weighted sum of the two quantities is largest, and each image is rotated to that angle to obtain the upright image sequence. The weight α may be, for example, 1, and β may be, for example, 75.
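Translation step (2) can be sketched in a few lines of NumPy. The function name is ours; the input is assumed to be an already skull-stripped slice with background 0, and pixels pushed out of bounds by the shift are simply dropped.

```python
import numpy as np

def center_brain(img):
    """Translate a skull-stripped slice so that the centroid of its
    nonzero (intracranial) pixels lands at the image centre."""
    ys, xs = np.nonzero(img)
    cy, cx = ys.mean(), xs.mean()          # centre of mass
    h, w = img.shape
    dy, dx = int(round(h / 2 - cy)), int(round(w / 2 - cx))
    out = np.zeros_like(img)
    ys2, xs2 = ys + dy, xs + dx
    keep = (ys2 >= 0) & (ys2 < h) & (xs2 >= 0) & (xs2 < w)
    out[ys2[keep], xs2[keep]] = img[ys[keep], xs[keep]]
    return out
```

Shifting only the nonzero coordinates (rather than rolling the whole array) keeps the background exactly zero after the move.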
Step S120: compute a circumscribed rectangle from the upright image sequence and crop it, obtaining a cropped image sequence.
Referring to fig. 2 and 3, the cropping excludes the regions outside the skull and places the centroid of the cropped image sequence at the center of each image. Because skull shape can differ greatly between individuals, to eliminate its influence on the automatic slice-selection algorithm the rectangle is trimmed to 70% of its length and 70% of its width. That is, the crop is given by the following formula:
x1' = x1 + 0.15 × (x2 - x1)
x2' = x2 - 0.15 × (x2 - x1)
y1' = y1 + 0.15 × (y2 - y1)
y2' = y2 - 0.15 × (y2 - y1)
where:
x1' is the minimum x-coordinate in the cropped image;
x2' is the maximum x-coordinate in the cropped image;
x1 is the minimum x-coordinate in the original image;
x2 is the maximum x-coordinate in the original image;
y1' is the minimum y-coordinate in the cropped image;
y2' is the maximum y-coordinate in the cropped image;
y1 is the minimum y-coordinate in the original image;
y2 is the maximum y-coordinate in the original image.
It is understood that the cropped image contains no regions outside the skull, and that the HU values (Hounsfield units) at the coordinates x1, x2, y1, and y2 in the image sequence are nonzero.
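The formula above amounts to trimming 15% off each side of the brain's bounding box so that 70% of its length and width remain. A NumPy sketch (function name ours; input assumed skull-stripped with background 0):

```python
import numpy as np

def crop_70(img):
    """Crop the bounding box of the nonzero pixels, trimmed by 15% on
    each side so the kept window is 70% of the box in each dimension."""
    ys, xs = np.nonzero(img)
    x1, x2 = xs.min(), xs.max()
    y1, y2 = ys.min(), ys.max()
    x1p = int(x1 + 0.15 * (x2 - x1))   # x1'
    x2p = int(x2 - 0.15 * (x2 - x1))   # x2'
    y1p = int(y1 + 0.15 * (y2 - y1))   # y1'
    y2p = int(y2 - 0.15 * (y2 - y1))   # y2'
    return img[y1p:y2p + 1, x1p:x2p + 1]
```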
In one embodiment, automated screening of the key frames is accomplished by steps S130 and S140.
Step S130: select sample images from the cropped image sequence and perform two-class deep learning on them with a deep learning model to obtain a trained screening model.
As conventional knowledge of intracranial imaging indicates, the basal ganglia level and the supraganglionic level lie close together. In this embodiment, whether the lateral ventricles are joined from top to bottom serves as the boundary between the basal ganglia level and the supraganglionic level (represented in the CT image sequence as a boundary frame), and the sample images include at least a boundary frame at which the lateral ventricles join.
Referring to fig. 4, when the key frames selected by this method are used for ASPECTS scoring, three images each can be taken for the basal ganglia level and the supraganglionic level, six key frames in total. The first image in the sequence at which the lateral ventricles join (the boundary frame, sequence number 14) serves as the first supraganglionic image; the next two adjacent images (sequence numbers 15 and 16) serve as the second and third supraganglionic images; and the three consecutive images immediately before the boundary frame serve as the three basal-ganglia images (sequence numbers 11, 12, and 13). In this embodiment, the sample images comprise images of the basal ganglia level and of the supraganglionic level, specifically the six images numbered 11-16 in fig. 4.
In step S130, a ResNet50 model with pretrained weights may be used in the training phase: through transfer learning, the weight parameters of a ResNet50 pretrained on the ImageNet data set are applied to this deep learning task. Training uses a cross-entropy loss function, the Adam optimizer, a batch size of 64, a learning rate of 2e-4, and a decay strategy that reduces the learning rate by 10% whenever 100 consecutive batches bring no improvement. After 50 epochs, the best model accuracy reaches 99.95% on the training set and 92.7% on the validation set. For the meaning of the training parameters involved, reference may be made to the prior art.
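The learning-rate policy ("reduce by 10% when 100 batches bring no improvement") is the one nonstandard piece of this recipe, and its rule can be mirrored in a few lines of plain Python. The class name is ours; the actual training would of course drive a pretrained ResNet50 with cross-entropy loss and Adam as stated above.

```python
class PlateauDecay:
    """Multiply the learning rate by 0.9 (i.e. reduce it 10%) whenever
    `patience` consecutive batches fail to improve the best loss."""
    def __init__(self, lr=2e-4, patience=100, factor=0.9):
        self.lr, self.patience, self.factor = lr, patience, factor
        self.best = float("inf")
        self.stale = 0

    def step(self, batch_loss):
        if batch_loss < self.best:
            self.best, self.stale = batch_loss, 0
        else:
            self.stale += 1
            if self.stale >= self.patience:
                self.lr *= self.factor
                self.stale = 0
        return self.lr
```

A similar effect can be had with PyTorch's built-in ReduceLROnPlateau scheduler; the stub above only makes the stated rule concrete.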
In step S130, the two classes of the two-class deep learning are a first class of images belonging to the supraganglionic level and a second class of images not belonging to the supraganglionic level.
Specifically, because the three basal-ganglia images and the three supraganglionic images are adjacent in the flat-scan CT sequence, and the joined lateral ventricles are a distinctive feature of the supraganglionic level, the deep learning task need not be set up as a three-way classification (basal-ganglia key frames, supraganglionic key frames, and remaining frames); it can be set up directly as a two-way classification (supraganglionic key frames versus all remaining frames).
Step S140: classify the cropped image sequence with the trained screening model and output the several key frames of the image sequence, which include the boundary frame at which the lateral ventricles join.
Step S140 specifically includes steps S141 to S142, in which:
Step S141, referring to fig. 6: the trained screening model converts the cropped image sequence into an array comprising first-class images marked with a first symbol and second-class images marked with a second symbol.
Specifically, the cropped image sequence obtained in step S120 is input to the trained screening model, which outputs a 1000-dimensional vector; this vector passes through a ReLU activation function and then a fully connected layer to yield a two-dimensional vector, and whether the frame belongs to the supraganglionic level is decided by comparing the two components of that vector.
It is understood that because step S130 uses two-class deep learning, the output of step S140 corresponds to the two classes described above: the first class belongs to the supraganglionic level and the second class does not. In this embodiment the first and second classes are labeled "1" and "0" respectively; the trained screening model predicts on each frame of the cropped image sequence and outputs one of these two labels, and the labels together form an array of n elements, n being the number of frames the model evaluated.
Step S142: determine the region in which the first symbol occurs the most times consecutively as the target region; the trained screening model takes the first frame of the target region as the boundary frame, and the several key frames are obtained from the boundary frame and the image frames adjacent to it.
In step S142, the output of the trained screening model contains a small number of false detections, e.g. a second symbol "0" mistakenly labeled as a first symbol "1" (a false positive). In this embodiment, the target region is the segment of the array with the most consecutive occurrences of "1", and taking it as the image region of the supraganglionic level eliminates the influence of a few false detections. The first frame of the target region is then determined to be the boundary frame between the basal ganglia level and the supraganglionic level. The key frames comprise basal-ganglia key frames and supraganglionic key frames, specifically the boundary frame and the image frames adjacent to it before and after; there may be, for example, three frames of each.
For the screening process of the trained model, refer to fig. 4: the model identifies the boundary frame as the image with sequence number 14, and from it determines the key frames as the three supraganglionic images (sequence numbers 14, 15, and 16) and the three basal-ganglia images (sequence numbers 11, 12, and 13).
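The longest-run rule of step S142 can be sketched in plain Python. The function name and return format are ours; the sketch assumes the run of "1"s and the frames preceding it are long enough to supply three frames each.

```python
def select_keyframes(labels, n_layers=3):
    """Find the longest consecutive run of 1s in the screening model's
    per-frame labels. Its first frame is the boundary frame; return the
    indices of the basal-ganglia frames just before it and the
    supraganglionic frames starting at it."""
    best_start, best_len, i = -1, 0, 0
    while i < len(labels):
        if labels[i] == 1:
            j = i
            while j < len(labels) and labels[j] == 1:
                j += 1
            if j - i > best_len:
                best_start, best_len = i, j - i
            i = j
        else:
            i += 1
    b = best_start  # boundary frame index
    return list(range(b - n_layers, b)), list(range(b, b + n_layers))

# one false positive at index 2; the true supraganglionic run is 14-16
labels = [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0]
print(select_keyframes(labels))  # ([11, 12, 13], [14, 15, 16])
```

Taking the longest run, rather than the first "1", is what makes the rule robust to isolated false positives.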
In steps S110 to S140, the circumscribed rectangle of the brain image with its length and width reduced by 30% is used as the ROI input to the model. This reduces the interference of head-shape features and improves model accuracy, and it also shrinks the input dimensions of the original image, accelerating model training and prediction. A two-class rather than three-class model selects the basal ganglia and supraganglionic levels for the ASPECTS scoring task: given the slice-selection goal, only the boundary frame and the frames adjacent to it need to be found, which improves slice-selection accuracy.
Steps S110 to S140 of the present application thus achieve automatic slice selection with high accuracy and speed, eliminate the annotator's subjective interference, and make automatic patient scoring convenient and accurate. Avoiding slice-selection errors can greatly speed up patient diagnosis and has high clinical application value.
Step S200: locate the falx cerebri on the several key frames, where it appears as a dividing line.
Because the brain contour differs from case to case, a fine tilt correction method is needed to keep image tilt from affecting the subsequent image registration and infarct prediction; the embodiments refer to this as "fine tilt correction". On intracranial flat-scan CT, the HU values of the falx cerebri are significantly higher than those of brain tissue, and it appears as a line running from the front of the brain to the back (the dividing line between the left and right hemispheres shown in fig. 7). This line is clearer at the supraganglionic level, and locating the falx cerebri amounts to locating the dividing line on the image sequence. The details are given in substeps S210 to S230 of step S200.
Step S210: crop an ellipse from the several key frames to obtain several elliptical key frames, and locate the falx cerebri on the elliptical key frames, where it appears as the dividing line.
Referring to fig. 7 and 8, the manner of obtaining several elliptical key frames includes: the brain image centroids of several key frames and the brain image lengths (pixel values) in the horizontal and vertical directions passing through the centroids are calculated. From these values, an elliptical template is drawn with its major and minor axes being 2/3 respectively the above-mentioned vertical and horizontal direction length values.
All pixels of the elliptical key frames outside the elliptical template are discarded; for example, the HU values of those pixels may all be set to 0, so that high-HU pixels outside the brain (such as the skull) interfere as little as possible with finding the boundary.
Step S220, binarization is performed on the several elliptical key frames to obtain corresponding binary images, with the binarization HU threshold adjusted to a first threshold so that the boundary is displayed clearly relative to the binary image as a whole.
Referring to fig. 7 to fig. 10, in this step the several elliptical key frames are binarized and the binarization HU (Hounsfield unit) threshold is determined. Specifically, the candidate threshold is swept from 20 to 50 HU in steps of 1 to obtain the first threshold. "Displayed clearly" may be understood as meaning that the number of pixels in the elliptical template exceeding the first threshold does not exceed 1.5% of the total number of pixels.
Specifically, as soon as the number of pixels above the candidate HU threshold falls to 1.5% of the total number of pixels or less, the traversal stops, that HU value is used as the first threshold for binarization, and a dilation operation is then applied. Referring to fig. 9 and 10, fig. 9 is the binarized image of fig. 7, and fig. 10 is the dilated image of fig. 9; the boundary line is displayed more clearly in fig. 10 than in fig. 9.
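The threshold sweep can be sketched as follows, assuming the 20-50 HU range and 1.5% stopping fraction stated in the embodiment; the function names are illustrative.

```python
import numpy as np

def find_first_threshold(ellipse_img, template_mask, lo=20, hi=50, frac=0.015):
    """Sweep the binarization threshold from 20 to 50 HU in steps of 1 and stop
    as soon as the pixels above the threshold drop to at most 1.5% of the
    pixels in the elliptical template (the 'clear display' criterion)."""
    total = template_mask.sum()
    for t in range(lo, hi + 1):
        above = np.count_nonzero(template_mask & (ellipse_img > t))
        if above <= frac * total:
            return t
    return hi                                     # fall back to the top of the range

def binarize(ellipse_img, threshold):
    """Binary image used for locating the brain-sickle boundary line."""
    return (ellipse_img > threshold).astype(np.uint8)
```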
Step S230, the several elliptical key frames are rotated according to the boundary so that the boundary becomes vertical, thereby completing the fine tilt correction.
When an image is displayed in its default orientation it has a horizontal direction; "vertical" means perpendicular to that horizontal direction. Referring to fig. 10 and 11, on the binarized black-and-white image a line is drawn through the centroid in the vertical direction (i.e., the vertical direction in fig. 11), with a length equal to the major axis of the ellipse template; the line is then rotated about the centroid, as origin, through angles from -70° to +70° in steps of 1°, as shown in fig. 11. The number of white pixels covered at each angle is recorded; the angle with the largest white-pixel count is the tilt angle of the brain image, and rotating back by that angle completes the fine tilt correction. In this step the superior-ganglion-layer key frame may be used for the correction, since its boundary line is clearer. The fine tilt correction in this step improves the convenience of the image registration in step S300.
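The angle search can be sketched as below, a minimal NumPy sketch under the stated -70° to +70°, 1°-step scheme; the sampling density along the line is an assumption.

```python
import numpy as np

def estimate_tilt_angle(binary_img, centroid, line_len):
    """Rotate a centroid-centred line (length = ellipse major axis) from -70 deg
    to +70 deg in 1 deg steps and count the white pixels it covers; the angle
    with the largest count is the tilt of the boundary line (step S230)."""
    cy, cx = centroid
    h, w = binary_img.shape
    ts = np.linspace(-line_len / 2.0, line_len / 2.0, int(line_len))
    best_angle, best_count = 0, -1
    for deg in range(-70, 71):
        rad = np.deg2rad(deg)
        ys = np.round(cy + ts * np.cos(rad)).astype(int)   # line starts vertical
        xs = np.round(cx + ts * np.sin(rad)).astype(int)
        ok = (ys >= 0) & (ys < h) & (xs >= 0) & (xs < w)
        count = int(binary_img[ys[ok], xs[ok]].sum())
        if count > best_count:
            best_angle, best_count = deg, count
    return best_angle        # rotate by -best_angle to make the boundary vertical
```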
Steps S210 to S230 exploit the difference in HU value between the brain sickle and the surrounding tissue in intracranial plain CT to perform tilt correction, which is more robust than prior-art algorithms and better suited to cases with asymmetric brain shapes.
Step S300, image registration is performed on the several key frames with the located boundary, obtaining each pair of partitions used for ASPECTS scoring.
A pair of partitions constitutes one scoring area for the ASPECTS score. Because the shapes of the ten scoring regions differ greatly, and their infarct appearance and noise characteristics are likewise inconsistent, a separate model must be trained for each partition. In this step, prepared ASPECTS partition templates, ten partitions for each of the left and right brain, are registered onto the six key frames of the case's flat-scan CT sequence using a nonlinear image registration algorithm with affine and B-spline transformations.
And S400, training the deep learning model by using the training data to obtain a plurality of scoring models, wherein for any one scoring model, the training data correspondingly comprises one pair of partitions of a plurality of key frames.
Referring to fig. 12 to 16, in step S400 "the training data correspondingly comprises" means that each scoring model is trained with one pair of partitions as one set of training data. Because image information from the contralateral side assists the judgment of cerebral infarction, the model input additionally includes the mirror image of the contralateral partition relative to the boundary.
As shown in fig. 12, each pair of partitions is extremely small compared with the whole image and therefore cannot be used directly as input to the deep learning model; every pair of partitions additionally undergoes circumscribed-rectangle (bounding-box) processing and image interpolation. Before training begins, the image region of interest (ROI) is cropped and resized.
Further, for any of the scoring models, the pair of partitions includes a first side partition and a second side partition on opposite sides with respect to the boundary, and the training data includes a mirror image of the first side partition with respect to the boundary and/or a mirror image of the second side partition with respect to the boundary.
In one embodiment, for any one scoring model, the training data comprises a first set of data and a second set of data; the first set of data includes a mirror image of the first side partition relative to the boundary, and a second side partition; the second set of data includes a mirror image of the second side partition relative to the boundary, and the first side partition.
In one embodiment, referring to fig. 15 and 16, for any scoring model the training data comprises a first set of data and a second set of data; the first set comprises a first contrast image and the second side partition; the second set comprises a second contrast image and the first side partition. The first contrast image is obtained by image registration and difference processing of the mirror image of the first side partition (relative to the boundary) with the second side partition; the second contrast image is obtained by image registration and difference processing of the mirror image of the second side partition (relative to the boundary) with the first side partition.
Registering the mirror image of one side partition toward the shape of the opposite partition increases the amount of information fed to the model while reducing noise input, improving the reliability of the scoring model.
And S500, carrying out ASPECTS scoring on a plurality of key frames to be scored by using a scoring model to obtain an ASPECTS scoring result.
Referring to fig. 17 and 18: in the training stage, the deep learning model is trained with the training data, and the scoring model is obtained once training is complete. In the prediction stage, the registered key frames are predicted and infarction is judged partition by partition, yielding the scores of the ten scoring areas as the scoring result. The data processed at prediction time has the same form as the data used in training, and the prediction likewise operates on the registered, partitioned key frames, so the details are not repeated here.
In steps S200 to S500: first, six images are obtained for image registration, three of the basal ganglia layer and three of the superior ganglion layer. Ten pairs of partitions are registered and output from these six images, 60 partition images in total (ten partitions, left and right sides, three frames each); each image contains only the specific partition region of the original image, with gray values of 0 elsewhere. Of the ten pairs of partitions used for scoring, seven lie on the three basal-ganglia-layer images and three lie on the three superior-ganglion-layer images.
Next, the three images of each side (left brain and right brain) are stacked into two 3D images, and for each 3D image the ROI is cropped in the width and height dimensions using the circumscribed-rectangle method described above. Because the cropped image sizes are not uniform, the images are resized uniformly to (3, 128, 128) by image interpolation, for example bicubic interpolation over a 4 × 4 pixel neighborhood. The results of ROI cropping and uniform resizing are shown in fig. 13 and 14. Cropping the minimal circumscribed rectangle of the three-dimensional image region (i.e., the 3D image) and resizing to a common size solves the problem that the ASPECTS partition images input to the convolutional neural network are inconsistent and irregular in shape.
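The bounding-box crop and uniform resize can be sketched with SciPy; this is a sketch in which `ndimage.zoom` with `order=3` stands in for the bicubic interpolation mentioned above, and the function name is illustrative.

```python
import numpy as np
from scipy import ndimage

def crop_and_resize(volume, out_shape=(3, 128, 128)):
    """Crop the minimal circumscribed rectangle of the non-zero partition region
    in the width/height dimensions of a 3-frame 3D image, then resample it to a
    fixed (3, 128, 128) size with cubic interpolation."""
    flat = volume.sum(axis=0)                     # project the 3 frames together
    ys, xs = np.nonzero(flat)
    roi = volume[:, ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    zoom = (1.0,                                  # keep the 3-frame depth as-is
            out_shape[1] / roi.shape[1],
            out_shape[2] / roi.shape[2])
    return ndimage.zoom(roi, zoom, order=3)       # order=3 ~ bicubic
```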
Second, contrast images of the two sides of the boundary are acquired. The model input for a scoring area on one side includes not only the processed image of that side of the area but also a contrast image of the difference between that side and the opposite side. As shown in fig. 15, because the two sides of the same partition in the same case are not necessarily similar in shape, the mirror image of the contralateral image about the boundary is nonlinearly transformed by image registration so that its shape is as close as possible to that of the ipsilateral image. The contrast image is then obtained by taking the difference between the ipsilateral image and the registered mirror image of the contralateral image.
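The mirroring-and-differencing step can be sketched as follows. Note the hedge in the docstring: the patent's nonlinear registration of the mirrored image precedes the subtraction and is omitted in this sketch; function names are illustrative.

```python
import numpy as np

def mirror_across_boundary(opposite_img):
    """Mirror the contralateral partition across the (already vertical) boundary."""
    return opposite_img[:, ::-1]

def contrast_image(side_img, mirrored_opposite):
    """Difference between one side of a partition and the mirror image of the
    opposite side (fig. 15). The patent additionally registers the mirrored
    image onto the ipsilateral side with a nonlinear transform before
    subtracting; that registration step is omitted in this sketch."""
    return side_img.astype(np.int16) - mirrored_opposite.astype(np.int16)
```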
Finally, the scoring model is designed. The one-side image and the contrast image form two groups of input, each fed into its own 3D CNN module (the modules must contain BN structures), producing two groups of feature maps. The two groups of feature maps are concatenated along the channel dimension, passed through another CNN module, and then through a global pooling function, a fully connected layer, and a Sigmoid function, outputting a judgment of whether the region contains an ischemic stroke lesion. The model diagram is shown in fig. 16.
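The two-branch design can be sketched in PyTorch. Only the overall topology, two BN-equipped 3D-CNN branches, channel concatenation, a fusion block, global pooling, a fully connected layer and a Sigmoid, follows fig. 16; the channel counts, kernel sizes, and class name are assumptions.

```python
import torch
import torch.nn as nn

class TwoBranchScorer(nn.Module):
    """Sketch of the fig. 16 topology: two 3D-CNN branches with BN (one for the
    side image, one for the contrast image), channel-wise concatenation, a
    fusion CNN block, global pooling, a fully connected layer and a Sigmoid.
    Channel counts and kernel sizes here are assumptions."""
    def __init__(self, ch=8):
        super().__init__()
        def block(cin, cout):
            return nn.Sequential(
                nn.Conv3d(cin, cout, kernel_size=3, padding=1),
                nn.BatchNorm3d(cout),            # the BN structure the text requires
                nn.ReLU(inplace=True))
        self.branch_side = block(1, ch)
        self.branch_contrast = block(1, ch)
        self.fuse = block(2 * ch, 2 * ch)
        self.pool = nn.AdaptiveAvgPool3d(1)      # global pooling
        self.fc = nn.Linear(2 * ch, 1)

    def forward(self, side, contrast):           # each: (B, 1, 3, 128, 128)
        feats = torch.cat([self.branch_side(side),
                           self.branch_contrast(contrast)], dim=1)
        feats = self.pool(self.fuse(feats)).flatten(1)
        return torch.sigmoid(self.fc(feats))     # probability of an ischemic lesion
```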
In the training stage, contrast images can be obtained for both the left-brain and right-brain sides of a given region of a case. Both sides can be used as data, so the amount of training data per area is twice the number of training-set cases.
In the prediction stage, each case is passed through the ten trained models to predict the infarct areas of the left and right brain, and the ASPECTS scores of both sides are output.
It should be understood that, although the steps in the flowchart of fig. 1 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, the order of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least a portion of the steps in fig. 1 may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different times; the order of execution of these sub-steps or stages is not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least a portion of the sub-steps or stages of other steps.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 19. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement an automated ASPECTS scoring method based on flat scan CT. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
step S100, obtaining an image sequence based on flat scanning CT, screening the image sequence, and obtaining a plurality of key frames, wherein the plurality of key frames comprise boundary frames connected with lateral ventricles;
step S200, positioning the brain sickle on a plurality of key frames, wherein the brain sickle is represented as a boundary on the plurality of key frames;
step S300, carrying out image registration on a plurality of key frames of the positioning boundary to obtain each pair of subareas for ASPECTS scoring;
step S400, training the deep learning model by using training data to obtain a plurality of scoring models, wherein for any one scoring model, the training data correspondingly comprises one pair of subareas of a plurality of key frames;
and S500, carrying out ASPECTS scoring on a plurality of key frames to be scored by using a scoring model to obtain an ASPECTS scoring result.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
step S100, obtaining an image sequence based on flat scanning CT, screening the image sequence, and obtaining a plurality of key frames, wherein the plurality of key frames comprise boundary frames connected with lateral ventricles;
step S200, positioning the brain sickle on a plurality of key frames, wherein the brain sickle is represented as a boundary on the plurality of key frames;
step S300, carrying out image registration on a plurality of key frames of the positioning boundary to obtain each pair of subareas for ASPECTS scoring;
step S400, training the deep learning model by using training data to obtain a plurality of scoring models, wherein for any one scoring model, the training data correspondingly comprises one pair of subareas of a plurality of key frames;
and S500, performing ASPECTS scoring on a plurality of key frames to be scored by using a scoring model to obtain an ASPECTS scoring result.
In one embodiment, a computer program product is provided comprising computer instructions which, when executed by a processor, perform the steps of:
step S100, obtaining an image sequence based on flat scanning CT, screening the image sequence, and obtaining a plurality of key frames, wherein the plurality of key frames comprise boundary frames connected with lateral ventricles;
step S200, positioning the brain sickle on a plurality of key frames, wherein the brain sickle is represented as a boundary on the plurality of key frames;
step S300, carrying out image registration on a plurality of key frames of the positioning boundary to obtain each pair of subareas for ASPECTS scoring;
step S400, training the deep learning model by using training data to obtain a plurality of scoring models, wherein for any one scoring model, the training data correspondingly comprises one pair of subareas of a plurality of key frames;
and S500, carrying out ASPECTS scoring on a plurality of key frames to be scored by using a scoring model to obtain an ASPECTS scoring result.
In this embodiment, the computer program product includes program code portions for performing the steps of the automated flat-scan CT-based ASPECTS scoring method in the embodiments of the present application when the computer program product is executed by one or more computing devices. The computer program product may be stored on a computer-readable recording medium. The computer program product may also be provided for downloading via a data network, e.g. via a RAN, via the internet and/or via an RBS. Alternatively or additionally, the method may be encoded in a Field Programmable Gate Array (FPGA) and/or an Application Specific Integrated Circuit (ASIC), or the functionality may be provided for downloading by means of a hardware description language.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware instructions of a computer program, which can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory, among others. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDRSDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), direct bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features. When technical features in different embodiments are represented in the same drawing, it can be seen that the drawing also discloses a combination of the embodiments concerned.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (10)

1. Automatic ASPECTS scoring method based on flat scanning CT is characterized by comprising the following steps:
the method comprises the following steps of obtaining an image sequence based on flat scanning CT, screening the image sequence, and obtaining a plurality of key frames, wherein the key frames comprise boundary frames connected with lateral ventricles, and the method for obtaining the key frames comprises the following steps: selecting a sample image based on an image sequence of flat scanning CT, carrying out two-class deep learning on the sample image by using a deep learning model to obtain a trained screening model, and screening a plurality of key frames of an output image sequence by using the trained screening model;
locating a brain sickle on the plurality of key frames, the brain sickle appearing as a boundary on the plurality of key frames;
carrying out image registration on the plurality of key frames positioned on the boundary to obtain each pair of partitions for ASPECTS scoring, and carrying out circumscribed rectangle processing and image interpolation processing on any pair of partitions;
training a deep learning model by utilizing training data to obtain a plurality of scoring models, wherein for any one scoring model, the training data correspondingly comprises a pair of partitions of the plurality of key frames, for any one scoring model, the pair of partitions comprises a first side partition and a second side partition which are positioned at two opposite sides relative to the boundary, and the training data comprises a mirror image of the first side partition relative to the boundary and/or a mirror image of the second side partition relative to the boundary;
and carrying out ASPECTS scoring on a plurality of key frames to be scored by utilizing the scoring model to obtain an ASPECTS scoring result.
2. The automated ASPECTS scoring method of claim 1, wherein obtaining a plurality of key frames specifically includes:
acquiring an image sequence based on flat-scan CT, and acquiring an upright image sequence according to the image sequence based on flat-scan CT;
sequentially calculating a circumscribed rectangle and intercepting the circumscribed rectangle according to the upright image sequence to obtain an intercepted image sequence;
selecting a sample image from the intercepted image sequence, and performing two-class deep learning on the sample image by using a deep learning model to obtain a trained screening model;
and judging the intercepted image sequence by using the trained screening model, and outputting a plurality of key frames of the image sequence, wherein the plurality of key frames comprise boundary frames connected with the lateral ventricles.
3. The automated ASPECTS scoring method of claim 1, wherein locating a brain sickle on the plurality of keyframes includes:
and intercepting an ellipse based on the plurality of key frames, obtaining a plurality of ellipse key frames, and positioning the brain sickle on the plurality of ellipse key frames.
4. The automated ASPECTS scoring method of claim 3, wherein locating a brain sickle on the plurality of elliptical keyframes comprises:
and carrying out binarization processing on the plurality of elliptical key frames to obtain corresponding binary images, adjusting the HU value of the binary images to a first threshold value, so that the boundary of the binary images is displayed clearly relative to the binary images as a whole, and positioning the boundary, wherein the boundary is the positioned brain sickle.
5. The automated ASPECTS scoring method of claim 3, wherein the image registration of the key frames locating the boundary further comprises:
and rotating the plurality of elliptical key frames according to the boundary to enable the boundary to be vertical, and finishing fine tilt correction.
6. The automated ASPECTS scoring method according to claim 2, wherein the two categories include a first category of images belonging to the superior ganglion layer and a second category of images not belonging to the superior ganglion layer.
7. The automated ASPECTS scoring method of claim 1, wherein the training data includes, for any one scoring model, a first set of data and a second set of data; the first set of data comprises a first contrast image, and a second side region; the second set of data comprises a second contrast image, and a first side zone;
the first comparison image is obtained by carrying out image registration and difference processing on a mirror image of the first side partition relative to the boundary and the second side partition;
the second contrast image is obtained by image registration and difference processing of a mirror image of the second side partition with respect to the boundary and the first side partition.
8. Computer apparatus comprising a memory, a processor and a computer program stored on the memory, wherein the processor executes the computer program to perform the steps of the automated flat-scan CT-based ASPECTS scoring method of any one of claims 1-7.
9. Computer readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the automated flat-scan CT-based ASPECTS scoring method according to any one of claims 1 to 7.
10. Computer program product comprising computer instructions, characterized in that the computer instructions, when executed by a processor, implement the steps of the automated flat-scan CT-based ASPECTS scoring method according to any one of claims 1 to 7.
CN202210406433.1A 2022-04-18 2022-04-18 Flat scan CT-based automated ASPECTS scoring method, computer device, readable storage medium, and program product Pending CN114708240A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210406433.1A CN114708240A (en) 2022-04-18 2022-04-18 Flat scan CT-based automated ASPECTS scoring method, computer device, readable storage medium, and program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210406433.1A CN114708240A (en) 2022-04-18 2022-04-18 Flat scan CT-based automated ASPECTS scoring method, computer device, readable storage medium, and program product

Publications (1)

Publication Number Publication Date
CN114708240A true CN114708240A (en) 2022-07-05

Family

ID=82173774

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210406433.1A Pending CN114708240A (en) 2022-04-18 2022-04-18 Flat scan CT-based automated ASPECTS scoring method, computer device, readable storage medium, and program product

Country Status (1)

Country Link
CN (1) CN114708240A (en)


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115148340A (en) * 2022-07-19 2022-10-04 徐俊 Online evaluation system for cerebral small vessel disease image markers
CN116630812A (en) * 2023-07-21 2023-08-22 四川发展环境科学技术研究院有限公司 Water body feature detection method and system based on visible light image analysis
CN116630812B (en) * 2023-07-21 2023-09-26 四川发展环境科学技术研究院有限公司 Water body feature detection method and system based on visible light image analysis


Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination