CN117474877A - Aortic dissection true and false cavity segmentation method and device, electronic equipment and storage medium


Info

Publication number
CN117474877A
CN117474877A (application CN202311466463.2A)
Authority
CN
China
Prior art keywords
image
region
true
aortic
segmentation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311466463.2A
Other languages
Chinese (zh)
Inventor
张迪
彭成宝
张霞
Current Assignee
Shenyang Neusoft Intelligent Medical Technology Research Institute Co Ltd
Original Assignee
Shenyang Neusoft Intelligent Medical Technology Research Institute Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenyang Neusoft Intelligent Medical Technology Research Institute Co Ltd
Priority to CN202311466463.2A
Publication of CN117474877A
Legal status: Pending

Links

Classifications

    • G06T7/0012 Biomedical image inspection
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G06N3/0985 Hyperparameter optimisation; Meta-learning; Learning-to-learn
    • G06T7/11 Region-based segmentation
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G06T7/187 Segmentation involving region growing, region merging or connected component labelling
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06V10/765 Classification using rules for classification or partitioning the feature space
    • G06V10/82 Image or video recognition using neural networks
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/20104 Interactive definition of region of interest [ROI]
    • G06T2207/30101 Blood vessel; Artery; Vein; Vascular


Abstract

The application provides a method, an apparatus, an electronic device and a storage medium for segmenting the true and false cavities of an aortic dissection, relating to the field of image processing. The method comprises the following steps: preprocessing an aortic CTA image to obtain an input image for a true and false cavity segmentation model, wherein the preprocessing at least comprises pre-cutting, window width and window level adjustment, and resampling; inputting the input image into the pre-trained true and false cavity segmentation model to obtain a plurality of two-class segmentation inference values of the input image corresponding to a plurality of true and false cavity regions; inversely mapping the plurality of two-class segmentation inference values to obtain a first segmentation result image corresponding to the input image; and performing image registration of the first segmentation result image against the aortic CTA image to obtain a second segmentation result image corresponding to the aortic CTA image. The method and apparatus can address the problems of inconsistent segmentation quality and low segmentation accuracy of the aorta across categories.

Description

Aortic dissection true and false cavity segmentation method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of image processing, and in particular, to a method and apparatus for segmenting the true and false cavities of an aortic dissection, an electronic device, and a storage medium.
Background
Aortic dissection arises when blood penetrates the intima of the aorta and enters the media, forming an intimal flap; propagation of the flap along the long axis of the aorta separates the aortic wall into a true cavity and a false cavity. It is an acute, severe cardiovascular disease with rapid progression, and if it cannot be diagnosed and treated as early as possible, it seriously threatens life.
With the development of deep learning, increasingly advanced deep learning techniques have been widely applied to medical image segmentation. At present, segmentation of the true and false cavities of the aorta generally divides the main trunk true cavity, main trunk false cavity, branch true cavity and branch false cavity. The voxel frequencies of the different categories differ greatly, and, affected by the contrast agent, some data have low contrast between the true and false cavities and weak boundaries that are difficult to identify, leading to inconsistent segmentation quality across categories and low segmentation accuracy.
Disclosure of Invention
The application provides a method, an apparatus, an electronic device and a storage medium for segmenting the true and false cavities of an aortic dissection, which can address the problems of inconsistent segmentation quality and low segmentation accuracy of the aorta across categories.
In a first aspect, a method for segmenting the true and false cavities of an aortic dissection is provided, comprising the following steps:
preprocessing an aortic CTA image to obtain an input image for a true and false cavity segmentation model, wherein the preprocessing at least comprises pre-cutting, window width and window level adjustment, and resampling;
inputting an input image into a pre-trained true and false cavity segmentation model to obtain a plurality of two-class segmentation inference values of the input image corresponding to a plurality of true and false cavity areas, wherein the plurality of true and false cavity areas at least comprise: an aortic region, a main trunk false cavity region, a branch true cavity region and a branch false cavity region;
inversely mapping the plurality of two-class segmentation inference values to obtain a first segmentation result image corresponding to the input image; and
and carrying out image registration on the first segmentation result image according to the aortic CTA image to obtain a second segmentation result image corresponding to the aortic CTA image.
In a second aspect, there is provided an aortic dissection true and false lumen segmentation apparatus comprising:
the processing module is used for preprocessing the aortic CTA image to obtain an input image for the true and false cavity segmentation model, wherein the preprocessing at least comprises pre-cutting, window width and window level adjustment, and resampling;
the input module is used for inputting an input image into the pre-trained true and false cavity segmentation model to obtain a plurality of two-class segmentation inference values of the input image corresponding to a plurality of true and false cavity areas, wherein the true and false cavity areas at least comprise: an aortic region, a main trunk false cavity region, a branch true cavity region and a branch false cavity region;
the inverse mapping module is used for inversely mapping the plurality of two-class segmentation inference values to obtain a first segmentation result image corresponding to the input image; and
the registration module is used for carrying out image registration on the first segmentation result image according to the aortic CTA image to obtain a second segmentation result image corresponding to the aortic CTA image.
In a third aspect, there is provided an electronic device comprising: a processor and a memory for storing a computer program, the processor being for invoking and running the computer program stored in the memory for performing the method as in the first aspect or in various implementations thereof.
In a fourth aspect, a computer-readable storage medium is provided for storing a computer program for causing a computer to perform the method as in the first aspect or in various implementations thereof.
According to the technical scheme provided by the application, after the aortic CTA image is obtained, it can be subjected to pre-cutting, window width and window level adjustment, and resampling to obtain an input image for the true and false cavity segmentation model; the input image is input into the pre-trained true and false cavity segmentation model to obtain a plurality of two-class segmentation inference values of the input image corresponding to a plurality of true and false cavity regions, and a first segmentation result image corresponding to the input image is obtained by inverse mapping based on those inference values; finally, image registration of the first segmentation result image against the aortic CTA image yields a second segmentation result image corresponding to the aortic CTA image. In this technical scheme, the aortic CTA image can be preprocessed in advance to narrow the image segmentation range; through the region-based segmentation network model, each true and false cavity region in the input image is mapped to one channel of the segmentation network, two-class segmentation inference is performed per region to obtain the inference values corresponding to the aortic region, the main trunk false cavity region, the branch true cavity region and the branch false cavity region, and inverse mapping then yields the segmentation result images of the different categories. In this way, the model's four-class problem is converted into a binary classification per region, and the main trunk dissection and branch dissections of the aorta can be segmented completely and accurately.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application. Additional features and advantages of the present application will be set forth in the detailed description which follows.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is an application scenario diagram provided in an embodiment of the present application;
fig. 2 is a schematic flow chart of an aortic dissection true and false cavity segmentation method according to an embodiment of the present application;
FIG. 3 is a flow chart of a method for segmenting the true and false cavities of an aortic dissection according to another embodiment of the present application;
FIG. 4 is a schematic diagram of an example window width and level adjustment process according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an aortic dissection true and false cavity segmentation apparatus according to an embodiment of the present application;
Fig. 6 is a schematic structural diagram of an aortic dissection true and false cavity segmentation apparatus according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present application based on the embodiments herein.
It should be noted that the terms "first," "second," and the like in the description and claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate, such that embodiments of the present application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
With the development of deep learning methods, increasingly advanced deep learning techniques are applied to medical image segmentation, and the Unet network in particular has been widely adopted. The complete aortic anatomy is complex: the aorta comprises the main trunk, composed of the ascending aorta, the aortic arch and the descending aorta (containing the thoracic and abdominal aorta), and its branches, which fall into three groups: four branches at the upper arch, mainly the left and right carotid arteries and the left and right subclavian arteries; five branches in the middle abdominal segment, mainly the celiac trunk, the superior and inferior mesenteric arteries, and the left and right renal arteries; and four branches in the lower iliac segment, the left and right internal iliac arteries and the left and right external iliac arteries. A complete aortic dissection comprises a main trunk dissection and branch dissections: the main trunk dissection consists of the true and false cavities of the trunk, and when the dissection involves the branches, true and false cavities of the branches are formed as well. Moreover, because of contrast agent issues, in some data the true and false cavities have low contrast and weak boundaries that are difficult to identify, so accurately segmenting the true and false cavities of the aortic trunk and branches is challenging.
It should be understood that the technical solution of the present application may be applied to the following scenarios, but is not limited to:
In some implementations, fig. 1 is an application scenario diagram provided in an embodiment of the present application, where, as shown in fig. 1, an electronic device 110 and a network device 120 may be included in the application scenario. The electronic device 110 may establish a connection with the network device 120 through a wired network or a wireless network.
By way of example, the electronic device 110 may be, but is not limited to, a desktop computer, a notebook computer, a tablet computer, and the like. The network device 120 may be a terminal device or a server, but is not limited thereto. In one embodiment of the present application, the electronic device 110 may send a request message to the network device 120, which may be used to request acquisition of an aortic CTA image, and further, the electronic device 110 may receive a response message sent by the network device 120, which includes the aortic CTA image.
In addition, fig. 1 illustrates one electronic device 110 and one network device 120, and may actually include other numbers of electronic devices and network devices, which is not limited in this application.
In other implementations, the technical solutions of the present application may also be executed by the electronic device 110, or by the network device 120, which is not limited in this application.
After the application scenario of the embodiment of the present application is introduced, the following details of the technical solution of the present application will be described:
fig. 2 is a flowchart of an aortic dissection true and false cavity segmentation method according to an embodiment of the present application, which may be performed by the electronic device 110 shown in fig. 1, but is not limited thereto. As shown in fig. 2, the method may include the steps of:
step 210, preprocessing the aortic CTA image to obtain an input image of the true and false cavity segmentation model, wherein the preprocessing at least comprises preprocessing, window width and window level adjustment processing and resampling processing.
In a specific application scenario, a three-dimensional CTA image of an aortic dissection patient can be obtained by reading the original image; the image size is M×N×L, and typically M=512, N=512. Aortic CTA images tend to have high resolution and many layers (typically about 600-800), with an axial spacing of 0.5 to 2 mm. Because such an image file is very large, inference on the full aortic CTA image takes so long that the model struggles to meet application performance requirements. Besides the aortic dissection region, the aortic CTA image may include the aortic blood vessel region and other redundant content, so after the aortic CTA image is acquired, it can first be pre-cut to narrow the image segmentation range and keep only the initial image corresponding to the aortic dissection region, for example reducing a 512×512 image to 256×256. The pre-cutting improves the segmentation inference speed for the aortic image. In addition, to meet the image input requirements of the true and false cavity segmentation model, after the aortic CTA image is pre-cut, window width and window level adjustment and resampling can be performed on the cropped volume, adjusting the pixel value and pixel spacing of each pixel in the image, thereby obtaining an input image suitable for the true and false cavity segmentation model.
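The window/level adjustment and resampling steps above can be sketched in NumPy. This is an illustrative sketch, not the patent's implementation: the window level and width values are assumed for CTA, and the nearest-neighbour resampler stands in for whatever interpolation the authors actually use.

```python
import numpy as np

def window_level(img_hu, level=300.0, width=600.0):
    """Clip a CT volume (Hounsfield units) to a window and scale to [0, 1].
    level/width are illustrative values, not taken from the patent."""
    lo, hi = level - width / 2.0, level + width / 2.0
    return (np.clip(img_hu, lo, hi) - lo) / (hi - lo)

def resample_nn(vol, in_spacing, out_spacing):
    """Nearest-neighbour resampling of a 3D volume to a target voxel spacing."""
    in_spacing = np.asarray(in_spacing, dtype=float)
    out_spacing = np.asarray(out_spacing, dtype=float)
    out_shape = np.round(np.array(vol.shape) * in_spacing / out_spacing).astype(int)
    # For each output index, pick the nearest source index along each axis.
    idx = [np.minimum((np.arange(n) * out_spacing[d] / in_spacing[d]).astype(int),
                      vol.shape[d] - 1)
           for d, n in enumerate(out_shape)]
    return vol[np.ix_(*idx)]
```

In practice a library such as SimpleITK would handle spacing metadata and interpolation, but the shape arithmetic (output size = input size × input spacing / output spacing) is the same.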
Step 220, inputting the input image into the pre-trained true and false cavity segmentation model to obtain a plurality of two-class segmentation inference values corresponding to a plurality of true and false cavity regions of the input image, wherein the plurality of true and false cavity regions at least comprise: an aortic region, a main trunk false cavity region, a branch true cavity region, and a branch false cavity region.
For the embodiment of the disclosure, the image data of the input image may be fed to the trained true and false cavity segmentation model, and a region-based segmentation mask map may be obtained through forward propagation, with size 4 × (M/2) × (N/2) × L, where 4 is the number of region classes (four regions). The two-class segmentation inference values corresponding to the aortic region, the main trunk false cavity region, the branch true cavity region and the branch false cavity region may be denoted x_aorta, x_main_false_lumen, x_branch_true_lumen and x_branch_false_lumen, respectively.
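A minimal sketch of how the four per-region inference maps could be inverse-mapped back to a single label volume. The channel order, the 0.5 threshold, and the overwrite priority (more specific regions last) are assumptions for illustration, not details confirmed by the patent.

```python
import numpy as np

def masks_to_labels(masks, thresh=0.5):
    """masks: (4, D, H, W) sigmoid outputs; channel order assumed to be
    [aorta, main_false_lumen, branch_true_lumen, branch_false_lumen].
    Returns labels: 1 main true, 2 main false, 3 branch true, 4 branch false."""
    aorta, mf, bt, bf = (masks >= thresh)
    labels = np.zeros(masks.shape[1:], dtype=np.uint8)
    labels[aorta] = 1   # whole aorta region; remainder ends up as main true cavity
    labels[mf] = 2      # overwrite with the more specific regions
    labels[bt] = 3
    labels[bf] = 4
    return labels
```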
The true and false cavity segmentation model is a task model obtained in advance by training on the aortic dissection true and false cavity segmentation task with a large number of sample images. When pre-training the true and false cavity segmentation model, preset feature labels of the sample images can first be configured, the preset feature labels at least comprising an aortic region label, a main trunk false cavity region label, a branch true cavity region label and a branch false cavity region label. In addition, the sample images may be pre-cut to reduce the image segmentation range, keeping only the initial sample image corresponding to the aortic dissection region, for example reducing a 512×512 image to a preset size of 256×256. Pre-cutting improves the training speed of the true and false cavity segmentation model, avoids interference from the aortic blood vessel region and redundant content in the sample image, and thus improves training accuracy. Window width and window level adjustment and resampling can also be performed on the cropped sample images, adjusting the pixel value and pixel spacing of each pixel in the image. The specific preprocessing procedure is as in step 210 and will not be repeated here.
When the preset feature labels are generated, the different categories can be mapped into partially overlapping regions, and label images are produced for region-based training to obtain a region-based true and false cavity segmentation model. The original annotation is a label image of the four categories: main true cavity (label value 1), main false cavity (label value 2), branch true cavity (label value 3), and branch false cavity (label value 4), which are mapped to the partially overlapping regions. The main true cavity, main false cavity, branch true cavity and branch false cavity of the aorta are merged to form the aorta region as the first region; the main trunk false cavity forms the main trunk false cavity region as the second region; the branch true cavity forms the branch true cavity region as the third region; and the branch false cavity forms the branch false cavity region as the fourth region, yielding a label image for region-based training. Its size is 4 × (M/2) × (N/2) × L, where 4 is the number of region classes (four regions), and each voxel's label value is 1 or 0 (1 denotes the region, 0 denotes background).
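The label-to-region mapping just described can be sketched as follows. This is an illustrative reading of the text; in particular, the exact composition of the merged first region is an assumption.

```python
import numpy as np

def labels_to_region_masks(labels):
    """Map a 4-class label volume (1 main true, 2 main false, 3 branch true,
    4 branch false) to four partially overlapping binary region channels."""
    masks = np.stack([
        np.isin(labels, [1, 2, 3, 4]),  # region 1: whole aorta (all lumina merged)
        labels == 2,                    # region 2: main trunk false cavity
        labels == 3,                    # region 3: branch true cavity
        labels == 4,                    # region 4: branch false cavity
    ]).astype(np.uint8)
    return masks
```

Because region 1 contains the other three, each channel is a well-balanced binary target, which is how the four-class problem becomes four two-class problems.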
The preprocessed sample images carrying the preset feature labels are then input to the true and false cavity segmentation model, which performs resampling and background removal on them to obtain the sample input images. During training, the true and false cavity segmentation model can apply enhancement to the sample input images to obtain derivative images, enriching the number of sample images and improving training accuracy. Enhancement methods may include elastic transformation, scaling, random rotation, gamma transformation, contrast enhancement, cutout, and the like. The model can use distributed sampling for data loading, Adam as the optimizer, and mixed-precision training for optimization.
The true and false cavity segmentation model may be any deep learning model, such as a Residual Network (ResNet), a lightweight model (MobileNet), or a network model modified from these by changing some network layers, which is not specifically limited herein. In the following embodiments of the present disclosure, a true and false cavity segmentation model using a 3D Unet network structure is taken as an example, but the technical solutions are not limited to this embodiment. The 3D Unet is a U-shaped network. The input region of interest has size 1 × (M/2) × (N/2) × L. Each step takes a feature map with n channels as input and, through three layers of 3D conv + BN + ReLU, produces a feature map of the primarily extracted features with 2n channels. 3D conv is a convolution filter with kernel size 3×3×3; BN is the normalization layer, which subtracts the mean of the input feature map and divides by its variance; ReLU is the activation layer, which performs a nonlinear mapping. The resulting feature map with 2n channels is then downsampled with a 2×2×2 kernel. After 5 downsampling steps, 5 upsampling steps are performed to obtain a feature map of the same size as the input data, which is then mapped into [0, 1] by a Sigmoid activation layer to obtain the segmentation network prediction for each region: a 4-channel mask map of size 4 × (M/2) × (N/2) × L corresponding to the four regions.
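The normalization-plus-activation step described above (subtract the mean, divide by the variance, then ReLU) can be sketched per feature map in NumPy. This is a simplified illustration: a real 3D Unet would use a framework's BatchNorm3d with learned scale and shift parameters and batch statistics.

```python
import numpy as np

def bn_relu(x, eps=1e-5):
    """x: (C, D, H, W) feature map. Normalise each channel to zero mean and
    unit variance over its spatial dimensions, then apply ReLU."""
    mean = x.mean(axis=(1, 2, 3), keepdims=True)
    var = x.var(axis=(1, 2, 3), keepdims=True)
    return np.maximum((x - mean) / np.sqrt(var + eps), 0.0)
```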
Model training can adopt a region-based multi-layer-label deep supervision learning method. The loss function may take the form of a combination of binary cross-entropy and Dice. The predicted value of each region and the true value of the label image are obtained through the segmentation network; the loss function measures the deviation between the predicted and true values, and while that deviation is larger than a set threshold, training continues and network parameters are updated by back-propagation to optimize the true and false cavity segmentation model, until the loss reaches a minimum value or the set threshold and training ends.
Binary cross-entropy is a two-class cross-entropy loss. With true value $x_i$ and predicted value $y_i$ for each pixel, the loss over $N$ pixels is defined as:

$$L_{\mathrm{BCE}} = -\frac{1}{N}\sum_{i=1}^{N}\left[x_i \log y_i + (1 - x_i)\log(1 - y_i)\right]$$
the Dice is a region-related Dice, X and Y represent real values and predicted values, respectively, and in the segmentation task, X represents a real mask map and Y represents a predicted mask map. The numerator is the intersection between the real and predicted values and the denominator is the union of the real and predicted values, multiplied by 2 because the denominator has the reason to repeatedly calculate the common element between X and Y.
The final total loss is the sum of the binary cross-entropy loss and the Dice loss.
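As a minimal NumPy sketch (not the disclosure's actual implementation, which would typically use a deep learning framework), the combined loss described above can be written as:

```python
import numpy as np

def bce_loss(x, y, eps=1e-7):
    # Binary cross-entropy averaged over N pixels; x is the true value,
    # y the predicted value, per the definition above.
    y = np.clip(y, eps, 1 - eps)  # avoid log(0)
    return float(-np.mean(x * np.log(y) + (1 - x) * np.log(1 - y)))

def dice_loss(x, y, eps=1e-7):
    # Region-based Dice loss: 1 - 2|X∩Y| / (|X| + |Y|).
    intersection = np.sum(x * y)
    return float(1 - (2 * intersection + eps) / (np.sum(x) + np.sum(y) + eps))

def total_loss(x, y):
    # Total loss: sum of binary cross-entropy and Dice loss.
    return bce_loss(x, y) + dice_loss(x, y)
```

A near-perfect prediction drives both terms toward 0, while a wrong prediction is penalized by both; the eps smoothing term is an illustrative numerical-stability choice, not specified in the disclosure.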
In addition, hyperparameters can be set for the true and false cavity segmentation model; these are configuration parameters of the model, and setting specific hyperparameters can improve the training speed of the model while ensuring the inference effect. Hyperparameters may include, for example, the learning rate (e.g., learning rate = 0.001), the number of iterations (e.g., epoch = 1000), the batch size (e.g., batch size = 2), etc. After training, the segmentation network model is saved to obtain a region-based true and false cavity segmentation model for subsequent use.
Accordingly, when training the true and false cavity segmentation model, the embodiment steps may include: generating a sample image configured with preset feature tags, wherein the sample image is a pre-cut image including an aortic region, the aortic region at least includes a main trunk true cavity region, a main trunk false cavity region, a branch true cavity region and a branch false cavity region, and the preset feature tags at least include an aortic region tag, a main trunk false cavity region tag, a branch true cavity region tag and a branch false cavity region tag; preprocessing the sample image to obtain a sample input image of the true and false cavity segmentation model; and inputting the sample input image into the true and false cavity segmentation model and performing true and false cavity segmentation training on it, wherein during the training, the sample input image and derivative images obtained by augmenting the sample input image are used as input features, the preset feature tags are used as training labels, and the model parameters of the true and false cavity segmentation model are iteratively updated until the accuracy of the model on true and false cavity segmentation is greater than a preset accuracy threshold, at which point training is determined to be complete.
Step 230, obtaining a first segmentation result image corresponding to the input image by inverse mapping based on the binary segmentation inference values.
For the disclosed embodiments, the predicted values of the different regions (i.e., the binary segmentation inference values) may be inverse-mapped to obtain mask values of the different categories.
Specifically, a preset threshold T can be set for the binary segmentation inference value x_aorta of the aortic region: pixels greater than the preset threshold T are set to 1 and the remaining pixels are set to 0, yielding the mask value of the aortic region, which contains four categories, namely the main trunk true cavity, main trunk false cavity, branch true cavity and branch false cavity.

For the binary segmentation inference value x_main_false_lumen of the main trunk false cavity region, pixels greater than the preset threshold T are set to 2 and the remaining values are kept unchanged, yielding the mask value of the main trunk false cavity.

Similarly, for the binary segmentation inference value x_branch_true_lumen of the branch true cavity region, pixels greater than the preset threshold T are set to 3 and the remaining values are kept unchanged, yielding the mask value of the branch true cavity.

Similarly, for the binary segmentation inference value x_branch_false_lumen of the branch false cavity region, pixels greater than the preset threshold T are set to 4 and the remaining values are kept unchanged, yielding the mask value of the branch false cavity.
The mask values of four different categories are obtained through the steps, namely a main true cavity (with the value of 1), a main false cavity (with the value of 2), a branch true cavity (with the value of 3) and a branch false cavity (with the value of 4).
After the mask values of the different categories are obtained, the mask values of the main trunk false cavity, branch true cavity and branch false cavity can be merged into the mask value of the aortic region (whose remaining pixels represent the main trunk true cavity), thereby obtaining the four-category segmentation result, i.e., the first segmentation result image.
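The thresholding and merging described in the steps above can be sketched in NumPy as follows; the threshold value 0.5 and the function name are illustrative assumptions, not values fixed by the disclosure:

```python
import numpy as np

def merge_masks(x_aorta, x_main_false, x_branch_true, x_branch_false, T=0.5):
    # Inverse-map the per-region inference values into one labeled mask:
    # 1 = main trunk true cavity, 2 = main trunk false cavity,
    # 3 = branch true cavity, 4 = branch false cavity, 0 = background.
    mask = (x_aorta > T).astype(np.uint8)  # aortic region -> 1
    mask[x_main_false > T] = 2             # overwrite with the other classes
    mask[x_branch_true > T] = 3
    mask[x_branch_false > T] = 4
    return mask
```

Pixels of the aortic region not claimed by any of the three overwriting classes keep the value 1, i.e., they remain the main trunk true cavity.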
Step 240, performing image registration on the first segmentation result image according to the aortic CTA image to obtain a second segmentation result image corresponding to the aortic CTA image.
In a specific application scenario, since the first segmentation result image obtained by inverse mapping is a smaller-size segmentation result image corresponding to the input image, it needs to be restored to a complete segmentation result through post-processing, thereby obtaining the second segmentation result image corresponding to the aortic CTA image. The post-processing steps may include, but are not limited to, removal of noise points produced by inference, registration with the original image, etc., which are not specifically limited herein.
In summary, according to the aortic dissection true and false cavity segmentation method provided by the present application, after an aortic CTA image is obtained, pre-cutting processing, window width and window level adjustment processing and resampling processing can be performed on the aortic CTA image to obtain the input image of the true and false cavity segmentation model; the input image is input into the pre-trained true and false cavity segmentation model to obtain multiple binary segmentation inference values of the input image corresponding to multiple true and false cavity regions, and a first segmentation result image corresponding to the input image is obtained by inverse mapping based on the binary segmentation inference values; finally, image registration is performed on the first segmentation result image according to the aortic CTA image to obtain a second segmentation result image corresponding to the aortic CTA image. With this technical solution, the aortic CTA image can be preprocessed in advance to reduce the image segmentation range; through the region-based segmentation network model, each true and false cavity region in the input image is converted into one channel of the segmentation network, binary segmentation inference is performed for each region to obtain the binary segmentation inference values corresponding to the aortic region, main trunk false cavity region, branch true cavity region and branch false cavity region, and inverse mapping is performed to obtain the segmentation result images of the different categories. Through this segmentation approach, the four-class processing of the model is converted into binary processing, and the main trunk dissection and branch dissections of the aorta can be segmented completely and accurately.
Based on the embodiment shown in fig. 2, as a refinement and extension of the above embodiment, and in order to fully describe the specific implementation procedure of the method of this embodiment, this embodiment provides the specific method shown in fig. 3. Fig. 3 further refines step 230 of the embodiment shown in fig. 2; in the embodiment shown in fig. 3, step 230 includes steps 330 to 340. As shown in fig. 3, the method comprises the following steps:
step 310, preprocessing the aortic CTA image to obtain an input image of the true and false cavity segmentation model.
The preprocessing at least includes pre-cutting processing, window width and window level adjustment processing, and resampling processing.
For the embodiment of the disclosure, in order to better enable the deep learning model to realize effective segmentation, the method can retain the effective image part of the aortic CTA image through pre-cutting processing, reducing the larger original size of the aortic CTA image to a preset size. The preset size is an image size slightly larger than the actual true and false cavity area, and can be set according to the actual application scenario. Empirically, the preset size can be set to 256×256; that is, by performing the pre-cutting processing on the aortic CTA image, a 512×512 aortic CTA image can be reduced to a 256×256 initial image.
Wherein, when the pre-cutting process is performed, the region of interest image including the aortic dissection region can be extracted based on the threshold segmentation. Specifically, the pre-cutting treatment comprises the following steps:
1) Extracting the skeleton region: because the aorta is contained within the thoracic skeleton, which has a higher density (higher CT value), the skeleton is easy to extract. Therefore, by setting a pixel threshold (for example, 200 or more) and performing image threshold segmentation with an image processing tool, image regions with pixel values greater than the set pixel threshold are extracted; these regions include bone, high-density blood vessels, and the like. The extracted regions are sorted from large to small, and the largest region is taken as the skeleton region.
2) Extracting the skeleton region component: an image processing tool can be used to analyze the connected components within the skeleton region; the connected components are sorted from large to small, and the first (i.e., the largest) connected component is extracted as the skeleton region component.
3) Acquiring the pre-cut image: using an image processing tool, a shape feature analysis of the skeleton region component is performed, and the initial vertex coordinates (X0, Y0, Z0) of the circumscribed cuboid of the skeleton region component and the corresponding three axial side lengths (a, b, c) are read. The slice center point coordinates (Xc, Yc) are then calculated:
Xc=X0+a/2
Yc=Y0+b/2
Setting the side length of the region of interest as R (e.g., 256), the region [Xc-R/2 : Xc+R/2, Yc-R/2 : Yc+R/2] is extracted from the aortic CTA image (the full Z-axis extent is retained), yielding the pre-cut image, which serves as the region of interest image.
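Steps 1) to 3) can be sketched with NumPy and SciPy as follows; the (Z, Y, X) array layout and the helper name are assumptions for illustration, and connected components are computed with `scipy.ndimage.label`:

```python
import numpy as np
from scipy import ndimage

def pre_cut(volume, pixel_threshold=200, R=256):
    # Crop an R x R in-plane region of interest around the skeleton:
    # threshold -> largest connected component -> bounding-box center
    # -> fixed-size crop (full Z extent kept). volume: (Z, Y, X) CT values.
    skeleton = volume > pixel_threshold            # step 1: threshold segmentation
    labels, n = ndimage.label(skeleton)            # step 2: connected components
    if n == 0:
        raise ValueError("no skeleton region found")
    sizes = ndimage.sum(skeleton, labels, range(1, n + 1))
    comp = labels == (np.argmax(sizes) + 1)        # largest component
    zs, ys, xs = np.nonzero(comp)                  # step 3: bounding box
    yc = (ys.min() + ys.max()) // 2                # Yc = Y0 + b/2
    xc = (xs.min() + xs.max()) // 2                # Xc = X0 + a/2
    y0, x0 = max(yc - R // 2, 0), max(xc - R // 2, 0)
    return volume[:, y0:y0 + R, x0:x0 + R]
```

For simplicity this sketch clamps only the lower crop bound; a production version would also guard the upper image boundary.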
In addition, in order to meet the image input requirements of the true and false cavity segmentation model, after the aortic CTA image is pre-cut, window width and window level adjustment processing and resampling processing can be performed on the region of interest image, so as to adjust the pixel value and pixel spacing of each pixel point in the region of interest image.
When performing window width and window level adjustment, the window width value w and the window level value c of the region of interest image can be obtained first, and the minimum window_min and maximum window_max of the window range are calculated from them:
window_min=c-w/2
window_max=c+w/2
The pixel value p′ for each pixel p in the region of interest image may then be adjusted based on the following formulas:
p′=(p-window_min)/(window_max-window_min)
p′=p′*255
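A sketch of this adjustment, assuming values outside the window are clamped to the window range (a common convention that the formulas above do not state explicitly):

```python
import numpy as np

def window_level_adjust(image, w, c):
    # Map CT values through a window of width w centered at level c
    # into [0, 255], per the two formulas above.
    window_min = c - w / 2
    window_max = c + w / 2
    p = (image - window_min) / (window_max - window_min)
    p = np.clip(p, 0.0, 1.0)  # assumed clamping of out-of-window values
    return p * 255
```

For example, with a window of w = 400 centered at c = 40, a pixel at the window level maps to the middle of the output range.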
after the adjustment of each pixel point in the image of the region of interest is completed, a first processed image including the aortic dissection region shown in fig. 4 can be obtained, and the image has a clearer image effect.
When resampling is performed, the spatial spacing between pixel points in the image is made consistent, which facilitates model inference and analysis. The transformation is as follows: since the spatial spacings between pixel points of the original CTA image differ, the spacing (ΔX_i, ΔY_i, ΔZ_i) of each pixel point in the first processed image can be obtained and the mean calculated:

Δmean = (1/n) * Σ_i (ΔX_i, ΔY_i, ΔZ_i)

where i is the i-th pixel point and n is the number of pixel points in the first processed image. With the mean Δmean as the target spacing between pixel points, a cubic spline interpolation algorithm (other interpolation methods are optional) is used to transform the input image, so that the spacings between pixel points in the transformed image are all the same (all equal to the target spacing). If R = 256, the image size of the second processed image obtained after resampling is (M/2)×(N/2)×L, which is the basis of the next processing step.
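The resampling step can be sketched with `scipy.ndimage.zoom`, whose `order=3` corresponds to cubic spline interpolation; representing the spacing per axis rather than per pixel is an assumption for illustration:

```python
import numpy as np
from scipy import ndimage

def resample_to_mean_spacing(image, spacing):
    # Resample a volume so all axes share the mean spacing, using
    # cubic spline interpolation as described above.
    # image: (Z, Y, X) array; spacing: (dz, dy, dx) in mm.
    spacing = np.asarray(spacing, dtype=float)
    target = spacing.mean()           # target spacing = mean of the spacings
    zoom_factors = spacing / target   # old_spacing / new_spacing per axis
    return ndimage.zoom(image, zoom_factors, order=3)
```

An axis with a coarser spacing than the mean is upsampled and one with a finer spacing is downsampled, so the output voxels are isotropic at the target spacing.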
Accordingly, for the embodiments of the present disclosure, when preprocessing the aortic CTA image to obtain the input image of the true and false cavity segmentation model, the embodiment steps may include: pre-cutting the aortic CTA image to obtain a region of interest image containing the aortic dissection region; calculating the minimum value and maximum value of the window range according to the window width value and window level value of the region of interest image; performing window width and window level adjustment processing on each pixel value in the region of interest image based on the minimum value and maximum value of the window range to obtain a first processed image; calculating the mean spatial spacing between adjacent pixel points in the first processed image; and resampling the first processed image based on the mean spatial spacing to obtain a second processed image, which serves as the input image of the true and false cavity segmentation model, wherein the spatial spacing between any adjacent pixel points in the input image is equal to the mean spatial spacing.
Accordingly, when performing a pre-cutting process on the aortic CTA image to obtain a region of interest image including an aortic dissection region, the steps of the embodiment may include: extracting a skeleton region image from the aortic CTA image based on a pixel threshold; analyzing the connected domain components in the skeleton region image, and screening the connected domain components with the largest corresponding areas as skeleton region components; and taking the circumscribed quadrilateral central point of the skeleton region assembly as a reference, and extracting an image region according to the side length of the preset section to obtain an interested region image containing the aortic dissection region.
And 320, inputting the input image into the pre-trained true and false cavity segmentation model to obtain a plurality of two-class segmentation inference values of the input image corresponding to a plurality of true and false cavity regions.
Wherein, the plurality of true and false cavity areas at least comprise: an aortic region, a main trunk prosthetic cavity region, a branch true cavity region, and a branch prosthetic cavity region.
For the embodiment of the present disclosure, the specific implementation process may refer to the related description in the embodiment step 220, which is not repeated herein.
Step 330, determining mask values corresponding to the true and false cavity regions respectively based on the binary segmentation inference values.
For the embodiments of the present disclosure, the mask value of the aortic region may be determined by inverse mapping based on the binary segmentation inference value of the aortic region; the mask value of the main trunk false cavity is determined by inverse mapping based on the binary segmentation inference value of the main trunk false cavity region; the mask value of the branch true cavity is determined by inverse mapping based on the binary segmentation inference value of the branch true cavity region; and the mask value of the branch false cavity is determined by inverse mapping based on the binary segmentation inference value of the branch false cavity region. For the specific implementation steps, reference may be made to the relevant description in embodiment step 230, which is not repeated here.
Accordingly, the embodiment steps may include: determining a first mask value of the aortic region based on a comparison of the binary segmentation inference value corresponding to each pixel point in the aortic region with a preset threshold; determining a second mask value of the main trunk false cavity region based on a comparison of the binary segmentation inference value corresponding to each pixel point in the main trunk false cavity region with the preset threshold; determining a third mask value of the branch true cavity region based on a comparison of the binary segmentation inference value corresponding to each pixel point in the branch true cavity region with the preset threshold; and determining a fourth mask value of the branch false cavity region based on a comparison of the binary segmentation inference value corresponding to each pixel point in the branch false cavity region with the preset threshold. The preset threshold may be set according to the actual application scenario and is not specifically limited herein.
Step 340, merging mask values corresponding to the main false cavity region, the branch true cavity region and the branch false cavity region in mask values corresponding to the aortic region, to obtain a first segmentation result image of four true and false cavity categories corresponding to the aortic region, wherein the four true and false cavity categories include: a main trunk true cavity, a main trunk false cavity, a branch true cavity and a branch false cavity.
For the embodiment of the disclosure, after the mask values corresponding to the multiple true and false cavity regions are obtained, a mask image of the aortic main trunk and branches is obtained; this mask image then needs to be converted into the true and false cavity segmentation result, i.e., the first segmentation result image. Specifically, the aortic true and false cavity mask obtained in the previous step has size (M/2)×(N/2)×L, while the original image is M×N×L. Since the mask value of the aortic region contains the main trunk true cavity, main trunk false cavity, branch true cavity and branch false cavity, the center point coordinates (Xc, Yc) and side length R of the region of interest obtained during preprocessing are used: the mask values at the positions of the corresponding regions within the aortic region (i.e., the main trunk false cavity region, branch true cavity region and branch false cavity region) are set to the mask image values obtained in the previous step, and the mask values at the other positions are set to 0, representing the background region. This yields the true and false cavity segmentation result of the aortic main trunk and branches, i.e., the first segmentation result image.
Accordingly, following embodiment step 330, the embodiment steps may include: updating the first mask value of the main trunk false cavity region within the aortic region with the second mask value, updating the first mask value of the branch true cavity region within the aortic region with the third mask value, and updating the first mask value of the branch false cavity region within the aortic region with the fourth mask value, to obtain the first segmentation result image of the four true and false cavity categories corresponding to the aortic region.
And 350, carrying out image registration on the first segmentation result image according to the aortic CTA image to obtain a second segmentation result image corresponding to the aortic CTA image.
In a specific application scenario, since the first segmentation result image obtained by inverse mapping is a smaller-size segmentation result image corresponding to the input image, it needs to be restored to a complete segmentation result through post-processing, thereby obtaining the second segmentation result image corresponding to the aortic CTA image. The post-processing steps may include, but are not limited to, removal of noise points produced by inference, registration with the original image, etc., which are not specifically limited herein. The details are as follows:
1) Noise point processing
The segmentation inference result obtained through the above steps usually contains noise points of the main trunk true cavity. To remove them, the first segmentation result image is first converted into a binary image (background, main trunk false cavity, branch true cavity and branch false cavity are set to 0; main trunk true cavity is set to 1); a connected component analysis is then performed with an image processing tool, the components are sorted from large to small, and the first (i.e., the largest) component is kept as the binary segmentation result image, the remaining components being noise points or blocks. Using this binary image to trim the first segmentation result image, a segmentation result image with the noise removed is obtained.
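A sketch of this noise removal, under the reading that only the main trunk true cavity (label 1) is trimmed to its largest connected component while labels 2-4 are left untouched; the function name and label layout are illustrative:

```python
import numpy as np
from scipy import ndimage

def remove_true_cavity_noise(seg):
    # Keep only the largest connected component of the main trunk
    # true cavity (label 1) and erase the rest as noise; the other
    # labels (2, 3, 4) are left unchanged. seg: integer label volume.
    binary = (seg == 1)                  # main true cavity -> 1, rest -> 0
    labels, n = ndimage.label(binary)
    if n > 1:
        sizes = ndimage.sum(binary, labels, range(1, n + 1))
        keep = np.argmax(sizes) + 1      # largest component
        noise = binary & (labels != keep)
        seg = seg.copy()
        seg[noise] = 0                   # erase true-cavity noise voxels
    return seg
```

The binarize-then-trim order matches the description above: the binary image selects which label-1 voxels survive, and the other classes pass through untouched.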
2) Registration with the original image
After the noise processing of the segmentation result, image registration is also required so that the result corresponds to the aortic CTA image. Specifically, an image processing tool is used to convert the aortic CTA image into an array and obtain its shape information, and a zero-valued array is constructed from that shape. The segmentation result image is likewise converted into an array and placed at the corresponding position of the previously constructed zero-valued array according to the center point and cross-section side length (256×256) calculated during image preprocessing, forming the segmentation result array corresponding to the input image. Finally, the array is converted back into an image and the initial image information is copied to obtain the final segmentation result image, where the initial image information is the pixel position information of the input image within the aortic CTA image. Through this registration, the first segmentation result image is restored from the 256×256 to the 512×512 image size, with its position corresponding to the pixel position of the input image in the aortic CTA image, yielding the second segmentation result image corresponding to the aortic CTA image, which serves as the final true and false cavity segmentation image of the aortic main trunk plus branches.
Accordingly, for embodiments of the present disclosure, the embodiment steps may include: extracting image shape information of an aortic CTA image, and constructing a 0-value array corresponding to the aortic CTA image based on the image shape information; after converting the first segmentation result image into an array, inserting the array into a corresponding position of a 0-value array according to the circumscribed quadrilateral central point and the preset cross section side length of the skeleton region assembly obtained by calculation in the pre-cutting process to form a segmentation result array corresponding to the aortic CTA image; and carrying out pixel adjustment on the image converted by the segmentation result array based on initial image information of the aortic CTA image to obtain a second segmentation result image corresponding to the aortic CTA image, wherein the initial image information is pixel position information of the region-of-interest image in the aortic CTA image.
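The paste-back registration above can be sketched as follows; the coordinate convention (volume as (Z, Y, X), crop anchored at the center point recorded during preprocessing) is an illustrative assumption:

```python
import numpy as np

def paste_back(crop_seg, full_shape, xc, yc, R=256):
    # Place the cropped segmentation result back into a zero-valued
    # array with the original CTA shape, at the crop position recorded
    # during preprocessing. crop_seg: (Z, R, R); full_shape: (Z, Y, X).
    full = np.zeros(full_shape, dtype=crop_seg.dtype)
    y0, x0 = yc - R // 2, xc - R // 2
    full[:, y0:y0 + R, x0:x0 + R] = crop_seg
    return full
```

All voxels outside the pasted region keep the value 0, i.e., they represent the background of the final segmentation image.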
In summary, according to the aortic dissection true and false cavity segmentation method provided by the present application, after an aortic CTA image is obtained, pre-cutting processing, window width and window level adjustment processing and resampling processing can be performed on the aortic CTA image to obtain the input image of the true and false cavity segmentation model; the input image is input into the pre-trained true and false cavity segmentation model to obtain multiple binary segmentation inference values of the input image corresponding to multiple true and false cavity regions, and a first segmentation result image corresponding to the input image is obtained by inverse mapping based on the binary segmentation inference values; finally, image registration is performed on the first segmentation result image according to the aortic CTA image to obtain a second segmentation result image corresponding to the aortic CTA image. With this technical solution, the aortic CTA image can be preprocessed in advance to reduce the image segmentation range; through the region-based segmentation network model, each true and false cavity region in the input image is converted into one channel of the segmentation network, binary segmentation inference is performed for each region to obtain the binary segmentation inference values corresponding to the aortic region, main trunk false cavity region, branch true cavity region and branch false cavity region, and inverse mapping is performed to obtain the segmentation result images of the different categories. Through this segmentation approach, the four-class processing of the model is converted into binary processing, and the main trunk dissection and branch dissections of the aorta can be segmented completely and accurately.
Based on the above detailed description of the aortic dissection true and false lumen segmentation method provided in fig. 2 and 3, as shown in fig. 5, fig. 5 is a block diagram of an aortic dissection true and false lumen segmentation apparatus according to an exemplary embodiment. As shown in fig. 5, the apparatus includes:
the processing module 41 is configured to preprocess the aortic CTA image to obtain an input image of the true and false cavity segmentation model, where the preprocessing at least includes pre-cutting processing, window width and window level adjustment processing, and resampling processing;
the input module 42 is configured to input an input image to the pre-trained true and false cavity segmentation model, and obtain a plurality of two-class segmentation inference values corresponding to a plurality of true and false cavity regions of the input image, where the plurality of true and false cavity regions at least includes: an aortic region, a main trunk false cavity region, a branch true cavity region and a branch false cavity region;
the inverse mapping module 43 is configured to obtain a first segmentation result image corresponding to the input image based on the multiple binary segmentation inference values;
the registration module 44 may be configured to perform image registration on the first segmentation result image according to the aortic CTA image, so as to obtain a second segmentation result image corresponding to the aortic CTA image.
In some embodiments of the present application, the processing module 41 may be configured to perform a pre-cutting process on the aortic CTA image to obtain a region of interest image including an aortic dissection region; calculating the minimum value and the maximum value of the wide window range according to the window width value and the window level value of the region-of-interest image; based on the minimum value and the maximum value of the wide window range, window width window level adjustment processing is carried out on each pixel value in the region of interest image, and a first processed image is obtained; calculating a space interval average value between adjacent pixel points in the first processed image; and resampling the first processed image based on the space interval average value to obtain a second processed image, and taking the second processed image as an input image of the true and false cavity segmentation model, wherein the space interval between any adjacent pixel points in the input image is equal to the space interval average value.
In some embodiments of the present application, the processing module 41 is operable to extract a skeleton region image in the aortic CTA image based on the pixel threshold; analyzing the connected domain components in the skeleton region image, and screening the connected domain components with the largest corresponding areas as skeleton region components; and taking the circumscribed quadrilateral central point of the skeleton region assembly as a reference, and extracting an image region according to the side length of the preset section to obtain an interested region image containing the aortic dissection region.
In some embodiments of the present application, as shown in fig. 6, the apparatus further includes: a training module 45;
the training module 45 is configured to generate a sample image configured with preset feature tags, wherein the sample image is a pre-cut image including an aortic region, the aortic region at least includes a main trunk true cavity region, a main trunk false cavity region, a branch true cavity region and a branch false cavity region, and the preset feature tags at least include an aortic region tag, a main trunk false cavity region tag, a branch true cavity region tag and a branch false cavity region tag; preprocess the sample image to obtain a sample input image of the true and false cavity segmentation model; and input the sample input image into the true and false cavity segmentation model and perform true and false cavity segmentation training on it, wherein during the training, the sample input image and derivative images obtained by augmenting the sample input image are used as input features, the preset feature tags are used as training labels, and the model parameters of the true and false cavity segmentation model are iteratively updated until the accuracy of the model on true and false cavity segmentation is greater than a preset accuracy threshold, at which point training is determined to be complete.
In some embodiments of the present application, the inverse mapping module 43 is configured to determine mask values corresponding to the true and false cavity regions respectively based on the multiple binary segmentation inference values; and merge the mask values corresponding to the main trunk false cavity region, branch true cavity region and branch false cavity region into the mask value corresponding to the aortic region, to obtain a first segmentation result image of four true and false cavity categories corresponding to the aortic region, wherein the four true and false cavity categories include: main trunk true cavity, main trunk false cavity, branch true cavity and branch false cavity.
In some embodiments of the present application, the inverse mapping module 43 may be configured to determine a first mask value of the aortic region based on a comparison of the binary segmentation inference value corresponding to each pixel point in the aortic region with a preset threshold; determine a second mask value of the main trunk false cavity region based on a comparison of the binary segmentation inference value corresponding to each pixel point in the main trunk false cavity region with the preset threshold; determine a third mask value of the branch true cavity region based on a comparison of the binary segmentation inference value corresponding to each pixel point in the branch true cavity region with the preset threshold; determine a fourth mask value of the branch false cavity region based on a comparison of the binary segmentation inference value corresponding to each pixel point in the branch false cavity region with the preset threshold; and update the first mask value of the main trunk false cavity region within the aortic region with the second mask value, update the first mask value of the branch true cavity region within the aortic region with the third mask value, and update the first mask value of the branch false cavity region within the aortic region with the fourth mask value, to obtain a first segmentation result image of the aortic region corresponding to the four true and false cavity categories.
In some embodiments of the present application, the registration module 44 may be configured to extract image shape information of the aortic CTA image and construct a zero-value array corresponding to the aortic CTA image based on the image shape information; after converting the first segmentation result image into an array, insert the array at the corresponding position of the zero-value array according to the center point of the circumscribed quadrilateral of the skeleton region component and the preset cross-section side length calculated during the pre-cutting process, so as to form a segmentation result array corresponding to the aortic CTA image; and perform pixel adjustment on the image converted from the segmentation result array based on initial image information of the aortic CTA image to obtain a second segmentation result image corresponding to the aortic CTA image, wherein the initial image information is the pixel position information of the region-of-interest image in the aortic CTA image.
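As a hedged sketch of the re-insertion step, the following pastes the cropped segmentation result back into a zero array with the CTA image's shape, using the crop center and side length recorded during pre-cutting. A 2D slice is used for brevity; the function name and the centering convention are assumptions.

```python
import numpy as np

def restore_to_cta(first_result, cta_shape, center, side):
    """Paste the cropped segmentation result back into a zero array that
    has the shape of the original CTA image."""
    out = np.zeros(cta_shape, dtype=first_result.dtype)
    half = side // 2
    y0, x0 = center[0] - half, center[1] - half  # top-left corner of the crop
    out[y0:y0 + side, x0:x0 + side] = first_result
    return out
```

Extending this to 3D volumes only requires adding the third axis to the slicing; boundary clamping would be needed when the crop touches the image border.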
The specific manner in which each module performs its operations in the apparatus of the above embodiments has been described in detail in the method embodiments, and will not be repeated here.
According to the embodiments of the present application, the aortic CTA image can be preprocessed in advance to reduce the image segmentation range; through the region-based segmentation network model, each true and false cavity region in the input image is mapped to one channel of the segmentation network, binary classification segmentation inference is performed on each region to obtain the binary classification segmentation inference values corresponding to the aortic region, the main trunk false cavity region, the branch true cavity region and the branch false cavity region respectively, and inverse mapping is then performed to obtain segmentation result images of the different categories. In this way, the four-class classification performed by the model is converted into binary classification, and both the main trunk dissection and the branch dissection of the aorta can be segmented completely and accurately.
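The conversion of each true and false cavity region into one network channel can be illustrated by splitting a four-category label map into per-region binary targets. The label values and channel order below are assumptions for illustration only.

```python
import numpy as np

def to_binary_channels(label):
    """Split a four-category true/false cavity label map into the four
    binary target channels used for per-region binary segmentation."""
    aorta = (label > 0).astype(np.uint8)          # whole aortic region
    trunk_false = (label == 2).astype(np.uint8)   # main trunk false cavity
    branch_true = (label == 3).astype(np.uint8)   # branch true cavity
    branch_false = (label == 4).astype(np.uint8)  # branch false cavity
    return np.stack([aorta, trunk_false, branch_true, branch_false])
```

Each channel then poses an independent binary foreground/background problem, which is the conversion from four-class to binary processing described above.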
The aortic dissection true and false cavity segmentation apparatus according to the embodiments of the invention has been described above from the perspective of functional modules with reference to the accompanying drawings. It should be understood that these functional modules may be implemented in hardware, by instructions in software, or by a combination of hardware and software modules. Specifically, each step of the aortic dissection true and false cavity segmentation method embodiments of the invention may be completed by an integrated logic circuit of hardware in a processor and/or by instructions in software form; the steps of the method disclosed in connection with the embodiments of the invention may be directly embodied as being executed by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. Alternatively, the software modules may be located in a storage medium well established in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory; the processor reads the information in the memory and completes the steps of the above method embodiments in combination with its hardware.
Fig. 7 is a schematic block diagram of an electronic device 700 in accordance with one embodiment of the present invention.
As shown in fig. 7, the electronic device 700 may include:
a memory 710 and a processor 720, the memory 710 being configured to store a computer program and to transfer the program code to the processor 720. In other words, the processor 720 may call and run a computer program from the memory 710 to implement the method in the embodiment of the present invention.
For example, the processor 720 may be configured to perform the above-described method embodiments according to instructions in the computer program.
In some embodiments of the invention, the processor 720 may include, but is not limited to:
a general purpose processor, digital signal processor (Digital Signal Processor, DSP), application specific integrated circuit (Application Specific Integrated Circuit, ASIC), field programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like.
In some embodiments of the invention, the memory 710 includes, but is not limited to:
volatile memory and/or nonvolatile memory. The nonvolatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable ROM (Programmable ROM, PROM), an erasable PROM (Erasable PROM, EPROM), an electrically erasable PROM (Electrically EPROM, EEPROM), or a flash memory. The volatile memory may be a random access memory (Random Access Memory, RAM), which acts as an external cache. By way of example, and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct Rambus RAM (DR RAM).
In some embodiments of the invention, the computer program may be partitioned into one or more modules that are stored in the memory 710 and executed by the processor 720 to perform the methods provided by the invention. The one or more modules may be a series of computer program instruction segments capable of performing the specified functions, the instruction segments describing the execution of the computer program in the controller.
As shown in fig. 7, the electronic device 700 may further include:
a transceiver 730, the transceiver 730 being connectable to the processor 720 or the memory 710.
The processor 720 may control the transceiver 730 to communicate with other devices; specifically, it may transmit information or data to other devices, or receive information or data transmitted by other devices. The transceiver 730 may include a transmitter and a receiver, and may further include one or more antennas.
It will be appreciated that the various components in the electronic device are connected by a bus system that includes, in addition to a data bus, a power bus, a control bus, and a status signal bus.
The present invention also provides a computer storage medium having stored thereon a computer program which, when executed by a computer, enables the computer to perform the method of the above-described method embodiments. Alternatively, an embodiment of the present invention also provides a computer program product containing instructions which, when executed by a computer, cause the computer to perform the method of the method embodiment described above.
When implemented in software, the above embodiments may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the invention are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wired (e.g., coaxial cable, optical fiber, digital subscriber line (Digital Subscriber Line, DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a digital video disc (Digital Video Disc, DVD)), or a semiconductor medium (e.g., a solid state disk (Solid State Disk, SSD)).
Those of ordinary skill in the art will appreciate that the various illustrative modules and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the several embodiments provided by the present invention, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be additional divisions when actually implemented, for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or modules, which may be in electrical, mechanical, or other forms.
The modules illustrated as separate components may or may not be physically separate, and components shown as modules may or may not be physical modules, i.e., may be located in one place, or may be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. For example, functional modules in the embodiments of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module.
The foregoing is merely a specific implementation of the present invention, and the protection scope of the present invention is not limited thereto; any change or substitution readily conceivable by a person skilled in the art within the technical scope disclosed herein shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. An aortic dissection true and false cavity segmentation method, characterized by comprising:
preprocessing an aortic CTA image to obtain an input image of a true and false cavity segmentation model, wherein the preprocessing at least comprises pre-cutting, window width and window level adjustment, and resampling;
inputting the input image into the pre-trained true and false cavity segmentation model to obtain a plurality of binary classification segmentation inference values corresponding to a plurality of true and false cavity regions of the input image, wherein the plurality of true and false cavity regions at least comprise: an aortic region, a main trunk false cavity region, a branch true cavity region and a branch false cavity region;
obtaining, by inverse mapping, a first segmentation result image corresponding to the input image based on the plurality of binary classification segmentation inference values;
and carrying out image registration on the first segmentation result image according to the aortic CTA image to obtain a second segmentation result image corresponding to the aortic CTA image.
2. The method of claim 1, wherein preprocessing the aortic CTA image to obtain an input image of a true and false lumen segmentation model comprises:
pre-cutting the aortic CTA image to obtain an interested area image containing an aortic dissection area;
calculating the minimum value and the maximum value of the window range according to the window width value and the window level value of the region-of-interest image;
performing window width and window level adjustment on each pixel value in the region-of-interest image based on the minimum value and the maximum value of the window range to obtain a first processed image;
calculating a spatial spacing average value between adjacent pixel points in the first processed image;
and resampling the first processed image based on the spatial spacing average value to obtain a second processed image, and taking the second processed image as the input image of the true and false cavity segmentation model, wherein the spatial spacing between any adjacent pixel points in the input image is equal to the spatial spacing average value.
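A minimal sketch of the window adjustment and target-spacing computation in claim 2, assuming the conventional window range [level - width/2, level + width/2] and rescaling to [0, 1]; the function names are illustrative assumptions, not the patented implementation.

```python
import numpy as np

def window_adjust(img, window_width, window_level):
    """Clip pixel values to [level - width/2, level + width/2] and
    rescale the window to [0, 1]."""
    lo = window_level - window_width / 2.0  # window range minimum
    hi = window_level + window_width / 2.0  # window range maximum
    return (np.clip(img, lo, hi) - lo) / (hi - lo)

def mean_spacing(spacings):
    """Target isotropic spacing: the average of the per-axis spacings."""
    return float(np.mean(spacings))
```

Resampling would then scale each axis by its original spacing divided by `mean_spacing(...)`, e.g. with a standard interpolation routine.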
3. The method of claim 2, wherein pre-cutting the aortic CTA image to obtain a region of interest image including an aortic dissection region comprises:
extracting a skeleton region image from the aortic CTA image based on a pixel threshold;
analyzing connected components in the skeleton region image, and selecting the connected component with the largest area as the skeleton region component;
and taking the center point of the circumscribed quadrilateral of the skeleton region component as a reference, extracting an image region according to a preset cross-section side length to obtain a region-of-interest image containing the aortic dissection region.
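The pre-cutting steps of claim 3 can be sketched as thresholding, keeping the largest connected component, and cropping around the bounding-box center. This 2D, pure-NumPy version with a BFS flood fill is an illustrative assumption, not the patented implementation; a production version would work on 3D volumes with a library labeling routine.

```python
import numpy as np
from collections import deque

def largest_component_crop(img, pixel_thr, side):
    """Threshold, keep the largest 4-connected component, and crop a
    square of the given side length around its bounding-box center."""
    fg = img > pixel_thr
    visited = np.zeros(img.shape, dtype=bool)
    best = []  # coordinates of the largest component found so far
    for y, x in zip(*np.nonzero(fg)):
        if visited[y, x]:
            continue
        # BFS flood fill collecting one connected component
        queue, coords = deque([(y, x)]), []
        visited[y, x] = True
        while queue:
            cy, cx = queue.popleft()
            coords.append((cy, cx))
            for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                        and fg[ny, nx] and not visited[ny, nx]):
                    visited[ny, nx] = True
                    queue.append((ny, nx))
        if len(coords) > len(best):
            best = coords
    ys = [c[0] for c in best]
    xs = [c[1] for c in best]
    center = ((min(ys) + max(ys)) // 2, (min(xs) + max(xs)) // 2)
    half = side // 2
    crop = img[center[0] - half:center[0] + half,
               center[1] - half:center[1] + half]
    return crop, center
```

The returned center and side length are exactly what the registration step later needs to paste the segmentation result back into the full image.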
4. The method of claim 1, further comprising a training method of the true and false lumen segmentation model:
generating a sample image configured with preset feature tags, wherein the sample image is a pre-cut image comprising an aortic region, the aortic region at least comprises a main trunk true cavity region, a main trunk false cavity region, a branch true cavity region and a branch false cavity region, and the preset feature tags at least comprise an aortic region tag, a main trunk false cavity region tag, a branch true cavity region tag and a branch false cavity region tag;
preprocessing the sample image to obtain a sample input image of the true and false cavity segmentation model;
and inputting the sample input image into the true and false cavity segmentation model and performing true and false cavity segmentation training on the model, wherein during training, the sample input image and a derivative image obtained by enhancing the sample input image are used as input features, the preset feature tags are used as training labels, and the model parameters of the true and false cavity segmentation model are iteratively updated until the accuracy of the true and false cavity segmentation model on true and false cavity segmentation is greater than a preset accuracy threshold, at which point training of the true and false cavity segmentation model is determined to be complete.
5. The method of claim 1, wherein obtaining, by inverse mapping, the first segmentation result image corresponding to the input image based on the plurality of binary classification segmentation inference values comprises:
determining mask values corresponding to the true and false cavity regions respectively based on the plurality of binary classification segmentation inference values;
and merging the mask values corresponding to the main trunk false cavity region, the branch true cavity region and the branch false cavity region into the mask values corresponding to the aortic region, to obtain a first segmentation result image of four true and false cavity categories corresponding to the aortic region, wherein the four true and false cavity categories comprise: a main trunk true cavity, a main trunk false cavity, a branch true cavity and a branch false cavity.
6. The method of claim 5, wherein determining the mask values corresponding to the respective true and false cavity regions based on the plurality of binary classification segmentation inference values comprises:
determining a first mask value of the aortic region based on a comparison of the binary classification segmentation inference value corresponding to each pixel point in the aortic region with a preset threshold;
determining a second mask value of the main trunk false cavity region based on a comparison of the binary classification segmentation inference value corresponding to each pixel point in the main trunk false cavity region with the preset threshold;
determining a third mask value of the branch true cavity region based on a comparison of the binary classification segmentation inference value corresponding to each pixel point in the branch true cavity region with the preset threshold;
determining a fourth mask value of the branch false cavity region based on a comparison of the binary classification segmentation inference value corresponding to each pixel point in the branch false cavity region with the preset threshold;
and wherein merging the mask values corresponding to the main trunk false cavity region, the branch true cavity region and the branch false cavity region into the mask values corresponding to the aortic region, to obtain the first segmentation result image of four true and false cavity categories corresponding to the aortic region, comprises:
updating the first mask value of the main trunk false cavity region in the aortic region with the second mask value, updating the first mask value of the branch true cavity region in the aortic region with the third mask value, and updating the first mask value of the branch false cavity region in the aortic region with the fourth mask value, to obtain the first segmentation result image of the four true and false cavity categories corresponding to the aortic region.
7. The method according to claim 3, wherein performing image registration on the first segmentation result image according to the aortic CTA image to obtain the second segmentation result image corresponding to the aortic CTA image comprises:
extracting image shape information of the aortic CTA image, and constructing a zero-value array corresponding to the aortic CTA image based on the image shape information;
after converting the first segmentation result image into an array, inserting the array at the corresponding position of the zero-value array according to the center point of the circumscribed quadrilateral of the skeleton region component and the preset cross-section side length calculated during the pre-cutting processing, to form a segmentation result array corresponding to the aortic CTA image;
and carrying out pixel adjustment on the image converted by the segmentation result array based on the initial image information of the aortic CTA image to obtain a second segmentation result image corresponding to the aortic CTA image, wherein the initial image information is the pixel position information of the region-of-interest image in the aortic CTA image.
8. An aortic dissection true and false lumen segmentation apparatus, comprising:
the processing module is configured to preprocess the aortic CTA image to obtain an input image of the true and false cavity segmentation model, wherein the preprocessing at least comprises pre-cutting, window width and window level adjustment, and resampling;
the input module is configured to input the input image into the pre-trained true and false cavity segmentation model to obtain a plurality of binary classification segmentation inference values corresponding to a plurality of true and false cavity regions of the input image, wherein the plurality of true and false cavity regions at least comprise: an aortic region, a main trunk false cavity region, a branch true cavity region and a branch false cavity region;
the inverse mapping module is configured to obtain, by inverse mapping, a first segmentation result image corresponding to the input image based on the plurality of binary classification segmentation inference values;
the registration module is configured to perform image registration on the first segmentation result image according to the aortic CTA image to obtain a second segmentation result image corresponding to the aortic CTA image.
9. An electronic device, comprising:
a processor and a memory for storing a computer program, the processor being for invoking and running the computer program stored in the memory to perform the method of any of claims 1-7.
10. A computer readable storage medium storing a computer program for causing a computer to perform the method of any one of claims 1-7.
CN202311466463.2A 2023-11-03 2023-11-03 Aortic dissection true and false cavity segmentation method and device, electronic equipment and storage medium Pending CN117474877A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311466463.2A CN117474877A (en) 2023-11-03 2023-11-03 Aortic dissection true and false cavity segmentation method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN117474877A true CN117474877A (en) 2024-01-30

Family

ID=89625202




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination