CN112330708A - Image processing method, image processing device, storage medium and electronic equipment - Google Patents

Image processing method, image processing device, storage medium and electronic equipment

Info

Publication number
CN112330708A
Authority
CN
China
Prior art keywords: target, image, aortic, determining, region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011331292.9A
Other languages
Chinese (zh)
Other versions
CN112330708B (en)
Inventor
尹延伟
何光宇
程万军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenyang Neusoft Intelligent Medical Technology Research Institute Co Ltd
Original Assignee
Shenyang Neusoft Intelligent Medical Technology Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenyang Neusoft Intelligent Medical Technology Research Institute Co Ltd filed Critical Shenyang Neusoft Intelligent Medical Technology Research Institute Co Ltd
Priority to CN202011331292.9A priority Critical patent/CN112330708B/en
Publication of CN112330708A publication Critical patent/CN112330708A/en
Application granted granted Critical
Publication of CN112330708B publication Critical patent/CN112330708B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/181 Segmentation; Edge detection involving edge growing; involving edge linking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/174 Segmentation; Edge detection involving the use of two or more images

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure relates to an image processing method, apparatus, storage medium, and electronic device for quickly and accurately extracting the aortic dissection membrane. The method comprises the following steps: determining a first target region corresponding to the aorta from a target CT image; performing binary segmentation on the first target region through a preset threshold segmentation algorithm to obtain a first image; determining a target edge according to the first image; judging, according to the target edge, whether the aortic dissection membrane can be successfully separated; if the aortic dissection membrane cannot be successfully separated, determining, of the two sub-regions contained in the first image, the one with the smaller gray-level mean as a region to be processed; for each pixel point in the region to be processed, determining the probability that the pixel point belongs to the aortic dissection membrane; and extracting a second target region corresponding to the aortic dissection membrane from the first target region according to the target pixel points in the region to be processed whose probability is greater than a preset probability threshold.

Description

Image processing method, image processing device, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of image processing, and in particular, to an image processing method, an image processing apparatus, a storage medium, and an electronic device.
Background
Aortic dissection refers to the condition in which blood in the aorta enters the aortic media through a tear in the aortic intima, separating the media and propagating along the long axis of the aorta so that the aortic wall is split into a true lumen and a false lumen. Generally, after a patient undergoes imaging examination, it often takes a doctor several hours or more to analyze the results; if the patient suffers from a severe aortic dissection lesion, this delay greatly increases the mortality risk. In addition, distinguishing the true lumen from the false lumen is critical for treating aortic dissection. Therefore, there is a need for a method that can examine aortic dissection in a timely manner and, at the same time, determine the true and false lumens of the dissection.
Disclosure of Invention
The purpose of the present disclosure is to provide an image processing method, an image processing apparatus, a storage medium, and an electronic device, so as to quickly and accurately implement aortic dissection membrane extraction.
In order to achieve the above object, according to a first aspect of the present disclosure, there is provided an image processing method including:
determining a first target region corresponding to an aorta from a target CT image;
performing binary segmentation on the first target region through a preset threshold segmentation algorithm to obtain a first image;
determining a target edge according to the first image;
judging whether the aortic interlayer membrane can be successfully separated or not according to the target edge;
if the aortic interlayer membrane is judged to be unsuccessfully separated, determining one of the two sub-regions contained in the first image with smaller average value of gray scale as a region to be processed, wherein if the aortic interlayer membrane is judged to be unsuccessfully separated, the first image is composed of sub-regions corresponding to a true lumen and a false lumen and a sub-region corresponding to the aortic interlayer membrane, and the region to be processed is the sub-region corresponding to the aortic interlayer membrane;
determining the probability that each pixel point belongs to the aortic interlayer membrane aiming at each pixel point in the region to be processed;
and extracting a second target region corresponding to the aortic interlayer membrane from the first target region according to the target pixel points in the region to be processed, wherein the probability of the target pixel points is greater than a preset probability threshold.
Optionally, the determining a first target region corresponding to an aorta from the target CT image comprises:
determining an aorta region in the target CT image through a preset aorta segmentation algorithm;
and carrying out filtering denoising processing and/or sharpening processing on the aorta region to obtain the first target region.
Optionally, the determining a target edge according to the first image includes:
performing dilation and erosion processing on the first image to obtain a second image;
determining an initial edge from the second image by using an edge detection algorithm;
and obtaining the target edge according to the initial edge through a preset edge connection algorithm.
Optionally, the determining whether the aortic dissection membrane can be successfully separated according to the target edge includes:
determining whether the target edge can divide the first target area into two connected areas according to a preset connected area marking algorithm;
if the target edge can divide the first target area into two connected areas, determining the difference value of the gray level mean values corresponding to the two divided connected areas;
if the difference is larger than a preset difference threshold value, judging that the aortic interlayer membrane can be successfully separated;
if the target edge cannot divide the first target region into two connected regions, or if the difference is less than or equal to the difference threshold, it is determined that the aortic dissection membrane cannot be successfully separated.
Optionally, the target CT image is taken from a sequence of CT images;
the determining the probability that the pixel point belongs to the aortic dissection membrane comprises:
determining a three-dimensional Hessian matrix corresponding to the pixel points according to the target CT image and two CT images adjacent to the target CT image in the CT image sequence;
determining a three-dimensional Hessian matrix characteristic value corresponding to the pixel point according to the three-dimensional Hessian matrix;
and determining the probability that the pixel points belong to the aortic interlayer membrane according to the characteristic value of the three-dimensional Hessian matrix.
Optionally, the eigenvalues of the three-dimensional Hessian matrix include a first eigenvalue, a second eigenvalue, and a third eigenvalue;
the determining the probability that the pixel point belongs to the aortic sandwich membrane according to the characteristic value of the three-dimensional Hessian matrix comprises the following steps:
determining the probability S (P) that the pixel point P belongs to the aortic interlayer membrane according to the following formula:
Figure BDA0002795901300000031
wherein λ is1、λ2、λ3Is the first, second and third eigenvalues, respectively, and λ1|<|λ2|<|λ3L, omega is the set of the aortic dissection membrane thicknesses, sigma is one of the thicknesses contained in omega, S (sigma) is the probability that the pixel point P belongs to the aortic dissection membrane under the sigma thickness, alpha is a first preset parameter value, beta is a second preset parameter value, c is a third preset parameter value, R issheetIs the probability that a pixel P belongs to a plane, RblobIs the probability that a pixel P belongs to a block, RnoiseIs the probability that the pixel point P belongs to the noise point, and
Figure BDA0002795901300000041
optionally, the method further comprises:
and if the aortic interlayer membrane can be successfully separated, extracting a second target region corresponding to the aortic interlayer membrane from the first target region according to the target edge.
According to a second aspect of the present disclosure, there is provided an image processing apparatus, the apparatus comprising:
a first determination module for determining a first target region corresponding to an aorta from a target CT image;
the segmentation module is used for performing binary segmentation on the first target region through a preset threshold segmentation algorithm to obtain a first image;
the second determining module is used for determining the target edge according to the first image;
the judging module is used for judging whether the aortic interlayer membrane can be successfully separated or not according to the target edge;
a third determining module, configured to determine, if it is determined that the aortic dissection membrane cannot be successfully separated, one of the two sub-regions included in the first image that has a smaller mean value of gray levels as a region to be processed, wherein, if it is determined that the aortic dissection membrane cannot be successfully separated, the first image is composed of a sub-region corresponding to a true lumen and a false lumen and a sub-region corresponding to the aortic dissection membrane, and the region to be processed is the sub-region corresponding to the aortic dissection membrane;
a fourth determining module, configured to determine, for each pixel point in the region to be processed, a probability that the pixel point belongs to an aortic interlayer membrane;
and the first extraction module is used for extracting a second target region corresponding to the aortic interlayer membrane from the first target region according to the target pixel points in the region to be processed, wherein the probability of the target pixel points is greater than a preset probability threshold.
Optionally, the first determining module includes:
the first determining submodule is used for determining an aorta region in the target CT image through a preset aorta segmentation algorithm;
and the first processing sub-module is used for carrying out filtering denoising processing and/or sharpening processing on the aorta region so as to obtain the first target region.
Optionally, the second determining module includes:
the second processing submodule is used for performing dilation and erosion processing on the first image to obtain a second image;
a second determining submodule, configured to determine an initial edge from the second image by using an edge detection algorithm;
and the third processing submodule is used for obtaining the target edge according to the initial edge through a preset edge connection algorithm.
Optionally, the determining module includes:
a third determining submodule, configured to determine whether the target edge can divide the first target region into two connected regions according to a preset connected region labeling algorithm;
a fourth determining submodule, configured to determine, if the target edge can divide the first target region into two connected regions, a difference between grayscale mean values corresponding to the two divided connected regions;
the first judging submodule is used for judging that the aortic dissection membrane can be successfully separated if the difference is larger than a preset difference threshold;
and the second judging submodule is used for judging that the aortic dissection membrane cannot be successfully separated if the target edge cannot divide the first target region into two connected regions or if the difference is smaller than or equal to the difference threshold.
Optionally, the target CT image is taken from a sequence of CT images;
the fourth determining module includes:
a fifth determining submodule, configured to determine a three-dimensional Hessian matrix corresponding to the pixel point according to the target CT image and two CT images adjacent to the target CT image in the CT image sequence;
a sixth determining submodule, configured to determine, according to the three-dimensional Hessian matrix, a three-dimensional Hessian matrix eigenvalue corresponding to the pixel point;
and the seventh determining submodule is used for determining the probability that the pixel point belongs to the aortic dissection membrane according to the eigenvalues of the three-dimensional Hessian matrix.
Optionally, the eigenvalues of the three-dimensional Hessian matrix include a first eigenvalue, a second eigenvalue, and a third eigenvalue;
the seventh determining submodule is used for determining the probability S(P) that the pixel point P belongs to the aortic dissection membrane according to the following formula:
[Formula given as image BDA0002795901300000061 in the original: expression for S(P) in terms of S(σ) over the thickness set Ω]
wherein λ1, λ2 and λ3 are the first, second and third eigenvalues, respectively, with |λ1| < |λ2| < |λ3|; Ω is the set of aortic dissection membrane thicknesses; σ is one of the thicknesses contained in Ω; S(σ) is the probability that the pixel point P belongs to the aortic dissection membrane at thickness σ; α is a first preset parameter value; β is a second preset parameter value; c is a third preset parameter value; R_sheet is the probability that the pixel point P belongs to a plane; R_blob is the probability that the pixel point P belongs to a block; and R_noise is the probability that the pixel point P belongs to a noise point, with
[Formula given as image BDA0002795901300000062 in the original: definitions of S(σ), R_sheet, R_blob and R_noise in terms of λ1, λ2, λ3 and the parameters α, β and c]
optionally, the apparatus further comprises:
and the second extraction module is used for extracting a second target region corresponding to the aortic interlayer membrane from the first target region according to the target edge if the aortic interlayer membrane can be successfully separated.
According to a third aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method of the first aspect of the present disclosure.
According to a fourth aspect of the present disclosure, there is provided an electronic device comprising:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to implement the steps of the method of the first aspect of the disclosure.
According to the above technical solution, a first target region corresponding to the aorta is determined from a target CT image; binary segmentation is performed on the first target region through a preset threshold segmentation algorithm to obtain a first image; a target edge is determined according to the first image; and whether the aortic dissection membrane can be successfully separated is judged according to the target edge. If it is judged that the aortic dissection membrane cannot be successfully separated, the one of the two sub-regions contained in the first image with the smaller gray-level mean is determined as the region to be processed, the probability of belonging to the aortic dissection membrane is determined for each pixel point in that region, and a second target region corresponding to the aortic dissection membrane is then extracted from the first target region according to the target pixel points whose probability is greater than a preset probability threshold. In this way, the aortic dissection membrane is first extracted with the threshold segmentation algorithm, exploiting the distribution characteristics of the contrast agent during imaging. If this initial extraction fails, the sub-region containing the aortic dissection membrane and its nearby pixels is used, and the membrane is further extracted by analyzing the eigenvalues of the three-dimensional Hessian matrix. Because the eigenvalue analysis is applied only to this small region, misclassification by the Hessian eigenvalue analysis can be reduced and accuracy improved; at the same time, because the processed area is small, the data processing speed can also be improved.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure without limiting the disclosure. In the drawings:
FIG. 1 is a flow diagram of an image processing method provided in accordance with one embodiment of the present disclosure;
FIG. 2 is a flow diagram of an image processing method provided in accordance with another embodiment of the present disclosure;
FIG. 3 is a block diagram of an image processing apparatus provided in accordance with one embodiment of the present disclosure;
FIG. 4 is a block diagram illustrating an electronic device in accordance with an example embodiment.
Detailed Description
The following detailed description of specific embodiments of the present disclosure is provided in connection with the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating the present disclosure, are given by way of illustration and explanation only, not limitation.
Fig. 1 is a flowchart of an image processing method provided according to an embodiment of the present disclosure. As shown in fig. 1, the method may include the steps of:
in step 11, a first target region corresponding to an aorta is determined from a target CT image;
in step 12, performing binary segmentation on the first target region through a preset threshold segmentation algorithm to obtain a first image;
in step 13, determining a target edge according to the first image;
in step 14, judging whether the aortic interlayer membrane can be successfully separated according to the target edge;
in step 15, if it is determined that the aortic dissection membrane cannot be successfully separated, determining, of the two sub-regions included in the first image, the one with the smaller gray-level mean as a region to be processed;
in step 16, determining the probability that each pixel point belongs to the aortic interlayer membrane for each pixel point in the region to be processed;
in step 17, according to the target pixel points in the region to be processed, the probability of which is greater than the preset probability threshold, a second target region corresponding to the aortic interlayer membrane is extracted from the first target region.
The target CT image is taken from an aortic CTA examination, where CTA refers to CT (Computed Tomography) angiography. In aortic CTA, a contrast agent is generally injected into the patient through a peripheral vein to visualize the aorta under a CT machine, and a CT image sequence of the aorta is then acquired to reconstruct the three-dimensional structure of the aorta. Whichever image in this CT image sequence needs to be processed can be taken as the target CT image, on which the series of steps of the method provided by the present disclosure is performed.
In step 11, a first target region corresponding to the aorta is determined from the target CT image.
In one possible embodiment, step 11 may comprise the steps of:
an aorta region in the target CT image is determined and is taken as the first target region.
The aorta segmentation algorithm for determining the aorta region in the target CT image may adopt any method in the art that can realize aorta extraction. Illustratively, the aorta region can be extracted by a method based on Hough transform deformable surface model initialization.
In another possible embodiment, step 11 may include the steps of:
determining an aorta region in the target CT image through a preset aorta segmentation algorithm;
and carrying out filtering denoising processing and/or sharpening processing on the aorta region to obtain a first target region.
The manner of determining the aorta region in the target CT image is consistent with that described above, and is not described herein again.
After the aorta region in the target CT image is determined, it may be further optimized, for example, the aorta region is subjected to filtering denoising and/or sharpening to obtain a better first target region. The filtering and denoising processing is carried out on the aorta region, so that the noise in the aorta region can be effectively removed, and the influence of excessive noise on the subsequent image processing effect is avoided. The edge information can be effectively enhanced by sharpening the aorta region, so that the subsequent extraction is facilitated. The filtering denoising process and the sharpening process can adopt a processing mode commonly used in the field.
For example, the filtering and denoising of the aorta region may use median filtering. Median filtering is a typical nonlinear filtering technique based on order statistics; it effectively suppresses noise and removes impulse and salt-and-pepper noise while retaining the edge details of the image. When median filtering is applied to the aorta region, the gray value of each pixel point is replaced by the median of the gray values in its neighborhood, so that pixel values stay close to their true values and isolated noise points are eliminated. In this way, noise can be effectively suppressed while the edge information of the aortic dissection membrane is retained.
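As an illustration only, this preprocessing can be sketched as follows. The sketch assumes the aorta region is available as an 8-bit grayscale NumPy array and uses OpenCV; the 3x3 kernel size and the sharpening kernel are illustrative choices, not values specified in this disclosure.

```python
import cv2
import numpy as np

def preprocess_aorta_region(aorta_region: np.ndarray) -> np.ndarray:
    """Median-filter the aorta region to suppress isolated noise points, then
    apply a simple sharpening step to enhance edge information."""
    # Median filtering: each pixel is replaced by the median of its 3x3 neighborhood,
    # which removes salt-and-pepper noise while preserving edges.
    denoised = cv2.medianBlur(aorta_region, 3)
    # Illustrative sharpening kernel: emphasizes the center pixel against its neighbors.
    kernel = np.array([[0, -1, 0],
                       [-1, 5, -1],
                       [0, -1, 0]], dtype=np.float32)
    return cv2.filter2D(denoised, -1, kernel)
```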
In step 12, a preset threshold segmentation algorithm is used to perform binary segmentation on the first target region to obtain a first image.
After the first target area is determined, binary segmentation can be performed on the first target area through a preset threshold segmentation algorithm to obtain a first image. The threshold segmentation algorithm may adopt any algorithm capable of implementing threshold segmentation in the field. For example, the maximum inter-class variance method may be used as the threshold segmentation algorithm.
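For illustration, the maximum inter-class variance (Otsu) segmentation can be written in a few lines. This sketch assumes an 8-bit grayscale first target region and uses OpenCV as the implementation; neither choice is mandated by the disclosure.

```python
import cv2
import numpy as np

def binary_segment(first_target_region: np.ndarray) -> np.ndarray:
    """Binary segmentation with Otsu's maximum inter-class variance method.
    Returns a 0/255 mask corresponding to the 'first image'."""
    _, first_image = cv2.threshold(
        first_target_region, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return first_image
```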
In step 13, a target edge is determined from the first image.
In one possible embodiment, step 13 may include the steps of:
performing dilation and erosion processing on the first image to obtain a second image;
determining an initial edge from the second image by using an edge detection algorithm;
and obtaining the target edge according to the initial edge through a preset edge connection algorithm.
The dilation and erosion processing, the edge detection algorithm, and the edge connection algorithm can all adopt approaches conventional in the field.
The dilation and erosion processing applied to the first image smooths the boundary between the true lumen and the false lumen, namely the aortic dissection membrane. Thereafter, an initial edge, which corresponds to the aortic dissection membrane, is determined from the second image using an edge detection algorithm. Illustratively, the edge detection algorithm may be the Roberts operator. After the initial edge is extracted, a connected target edge is obtained through an edge connection algorithm, so that the aortic dissection membrane is more complete and the true lumen and the false lumen each form a closed region.
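A minimal sketch of this step is given below. The Roberts kernels follow their standard definition; using a morphological closing both for the dilation/erosion smoothing and as a stand-in for the unspecified edge connection algorithm is an assumption, and the structuring element size is illustrative.

```python
import cv2
import numpy as np

def extract_target_edge(first_image: np.ndarray) -> np.ndarray:
    """Smooth the binary first image with dilation and erosion, detect an initial
    edge with the Roberts operator, then link small gaps so that the true and
    false lumens become closed regions."""
    kernel = np.ones((3, 3), np.uint8)
    # Dilation followed by erosion (a closing) smooths the lumen boundary.
    second_image = cv2.morphologyEx(first_image, cv2.MORPH_CLOSE, kernel)

    # Roberts cross kernels applied to the smoothed image.
    rx = np.array([[1, 0], [0, -1]], dtype=np.float32)
    ry = np.array([[0, 1], [-1, 0]], dtype=np.float32)
    gx = cv2.filter2D(second_image.astype(np.float32), -1, rx)
    gy = cv2.filter2D(second_image.astype(np.float32), -1, ry)
    initial_edge = ((np.abs(gx) + np.abs(gy)) > 0).astype(np.uint8) * 255

    # Stand-in for the edge connection algorithm: close small gaps along the edge.
    target_edge = cv2.morphologyEx(initial_edge, cv2.MORPH_CLOSE, kernel)
    return target_edge
```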
In step 14, whether the aortic dissection membrane can be successfully separated is judged according to the target edge.
Because the contrast agent is unevenly distributed between the true lumen and the false lumen of the aorta, the pixels of the true lumen and the false lumen have different gray values in the target CT image. The threshold segmentation algorithm can therefore divide the true lumen and the false lumen into two different regions whose gray values differ greatly. Based on this characteristic, it can be judged whether the aortic dissection membrane can be successfully obtained directly from the result of the threshold segmentation algorithm.
in one possible embodiment, step 14 may include the steps of:
determining whether the target edge can divide the first target area into two connected areas or not according to a preset connected area marking algorithm;
if the target edge can divide the first target area into two connected areas, determining the difference value of the gray level mean values corresponding to the two divided connected areas;
if the difference value is larger than a preset difference value threshold value, judging that the aortic interlayer membrane can be successfully separated;
if the target edge cannot divide the first target region into two connected regions, or if the difference is less than or equal to the difference threshold, it is determined that the aortic dissection membrane cannot be successfully separated.
The connected region labeling algorithm, which can determine whether two regions are connected, may adopt any method commonly used in the field. After the target edge is determined, it divides the first target region into two regions, and whether these two regions are connected can then be determined with the connected region labeling algorithm.
If the target edge can divide the first target region into two connected regions, the next determination can be performed, namely determining the difference between the gray-level means of the two connected regions. The gray-level mean of each connected region is computed from the gray values of the pixel points it contains, and the (absolute) difference between the two means is then obtained.
If the difference is greater than the preset difference threshold, it indicates that, because the contrast agent is unevenly distributed between the true lumen and the false lumen, their gray values in the target CT image differ; it can therefore be judged that the aortic dissection membrane can be successfully separated.
If the target edge cannot divide the first target region into two connected regions, or if the difference is less than or equal to the difference threshold, the two regions cannot be directly separated, so it can be judged that the aortic dissection membrane cannot be successfully separated.
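The judgment described above can be sketched as follows, with scipy.ndimage used for the connected-region labeling. The construction of the aorta mask and the value of the difference threshold are assumptions made for illustration rather than values fixed by this disclosure.

```python
import numpy as np
from scipy import ndimage

def can_separate_membrane(first_target_region: np.ndarray,
                          aorta_mask: np.ndarray,
                          target_edge: np.ndarray,
                          diff_threshold: float = 30.0) -> bool:
    """Return True if the target edge splits the aorta into exactly two connected
    regions whose gray-level means differ by more than diff_threshold."""
    # Remove the edge pixels so the edge can act as a separator between regions.
    interior = aorta_mask & (target_edge == 0)
    labels, num_regions = ndimage.label(interior)
    if num_regions != 2:
        return False
    mean_1 = first_target_region[labels == 1].mean()
    mean_2 = first_target_region[labels == 2].mean()
    return abs(mean_1 - mean_2) > diff_threshold
```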
In some cases, due to factors such as the waiting time, the contrast agent is distributed almost uniformly between the true lumen and the false lumen, so that the gray values of the true-lumen and false-lumen pixels in the CT image are very close. The aortic dissection membrane then cannot be accurately extracted through the above steps, i.e., it is determined in step 14 that the aortic dissection membrane cannot be successfully separated. In this case, the aortic dissection membrane can be further processed through steps 15 to 17, described in detail below.
First, in step 15, if it is determined that the aortic dissection membrane cannot be successfully separated, the one of the two sub-regions included in the first image with the smaller gray-level mean is determined as the region to be processed.
The region to be processed is the sub-region corresponding to the aortic dissection membrane.
When it is judged that the aortic dissection membrane cannot be successfully separated, the threshold segmentation and related processing group the aortic true lumen and false lumen into one region, and the aortic dissection membrane and its nearby pixels into another region. That is, if it is determined that the aortic dissection membrane cannot be successfully separated, the first image is composed of a sub-region corresponding to the true lumen and the false lumen and a sub-region corresponding to the aortic dissection membrane.
Therefore, for further processing, the sub-region corresponding to the aortic dissection membrane should be selected. Typically, the gray values of the pixels corresponding to the true and false lumens are higher than those of the pixels corresponding to the aortic dissection membrane. Based on this characteristic, the gray-level means of the two sub-regions included in the first image can be determined separately, and the sub-region with the smaller gray-level mean is taken as the region to be processed for subsequent processing.
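A short sketch of this selection is shown below; it assumes the first image is the 0/255 binary mask from the threshold segmentation and that a Boolean aorta mask marks the first target region, both layout assumptions rather than requirements of the disclosure.

```python
import numpy as np

def select_region_to_process(first_target_region: np.ndarray,
                             first_image: np.ndarray,
                             aorta_mask: np.ndarray) -> np.ndarray:
    """Of the two sub-regions of the binary first image inside the aorta, return a
    mask of the one with the smaller gray-level mean, taken to be the sub-region
    containing the dissection membrane and its nearby pixels."""
    sub_a = aorta_mask & (first_image > 0)
    sub_b = aorta_mask & (first_image == 0)
    mean_a = first_target_region[sub_a].mean()
    mean_b = first_target_region[sub_b].mean()
    return sub_a if mean_a < mean_b else sub_b
```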
Then, step 16 may be executed to determine, for each pixel point in the region to be processed, a probability that the pixel point belongs to the aortic interlayer membrane.
By way of example, determining the probability that a pixel belongs to the aortic dissection membrane may include the following steps:
determining a three-dimensional Hessian matrix corresponding to a pixel point according to a target CT image and two CT images adjacent to the target CT image in a CT image sequence;
determining a three-dimensional Hessian matrix characteristic value corresponding to a pixel point according to the three-dimensional Hessian matrix;
and determining the probability that the pixel points belong to the aortic interlayer membrane according to the characteristic value of the three-dimensional Hessian matrix.
As mentioned above, the target CT image is taken from the CT image sequence, i.e. from the three-dimensional image, and therefore, in order to obtain more abundant information, two images adjacent to the target CT image, i.e. the previous CT image and the next CT image of the target CT image in the CT image sequence, can be selected from the CT image sequence. From such three images, a three-dimensional Hessian matrix (pixel matrix) corresponding to the pixel points can be determined.
Then, the eigenvalues of the three-dimensional Hessian matrix corresponding to the pixel point are determined from that matrix. The computation of eigenvalues of a three-dimensional Hessian matrix is well established and is not described in detail here. The three-dimensional Hessian matrix has three eigenvalues.
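A compact sketch of this computation is given below. It stacks the previous, target, and next CT slices into a small volume, smooths with a Gaussian whose scale corresponds to the candidate membrane thickness σ (a common convention assumed here, not stated in the disclosure), builds the Hessian from numerical second derivatives, and sorts the eigenvalues by absolute value.

```python
import numpy as np
from scipy import ndimage

def hessian_eigenvalues(volume: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """Per-voxel eigenvalues of the 3x3 Hessian of a 3-slice stack (previous,
    target, next CT image), sorted so that |lambda1| <= |lambda2| <= |lambda3|.
    Returns an array of shape volume.shape + (3,)."""
    smoothed = ndimage.gaussian_filter(volume.astype(np.float64), sigma)
    grads = np.gradient(smoothed)                 # first derivatives along z, y, x
    hessian = np.empty(volume.shape + (3, 3))
    for i, grad_i in enumerate(grads):
        second = np.gradient(grad_i)              # second derivatives
        for j in range(3):
            hessian[..., i, j] = second[j]
    eigvals = np.linalg.eigvalsh(hessian)         # Hessian is (numerically) symmetric
    order = np.argsort(np.abs(eigvals), axis=-1)
    return np.take_along_axis(eigvals, order, axis=-1)
```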
And then, determining the probability that the pixel points belong to the aortic interlayer membrane according to the characteristic value of the three-dimensional Hessian matrix.
By way of example, the probability S(P) that a pixel point P belongs to the aortic dissection membrane can be determined according to the following formula:
[Formula given as image BDA0002795901300000131 in the original: expression for S(P) in terms of S(σ) over the thickness set Ω]
wherein λ1, λ2 and λ3 are the first, second and third eigenvalues, respectively, with |λ1| < |λ2| < |λ3|; Ω is the set of aortic dissection membrane thicknesses (for example, 1 mm and 2 mm); σ is one of the thicknesses contained in Ω (for example, σ may be selected manually or set based on empirical values); S(σ) is the probability that the pixel point P belongs to the aortic dissection membrane at thickness σ; α is a first preset parameter value (for example, 0.5); β is a second preset parameter value (for example, 0.5); c is a third preset parameter value (for example, 1/2 of the maximum norm of the three-dimensional Hessian matrix); R_sheet is the probability that the pixel point P belongs to a plane; R_blob is the probability that the pixel point P belongs to a block; and R_noise is the probability that the pixel point P belongs to a noise point, with
[Formulas given as images BDA0002795901300000132 and BDA0002795901300000133 in the original: definitions of S(σ), R_sheet, R_blob and R_noise in terms of λ1, λ2, λ3 and the parameters α, β and c]
thus, the probability that each pixel point in the region to be processed belongs to the aortic sandwich membrane can be determined.
In step 17, according to the target pixel points in the region to be processed, the probability of which is greater than the preset probability threshold, a second target region corresponding to the aortic interlayer membrane is extracted from the first target region.
The preset probability threshold may be an empirical value.
According to the above method, a first target region corresponding to the aorta is determined from a target CT image; binary segmentation is performed on the first target region through a preset threshold segmentation algorithm to obtain a first image; a target edge is determined according to the first image; and whether the aortic dissection membrane can be successfully separated is judged according to the target edge. If it is judged that the aortic dissection membrane cannot be successfully separated, the one of the two sub-regions contained in the first image with the smaller gray-level mean is determined as the region to be processed, the probability of belonging to the aortic dissection membrane is determined for each pixel point in that region, and a second target region corresponding to the aortic dissection membrane is then extracted from the first target region according to the target pixel points whose probability is greater than a preset probability threshold. In this way, the aortic dissection membrane is first extracted with the threshold segmentation algorithm, exploiting the distribution characteristics of the contrast agent during imaging. If this initial extraction fails, the sub-region containing the aortic dissection membrane and its nearby pixels is used, and the membrane is further extracted by analyzing the eigenvalues of the three-dimensional Hessian matrix. Because the eigenvalue analysis is applied only to this small region, misclassification by the Hessian eigenvalue analysis can be reduced and accuracy improved; at the same time, because the processed area is small, the data processing speed can also be improved.
Fig. 2 is a flowchart of an image processing method provided according to another embodiment of the present disclosure. As shown in fig. 2, on the basis of the steps shown in fig. 1, the method provided by the present disclosure may further include the following steps:
in step 21, if it is determined that the aortic dissection membranes can be successfully separated, a second target region corresponding to the aortic dissection membranes is extracted from the first target region according to the target edge.
If it is judged that the aortic dissection membrane can be successfully separated, the target edge already separates the true lumen from the false lumen well, so the second target region corresponding to the aortic dissection membrane can be extracted from the first target region according to the target edge. Generally, the second target region is the part of the first target region corresponding to the target edge.
Further, the two connected regions correspond to the true lumen and the false lumen respectively, and they are separated by the second target region.
Through the above technical solution, the threshold segmentation algorithm is used, in combination with the distribution characteristics of the contrast agent in the image during imaging, to extract the aortic dissection membrane directly. The speed of the threshold segmentation algorithm allows the aortic dissection membrane to be extracted quickly and accurately, and based on the extracted aortic dissection membrane, the true lumen and the false lumen can be further distinguished to improve the recognition result.
Fig. 3 is a block diagram of an image processing apparatus provided according to an embodiment of the present disclosure. As shown in fig. 3, the apparatus 30 includes:
a first determining module 31 for determining a first target region corresponding to the aorta from the target CT image;
the segmentation module 32 is configured to perform binary segmentation on the first target region through a preset threshold segmentation algorithm to obtain a first image;
a second determining module 33, configured to determine a target edge according to the first image;
a judging module 34, configured to judge whether the aortic interlayer membrane can be successfully separated according to the target edge;
a third determining module 35, configured to determine, if it is determined that the aortic interlayer membrane cannot be successfully separated, one of the two sub-regions included in the first image that has a smaller mean value of gray levels as a region to be processed, where, if it is determined that the aortic interlayer membrane cannot be successfully separated, the first image is composed of a sub-region corresponding to a true lumen and a false lumen and a sub-region corresponding to the aortic interlayer membrane, and the region to be processed is the sub-region corresponding to the aortic interlayer membrane;
a fourth determining module 36, configured to determine, for each pixel point in the region to be processed, a probability that the pixel point belongs to an aortic interlayer membrane;
the first extraction module 37 is configured to extract a second target region corresponding to the aortic dissection membrane from the first target region according to the target pixel point in the to-be-processed region, where the probability is greater than a preset probability threshold.
Optionally, the first determining module 31 includes:
the first determining submodule is used for determining an aorta region in the target CT image through a preset aorta segmentation algorithm;
and the first processing sub-module is used for carrying out filtering denoising processing and/or sharpening processing on the aorta region so as to obtain the first target region.
Optionally, the second determining module 33 includes:
the second processing submodule is used for performing dilation and erosion processing on the first image to obtain a second image;
a second determining submodule, configured to determine an initial edge from the second image by using an edge detection algorithm;
and the third processing submodule is used for obtaining the target edge according to the initial edge through a preset edge connection algorithm.
Optionally, the determining module 34 includes:
a third determining submodule, configured to determine whether the target edge can divide the first target region into two connected regions according to a preset connected region labeling algorithm;
a fourth determining submodule, configured to determine, if the target edge can divide the first target region into two connected regions, a difference between grayscale mean values corresponding to the two divided connected regions;
the first judging submodule is used for judging that the aortic dissection membrane can be successfully separated if the difference is larger than a preset difference threshold;
and the second judging submodule is used for judging that the aortic dissection membrane cannot be successfully separated if the target edge cannot divide the first target region into two connected regions or if the difference is smaller than or equal to the difference threshold.
Optionally, the target CT image is taken from a sequence of CT images;
the fourth determination module 36 includes:
a fifth determining submodule, configured to determine a three-dimensional Hessian matrix corresponding to the pixel point according to the target CT image and two CT images adjacent to the target CT image in the CT image sequence;
a sixth determining submodule, configured to determine, according to the three-dimensional Hessian matrix, a three-dimensional Hessian matrix eigenvalue corresponding to the pixel point;
and the seventh determining submodule is used for determining the probability that the pixel point belongs to the aortic dissection membrane according to the eigenvalues of the three-dimensional Hessian matrix.
Optionally, the eigenvalues of the three-dimensional Hessian matrix include a first eigenvalue, a second eigenvalue, and a third eigenvalue;
the seventh determining submodule is used for determining the probability S(P) that the pixel point P belongs to the aortic dissection membrane according to the following formula:
[Formula given as image BDA0002795901300000171 in the original: expression for S(P) in terms of S(σ) over the thickness set Ω]
wherein λ1, λ2 and λ3 are the first, second and third eigenvalues, respectively, with |λ1| < |λ2| < |λ3|; Ω is the set of aortic dissection membrane thicknesses; σ is one of the thicknesses contained in Ω; S(σ) is the probability that the pixel point P belongs to the aortic dissection membrane at thickness σ; α is a first preset parameter value; β is a second preset parameter value; c is a third preset parameter value; R_sheet is the probability that the pixel point P belongs to a plane; R_blob is the probability that the pixel point P belongs to a block; and R_noise is the probability that the pixel point P belongs to a noise point, with
[Formula given as image BDA0002795901300000172 in the original: definitions of S(σ), R_sheet, R_blob and R_noise in terms of λ1, λ2, λ3 and the parameters α, β and c]
optionally, the apparatus 30 further comprises:
and the second extraction module is used for extracting a second target region corresponding to the aortic interlayer membrane from the first target region according to the target edge if the aortic interlayer membrane can be successfully separated.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 4 is a block diagram illustrating an electronic device 1900 according to an example embodiment. For example, the electronic device 1900 may be provided as a server. Referring to fig. 4, an electronic device 1900 includes a processor 1922, which may be one or more in number, and a memory 1932 for storing computer programs executable by the processor 1922. The computer program stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processor 1922 may be configured to execute the computer program to perform the image processing method described above.
Additionally, electronic device 1900 may also include a power component 1926 and a communication component 1950. The power component 1926 may be configured to perform power management of the electronic device 1900, and the communication component 1950 may be configured to enable communication, e.g., wired or wireless communication, of the electronic device 1900. In addition, the electronic device 1900 may also include input/output (I/O) interfaces 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, and so on.
In another exemplary embodiment, there is also provided a computer readable storage medium comprising program instructions which, when executed by a processor, implement the steps of the image processing method described above. For example, the computer readable storage medium may be the memory 1932 described above that includes program instructions that are executable by the processor 1922 of the electronic device 1900 to perform the image processing method described above.
In another exemplary embodiment, a computer program product is also provided, which comprises a computer program executable by a programmable apparatus, the computer program having code portions for performing the image processing method described above when executed by the programmable apparatus.
The preferred embodiments of the present disclosure are described in detail with reference to the accompanying drawings, however, the present disclosure is not limited to the specific details of the above embodiments, and various simple modifications may be made to the technical solution of the present disclosure within the technical idea of the present disclosure, and these simple modifications all belong to the protection scope of the present disclosure.
It should be noted that the various features described in the above embodiments may be combined in any suitable manner without departing from the scope of the invention. In order to avoid unnecessary repetition, various possible combinations will not be separately described in this disclosure.
In addition, any combination of various embodiments of the present disclosure may be made, and the same should be considered as the disclosure of the present disclosure, as long as it does not depart from the spirit of the present disclosure.

Claims (10)

1. An image processing method, characterized in that the method comprises:
determining a first target region corresponding to an aorta from a target CT image;
performing binary segmentation on the first target region through a preset threshold segmentation algorithm to obtain a first image;
determining a target edge according to the first image;
judging whether the aortic interlayer membrane can be successfully separated or not according to the target edge;
if the aortic interlayer membrane is judged to be unsuccessfully separated, determining one of the two sub-regions contained in the first image with smaller average value of gray scale as a region to be processed, wherein if the aortic interlayer membrane is judged to be unsuccessfully separated, the first image is composed of sub-regions corresponding to a true lumen and a false lumen and a sub-region corresponding to the aortic interlayer membrane, and the region to be processed is the sub-region corresponding to the aortic interlayer membrane;
determining the probability that each pixel point belongs to the aortic interlayer membrane aiming at each pixel point in the region to be processed;
and extracting a second target region corresponding to the aortic interlayer membrane from the first target region according to the target pixel points in the region to be processed, wherein the probability of the target pixel points is greater than a preset probability threshold.
2. The method of claim 1, wherein determining a first target region corresponding to an aorta from a target CT image comprises:
determining an aorta region in the target CT image through a preset aorta segmentation algorithm;
and carrying out filtering denoising processing and/or sharpening processing on the aorta region to obtain the first target region.
3. The method of claim 1, wherein determining a target edge from the first image comprises:
performing dilation and erosion processing on the first image to obtain a second image;
determining an initial edge from the second image by using an edge detection algorithm;
and obtaining the target edge according to the initial edge through a preset edge connection algorithm.
4. The method according to claim 1, wherein said determining whether the aortic dissection membrane can be successfully separated according to the target edge comprises:
determining whether the target edge can divide the first target area into two connected areas according to a preset connected area marking algorithm;
if the target edge can divide the first target area into two connected areas, determining the difference value of the gray level mean values corresponding to the two divided connected areas;
if the difference is larger than a preset difference threshold value, judging that the aortic interlayer membrane can be successfully separated;
if the target edge cannot divide the first target region into two connected regions, or if the difference is less than or equal to the difference threshold, it is determined that the aortic dissection membrane cannot be successfully separated.
5. The method of claim 1, wherein the target CT image is taken from a sequence of CT images;
the determining the probability that the pixel point belongs to the aortic sandwich membrane comprises:
determining a three-dimensional Hessian matrix corresponding to the pixel points according to the target CT image and two CT images adjacent to the target CT image in the CT image sequence;
determining a three-dimensional Hessian matrix characteristic value corresponding to the pixel point according to the three-dimensional Hessian matrix;
and determining the probability that the pixel points belong to the aortic interlayer membrane according to the characteristic value of the three-dimensional Hessian matrix.
6. The method of claim 5, wherein the three-dimensional Hessian matrix eigenvalues comprise a first eigenvalue, a second eigenvalue, and a third eigenvalue;
the determining the probability that the pixel point belongs to the aortic dissection membrane according to the eigenvalues of the three-dimensional Hessian matrix comprises the following steps:
determining the probability S(P) that the pixel point P belongs to the aortic dissection membrane according to the following formula:
[Formula given as image FDA0002795901290000031 in the original: expression for S(P) in terms of S(σ) over the thickness set Ω]
wherein λ1, λ2 and λ3 are the first, second and third eigenvalues, respectively, with |λ1| < |λ2| < |λ3|; Ω is the set of aortic dissection membrane thicknesses; σ is one of the thicknesses contained in Ω; S(σ) is the probability that the pixel point P belongs to the aortic dissection membrane at thickness σ; α is a first preset parameter value; β is a second preset parameter value; c is a third preset parameter value; R_sheet is the probability that the pixel point P belongs to a plane; R_blob is the probability that the pixel point P belongs to a block; and R_noise is the probability that the pixel point P belongs to a noise point, with
[Formula given as image FDA0002795901290000032 in the original: definitions of S(σ), R_sheet, R_blob and R_noise in terms of λ1, λ2, λ3 and the parameters α, β and c]
7. the method of claim 1, further comprising:
and if the aortic interlayer membrane can be successfully separated, extracting a second target region corresponding to the aortic interlayer membrane from the first target region according to the target edge.
8. An image processing apparatus, characterized in that the apparatus comprises:
a first determination module for determining a first target region corresponding to an aorta from a target CT image;
the segmentation module is used for performing binary segmentation on the first target region through a preset threshold segmentation algorithm to obtain a first image;
the second determining module is used for determining the target edge according to the first image;
the judging module is used for judging whether the aortic interlayer membrane can be successfully separated or not according to the target edge;
a third determining module, configured to determine, if it is determined that the aortic dissection membrane cannot be successfully separated, one of the two sub-regions included in the first image that has a smaller mean value of gray levels as a region to be processed, wherein, if it is determined that the aortic dissection membrane cannot be successfully separated, the first image is composed of a sub-region corresponding to a true lumen and a false lumen and a sub-region corresponding to the aortic dissection membrane, and the region to be processed is the sub-region corresponding to the aortic dissection membrane;
a fourth determining module, configured to determine, for each pixel point in the region to be processed, a probability that the pixel point belongs to an aortic interlayer membrane;
and the first extraction module is used for extracting a second target region corresponding to the aortic interlayer membrane from the first target region according to the target pixel points in the region to be processed, wherein the probability of the target pixel points is greater than a preset probability threshold.
9. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
10. An electronic device, comprising:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to carry out the steps of the method of any one of claims 1 to 7.
CN202011331292.9A 2020-11-24 2020-11-24 Image processing method, device, storage medium and electronic equipment Active CN112330708B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011331292.9A CN112330708B (en) 2020-11-24 2020-11-24 Image processing method, device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011331292.9A CN112330708B (en) 2020-11-24 2020-11-24 Image processing method, device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN112330708A true CN112330708A (en) 2021-02-05
CN112330708B CN112330708B (en) 2024-04-23

Family

ID=74308514

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011331292.9A Active CN112330708B (en) 2020-11-24 2020-11-24 Image processing method, device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN112330708B (en)

Patent Citations (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060127880A1 (en) * 2004-12-15 2006-06-15 Walter Harris Computerized image capture of structures of interest within a tissue sample
WO2006116672A2 (en) * 2005-04-27 2006-11-02 The Trustees Of Dartmouth College Systems and methods for tomographic image reconstruction
US20180064566A1 (en) * 2005-07-07 2018-03-08 Nellix, Inc. System and Methods for Endovascular Aneurysm Treatment
US20090080743A1 (en) * 2007-09-17 2009-03-26 Laurent Launay Method to detect the aortic arch in ct datasets for defining a heart window
CN102982547A (en) * 2012-11-29 2013-03-20 北京师范大学 Automatically initialized local active contour model heart and cerebral vessel segmentation method
CN103700090A (en) * 2013-12-01 2014-04-02 北京航空航天大学 Three-dimensional image multi-scale feature extraction method based on anisotropic heat kernel analysis
CN106489152A (en) * 2014-04-10 2017-03-08 Sync-Rx有限公司 Image analysis in the presence of a medical device
US20170367580A1 (en) * 2014-10-29 2017-12-28 Spectral Md, Inc. Reflective mode multi-spectral time-resolved optical imaging methods and apparatuses for tissue classification
WO2017092615A1 (en) * 2015-11-30 2017-06-08 上海联影医疗科技有限公司 Computer aided diagnosis system and method
CN105741310A (en) * 2016-03-21 2016-07-06 东北大学 Heart's left ventricle image segmentation system and method
CN106127849A (en) * 2016-05-10 2016-11-16 中南大学 Three-dimensional fine vascular method for reconstructing and system thereof
CN106023198A (en) * 2016-05-16 2016-10-12 天津工业大学 Hessian matrix-based method for extracting aortic dissection of human thoracoabdominal cavity CT image
CN106803251A (en) * 2017-01-12 2017-06-06 西安电子科技大学 The apparatus and method of aortic coaractation pressure differential are determined by CT images
CN106872272A (en) * 2017-02-23 2017-06-20 北京理工大学 A kind of dissection of aorta diaphragm organization mechanicses attribute determines devices and methods therefor
US20180360632A1 (en) * 2017-04-28 2018-12-20 Cook Medical Technologies Llc Medical device with induction triggered anchors and system for deployment of the same
US20190204755A1 (en) * 2017-12-29 2019-07-04 Asml Netherlands B.V. Method and device for determining adjustments to sensitivity parameters
CN108932723A (en) * 2018-03-26 2018-12-04 天津工业大学 A kind of three-dimensional S nake dissection of aorta dividing method based on curved-surface shape
CN109003278A (en) * 2018-03-26 2018-12-14 天津工业大学 A kind of improved CT image aorta segmentation method based on active shape model
CN108776961A (en) * 2018-04-02 2018-11-09 东南大学 A kind of the first break location of Thoracic Aortic Dissection localization method
CN108805134A (en) * 2018-06-25 2018-11-13 慧影医疗科技(北京)有限公司 A kind of construction method of dissection of aorta parted pattern and application
WO2020001217A1 (en) * 2018-06-27 2020-01-02 东南大学 Segmentation method for dissected aorta in ct image based on convolutional neural network
US20200085344A1 (en) * 2018-09-14 2020-03-19 Neusoft Medical Systems Co., Ltd. Image processing and emphysema threshold determination
US20200116808A1 (en) * 2018-10-15 2020-04-16 Zayed University Cerebrovascular segmentation from mra images
CN110473207A (en) * 2019-07-30 2019-11-19 赛诺威盛科技(北京)有限公司 A kind of method of the Interactive Segmentation lobe of the lung
CN110610502A (en) * 2019-09-18 2019-12-24 天津工业大学 Automatic aortic arch region positioning and segmentation method based on CT image
CN110706246A (en) * 2019-10-15 2020-01-17 上海微创医疗器械(集团)有限公司 Blood vessel image segmentation method and device, electronic equipment and storage medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
DUAN XIAOJIE et al., "Segmentation of the Aortic Dissection from CT Images Based on Spatial Continuity Prior Model", International Conference on Information Technology in Medicine and Education, pages 275-280 *
PEPE A et al., "Detection, segmentation, simulation and visualization", Medical Image Analysis, vol. 65, pages 1-16 *
丁秋芳 et al., "Aortic arch dynamic image measurement method based on normal feature matching", 医用生物力学 (Journal of Medical Biomechanics), vol. 21, no. 4, pages 298-303 *
陈中中 et al., "Aortic arch segmentation method based on secondary clustering", 郑州大学学报(工学版) (Journal of Zhengzhou University, Engineering Science), vol. 39, no. 3, pages 40-44 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114782353A (en) * 2022-04-15 2022-07-22 沈阳东软智能医疗科技研究院有限公司 Method, device and equipment for classifying CTA (computed tomography angiography) images

Also Published As

Publication number Publication date
CN112330708B (en) 2024-04-23

Similar Documents

Publication Publication Date Title
Sreedevi et al. A novel approach for removal of pectoral muscles in digital mammogram
JP4767957B2 (en) Volumetric tumor fragmentation using combined spatial strength likelihood ratio test
Harandi et al. An automated method for segmentation of epithelial cervical cells in images of ThinPrep
CN107316311B (en) Cell nucleus image contour capture device and method thereof
US11751823B2 (en) Image processing apparatus, image processing method, and program
WO2012077130A1 (en) Method and system to detect the microcalcifications in x-ray images using nonlinear energy operator.
CN114240978B (en) Cell edge segmentation method and device based on adaptive morphology
Khordehchi et al. Automatic lung nodule detection based on statistical region merging and support vector machines
WO2018176319A1 (en) Ultrasound image analysis method and device
CN110060246B (en) Image processing method, device and storage medium
CN112330708A (en) Image processing method, image processing device, storage medium and electronic equipment
CN110766682A (en) Pulmonary tuberculosis positioning screening device and computer equipment
Soni et al. CT scan based brain tumor recognition and extraction using Prewitt and morphological dilation
Goswami et al. An analysis of image segmentation methods for brain tumour detection on MRI images
CN110428431B (en) Method, device and equipment for segmenting cardiac medical image and storage medium
Sachdeva et al. Automatic segmentation and area calculation of optic disc in ophthalmic images
MUSTAFA et al. Automatic blood vessel detection on retinal image using hybrid combination techniques
CN115908802A (en) Camera shielding detection method and device, electronic equipment and readable storage medium
Dey et al. Chest X-ray analysis to detect mass tissue in lung
Anuradha et al. Improved segmentation of suspicious regions of masses in mammograms by watershed transform
Patino-Correa et al. White matter hyper-intensities automatic identification and segmentation in magnetic resonance images
Sigit et al. Automatic detection brain segmentation to detect brain tumor using MRI
Das et al. Entropy thresholding based microaneurysm detection in fundus images
KN et al. Comparison of 3-segmentation techniques for intraventricular and intracerebral hemorrhages in unenhanced computed tomography scans
Ooto et al. Cost reduction of creating likelihood map for automatic polyp detection using image pyramid

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant