CN107563983B - Image processing method and medical imaging device - Google Patents


Info

Publication number
CN107563983B
CN107563983B (application CN201710899166.5A)
Authority
CN
China
Prior art keywords
blood vessel
image
vessel
layer
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710899166.5A
Other languages
Chinese (zh)
Other versions
CN107563983A (en)
Inventor
姜娈
张宇
马金凤
李强
Current Assignee
Shanghai United Imaging Healthcare Co Ltd
Original Assignee
Shanghai United Imaging Healthcare Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai United Imaging Healthcare Co Ltd filed Critical Shanghai United Imaging Healthcare Co Ltd
Priority to CN201710899166.5A priority Critical patent/CN107563983B/en
Publication of CN107563983A publication Critical patent/CN107563983A/en
Application granted granted Critical
Publication of CN107563983B publication Critical patent/CN107563983B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

Embodiments of the invention provide an image processing method and a medical imaging device in the field of image processing technology, improving the accuracy of identifying the left and right coronary arteries in images. The image processing method provided by the embodiments comprises the following steps: acquiring an original blood vessel three-dimensional scanning image; processing the original blood vessel three-dimensional scanning image to obtain a specified blood vessel candidate region; processing the specified blood vessel candidate region to obtain its centerline; acquiring, along the course of the centerline, two-dimensional slice data perpendicular to the centerline at each sampling point on the centerline; inputting the two-dimensional slice data into a trained neural network for learning to obtain a learning result; and determining a specified blood vessel image according to the plurality of learning results.

Description

Image processing method and medical imaging device
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method and a medical imaging device.
Background
In recent years, the morbidity and mortality of cardiovascular disease have risen year by year, making it the leading cause of death worldwide; among cardiovascular diseases, coronary artery disease accounts for a particularly high proportion of deaths. With advances in technology, computed tomography angiography (CTA) has become well suited to diagnosing cardiac disease caused by coronary artery disease.
The coronary arteries comprise a left coronary artery and a right coronary artery, each arising from the aorta at the base of the heart, extending toward the apex, and enveloping the surface of the heart; they become thinner toward the apex, far from the coronary root. Given this complex structure, and because the coronary arteries wrap around the pericardium, accurately extracting them from an image generated by CTA is a very critical step.
Because a contrast agent must be injected into the patient before a CTA image is acquired, contrast agent flows into lumens near the coronary arteries, and variations in how the physician operates the instrument can further reduce the accuracy of the coronary artery image extracted from the CTA image.
Disclosure of Invention
The embodiment of the invention provides an image processing method and medical imaging equipment, which improve the accuracy of extracting coronary arteries from a CTA image.
In a first aspect, an embodiment of the present invention provides an image processing method, including:
acquiring an original blood vessel three-dimensional scanning image;
processing the original blood vessel three-dimensional scanning image to obtain a specified blood vessel candidate region;
processing the specified blood vessel candidate region to obtain a centerline of the specified blood vessel candidate region;
acquiring, along the course of the centerline, two-dimensional slice data perpendicular to the centerline at each sampling point on the centerline;
inputting the two-dimensional slice data into a trained neural network for learning to obtain a learning result;
determining a specified blood vessel image according to a plurality of learning results.
The above-described aspects and any possible implementations further provide an implementation in which the blood vessel is designated as a coronary artery, the method further comprising:
processing the coronary arteries in the designated vessel image to remove regions of non-coronary arteries in each vessel branch.
The above-described aspect and any possible implementation further provides an implementation in which processing coronary arteries in the designated vessel image includes:
determining a centerline of the coronary artery;
determining a bifurcation point and an end point on a centerline of the coronary artery;
dividing the center line into a plurality of sections according to the bifurcation points and the end points;
dividing the specified blood vessel image into a non-coronary region and a coronary region according to the plurality of segments.
The above-described aspects and any possible implementations further provide an implementation in which the blood vessel is designated as a coronary artery, the method further comprising:
processing the coronary arteries in the designated vessel image to remove non-coronary vessel points in the designated vessel image.
The above-described aspect and any possible implementation further provide an implementation, before the acquiring an original blood vessel three-dimensional scanning image, the method further includes:
and training the neural network by using the positive and negative samples to obtain the trained neural network.
The above-described aspects and any possible implementations further provide an implementation in which the neural network includes a convolutional layer, a pooling layer, a nonlinear mapping layer, a fully-connected layer, and a classification layer, and the classification probability value of the two-dimensional slice data is determined by the trained neural network.
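The layer sequence named above (convolution, pooling, nonlinear mapping, fully connected, classification) can be sketched as a tiny forward pass. This is a minimal illustration, not the patent's actual network: the kernel size, 32 x 32 slice size, and random weights are assumptions for demonstration.

```python
import numpy as np

def conv2d(x, k):
    """Valid 2-D convolution (correlation) of slice x with kernel k."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i+kh, j:j+kw] * k)
    return out

def relu(x):          # nonlinear mapping layer
    return np.maximum(x, 0.0)

def maxpool2(x):      # 2x2 pooling layer
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    x = x[:h, :w]
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def softmax(z):       # classification layer: two class probabilities
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(slice2d, kernel, w_fc, b_fc):
    """conv -> ReLU -> pool -> fully connected -> softmax."""
    f = maxpool2(relu(conv2d(slice2d, kernel))).ravel()
    return softmax(w_fc @ f + b_fc)

rng = np.random.default_rng(0)
x = rng.standard_normal((32, 32))          # one 32x32 two-dimensional slice
k = rng.standard_normal((3, 3))
feat = maxpool2(relu(conv2d(x, k))).size   # 30x30 conv map pooled to 15x15
p = forward(x, k, rng.standard_normal((2, feat)), np.zeros(2))
```

The two outputs of `p` correspond to the classification probability values discussed below: the probability that the slice does or does not belong to the specified vessel.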
The above-described aspect and any possible implementation further provide an implementation in which the classification probability value includes a probability that the two-dimensional slice data belongs to a specified vessel.
The above-described aspects and any possible implementations further provide an implementation in which the neural network employs a perceptual-memory-decision model.
In a second aspect, an embodiment of the present invention further provides an image processing method, including:
acquiring an original blood vessel three-dimensional scanning image;
determining a specified blood vessel candidate region from the original blood vessel three-dimensional scanning image;
dividing the specified blood vessel candidate region into a plurality of two-dimensional slice data;
inputting a plurality of two-dimensional slice data into the trained neural network for learning to obtain a learning result;
and determining a specified blood vessel image in the specified blood vessel candidate region according to a plurality of learning results.
In a third aspect, an embodiment of the present invention further provides a medical imaging apparatus, where the apparatus includes:
a processor;
a memory for storing the processor-executable instructions;
the processor is configured to:
acquiring an original blood vessel three-dimensional scanning image;
determining a specified blood vessel candidate region from the original blood vessel three-dimensional scanning image;
dividing the specified blood vessel candidate region into a plurality of two-dimensional slice data;
inputting a plurality of two-dimensional slice data into the trained neural network for learning to obtain a learning result;
and determining a specified blood vessel image in the specified blood vessel candidate region according to a plurality of learning results.
According to the image processing method and medical imaging device provided by embodiments of the invention, the coronary artery candidate region obtained by processing the original blood vessel three-dimensional scanning image is processed to obtain the centerline of the candidate region. Two-dimensional slice data of a specified size, perpendicular to the centerline at each sampling point, is then acquired along the course of the centerline. Finally, the two-dimensional slice data is input into a trained neural network; after the network's learning, the non-coronary regions in the original blood vessel three-dimensional scanning image can be effectively removed and the left and right coronary arteries accurately identified, solving the prior-art problem of low accuracy in extracting coronary arteries from a CTA image.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below are only some embodiments of the present invention; those skilled in the art can derive other drawings from them without inventive effort.
FIG. 1 is a flow chart of an embodiment of a medical image processing method provided by an embodiment of the invention;
fig. 2 is a schematic diagram illustrating a result of obtaining a centerline of a candidate region of a specified blood vessel according to an embodiment of the present invention;
FIG. 3 is another flow chart of an embodiment of a medical image processing method according to the present invention;
FIG. 4 is another flow chart of an embodiment of a medical image processing method according to the present invention;
FIG. 5 is a schematic diagram of a candidate connected domain according to an embodiment of the present invention;
FIG. 6 is a diagram illustrating a result of deep learning of a candidate connected component according to an embodiment of the present invention;
FIG. 7 is a diagram illustrating a final coronary artery extraction result according to an embodiment of the present invention;
FIG. 8 is another flowchart of an embodiment of a medical image processing method according to the present invention;
fig. 9 is a scene schematic diagram of an embodiment of a medical image processing method according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the examples of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be understood that the term "and/or" as used herein merely describes an association between objects, indicating that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the objects before and after it are in an "or" relationship.
Existing target or tissue segmentation generally relies on gray-value-based or morphological models; for targets or tissues with complex morphology, however, the segmentation process is susceptible to many factors. To address this problem, the present application proposes a medical image processing method that may include a training (learning) phase and a detection (prediction) phase. The training phase can be an offline process in which a database of images of the same body part and same modality, from the same subject or different subjects, is used for training to obtain a model or parameters containing multiple features. The detection phase is an online process that examines the medical image to be detected using the trained multi-feature model and obtains an identification or localization of the target organ in the current medical image.
In some embodiments, the database may include statistical measurements of anatomical position, function, subject information, and so on. The mapping between the target or tissue and each pixel of the medical image may be learned through a training process that optimally fits predictions on the database to the target or tissue, or the training process may produce a morphology-based and/or pixel-based model that best matches the target or tissue. The three-dimensional medical images in the database may also be expert-labeled (annotated) medical images, from which key features of the lesion or target organ are learned automatically. Automatically learned features are more representative and general than manually selected ones, and a target organ obtained by screening with automatically learned features has higher accuracy.
In some embodiments, the training phase comprises: acquiring a plurality of three-dimensional medical images; segmenting a plurality of candidate regions from the plurality of three-dimensional medical images, the plurality of candidate regions corresponding to one or more target sites or target tissues; extracting central lines of the target part in the candidate areas, dividing the three-dimensional medical images corresponding to the candidate areas according to the central lines, and acquiring a two-dimensional slice image data set; and taking the two-dimensional slice image data set as a training sample, and performing learning training on the deep learning neural network by using the training sample, wherein the deep learning neural network after training can be used for segmenting and testing a target part in the three-dimensional medical image. Further, the target site may be a tubular tissue such as an artery, a vein, a trachea, and the like.
In the embodiment of the present invention, the medical image processing method may be used to process an original blood vessel three-dimensional scanning image of a heart region to obtain a coronary artery region image, so as to facilitate a doctor to judge a coronary artery disease, determine a lesion position, estimate a lesion degree, and the like.
Fig. 1 is a flowchart of an embodiment of a medical image processing method according to an embodiment of the present invention. The three-dimensional medical image may be a CT angiography (CTA) image or an MR angiography (MRA) image; CTA is taken as the example below. As shown in Fig. 1, the image processing method provided by the embodiment of the present invention may include the following steps:
101. Acquiring an original blood vessel three-dimensional scanning image.
After a contrast agent is injected into the subject's blood vessels, a physician scans the subject with a computed tomography (CT) device to obtain tomographic images, which are then reconstructed into an original blood vessel three-dimensional scanning image of the target region or tissue.
In one particular implementation, it may be a three-dimensional scan image of the original blood vessels of the heart region.
102. Processing the original blood vessel three-dimensional scanning image to obtain a specified blood vessel candidate region.
In a specific implementation, when the original blood vessel three-dimensional scan image is an original blood vessel three-dimensional scan image of a heart region, acquiring the specified blood vessel candidate region may be exemplarily described as follows:
the treatment process can be as follows: firstly, determining the position of an aorta in an original blood vessel three-dimensional scanning image, performing enhancement filtering on the original image to obtain an enhanced filtering image, and in the step, determining the position of the aorta in the original image according to the characteristics of the aorta in the original image and Hough transformation, wherein the characteristics of the aorta are the gray value and the shape characteristics of the aorta. In CTA images, the aorta is characterized by: the gray value of the aorta is generally between 250 and 550; the aortic root approximates a standard circular area on the CTA image and has a radius between 10-25 cm. Determining a circle center at the root of the aorta by Hough transform according to the feature information of the aorta in the CTA image; obtaining a circular area by area growth with the circle center as a starting point; then each layer of image is deformed to a certain extent on the basis of the previous layer of image, the outline of the corresponding area of each layer of image is still closed, and the closed contour line of the layer of image is stopped to be searched until the area of the closed contour line on a certain layer of image is suddenly increased; and overlapping the contour regions on each layer of image to obtain the segmentation result of the aorta in the CTA image.
Second, the initial regions of the left and right coronary arteries are determined from the aortic position, the original blood vessel three-dimensional scanning image, and the gray values of the enhanced filtered image. Specifically, the contour of the aorta is dilated outward by m mm, forming an annular structure between the dilated contour and the aorta; because the left and right coronary arteries branch from the aorta and lie on its two sides, only the pixels around the aorta need to be considered rather than the whole image. The probability that each pixel in the annular structure is a vessel point is then computed; this probability can be determined by combining the aortic position with the gray values of the original image and the enhanced filtered image. Pixels whose probability exceeds a preset threshold are taken as vessel points, and the connected regions formed by these vessel points are collected. A condition value is computed for each connected region from its positional relationship to the aorta and its gray value, and the regions are sorted in descending order accordingly. The top M sorted connected regions on each side of the aorta are grown in turn; if the grown region of the S-th connected region (S ≤ M) reaches a preset volume threshold, that region is taken as the initial region for the corresponding side.
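The connected-region collection and ranking just described can be sketched as follows. For illustration the "condition value" is simply component size; the patent's actual condition value combines position and gray value, so both function names and the ranking key are assumptions.

```python
def connected_components(points):
    """Group a set of (r, c) vessel points into 4-connected components."""
    points, comps = set(points), []
    while points:
        seed = points.pop()
        comp, stack = {seed}, [seed]
        while stack:
            r, c = stack.pop()
            for nb in ((r-1, c), (r+1, c), (r, c-1), (r, c+1)):
                if nb in points:
                    points.remove(nb)
                    comp.add(nb)
                    stack.append(nb)
        comps.append(comp)
    return comps

def pick_start_region(comps, condition_value, volume_threshold):
    """Sort components by a condition value (descending) and return the
    first whose size reaches the preset volume threshold."""
    for comp in sorted(comps, key=condition_value, reverse=True):
        if len(comp) >= volume_threshold:
            return comp
    return None

pts = [(0, 0), (0, 1), (1, 1), (5, 5), (8, 0), (8, 1)]
comps = connected_components(pts)
start = pick_start_region(comps, condition_value=len, volume_threshold=2)
```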
Next, the enhanced filtered image is segmented with at least one preset segmentation threshold to obtain a candidate region for each threshold. Specifically, N preset segmentation thresholds are selected and the enhanced filtered image is thresholded with each, giving N threshold-segmentation results; pixels outside the dilated initial regions of the left and right coronary arteries are then removed from the N results, yielding the candidate regions corresponding to the N thresholds.
Finally, the candidate regions of the left and right coronary arteries are determined from the candidate regions corresponding to the preset segmentation thresholds and the initial regions of the left and right coronary arteries. Specifically, for each of the N thresholds, the combined volume of the candidate region on the left side of the aorta and the left coronary initial region is computed, and likewise for the right side. Traversing the N thresholds in descending order, the threshold at which the volume change rate between adjacent thresholds is largest is taken as the optimal threshold for the left or right coronary artery; the candidate region in the segmented image corresponding to that optimal threshold is then the candidate region of the left or right coronary artery.
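The optimal-threshold selection can be sketched numerically. The exact definition of "volume change rate" and which of the two adjacent thresholds is returned are assumptions here; this sketch returns the threshold just before the largest relative jump in volume, i.e. before segmentation leaks into neighboring tissue.

```python
def optimal_threshold(thresholds, volumes):
    """Given segmentation thresholds in descending order and the candidate
    volume obtained at each, return the threshold just before the largest
    relative volume change between adjacent thresholds."""
    best_i, best_rate = 0, -1.0
    for i in range(len(volumes) - 1):
        rate = abs(volumes[i + 1] - volumes[i]) / max(volumes[i], 1)
        if rate > best_rate:
            best_i, best_rate = i, rate
    return thresholds[best_i]

# Volumes grow slowly, then leak into neighboring tissue at t = 200.
ts  = [400, 350, 300, 250, 200]
vol = [100, 110, 120, 130, 900]
t_opt = optimal_threshold(ts, vol)
```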
In the embodiment of the present invention, other methods for acquiring the specified blood vessel candidate region in the prior art may also be adopted.
103. Processing the specified blood vessel candidate region to obtain the centerline of the specified blood vessel candidate region.
One extraction method for the centerline of the specified blood vessel candidate region is to segment the vessel from the medical image and obtain the vessel skeleton line with a thinning method based on mathematical-morphology erosion; see Palágyi K, Balogh E, Kuba A, et al. A sequential 3D thinning algorithm and its medical applications [C]// Biennial International Conference on Information Processing in Medical Imaging. Springer, Berlin, Heidelberg, 2001: 409-415.
In the embodiment of the present invention, the centerline can also be obtained by selecting, among the voxels of the organ, those whose shortest distance to the organ surface is locally maximal, and connecting these voxel points to form the centerline of the blood vessel.
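The distance-to-surface idea can be sketched with a multi-source breadth-first search; the ridge of maximal distance traces the centerline. A 2-D toy mask stands in for the 3-D volume, and 4-connectivity is an illustrative assumption.

```python
from collections import deque

def distance_to_surface(mask):
    """Multi-source BFS: for every foreground cell, the 4-connected
    distance to the nearest background cell (the organ surface)."""
    h, w = len(mask), len(mask[0])
    dist = [[None] * w for _ in range(h)]
    q = deque()
    for r in range(h):
        for c in range(w):
            if not mask[r][c]:
                dist[r][c] = 0
                q.append((r, c))
    while q:
        r, c = q.popleft()
        for nr, nc in ((r-1, c), (r+1, c), (r, c-1), (r, c+1)):
            if 0 <= nr < h and 0 <= nc < w and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                q.append((nr, nc))
    return dist

# A 5-pixel-wide horizontal "vessel": its middle row is deepest.
mask = [[0]*7] + [[0, 1, 1, 1, 1, 1, 0] for _ in range(3)] + [[0]*7]
d = distance_to_surface(mask)
peak = max(d[r][c] for r in range(5) for c in range(7) if mask[r][c])
centreline = [(r, c) for r in range(5) for c in range(7)
              if mask[r][c] and d[r][c] == peak]
```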
In one particular implementation, the coronary artery initial connected domain (region) is determined from the candidate connected domains. With at most five initial connected domains of the specified vessel candidate retained on each of the left and right sides, the thinning-based centerline extraction over the specified vessel candidate region can be described as follows:
1) Classify the vessel tree and determine the attribute of each point on the skeleton line with the thinning method. Illustratively, the connectivity of a single pixel within the neighborhood of the skeleton line is determined: when a point has only one neighboring skeleton point, it is defined as an end point, representing a vessel start or vessel end; when it has two neighboring points, it is defined as an ordinary connection point, appearing as a middle point of the vessel; when it has three neighboring points, it appears as a vessel bifurcation point. Further, when nodes on the skeleton are connected, directly or indirectly through other skeleton nodes, they are judged to belong to the same class; when they are neither directly nor indirectly connected, they do not belong to the same class.
2) Taking any end point of a vessel tree as the initial point, judge whether the end point has already been processed; if not, execute step 3); if so, continue selecting end points of other vessel trees until all vessel trees have been processed.
3) Search nodes along the vessel tree, judging the attribute of each skeleton node within the neighborhood of the current node; when the node is an ordinary connection point or a bifurcation point, continue; when it is an end point, execute step 4).
4) Judge whether an unprocessed bifurcation point exists; if so, delete it and return to step 3); if not, end the node search for this vessel tree and return to step 2).
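The neighbor-counting rule of step 1) can be sketched directly. This sketch uses 8-connectivity on a 2-D skeleton and a hypothetical function name; in the 3-D case a 26-neighborhood would be used instead.

```python
def classify_skeleton_points(skel):
    """Label each skeleton point by its number of 8-connected skeleton
    neighbors: 1 -> end point, 2 -> ordinary connection point,
    3 or more -> bifurcation point, as in step 1) above."""
    labels = {}
    for r, c in skel:
        n = sum((r + dr, c + dc) in skel
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0))
        labels[(r, c)] = ('end' if n == 1 else
                          'connection' if n == 2 else 'bifurcation')
    return labels

# Y-shaped skeleton: a trunk that splits into two branches.
skel = {(0, 2), (1, 2), (2, 2), (3, 1), (3, 3), (4, 0), (4, 4)}
labels = classify_skeleton_points(skel)
```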
Optionally, on the basis of the above method, a root node based on the overall structural features of the blood vessel may be extracted, and the vessel centerline corrected according to that root node. As shown in Fig. 2, which is a schematic diagram of the result of obtaining the centerline of a specified blood vessel candidate region according to an embodiment of the present invention, the centerline is the set of center points of the multiple two-dimensional slice images corresponding to the specified blood vessel candidate region.
In practical application, other methods for acquiring the centerline of the specified blood vessel candidate region in the prior art can also be adopted.
104. Acquiring, along the course of the centerline, two-dimensional slice data perpendicular to the centerline at each sampling point on the centerline.
The acquisition of the vessel centerline has been described above; its purpose is to determine the cutting positions of the two-dimensional slices. Specifically, two-dimensional slice data perpendicular to the centerline is acquired along its course. In a specific implementation, the specified slice size is 64 x 64 or 32 x 32 pixels, and sampling points are placed at a specified interval along the centerline according to actual conditions. Because the vessel direction is not fixed, the normal direction perpendicular to the centerline at each sampling point must first be determined: the direction is estimated from the current sampling point and the n-th point ahead of it along the course of the centerline (e.g., n = 5). Once this normal direction is determined, the plane perpendicular to the centerline is determined, and the corresponding two-dimensional slice data is finally obtained.
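The tangent-and-plane construction just described can be sketched as follows: estimate the tangent from the current sample and the point n = 5 steps ahead, build two unit vectors spanning the perpendicular plane, and sample the plane on a grid. Function names, the helper-vector trick, and nearest-point sampling are illustrative assumptions.

```python
import numpy as np

def slice_basis(centerline, i, n=5):
    """Tangent at sample i from the point n steps ahead (n = 5 above),
    plus two unit vectors spanning the perpendicular slice plane."""
    p = np.asarray(centerline[i], float)
    q = np.asarray(centerline[min(i + n, len(centerline) - 1)], float)
    t = (q - p) / np.linalg.norm(q - p)      # normal of the slice plane
    helper = np.array([0.0, 0.0, 1.0])
    if abs(t @ helper) > 0.9:                # avoid a near-parallel helper
        helper = np.array([0.0, 1.0, 0.0])
    u = np.cross(t, helper); u /= np.linalg.norm(u)
    v = np.cross(t, u)
    return p, t, u, v

def sample_slice(volume_at, p, u, v, size=32, spacing=1.0):
    """Sample a size x size slice centered on p in the (u, v) plane."""
    half = size // 2
    return [[volume_at(p + (i - half) * spacing * u + (j - half) * spacing * v)
             for j in range(size)] for i in range(size)]

line = [(float(k), 0.0, 0.0) for k in range(20)]   # straight test centerline
p, t, u, v = slice_basis(line, 3)
# Synthetic "vessel": bright within 4 units of the sampling point.
sl = sample_slice(lambda x: 1.0 if np.linalg.norm(x - p) < 4 else 0.0, p, u, v)
```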
In this way, the original blood vessel three-dimensional scanning image is converted into two-dimensional slice data to be input into the model for neural network learning. It should be noted that, compared with the three-dimensional scanning image, two-dimensional slice data reduces the difficulty of sample processing during neural network learning; and because many two-dimensional slices are obtained from one three-dimensional scanning image, the number of samples is increased.
105. Inputting the two-dimensional slice data into the trained neural network for learning to obtain a learning result.
In the embodiment of the invention, the purpose of inputting the two-dimensional slice data into the final neural network is to determine whether each two-dimensional slice belongs to the target region or tissue; after each slice is input, a corresponding result is obtained. In one embodiment there are two results: the probability that the two-dimensional slice data belongs to the target region or tissue, and the probability that it does not.
In a specific implementation, when the original blood vessel three-dimensional scanning image covers the heart region and the two-dimensional slice data relates to the coronary arteries, each slice input to the final neural network yields the probability that it belongs to the coronary artery and the probability that it does not, and the attribute of the slice can be determined from these probability values. For example, if after processing by the trained neural network a slice has probability 0.7 of belonging to the coronary artery (positive sample) and 0.3 of not belonging (negative sample), the positive probability exceeds the negative one and the slice is determined to belong to the coronary artery. Conversely, if the probabilities are 0.2 and 0.8, the negative probability is larger and the slice is determined not to belong to the coronary artery. In the same way, any two-dimensional slice input to the trained neural network can be determined to be coronary or non-coronary.
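The probability comparison in the examples above reduces to a one-line rule; this minimal sketch (function name hypothetical) simply takes the larger of the two class probabilities.

```python
def label_slice(p_pos, p_neg):
    """Assign a slice to the coronary class when its positive-class
    probability exceeds the negative one, as in the examples above."""
    return 'coronary' if p_pos > p_neg else 'non-coronary'

# The 0.7/0.3 and 0.2/0.8 examples from the text.
labels = [label_slice(p, 1 - p) for p in (0.7, 0.2)]
```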
It should be noted that, when two-dimensional slice data is used for neural network learning and processing, the true attributes of a sample are reflected from multiple angles compared with the three-dimensional scanning image, so the result obtained by learning is more reliable.
106. Determining the specified blood vessel image from the plurality of learning results.
As described above, the acquired two-dimensional slice data has a certain continuity and regularity, so inputting multiple two-dimensional slices into the final neural network yields multiple classification probability values. These values indicate whether each slice belongs to the target region or not; slices belonging to the target region or tissue are grouped into one class, and slices not belonging to it are grouped into another.
Then, all the two-dimensional slice data belonging to the target region or tissue are combined to obtain a specified blood vessel image.
In a specific implementation process, when the original three-dimensional blood vessel scanning image is a heart region and the two-dimensional slice data is two-dimensional slice data related to a coronary artery, the two-dimensional slice data belonging to the coronary artery in all the two-dimensional slice data are combined to obtain a coronary artery image.
On the basis of the foregoing, the embodiment of the present invention further provides the following method flow, which is used for processing the obtained specified blood vessel image to obtain a more accurate and fine specified blood vessel image. In particular, when the blood vessel is designated as a coronary artery, as shown in fig. 3, fig. 3 is another flowchart of an embodiment of the medical image processing method provided by the embodiment of the present invention, and after step 106, the embodiment of the present invention may further include the following steps:
107. the coronary arteries in the image of the designated vessel are processed to remove regions of non-coronary arteries in each vessel branch.
Optionally, processing the coronary artery in the specified blood vessel image comprises: determining a centerline of the coronary artery; determining bifurcation points and end points on the centerline; dividing the centerline into a plurality of segments according to the bifurcation points and end points; and dividing the specified blood vessel image into non-coronary regions and coronary regions according to the segments. In the embodiment of the present invention, a centerline point whose neighborhood contains more than 2 other centerline points is defined as a bifurcation point, and a point whose neighborhood contains exactly 1 centerline point is defined as an end point. The bifurcation points and end points jointly divide the centerline into a plurality of segments, and for each segment the proportion of coronary vessel points among all vessel points in the segment is counted. For example, if more than 65% of the vessel points in a segment are coronary vessel points, the segment is determined to be a coronary vessel segment.
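A minimal sketch of the point classification and the 65% segment rule; the 6-connected neighborhood and the helper names are assumptions (the text does not fix the neighborhood definition).

```python
# 6-connected neighbourhood; the choice of neighbourhood (6 vs 26) is an
# implementation detail not fixed by the text.
OFFSETS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def classify_centerline_points(points):
    """More than 2 centerline neighbours -> bifurcation; exactly 1 -> end point."""
    point_set = set(points)
    labels = {}
    for (x, y, z) in points:
        n = sum((x + dx, y + dy, z + dz) in point_set for dx, dy, dz in OFFSETS)
        labels[(x, y, z)] = 'bifurcation' if n > 2 else ('end' if n == 1 else 'regular')
    return labels

def is_coronary_segment(n_coronary_points, n_total_points, ratio=0.65):
    """Keep a segment as coronary when more than 65% of its points are coronary."""
    return n_coronary_points / n_total_points > ratio

# A small Y-shaped centerline: a 4-point trunk with a 2-point branch at (2,0,0).
y_shape = [(0, 0, 0), (1, 0, 0), (2, 0, 0), (3, 0, 0), (2, 1, 0), (2, 2, 0)]
point_labels = classify_centerline_points(y_shape)
```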
In another embodiment, determining a certain segment of a blood vessel as a coronary artery segment may be performed by:
if, starting from the bifurcation point or end point corresponding to the starting point of the segment, the neural network learning result indicates a continuous non-coronary stretch longer than 5 mm, that part is considered possibly not coronary and is removed, its end becomes the new starting point, and the judgment is repeated from there; or, if the learning result indicates a continuous non-coronary stretch longer than 20 mm in the segment, the segment is considered not to be a coronary vessel and is marked as non-coronary. Further, the coronary artery identified by the above procedure may be processed again to remove small branches of the coronary tree shorter than a threshold (typically 10 mm).
In another embodiment, considering that the whole coronary vessel tree obtained on the basis of the candidate connected components may include some non-coronary vessel points, the coronary artery in the specified blood vessel image is further processed to remove those points: a centerline is extracted from the entire vessel tree and divided into a plurality of branches/segments; the length of the non-coronary part at the end of each segment is calculated, and if it exceeds a set threshold (e.g., 5 mm), that end is removed.
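The two length rules above (5 mm head removal, 20 mm rejection) can be sketched as follows; the per-point boolean labels, the 0.5 mm point spacing, and the helper names are illustrative assumptions.

```python
def longest_false_run(labels):
    """Length (in points) of the longest contiguous non-coronary run."""
    run = best = 0
    for is_coronary in labels:
        run = 0 if is_coronary else run + 1
        best = max(best, run)
    return best

def prune_segment(labels, spacing_mm=0.5, head_mm=5.0, reject_mm=20.0):
    """labels: network output for each centerline point of one branch,
    ordered from the branch start; True = coronary.
    Returns (kept labels, segment is coronary)."""
    # Rule 2: a contiguous non-coronary stretch longer than 20 mm marks
    # the whole segment as non-coronary.
    if longest_false_run(labels) * spacing_mm > reject_mm:
        return [], False
    # Rule 1: a leading non-coronary run longer than 5 mm is removed and
    # the segment restarts at its end.
    start = 0
    while start < len(labels) and not labels[start]:
        start += 1
    if start * spacing_mm > head_mm:
        labels = labels[start:]
    return labels, True
```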
On the basis of the foregoing, as shown in fig. 4, fig. 4 is another flowchart of an embodiment of a medical image processing method provided by the embodiment of the present invention, before step 101, the embodiment of the present invention further includes the following steps:
100. and training the neural network by using the positive and negative samples to obtain the trained neural network.
In one embodiment, the training samples come from at least 26 patients (subjects); positive sample images are extracted from the coronary three-dimensional connected regions of each patient and negative sample images from the non-coronary connected regions, about 100,000 images in total. The data can be augmented to about 1,000,000 by rotating and translating the vessel cross-section images. The positive and negative sample images are two-dimensional images of 32 × 32 pixels (anywhere from 32 to 64 is feasible), and the resolution of all slice images is unified to 0.25 mm (0.2–0.6 mm is feasible). The initial neural network is trained with the original CT values of the images as input.
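The rotation-and-translation augmentation can be sketched as below; the 90-degree rotation steps and 1-pixel circular shifts are illustrative choices, not the patent's exact scheme.

```python
import numpy as np

def augment(slice_2d):
    """Grow the sample count by rotating and translating one vessel
    cross-section image."""
    out = []
    for k in range(4):                    # 0/90/180/270 degree rotations
        rotated = np.rot90(slice_2d, k)
        for shift in (-1, 0, 1):          # small vertical translations
            out.append(np.roll(rotated, shift, axis=0))
    return out

augmented = augment(np.zeros((32, 32)))   # 4 rotations x 3 shifts = 12 images
```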
Wherein, training the initial neural network by using positive and negative samples comprises:
setting 64 convolution kernels on the first layer, wherein the size of each convolution kernel is 5 x 5, and performing convolution operation on the two-dimensional slice data and the convolution kernels to obtain 64 first-layer feature maps, wherein the size of each first-layer feature map is 32 x 32;
carrying out nonlinear mapping on the first layer characteristic diagram by using a modified linear unit function at the second layer to obtain a second layer characteristic diagram;
pooling cores are arranged on the third layer, the size of each pooling core is 3 x 3, the second layer characteristic graphs are pooled to obtain 64 third layer characteristic graphs, and the size of each third layer characteristic graph is 16 x 16;
setting 64 convolution kernels on a fourth layer, wherein the size of each convolution kernel is 5 x 5, performing convolution operation on the third layer of feature maps and the convolution kernels to obtain 64 fourth layer of feature maps, and the size of each fourth layer of feature map is 16 x 16;
carrying out nonlinear mapping on the fourth layer characteristic diagram by using the modified linear unit function at the fifth layer to obtain a fifth layer characteristic diagram;
pooling cores in the sixth layer, wherein the size of each pooling core is 3 x 3, pooling the fifth layer characteristic diagram to obtain 64 sixth layer characteristic diagrams, and the size of each sixth layer characteristic diagram is 8 x 8;
setting 128 convolution kernels on a seventh layer, wherein the size of each convolution kernel is 5 x 5, performing convolution operation on the sixth layer feature maps and the convolution kernels to obtain 128 seventh layer feature maps, and the size of each seventh layer feature map is 8 x 8;
carrying out nonlinear mapping on the seventh layer characteristic diagram by using a modified linear unit function at the eighth layer to obtain an eighth layer characteristic diagram;
pooling cores in the ninth layer, wherein the size of each pooling core is 3 x 3, pooling the eighth layer of feature maps to obtain 128 ninth layer of feature maps, and the size of each ninth layer of feature map is 4 x 4;
setting 128 convolution kernels on the tenth layer, wherein the size of each convolution kernel is 4 x 4, and performing full connection processing on the ninth layer of feature maps to obtain tenth layer of feature maps, wherein the size of each tenth layer of feature map is 1 x 1;
setting 2 convolution kernels on the eleventh layer, wherein the size of each convolution kernel is 1 x 1, and performing full connection processing on the tenth layer feature maps to obtain eleventh layer feature maps, wherein the size of each eleventh layer feature map is 1 x 1;
and calculating the difference between the predicted value and the actual value at the twelfth layer, returning the gradient through a back propagation algorithm, and updating the weight and the bias of each layer.
During training, the Loss values of the training set and the validation set decrease continuously; training is stopped when the validation-set Loss no longer decreases, which prevents overfitting, and the neural network model at that point is taken as the classifier for vessel slices.
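The spatial sizes quoted through the twelve layers can be traced with a short sketch; the zero-padded 5 × 5 convolutions and the stride-2, 3 × 3 pooling (which halves each dimension) are assumptions consistent with the sizes listed above.

```python
def feature_map_sizes(input_size=32):
    """Trace spatial sizes through the 12-layer network: three blocks of
    (padded 5x5 conv -> ReLU -> 3x3 stride-2 pool), then a 4x4 and a 1x1
    'fully connected' convolution, then the loss layer's scalar output."""
    sizes = [input_size]
    for _ in range(3):
        sizes.append(sizes[-1])        # convolution (padding preserves size)
        sizes.append(sizes[-1])        # ReLU nonlinear mapping layer
        sizes.append(sizes[-1] // 2)   # pooling halves each dimension
    sizes.append(1)                    # 4x4 kernel on a 4x4 map -> 1x1
    sizes.append(1)                    # 1x1 kernel -> 1x1
    return sizes
```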
In one example, neural network processing of medical images may employ a perceptual-memory-decision model (PMJ). In the sensing stage, the primary feature extraction can be carried out on the medical image; in the memory stage, an overcomplete dictionary of the target organ can be obtained by deep convolutional network learning; in the decision stage, the overcomplete dictionary is used as the basis for extracting the target organs of the three-dimensional medical image, and tubular organs such as blood vessels and the like are extracted from the single three-dimensional medical image.
Further, the perception stage is also called feature extraction: the computer determines whether a point in the image belongs to an image feature. This relies mainly on the following property: many images, including medical images, have statistics that are the same in different parts of the image, i.e., features learned in one part can also be used in other parts, so the same learned features can be applied at all positions of the image. Alternatively, the system may first pre-process the input training samples and then pre-train them with a linear decoder to obtain the weights.
In one embodiment, the perception stage may employ a self-coding (autoencoder) neural network trained with an unsupervised learning algorithm or the Back Propagation (BP) algorithm. Illustratively, a self-coding neural network may include an input layer, a hidden layer and an output layer, with full connections between layers, and it learns to approximate the identity function h_{W,b}(x) ≈ x. Optionally, if the number of units in the hidden layer is less than the number of units of the input data, this amounts to acquiring a compressed representation of the input data; if the number of hidden units is greater than or equal to that of the input layer, a sparse representation of the input signal can still be obtained by introducing a sparsity constraint. In this embodiment, the penalty (cost) function of the sparse self-coding network is expressed as:

J_sparse(W, b) = (1/m) Σ_{i=1..m} (1/2) ||h_{W,b}(x^(i)) − y^(i)||^2 + (λ/2) Σ_l Σ_i Σ_j (W_ji^(l))^2 + β Σ_{j=1..s2} KL(ρ || ρ̂_j)

wherein the first term is the error term, constructed with the L2 norm; the second term is a regularization term to prevent overfitting; and the third term is the sparsity penalty factor, whose weight is controlled by β. The penalty factor KL(ρ || ρ̂_j) can be expressed as:

KL(ρ || ρ̂_j) = ρ log(ρ / ρ̂_j) + (1 − ρ) log((1 − ρ) / (1 − ρ̂_j))

where ρ is the sparsity parameter, a value close to zero; s2 is the number of neurons in the hidden layer; j indexes each neuron in the hidden layer; and ρ̂_j is the average activation of hidden neuron j, formulated as:

ρ̂_j = (1/m) Σ_{i=1..m} a_j^(2)(x^(i))

where a_j^(2)(x) represents the activation of hidden neuron j of the self-coding neural network with input x.
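The sparsity penalty can be evaluated with a few lines of numpy; the values ρ = 0.05 and β = 3.0 are illustrative, as the text gives no numeric settings.

```python
import numpy as np

def sparsity_penalty(hidden_activations, rho=0.05, beta=3.0):
    """beta * sum_j KL(rho || rho_hat_j) over the s2 hidden units, for a
    batch of hidden-layer activations (shape: m examples x s2 units)."""
    rho_hat = np.mean(hidden_activations, axis=0)   # average activation per unit
    kl = (rho * np.log(rho / rho_hat)
          + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    return beta * np.sum(kl)
```

When every hidden unit's average activation equals ρ, the KL divergence and hence the penalty is zero; any deviation makes it positive.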
In another embodiment, the sensing stage may be based on a sparse self-coding neural network composed of linear coding networks, the sparse self-coding neural network comprising an input layer, a hidden layer and an output layer, the neurons all using the same excitation function. In the three-layer sparse self-coding neural network, the calculation formulas of the output neurons are respectively as follows:
z^(3) = W^(2) a^(2) + b^(2)
a^(3) = f(z^(3))
the output of the network is a(3)Equal to the output of the excitation function f. Alternatively, in a sparse self-coding network, the excitation function is usually a Sigmoid function, and the output value range is [0,1 ]]Correspondingly, a(3)Is also in the range of [0,1 ]]。
Further, the linear coding network is a self-coding network which adopts an identity function as an excitation function at an output layer and still adopts a Sigmoid function as an excitation function at a hidden layer, and at the moment, the output layer satisfies the following conditions:
a^(3) = f(z^(3)) = z^(3) = W^(2) a^(2) + b^(2)
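The contrast between the Sigmoid output layer and the linear decoder can be sketched as a minimal forward pass; the list-based weights and the `forward` helper are illustrative, not the patent's implementation.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, W1, b1, W2, b2, linear_output=True):
    """Three-layer autoencoder forward pass: the hidden layer always uses
    Sigmoid; with linear_output=True the output layer is the identity
    (linear decoder), so a(3) is not confined to [0, 1]."""
    a2 = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
          for row, b in zip(W1, b1)]
    z3 = [sum(w * ai for w, ai in zip(row, a2)) + b
          for row, b in zip(W2, b2)]
    return z3 if linear_output else [sigmoid(z) for z in z3]
```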
In this particular embodiment, the large-size image x1 has size r × c. First, a × b small image samples x2 are selected from the large-size image to train the sparse self-coding network, and k features are computed according to the following formula:
f = σ(W^(1) x2 + b^(1))
wherein W^(1) represents the weights of the visible-layer units and b^(1) represents the bias of the hidden units. For each a × b small image sample x2, the corresponding feature value can be calculated by the above formula; further, convolving the features of each small image sample over the large image yields k × (r − a + 1) × (c − b + 1) convolved feature matrices.
It should be noted that images have a "stationarity" property, meaning that features useful in one image region are likely to be applicable in another. Therefore, to describe a large-size image, the average or maximum value of a specific feature over a region can be calculated, i.e., features at different positions are aggregated. This aggregation operation is pooling: taking the average of a feature over a region corresponds to mean pooling, and taking the maximum corresponds to max pooling. After the convolution features are obtained, the size of the pooling region must be determined. For example, when the pooling region has size m × n, the convolution feature map can be divided into disjoint regions of size m × n, and taking the averaged or maximum feature of each region yields the pooled convolution features.
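The disjoint-region pooling described above can be written compactly with a reshape; the `pool` helper is a sketch, assuming the map dimensions are trimmed to multiples of m and n.

```python
import numpy as np

def pool(feature_map, m, n, mode="mean"):
    """Split a convolution feature map into disjoint m x n regions and take
    the mean (mean pooling) or max (max pooling) of each region."""
    h, w = feature_map.shape
    blocks = feature_map[:h - h % m, :w - w % n].reshape(h // m, m, w // n, n)
    op = np.mean if mode == "mean" else np.max
    return op(blocks, axis=(1, 3))
```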
In this particular embodiment, the deep convolutional neural network model used is a 5-layer convolutional neural network comprising convolutional layers, pooling layers, and a fully connected layer. In this embodiment, for any two-dimensional slice image in the data set, the deep convolutional neural network processing is:
1) A two-dimensional slice image of size 64 × 64 is input into the convolutional layer and convolved with 36 convolution kernels of size 5 × 5 obtained by pre-training in the perception stage, yielding 36 feature maps of size 64 × 64;
2) the 36 feature maps of the convolutional layer are pooled with a 3 × 3 window to obtain 36 feature maps of size 32 × 32;
3) The 36 images of the pooling layer are sampled to obtain one or more sets of image blocks of size 5 × 5; each set is trained with a sparse self-coding network to obtain 64 weights of size 5 × 5, which are used as convolution kernels to convolve the 36 pooled images, yielding 64 feature maps of size 24 × 24. The measure adopted is to convolve the 36 images in groups of three, in two passes: the first pass selects 3 adjacent images, the second selects 3 images separated by 2 units, finally obtaining (36 − 3 + 1) + (36 − 3 × 2) = 34 + 30 = 64 feature maps.
4) The pooling layer was pooled using windows of 3 × 3 size to obtain 64 8 × 8 feature maps.
5) Fully connected layer. The training data set used in the present application includes 1300 images in total; after step 4), the feature maps of the whole network amount to 1300 × 64 × 8 × 8, meaning that 64 maps of size 8 × 8 are obtained for each 64 × 64 input image. The 1300 × 64 × 8 × 8 data are reshaped to (1300 × 64) × (8 × 8) = 83200 × 64, and the final dictionary is then trained using a sparse self-coding network with an output of 64.
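The two counts quoted in steps 3) and 5) can be checked directly; reading "3 images separated by 2 units" as a stride of 3 between the chosen maps is an assumption, as the text leaves the index pattern ambiguous.

```python
import numpy as np

# Step 3): 36 maps combined in groups of three, in two passes.
n = 36
adjacent = [(i, i + 1, i + 2) for i in range(n - 2)]   # 36 - 3 + 1 = 34 groups
spaced = [(i, i + 3, i + 6) for i in range(n - 6)]     # 36 - 3 * 2 = 30 groups

# Step 5): 1300 images x 64 maps of 8 x 8, flattened for dictionary training.
features = np.zeros((1300, 64, 8, 8))
flattened = features.reshape(1300 * 64, 8 * 8)         # 83200 x 64
```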
According to the image processing method provided by the embodiment of the invention, the coronary candidate region obtained by processing the original three-dimensional blood vessel scan is processed to obtain its centerline; two-dimensional slice data of a specified size, perpendicular to the centerline at each sampling point, are then acquired along the centerline; finally, the two-dimensional slice data are input into the trained neural network, so that non-coronary regions can be effectively removed and the left and right coronary arteries accurately identified.
Training sample selection and source:
The training samples come from 26 patients (subjects); positive sample images are extracted from the coronary three-dimensional connected regions of each patient and negative sample images from the non-coronary three-dimensional connected regions, about 100,000 images in total. The data can be augmented to about 1,000,000 by rotating and translating the vessel cross-section images. The positive and negative sample images are two-dimensional images of 32 × 32 pixels (anywhere from 32 to 64 is feasible), and the resolution of all slice images is unified to 0.25 mm (0.2–0.6 mm is feasible). The original CT values of the images are used as training input.
Setting a neural network:
The neural network is a Convolutional Neural Network (CNN), and the weights are updated with the stochastic gradient descent (SGD) optimization algorithm. The convolutional neural network has 12 layers: three convolutional layers, three nonlinear mapping layers, three pooling layers, two fully connected layers and a Loss layer.
The first layer is a convolution layer and is used for extracting features from an input image, 64 convolution kernels are arranged, the size of each convolution kernel is 5 x 5, and 64 feature maps of the first layer are obtained after convolution operation is carried out on the input image and the convolution kernels, wherein the size of each convolution kernel is 32 x 32;
the second layer is a nonlinear mapping layer and has the function of adding nonlinearity into the neural network and accelerating convergence speed. Carrying out nonlinear mapping on the first layer characteristic diagram by using a modified linear unit function (Relu) to obtain a second layer characteristic diagram;
the third layer is a pooling layer, which acts to reduce image size and reduce noise. The size of the pooling kernel is 3 × 3, the second layer feature map is pooled, and the pooling method is that the maximum value in a 3 × 3 pixel frame is taken to obtain a third layer feature map, the size is 16 × 16 pixels, and the number is 64;
setting 64 convolution kernels of size 5 × 5 on the fourth layer to obtain 64 fourth-layer feature maps, each of size 16 × 16;
carrying out nonlinear mapping on the fourth layer characteristic diagram by using the modified linear unit function at the fifth layer to obtain a fifth layer characteristic diagram;
the sixth layer is a pooling layer with pooling kernels of size 3 × 3; pooling the fifth-layer feature maps yields 64 sixth-layer feature maps of size 8 × 8 pixels;
setting 128 convolution kernels of size 5 × 5 on the seventh layer to obtain 128 seventh-layer feature maps;
carrying out nonlinear mapping on the seventh layer characteristic diagram by using a modified linear unit function at the eighth layer to obtain an eighth layer characteristic diagram;
the ninth layer is a pooling layer with pooling kernels of size 3 × 3; pooling the eighth-layer feature maps yields 128 ninth-layer feature maps of size 4 × 4;
setting 128 convolution kernels on the tenth layer, wherein the size of each convolution kernel is 4 x 4, and performing full connection processing on the ninth layer of feature maps to obtain a tenth layer of feature maps with the size of 1 x 1;
setting 2 convolution kernels of size 1 × 1 on the eleventh layer and fully connecting the tenth-layer feature maps to obtain the eleventh-layer feature maps;
and the twelfth layer is a softmax loss layer, the difference between the predicted value and the actual value is calculated, the gradient is transmitted back through a back propagation algorithm (BP algorithm), and the weight (weight) and the bias (bias) of each layer are updated.
During training, the Loss values of the training set and the validation set decrease continuously; training is stopped when the validation-set Loss no longer decreases, which prevents overfitting, and the neural network model at that point is taken as the classifier for vessel slices. During testing, the twelfth layer is replaced by a softmax layer: the eleventh-layer feature map is input into it for classification prediction, giving the probability that the input image is a vessel or a non-vessel and thus the classification result.
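The softmax layer used at test time can be sketched in a few lines; the max-subtraction is a standard numerical-stability measure, not stated in the text.

```python
import numpy as np

def softmax(logits):
    """Turn the two eleventh-layer scores into vessel / non-vessel
    probabilities (max subtracted for numerical stability)."""
    z = np.asarray(logits, dtype=float)
    e = np.exp(z - z.max())
    return e / e.sum()
```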
Application 1: fig. 5 is a schematic diagram of candidate connected components according to an embodiment of the present invention. As shown in fig. 5, initial coronary connected components are determined from the candidate connected components, with at most 5 candidate initial connected components retained on each of the left and right sides; the centerlines of the connected components are extracted, two-dimensional slices are made and input to the network, and the learning result is obtained. Fig. 6 is a schematic diagram of the deep-learning result on the candidate connected components according to an embodiment of the present invention. As shown in fig. 6, according to the learning result, the non-coronary head and tail portions of each branch are removed: if the head continues as a non-coronary stretch longer than 5 mm, that part is considered possibly not coronary and is temporarily removed; if a non-coronary stretch is longer than 20 mm, that part is considered not coronary and is marked as non-coronary. At the same time, small coronary branches shorter than 10 mm are removed. Fig. 7 is a schematic diagram of the final coronary extraction result according to an embodiment of the present invention. As shown in fig. 7, the remaining coronary vessel points in all connected domains and their proportion within each connected domain are counted, and the connected domain with the most coronary vessel points is retained; if the longest connected domain is not the one with the most coronary vessel points but the point counts differ little (within about 10 mm), the longest connected domain is also retained.

Application 2: removing non-coronary vessel points from the coronary artery tree.
The whole coronary vessel tree is obtained on the basis of the initial connected domains and contains some non-coronary vessel points. A centerline is extracted from the whole vessel tree and divided into branches; the non-coronary part at the end of each branch (end length greater than 5 mm) is removed, as are branches shorter than 10 mm, and terminal sections whose coronary ratio is below 0.5 are also detected and removed. The remaining points constitute the coronary vessels.
An embodiment of the present invention further provides an image processing method, fig. 8 is another flowchart of an embodiment of a medical image processing method provided in an embodiment of the present invention, and fig. 9 is a scene schematic diagram of an embodiment of a medical image processing method provided in an embodiment of the present invention, as shown in fig. 8 and fig. 9, the image processing method provided in an embodiment of the present invention may include the following steps:
801. and acquiring an original blood vessel three-dimensional scanning image.
The specific implementation process of step 801 is detailed in step 101, and the implementation principle and the process are similar, which are not described herein again.
802. A specified blood vessel candidate region is determined from the original blood vessel three-dimensional scan image.
The specific implementation process of step 802 is detailed in step 102, and the implementation principle and process are similar, which are not described herein again.
803. The specified blood vessel candidate region is divided into a plurality of two-dimensional slice data.
The preceding steps process three-dimensional images, and processing three-dimensional images in a neural network model is slow. To increase processing speed and reduce the difficulty of sample handling, the specified blood vessel candidate region is divided into a plurality of two-dimensional slice data, which are then input into the neural network model for learning. Further, dividing the three-dimensional image into many two-dimensional slices also increases the number of samples.
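The volume-to-slice split can be sketched as below; the toy 16 × 32 × 32 volume and axis choice are illustrative (the patent slices perpendicular to the centerline, which a full implementation would handle by resampling).

```python
import numpy as np

# A toy 3-D candidate region (depth x height x width); slicing along the
# first axis turns one volume into a stack of 2-D samples for the network.
volume = np.zeros((16, 32, 32))
slices = [volume[k] for k in range(volume.shape[0])]
```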
804. And inputting the data of the plurality of two-dimensional slices into the trained neural network for learning to obtain a learning result.
The specific implementation process of step 804 is detailed in step 104, and the implementation principle and process are similar, which are not described herein again.
805. A specified blood vessel image is determined in a specified blood vessel candidate region based on a plurality of learning results.
The detailed implementation process of step 805 is shown in step 105, and the implementation principle and process are similar, which are not described herein again.
According to the image processing method provided by the embodiment of the invention, the coronary candidate region obtained by processing the original three-dimensional blood vessel scan is divided into a plurality of two-dimensional slice data that are input into the trained neural network, so that non-coronary regions in the original image can be effectively removed and the left and right coronary arteries accurately identified, solving the prior-art problem of low accuracy in extracting coronary arteries from CTA images.

To implement the above method flow, an embodiment of the present invention further provides a medical imaging apparatus, which includes a processor and a memory for storing processor-executable instructions.
Wherein the processor is configured to:
acquiring an original blood vessel three-dimensional scanning image;
determining a specified blood vessel candidate region from an original blood vessel three-dimensional scanning image;
dividing a specified blood vessel candidate region into a plurality of two-dimensional slice data;
inputting a plurality of two-dimensional slice data into the trained neural network for learning to obtain a learning result;
and determining the specified blood vessel image in the specified blood vessel candidate region according to a plurality of learning results.
The medical imaging apparatus of this embodiment may be used to implement the technical solution of the method embodiment shown in fig. 8, and the implementation principle and the technical effect are similar, which are not described herein again.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present invention, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions in actual implementation, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a Processor (Processor) to execute some steps of the methods according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (9)

1. An image processing method, characterized in that the method comprises:
acquiring an original blood vessel three-dimensional scanning image;
processing the original blood vessel three-dimensional scanning image to obtain a specified blood vessel candidate region;
processing the appointed blood vessel candidate region to obtain a central line of the appointed blood vessel candidate region;
acquiring two-dimensional slice data perpendicular to the central line at each sampling point on the central line along the trend of the central line;
inputting the two-dimensional slice data into a trained neural network for learning to obtain a learning result;
determining a specified blood vessel image according to a plurality of learning results;
wherein the specified blood vessel is a coronary artery, and the specified blood vessel candidate region comprises a candidate region of a left coronary artery and a candidate region of a right coronary artery; and wherein the processing of the original blood vessel three-dimensional scanning image comprises the following steps:
performing enhancement filtering on the original blood vessel three-dimensional scanning image to obtain an enhanced filtered image;
determining the position of the aorta in the original blood vessel three-dimensional scanning image;
determining the starting regions of the left and right coronary arteries according to the position of the aorta, the original blood vessel three-dimensional scanning image, and the gray levels of the enhanced filtered image;
segmenting the enhanced filtered image based on at least one preset segmentation threshold to obtain a candidate region corresponding to the preset segmentation threshold;
and determining the candidate regions of the left and right coronary arteries according to the candidate regions corresponding to the preset segmentation threshold and the starting regions of the left and right coronary arteries.
2. The method of claim 1, further comprising:
processing the coronary arteries in the specified blood vessel image to remove regions of non-coronary arteries in each blood vessel branch.
3. The method of claim 2, wherein the processing of the coronary arteries in the specified blood vessel image comprises:
determining a centerline of the coronary artery;
determining a bifurcation point and an end point on a centerline of the coronary artery;
dividing the centerline into a plurality of segments according to the bifurcation points and the end points;
and dividing the specified blood vessel image into a non-coronary artery region and a coronary artery region according to the plurality of segments.
4. The method of claim 1, wherein the specified blood vessel is a coronary artery, the method further comprising:
processing the coronary arteries in the specified blood vessel image to remove non-coronary blood vessel points from the specified blood vessel image.
5. The method of claim 1, wherein, prior to the acquiring of the original blood vessel three-dimensional scanning image, the method further comprises:
training the neural network with positive and negative samples to obtain the trained neural network.
6. The method of claim 5, wherein the neural network comprises a convolutional layer, a pooling layer, a nonlinear mapping layer, a fully connected layer, and a classification layer, and wherein a classification probability value of the two-dimensional slice data can be determined by the trained neural network.
7. The method of claim 6, wherein the classification probability value comprises a probability that the two-dimensional slice data belongs to the specified blood vessel.
8. The method of claim 5, wherein the neural network employs a perceptual-memory-decision model.
9. A medical imaging device, characterized in that the device comprises:
a processor;
a memory for storing instructions executable by the processor;
the processor is configured to:
acquiring an original blood vessel three-dimensional scanning image;
determining a specified blood vessel candidate region from the original blood vessel three-dimensional scanning image;
dividing the specified blood vessel candidate region into a plurality of two-dimensional slice data;
inputting the plurality of two-dimensional slice data into a trained neural network for learning to obtain a learning result;
determining a specified blood vessel image in the specified blood vessel candidate region according to a plurality of learning results;
wherein the specified blood vessel candidate region comprises candidate regions of the left and right coronary arteries, and the processing of the original blood vessel three-dimensional scanning image comprises:
performing enhancement filtering on the original blood vessel three-dimensional scanning image to obtain an enhanced filtered image;
determining the position of the aorta in the original blood vessel three-dimensional scanning image;
determining the starting regions of the left and right coronary arteries according to the position of the aorta, the original blood vessel three-dimensional scanning image, and the gray levels of the enhanced filtered image;
segmenting the enhanced filtered image based on at least one preset segmentation threshold to obtain a candidate region corresponding to the preset segmentation threshold;
and determining the candidate regions of the left and right coronary arteries according to the candidate regions corresponding to the preset segmentation threshold and the starting regions of the left and right coronary arteries.
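The slice-sampling step recited in claim 1 — acquiring, at each sampling point on the centerline, two-dimensional slice data perpendicular to the centerline — can be illustrated with a minimal NumPy sketch. The helper names, the nearest-neighbour lookup, and the 5×5 patch size are illustrative assumptions for exposition, not part of the claimed method:

```python
import numpy as np

def orthonormal_basis(tangent):
    """Build two unit vectors spanning the plane perpendicular to `tangent`."""
    t = tangent / np.linalg.norm(tangent)
    # Pick a helper vector not parallel to t, then Gram-Schmidt it against t.
    helper = np.array([1.0, 0.0, 0.0]) if abs(t[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = helper - np.dot(helper, t) * t
    u /= np.linalg.norm(u)
    v = np.cross(t, u)
    return u, v

def sample_slice(volume, center, tangent, size=5, spacing=1.0):
    """Sample a size x size 2-D slice perpendicular to the centerline tangent,
    using nearest-neighbour interpolation into the 3-D volume."""
    u, v = orthonormal_basis(tangent)
    half = (size - 1) / 2.0
    slice_2d = np.zeros((size, size), dtype=volume.dtype)
    for i in range(size):
        for j in range(size):
            p = center + (i - half) * spacing * u + (j - half) * spacing * v
            idx = np.clip(np.round(p).astype(int), 0, np.array(volume.shape) - 1)
            slice_2d[i, j] = volume[tuple(idx)]
    return slice_2d
```

Repeating this at every sampling point along the centerline yields the stack of two-dimensional slice data that is fed to the classifier.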
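The centerline partitioning of claim 3 (bifurcation points, end points, and segments between them) amounts to cutting a centerline graph at nodes whose degree differs from two. A hedged sketch, assuming the centerline is supplied as an undirected, acyclic edge list (the representation is an assumption, not taken from the patent):

```python
from collections import defaultdict

def split_centerline(edges):
    """Split a centerline graph into segments bounded by end points (degree 1)
    and bifurcation points (degree >= 3)."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    # End points and bifurcation points are exactly the nodes of degree != 2.
    key_points = {n for n, nbrs in adj.items() if len(nbrs) != 2}
    segments, visited = [], set()
    for start in key_points:
        for nxt in adj[start]:
            if (start, nxt) in visited:
                continue
            seg, prev, cur = [start], start, nxt
            visited.update({(prev, cur), (cur, prev)})
            while cur not in key_points:           # walk along degree-2 chain
                seg.append(cur)
                nxt2 = next(n for n in adj[cur] if n != prev)
                visited.update({(cur, nxt2), (nxt2, cur)})
                prev, cur = cur, nxt2
            seg.append(cur)
            segments.append(seg)
    return key_points, segments
```

Each returned segment can then be kept or discarded to divide the specified blood vessel image into coronary and non-coronary regions.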
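Claim 6 lists the layer types of the network (convolutional, pooling, nonlinear mapping, fully connected, classification). A toy NumPy forward pass showing how such a stack maps a 2-D slice to the classification probability value of claim 7; the kernel, layer sizes, and weights here are random placeholders, not the trained network of the claims:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(img, kernel):
    """'Valid' 2-D convolution (cross-correlation, as in most CNN libraries)."""
    kh, kw = kernel.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool2(x):
    """2x2 max pooling with stride 2."""
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    return x[:h, :w].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def forward(slice_2d, kernel, w_fc, b_fc):
    feat = relu(max_pool2(conv2d(slice_2d, kernel)))  # conv -> pool -> nonlinear mapping
    logits = feat.ravel() @ w_fc + b_fc               # fully connected layer
    return softmax(logits)                            # classification layer
```

The second component of the output can be read as the probability that the slice belongs to the specified blood vessel; thresholding these probabilities along the centerline yields the specified blood vessel image.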
CN201710899166.5A 2017-09-28 2017-09-28 Image processing method and medical imaging device Active CN107563983B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710899166.5A CN107563983B (en) 2017-09-28 2017-09-28 Image processing method and medical imaging device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710899166.5A CN107563983B (en) 2017-09-28 2017-09-28 Image processing method and medical imaging device

Publications (2)

Publication Number Publication Date
CN107563983A CN107563983A (en) 2018-01-09
CN107563983B true CN107563983B (en) 2020-09-01

Family

ID=60982095

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710899166.5A Active CN107563983B (en) 2017-09-28 2017-09-28 Image processing method and medical imaging device

Country Status (1)

Country Link
CN (1) CN107563983B (en)

Families Citing this family (62)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108304845B (en) * 2018-01-16 2021-11-09 腾讯科技(深圳)有限公司 Image processing method, device and storage medium
CN108447046B (en) * 2018-02-05 2019-07-26 龙马智芯(珠海横琴)科技有限公司 The detection method and device of lesion, computer readable storage medium
CN110310315A (en) * 2018-03-21 2019-10-08 北京猎户星空科技有限公司 Network model training method, device and object pose determine method, apparatus
US10430949B1 (en) * 2018-04-24 2019-10-01 Shenzhen Keya Medical Technology Corporation Automatic method and system for vessel refine segmentation in biomedical images using tree structure based deep learning model
US11497478B2 (en) * 2018-05-21 2022-11-15 Siemens Medical Solutions Usa, Inc. Tuned medical ultrasound imaging
CN109035255B (en) * 2018-06-27 2021-07-02 东南大学 Method for segmenting aorta with interlayer in CT image based on convolutional neural network
EP3593722A1 (en) * 2018-07-13 2020-01-15 Neuroanalytics Pty. Ltd. Method and system for identification of cerebrovascular abnormalities
CN108932715B (en) * 2018-07-13 2022-06-07 北京红云智胜科技有限公司 Deep learning-based coronary angiography image segmentation optimization method
KR102250164B1 (en) * 2018-09-05 2021-05-10 에이아이메딕(주) Method and system for automatic segmentation of vessels in medical images using machine learning and image processing algorithm
CN109389606B (en) * 2018-09-30 2019-12-27 语坤(北京)网络科技有限公司 Coronary artery segmentation method and device
CN109272514B (en) * 2018-10-05 2021-07-13 数坤(北京)网络科技股份有限公司 Sample evaluation method and model training method of coronary artery segmentation model
CN109325948B (en) * 2018-10-09 2019-12-27 数坤(北京)网络科技有限公司 Coronary artery segmentation method and device based on special region optimization
CN109446951B (en) * 2018-10-16 2019-12-10 腾讯科技(深圳)有限公司 Semantic segmentation method, device and equipment for three-dimensional image and storage medium
CN109461495B (en) * 2018-11-01 2023-04-14 腾讯科技(深圳)有限公司 Medical image recognition method, model training method and server
CN111144163B (en) * 2018-11-02 2023-11-21 无锡祥生医疗科技股份有限公司 Vein and artery identification system based on neural network
CN111145137B (en) * 2018-11-02 2023-08-15 无锡祥生医疗科技股份有限公司 Vein and artery identification method based on neural network
WO2020087732A1 (en) * 2018-11-02 2020-05-07 无锡祥生医疗科技股份有限公司 Neural network-based method and system for vein and artery identification
CN111134727B (en) * 2018-11-02 2022-12-20 无锡祥生医疗科技股份有限公司 Puncture guiding system for vein and artery identification based on neural network
CN109523560A (en) * 2018-11-09 2019-03-26 成都大学 A kind of three-dimensional image segmentation method based on deep learning
CN109523547A (en) * 2018-12-21 2019-03-26 四川大学华西医院 Method and device for detecting image nodules
CN109840483B (en) * 2019-01-11 2020-09-11 深圳大学 Landslide crack detection and identification method and device
CN109978888B (en) * 2019-02-18 2023-07-28 平安科技(深圳)有限公司 Image segmentation method, device and computer readable storage medium
CN109872314B (en) * 2019-02-20 2021-04-16 数坤(北京)网络科技有限公司 Centerline-based optimal segmentation method and device
CN110136137A (en) * 2019-04-02 2019-08-16 成都真实维度科技有限公司 A method of angiosomes segmentation is carried out based on faulted scanning pattern data set
CN110148113A (en) * 2019-04-02 2019-08-20 成都真实维度科技有限公司 A kind of lesion target area information labeling method based on tomoscan diagram data
CN110148114A (en) * 2019-04-02 2019-08-20 成都真实维度科技有限公司 A kind of deep learning model training method based on 2D faulted scanning pattern data set
CN110148124A (en) * 2019-05-21 2019-08-20 中山大学 Throat recognition methods, device, system, storage medium and equipment
CN111754476B (en) * 2019-06-19 2024-07-19 科亚医疗科技股份有限公司 Method and system for quantitative modeling of diseases of anatomical tree structure
CN110428431B (en) * 2019-07-12 2022-12-16 广东省人民医院(广东省医学科学院) Method, device and equipment for segmenting cardiac medical image and storage medium
CN114072838A (en) * 2019-07-17 2022-02-18 西门子医疗有限公司 3D vessel centerline reconstruction from 2D medical images
US11200976B2 (en) 2019-08-23 2021-12-14 Canon Medical Systems Corporation Tracking method and apparatus
CN112446911A (en) * 2019-08-29 2021-03-05 阿里巴巴集团控股有限公司 Centerline extraction, interface interaction and model training method, system and equipment
CN110675444B (en) * 2019-09-26 2023-03-31 东软医疗系统股份有限公司 Method and device for determining head CT scanning area and image processing equipment
CN110796653B (en) * 2019-10-31 2022-08-30 北京市商汤科技开发有限公司 Image processing and neural network training method, device, equipment and medium
CN110991339B (en) * 2019-12-02 2023-04-28 太原科技大学 Three-dimensional palate wrinkle identification method adopting cyclic frequency spectrum
CN111312374B (en) * 2020-01-21 2024-03-22 上海联影智能医疗科技有限公司 Medical image processing method, medical image processing device, storage medium and computer equipment
CN111681226B (en) * 2020-06-09 2024-07-12 上海联影医疗科技股份有限公司 Target tissue positioning method and device based on blood vessel identification
CN111627023B (en) * 2020-04-27 2021-02-09 数坤(北京)网络科技有限公司 Method and device for generating coronary artery projection image and computer readable medium
CN111681211B (en) * 2020-05-18 2024-03-08 东软医疗系统股份有限公司 Vascular image processing method and device
WO2021114636A1 (en) * 2020-05-29 2021-06-17 平安科技(深圳)有限公司 Multimodal data-based lesion classification method, apparatus, device, and storage medium
CN113516753B (en) * 2020-06-01 2022-10-21 阿里巴巴集团控股有限公司 Image processing method, device and equipment
CN111815583B (en) * 2020-06-29 2022-08-05 苏州润迈德医疗科技有限公司 Method and system for obtaining aorta centerline based on CT sequence image
EP4174760A4 (en) * 2020-06-29 2024-07-10 Suzhou Rainmed Medical Tech Co Ltd Aorta obtaining method based on deep learning, and storage medium
CN112614141B (en) * 2020-12-18 2023-09-19 深圳市德力凯医疗设备股份有限公司 Vascular scanning path planning method and device, storage medium and terminal equipment
CN112734907B (en) * 2020-12-30 2022-07-08 华东师范大学 Ultrasonic or CT medical image three-dimensional reconstruction method
CN113239992B (en) * 2021-04-28 2024-05-07 深圳睿心智能医疗科技有限公司 Blood vessel classification method and device
CN113256564B (en) * 2021-04-28 2024-03-01 深圳睿心智能医疗科技有限公司 Catheter parameter extraction method and device in medical image
CN113177928B (en) * 2021-05-18 2022-05-17 数坤(北京)网络科技股份有限公司 Image identification method and device, electronic equipment and storage medium
CN113744215B (en) * 2021-08-24 2024-05-31 清华大学 Extraction method and device for central line of tree-shaped lumen structure in three-dimensional tomographic image
CN113763543B (en) * 2021-09-17 2024-07-09 北京理工大学 Three-dimensional voxel structure-based vascular reconstruction method, three-dimensional voxel structure-based vascular reconstruction evaluation method and three-dimensional voxel structure-based vascular reconstruction system
CN113954360A (en) * 2021-10-25 2022-01-21 华南理工大学 3D printing product anti-counterfeiting method based on embedded identification code multi-process application
CN114041761B (en) * 2021-10-27 2022-12-09 北京医准智能科技有限公司 Method, device and computer readable medium for judging origin of coronary artery
CN113974667B (en) * 2021-11-02 2024-06-28 东北大学 Automatic positioning device and method for TAVI preoperative key target
CN114119602B (en) * 2021-12-20 2022-04-15 深圳科亚医疗科技有限公司 Method, apparatus and storage medium for object analysis of medical images
CN114359205B (en) * 2021-12-29 2022-11-01 推想医疗科技股份有限公司 Head and neck blood vessel analysis method and device, storage medium and electronic equipment
CN114693648B (en) * 2022-04-02 2024-07-05 深圳睿心智能医疗科技有限公司 Blood vessel center line extraction method and system
CN114998582A (en) * 2022-05-10 2022-09-02 深圳市第二人民医院(深圳市转化医学研究院) Coronary artery blood vessel segmentation method, device and storage medium
CN115049590B (en) * 2022-05-17 2023-03-10 北京医准智能科技有限公司 Image processing method and device, electronic equipment and storage medium
CN115019014B (en) * 2022-06-15 2024-05-31 北京理工大学 Four-dimensional vascular reconstruction method and mechanical calculation method
CN114862850B (en) * 2022-07-06 2022-09-20 深圳科亚医疗科技有限公司 Target detection method, device and medium for blood vessel medical image
CN115588012B (en) * 2022-12-13 2023-04-07 四川大学 Pelvic artery blood vessel segmentation method, system, storage medium and terminal
CN116862877A (en) * 2023-07-12 2023-10-10 新疆生产建设兵团医院 Scanning image analysis system and method based on convolutional neural network

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5305821B2 (en) * 2008-10-10 2013-10-02 株式会社東芝 Medical image processing apparatus and medical image diagnostic apparatus
CN102521873B (en) * 2011-11-22 2014-03-05 中国科学院深圳先进技术研究院 Blood vessel modeling method
US8958618B2 (en) * 2012-06-28 2015-02-17 Kabushiki Kaisha Toshiba Method and system for identification of calcification in imaged blood vessels
CN103961135B (en) * 2013-02-04 2017-04-12 通用电气公司 System and method for detecting guide pipe position in three-dimensional ultrasonic image

Also Published As

Publication number Publication date
CN107563983A (en) 2018-01-09

Similar Documents

Publication Publication Date Title
CN107563983B (en) Image processing method and medical imaging device
WO2020001217A1 (en) Segmentation method for dissected aorta in ct image based on convolutional neural network
CN110706246B (en) Blood vessel image segmentation method and device, electronic equipment and storage medium
CN111008984B (en) Automatic contour line drawing method for normal organ in medical image
US11562491B2 (en) Automatic pancreas CT segmentation method based on a saliency-aware densely connected dilated convolutional neural network
CN115661467B (en) Cerebrovascular image segmentation method, device, electronic equipment and storage medium
CN111951277A (en) Coronary artery segmentation method based on CTA image
CN112308846B (en) Blood vessel segmentation method and device and electronic equipment
CN111047607B (en) Method for automatically segmenting coronary artery
CN115908297A (en) Topology knowledge-based blood vessel segmentation modeling method in medical image
CN113160120A (en) Liver blood vessel segmentation method and system based on multi-mode fusion and deep learning
KR101625955B1 (en) Method of classifying artery and vein of organ
CN111080556A (en) Method, system, equipment and medium for strengthening trachea wall of CT image
CN114693622A (en) Plaque erosion automatic detection system based on artificial intelligence
Ebrahimdoost et al. Automatic segmentation of pulmonary artery (PA) in 3D pulmonary CTA images
Nardelli et al. Deep-learning strategy for pulmonary artery-vein classification of non-contrast CT images
CN111986216B (en) RSG liver CT image interactive segmentation algorithm based on neural network improvement
KR101442728B1 (en) Method of classifying pulmonary arteries and veins
Zrira et al. Automatic and Fast Whole Heart Segmentation for 3D Reconstruction
Luo et al. Extraction of brain vessels from magnetic resonance angiographic images: Concise literature review, challenges, and proposals
Luo et al. Recent progresses on cerebral vasculature segmentation for 3D quantification and visualization of MRA
CN112446893A (en) Contour segmentation method and device for liver image
CN113838036B (en) Coronary artery segmentation method based on local clustering and filtering
Cui Supervised Filter Learning for Coronary Artery Vesselness Enhancement Diffusion in Coronary CT Angiography Images
CN116228916B (en) Image metal artifact removal method, system and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 201807 Shanghai city Jiading District Industrial Zone Jiading Road No. 2258

Patentee after: Shanghai Lianying Medical Technology Co., Ltd

Address before: 201807 Shanghai city Jiading District Industrial Zone Jiading Road No. 2258

Patentee before: SHANGHAI UNITED IMAGING HEALTHCARE Co.,Ltd.