CN114359208B - Head and neck blood vessel segmentation method and device, readable storage medium and electronic equipment - Google Patents

Head and neck blood vessel segmentation method and device, readable storage medium and electronic equipment

Info

Publication number
CN114359208B
Authority
CN
China
Prior art keywords
blood vessel
head
segmentation
neck
rough
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111646781.8A
Other languages
Chinese (zh)
Other versions
CN114359208A (en)
Inventor
王瑜
张欢
王少康
陈宽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Infervision Medical Technology Co Ltd
Original Assignee
Infervision Medical Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Infervision Medical Technology Co Ltd filed Critical Infervision Medical Technology Co Ltd
Priority to CN202111646781.8A priority Critical patent/CN114359208B/en
Publication of CN114359208A publication Critical patent/CN114359208A/en
Application granted granted Critical
Publication of CN114359208B publication Critical patent/CN114359208B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The application provides a head and neck blood vessel segmentation method and device, a computer-readable storage medium and electronic equipment, and relates to the field of medical image segmentation. The head and neck blood vessel segmentation method comprises the following steps: determining a rough blood vessel centerline in a head and neck blood vessel image; growing the rough blood vessel centerline with a growth network using a first threshold to obtain a main blood vessel; determining the centerline of the blood vessel still to be grown among the rough blood vessel centerlines, based on the rough blood vessel centerline and the centerline corresponding to the main blood vessel; growing the centerline of the blood vessel to be grown with the growth network using a second threshold to obtain branch blood vessels; and determining a head and neck blood vessel segmentation result of the head and neck blood vessel image based on the main blood vessel and the branch blood vessels. Because the method grows the main blood vessel and the branch blood vessels in sequence according to the characteristics of the carotid blood vessels, the result is more accurate, the accuracy of head and neck blood vessel segmentation is effectively improved, and the problems of incomplete and inaccurate head and neck blood vessel segmentation are solved.

Description

Head and neck blood vessel segmentation method and device, readable storage medium and electronic equipment
Technical Field
The application relates to the field of medical image segmentation, in particular to a head and neck blood vessel segmentation method and device, a computer-readable storage medium and electronic equipment.
Background
In recent years, cerebrovascular diseases have shown a trend toward younger patients and carry a high fatality rate, which makes them of great importance in both clinical practice and scientific research. For example, head and neck blood vessel segmentation is highly significant for assisting doctors in diagnosing and treating head and neck vascular diseases. Compared with other vessel segmentation tasks, head and neck blood vessels have unique features. Intracranial vessels in particular are not only extremely fine but are also subject to reflux, and these factors make head and neck vessel segmentation very challenging.
In the prior art, head and neck blood vessel segmentation often still requires manual participation, which makes segmentation inefficient. Existing techniques for automatically segmenting head and neck blood vessels suffer from low segmentation precision, with problems such as incompletely segmented vessels or non-vessel regions being identified as vessels.
Disclosure of Invention
The present application is proposed to solve the above-mentioned technical problems. The embodiment of the application provides a head and neck blood vessel segmentation method and device, a computer readable storage medium and electronic equipment.
In a first aspect, an embodiment of the present application provides a head and neck blood vessel segmentation method, including: determining a rough blood vessel centerline in a head and neck blood vessel image; growing the rough blood vessel centerline with a growth network using a first threshold to obtain a main blood vessel; determining the centerline of the blood vessel to be grown among the rough blood vessel centerlines, based on the rough blood vessel centerline and the centerline corresponding to the main blood vessel; growing the centerline of the blood vessel to be grown with the growth network using a second threshold to obtain a branch blood vessel; and determining a head and neck blood vessel segmentation result of the head and neck blood vessel image based on the main blood vessel and the branch blood vessel.
With reference to the first aspect, in certain implementations of the first aspect, determining the head and neck blood vessel segmentation result of the head and neck blood vessel image based on the main blood vessel and the branch blood vessel includes: dividing the region corresponding to the main blood vessel into at least one trunk block region, and dividing the region corresponding to the branch blood vessel into at least one branch block region; segmenting the at least one trunk block region and the at least one branch block region respectively by using a first segmentation network to obtain segmentation results for the at least one trunk block region and the at least one branch block region; and determining the head and neck blood vessel segmentation result based on the segmentation results of the at least one trunk block region and the at least one branch block region.
With reference to the first aspect, in certain implementations of the first aspect, growing the rough blood vessel centerline with the growth network using the first threshold to obtain the main blood vessel includes: determining a plurality of blood vessel feature classification results for the rough blood vessel centerline by using a classification network model; determining the maximum connected region of each of the plurality of blood vessel feature classification results; and growing the maximum connected regions of the plurality of blood vessel feature classification results with the growth network using the first threshold to obtain the main blood vessel.
With reference to the first aspect, in certain implementations of the first aspect, determining the rough blood vessel centerline in the head and neck blood vessel image includes: determining seed points of the rough blood vessel in the head and neck blood vessel image; and determining the rough blood vessel centerline based on the seed points of the rough blood vessel.
With reference to the first aspect, in certain implementations of the first aspect, determining the seed points of the rough blood vessel in the head and neck blood vessel image includes: segmenting the head and neck blood vessel image by using a second segmentation network to obtain a rough blood vessel segmentation region; and obtaining the seed points of the rough blood vessel based on the rough blood vessel segmentation region.
With reference to the first aspect, in certain implementations of the first aspect, determining the rough blood vessel centerline in the head and neck blood vessel image includes: roughly segmenting the head and neck blood vessel image by using a third segmentation network to obtain a rough blood vessel; and obtaining the rough blood vessel centerline based on the rough blood vessel.
With reference to the first aspect, in certain implementations of the first aspect, roughly segmenting the head and neck blood vessel image by using the third segmentation network to obtain the rough blood vessel includes: performing window width and/or window level adjustment on the head and neck blood vessel image to obtain an adjusted image corresponding to the head and neck blood vessel image; compressing the adjusted image to obtain a compressed image corresponding to the adjusted image; and roughly segmenting the compressed image by using the third segmentation network to obtain the rough blood vessel.
In a second aspect, an embodiment of the present application provides a head and neck blood vessel segmentation apparatus, including: a first determination module configured to determine a rough blood vessel centerline in a head and neck blood vessel image; a first growing module configured to grow the rough blood vessel centerline with a growth network using a first threshold to obtain a main blood vessel; a second determination module configured to determine, among the rough blood vessel centerlines, the centerline of the blood vessel to be grown, based on the rough blood vessel centerline and the centerline corresponding to the main blood vessel; a second growing module configured to grow the centerline of the blood vessel to be grown with the growth network using a second threshold to obtain a branch blood vessel; and a third determination module configured to determine a head and neck blood vessel segmentation result of the head and neck blood vessel image based on the main blood vessel and the branch blood vessel.
In a third aspect, an embodiment of the present application provides an electronic device, including: a processor; a memory for storing processor-executable instructions; the processor is configured to perform the head and neck blood vessel segmentation method mentioned in the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where a computer program is stored, where the computer program is configured to execute the method for head and neck blood vessel segmentation mentioned in the first aspect.
According to the head and neck blood vessel segmentation method provided by the embodiments of the application, the main blood vessel and the branch blood vessels are grown in sequence according to the characteristics of the carotid blood vessels. Compared with uniform growth, growing hierarchically with different thresholds produces a more accurate result, effectively improves the accuracy of head and neck blood vessel segmentation, and alleviates the problems of incomplete and inaccurate head and neck blood vessel segmentation.
Drawings
Fig. 1 is a schematic flow chart of a head and neck blood vessel segmentation method according to an embodiment of the present application.
Fig. 2 is a schematic flow chart illustrating a head and neck blood vessel segmentation result for determining a head and neck blood vessel image based on a main blood vessel and a branch blood vessel according to an embodiment of the present application.
Fig. 3 is a schematic flow chart illustrating a process of growing a rough blood vessel centerline to obtain a main blood vessel based on a first threshold value of a growth network according to an embodiment of the present application.
Fig. 4 is a schematic flowchart illustrating a process of determining a rough blood vessel centerline in a head and neck blood vessel image according to an embodiment of the present application.
Fig. 5 is a schematic flowchart illustrating a process of determining a seed point of a rough blood vessel in a head and neck blood vessel image according to an embodiment of the present application.
Fig. 6 is a schematic flowchart illustrating a process of determining a rough blood vessel centerline in a head and neck blood vessel image according to another embodiment of the present application.
Fig. 7 is a schematic flow chart illustrating a process of roughly segmenting a head and neck blood vessel image by using a third segmentation network to obtain a rough blood vessel according to an embodiment of the present application.
Fig. 8 is a schematic structural diagram of a head and neck blood vessel segmentation apparatus according to an embodiment of the present application.
Fig. 9 is a schematic structural diagram of a third determining module according to an embodiment of the present application.
Fig. 10 is a schematic structural diagram of a first growth module according to an embodiment of the present disclosure.
Fig. 11 is a schematic structural diagram of a first determining module according to an embodiment of the present application.
Fig. 12 is a schematic structural diagram of a first determining module according to another embodiment of the present application.
Fig. 13 is a schematic structural diagram of an electronic device according to an embodiment of the application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Since the embodiments of the present application relate to applications in medical image segmentation and deep learning, for ease of understanding, related terms and deep learning related concepts that may be involved in the embodiments of the present application are briefly described below.
(1) Digital Radiography: Digital Radiography (DR) is a relatively new digital imaging technology. Its applications are similar to those of Computed Radiography (CR), but its basic principle and structure are different. DR was developed on the basis of digital fluorography (DF); it uses an image intensifier tube as the information carrier, receives the X-ray information transmitted through the human body, collects it with a video camera, and converts it into digital signals. In contrast to CR, which, apart from the difference in information carrier, can acquire information with an imaging plate on any X-ray imaging device, DR cannot be used with other devices and requires dedicated equipment. Like CR, DR can perform various kinds of image post-processing, and can transmit and store images.
(2) Computed Tomography (CT): CT uses precisely collimated X-ray beams, gamma rays, ultrasonic waves, and the like, together with highly sensitive detectors, to scan sections of a certain part of the human body one by one. It has the characteristics of short scanning time and clear images, and can be used for the examination of various diseases. According to the radiation used, it can be classified into X-ray CT (X-CT) and gamma-ray CT (gamma-CT).
(3) CT Angiography (CTA): CT angiography is a non-invasive angiographic technique synthesized by computer three-dimensional reconstruction. It uses the fast scanning technique of spiral CT to complete cross-sectional scanning within a certain range in a short time, that is, while the contrast agent is still concentrated in the blood vessels. The acquired image data are then sent to an image workstation or to the image reconstruction function of the CT machine for image reconstruction. The reconstruction technique generally adopts Maximum Intensity Projection (MIP) or volume rendering (VR); by adjusting the image display threshold, a continuous and clear blood vessel image without the surrounding tissue structures can be obtained. If an appropriate reconstruction method and display threshold are selected, a three-dimensional image showing both the blood vessels and the tissue structures can be obtained, which can be observed from any angle and cut in any direction with computer software.
The advantage of CTA is that it is a non-invasive angiographic technique: it requires no puncture or vessel cannulation, and although contrast agent must be injected, the risk is minimal and there are few complications other than adverse reactions to the contrast agent. CTA can also show the relationship between blood vessels and the surrounding tissues or lesions while showing the condition of the blood vessels, which ordinary angiography cannot achieve. CTA has its drawbacks, however, such as unclear display of small blood vessels, occasional image reconstruction artifacts, and the lack of continuous dynamic display of arteries and veins.
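As a rough illustration of the MIP reconstruction mentioned above, the sketch below projects a CT volume by keeping the brightest voxel along one axis with NumPy; the toy volume and the axis choice are illustrative assumptions, not part of the patent.

```python
import numpy as np

def maximum_intensity_projection(volume: np.ndarray, axis: int = 0) -> np.ndarray:
    """Project a 3D CT volume onto a plane by keeping, for each ray,
    the brightest voxel along the chosen axis (the idea behind MIP)."""
    return volume.max(axis=axis)

# Toy example: a 3-slice stack of 4x4 "HU" values.
volume = np.random.randint(-1024, 1024, size=(3, 4, 4))
mip = maximum_intensity_projection(volume, axis=0)  # 4x4 projection image
```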
(4) Deep Learning (DL): Deep learning is a branch of machine learning that implements artificial intelligence in computing systems by building Artificial Neural Networks (ANNs) with a hierarchical structure. Because a hierarchical ANN can extract and filter input information layer by layer, deep learning has the ability of Representation Learning and can realize end-to-end supervised and unsupervised learning. The hierarchical ANNs used for deep learning take various forms, and the complexity of the hierarchy is commonly referred to as "depth"; by structure, deep learning models include multilayer perceptrons, convolutional neural networks, recurrent neural networks, deep belief networks, and other hybrid architectures. Deep learning updates the parameters of its model with data to reach a training goal, a process commonly referred to as "learning"; common learning methods are the gradient descent algorithm and its variants, and some statistical learning theory is used to optimize the learning process. In application, deep learning is used to learn from complex, high-dimensional, large-sample data; by research field, it covers computer vision, natural language processing, bioinformatics, automatic control, and so on, and it has been successful in real-world problems such as face recognition, machine translation, and autonomous driving. Deep learning provides a way for computers to learn pattern features automatically and integrates feature learning into the model-building process, which reduces the incompleteness caused by hand-designed features.
(5) Convolutional Neural Networks (CNN): Convolutional neural networks are a class of feed-forward neural networks that contain convolutional computations and have a deep structure, and are one of the representative algorithms of deep learning. Convolutional neural networks imitate the visual perception mechanism of living organisms and can perform both supervised and unsupervised learning. The sharing of convolution kernel parameters within hidden layers and the sparsity of connections between layers enable a convolutional neural network to learn grid-like features, such as pixels and audio, with a small amount of computation, stable results, and no additional feature engineering requirements on the data.
(6) Image Segmentation: Segmentation includes semantic segmentation and instance segmentation. Semantic segmentation is an extension of foreground-background separation that requires separating image regions with different semantics. Instance segmentation is an extension of the detection task that requires delineating the target outline (finer than a detection box). Segmentation is a pixel-level description of an image; it gives each pixel a class meaning and is suitable for scenes with higher understanding requirements, such as separating road from non-road regions in autonomous driving.
Medical image segmentation is a complex and key step in medical image processing and analysis. Its purpose is to segment the parts of a medical image that have special meaning and to extract relevant features, providing a reliable basis for clinical diagnosis and pathological research and assisting doctors in making more accurate diagnoses.
In the field of medical image segmentation, head and neck vessel segmentation has its own particularities compared with other vessel segmentation tasks. Intracranial vessels in particular are not only extremely fine but are also subject to reflux, and these factors make head and neck vessel segmentation very challenging.
In the prior art, head and neck blood vessel segmentation often still requires manual participation, which makes segmentation inefficient. Existing techniques for automatically segmenting head and neck blood vessels suffer from low segmentation precision, with problems such as incompletely segmented vessels or non-vessel regions being identified as vessels.
To solve the above technical problems, embodiments of the present application provide a head and neck blood vessel segmentation method and apparatus, a computer-readable storage medium, and an electronic device. The head and neck blood vessel segmentation method grows the main blood vessel and the branch blood vessels in sequence according to the characteristics of the carotid blood vessels. Compared with uniform growth, growing hierarchically with different thresholds produces a more accurate result, effectively improves the accuracy of head and neck blood vessel segmentation, and alleviates the problems of incomplete and inaccurate segmentation.
The head and neck blood vessel segmentation method according to the embodiment of the present application will be described in detail below with reference to fig. 1 to 7.
Fig. 1 is a schematic flow chart of a head and neck blood vessel segmentation method according to an embodiment of the present application. As shown in fig. 1, a head and neck blood vessel segmentation method provided in an embodiment of the present application includes the following steps.
Step S100, determining a rough blood vessel central line in the head and neck blood vessel image.
Illustratively, the head and neck blood vessel image is a CTA image, a CTA image sequence, or a CTA volume of the head and neck blood vessels, but it is not limited to CTA images; it may also be a cranial Magnetic Resonance Angiography (MRA) image sequence, a Transcranial Doppler (TCD) ultrasound image, or a Digital Subtraction Angiography (DSA) image.
A blood vessel is a kind of tubular object. The vessel centerline can be used to calculate the vessel diameter, and also for three-dimensional reconstruction of a vessel segment or vessel tree and for planning the navigation path of an interventional operation. Centerline extraction can currently be performed with topology-thinning methods, tracking-based methods, shortest-path methods, or distance-transform-based methods, among others.
Illustratively, the rough vessel centerline may be a rough segmentation result obtained by segmenting the vessels in the head and neck blood vessel image with an existing technique, where the vessel segmentation result is presented or stored in the form of a centerline. Here "rough" may also be described as "approximate" or "initial". The rough vessel centerline may also refer to the result of extracting the vessel centerline from the head and neck blood vessel image with an existing centerline extraction technique.
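As one possible illustration of the topology-thinning route to a rough centerline, the sketch below skeletonizes a binary vessel mask with scikit-image and returns the skeleton coordinates; the toy mask and the choice of `skeletonize` are assumptions for illustration, not the patent's concrete implementation (older scikit-image versions expose `skeletonize_3d` for 3D masks).

```python
import numpy as np
from skimage.morphology import skeletonize  # handles 3D masks in recent scikit-image

def rough_centerline(vessel_mask: np.ndarray) -> np.ndarray:
    """Topology-thinning view of a rough centerline: reduce a binary vessel mask
    to a one-voxel-wide skeleton and return its voxel coordinates."""
    skeleton = skeletonize(vessel_mask.astype(bool))
    return np.argwhere(skeleton)          # (N, 3) voxel coordinates on the centerline

# Toy mask: a straight "vessel" running through a small volume.
mask = np.zeros((16, 16, 16), dtype=bool)
mask[:, 7:9, 7:9] = True
centerline_points = rough_centerline(mask)
```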
And S200, growing the rough blood vessel center line by adopting a first threshold value based on a growth network to obtain a main blood vessel.
Illustratively, the growth network may be a directional growth network or a region growth network, such as the directional growth network SkResNeXt3DWHint. Because points on the main vessel in the image have a higher confidence in the growth network model, the rough vessel centerline is grown with the first threshold (a higher threshold) so that the main vessel can grow completely, and the unique main vessel is obtained by using the connected region.
Step S300, determining the centerline of the blood vessel to be grown among the rough blood vessel centerlines, based on the rough blood vessel centerline and the centerline corresponding to the main blood vessel.
The centerline corresponding to the main vessel can be determined through step S200. After the centerline corresponding to the main vessel is removed from the rough vessel centerline, the remaining centerline of the vessel to be grown is the centerline of the branch vessels.
And S400, growing the central line of the blood vessel to be grown by adopting a second threshold value based on the growth network to obtain a branch blood vessel.
Points on the branch vessels in the image have a lower confidence in the growth network model, so the centerline of the vessel to be grown is grown with the second threshold (a lower threshold compared with the first threshold) to obtain the branch vessels.
And step S500, determining a head and neck blood vessel segmentation result of the head and neck blood vessel image based on the main blood vessel and the branch blood vessel.
The main blood vessel and the branch blood vessels are obtained through the two growths, and the head and neck blood vessel segmentation result of the head and neck blood vessel image is determined from them. The carotid blood vessels lie particularly close to the skull, and with ordinary growth part of the skull is easily identified as blood vessel. In this embodiment, different thresholds are used for growth at different levels, which effectively prevents the skull from being mistakenly identified as a blood vessel.
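A minimal sketch of the two-threshold, two-pass growth described above is given below, assuming the growth network is replaced by a generic per-voxel confidence map and the growth itself by a simple connectivity test; the function names, the thresholds 0.8 and 0.4, and the use of `scipy.ndimage.label` are illustrative assumptions, not the patent's actual growth network.

```python
import numpy as np
from scipy import ndimage

def grow_from_points(confidence: np.ndarray, points: np.ndarray, threshold: float) -> np.ndarray:
    """Keep the above-threshold voxels connected to at least one seed point.
    `confidence` stands in for the per-voxel score of the growth network."""
    candidate = confidence >= threshold
    labels, _ = ndimage.label(candidate)
    seed_labels = {int(labels[tuple(p)]) for p in points} - {0}
    if not seed_labels:
        return np.zeros_like(candidate)
    return np.isin(labels, list(seed_labels))

def hierarchical_growth(confidence, centerline_points, t_main=0.8, t_branch=0.4):
    # First growth: higher threshold, so only the high-confidence trunk grows fully.
    # (A largest-connected-region step, as sketched under step S220 below, can then
    # keep the unique trunk.)
    main_vessel = grow_from_points(confidence, centerline_points, t_main)
    # Remove centerline points already covered by the trunk; the rest are branch centerlines.
    to_grow = np.array([p for p in centerline_points if not main_vessel[tuple(p)]])
    if len(to_grow) == 0:
        return main_vessel, np.zeros_like(main_vessel)
    # Second growth: lower threshold for the thinner, lower-confidence branches.
    branch_vessel = grow_from_points(confidence, to_grow, t_branch)
    return main_vessel, branch_vessel
```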
The main vessel and the branch vessels obtained through the two growths together form a complete vessel topology. The topology can be stored in the form of centerlines, or stored and displayed as the whole complete vessel. The head and neck blood vessel segmentation result can be overlaid on the original image for display, or only the segmentation result can be displayed at the corresponding positions in an image of the same size as the original image.
If the head and neck blood vessel segmentation result is displayed with fully automatic three-dimensional visualization in practical applications, the method can also, by adjusting the image display threshold, show only continuous and clear vessel shadows without the surrounding tissue structures, or show a three-dimensional image of both the vessels and the tissue structures. Computer software can be used to observe the segmented image from any angle and to cut it in any direction, so that the head and neck blood vessel segmentation result is more accurate and assists doctors in making a more accurate diagnosis.
According to the head and neck blood vessel segmentation method provided by this embodiment, the main blood vessel and the branch blood vessels are grown in sequence according to the characteristics of the carotid blood vessels. Compared with uniform growth, growing hierarchically with different thresholds produces a more accurate result, effectively improves the accuracy of head and neck blood vessel segmentation, and alleviates the problems of incomplete and inaccurate segmentation.
Fig. 2 is a schematic flow chart illustrating a head and neck blood vessel segmentation result determining a head and neck blood vessel image based on a main blood vessel and a branch blood vessel according to an embodiment of the present application. The embodiment shown in fig. 2 is extended based on the embodiment shown in fig. 1, and the differences between the embodiment shown in fig. 2 and the embodiment shown in fig. 1 will be emphasized below, and the descriptions of the same parts will not be repeated.
As shown in fig. 2, in the embodiment of the present application, determining a head and neck blood vessel segmentation result of a head and neck blood vessel image based on a main blood vessel and a branch blood vessel includes the following steps.
Step S510, dividing the region corresponding to the main blood vessel into at least one trunk block region, and dividing the region corresponding to the branch blood vessel into at least one branch block region.
Illustratively, a rectangular block of size X × Y is determined (the length and width of the rectangle are each smaller than the length and width of the image). On the basis of the determined main vessel, the region corresponding to the main vessel is divided along the main vessel into at least one trunk block region according to the X × Y rectangle. Similarly, the region corresponding to the branch vessels is divided into at least one branch block region.
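A possible sketch of cutting block regions along the centerline is shown below; the block size, stride, and 3D patching are assumptions for illustration rather than the values used in the patent.

```python
import numpy as np

def blocks_along_centerline(image: np.ndarray, centerline: np.ndarray,
                            block: tuple = (64, 64, 64), stride: int = 20):
    """Cut fixed-size block regions centred on every `stride`-th centerline point.
    The image is assumed to be larger than the block in every dimension."""
    half = np.array(block) // 2
    patches, origins = [], []
    for point in centerline[::stride]:
        lo = np.clip(point - half, 0, np.array(image.shape) - np.array(block))
        hi = lo + np.array(block)
        patches.append(image[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]])
        origins.append(lo)
    return patches, origins
```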
Step S520, the first segmentation network is utilized to segment the at least one trunk block region and the at least one branch block region, respectively, so as to obtain respective segmentation results of the at least one trunk block region and the at least one branch block region.
Step S530, determining a head and neck blood vessel segmentation result based on the segmentation result of each of the at least one trunk block region and the at least one branch block region.
Illustratively, the main vessel and the branch vessels are stored and displayed in the form of centerlines. After at least one trunk block region around the centerline of the main vessel and at least one branch block region around the centerlines of the branch vessels are obtained, the vessels in the at least one trunk block region and the at least one branch block region are segmented with a first segmentation network (e.g., a neural network such as an improved SkSegNetwork) to obtain the head and neck blood vessel segmentation result. The segmentation result can be put back into the original image (the head and neck blood vessel image) for storage and display, or displayed only at the corresponding positions in an image of the same size as the original image.
According to the head and neck blood vessel segmentation method provided by this embodiment, the vessel is divided into several block regions along the vessel centerline for segmentation, and compared with applying an ordinary segmentation network to the whole image, the vessel boundaries obtained in this way are finer. Meanwhile, according to the characteristic that the density of points on the vessel follows a Gaussian distribution along the centerline, the loss function of the first segmentation network is improved so that the loss weight increases with the distance from the centerline, which improves the robustness of the model and further improves the segmentation precision.
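The idea of a loss that grows with the distance from the centerline can be sketched, for example, as a distance-weighted binary cross-entropy; the linear weight `1 + alpha * d` and the NumPy formulation are assumptions and not the patent's actual loss for SkSegNetwork.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def centerline_weighted_bce(pred: np.ndarray, target: np.ndarray,
                            centerline_mask: np.ndarray, alpha: float = 0.1) -> float:
    """Binary cross-entropy whose per-voxel weight grows with the distance to the
    vessel centerline, mirroring the idea that errors far from the centerline cost more."""
    # Distance of every voxel to the nearest centerline voxel.
    distance = distance_transform_edt(~centerline_mask.astype(bool))
    weight = 1.0 + alpha * distance          # assumed linear weighting
    eps = 1e-7
    bce = -(target * np.log(pred + eps) + (1 - target) * np.log(1 - pred + eps))
    return float(np.mean(weight * bce))
```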
Fig. 3 is a schematic flow chart illustrating a process of growing a rough blood vessel centerline to obtain a main blood vessel based on a first threshold value of a growth network according to an embodiment of the present application. The embodiment shown in fig. 3 is extended based on the embodiment shown in fig. 1, and the differences between the embodiment shown in fig. 3 and the embodiment shown in fig. 1 will be emphasized below, and the descriptions of the same parts will not be repeated.
As shown in fig. 3, in the embodiment of the present application, growing the rough blood vessel centerline by using a first threshold based on the growth network to obtain the main blood vessel includes the following steps.
Step S210, a plurality of blood vessel characteristic classification results of the rough blood vessel central line are determined by utilizing a classification network model.
The blood vessel features refer to the inner and outer wall size of the vessel, the vessel curvature, the vessel course, the vessel diameter (vessel thickness), the length of the vessel centerline, the number of points on the vessel centerline, and so on. Illustratively, the classification network model is a ResUnet model or an improved ResUnet model. It can be understood that the classification network model classifies the rough vessel centerlines according to the vessel features. For example, if a first centerline length threshold and a second, different centerline length threshold are preset, the centerlines corresponding to the rough vessels are divided into three classes according to the two thresholds, and the corresponding rough vessels fall into three classes; for instance, the vessels whose rough centerline is longer than the first length threshold are main vessels.
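The length-threshold example in the previous paragraph can be written, purely for illustration, as the following rule; in the patent the classification is actually performed by the classification network model (e.g., ResUnet), so this sketch only mirrors the worked example.

```python
def classify_centerlines_by_length(centerlines, first_len_threshold, second_len_threshold):
    """Mirror of the worked example: split rough centerlines into three classes
    by length (first_len_threshold > second_len_threshold is assumed)."""
    main, middle, small = [], [], []
    for line in centerlines:          # each `line` is a sequence of centerline points
        if len(line) > first_len_threshold:
            main.append(line)         # longer than the first threshold: main vessel
        elif len(line) > second_len_threshold:
            middle.append(line)
        else:
            small.append(line)
    return main, middle, small
```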
Step S220, determining a maximum connected region of each of the plurality of blood vessel feature classification results.
Illustratively, the plurality of blood vessel feature classification results correspond to a plurality of vessel sets; different post-processing is applied to the vessel sets of different classes, and the maximum connected region of each vessel set is determined.
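A common way to realize this post-processing step, kept here as an assumption-level sketch rather than the patent's exact procedure, is to label each class's binary mask and keep its largest connected component:

```python
import numpy as np
from scipy import ndimage

def largest_connected_region(mask: np.ndarray) -> np.ndarray:
    """Keep only the largest connected component of one class's binary mask."""
    labels, n = ndimage.label(mask)
    if n == 0:
        return np.zeros_like(mask, dtype=bool)
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)

def max_regions_per_class(class_masks):
    # One maximum connected region per classification result (per vessel set).
    return [largest_connected_region(m) for m in class_masks]
```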
Step S230, growing the respective maximum connected regions of the multiple blood vessel feature classification results based on the growth network by using a first threshold, so as to obtain a main blood vessel.
Illustratively, when growing with the first threshold on the growth network, the growth starts from the maximum connected region of each of the plurality of blood vessel feature classification results, and the main blood vessel is obtained.
According to the head and neck blood vessel segmentation method provided by this embodiment, after the rough vessel centerlines are classified by the classification network model, the maximum connected region of each classification result is obtained and the main vessel is grown on the basis of these connected regions. Classifying by common features and choosing different post-processing for different classes allows a more refined result to be obtained.
Fig. 4 is a schematic flowchart illustrating a process of determining a rough blood vessel centerline in a head and neck blood vessel image according to an embodiment of the present application. The embodiment shown in fig. 4 is extended from the embodiment shown in fig. 1, and the differences between the embodiment shown in fig. 4 and the embodiment shown in fig. 1 will be mainly described below, and the same parts will not be described again.
As shown in fig. 4, in the embodiment of the present application, determining a rough blood vessel centerline in a head and neck blood vessel image includes the following steps.
Step S110, determining a seed point of a rough blood vessel in the head and neck blood vessel image.
It can be understood that seed points in an image are the initial pixels selected for region growing. Region growing starts from the seed points and merges neighboring pixels with attributes similar to the seed points into the seed point set, so that the region gradually grows into a larger region.
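For reference, a textbook intensity-based region growing routine of the kind described above might look like the sketch below; the 6-connectivity and the fixed intensity tolerance are illustrative choices, not the patent's algorithm.

```python
import numpy as np
from collections import deque

def region_grow(image: np.ndarray, seeds, tolerance: float = 100.0) -> np.ndarray:
    """Classic region growing: starting from the seed points, repeatedly absorb
    6-connected neighbours whose intensity is within `tolerance` of the seed mean."""
    region = np.zeros(image.shape, dtype=bool)
    queue = deque(tuple(s) for s in seeds)
    mean = float(np.mean([image[tuple(s)] for s in seeds]))
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        if region[z, y, x]:
            continue
        region[z, y, x] = True
        for dz, dy, dx in offsets:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < image.shape[0] and 0 <= ny < image.shape[1]
                    and 0 <= nx < image.shape[2]
                    and not region[nz, ny, nx]
                    and abs(float(image[nz, ny, nx]) - mean) <= tolerance):
                queue.append((nz, ny, nx))
    return region
```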
Step S120, determining a rough blood vessel centerline based on the seed points of the rough blood vessel.
The head and neck blood vessel segmentation method provided by this embodiment determines the rough vessel centerline based on the seed points of the rough vessel. Vessel seed point information is easy to obtain, and techniques for extracting vessel seed points are relatively mature and reliable, so the method meets the needs of practical applications and is stable, efficient, and practical.
Fig. 5 is a schematic flowchart illustrating a process of determining a seed point of a rough blood vessel in a head and neck blood vessel image according to an embodiment of the present application. The embodiment shown in fig. 5 is extended based on the embodiment shown in fig. 4, and the differences between the embodiment shown in fig. 5 and the embodiment shown in fig. 4 will be emphasized below, and the descriptions of the same parts will not be repeated.
As shown in fig. 5, in the embodiment of the present application, determining a seed point of a rough blood vessel in a head and neck blood vessel image includes the following steps.
And step S111, segmenting the head and neck blood vessel image by using a second segmentation network to obtain a rough blood vessel segmentation region.
Illustratively, the second segmentation network may be an SCnet segmentation network. Segmenting the head and neck blood vessel image with the second segmentation network yields regions that are definitely head and neck vessels, which effectively avoids mistakenly segmenting the skull or other tissues as vessels.
In step S112, a seed point of the rough blood vessel is obtained based on the rough blood vessel segmentation region.
According to the head and neck blood vessel segmentation method provided by this embodiment, the segmentation region is first determined with the second segmentation network and then the seed points are obtained. This approach is suitable for determining seed points in head and neck blood vessel images, and the seed points obtained in this way effectively exclude points where the skull or other tissues would be mistakenly segmented as vessels.
Fig. 6 is a schematic flowchart illustrating a process of determining a rough blood vessel centerline in a head and neck blood vessel image according to another embodiment of the present application. The embodiment shown in fig. 6 is extended based on the embodiment shown in fig. 1, and the differences between the embodiment shown in fig. 6 and the embodiment shown in fig. 1 will be emphasized below, and the descriptions of the same parts will not be repeated.
As shown in fig. 6, in the embodiment of the present application, determining a rough blood vessel centerline in a head and neck blood vessel image includes the following steps.
And S130, roughly segmenting the head and neck blood vessel image by using a third segmentation network to obtain a rough blood vessel.
Illustratively, the third segmentation network may be a ResUnetMC segmentation network.
Step S140, obtaining a rough blood vessel centerline based on the rough blood vessel.
According to the head and neck blood vessel segmentation method provided by this embodiment, the rough vessel centerline is determined after the head and neck blood vessel image is roughly segmented by the third segmentation network. The segmentation network used in existing vessel segmentation is adapted specifically to the characteristics of head and neck vessels, which saves development cost and meets the need to improve the accuracy of head and neck vessel segmentation.
Fig. 7 is a schematic flow chart illustrating a process of roughly segmenting a head and neck blood vessel image by using a third segmentation network to obtain a rough blood vessel according to an embodiment of the present application. The embodiment shown in fig. 7 is extended based on the embodiment shown in fig. 6, and the differences between the embodiment shown in fig. 7 and the embodiment shown in fig. 6 will be emphasized below, and the descriptions of the same parts will not be repeated.
As shown in fig. 7, in the embodiment of the present application, the coarse segmentation is performed on the head and neck blood vessel image by using the third segmentation network to obtain a coarse blood vessel, which includes the following steps.
Step S131, performing window width and/or window level adjustment on the head and neck blood vessel image to obtain an adjusted image corresponding to the head and neck blood vessel image.
Based on prior knowledge of angiography characteristics, the window width and/or window level of the head and neck blood vessel image is adjusted, and the pixels of interest in the image are taken as key points. For example, the pixel values (also called CT values or HU values) of the pixels in the head and neck blood vessel image lie in the range [-1024, 1024]. Based on prior knowledge of angiography, the vessels appear as highlighted regions in the image and their pixel values are mainly concentrated between 100 and 500; the pixels in this range are taken as key points, and the adjusted image corresponding to the head and neck blood vessel image is obtained.
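A simple windowing sketch along these lines, with the 100–500 HU range taken from the description and the rescaling to [0, 1] added as an assumption:

```python
import numpy as np

def apply_window(hu_image: np.ndarray, low: float = 100.0, high: float = 500.0) -> np.ndarray:
    """Windowing sketch for CTA: keep the HU range where contrast-filled vessels
    concentrate (about 100-500 HU in the description) and rescale it to [0, 1]."""
    clipped = np.clip(hu_image.astype(np.float32), low, high)
    return (clipped - low) / (high - low)
```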
Step S132, compressing the adjusted image to obtain a compressed image corresponding to the adjusted image.
If the image data stored in GPU memory is too large, it occupies memory and also slows down processing. The adjusted image is therefore compressed, where compression may mean reducing the image size, reducing the image quality, and so on, to obtain a compressed image corresponding to the adjusted image.
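Compression here could be as simple as downsampling the adjusted volume, for example with `scipy.ndimage.zoom`; the factor and interpolation order are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import zoom

def compress_volume(volume: np.ndarray, factor: float = 0.5) -> np.ndarray:
    """Shrink the adjusted volume before rough segmentation to save GPU memory;
    order=1 gives trilinear interpolation. The factor is illustrative."""
    return zoom(volume, zoom=factor, order=1)
```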
And step S133, roughly segmenting the compressed image by using a third segmentation network to obtain a rough blood vessel.
Illustratively, the third segmentation network may be a ResUnetMC segmentation network.
According to the head and neck blood vessel segmentation method provided by this embodiment, the head and neck blood vessel image is adjusted using prior knowledge of angiography characteristics, practicality is taken into account (GPU memory is not excessively occupied), and the image is compressed before rough segmentation to obtain the rough vessel, which meets the need to improve segmentation efficiency in practical applications.
In combination with the above embodiments, a practical application system is now described. The system is a fully automatic head and neck vessel identification system developed with a Starship training platform based on deep learning technology. A complete head and neck CTA image taken before an operation is input, the system performs fully automatic head and neck vessel segmentation to obtain a head and neck blood vessel segmentation result, and the result is stored and displayed in the form of vessel centerlines or as the whole complete vessel. The segmentation result may be overlaid on the original image for display, or only the segmentation result may be displayed at the corresponding positions in an image of the same size as the original image. In order of execution, the system comprises a rough vessel segmentation model, a seed point segmentation model, a directional growth model, and a fine vessel segmentation model.
In the rough vessel segmentation model, the input head and neck CTA image is DICOM (Digital Imaging and Communications in Medicine, DCM for short) data, for example 500 slices of 512 × 512 image data, where the pixel values lie in the range [-1024, 1024]. Based on prior knowledge of angiography, the vessels appear as highlighted regions and their pixel values are mainly concentrated between 100 and 500, and the pixels in this range are taken as key points. In this step, the window width of the head and neck blood vessel image is adjusted according to the rough vessel segmentation model to obtain the corresponding adjusted image, which is still 500 slices of 512 × 512 image data. Considering that overly large image data occupies GPU memory and also affects speed, the rough vessel segmentation model compresses the adjusted image, where compression may mean reducing the image size, reducing the image quality, and so on, to obtain a compressed image corresponding to the adjusted image. The rough vessel segmentation model then feeds the compressed image into a segmentation network (corresponding to the third segmentation network in the above embodiments), such as a ResUnetMC segmentation network, for rough segmentation to obtain the rough vessels. It can be understood that the rough segmentation yields the approximate vessel regions in the compressed image. Finally, the rough vessel segmentation model enlarges the compressed image containing the segmented rough vessels back to the original 512 × 512 size.
In the seed point segmentation model, the image passed on by the rough vessel segmentation model contains only the approximate vessel regions, and false vessel regions that are not actually vessels are likely to be present. The seed point segmentation model feeds this image into an SCnet segmentation network to obtain the regions that are definitely vessels, called the seed point segmentation regions.
In the directional growth model, a seed point skeleton is extracted from the image passed on by the seed point segmentation model (in which the seed point segmentation regions are identified), skeleton seed points are obtained through initialization (based on the large seed model SCnet), the vessel centerline is obtained from the skeleton seed points, and the centerline is segmented with the directional growth network SkResNeXt3DWHint (the growth network in the corresponding embodiments). Because the carotid vessels lie particularly close to the skull, part of the skull is easily identified as vessel during growth. The directional growth model exploits the fact that the model confidence is higher on the vessel trunk and lower on the branches, and adopts two growths: the first growth uses a higher threshold so that the trunk can grow completely, and the unique trunk is obtained by using the connected region; the second growth uses a lower threshold, removes the trunk obtained in the first growth, and performs region growing only on the branch regions. In this way the growth model SkResNeXt3DWHint obtains the complete vessel topology through two directional growths, and here the topology is stored and displayed in the form of centerlines.
In the fine vessel segmentation model, block regions around the vessel centerline in the image passed on by the directional growth model are extracted, and the vessels in each block region are segmented with the segmentation network SkSegNetwork (the first segmentation network in the corresponding embodiments) to obtain the final head and neck blood vessel segmentation result, which is put back into the original image. In the neural network (fine segmentation model) SkSegNetwork, the characteristic that the vessel density follows a Gaussian distribution along the centerline is taken into account, and the loss function of the network is improved so that the loss weight increases with the distance from the centerline.
The fully automatic head and neck vessel identification system uses the head and neck vessel segmentation module to obtain a vessel segmentation result, then samples the result and classifies it with a ResUnet model, post-processes the classification results to obtain the maximum connected region of each class, and then applies region growing to the surrounding tissues that are connected together.
The system fully considers and analyzes the characteristics of head and neck vessels and breaks the actual medical task down into several steps. In particular, the directional growth model uses two growths according to the characteristics of head and neck vessels, which effectively improves segmentation accuracy. Meanwhile, the fine vessel segmentation model introduces prior knowledge about vessels into the model, which improves the robustness of the model and further improves the segmentation precision.
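To make the order of execution concrete, the following schematic wires the four models together, reusing the helper sketches given earlier in this description; every model callable (coarse_model, seed_model, growth_model, fine_model) is a placeholder standing in for the networks named in the text (ResUnetMC, SCnet, SkResNeXt3DWHint, SkSegNetwork), not a real API.

```python
import numpy as np
from scipy.ndimage import zoom

def segment_head_neck_vessels(cta_volume: np.ndarray,
                              coarse_model, seed_model, growth_model, fine_model):
    """Schematic wiring of the four-stage system described above; the model
    callables are placeholders, and the helpers come from the earlier sketches."""
    # 1. Rough vessel segmentation on the windowed, compressed volume.
    prepared = compress_volume(apply_window(cta_volume), factor=0.5)
    coarse_mask = coarse_model(prepared)
    # Restore the coarse mask to roughly the original size, as the system does.
    coarse_mask = zoom(coarse_mask.astype(np.float32), 2.0, order=0) > 0.5

    # 2. Seed-point segmentation removes false vessel regions.
    seed_region = seed_model(coarse_mask)

    # 3. Directional growth: centerline from the seed skeleton, then two growths.
    centerline = rough_centerline(seed_region)
    confidence = growth_model(cta_volume)            # per-voxel confidence map
    main_vessel, branch_vessel = hierarchical_growth(confidence, centerline)

    # 4. Fine segmentation of block regions cut along the centerline.
    patches, origins = blocks_along_centerline(cta_volume, centerline)
    refined = [fine_model(p) for p in patches]
    return main_vessel, branch_vessel, refined, origins
```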
Method embodiments of the present application are described in detail above in conjunction with fig. 1-7, and apparatus embodiments of the present application are described in detail below in conjunction with fig. 8-12. It is to be understood that the description of the method embodiments corresponds to the description of the apparatus embodiments, and therefore reference may be made to the preceding method embodiments for parts not described in detail.
Fig. 8 is a schematic structural diagram of a head and neck blood vessel segmentation apparatus according to an embodiment of the present application. As shown in fig. 8, the head and neck blood vessel segmentation apparatus provided in the embodiment of the present application includes a first determination module 100, a first growing module 200, a second determination module 300, a second growing module 400, and a third determination module 500. Specifically, the first determining module 100 is configured to determine a rough blood vessel centerline in a head and neck blood vessel image; a first growing module 200, configured to grow the rough blood vessel center line by using a first threshold based on a growing network to obtain a main blood vessel; a second determining module 300, configured to determine a centerline of a blood vessel to be grown in the rough blood vessel centerlines based on the rough blood vessel centerlines and the centerlines corresponding to the main blood vessel; the second growing module 400 is configured to grow the centerline of the blood vessel to be grown based on the growing network by using a second threshold to obtain a branch blood vessel; a third determining module 500, configured to determine a head and neck blood vessel segmentation result of the head and neck blood vessel image based on the main blood vessel and the branch blood vessel.
Fig. 9 is a schematic structural diagram of a third determining module according to an embodiment of the present application. As shown in fig. 9, the third determining module 500 provided in the embodiment of the present application includes a partition area unit 510, a segmentation unit 520, and a segmentation result determining unit 530.
Specifically, the partition area unit 510 is configured to divide the region corresponding to the main blood vessel into at least one trunk block region, and divide the region corresponding to the branch blood vessel into at least one branch block region; the segmentation unit 520 is configured to segment the at least one trunk block region and the at least one branch block region by using a first segmentation network, to obtain respective segmentation results of the at least one trunk block region and the at least one branch block region; the segmentation result determination unit 530 is configured to determine the head and neck blood vessel segmentation result based on the respective segmentation results of the at least one trunk block region and the at least one branch block region.
Fig. 10 is a schematic structural diagram of a first growth module according to an embodiment of the present disclosure. As shown in fig. 10, the first growing module 200 provided in the embodiment of the present application includes a blood vessel feature classification unit 210, a maximum connected region determination unit 220, and a growing unit 230.
Specifically, the blood vessel feature classification unit 210 is configured to determine a plurality of blood vessel feature classification results of a rough blood vessel centerline by using a classification network model; the maximum connected region determining unit 220 is configured to determine a maximum connected region of each of the plurality of blood vessel feature classification results; the growing unit 230 is configured to grow the maximum connected region of each of the multiple blood vessel feature classification results based on the growth network by using a first threshold, so as to obtain a main blood vessel.
Fig. 11 is a schematic structural diagram of a first determining module according to an embodiment of the present application. As shown in fig. 11, the first determining module 100 provided in the embodiment of the present application includes a seed point determining unit 110 and a center line first determining unit 120.
Specifically, the seed point determination unit 110 is configured to determine a seed point of a rough blood vessel in the head and neck blood vessel image; the centerline first determination unit 120 is configured to determine a coarse vessel centerline based on the seed points of the coarse vessel.
In some embodiments, the seed point determining unit 110 is further configured to segment the head and neck blood vessel image by using a second segmentation network to obtain a rough blood vessel segmentation region, and to obtain the seed points of the rough blood vessel based on the rough blood vessel segmentation region.
Fig. 12 is a schematic structural diagram of a first determining module according to another embodiment of the present application. As shown in fig. 12, the first determining module 100 provided in the embodiment of the present application includes a rough segmentation unit 130 and a center line second determining unit 140.
Specifically, the rough segmentation unit 130 is configured to perform rough segmentation on the head and neck blood vessel image by using a third segmentation network to obtain a rough blood vessel; the centerline second determination unit 140 is configured to obtain a coarse vessel centerline based on the coarse vessel.
In some embodiments, the rough segmentation unit 130 is further configured to perform window width and/or window level adjustment on the head and neck blood vessel image, so as to obtain an adjusted image corresponding to the head and neck blood vessel image; compressing the adjusted image to obtain a compressed image corresponding to the adjusted image; and carrying out coarse segmentation on the compressed image by using a third segmentation network to obtain a coarse blood vessel.
Next, an electronic apparatus according to an embodiment of the present application is described with reference to fig. 13. Fig. 13 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
As shown in fig. 13, the electronic device 9 includes one or more processors 7 and a memory 8.
The processor 7 may be a central processing unit (CPU) or another form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 9 to perform desired functions.
The memory 8 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 7 to implement the methods of the various embodiments of the present application mentioned above and/or other desired functions. Various contents such as a head and neck blood vessel image may also be stored in the computer-readable storage medium.
In one example, the electronic device 9 may further include: an input device 6 and an output device 5, which are interconnected by a bus system and/or other form of connection mechanism (not shown).
The input means 6 may comprise, for example, a keyboard, a mouse, etc.
The output device 5 can output various kinds of information, including the head and neck blood vessel segmentation result, to the outside. The output device 5 may include, for example, a display, a speaker, a printer, a communication network, and a remote output device connected thereto.
Of course, for the sake of simplicity, only some of the components of the electronic device 9 that are related to the present application are shown in fig. 13, and components such as a bus and an input/output interface are omitted. In addition, the electronic device 9 may comprise any other suitable components, depending on the specific application.
In addition to the above-described methods and apparatus, embodiments of the present application may also be a computer program product comprising computer program instructions that, when executed by a processor, cause the processor to perform the steps in the methods according to the various embodiments of the present application described above in this specification.
The computer program product may carry program code for performing the operations of embodiments of the present application, written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present application may also be a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, cause the processor to perform the steps in the methods according to the various embodiments of the present application described above in the present specification.
The computer-readable storage medium may be any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing describes the general principles of the present application in conjunction with specific embodiments. However, the advantages, effects, and the like mentioned in the present application are merely examples and are not limiting; they should not be considered essential to the various embodiments of the present application. Furthermore, the foregoing disclosure of specific details is for the purposes of illustration and description only; it is not intended to be exhaustive or to limit the application to the precise details disclosed.
The block diagrams of devices, apparatuses, and systems referred to in this application are given only as illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. As will be appreciated by those skilled in the art, these devices, apparatuses, and systems may be connected, arranged, or configured in any manner. Words such as "including", "comprising", and "having" are open-ended words that mean "including, but not limited to" and are used interchangeably herein. The word "or" as used herein means, and is used interchangeably with, the term "and/or", unless the context clearly dictates otherwise. The phrase "such as" is used herein to mean, and is used interchangeably with, the phrase "such as, but not limited to".
It should also be noted that in the devices, apparatuses, and methods of the present application, the components or steps may be decomposed and/or recombined. These decompositions and/or recombinations are to be considered as equivalents of the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, the description is not intended to limit embodiments of the application to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (9)

1. A head and neck blood vessel segmentation method, characterized by comprising:
determining a rough blood vessel centerline in a head and neck blood vessel image;
growing the rough blood vessel centerline by using a first threshold based on a growth network to obtain a main blood vessel, wherein the rough blood vessel centerline is an extraction result obtained by extracting a blood vessel centerline from the head and neck blood vessel image;
determining a vessel centerline to be grown in the rough blood vessel centerline based on the rough blood vessel centerline and a centerline corresponding to the main blood vessel;
growing the vessel centerline to be grown by using a second threshold based on the growth network to obtain a branch blood vessel;
determining a head and neck blood vessel segmentation result of the head and neck blood vessel image based on the main blood vessel and the branch blood vessel;
wherein the determining a head and neck blood vessel segmentation result of the head and neck blood vessel image based on the main blood vessel and the branch blood vessel comprises:
dividing a region corresponding to the main blood vessel into at least one trunk block region, and dividing a region corresponding to the branch blood vessel into at least one branch block region;
segmenting the at least one trunk block region and the at least one branch block region respectively by using a first segmentation network to obtain respective segmentation results of the at least one trunk block region and the at least one branch block region;
and determining the head and neck blood vessel segmentation result based on the segmentation result of each of the at least one trunk block region and the at least one branch block region.
2. The head and neck blood vessel segmentation method according to claim 1, wherein the growing the rough blood vessel centerline by using a first threshold based on a growth network to obtain a main blood vessel comprises:
determining a plurality of blood vessel feature classification results of the rough blood vessel centerline by using a classification network model;
determining a maximum connected region of each of the plurality of blood vessel feature classification results;
and growing the respective maximum connected regions of the plurality of blood vessel feature classification results by using the first threshold based on the growth network to obtain the main blood vessel.
3. The head and neck blood vessel segmentation method according to claim 1, wherein the determining a rough blood vessel centerline in a head and neck blood vessel image comprises:
determining seed points of a rough blood vessel in the head and neck blood vessel image;
and determining the rough blood vessel centerline based on the seed points of the rough blood vessel.
4. The head and neck blood vessel segmentation method according to claim 3, wherein the determining seed points of a rough blood vessel in the head and neck blood vessel image comprises:
segmenting the head and neck blood vessel image by using a second segmentation network to obtain a rough blood vessel segmentation region;
and obtaining the seed points of the rough blood vessel based on the rough blood vessel segmentation region.
5. The head and neck blood vessel segmentation method according to claim 1, wherein the determining a rough blood vessel centerline in a head and neck blood vessel image comprises:
performing rough segmentation on the head and neck blood vessel image by using a third segmentation network to obtain a rough blood vessel;
and obtaining the rough blood vessel centerline based on the rough blood vessel.
6. The head and neck blood vessel segmentation method according to claim 5, wherein the performing rough segmentation on the head and neck blood vessel image by using a third segmentation network to obtain a rough blood vessel comprises:
performing window width and/or window level adjustment on the head and neck blood vessel image to obtain an adjusted image corresponding to the head and neck blood vessel image;
compressing the adjusted image to obtain a compressed image corresponding to the adjusted image;
and performing rough segmentation on the compressed image by using the third segmentation network to obtain the rough blood vessel.
7. A head and neck blood vessel segmentation device, comprising:
a first determination module, configured to determine a rough blood vessel centerline in a head and neck blood vessel image;
a first growing module, configured to grow the rough blood vessel centerline by using a first threshold based on a growth network to obtain a main blood vessel, wherein the rough blood vessel centerline is an extraction result obtained by extracting a blood vessel centerline from the head and neck blood vessel image;
a second determining module, configured to determine a vessel centerline to be grown in the rough blood vessel centerline based on the rough blood vessel centerline and a centerline corresponding to the main blood vessel;
a second growth module, configured to grow the vessel centerline to be grown by using a second threshold based on the growth network to obtain a branch blood vessel;
and a third determining module, configured to determine a head and neck blood vessel segmentation result of the head and neck blood vessel image based on the main blood vessel and the branch blood vessel, wherein the determining the head and neck blood vessel segmentation result of the head and neck blood vessel image based on the main blood vessel and the branch blood vessel comprises: dividing a region corresponding to the main blood vessel into at least one trunk block region, and dividing a region corresponding to the branch blood vessel into at least one branch block region; segmenting the at least one trunk block region and the at least one branch block region respectively by using a first segmentation network to obtain respective segmentation results of the at least one trunk block region and the at least one branch block region; and determining the head and neck blood vessel segmentation result based on the respective segmentation results of the at least one trunk block region and the at least one branch block region.
8. An electronic device, characterized in that the electronic device comprises:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to perform the head and neck blood vessel segmentation method according to any one of claims 1 to 6.
9. A computer-readable storage medium, characterized in that a computer program is stored thereon for performing the head and neck blood vessel segmentation method according to any one of claims 1 to 6.
CN202111646781.8A 2021-12-29 2021-12-29 Head and neck blood vessel segmentation method and device, readable storage medium and electronic equipment Active CN114359208B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111646781.8A CN114359208B (en) 2021-12-29 2021-12-29 Head and neck blood vessel segmentation method and device, readable storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN114359208A (en) 2022-04-15
CN114359208B (en) 2022-11-01

Family

ID=81103587

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111646781.8A Active CN114359208B (en) 2021-12-29 2021-12-29 Head and neck blood vessel segmentation method and device, readable storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN114359208B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114708390B (en) * 2022-05-25 2022-09-20 深圳科亚医疗科技有限公司 Image processing method and device for physiological tubular structure and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10068340B2 (en) * 2014-11-03 2018-09-04 Algotec Systems Ltd. Method for segmentation of the head-neck arteries, brain and skull in medical images
CN112989889B (en) * 2019-12-17 2023-09-12 中南大学 Gait recognition method based on gesture guidance
CN111681224A (en) * 2020-06-09 2020-09-18 上海联影医疗科技有限公司 Method and device for acquiring blood vessel parameters

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106682636A (en) * 2016-12-31 2017-05-17 上海联影医疗科技有限公司 Blood vessel extraction method and system
CN109448004A (en) * 2018-10-26 2019-03-08 强联智创(北京)科技有限公司 A kind of intercept method and system of the intracranial vessel image based on center line
CN112308846A (en) * 2020-11-04 2021-02-02 赛诺威盛科技(北京)有限公司 Blood vessel segmentation method and device and electronic equipment
CN113313715A (en) * 2021-05-27 2021-08-27 推想医疗科技股份有限公司 Method, device, apparatus and medium for segmenting cardiac artery blood vessel

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"A Skeletal Similarity Metric for Quality Evaluation of Retinal Vessel Segmentation";Zengqiang Yan et al.;《IEEE Transactions on Medical Imaging》;20171130;第37卷(第4期);第1045-1057页 *
"一种低对比度CT图像的血管分割方法";叶建平 等;《计算机系统应用》;20151231;第24卷(第2期);第184-188页 *

Also Published As

Publication number Publication date
CN114359208A (en) 2022-04-15

Similar Documents

Publication Publication Date Title
US11748889B2 (en) Brain image segmentation method and apparatus, network device, and storage medium
WO2022021955A1 (en) Image segmentation method and apparatus, and training method and apparatus for image segmentation model
US11195280B2 (en) Progressive and multi-path holistically nested networks for segmentation
US7773791B2 (en) Analyzing lesions in a medical digital image
CN111008984B (en) Automatic contour line drawing method for normal organ in medical image
US7676257B2 (en) Method and apparatus for segmenting structure in CT angiography
CN111667478B (en) Method and system for identifying carotid plaque through CTA-MRA cross-modal prediction
US10258304B1 (en) Method and system for accurate boundary delineation of tubular structures in medical images using infinitely recurrent neural networks
CN103514597A (en) Image processing device
EP2620909B1 (en) Method, system and computer readable medium for automatic segmentation of a medical image
WO2005055137A2 (en) Vessel segmentation using vesselness and edgeness
EP3432215A1 (en) Automated measurement based on deep learning
JP2015066311A (en) Image processor, image processing method, program for controlling image processor, and recording medium
CN114359205B (en) Head and neck blood vessel analysis method and device, storage medium and electronic equipment
CN116503607B (en) CT image segmentation method and system based on deep learning
CN113327225A (en) Method for providing airway information
CN114359208B (en) Head and neck blood vessel segmentation method and device, readable storage medium and electronic equipment
Sivanesan et al. Unsupervised medical image segmentation with adversarial networks: From edge diagrams to segmentation maps
CN114255235A (en) Method and arrangement for automatic localization of organ segments in three-dimensional images
Altarawneh et al. A modified distance regularized level set model for liver segmentation from CT images
Tong et al. Automatic lumen border detection in IVUS images using dictionary learning and kernel sparse representation
US20210279884A1 (en) Method of computing a boundary
CN112991314A (en) Blood vessel segmentation method, device and storage medium
Hassanin et al. Automatic localization of Common Carotid Artery in ultrasound images using Deep Learning
Luo et al. Recent progresses on cerebral vasculature segmentation for 3D quantification and visualization of MRA

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant