CN114998273A - Blood vessel image processing method and device, electronic equipment and storage medium - Google Patents

Blood vessel image processing method and device, electronic equipment and storage medium

Info

Publication number
CN114998273A
CN114998273A
Authority
CN
China
Prior art keywords
blood vessel
image
sampling
point cloud
initial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210655838.9A
Other languages
Chinese (zh)
Inventor
王旭
马骏
郑凌霄
兰宏志
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Raysight Intelligent Medical Technology Co Ltd
Original Assignee
Shenzhen Raysight Intelligent Medical Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Raysight Intelligent Medical Technology Co Ltd filed Critical Shenzhen Raysight Intelligent Medical Technology Co Ltd
Priority to CN202210655838.9A
Publication of CN114998273A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/28Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10088Magnetic resonance imaging [MRI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30016Brain
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30048Heart; Cardiac

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention discloses a blood vessel image processing method and device, electronic equipment and a storage medium. The method comprises the following steps: acquiring an initial blood vessel image, and performing blood vessel rough segmentation processing on the initial blood vessel image to obtain a blood vessel rough segmentation image of the initial blood vessel image; performing point cloud feature extraction preprocessing on the blood vessel rough segmentation image to generate initial point cloud features of the blood vessel rough segmentation image, and performing down-sampling processing with different parameters on the initial point cloud features to obtain down-sampled point cloud features under the different down-sampling parameters; inputting each down-sampled point cloud feature into a pre-trained blood vessel segmentation model to obtain a vessel segment image of the initial blood vessel image; and determining the segment name of each vessel centerline in the initial blood vessel image based on the blood vessel rough segmentation image and the vessel segment image. The technical scheme disclosed by the embodiment of the invention solves the problem of low accuracy in segmenting and naming blood vessels in the prior art and improves the accuracy of the segmentation and naming results.

Description

Blood vessel image processing method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of medical image processing technologies, and in particular, to a blood vessel image processing method and apparatus, an electronic device, and a storage medium.
Background
In the medical field, in order to diagnose tissue related to blood vessels, the blood vessels in a medical image are often divided into segments and named, so as to determine the anatomical name of each part of the vessel. In the prior art, vessels are segmented according to the topological relations, morphology and the like of the anatomical structure.
However, when naming vessels with the prior art, it is found that the structure of the human heart varies from person to person, so such rules are not robust, which affects the accuracy of the segmentation and naming.
Disclosure of Invention
The invention provides a blood vessel image processing method, a blood vessel image processing device, electronic equipment and a storage medium, which are used for solving the problem of low accuracy in the process of segmenting and naming blood vessels in the prior art and improving the accuracy of segmentation and naming results.
In a first aspect, an embodiment of the present invention provides a blood vessel image processing method, where the method includes:
acquiring an initial blood vessel image, and performing blood vessel rough segmentation processing on the initial blood vessel image to obtain a blood vessel rough segmentation image of the initial blood vessel image;
performing point cloud feature extraction preprocessing on the blood vessel rough segmentation image to generate initial point cloud features of the blood vessel rough segmentation image, and performing down-sampling processing with different parameters on the initial point cloud features to obtain down-sampled point cloud features under the different down-sampling parameters;
inputting each down-sampling point cloud feature into a pre-trained blood vessel segmentation model to obtain a blood vessel segmentation image of the initial blood vessel image; the vessel segmentation model comprises at least one preset feature extraction layer, at least one hierarchical downsampling coding module and at least one hierarchical upsampling decoding module; the down-sampling times of the down-sampling coding module and the up-sampling times of the up-sampling decoding module are the same as the number of layers of the preset feature extraction layer;
and determining the segment name of each blood vessel central line in the initial blood vessel image based on the blood vessel rough segmentation image and the blood vessel segment image.
Optionally, the vessel coarse segmentation image comprises a coronary artery main vessel segmentation image;
correspondingly, the obtaining an initial blood vessel image, performing a blood vessel rough segmentation process on the initial blood vessel image, and obtaining a blood vessel rough segmentation image of the initial blood vessel image includes:
acquiring an initial vessel image containing a coronary artery main vessel;
and inputting the initial blood vessel image into a pre-trained blood vessel rough segmentation model to obtain a coronary artery main blood vessel segmentation image of the initial blood vessel image.
Optionally, the performing point cloud feature extraction preprocessing on the blood vessel rough segmentation image to generate an initial point cloud feature of the blood vessel rough segmentation image includes:
performing point cloud processing on the blood vessel rough segmentation image to generate point cloud data of the blood vessel rough segmentation image;
and extracting the characteristics of the point cloud data to obtain the initial point cloud characteristics of the point cloud data.
Optionally, the number of feature points obtained after the initial point cloud feature downsampling is the same as the number of feature points of the downsampling coding module of the current level;
correspondingly, the step of inputting each downsampling point cloud feature into a pre-trained blood vessel segmentation model to obtain a blood vessel segmentation image of the initial blood vessel image includes:
for the first-level down-sampling coding module, taking the down-sampled point cloud features obtained after down-sampling with the first parameter as the input data of the first-level down-sampling coding module, and outputting the input data after feature extraction and down-sampling processing;
for at least one intermediate-level down-sampling coding module, respectively taking the down-sampled point cloud features obtained after down-sampling with the different parameters as first input data of the down-sampling coding module of the corresponding level, taking the output data of the down-sampling coding module of the previous level as second input data of the down-sampling coding module of the current level, and outputting the first input data and the second input data after feature extraction and down-sampling processing;
for the last-level down-sampling coding module, taking the output data of the down-sampling coding module of the previous level as the input data of the down-sampling coding module of the current level, and outputting the input data after feature extraction and down-sampling processing;
for the first-level up-sampling decoding module, taking the output data of the down-sampling coding module of the current level as the input data of the up-sampling decoding module of the current level, and outputting the input data after feature processing and up-sampling processing;
and for the up-sampling decoding modules of the other levels, respectively taking the output data of the down-sampling coding module of the current level as first input data of the up-sampling decoding module of the current level, taking the decoded output data of the previous level as second input data of the up-sampling decoding module of the current level, and outputting the first input data and the second input data after performing feature processing and up-sampling processing on them, to obtain the vessel segment image of the initial blood vessel image.
Optionally, the down-sampling coding module of any level in the vessel segmentation model comprises a multi-layer perception (MLP) layer, a down-sampling layer and at least one feature extraction layer; the multi-layer perception layer is used for adjusting the feature dimension of the point cloud features; the feature extraction layer is used for extracting features from the initial point cloud features or the down-sampled point cloud features corresponding to the current level and from the output data of the down-sampling coding module of the previous level; and the down-sampling layer is used for performing down-sampling processing with a preset stride on the extracted features;
the up-sampling decoding module of the last level comprises a multi-layer perception layer and at least one feature fusion layer; the up-sampling decoding modules of the levels other than the last level comprise an up-sampling layer and at least one feature fusion layer; the multi-layer perception layer is used for adjusting the feature dimension of the fused features output by the previous layer; the feature fusion layer is used for performing feature fusion on the output data of the up-sampling decoding module of the previous level and the output data of the down-sampling coding module of the current level; and the up-sampling layer is used for performing up-sampling processing with a preset stride on the fused features.
Optionally, the method further includes:
in the process of training the blood vessel segmentation model, obtaining the category of each blood vessel segment and the category weight corresponding to each category;
and determining a loss function of the vessel segmentation model based on the weights of all classes, and performing iterative training on the vessel segmentation model based on the loss function until a trained vessel segmentation model is obtained.
Optionally, the determining a segment name of a vessel centerline in the initial vessel image based on the vessel rough segmentation image and the vessel segmentation image includes:
determining the vessel central line of each vessel in the vessel rough segmentation image;
for any blood vessel central line, determining the coincidence degree of the current blood vessel central line and each blood vessel segment in the blood vessel segment image, and determining the segment name of each blood vessel central line based on each coincidence degree.
In a second aspect, an embodiment of the present invention further provides a blood vessel image processing apparatus, including:
the blood vessel rough segmentation processing module is used for acquiring an initial blood vessel image, performing blood vessel rough segmentation processing on the initial blood vessel image and acquiring a blood vessel rough segmentation image of the initial blood vessel image;
the characteristic extraction preprocessing module is used for carrying out point cloud characteristic extraction preprocessing on the blood vessel rough segmentation image to generate initial point cloud characteristics of the blood vessel rough segmentation image, and carrying out down-sampling processing on the initial point cloud characteristics with different parameters to obtain down-sampling point cloud characteristics with different down-sampling parameters;
and the blood vessel segmentation image obtaining module is used for inputting the down-sampling point cloud characteristics to a pre-trained blood vessel segmentation model to obtain a blood vessel segmentation image of the initial blood vessel image.
In a third aspect, an embodiment of the present invention further provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor, the computer program being executable by the at least one processor to enable the at least one processor to perform the vessel image processing method according to any of the embodiments of the present invention.
In a fourth aspect, the embodiment of the present invention further provides a computer-readable storage medium, where computer instructions are stored, and the computer instructions are configured to enable a processor to implement the blood vessel image processing method according to any embodiment of the present invention when executed.
The technical scheme of the embodiment of the invention specifically comprises: acquiring an initial blood vessel image, and performing blood vessel rough segmentation processing on the initial blood vessel image to obtain a blood vessel rough segmentation image of the initial blood vessel image; performing point cloud feature extraction preprocessing on the blood vessel rough segmentation image to generate initial point cloud features of the blood vessel rough segmentation image, and performing down-sampling processing with different parameters on the initial point cloud features to obtain down-sampled point cloud features under the different down-sampling parameters; and inputting the down-sampled point cloud features into a pre-trained blood vessel segmentation model to obtain a vessel segment image of the initial blood vessel image. In this technical scheme, the initial blood vessel image containing the coronary artery main vessels is roughly segmented to obtain a rough segmentation image, and the down-sampled point cloud features of the rough segmentation image under different down-sampling parameters are extracted; the pre-trained blood vessel segmentation model then processes each down-sampled point cloud feature to obtain the vessel segment image of the initial blood vessel image containing the coronary artery main vessels, so that the problem of low accuracy in segmenting and naming blood vessels in the prior art is solved and the accuracy of the segmentation and naming results is improved.
It should be understood that the statements in this section are not intended to identify key or critical features of the embodiments of the present invention, nor are they intended to limit the scope of the invention. Other features of the present invention will become apparent from the following description.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a flowchart of a blood vessel image processing method according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of a vessel segmentation model according to an embodiment of the present invention;
Fig. 3 is a schematic illustration of a vessel segmentation result according to an embodiment of the present invention;
Fig. 4 is a diagram illustrating the naming results of vessel segments according to an embodiment of the present invention;
Fig. 5 is a flowchart of a blood vessel image processing method according to a second embodiment of the present invention;
Fig. 6 is a schematic structural diagram of a blood vessel image processing device according to a third embodiment of the present invention;
Fig. 7 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
It is understood that, before the technical solutions disclosed in the embodiments of the present disclosure are used, the user should be informed of the type, the use range, the use scene, etc. of the personal information related to the present disclosure in a proper manner according to the relevant laws and regulations and obtain the authorization of the user.
For example, in response to receiving an active request from a user, a prompt message is sent to the user to explicitly prompt the user that the requested operation to be performed would require the acquisition and use of personal information to the user. Thus, the user can autonomously select whether to provide personal information to software or hardware such as an electronic device, an application program, a server, or a storage medium that performs the operations of the disclosed technical solution, according to the prompt information.
As an optional but non-limiting implementation manner, in response to receiving an active request from the user, the manner of sending the prompt information to the user may be, for example, a pop-up window, and the prompt information may be presented in a text manner in the pop-up window. In addition, a selection control for providing personal information to the electronic device by the user's selection of "agreeing" or "disagreeing" can be carried in the popup.
It is understood that the above notification and user authorization process is only illustrative and not limiting, and other ways of satisfying relevant laws and regulations may be applied to the implementation of the present disclosure.
It will be appreciated that the data involved in the subject technology, including but not limited to the data itself, the acquisition or use of the data, should comply with the requirements of the corresponding laws and regulations and related regulations.
Embodiment One
Fig. 1 is a flowchart of a blood vessel image processing method according to an embodiment of the present invention, where the embodiment is applicable to naming coronary vessels, and the method may be executed by a blood vessel image processing apparatus, which may be implemented in a form of hardware and/or software, and may be configured in a mobile terminal, a PC terminal, a server, or the like. As shown in fig. 1, the method includes:
and S110, acquiring an initial blood vessel image, and performing blood vessel rough segmentation processing on the initial blood vessel image to obtain a blood vessel rough segmentation image of the initial blood vessel image.
In the embodiment of the present invention, the initial blood vessel image may be understood as an image of any part containing blood vessels, for example, a cerebral blood vessel image or a cardiac blood vessel image; optionally, the imaging modality of the initial blood vessel image may include, but is not limited to, CCTA (Coronary Computed Tomography Angiography), CAG (Coronary Angiography), MRI (Magnetic Resonance Imaging), and the like. Specifically, the initial blood vessel image may be acquired by scanning a region of the target object containing blood vessels with a medical imaging device, so as to obtain the initial blood vessel image of the target object. It should be noted that initial blood vessel images of different parts can be obtained depending on the scanned region. On the basis of the above embodiment, the technical solution of this embodiment may also obtain the initial blood vessel image by retrieving initial blood vessel image data stored in a local storage or a cloud server; the method for acquiring the initial blood vessel image is not limited in this embodiment.
Optionally, the initial blood vessel image acquired in this embodiment may include an initial blood vessel image including a coronary artery main vessel; specifically, under the condition that an initial blood vessel image containing a coronary artery main blood vessel is obtained, performing blood vessel rough segmentation processing on the initial blood vessel image containing the coronary artery main blood vessel to obtain a blood vessel rough segmentation image of the initial blood vessel image; correspondingly, the obtained blood vessel rough segmentation image comprises a coronary artery main blood vessel segmentation image. Optionally, the manner of the rough segmentation processing includes but is not limited to: traditional image processing algorithms and deep convolutional neural networks. Optionally, if the neural network algorithm is used to perform the blood vessel rough segmentation processing, the method for obtaining the blood vessel rough segmentation image may include: and inputting the initial blood vessel image into a pre-trained blood vessel rough segmentation model to obtain a coronary artery main blood vessel segmentation image of the initial blood vessel image. Illustratively, a blood vessel rough segmentation model can be preset as a binary classification model, and the corresponding blood vessel rough segmentation model outputs a coronary artery binary rough segmentation result as a blood vessel rough segmentation image of an initial blood vessel image; among them, the foreground is coronary artery main vessels.
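For illustration, the following is a minimal sketch of such a coarse binary segmentation step, assuming a pretrained binary 3D segmentation network is available as a PyTorch module; the argument name coarse_seg_model, the sigmoid activation and the 0.5 threshold are assumptions for this sketch, not details taken from the patent.

```python
import numpy as np
import torch

def coarse_vessel_segmentation(volume: np.ndarray,
                               coarse_seg_model: torch.nn.Module,
                               threshold: float = 0.5) -> np.ndarray:
    """Run a pretrained binary segmentation network on an image volume and
    return a binary mask in which 1 marks coronary main-vessel foreground."""
    coarse_seg_model.eval()
    with torch.no_grad():
        x = torch.from_numpy(volume.astype(np.float32))[None, None]   # (1, 1, D, H, W)
        prob = torch.sigmoid(coarse_seg_model(x))[0, 0]               # foreground probability
    return (prob.numpy() >= threshold).astype(np.uint8)
```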
S120, performing point cloud feature extraction preprocessing on the blood vessel rough segmentation image to generate initial point cloud features of the blood vessel rough segmentation image, and performing down-sampling processing with different parameters on the initial point cloud features to obtain down-sampled point cloud features under the different down-sampling parameters.
In the embodiment of the invention, under the condition of obtaining the coronary artery main vessel segmentation image of the initial vessel image, point cloud feature extraction preprocessing is carried out on the coronary artery main vessel rough segmentation image to generate the initial point cloud feature of the coronary artery main vessel rough segmentation image; in order to obtain image detail information of different degrees in the point cloud features, in the technical scheme of this embodiment, under the condition of obtaining the initial point cloud features, downsampling processing of different parameters is further performed on the initial point cloud features, so that downsampling point cloud features of different parameters are obtained, and the initial blood vessel image is subjected to segmentation processing based on each downsampling point cloud feature, so that a segmentation result of the initial blood vessel image is obtained.
Optionally, performing the point cloud feature extraction preprocessing on the coronary artery main vessel rough segmentation image to generate the point cloud features of the coronary artery main vessel rough segmentation image may include: performing point cloud processing on the rough segmentation image to generate point cloud data of the rough segmentation image, and performing feature extraction on the point cloud data to obtain the initial point cloud features of the point cloud data.
Specifically, the image data of the roughly segmented image of the coronary artery main vessel obtained by the above implementation is normalized data with regular data characteristics, and the point cloud data of the roughly segmented image of the coronary artery main vessel is obtained by performing point cloud processing on the image data. Optionally, the processing method of point cloud processing may include: and performing point cloud processing on the foreground (coronary binary rough segmentation) of the normalized data to obtain point cloud data including the coordinate position of each foreground point. On the basis, extracting the characteristics of the point cloud data, and extracting the position characteristics in the point cloud data; optionally, in this embodiment, the point cloud data may be normalized, the position features of the normalized point cloud data are extracted, and the two position features are subjected to feature splicing to serve as the initial point cloud features of the point cloud data.
Of course, the above manner of determining the initial point cloud feature of the blood vessel rough-segmented image is only an optional embodiment, and this embodiment may also generate the initial point cloud feature of the blood vessel rough-segmented image based on other forms, which is not limited to this.
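A minimal sketch of the point cloud preprocessing described above follows; the exact normalization used here (centering the foreground voxels and scaling them into a unit sphere) is an assumption, since the patent only requires that the raw position features and the normalized position features be concatenated as the initial point cloud features.

```python
import numpy as np

def build_initial_point_cloud_features(coarse_mask: np.ndarray):
    """Turn the binary coarse-segmentation mask into a point cloud and build
    initial per-point features from raw and normalized voxel coordinates."""
    coords = np.argwhere(coarse_mask > 0).astype(np.float32)    # (N, 3) foreground voxel positions
    center = coords.mean(axis=0, keepdims=True)
    scale = np.linalg.norm(coords - center, axis=1).max() + 1e-8
    coords_norm = (coords - center) / scale                     # normalized coordinates
    features = np.concatenate([coords, coords_norm], axis=1)    # (N, 6) initial point cloud features
    return coords, features
```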
The number of feature points in the initial point cloud features is typically very large, and feeding all of them into the vessel segmentation model would reduce the efficiency of feature processing, so the initial point cloud features are down-sampled. Specifically, after the initial point cloud features are obtained, a preset down-sampling method is used to down-sample them, wherein the number of feature points of the down-sampled point cloud features obtained under each down-sampling parameter is consistent with the number of feature points of the coding module of the corresponding level in the vessel segmentation model, so that the down-sampled point cloud features can be conveniently input into the vessel segmentation model for feature processing. Any down-sampling method known in the prior art can be used, and the down-sampling method is not limited in this embodiment.
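As one example of such a down-sampling method, the sketch below uses farthest point sampling to produce one down-sampled feature set per encoder level; the level point counts (4096 / 1024 / 256) are illustrative values only and are not specified in the patent.

```python
import numpy as np

def farthest_point_sampling(coords: np.ndarray, n_samples: int) -> np.ndarray:
    """Greedily pick n_samples point indices that cover the cloud evenly."""
    n = coords.shape[0]
    selected = np.zeros(n_samples, dtype=np.int64)
    min_dist = np.full(n, np.inf)
    selected[0] = 0                                        # arbitrary starting point
    for i in range(1, n_samples):
        d = np.linalg.norm(coords - coords[selected[i - 1]], axis=1)
        min_dist = np.minimum(min_dist, d)
        selected[i] = int(np.argmax(min_dist))
    return selected

def multi_scale_downsample(coords, features, level_sizes=(4096, 1024, 256)):
    """One down-sampled (coords, features) pair per encoder level."""
    out = []
    for n in level_sizes:
        idx = farthest_point_sampling(coords, min(n, coords.shape[0]))
        out.append((coords[idx], features[idx]))
    return out
```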
S130, inputting each down-sampled point cloud feature into a pre-trained blood vessel segmentation model to obtain a vessel segment image of the initial blood vessel image.
In the embodiment of the invention, the vessel segmentation model comprises at least one preset feature extraction layer, at least one level of down-sampling coding module and at least one level of up-sampling decoding module; the number of down-sampling operations of the down-sampling coding modules and the number of up-sampling operations of the up-sampling decoding modules are the same as the number of preset feature extraction layers. The number of feature points obtained after the initial point cloud features are down-sampled is the same as the number of feature points of the down-sampling coding module of the corresponding level, so that the down-sampled point cloud features can be conveniently input into that coding module for feature extraction. Correspondingly, inputting each down-sampled point cloud feature into the pre-trained vessel segmentation model to obtain the vessel segment image of the initial blood vessel image comprises the following steps: for the first-level down-sampling coding module, taking the down-sampled point cloud features obtained after down-sampling with the first parameter as the input data of the first-level down-sampling coding module, and outputting the input data after feature extraction and down-sampling processing; for at least one intermediate-level down-sampling coding module, respectively taking the down-sampled point cloud features obtained after down-sampling with the different parameters as first input data of the down-sampling coding module of the corresponding level, taking the output data of the down-sampling coding module of the previous level as second input data of the down-sampling coding module of the current level, and outputting the first input data and the second input data after feature extraction and down-sampling processing; for the last-level down-sampling coding module, taking the output data of the down-sampling coding module of the previous level as the input data of the down-sampling coding module of the current level, and outputting the input data after feature extraction and down-sampling processing; for the first-level up-sampling decoding module, taking the output data of the down-sampling coding module of the current level as the input data of the up-sampling decoding module of the current level, and outputting the input data after feature processing and up-sampling processing; and for the up-sampling decoding modules of the other levels, respectively taking the output data of the down-sampling coding module of the current level as first input data of the up-sampling decoding module of the current level, taking the decoded output data of the previous level as second input data of the up-sampling decoding module of the current level, and outputting the first input data and the second input data after feature processing and up-sampling processing, so as to obtain the vessel segment image of the initial blood vessel image.
On the basis of the above embodiment, the down-sampling coding module of any level in the vessel segmentation model comprises a multi-layer perception (MLP) layer, a down-sampling layer and at least one feature extraction layer; the multi-layer perception layer is used for adjusting the feature dimension of the point cloud features; the feature extraction layer is used for extracting features from the initial point cloud features or the down-sampled point cloud features corresponding to the current level and from the output data of the down-sampling coding module of the previous level; and the down-sampling layer is used for performing down-sampling processing with a preset stride on the extracted features. The up-sampling decoding module of the last level comprises a multi-layer perception layer and at least one feature fusion layer; the up-sampling decoding modules of the levels other than the last level comprise an up-sampling layer and at least one feature fusion layer; the multi-layer perception layer is used for adjusting the feature dimension of the fused features output by the previous layer; the feature fusion layer is used for performing feature fusion on the output data of the up-sampling decoding module of the previous level and the output data of the down-sampling coding module of the current level; and the up-sampling layer is used for performing up-sampling processing with a preset stride on the fused features.
In this embodiment, the vessel segmentation model may be a PTnet model. Specifically, the overall structure of the PTnet model comprises an encoding part and a decoding part. The encoding part is formed by cascading a plurality of encoding modules, wherein each encoding module comprises at least one cascaded Transformer component and a down-sampling component; the decoding part is formed by cascading a plurality of decoding modules, wherein each decoding module comprises an up-sampling component and at least one cascaded Transformer component.
In this embodiment, the input of each up-sampling decoding module except the first layer of the decoding part is composed of the output data of the up-sampling decoding module of the previous layer and the output data of the encoding module having the same number of points at the same level, after feature extraction by the feature extraction module formed by the Transformer component. In this way the shallow feature information is fused into the deep information, and the output features contain both the shallow detail information attended to by the Transformer and the global information of the deep features, so that the naming categories of the vessel segmentation result are more accurate.
Specifically, the expression of the output characteristic of the decoding portion is as follows:
$$F_u^{\,n} = \mathrm{Linear}\big(\mathrm{sum}\big(F_d^{\,n-1},\, F_u^{\,n-1}\big)\big)$$

wherein F_u^n denotes the input of the Transformer component of the n-th decoding layer; sum denotes the feature fusion operation (optionally, concat may be used instead, and the fusion mode is not limited in this embodiment); F_u^{n-1} is the up-sampled output feature of the decoding module of layer n-1; and F_d^{n-1} is the output of the coding module at the same level that has the same number of points as F_u^{n-1}.
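A minimal PyTorch-style sketch of this fusion step is shown below; the class name DecoderFusion, the shared hidden dimension and the choice of element-wise sum rather than concat are assumptions consistent with the expression above rather than details fixed by the patent.

```python
import torch
import torch.nn as nn

class DecoderFusion(nn.Module):
    """Fuse the up-sampled decoder feature with the same-level encoder feature:
    element-wise sum followed by a Linear layer, i.e. Linear(sum(F_d, F_u))."""
    def __init__(self, dim: int):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, f_up: torch.Tensor, f_enc: torch.Tensor) -> torch.Tensor:
        # f_up:  (N, dim) up-sampled output of the previous decoder level
        # f_enc: (N, dim) encoder output at the same level, same number of points
        return self.linear(f_up + f_enc)
```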
Specifically, in this embodiment, the processing of the point cloud features in the Transformer component of the vessel segmentation model may include: inputting the point cloud features into a Linear layer, and then inputting the features output by the Linear layer into a Transformer layer for calculation; because a certain anatomical spatial relation exists between a local point and its neighboring points, attention calculation at the local position is used to obtain the feature value output of each point for the next layer. Illustratively, the output feature of the Transformer layer is expressed as:
$$y_i = \sum_{x_j \in \chi(i)} \rho\Big(\gamma\big(q(x_i)\ \tau\ k(x_j) + \delta\big)\Big)$$

wherein χ(i) denotes the set of neighborhood point features of point i; ρ denotes the softmax function; γ, k and q denote Linear transformations; the τ operation denotes subtraction (optionally, τ may also denote dot multiplication, feature fusion, addition or the like); and δ denotes a learnable relative position encoding representing the positional relationship between points i and j.
Referring specifically to fig. 2, the input data of the down-sampling coding modules of the levels other than the first and the last in the vessel segmentation model may come from two sources: the output data of the coding module of the previous level, and the down-sampled point cloud features obtained after down-sampling with the down-sampling parameter corresponding to the current level. It should be noted that, in this embodiment, the number of down-sampled feature points obtained with the different down-sampling parameters is the same as the number of feature points of the corresponding coding module. The effect is that multi-scale information is fused with the coding module: the two inputs are added, input to the MLP layer, and then input to the Transformer for calculation, so that the subsequently obtained encoded feature information is more comprehensive and the accuracy of model classification is improved.
Optionally, for each point feature the Transformer layer performs the attention calculation over its h neighboring points and sums the attention values obtained for the h points. Restricting the calculation to the neighborhood takes into account the relationship between each point and its neighbors, that is, the local information between points. Compared with conventional Transformer calculation, which only considers global information, this operation obtains more local information in the shallow layers, obtains more global information in the deep layers, and saves computing resources. The final output feature matrix obtained from the neighborhood operation is input to the Linear layer, and the output features are fused with the input point cloud features to obtain M-dimensional features as the output of the Transformer layer.
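The sketch below illustrates such a local attention layer over the h nearest neighbors of each point, following the Point Transformer-style formulation suggested by the symbols above; the explicit value projection v, the two-layer forms of the mappings gamma and delta, and the default h = 16 are assumptions, since the patent only names q, k, gamma, the softmax rho and the learnable relative position encoding delta.

```python
import torch
import torch.nn as nn

class LocalPointAttention(nn.Module):
    """Vector attention over the h nearest neighbors of each point."""
    def __init__(self, dim: int, h: int = 16):
        super().__init__()
        self.h = h
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)   # value branch: an assumption, not stated in the patent
        self.gamma = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.delta = nn.Sequential(nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, coords: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
        # coords: (N, 3) point positions, feats: (N, dim) point features
        h = min(self.h, coords.shape[0])
        idx = torch.cdist(coords, coords).topk(h, largest=False).indices   # (N, h) neighbors
        q = self.q(feats).unsqueeze(1)                                     # (N, 1, dim)
        k = self.k(feats)[idx]                                             # (N, h, dim)
        v = self.v(feats)[idx]                                             # (N, h, dim)
        pos = self.delta(coords.unsqueeze(1) - coords[idx])                # relative position code
        attn = torch.softmax(self.gamma(q - k + pos), dim=1)               # rho = softmax, tau = subtraction
        return (attn * (v + pos)).sum(dim=1)                               # sum over the h neighbors
```

In the full Transformer component described above, this output would additionally pass through a Linear layer and be fused with the input point cloud features.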
With continued reference to fig. 2, on the basis of the foregoing embodiment, before the point cloud features are input into the Transformer layer of the first-level coding module of the encoding part of the vessel segmentation model, they are first input into a preset multi-layer perception layer to adjust their feature dimension, which facilitates subsequently inputting the output features into the Transformer layer for feature processing. Optionally, after the image features output by the Transformer layer of the last-level decoding module of the decoding part are obtained, they are further input into a preset multi-layer perception layer to adjust their feature dimension, so that the feature dimension of the output image features remains consistent with that of the point cloud features.
S140, determining the segment name of each vessel centerline in the initial blood vessel image based on the blood vessel rough segmentation image and the vessel segment image.
In the embodiment of the present invention, after the vessel segment image of the initial blood vessel image is obtained based on the above embodiment, the segment name of each centerline in the initial blood vessel image is determined from the vessel segment image. Optionally, the determining method may include: determining the vessel centerline of each coronary artery main vessel in the vessel rough segmentation image; and, for any vessel centerline, determining the coincidence degree between the current centerline and each vessel segment in the vessel segment image, and determining the segment name of each centerline based on the coincidence degrees.
Specifically, when the blood vessel rough-segmented image is obtained based on the above embodiments, the blood vessel center line of each coronary artery main blood vessel in the blood vessel rough-segmented image is determined. Optionally, the method for determining the centerline of the blood vessel may be based on a conventional image processing algorithm, or may be based on a method such as a deep convolutional neural network, which is not limited in this embodiment.
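As one conventional option for this step, the sketch below extracts centerline voxels by morphological skeletonization of the binary coarse mask using scikit-image; splitting the skeleton into one centerline per main vessel, for example by connected-component or branch analysis, is omitted here.

```python
import numpy as np
from skimage.morphology import skeletonize_3d

def extract_centerline_points(coarse_mask: np.ndarray) -> np.ndarray:
    """Return the voxel coordinates of the skeleton (centerline) of the binary mask."""
    skeleton = skeletonize_3d(coarse_mask.astype(bool))
    return np.argwhere(skeleton)      # (M, 3) centerline voxel positions
```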
In the embodiment of the present invention, the vessel segment image obtained based on the foregoing embodiments is used, as shown for example in fig. 3. For any vessel centerline, the coincidence degree between the current centerline and each vessel segment in the vessel segment image is determined; for example, the number of coincident points between the current centerline and each vessel segment is counted. If the number of coincident points with the RCA segment is the largest, that is, the coincidence degree with the RCA segment is the highest, the current centerline is the RCA centerline. The vessels and centerlines of the LCX and LAD are processed in the same manner, and the remaining centerlines are then divided into segments according to the coincidence rates of the points they pass through, finally yielding the centerline segments and naming results corresponding to the vessel segments shown in fig. 4.
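A minimal sketch of this coincidence-based naming step follows; representing each labelled vessel segment as a set of voxel coordinates keyed by its name (for example 'RCA', 'LAD', 'LCX') is an illustrative choice, not a format prescribed by the patent.

```python
import numpy as np

def name_centerline(centerline_points: np.ndarray, segments: dict) -> str:
    """Assign a segment name to one centerline by counting how many of its
    points coincide with each labelled vessel segment and keeping the best."""
    points = {tuple(p) for p in centerline_points.tolist()}
    best_name, best_overlap = None, -1
    for name, segment_voxels in segments.items():   # segment_voxels: set of (z, y, x) tuples
        overlap = len(points & segment_voxels)      # number of coincident points
        if overlap > best_overlap:
            best_name, best_overlap = name, overlap
    return best_name
```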
The technical scheme of the embodiment of the invention specifically comprises: acquiring an initial blood vessel image, and performing blood vessel rough segmentation processing on the initial blood vessel image to obtain a blood vessel rough segmentation image of the initial blood vessel image; performing point cloud feature extraction preprocessing on the blood vessel rough segmentation image to generate initial point cloud features of the blood vessel rough segmentation image, and performing down-sampling processing with different parameters on the initial point cloud features to obtain down-sampled point cloud features under the different down-sampling parameters; inputting the down-sampled point cloud features into a pre-trained blood vessel segmentation model to obtain a vessel segment image of the initial blood vessel image; determining the vessel centerline of each coronary artery main vessel in the blood vessel rough segmentation image; and, for any vessel centerline, determining the coincidence degree between the current centerline and each vessel segment in the vessel segment image, and determining the segment name of each centerline based on the coincidence degrees. In this technical scheme, the initial blood vessel image containing the coronary artery main vessels is roughly segmented to obtain a rough segmentation image, and the down-sampled point cloud features of the rough segmentation image under different down-sampling parameters are extracted; the pre-trained blood vessel segmentation model processes each down-sampled point cloud feature to obtain the vessel segment image of the initial blood vessel image containing the coronary artery main vessels, and each vessel centerline is named based on the determined centerlines and the vessel segment image, so that the problem of low accuracy in segmenting and naming blood vessels in the prior art is solved and the accuracy and reliability of the segmentation and naming results are improved.
Embodiment Two
Fig. 5 is a flowchart of a blood vessel image processing method according to a second embodiment of the present invention. Compared with the foregoing embodiments, before the initial blood vessel image is acquired, the method further comprises: training the vessel segmentation model in advance. Technical solutions consistent with the above embodiments are not described in detail again in this embodiment. As shown in fig. 5, the method includes:
S210, training the blood vessel segmentation model in advance.
In the embodiment of the invention, the model training is carried out on the blood vessel segmentation model in advance before the initial blood vessel image is obtained, and the blood vessel segmentation image of the initial blood vessel image is obtained based on the trained blood vessel segmentation model.
Optionally, in the process of training the blood vessel segmentation model, the category to which each blood vessel segment belongs and the category weight corresponding to each category are obtained; and determining a loss function of the vessel segmentation model based on the weight of each category, and performing iterative training on the vessel segmentation model based on the loss function until the trained vessel segmentation model is obtained.
Specifically, according to a preset blood vessel segment naming rule, blood vessel segments of the coronary artery are classified in advance, and model loss of a blood vessel segment model is calculated according to a classification result.
Alternatively, a cross entropy loss function may be employed to determine the model loss. Illustratively, the expression of the cross entropy loss function includes:
$$loss_i = -\sum_{c=1}^{C} y_{i,c}\,\log\left(p_{i,c}\right)$$

wherein loss_i denotes the loss at point i; C is the number of sample classes; p_{i,c} is the predicted probability that point i belongs to class c (that is, p_i is the predicted class probability of point i); and y_{i,c} is the corresponding one-hot class label.
In this embodiment, the main vessels (RCA, LCX, LAD) contain far more points than the small branches (such as the diagonal branches and the posterior interventricular branch). To account for this imbalance in the number of points between categories, class weights are added to the loss, which improves model learning and the accuracy of vessel segmentation. Specifically, the class weight is calculated as follows:
$$w_c = \frac{\sum_{k=1}^{C} N_k}{C \cdot N_c}$$

wherein C is the total number of sample classes and c denotes the current class of a vessel segment; w_c denotes the class weight of vessel segment class c; and N_c denotes the number of points belonging to class c.
Further, a model loss of the vessel segmentation model is determined based on the class weight and the cross entropy loss function. Illustratively, the expression for model loss is as follows:
$$Loss = \sum_{c=1}^{C} w_c \cdot loss_c$$

wherein loss_c denotes the loss of class c, accumulated from the point losses loss_i of the points belonging to class c; C is the total number of sample classes; and w_c denotes the class weight of vessel segment class c.
Further, under the condition that the loss function of the blood vessel segmentation model is determined, iterative training is carried out on the blood vessel segmentation model based on the loss function until the trained blood vessel segmentation model is obtained.
S220, obtaining an initial blood vessel image, and carrying out blood vessel rough segmentation processing on the initial blood vessel image to obtain a blood vessel rough segmentation image of the initial blood vessel image.
S230, performing point cloud feature extraction preprocessing on the blood vessel rough segmentation image to generate initial point cloud features of the blood vessel rough segmentation image, and performing down-sampling processing with different parameters on the initial point cloud features to obtain down-sampled point cloud features under the different down-sampling parameters.
S240, inputting each down-sampled point cloud feature into the pre-trained blood vessel segmentation model to obtain a vessel segment image of the initial blood vessel image.
S250, determining the segment name of each vessel centerline in the initial blood vessel image based on the blood vessel rough segmentation image and the vessel segment image.
According to the technical scheme of the embodiment of the invention, the blood vessel segmentation model is trained in advance: different loss weights are set based on the frequency of the different classes, and the model is trained with these weights to obtain the trained blood vessel segmentation model, which improves the accuracy of the subsequent blood vessel image segmentation. Further, the technical scheme of the embodiment of the invention acquires an initial blood vessel image and performs blood vessel rough segmentation processing on it to obtain a blood vessel rough segmentation image; performs point cloud feature extraction preprocessing on the rough segmentation image to generate initial point cloud features, and performs down-sampling processing with different parameters on the initial point cloud features to obtain down-sampled point cloud features under the different down-sampling parameters; and inputs the down-sampled point cloud features into the pre-trained blood vessel segmentation model to obtain a vessel segment image of the initial blood vessel image. In this technical scheme, the initial blood vessel image containing the coronary artery main vessels is roughly segmented to obtain a rough segmentation image, and the down-sampled point cloud features of the rough segmentation image under different down-sampling parameters are extracted; the pre-trained blood vessel segmentation model processes each down-sampled point cloud feature to obtain the vessel segment image of the initial blood vessel image containing the coronary artery main vessels, so that the problem of low accuracy in segmenting and naming blood vessels in the prior art is solved and the accuracy of the segmentation and naming results is improved.
Embodiment Three
Fig. 6 is a schematic structural diagram of a blood vessel image processing apparatus according to a third embodiment of the present invention. As shown in fig. 6, the apparatus includes: a blood vessel rough segmentation processing module 310, a feature extraction preprocessing module 320, a vessel segment image obtaining module 330 and a segment name determining module 340; wherein,
a blood vessel rough segmentation processing module 310, configured to obtain an initial blood vessel image, perform rough blood vessel segmentation processing on the initial blood vessel image, and obtain a rough blood vessel segmentation image of the initial blood vessel image;
the feature extraction preprocessing module 320 is configured to perform point cloud feature extraction preprocessing on the blood vessel rough segmentation image to generate an initial point cloud feature of the blood vessel rough segmentation image, and perform downsampling processing of different parameters on the initial point cloud feature to obtain downsampled point cloud features of different downsampling parameters;
a vessel segment image obtaining module 330, configured to input each down-sampled point cloud feature into a pre-trained vessel segmentation model to obtain a vessel segment image of the initial vessel image; the vessel segmentation model comprises at least one preset feature extraction layer, at least one level of down-sampling coding module and at least one level of up-sampling decoding module; the number of down-sampling operations of the down-sampling coding modules and the number of up-sampling operations of the up-sampling decoding modules are the same as the number of preset feature extraction layers;
a segment name determining module 340, configured to determine the segment name of each blood vessel centerline in the initial blood vessel image based on the blood vessel rough segmentation image and the blood vessel segmentation image.
The technical scheme of the embodiment of the invention obtains an initial blood vessel image, performs blood vessel rough segmentation processing on it, and obtains a blood vessel rough segmentation image of the initial blood vessel image; performs point cloud feature extraction preprocessing on the blood vessel rough segmentation image to generate initial point cloud features of the blood vessel rough segmentation image, and performs down-sampling processing with different parameters on the initial point cloud features to obtain down-sampled point cloud features of the different down-sampling parameters; and inputs the down-sampled point cloud features into a pre-trained blood vessel segmentation model to obtain a blood vessel segmentation image of the initial blood vessel image. In this scheme, the initial blood vessel image containing the coronary artery main vessels is segmented to obtain a segmentation image, and the down-sampled point cloud features of the segmentation image under the different down-sampling parameters are extracted; the pre-trained blood vessel segmentation model then processes these down-sampled point cloud features to obtain a blood vessel segmentation image of the initial blood vessel image containing the coronary artery main vessels. This solves the problem of low accuracy when segmenting and naming blood vessels in the prior art and improves the accuracy of the segmentation and naming results.
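As an illustrative, non-limiting reading of this module chain, the following Python sketch shows how the four modules could be wired together. Every name in it is a placeholder introduced here for readability: the stage callables, their signatures, and the example down-sampling parameters (target point counts of 2048, 512 and 128) are assumptions, not details fixed by this publication.

```python
from typing import Callable, Sequence
import numpy as np

def vessel_image_pipeline(
    initial_image: np.ndarray,
    coarse_segment: Callable,          # module 310: blood vessel rough segmentation
    extract_point_features: Callable,  # module 320: point cloud feature extraction preprocessing
    downsample: Callable,              # module 320: down-sampling with one parameter
    segment_model: Callable,           # module 330: pre-trained blood vessel segmentation model
    name_segments: Callable,           # module 340: centerline segment naming
    downsample_params: Sequence[int] = (2048, 512, 128),  # assumed target point counts
):
    # Module 310: rough segmentation of the initial blood vessel image.
    coarse_mask = coarse_segment(initial_image)

    # Module 320: point cloud generation and initial feature extraction,
    # then down-sampling of the initial features with each parameter.
    points, init_feats = extract_point_features(coarse_mask)
    ds_feats = [downsample(points, init_feats, p) for p in downsample_params]

    # Module 330: the pre-trained segmentation model consumes all down-sampled
    # feature sets and returns a per-point vessel segment label map.
    segment_image = segment_model(ds_feats)

    # Module 340: name each blood vessel centerline from the coarse mask and the labels.
    return name_segments(coarse_mask, segment_image)
```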
On the basis of the foregoing embodiments, optionally, the blood vessel rough segmentation image includes a coronary artery main blood vessel segmentation image;
accordingly, the blood vessel rough segmentation processing module 310 includes:
an initial blood vessel image acquisition unit for acquiring an initial blood vessel image containing a coronary artery main blood vessel;
and the coronary artery main vessel segmentation image obtaining unit is used for inputting the initial vessel image into a vessel rough segmentation model trained in advance to obtain a coronary artery main vessel segmentation image of the initial vessel image.
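The rough segmentation unit amounts to one forward pass through the pre-trained rough segmentation model. A minimal sketch is given below; it assumes a 3D network with a single-channel sigmoid output and a 0.5 threshold for obtaining the binary coronary artery main vessel mask, neither of which is specified by the publication.

```python
import torch

@torch.no_grad()
def coarse_vessel_mask(rough_model: torch.nn.Module,
                       volume: torch.Tensor,
                       threshold: float = 0.5) -> torch.Tensor:
    """Apply a pre-trained rough segmentation network to a (D, H, W) volume and
    threshold its output into a binary coronary main-vessel mask (assumed scheme)."""
    rough_model.eval()
    logits = rough_model(volume[None, None])   # add batch and channel dimensions
    prob = torch.sigmoid(logits)[0, 0]         # back to (D, H, W)
    return (prob > threshold).to(torch.uint8)
```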
On the basis of the foregoing embodiments, optionally, the feature extraction preprocessing module 320 includes:
the point cloud data generating unit is used for carrying out point cloud processing on the blood vessel rough segmentation image to generate point cloud data of the blood vessel rough segmentation image;
and the point cloud characteristic acquisition unit is used for extracting the characteristics of the point cloud data to obtain the initial point cloud characteristics of the point cloud data.
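One plausible reading of these two units, assuming the rough segmentation image is a binary volume: the coordinates of the foreground voxels become the point cloud data, and a small shared per-point MLP, a stand-in chosen here because the publication does not specify the feature extractor, produces the initial point cloud features.

```python
import numpy as np
import torch
import torch.nn as nn

def mask_to_point_cloud(coarse_mask: np.ndarray) -> np.ndarray:
    """Point cloud data: foreground voxel coordinates, normalized to zero mean."""
    pts = np.argwhere(coarse_mask > 0).astype(np.float32)      # (N, 3)
    return (pts - pts.mean(axis=0)) / (pts.std(axis=0) + 1e-6)

class InitialPointFeatures(nn.Module):
    """Shared per-point MLP mapping each point to an initial feature vector."""
    def __init__(self, in_dim: int = 3, feat_dim: int = 32):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:   # (N, 3) -> (N, feat_dim)
        return self.mlp(points)
```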
On the basis of the foregoing embodiments, optionally, the number of feature points obtained after down-sampling the initial point cloud features is the same as the number of feature points of the down-sampling coding module at the corresponding level;
correspondingly, the step of inputting each downsampling point cloud feature into a pre-trained blood vessel segmentation model to obtain a blood vessel segmentation image of the initial blood vessel image includes:
for the first-level down-sampling coding module, taking the down-sampled point cloud features obtained after down-sampling with the first parameter as the input data of the first-level down-sampling coding module, and outputting the result after performing feature extraction and down-sampling on the input data;
for the at least one intermediate-level down-sampling coding module, respectively taking the down-sampled point cloud features obtained after down-sampling with the corresponding parameter as the first input data of the down-sampling coding module at the corresponding level, taking the output data of the down-sampling coding module at the previous level as the second input data of the down-sampling coding module at the current level, and outputting the result after performing feature extraction and down-sampling on the first input data and the second input data;
for the down-sampling coding module at the last level, taking the output data of the down-sampling coding module at the previous level as the input data of the down-sampling coding module at the current level, and outputting the result after performing feature extraction and down-sampling on the input data;
for the up-sampling decoding module at the first level, taking the output data of the down-sampling coding module at the current level as the input data of the up-sampling decoding module at the current level, and outputting the result after performing feature processing and up-sampling on the input data;
and for the up-sampling decoding modules at the other levels, respectively taking the output data of the down-sampling coding module at the current level as the first input data of the up-sampling decoding module at the current level, taking the decoded output data of the previous level as the second input data of the up-sampling decoding module at the current level, and outputting the result after performing feature processing and up-sampling on the first input data and the second input data, so as to obtain the blood vessel segmentation image of the initial blood vessel image.
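The level-to-level data flow described above (and repeated in claim 4) can be illustrated with the PyTorch sketch below. Only the way data moves between levels follows the text; the block internals (plain linear layers, subsampling by slicing, up-sampling by repetition), the feature widths, the stride of 2, the point counts 1024/512/256 and the class count of 19 are all naive assumptions made for the example.

```python
import torch
import torch.nn as nn

class DownBlock(nn.Module):
    """Down-sampling coding module: optional fusion of two inputs,
    feature extraction, then subsampling by a preset stride."""
    def __init__(self, in_dim, out_dim, stride=2):
        super().__init__()
        self.extract = nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU())
        self.stride = stride

    def forward(self, first, second=None):            # point features shaped (N, C)
        x = first if second is None else torch.cat([first, second], dim=-1)
        return self.extract(x)[::self.stride]         # slicing stands in for the down-sampling layer

class UpBlock(nn.Module):
    """Up-sampling decoding module: fuse the skip connection with the previous
    decoder output, then up-sample (the last level only applies its MLP/fusion)."""
    def __init__(self, in_dim, out_dim, stride=2, last_level=False):
        super().__init__()
        self.fuse = nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU())
        self.stride, self.last_level = stride, last_level

    def forward(self, skip, prev=None):
        x = skip if prev is None else torch.cat([skip, prev], dim=-1)
        y = self.fuse(x)
        return y if self.last_level else y.repeat_interleave(self.stride, dim=0)

F, C, NUM_CLASSES = 32, (64, 128, 256, 512), 19       # assumed feature widths / class count
enc = nn.ModuleList([
    DownBlock(F, C[0]),                # first level: first-parameter features only
    DownBlock(F + C[0], C[1]),         # intermediate levels: ds features + previous output
    DownBlock(F + C[1], C[2]),
    DownBlock(C[2], C[3]),             # last level: previous output only
])
dec = nn.ModuleList([
    UpBlock(C[3], C[2]),               # first decoder level: deepest encoder output only
    UpBlock(C[2] + C[2], C[1]),        # other levels: encoder skip + previous decoder output
    UpBlock(C[1] + C[1], C[0]),
    UpBlock(C[0] + C[0], NUM_CLASSES, last_level=True),
])

# Toy down-sampled point cloud feature sets (point counts 1024 / 512 / 256).
ds = [torch.randn(n, F) for n in (1024, 512, 256)]

e0 = enc[0](ds[0])          # (512, 64)
e1 = enc[1](ds[1], e0)      # (256, 128)
e2 = enc[2](ds[2], e1)      # (128, 256)
e3 = enc[3](e2)             # (64, 512)

d0 = dec[0](e3)             # (128, 256)
d1 = dec[1](e2, d0)         # (256, 128)
d2 = dec[2](e1, d1)         # (512, 64)
scores = dec[3](e0, d2)     # (512, NUM_CLASSES) per-point segment scores
# Mapping the per-point scores back to the full point cloud (e.g. by interpolation) is omitted.
```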
On the basis of the foregoing embodiments, optionally, the down-sampling coding module at any level in the blood vessel segmentation model comprises a multi-layer perceptron layer, a down-sampling layer and at least one feature extraction layer; the multi-layer perceptron layer is used for adjusting the feature dimension of the point cloud features; the feature extraction layer is used for performing feature extraction on the initial point cloud features or the down-sampled point cloud features corresponding to the current level and on the output data of the down-sampling coding module at the previous level; and the down-sampling layer is used for performing down-sampling with a preset stride on the extracted features;
the last level of the up-sampling decoding module comprises up-sampling decoding layers of any level, including a plurality of layers of sensing layers and at least one layer of feature fusion layer; the up-sampling coding modules of other levels except the last level comprise an up-sampling layer and at least one feature fusion layer; the multi-layer sensing layer is used for adjusting the feature dimension of the fusion feature output by the previous layer; the feature fusion layer is used for performing feature fusion on output data of an up-sampling decoding module of an upper layer and output data of a down-sampling coding module of a current layer; and the up-sampling layer is used for performing up-sampling processing of a preset step on the fused features.
On the basis of the foregoing embodiments, optionally, the apparatus further includes:
the category weight acquisition module is used for acquiring the category of each blood vessel segment and the category weight corresponding to each category in the process of training the blood vessel segmentation model;
and the loss function determining module is used for determining a loss function of the vessel segmentation model based on the weight of each category, and performing iterative training on the vessel segmentation model based on the loss function until the trained vessel segmentation model is obtained.
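The publication states only that the category weights follow the frequency of the vessel segment categories; inverse-frequency weights fed to a standard weighted cross-entropy criterion are one straightforward realization, sketched below with illustrative sizes.

```python
import torch
import torch.nn as nn

def class_weights_from_frequency(labels: torch.Tensor, num_classes: int) -> torch.Tensor:
    """Inverse-frequency category weights: rarer vessel segments get larger weights."""
    counts = torch.bincount(labels.flatten(), minlength=num_classes).float()
    weights = counts.sum() / (counts + 1.0)   # +1 guards against empty categories
    return weights / weights.mean()           # normalize around 1 for stable scaling

# Usage sketch with per-point logits (N, num_classes) and integer labels (N,).
num_classes = 19                              # illustrative segment-category count
labels = torch.randint(0, num_classes, (4096,))
logits = torch.randn(4096, num_classes)
criterion = nn.CrossEntropyLoss(weight=class_weights_from_frequency(labels, num_classes))
loss = criterion(logits, labels)
```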
On the basis of the foregoing embodiments, optionally, the segment name determining module 340 includes:
a blood vessel centerline determining unit, configured to determine, after the blood vessel segmentation image of the initial blood vessel image is obtained, the blood vessel centerline of each coronary artery main vessel in the blood vessel rough segmentation image;
and a segment name determining unit, configured to determine, for any blood vessel centerline, the degree of coincidence between the current blood vessel centerline and each blood vessel segment in the blood vessel segmentation image, and to determine the segment name of each blood vessel centerline based on the degrees of coincidence.
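A plausible implementation of the coincidence-degree rule: count, for each centerline, how many of its points fall inside each labelled vessel segment of the segmentation result and assign the name of the segment with the largest overlap. Voxel-label lookup is only one of several reasonable overlap measures, and the label-to-name mapping below is an illustrative assumption.

```python
import numpy as np

def name_centerline(centerline_points: np.ndarray,
                    segment_volume: np.ndarray,
                    segment_names: dict) -> str:
    """centerline_points: (M, 3) integer voxel coordinates of one vessel centerline.
    segment_volume: labelled volume predicted by the segmentation model (0 = background).
    segment_names: mapping from segment label to name, e.g. {1: "LAD", 2: "LCX", ...}.
    """
    idx = centerline_points.astype(int)
    labels = segment_volume[idx[:, 0], idx[:, 1], idx[:, 2]].astype(np.int64)
    labels = labels[labels > 0]
    if labels.size == 0:
        return "unlabelled"
    counts = np.bincount(labels)                    # coincidence degree per segment label
    return segment_names.get(int(counts.argmax()), "unlabelled")
```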
The blood vessel image processing apparatus provided by the embodiment of the present invention can execute the blood vessel image processing method provided by any embodiment of the present invention, and has functional modules and beneficial effects corresponding to the executed method.
EXAMPLE IV
FIG. 7 illustrates a schematic diagram of an electronic device 10 that may be used to implement an embodiment of the invention. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
As shown in Fig. 7, the electronic device 10 includes at least one processor 11 and a memory communicatively connected to the at least one processor 11, such as a read-only memory (ROM) 12 and a random access memory (RAM) 13. The memory stores a computer program executable by the at least one processor, and the processor 11 can perform various suitable actions and processes according to the computer program stored in the ROM 12 or the computer program loaded from a storage unit 18 into the RAM 13. The RAM 13 can also store various programs and data necessary for the operation of the electronic device 10. The processor 11, the ROM 12, and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to the bus 14.
A number of components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, or the like; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, an optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The processor 11 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of processor 11 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, or the like. The processor 11 performs the various methods and processes described above, such as a blood vessel image processing method.
In some embodiments, the blood vessel image processing method may be implemented as a computer program tangibly embodied in a computer-readable storage medium, such as the storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into the RAM 13 and executed by the processor 11, one or more steps of the blood vessel image processing method described above may be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform the blood vessel image processing method by any other suitable means (e.g. by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: being implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for implementing the methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be performed. A computer program can execute entirely on a machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. A computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user may provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), blockchain networks, and the internet.
The computing system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or a cloud host, which is a host product in a cloud computing service system and overcomes the defects of difficult management and weak service scalability in traditional physical hosts and VPS (Virtual Private Server) services.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present invention may be executed in parallel, sequentially, or in different orders, and are not limited herein as long as the desired results of the technical solution of the present invention can be achieved.
The above-described embodiments should not be construed as limiting the scope of the invention. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A blood vessel image processing method, characterized by comprising:
acquiring an initial blood vessel image, and performing blood vessel rough segmentation processing on the initial blood vessel image to obtain a blood vessel rough segmentation image of the initial blood vessel image;
performing point cloud feature extraction preprocessing on the blood vessel rough segmentation image to generate initial point cloud features of the blood vessel rough segmentation image, and performing down-sampling processing with different parameters on the initial point cloud features to obtain down-sampled point cloud features of the different down-sampling parameters;
inputting each down-sampling point cloud feature into a pre-trained blood vessel segmentation model to obtain a blood vessel segmentation image of the initial blood vessel image; the vessel segmentation model comprises at least one preset feature extraction layer, at least one hierarchical downsampling coding module and at least one hierarchical upsampling decoding module; the down-sampling times of the down-sampling coding module and the up-sampling times of the up-sampling decoding module are the same as the number of layers of the preset feature extraction layer;
and determining the segment name of each blood vessel centerline in the initial blood vessel image based on the blood vessel rough segmentation image and the blood vessel segmentation image.
2. The method of claim 1, wherein the vessel coarse segmentation image comprises a coronary artery main vessel segmentation image;
correspondingly, the obtaining an initial blood vessel image, performing a blood vessel rough segmentation process on the initial blood vessel image, and obtaining a blood vessel rough segmentation image of the initial blood vessel image includes:
acquiring an initial vessel image containing a coronary artery main vessel;
and inputting the initial blood vessel image into a pre-trained blood vessel rough segmentation model to obtain a coronary artery main blood vessel segmentation image of the initial blood vessel image.
3. The method according to claim 1, wherein the preprocessing of point cloud feature extraction on the blood vessel rough segmentation image to generate initial point cloud features of the blood vessel rough segmentation image comprises:
performing point cloud processing on the blood vessel rough segmentation image to generate point cloud data of the blood vessel rough segmentation image;
and extracting the characteristics of the point cloud data to obtain the initial point cloud characteristics of the point cloud data.
4. The method of claim 1, wherein the number of feature points obtained after down-sampling the initial point cloud features is the same as the number of feature points of the down-sampling coding module at the corresponding level;
correspondingly, the step of inputting each downsampling point cloud feature into a pre-trained blood vessel segmentation model to obtain a blood vessel segmentation image of the initial blood vessel image includes:
for the first-level down-sampling coding module, taking the down-sampled point cloud features obtained after down-sampling with the first parameter as the input data of the first-level down-sampling coding module, and outputting the result after performing feature extraction and down-sampling on the input data;
for the at least one intermediate-level down-sampling coding module, respectively taking the down-sampled point cloud features obtained after down-sampling with the corresponding parameter as the first input data of the down-sampling coding module at the corresponding level, taking the output data of the down-sampling coding module at the previous level as the second input data of the down-sampling coding module at the current level, and outputting the result after performing feature extraction and down-sampling on the first input data and the second input data;
for the down-sampling coding module at the last level, taking the output data of the down-sampling coding module at the previous level as the input data of the down-sampling coding module at the current level, and outputting the result after performing feature extraction and down-sampling on the input data;
for the up-sampling decoding module at the first level, taking the output data of the down-sampling coding module at the current level as the input data of the up-sampling decoding module at the current level, and outputting the result after performing feature processing and up-sampling on the input data;
and for the up-sampling decoding modules at the other levels, respectively taking the output data of the down-sampling coding module at the current level as the first input data of the up-sampling decoding module at the current level, taking the decoded output data of the previous level as the second input data of the up-sampling decoding module at the current level, and outputting the result after performing feature processing and up-sampling on the first input data and the second input data, so as to obtain the blood vessel segmentation image of the initial blood vessel image.
5. The method of claim 1,
the down-sampling coding module at any level in the blood vessel segmentation model comprises a multi-layer perceptron layer, a down-sampling layer and at least one feature extraction layer; the multi-layer perceptron layer is used for adjusting the feature dimension of the point cloud features; the feature extraction layer is used for performing feature extraction on the initial point cloud features or the down-sampled point cloud features corresponding to the current level and on the output data of the down-sampling coding module at the previous level; and the down-sampling layer is used for performing down-sampling with a preset stride on the extracted features;
the last level of the up-sampling decoding module comprises a plurality of layers of sensing layers and at least one layer of feature fusion layer; the up-sampling coding modules of other levels except the last level comprise an up-sampling layer and at least one feature fusion layer; the multi-layer sensing layer is used for adjusting the feature dimension of the fusion feature output by the previous layer; the feature fusion layer is used for performing feature fusion on output data of an up-sampling decoding module of an upper layer and output data of a down-sampling coding module of a current layer; and the up-sampling layer is used for performing up-sampling processing of a preset step on the fused features.
6. The method of claim 1, further comprising:
in the process of training the blood vessel segmentation model, obtaining the category of each blood vessel segment and the category weight corresponding to each category;
and determining a loss function of the vessel segmentation model based on the weights of all classes, and performing iterative training on the vessel segmentation model based on the loss function until a trained vessel segmentation model is obtained.
7. The method according to claim 1, wherein the determining a segment name of a vessel centerline in the initial vessel image based on the vessel rough segmentation image and the vessel segmentation image comprises:
determining the blood vessel centerline of each blood vessel in the blood vessel rough segmentation image;
for any blood vessel centerline, determining the degree of coincidence between the current blood vessel centerline and each blood vessel segment in the blood vessel segmentation image, and determining the segment name of each blood vessel centerline based on the degrees of coincidence.
8. A blood vessel image processing apparatus characterized by comprising:
the blood vessel rough segmentation processing module is used for acquiring an initial blood vessel image, performing blood vessel rough segmentation processing on the initial blood vessel image and acquiring a blood vessel rough segmentation image of the initial blood vessel image;
the characteristic extraction preprocessing module is used for carrying out point cloud characteristic extraction preprocessing on the blood vessel rough segmentation image to generate initial point cloud characteristics of the blood vessel rough segmentation image, and carrying out down-sampling processing on the initial point cloud characteristics with different parameters to obtain down-sampling point cloud characteristics with different down-sampling parameters;
a vessel segmentation image obtaining module, configured to input each of the downsampled point cloud features to a vessel segmentation model trained in advance, so as to obtain a vessel segmentation image of the initial vessel image; the vessel segmentation model comprises at least one preset feature extraction layer, at least one hierarchical downsampling coding module and at least one hierarchical upsampling decoding module; the down-sampling times of the down-sampling coding module and the up-sampling times of the up-sampling decoding module are the same as the number of layers of the preset feature extraction layer;
and the segment name determining module is used for determining the segment name of each blood vessel centerline in the initial blood vessel image based on the blood vessel rough segmentation image and the blood vessel segmentation image.
9. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the blood vessel image processing method of any one of claims 1-7.
10. A computer-readable storage medium storing computer instructions for causing a processor to implement the blood vessel image processing method according to any one of claims 1 to 7 when executed.
CN202210655838.9A 2022-06-10 2022-06-10 Blood vessel image processing method and device, electronic equipment and storage medium Pending CN114998273A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210655838.9A CN114998273A (en) 2022-06-10 2022-06-10 Blood vessel image processing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210655838.9A CN114998273A (en) 2022-06-10 2022-06-10 Blood vessel image processing method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114998273A true CN114998273A (en) 2022-09-02

Family

ID=83032207

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210655838.9A Pending CN114998273A (en) 2022-06-10 2022-06-10 Blood vessel image processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114998273A (en)

Similar Documents

Publication Publication Date Title
CN115018805A (en) Segmentation model training method, image segmentation method, device, equipment and medium
CN115409990B (en) Medical image segmentation method, device, equipment and storage medium
CN113971728B (en) Image recognition method, training method, device, equipment and medium for model
CN114972221A (en) Image processing method and device, electronic equipment and readable storage medium
CN116245832B (en) Image processing method, device, equipment and storage medium
CN114972220B (en) Image processing method and device, electronic equipment and readable storage medium
CN114419327B (en) Image detection method and training method and device of image detection model
CN114998273A (en) Blood vessel image processing method and device, electronic equipment and storage medium
CN115482261A (en) Blood vessel registration method, device, electronic equipment and storage medium
CN115131390A (en) Image segmentation method, image segmentation device, electronic equipment and storage medium
CN114926322A (en) Image generation method and device, electronic equipment and storage medium
CN114943995A (en) Training method of face recognition model, face recognition method and device
CN114820686B (en) Matting method and device, electronic equipment and storage medium
CN115690143B (en) Image segmentation method, device, electronic equipment and storage medium
CN114663372B (en) Video-based focus classification method and device, electronic equipment and medium
CN116051935B (en) Image detection method, training method and device of deep learning model
CN116664951A (en) Training method of image classification model, image classification method and device
CN115760864A (en) Image segmentation method and device, electronic equipment and storage medium
CN117747120A (en) Prediction method, device, equipment and medium based on multi-example learning
CN116580050A (en) Medical image segmentation model determination method, device, equipment and medium
CN116452915A (en) Image processing method, device, electronic equipment and storage medium
CN117727031A (en) Image processing method, device, equipment and storage medium based on time sequence relation
CN114419068A (en) Medical image segmentation method, device, equipment and storage medium
CN115861255A (en) Model training method, device, equipment, medium and product for image processing
CN115861623A (en) Blood vessel image segmentation method, device, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination