CN115482372A - Blood vessel center line extraction method and device and electronic equipment - Google Patents

Blood vessel center line extraction method and device and electronic equipment

Info

Publication number
CN115482372A
CN115482372A (application number CN202211194661.3A)
Authority
CN
China
Prior art keywords
centerline
determining
candidate
point
vessel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211194661.3A
Other languages
Chinese (zh)
Inventor
刘宇航
丁佳
吕晨翀
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Yizhun Medical AI Co Ltd
Original Assignee
Beijing Yizhun Medical AI Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Yizhun Medical AI Co Ltd filed Critical Beijing Yizhun Medical AI Co Ltd
Priority to CN202211194661.3A
Publication of CN115482372A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/22: Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G06V10/40: Extraction of image or video features
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G06V2201/00: Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03: Recognition of patterns in medical or anatomical images

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The present application provides a blood vessel centerline extraction method and device, and an electronic device. The method includes: acquiring a feature map of a medical image; determining, based on the feature map of the medical image, candidate regions containing the vessel centerline; determining, for each candidate region, the candidate centerline points within the candidate region; and determining the vessel centerline based on the candidate centerline points within all candidate regions. Because the vessel centerline is predicted from a point set, the method can improve the accuracy of vessel centerline extraction.

Description

Blood vessel center line extraction method and device and electronic equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method and an apparatus for extracting a blood vessel center line, and an electronic device.
Background
Extraction of the coronary artery centerline is an important step in coronary image analysis, and accurate, fast centerline extraction is of great significance for the analysis and diagnosis of coronary disease. However, coronary arteries have a complex vessel-tree structure, large inter-individual variation, long vessel lengths, and many small vessels, and traditional vessel segmentation methods extract such small vessels poorly. Conventional vessel segmentation is therefore not well suited to the coronary vessel centerline extraction task.
Disclosure of Invention
The embodiment of the application provides a method and a device for extracting a blood vessel center line and electronic equipment.
The technical scheme of the embodiment of the application is realized as follows:
in a first aspect, an embodiment of the present application provides a blood vessel centerline extraction method, including:
acquiring a feature map of a medical image;
determining a candidate region containing a blood vessel central line based on the feature map of the medical image;
for each candidate region, determining candidate centerline points within the candidate region;
the vessel centerline is determined based on the candidate centerline points within all candidate regions.
In the above solution, the determining the candidate region including the centerline of the blood vessel based on the feature map of the medical image includes:
for each feature point on the feature map, determining an image area corresponding to the feature point on the medical image;
predicting a first probability that the feature points contain the vessel centerline according to a first linear layer;
determining the image region as the candidate region if the first probability is greater than a first threshold.
In the above solution, for each candidate region, determining a candidate centerline point in the candidate region includes:
determining a first number of predicted centerline points for each candidate region;
for each predicted centerline point, if a second probability of the predicted centerline point on the vessel centerline is greater than a second threshold, determining the predicted centerline point as the candidate centerline point.
In the above solution, the determining a first number of predicted centerline points for each candidate region includes:
for the candidate region, determining a corresponding feature point of the candidate region on the feature map;
for each feature point, determining a second probability of each predicted centerline point on the vessel centerline and a coordinate offset for each predicted centerline point of a first number of predicted centerline points.
In the above solution, the determining, for each feature point, a second probability of each predicted centerline point on the centerline of the blood vessel and a coordinate offset of each predicted centerline point in the first number of predicted centerline points includes:
determining the coordinates of the center point of the candidate area;
performing second processing on the feature points based on a second linear layer to obtain a second probability of each predicted centerline point in the first number of predicted centerline points on the blood vessel centerline;
and performing third processing on the characteristic points based on a third linear layer to obtain the coordinate offset of each predicted centerline point in the first number of predicted centerline points relative to the center point coordinate.
In the above solution, the determining the vessel centerline based on the candidate centerline points in all the candidate regions includes:
merging the candidate centerline points in all the candidate regions to obtain a centerline point set;
determining outliers in the set of centerline points;
determining the vessel centerline based on the set of centerline points excluding outliers.
In the above solution, the determining the outlier in the centerline point set includes:
determining all connected components of the set of centerline points;
for each connected component, if the number of centerline points included in the connected component is less than a third threshold, determining that the centerline points included in the connected component are outliers.
In a second aspect, an embodiment of the present application provides a method for training a vessel centerline extraction model, including:
training a vessel center line extraction model based on the medical image in the training set and the first neural network to obtain a second number of vessel center line extraction models;
and performing model verification on each blood vessel center line extraction model based on the medical images in the verification set, and determining the model with the highest accuracy as the target blood vessel center line extraction model.
In the above scheme, the training of the vessel centerline extraction model based on the medical image in the training set and the first neural network to obtain a second number of vessel centerline extraction models includes:
determining candidate centerline points of the vessel centerline in each medical image based on the first neural network and a third number of medical images in the training set;
updating parameters of the blood vessel centerline extraction model based on the difference between the candidate centerline point and the blood vessel centerline point labeled in the corresponding medical image;
completing one round of model training when all the medical images in the training set have been trained;
obtaining a corresponding blood vessel center line extraction model after completing a fourth number of rounds of model training;
and training the model cyclically until the second number of blood vessel center line extraction models are obtained.
In the above solution, the determining candidate centerline points of the blood vessel centerline in each medical image based on the first neural network and a third number of medical images in the training set includes:
determining a feature map of each medical image based on the first neural network;
determining, for the feature map of each medical image, a candidate region containing a blood vessel central line;
for each candidate region, candidate centerline points within the candidate region are determined.
In a third aspect, an embodiment of the present application provides a blood vessel centerline extraction device, including:
the characteristic map acquisition module is used for acquiring a characteristic map of the medical image;
the candidate region determining module is used for determining a candidate region containing a blood vessel central line based on the feature map of the medical image;
a candidate centerline point determining module, configured to determine, for each candidate region, a candidate centerline point within the candidate region;
a vessel centerline determination module for determining the vessel centerline based on the candidate centerline points within all the candidate regions.
In a fourth aspect, an embodiment of the present application provides a vessel centerline extraction model training device, where the device includes:
the model training module is used for training the blood vessel central line extraction model based on the medical images in the training set and the first neural network to obtain a second number of blood vessel central line extraction models;
and the model verification module is used for performing, for each blood vessel central line extraction model, model verification based on the medical images in the verification set, and determining the model with the highest accuracy as the target blood vessel central line extraction model.
In a fifth aspect, an embodiment of the present application provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform a vessel centerline extraction or vessel centerline extraction model training method provided by the embodiments of the present application.
In a fourth aspect, the present application provides a computer-readable storage medium, where the storage medium includes a set of computer-executable instructions, and when the instructions are executed, the instructions are used to execute the vessel centerline extraction method or the vessel centerline extraction model training method provided in the present application.
The blood vessel centerline extraction method provided by the embodiments of the present application acquires a feature map of a medical image; determines candidate regions containing the vessel centerline based on the feature map of the medical image; determines, for each candidate region, the candidate centerline points within the candidate region; and determines the vessel centerline based on the candidate centerline points within all candidate regions. By characterizing the centerline flexibly as a point set, predicting candidate centerline points within each candidate region, and forming the final vessel centerline prediction from the candidate centerline points of all candidate regions, the method can effectively capture small vessel structures, avoids the non-unique-solution problem that arises when the vessel centerline is derived from a traditional vessel segmentation, and improves the accuracy of vessel centerline extraction.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
fig. 1 is a schematic processing flow diagram of an alternative blood vessel centerline extraction method provided in the embodiment of the present application;
FIG. 2 is a schematic view of an alternative processing flow of a vessel centerline extraction model training method according to an embodiment of the present application;
FIG. 3 is a schematic view of an alternative structure of a blood vessel centerline extraction device provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of an alternative structure of a vessel centerline extraction model training device according to an embodiment of the present application;
fig. 5 is a schematic block diagram of an alternative electronic device provided in an embodiment of the present application.
Detailed Description
To make the purpose, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings. The described embodiments should not be considered as limiting the present application, and all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
In the following description, the terms "first", "second", and the like are used only to distinguish similar objects and do not denote a particular order. Where permissible, the order may be interchanged, so that the embodiments of the present application described herein can be practiced in sequences other than those illustrated or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
A blood vessel centerline extraction method provided in an embodiment of the present application is described below with reference to fig. 1, a schematic diagram of an optional processing flow of the method; the description follows steps S101 to S104 shown in fig. 1.
And step S101, acquiring a feature map of the medical image.
In some embodiments, a medical image, such as a cardiac CTA (Computed Tomography Angiography) image, is input into a neural network to obtain a feature map of the medical image, where each feature point in the feature map corresponds to a region of the medical image. The neural network may be a residual network such as ResNet-50 (with a depth of 50 layers) or another network, and feature extraction may be performed on the medical image with the down-sampling factor set to 4.
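As an illustration of this step, a minimal PyTorch sketch is given below. It is a 2D simplification (a cardiac CTA volume would use a 3D variant), and the class and variable names are illustrative assumptions rather than names taken from this application.

```python
import torch
import torch.nn as nn
import torchvision

class CenterlineBackbone(nn.Module):
    """ResNet-50 trunk truncated after layer1 so the total downsampling factor is 4."""
    def __init__(self):
        super().__init__()
        resnet = torchvision.models.resnet50(weights=None)
        # conv1 (stride 2) + maxpool (stride 2) give an overall stride of 4.
        self.stem = nn.Sequential(resnet.conv1, resnet.bn1, resnet.relu, resnet.maxpool)
        self.layer1 = resnet.layer1  # keeps stride 4, outputs 256 channels

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # image: (N, 3, H, W) -> feature map: (N, 256, H/4, W/4),
        # so each feature point corresponds to a 4x4 region of the input image.
        return self.layer1(self.stem(image))
```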
Step S102, based on the feature map of the medical image, determining a candidate region containing the center line of the blood vessel.
In some embodiments, each feature point on the feature map corresponds to an image region in the medical image. A first linear layer of the neural network may be used to predict, for each feature point, a first probability that the corresponding region contains the vessel centerline. If the first probability corresponding to a feature point is greater than a first threshold, the image region corresponding to that feature point in the medical image is determined to be a candidate region. The first threshold may be set in advance according to actual conditions, for example 0.5.
Since most regions of a medical image do not contain the vessel centerline, predicting centerline points over the entire image would incur considerable and largely wasted computation. By selecting only a subset of candidate regions in the medical image, the inference speed of vessel centerline extraction is effectively improved and the amount of computation is reduced.
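A minimal sketch of this region-selection step is shown below, assuming the 256-channel feature map produced by the backbone sketch above; the module name and the default threshold are illustrative.

```python
import torch
import torch.nn as nn

class CandidateRegionSelector(nn.Module):
    """Scores each feature point with a 'first linear layer' and keeps those above a threshold."""
    def __init__(self, in_channels: int = 256):
        super().__init__()
        self.first_linear = nn.Linear(in_channels, 1)

    def forward(self, feat: torch.Tensor, first_threshold: float = 0.5):
        # feat: (N, C, H, W); every feature point corresponds to one image region.
        n, c, h, w = feat.shape
        points = feat.permute(0, 2, 3, 1).reshape(-1, c)              # (N*H*W, C)
        first_prob = torch.sigmoid(self.first_linear(points)).view(n, h, w)
        is_candidate = first_prob > first_threshold                   # mask of candidate regions
        return is_candidate, first_prob
```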
Step S103, for each candidate region, determining candidate centerline points in the candidate region.
In some embodiments, for each candidate region, a first number of predicted centerline points within the candidate region may be predicted, and, for each predicted centerline point, a second probability that the point lies on the vessel centerline and a coordinate offset may be determined. A predicted centerline point whose second probability exceeds a second threshold is taken as a candidate centerline point.
The process of predicting the first number of predicted centerline points in each candidate region may be as follows:
determining the coordinates of the center point of each candidate region on the medical image and the feature point corresponding to each candidate region on the feature map; applying a second linear layer of the neural network to the feature point corresponding to the candidate region, for example performing a feature transformation followed by a Sigmoid function, to obtain, for each of the first number of predicted centerline points, a second probability that the point lies on the vessel centerline; and applying a third linear layer of the neural network to the feature point, for example performing another feature transformation, to obtain, for each of the first number of predicted centerline points, a three-dimensional coordinate offset relative to the center-point coordinates. The second linear layer and the third linear layer may process the feature point in parallel, so that each predicted centerline point is assigned both a second probability and a corresponding coordinate offset.
Because the vessel boundary, width, and similar properties are difficult to compute accurately, and because coronary arteries have a complex vessel-tree structure, large inter-individual variation, long vessel lengths, and many small vessels, centerline points are hard to obtain accurately by direct geometric computation. Predicting, for each candidate region, the coordinate offsets of the predicted centerline points relative to the region's center point, rather than classifying every pixel as a traditional image segmentation model does, captures fine structure, offers good flexibility, and improves the extraction of small vessels.
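The two heads described above can be sketched as follows, under the same assumptions as the previous snippets; K (the "first number") and the default second threshold are illustrative values, while the three-dimensional offsets follow the text.

```python
import torch
import torch.nn as nn

class CenterlinePointHead(nn.Module):
    """Predicts, for each candidate region, K point probabilities and K 3D coordinate offsets."""
    def __init__(self, in_channels: int = 256, num_points: int = 16):
        super().__init__()
        self.num_points = num_points                                   # the "first number" K
        self.second_linear = nn.Linear(in_channels, num_points)        # -> second probabilities
        self.third_linear = nn.Linear(in_channels, num_points * 3)     # -> (dx, dy, dz) offsets

    def forward(self, region_feat: torch.Tensor, center_xyz: torch.Tensor,
                second_threshold: float = 0.5):
        # region_feat: (M, C) features of the M selected candidate regions
        # center_xyz:  (M, 3) center-point coordinates of those regions in image space
        second_prob = torch.sigmoid(self.second_linear(region_feat))   # (M, K)
        offsets = self.third_linear(region_feat).view(-1, self.num_points, 3)
        coords = center_xyz.unsqueeze(1) + offsets                     # absolute predicted coordinates
        keep = second_prob > second_threshold                          # candidate centerline points
        return coords, second_prob, keep
```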
Step S104, determining the vessel centerline based on the candidate centerline points in all the candidate regions.
In some embodiments, the candidate centerline points of all candidate regions are merged to obtain a centerline point set. This predicted point set may contain outliers: when the goal is the centerline of the arterial vessels, the set may, for example, also include centerline points of venous vessels or key points of other structures. To obtain only the centerline points of the desired arterial vessels, a post-processing step is designed to remove these outliers, as follows:
First, all connected components of the centerline point set are determined. Two centerline points are considered connected if the Euclidean distance between them is smaller than a preset distance threshold.
Second, outliers are determined. The number of centerline points in each connected component is counted. Since outliers are relatively few, the connected components they form contain only a small number of centerline points, so all centerline points belonging to a connected component whose size is below a third threshold may be directly marked as outliers.
Finally, the vessel centerline is determined based on the centerline point set with outliers excluded, that is, from the centerline points that remain after all outliers have been removed from the set.
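A sketch of this post-processing is given below, using SciPy's connected-components routine on a Euclidean-distance adjacency; the distance threshold and the "third threshold" values are illustrative, not values stated in this application.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components
from scipy.spatial.distance import cdist

def remove_outlier_points(points: np.ndarray,
                          distance_threshold: float = 2.0,
                          third_threshold: int = 10) -> np.ndarray:
    """points: (P, 3) merged candidate centerline points; returns the points kept."""
    # Two points are connected if their Euclidean distance is below the distance threshold.
    adjacency = cdist(points, points) < distance_threshold
    _, labels = connected_components(csr_matrix(adjacency), directed=False)
    keep = np.zeros(len(points), dtype=bool)
    for component in np.unique(labels):
        member = labels == component
        # Components with fewer points than the third threshold are treated as outliers.
        if member.sum() >= third_threshold:
            keep |= member
    return points[keep]
```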
A method for training a blood vessel centerline extraction model provided in an embodiment of the present application is described below with reference to fig. 2, a schematic diagram of an optional processing flow of the training method; the description follows steps 201 and 202 shown in fig. 2.
Step 201, training the blood vessel center line extraction model based on the medical image and the first neural network in the training set to obtain a second number of blood vessel center line extraction models.
In some embodiments, the collected medical images may be divided into a training set, a validation set, and a test set at a ratio of 8:1:1. For example, if 3000 coronary CTA images are collected, 2400 of them are randomly selected as the training set for model training, 300 images form the validation set used to select the best-performing model, and the remaining 300 images form the test set used to evaluate the final effect of the model.
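A minimal sketch of this 8:1:1 split is shown below; the function name and seed are illustrative.

```python
import random

def split_dataset(image_ids, seed: int = 0):
    """Randomly splits a list of study/image identifiers into train/val/test at 8:1:1."""
    ids = list(image_ids)
    random.Random(seed).shuffle(ids)
    n_train = int(0.8 * len(ids))
    n_val = int(0.1 * len(ids))
    return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]

# For 3000 collected coronary CTA studies this yields 2400 / 300 / 300 images.
```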
In some embodiments, the first neural network may be ResNet-50 or another neural network model. The third number is used as the batch size during training; if the third number is 8, then 8 medical images from the training set are fed into the first neural network at each iteration. The learning rate of the first neural network may be set to 1e-3, and the model may be trained with stochastic gradient descent.
During model training, at each iteration a feature map is extracted for each of the third number of input medical images, a candidate region containing the vessel centerline is determined based on the feature map of each medical image, and candidate centerline points are then determined within each candidate region. The parameters of the model are updated based on the difference between the determined candidate centerline points and the actual vessel centerline points labeled in the corresponding medical image, which completes one iteration.
One round of model training is completed when all the medical images in the training set have been processed.
After a fourth number of rounds of model training, the corresponding vessel centerline extraction model is obtained and saved. The fourth number may be chosen according to actual conditions, for example 5.
Training continues in this cycle until a second number of vessel centerline extraction models have been obtained. The second number may likewise be chosen according to actual conditions, for example 40; in that case the training process ends after 200 rounds of model training (40 saved models, 5 rounds each).
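Putting the schedule together, a sketch of the training loop under these settings (stochastic gradient descent, learning rate 1e-3, batch size 8, one saved model every 5 rounds, 40 saved models) might look as follows; the model's loss interface and the data loader are assumed here, since the application does not specify them.

```python
import copy
import torch

def train_centerline_models(model, train_loader,
                            rounds_per_checkpoint: int = 5,    # the "fourth number"
                            num_checkpoints: int = 40):        # the "second number"
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
    saved_models = []
    for _ in range(num_checkpoints):
        for _ in range(rounds_per_checkpoint):                 # one round = one pass over the training set
            for images, labeled_points in train_loader:       # batches of 8 medical images
                loss = model.compute_loss(images, labeled_points)  # assumed interface: loss vs. labeled centerline points
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
        saved_models.append(copy.deepcopy(model.state_dict()))
    return saved_models                                        # 40 candidate models after 200 rounds
```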
Step 202, for each blood vessel centerline extraction model, performing model verification based on the medical images in the validation set, and determining the model with the highest accuracy as the target blood vessel centerline extraction model.
In some embodiments, the second number of blood vessel centerline extraction models obtained above are validated in turn on the medical images in the validation set: the candidate centerline points output by each model are compared with the blood vessel centerline points labeled on the corresponding medical images to obtain a validation result, and the model with the highest accuracy is selected as the target blood vessel centerline extraction model.
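A sketch of this selection step is shown below; the accuracy metric is left abstract because the application does not specify how the comparison between predicted and labeled centerline points is scored.

```python
def select_target_model(model, saved_models, val_loader, accuracy_fn):
    """Loads each saved state, scores it on the validation set, and returns the best one."""
    best_state, best_accuracy = None, float("-inf")
    for state in saved_models:
        model.load_state_dict(state)
        accuracy = accuracy_fn(model, val_loader)   # compare predicted vs. labeled centerline points
        if accuracy > best_accuracy:
            best_state, best_accuracy = state, accuracy
    return best_state
```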
After the target blood vessel centerline extraction model is obtained, it can be tested on the medical images in the test set: the candidate centerline points output by the model are compared with the blood vessel centerline points labeled on the corresponding test images to obtain a test result, and if the result is within a reasonable threshold, the target blood vessel centerline extraction model meets the expected performance.
The following describes a blood vessel centerline extraction device provided in an embodiment of the present application. Referring to fig. 3, fig. 3 is an alternative structure diagram of a blood vessel centerline extraction device according to an embodiment of the present application, and the blood vessel centerline extraction device 300 includes a feature map acquisition module 301, a candidate region determination module 302, a candidate centerline point determination module 303, and a blood vessel centerline determination module 304. Wherein,
a feature map acquisition module 301, configured to acquire a feature map of a medical image;
a candidate region determining module 302, configured to determine a candidate region including a centerline of a blood vessel based on a feature map of the medical image;
a candidate centerline point determining module 303, configured to determine, for each candidate region, a candidate centerline point within the candidate region;
a vessel centerline determination module 304 for determining the vessel centerline based on the candidate centerline points within all the candidate regions.
In some embodiments, the candidate region determination module 302 is configured to: for each feature point on the feature map, determining an image area corresponding to the feature point on the medical image; predicting a first probability that the feature points contain the vessel centerline according to a first linear layer; determining the image region as the candidate region if the first probability is greater than a first threshold.
In some embodiments, the candidate centerline point determination module 303 is configured to: for each candidate region, determining a first number of predicted centerline points; for each predicted centerline point, if a second probability of the predicted centerline point on the vessel centerline is greater than a second threshold, determining the predicted centerline point as the candidate centerline point.
In some embodiments, the candidate centerline point determination module 303 is further configured to: for the candidate region, determining the corresponding feature point of the candidate region on the feature map; for each feature point, determining a second probability of each predicted centerline point on the vessel centerline and a coordinate offset of each predicted centerline point of a first number of predicted centerline points.
In some embodiments, the candidate centerline point determination module 303 is further configured to: determining the coordinates of the center point of the candidate area; performing second processing on the feature points based on a second linear layer to obtain a second probability of each predicted centerline point in the first number of predicted centerline points on the blood vessel centerline; and performing third processing on the characteristic points based on a third linear layer to obtain the coordinate offset of each predicted centerline point in the first number of predicted centerline points relative to the center point coordinate.
In some embodiments, the vessel centerline determination module 304 is to: merging the candidate centerline points in all the candidate regions to obtain a centerline point set; determining outliers in the set of centerline points; determining the vessel centerline based on the set of centerline points excluding outliers.
In some embodiments, the vessel centerline determination module 304 is further operable to: determining all connected components of the centerline point set; for each connected component, if the number of centerline points included in the connected component is less than a third threshold, determining that the centerline points included in the connected component are outliers.
Fig. 4 is a schematic diagram of an optional structure of a vessel centerline extraction model training device according to an embodiment of the present disclosure; the vessel centerline extraction model training device 400 includes a model training module 401 and a model verification module 402. Wherein,
the model training module 401 is configured to perform training of a blood vessel centerline extraction model based on the medical image in the training set and the first neural network, so as to obtain a second number of blood vessel centerline extraction models;
and the model verification module 402 is configured to perform, for each blood vessel centerline extraction model, model verification based on the medical images in the verification set, and to determine the model with the highest accuracy as the target blood vessel centerline extraction model.
In some embodiments, the model training module 401 is configured to: determine candidate centerline points of the vessel centerline in each medical image based on the first neural network and a third number of medical images in the training set; update parameters of the blood vessel centerline extraction model based on the difference between the candidate centerline points and the blood vessel centerline points labeled in the corresponding medical image; complete one round of model training when all the medical images in the training set have been trained; obtain a corresponding blood vessel centerline extraction model after completing a fourth number of rounds of model training; and train the model cyclically until the second number of blood vessel centerline extraction models are obtained.
In some embodiments, the model training module 401 is further configured to: determine a feature map of each medical image based on the first neural network; determine, for the feature map of each medical image, a candidate region containing a blood vessel centerline; and determine, for each candidate region, the candidate centerline points within the candidate region.
It should be noted that the description of the blood vessel centerline extraction device in the embodiments of the present application is similar to that of the blood vessel centerline extraction method embodiments and has similar beneficial effects, so it is not repeated in detail. Technical details of the blood vessel centerline extraction device that are not exhaustively described here can be understood from the description of any one of figs. 1 to 3.
FIG. 5 illustrates a schematic block diagram of an example electronic device 500 that can be used to implement embodiments of the present disclosure. The electronic device 500 is configured to implement the vessel centerline extraction method or the vessel centerline extraction model training method according to the embodiments of the disclosure. In some alternative embodiments, the electronic device 500 may implement these methods by running a computer program; for example, the computer program may be a software module in an operating system, a native APP (Application) that must be installed in the operating system to run, an applet that only needs to be downloaded into a browser environment to run, or an applet that can be embedded into any APP. In general, the computer program may be any form of application, module, or plug-in.
In practical applications, the electronic device 500 may be an independent physical server, a server cluster or distributed system formed by multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, and big-data and artificial-intelligence platforms. Cloud technology refers to a hosting technology that unifies a series of resources such as hardware, software, and networks in a wide area network or a local area network to realize the computing, storage, processing, and sharing of data. The electronic device 500 may also be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart television, a smart watch, and the like.
Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, vehicle terminals, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the applications described and/or claimed herein.
As shown in fig. 5, the electronic device 500 includes a computing unit 501, which can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 502 or a computer program loaded from a storage unit 508 into a random access memory (RAM) 503. The RAM 503 can also store various programs and data required for the operation of the electronic device 500. The computing unit 501, the ROM 502, and the RAM 503 are connected to each other via a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
A number of components in the electronic device 500 are connected to the I/O interface 505, including: an input unit 506 such as a keyboard, a mouse, or the like; an output unit 507 such as various types of displays, speakers, and the like; a storage unit 508, such as a magnetic disk, optical disk, or the like; and a communication unit 509 such as a network card, modem, wireless communication transceiver, etc. The communication unit 509 allows the electronic device 500 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The computing unit 501 may be any of various general and/or special purpose processing components with processing and computing capabilities. Some examples of the computing unit 501 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, or microcontroller. The computing unit 501 performs the methods and processes described above, such as the vessel centerline extraction method or the vessel centerline extraction model training method. For example, in some alternative embodiments, the vessel centerline extraction method or the vessel centerline extraction model training method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 508. In some alternative embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 500 via the ROM 502 and/or the communication unit 509. When the computer program is loaded into the RAM 503 and executed by the computing unit 501, one or more steps of the vessel centerline extraction method or the vessel centerline extraction model training method described above may be performed. Alternatively, in other embodiments, the computing unit 501 may be configured to perform the vessel centerline extraction method or the vessel centerline extraction model training method by any other suitable means (for example, by means of firmware).
Embodiments of the present application provide a computer-readable storage medium storing executable instructions, which when executed by a processor, cause the processor to execute a vessel centerline extraction method or a vessel centerline extraction model training method provided by embodiments of the present application.
In some embodiments, the computer-readable storage medium may be a memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, an optical disk, or a CD-ROM, or may be any device including one of the above memories or any combination thereof.
In some embodiments, executable instructions may be written in any form of programming language (including compiled or interpreted languages), in the form of programs, software modules, scripts or code, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices at one site or distributed across multiple sites and interconnected by a communication network.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
It should be understood that, in the various embodiments of the present application, the sequence numbers of the processes do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
The above description is only an example of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, and improvement made within the spirit and scope of the present application are included in the protection scope of the present application.

Claims (14)

1. A method of vessel centerline extraction, the method comprising:
acquiring a feature map of a medical image;
determining a candidate region containing a blood vessel central line based on the feature map of the medical image;
for each candidate region, determining a candidate centerline point within the candidate region;
determining the vessel centerline based on the candidate centerline points within all candidate regions.
2. The method according to claim 1, wherein determining the candidate region including the centerline of the blood vessel based on the feature map of the medical image comprises:
for each feature point on the feature map, determining an image area corresponding to the feature point on the medical image;
predicting a first probability that the feature points contain the vessel centerline according to a first linear layer;
determining the image region as the candidate region if the first probability is greater than a first threshold.
3. The method of claim 1, wherein for each candidate region, determining candidate centerline points within the candidate region comprises:
determining a first number of predicted centerline points for each candidate region;
for each predicted centerline point, if a second probability of the predicted centerline point on the vessel centerline is greater than a second threshold, determining the predicted centerline point as the candidate centerline point.
4. The method of claim 3, wherein determining a first number of predicted centerline points for each candidate region comprises:
for the candidate region, determining a corresponding feature point of the candidate region on the feature map;
for each feature point, determining a second probability of each predicted centerline point on the vessel centerline and a coordinate offset for each predicted centerline point of a first number of predicted centerline points.
5. The method of claim 4, wherein determining, for each feature point, a second probability of each predicted centerline point being on the vessel centerline and a coordinate offset for each predicted centerline point of a first number of predicted centerline points comprises:
determining the coordinates of the center point of the candidate area;
performing second processing on the feature points based on a second linear layer to obtain a second probability of each predicted centerline point in the first number of predicted centerline points on the blood vessel centerline;
and performing third processing on the characteristic points based on a third linear layer to obtain the coordinate offset of each predicted centerline point in the first number of predicted centerline points relative to the center point coordinate.
6. The method of claim 1, wherein determining the vessel centerline based on the candidate centerline points within all candidate regions comprises:
merging the candidate centerline points in all the candidate regions to obtain a centerline point set;
determining outliers in the set of centerline points;
determining the vessel centerline based on the set of centerline points excluding outliers.
7. The method of claim 6, the determining outliers in the set of centerline points, comprising:
determining all connected components of the centerline point set;
for each connected component, if the number of centerline points included in the connected component is less than a third threshold, determining that the centerline points included in the connected component are outliers.
8. A method for training a vessel centerline extraction model, the method comprising:
training a vessel center line extraction model based on the medical image in the training set and the first neural network to obtain a second number of vessel center line extraction models;
for each blood vessel center line extraction model, performing model verification based on the medical images in the verification set, and determining the model with the highest accuracy as the target blood vessel center line extraction model.
9. The method of claim 8, wherein the training of the vessel centerline extraction model based on the medical image images in the training set and the first neural network to obtain a second number of vessel centerline extraction models comprises:
determining candidate centerline points of the vessel centerline in each medical image based on the first neural network and a third number of medical images in the training set;
updating parameters of the blood vessel centerline extraction model based on the difference between the candidate centerline point and the blood vessel centerline point labeled in the corresponding medical image;
completing one round of model training when all the medical images in the training set have been trained;
obtaining a corresponding blood vessel center line extraction model after completing the model training of the fourth number of rounds;
and circularly training the model until the second quantity of blood vessel central line extraction models are obtained.
10. The method of claim 9, the determining candidate centerline points for the vessel centerline in each medical image based on the first neural network and a third number of medical images in the training set, comprising:
determining a feature map of each medical image based on the first neural network;
determining, for the feature map of each medical image, a candidate region containing a blood vessel central line;
for each candidate region, a candidate centerline point within the candidate region is determined.
11. A vessel centerline extraction device, the device comprising:
the characteristic map acquisition module is used for acquiring a characteristic map of the medical image;
the candidate region determining module is used for determining a candidate region containing a blood vessel central line based on the feature map of the medical image;
a candidate centerline point determining module, configured to determine, for each candidate region, a candidate centerline point within the candidate region;
a vessel centerline determination module for determining the vessel centerline based on the candidate centerline points within all the candidate regions.
12. A vessel centerline extraction model training device, characterized in that the device comprises:
the model training module is used for training the blood vessel central line extraction model based on the medical image in the training set and the first neural network to obtain a second number of blood vessel central line extraction models;
and the model verification module is used for performing, for each blood vessel central line extraction model, model verification based on the medical images in the verification set, and determining the model with the highest accuracy as the target blood vessel central line extraction model.
13. An electronic device, characterized in that the electronic device comprises:
at least one processor; and a memory communicatively coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7 or claims 8-10.
14. A computer-readable storage medium comprising a set of computer-executable instructions that, when executed, perform the vessel centerline extraction method of any of claims 1-7; or performing the vessel centerline extraction model training method of any one of claims 8-10.
CN202211194661.3A 2022-09-28 2022-09-28 Blood vessel center line extraction method and device and electronic equipment Pending CN115482372A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211194661.3A CN115482372A (en) 2022-09-28 2022-09-28 Blood vessel center line extraction method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211194661.3A CN115482372A (en) 2022-09-28 2022-09-28 Blood vessel center line extraction method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN115482372A true CN115482372A (en) 2022-12-16

Family

ID=84394735

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211194661.3A Pending CN115482372A (en) 2022-09-28 2022-09-28 Blood vessel center line extraction method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN115482372A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170262733A1 (en) * 2016-03-10 2017-09-14 Siemens Healthcare Gmbh Method and System for Machine Learning Based Classification of Vascular Branches
CN110555481A (en) * 2019-09-06 2019-12-10 腾讯科技(深圳)有限公司 Portrait style identification method and device and computer readable storage medium
CN113870215A (en) * 2021-09-26 2021-12-31 推想医疗科技股份有限公司 Midline extraction method and device
CN115049590A (en) * 2022-05-17 2022-09-13 北京医准智能科技有限公司 Image processing method and device, electronic equipment and storage medium

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
DONG WANG et al.: "PointScatter: Point Set Representation for Tubular Structure Extraction", no. 2209, pages 1-5 *
JIAHANG SU et al.: "Automatic Collateral Scoring From 3D CTA Images", vol. 39, no. 39, page 2190 *
方积乾 (Fang Jiqian): "Handbook of Medical Statistics" (《医学统计手册》), Beijing: China Statistics Press, pages 2-15 *
李明鸣 (Li Mingming): "Retinal vessel segmentation method based on structural semantic perception" (基于结构语义感知的视网膜血管分割方法), no. 04, pages 073-47 *
苏丹 (Su Dan) et al.: "Finger vein recognition based on curve descriptors" (基于曲线描述子的手指静脉识别), vol. 41, no. 41, pages 420-430 *

Similar Documents

Publication Publication Date Title
CN110852447B (en) Meta learning method and apparatus, initializing method, computing device, and storage medium
CN108427708B (en) Data processing method, data processing apparatus, storage medium, and electronic apparatus
CN112396613B (en) Image segmentation method, device, computer equipment and storage medium
CN111738351B (en) Model training method and device, storage medium and electronic equipment
CN110809768B (en) Data cleansing system and method
CN113705628B (en) Determination method and device of pre-training model, electronic equipment and storage medium
CN115034315B (en) Service processing method and device based on artificial intelligence, computer equipment and medium
CN111444807A (en) Target detection method, device, electronic equipment and computer readable medium
WO2022100607A1 (en) Method for determining neural network structure and apparatus thereof
CN114187009A (en) Feature interpretation method, device, equipment and medium of transaction risk prediction model
CN110610140A (en) Training method, device and equipment of face recognition model and readable storage medium
CN116227573B (en) Segmentation model training method, image segmentation device and related media
CN117953341A (en) Pathological image segmentation network model, method, device and medium
CN112329810B (en) Image recognition model training method and device based on significance detection
CN116704266B (en) Power equipment fault detection method, device, equipment and storage medium
CN112801999A (en) Method and device for determining heart coronary artery dominant type
CN117474879A (en) Aortic dissection true and false cavity segmentation method and device, electronic equipment and storage medium
CN115956889A (en) Blood pressure monitoring method and device and electronic equipment
CN115482372A (en) Blood vessel center line extraction method and device and electronic equipment
CN116703466A (en) System access quantity prediction method based on improved wolf algorithm and related equipment thereof
CN114972220B (en) Image processing method and device, electronic equipment and readable storage medium
CN115482248A (en) Image segmentation method and device, electronic device and storage medium
CN112242959B (en) Micro-service current-limiting control method, device, equipment and computer storage medium
CN114550203A (en) Method for determining three-dimensional coordinates of joint key points and related equipment
CN109800873B (en) Image processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20221216)