CN113436177A - Image processing method and device, electronic equipment and computer readable storage medium
- Publication number
- CN113436177A (application CN202110751164.8A)
- Authority
- CN
- China
- Prior art keywords
- coronary artery
- information
- initial reference
- reference point
- target point
- Prior art date
- Legal status
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
- G06T2207/10012—Stereo images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30101—Blood vessel; Artery; Vein; Vascular
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30241—Trajectory
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Biophysics (AREA)
- Molecular Biology (AREA)
- Biomedical Technology (AREA)
- General Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Computational Linguistics (AREA)
- Software Systems (AREA)
- Life Sciences & Earth Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Quality & Reliability (AREA)
- Medical Informatics (AREA)
- Multimedia (AREA)
- Apparatus For Radiation Diagnosis (AREA)
Abstract
The application relates to an image processing method and apparatus, an electronic device, and a computer-readable storage medium, which are applicable to the technical field of image processing. The method includes the following steps: acquiring a three-dimensional image block to be identified and an initial reference point, and then performing coronary artery identification processing through a trained network model based on the three-dimensional image block to be identified and the initial reference point to obtain a coronary artery identification result. In this way, a coronary artery can be identified from medical images such as CTA images.
Description
Technical Field
The present application relates to the field of image processing, and in particular, to an image processing method and apparatus, an electronic device, and a computer-readable storage medium.
Background
With the development of modern science and technology, medical imaging technology has been applied ever more widely and is increasingly relied upon by doctors. Contrast images are often used by physicians as a reference for diagnosing clinical disease and formulating treatment plans.
Currently, CT angiography (CTA) is widely used in clinical practice, and doctors process and read large volumes of CTA images. Taking the coronary artery of the heart as an example, a CTA image generally undergoes post-processing steps such as coronary artery identification, coronary tree reconstruction, multi-planar reconstruction, curved planar reconstruction, or volume reconstruction, and the doctor then makes the diagnosis with the aid of the post-processing results.
However, cardiac images contain not only the coronary arteries but also veins, atria, ventricles, and other structures, all of which affect the identification of the coronary arteries; veins with similar shapes are especially prone to causing interference. How to identify the coronary arteries from medical images such as CTA images has therefore become a key problem.
Disclosure of Invention
The present application aims to provide an image processing method, an image processing apparatus, an electronic device, and a computer storage medium, which are used to solve at least one of the above technical problems.
The above object of the present application can be achieved by the following technical solutions:
in a first aspect, a method for image processing is provided, including:
acquiring a three-dimensional image block to be identified and an initial reference point;
performing coronary artery identification processing through the trained network model based on the three-dimensional image block to be identified and the initial reference point to obtain a coronary artery identification result, wherein the coronary artery identification processing includes:
extracting image characteristic information from the three-dimensional image block to be identified;
tracking to a target point in the three-dimensional image block based on the image feature information and the initial reference point;
determining whether the confidence of the target point meets a preset confidence threshold;
if the preset confidence threshold is met, determining the target point as the new initial reference point and repeating the steps of tracking to a target point in the three-dimensional image block based on the image feature information and the initial reference point and determining whether the confidence of the target point meets the preset confidence threshold, until a tracked target point does not meet the preset confidence threshold;
extracting a coronary artery identification result corresponding to a target point meeting a preset confidence level threshold, wherein the coronary artery identification result comprises: coronary artery direction information and coronary artery radius information.
In one possible implementation, the tracking to a target point in the three-dimensional image block based on the image feature information and the initial reference point includes: determining area characteristic information corresponding to an area where the initial reference point is located based on the image characteristic information; predicting the direction information of the coronary artery to be identified at the initial reference point according to the region characteristic information; and tracking in the three-dimensional image block based on the initial reference point and the predicted direction information to obtain the target point.
In another possible implementation manner, the predicting, according to the region feature information, direction information of a coronary artery to be identified at the initial reference point includes: predicting, according to the region feature information, probability information that the trend of the coronary artery to be identified at the initial reference point belongs to each alternative trend; determining, from the probability information of the alternative trends, the alternative trends whose probability information satisfies a second preset condition, wherein the second preset condition includes: the probability information being greater than a preset threshold; and predicting the direction information of the coronary artery to be identified at the initial reference point from the alternative trends satisfying the second preset condition.
In another possible implementation manner, the predicting, from the alternative trends satisfying the second preset condition, the direction information of the coronary artery to be identified at the initial reference point includes:
determining the tracking direction along which the initial reference point was reached;
and predicting the direction information of the coronary artery to be identified at the initial reference point based on the included angle information respectively corresponding to the tracking direction and each alternative trend meeting a second preset condition.
In another possible implementation manner, before the acquiring of the three-dimensional image block to be identified, the method further includes:
acquiring an initial three-dimensional image block;
and carrying out voxel space transformation processing on the initial three-dimensional image block to obtain the three-dimensional image block to be identified, wherein the distance between any two adjacent voxels in the three-dimensional image block to be identified is a preset value.
In another possible implementation manner, before the performing of coronary artery identification processing through the trained network model based on the three-dimensional image block to be identified and the initial reference point to obtain a coronary artery identification result, the method further includes:
acquiring sample image information;
inputting the sample image information into an original network model to obtain a coronary artery identification prediction result;
and performing parameter optimization on the original network model based on the coronary artery identification prediction result and the coronary artery identification expected result to obtain the trained network model.
In another possible implementation manner, the performing parameter optimization on the original network model based on the coronary artery recognition prediction result and the coronary artery recognition expected result to obtain the trained network model includes:
determining first difference information between the coronary artery prediction direction information and the expected direction information;
determining second difference information between the predicted category information of the tracked target point and the desired category information of the tracked target point;
determining third difference information between predicted radius information of a coronary artery at the tracked target point and expected radius information of the coronary artery at the tracked target point;
and performing parameter optimization on the original network model based on the first difference information, the second difference information and the third difference information to obtain the trained network model.
In a second aspect, an apparatus for image processing is provided, including:
the first acquisition module is used for acquiring a three-dimensional image block to be identified and an initial reference point;
the coronary artery identification processing module is used for carrying out coronary artery identification processing on the basis of the three-dimensional image block to be identified and the initial reference point through the trained network model to obtain a coronary artery identification result;
the coronary artery identification processing module is used for carrying out coronary artery identification processing on the basis of the three-dimensional image block to be identified and the initial reference point through a trained network model to obtain a coronary artery identification result, and is specifically used for:
extracting image characteristic information from the three-dimensional image block to be identified;
tracking to a target point in the three-dimensional image block based on the image feature information and the initial reference point;
determining whether the confidence of the target point meets a preset confidence threshold;
if the preset confidence threshold is met, determining the target point as the new initial reference point and repeating the steps of tracking to a target point in the three-dimensional image block based on the image feature information and the initial reference point and determining whether the confidence of the target point meets the preset confidence threshold, until a tracked target point does not meet the preset confidence threshold;
extracting a coronary artery identification result corresponding to a target point meeting a preset confidence level threshold, wherein the coronary artery identification result comprises: coronary artery direction information and coronary artery radius information.
In a possible implementation manner, the coronary artery identification processing module, when tracking to a target point in the three-dimensional image block based on the image feature information and the initial reference point, is specifically configured to:
determining area characteristic information corresponding to an area where the initial reference point is located based on the image characteristic information;
predicting the direction information of the coronary artery to be identified at the initial reference point according to the region characteristic information;
and tracking in the three-dimensional image block based on the initial reference point and the predicted direction information to obtain the target point.
In another possible implementation manner, when predicting, according to the region feature information, direction information of a coronary artery to be identified at the initial reference point, the coronary artery identification processing module is specifically configured to:
predicting probability information that the trend of the coronary artery to be identified at the initial reference point belongs to each alternative trend according to the regional characteristic information;
determining, from the probability information of the alternative trends, the alternative trends whose probability information satisfies a second preset condition, wherein the second preset condition includes: the probability information being greater than a preset threshold;
and predicting the direction information of the coronary artery to be identified at the initial reference point from the alternative trend meeting the second preset condition.
In another possible implementation manner, when predicting the direction information of the coronary artery to be identified at the initial reference point from the alternative trend satisfying the second preset condition, the coronary artery identification processing module is specifically configured to:
determining the tracking direction along which the initial reference point was reached;
and predicting the direction information of the coronary artery to be identified at the initial reference point based on the included angle information respectively corresponding to the tracking direction and each alternative trend meeting a second preset condition.
In another possible implementation manner, the apparatus further includes: a second acquisition module and a voxel-space transformation processing module, wherein,
the second obtaining module is used for obtaining the initial three-dimensional image block;
the voxel space transformation processing module is used for carrying out voxel space transformation processing on the initial three-dimensional image block to obtain the three-dimensional image block to be identified, wherein the distance between any two adjacent voxels in the three-dimensional image block to be identified is a preset value.
In another possible implementation manner, the apparatus further includes: a third acquisition module, an input module, and a parameter optimization module, wherein,
the third acquisition module is used for acquiring sample image information;
the input module is used for inputting the sample image information to an original network model to obtain a coronary artery identification prediction result;
and the parameter optimization module is used for carrying out parameter optimization on the original network model based on the coronary artery identification prediction result and the coronary artery identification expected result to obtain the trained network model.
In another possible implementation manner, the parameter optimization module, when performing parameter optimization on the original network model based on the coronary artery recognition prediction result and the coronary artery recognition expected result to obtain the trained network model, is specifically configured to:
determining first difference information between the coronary artery prediction direction information and the expected direction information;
determining second difference information between the predicted category information of the tracked target point and the desired category information of the tracked target point;
determining third difference information between predicted radius information of a coronary artery at the tracked target point and expected radius information of the coronary artery at the tracked target point;
and performing parameter optimization on the original network model based on the first difference information, the second difference information and the third difference information to obtain the trained network model.
In a third aspect, an electronic device is provided, which includes:
one or more processors;
a memory;
one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs being configured to perform corresponding operations according to the image processing method shown in any possible implementation manner of the first aspect.
In a fourth aspect, there is provided a computer readable storage medium storing at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by a processor to implement a method of image processing as shown in any one of the possible implementations of the first aspect.
Based on the technical scheme, the application at least comprises the following effects:
the application provides an image processing method, an image processing device, an electronic device and a computer readable storage medium, wherein a three-dimensional image block to be identified and an initial reference point are obtained, target point tracking is carried out in the three-dimensional image block based on the initial reference point, when the confidence coefficient of the target point meets a preset confidence coefficient threshold, the tracked target point is used as the initial target point for tracking until the tracked target point does not meet the preset confidence coefficient threshold, and a coronary artery identification result corresponding to the target point meeting the preset confidence coefficient threshold is extracted, so that coronary artery direction information and coronary artery radius information in the three-dimensional image block can be identified in a target point tracking mode, and further, the identification of a coronary artery from medical images such as CTA images can be realized.
Drawings
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present disclosure;
fig. 2 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure;
fig. 3 is a schematic device structure diagram of an electronic apparatus according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a default tracking direction according to an embodiment of the present application;
fig. 5 is a schematic diagram of a network architecture according to an embodiment of the present application.
Detailed Description
The present application is described in further detail below with reference to the attached drawings.
The embodiments described herein are only intended to explain the present application and do not limit it. After reading this specification, those skilled in the art may make modifications to the embodiments without inventive contribution as needed, and all such modifications are protected by patent law within the scope of the claims of the present application.
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In addition, the term "and/or" herein merely describes an association relationship between associated objects and indicates that three kinds of relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the former and latter associated objects are in an "or" relationship, unless otherwise specified.
The embodiments of the present application will be described in further detail with reference to the drawings attached hereto.
The embodiment of the present application provides an image processing method, which may be used to identify a coronary artery from a three-dimensional image block. However, in the embodiments of the present application, the method is not limited to identifying a coronary artery from a three-dimensional image block; it may also be used to identify tubular objects such as veins and intestines from medical images, or to identify other tubular objects from other images. The embodiments of the present application are described by taking coronary artery identification from a three-dimensional image block as an example.
An embodiment of the present application provides an image processing method. As shown in fig. 1, the image processing method may be executed by an electronic device, which may be a server or a terminal device. The server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing cloud computing services. The terminal device may be a smart phone, a tablet computer, a notebook computer, a desktop computer, or the like, but is not limited thereto. The terminal device and the server may be directly or indirectly connected through wired or wireless communication, which is not limited in the embodiments of the present application. The method includes:
step S101, a three-dimensional image block to be identified and an initial reference point are obtained.
For the embodiment of the present application, the three-dimensional image block to be identified may be a three-dimensional DAT image block, or a medical image block in any other format. For example, if the three-dimensional image block to be identified is a medical image block, it may be a three-dimensional angiographic image, a three-dimensional nuclear magnetic resonance image, or the like.
Further, in the embodiment of the present application, the initial reference point may be a point on the tubular object to be extracted: it may be the starting end point or the terminating end point of the object to be extracted, or any point on the object to be extracted. For coronary artery extraction, the initial reference point may be the starting end point or the terminating end point of the coronary artery to be extracted, or any point on the coronary artery to be extracted.
Further, in the embodiment of the present application, the initial reference point may be determined manually or by a regression model, and in the embodiment of the present application, when the initial reference point is predicted by the regression model, the Digital Imaging and Communications in Medicine (DICOM) image may be wholly input into the regression model to predict the initial reference point.
Step S102, performing coronary artery identification processing through the trained network model based on the three-dimensional image block to be identified and the initial reference point to obtain a coronary artery identification result.
For the embodiment of the present application, the trained network model may include: a backbone network (backbone) and a head network (head), where the head network may include: a tracker and a discriminator. In the embodiment of the present application, the tracker may be configured to track the target point and output the radius information and the direction information, and the discriminator is configured to discriminate whether the tracked target point belongs to the coronary artery and output corresponding confidence information. In the embodiment of the present application, the network model may use a three-dimensional convolutional neural network (3D-CNN) as its basic architecture, or may use another model as the basic architecture, which is not limited in the embodiments of the present application.
Specifically, step S102 may specifically include: extracting image feature information from the three-dimensional image block to be identified; tracking to a target point in the three-dimensional image block based on the image feature information and the initial reference point; determining whether the confidence of the target point meets a preset confidence threshold; if the preset confidence threshold is met, determining the target point as the new initial reference point and repeating the tracking and confidence-determination steps until a tracked target point does not meet the preset confidence threshold; and extracting the coronary artery identification result corresponding to the target points meeting the preset confidence threshold.
For the embodiment of the present application, image feature information can be extracted from the three-dimensional image block to be identified by the backbone network; the tracker then tracks to the next target point based on the image feature information and the initial reference point, and the discriminator determines whether the confidence of the tracked target point meets the preset confidence threshold. If the preset confidence threshold is met, the tracked target point is taken as the new initial reference point and the above tracking process is repeated, until the confidence of a tracked target point does not meet the preset confidence threshold, and the coronary artery identification results corresponding to the target points meeting the preset confidence threshold are extracted. In the embodiment of the present application, the coronary artery identification result includes: coronary artery direction information and coronary artery radius information.
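As a reading aid only, the following minimal Python sketch illustrates the iterative target-point tracking loop described above. The callables extract_features, tracker and discriminator stand in for the backbone and head networks; their interfaces, the step length and the confidence threshold are illustrative assumptions rather than the actual model API.

```python
import numpy as np

def track_coronary(volume, seed_point, extract_features, tracker, discriminator,
                   step_length=1.0, confidence_threshold=0.5, max_steps=500):
    """Sketch of the iterative target-point tracking described above.

    extract_features: backbone mapping an image block around a point to features.
    tracker:          head predicting a direction (unit vector) and a radius.
    discriminator:    head predicting the confidence that a point lies on the coronary.
    All three callables are assumed interfaces, not the patent's actual model API.
    """
    reference_point = np.asarray(seed_point, dtype=float)
    results = []  # collected (point, direction, radius) triples on the coronary

    for _ in range(max_steps):
        features = extract_features(volume, reference_point)
        direction, radius = tracker(features)             # direction / radius information
        target_point = reference_point + step_length * direction

        confidence = discriminator(extract_features(volume, target_point))
        if confidence < confidence_threshold:
            break                                          # tracked point no longer on the coronary

        results.append((target_point.copy(), direction, radius))
        reference_point = target_point                     # tracked point becomes the new reference

    return results
```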
Specifically, the image feature information may include: at least one of color feature information, texture feature information, shape feature information, and spatial relationship feature information of the image.
Further, the three-dimensional image block to be identified in the above embodiment may be a three-dimensional image block of size W × W cut out from a three-dimensional image. However, in order to standardize and unify the images without losing their detailed features, the three-dimensional image block may also be obtained by performing a voxel spacing transformation on the cut three-dimensional image block of size W × W, as in the following embodiment. That is, step S101 may further include: acquiring an initial three-dimensional image block; and performing voxel spacing transformation processing on the initial three-dimensional image block to obtain the three-dimensional image block to be identified.
The distance between any two adjacent voxels in the three-dimensional image block to be identified is a preset value. For example, the distance between any two adjacent voxels in the three-dimensional image block to be identified obtained after the voxel spacing transformation processing is 0.5 mm.
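One way the voxel spacing transformation could be realized, assuming the original spacing of the initial block is known and the target spacing is 0.5 mm, is simple trilinear resampling; the sketch below uses scipy.ndimage.zoom and is an assumption for illustration, not the exact preprocessing of this application.

```python
import numpy as np
from scipy.ndimage import zoom

def resample_to_isotropic(block, original_spacing, target_spacing=0.5):
    """Resample a 3-D image block so adjacent voxels are target_spacing (e.g. 0.5 mm) apart."""
    original_spacing = np.asarray(original_spacing, dtype=float)  # e.g. (z, y, x) spacing in mm
    zoom_factors = original_spacing / target_spacing
    # order=1 -> trilinear interpolation; an assumed choice, not mandated by the patent
    return zoom(block, zoom_factors, order=1)
```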
Further, in the embodiment of the present application, a manner of acquiring the three-dimensional image block to be recognized and the initial reference point may be as shown in the embodiment of the present application, or as shown in the related art. The embodiments of the present application are not limited.
Further, in the above embodiment, the network model trained in step S102 includes a tracker and a discriminator. The discriminator may include three three-dimensional convolution layers: the first layer is a 3 × 3 convolution layer that outputs a feature map with 64 channels, the second layer may be a 1 × 1 convolution layer that outputs a feature map with 64 channels, and the third layer may be a 1 × 1 convolution layer that outputs a probability scalar, that is, the confidence that the target point belongs to the coronary artery. The tracker adopts a network formed by combining one convolution layer with a dropout layer, a BatchNorm layer, and a ReLU layer. During training of the deep learning network, the dropout layer temporarily drops a portion of the neural network units from the network with a certain probability, which is equivalent to finding a sparser sub-network within the original network. BatchNorm keeps the distribution of the inputs to each layer of the neural network consistent during deep neural network training. ReLU is an activation function; in a convolutional neural network it removes the negative values from the convolution result while keeping the positive values unchanged: a node is activated only when its input is greater than 0, the output is zero when the input is less than 0, and the output equals the input when the input is greater than 0.
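Read literally, the layer description above might be sketched in PyTorch as follows. The kernels are taken as 3 × 3 × 3 and 1 × 1 × 1 three-dimensional convolutions, the 500-direction output follows the candidate-trend example given later, and everything else (input channel count, dropout rate, pooling, the fully connected head) is an assumption added for illustration.

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Three 3-D conv layers: 3x3x3 -> 64 ch, 1x1x1 -> 64 ch, 1x1x1 -> 1 probability scalar."""
    def __init__(self, in_channels=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_channels, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(64, 64, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv3d(64, 1, kernel_size=1),
        )

    def forward(self, feat):
        logit = self.net(feat).mean(dim=(2, 3, 4))  # assumed global pooling to a scalar
        return torch.sigmoid(logit)                 # confidence that the point is on the coronary

class Tracker(nn.Module):
    """One conv layer combined with Dropout, BatchNorm and ReLU, followed by an assumed
    fully connected head producing 500 direction scores plus one radius value."""
    def __init__(self, in_channels=64, num_directions=500):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(in_channels, 64, kernel_size=3, padding=1),
            nn.Dropout3d(p=0.5),
            nn.BatchNorm3d(64),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(64, num_directions + 1)  # 500 directions + 1 radius = 501 outputs

    def forward(self, feat):
        out = self.head(self.body(feat).flatten(1))
        direction_logits = out[:, :-1]   # softmax at inference gives per-trend probabilities
        radius = out[:, -1]
        return direction_logits, radius
```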
Specifically, tracking to the target point in the three-dimensional image block based on the image feature information and the initial reference point may specifically include: step Sa (not shown), step Sb (not shown), and step Sc (not shown), wherein,
and step Sa, determining the regional characteristic information corresponding to the region where the initial reference point is located based on the image characteristic information.
For the embodiment of the present application, the area where the initial reference point is located may be a preset area using the initial reference point as a central point, or may also be a preset area including the initial reference point.
Further, in the embodiment of the present application, the region feature information corresponding to the region where the initial reference point is located may be extracted by the convolution layer in the tracker; the specific extraction manner is not limited in the embodiments of the present application.
Step Sb, predicting the direction information of the coronary artery to be identified at the initial reference point according to the region feature information.
Specifically, in the embodiment of the present application, the step Sb specifically includes: step Sb1 (not shown), step Sb2 (not shown), and step Sb3 (not shown), wherein,
and step Sb1, predicting probability information that the trend of the coronary artery to be identified at the initial reference point belongs to each alternative trend according to the regional characteristic information.
For the embodiment of the present application, each alternative trend corresponds to one unit vector, and the unit vectors corresponding to the plurality of alternative trends as well as the number of alternative trends can be preset. In the embodiment of the present application, the larger the number of alternative trends and the more uniform their distribution, the more accurate the direction information predicted by the network model. For example, 500 alternative trends may be preset for coronary artery identification; specifically, as shown in fig. 4, each point in fig. 4 may represent one alternative trend.
For the embodiment of the application, after the regional characteristic information corresponding to the region where the initial reference point is located is obtained, probability information that the trend of the coronary artery to be identified at the initial reference point belongs to each alternative trend can be predicted. For example, 500 candidate trends are preset for coronary artery identification, and probability information that the trend of the coronary artery to be identified at the initial reference point belongs to each trend in the 500 candidate trends can be predicted.
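The application does not state how the preset unit vectors are generated; one common way to obtain roughly uniformly distributed candidate directions on the unit sphere, shown below purely as an assumption, is a Fibonacci lattice.

```python
import numpy as np

def candidate_directions(n=500):
    """Return n roughly uniformly distributed unit vectors (one per alternative trend).

    The Fibonacci-sphere construction is an illustrative choice; the patent only
    states that the alternative trends are preset unit vectors (e.g. 500 of them).
    """
    i = np.arange(n)
    golden_angle = np.pi * (3.0 - np.sqrt(5.0))
    z = 1.0 - 2.0 * (i + 0.5) / n        # evenly spaced heights in [-1, 1]
    r = np.sqrt(1.0 - z * z)
    theta = golden_angle * i
    return np.stack([r * np.cos(theta), r * np.sin(theta), z], axis=1)
```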
Step Sb2, determining, from the probability information of the respective alternative trends, the alternative trends whose probability information satisfies the second preset condition.
Wherein the second preset condition comprises: the probability information is greater than a preset threshold. In the embodiment of the present application, the preset threshold may be preset by a user, or may be input by the user, which is not limited in the embodiment of the present application. For example, the preset threshold may be 0.5, 0.7, 0.96, or the like.
Further, in the embodiment of the present application, if the initial reference point is located at the end of one coronary artery, the next target point may lie in different coronary branches, that is, the coronary artery branches at that point; in this case (which may be referred to as the first case hereinafter), a plurality of alternative trends whose probability information satisfies the second preset condition may be determined. If the initial reference point is not located at such an end, only two alternative trends satisfying the second preset condition may be determined; this case may be referred to as the second case hereinafter. In the embodiment of the present application, when the branch structure of the coronary artery is not considered, or when only one coronary artery is tracked, the two candidate trends whose probability information is greater than that of the other candidate trends may be selected from the candidate trends as the candidate trends satisfying the second preset condition. In general, the candidate trend with the highest probability is selected from among the candidate trends in opposite directions.
Step Sb3, predicting the direction information of the coronary artery to be identified at the initial reference point from the alternative trends satisfying the second preset condition.
Further, in the first case, at least three alternative trends satisfying the second preset condition may be determined, and these may include the already tracked direction. Therefore, in step Sb3, the already tracked direction needs to be screened out from the alternative trends satisfying the second preset condition, and the remaining alternative trends satisfying the second preset condition are predicted as the direction information of the coronary artery to be identified at the initial reference point. In the embodiment of the present application, the already tracked direction may be screened out from the alternative trends satisfying the second preset condition in the manner described below, or it may be screened out based on the recorded tracking direction. For example, the determined alternative trends satisfying the second preset condition may be d0, d1 and d2, where d0 is the already tracked direction; the direction information of the coronary artery to be identified at the initial reference point is then predicted, from the alternative trends satisfying the second preset condition, to be d1 and d2.
Further, in the first case, there is another possible implementation manner: the model may record the already tracked direction, so that no already tracked direction exists among the alternative trends satisfying the second preset condition. In this situation, predicting the direction information of the coronary artery to be identified at the initial reference point from the alternative trends satisfying the second preset condition in step Sb3 may specifically include: predicting the alternative trends satisfying the second preset condition as the direction information of the coronary artery to be identified at the initial reference point. For example, if the alternative trends satisfying the second preset condition are d1 and d2, the predicted direction information of the coronary artery to be identified at the initial reference point is d1 and d2.
Further, in the second case, one possible implementation manner is as follows: the determined alternative trends satisfying the second preset condition may be two alternative trends, one of which is the already tracked direction. The trend to be tracked, that is, the direction information of the coronary artery to be identified at the initial reference point, may then be determined in the manner described below, or the already tracked direction may be determined from the two alternative trends based on the recorded tracking direction, so as to obtain the direction information of the coronary artery to be identified at the initial reference point. For example, the alternative trends satisfying the second preset condition in this case may be d0 and d1; if d0 is determined to be the already tracked direction, the direction information of the coronary artery to be identified at the initial reference point is determined to be d1. Further, in the second case, another possible implementation manner is as follows: the determined alternative trends satisfying the second preset condition may include one alternative trend and not include the already tracked direction. For example, if the alternative trend satisfying the second preset condition is determined to be d1, the direction information of the coronary artery to be identified at the initial reference point is determined to be the determined alternative trend satisfying the second preset condition, that is, d1.
Further, the following embodiment describes in detail how, if at least three alternative trends (including the already tracked direction) are determined in the first case, the direction information of the coronary artery to be identified at the initial reference point is determined from the at least three alternative trends satisfying the second preset condition, and how, if two alternative trends (including the already tracked direction) are determined in the second case, the direction information of the coronary artery to be identified at the initial reference point is predicted from the two alternative trends satisfying the second preset condition:
specifically, in the embodiment of the present application, predicting, from alternative trends satisfying the second preset condition, the direction information of the coronary artery to be identified at the initial reference point may specifically include: determining a tracking direction to an initial reference point; determining included angle information respectively corresponding to the tracking direction and each alternative trend meeting a second preset condition; and predicting the direction information of the coronary artery to be identified at the initial reference point based on the included angle information respectively corresponding to the tracking direction and each alternative trend meeting a second preset condition. For example, in two cases, two alternative trends satisfying the second preset condition are d0 and d1, respectively, and the alternative trend in which the included angle information is not greater than the preset included angle information in d0 and d1 is predicted as the direction information of the coronary artery to be identified at the initial reference point.
The preset included angle information may be determined according to a curvature of a coronary artery to be extracted, for example, the preset included angle corresponding to the coronary artery may be set to 60 degrees or 30 degrees, and is not limited in the embodiment of the present application. Further, for example, if the preset angle corresponding to the coronary artery is 60 degrees, the angle between the direction d0 and the tracking direction is 80 degrees, and the angle between the direction d1 and the tracking direction is 30 degrees, the direction d1 is predicted as the direction information of the coronary artery to be identified at the initial reference point.
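The included-angle comparison could be sketched as follows, where candidate_dirs are the alternative trends whose probability satisfies the second preset condition, incoming_dir is the tracking direction to the initial reference point, and the 60-degree limit follows the example above; all names and the exact filtering rule are illustrative assumptions.

```python
import numpy as np

def select_forward_directions(candidate_dirs, incoming_dir, max_angle_deg=60.0):
    """Keep candidate trends whose angle to the incoming tracking direction
    does not exceed the preset angle (e.g. 60 degrees)."""
    incoming_dir = incoming_dir / np.linalg.norm(incoming_dir)
    kept = []
    for d in candidate_dirs:
        d = d / np.linalg.norm(d)
        cos_angle = np.clip(np.dot(d, incoming_dir), -1.0, 1.0)
        angle_deg = np.degrees(np.arccos(cos_angle))
        if angle_deg <= max_angle_deg:
            kept.append(d)
    return kept
```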
Step Sc, tracking in the three-dimensional image block based on the initial reference point and the predicted direction information to obtain a target point.
For the embodiment of the present application, after obtaining the predicted direction information, the target point may be obtained by tracking along the predicted direction information and along a preset step length starting from the initial reference point. In the embodiment of the present application, the preset step length may be preset by a user, or may be determined according to a tubular object to be identified, for example, the preset step length may be maximum radius information of a coronary artery, or may be radius information corresponding to a current target point.
Further, after the target point is tracked, the radius information of the coronary artery at the target point may be predicted. That is, the coronary artery radius information in the output coronary artery identification result may be the radius information predicted at each target point, or may be the maximum radius information, which is not limited in the embodiments of the present application. In the embodiment of the present application, after the target point is tracked, the tracker may output 501-dimensional data including the maximum radius information and the probability information of the target point for each direction, and the discriminator outputs the corresponding confidence information for each direction.
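Assuming the 501-dimensional tracker output described above is ordered as 500 per-direction probabilities followed by the maximum radius (the ordering is not specified and is assumed here), it could be unpacked as in the sketch below.

```python
import numpy as np

def unpack_tracker_output(output_501, directions):
    """Split an assumed 501-dimensional tracker output into direction probabilities
    and the (maximum) radius, and return the most probable candidate direction."""
    direction_probs = output_501[:500]
    max_radius = output_501[500]
    best_direction = directions[int(np.argmax(direction_probs))]
    return direction_probs, max_radius, best_direction
```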
Further, the above embodiment describes the manner of tracking from an initial reference point to a target point. After a target point is tracked, however, it is necessary to take that target point as the initial reference point and track to the next target point in the above manner, until the confidence of a tracked target point is not greater than the preset confidence threshold.
Further, in the above embodiment, a method of identifying tubular objects such as coronary artery by using the trained network model is described, and the following embodiment describes a training method for the network model, which is specifically as follows:
in another possible implementation manner of the embodiment of the present application, step S102 may further include: step S001 (not shown), step S002 (not shown), and step S003 (not shown), wherein,
and S001, acquiring sample image information.
For the embodiments of the present application, the sample images in the sample image information are annotated with centerline points and the corresponding reference radii. In an embodiment of the present application, the training data may be the CTA08 data set, which contains 8 CTA examinations; each examination image is labeled with the centerline coordinates of 5 main coronary arteries and the coronary radius at each centerline position.
For the embodiment of the present application, after the sample image information is obtained, data enhancement processing, for example random translation and image rotation, may further be performed on the sample images, and the method may further include cropping and sliding of the images. For example, the sample image is cropped into a 19 × 19 × 19 cube centered on the current starting point position coordinates; because the distance between the voxels is 0.5 mm, the side length of the cube is 9.5 mm, so that the cube can completely contain the surrounding vessel wall of the coronary artery at the initial reference point. Furthermore, the coronary radius is almost always less than 3 mm, but there are also some regions exceeding 3 mm; since such data is scarce, for point locations whose radius is greater than 3 mm, multiple random translations or rotations of the image are performed as data enhancement.
Step S002, inputting the sample image information into the original network model to obtain a coronary artery identification prediction result.
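Before the sample image information is input to the model in step S002, each training sample can be prepared roughly as sketched below; this is an illustrative assumption covering the 19 × 19 × 19 cropping and random rotation enhancement described above (boundary handling, rotation axes, and parameter ranges are assumed).

```python
import numpy as np
from scipy.ndimage import rotate

def crop_cube(volume, center, size=19):
    """Crop a size^3 cube (19 voxels -> 9.5 mm at 0.5 mm spacing) centered on `center`.
    Points too close to the volume border would need padding (omitted here)."""
    half = size // 2
    z, y, x = (int(round(c)) for c in center)
    return volume[z - half:z + half + 1,
                  y - half:y + half + 1,
                  x - half:x + half + 1]

def augment(cube, rng=None):
    """Illustrative enhancement: a random rotation about one axis; random translation
    is assumed to be realized by shifting the crop center before calling crop_cube."""
    if rng is None:
        rng = np.random.default_rng()
    angle = rng.uniform(-30.0, 30.0)  # assumed rotation range
    return rotate(cube, angle, axes=(1, 2), reshape=False, order=1)
```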
As can be seen from the above embodiments, the original network model may include a tracker and a discriminator. For the data used to train the discriminator, the labeled data points are used as positive samples, and data points randomly extracted from locations without labeled coordinates are used as negative samples.
Step S003, performing parameter optimization on the original network model based on the coronary artery identification prediction result and the coronary artery identification expected result to obtain the trained network model.
For the embodiment of the present application, a difference result between the coronary artery identification expected result and the coronary artery identification prediction result is obtained through the following loss function based on the coronary artery identification prediction result and the coronary artery identification expected result; after the difference result is obtained, parameter optimization is performed on the original network model based on the difference result to obtain the trained network model. The loss function may be: LDCAT = Ldir + λLreg + βLcls, where Ldir is the categorical cross-entropy loss of the direction task, Lreg is the mean error of the radius regression task, and Lcls is the binary cross-entropy loss of the discriminator task. For example, in the present embodiment, λ may be 15 and β may be 1.
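The combined loss could be sketched in PyTorch as below, using categorical cross-entropy for the direction term, a mean-error (here L1) term for the radius regression, and binary cross-entropy for the discriminator; the exact form of the radius term and the reductions are assumptions.

```python
import torch
import torch.nn.functional as F

def dcat_loss(direction_logits, direction_target,  # direction classification
              radius_pred, radius_target,          # radius regression
              cls_pred, cls_target,                # discriminator (point on coronary or not)
              lam=15.0, beta=1.0):
    l_dir = F.cross_entropy(direction_logits, direction_target)  # categorical cross entropy
    l_reg = F.l1_loss(radius_pred, radius_target)                # mean error of radius task (assumed L1)
    l_cls = F.binary_cross_entropy(cls_pred, cls_target)         # binary cross entropy of discriminator
    return l_dir + lam * l_reg + beta * l_cls
```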
Further, in the embodiment of the present application, the training is implemented using PyTorch and an NVIDIA Tesla P100 GPU. During training, the model is trained for 100 epochs with an initial learning rate of 0.001; the learning rate keeps decreasing slowly as the model iterates, and training ends when the loss finally decreases to 3.1. Specifically, in this embodiment of the present application, performing parameter optimization on the original network model based on the coronary artery recognition prediction result and the coronary artery recognition expected result to obtain the trained network model may specifically include: step S0031 (not shown), step S0032 (not shown), step S0033 (not shown), and step S0034 (not shown), in which,
and step S0031, determining first difference information between the coronary artery prediction direction information and the expected direction information.
For the embodiment of the application, the sample image information is input into the original network model, coronary artery prediction direction information can be obtained, and then first difference information between the coronary artery prediction direction information and the expected direction information is determined based on the first loss function; for example, the first loss function may be Ldir.
Step S0032, determining second difference information between the predicted category information of the tracked target point and the desired category information of the tracked target point.
For the present embodiment, the sample image information is input to the original network model, the prediction category information of the tracked target point may be obtained, and then the second difference information between the prediction category information of the tracked target point and the expected category information of the tracked target point is determined based on the second loss function. For example, the second loss function may be Lcls.
Step S0033, third difference information between the predicted radius information of the coronary artery at the tracked target point and the expected radius information of the coronary artery at the tracked target point is determined.
For the embodiment of the present application, the sample image information is input to the original network model, the predicted radius information of the tracked target point may be obtained, and then third difference information between the predicted radius information of the tracked target point and the expected radius information of the tracked target point is determined based on the third loss function. For example, the third penalty function may be Lreg.
Further, it should be noted that step S0031, step S0032, and step S0033 may be executed simultaneously, or may be executed in the above order, or may be executed in any other order, which is not limited in the embodiment of the present application.
Step S0034, performing parameter optimization on the original network model based on the first difference information, the second difference information, and the third difference information to obtain the trained network model.
For the present embodiment, the Adam optimizer updates the backbone network, the tracker, and the discriminator during network model training to minimize the loss. The adaptive moment estimation (Adam) optimizer is another method of computing an adaptive learning rate for each parameter: in addition to storing an exponentially decaying average of past squared gradients v_t, as Adadelta and RMSprop do, Adam also maintains an exponentially decaying average of past gradients m_t, similar to momentum.
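A minimal training-step sketch with the Adam optimizer is given below, reusing the backbone/tracker/discriminator and dcat_loss names from the earlier sketches; the batch layout, the learning-rate schedule and the data loading are omitted or assumed.

```python
import itertools
import torch

def build_optimizer(backbone, tracker, discriminator, lr=1e-3):
    params = itertools.chain(backbone.parameters(), tracker.parameters(),
                             discriminator.parameters())
    return torch.optim.Adam(params, lr=lr)  # initial learning rate 0.001 as in the text

def train_step(batch, backbone, tracker, discriminator, optimizer):
    """One parameter-optimization step minimizing the combined loss (sketch only)."""
    cubes, dir_target, radius_target, cls_target = batch
    feat = backbone(cubes)
    direction_logits, radius_pred = tracker(feat)
    cls_pred = discriminator(feat).squeeze(1)
    loss = dcat_loss(direction_logits, dir_target, radius_pred, radius_target,
                     cls_pred, cls_target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```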
Further, in the embodiment of the present application, centerline extraction is performed on all blood vessel data (coronary artery data) in the data set, and a quantitative analysis is then performed on the extraction results. The related quantitative parameters may include: the overlap of the extracted centerline (OV), the overlap before the first error occurs (OF), the overlap with the clinically relevant part (OT), and the average inside accuracy (AI). Further, whether the coronary artery identification approach is improved is determined based on these quantitative parameters.
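These metrics follow the standard coronary centerline evaluation methodology. Purely as a rough illustration (and not the official definition of OV/OF/OT/AI), a simplified point-wise overlap score between an extracted centerline and a reference centerline could be computed as below.

```python
import numpy as np

def simple_overlap(extracted_pts, reference_pts, reference_radii):
    """Simplified overlap-style score: fraction of points of each centerline that lie
    within the annotated radius of the other centerline. Illustrative only; the real
    OV/OF/OT/AI metrics are defined by the standard centerline evaluation framework."""
    extracted_pts = np.asarray(extracted_pts, float)
    reference_pts = np.asarray(reference_pts, float)
    reference_radii = np.asarray(reference_radii, float)

    d = np.linalg.norm(reference_pts[:, None, :] - extracted_pts[None, :, :], axis=2)
    ref_hit = d.min(axis=1) <= reference_radii                      # reference points covered
    ext_hit = d.min(axis=0) <= reference_radii[d.argmin(axis=0)]    # extracted points near reference

    return (ref_hit.sum() + ext_hit.sum()) / (len(ref_hit) + len(ext_hit))
```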
Based on the foregoing embodiments, an example is further provided in the embodiments of the present application in which coronary artery identification is performed on a three-dimensional image block to be identified through the trained network model, as shown in fig. 5. The trained network model may include a backbone network and a head network, where the head network may include a tracker and a discriminator. Specifically, image features are extracted from the three-dimensional image block to be identified through the backbone network; target point tracking is then performed on the extracted image feature information through the tracker to obtain the coronary artery direction information and the maximum coronary artery radius information, and coronary artery discrimination is performed on the image feature information through the discriminator to obtain the confidence information that the target point belongs to the coronary artery.
In the foregoing embodiments, an image processing method for recognizing tubular objects such as coronary arteries in a three-dimensional image has been introduced from the perspective of the method flow. On the basis of the foregoing embodiments, the following embodiment introduces an image processing apparatus from the perspective of virtual modules or virtual units, which can likewise be used to recognize tubular objects such as coronary arteries in a three-dimensional image, as follows:
an embodiment of the present application provides an image processing apparatus, and as shown in fig. 2, the image processing apparatus 20 may specifically include: a first acquisition module 21 and a coronary identification processing module 22, wherein,
the first obtaining module 21 is configured to obtain a three-dimensional image block to be identified and an initial reference point;
the coronary artery identification processing module 22 is configured to perform coronary artery identification processing on the basis of the three-dimensional image block to be identified and the initial reference point through the trained network model to obtain a coronary artery identification result;
the coronary artery identification processing module 22 performs coronary artery identification processing on the basis of the three-dimensional image block to be identified and the initial reference point through the trained network model to obtain a coronary artery identification result, and is specifically configured to:
extracting image characteristic information from a three-dimensional image block to be identified;
tracking to a target point in the three-dimensional image block based on the image characteristic information and the initial reference point;
determining whether the confidence of the target point meets a preset confidence threshold;
if the preset confidence threshold is met, determining the target point as the new initial reference point and repeating the steps of tracking to a target point in the three-dimensional image block based on the image characteristic information and the initial reference point and determining whether the confidence of the target point meets the preset confidence threshold, until a tracked target point does not meet the preset confidence threshold;
extracting a coronary artery identification result corresponding to a target point meeting a preset confidence level threshold, wherein the coronary artery identification result comprises: coronary artery direction information and coronary artery radius information.
In another possible implementation manner of the embodiment of the present application, when the coronary artery identification processing module 22 tracks a target point in a three-dimensional image block based on the image feature information and the initial reference point, the coronary artery identification processing module is specifically configured to: determining area characteristic information corresponding to an area where the initial reference point is located based on the image characteristic information; predicting direction information of the coronary artery to be identified at the initial reference point according to the regional characteristic information; and tracking in the three-dimensional image block based on the initial reference point and the predicted direction information to obtain a target point.
In another possible implementation manner of the embodiment of the present application, when predicting the direction information of the coronary artery to be identified at the initial reference point according to the region feature information, the coronary artery identification processing module 22 is specifically configured to: predict probability information that the trend of the coronary artery to be identified at the initial reference point belongs to each candidate trend according to the region feature information; determine, from the probability information of each candidate trend, the candidate trends whose probability information meets a second preset condition, wherein the second preset condition includes: the probability information being greater than a preset threshold; and predict the direction information of the coronary artery to be identified at the initial reference point from the candidate trends meeting the second preset condition.
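The probability screening described above (the "second preset condition") may be sketched as follows; representing each candidate trend as a unit direction vector and the concrete threshold value are assumptions of the example.

```python
import torch

def screen_candidate_trends(direction_logits, candidate_trends, prob_threshold=0.1):
    # Probability that the coronary trend at the reference point belongs to each candidate trend.
    probs = torch.softmax(direction_logits, dim=-1).flatten()
    # Second preset condition: keep candidate trends whose probability exceeds the preset threshold.
    return [(trend, float(p)) for trend, p in zip(candidate_trends, probs)
            if p > prob_threshold]
```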
In another possible implementation manner of the embodiment of the present application, when predicting the direction information of the coronary artery to be identified at the initial reference point from the candidate trends satisfying the second preset condition, the coronary artery identification processing module 22 is specifically configured to: determine the tracking direction along which the initial reference point was tracked to; and predict the direction information of the coronary artery to be identified at the initial reference point based on included angle information respectively corresponding to the tracking direction and each candidate trend meeting the second preset condition.
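One way to use the included angle information is sketched below; choosing the candidate trend with the smallest angle to the incoming tracking direction is an assumption of the example, since this embodiment only states that the angle information is used for the prediction. Such a function could serve as the pick_direction helper assumed in the earlier tracking sketch.

```python
import numpy as np

def pick_direction_by_angle(tracking_direction, screened_trends):
    # screened_trends: list of (candidate_trend_vector, probability) pairs that
    # already satisfy the second preset condition.
    tracking_direction = np.asarray(tracking_direction, dtype=float)
    best_trend, best_angle = None, None
    for trend, _prob in screened_trends:
        trend = np.asarray(trend, dtype=float)
        cos = np.dot(tracking_direction, trend) / (
            np.linalg.norm(tracking_direction) * np.linalg.norm(trend) + 1e-8)
        angle = np.arccos(np.clip(cos, -1.0, 1.0))
        # Prefer the candidate trend that deviates least from the current tracking direction.
        if best_angle is None or angle < best_angle:
            best_trend, best_angle = trend, angle
    return best_trend
```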
In another possible implementation manner of the embodiment of the present application, the apparatus 20 for image processing further includes: a second acquisition module and a voxel-space transformation processing module, wherein,
the second acquisition module is used for acquiring the initial three-dimensional image block;
and the voxel space conversion processing module is used for carrying out voxel space conversion processing on the initial three-dimensional image block to obtain a three-dimensional image block to be identified.
The distance between any two adjacent voxels in the three-dimensional image block to be identified is a preset value.
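A minimal sketch of such a voxel space conversion is given below, resampling the initial block so that adjacent voxels are a preset distance apart; the use of SciPy, linear interpolation, and the target spacing of 0.5 are illustrative assumptions.

```python
from scipy.ndimage import zoom

def resample_to_isotropic(volume, original_spacing, target_spacing=0.5):
    # original_spacing: per-axis voxel spacing of the initial 3D image block.
    zoom_factors = [s / target_spacing for s in original_spacing]
    # After resampling, any two adjacent voxels are target_spacing apart.
    return zoom(volume, zoom_factors, order=1)
```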
In another possible implementation manner of the embodiment of the present application, the apparatus 20 for image processing further includes: a third acquisition module, an input module, and a parameter optimization module, wherein,
the third acquisition module is used for acquiring sample image information;
the input module is used for inputting the sample image information into the original network model to obtain a coronary artery identification prediction result;
and the parameter optimization module is used for carrying out parameter optimization on the original network model based on the coronary artery identification prediction result and the coronary artery identification expected result to obtain the trained network model.
For the embodiment of the present application, the first acquisition module 21, the second acquisition module, and the third acquisition module may be the same acquisition module, different acquisition modules, or partially the same acquisition module, which is not limited in the embodiment of the present application.
In another possible implementation manner of the embodiment of the present application, when performing parameter optimization on the original network model based on the coronary artery identification prediction result and the coronary artery identification expected result to obtain the trained network model, the parameter optimization module is specifically configured to:
determining first difference information between the coronary artery prediction direction information and the expected direction information;
determining second difference information between the predicted category information of the tracked target point and the desired category information of the tracked target point;
determining third difference information between predicted radius information of the coronary artery at the tracked target point and expected radius information of the coronary artery at the tracked target point;
and performing parameter optimization on the original network model based on the first difference information, the second difference information and the third difference information to obtain a trained network model.
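A minimal sketch of a training objective combining the three pieces of difference information is given below; the specific loss functions and the weighting of the three terms are assumptions of the example and are not specified by this embodiment.

```python
import torch.nn.functional as F

def coronary_training_loss(pred_direction_logits, expected_direction,
                           pred_class_logits, expected_class,
                           pred_radius, expected_radius,
                           w_dir=1.0, w_cls=1.0, w_rad=1.0):
    # First difference information: predicted vs. expected direction (candidate-direction index).
    loss_dir = F.cross_entropy(pred_direction_logits, expected_direction)
    # Second difference information: predicted vs. expected category of the tracked target point.
    loss_cls = F.binary_cross_entropy_with_logits(pred_class_logits, expected_class)
    # Third difference information: predicted vs. expected coronary radius at the tracked point.
    loss_rad = F.smooth_l1_loss(pred_radius, expected_radius)
    return w_dir * loss_dir + w_cls * loss_cls + w_rad * loss_rad
```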
The embodiment of the application provides an image processing apparatus. In the embodiment of the application, a three-dimensional image block to be identified and an initial reference point are acquired, and target point tracking is performed in the three-dimensional image block based on the initial reference point. When the confidence of the tracked target point meets a preset confidence threshold, the tracked target point is taken as a new initial reference point and tracking continues, until a tracked target point no longer meets the preset confidence threshold. The coronary artery identification results corresponding to the target points meeting the preset confidence threshold are then extracted, so that coronary artery direction information and coronary artery radius information in the three-dimensional image block can be identified in a target point tracking manner, and a coronary artery can further be identified from medical images such as CTA images.
The image processing apparatus provided in the embodiment of the present application is applicable to the method embodiments, and is not described herein again.
In an embodiment of the present application, an electronic device is provided. As shown in fig. 3, the electronic device 300 includes: a processor 301 and a memory 303, wherein the processor 301 is connected to the memory 303, for example, through a bus 302. Optionally, the electronic device 300 may further include a transceiver 304. It should be noted that, in practical applications, the number of transceivers 304 is not limited to one, and the structure of the electronic device 300 does not constitute a limitation on the embodiments of the present application.
The processor 301 may be a CPU (Central Processing Unit), a general-purpose processor, a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. The processor 301 may implement or perform the various illustrative logical blocks, modules, and circuits described in connection with the present disclosure. The processor 301 may also be a combination implementing computing functions, for example, a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.
The memory 303 may be a ROM (Read-Only Memory) or other type of static storage device that can store static information and instructions, a RAM (Random Access Memory) or other type of dynamic storage device that can store information and instructions, an EEPROM (Electrically Erasable Programmable Read-Only Memory), a CD-ROM (Compact Disc Read-Only Memory) or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, and the like), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto.
The memory 303 is used for storing application program codes for executing the scheme of the application, and the processor 301 controls the execution. The processor 301 is configured to execute application program code stored in the memory 303 to implement the aspects illustrated in the foregoing method embodiments.
The electronic device includes, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and in-vehicle terminals (e.g., in-vehicle navigation terminals), fixed terminals such as digital TVs and desktop computers, and servers. The electronic device shown in fig. 3 is only an example and should not impose any limitation on the functions and the scope of use of the embodiments of the present application.
The present application provides a computer-readable storage medium, on which a computer program is stored; when the computer program runs on a computer, the computer is enabled to execute the corresponding content in the foregoing method embodiments. In the embodiment of the application, a three-dimensional image block to be identified and an initial reference point are acquired, and target point tracking is performed in the three-dimensional image block based on the initial reference point. When the confidence of the tracked target point meets a preset confidence threshold, the tracked target point is taken as a new initial reference point for tracking, until a tracked target point no longer meets the preset confidence threshold. The coronary artery identification results corresponding to the target points meeting the preset confidence threshold are then extracted, so that coronary artery direction information and coronary artery radius information in the three-dimensional image block can be identified in a target point tracking manner, and a coronary artery can further be identified from medical images such as CTA images.
It will be clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely used as an example, and in practical applications, the above function distribution may be performed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules to perform all or part of the above described functions. For the specific working processes of the system, the apparatus and the unit described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not described here again.
In the embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer-readable storage medium. Based on such understanding, the part of the technical solution of the present application that in essence contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, or an optical disc.
The above embodiments are only used to describe the technical solutions of the present application in detail. They are intended to help in understanding the method and core idea of the present application and should not be construed as limiting the present application. Those skilled in the art should also appreciate that various modifications and substitutions can be made without departing from the scope of the present disclosure.
Claims (10)
1. A method of image processing, comprising:
acquiring a three-dimensional image block to be identified and an initial reference point;
performing coronary artery identification processing through the trained network model based on the three-dimensional image block to be identified and the initial reference point to obtain a coronary artery identification result:
extracting image characteristic information from the three-dimensional image block to be identified;
tracking to a target point in the three-dimensional image block based on the image feature information and the initial reference point;
determining whether the confidence of the target point meets a preset confidence threshold;
if the preset confidence threshold is met, determining the target point as the initial reference point, and returning to the steps of tracking to a target point in the three-dimensional image block based on the image feature information and the initial reference point and determining whether the confidence of the target point meets the preset confidence threshold, until a tracked target point does not meet the preset confidence threshold;
extracting a coronary artery identification result corresponding to a target point meeting a preset confidence level threshold, wherein the coronary artery identification result comprises: coronary artery direction information and coronary artery radius information.
2. The method of claim 1, wherein tracking to a target point in the three-dimensional image patch based on the image feature information and the initial reference point comprises:
determining region feature information corresponding to a region where the initial reference point is located based on the image feature information;
predicting the direction information of the coronary artery to be identified at the initial reference point according to the region feature information;
and tracking in the three-dimensional image block based on the initial reference point and the predicted direction information to obtain the target point.
3. The method according to claim 2, wherein the predicting the direction information of the coronary artery to be identified at the initial reference point according to the region feature information comprises:
predicting probability information that the trend of the coronary artery to be identified at the initial reference point belongs to each candidate trend according to the region feature information;
determining, from the probability information of each candidate trend, candidate trends whose probability information meets a second preset condition, wherein the second preset condition comprises: the probability information being greater than a preset threshold;
and predicting the direction information of the coronary artery to be identified at the initial reference point from the candidate trends meeting the second preset condition.
4. The method according to claim 3, wherein the predicting the direction information of the coronary artery to be identified at the initial reference point from the candidate trends satisfying the second preset condition comprises:
determining a tracking direction along which the initial reference point was tracked to;
determining included angle information respectively corresponding to the tracking direction and each candidate trend meeting the second preset condition;
and predicting the direction information of the coronary artery to be identified at the initial reference point based on the included angle information respectively corresponding to the tracking direction and each candidate trend meeting the second preset condition.
5. The method according to claim 1, wherein before the obtaining of the three-dimensional image block to be identified, the method further comprises:
acquiring an initial three-dimensional image block;
and carrying out voxel space transformation processing on the initial three-dimensional image block to obtain the three-dimensional image block to be identified, wherein the distance between any two adjacent voxels in the three-dimensional image block to be identified is a preset value.
6. The method according to claim 1, wherein before the performing coronary artery identification processing based on the three-dimensional image block to be identified and the initial reference point through the trained network model to obtain the coronary artery identification result, the method further comprises:
acquiring sample image information;
inputting the sample image information into an original network model to obtain a coronary artery identification prediction result;
and performing parameter optimization on the original network model based on the coronary artery identification prediction result and the coronary artery identification expected result to obtain the trained network model.
7. The method of claim 6, wherein the performing parameter optimization on the original network model based on the coronary artery identification prediction result and the coronary artery identification expected result to obtain the trained network model comprises:
determining first difference information between the coronary artery prediction direction information and the expected direction information;
determining second difference information between the predicted category information of the tracked target point and the desired category information of the tracked target point;
determining third difference information between predicted radius information of a coronary artery at the tracked target point and expected radius information of the coronary artery at the tracked target point;
and performing parameter optimization on the original network model based on the first difference information, the second difference information and the third difference information to obtain the trained network model.
8. An apparatus for image processing, comprising:
the first acquisition module is used for acquiring a three-dimensional image block to be identified and an initial reference point;
the coronary artery identification processing module is used for carrying out coronary artery identification processing on the basis of the three-dimensional image block to be identified and the initial reference point through the trained network model to obtain a coronary artery identification result;
wherein, when performing coronary artery identification processing based on the three-dimensional image block to be identified and the initial reference point through the trained network model to obtain the coronary artery identification result, the coronary artery identification processing module is specifically configured to:
extracting image characteristic information from the three-dimensional image block to be identified;
tracking to a target point in the three-dimensional image block based on the image feature information and the initial reference point;
determining whether the confidence of the target point meets a preset confidence threshold;
if the preset confidence threshold is met, determining the target point as the initial reference point, and returning to the steps of tracking to a target point in the three-dimensional image block based on the image feature information and the initial reference point and determining whether the confidence of the target point meets the preset confidence threshold, until a tracked target point does not meet the preset confidence threshold;
extracting a coronary artery identification result corresponding to a target point meeting a preset confidence level threshold, wherein the coronary artery identification result comprises: coronary artery direction information and coronary artery radius information.
9. An electronic device, comprising:
one or more processors;
a memory;
one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, the one or more application programs being configured to perform the method of image processing according to any one of claims 1 to 7.
10. A computer readable storage medium, characterized in that it stores at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by a processor to implement the method of image processing according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110751164.8A CN113436177A (en) | 2021-07-01 | 2021-07-01 | Image processing method and device, electronic equipment and computer readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110751164.8A CN113436177A (en) | 2021-07-01 | 2021-07-01 | Image processing method and device, electronic equipment and computer readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113436177A true CN113436177A (en) | 2021-09-24 |
Family
ID=77758738
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110751164.8A Pending CN113436177A (en) | 2021-07-01 | 2021-07-01 | Image processing method and device, electronic equipment and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113436177A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111145173A (en) * | 2019-12-31 | 2020-05-12 | 上海联影医疗科技有限公司 | Plaque identification method, device, equipment and medium for coronary angiography image |
CN111260055A (en) * | 2020-01-13 | 2020-06-09 | 腾讯科技(深圳)有限公司 | Model training method based on three-dimensional image recognition, storage medium and equipment |
WO2020156195A1 (en) * | 2019-01-30 | 2020-08-06 | 腾讯科技(深圳)有限公司 | Ct image generation method and apparatus, computer device and computer-readable storage medium |
CN112446911A (en) * | 2019-08-29 | 2021-03-05 | 阿里巴巴集团控股有限公司 | Centerline extraction, interface interaction and model training method, system and equipment |
- 2021-07-01: CN application CN202110751164.8A filed; publication CN113436177A (en), status: Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020156195A1 (en) * | 2019-01-30 | 2020-08-06 | 腾讯科技(深圳)有限公司 | Ct image generation method and apparatus, computer device and computer-readable storage medium |
CN112446911A (en) * | 2019-08-29 | 2021-03-05 | 阿里巴巴集团控股有限公司 | Centerline extraction, interface interaction and model training method, system and equipment |
CN111145173A (en) * | 2019-12-31 | 2020-05-12 | 上海联影医疗科技有限公司 | Plaque identification method, device, equipment and medium for coronary angiography image |
CN111260055A (en) * | 2020-01-13 | 2020-06-09 | 腾讯科技(深圳)有限公司 | Model training method based on three-dimensional image recognition, storage medium and equipment |
Non-Patent Citations (1)
Title |
---|
ZHANG YAN ET AL.: "Research on Three-Dimensional Reconstruction of Coronary Arteries", China Medical Equipment *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20230104173A1 (en) | Method and system for determining blood vessel information in an image | |
CN110838125B (en) | Target detection method, device, equipment and storage medium for medical image | |
CN112465834B (en) | Blood vessel segmentation method and device | |
CN111640124B (en) | Blood vessel extraction method, device, equipment and storage medium | |
CN110570394A (en) | medical image segmentation method, device, equipment and storage medium | |
US12119117B2 (en) | Method and system for disease quantification of anatomical structures | |
CN111091010A (en) | Similarity determination method, similarity determination device, network training device, network searching device and storage medium | |
CN115375583A (en) | PET parameter image enhancement method, device, equipment and storage medium | |
CN116912299A (en) | Medical image registration method, device, equipment and medium of motion decomposition model | |
CN112446911A (en) | Centerline extraction, interface interaction and model training method, system and equipment | |
CN116342986B (en) | Model training method, target organ segmentation method and related products | |
CN113888566A (en) | Target contour curve determining method and device, electronic equipment and storage medium | |
CN112381822B (en) | Method for processing images of focal zones of the lungs and related product | |
CN112381824B (en) | Method for extracting geometric features of image and related product | |
CN113256622A (en) | Target detection method and device based on three-dimensional image and electronic equipment | |
CN117474879A (en) | Aortic dissection true and false cavity segmentation method and device, electronic equipment and storage medium | |
CN117078714A (en) | Image segmentation model training method, device, equipment and storage medium | |
CN116993812A (en) | Coronary vessel centerline extraction method, device, equipment and storage medium | |
CN116521915A (en) | Retrieval method, system, equipment and medium for similar medical images | |
CN115482261A (en) | Blood vessel registration method, device, electronic equipment and storage medium | |
CN113436177A (en) | Image processing method and device, electronic equipment and computer readable storage medium | |
KR20230136512A (en) | Method and apparatus for determining level of airway region | |
CN113408595B (en) | Pathological image processing method and device, electronic equipment and readable storage medium | |
CN113177953B (en) | Liver region segmentation method, liver region segmentation device, electronic equipment and storage medium | |
CN115760868A (en) | Colorectal and colorectal cancer segmentation method, system, device and medium based on topology perception |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||