CN112446911A - Centerline extraction, interface interaction and model training method, system and equipment


Info

Publication number: CN112446911A
Application number: CN201910808284.XA
Authority: CN (China)
Other languages: Chinese (zh)
Inventor: 杨晗 (Yang Han)
Original and current assignee: Alibaba Group Holding Ltd
Legal status: Pending
Prior art keywords: point, image, extracted, neural network, target point

Classifications

    • G06T 7/60 — Image analysis; analysis of geometric attributes
    • G06T 7/0012 — Image analysis; inspection of images; biomedical image inspection
    • G16H 30/00 — ICT specially adapted for the handling or processing of medical images
    • G06T 2207/10081 — Image acquisition modality; computed x-ray tomography [CT]
    • G06T 2207/20081 — Special algorithmic details; training; learning
    • G06T 2207/20084 — Special algorithmic details; artificial neural networks [ANN]
    • G06T 2207/30048 — Subject of image; biomedical image processing; heart; cardiac


Abstract

Embodiments of the present application provide a method, system, and device for centerline extraction, interface interaction, and model training. The method comprises the following steps: determining a first point on a tubular object to be extracted in an image; and inputting the first point and the image into a trained neural network model to obtain an extraction result for the centerline of the tubular object to be extracted. The neural network model is configured to: determine an initial reference point from the first point; track a target point in the image based on the reference point; when the predicted category of the target point satisfies a preset condition, take the target point as a new reference point, and repeat until the predicted category of the target point tracked based on the new reference point no longer satisfies the preset condition; and determine the extraction result from the tracked points. The technical solution provided by the embodiments of the present application can effectively reduce erroneous tracking caused by interfering information and improve the accuracy of tubular-object centerline extraction.

Description

Centerline extraction, interface interaction and model training method, system and equipment
Technical Field
The application relates to the technical field of computers, in particular to a method, a system and equipment for centerline extraction, interface interaction and model training.
Background
With the development of modern science and technology, medical imaging has become increasingly widespread and is relied upon by a growing number of physicians. Contrast images are often used as a reference for diagnosing clinical disease and planning treatment. CT angiography (CTA) is now widely used in clinical practice, and consequently a large volume of CTA images must be processed and interpreted by physicians.
Taking the cardiac coronary artery as an example, diagnosis generally involves post-processing steps such as extraction of the coronary centerline, reconstruction of the coronary tree, multi-planar reconstruction, curved-planar reconstruction, or volume reconstruction, after which the diagnosis is made from the post-processing results.
A cardiac image contains not only the coronary arteries but also background structures such as veins, atria, and ventricles. When the coronary centerline is extracted with a prior-art tracking-based tubular-object centerline extraction method, tracking is easily disturbed by the background, particularly by veins of similar shape, resulting in low coronary centerline extraction accuracy.
Disclosure of Invention
In view of the above, the present application is directed to a centerline extraction, interface interaction and model training method, system and apparatus that addresses, or at least partially addresses, the above-mentioned problems.
Thus, in one embodiment of the present application, a method of extracting a centerline of a tubular object is provided.
The method comprises the following steps:
determining a first point on a tubular object to be extracted in an image;
inputting the first point and the image into a trained neural network model to obtain an extraction result for the centerline of the tubular object to be extracted;
wherein the neural network model is configured to: determine an initial reference point from the first point; track a target point in the image based on the reference point; when the predicted category of the target point satisfies a preset condition, take the target point as a new reference point, and repeat until the predicted category of the target point tracked based on the new reference point no longer satisfies the preset condition; and determine the extraction result from the tracked points.
In another embodiment of the present application, an interface interaction method is provided. The method comprises the following steps:
displaying an image on an interface;
in response to a trigger operation at a position point on a tubular object to be extracted in the image, marking the centerline of the tubular object to be extracted on the image;
wherein the centerline of the tubular object to be extracted is obtained using a trained neural network model; the neural network model is configured to: determine an initial reference point from the position point; track a target point in the image based on the reference point; when the predicted category of the target point satisfies a preset condition, take the target point as a new reference point, and repeat until the predicted category of the target point tracked based on the new reference point no longer satisfies the preset condition; and determine the extraction result from the tracked points.
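As an illustration of this single-click interaction, a minimal handler might look like the sketch below. The handler name and the extract and draw_polyline calls are hypothetical; the patent only requires that a trigger operation lead to the centerline being marked on the image.

```python
def on_click(image, click_position, centerline_model, ui):
    """Single-click interaction sketch: the clicked position point seeds the
    trained neural network model, and the returned centerline is drawn as an
    overlay on the displayed image. All names here are illustrative."""
    centerline = centerline_model.extract(image, click_position)
    if centerline is not None:            # empty output means extraction failed
        ui.draw_polyline(centerline)      # mark the centerline on the image
```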
In another embodiment of the present application, a model training method is provided. The method comprises the following steps:
determining a first point on a sample tubular object to be extracted in a third sample image;
inputting the first point and the third sample image into a neural network model to obtain a predicted extraction result for the centerline of the sample tubular object to be extracted;
performing parameter optimization on the neural network model according to the predicted extraction result and an expected extraction result corresponding to the third sample image;
wherein the neural network model is configured to: determine an initial reference point from the first point; track a target point in the third sample image based on the reference point; when the predicted category of the target point satisfies a preset condition, take the target point as a new reference point, and repeat until the predicted category of the target point tracked based on the new reference point no longer satisfies the preset condition; and determine the predicted extraction result from the tracked points.
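For orientation only, one parameter-optimization step could be sketched as below. The loss function, the optimizer, and the assumption that the predicted extraction result can be compared differentiably with the expected one are all placeholders; the patent does not specify how the optimization is performed.

```python
import torch

def training_step(model, optimizer, sample_image, first_point,
                  expected_result, loss_fn):
    """One hypothetical parameter-optimization step: run the model to get a
    predicted extraction result, compare it with the expected extraction
    result for the third sample image, and update the parameters."""
    optimizer.zero_grad()
    predicted_result = model(sample_image, first_point)
    loss = loss_fn(predicted_result, expected_result)  # placeholder loss
    loss.backward()
    optimizer.step()
    return loss.item()
```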
In another embodiment of the present application, a neural network system is provided. The system comprises:
at least one neural network layer;
the at least one neural network layer is configured to: determine an initial reference point from a first point on a tubular object to be extracted in an image; track a target point in the image based on the reference point; when the predicted category of the target point satisfies a preset condition, take the target point as a new reference point, and repeat until the predicted category of the target point tracked based on the new reference point no longer satisfies the preset condition; and determine an extraction result for the centerline of the tubular object to be extracted from the tracked points.
In another embodiment of the present application, a method of extracting a centerline of a tubular object is provided. The method comprises the following steps:
determining an initial reference point according to a first point on a tubular object to be extracted in an image;
tracking a target point in the image according to the reference point;
when the predicted category of the target point satisfies a preset condition, taking the target point as a new reference point, and repeating until the predicted category of the target point tracked according to the new reference point no longer satisfies the preset condition;
extracting the centerline of the tubular object to be extracted in the image according to the tracked points.
In another embodiment of the present application, a method for extracting a cardiac coronary centerline is provided. The method comprises the following steps:
determining a first point on a coronary artery to be extracted in a heart image;
inputting the first point and the heart image into a trained neural network model to obtain an extraction result for the centerline of the coronary artery to be extracted;
wherein the neural network model is configured to: determine an initial reference point from the first point; track a target point in the heart image based on the reference point; when the predicted category of the target point satisfies a preset condition, take the target point as a new reference point, and repeat until the predicted category of the target point tracked based on the new reference point no longer satisfies the preset condition; and determine the extraction result from the tracked points.
In another embodiment of the present application, an interface interaction method is provided. The method comprises the following steps:
displaying the heart image on the interface;
in response to a trigger operation at a position point on a coronary artery to be extracted in the heart image, marking the centerline of the coronary artery to be extracted on the heart image;
wherein the centerline of the coronary artery to be extracted is obtained using a trained neural network model; the neural network model is configured to: determine an initial reference point from the position point; track a target point in the heart image based on the reference point; when the predicted category of the target point satisfies a preset condition, take the target point as a new reference point, and repeat until the predicted category of the target point tracked based on the new reference point no longer satisfies the preset condition; and determine the extraction result from the tracked points.
In another embodiment of the present application, a method for extracting a cardiac coronary centerline is provided. The method comprises the following steps:
determining an initial reference point according to a first point on a coronary artery to be extracted in a heart image;
tracking a target point in the heart image according to the reference point;
when the predicted category of the target point satisfies a preset condition, taking the target point as a new reference point, and repeating until the predicted category of the target point tracked according to the new reference point no longer satisfies the preset condition;
and extracting the centerline of the coronary artery to be extracted in the heart image according to the tracked points.
In another embodiment of the present application, an electronic device is provided. The electronic device includes:
a memory and a processor, wherein
the memory is configured to store a program;
the processor, coupled to the memory, is configured to execute the program stored in the memory so as to:
determine a first point on a tubular object to be extracted in an image;
input the first point and the image into a trained neural network model to obtain an extraction result for the centerline of the tubular object to be extracted;
wherein the neural network model is configured to: determine an initial reference point from the first point; track a target point in the image based on the reference point; when the predicted category of the target point satisfies a preset condition, take the target point as a new reference point, and repeat until the predicted category of the target point tracked based on the new reference point no longer satisfies the preset condition; and determine the extraction result from the tracked points.
In another embodiment of the present application, an electronic device is provided. The electronic device includes:
a memory and a processor, wherein
the memory is configured to store a program;
the processor, coupled to the memory, is configured to execute the program stored in the memory so as to:
display an image on an interface;
in response to a trigger operation at a position point on a tubular object to be extracted in the image, mark the centerline of the tubular object to be extracted on the image;
wherein the centerline of the tubular object to be extracted is obtained using a trained neural network model; the neural network model is configured to: determine an initial reference point from the position point; track a target point in the image based on the reference point; when the predicted category of the target point satisfies a preset condition, take the target point as a new reference point, and repeat until the predicted category of the target point tracked based on the new reference point no longer satisfies the preset condition; and determine the extraction result from the tracked points.
In another embodiment of the present application, an electronic device is provided. The electronic device includes:
a memory and a processor, wherein
the memory is configured to store a program;
the processor, coupled to the memory, is configured to execute the program stored in the memory so as to:
determine a first point on a sample tubular object to be extracted in a third sample image;
input the first point and the third sample image into a neural network model to obtain a predicted extraction result for the centerline of the sample tubular object to be extracted;
perform parameter optimization on the neural network model according to the predicted extraction result and an expected extraction result corresponding to the third sample image;
wherein the neural network model is configured to: determine an initial reference point from the first point; track a target point in the third sample image based on the reference point; when the predicted category of the target point satisfies a preset condition, take the target point as a new reference point, and repeat until the predicted category of the target point tracked based on the new reference point no longer satisfies the preset condition; and determine the predicted extraction result from the tracked points.
In another embodiment of the present application, an electronic device is provided. The electronic device includes:
a memory and a processor, wherein
the memory is configured to store a program;
the processor, coupled to the memory, is configured to execute the program stored in the memory so as to:
determine an initial reference point according to a first point on a tubular object to be extracted in an image;
track a target point in the image according to the reference point;
when the predicted category of the target point satisfies a preset condition, take the target point as a new reference point, and repeat until the predicted category of the target point tracked according to the new reference point no longer satisfies the preset condition;
extract the centerline of the tubular object to be extracted in the image according to the tracked points.
In another embodiment of the present application, an electronic device is provided. The electronic device includes:
a memory and a processor, wherein
the memory is configured to store a program;
the processor, coupled to the memory, is configured to execute the program stored in the memory so as to:
determine a first point on a coronary artery to be extracted in a heart image;
input the first point and the heart image into a trained neural network model to obtain an extraction result for the centerline of the coronary artery to be extracted;
wherein the neural network model is configured to: determine an initial reference point from the first point; track a target point in the heart image based on the reference point; when the predicted category of the target point satisfies a preset condition, take the target point as a new reference point, and repeat until the predicted category of the target point tracked based on the new reference point no longer satisfies the preset condition; and determine the extraction result from the tracked points.
In another embodiment of the present application, an electronic device is provided. The electronic device includes:
a memory and a processor, wherein
the memory is configured to store a program;
the processor, coupled to the memory, is configured to execute the program stored in the memory so as to:
display a heart image on an interface;
in response to a trigger operation at a position point on a coronary artery to be extracted in the heart image, mark the centerline of the coronary artery to be extracted on the heart image;
wherein the centerline of the coronary artery to be extracted is obtained using a trained neural network model; the neural network model is configured to: determine an initial reference point from the position point; track a target point in the heart image based on the reference point; when the predicted category of the target point satisfies a preset condition, take the target point as a new reference point, and repeat until the predicted category of the target point tracked based on the new reference point no longer satisfies the preset condition; and determine the extraction result from the tracked points.
In another embodiment of the present application, an electronic device is provided. The electronic device includes:
a memory and a processor, wherein
the memory is configured to store a program;
the processor, coupled to the memory, is configured to execute the program stored in the memory so as to:
determine an initial reference point according to a first point on a coronary artery to be extracted in a heart image;
track a target point in the heart image according to the reference point;
when the predicted category of the target point satisfies a preset condition, take the target point as a new reference point, and repeat until the predicted category of the target point tracked according to the new reference point no longer satisfies the preset condition;
and extract the centerline of the coronary artery to be extracted in the heart image according to the tracked points.
According to the technical solution provided by the embodiments of the present application, the centerline of a tubular object is tracked iteratively by a trained neural network model; the model predicts the category of each tracked point, and whether tracking continues is decided according to that category. In other words, the technical solution provided by the embodiments of the present application can effectively distinguish the tubular object to be extracted from interfering information such as the background. Erroneous tracking caused by such interference is therefore effectively reduced, and the accuracy of tubular-object centerline extraction is improved.
In the technical solution provided by the embodiments of the present application, an image is displayed on an interface, and in response to a trigger operation by the user at a position point on the tubular object to be extracted in the image, the centerline of the tubular object to be extracted is marked on the image. The user can thus obtain the centerline of the tubular object with a single click: the interaction is simple and easy to use, and the manual workload is reduced.
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1a is a schematic diagram of a scene of a coronary artery centerline extraction method according to an embodiment of the present application;
Fig. 1b is a schematic flow chart of a method for extracting a centerline of a tubular object according to an embodiment of the present application;
Fig. 2 is a schematic flow chart of an interface interaction method according to another embodiment of the present application;
Fig. 3 is a schematic flow chart of a model training method according to another embodiment of the present application;
Fig. 4 is a schematic flow chart of a method for extracting a centerline of a tubular object according to an embodiment of the present application;
Fig. 5 is a schematic flow chart of a method for extracting a cardiac coronary centerline according to an embodiment of the present application;
Fig. 6 is a schematic flow chart of an interface interaction method according to another embodiment of the present application;
Fig. 7 is a schematic flow chart of a method for extracting a cardiac coronary centerline according to an embodiment of the present application;
Fig. 8 is a structural block diagram of a device for extracting a centerline of a tubular object according to an embodiment of the present application;
Fig. 9 is a structural block diagram of an interface interaction device according to another embodiment of the present application;
Fig. 10 is a structural block diagram of a model training device according to another embodiment of the present application;
Fig. 11 is a structural block diagram of a device for extracting a centerline of a tubular object according to an embodiment of the present application;
Fig. 12 is a structural block diagram of a device for extracting a cardiac coronary centerline according to an embodiment of the present application;
Fig. 13 is a structural block diagram of an interface interaction device according to another embodiment of the present application;
Fig. 14 is a structural block diagram of a device for extracting a cardiac coronary centerline according to an embodiment of the present application;
Fig. 15 is a structural block diagram of an electronic device according to an embodiment of the present application;
Fig. 16 shows a first interactive interface according to an embodiment of the present application;
Fig. 17 shows a second interactive interface according to an embodiment of the present application.
Detailed Description
In the course of arriving at the technical solutions provided by the embodiments of the present application, the inventor found the following: the main problem of prior-art tracking-based tube-centerline solutions is that they use only local information along the vessel path and are entirely blind to the space outside that path. Once tracking drifts onto an interfering target — for example, a vein of similar shape encountered while tracking a coronary artery — it cannot be handled correctly, and a tracking error results. Moreover, if the vessel is occluded, the complete vessel cannot be extracted. In addition, such methods generally extract only low-level two-dimensional or three-dimensional features, which are insufficient to resolve complex situations correctly.
On this basis, the embodiments of the present application provide a method for extracting the centerline of a tubular object that introduces category discrimination into the tracking process. This category discrimination effectively reduces the probability of erroneous tracking and improves extraction accuracy, and it also avoids problems such as incomplete vessel extraction caused by coronary occlusion.
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Further, some flows described in the specification, claims, and figures of the present application include operations that occur in a particular order; these operations may also be performed out of the order in which they appear herein, or in parallel. Sequence numbers such as 101, 102, S11, and S12 merely distinguish the operations and do not by themselves imply any execution order. The flows may also include more or fewer operations, and the operations may be executed sequentially or in parallel. It should be noted that descriptions such as "first" and "second" herein distinguish different messages, devices, modules, and so on; they neither indicate a sequential order nor require that the "first" and "second" items be of different types.
Fig. 1b shows a schematic flow chart of a method for extracting a tube centerline according to an embodiment of the present application. The method may be executed by a client or a server. The client may be hardware with an embedded program integrated on a terminal, application software installed on a terminal, or tool software embedded in a terminal's operating system, which is not limited in the embodiments of the present application. The terminal may be any terminal device, including a mobile phone, a tablet computer, or an intelligent wearable device. The server may be an ordinary server, a cloud server, or a virtual server. As shown in fig. 1b, the method includes:
101. Determine a first point on the tubular object to be extracted in the image.
102. Input the first point and the image into a trained neural network model to obtain an extraction result for the centerline of the tubular object to be extracted.
In 101 above, in the medical field, the tubular object to be extracted may be an intestinal tract or a blood vessel, and a blood vessel may be an artery or a vein. When the image is a heart image, the tubular object to be extracted may be a coronary artery (a coronary artery being one kind of artery).
In some scenarios, it is desirable to extract the vein centerline; in other scenarios, it is desirable to extract the artery centerline. For example: when diagnosing brain diseases, the central line of the cerebral veins needs to be extracted, so that doctors can further diagnose according to the extraction result. For another example: when cardiovascular diseases are diagnosed, coronary artery central lines need to be extracted, so that doctors can conveniently make further diagnosis according to the extraction results.
It should be noted that the method for extracting the centerline of a tubular object can be applied not only in the medical field but also in industrial settings where the centerline of a tubular object needs to be extracted; this is not specifically limited in the embodiments of the present application.
The first point may be any point on the tubular object to be extracted. Typically, the tubular object to be extracted has a starting end point and an ending end point and lies between them. The first point may be the starting end point, the ending end point, or any point on the tubular object between the two, which is not specifically limited in the embodiments of the present application. The first point may be determined through human interaction or detected automatically by a detection model; detailed implementations are described in the following embodiments.
In 102, the first point and the image are used as inputs to the trained neural network model, which outputs the extraction result for the centerline of the tubular object to be extracted.
The neural network model is trained from a third sample image, centerline annotation information and category annotation information of the sample tubular object in the third sample image, and category annotation information of at least one background in the third sample image. The specific training process is described in the following embodiments.
The neural network model is configured to: determine an initial reference point from the first point; track a target point in the image based on the reference point; when the predicted category of the target point satisfies a preset condition, take the target point as a new reference point, and repeat until the predicted category of the target point tracked based on the new reference point no longer satisfies the preset condition; and determine the extraction result from the tracked points.
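Purely as an illustration, the iterative procedure just described can be sketched as follows. The model heads (direction, radius, category) and the class label are hypothetical stand-ins for the neural network layers described later in this description, not the patent's actual implementation.

```python
import numpy as np

def track_centerline(image, first_point, model, on_vessel_class="coronary"):
    """Minimal sketch of the iterative tracking loop.

    `model` is a hypothetical wrapper exposing three heads:
      model.direction(image, p) -> unit direction vector at point p,
      model.radius(image, p)    -> tracking step length at p,
      model.category(image, p)  -> predicted class label at p.
    """
    ref = np.asarray(first_point, dtype=float)  # initial reference point
    tracked = []
    while True:
        step = model.radius(image, ref)                    # step = local radius
        target = ref + step * model.direction(image, ref)  # cf. formula (1) below
        if model.category(image, target) != on_vessel_class:
            break        # category fails the preset condition: stop tracking
        tracked.append(target)
        ref = target     # the target point becomes the new reference point
    return tracked       # points from which the centerline is determined
```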
In particular, the first point may be used as an initial reference point. Alternatively, based on the first point, an initial reference point is tracked in the image.
In one example, "determining an initial reference point according to the first point" includes taking the first point as the initial reference point.
This way of determining the initial reference point is better suited to scenes in which the position of the first point is known, for example when the first point is known to be the starting or ending end point of the tubular object to be extracted. In that case, the complete tubular object can be tracked by taking the first point as the starting point of tracking and tracking in a single direction.
In another example, an initial reference point is tracked in the image based on the first point. This may be implemented by the following steps:
S11. Track two second points based on the first point.
S12. Predict the categories of the two second points.
S13. Determine, of the two second points, the one whose category satisfies the preset condition as an initial reference point.
In one implementation, "tracking two second points based on the first point" in S11 can be implemented by the following steps:
S111. Perform feature extraction on a third region where the first point is located in the image to obtain third region features.
S112. According to the third region features, predict the probability that the direction of the tubular object to be extracted at the first point belongs to each of a plurality of candidate directions.
S113. Track the two second points along the two candidate directions with the highest probability.
In S111, the first point may be the center point of the third region, or a point within a set range centered on that center point (for example, a circle or sphere with a radius of 2 pixels); the set range can be chosen according to actual needs. The shape and pixel size of the third region can also be set according to actual needs. For ease of computation, the third region may be a square when the image is two-dimensional and a cube when the image is three-dimensional. The pixel size of the third region can be set according to the maximum possible radius of the tubular object to be extracted and the pixel pitch of the image, so that the third region completely contains the peripheral wall of the tubular object to be extracted at the first point. In this way, the neural network model can extract valid features from the third region.
It should be added that, in the medical field, when the image is two-dimensional it may be a medical slice, for example a nuclear magnetic resonance slice; when the image is three-dimensional it may be a three-dimensional angiographic image, a three-dimensional nuclear magnetic resonance image, or the like.
Taking the coronary artery as an example: the maximum possible radius of a coronary artery is 3 mm, the image is a three-dimensional heart image, and the pixel pitch of the image is 0.5 mm. The pixel size of the third region can then be set to a 19 × 19 × 19 cube. Since the pixel pitch is 0.5 mm, the side length of the cube is 9.5 mm, so the cube can completely contain the peripheral wall of the coronary artery at the first point.
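The patch-size choice above is simple arithmetic; the sketch below reproduces it, with the safety margin as a hypothetical knob (the patent only requires that the region fully contain the vessel wall).

```python
import math

def patch_side_voxels(max_radius_mm: float, pixel_pitch_mm: float,
                      margin_mm: float = 1.5) -> int:
    # The half-side must cover the maximum vessel radius plus a safety margin;
    # margin_mm is an illustrative parameter, not specified by the patent.
    half = math.ceil((max_radius_mm + margin_mm) / pixel_pitch_mm)
    return 2 * half + 1  # odd side length, so the point sits at the center

print(patch_side_voxels(3.0, 0.5))  # -> 19, matching the 19 x 19 x 19 example
```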
The neural network model can extract the third region features through its neural network layers; for specific implementations, reference can be made to the prior art, and details are not repeated here.
In S112, each candidate direction corresponds to a unit vector. The unit vectors corresponding to the candidate directions and the number of candidate directions can be set in advance according to actual needs; the more candidate directions there are and the more uniformly they are distributed, the more accurate the neural network model's prediction. For example, 500 uniformly distributed candidate directions may be set.
Predicting the direction thus corresponds to a classification task: the probability that the direction of the tubular object to be extracted at the first point belongs to each of the candidate directions is predicted from the third region features, specifically through a convolutional layer or a fully connected layer.
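As an illustration of what "uniformly distributed candidate directions" can look like, the Fibonacci-sphere construction below generates N near-uniform unit vectors on the sphere; the patent does not prescribe this particular construction.

```python
import numpy as np

def candidate_directions(n: int = 500) -> np.ndarray:
    """Return n near-uniformly distributed unit vectors (Fibonacci sphere);
    each row is one candidate direction."""
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i   # golden-angle azimuth increment
    z = 1.0 - 2.0 * (i + 0.5) / n            # uniform spacing along z
    r = np.sqrt(1.0 - z * z)
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)
```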
In S113, the two second points are tracked along the two candidate directions with the highest probability.
In one example, starting from the first point, the first point (i.e., the first pixel) encountered in the image along each of the two candidate directions may be taken as a second point.
In another example, the neural network model may also predict the radius of the tubular object to be extracted at the first point from the third region features, and track the second points according to the two candidate directions and that radius. Specifically, starting from the first point in the image, the radius of the tubular object to be extracted at the first point is used as the tracking step, and the point reached along each of the two candidate directions is taken as a second point.
The "predicting the categories of the two second points" in S12 includes:
and S121, performing feature extraction on a fourth region where the second point is located in the image to obtain fourth region features.
And S122, classifying the second points according to the fourth region characteristics to obtain the category of the second points.
Specifically, according to the fourth region feature, predicting the probability that the second point belongs to a plurality of candidate categories; and determining the category of the second point according to the probability. The candidate class with the highest probability may be determined as the class of the second point.
Among the plurality of candidate categories are: the tubular to be extracted category and at least one background category. Taking the extraction of coronary artery of heart as an example, the category of the tubular object to be extracted is the category of coronary artery; at least one of the context categories may include: the venous category. Of course, at least one of the context categories may also include: atrial category, ventricular category, and the like. It should be noted that, in an example, the number of the at least one background category is one, that is, the background categories are all the other than the category of the tubular object to be extracted in the image.
In S13, if the category of a second point is the category of the tubular object to be extracted, its category satisfies the preset condition.
When the categories of both second points satisfy the preset condition, two initial reference points are determined. When only one of the two second points has a category satisfying the preset condition, a single initial reference point is determined.
This way of determining the initial reference point suits scenes in which the position of the first point is unknown. If the first point is any point between the starting and ending end points of the tubular object to be extracted, two initial reference points are determined as described above, and the complete tubular object is obtained by tracking from the two initial reference points in the two opposite directions in which they extend along the tubular object. If the first point is the starting or ending end point, a single initial reference point is determined, and the complete tubular object can be tracked from it in one direction. Compared with directly taking the first point as the initial reference point, the method of this embodiment therefore applies to a wider range of situations.
In one implementation, "tracking a target point in the image based on the reference point" can be implemented by the following steps:
S21. Perform feature extraction on a second region where the reference point is located in the image to obtain second region features.
S22. Predict a first direction of the tubular object to be extracted at the reference point according to the second region features.
S23. Track a target point according to the first direction.
In S21, the reference point may be the center point of the second region, or a point within a set range centered on that center point (for example, a circle or sphere with a radius of 2 pixels). The shape and pixel size of the second region can be set according to actual needs. For ease of computation, the second region may be a square when the image is two-dimensional and a cube when the image is three-dimensional. The pixel size of the second region can be set according to the maximum possible radius of the tubular object to be extracted and the pixel pitch of the image, so that the second region completely contains the peripheral wall of the tubular object to be extracted at the reference point; in this way, the neural network model can extract valid features from the second region. In one example, the pixel size of the second region may be the same as that of the third region described above.
The neural network model can extract the second region features through its neural network layers; for specific implementations, reference can be made to the prior art, and details are not repeated here.
In one example, in S22 above, the first direction of the tubular object to be extracted at the reference point can be represented by a unit vector (a vector of modulus 1). When the image is three-dimensional, the unit vector is a vector (x, y, z) in three-dimensional space, where x, y, and z are its three elements; when the image is two-dimensional, it is a vector (x, y) in two-dimensional space, where x and y are its two elements.
The element values of the unit vector representing the first direction can be predicted from the second region features, specifically through a convolutional layer or a fully connected layer.
In another example, "predicting the first direction of the tubular object to be extracted at the reference point according to the second region features" in S22 may specifically be implemented by the following steps:
S221. According to the second region features, predict the probability that the direction of the tubular object to be extracted at the reference point belongs to each of a plurality of candidate directions.
S222. According to the tracking direction at the reference point, determine the first direction from the two candidate directions with the highest probability.
In S221, each candidate direction corresponds to a unit vector. The unit vectors corresponding to the candidate directions and the number of candidate directions can be set in advance according to actual needs; the more candidate directions there are and the more uniformly they are distributed, the more accurate the neural network model's prediction. For example, 500 uniformly distributed candidate directions may be set.
Predicting the direction thus again corresponds to a classification task: the probability that the direction of the tubular object to be extracted at the reference point belongs to each candidate direction is predicted from the second region features, specifically through a convolutional layer or a fully connected layer.
In S222, the tubular object to be extracted generally has two directions at each point, opposite to each other, and both can be recovered by the probability prediction in S221.
However, to avoid doubling back over the path already tracked, of the two highest-probability candidate directions, the one whose included angle with the tracking direction is smaller than or equal to a preset included angle is determined as the first direction. This ensures that tracking proceeds along the path not yet tracked rather than returning over the tracked one.
The preset included angle can be set according to the curvature of the tubular object to be extracted in the actual situation; this is not specifically limited in the present application. For coronary arteries, the preset included angle may be set to 60° or 30°.
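A minimal sketch of this angle gate, assuming unit-vector inputs; the function and threshold names are illustrative:

```python
import numpy as np

def pick_forward_direction(top2_dirs, tracking_dir, max_angle_deg=60.0):
    """Of the two highest-probability candidate unit vectors, keep the one
    within max_angle_deg of the current tracking direction, so that
    tracking does not double back. All inputs are unit vectors."""
    cos_limit = np.cos(np.radians(max_angle_deg))
    for d in top2_dirs:
        if np.dot(d, tracking_dir) >= cos_limit:  # angle <= max_angle_deg
            return d
    return None  # neither candidate points forward

# e.g. candidates pointing roughly +z and -z, currently tracking toward +z
d = pick_forward_direction([np.array([0.1, 0.0, 0.995]),
                            np.array([-0.1, 0.0, -0.995])],
                           np.array([0.0, 0.0, 1.0]))
```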
In S23, tracking a target point according to the first direction may be implemented in one or both of the following manners.
Manner 1: starting from the reference point, the first point (i.e., the first pixel) encountered in the image along the first direction may be taken as the tracked target point.
Manner 2: the neural network model is further configured to predict, from the second region features, the radius of the tubular object to be extracted at the reference point, and to track the target point according to the first direction and that radius.
Specifically, starting from the reference point in the image, the radius of the tubular object to be extracted at the reference point is used as the tracking step, and the point reached along the first direction is taken as the tracked target point.
For example, assume the coordinates of the reference point are (x0, y0, z0), the unit vector corresponding to the first direction is (a, b, c), and the radius of the tubular object to be extracted at the reference point is r. The coordinates (x, y, z) of the target point are then given by formula (1):
(x, y, z) = (x0, y0, z0) + r(a, b, c)    (1)
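As a purely illustrative instance of formula (1): a reference point at (10, 20, 30) in voxel coordinates, a unit direction (0, 0, 1), and a radius r = 2 give a target point at (10, 20, 32).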
In the step "when it is predicted that the category of the target point satisfies the preset condition, the target point is used as a new reference point, until it is predicted that the category of the target point tracked based on the new reference point does not satisfy the preset condition", a target-point category that satisfies the preset condition indicates that the target point is still on the tubular object to be extracted and that the tubular object has not yet been fully extracted, so tracking must continue. Tracking proceeds iteratively, progressively covering the whole tubular object to be extracted. The termination condition of the iteration is that the category of the target point tracked from the new reference point does not satisfy the preset condition; that is, the tracking process started from the initial reference point stops once a tracked target point is no longer on the tubular object to be extracted.
"Determining the extraction result according to the tracked points" may specifically be: determining the extraction result from all points tracked in the tracking process started from the initial reference point, excluding any point whose category does not satisfy the preset condition; the excluded point is the target point tracked last.
Furthermore, in practical applications, the first point and/or the initial reference point may also be taken into account when determining the extraction result.
In the embodiment of the present application, the first point is any point on the tubular object to be extracted, which means it may well not lie on the centerline, so the first point can be ignored when determining the final extraction result. The initial reference point may likewise be ignored when it is the first point.
When two initial reference points are determined from the first point, "determining the extraction result according to the tracked points" specifically comprises integrating the points tracked in the two tracking processes to determine the extraction result, where the two tracking processes start from the two initial reference points respectively. An extraction result is determined for each of the two tracking processes from the points tracked in that process, and the two results are integrated into the final extraction result. For how each per-process result is determined from its tracked points, see the corresponding content above; it is not repeated here. A sketch of one way to integrate the two traces follows.
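A minimal sketch, assuming each trace is the ordered list of kept points from one tracking process; the patent does not prescribe a specific merge order.

```python
def merge_bidirectional_traces(trace_a, trace_b):
    """Combine the point lists of the two tracking processes started from the
    two initial reference points into one ordered centerline. Each trace is
    assumed to run outward from its initial reference point, with the final
    off-vessel point already dropped."""
    # Reverse one branch so the merged list runs end to end through the seed.
    return list(reversed(trace_a)) + list(trace_b)
```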
According to the technical solution provided by the embodiments of the present application, the centerline of the tubular object is tracked iteratively by a trained neural network model; the category of each tracked point is predicted by the neural network model, and whether to continue tracking is decided according to that category. In other words, the technical solution can effectively distinguish the tubular object to be extracted from interfering information such as the background, which effectively reduces erroneous tracking caused by such interference and improves the accuracy of tubular-object centerline extraction.
In addition, in the embodiments of the present application, even when tracking reaches an occluded position inside the tubular object to be extracted, the current position is still on the tubular object, and the category judgment can still find that the category of the current point satisfies the preset condition, so tracking continues. This effectively overcomes the prior-art problem of incomplete tracking caused by interruption at occlusions inside the tubular object to be extracted.
It is worth adding that the above method can also output the radius, predicted by the neural network model, of the tubular object to be extracted at each tracked point, for the user's reference. In the coronary-extraction scenario, the radius of the coronary artery at each tracked point can be provided to the physician as a basis for diagnosis.
In one implementation, the neural network model is further configured to:
and S31, performing feature extraction on the first region where the target point is located in the image to obtain first region features.
And S32, classifying the target points according to the first area characteristics to obtain the categories of the target points.
In S31, the target point may be a center point of the first area. Of course, the target point may also be a point within a set range centered on the center point of the first region, such as: the range is set to be circular or spherical with a radius of 2 pixels. The shape and pixel size of the first region can be set according to actual needs. For convenience of calculation, when the image is a two-dimensional image, the shape of the first region may be a square; when the image is a three-dimensional image, the shape of the first region may be a cube. The pixel size of the first area can be set according to the maximum possible radius of the tubular to be extracted and the pixel pitch of the image, so as to ensure that the first area can completely contain the peripheral pipe wall of the tubular to be extracted at the target point. In this way, the neural network model may be enabled to extract valid features based on the first region. In one example, the pixel size of the first region may be the same as the pixel size of the second region described above.
The neural network model can perform feature extraction on a first region where the target point is located in the image through a neural network layer to obtain first region features. The specific feature extraction method and feature extraction structure can be referred to in the prior art, and are not described herein again.
In S32, predicting the probability that the target point belongs to a plurality of candidate categories based on the first area feature; among the plurality of candidate categories are: the tubular object category to be extracted and at least one background category; and determining the category of the target point according to the probability. The category with the highest probability may be taken as the category of the target point.
Taking the extraction of coronary artery of heart as an example, the category of the tubular object to be extracted is the category of coronary artery; at least one of the context categories may include: the venous category. Therefore, the neural network model can learn the characteristics of the veins and can better distinguish the veins with similar shapes from the coronary arteries. Of course, at least one of the context categories may also include: atrial category, ventricular category, and the like. It should be noted that, in an example, the number of the at least one background category may be one, that is, the background categories are all the other than the category of the tubular object to be extracted in the image.
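To make the category test concrete, here is a minimal sketch; the label set and function names are assumptions for the cardiac coronary example, not the patent's API.

```python
import numpy as np

# Hypothetical label set for cardiac coronary extraction; the patent only
# requires "tubular object to be extracted" plus at least one background class.
CLASSES = ["coronary", "vein", "atrium", "ventricle", "other_background"]

def classify_point(probs: np.ndarray) -> str:
    """probs: softmax output of the classification head over CLASSES."""
    return CLASSES[int(np.argmax(probs))]

def satisfies_preset_condition(label: str) -> bool:
    # Tracking continues only while the tracked point is classified as
    # belonging to the tubular object to be extracted.
    return label == "coronary"
```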
Further, the neural network model is further configured to: judge that the category of the target point satisfies the preset condition when that category is the category of the tubular object to be extracted; and stop the tracking process started from the initial reference point when the category of the target point is not the category of the tubular object to be extracted.
In order to ensure that the final output of the neural network model is a complete extraction result, the method further includes:
103. Determine a starting end point of the tubular object to be extracted in the image.
The tubular object to be extracted comprises a starting end point and an ending end point.
In one example, step 103 can be implemented as follows:
1031. Input the image into a first detection model to obtain the starting end point.
The first detection model is trained from a first sample image and coordinate information of the starting end point of a sample tubular object in the first sample image.
In practice, the first detection model only needs to locate the starting end point roughly; high precision is not required. The first detection model can therefore be based on a shallow neural network, which effectively reduces computation and run time. The first detection model may be a 3D detection model.
Taking the coronary artery as an example, the starting end point output by the first detection model is the coronary ostium, which may specifically include the left coronary ostium and the right coronary ostium.
Accordingly, the "determining the extraction result according to the tracked point" may be implemented by the following steps:
and S41, connecting the tracked points to obtain the connecting line.
And S42, when the distance between the starting end point and the connecting line is smaller than a first preset distance, determining the connecting line as the central line of the tubular object to be extracted as the extraction result.
In S41, the connecting line may be obtained by connecting all the points tracked in the tracking process started from the initial reference point, excluding those points whose categories do not satisfy the preset condition. Alternatively, the connecting line may be obtained by connecting the initial reference point together with those tracked points.
In S42, the first preset distance may be set according to actual needs, which is not specifically limited in the embodiment of the present application. For example, the first preset distance may be set according to the maximum possible radius of the tubular object to be extracted. Taking the coronary artery as an example, if the maximum possible radius is 3mm and the pixel pitch of the image is 0.5mm, the first preset distance may be 6 pixels.
Whether the distance between the starting endpoint and the connecting line is smaller than the first preset distance is judged. If so, the extraction of the tubular object to be extracted is complete, and the connecting line is determined as the centerline of the tubular object to be extracted to serve as the extraction result.
If the distance is not smaller than the first preset distance, the extraction of the tubular object to be extracted is incomplete, and the output of the neural network model is empty.
Taking the coronary artery as an example, the starting end points of the output of the first detection model are the left coronary ostium and the right coronary ostium. And when the distance between the connecting line and any one of the left coronary artery opening and the right coronary artery opening is smaller than a first preset distance, determining the connecting line as the central line of the tubular object to be extracted to serve as the extraction result.
In this way, it can be ensured that the result output by the neural network model is a complete centerline of the tubular object to be extracted, which improves the user experience.
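For illustration only, the completeness check of S41 and S42 may be sketched as follows, assuming the connecting line is stored as a dense array of tracked point coordinates and using the coronary numbers above; the helper names are assumptions for the sketch:

```python
import numpy as np

def min_distance_to_polyline(point, polyline):
    """Smallest Euclidean distance from a point to any vertex of the
    connecting line (a dense polyline of tracked points)."""
    return float(np.min(np.linalg.norm(polyline - point, axis=1)))

def accept_centerline(polyline, start_endpoints, max_radius_mm=3.0, pitch_mm=0.5):
    """S42: keep the connecting line only if it comes close enough to a
    detected starting endpoint (e.g. the left or right coronary ostium)."""
    first_preset_distance = max_radius_mm / pitch_mm   # e.g. 6 pixels
    for endpoint in start_endpoints:
        if min_distance_to_polyline(endpoint, polyline) < first_preset_distance:
            return polyline        # complete: return it as the extraction result
    return None                    # incomplete: the model outputs empty
```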
In a fully automatic extraction scheme, the "determining a first point on a tubular object to be extracted in an image" may specifically be implemented by the following steps:
and S51, inputting the image into a second detection model, and detecting to obtain a plurality of detection points on a plurality of tubular objects in the image.
S52, determining the first point from the plurality of detection points.
And S53, taking the tube where the first point is located in the plurality of tubes as the tube to be extracted.
The second detection model is trained according to a second sample image and coordinate information of a plurality of sample detection points on a plurality of sample tubes in the second sample image. The second detection model may be a 3D detection model.
In S51, the plurality of tubular objects all have the same category, namely the category of the tubular object to be extracted. Taking the coronary artery as an example, the plurality of tubular objects refers to a plurality of coronary arteries, and the plurality of sample tubular objects refers to a plurality of sample coronary arteries.
The plurality of detection points detected by the second detection model only need to be located on the plurality of tubular objects and do not need to be accurate to a specific point. Therefore, the second detection model may also be a model built on a shallow neural network, which effectively reduces the amount of calculation and the calculation time.
In S52, the "determining the first point from the plurality of detection points" may specifically be implemented by:
and S521, determining at least one unused detection point from the plurality of detection points.
S522, determining the first point needed to be used at this time from the at least one unused detection point.
In S521 above, in order to avoid repeated tracking, at least one unused detection point is determined among the plurality of detection points.
In the above S522, one unused detection point may be randomly determined from the at least one unused detection point as the first point to be used at this time.
Considering that the at least one unused detection point may contain some points that intersect the currently extracted tubular object, tracking from such intersecting points would merely repeat tracking that has already been performed and is therefore meaningless. Therefore, in S522 above, "determining the first point to be used this time from the at least one unused detection point" may specifically be: determining, from the at least one unused detection point, at least one detection point that does not intersect a currently extracted tubular object among the plurality of tubular objects; and determining the first point to be used this time from the at least one disjoint detection point.
In practical application, the distance between each unused detection point and the centerline of the currently extracted tubular object may be calculated; an unused detection point whose distance is smaller than or equal to a second preset distance is determined as an intersecting detection point, and an unused detection point whose distance is larger than the second preset distance is determined as a disjoint detection point.
Then, a disjoint detection point is randomly determined from the at least one disjoint detection point as the first point to be used this time.
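For illustration only, S521 and S522 may be sketched as follows, assuming detection points and extracted centerlines are stored as NumPy coordinate arrays; the distance test implements the intersecting/disjoint rule described above:

```python
import numpy as np

def next_first_point(detection_points, used, extracted_centerlines,
                     second_preset_distance=6.0):
    """S521/S522: pick a random unused detection point that does not
    intersect any currently extracted tubular object."""
    rng = np.random.default_rng()
    candidates = []
    for i, p in enumerate(detection_points):
        if i in used:
            continue
        # disjoint = farther than the second preset distance from every
        # already-extracted centerline
        disjoint = all(
            np.min(np.linalg.norm(cl - p, axis=1)) > second_preset_distance
            for cl in extracted_centerlines
        )
        if disjoint:
            candidates.append(i)
    if not candidates:
        return None          # step 104: build the tubular tree and stop
    i = int(rng.choice(candidates))
    used.add(i)
    return detection_points[i]
```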
Further, the method may further include:
104. When no unused detection point remains among the plurality of detection points, or all of the plurality of detection points intersect a currently extracted tubular object among the plurality of tubular objects, constructing a tubular tree from the currently extracted tubular objects.
Taking coronary artery as an example, the tubular tree is the coronary artery tree.
In a semi-automatic extraction scheme, the "determining a first point on a tubular object to be extracted in an image" may specifically be implemented by the following steps:
and S61, receiving a trigger operation event of a position point on the tubular object to be extracted in the image by the user.
And S62, determining the position point as a first point on the tubular object to be extracted in the image.
In S61, the position point may be any position on the tubular object to be extracted. The trigger operation may be a click operation or a long-press operation. That is to say, the user clicks once, or long-presses once, at any position on the tubular object to be extracted to obtain the extraction result of the centerline of the tubular object to be extracted; the interaction is simple and the operation is easy.
Further, the image may be a three-dimensional image. The neural network model may take a three-dimensional convolutional neural network (3D-CNN) as its basic framework. In this way, high-level abstract features can be learned, enhancing the expressive power of the model.
In addition, because images collected by different devices have different pixel pitches, the pixel pitches of the images can be unified through resampling, which reduces the influence caused by data differences.
Therefore, the method may further include, before the step 101:
105. resampling the image to adjust a pixel pitch in the image.
In addition, in order to facilitate the computation of the subsequent neural network model, the image may be normalized after resampling.
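For illustration only, a minimal preprocessing sketch for step 105 is given below, assuming scipy is available; the target pixel pitch and the min-max normalization are illustrative choices, not fixed by the embodiment:

```python
import numpy as np
from scipy.ndimage import zoom

def preprocess(image, pitch_mm, target_pitch_mm=0.5):
    """Step 105 (sketch): resample the image to a unified pixel pitch and
    normalize intensities."""
    # zoom factor per axis: old pitch / new pitch
    factors = [p / target_pitch_mm for p in pitch_mm]
    resampled = zoom(image.astype(np.float32), factors, order=1)
    # simple min-max normalization to [0, 1]
    lo, hi = resampled.min(), resampled.max()
    return (resampled - lo) / (hi - lo + 1e-8)
```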
In conclusion, modeling is based on deep learning, with a 3D-CNN as the basic framework, so high-level abstract features can be learned and the expressive power of the model is enhanced; deep learning can make full use of the parallel computing advantages of a GPU (Graphics Processing Unit), so the model consumes less time; category discrimination is introduced into the neural network model, so global information can be fully utilized and the model can correctly distinguish the tubular object to be extracted from interfering targets (taking coronary extraction as an example, the coronary artery can be correctly distinguished from interfering targets such as veins); category discrimination also provides a learning-based termination condition, which is more robust and flexible than conventional tracking models that terminate the iteration process with rules (rules that must be derived from a large amount of prior knowledge); and the 3D detection models quickly detect the starting endpoint and the detection points related to the tubular object to be extracted, which overcomes the complex interaction required by interactive extraction schemes in the prior art and realizes fully automatic extraction. The method achieves state-of-the-art results under an internationally accepted evaluation framework.
It should be added that, in comparative coronary extraction experiments, a prior-art segmentation scheme based on a neural network needs about 1 min to extract one coronary artery, whereas the technical solution provided by the embodiment of the present application can extract one coronary artery within 0.5 s. Moreover, because cardiac coronary arteries are very fine, with radii roughly in the range [0.3mm, 3mm], the overall segmentation effect of the prior-art neural-network-based segmentation scheme is poor; in particular, the middle and distal segments of the coronary artery are difficult to segment, so the final centerline extraction effect is poor. The technical solution provided by the embodiment of the present application extracts the middle and distal segments of the coronary artery well.
The coronary centerline extraction process will be described with reference to FIG. 1a:

The heart image input by the user at the client is fed into the first detection model, which detects the left and right coronary ostia. The heart image is also fed into the second detection model, which detects a plurality of detection points on a plurality of coronary arteries. A first point is determined from the plurality of detection points. The first point and the heart image are input into the neural network model; the tracking sub-network in the neural network model takes the first point as the tracking start point and tracks a plurality of points, while the judging sub-network judges the category of each point tracked by the tracking sub-network, and tracking stops when the category of a tracked point does not satisfy the preset condition. The tracked points are connected to serve as the coronary centerline of the coronary artery where the first point is located. The neural network model then calculates the distance between this coronary centerline and the left coronary ostium, and between it and the right coronary ostium; when either distance is smaller than the first preset distance, the extraction result of the coronary centerline is retained. The plurality of detection points can be traversed in the manner provided in the above embodiments; after all coronary centerlines have been extracted, a coronary tree is constructed from the retained coronary centerline results and output. A sketch of this pipeline follows.
In addition, in practical applications it is often necessary to further determine abnormal positions on the tubular object to be extracted according to the extraction result of its centerline. For example, industrial equipment and industrial workshops contain a large number of hoses, and abnormal positions on these hoses need to be detected during use. Specifically, the method may further include:
x: reconstructing a curved surface reconstruction image according to the central line of the tubular object to be extracted;
y: inputting the curved surface reconstruction image into a third detection model to obtain an abnormal position on the tubular object to be extracted;
and the third detection model is obtained by training according to the curved surface reconstruction image of the sample tubular object and the abnormal position marking information on the sample tubular object.
In step X, the volume data along the centerline of the tubular object to be extracted in the image is reconstructed to obtain a curved planar reconstruction (CPR) image. For the detailed implementation and principles, reference may be made to the prior-art CPR technique, which is not described in detail here.
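For illustration only, a simplified straightened-CPR sampler for step X is sketched below; fixing a single in-plane normal per centerline point is a simplifying assumption relative to full curved planar reconstruction:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def curved_planar_reconstruction(image, centerline, half_width=20):
    """Step X (sketch): for every centerline point, sample the image along a
    direction perpendicular to the local tangent; each sample line becomes
    one row of the straightened CPR image."""
    rows = []
    for i in range(1, len(centerline) - 1):
        tangent = centerline[i + 1] - centerline[i - 1]
        tangent = tangent / (np.linalg.norm(tangent) + 1e-8)
        # any vector not parallel to the tangent yields a perpendicular direction
        helper = np.array([1.0, 0.0, 0.0])
        if abs(np.dot(helper, tangent)) > 0.9:
            helper = np.array([0.0, 1.0, 0.0])
        normal = np.cross(tangent, helper)
        normal /= np.linalg.norm(normal)
        offsets = np.arange(-half_width, half_width + 1)
        coords = centerline[i][:, None] + normal[:, None] * offsets[None, :]
        rows.append(map_coordinates(image, coords, order=1))
    return np.stack(rows)   # shape: (len(centerline) - 2, 2 * half_width + 1)
```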
In the step Y, reference may be made to the prior art for specific implementation and principle of the third detection model, which is not specifically limited in this embodiment.
In addition, the extraction method of the tubular object centerline can also be applied to the field of muscle edge detection, where analyzing the muscle edge detection result can determine a person's fitness level; or to the field of stomach edge detection, where analyzing the stomach edge detection result can determine the health state of the stomach. Specifically, the method comprises the following steps:
determining a first point on the edge of an object to be detected in an image of the object to be detected;
inputting the first point and the image into a trained neural network model to obtain an extraction result of the edge of the object to be detected;
wherein the neural network model is to: determining an initial reference point according to the first point; tracking a target point in the image based on the reference point; when the category of the target point is predicted to meet a preset condition, taking the target point as a new reference point until the category of the target point tracked based on the new reference point is predicted to not meet the preset condition; and determining the extraction result according to the tracked points.
The neural network model is obtained by training according to the sample image and the edge marking information of the sample.
The object to be detected can be muscle, stomach, etc.
It should be noted that, in this embodiment, the specific implementation of extracting the edge of the object to be detected through the neural network model can refer to the corresponding content in the above embodiments in the same manner, and details are not described here.
In the prior art, interaction-based blood vessel centerline extraction schemes usually determine the vessel centerline either through a large amount of manual interaction or from two points, a start endpoint and an end endpoint.
In schemes based on a large amount of manual interaction, many manual inputs are provided, for example the coordinates of multiple points on the blood vessel, and the vessel centerline is generated with the help of conventional auxiliary algorithms.
In schemes based on a start endpoint and an end endpoint, the coordinates of the start endpoint and the end endpoint of the blood vessel are provided, and an algorithm, for example a shortest-path algorithm, is then used to acquire the vessel path between the two points. Although this type of method requires only two interaction points, determining these two points, in particular the end endpoint, takes a certain amount of time, which limits the overall extraction speed.
Fig. 2 is a flowchart illustrating an interface interaction method according to another embodiment of the present application. The execution subject of the method may be a client. The client may be hardware integrated on the terminal and having an embedded program, may also be application software installed in the terminal, and may also be tool software embedded in an operating system of the terminal, which is not limited in this embodiment of the present application. The terminal can be any terminal equipment including a mobile phone, a tablet personal computer, intelligent wearable equipment and the like. The server may be a common server, a cloud, or a virtual server, and as shown in fig. 2, the method includes:
201. the image is displayed on the interface.
202. And in response to a trigger operation at a position point on the tubular to be extracted in the image, marking the central line of the tubular to be extracted on the image.
The central line of the tubular object to be extracted is obtained by utilizing a trained neural network model; the neural network model is to: determining an initial reference point according to the position point; tracking a target point in the image based on the reference point; when the category of the target point is predicted to meet a preset condition, taking the target point as a new reference point until the category of the target point tracked based on the new reference point is predicted to not meet the preset condition; from the tracked points, the extracted results are determined.
In the above 201, when the image is a two-dimensional image, the image may be directly displayed.
When the image is a three-dimensional image, at least one perspective view of the image can be displayed on the interface. Specifically, the at least one perspective view may include: one or more of a front view, a side view, and a top view.
In one implementation, the image is a three-dimensional image, and multiple perspective views of the image are displayed in separate partitions on the interface. The multiple perspective views may include at least two of a front view, a side view, and a top view. Displaying multiple perspective views makes it easier for the user to find the tubular object to be extracted.
In 202, the trigger operation may be a click operation, a double-click operation, a long-press operation, or the like at a fixed point. When the interface is a non-touch screen interface, the user may perform the trigger operation with a mouse; when the interface is a touch screen interface, the user may perform the trigger operation with a mouse or with a finger.
When multiple perspective views of the image are displayed in separate partitions on the interface, the centerline of the tubular object to be extracted may be marked in each of the perspective views, which is convenient for the user to view.
Specifically, as shown in FIG. 16, the interface displays a front view 1701, a side view 1702, and a top view 1703 of the image in separate partitions. In response to a user's trigger operation at a position point on the tubular object to be extracted in any one of the multiple perspective views, the position point (i.e., the position point indicated by the arrow in FIG. 16) is marked in each of the multiple perspective views, and the centerline of the tubular object to be extracted (i.e., the curve indicated by the arrow in FIG. 17) is then marked in each of the multiple perspective views.
For specific implementation of the neural network model, reference may be made to the corresponding contents in the above embodiments, which are not described herein again. It should be added that, the process of the neural network model executing step "determining the initial reference point according to the position point" may refer to the process of the neural network model executing step "determining the initial reference point according to the first point" in the above embodiments, and will not be described in detail here.
In the technical solution provided by this embodiment of the present application, an image is displayed on an interface, and in response to a trigger operation by a user at a position point on the tubular object to be extracted in the image, the centerline of the tubular object to be extracted is marked on the image. Therefore, the user can obtain the centerline of the tubular object with a single click; the interaction is simple and easy to use, and the manual workload is reduced.
Further, the position point may be any point on the tubular object to be extracted. Therefore, the user can extract the central line of the tubular object to be extracted only by performing triggering operation at any point on the tubular object to be extracted, so that the operation difficulty of the user can be reduced, and the time of the user can be saved.
In practical application, when the image is a three-dimensional image, the tubular object to be extracted is not located in a single plane, and curved surface reconstruction needs to be performed according to the centerline of the tubular object to be extracted, so as to project the tubular object to be extracted onto one plane for viewing by an observer. That is, the above method may further include:
203. and reconstructing a curved surface reconstruction image according to the central line of the tubular object to be extracted.
204. And displaying the curved surface reconstruction result on the interface.
In 203, the volume data along the centerline of the tubular object to be extracted in the image is reconstructed to obtain a curved planar reconstruction (CPR) image. For the detailed implementation and principles, reference may be made to the prior-art CPR technique, which is not described in detail here.
In 204, the curved surface reconstructed image is displayed on the interface; specifically, it may be displayed in an area of the interface other than the display areas of the multiple perspective views of the image. In practical application, as shown in FIG. 17, the image display area of the interface may be divided into four sub-areas: a front view 1701, a side view 1702, and a top view 1703 of the image are displayed in three sub-areas respectively, and the curved surface reconstructed image 1704 is displayed in the remaining sub-area, with the centerline of the tubular object to be extracted marked in the curved surface reconstructed image 1704.
Considering that the computation of the neural network model is heavy while the computing capability of most terminal devices is limited, the neural network model may be deployed on a processing device with strong computing capability, for example a server, to ensure that it runs normally; the processing device runs the neural network model to obtain the centerline of the tubular object to be extracted. Specifically, in 202 above, "marking the centerline of the tubular object to be extracted on the image in response to a trigger operation at a position point on the tubular object to be extracted in the image" may be implemented by the following steps:
2021. and sending the position points and the images to a processing device so that the processing device extracts the central line of the tubular object to be extracted by utilizing the trained neural network model.
2022. And receiving an extraction result returned by the processing equipment.
2023. And marking the central line of the tubular object to be extracted on the image according to the extraction result.
The processing device may be provided with a GPU (Graphics Processing Unit), so that parallel acceleration can be performed by the GPU and the extraction time can be reduced.
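For illustration only, steps 2021 to 2023 may be sketched as a simple request to the processing device; the endpoint, payload layout, and JSON response format are assumptions made for the sketch, since the embodiment does not fix a transport protocol:

```python
import json
import urllib.request

def request_centerline(image_ref, position_point, server_url):
    """Steps 2021-2023 (sketch): offload extraction to a processing device
    and receive the extraction result."""
    payload = json.dumps({
        "position_point": position_point,      # e.g. [z, y, x] voxel coordinates
        "image": image_ref,                    # a reference to (or the) image data
    }).encode("utf-8")
    req = urllib.request.Request(
        server_url, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:  # 2022: receive extraction result
        result = json.load(resp)
    return result["centerline"]                # 2023: mark this on the image
```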
Certainly, with the continuous development of computer technology, some terminal devices currently available can support the operation of the neural network model well; on such terminal devices, the neural network model can be run locally to extract the centerline of the tubular object to be extracted, without requesting other processing devices to perform the processing.
Here, it should be noted that: the content of each step in the method provided by the embodiment of the present application, which is not described in detail in the foregoing embodiment, may refer to the corresponding content in the foregoing embodiment, and is not described herein again. In addition, the method provided in the embodiment of the present application may further include, in addition to the above steps, other parts or all of the steps in the above embodiments, and specific reference may be made to corresponding contents in the above embodiments, which is not described herein again.
Fig. 3 is a flowchart illustrating a method for training a neural network model according to an embodiment of the present disclosure. The execution main body of the method can be a client or a server. The client may be hardware integrated on the terminal and having an embedded program, may also be application software installed in the terminal, and may also be tool software embedded in an operating system of the terminal, which is not limited in this embodiment of the present application. The terminal can be any terminal equipment including a mobile phone, a tablet personal computer, intelligent wearable equipment and the like. The server may be a common server, a cloud, or a virtual server, as shown in fig. 3, the method includes:
301. a first point on the sample tube to be extracted in the third sample image is determined.
302. And inputting the first point and the third sample image into a neural network model to obtain a prediction extraction result of the central line of the tubular object of the sample to be extracted.
303. And performing parameter optimization on the neural network model according to the predicted extraction result and the expected extraction result corresponding to the third sample image.
Wherein the neural network model is configured to: determine an initial reference point according to the first point; track a target point in the third sample image based on the reference point; when it is predicted that the category of the target point satisfies a preset condition, take the target point as a new reference point, until it is predicted that the category of a target point tracked based on the new reference point does not satisfy the preset condition; and determine the predicted extraction result according to the tracked points.
In 301, the first point may be an arbitrary point on the sample tubular object to be extracted. The first point may be determined through human interaction or detected automatically by a detection model. For details, reference may be made in the same manner to the corresponding content on "determining a first point on the tubular object to be extracted in an image" in the above embodiments, and details are not described herein again.
In 302, the first point and the third sample image are used as inputs of a neural network model, and a predicted extraction result of the central line of the tubular object of the sample to be extracted is obtained by using the neural network model.
The specific implementation of the steps executed by the neural network model in this embodiment can refer to the corresponding contents in the above embodiments in a similar manner, and is not described herein again.
The initial value of each network parameter in the neural network model may be a random value.
In 303, the neural network model, once trained, is used to extract the centerline of the tubular object to be extracted.
A first difference between the predicted extraction result and an expected extraction result corresponding to the third sample image may be calculated, and the neural network model may be optimized for parameters according to the first difference. In particular, a loss function may be employed to calculate the first difference.
According to the technical scheme provided by the embodiment of the application, the central line of the tubular object is tracked in an iterative mode by using a trained neural network, the type of each tracked point is predicted by using a neural network model, and whether the tracking is continued or not is judged according to the type of the tracked point. That is to say, the technical scheme that this application embodiment provided can effectively distinguish the interfering information such as tubular object and background of waiting to extract. Therefore, the error tracking caused by the interference of the interference information can be effectively reduced, and the extraction accuracy of the central line of the tubular object is improved.
Further, to improve the effectiveness of model training, the network may also be optimized in conjunction with the differences between the predicted categories of the tracked points and their expected categories. Specifically, in 303, "performing parameter optimization on the neural network model according to the predicted extraction result and the expected extraction result corresponding to the third sample image" specifically includes:
3031a, performing parameter optimization on the neural network model according to a first difference between the predicted extraction result and an expected extraction result corresponding to the third sample image and a second difference between the predicted category and the expected category of the tracked point.
Wherein the tracked points include all points tracked during the tracking process started from the initial reference point. A second difference between the predicted category and the expected category of each tracked point may be calculated; the first difference and the second differences of all the tracked points are added to obtain a first sum value; and parameter optimization is performed on the neural network model according to the first sum value. The second difference may also be calculated by using a loss function.
Further, the "tracking a target point in the third sample image based on the reference point" may specifically be implemented by the following steps:
and S71, performing feature extraction on a second region where the reference point is located in the third sample image to obtain second region features.
And S72, predicting the trend of the sample tube to be extracted at the reference point according to the second area characteristics.
And S73, tracking to obtain the target point according to the trend.
In S71, the shape and the pixel size of the second region in this embodiment are the same as those in the above embodiments, and the position of the reference point in the second region in this embodiment is also the same as that in the above embodiments.
In S72 above, for the specific implementation of "predicting the trend of the sample tubular object to be extracted at the reference point according to the second region feature", reference may be made in the same manner to the implementation of "predicting the first trend of the tubular object to be extracted at the reference point" in the above embodiments, and details are not repeated here.
The specific implementation of S73 can refer to the corresponding content in the foregoing embodiments, and will not be described herein again.
Specifically, the method further includes:
304. and predicting the radius of the sample tubular to be extracted at the reference point according to the second region characteristic.
Similarly, the specific implementation of 304 refers to the specific implementation of "predicting the radius of the tubular object to be extracted at the reference point" in the embodiments, and is not described herein again.
In S73 above, "tracking to obtain the target point according to the trend" may specifically be: tracking to obtain the target point according to the trend and the radius. For details, reference may be made in the same manner to the corresponding content in the above embodiments, which is not described herein again.
In an example, 303 "performing parameter optimization on the neural network model according to the predicted extraction result and the expected extraction result corresponding to the third sample image" may specifically be implemented by:
3031b, determining a first difference between the predicted extraction result and a desired extraction result corresponding to the third sample image;
3032b, determining a third difference between the predicted radius of the sample tube to be extracted at the traced point and the expected radius of the sample tube to be extracted at the traced point;
3033b, according to the first difference and the third difference, the parameter optimization is carried out on the neural network model.
Wherein the tracked points include all points tracked during the tracking process started from the initial reference point. A third difference between the predicted radius of the sample tubular object to be extracted at each tracked point and the expected radius at that point is calculated; the first difference and the third differences of all the tracked points are added to obtain a second sum value; and parameter optimization is performed on the neural network model according to the second sum value. The third difference may also be calculated by using a loss function.
Combining the first difference and the third difference can effectively improve the effectiveness of model training, so that a neural network model with high accuracy is obtained.
In another example, in the above 3031a, "performing parameter optimization on the neural network model according to a first difference between the predicted extraction result and an expected extraction result corresponding to the third sample image and a second difference between the predicted category of the tracked point and the expected category thereof" may be specifically implemented by:
s81, determining a first difference between the predicted extraction result and an expected extraction result corresponding to the third sample image.
S82, determining a second difference between the predicted category of the tracked point and its expected category.
S83, determining a third difference between the predicted radius of the sample tube to be extracted at the traced point and the expected radius of the sample tube to be extracted at the traced point.
And S84, performing parameter optimization on the neural network model according to the first difference, the second difference and the third difference.
Specifically, the first difference, together with the second difference and the third difference corresponding to each of the tracked points, may be added to obtain a third sum value, and the neural network model is optimized according to the third sum value. In this embodiment, differences in three aspects are combined when optimizing the neural network model, which can effectively improve the prediction accuracy of the neural network model.
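For illustration only, the three-term objective of S81 to S84 may be sketched with PyTorch as follows; the specific loss functions (mean squared error, cross entropy) are illustrative choices, since the embodiment only requires that each difference be computed with a loss function:

```python
import torch
import torch.nn.functional as F

def training_loss(pred_centerline, expected_centerline,
                  pred_class_logits, expected_classes,
                  pred_radii, expected_radii):
    """S81-S84 (sketch): third sum value = first difference (centerline)
    + second differences (category) + third differences (radius), taken
    over all tracked points."""
    first = F.mse_loss(pred_centerline, expected_centerline)          # S81
    second = F.cross_entropy(pred_class_logits, expected_classes)     # S82
    third = F.mse_loss(pred_radii, expected_radii)                    # S83
    return first + second + third                                     # S84
```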
Usually, thousands or even hundreds of thousands of samples are needed to train a neural network model, and the labeling cost of tubular object centerlines, especially vessel centerlines, is very high. The inventor found that part of the network structure in the neural network model provided by this embodiment of the present application is used for predicting the category; that is, sample images labeled only with categories can be used to train that part of the network structure. Therefore, during the overall training process, the neural network model can be trained both with sample images labeled with the centerline, radius, and category of the tubular object and with other sample images labeled only with categories, which effectively reduces the training cost.
Specifically, the method further includes:
305. a first data set and a second data set are acquired as a training data set of the neural network model.
Wherein each sample image in the first data set corresponds to centerline labeling information, radius labeling information, and category labeling information of the sample tubular object, as well as category labeling information of at least one background therein; each sample image in the second data set corresponds only to category labeling information of the sample tubular object and of at least one background therein.
The cost of annotating the sample images in the second data set is much lower than the cost of annotating the sample images in the first data set.
In an example, the neural network model includes at least one first neural network layer, at least one second neural network layer, and at least one third neural network layer;
the at least one first neural network layer and the at least one second neural network layer are used for determining an initial reference point according to the first point; tracking a target point in the third sample image based on the reference point; when the category of the target point is predicted to meet a preset condition, taking the target point as a new reference point until the category of the target point tracked based on the new reference point is predicted to not meet the preset condition; determining the extraction result according to the tracked points;
the at least one first neural network layer and the at least one third neural network layer are used for predicting the category of the target point.
It should be added that the first neural network layer is specifically configured to perform all the feature extraction steps in the above embodiments; the second neural network layer is specifically configured to perform all the point tracking steps in the above embodiments; and the third neural network layer is specifically configured to perform all the category determination steps in the above embodiments. In one example, the second neural network layer is further specifically configured to perform the radius prediction step in the above embodiments.
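For illustration only, the layer split described above may be sketched as a shared feature extractor with two heads; the channel sizes and depths below are assumptions made for the sketch:

```python
import torch
import torch.nn as nn

class CenterlineNet(nn.Module):
    """Sketch of the three layer groups: a shared first neural network layer
    (3D feature extraction), a second layer predicting the trend and radius,
    and a third layer predicting the category."""
    def __init__(self, num_categories=2):
        super().__init__()
        self.first = nn.Sequential(                 # feature extraction
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        self.second = nn.Linear(32, 3 + 1)          # trend (3D direction) + radius
        self.third = nn.Linear(32, num_categories)  # category logits

    def forward(self, region):                      # region: (N, 1, D, H, W)
        feats = self.first(region)
        trend_and_radius = self.second(feats)
        trend = trend_and_radius[:, :3]
        radius = trend_and_radius[:, 3]
        logits = self.third(feats)
        return trend, radius, logits
```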
In practical application, in 303, "performing parameter optimization on the neural network model according to the predicted extraction result and the expected extraction result corresponding to the sample image" specifically includes:
and when the third sample image is from the first data set, performing parameter optimization on at least one first neural network layer, at least one second neural network layer and at least one third neural network layer according to the predicted extraction result and an expected extraction result corresponding to the third sample image.
Since the sample images in the first data set correspond to the centerline labeling information, the radius labeling information, the class labeling information of the sample tube, and the class labeling information of the at least one background, parameter optimization can be performed on the at least one first neural network layer, the at least one second neural network layer, and the at least one third neural network layer.
Further, the method further includes:
306. When the third sample image comes from the second data set, performing parameter optimization on the at least one first neural network layer and the at least one third neural network layer according to the predicted categories and expected categories of the tracked points.
The sample images in the second data set only correspond to the category labeling information of the sample tubular object and at least one background, so parameter optimization can be performed only on the at least one first neural network layer and the at least one third neural network layer; the parameters of the at least one second neural network layer are not suitable for optimization, as otherwise an overfitting problem may occur.
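For illustration only, dataset-dependent optimization may be sketched as follows; full_loss and category_loss stand in for the loss computations described above (for example, the three-term loss sketch) and are not defined here, and the restricted optimizer is built only over the first and third layer groups of the CenterlineNet sketch:

```python
import torch

def make_optimizers(model, lr=1e-4):
    """One optimizer over all parameters (first data set) and one restricted
    to the first and third layer groups (second data set)."""
    optimizer_full = torch.optim.Adam(model.parameters(), lr=lr)
    optimizer_cls = torch.optim.Adam(
        list(model.first.parameters()) + list(model.third.parameters()), lr=lr)
    return optimizer_full, optimizer_cls

def optimization_step(model, batch, optimizers, from_first_dataset):
    """Sketch of 303 / 306: pick the loss and the parameter set according to
    which data set the sample image comes from."""
    optimizer_full, optimizer_cls = optimizers
    if from_first_dataset:
        loss, opt = full_loss(model, batch), optimizer_full
    else:
        # second data set: category labels only, so the second neural
        # network layer's parameters are left untouched
        loss, opt = category_loss(model, batch), optimizer_cls
    opt.zero_grad()
    loss.backward()
    opt.step()
```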
In another embodiment, the present application provides a neural network system, comprising: at least one neural network layer;
the at least one neural network layer is to: determining an initial reference point according to a first point on a tubular object to be extracted in the image; tracking a target point in the image based on the reference point; when the category of the target point is predicted to meet a preset condition, taking the target point as a new reference point until the category of the target point tracked based on the new reference point is predicted to not meet the preset condition; and determining the extraction result of the central line of the tubular object to be extracted according to the tracked points.
For a specific implementation process of the step executed by the at least one neural network layer in this embodiment, reference may be made to corresponding contents in the foregoing embodiments, and details are not described here again. The determination method of the first point can also refer to the corresponding content in the above embodiments, and is not described herein again.
According to the technical scheme provided by the embodiment of the application, the central line of the tubular object is tracked in an iterative mode by using a trained neural network, the type of each tracked point is predicted by using a neural network model, and whether the tracking is continued or not is judged according to the type of the tracked point. That is to say, the technical scheme that this application embodiment provided can effectively distinguish the interfering information such as tubular object and background of waiting to extract. Therefore, the error tracking caused by the interference of the interference information can be effectively reduced, and the extraction accuracy of the central line of the tubular object is improved.
Further, the at least one neural network layer includes: at least one first neural network layer, at least one second neural network layer, and at least one third neural network layer;
the at least one first neural network layer and the at least one second neural network layer are used for determining an initial reference point according to the first point; tracking a target point in the image based on the reference point; when the category of the target point is predicted to meet a preset condition, taking the target point as a new reference point until the category of the target point tracked based on the new reference point is predicted to not meet the preset condition; determining the extraction result according to the tracked points; the at least one first neural network layer and the at least one third neural network layer are used for predicting the category of the target point.
It should be added that the first neural network layer is specifically configured to perform all the feature extraction steps in the above embodiments; the second neural network layer is specifically configured to perform all the point tracking steps in the above embodiments; and the third neural network layer is specifically configured to perform all the category determination steps in the above embodiments. In one example, the second neural network layer is further specifically configured to perform the radius prediction step in the above embodiments.
In an example, the at least one first neural network layer may be specifically configured to: and performing feature extraction on a first region where the target point is located in the image to obtain first region features. The at least one third neural network layer may be specifically configured to: and classifying the target points according to the first area characteristics to obtain the categories of the target points.
The at least one first neural network layer may be further specifically configured to: and performing feature extraction on a second region where the reference point is located in the image to obtain second region features. The at least one second neural network layer may be specifically configured to: predicting a first trend of the tubular object to be extracted at the reference point according to the second regional characteristics; and tracking to obtain a target point according to the first trend.
In an example, the at least one second neural network layer is further specifically configured to: predicting the radius of the tubular object to be extracted at the reference point according to the second region characteristic; and tracking to obtain a target point according to the first trend and the radius.
In an example, the at least one second neural network layer is further specifically configured to: when the category of the target point is predicted to meet a preset condition, taking the target point as a new reference point until the category of the target point tracked based on the new reference point is predicted to not meet the preset condition; and determining the extraction result according to the tracked points.
In an example, the at least one first neural network layer is further specifically configured to: and performing feature extraction on a fourth region where the second point is located in the image to obtain fourth region features. The at least one third neural network layer is specifically configured to: and classifying the second points according to the fourth region characteristics to obtain the category of the second points.
Fig. 4 shows a flow chart of a method for extracting a tube centerline according to an embodiment of the present application. The execution main body of the method can be a client or a server. The client may be hardware integrated on the terminal and having an embedded program, may also be application software installed in the terminal, and may also be tool software embedded in an operating system of the terminal, which is not limited in this embodiment of the present application. The terminal can be any terminal equipment including a mobile phone, a tablet personal computer, intelligent wearable equipment and the like. The server may be a common server, a cloud, or a virtual server, and as shown in fig. 4, the method includes:
401. an initial reference point is determined from a first point on the tubular to be extracted in the image.
402. And tracking a target point on the image according to the reference point.
403. And when the type of the target point is predicted to meet the preset condition, taking the target point as a new reference point until the type of the target point tracked according to the new reference point is predicted to not meet the preset condition.
404. Extracting a center line of the tubular to be extracted in the image according to the tracked point.
In 401, the first point may be any point on the tubular object to be extracted. The determination method of the first point and the determination method of the initial reference point may refer to corresponding contents in the above embodiments, and are not described herein again.
In 402, feature extraction may be performed on the second region where the reference point is located in the image to obtain a second region feature; a first trend of the tubular object to be extracted at the reference point is predicted according to the second region feature; and the target point is determined according to the first trend. In practical application, the above steps can be implemented by using a neural network model, as sketched below.
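For illustration only, a single tracking step may be sketched as follows; the model callable stands in for the trend-and-radius prediction of the neural network model, and tying the step length to the predicted radius is an illustrative choice:

```python
import numpy as np

def track_next_point(image, reference_point, model, region_size=19):
    """One tracking step (sketch): extract the second region around the
    reference point, let the model predict the local trend and radius,
    and step along the trend to the next target point."""
    z, y, x = np.round(reference_point).astype(int)
    r = region_size // 2
    second_region = image[z - r:z + r + 1, y - r:y + r + 1, x - r:x + r + 1]
    trend, radius = model(second_region)     # direction vector and radius
    trend = trend / (np.linalg.norm(trend) + 1e-8)
    target_point = reference_point + radius * trend
    return target_point
```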
It should be noted that, in the above step 402, a tracking scheme in the prior art may also be adopted to track and obtain the target point, which is not specifically limited in this embodiment.
In 403, the category of the target point may be predicted by a machine learning model, for example: may be a neural network model.
And when the type of the target point is the type of the tubular object to be extracted, judging that the type of the target point meets the preset condition. And when the category of the target point meets a preset condition, taking the target point as a new reference point until the fact that the category of the target point tracked according to the new reference point does not meet the preset condition is predicted.
For specific implementation of the steps 401, 402, 403, and 404, reference may be made to corresponding contents in the foregoing embodiments, and details are not described here.
In one example, the above steps 401, 402, 403 and 404 can be performed by the neural network model in the above embodiments.
According to the technical scheme provided by the embodiment of the application, the center line of the tubular object is tracked in an iterative mode, the category of each tracked point is predicted, and whether the tracking is continued or not is judged according to the category of the tracked point. That is to say, the technical scheme that this application embodiment provided can effectively distinguish the interfering information such as tubular object and background of waiting to extract. Therefore, the error tracking caused by the interference of the interference information can be effectively reduced, and the extraction accuracy of the central line of the tubular object is improved.
Further, the method may further include:
405. and performing feature extraction on a first region where the target point is located in the image to obtain first region features.
406. And classifying the target points according to the first area characteristics to obtain the categories of the target points.
The specific implementation of the above 405 and 406 can refer to the corresponding content in the above embodiments, and is not described herein again.
Here, it should be noted that: the content of each step in the method provided by the embodiment of the present application, which is not described in detail in the foregoing embodiment, may refer to the corresponding content in the foregoing embodiment, and is not described herein again. In addition, the method provided in the embodiment of the present application may further include, in addition to the above steps, other parts or all of the steps in the above embodiments, and specific reference may be made to corresponding contents in the above embodiments, which is not described herein again.
Fig. 5 is a flowchart illustrating a method for extracting a cardiac coronary centerline according to another embodiment of the present application. The execution main body of the method can be a client or a server. The client may be hardware integrated on the terminal and having an embedded program, may also be application software installed in the terminal, and may also be tool software embedded in an operating system of the terminal, which is not limited in this embodiment of the present application. The terminal can be any terminal equipment including a mobile phone, a tablet personal computer, intelligent wearable equipment and the like. As shown in fig. 5, the method includes:
501. a first point on the coronary artery to be extracted in the cardiac image is determined.
502. And inputting the first point and the heart image into a trained neural network model to obtain an extraction result of the coronary artery central line to be extracted.
Wherein the neural network model is to: determining an initial reference point according to the first point; tracking a target point in the cardiac image based on the reference point; when the category of the target point is predicted to meet a preset condition, taking the target point as a new reference point until the category of the target point tracked based on the new reference point is predicted to not meet the preset condition; and determining the extraction result according to the tracked points.
In the present embodiment, the image in each of the above embodiments is embodied as a heart image and the tubular object in each of the above embodiments is embodied as a coronary artery. For specific implementation of the steps 501 and 502, reference may be made to corresponding contents in the foregoing embodiments, and details are not described here.
According to the technical scheme provided by the embodiment of the application, the trained neural network is used for tracking the coronary artery central line in an iterative mode, the type of each tracked point is predicted by using the neural network model, and whether tracking is continued or not is judged according to the type of the tracked point. That is to say, the technical scheme provided by the embodiment of the application can effectively distinguish the coronary artery to be extracted from the interference information such as the background. Therefore, the error tracking caused by the interference of the interference information can be effectively reduced, and the accuracy of extracting the coronary artery central line is improved.
In one example, the neural network model is further configured to: performing feature extraction on a first region where the target point is located in the heart image to obtain first region features; and classifying the target points according to the first area characteristics to obtain the categories of the target points.
Specifically, the target point is classified according to the first area feature to obtain the category of the target point, which may specifically be implemented by the following steps: predicting the probability that the target point belongs to a plurality of candidate categories according to the first regional characteristics; among the plurality of candidate categories are: a coronary category and at least one background category; and determining the category of the target point according to the probability.
Here, it should be noted that: the content of each step in the method provided by the embodiment of the present application, which is not described in detail in the foregoing embodiment, may refer to the corresponding content in the foregoing embodiment, and is not described herein again. In addition, the method provided in the embodiment of the present application may further include, in addition to the above steps, other parts or all of the steps in the above embodiments, and specific reference may be made to corresponding contents in the above embodiments, which is not described herein again.
It should be added that, the training mode of the neural network model in the embodiment of the present application may also refer to the corresponding content in the above embodiments, and details are not described herein again.
Fig. 6 is a flowchart illustrating an interface interaction method according to another embodiment of the present application. The execution subject of the method may be a client. The client may be hardware integrated on the terminal and having an embedded program, may also be application software installed in the terminal, and may also be tool software embedded in an operating system of the terminal, which is not limited in this embodiment of the present application. The terminal can be any terminal equipment including a mobile phone, a tablet personal computer, intelligent wearable equipment and the like. As shown in fig. 6, the method includes:
601. the heart image is displayed on the interface.
602. Marking the coronary artery central line to be extracted on the heart image in response to a trigger operation at a position point on the coronary artery to be extracted in the heart image.
The coronary artery central line to be extracted is extracted by utilizing a trained neural network model; the neural network model is to: determining an initial reference point according to the position point; tracking a target point in the cardiac image based on the reference point; when the category of the target point is predicted to meet a preset condition, taking the target point as a new reference point until the category of the target point tracked based on the new reference point is predicted to not meet the preset condition; and determining the extraction result according to the tracked points.
In the present embodiment, the image in each of the above embodiments is embodied as a heart image and the tubular object in each of the above embodiments is embodied as a coronary artery. The specific implementation of the steps 601 and 602 may refer to the corresponding content in the above embodiments, and is not described herein again.
In the technical scheme provided by the embodiment of the application, a heart image is displayed on an interface; marking the coronary artery central line to be extracted on the heart image in response to a trigger operation of a user at a position point on the coronary artery to be extracted in the heart image. Therefore, in the technical scheme provided by the embodiment of the application, the user can obtain the coronary artery central line by clicking once, the interaction is simple and easy to use, and the manual workload is reduced.
Here, it should be noted that: the content of each step in the method provided by the embodiment of the present application, which is not described in detail in the foregoing embodiment, may refer to the corresponding content in the foregoing embodiment, and is not described herein again. In addition, the method provided in the embodiment of the present application may further include, in addition to the above steps, other parts or all of the steps in the above embodiments, and specific reference may be made to corresponding contents in the above embodiments, which is not described herein again.
Fig. 7 is a flowchart illustrating a method for extracting a cardiac coronary centerline according to another embodiment of the present application. The method may be executed by a client or a server. The client may be hardware with an embedded program integrated on a terminal, application software installed on the terminal, or tool software embedded in the terminal's operating system, which is not limited in this embodiment of the present application. The terminal may be any terminal device, such as a mobile phone, a tablet computer, or a smart wearable device. As shown in fig. 7, the method includes:
701. Determine an initial reference point according to a first point on a coronary artery to be extracted in a cardiac image.
702. Track a target point on the cardiac image according to the reference point.
703. When the category of the target point is predicted to meet a preset condition, take the target point as a new reference point, until the category of the target point tracked according to the new reference point is predicted to not meet the preset condition.
704. Extract the centerline of the coronary artery to be extracted in the cardiac image according to the tracked points.
In this embodiment, the image in each of the above embodiments is embodied as a cardiac image, and the tubular object is embodied as a coronary artery. For the specific implementation of steps 701 to 704, refer to the corresponding content in the above embodiments, which is not repeated here. A minimal sketch of the iterative tracking loop of steps 701 to 704 follows.
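In the sketch below, the `model` interface (initial_reference, track, classify) is an assumed abstraction of the neural network model, not the exact implementation described in this application.

import numpy as np

def extract_centerline(image, first_point, model, max_steps=2048):
    """Illustrative iterative tracking of steps 701-704. Assumed interface:
    model.initial_reference(image, p) -> initial reference point,
    model.track(image, ref)           -> next target point,
    model.classify(image, p)          -> predicted category of a point."""
    tracked = []
    ref = model.initial_reference(image, first_point)    # step 701
    for _ in range(max_steps):                           # guard against non-termination
        target = model.track(image, ref)                 # step 702
        if model.classify(image, target) != "coronary":  # preset condition fails
            break                                        # stop tracking
        tracked.append(target)
        ref = target                                     # step 703: new reference point
    return np.asarray(tracked)                           # step 704: centerline points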
According to the technical solution provided by this embodiment of the application, the coronary artery centerline is tracked iteratively, the category of each tracked point is predicted, and whether to continue tracking is decided according to that category. In other words, the technical solution can effectively distinguish the coronary artery to be extracted from interfering information such as the background. This reduces erroneous tracking caused by such interference and improves the accuracy of coronary centerline extraction.
Here, it should be noted that: the content of each step in the method provided by the embodiment of the present application, which is not described in detail in the foregoing embodiment, may refer to the corresponding content in the foregoing embodiment, and is not described herein again. In addition, the method provided in the embodiment of the present application may further include, in addition to the above steps, other parts or all of the steps in the above embodiments, and specific reference may be made to corresponding contents in the above embodiments, which is not described herein again.
It should be noted that, in the above embodiments, the neural network model may be put into use after training to help the user extract the centerline of a tubular object. After deployment, the neural network model still needs to be updated and iterated continuously to handle new cases. Therefore, in practical applications, a new sample image can be labeled with the extraction result produced by the tubular object centerline extraction method, so as to obtain new training data for training the neural network model.
Fig. 8 shows a schematic structural diagram of an apparatus for extracting a centerline of a tubular object according to another embodiment of the present application. As shown in fig. 8, the apparatus includes a first determining module 801 and a first input module 802, wherein:
a first determining module 801, configured to determine a first point on a tubular object to be extracted in an image;
a first input module 802, configured to input the first point and the image into a trained neural network model, and obtain an extraction result of the centerline of the tubular object to be extracted;
wherein the neural network model is to: determining an initial reference point according to the first point; tracking a target point in the image based on the reference point; when the category of the target point is predicted to meet a preset condition, taking the target point as a new reference point until the category of the target point tracked based on the new reference point is predicted to not meet the preset condition; and determining the extraction result according to the tracked points.
According to the technical solution provided by this embodiment of the application, the centerline of the tubular object is tracked iteratively by a trained neural network model, which predicts the category of each tracked point, and whether to continue tracking is decided according to that category. In other words, the technical solution can effectively distinguish the tubular object to be extracted from interfering information such as the background. This reduces erroneous tracking caused by such interference and improves the accuracy of tubular object centerline extraction. It also effectively avoids problems such as incomplete tracking in the prior art, where tracking is interrupted by a blockage inside the tubular object to be extracted. Further, the above apparatus further includes:
a second determining module 803, configured to determine a starting endpoint, related to the tubular object to be extracted, in the image.
Further, the above apparatus further includes:
a second input module 803, configured to input the image to the first detection model, and obtain the starting endpoint;
the first detection model is trained according to a first sample image and coordinate information of a starting endpoint of a sample tubular object in the first sample image.
Further, the first determining module 801 is specifically configured to:
inputting the image into a second detection model, and detecting to obtain a plurality of detection points on a plurality of tubular objects in the image;
determining the first point from the plurality of detection points;
taking the tubular object, among the plurality of tubular objects, on which the first point is located as the tubular object to be extracted;
the second detection model is trained according to a second sample image and coordinate information of a plurality of sample detection points on a plurality of sample tubular objects in the second sample image.
Further, the first determining module 801 is specifically configured to:
determining at least one unused detection point from the plurality of detection points;
determining, from the at least one unused detection point, at least one disjoint detection point that does not intersect any currently extracted tubular object among the plurality of tubular objects;
and determining, from the at least one disjoint detection point, the first point to be used this time.
Further, when there is no unused detection point among the plurality of detection points, or when all of the plurality of detection points intersect currently extracted tubular objects, a tubular object tree is constructed from the currently extracted tubular objects. A sketch of this selection logic is given below.
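The sketch below is illustrative only; the distance-based intersection test and helper names are assumptions, not the test specified by this application.

import numpy as np

def pick_first_point(detection_points, used, extracted_tubes, tol=2.0):
    """Illustrative choice of the next first point: a detection point
    qualifies if it is unused and disjoint from every currently
    extracted tubular object (farther than `tol` voxels from all of
    their centerline points)."""
    def intersects(p):
        return any(np.linalg.norm(np.asarray(tube) - p, axis=1).min() <= tol
                   for tube in extracted_tubes if len(tube))
    for i, p in enumerate(detection_points):
        if i in used or intersects(np.asarray(p, dtype=float)):
            continue
        used.add(i)
        return p
    return None  # nothing left: build the tubular object tree instead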
Further, the first determining module 801 is specifically configured to:
receiving a user's trigger operation event at a position point on the tubular object to be extracted in the image;
and determining the position point as a first point on the tubular object to be extracted in the image.
Further, the image is a three-dimensional image, and the neural network model takes a three-dimensional convolutional neural network as its basic framework; a minimal sketch of such a backbone follows.
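Channel counts and depth in the sketch below are illustrative assumptions; the application does not fix the architecture.

import torch.nn as nn

# Input: a (batch, 1, D, H, W) region cropped around a point;
# output: a fixed-length region feature vector.
backbone = nn.Sequential(
    nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool3d(2),
    nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool3d(2),
    nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1), nn.Flatten(),  # -> 64-dimensional feature
)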
Here, it should be noted that: the device for extracting the center line of the tubular object provided in the above embodiments can achieve the technical solutions and technical effects described in the above embodiments, and the specific implementation and principle of each module and the neural network model can refer to the corresponding content in the above embodiments, and will not be described herein again.
Fig. 9 is a schematic structural diagram of an interface interaction device according to still another embodiment of the present application. As shown in fig. 9, the apparatus includes:
a first display module 901, configured to display an image on an interface;
a first labeling module 902, configured to mark the centerline of the tubular object to be extracted on the image in response to a trigger operation at a position point on the tubular object to be extracted in the image;
the central line of the tubular object to be extracted is obtained by utilizing a trained neural network model; the neural network model is to: determining an initial reference point according to the position point; tracking a target point in the image based on the reference point; when the category of the target point is predicted to meet a preset condition, taking the target point as a new reference point until the category of the target point tracked based on the new reference point is predicted to not meet the preset condition; and determining the extraction result according to the tracked points.
In the technical solution provided by this embodiment of the application, an image is displayed on an interface, and in response to a user's trigger operation at a position point on the tubular object to be extracted in the image, the centerline of the tubular object to be extracted is marked on the image. The user can thus obtain the tubular object centerline with a single click; the interaction is simple and easy to use, and manual workload is reduced.
Further, the position point is any point on the tubular object to be extracted.
Further, the image is a three-dimensional image;
the above-mentioned device still includes:
a first reconstruction module 903, configured to reconstruct a curved surface reconstruction image from the centerline of the tubular object to be extracted;
the first display module 901 is further configured to display the curved surface reconstruction image on the interface.
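One common way to realize such a curved surface reconstruction is a straightened curved planar reformation; the sketch below is an illustrative realization under that assumption, not the reconstruction procedure specified by this application.

import numpy as np
from scipy.ndimage import map_coordinates

def curved_planar_reformation(volume, centerline, half_width=20):
    """For each centerline point, sample a line orthogonal to the local
    tangent, producing a 2D image in which the tubular object runs
    straight. `volume` is a (Z, Y, X) array; `centerline` is (N, 3) in
    voxel coordinates."""
    centerline = np.asarray(centerline, dtype=float)
    tangents = np.gradient(centerline, axis=0)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True) + 1e-12
    ref = np.array([0.0, 0.0, 1.0])           # fixed vector to build normals
    offsets = np.arange(-half_width, half_width + 1)
    rows = []
    for p, t in zip(centerline, tangents):
        n = np.cross(t, ref)
        if np.linalg.norm(n) < 1e-6:           # tangent parallel to ref
            n = np.cross(t, np.array([0.0, 1.0, 0.0]))
        n /= np.linalg.norm(n)
        samples = p[None, :] + offsets[:, None] * n[None, :]
        rows.append(map_coordinates(volume, samples.T, order=1))
    return np.stack(rows)                      # (N, 2 * half_width + 1)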
Further, the image is a three-dimensional image; and,
the first display module 901 is specifically configured to:
and displaying a plurality of view-angle maps of the image in separate regions of the interface.
Further, the first labeling module 902 is further configured to:
marking the centerline of the tubular object to be extracted in each of the plurality of view-angle maps.
Further, the first labeling module 902 is further configured to:
sending the position point and the image to a processing device, so that the processing device extracts the centerline of the tubular object to be extracted by utilizing the trained neural network model;
receiving an extraction result returned by the processing equipment;
and marking the centerline of the tubular object to be extracted on the image according to the extraction result.
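As a sketch of this client-side flow (the endpoint URL, transport, and response schema are hypothetical; the application does not specify them):

import requests  # assumed HTTP transport between client and processing device

def request_centerline(image_id, position_point,
                       url="http://processing-device/extract"):
    """Send the clicked position point and an image reference to the
    processing device, then return the extraction result so the client
    can mark it on the displayed image."""
    resp = requests.post(url, json={"image_id": image_id,
                                    "point": list(position_point)})
    resp.raise_for_status()
    return resp.json()["centerline"]  # assumed response field: list of points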
Here, it should be noted that: the interface interaction device provided in the above embodiments can implement the technical solutions and technical effects described in the above method embodiments, and the specific implementation and principle of the above modules and neural network models can refer to the corresponding contents in the above method embodiments, which are not described herein again.
Fig. 10 is a schematic structural diagram of a model training apparatus according to still another embodiment of the present application. As shown in fig. 10, the apparatus includes:
a third determining module 1001, configured to determine a first point on a sample tubular object to be extracted in a third sample image;
a third input module 1002, configured to input the first point and the third sample image into a neural network model, so as to obtain a predicted extraction result of the centerline of the sample tubular object to be extracted;
a first optimization module 1003, configured to perform parameter optimization on the neural network model according to the predicted extraction result and an expected extraction result corresponding to the third sample image;
wherein the neural network model is to: determining an initial reference point according to the first point; tracking a target point in the third sample image based on the reference point; when the predicted category of the target point meets a preset condition, taking the target point as a new reference point until the predicted category of the target point tracked based on the new reference point does not meet the preset condition; and determining the predicted extraction result according to the tracked points.
According to the technical solution provided by this embodiment of the application, the centerline of the tubular object is tracked iteratively by the neural network model, which predicts the category of each tracked point, and whether to continue tracking is decided according to that category. In other words, the technical solution can effectively distinguish the tubular object to be extracted from interfering information such as the background. This reduces erroneous tracking caused by such interference and improves the accuracy of tubular object centerline extraction.
Further, the first optimization module 1003 is specifically configured to:
and performing parameter optimization on the neural network model according to a first difference between the predicted extraction result and an expected extraction result corresponding to the third sample image and a second difference between the predicted category and the expected category of the tracked point.
Further, the first optimization module 1003 is specifically configured to:
determining a first difference between the predicted extraction result and an expected extraction result corresponding to the third sample image;
determining a second difference between the predicted category of the tracked point and its expected category;
determining a third difference between the predicted radius of the sample tubular object to be extracted at the tracked point and its expected radius at the tracked point;
and performing parameter optimization on the neural network model according to the first difference, the second difference and the third difference. A sketch of one such combined loss follows.
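In the sketch below, the particular loss forms (MSE, cross-entropy, smooth L1) and weights are assumptions, since the application only names the three differences.

import torch.nn.functional as F

def combined_loss(pred_centerline, gt_centerline,   # first difference
                  pred_logits, gt_labels,           # second difference
                  pred_radius, gt_radius,           # third difference
                  w1=1.0, w2=1.0, w3=1.0):
    """Weighted sum of the three differences; assumes predicted and
    expected centerlines are resampled to the same number of points."""
    first = F.mse_loss(pred_centerline, gt_centerline)
    second = F.cross_entropy(pred_logits, gt_labels)
    third = F.smooth_l1_loss(pred_radius, gt_radius)
    return w1 * first + w2 * second + w3 * third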
Further, the above apparatus further includes:
a first obtaining module 1004, configured to obtain a first data set and a second data set as a training data set of the neural network model;
wherein the sample images in the first data set have centerline labeling information, radius labeling information, and category labeling information of the sample tubular object and of at least one background therein;
and the sample images in the second data set have category labeling information of the sample tubular object and of at least one background therein.
Further, the first optimization module 1003 is specifically configured to: when the third sample image is from the first data set, perform parameter optimization on the at least one first neural network layer, the at least one second neural network layer and the at least one third neural network layer according to the predicted extraction result and the expected extraction result corresponding to the third sample image.
Further, the above apparatus further comprises:
a second optimization module 1005, configured to: when the third sample image is from the second data set, perform parameter optimization on the at least one first neural network layer and the at least one third neural network layer according to the predicted category and the expected category of the tracked point. A sketch of this dataset-dependent update follows.
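In the sketch below, the layer-group attribute names are placeholders for the first, second and third neural network layers described above.

def optimization_step(optimizer, model, loss, from_first_data_set):
    """Samples from the first data set update all three layer groups;
    samples from the second data set update only the first and third
    groups (gradients of the second group are discarded, so the
    optimizer skips those parameters)."""
    optimizer.zero_grad()
    loss.backward()
    if not from_first_data_set:
        for p in model.second_layers.parameters():
            p.grad = None  # freeze the second layer group this step
    optimizer.step()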
Here, it should be noted that: the model training device provided in the above embodiments can implement the technical solutions and technical effects described in the above method embodiments, and the specific implementation and principle of the above modules and neural network models can refer to the corresponding contents in the above method embodiments, and are not described herein again.
Fig. 11 shows a schematic structural diagram of an extraction device for a centerline of a tubular object according to another embodiment of the present application. As shown in fig. 11, the apparatus includes:
a fourth determining module 1201, configured to determine an initial reference point according to a first point on the tubular object to be extracted in the image;
a first tracking module 1202, configured to track a target point on the image according to the reference point;
a fifth determining module 1203, configured to, when it is predicted that the category of the target point satisfies the preset condition, use the target point as a new reference point until it is predicted that the category of the target point tracked according to the new reference point does not satisfy the preset condition;
a first extracting module 1204, configured to extract the centerline of the tubular object to be extracted in the image according to the tracked points.
According to the technical solution provided by this embodiment of the application, the centerline of the tubular object is tracked iteratively, the category of each tracked point is predicted, and whether to continue tracking is decided according to that category. In other words, the technical solution can effectively distinguish the tubular object to be extracted from interfering information such as the background. This reduces erroneous tracking caused by such interference and improves the accuracy of tubular object centerline extraction.
Further, the above apparatus further includes:
a second extraction module 1304, configured to perform feature extraction on a first region where the target point is located in the image, so as to obtain a first region feature;
a first classification module 1305, configured to classify the target point according to the first region feature, so as to obtain the category of the target point.
Here, it should be noted that: the device for extracting the center line of the tubular object provided in the above embodiments can achieve the technical solutions and technical effects described in the above embodiments, and the specific implementation and principle of each module and the neural network model can refer to the corresponding content in the above embodiments, and will not be described herein again.
Fig. 12 is a schematic structural diagram illustrating an apparatus for extracting a coronary centerline of a heart according to another embodiment of the present application. As shown in fig. 12, the apparatus includes:
a sixth determining module 1301, configured to determine a first point on the coronary artery to be extracted in the cardiac image;
a fourth input module 1302, configured to input the first point and the cardiac image into a trained neural network model, so as to obtain an extraction result of the coronary artery centerline to be extracted;
wherein the neural network model is to: determining an initial reference point according to the first point; tracking a target point in the cardiac image based on the reference point; when the category of the target point is predicted to meet a preset condition, taking the target point as a new reference point until the category of the target point tracked based on the new reference point is predicted to not meet the preset condition; and determining the extraction result according to the tracked points.
According to the technical solution provided by this embodiment of the application, the coronary artery centerline is tracked iteratively by a trained neural network model, which predicts the category of each tracked point, and whether to continue tracking is decided according to that category. In other words, the technical solution can effectively distinguish the coronary artery to be extracted from interfering information such as the background. This reduces erroneous tracking caused by such interference and improves the accuracy of coronary centerline extraction.
Here, it should be noted that: the device for extracting the cardiac coronary artery centerline provided in the above embodiments can achieve the technical solutions and technical effects described in the above embodiments, and the specific implementation and principle of the above modules and neural network models can refer to the corresponding contents in the above embodiments, and will not be described herein again.
Fig. 13 is a schematic structural diagram of an interface interaction apparatus according to still another embodiment of the present application. As shown in fig. 13, the apparatus includes:
a first display module 1401 for displaying a heart image on an interface;
a second labeling module 1402, configured to mark the centerline of the coronary artery to be extracted on the cardiac image in response to a trigger operation at a position point on the coronary artery to be extracted in the cardiac image;
the coronary artery central line to be extracted is extracted by utilizing a trained neural network model; the neural network model is to: determining an initial reference point according to the position point; tracking a target point in the cardiac image based on the reference point; when the category of the target point is predicted to meet a preset condition, taking the target point as a new reference point until the category of the target point tracked based on the new reference point is predicted to not meet the preset condition; and determining the extraction result according to the tracked points.
In the technical solution provided by this embodiment of the application, a heart image is displayed on an interface, and in response to a user's trigger operation at a position point on the coronary artery to be extracted in the heart image, the centerline of the coronary artery to be extracted is marked on the heart image. The user can thus obtain the coronary artery centerline with a single click; the interaction is simple and easy to use, and manual workload is reduced.
Here, it should be noted that: the interface interaction device provided in the above embodiments can implement the technical solutions and technical effects described in the above method embodiments, and the specific implementation and principle of the above modules and neural network models can refer to the corresponding contents in the above method embodiments, which are not described herein again.
Fig. 14 is a schematic structural diagram illustrating an apparatus for extracting a coronary centerline of a heart according to another embodiment of the present application. As shown in fig. 14, the apparatus includes:
a seventh determining module 1501, configured to determine an initial reference point according to a first point on a coronary artery to be extracted in the cardiac image;
a second tracking module 1502 for tracking a target point on the cardiac image according to the reference point;
an eighth determining module 1503, configured to, when it is predicted that the category of the target point satisfies the preset condition, use the target point as a new reference point until it is predicted that the category of the target point tracked according to the new reference point does not satisfy the preset condition;
a second extracting module 1504, configured to extract the centerline of the coronary artery to be extracted in the cardiac image according to the tracked points.
According to the technical solution provided by this embodiment of the application, the coronary artery centerline is tracked iteratively, the category of each tracked point is predicted, and whether to continue tracking is decided according to that category. In other words, the technical solution can effectively distinguish the coronary artery to be extracted from interfering information such as the background. This reduces erroneous tracking caused by such interference and improves the accuracy of coronary centerline extraction.
Here, it should be noted that: the device for extracting the cardiac coronary artery centerline provided in the above embodiments can achieve the technical solutions and technical effects described in the above embodiments, and the specific implementation and principle of the above modules and neural network models can refer to the corresponding contents in the above embodiments, and will not be described herein again.
Fig. 15 shows a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 15, the electronic device includes a memory 1101 and a processor 1102. The memory 1101 may be configured to store various other data to support operations on the electronic device. Examples of such data include instructions for any application or method operating on the electronic device. The memory 1101 may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or a magnetic or optical disk.
The memory 1101 is used for storing programs;
the processor 1102 is coupled to the memory 1101 and configured to execute the program stored in the memory 1101 to implement the tube centerline extraction method, the interface interaction method, the model training method or the cardiac coronary centerline extraction method provided in the foregoing method embodiments.
Further, as shown in fig. 15, the electronic device also includes a communication component 1103, a display 1104, a power component 1105, an audio component 1106, and the like. Only some components are schematically shown in fig. 15, which does not mean that the electronic device includes only the components shown.
Accordingly, the present application further provides a computer-readable storage medium storing a computer program which, when executed by a computer, can implement the steps or functions of the tubular object centerline extraction method, the interface interaction method, the model training method, or the cardiac coronary centerline extraction method provided in the foregoing method embodiments.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (49)

1. A method for extracting a centerline of a tubular object, comprising:
determining a first point on a tubular object to be extracted in the image;
inputting the first point and the image into a trained neural network model to obtain an extraction result of the central line of the tubular object to be extracted;
wherein the neural network model is to: determining an initial reference point according to the first point; tracking a target point in the image based on the reference point; when the category of the target point is predicted to meet a preset condition, taking the target point as a new reference point until the category of the target point tracked based on the new reference point is predicted to not meet the preset condition; and determining the extraction result according to the tracked points.
2. The method of claim 1, wherein the neural network model is further configured to:
performing feature extraction on a first region where the target point is located in the image to obtain a first region feature;
and classifying the target point according to the first region feature to obtain the category of the target point.
3. The method of claim 2, wherein classifying the target point according to the first region feature to obtain the category of the target point comprises:
predicting, according to the first region feature, the probability that the target point belongs to each of a plurality of candidate categories, the plurality of candidate categories comprising: a category of the tubular object to be extracted and at least one background category;
and determining the category of the target point according to the probabilities.
4. The method of claim 2, wherein the target point is a center point of the first area.
5. The method of any one of claims 1 to 4, wherein the neural network model is further configured to:
and when the category of the target point is the category of the tubular object to be extracted, determining that the category of the target point meets the preset condition.
6. The method of any one of claims 1 to 4, wherein tracking a target point in the image based on the reference point comprises:
performing feature extraction on a second region where the reference point is located in the image to obtain a second region feature;
predicting a first trend of the tubular object to be extracted at the reference point according to the second region feature;
and tracking to obtain a target point according to the first trend.
7. The method according to claim 6, wherein predicting the first trend of the tubular object to be extracted at the reference point according to the second region feature comprises:
predicting, according to the second region feature, the probability that the trend of the tubular object to be extracted at the reference point belongs to each of a plurality of candidate trends;
and determining the first trend from the two candidate trends with the highest probabilities according to the tracking direction to the reference point.
8. The method according to claim 7, wherein determining the first trend from the two candidate trends with the highest probabilities according to the tracking direction to the reference point comprises:
determining, as the first trend, the one of the two candidate trends whose included angle with the tracking direction is smaller than or equal to a preset included angle.
9. The method of claim 6, wherein the neural network model is further configured to: predicting the radius of the tubular object to be extracted at the reference point according to the second region feature; and
according to the first trend, tracking to obtain a target point, including:
and tracking to obtain a target point according to the first trend and the radius.
10. The method of any of claims 1 to 4, further comprising: determining a starting endpoint, related to the tubular object to be extracted, in the image; and
determining the extraction result according to the tracked points, comprising:
connecting the tracked points to obtain a connecting line;
and when the distance between the starting endpoint and the connecting line is smaller than a first preset distance, determining the connecting line to be the centerline of the tubular object to be extracted, as the extraction result.
11. The method of claim 10, further comprising:
inputting the image to a first detection model to obtain the starting endpoint;
the first detection model is trained according to a first sample image and coordinate information of a starting endpoint of a sample tubular object in the first sample image.
12. The method according to any one of claims 1 to 4, wherein there are two initial reference points determined from the first point; accordingly,
determining the extraction result according to the tracked points, comprising:
integrating the tracked points in the two tracking processes to determine the extraction result;
wherein the two tracking procedures are initiated from the two initial reference points, respectively.
13. The method of any one of claims 1 to 4, wherein determining a first point on the tubular object to be extracted in the image comprises:
inputting the image into a second detection model, and detecting to obtain a plurality of detection points on a plurality of tubular objects in the image;
determining the first point from the plurality of detection points;
taking the tubular object, among the plurality of tubular objects, on which the first point is located as the tubular object to be extracted;
the second detection model is trained according to a second sample image and coordinate information of a plurality of sample detection points on a plurality of sample tubular objects in the second sample image.
14. The method of claim 13, wherein determining the first point from the plurality of detection points comprises:
determining at least one unused detection point from the plurality of detection points;
and determining, from the at least one unused detection point, the first point to be used this time.
15. The method of claim 14, further comprising:
when there is no unused detection point among the plurality of detection points, or when all of the plurality of detection points intersect currently extracted tubular objects among the plurality of tubular objects, constructing a tubular object tree from the currently extracted tubular objects.
16. The method of any one of claims 1 to 4, wherein determining a first point on the tubular object to be extracted in the image comprises:
receiving a user's trigger operation event at a position point on the tubular object to be extracted in the image;
and determining the position point as a first point on the tubular object to be extracted in the image.
17. The method according to any one of claims 1 to 4, wherein the image is a three-dimensional image; and the neural network model takes a three-dimensional convolutional neural network as its basic framework.
18. The method of any of claims 1 to 4, further comprising:
reconstructing a curved surface reconstruction image according to the extraction result of the centerline of the tubular object to be extracted;
inputting the curved surface reconstruction image into a third detection model to obtain an abnormal position on the tubular object to be extracted;
wherein the third detection model is obtained by training according to a curved surface reconstruction image of a sample tubular object and abnormal position labeling information on the sample tubular object.
19. An interface interaction method, comprising:
displaying an image on an interface;
marking a central line of the tubular object to be extracted on the image in response to a trigger operation at a position point on the tubular object to be extracted in the image;
the central line of the tubular object to be extracted is obtained by utilizing a trained neural network model; the neural network model is to: determining an initial reference point according to the position point; tracking a target point in the image based on the reference point; when the category of the target point is predicted to meet a preset condition, taking the target point as a new reference point until the category of the target point tracked based on the new reference point is predicted to not meet the preset condition; and determining the extraction result according to the tracked points.
20. The method of claim 19, wherein the position point is an arbitrary point on the tubular object to be extracted.
21. The method of claim 19 or 20, wherein the image is a three-dimensional image;
the method further comprises the following steps:
reconstructing a curved surface reconstruction image according to the central line of the tubular object to be extracted;
and displaying the curved surface reconstruction image on the interface.
22. The method of claim 19 or 20, wherein the image is a three-dimensional image; and,
displaying an image on an interface, comprising:
and displaying a plurality of view-angle maps of the image in separate regions of the interface.
23. The method of claim 22, wherein marking the centerline of the tubular object to be extracted on the image according to the extraction result comprises:
marking the centerline of the tubular object to be extracted in each of the plurality of view-angle maps.
24. The method of claim 19 or 20, wherein marking the centerline of the tubular object to be extracted on the image in response to a trigger operation at a position point on the tubular object to be extracted in the image comprises:
sending the position point and the image to a processing device, so that the processing device extracts the centerline of the tubular object to be extracted by utilizing the trained neural network model;
receiving an extraction result returned by the processing equipment;
and marking the central line of the tubular object to be extracted on the image according to the extraction result.
25. A method of model training, comprising:
determining a first point on a sample tubular object to be extracted in the third sample image;
inputting the first point and the third sample image into a neural network model to obtain a predicted extraction result of the centerline of the sample tubular object to be extracted;
performing parameter optimization on the neural network model according to the predicted extraction result and an expected extraction result corresponding to the third sample image;
wherein the neural network model is to: determining an initial reference point according to the first point; tracking a target point in the third sample image based on the reference point; when the predicted category of the target point meets a preset condition, taking the target point as a new reference point until the predicted category of the target point tracked based on the new reference point does not meet the preset condition; and determining the predicted extraction result according to the tracked points.
26. The method of claim 25, wherein performing parameter optimization on the neural network model according to the predicted extraction result and the expected extraction result corresponding to the third sample image comprises:
and performing parameter optimization on the neural network model according to a first difference between the predicted extraction result and an expected extraction result corresponding to the third sample image and a second difference between the predicted category and the expected category of the tracked point.
27. The method of claim 26, wherein tracking a target point in the third sample image based on the reference point comprises:
performing feature extraction on a second region where the reference point is located in the third sample image to obtain a second region feature;
predicting, according to the second region feature, the trend of the sample tubular object to be extracted at the reference point;
and tracking to obtain a target point according to the trend.
28. The method of claim 27, wherein the neural network model is further configured to: predicting the radius of the sample tubular object to be extracted at the reference point according to the second region feature; and
according to the trend, tracking to obtain a target point, comprising the following steps:
and tracking to obtain a target point according to the trend and the radius.
29. The method of claim 28, wherein performing parameter optimization on the neural network model based on a first difference between the predicted extraction result and an expected extraction result corresponding to the third sample image and a second difference between the predicted category of the tracked point and its expected category comprises:
determining a first difference between the predicted extraction result and an expected extraction result corresponding to the third sample image;
determining a second difference between the predicted category of the tracked point and its expected category;
determining a third difference between the predicted radius of the sample tubular object to be extracted at the tracked point and its expected radius at the tracked point;
and performing parameter optimization on the neural network model according to the first difference, the second difference and the third difference.
30. The method of any one of claims 25 to 29, further comprising:
acquiring a first data set and a second data set to serve as a training data set of the neural network model;
wherein the sample images in the first data set have centerline labeling information, radius labeling information, and category labeling information of the sample tubular object and of at least one background therein;
and the sample images in the second data set have category labeling information of the sample tubular object and of at least one background therein.
31. The method of claim 30, wherein the neural network model comprises at least one first neural network layer, at least one second neural network layer, and at least one third neural network layer;
the at least one first neural network layer and the at least one second neural network layer are used for determining an initial reference point according to the first point; tracking a target point in the third sample image based on the reference point; when the category of the target point is predicted to meet a preset condition, taking the target point as a new reference point until the category of the target point tracked based on the new reference point is predicted to not meet the preset condition; determining the extraction result according to the tracked points;
the at least one first neural network layer and the at least one third neural network layer are used for predicting the category of the target point.
32. The method of claim 31, wherein performing parameter optimization on the neural network model according to the predicted extraction result and the expected extraction result corresponding to the third sample image comprises:
when the third sample image is from the first data set, performing parameter optimization on the at least one first neural network layer, the at least one second neural network layer and the at least one third neural network layer according to the predicted extraction result and the expected extraction result corresponding to the third sample image.
33. The method of claim 32, further comprising:
and when the third sample image is from the second data set, performing parameter optimization on the at least one first neural network layer and the at least one third neural network layer according to the predicted category and the expected category of the tracked point.
34. A neural network system, comprising: at least one neural network layer;
the at least one neural network layer is to: determining an initial reference point according to a first point on a tubular object to be extracted in the image; tracking a target point in the image based on the reference point; when the category of the target point is predicted to meet a preset condition, taking the target point as a new reference point until the category of the target point tracked based on the new reference point is predicted to not meet the preset condition; and determining the extraction result of the central line of the tubular object to be extracted according to the tracked points.
35. The system according to claim 34, wherein said at least one neural network layer comprises: at least one first neural network layer, at least one second neural network layer, and at least one third neural network layer;
the at least one first neural network layer and the at least one second neural network layer are used for determining an initial reference point according to the first point; tracking a target point in the image based on the reference point; when the category of the target point is predicted to meet a preset condition, taking the target point as a new reference point until the category of the target point tracked based on the new reference point is predicted to not meet the preset condition; determining an extraction result of the central line of the tubular object to be extracted according to the tracked points;
the at least one first neural network layer and the at least one third neural network layer are used for predicting the category of the target point.
36. A method for extracting a centerline of a tubular object, comprising:
determining an initial reference point according to a first point on a tubular object to be extracted in the image;
tracking a target point on the image according to the reference point;
when the category of the target point is predicted to meet the preset condition, taking the target point as a new reference point until the category of the target point tracked according to the new reference point is predicted to not meet the preset condition;
extracting the centerline of the tubular object to be extracted in the image according to the tracked points.
37. The method of claim 36, further comprising:
performing feature extraction on a first region where the target point is located in the image to obtain a first region feature;
and classifying the target point according to the first region feature to obtain the category of the target point.
38. A method for extracting a cardiac coronary artery centerline, comprising:
determining a first point on a coronary artery to be extracted in a heart image;
inputting the first point and the heart image into a trained neural network model to obtain an extraction result of the coronary artery central line to be extracted;
wherein the neural network model is to: determining an initial reference point according to the first point; tracking a target point in the cardiac image based on the reference point; when the category of the target point is predicted to meet a preset condition, taking the target point as a new reference point until the category of the target point tracked based on the new reference point is predicted to not meet the preset condition; and determining the extraction result according to the tracked points.
39. The method of claim 38, wherein the neural network model is further configured to:
performing feature extraction on a first region where the target point is located in the heart image to obtain a first region feature;
and classifying the target point according to the first region feature to obtain the category of the target point.
40. The method of claim 39, wherein classifying the target point according to the first region feature to obtain the category of the target point comprises:
predicting, according to the first region feature, the probability that the target point belongs to each of a plurality of candidate categories, the plurality of candidate categories comprising: a coronary category and at least one background category;
and determining the category of the target point according to the probabilities.
41. An interface interaction method, comprising:
displaying the heart image on the interface;
marking a coronary artery central line to be extracted on the heart image in response to a trigger operation at a position point on the coronary artery to be extracted in the heart image;
the coronary artery central line to be extracted is extracted by utilizing a trained neural network model; the neural network model is to: determining an initial reference point according to the position point; tracking a target point in the cardiac image based on the reference point; when the category of the target point is predicted to meet a preset condition, taking the target point as a new reference point until the category of the target point tracked based on the new reference point is predicted to not meet the preset condition; and determining the extraction result according to the tracked points.
42. A method for extracting a cardiac coronary artery centerline, comprising:
determining an initial reference point according to a first point on a coronary artery to be extracted in a heart image;
tracking a target point on the heart image according to the reference point;
when the category of the target point is predicted to meet the preset condition, taking the target point as a new reference point until the category of the target point tracked according to the new reference point is predicted to not meet the preset condition;
and extracting the central line of the coronary artery to be extracted in the heart image according to the tracked points.
43. An electronic device, comprising: a memory and a processor, wherein,
the memory is used for storing programs;
the processor, coupled with the memory, to execute the program stored in the memory to:
determining a first point on a tubular object to be extracted in the image;
inputting the first point and the image into a trained neural network model to obtain an extraction result of the central line of the tubular object to be extracted;
wherein the neural network model is to: determining an initial reference point according to the first point; tracking a target point in the image based on the reference point; when the category of the target point is predicted to meet a preset condition, taking the target point as a new reference point until the category of the target point tracked based on the new reference point is predicted to not meet the preset condition; and determining the extraction result according to the tracked points.
44. An electronic device, comprising: a memory and a processor, wherein,
the memory is used for storing programs;
the processor, coupled with the memory, to execute the program stored in the memory to:
displaying an image on an interface;
marking a central line of the tubular object to be extracted on the image in response to a trigger operation at a position point on the tubular object to be extracted in the image;
the central line of the tubular object to be extracted is obtained by utilizing a trained neural network model; the neural network model is to: determining an initial reference point according to the position point; tracking a target point in the image based on the reference point; when the category of the target point is predicted to meet a preset condition, taking the target point as a new reference point until the category of the target point tracked based on the new reference point is predicted to not meet the preset condition; and determining the extraction result according to the tracked points.
45. An electronic device, comprising: a memory and a processor, wherein,
the memory is used for storing programs;
the processor, coupled with the memory, to execute the program stored in the memory to:
determining a first point on a sample tubular object to be extracted in the third sample image;
inputting the first point and the third sample image into a neural network model to obtain a predicted extraction result of the centerline of the sample tubular object to be extracted;
performing parameter optimization on the neural network model according to the predicted extraction result and an expected extraction result corresponding to the third sample image;
wherein the neural network model is to: determining an initial reference point according to the first point; tracking a target point in the third sample image based on the reference point; when the predicted category of the target point meets a preset condition, taking the target point as a new reference point until the predicted category of the target point tracked based on the new reference point does not meet the preset condition; and determining the predicted extraction result according to the tracked points.
46. An electronic device, comprising: a memory and a processor, wherein,
the memory is used for storing programs;
the processor, coupled with the memory, to execute the program stored in the memory to:
determining an initial reference point according to a first point on a tubular object to be extracted in the image;
tracking a target point on the image according to the reference point;
when the category of the target point is predicted to meet the preset condition, taking the target point as a new reference point until the category of the target point tracked according to the new reference point is predicted to not meet the preset condition;
extracting the centerline of the tubular object to be extracted in the image according to the tracked points.
47. An electronic device, comprising: a memory and a processor, wherein,
the memory is used for storing programs;
the processor, coupled with the memory, to execute the program stored in the memory to:
determining a first point on a coronary artery to be extracted in a heart image;
inputting the first point and the heart image into a trained neural network model to obtain an extraction result of the coronary artery central line to be extracted;
wherein the neural network model is to: determining an initial reference point according to the first point; tracking a target point in the cardiac image based on the reference point; when the category of the target point is predicted to meet a preset condition, taking the target point as a new reference point until the category of the target point tracked based on the new reference point is predicted to not meet the preset condition; and determining the extraction result according to the tracked points.
48. An electronic device, comprising: a memory and a processor, wherein,
the memory is used for storing programs;
the processor, coupled with the memory, to execute the program stored in the memory to:
displaying the heart image on the interface;
marking a coronary artery central line to be extracted on the heart image in response to a trigger operation at a position point on the coronary artery to be extracted in the heart image;
the coronary artery central line to be extracted is extracted by utilizing a trained neural network model; the neural network model is to: determining an initial reference point according to the position point; tracking a target point in the cardiac image based on the reference point; when the category of the target point is predicted to meet a preset condition, taking the target point as a new reference point until the category of the target point tracked based on the new reference point is predicted to not meet the preset condition; and determining the extraction result according to the tracked points.
49. An electronic device, comprising: a memory and a processor, wherein,
the memory is configured to store a program;
the processor, coupled to the memory, is configured to execute the program stored in the memory to:
determining an initial reference point according to a first point on a coronary artery to be extracted in a heart image;
tracking a target point on the heart image according to the reference point;
when the predicted category of the target point meets a preset condition, taking the target point as a new reference point, until the predicted category of a target point tracked according to the new reference point does not meet the preset condition; and
extracting the centerline of the coronary artery to be extracted in the heart image according to the tracked points.
CN201910808284.XA 2019-08-29 2019-08-29 Centerline extraction, interface interaction and model training method, system and equipment Pending CN112446911A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910808284.XA CN112446911A (en) 2019-08-29 2019-08-29 Centerline extraction, interface interaction and model training method, system and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910808284.XA CN112446911A (en) 2019-08-29 2019-08-29 Centerline extraction, interface interaction and model training method, system and equipment

Publications (1)

Publication Number Publication Date
CN112446911A true CN112446911A (en) 2021-03-05

Family

ID=74742129

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910808284.XA Pending CN112446911A (en) 2019-08-29 2019-08-29 Centerline extraction, interface interaction and model training method, system and equipment

Country Status (1)

Country Link
CN (1) CN112446911A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080132774A1 (en) * 2004-01-15 2008-06-05 Alogtec Systems Ltd. Vessel Centerline Determination
US20170258433A1 (en) * 2016-03-10 2017-09-14 Siemens Healthcare Gmbh Method and System for Extracting Centerline Representation of Vascular Structures in Medical Images Via Optimal Paths in Computational Flow Fields
CN108961170A (en) * 2017-05-24 2018-12-07 阿里巴巴集团控股有限公司 Image processing method, device and system
CN107563983A (en) * 2017-09-28 2018-01-09 上海联影医疗科技有限公司 Image processing method and medical imaging devices
US10258304B1 (en) * 2017-11-29 2019-04-16 Siemens Healthcare Gmbh Method and system for accurate boundary delineation of tubular structures in medical images using infinitely recurrent neural networks
CN108022251A (en) * 2017-12-14 2018-05-11 北京理工大学 A kind of extracting method and system of the center line of tubular structure
CN110047078A (en) * 2019-04-18 2019-07-23 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Jelmer M. Wolterink et al., "Coronary artery centerline extraction in cardiac CT angiography using a CNN-based orientation classifier", Medical Image Analysis *
Liu Haikun et al., "Retinal vessel diameter measurement method based on deep learning and two-dimensional Gaussian fitting", Chinese Journal of Medical Physics *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113222964A (en) * 2021-05-27 2021-08-06 推想医疗科技股份有限公司 Method and device for generating coronary artery central line extraction model
CN113436177A (en) * 2021-07-01 2021-09-24 万里云医疗信息科技(北京)有限公司 Image processing method and device, electronic equipment and computer readable storage medium
CN116503816A (en) * 2023-06-28 2023-07-28 杭州久展电子有限公司 Pin branching detection method for data cable
CN116503816B (en) * 2023-06-28 2023-09-01 杭州久展电子有限公司 Pin branching detection method for data cable

Similar Documents

Publication Publication Date Title
Sermesant et al. Applications of artificial intelligence in cardiovascular imaging
JP7183376B2 (en) Computer-assisted detection using multiple images from different views of the region of interest to improve detection accuracy
CN108022238B (en) Method, computer storage medium, and system for detecting object in 3D image
US9959615B2 (en) System and method for automatic pulmonary embolism detection
US9760689B2 (en) Computer-aided diagnosis method and apparatus
US9014456B2 (en) Computer aided diagnostic system incorporating appearance analysis for diagnosing malignant lung nodules
JP2019076699A (en) Nodule detection with false positive reduction
CN112446911A (en) Centerline extraction, interface interaction and model training method, system and equipment
AU2020322893B2 (en) Data processing method, apparatus, device, and storage medium
US10997720B2 (en) Medical image classification method and related device
JP2010500089A (en) An image context-dependent application related to anatomical structures for efficient diagnosis
US20220101034A1 (en) Method and system for segmenting interventional device in image
RU2746152C2 (en) Detection of a biological object
CN111178420B (en) Coronary artery segment marking method and system on two-dimensional contrast image
EP3762936A1 (en) Display of medical image data
CN110163872A Method and electronic device for HRMR image segmentation and three-dimensional reconstruction
CN108597589B (en) Model generation method, target detection method and medical imaging system
WO2017168424A1 (en) System and methods for diagnostic image analysis and image quality assessment
US20160210774A1 (en) Breast density estimation
CN109410170A (en) Image processing method, device and equipment
CN112488982A (en) Ultrasonic image detection method and device
CN113610841B (en) Blood vessel abnormal image identification method and device, electronic equipment and storage medium
US11263481B1 (en) Automated contrast phase based medical image selection/exclusion
CN113990432A (en) Image report pushing method and device based on RPA and AI and computing equipment
CN114902283A (en) Deep learning for optical coherence tomography segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 2021-03-05