CN117314939B - Training method of blood vessel segmentation agent, blood vessel segmentation method and related products


Info

Publication number: CN117314939B (grant of application CN202311609234.1A)
Other versions: CN117314939A (application publication, Chinese)
Authority: CN (China)
Prior art keywords: training, image, dimensional, trained, agent
Inventors: 谢卫国, 黄炳顶, 李昊玉
Current and original assignee: Shenzhen Weide Precision Medical Technology Co., Ltd.
Application filed by Shenzhen Weide Precision Medical Technology Co., Ltd.; priority to CN202311609234.1A
Legal status: Active (granted)

Classifications

    • G06T 7/11 — Image analysis; segmentation or edge detection; region-based segmentation
    • G06N 3/02 — Computing arrangements based on biological models; neural networks
    • G06N 3/08 — Neural networks; learning methods
    • G06T 7/136 — Segmentation or edge detection involving thresholding
    • G06T 7/187 — Segmentation or edge detection involving region growing, region merging or connected component labelling
    • G06T 7/62 — Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T 2207/10081 — Image acquisition modality: computed x-ray tomography [CT]
    • G06T 2207/20021 — Algorithmic details: dividing image into blocks, subimages or windows
    • G06T 2207/20081 — Algorithmic details: training; learning
    • G06T 2207/30101 — Subject of image: blood vessel; artery; vein; vascular

Abstract

The application discloses a training method for a blood vessel segmentation agent, a blood vessel segmentation method, and related products. The method comprises: acquiring an agent to be trained, training data, and a label for the training data, where the training data comprises a training three-dimensional CT image and a vessel prediction probability image of that image, and the label comprises the position of the blood vessel in the training three-dimensional CT image; determining a training starting point for the agent based on the vessel prediction probability image; controlling the agent to move through the training three-dimensional space from the training starting point, to obtain the agent's training motion trajectory in the training three-dimensional CT image; determining a reward for the agent based on the training motion trajectory and the position of the blood vessel in the training three-dimensional CT image; and updating the agent's parameters based on the reward, to obtain a target agent.

Description

Training method of blood vessel segmentation intelligent agent, blood vessel segmentation method and related products
Technical Field
The application relates to the technical field of medical imaging, and in particular to a training method for a blood vessel segmentation agent, a blood vessel segmentation method, and related products.
Background
With the rapid development of artificial intelligence, its applications are becoming ever wider, including segmenting blood vessels in three-dimensional computed tomography (CT) images with a model to obtain vessel segmentation results. How to train a model that segments a blood vessel as a single connected region is therefore of great importance.
Disclosure of Invention
The application provides a training method for a blood vessel segmentation agent, a blood vessel segmentation method, and related products, so that an agent capable of segmenting blood vessels is obtained through training.
In a first aspect, a training method for a blood vessel segmentation agent is provided, the training method comprising:
acquiring an agent to be trained, training data, and a label for the training data, wherein the training data comprises a training three-dimensional CT image and a vessel prediction probability image of the training three-dimensional CT image, the vessel prediction probability image comprises the probability that the semantics of each voxel in the training three-dimensional CT image is a blood vessel, and the label comprises the position of the blood vessel in the training three-dimensional CT image;
determining a training starting point for the agent to be trained based on the vessel prediction probability image, wherein the training starting point is the agent's starting point in a training three-dimensional space, the training three-dimensional space being the three-dimensional space determined by the training three-dimensional CT image;
controlling the agent to be trained to move through the training three-dimensional space from the training starting point, to obtain a training motion trajectory of the agent in the training three-dimensional CT image;
determining a reward for the agent to be trained based on the training motion trajectory and the position of the blood vessel in the training three-dimensional CT image; and
updating parameters of the agent to be trained based on the reward, to obtain a target agent.
In combination with any embodiment of the present application, controlling the agent to be trained to move through the training three-dimensional space from the training starting point, to obtain the training motion trajectory of the agent in the training three-dimensional CT image, comprises:
acquiring a training perception range, the training perception range being the perception range of the agent to be trained within the training three-dimensional space;
determining, based on the training starting point and the training perception range, the region perceived by the agent within the training three-dimensional space as a training perception region, the voxels in the training perception region being training voxels;
processing the training voxels with the agent to be trained to determine the agent's training movement direction; and
controlling the movement of the agent based on the training movement direction, to obtain the training motion trajectory.
In combination with any embodiment of the present application, controlling the movement of the agent based on the training movement direction, to obtain the training motion trajectory, comprises:
determining the number of voxels within the training three-dimensional space; and
controlling the movement of the agent based on the training movement direction while the movement distance of the agent is less than or equal to that number, to obtain the training motion trajectory.
In combination with any embodiment of the present application, determining the reward for the agent to be trained based on the training motion trajectory and the position of the blood vessel in the training three-dimensional CT image comprises:
calculating the degree of overlap between the training motion trajectory and the position of the blood vessel in the training three-dimensional CT image; and
determining the reward from the degree of overlap, the degree of overlap being positively correlated with the reward.
In combination with any embodiment of the present application, determining the training starting point of the agent to be trained based on the vessel prediction probability image comprises:
determining, from the vessel prediction probability image, the maximum of the probabilities that a voxel's semantics is a blood vessel; and
taking the voxel corresponding to that maximum as the training starting point.
In combination with any embodiment of the present application, the ratio of the volume of the training perception range to the volume of the training three-dimensional space is less than or equal to a target threshold.
In combination with any embodiment of the present application, the blood vessel is a renal artery.
In a second aspect, a blood vessel segmentation method is provided for segmenting a blood vessel from a three-dimensional CT image, the method comprising:
acquiring a three-dimensional CT image to be segmented, the three-dimensional CT image to be segmented comprising a blood vessel;
acquiring a target agent trained according to the first aspect or any embodiment thereof;
processing the three-dimensional CT image to be segmented with the target agent, to obtain a target motion trajectory of the target agent in the three-dimensional CT image to be segmented; and
taking the target motion trajectory as the segmentation result of the blood vessel in the three-dimensional CT image to be segmented.
In combination with any embodiment of the present application, processing the three-dimensional CT image to be segmented with the target agent, to obtain the target motion trajectory of the target agent in the three-dimensional CT image to be segmented, comprises:
determining the target three-dimensional space determined by the three-dimensional CT image to be segmented;
determining any point in the target three-dimensional space as the target starting point of the target agent;
determining a target perception region of the target agent based on the target starting point and a target perception range, the target perception range being the perception range of the target agent within the target three-dimensional space, and the target perception region being the region perceived by the target agent within the target three-dimensional space;
processing the target voxels in the target perception region with the target agent to determine the target agent's target movement direction; and
controlling the movement of the target agent based on the target movement direction, to obtain the target motion trajectory.
In combination with any embodiment of the present application, controlling the movement of the target agent based on the target movement direction, to obtain the target motion trajectory, comprises:
acquiring a target movement step length of the target agent; and
controlling the movement of the target agent based on the target movement direction and the target movement step length, to obtain the target motion trajectory.
In combination with any embodiment of the present application, the ratio of the volume of the target perception range to the volume of the target three-dimensional space is less than or equal to a target threshold.
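To make the segmentation method of the second aspect concrete, the following is a minimal sketch in Python of how the target agent's walk could be turned into a segmentation mask. All names are illustrative; in particular, `target_agent.direction` is a hypothetical method standing in for the agent's prediction of the target movement direction from the target perception region, and is not an interface defined by the application.

```python
import numpy as np

def segment_vessel(target_agent, ct_volume: np.ndarray, start: tuple,
                   num_steps: int, step_length: int = 1) -> np.ndarray:
    """Let the trained target agent walk through the CT volume; the voxels it
    visits (its target motion trajectory) form the vessel segmentation mask."""
    mask = np.zeros(ct_volume.shape, dtype=bool)
    position = np.asarray(start)               # any point of the target 3D space
    mask[tuple(position)] = True
    upper = np.array(ct_volume.shape) - 1
    for _ in range(num_steps):
        # hypothetical call: returns a unit voxel offset predicted from the
        # target perception region around `position`
        offset = np.asarray(target_agent.direction(ct_volume, tuple(position)))
        position = np.clip(position + step_length * offset, 0, upper)
        mask[tuple(position)] = True           # trajectory = segmentation result
    return mask
```

Because the mask is built from a single continuous walk, the resulting vessel region is, by construction, a single connected region.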
In a third aspect, a training device for a blood vessel segmentation agent is provided, the training device comprising:
an acquisition unit, configured to acquire an agent to be trained, training data, and a label for the training data, wherein the training data comprises a training three-dimensional CT image and a vessel prediction probability image of the training three-dimensional CT image, the vessel prediction probability image comprises the probability that the semantics of each voxel in the training three-dimensional CT image is a blood vessel, and the label comprises the position of the blood vessel in the training three-dimensional CT image;
a determining unit, configured to determine a training starting point for the agent to be trained based on the vessel prediction probability image, the training starting point being the agent's starting point in a training three-dimensional space, and the training three-dimensional space being the three-dimensional space determined by the training three-dimensional CT image;
a control unit, configured to control the agent to be trained to move through the training three-dimensional space from the training starting point, to obtain a training motion trajectory of the agent in the training three-dimensional CT image;
the determining unit being further configured to determine a reward for the agent to be trained based on the training motion trajectory and the position of the blood vessel in the training three-dimensional CT image; and
an updating unit, configured to update parameters of the agent to be trained based on the reward, to obtain a target agent.
In combination with any embodiment of the present application, the control unit is configured to:
acquire a training perception range, the training perception range being the perception range of the agent to be trained within the training three-dimensional space;
determine, based on the training starting point and the training perception range, the region perceived by the agent within the training three-dimensional space as a training perception region, the voxels in the training perception region being training voxels;
process the training voxels with the agent to be trained to determine the agent's training movement direction; and
control the movement of the agent based on the training movement direction, to obtain the training motion trajectory.
In combination with any embodiment of the present application, the control unit is configured to:
determine the number of voxels within the training three-dimensional space; and
control the movement of the agent based on the training movement direction while the movement distance of the agent is less than or equal to that number, to obtain the training motion trajectory.
In combination with any embodiment of the present application, the determining unit is configured to:
calculate the degree of overlap between the training motion trajectory and the position of the blood vessel in the training three-dimensional CT image; and
determine the reward from the degree of overlap, the degree of overlap being positively correlated with the reward.
In combination with any embodiment of the present application, the determining unit is configured to:
determine, from the vessel prediction probability image, the maximum of the probabilities that a voxel's semantics is a blood vessel; and
take the voxel corresponding to that maximum as the training starting point.
In combination with any embodiment of the present application, the ratio of the volume of the training perception range to the volume of the training three-dimensional space is less than or equal to a target threshold.
In combination with any embodiment of the present application, the blood vessel is a renal artery.
In a fourth aspect, a blood vessel segmentation device is provided for segmenting a blood vessel from a three-dimensional CT image, the segmentation device comprising:
an acquisition unit, configured to acquire a three-dimensional CT image to be segmented, the three-dimensional CT image to be segmented comprising a blood vessel;
the acquisition unit being further configured to acquire a target agent trained according to the first aspect or any embodiment thereof;
a processing unit, configured to process the three-dimensional CT image to be segmented with the target agent, to obtain a target motion trajectory of the target agent in the three-dimensional CT image to be segmented;
the processing unit being further configured to take the target motion trajectory as the segmentation result of the blood vessel in the three-dimensional CT image to be segmented.
In combination with any embodiment of the present application, the processing unit is configured to:
determine the target three-dimensional space determined by the three-dimensional CT image to be segmented;
determine any point in the target three-dimensional space as the target starting point of the target agent;
determine a target perception region of the target agent based on the target starting point and a target perception range, the target perception range being the perception range of the target agent within the target three-dimensional space, and the target perception region being the region perceived by the target agent within the target three-dimensional space;
process the target voxels in the target perception region with the target agent to determine the target agent's target movement direction; and
control the movement of the target agent based on the target movement direction, to obtain the target motion trajectory.
In combination with any embodiment of the present application, the processing unit is configured to:
acquire a target movement step length of the target agent; and
control the movement of the target agent based on the target movement direction and the target movement step length, to obtain the target motion trajectory.
In combination with any embodiment of the present application, the ratio of the volume of the target perception range to the volume of the target three-dimensional space is less than or equal to a target threshold.
In a fifth aspect, an electronic device is provided, comprising: a processor and a memory for storing computer program code, the computer program code comprising computer instructions;
when the processor executes the computer instructions, the electronic device performs the method of the first aspect or any embodiment thereof, or performs the method of the second aspect or any embodiment thereof.
In a sixth aspect, another electronic device is provided, comprising: a processor, a transmitting device, an input device, an output device, and a memory for storing computer program code, the computer program code comprising computer instructions;
when the processor executes the computer instructions, the electronic device performs the method of the first aspect or any embodiment thereof, or performs the method of the second aspect or any embodiment thereof.
In a seventh aspect, a computer-readable storage medium is provided, in which a computer program is stored, the computer program comprising program instructions;
when the program instructions are executed by a processor, the processor performs the method of the first aspect or any embodiment thereof, or performs the method of the second aspect or any embodiment thereof.
In an eighth aspect, a computer program product is provided, comprising a computer program or instructions; when the computer program or instructions run on a computer, the computer performs the method of the first aspect or any embodiment thereof, or performs the method of the second aspect or any embodiment thereof.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
In the application, the training data comprises a training three-dimensional CT image and a vessel prediction probability image of that image, the vessel prediction probability image comprises the probability that the semantics of each voxel in the training three-dimensional CT image is a blood vessel, and the label comprises the position of the blood vessel in the training three-dimensional CT image. After acquiring the agent to be trained, the training data, and the label, the training device determines the training starting point of the agent based on the vessel prediction probability image, which raises the probability that the training starting point lies on the blood vessel. Controlling the agent to move through the training three-dimensional space from that starting point yields the agent's training motion trajectory in the training three-dimensional CT image; this trajectory is the blood vessel that the agent segments from the image. Because the trajectory is continuous, the segmented vessel is continuous as well, that is, the vessel region the agent segments from the training three-dimensional CT image is a single connected region. Once the training motion trajectory is obtained, the reward for the agent can be determined from the trajectory and the position of the blood vessel in the training three-dimensional CT image, and the agent's parameters are finally updated based on the reward to obtain the target agent. The target agent thereby gains the ability to segment blood vessels from three-dimensional CT images, and the vessel region it segments is a single connected region.
Drawings
To describe the technical solutions in the embodiments or the background of the present application more clearly, the drawings needed for the embodiments or the background are described below.
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and, together with the description, serve to explain the technical solutions of the application.
Fig. 1 is a schematic flowchart of a training method of a blood vessel segmentation agent according to an embodiment of the present application;
Fig. 2 is a schematic diagram of training an agent through reinforcement learning according to an embodiment of the present application;
Fig. 3 is a schematic diagram of obtaining a renal artery prediction probability image according to an embodiment of the present application;
Fig. 4 is a schematic diagram of the actions of an agent according to an embodiment of the present application;
Fig. 5 is a schematic diagram of controlling the movement of an agent to be trained to obtain a training motion trajectory according to an embodiment of the present application;
Fig. 6a is a schematic illustration of a renal artery indicated by a label according to an embodiment of the present application;
Fig. 6b is a schematic illustration of a renal artery segmented by a conventional method according to an embodiment of the present application;
Fig. 6c is a schematic illustration of a renal artery segmented by a target agent according to an embodiment of the present application;
Fig. 7 is a schematic flowchart of another training method of a blood vessel segmentation agent according to an embodiment of the present application;
Fig. 8 is a schematic flowchart of another training method of a blood vessel segmentation agent according to an embodiment of the present application;
Fig. 9 is a schematic flowchart of a blood vessel segmentation method according to an embodiment of the present application;
Fig. 10 is a schematic structural diagram of a training device for a blood vessel segmentation agent according to an embodiment of the present application;
Fig. 11 is a schematic structural diagram of a blood vessel segmentation device according to an embodiment of the present application;
Fig. 12 is a schematic diagram of the hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
To help those skilled in the art better understand the solution of the present application, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. The described embodiments are plainly only some of the embodiments of the present application, not all of them. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
The terms "first", "second", and the like in the description, the claims, and the drawings of the present application are used to distinguish different objects, not to describe a particular order. Furthermore, the terms "comprise" and "have", and any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to the listed steps or elements, but may include other steps or elements not listed or inherent to it.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of this phrase in various places in the specification do not necessarily all refer to the same embodiment, nor to separate or alternative embodiments mutually exclusive of other embodiments. Those skilled in the art understand, explicitly and implicitly, that the embodiments described herein may be combined with other embodiments. In the present application, "at least one (item)" means one or more, "a plurality" means two or more, and "at least two (items)" means two or more.
The embodiments of the present application include a training method for a blood vessel segmentation agent and a blood vessel segmentation method; the training method is used to train an agent that can segment blood vessels in three-dimensional CT images. The execution subject of the training method is a training device for the blood vessel segmentation agent (hereinafter simply the training device), where the training device may be any electronic apparatus capable of executing the technical solution disclosed in the method embodiments. Optionally, the training device is one of the following: a computer, a server.
It should be understood that the method embodiments of the present application may also be implemented by a processor executing computer program code. The embodiments of the present application are described below with reference to the accompanying drawings. Referring to Fig. 1, Fig. 1 is a schematic flowchart of a training method of a blood vessel segmentation agent according to an embodiment of the present application.
101. Acquiring an agent to be trained, training data, and a label of the training data.
In the embodiments of the present application, an agent (including the agent to be trained and the target agent mentioned below) is a computational entity that resides in some environment, can act continuously and autonomously, and exhibits characteristics such as residence, reactivity, sociability, and initiative. An agent has parameters, and training the agent to be trained changes these parameters, giving the agent the ability to perform certain tasks. For example, through training, the agent to be trained may acquire the ability to segment a blood vessel from a three-dimensional CT image. In one possible implementation, the agent is trained by reinforcement learning. Fig. 2 is a schematic diagram of training an agent through reinforcement learning. As shown in Fig. 2, reinforcement learning optimizes the agent's parameters through interaction with an environment: the agent determines its next action based on the state of the environment and performs that action in the environment. When the agent performs the action, the environment changes accordingly, which in turn changes the state of the environment. Further, based on the accuracy of the result produced by the action, a reward for the agent can be determined, and the agent's parameters can then be optimized based on that reward.
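The interaction loop of Fig. 2 can be sketched in a few lines of Python. This is a generic illustration of reinforcement-learning interaction only; `Agent` and `Environment` are illustrative placeholders, not classes defined by the present application.

```python
class Environment:
    def observe(self):
        """Return the current state of the environment."""
        raise NotImplementedError

    def step(self, action):
        """Apply the action, change the environment, and return a reward."""
        raise NotImplementedError

class Agent:
    def act(self, state):
        """Determine the next action from the observed state."""
        raise NotImplementedError

    def update(self, reward):
        """Optimize the agent's parameters based on the reward."""
        raise NotImplementedError

def reinforcement_learning_step(agent: Agent, env: Environment) -> None:
    state = env.observe()       # agent reads the state of the environment
    action = agent.act(state)   # agent determines the next action from the state
    reward = env.step(action)   # performing the action changes the environment
    agent.update(reward)        # parameters are optimized based on the reward
```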
In the embodiments of the present application, the training data comprises a training three-dimensional CT image and a vessel prediction probability image of that image. The training three-dimensional CT image is a three-dimensional CT image containing a blood vessel; for example, a CT scan of the kidney yields several two-dimensional CT images of the kidney, from which a three-dimensional CT image of the kidney can be reconstructed, and that three-dimensional image can serve as the training three-dimensional CT image. The vessel prediction probability image contains, for each voxel of the training three-dimensional CT image, the probability that the voxel's semantics is a blood vessel. For example, if the training three-dimensional CT image contains a voxel a, and the probability of voxel a in the vessel prediction probability image is 0.8, then the probability that the semantics of voxel a is a blood vessel is 0.8, that is, the probability that a blood vessel appears at the position of voxel a is 0.8.
In one possible implementation, the vessel prediction probability image is obtained by processing the training three-dimensional CT image with a vessel segmentation model, i.e., a model for segmenting blood vessels in three-dimensional CT images. For example, when the blood vessel is a renal artery, the vessel prediction probability image is a renal artery prediction probability image; Fig. 3 is a schematic diagram of obtaining a renal artery prediction probability image according to an embodiment of the present application. As shown in Fig. 3, an initial segmentation model is first trained with supervision on a training set, where the training set comprises renal three-dimensional CT images and manual renal artery annotations; a renal three-dimensional CT image is a three-dimensional CT image containing the kidney, the manual renal artery annotation is annotation data obtained by manually annotating the renal artery in that image, and supervised training means training the initial segmentation model on the renal three-dimensional CT images under the supervision of the manual renal artery annotations. After the initial segmentation model is trained, it is used to predict the renal arteries in all the data, yielding renal artery prediction probability images for all the data, where all the data comprises the training set and a test set, the test set containing renal three-dimensional CT images different from those in the training set. Specifically, the trained initial segmentation model predicts the renal arteries in the training-set images to obtain their renal artery prediction probability images, and predicts the renal arteries in the test-set images to obtain their renal artery prediction probability images.
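As a concrete illustration of how such a probability image might be produced, the following is a minimal sketch assuming a trained 3D segmentation network `seg_model` with a single output channel and a sigmoid probability head; both the name and the output head are assumptions for illustration, not details given by the application.

```python
import torch

def predict_probability_image(seg_model: torch.nn.Module,
                              ct_volume: torch.Tensor) -> torch.Tensor:
    """Run a trained 3D segmentation model over a (W, H, D) CT volume and
    return, per voxel, the probability that its semantics is "vessel"."""
    x = ct_volume[None, None]           # add batch and channel dims: (1, 1, W, H, D)
    with torch.no_grad():
        logits = seg_model(x)           # (1, 1, W, H, D) vessel logits
    return torch.sigmoid(logits)[0, 0]  # probability image, same shape as the CT volume
```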
Optionally, the renal three-dimensional CT image is a renal CT angiography (CTA) image.
In this embodiment, the label comprises the position of the blood vessel in the training three-dimensional CT image, and the label is the ground truth (GT); in other words, the true position of the blood vessel in the training three-dimensional CT image is the position indicated by the label. Optionally, the label is obtained by manually annotating the training three-dimensional CT image.
In one implementation of acquiring the agent to be trained, the training device receives the agent to be trained input by a user through an input component, where the input component comprises: a mouse, a keyboard, a touch screen, a touch pad, or an audio input device.
In another implementation of acquiring the agent to be trained, the training device receives the agent to be trained sent by a user through a terminal, where the terminal comprises: a mobile phone, a computer, a tablet computer, or a smart wearable device.
In one implementation of acquiring the training data, the training device receives the training data input by a user through an input component.
In another implementation of acquiring the training data, the training device receives the training data sent by a user through a terminal.
In one implementation of acquiring the label of the training data, the training device receives the label input by a user through an input component.
In another implementation of acquiring the label of the training data, the training device receives the label sent by a user through a terminal.
It should be understood that, in the embodiments of the present application, the training device may perform the step of acquiring the agent to be trained, the step of acquiring the training data, and the step of acquiring the label of the training data separately or simultaneously; this is not limited by the present application.
102. Determining the training starting point of the agent to be trained based on the vessel prediction probability image.
In the embodiments of the present application, the training starting point is the starting point of the agent to be trained in the training three-dimensional space, that is, the agent starts moving from the training starting point within that space. The training three-dimensional space is the three-dimensional space determined by the training three-dimensional CT image. Specifically, if the size of the training three-dimensional CT image is W×H×D, where W is the width, H the height, and D the depth of the image, then the length, width, and height of the training three-dimensional space correspond to W, H, and D respectively; for example, the length of the training three-dimensional space is D, its width is W, and its height is H.
In one possible implementation, the training device determines, from the vessel prediction probability image, the maximum of the probabilities that a voxel's semantics is a blood vessel, and takes the voxel corresponding to that maximum as the training starting point. For example, if in the vessel prediction probability image the probability that voxel a is a blood vessel is 0.6, the probability for voxel b is 0.9, and the probability for voxel c is 0.7, then the maximum is 0.9, the voxel corresponding to the maximum is voxel b, and voxel b is the training starting point. Optionally, if more than one voxel attains the maximum, the training device selects one of them as the training starting point.
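This maximum-probability starting point is a one-line lookup; the following sketch (names illustrative) shows it with NumPy.

```python
import numpy as np

def training_start_point(prob_image: np.ndarray) -> tuple:
    """Return the coordinates of the voxel with the highest vessel probability."""
    flat = int(np.argmax(prob_image))                # first maximum if several voxels tie
    return np.unravel_index(flat, prob_image.shape)  # (x, y, z) voxel index
```

The threshold-based variant described below differs only in selecting any voxel whose probability exceeds a starting threshold instead of the global maximum.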
In this implementation, after determining the maximum from the vessel prediction probability image, the training device takes the corresponding voxel as the training starting point of the agent to be trained, which lets the agent segment the blood vessel from the training three-dimensional CT image more quickly and improves training efficiency.
In another possible implementation, the training device determines, from the vessel prediction probability image, voxels whose probability of being a blood vessel is greater than or equal to a starting threshold as reference voxels, and takes a reference voxel as the training starting point. For example, if the probability that voxel a is a blood vessel is 0.6, the probability for voxel b is 0.9, and the probability for voxel c is 0.7, and the starting threshold is 0.65, then voxels b and c are both reference voxels, and the training device may select either of them as the training starting point.
In this implementation, after determining a reference voxel from the vessel prediction probability image, the training device takes it as the training starting point of the agent to be trained, which likewise lets the agent segment the blood vessel from the training three-dimensional CT image more quickly and improves training efficiency.
103. Controlling the agent to be trained to move through the training three-dimensional space from the training starting point, to obtain a training motion trajectory of the agent in the training three-dimensional CT image.
After the training starting point is determined, the agent to be trained can move through the training three-dimensional space from that point. Specifically, the agent judges where the blood vessel lies based on the voxels around it, and then moves toward the blood vessel's position.
Because the training three-dimensional space is the space determined by the training three-dimensional CT image, the agent's motion trajectory in the training three-dimensional space is its motion trajectory in the training three-dimensional CT image. In the embodiments of the present application, this trajectory is called the training motion trajectory.
As described above, the agent moves toward the position of the blood vessel; the training motion trajectory is therefore the blood vessel that the agent segments from the training three-dimensional CT image. In other words, the training motion trajectory is the position of the blood vessel in the training three-dimensional CT image as predicted by the agent.
104. Determining a reward for the agent to be trained based on the training motion trajectory and the position of the blood vessel in the training three-dimensional CT image.
As described above, the position of the blood vessel given by the label is its true position in the training three-dimensional CT image. The training device can therefore assess the accuracy of the training motion trajectory against that position; in other words, it can assess how accurately the agent predicted the blood vessel's position. Since the reward of the agent to be trained represents the accuracy of the predicted vessel position, the training device can determine the reward based on the training motion trajectory and the position of the blood vessel in the training three-dimensional CT image.
In one possible implementation, the training device calculates the degree of overlap between the training motion trajectory and the position of the blood vessel in the training three-dimensional CT image and, with the degree of overlap positively correlated with the reward, determines the reward from the degree of overlap.
In this implementation, the higher the overlap between the training motion trajectory and the position of the blood vessel, the more accurate the agent's predicted vessel position; the reward can therefore be determined from the degree of overlap when the two are positively correlated.
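The application only requires the reward to grow with the overlap; the Dice coefficient in the sketch below is one common choice of overlap measure, not necessarily the one used. Both masks are binary volumes of the same shape as the training three-dimensional CT image.

```python
import numpy as np

def overlap_reward(trajectory_mask: np.ndarray, vessel_mask: np.ndarray) -> float:
    """Reward that grows with the overlap between the training motion
    trajectory and the labelled vessel position (Dice coefficient)."""
    intersection = np.logical_and(trajectory_mask, vessel_mask).sum()
    denom = trajectory_mask.sum() + vessel_mask.sum()
    return 2.0 * float(intersection) / float(denom) if denom > 0 else 0.0
```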
In another possible implementation, the training device determines the reward of the agent to be trained based on the difference between the training motion trajectory and the position of the blood vessel in the training three-dimensional CT image, where the difference is negatively correlated with the reward.
In this implementation, the greater the difference between the training motion trajectory and the position of the blood vessel, the less accurate the agent's predicted vessel position; the reward can therefore be determined from the difference when the two are negatively correlated.
105. Updating the parameters of the agent to be trained based on the reward, to obtain the target agent.
The training device updates the parameters of the agent to be trained based on the reward so that the agent improves the accuracy of its predicted vessel position in the training three-dimensional CT image. Updating the parameters of the agent to be trained yields the target agent.
In one possible implementation, after updating the parameters based on the reward, the training device again controls the agent to move through the training three-dimensional space to obtain a training motion trajectory, and decides whether to continue training based on that trajectory and the position of the blood vessel in the training three-dimensional CT image. Optionally, when the overlap between the training motion trajectory and the position of the blood vessel is greater than or equal to an overlap threshold, the training device stops training and takes the agent as the target agent. When the overlap is below the threshold, the training device determines a new reward and continues updating the agent's parameters based on it, until the overlap reaches the threshold, at which point training stops and the agent is taken as the target agent.
In the embodiments of the present application, the training data comprises a training three-dimensional CT image and a vessel prediction probability image of that image, the vessel prediction probability image comprises the probability that the semantics of each voxel in the training three-dimensional CT image is a blood vessel, and the label comprises the position of the blood vessel in the training three-dimensional CT image. After acquiring the agent to be trained, the training data, and the label, the training device determines the training starting point of the agent based on the vessel prediction probability image, which raises the probability that the training starting point lies on the blood vessel. Controlling the agent to move through the training three-dimensional space from that starting point yields the agent's training motion trajectory in the training three-dimensional CT image; this trajectory is the blood vessel that the agent segments from the image. Because the trajectory is continuous, the segmented vessel is continuous as well, that is, the vessel region the agent segments from the training three-dimensional CT image is a single connected region. Once the training motion trajectory is obtained, the reward for the agent can be determined from the trajectory and the position of the blood vessel in the training three-dimensional CT image, and the agent's parameters are finally updated based on the reward to obtain the target agent. The target agent thereby gains the ability to segment blood vessels from three-dimensional CT images, and the vessel region it segments is a single connected region.
As an optional embodiment, the training device performs the following steps when performing step 103:
201. Acquiring a training perception range.
In the embodiments of the present application, the training perception range is the range that the agent to be trained perceives within the training three-dimensional space. For example, if the perception range is 10 voxels long, 5 voxels wide, and 6 voxels high, the agent can perceive the voxels in a space centred on itself that is 10 voxels long, 5 voxels wide, and 6 voxels high.
While moving, the agent acquires the information of the voxels within its perception range and determines its next action from that information; in other words, the agent predicts the position of the blood vessel from the voxels it perceives. For example, Fig. 4 is a schematic diagram of the actions of an agent according to an embodiment of the present application. As shown in Fig. 4, the annotation space is the training three-dimensional space, D, W, and H are its length, width, and height, the ball in the space is the agent to be trained, and the agent's actions are up, down, left, right, forward, and back; that is, the agent's next action is one of moving up, down, left, right, forward, or backward.
In one possible implementation, the ratio of the volume of the training perception range to the volume of the training three-dimensional space is less than or equal to a target threshold. In this implementation the volume of the perception range is far smaller than that of the training three-dimensional space, and the agent can better predict the position of the blood vessel in the three-dimensional CT space from the voxel information within its perception range. Optionally, the target threshold is 1/8000.
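Extracting such a perception region amounts to cropping a small block around the agent's current position. The following is a minimal sketch assuming the perception range (the 10×5×6-voxel example above) fits inside the volume; all names and sizes are illustrative.

```python
import numpy as np

def crop_perception_region(volume: np.ndarray, center: tuple,
                           size: tuple = (10, 5, 6)) -> np.ndarray:
    """Crop the block of voxels of the given size centred on the agent's
    position, shifting the window inward near the volume boundary."""
    slices = []
    for c, s, dim in zip(center, size, volume.shape):
        lo = max(0, min(c - s // 2, dim - s))  # keep the window inside the volume
        slices.append(slice(lo, lo + s))
    return volume[tuple(slices)]
```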
202. Determining, based on the training starting point and the training perception range, the region perceived by the agent to be trained within the training three-dimensional space as the training perception region.
In the embodiments of the present application, the training three-dimensional space is the environment in which the agent to be trained resides. The training starting point is the agent's starting point in that space, so the training device can determine the agent's perception region within the training three-dimensional space from the training starting point and the training perception range. It should be understood that the training perception region is the state the agent acquires from the environment.
203. Processing the training voxels with the agent to be trained, and determining the agent's training movement direction.
From the information of the training voxels, the agent to be trained can predict the position of the blood vessel in the training three-dimensional space, and its next movement direction, the training movement direction, can then be determined. Optionally, the training device processes the training voxels with the agent, determines the agent's action, and derives the training movement direction from that action.
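A hedged sketch of this step follows: a placeholder network `policy_net` scores the six actions of Fig. 4 and the highest-scoring one is mapped to a unit voxel offset. The application does not specify the agent's architecture, and the assignment of indices to directions is an assumption for illustration.

```python
import torch

# The six actions of Fig. 4 as unit voxel offsets (illustrative ordering).
DIRECTIONS = {
    0: (0, 0, 1),   # up
    1: (0, 0, -1),  # down
    2: (0, -1, 0),  # left
    3: (0, 1, 0),   # right
    4: (1, 0, 0),   # forward
    5: (-1, 0, 0),  # back
}

def training_movement_direction(policy_net: torch.nn.Module,
                                training_voxels: torch.Tensor) -> tuple:
    """Process the training voxels with the agent's network and return the
    unit offset of the chosen action."""
    x = training_voxels[None, None]   # (1, 1, d, w, h)
    with torch.no_grad():
        scores = policy_net(x)        # (1, 6), one score per action
    action = int(scores.argmax(dim=1).item())
    return DIRECTIONS[action]
```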
204. Controlling the movement of the agent to be trained based on the training movement direction, to obtain the training motion trajectory.
In one possible implementation, the training device controls the movement of the agent based on the training movement direction and a preset movement step length, obtaining the training motion trajectory. Optionally, the preset movement step length is 1 voxel.
In another possible implementation, the training device determines the number of voxels within the training three-dimensional space and controls the agent's movement based on the training movement direction while the agent's movement distance is less than or equal to that number, obtaining the training motion trajectory. Keeping the movement distance no greater than the number of voxels guarantees that the training motion trajectory can still cover every voxel of the training three-dimensional space. Since the trajectory may contain repeated points, i.e., the agent may visit the same voxel more than once, bounding the movement distance by the number of voxels reduces the number of repeated points as far as possible while preserving the possibility of full coverage, improving the efficiency of obtaining the trajectory and hence the efficiency of training the agent.
Optionally, when the agent's movement step length is 1 voxel, its maximum number of movement steps equals the number of voxels in the training three-dimensional space; for example, if the training three-dimensional space contains 10000 voxels, the agent moves at most 10000 steps.
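The bounded roll-out can be sketched as follows; `get_region` and `get_direction` are illustrative stand-ins for the perception-region crop and the agent's direction prediction from the previous steps.

```python
def roll_out(get_region, get_direction, start: tuple, num_voxels: int) -> list:
    """Move the agent one voxel at a time for at most `num_voxels` steps (the
    number of voxels in the training 3D space), collecting the trajectory."""
    position, trajectory = start, [start]
    for _ in range(num_voxels):            # movement distance <= number of voxels
        region = get_region(position)      # training voxels around the agent
        dx, dy, dz = get_direction(region) # training movement direction
        position = (position[0] + dx, position[1] + dy, position[2] + dz)
        trajectory.append(position)
    return trajectory
```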
For example, Fig. 5 is a schematic diagram of controlling the movement of an agent to be trained to obtain a training motion trajectory according to an embodiment of the present application. As shown in Fig. 5, D, W, and H are the length, width, and height of the training three-dimensional space, the ball in the space is the agent to be trained, and d, w, and h delimit the agent's training perception region. The space also contains a blood vessel, which is the agent's training motion trajectory. Fig. 5 shows two states, one in which the agent's movement direction is downward and one in which it is to the right.
In this embodiment, after the training device acquires the training perception range, it determines, based on the training start point and the training perception range, that the perception area of the to-be-trained agent in the training three-dimensional space is the training perception area. And then, the training voxels in the training perception area are processed by the to-be-trained intelligent body, so that the position of the blood vessel in the training three-dimensional space can be predicted, the training movement direction of the to-be-trained intelligent body can be determined, and the movement of the to-be-trained intelligent body can be controlled based on the training movement direction, and the training movement track can be obtained.
In one possible implementation manner, the blood vessel is a renal artery, and the target agent obtained by training in the method described above has the capability of segmenting the renal artery from the three-dimensional CT image, and the renal artery segmented by the target agent is a single connected region, compared to the renal artery segmented by the conventional method, which is not a single connected region. For example, fig. 6a is a schematic diagram of a renal artery indicated by a label provided in an embodiment of the present application, fig. 6b is a schematic diagram of a renal artery segmented by a conventional method provided in an embodiment of the present application, and fig. 6c is a schematic diagram of a renal artery segmented by a target agent provided in an embodiment of the present application. It is apparent that the renal arteries in fig. 6a and 6c are both single connected regions, while the renal arteries in fig. 6b are not single connected regions. It will be appreciated that in figures 6a, 6b, and 6c, the blood vessels each have jagged boundaries that are caused by the graininess of the pixels in the image, rather than the actual boundaries of the blood vessels being jagged, nor the image being obscured.
Referring to fig. 7, fig. 7 is a flow chart of another training method of a blood vessel segmentation agent according to an embodiment of the present application, in which the blood vessel is a renal artery.
As shown in fig. 7, the renal three-dimensional CT image, the renal artery prediction probability image and the labeling image of the to-be-trained agent are first stacked along the channel dimension into a three-channel four-dimensional image of size H×W×D×3; this four-dimensional image is the environment in which the to-be-trained agent operates. Optionally, the renal three-dimensional CT image is a renal CTA image. The renal three-dimensional CT image is a three-dimensional CT image including the kidneys, the renal artery prediction probability image contains, for each voxel of the renal three-dimensional CT image, the probability that its semantics are renal artery, and the labeling image of the to-be-trained agent records the agent's motion trajectory.
Then, in this environment, the perception range of the to-be-trained agent is cropped to obtain a state diagram; for the implementation of this step see step 202, the state diagram being the training perception area of step 202. The state diagram is input to the to-be-trained agent, which processes it and outputs an action; for the implementation of this step see step 203, where specifically the to-be-trained agent processes the training voxels in the state diagram to determine its action. In the flow shown in fig. 7, the action output by the to-be-trained agent is one of six movements: up, down, left, right, forward and backward, i.e. there are six action types in total.
The action of the to-be-trained agent is then reflected on the labeling image: the agent moves over its labeling image according to the output action, producing its training motion trajectory, which is recorded as labels in the labeling image. The reward is computed from the labeling image of the to-be-trained agent and the manual renal artery annotation, the manual renal artery annotation being the labeling data obtained by manually annotating the renal artery in the renal three-dimensional CT image. For the implementation of computing the reward see step 104, where the labeling image of the to-be-trained agent corresponds to the training motion trajectory of step 104 and the manual renal artery annotation corresponds to the position of the blood vessel in the training three-dimensional CT image of step 104.
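A minimal sketch of the two preparation steps just described, namely stacking the three volumes into the H×W×D×3 environment and cutting the state crop around the agent's position; the crop size and the boundary handling are assumptions, since the text only requires the perception volume to be small relative to the whole space.

```python
import numpy as np

def build_environment(ct, prob, label_img):
    """Stack the three H x W x D volumes of fig. 7 into the
    H x W x D x 3 environment along a new channel dimension."""
    assert ct.shape == prob.shape == label_img.shape
    return np.stack([ct, prob, label_img], axis=-1)

def crop_state(env, center, size=(16, 16, 16)):
    """Cut the agent's state crop (its perception area) around `center`.
    The (16, 16, 16) default is an assumed value, not from the text."""
    lo = [max(0, c - s // 2) for c, s in zip(center, size)]
    hi = [min(dim, l + s) for dim, l, s in zip(env.shape[:3], lo, size)]
    return env[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2], :]

# Example with toy volumes.
ct = np.zeros((64, 64, 64))
prob = np.zeros_like(ct)
lab = np.zeros_like(ct)
env = build_environment(ct, prob, lab)
print(env.shape, crop_state(env, (32, 32, 32)).shape)
# -> (64, 64, 64, 3) (16, 16, 16, 3)
```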
After the reward of the to-be-trained agent is obtained, the parameters of the to-be-trained agent can be updated based on the reward; for the implementation of this step see step 105. By updating the parameters of the to-be-trained agent, its training can be completed and the target agent obtained.
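A minimal sketch of an overlap-based reward over the labeling image and the manual annotation, both taken as binary volumes. Using the Dice overlap is an assumption: the text only requires the reward to be positively correlated with the degree of overlap.

```python
import numpy as np

def overlap_reward(label_img, manual_label):
    """Reward that grows with the overlap between the agent's labeling
    image and the manual renal-artery annotation (binary volumes).
    The Dice form is an assumed choice of positively correlated score."""
    a, b = label_img > 0, manual_label > 0
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 0.0

# Example: two partially overlapping toy masks.
x = np.zeros((4, 4, 4))
y = np.zeros((4, 4, 4))
x[0, 0, :3] = 1
y[0, 0, 1:] = 1
print(overlap_reward(x, y))  # 2 * 2 / (3 + 3) = 0.666...
```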
Referring to fig. 8, fig. 8 is a flow chart of a training method of a blood vessel segmentation agent according to another embodiment of the present application, in which the blood vessel is a renal artery. As shown in fig. 8, the renal artery in a renal CTA image is first predicted with an initial segmentation model to obtain the renal artery prediction probability image, where the renal CTA image is a CTA image including the kidneys and the initial segmentation model is a model for predicting the renal artery in the renal CTA image. As shown in fig. 8, the renal CTA image and the renal artery prediction probability image both have size H×W×D. The to-be-trained agent is then trained through reinforcement learning, specifically using the renal CTA image and the renal artery prediction probability image. During training, the to-be-trained agent moves N steps and thereby marks the renal artery of the renal CTA image in its labeling image, whose size is also H×W×D, so that the to-be-trained agent learns the capability of segmenting the renal artery from the renal CTA image.
Optionally, after the target agent for segmenting renal arteries in renal CTA images is obtained by the training method of the blood vessel segmentation agent described above, a test set is processed with the target agent to evaluate its segmentation of the renal arteries in the test set, where the test set comprises a plurality of renal CTA images; the evaluation results are shown in Table 1 below.
TABLE 1

Metric                          Mean ± standard deviation
Similarity (Dice coefficient)   0.791 ± 0.067
Recall                          0.776 ± 0.124
Precision                       0.828 ± 0.076

As shown in Table 1, the mean similarity between the segmentation results obtained by the target agent on the renal arteries of the test set and the ground truth (GT) is 0.791 ± 0.067, i.e. the mean similarity lies between 0.791 − 0.067 and 0.791 + 0.067. Optionally, the similarity is the Dice coefficient. The mean recall of the segmentation results obtained by the target agent on the renal arteries of the test set is 0.776 ± 0.124, i.e. between 0.776 − 0.124 and 0.776 + 0.124, and the mean precision is 0.828 ± 0.076, i.e. between 0.828 − 0.076 and 0.828 + 0.076.
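For reference, a minimal sketch of the three reported metrics computed on binary volumes; the toy arrays stand in for one test case's prediction and ground truth, and per-case scores over the test set would then be summarised as mean ± standard deviation as in Table 1.

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity between two binary volumes."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def recall(pred, gt):
    """Fraction of ground-truth vessel voxels that were recovered."""
    return np.logical_and(pred, gt).sum() / gt.sum()

def precision(pred, gt):
    """Fraction of predicted vessel voxels that are correct."""
    return np.logical_and(pred, gt).sum() / pred.sum()

# Toy volumes standing in for one test case's prediction and GT.
rng = np.random.default_rng(0)
pred = rng.random((32, 32, 32)) > 0.5
gt = rng.random((32, 32, 32)) > 0.5
print(dice(pred, gt), recall(pred, gt), precision(pred, gt))
```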
Referring to fig. 9, fig. 9 is a flow chart of a blood vessel segmentation method according to an embodiment of the present application. The blood vessel segmentation method is used for segmenting blood vessels from a three-dimensional CT image, and its execution subject is a blood vessel segmentation device, which can be any electronic apparatus capable of executing the technical solution disclosed in this method embodiment. Optionally, the blood vessel segmentation device may be a computer or a server.
901. Acquire the three-dimensional CT image to be segmented.
In the embodiment of the present application, the three-dimensional CT image to be segmented includes a blood vessel. In one possible implementation, it is a three-dimensional CT image including the kidneys.
902. Acquire the target agent trained by the training method of the blood vessel segmentation agent described above.
903. Process the three-dimensional CT image to be segmented with the target agent to obtain the target motion trajectory of the target agent in the three-dimensional CT image to be segmented.
In the embodiment of the present application, the target motion trajectory is the motion trajectory of the target agent in the three-dimensional CT image to be segmented, i.e. the target agent's prediction of the position of the blood vessel in that image.
In one possible implementation, the blood vessel segmentation device determines the target three-dimensional space defined by the three-dimensional CT image to be segmented and takes any point in the target three-dimensional space as the target start point of the target agent. It then determines the target perception area of the target agent based on the target start point and the target perception range, where the target perception range is the perception range of the target agent in the target three-dimensional space and the target perception area is the perception area of the target agent in that space. The target voxels in the target perception area are processed with the target agent to determine the target movement direction of the target agent, and the movement of the target agent is controlled based on the target movement direction to obtain the target motion trajectory (a sketch of this rollout follows below).
Optionally, the blood vessel segmentation device obtains the target motion trajectory by performing the following steps: acquiring the target movement step of the target agent, and controlling the movement of the target agent based on the target movement direction and the target movement step to obtain the target motion trajectory.
Optionally, the ratio of the volume of the target perception range to the volume of the target three-dimensional space is less than or equal to the target threshold.
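For illustration, the following is a minimal sketch of this inference rollout under the same assumptions as the training sketches above: the environment volume is built from the image to be segmented, `agent` maps (environment, position) to a (dz, dy, dx) unit offset, and the voxels the agent visits form the segmentation mask. The step budget and the call signature are assumptions, not details fixed by this application.

```python
import numpy as np

def segment_vessel(agent, env, start, max_steps, step_len=1):
    """Trace the trained target agent through the environment volume;
    the voxels it visits form the segmentation mask, i.e. the target
    motion trajectory as a binary volume."""
    shape = np.array(env.shape[:3])
    mask = np.zeros(tuple(shape), dtype=bool)
    pos = np.array(start, dtype=int)
    for _ in range(max_steps):
        mask[tuple(pos)] = True                        # mark trajectory point
        offset = np.array(agent(env, pos))
        pos = np.clip(pos + offset * step_len, 0, shape - 1)
    mask[tuple(pos)] = True
    return mask
```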
904. Take the target motion trajectory as the segmentation result of the blood vessel in the three-dimensional CT image to be segmented.
Because the target motion trajectory of the target agent is the position of the blood vessel in the three-dimensional CT image to be segmented as predicted by the target agent, the blood vessel segmentation device can take the target motion trajectory as the segmentation result of the blood vessel in the three-dimensional CT image to be segmented.
In the embodiment of the present application, after the three-dimensional CT image to be segmented and the target agent are acquired, the target agent is used to process the three-dimensional CT image to be segmented, yielding the target motion trajectory of the target agent in that image. The target motion trajectory can then be taken as the segmentation result of the blood vessel, so that the blood vessel in the segmentation result can be a single connected region, improving the accuracy of the blood vessel segmentation result.
In one possible implementation, the blood vessel is a renal artery and the three-dimensional CT image to be segmented is a three-dimensional CT image including the kidneys. In this case, the segmentation result obtained by the target agent is the segmentation result of the renal artery in the three-dimensional CT image to be segmented.
Those skilled in the art will appreciate that in the methods of the specific embodiments above, the written order of the steps does not imply a strict execution order; the actual execution order should be determined by the functions of the steps and their possible internal logic.
If the technical solution of this application involves personal information, a product applying this technical solution clearly informs of the personal information processing rules and obtains the individual's independent consent before processing the personal information. If the technical solution involves sensitive personal information, a product applying it obtains the individual's separate authorization before processing such information and at the same time satisfies the requirement of "explicit consent". Informing of the personal information processing rules may include information such as the personal information processor, the purpose of processing, the processing method and the types of personal information processed.
The foregoing has detailed the methods of the embodiments of the present application; the apparatuses of the embodiments are described below.
Referring to fig. 10, fig. 10 is a schematic structural diagram of a training device for a blood vessel segmentation agent according to an embodiment of the present application. The training device 1 for a blood vessel segmentation agent includes an obtaining unit 11, a determining unit 12, a control unit 13 and an updating unit 14, specifically:
an obtaining unit 11, configured to obtain an agent to be trained, training data, and a label of the training data, where the training data includes a training three-dimensional CT image and a blood vessel prediction probability image of the training three-dimensional CT image, the blood vessel prediction probability image includes a probability that a semantic of a voxel in the training three-dimensional CT image is a blood vessel, and the label includes a position of the blood vessel in the training three-dimensional CT image;
a determining unit 12, configured to determine a training start point of the to-be-trained agent based on the vessel prediction probability image, where the training start point is a start point of the to-be-trained agent in a training three-dimensional space, and the training three-dimensional space is a three-dimensional space determined by the training three-dimensional CT image;
a control unit 13, configured to control the to-be-trained agent to move in the training three-dimensional space based on the training start point, so as to obtain the training motion trajectory of the to-be-trained agent in the training three-dimensional CT image;
the determining unit 12 is further configured to determine the reward of the to-be-trained agent based on the training motion trajectory and the position of the blood vessel in the training three-dimensional CT image;
and an updating unit 14, configured to update the parameters of the agent to be trained based on the reward, so as to obtain the target agent.
In combination with any one of the embodiments of the present application, the control unit 13 is configured to:
acquiring the training perception range, where the training perception range is the perception range of the to-be-trained agent in the training three-dimensional space;
determining, based on the training start point and the training perception range, the perception area of the to-be-trained agent in the training three-dimensional space as the training perception area, where the voxels in the training perception area are the training voxels;
processing the training voxels with the to-be-trained agent and determining the training movement direction of the to-be-trained agent;
and controlling the movement of the to-be-trained agent based on the training movement direction to obtain the training motion trajectory.
In combination with any one of the embodiments of the present application, the control unit 13 is configured to:
determining a number of voxels within the training three-dimensional space;
and, while the movement distance of the to-be-trained agent is less than or equal to the number, controlling the movement of the to-be-trained agent based on the training movement direction to obtain the training motion trajectory.
In combination with any one of the embodiments of the present application, the determining unit 12 is configured to:
calculating the degree of overlap between the training motion trajectory and the position of the blood vessel in the training three-dimensional CT image;
and determining the reward according to the degree of overlap, the degree of overlap being positively correlated with the reward.
In combination with any one of the embodiments of the present application, the determining unit 12 is configured to:
determining, from the blood vessel prediction probability image, the maximum value among the probabilities that the semantics of the voxels are blood vessel;
and taking the voxel corresponding to the maximum value as the training start point.
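A minimal sketch of this start-point selection: the voxel with the largest predicted vessel probability becomes the training start point. The toy probability map is a placeholder.

```python
import numpy as np

def training_start_point(prob_volume):
    """Return the (z, y, x) index of the voxel whose predicted vessel
    probability is largest; np.argmax works on the flattened array, so
    unravel_index maps the flat index back to 3D coordinates."""
    return np.unravel_index(int(np.argmax(prob_volume)), prob_volume.shape)

# Example with a toy probability map.
rng = np.random.default_rng(1)
print(training_start_point(rng.random((8, 8, 8))))
```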
In combination with any of the embodiments of the present application, a ratio of a volume of the training perception range to a volume of the training three-dimensional space is less than or equal to a target threshold.
In combination with any of the embodiments of the present application, the blood vessel is a renal artery.
In the embodiment of the present application, the training data comprise a training three-dimensional CT image and the blood vessel prediction probability image of that image, the blood vessel prediction probability image containing, for each voxel of the training three-dimensional CT image, the probability that its semantics are blood vessel, and the label containing the position of the blood vessel in the training three-dimensional CT image. After the training device acquires the to-be-trained agent, the training data and the label of the training data, it determines the training start point of the to-be-trained agent based on the blood vessel prediction probability image, which increases the probability that the training start point lies on the blood vessel.
The to-be-trained agent is controlled to move in the training three-dimensional space based on the training start point, yielding the training motion trajectory of the to-be-trained agent in the training three-dimensional CT image; this trajectory is the blood vessel that the to-be-trained agent segments from the training three-dimensional CT image. Because the training motion trajectory of the to-be-trained agent is continuous, the blood vessel it segments from the training three-dimensional CT image is continuous, i.e. the blood vessel region segmented by the to-be-trained agent is a single connected region.
After obtaining the training motion trajectory, the training device can determine the reward of the to-be-trained agent based on the training motion trajectory and the position of the blood vessel in the training three-dimensional CT image, and finally update the parameters of the to-be-trained agent based on the reward to obtain the target agent, so that the target agent has the capability of segmenting blood vessels from three-dimensional CT images, and the blood vessel region it segments is a single connected region.
Referring to fig. 11, fig. 11 is a schematic structural diagram of a blood vessel segmentation device provided in an embodiment of the present application, where the blood vessel segmentation device 2 is used for segmenting a blood vessel from a three-dimensional CT image, and the blood vessel segmentation device 2 includes an acquisition unit 21 and a processing unit 22, specifically:
an acquisition unit 21 for acquiring a three-dimensional CT image to be segmented, the three-dimensional CT image to be segmented including a blood vessel;
the acquisition unit 21 is further configured to acquire the target agent trained according to the first aspect and any embodiment thereof;
a processing unit 22, configured to process the three-dimensional CT image to be segmented with the target agent to obtain the target motion trajectory of the target agent in the three-dimensional CT image to be segmented;
the processing unit 22 is further configured to take the target motion trajectory as the segmentation result of the blood vessel in the three-dimensional CT image to be segmented.
In combination with any of the embodiments of the present application, the processing unit 22 is configured to:
determining the target three-dimensional space defined by the three-dimensional CT image to be segmented;
determining any point in the target three-dimensional space as the target start point of the target agent;
determining the target perception area of the target agent based on the target start point and the target perception range, where the target perception range is the perception range of the target agent in the target three-dimensional space and the target perception area is the perception area of the target agent in the target three-dimensional space;
processing the target voxels in the target perception area with the target agent and determining the target movement direction of the target agent;
and controlling the movement of the target agent based on the target movement direction to obtain the target motion trajectory.
In combination with any of the embodiments of the present application, the processing unit 22 is configured to:
acquiring the target movement step of the target agent;
and controlling the movement of the target agent based on the target movement direction and the target movement step to obtain the target motion trajectory.
In combination with any of the embodiments of the present application, a ratio of the volume of the target perception range to the volume of the target three-dimensional space is less than or equal to a target threshold.
In the embodiment of the present application, after the three-dimensional CT image to be segmented and the target agent are acquired, the target agent is used to process the three-dimensional CT image to be segmented, yielding the target motion trajectory of the target agent in that image; the target motion trajectory can be taken as the segmentation result of the blood vessel in the three-dimensional CT image to be segmented, so that the blood vessel in the segmentation result can be a single connected region, improving the accuracy of the blood vessel segmentation result.
In some embodiments, functions or modules included in the apparatus provided in the embodiments of the present application may be used to perform the methods described in the foregoing method embodiments, and specific implementations thereof may refer to descriptions of the foregoing method embodiments, which are not repeated herein for brevity.
Fig. 12 is a schematic hardware structure of an electronic device according to an embodiment of the present application. The electronic device 3 comprises a processor 31 and a memory 32. Optionally, the electronic device 3 further comprises input means 33 and output means 34. The processor 31, the memory 32, the input means 33 and the output means 34 are coupled through connectors, which include various interfaces, transmission lines, buses and the like; this embodiment is not limited in this respect. It should be understood that in the various embodiments of this application, coupling means interconnection in a particular way, including direct connection or indirect connection through other devices, for example through various interfaces, transmission lines, buses and the like.
The processor 31 may be one or more graphics processing units (GPUs); where the processor 31 is a GPU, the GPU may be single-core or multi-core. Optionally, the processor 31 may be a processor group formed by a plurality of GPUs coupled to each other through one or more buses. Alternatively, the processor may be another type of processor; the embodiment of the present application is not limited in this respect.
Memory 32 may be used to store computer program instructions as well as various types of computer program code for performing aspects of the present application. Optionally, the memory includes, but is not limited to, a random access memory (random access memory, RAM), a read-only memory (ROM), an erasable programmable read-only memory (erasable programmable read only memory, EPROM), or a portable read-only memory (compact disc read-only memory, CD-ROM) for associated instructions and data.
The input means 33 are for inputting data and/or signals and the output means 34 are for outputting data and/or signals. The input device 33 and the output device 34 may be separate devices or may be an integral device.
It will be appreciated that in the embodiments of the present application, the memory 32 may be used to store not only relevant instructions, but also relevant data, and the embodiments of the present application are not limited to the data specifically stored in the memory.
It will be appreciated that fig. 12 shows only a simplified design of an electronic device. In practical applications, the electronic device may further include other necessary elements, including but not limited to any number of input/output devices, processors, memories, etc., and all electronic devices that may implement the embodiments of the present application are within the scope of protection of the present application.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein. It will be further apparent to those skilled in the art that the descriptions of the various embodiments herein are provided with emphasis, and that the same or similar parts may not be explicitly described in different embodiments for the sake of convenience and brevity of description, and thus, parts not described in one embodiment or in detail may be referred to in the description of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
In the above embodiments, the implementation may be wholly or partly by software, hardware, firmware or any combination thereof. When implemented in software, it may be wholly or partly implemented in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted through a computer-readable storage medium. The computer instructions may be transmitted from one website, computer, server or data center to another by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., digital versatile disc (DVD)) or a semiconductor medium (e.g., solid state disk (SSD)), etc.
Those of ordinary skill in the art will appreciate that all or part of the above method embodiments may be accomplished by a computer program instructing related hardware. The program may be stored in a computer-readable storage medium and, when executed, may include the flows of the above method embodiments. The aforementioned storage medium includes a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc or the like.

Claims (10)

1. A method of training a vessel segmentation agent, the method comprising:
acquiring an agent to be trained, training data and a label of the training data, wherein the training data comprises a training three-dimensional CT image and a blood vessel prediction probability image of the training three-dimensional CT image, the blood vessel prediction probability image comprises the probability that the semantics of voxels in the training three-dimensional CT image are blood vessels, and the label comprises the positions of the blood vessels in the training three-dimensional CT image;
determining a training start point of the to-be-trained agent based on the blood vessel prediction probability image, wherein the training start point is a start point of the to-be-trained agent in a training three-dimensional space, and the training three-dimensional space is the three-dimensional space determined by the training three-dimensional CT image;
controlling the to-be-trained agent to move in the training three-dimensional space based on the training start point to obtain a training motion trajectory of the to-be-trained agent in the training three-dimensional CT image;
determining a reward of the to-be-trained agent based on the training motion trajectory and the position of the blood vessel in the training three-dimensional CT image;
and updating parameters of the to-be-trained agent based on the reward to obtain a target agent.
2. The method according to claim 1, wherein the controlling the to-be-trained agent to move in the training three-dimensional space based on the training start point to obtain the training motion trajectory of the to-be-trained agent in the training three-dimensional CT image comprises:
acquiring a training perception range, wherein the training perception range is the perception range of the to-be-trained agent in the training three-dimensional space;
determining, based on the training start point and the training perception range, the perception area of the to-be-trained agent in the training three-dimensional space as a training perception area, wherein voxels in the training perception area are training voxels;
processing the training voxels with the to-be-trained agent and determining the training movement direction of the to-be-trained agent;
and controlling the movement of the to-be-trained agent based on the training movement direction to obtain the training motion trajectory.
3. The method according to claim 2, wherein the controlling the movement of the to-be-trained agent based on the training movement direction to obtain the training motion trajectory comprises:
determining a number of voxels within the training three-dimensional space;
and, while the movement distance of the to-be-trained agent is less than or equal to the number, controlling the movement of the to-be-trained agent based on the training movement direction to obtain the training motion trajectory.
4. The method of claim 1, wherein the determining the reward of the to-be-trained agent based on the training motion trajectory and the position of the blood vessel in the training three-dimensional CT image comprises:
calculating the degree of overlap between the training motion trajectory and the position of the blood vessel in the training three-dimensional CT image;
and determining the reward according to the degree of overlap, the degree of overlap being positively correlated with the reward.
5. The method of claim 1, wherein the determining the training start point of the to-be-trained agent based on the blood vessel prediction probability image comprises:
determining, from the blood vessel prediction probability image, the maximum value among the probabilities that the semantics of the voxels are blood vessel;
and taking the voxel corresponding to the maximum value as the training start point.
6. A blood vessel segmentation method for segmenting a blood vessel from a three-dimensional CT image, the method comprising:
acquiring a three-dimensional CT image to be segmented, wherein the three-dimensional CT image to be segmented comprises blood vessels;
obtaining a target agent trained by the method according to any one of claims 1 to 5;
processing the three-dimensional CT image to be segmented with the target agent to obtain a target motion trajectory of the target agent in the three-dimensional CT image to be segmented;
and taking the target motion trajectory as a segmentation result of the blood vessel in the three-dimensional CT image to be segmented.
7. A training device for a blood vessel segmentation agent, the training device comprising:
an acquisition unit, configured to acquire an agent to be trained, training data and a label of the training data, wherein the training data comprises a training three-dimensional CT image and a blood vessel prediction probability image of the training three-dimensional CT image, the blood vessel prediction probability image comprises the probability that the semantics of voxels in the training three-dimensional CT image are blood vessels, and the label comprises the position of the blood vessel in the training three-dimensional CT image;
a determining unit, configured to determine a training start point of the to-be-trained agent based on the blood vessel prediction probability image, wherein the training start point is a start point of the to-be-trained agent in a training three-dimensional space, and the training three-dimensional space is the three-dimensional space determined by the training three-dimensional CT image;
a control unit, configured to control the to-be-trained agent to move in the training three-dimensional space based on the training start point to obtain a training motion trajectory of the to-be-trained agent in the training three-dimensional CT image;
the determining unit is further configured to determine a reward of the to-be-trained agent based on the training motion trajectory and the position of the blood vessel in the training three-dimensional CT image;
and an updating unit, configured to update parameters of the to-be-trained agent based on the reward to obtain a target agent.
8. A blood vessel segmentation device for segmenting a blood vessel from a three-dimensional CT image, the blood vessel segmentation device comprising:
an acquisition unit, configured to acquire a three-dimensional CT image to be segmented, the three-dimensional CT image to be segmented including a blood vessel;
the acquisition unit is further configured to acquire a target agent trained by the method of any one of claims 1 to 5;
a processing unit, configured to process the three-dimensional CT image to be segmented with the target agent to obtain a target motion trajectory of the target agent in the three-dimensional CT image to be segmented;
the processing unit is further configured to take the target motion trajectory as a segmentation result of the blood vessel in the three-dimensional CT image to be segmented.
9. An electronic device, comprising: a processor and a memory for storing computer program code, the computer program code comprising computer instructions;
when the processor executes the computer instructions, the electronic device performs the training method of any one of claims 1 to 5;
or, when the processor executes the computer instructions, the electronic device performs the blood vessel segmentation method of claim 6.
10. A computer readable storage medium having a computer program stored therein, the computer program comprising program instructions;
when the program instructions are executed by a processor, they cause the processor to perform the training method of any one of claims 1 to 5;
or, when the program instructions are executed by a processor, they cause the processor to perform the blood vessel segmentation method of claim 6.