CN112419340A - Generation method, application method and device for a cerebrospinal fluid segmentation model

Generation method, application method and device for a cerebrospinal fluid segmentation model

Info

Publication number: CN112419340A
Application number: CN202011449140.9A
Authority: CN (China)
Prior art keywords: image, network model, cerebrospinal fluid, dimensional, sample data
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 范晟昱, 李戈
Current Assignee: Neusoft Medical Systems Co., Ltd.
Original Assignee: Shenyang Advanced Medical Equipment Technology Incubation Center Co., Ltd.

Classifications

    • G06T 7/11 — Image analysis; segmentation; region-based segmentation
    • G06N 3/045 — Neural networks; architecture; combinations of networks
    • G06N 3/08 — Neural networks; learning methods
    • G06T 2207/10081 — Image acquisition modality; tomographic images; computed x-ray tomography [CT]
    • G06T 2207/20081 — Special algorithmic details; training; learning
    • G06T 2207/30016 — Subject of image; biomedical image processing; brain

Abstract

The embodiments of the invention provide a generation method, an application method and a device for a cerebrospinal fluid segmentation model. A head CT perfusion imaging (CTP) image is acquired that either contains no infarct focus or contains an infarct focus that is not adhered to cerebrospinal fluid. Sample data are constructed according to the CTP image, a generative adversarial network model and its initial parameter values are set, and the model is trained with the sample data. The generator network model in the trained generative adversarial network model is taken as the cerebrospinal fluid segmentation model. In this way, the sample data required to train the cerebrospinal fluid segmentation model can be constructed from normal head CTP images, and the trained model establishes a basis for distinguishing adhered cerebrospinal fluid and infarct foci in CTP images.

Description

Generation method, application method and device for a cerebrospinal fluid segmentation model
Technical Field
The invention relates to the technical field of medical image processing, and in particular to a generation method, an application method and a device for a cerebrospinal fluid segmentation model.
Background
Acute cerebrovascular disease covers acute and chronic cerebrovascular disorders caused by various factors and mainly comprises hemorrhagic stroke and ischemic stroke. CTP (computed tomography perfusion imaging) performs whole-brain volume scanning of a region of interest by intravenously injecting a bolus of contrast agent and exploiting the diffusion and dilution characteristics of the contrast agent as a tracer. From CTP imaging, CBF (cerebral blood flow), CBV (cerebral blood volume), MTT (mean transit time), TTP (time to peak) and Tmax (time to maximum of the residue function) can be calculated to obtain the core infarct focus and the ischemic penumbra, so that acute ischemic stroke can be effectively diagnosed.
Cerebrospinal fluid is a colorless, transparent fluid present in the ventricles of the brain and the subarachnoid space. In CT perfusion imaging, cerebrospinal fluid and an infarct focus have similar density values, so when an infarct focus borders a ventricle, the cerebrospinal fluid and the infarct focus appear adhered in the CTP image. How to distinguish cerebrospinal fluid from an infarct focus in a CTP image in which the two are adhered is therefore an important problem in medical image processing.
Disclosure of Invention
In order to overcome the problems in the related art, the invention provides a generation method, an application method and a device for a cerebrospinal fluid segmentation model, which can accurately distinguish adhered cerebrospinal fluid and infarct foci in a CTP image.
According to a first aspect of the embodiments of the present invention, there is provided a method for generating a cerebrospinal fluid segmentation model, including:
acquiring a head CT perfusion imaging (CTP) image, wherein the CTP image does not include an infarct focus, or the CTP image includes an infarct focus but the infarct focus is not adhered to cerebrospinal fluid; the CTP image is a three-dimensional image;
constructing sample data according to the CTP image, wherein the sample data comprises a two-dimensional simulated adhesion image containing an adhesion region of cerebrospinal fluid and an infarct focus, and a two-dimensional cerebrospinal fluid image;
setting a generative adversarial network model and initial parameter values of the generative adversarial network model;
and training the generative adversarial network model by using the sample data to obtain a trained generative adversarial network model, and taking the generator network model in the trained generative adversarial network model as the cerebrospinal fluid segmentation model.
According to a second aspect of the embodiments of the present invention, there is provided an application method of a cerebrospinal fluid segmentation model, including:
acquiring a head CT perfusion imaging (CTP) image of a subject, wherein the CTP image includes an infarct focus and the infarct focus is adhered to cerebrospinal fluid; the CTP image is a three-dimensional image;
acquiring an input image according to the CTP image, wherein the input image includes a region where cerebrospinal fluid and an infarct focus are adhered; the input image is a two-dimensional image;
inputting the input image into a trained cerebrospinal fluid segmentation model to obtain a target cerebrospinal fluid image output by the cerebrospinal fluid segmentation model, wherein the cerebrospinal fluid segmentation model is generated according to the method of any one of claims 1 to 8.
According to a third aspect of the embodiments of the present invention, there is provided an apparatus for generating a cerebrospinal fluid segmentation model, including:
a non-adhesion image acquisition module, configured to acquire a head CT perfusion imaging (CTP) image, wherein the CTP image does not include an infarct focus, or the CTP image includes an infarct focus but the infarct focus is not adhered to cerebrospinal fluid; the CTP image is a three-dimensional image;
a construction module, configured to construct sample data according to the CTP image, wherein the sample data comprises a two-dimensional simulated adhesion image containing an adhesion region of cerebrospinal fluid and an infarct focus, and a two-dimensional cerebrospinal fluid image;
a setting module, configured to set a generative adversarial network model and initial parameter values of the generative adversarial network model;
and a training module, configured to train the generative adversarial network model by using the sample data to obtain a trained generative adversarial network model, the generator network model in the trained generative adversarial network model being taken as the cerebrospinal fluid segmentation model.
According to a fourth aspect of the embodiments of the present invention, there is provided an apparatus for applying a cerebrospinal fluid segmentation model, including:
an adhesion image acquisition module, configured to acquire a head CT perfusion imaging (CTP) image of a subject, wherein the CTP image includes an infarct focus and the infarct focus is adhered to cerebrospinal fluid; the CTP image is a three-dimensional image;
an input image acquisition module, configured to acquire an input image according to the CTP image, wherein the input image includes a region where cerebrospinal fluid and an infarct focus are adhered; the input image is a two-dimensional image;
and a segmentation module, configured to input the input image into a trained cerebrospinal fluid segmentation model and obtain a target cerebrospinal fluid image output by the cerebrospinal fluid segmentation model, wherein the cerebrospinal fluid segmentation model is generated according to the method of the first aspect.
The technical scheme provided by the embodiment of the invention has the following beneficial effects:
In the embodiment of the invention, a head CT perfusion imaging CTP image is acquired that either contains no infarct focus or contains an infarct focus not adhered to cerebrospinal fluid; sample data are constructed according to the CTP image; a generative adversarial network model and its initial parameter values are set; the generative adversarial network model is trained with the sample data to obtain a trained generative adversarial network model; and the generator network model in the trained generative adversarial network model is taken as the cerebrospinal fluid segmentation model. In this way, the sample data required to train the cerebrospinal fluid segmentation model can be constructed from normal head CTP images, and the trained model establishes a basis for distinguishing adhered cerebrospinal fluid and infarct foci in CTP images.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present specification and together with the description, serve to explain the principles of the specification.
Fig. 1 is a flowchart illustrating a method for generating a cerebrospinal fluid segmentation model according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a CTP image provided by an embodiment of the present invention.
Fig. 3 is an exemplary diagram of the generator network model provided by an embodiment of the present invention.
Fig. 4 is an exemplary diagram of the discriminator network model provided by an embodiment of the present invention.
Fig. 5 is a flowchart illustrating an application method of the cerebrospinal fluid segmentation model according to an embodiment of the present invention.
Fig. 6 is a functional block diagram of an apparatus for generating a cerebrospinal fluid segmentation model according to an embodiment of the present invention.
Fig. 7 is a functional block diagram of an apparatus for applying a cerebrospinal fluid segmentation model according to an embodiment of the present invention.
Fig. 8 is a hardware configuration diagram of a console device according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the invention; rather, they are merely examples of apparatus and methods consistent with certain aspects of embodiments of the invention, as detailed in the appended claims.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of embodiments of the invention. As used in the examples of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used to describe various information in embodiments of the present invention, the information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of embodiments of the present invention. The word "if" as used herein may be interpreted as "upon", "when", or "in response to determining", depending on the context.
The method for generating a cerebrospinal fluid segmentation model is explained in detail below by way of embodiments.
Fig. 1 is a flowchart illustrating a method for generating a cerebrospinal fluid segmentation model according to an embodiment of the present invention. As shown in fig. 1, in this embodiment, the generating method of the cerebrospinal fluid segmentation model may include:
s101, obtaining a head CT perfusion imaging CTP image, wherein the CTP image does not comprise an infarction focus, or the CTP image comprises the infarction focus but the infarction focus is not adhered to cerebrospinal fluid; the CTP image is a three-dimensional image.
S102, constructing sample data according to the CTP image, wherein the sample data comprises a two-dimensional simulated adhesion image containing an adhesion region of cerebrospinal fluid and an infarct focus, and a two-dimensional cerebrospinal fluid image.
S103, setting a generative adversarial network model and initial parameter values of the generative adversarial network model.
S104, training the generative adversarial network model by using the sample data to obtain a trained generative adversarial network model, and taking the generator network model in the trained generative adversarial network model as the cerebrospinal fluid segmentation model.
The generative adversarial network model may comprise a generator network model and a discriminator network model.
In this embodiment, the CTP image may be a volume image in DICOM format from the first phase of the CTP scan. During CT perfusion imaging, the contrast agent diffuses with the blood flow in the vessels and its concentration decreases as time increases, degrading later CTP phases. The first-phase CTP image therefore has the highest quality, and by selecting the first-phase CTP image a higher-quality cerebrospinal fluid image can be obtained.
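As a hedged illustration of this step, the sketch below loads one CTP phase stored as a DICOM series into a 3D volume. The patent does not specify any tooling; SimpleITK is used here only for convenience, and the directory path is hypothetical.

```python
# Hedged sketch: loading the first-phase CTP volume from a DICOM series.
import SimpleITK as sitk

def load_ctp_volume(dicom_dir: str) -> sitk.Image:
    """Read one CTP phase stored as a DICOM series into a 3D volume."""
    reader = sitk.ImageSeriesReader()
    series_ids = reader.GetGDCMSeriesIDs(dicom_dir)
    file_names = reader.GetGDCMSeriesFileNames(dicom_dir, series_ids[0])
    reader.SetFileNames(file_names)
    return reader.Execute()

volume = load_ctp_volume("/data/ctp/phase_01")  # hypothetical path
print(volume.GetSize())                         # e.g. (512, 512, n_slices)
```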
The cerebrospinal fluid segmentation model in the present embodiment is used to identify cerebrospinal fluid from a CTP image in which the cerebrospinal fluid and an infarct focus are adhered, thereby distinguishing the cerebrospinal fluid from the infarct focus. In order to train the cerebrospinal fluid segmentation model, corresponding sample data needs to be obtained. The more sample data, the better the training of the cerebrospinal fluid segmentation model.
The sample data for the cerebrospinal fluid segmentation model are CTP images containing adhesion regions of cerebrospinal fluid and infarct foci, which could be obtained by performing CTP scans on existing patients. In practice, however, the number of patients with intracranial infarct foci is limited, and the infarct foci of these patients are not necessarily adhered to the cerebrospinal fluid, so CTP images that actually include adhesion regions of cerebrospinal fluid and infarct foci are very rare, and sample data for the cerebrospinal fluid segmentation model are scarce. A well-performing cerebrospinal fluid segmentation model cannot be trained with only a small amount of sample data.
To obtain sufficient sample data, in this embodiment the sample data of the cerebrospinal fluid segmentation model are constructed from a normal head CTP image (i.e., a head CTP image without an infarct focus) or from a head CTP image that contains an infarct focus not adhered to the cerebrospinal fluid, which effectively solves the problem of insufficient training data.
The CTP image in step S101 is a three-dimensional volume image. In this embodiment, a two-dimensional simulated adhesion image including adhesion regions of cerebrospinal fluid and infarct focus, and a two-dimensional cerebrospinal fluid image are constructed using the three-dimensional CTP image.
Fig. 2 is a schematic diagram of a CTP image provided by an embodiment of the present invention. In fig. 2, panel (a) is a two-dimensional cerebrospinal fluid image in which no infarct focus is present; the dark region is the cerebrospinal fluid region. Panel (b) is the two-dimensional simulated adhesion image corresponding to (a), containing an adhesion region of cerebrospinal fluid and an infarct focus; the dark region is the adhesion region. Panel (c) is the cerebrospinal fluid image output by the cerebrospinal fluid segmentation model after the two-dimensional simulated adhesion image shown in (b) is input into the model; the dark region is the cerebrospinal fluid region.
In one example, constructing sample data from the CTP image may include:
preprocessing the CTP image to obtain a three-dimensional brain tissue image;
extracting a contour of a maximum connected domain of a cerebrospinal fluid region in the brain tissue image;
taking a point on the contour as an initial point and performing a random-valued walk on the brain tissue image to obtain a three-dimensional simulated adhesion image of adhered cerebrospinal fluid and infarct focus;
and extracting a two-dimensional simulated adhesion image containing an adhesion region of cerebrospinal fluid and an infarct focus from the three-dimensional simulated adhesion image, and extracting the corresponding two-dimensional cerebrospinal fluid image according to the two-dimensional simulated adhesion image and the contour, wherein the two-dimensional simulated adhesion image and the corresponding two-dimensional cerebrospinal fluid image form one group of sample data.
In one example, preprocessing the CTP image to obtain a three-dimensional brain tissue image may include:
removing a skull image area from the CTP image to obtain an original three-dimensional brain tissue image;
and denoising the original three-dimensional brain tissue image to obtain a preprocessed three-dimensional brain tissue image.
The original CTP image contains an image of the skull; through preprocessing, the skull image can be removed from the CTP image so that the brain tissue image is retained.
In one example, the skull image region may be removed from the CTP image as follows: strip the skull in the CTP image according to an active contour method and extract a brain tissue data mask; the CTP data located inside the brain tissue data mask then form the original three-dimensional brain tissue image.
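The patent performs skull stripping with an active contour method; as a hedged, simplified stand-in, the sketch below builds a brain tissue mask by thresholding bone and keeping the largest enclosed component. All threshold values are illustrative assumptions, not taken from the patent.

```python
# Hedged, simplified skull-stripping sketch (not the patent's active contour method).
import numpy as np
from scipy import ndimage

def brain_tissue_mask(ct_hu: np.ndarray) -> np.ndarray:
    """ct_hu: 3D CT volume in Hounsfield units. Returns a boolean brain mask."""
    skull = ct_hu > 300                                    # bone (assumed threshold)
    inside = ndimage.binary_fill_holes(skull) & ~skull     # region enclosed by the skull
    inside = ndimage.binary_erosion(inside, iterations=2)  # pull away from the skull edge
    labels, n = ndimage.label(inside)
    if n == 0:
        return inside
    sizes = ndimage.sum(inside, labels, index=range(1, n + 1))
    return labels == (int(np.argmax(sizes)) + 1)           # largest component = brain

# original 3D brain tissue image: CTP data inside the mask, background zeroed
# brain = np.where(brain_tissue_mask(volume_hu), volume_hu, 0)
```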
In other examples, other algorithms may also be used to extract the brain tissue data mask from the CTP image, such as a level set algorithm, and the present embodiment does not limit the extraction algorithm of the brain tissue data mask.
In this embodiment, the original three-dimensional brain tissue image may be denoised according to a non-local means algorithm. Of course, in other embodiments other denoising algorithms may be used, such as BM3D (block-matching and 3D filtering).
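A hedged sketch of the non-local means denoising step follows, using scikit-image; the patent names the algorithm but not an implementation, and the parameter values below are illustrative.

```python
# Hedged sketch of non-local means denoising of the 3D brain tissue image.
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

def denoise_brain(brain: np.ndarray) -> np.ndarray:
    img = brain.astype(np.float32)
    sigma = float(np.mean(estimate_sigma(img)))  # rough noise estimate
    return denoise_nl_means(
        img,
        h=1.15 * sigma,       # filtering strength (illustrative)
        sigma=sigma,
        patch_size=5,
        patch_distance=6,
        fast_mode=True,
    )
```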
In this embodiment, the cerebrospinal fluid region may be extracted from the brain tissue image according to the CT value threshold, and then the maximum connected domain may be found from the extracted cerebrospinal fluid region.
For example, if the CT value of the cerebrospinal fluid region lies between 10 and 15 HU, pixel points with CT values between 10 and 15 HU are extracted from the brain tissue image. These points form several connected domains; if the largest connected domain is R, the contour C of the connected domain R is calculated, and C is the contour of the maximum connected domain of the cerebrospinal fluid region extracted from the brain tissue image.
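A minimal sketch of this thresholding and largest-connected-domain extraction follows, assuming the brain image is given in Hounsfield units.

```python
# Hedged sketch: extract the largest CSF connected domain R by the 10-15 HU
# threshold from the text.
import numpy as np
from scipy import ndimage

def largest_csf_component(brain_hu: np.ndarray) -> np.ndarray:
    csf = (brain_hu >= 10) & (brain_hu <= 15)     # CSF density range from the text
    labels, n = ndimage.label(csf)                # connected domains
    if n == 0:
        return csf
    sizes = ndimage.sum(csf, labels, index=range(1, n + 1))
    return labels == (int(np.argmax(sizes)) + 1)  # mask of the largest domain R

# The contour C can be taken as the boundary voxels of R:
# R = largest_csf_component(brain); C = R & ~ndimage.binary_erosion(R)
```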
In one example, taking a point on the contour as an initial point and performing a random-valued walk on the brain tissue image to obtain a three-dimensional simulated adhesion image of adhered cerebrospinal fluid and infarct focus may include:
randomly taking a point on the contour as the initial point, and determining a cerebrospinal fluid region mask according to the contour;
in the first random walk, setting a first identifier for the points in a preset-size neighborhood of the initial point that lie outside the cerebrospinal fluid region mask;
in the (j+1)-th random walk, randomly selecting a point from the preset-size neighborhood corresponding to the j-th walk as the target point, and setting the first identifier for the points in the preset-size neighborhood of the target point that lie outside the cerebrospinal fluid region mask, where j is a natural number;
after n random walks, assigning to each point carrying the first identifier a target pixel value, the target pixel value being the pixel value at a randomly selected point inside the contour on the brain tissue image, and taking the assigned brain tissue image as the three-dimensional simulated adhesion image of adhered cerebrospinal fluid and infarct focus.
In this embodiment, in each random walk a second identifier may further be set for the points inside the cerebrospinal fluid region mask, both among the points in the preset-size neighborhood of the initial point and among the points in the preset-size neighborhood of the target point.
Here n is the number of random walks. In one example, n may be a randomly chosen integer between 1000 and 5000.
The first identifier may be 1 and the second identifier may be 0.
For example, assume the image containing the contour C is I. Randomly select a point on the contour C as the starting point, with coordinates (x0, y0, z0). For each point in the 26-neighborhood of (x0, y0, z0) in image I, query whether it lies within the cerebrospinal fluid region mask; if it does, set the mark 0 for that point, and if it does not, set the mark 1. Randomly select one of the points marked 1 in the 26-neighborhood as the target point, with coordinates (x1, y1, z1), and take it as the starting point of the next random-valued walk. Repeat this process j times (with j taken randomly between 100 and 500 in each random-valued walk); the starting point of the j-th walk is (xj-1, yj-1, zj-1). For any point (xj, yj, zj) marked 1, randomly take a point (xr, yr, zr) from the maximum connected cerebrospinal fluid domain R in image I and assign its value to the marked point, i.e. I(xj, yj, zj) = I(xr, yr, zr), where (xr, yr, zr) belongs to the largest connected domain R.
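The following is a hedged sketch of this random-valued walk: starting from a point on the CSF contour, it repeatedly marks out-of-mask neighbors and finally paints every marked voxel with the value of a random voxel from the largest CSF domain R, simulating an infarct focus adhered to the CSF. How the next target point is chosen from the neighborhood follows one plausible reading of the text.

```python
# Hedged sketch of the random-valued walk used to simulate an adhered infarct focus.
import numpy as np

rng = np.random.default_rng()

def neighbors_26(p, shape):
    """All in-bounds points in the 26-neighborhood of voxel p."""
    out = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                if dx == dy == dz == 0:
                    continue
                q = (p[0] + dx, p[1] + dy, p[2] + dz)
                if all(0 <= q[i] < shape[i] for i in range(3)):
                    out.append(q)
    return out

def simulate_adhesion(image, csf_mask, contour_points, n_steps=None):
    """image: 3D brain image; csf_mask: boolean mask of domain R;
    contour_points: array of (x, y, z) voxels on the contour C."""
    img = image.copy()
    marked = []                                          # points given the first identifier
    p = tuple(contour_points[rng.integers(len(contour_points))])
    n_steps = n_steps or int(rng.integers(1000, 5001))   # n between 1000 and 5000
    for _ in range(n_steps):
        nbrs = neighbors_26(p, img.shape)
        outside = [q for q in nbrs if not csf_mask[q]]   # first identifier (mark 1)
        marked.extend(outside)
        if not outside:
            break
        p = outside[rng.integers(len(outside))]          # next target point
    csf_voxels = np.argwhere(csf_mask)                   # largest connected domain R
    for q in marked:
        r = tuple(csf_voxels[rng.integers(len(csf_voxels))])
        img[q] = img[r]                                  # I(xj,yj,zj) = I(xr,yr,zr)
    return img
```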
In this embodiment, the generative adversarial network model includes a generator network model and a discriminator network model. The generator network model is used to convert input noise data into virtual data approximating real data, and the discriminator network model is used to judge whether the virtual data generated by the generator network model are real.
In one example, the generator network model may include convolutional layers, downsampling convolutional layers, upsampling layers, a fully connected layer, and an output layer.
Fig. 3 is an exemplary diagram of the generator network model provided by an embodiment of the present invention. As shown in fig. 3, the generator network model includes 8 convolutional layers of size (3,3), 4 downsampling convolutional layers, 4 upsampling layers, 1 fully connected layer of size 4096, and 1 single-channel output layer; every layer except the output layer is followed by batch normalization (Batch Norm) and an activation function (Leaky ReLU). The network input size is (2,256,256) and the output size is (1,256,256). In fig. 3, C denotes the number of channels.
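The following PyTorch sketch is one plausible reading of the architecture of fig. 3; the channel counts and the exact ordering of the plain and downsampling convolutions are assumptions, since the patent does not give them.

```python
# Hedged generator sketch: (3,3) convolutions, 4 downsampling convolutions,
# 4 upsampling layers, a 4096-unit fully connected bottleneck, single-channel output.
import torch
import torch.nn as nn

def conv_bn(in_c, out_c, stride=1):
    return nn.Sequential(
        nn.Conv2d(in_c, out_c, 3, stride=stride, padding=1),
        nn.BatchNorm2d(out_c),
        nn.LeakyReLU(0.2, inplace=True),
    )

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            conv_bn(2, 16),             # (2,256,256) -> (16,256,256)
            conv_bn(16, 32, stride=2),  # downsample -> 128
            conv_bn(32, 32),
            conv_bn(32, 64, stride=2),  # -> 64
            conv_bn(64, 64),
            conv_bn(64, 64, stride=2),  # -> 32
            conv_bn(64, 64),
            conv_bn(64, 64, stride=2),  # -> 16
        )
        self.fc = nn.Sequential(        # 4096-unit fully connected bottleneck
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 4096),
            nn.BatchNorm1d(4096),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Linear(4096, 64 * 16 * 16),
            nn.Unflatten(1, (64, 16, 16)),
        )
        def up(in_c, out_c):
            return nn.Sequential(
                nn.Upsample(scale_factor=2, mode="nearest"),
                conv_bn(in_c, out_c),
            )
        self.decoder = nn.Sequential(
            up(64, 64),   # 16 -> 32
            up(64, 64),   # 32 -> 64
            up(64, 32),   # 64 -> 128
            up(32, 16),   # 128 -> 256
            nn.Conv2d(16, 1, 3, padding=1),  # single-channel output, no BN/activation
        )

    def forward(self, x):  # x: (N, 2, 256, 256)
        return self.decoder(self.fc(self.encoder(x)))  # (N, 1, 256, 256)

# smoke test:
# g = Generator(); print(g(torch.randn(2, 2, 256, 256)).shape)  # torch.Size([2, 1, 256, 256])
```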
It should be noted that, in application, the structure of the generator network model may be adjusted as needed; the generator network model of this embodiment is not limited to the structure shown in fig. 3.
In this embodiment, the discriminator network model may adopt a classification network, such as a GoogLeNet network, a ResNet network, or the like.
In one example, the discriminator network model may include convolutional layers, fully connected layers, and an output layer.
Fig. 4 is an exemplary diagram of the discriminator network model provided by an embodiment of the present invention. As shown in fig. 4, the discriminator network model includes 4 convolutional layers of size (4,4) with stride (2,2) and 2 fully connected layers. Each convolutional layer is followed by normalization and an activation function, and, to prevent overfitting, the fully connected layer is followed by a dropout layer that discards information at a ratio of 0.5. The network input size is (2,256,256) and the output is a probability that the two input images are similar; when the probability value is close to 1, the image output by the generator network model is considered closer to the real image.
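A matching hedged PyTorch sketch of the discriminator of fig. 4 follows; channel counts and the hidden layer size are assumptions.

```python
# Hedged discriminator sketch: four (4,4) stride-2 convolutions with
# BatchNorm + LeakyReLU, two fully connected layers with dropout 0.5,
# ending in a probability via sigmoid.
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        def conv(in_c, out_c):
            return nn.Sequential(
                nn.Conv2d(in_c, out_c, 4, stride=2, padding=1),
                nn.BatchNorm2d(out_c),
                nn.LeakyReLU(0.2, inplace=True),
            )
        self.features = nn.Sequential(
            conv(2, 16),   # (2,256,256) -> (16,128,128)
            conv(16, 32),  # -> (32,64,64)
            conv(32, 64),  # -> (64,32,32)
            conv(64, 64),  # -> (64,16,16)
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 512),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Dropout(0.5),   # discard redundant information at ratio 0.5
            nn.Linear(512, 1),
            nn.Sigmoid(),      # probability that the input pair looks real
        )

    def forward(self, x):      # x: (N, 2, 256, 256) - image pair as channels
        return self.classifier(self.features(x))
```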
It should be noted that, in application, the structure of the discriminator network model may be adjusted as needed; the discriminator network model of this embodiment is not limited to the structure shown in fig. 4.
The training process of the generative adversarial network model can be understood as a game between the generator network and the discriminator network; through adversarial training, the data generated by the generator network are made more realistic.
In this embodiment, the initial parameter value may adopt a randomly generated value.
In one example, the training process of the cerebrospinal fluid segmentation model may include:
in the training process, the parameter values of the generative adversarial network model for the 1st group of sample data are the initial parameter values, and the parameter values of the generative adversarial network model for the (i+1)-th group of sample data are the parameter values updated after training on the i-th group of sample data, where i is a natural number and i ≥ 1; for each group of sample data, the following operations are performed:
updating, according to the group of sample data, first parameter values corresponding to the discriminator network model in the generative adversarial network model for that group of sample data, to obtain a first generative adversarial network model;
updating, according to the group of sample data and the first generative adversarial network model, second parameter values corresponding to the generator network model in the first generative adversarial network model, to obtain a second generative adversarial network model; the parameter values in the second generative adversarial network model are the parameter values of the generative adversarial network model updated by this round of training;
determining whether a preset convergence condition is met after this round of training; if so, stopping training and taking the generator network model in the generative adversarial network model obtained after this round of training as the cerebrospinal fluid segmentation model; otherwise, continuing training with the next group of sample data.
In this embodiment, during the training of the cerebrospinal fluid segmentation model, the parameter values of the discriminator network model are updated first, and then the parameter values of the generator network model are updated.
In one example, updating, according to the group of sample data, the first parameter values corresponding to the discriminator network model in the generative adversarial network model for that group of sample data to obtain the first generative adversarial network model may include:
inputting the two-dimensional cerebrospinal fluid image in the group of sample data into the discriminator network model in the generative adversarial network model for that group of sample data, to obtain a first probability value Po1 output by the discriminator network model;
determining, according to the first probability value Po1 and a first preset value Pr1, a first loss value La corresponding to the discriminator network model;
inputting the two-dimensional cerebrospinal fluid image and the two-dimensional simulated adhesion image in the group of sample data into the generator network model in the generative adversarial network model for that group of sample data, to obtain a first image output by the generator network model;
inputting the first image and the two-dimensional cerebrospinal fluid image in the group of sample data into the discriminator network model in the generative adversarial network model for that group of sample data, to obtain a second probability value Po2 output by the discriminator network model;
determining a second loss value Lb according to the second probability value Po2 and a second preset value Pr2;
and updating the first parameter values corresponding to the discriminator network model according to the first loss value and the second loss value, wherein the first generative adversarial network model comprises the generator network model in the generative adversarial network model for that group of sample data and the updated discriminator network model.
In one example, Pr1 = 1 and Pr2 = 0.
The first loss value La and the second loss value Lb may be function values of a cross-entropy loss function, expressed as follows:
−[Pr·log(Po) + (1 − Pr)·log(1 − Po)]    (1)
In this embodiment, the parameter values of the discriminator network model may be updated according to the sum of the first loss value La and the second loss value Lb, i.e., Loss = La + Lb.
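A hedged sketch of one discriminator update step follows: the real CSF image is pushed toward Pr1 = 1 and the generated pair toward Pr2 = 0 using the cross-entropy loss of formula (1). How the two input channels of the discriminator are paired is not fully specified by the patent; the pairing below is an assumption.

```python
# Hedged sketch of one discriminator update (Loss = La + Lb).
import torch
import torch.nn.functional as F

def discriminator_step(G, D, d_optim, csf, adhesion):
    """csf, adhesion: (N, 1, 256, 256) tensors from one group of sample data."""
    d_optim.zero_grad()
    # first probability value Po1: the real two-dimensional CSF image
    # (paired with itself to fill the 2-channel input - an assumption)
    p_o1 = D(torch.cat([csf, csf], dim=1))
    loss_a = F.binary_cross_entropy(p_o1, torch.ones_like(p_o1))   # La, Pr1 = 1
    # first image: generator output for (CSF image, simulated adhesion image)
    fake = G(torch.cat([csf, adhesion], dim=1)).detach()
    # second probability value Po2: generated image paired with the CSF image
    p_o2 = D(torch.cat([fake, csf], dim=1))
    loss_b = F.binary_cross_entropy(p_o2, torch.zeros_like(p_o2))  # Lb, Pr2 = 0
    loss = loss_a + loss_b                                         # Loss = La + Lb
    loss.backward()
    d_optim.step()
    return loss.item()

# typical wiring (assumed): d_optim = torch.optim.Adam(D.parameters(), lr=2e-4)
```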
In one example, updating, according to the group of sample data and the first generative adversarial network model, the second parameter values corresponding to the generator network model in the first generative adversarial network model to obtain the second generative adversarial network model includes:
inputting the two-dimensional cerebrospinal fluid image and the two-dimensional simulated adhesion image in the group of sample data into the generator network model in the first generative adversarial network model, to obtain a second image output by the generator network model;
inputting the second image and the two-dimensional cerebrospinal fluid image in the group of sample data into the discriminator network model in the first generative adversarial network model, to obtain a third probability value Po3 output by the discriminator network model;
determining, according to the third probability value Po3 and the first preset value Pr1, a third loss value Lc corresponding to the discriminator network model;
obtaining a similarity loss value Ld between the second image and the two-dimensional cerebrospinal fluid image in the group of sample data;
and updating the second parameter values corresponding to the generator network model according to the third loss value Lc and the similarity loss value Ld, wherein the second generative adversarial network model comprises the updated generator network model and the discriminator network model in the first generative adversarial network model.
In one example, Pr1=1。
The third loss value Lc may be a function value of the cross-entropy loss function, whose expression is shown in formula (1) above.
The function expression used to calculate the similarity loss value is given in the original publication as an embedded image (formula (2)) and is not recoverable here.
In the present embodiment, the sum of the third loss value Lc and the similarity loss value Ld, Loss = Lc + Ld, may be propagated backwards to update the parameter values of the generator network model.
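A hedged sketch of one generator update step follows. Since the patent's similarity-loss formula survives only as an image, the L1 distance is used below purely as a stand-in for Ld.

```python
# Hedged sketch of one generator update (Loss = Lc + Ld).
import torch
import torch.nn.functional as F

def generator_step(G, D, g_optim, csf, adhesion, ld_weight=1.0):
    g_optim.zero_grad()
    # second image: generator output for (CSF image, simulated adhesion image)
    fake = G(torch.cat([csf, adhesion], dim=1))
    # third probability value Po3
    p_o3 = D(torch.cat([fake, csf], dim=1))
    loss_c = F.binary_cross_entropy(p_o3, torch.ones_like(p_o3))  # Lc, Pr1 = 1
    loss_d = F.l1_loss(fake, csf)                                 # Ld (assumed L1 stand-in)
    loss = loss_c + ld_weight * loss_d                            # Loss = Lc + Ld
    loss.backward()
    g_optim.step()
    return loss.item()
```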
In one example, the convergence condition may be: the loss value corresponding to the discriminator network model in the generative adversarial network model after this round of training is smaller than or equal to a first error value, and the quality difference between the output image of the generator network model in the generative adversarial network model after this round of training and the input two-dimensional cerebrospinal fluid image is smaller than or equal to a second error value.
The first error value and the second error value can be set according to application requirements.
The quality difference value may be calculated by using an image quality difference algorithm in the related art, which is not described herein again.
In the method for generating the cerebrospinal fluid segmentation model provided by the embodiment of the invention, a head CT perfusion imaging CTP image is acquired that either contains no infarct focus or contains an infarct focus not adhered to cerebrospinal fluid; sample data are constructed according to the CTP image; a generative adversarial network model and its initial parameter values are set; the generative adversarial network model is trained with the sample data to obtain a trained generative adversarial network model; and the generator network model in the trained generative adversarial network model is taken as the cerebrospinal fluid segmentation model. In this way, the sample data required to train the cerebrospinal fluid segmentation model can be constructed from normal head CTP images, and the trained model establishes a basis for distinguishing adhered cerebrospinal fluid and infarct foci in CTP images.
Fig. 5 is a flowchart illustrating an application method of the cerebrospinal fluid segmentation model according to an embodiment of the present invention. As shown in fig. 5, in this embodiment, the method for applying the cerebrospinal fluid segmentation model may include:
s501, acquiring a head CT perfusion imaging CTP image of a detected object, wherein the CTP image comprises an infarction focus and the infarction focus is adhered to cerebrospinal fluid; the CTP image is a three-dimensional image.
S502, acquiring an input image according to the CTP image, wherein the input image comprises cerebrospinal fluid and an area with an infarct focus adhesion; the input image is a two-dimensional image.
And S503, inputting the input image into the trained cerebrospinal fluid segmentation model, and obtaining a target cerebrospinal fluid image output by the cerebrospinal fluid segmentation model, wherein the cerebrospinal fluid segmentation model is generated according to any one of the methods for generating the cerebrospinal fluid segmentation model.
In this embodiment, the input image is a two-dimensional actual adhesion image containing a region where cerebrospinal fluid and an infarct focus are adhered.
In one example, acquiring an input image from the CTP image may include:
preprocessing the CTP image to obtain a three-dimensional brain tissue image;
extracting outlines of cerebrospinal fluid and an infarct focus adhesion area from the three-dimensional brain tissue image;
and extracting a two-dimensional actual adhesion image comprising cerebrospinal fluid and an infarct focus adhesion area from the three-dimensional brain tissue image as an input image according to the contour.
The preprocessing process is the same as the preprocessing process in the above-mentioned embodiment of the method for generating a cerebrospinal fluid segmentation model, and is not described herein again.
In one example, the method may further comprise:
and obtaining an infarct focus image according to the target cerebrospinal fluid image and the input image.
For example, subtracting the target cerebrospinal fluid image from the input image may yield an infarct focus image.
It should be noted that, in application, multiple two-dimensional actual adhesion images containing adhesion regions of cerebrospinal fluid and infarct foci, for example m images (m being a natural number), may be extracted from the three-dimensional brain tissue image. The m two-dimensional actual adhesion images are passed through the cerebrospinal fluid segmentation model one by one, yielding m target cerebrospinal fluid images in one-to-one correspondence with them. An infarct focus image is obtained from each two-dimensional actual adhesion image and its corresponding target cerebrospinal fluid image, giving m infarct focus images in total. Splicing the m infarct focus images along the third dimension (the first and second dimensions being the dimensions of each infarct focus image) yields a three-dimensional infarct focus image.
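A minimal sketch of this post-processing, assuming matching per-slice arrays, follows.

```python
# Hedged sketch: per-slice subtraction of the segmented CSF from the actual
# adhesion image, then stacking the m 2D infarct images into a 3D volume.
import numpy as np

def infarct_volume(adhesion_slices, csf_slices):
    """adhesion_slices, csf_slices: lists of m matching 2D arrays."""
    infarct_2d = [a - c for a, c in zip(adhesion_slices, csf_slices)]
    return np.stack(infarct_2d, axis=-1)  # shape (H, W, m)
```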
According to the application method of the cerebrospinal fluid segmentation model provided by the embodiment of the invention, cerebrospinal fluid and infarct foci that are adhered in a CTP image can be accurately distinguished by using the cerebrospinal fluid segmentation model.
Based on the above method embodiment, the embodiment of the present invention further provides corresponding apparatus, device, and storage medium embodiments.
Fig. 6 is a functional block diagram of an apparatus for generating a cerebrospinal fluid segmentation model according to an embodiment of the present invention. As shown in fig. 6, in this embodiment, the generating device of the cerebrospinal fluid segmentation model may include:
a non-adhesion image acquisition module 610, configured to acquire a head CT perfusion imaging CTP image, wherein the CTP image does not include an infarct focus, or the CTP image includes an infarct focus but the infarct focus is not adhered to cerebrospinal fluid; the CTP image is a three-dimensional image;
a construction module 620, configured to construct sample data according to the CTP image, wherein the sample data comprises a two-dimensional simulated adhesion image containing an adhesion region of cerebrospinal fluid and an infarct focus, and a two-dimensional cerebrospinal fluid image;
a setting module 630, configured to set a generative adversarial network model and initial parameter values of the generative adversarial network model;
and a training module 640, configured to train the generative adversarial network model by using the sample data to obtain a trained generative adversarial network model, and to take the generator network model in the trained generative adversarial network model as the cerebrospinal fluid segmentation model.
In one example, the construction module 620 may be specifically configured to:
preprocessing the CTP image to obtain a three-dimensional brain tissue image;
extracting a contour of a maximum connected domain of a cerebrospinal fluid region in the brain tissue image;
taking a point on the contour as an initial point and performing a random-valued walk on the brain tissue image to obtain a three-dimensional simulated adhesion image of adhered cerebrospinal fluid and infarct focus;
and extracting a two-dimensional simulated adhesion image containing an adhesion region of cerebrospinal fluid and an infarct focus from the three-dimensional simulated adhesion image, and extracting the corresponding two-dimensional cerebrospinal fluid image according to the two-dimensional simulated adhesion image and the contour, wherein the two-dimensional simulated adhesion image and the corresponding two-dimensional cerebrospinal fluid image form one group of sample data.
In one example, taking a point on the contour as an initial point and performing a random-valued walk on the brain tissue image to obtain a three-dimensional simulated adhesion image of adhered cerebrospinal fluid and infarct focus includes:
randomly taking a point on the contour as the initial point, and determining a cerebrospinal fluid region mask according to the contour;
in the first random walk, setting a first identifier for the points in a preset-size neighborhood of the initial point that lie outside the cerebrospinal fluid region mask;
in the (j+1)-th random walk, randomly selecting a point from the preset-size neighborhood corresponding to the j-th walk as the target point, and setting the first identifier for the points in the preset-size neighborhood of the target point that lie outside the cerebrospinal fluid region mask, where j is a natural number;
after n random walks, assigning to each point carrying the first identifier a target pixel value, the target pixel value being the pixel value at a randomly selected point inside the contour on the brain tissue image, and taking the assigned brain tissue image as the three-dimensional simulated adhesion image of adhered cerebrospinal fluid and infarct focus.
In one example, preprocessing the CTP image to obtain a three-dimensional brain tissue image comprises:
removing a skull image area from the CTP image to obtain an original three-dimensional brain tissue image;
and denoising the original three-dimensional brain tissue image to obtain a preprocessed three-dimensional brain tissue image.
In one example, the training process of the cerebrospinal fluid segmentation model comprises:
in the training process, the parameter values of the generative adversarial network model for the 1st group of sample data are the initial parameter values, and the parameter values of the generative adversarial network model for the (i+1)-th group of sample data are the parameter values updated after training on the i-th group of sample data, where i is a natural number and i ≥ 1; for each group of sample data, the following operations are performed:
updating, according to the group of sample data, first parameter values corresponding to the discriminator network model in the generative adversarial network model for that group of sample data, to obtain a first generative adversarial network model;
updating, according to the group of sample data and the first generative adversarial network model, second parameter values corresponding to the generator network model in the first generative adversarial network model, to obtain a second generative adversarial network model; the parameter values in the second generative adversarial network model are the parameter values of the generative adversarial network model updated by this round of training;
determining whether a preset convergence condition is met after this round of training; if so, stopping training and taking the generator network model in the generative adversarial network model obtained after this round of training as the cerebrospinal fluid segmentation model; otherwise, continuing training with the next group of sample data.
In one example, updating, according to the group of sample data, the first parameter values corresponding to the discriminator network model in the generative adversarial network model for that group of sample data to obtain the first generative adversarial network model includes:
inputting the two-dimensional cerebrospinal fluid image in the group of sample data into the discriminator network model in the generative adversarial network model for that group of sample data, to obtain a first probability value output by the discriminator network model;
determining a first loss value corresponding to the discriminator network model according to the first probability value and a first preset value;
inputting the two-dimensional cerebrospinal fluid image and the two-dimensional simulated adhesion image in the group of sample data into the generator network model in the generative adversarial network model for that group of sample data, to obtain a first image output by the generator network model;
inputting the first image and the two-dimensional cerebrospinal fluid image in the group of sample data into the discriminator network model in the generative adversarial network model for that group of sample data, to obtain a second probability value output by the discriminator network model;
determining a second loss value according to the second probability value and a second preset value;
and updating the first parameter values corresponding to the discriminator network model according to the first loss value and the second loss value, wherein the first generative adversarial network model comprises the generator network model in the generative adversarial network model for that group of sample data and the updated discriminator network model.
In one example, updating, according to the group of sample data and the first generative adversarial network model, the second parameter values corresponding to the generator network model in the first generative adversarial network model to obtain the second generative adversarial network model includes:
inputting the two-dimensional cerebrospinal fluid image and the two-dimensional simulated adhesion image in the group of sample data into the generator network model in the first generative adversarial network model, to obtain a second image output by the generator network model;
inputting the second image and the two-dimensional cerebrospinal fluid image in the group of sample data into the discriminator network model in the first generative adversarial network model, to obtain a third probability value output by the discriminator network model;
determining a third loss value corresponding to the discriminator network model according to the third probability value and the first preset value;
obtaining a similarity loss value between the second image and the two-dimensional cerebrospinal fluid image in the group of sample data;
and updating the second parameter values corresponding to the generator network model according to the third loss value and the similarity loss value, wherein the second generative adversarial network model comprises the updated generator network model and the discriminator network model in the first generative adversarial network model.
In one example, the convergence condition is: the loss value corresponding to the discriminator network model in the generative adversarial network model after this round of training is smaller than or equal to a first error value, and the quality difference between the output image of the generator network model in the generative adversarial network model after this round of training and the input two-dimensional cerebrospinal fluid image is smaller than or equal to a second error value.
Fig. 7 is a functional block diagram of an apparatus for applying a cerebrospinal fluid segmentation model according to an embodiment of the present invention. As shown in fig. 7, in this embodiment, the device for applying the cerebrospinal fluid segmentation model may include:
an adhesion image acquisition module 710, configured to acquire a head CT perfusion imaging CTP image of a subject, wherein the CTP image includes an infarct focus and the infarct focus is adhered to cerebrospinal fluid; the CTP image is a three-dimensional image;
an input image acquisition module 720, configured to acquire an input image according to the CTP image, wherein the input image includes a region where cerebrospinal fluid and an infarct focus are adhered; the input image is a two-dimensional image;
and a segmentation module 730, configured to input the input image into a trained cerebrospinal fluid segmentation model and obtain a target cerebrospinal fluid image output by the cerebrospinal fluid segmentation model, wherein the cerebrospinal fluid segmentation model is generated according to any one of the above methods for generating a cerebrospinal fluid segmentation model.
In one example, the apparatus may further include:
and the infarct focus image obtaining module is used for obtaining an infarct focus image according to the target cerebrospinal fluid image and the input image.
In one example, the input image acquisition module 720 may be specifically configured to:
preprocessing the CTP image to obtain a three-dimensional brain tissue image;
extracting outlines of cerebrospinal fluid and an infarct focus adhesion area from the three-dimensional brain tissue image;
and extracting a two-dimensional actual adhesion image comprising cerebrospinal fluid and an infarct focus adhesion area from the three-dimensional brain tissue image as an input image according to the contour.
The embodiment of the invention also provides the console equipment. Fig. 8 is a hardware configuration diagram of a console device according to an embodiment of the present invention. As shown in fig. 8, the console device includes: an internal bus 801, and a memory 802, a processor 803, and an external interface 804 connected by the internal bus.
In one example, the memory 802 is configured to store machine readable instructions corresponding to the generation logic of the cerebrospinal fluid segmentation model;
the processor 803 is configured to read the machine-readable instructions on the memory 802 and execute the instructions to implement the following operations:
acquiring a head CT perfusion imaging CTP image, wherein the CTP image does not include an infarct focus, or the CTP image includes an infarct focus but the infarct focus is not adhered to cerebrospinal fluid; the CTP image is a three-dimensional image;
constructing sample data according to the CTP image, wherein the sample data comprises a two-dimensional simulated adhesion image containing an adhesion region of cerebrospinal fluid and an infarct focus, and a two-dimensional cerebrospinal fluid image;
setting a generative adversarial network model and initial parameter values of the generative adversarial network model;
and training the generative adversarial network model by using the sample data to obtain a trained generative adversarial network model, and taking the generator network model in the trained generative adversarial network model as the cerebrospinal fluid segmentation model.
In one example, constructing sample data from the CTP image comprises:
preprocessing the CTP image to obtain a three-dimensional brain tissue image;
extracting a contour of a maximum connected domain of a cerebrospinal fluid region in the brain tissue image;
taking a point on the contour as an initial point and performing a random-valued walk on the brain tissue image to obtain a three-dimensional simulated adhesion image of adhered cerebrospinal fluid and infarct focus;
and extracting a two-dimensional simulated adhesion image containing an adhesion region of cerebrospinal fluid and an infarct focus from the three-dimensional simulated adhesion image, and extracting the corresponding two-dimensional cerebrospinal fluid image according to the two-dimensional simulated adhesion image and the contour, wherein the two-dimensional simulated adhesion image and the corresponding two-dimensional cerebrospinal fluid image form one group of sample data.
In one example, taking a point on the contour as an initial point and performing a random-valued walk on the brain tissue image to obtain a three-dimensional simulated adhesion image of adhered cerebrospinal fluid and infarct focus includes:
randomly taking a point on the contour as the initial point, and determining a cerebrospinal fluid region mask according to the contour;
in the first random walk, setting a first identifier for the points in a preset-size neighborhood of the initial point that lie outside the cerebrospinal fluid region mask;
in the (j+1)-th random walk, randomly selecting a point from the preset-size neighborhood corresponding to the j-th walk as the target point, and setting the first identifier for the points in the preset-size neighborhood of the target point that lie outside the cerebrospinal fluid region mask, where j is a natural number;
after n random walks, assigning to each point carrying the first identifier a target pixel value, the target pixel value being the pixel value at a randomly selected point inside the contour on the brain tissue image, and taking the assigned brain tissue image as the three-dimensional simulated adhesion image of adhered cerebrospinal fluid and infarct focus.
In one example, preprocessing the CTP image to obtain a three-dimensional brain tissue image comprises:
removing a skull image area from the CTP image to obtain an original three-dimensional brain tissue image;
and denoising the original three-dimensional brain tissue image to obtain a preprocessed three-dimensional brain tissue image.
In one example, the training process of the cerebrospinal fluid segmentation model comprises:
during training, the parameter values of the generative adversarial network model corresponding to the 1st group of sample data are the initial parameter values, and the parameter values corresponding to the (i+1)-th group of sample data are the parameter values updated after training on the i-th group, where i is a natural number and i ≥ 1; for each group of sample data, the following operations are performed:
updating, according to the group of sample data, first parameter values corresponding to the discriminative network model in the generative adversarial network model corresponding to that group, to obtain a first generative adversarial network model;
updating, according to the group of sample data and the first generative adversarial network model, second parameter values corresponding to the generative network model in the first generative adversarial network model, to obtain a second generative adversarial network model, whose parameter values are the parameter values of the generative adversarial network model updated by this round of training;
and determining whether a preset convergence condition is met after this round of training; if so, stopping training and taking the generative network model in the resulting generative adversarial network model as the cerebrospinal fluid segmentation model; otherwise, continuing training with the next group of sample data. The alternating loop is sketched below; the two per-step updates are sketched after the passages that follow.
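A minimal sketch of this alternating scheme, assuming hypothetical helpers discriminator_step and generator_step (sketched after the next two passages) and a caller-supplied convergence predicate; none of these names come from the disclosure:

```python
def train_csf_gan(gen, disc, loader, g_opt, d_opt, converged):
    """Alternate discriminator and generator updates per group of sample
    data; stop as soon as the preset convergence condition holds."""
    for sim_img, csf_img in loader:        # one group of sample data
        # step 1: update the discriminative network's (first) parameters
        d_loss = discriminator_step(gen, disc, sim_img, csf_img, d_opt)
        # step 2: update the generative network's (second) parameters
        g_loss = generator_step(gen, disc, sim_img, csf_img, g_opt)
        if converged(d_loss, g_loss):      # preset convergence condition
            break
    return gen                             # the CSF segmentation model
```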
In one example, updating, according to the group of sample data, the first parameter values corresponding to the discriminative network model in the generative adversarial network model corresponding to that group, to obtain the first generative adversarial network model, comprises:
inputting the two-dimensional cerebrospinal fluid image in the group of sample data into the discriminative network model of the generative adversarial network model corresponding to that group, to obtain a first probability value output by the discriminative network model;
determining a first loss value corresponding to the discriminative network model according to the first probability value and a first preset value;
inputting the two-dimensional cerebrospinal fluid image and the two-dimensional simulated adhesion image in the group of sample data into the generative network model of the generative adversarial network model corresponding to that group, to obtain a first image output by the generative network model;
inputting the first image and the two-dimensional cerebrospinal fluid image in the group of sample data into the discriminative network model of the generative adversarial network model corresponding to that group, to obtain a second probability value output by the discriminative network model;
determining a second loss value according to the second probability value and a second preset value;
and updating the first parameter values corresponding to the discriminative network model according to the first loss value and the second loss value, the first generative adversarial network model comprising the generative network model of the generative adversarial network model corresponding to that group and the updated discriminative network model.
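The following PyTorch sketch is one hedged reading of this discriminator update. It assumes the first and second preset values are the real/fake labels 1 and 0, that binary cross-entropy measures each probability's distance from its preset value, and that the discriminator (ending in a sigmoid) scores one image at a time — the text's pairing of the first image with the real CSF image is simplified here to scoring the generated image alone:

```python
import torch
import torch.nn.functional as F

def discriminator_step(gen, disc, sim_img, csf_img, d_opt):
    """Update the discriminative network: push the real CSF image's score
    toward the first preset value (1) and the generated image's score toward
    the second preset value (0)."""
    d_opt.zero_grad()
    p_real = disc(csf_img)                                  # first probability value
    first_loss = F.binary_cross_entropy(p_real, torch.ones_like(p_real))
    with torch.no_grad():                                   # freeze the generator
        first_image = gen(torch.cat([csf_img, sim_img], dim=1))
    p_fake = disc(first_image)                              # second probability value
    second_loss = F.binary_cross_entropy(p_fake, torch.zeros_like(p_fake))
    loss = first_loss + second_loss
    loss.backward()
    d_opt.step()                                            # first parameter values updated
    return loss.item()
```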
In one example, updating, according to the group of sample data and the first generative adversarial network model, the second parameter values corresponding to the generative network model in the first generative adversarial network model, to obtain the second generative adversarial network model, comprises:
inputting the two-dimensional cerebrospinal fluid image and the two-dimensional simulated adhesion image in the group of sample data into the generative network model of the first generative adversarial network model, to obtain a second image output by the generative network model;
inputting the second image and the two-dimensional cerebrospinal fluid image in the group of sample data into the discriminative network model of the first generative adversarial network model, to obtain a third probability value output by the discriminative network model;
determining a third loss value corresponding to the discriminative network model according to the third probability value and the first preset value;
obtaining a similarity loss value between the second image and the two-dimensional cerebrospinal fluid image in the group of sample data;
and updating the second parameter values corresponding to the generative network model according to the third loss value and the similarity loss value, the second generative adversarial network model comprising the updated generative network model and the discriminative network model of the first generative adversarial network model.
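A matching sketch of the generator update; the L1 form of the similarity loss and its weight are assumptions, since the text names only a "similarity loss value" without fixing its form:

```python
import torch
import torch.nn.functional as F

def generator_step(gen, disc, sim_img, csf_img, g_opt, sim_weight=10.0):
    """Update the generative network: the adversarial term (third loss)
    pushes the second image's score toward the first preset value (1),
    while the similarity term pulls the second image toward the real CSF
    image."""
    g_opt.zero_grad()
    second_image = gen(torch.cat([csf_img, sim_img], dim=1))
    p = disc(second_image)                                  # third probability value
    third_loss = F.binary_cross_entropy(p, torch.ones_like(p))
    similarity_loss = F.l1_loss(second_image, csf_img)      # assumed L1 form
    loss = third_loss + sim_weight * similarity_loss
    loss.backward()
    g_opt.step()                                            # only gen's params step
    return loss.item()
```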
In one example, the convergence condition is that, after the current round of training, the loss value corresponding to the discriminative network model in the generative adversarial network model is less than or equal to a first error value, and the quality difference between the image output by the generative network model and the input two-dimensional cerebrospinal fluid image is less than or equal to a second error value.
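Read literally, the condition reduces to two threshold tests; a trivial sketch with placeholder error values, compatible with the training loop above if the generator's returned loss is passed as the quality measure:

```python
def converged(disc_loss, quality_diff, first_error=0.05, second_error=0.05):
    """Stop once the discriminator's loss and the generator's output-quality
    difference both fall within their (placeholder) error bounds."""
    return disc_loss <= first_error and quality_diff <= second_error
```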
In another example, the memory 802 is configured to store machine-readable instructions corresponding to the application logic of the cerebrospinal fluid segmentation model;
the processor 803 is configured to read the machine-readable instructions from the memory 802 and execute them to implement the following operations:
acquiring a head CT perfusion imaging (CTP) image of a subject, wherein the CTP image includes an infarct focus and the infarct focus is adhered to cerebrospinal fluid; the CTP image is a three-dimensional image;
acquiring an input image from the CTP image, wherein the input image includes a region where cerebrospinal fluid and the infarct focus adhere; the input image is a two-dimensional image;
and inputting the input image into the trained cerebrospinal fluid segmentation model to obtain a target cerebrospinal fluid image output by the cerebrospinal fluid segmentation model, the cerebrospinal fluid segmentation model being generated according to the method of any one of claims 1-8.
In one example, the operations further comprise:
obtaining an infarct focus image from the target cerebrospinal fluid image and the input image.
In one example, acquiring an input image from the CTP image comprises:
preprocessing the CTP image to obtain a three-dimensional brain tissue image;
extracting, from the three-dimensional brain tissue image, the contour of the region where cerebrospinal fluid and the infarct focus adhere;
and extracting, from the three-dimensional brain tissue image according to the contour, a two-dimensional actual adhesion image that includes the region where cerebrospinal fluid and the infarct focus adhere, as the input image.
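An end-to-end application sketch, reusing the preprocess_ctp sketch above and assuming a hypothetical extract_adhesion_slice helper that returns the 2-D slice containing the adhesion region; recovering the infarct focus by subtracting the predicted CSF from the input is likewise an assumption of one plausible realization, not the disclosed derivation:

```python
import numpy as np
import torch

def segment_csf(ctp_volume, csf_model, extract_adhesion_slice):
    """Preprocess the 3-D CTP volume, pull the 2-D adhesion slice, run the
    trained generator, and derive an infarct-focus image from the input and
    the predicted CSF."""
    brain = preprocess_ctp(ctp_volume)                  # sketch defined above
    adhesion_2d = extract_adhesion_slice(brain)         # hypothetical helper
    x = torch.from_numpy(adhesion_2d.astype(np.float32))[None, None]  # NCHW
    with torch.no_grad():
        csf = csf_model(x)                              # target CSF image
    infarct = x - csf                                   # assumed derivation
    return csf.squeeze().numpy(), infarct.squeeze().numpy()
```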
An embodiment of the invention further provides a CT system comprising a CT device, a scanning bed, and a console device, wherein:
the CT device is configured to perform a CT perfusion imaging (CTP) scan of a subject to obtain a head CTP image of the subject;
the console device is configured to:
acquire a head CT perfusion imaging CTP image, wherein the CTP image does not include an infarct focus, or the CTP image includes an infarct focus but the infarct focus is not adhered to cerebrospinal fluid; the CTP image is a three-dimensional image;
construct sample data from the CTP image, the sample data comprising a two-dimensional simulated adhesion image of a region where cerebrospinal fluid and an infarct focus adhere, and a two-dimensional cerebrospinal fluid image;
set a generative adversarial network model and initial parameter values of the generative adversarial network model;
and train the generative adversarial network model with the sample data to obtain a trained generative adversarial network model, and take the generative network model in the trained generative adversarial network model as the cerebrospinal fluid segmentation model.
An embodiment of the invention further provides a CT system comprising a CT device, a scanning bed, and a console device, wherein:
the CT device is configured to perform a CT perfusion imaging (CTP) scan of a subject to obtain a head CTP image of the subject;
the console device is configured to:
acquire a head CT perfusion imaging CTP image of the subject, wherein the CTP image includes an infarct focus and the infarct focus is adhered to cerebrospinal fluid; the CTP image is a three-dimensional image;
acquire an input image from the CTP image, wherein the input image includes a region where cerebrospinal fluid and the infarct focus adhere; the input image is a two-dimensional image;
and input the input image into the trained cerebrospinal fluid segmentation model to obtain a target cerebrospinal fluid image output by the cerebrospinal fluid segmentation model, the cerebrospinal fluid segmentation model being generated according to any one of the above methods for generating a cerebrospinal fluid segmentation model.
An embodiment of the invention further provides a computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing any one of the above methods for generating a cerebrospinal fluid segmentation model.
An embodiment of the invention further provides a computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing any one of the above methods for applying a cerebrospinal fluid segmentation model.
Since the device and apparatus embodiments substantially correspond to the method embodiments, reference may be made to the description of the method embodiments for relevant details. The apparatus embodiments described above are merely illustrative: modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical modules; they may be located in one place or distributed across a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution in this specification. Those of ordinary skill in the art can understand and implement the embodiments without inventive effort.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
Other embodiments of the present description will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This specification is intended to cover any variations, uses, or adaptations of the specification following, in general, the principles of the specification and including such departures from the present disclosure as come within known or customary practice within the art to which the specification pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the specification being indicated by the following claims.
It will be understood that the present description is not limited to the precise arrangements described above and shown in the drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of the present description is limited only by the appended claims.
The above description is only a preferred embodiment of the present disclosure, and should not be taken as limiting the present disclosure, and any modifications, equivalents, improvements, etc. made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.

Claims (13)

1. A method for generating a cerebrospinal fluid segmentation model, comprising:
acquiring a head CT perfusion imaging CTP image, wherein the CTP image does not include an infarct focus, or the CTP image includes an infarct focus but the infarct focus is not adhered to cerebrospinal fluid; the CTP image is a three-dimensional image;
constructing sample data from the CTP image, the sample data comprising a two-dimensional simulated adhesion image of a region where cerebrospinal fluid and an infarct focus adhere, and a two-dimensional cerebrospinal fluid image;
setting a generative adversarial network model and initial parameter values of the generative adversarial network model;
and training the generative adversarial network model with the sample data to obtain a trained generative adversarial network model, and taking the generative network model in the trained generative adversarial network model as the cerebrospinal fluid segmentation model.
2. The method of claim 1, wherein constructing sample data from the CTP image comprises:
preprocessing the CTP image to obtain a three-dimensional brain tissue image;
extracting the contour of the largest connected component of the cerebrospinal fluid region in the brain tissue image;
taking a point on the contour as an initial point and performing a random-value walk on the brain tissue image to obtain a three-dimensional simulated adhesion image in which cerebrospinal fluid and an infarct focus adhere;
and extracting, from the three-dimensional simulated adhesion image, a two-dimensional simulated adhesion image that includes the region where cerebrospinal fluid and the infarct focus adhere, and extracting a corresponding two-dimensional cerebrospinal fluid image according to the two-dimensional simulated adhesion image and the contour, the two-dimensional simulated adhesion image and its corresponding two-dimensional cerebrospinal fluid image forming one group of sample data.
3. The method of claim 2, wherein taking a point on the contour as an initial point and performing a random-value walk on the brain tissue image to obtain the three-dimensional simulated adhesion image comprises:
randomly selecting a point on the contour as the initial point, and determining a cerebrospinal fluid region mask from the contour;
in the first random walk, setting a first identifier on those points, within a neighborhood of preset size around the initial point, that lie outside the cerebrospinal fluid region mask;
in the (j+1)-th random walk, randomly selecting a point within the preset-size neighborhood of the j-th walk as a target point, and setting the first identifier on those points, within the preset-size neighborhood of the target point, that lie outside the cerebrospinal fluid region mask, j being a natural number;
after n random walks, assigning to every point bearing the first identifier a target pixel value, the target pixel value being the pixel value, on the brain tissue image, of a randomly selected point inside the contour; and taking the brain tissue image after assignment as the three-dimensional simulated adhesion image of cerebrospinal fluid adhered to an infarct focus.
4. The method of claim 2, wherein preprocessing the CTP image to obtain a three-dimensional brain tissue image comprises:
removing the skull image region from the CTP image to obtain an original three-dimensional brain tissue image;
and denoising the original three-dimensional brain tissue image to obtain the preprocessed three-dimensional brain tissue image.
5. The method of claim 1, wherein the training process of the cerebrospinal fluid segmentation model comprises:
during training, the parameter values of the generative adversarial network model corresponding to the 1st group of sample data are the initial parameter values, and the parameter values corresponding to the (i+1)-th group of sample data are the parameter values updated after training on the i-th group, where i is a natural number and i ≥ 1; for each group of sample data, the following operations are performed:
updating, according to the group of sample data, first parameter values corresponding to the discriminative network model in the generative adversarial network model corresponding to that group, to obtain a first generative adversarial network model;
updating, according to the group of sample data and the first generative adversarial network model, second parameter values corresponding to the generative network model in the first generative adversarial network model, to obtain a second generative adversarial network model, whose parameter values are the parameter values of the generative adversarial network model updated by this round of training;
and determining whether a preset convergence condition is met after this round of training; if so, stopping training and taking the generative network model in the resulting generative adversarial network model as the cerebrospinal fluid segmentation model; otherwise, continuing training with the next group of sample data.
6. The method of claim 5, wherein updating, according to the group of sample data, the first parameter values corresponding to the discriminative network model in the generative adversarial network model corresponding to that group, to obtain the first generative adversarial network model, comprises:
inputting the two-dimensional cerebrospinal fluid image in the group of sample data into the discriminative network model of the generative adversarial network model corresponding to that group, to obtain a first probability value output by the discriminative network model;
determining a first loss value corresponding to the discriminative network model according to the first probability value and a first preset value;
inputting the two-dimensional cerebrospinal fluid image and the two-dimensional simulated adhesion image in the group of sample data into the generative network model of the generative adversarial network model corresponding to that group, to obtain a first image output by the generative network model;
inputting the first image and the two-dimensional cerebrospinal fluid image in the group of sample data into the discriminative network model of the generative adversarial network model corresponding to that group, to obtain a second probability value output by the discriminative network model;
determining a second loss value according to the second probability value and a second preset value;
and updating the first parameter values corresponding to the discriminative network model according to the first loss value and the second loss value, the first generative adversarial network model comprising the generative network model of the generative adversarial network model corresponding to that group and the updated discriminative network model.
7. The method of claim 5, wherein updating, according to the group of sample data and the first generative adversarial network model, the second parameter values corresponding to the generative network model in the first generative adversarial network model, to obtain the second generative adversarial network model, comprises:
inputting the two-dimensional cerebrospinal fluid image and the two-dimensional simulated adhesion image in the group of sample data into the generative network model of the first generative adversarial network model, to obtain a second image output by the generative network model;
inputting the second image and the two-dimensional cerebrospinal fluid image in the group of sample data into the discriminative network model of the first generative adversarial network model, to obtain a third probability value output by the discriminative network model;
determining a third loss value corresponding to the discriminative network model according to the third probability value and the first preset value;
obtaining a similarity loss value between the second image and the two-dimensional cerebrospinal fluid image in the group of sample data;
and updating the second parameter values corresponding to the generative network model according to the third loss value and the similarity loss value, the second generative adversarial network model comprising the updated generative network model and the discriminative network model of the first generative adversarial network model.
8. The method of claim 5, wherein the convergence condition is that, after the current round of training, the loss value corresponding to the discriminative network model in the generative adversarial network model is less than or equal to a first error value, and the quality difference between the image output by the generative network model and the input two-dimensional cerebrospinal fluid image is less than or equal to a second error value.
9. A method for applying a cerebrospinal fluid segmentation model, comprising:
acquiring a head CT perfusion imaging CTP image of a subject, wherein the CTP image includes an infarct focus and the infarct focus is adhered to cerebrospinal fluid; the CTP image is a three-dimensional image;
acquiring an input image from the CTP image, wherein the input image includes a region where cerebrospinal fluid and the infarct focus adhere; the input image is a two-dimensional image;
and inputting the input image into a trained cerebrospinal fluid segmentation model to obtain a target cerebrospinal fluid image output by the cerebrospinal fluid segmentation model, the cerebrospinal fluid segmentation model being generated according to the method of any one of claims 1-8.
10. The method of claim 9, further comprising:
obtaining an infarct focus image from the target cerebrospinal fluid image and the input image.
11. The method of claim 9, wherein acquiring an input image from the CTP image comprises:
preprocessing the CTP image to obtain a three-dimensional brain tissue image;
extracting, from the three-dimensional brain tissue image, the contour of the region where cerebrospinal fluid and the infarct focus adhere;
and extracting, from the three-dimensional brain tissue image according to the contour, a two-dimensional actual adhesion image that includes the region where cerebrospinal fluid and the infarct focus adhere, as the input image.
12. An apparatus for generating a cerebrospinal fluid segmentation model, comprising:
a non-adhesion image acquisition module, configured to acquire a head CT perfusion imaging CTP image, wherein the CTP image does not include an infarct focus, or the CTP image includes an infarct focus but the infarct focus is not adhered to cerebrospinal fluid; the CTP image is a three-dimensional image;
a construction module, configured to construct sample data from the CTP image, the sample data comprising a two-dimensional simulated adhesion image of a region where cerebrospinal fluid and an infarct focus adhere, and a two-dimensional cerebrospinal fluid image;
a setting module, configured to set a generative adversarial network model and initial parameter values of the generative adversarial network model;
and a training module, configured to train the generative adversarial network model with the sample data to obtain a trained generative adversarial network model, the generative network model in the trained generative adversarial network model serving as the cerebrospinal fluid segmentation model.
13. An apparatus for applying a cerebrospinal fluid segmentation model, comprising:
an adhesion image acquisition module, configured to acquire a head CT perfusion imaging CTP image of a subject, wherein the CTP image includes an infarct focus and the infarct focus is adhered to cerebrospinal fluid; the CTP image is a three-dimensional image;
an input image acquisition module, configured to acquire an input image from the CTP image, wherein the input image includes a region where cerebrospinal fluid and the infarct focus adhere; the input image is a two-dimensional image;
and a segmentation module, configured to input the input image into a trained cerebrospinal fluid segmentation model and obtain a target cerebrospinal fluid image output by the cerebrospinal fluid segmentation model, the cerebrospinal fluid segmentation model being generated according to the method of any one of claims 1 to 8.
CN202011449140.9A 2020-12-09 2020-12-09 Generation method, application method and device of cerebrospinal fluid segmentation model Pending CN112419340A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011449140.9A CN112419340A (en) 2020-12-09 2020-12-09 Generation method, application method and device of cerebrospinal fluid segmentation model

Publications (1)

Publication Number Publication Date
CN112419340A true CN112419340A (en) 2021-02-26

Family

ID=74776447

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011449140.9A Pending CN112419340A (en) 2020-12-09 2020-12-09 Generation method, application method and device of cerebrospinal fluid segmentation model

Country Status (1)

Country Link
CN (1) CN112419340A (en)

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050113680A1 (en) * 2003-10-29 2005-05-26 Yoshihiro Ikeda Cerebral ischemia diagnosis assisting apparatus, X-ray computer tomography apparatus, and apparatus for aiding diagnosis and treatment of acute cerebral infarct
US20150125057A1 (en) * 2012-05-04 2015-05-07 Emory University Methods, systems and computer readable storage media storing instructions for imaging and determining information associated with regions of the brain
CN104143190A (en) * 2014-07-24 2014-11-12 东软集团股份有限公司 Method and system for partitioning construction in CT image
CN106056126A (en) * 2015-02-13 2016-10-26 西门子公司 Plaque vulnerability assessment in medical imaging
CN107248162A (en) * 2017-05-18 2017-10-13 杭州全景医学影像诊断有限公司 The method of preparation method and acute cerebral ischemia the image segmentation of acute cerebral ischemia Image Segmentation Model
EP3425589A1 (en) * 2017-07-03 2019-01-09 Jochen Fiebach Method for assessing a likelihood that an ischemia in a brain tissue area results in an infarction of this brain tissue area by image analysis
CN109583440A (en) * 2017-09-28 2019-04-05 北京西格码列顿信息技术有限公司 It is identified in conjunction with image and reports the medical image aided diagnosis method edited and system
CN109190690A (en) * 2018-08-17 2019-01-11 东北大学 The Cerebral microbleeds point detection recognition method of SWI image based on machine learning
CN109726752A (en) * 2018-12-25 2019-05-07 脑玺(上海)智能科技有限公司 The dividing method and system of perfusion dynamic image based on time signal curve
CN110859622A (en) * 2019-11-18 2020-03-06 东软医疗系统股份有限公司 Imaging method and device and nuclear magnetic system
CN111062963A (en) * 2019-12-16 2020-04-24 上海联影医疗科技有限公司 Blood vessel extraction method, system, device and storage medium
CN111489360A (en) * 2020-03-18 2020-08-04 上海商汤智能科技有限公司 Image segmentation method and related equipment
CN111667458A (en) * 2020-04-30 2020-09-15 杭州深睿博联科技有限公司 Method and device for detecting early acute cerebral infarction in flat-scan CT
CN111881720A (en) * 2020-06-09 2020-11-03 山东大学 Data automatic enhancement expansion method, data automatic enhancement identification method and data automatic enhancement expansion system for deep learning
CN111833359A (en) * 2020-07-13 2020-10-27 中国海洋大学 Brain tumor segmentation data enhancement method based on generation of confrontation network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
CHRISTIAN et al.: "AUTOMATED MEASUREMENT OF UPTAKE IN CEREBELLUM, LIVER, AND AORTIC ARCH IN FULL-BODY FDG PET/CT SCANS", MEDICAL PHYSICS, vol. 39, no. 6, pages 3112-3123, XP012161070, DOI: 10.1118/1.4711815 *
E. ANAYA et al.: "Automatic generation of MR-based attenuation map using conditional generative adversarial network for attenuation correction in PET/MR", 2020 IEEE NUCLEAR SCIENCE SYMPOSIUM AND MEDICAL IMAGING CONFERENCE, pages 1-3 *
LIAO Zhenhong et al.: "Brain diffusion-weighted imaging technique based on the GRASE-DWI sequence" (基于GRASE-DWI序列的颅脑扩散加权成像技术), Magnetic Resonance Imaging (磁共振成像), vol. 11, no. 06, pages 433-437 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113936806A (en) * 2021-09-18 2022-01-14 复旦大学 Brain stimulation response model construction method, response method, device and electronic equipment
CN113936806B (en) * 2021-09-18 2024-03-08 复旦大学 Brain stimulation response model construction method, response method, device and electronic equipment
CN117495893A (en) * 2023-12-25 2024-02-02 南京筑卫医学科技有限公司 Skull peeling method based on active contour model
CN117495893B (en) * 2023-12-25 2024-03-19 南京筑卫医学科技有限公司 Skull peeling method based on active contour model

Similar Documents

Publication Publication Date Title
Soni et al. Light weighted healthcare CNN model to detect prostate cancer on multiparametric MRI
US9547902B2 (en) Method and system for physiological image registration and fusion
Kuang et al. Segmenting hemorrhagic and ischemic infarct simultaneously from follow-up non-contrast CT images in patients with acute ischemic stroke
Zhang et al. Multi‐needle localization with attention U‐net in US‐guided HDR prostate brachytherapy
DE102007018763B9 (en) Method for arterial-venous image separation in blood pool contrast media
EP2620909B1 (en) Method, system and computer readable medium for automatic segmentation of a medical image
CN111008984A (en) Method and system for automatically drawing contour line of normal organ in medical image
Harouni et al. Universal multi-modal deep network for classification and segmentation of medical images
CN112419340A (en) Generation method, application method and device of cerebrospinal fluid segmentation model
KR20200082660A (en) Pathological diagnosis method and apparatus based on machine learning
CN113506310A (en) Medical image processing method and device, electronic equipment and storage medium
Fashandi et al. An investigation of the effect of fat suppression and dimensionality on the accuracy of breast MRI segmentation using U‐nets
CN110570394A (en) medical image segmentation method, device, equipment and storage medium
CN113096137A (en) Adaptive segmentation method and system for OCT (optical coherence tomography) retinal image field
CN113822289A (en) Training method, device and equipment of image noise reduction model and storage medium
CN115311193A (en) Abnormal brain image segmentation method and system based on double attention mechanism
Hao et al. Magnetic resonance image segmentation based on multi-scale convolutional neural network
WO2023125828A1 (en) Systems and methods for determining feature points
CN113379770B (en) Construction method of nasopharyngeal carcinoma MR image segmentation network, image segmentation method and device
CN115841472A (en) Method, device, equipment and storage medium for identifying high-density characteristics of middle cerebral artery
CN113192099B (en) Tissue extraction method, device, equipment and medium
KR102639985B1 (en) Method and device for semgneting body component for conveying fluid
Amaludin et al. Toward more accurate diagnosis of multiple sclerosis: Automated lesion segmentation in brain magnetic resonance image using modified U‐Net model
CN113554640A (en) AI model training method, use method, computer device and storage medium
CN112712507A (en) Method and device for determining calcified area of coronary artery

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
Effective date of registration: 20240204
Address after: 110167 No. 177-1 Innovation Road, Hunnan District, Shenyang City, Liaoning Province
Applicant after: Shenyang Neusoft Medical Systems Co.,Ltd.
Country or region after: China
Address before: Room 336, 177-1, Chuangxin Road, Hunnan New District, Shenyang City, Liaoning Province
Applicant before: Shenyang advanced medical equipment Technology Incubation Center Co.,Ltd.
Country or region before: China