CN116228787A - Image sketching method, device, computer equipment and storage medium - Google Patents
- Publication number: CN116228787A
- Application number: CN202211092789.9A
- Authority: CN (China)
- Prior art keywords: image, target, sketching, training set, training
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Classifications
- G06T7/11 — Image analysis; Segmentation; Region-based segmentation
- G06N3/04 — Neural networks; Architecture, e.g. interconnection topology
- G06N3/08 — Neural networks; Learning methods
- G06T7/0012 — Image analysis; Inspection of images; Biomedical image inspection
- G06T2207/10081 — Image acquisition modality; Tomographic images; Computed x-ray tomography [CT]
- G06T2207/10088 — Image acquisition modality; Tomographic images; Magnetic resonance imaging [MRI]
- G06T2207/10104 — Image acquisition modality; Tomographic images; Positron emission tomography [PET]
- G06T2207/10132 — Image acquisition modality; Ultrasound image
- G06T2207/20081 — Special algorithmic details; Training; Learning
- G06T2207/20084 — Special algorithmic details; Artificial neural networks [ANN]
- G06T2207/20092 — Special algorithmic details; Interactive image processing based on input by user
- G06T2207/20132 — Special algorithmic details; Image segmentation details; Image cropping
- G06T2207/30004 — Subject of image; Biomedical image processing
Abstract
The application relates to an image sketching method, an image sketching device, computer equipment and a storage medium. The method comprises the following steps: acquiring an image to be sketched, and performing first sketching processing on a target part included in the image to be sketched to obtain a first image, thereby completing the primary sketching; identifying an anomaly type corresponding to the image to be sketched, and acquiring a target region sketching criterion based on the anomaly type; determining a target region boundary in the first image based on the target region sketching criterion, and cropping the first image based on the target region boundary to obtain a second image, so that redundant parts of the first image are removed and only the core part image related to the anomaly type is retained; and performing second sketching processing on the target part included in the second image to obtain a target image in which the target part is sketched, thereby completing a finer secondary sketching. With this method, the problems of boundary positioning deviation, shrinkage, small broken points and the like in the target region segmentation task can be effectively alleviated, and the accuracy of target region sketching is improved.
Description
Technical Field
The present application relates to the field of medical imaging technology, and in particular, to an image delineating method, an image delineating apparatus, a computer device, a storage medium, and a computer program product.
Background
Medical image segmentation at the current stage can be divided into manual segmentation and automatic segmentation. Manual segmentation has high accuracy and is usually used as the gold standard for segmentation results, but its accuracy depends largely on the prior knowledge of the operator, and the segmentation process is time-consuming and requires considerable effort to delineate. Automatic segmentation of medical images is therefore particularly important and urgent; in particular, since deep learning began to excel in the image field, the accuracy of deep-learning-based automatic segmentation algorithms has improved year by year.
In current automatic target area segmentation tasks, positioning deviation, shrinkage or small broken points usually occur at the boundary of the target area, so the target area delineation is inaccurate.
Disclosure of Invention
In view of the foregoing, it is desirable to provide an image delineating method, apparatus, computer device, computer-readable storage medium, and computer program product capable of improving the accuracy of target region delineation, and likewise a training method, apparatus, computer device, computer-readable storage medium, and computer program product for a segmentation model.
In a first aspect, the present application provides an image delineating method. The method comprises the following steps:
Acquiring an image to be sketched, and performing first sketching treatment on a target part included in the image to be sketched to obtain a first image;
identifying an abnormal type corresponding to the image to be sketched, and acquiring a target area sketching criterion based on the abnormal type;
determining a target zone boundary in the first image based on a target zone sketching criterion, and cutting the first image based on the target zone boundary to obtain a second image;
and carrying out second sketching processing on the target part included in the second image to obtain a target image sketching the target part.
In one embodiment, identifying an anomaly type corresponding to an image to be delineated, and acquiring a target zone delineating criterion based on the anomaly type includes:
identifying an abnormal region corresponding to a target part included in the image to be sketched, and determining the abnormal type of the image to be sketched based on the abnormal region;
and acquiring, from a plurality of target region sketching criteria, the sketching criterion corresponding to the abnormality type based on the correspondence between abnormality types and sketching criteria, and taking the acquired criterion as the target region sketching criterion.
In one embodiment, determining a target zone boundary in the first image based on target zone delineation criteria includes:
Determining a target zone boundary and a target zone center in the first image according to a target zone sketching criterion;
acquiring an image block with a fixed size from a first image according to the center of a target area;
and reserving the part of the image block within the boundary range of the target area according to the target area sketching criteria, and removing the part outside the boundary of the target area.
In a second aspect, the present application further provides an image sketching apparatus. The device comprises:
the first sketching module is used for acquiring an image to be sketched, and performing first sketching treatment on a target part included in the image to be sketched to obtain a first image;
the criterion matching module is used for identifying the abnormal type corresponding to the image to be sketched and acquiring a target zone sketching criterion based on the abnormal type;
the image processing module is used for determining a target zone boundary in the first image based on a target zone sketching criterion, and cutting the first image based on the target zone boundary to obtain a second image;
and the second sketching module is used for carrying out second sketching processing on the target part included in the second image to obtain a target image for sketching the target part.
In one embodiment, the criterion matching module is further configured to identify an abnormal region corresponding to the target part included in the image to be sketched, and determine the abnormality type of the image to be sketched based on the abnormal region; and to acquire, from a plurality of target region sketching criteria, the sketching criterion corresponding to the abnormality type based on the correspondence between abnormality types and sketching criteria, and to take the acquired criterion as the target region sketching criterion.
In one embodiment, the image processing module is further configured to determine a target volume boundary and a target volume center in the first image according to a target volume delineation criterion; acquiring an image block with a fixed size from a first image according to the center of a target area; and reserving the part of the image block within the boundary range of the target area according to the target area sketching criteria, and removing the part outside the boundary of the target area.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor which when executing the computer program performs the steps of:
acquiring an image to be sketched, and performing first sketching treatment on a target part included in the image to be sketched to obtain a first image;
identifying an abnormal type corresponding to the image to be sketched, and acquiring a target area sketching criterion based on the abnormal type;
determining a target zone boundary in the first image based on a target zone sketching criterion, and cutting the first image based on the target zone boundary to obtain a second image;
and carrying out second sketching processing on the target part included in the second image to obtain a target image sketching the target part.
In a fourth aspect, the present application also provides a computer-readable storage medium. The computer-readable storage medium has stored thereon a computer program which, when executed by a processor, performs the steps of:
acquiring an image to be sketched, and performing first sketching treatment on a target part included in the image to be sketched to obtain a first image;
identifying an abnormal type corresponding to the image to be sketched, and acquiring a target area sketching criterion based on the abnormal type;
determining a target zone boundary in the first image based on a target zone sketching criterion, and cutting the first image based on the target zone boundary to obtain a second image;
and carrying out second sketching processing on the target part included in the second image to obtain a target image sketching the target part.
In a fifth aspect, the present application also provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the steps of:
acquiring an image to be sketched, and performing first sketching treatment on a target part included in the image to be sketched to obtain a first image;
identifying an abnormal type corresponding to the image to be sketched, and acquiring a target area sketching criterion based on the abnormal type;
Determining a target zone boundary in the first image based on a target zone sketching criterion, and cutting the first image based on the target zone boundary to obtain a second image;
and carrying out second sketching processing on the target part included in the second image to obtain a target image sketching the target part.
The image sketching method, the image sketching device, the computer equipment, the storage medium and the computer program product are used for obtaining an image to be sketched, and performing first sketching treatment on a target part included in the image to be sketched to obtain a first image, so that primary sketching is completed; identifying an abnormal type corresponding to the image to be sketched, and acquiring a target area sketching criterion based on the abnormal type; determining a target zone boundary in the first image based on a target zone sketching criterion, and cutting the first image based on the target zone boundary to obtain a second image, so that redundant parts in the first image can be removed, and only core part images related to abnormal types are reserved; and carrying out second sketching treatment on the target part included in the second image to obtain a target image for sketching the target part, and finishing finer secondary sketching. The problems of boundary positioning deviation, shrinkage, small broken points and the like in a target region segmentation task can be effectively solved, and the target region sketching accuracy is improved.
In a sixth aspect, the present application further provides a training method for a segmentation model. The method comprises the following steps:
acquiring a first training set, wherein the first training set comprises a plurality of sample medical images and a first target area sketching image corresponding to each sample medical image;
training the first initial neural network based on the first training set, and taking the trained first initial neural network as a first segmentation model; the first segmentation model is used for carrying out first sketching treatment on a target part included in the image to be sketched to obtain a first image;
acquiring a second training set based on the first training set, wherein the second training set comprises a plurality of sample medical images after cutting processing and a second target area sketching image corresponding to each sample medical image after cutting processing;
training a second initial neural network based on a second training set, and taking the trained second initial neural network as a second segmentation model; the second segmentation model is used for carrying out second delineating processing on the target part included in the first image after the clipping processing to obtain a target image for delineating the target part.
In one embodiment, obtaining the second training set based on the first training set includes:
Determining target zone boundaries of each sample medical image based on the first target zone delineation image corresponding to each sample medical image in the first training set;
cutting each sample medical image based on the target area boundary to obtain a plurality of sample segmentation images;
acquiring a second target area sketch image corresponding to each sample segmentation image;
and constructing a second training set based on the sample segmentation images and the second target region sketch images corresponding to the sample segmentation images.
In one embodiment, determining a target boundary for each sample medical image based on a first target delineation image corresponding to each sample medical image in a first training set comprises:
identifying the abnormality types corresponding to the medical images of the samples respectively, and acquiring target area sketching criteria corresponding to the medical images of the samples respectively based on the abnormality types corresponding to the medical images of the samples respectively;
and determining the target zone boundary of each sample medical image based on the target zone delineation criteria corresponding to each sample medical image.
In one embodiment, identifying an anomaly type corresponding to each sample medical image, and acquiring a target region delineation criterion corresponding to each sample medical image based on the anomaly type corresponding to each sample medical image, including:
Identifying corresponding abnormal areas in each sample medical image, and determining corresponding abnormal types in each sample medical image based on the abnormal areas;
and acquiring, from a plurality of target region sketching criteria, the sketching criterion corresponding to each sample medical image based on the correspondence between anomaly types and sketching criteria, and taking the acquired criteria as the target region sketching criteria respectively corresponding to the sample medical images.
In a seventh aspect, the present application further provides a training apparatus for a segmentation model. The apparatus comprises:
the first construction module is used for acquiring a first training set, wherein the first training set comprises a plurality of sample medical images and a first target area sketching image corresponding to each sample medical image;
the first training module is used for training the first initial neural network based on the first training set, and taking the trained first initial neural network as a first segmentation model; the first segmentation model is used for carrying out first sketching treatment on a target part included in the image to be sketched to obtain a first image;
the second construction module is used for acquiring a second training set based on the first training set, wherein the second training set comprises a plurality of sample medical images after cutting processing and a second target area sketch image corresponding to each sample medical image after cutting processing;
The second training module is used for training the second initial neural network based on the second training set, and taking the trained second initial neural network as a second segmentation model; the second segmentation model is used for carrying out second delineating processing on the target part included in the first image after the clipping processing to obtain a target image for delineating the target part.
In one embodiment, the second construction module is further configured to determine a target boundary of each sample medical image based on the first target delineation image corresponding to each sample medical image in the first training set; cutting each sample medical image based on the target area boundary to obtain a plurality of sample segmentation images; acquiring a second target area sketch image corresponding to each sample segmentation image; and constructing a second training set based on the sample segmentation images and the second target region sketch images corresponding to the sample segmentation images.
In one embodiment, the second construction module is further configured to identify an anomaly type corresponding to each of the sample medical images, and obtain a target region delineation criterion corresponding to each of the sample medical images based on the anomaly type corresponding to each of the sample medical images; and determining the target zone boundary of each sample medical image based on the target zone delineation criteria corresponding to each sample medical image.
In one embodiment, the second construction module is further configured to identify an abnormal region corresponding to each sample medical image, and determine the anomaly type corresponding to each sample medical image based on the abnormal region; and to acquire, from a plurality of target region sketching criteria, the sketching criterion corresponding to each sample medical image based on the correspondence between anomaly types and sketching criteria, and to take the acquired criteria as the target region sketching criteria respectively corresponding to the sample medical images.
In an eighth aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor which when executing the computer program performs the steps of:
acquiring a first training set, wherein the first training set comprises a plurality of sample medical images and a first target area sketching image corresponding to each sample medical image;
training the first initial neural network based on the first training set, and taking the trained first initial neural network as a first segmentation model; the first segmentation model is used for carrying out first sketching treatment on a target part included in the image to be sketched to obtain a first image;
acquiring a second training set based on the first training set, wherein the second training set comprises a plurality of sample medical images after cutting processing and a second target area sketching image corresponding to each sample medical image after cutting processing;
Training a second initial neural network based on a second training set, and taking the trained second initial neural network as a second segmentation model; the second segmentation model is used for carrying out second delineating processing on the target part included in the first image after the clipping processing to obtain a target image for delineating the target part.
In a ninth aspect, the present application also provides a computer-readable storage medium. The computer-readable storage medium has stored thereon a computer program which, when executed by a processor, performs the steps of:
acquiring a first training set, wherein the first training set comprises a plurality of sample medical images and a first target area sketching image corresponding to each sample medical image;
training the first initial neural network based on the first training set, and taking the trained first initial neural network as a first segmentation model; the first segmentation model is used for carrying out first sketching treatment on a target part included in the image to be sketched to obtain a first image;
acquiring a second training set based on the first training set, wherein the second training set comprises a plurality of sample medical images after cutting processing and a second target area sketching image corresponding to each sample medical image after cutting processing;
Training a second initial neural network based on a second training set, and taking the trained second initial neural network as a second segmentation model; the second segmentation model is used for carrying out second delineating processing on the target part included in the first image after the clipping processing to obtain a target image for delineating the target part.
In a tenth aspect, the present application also provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the steps of:
acquiring a first training set, wherein the first training set comprises a plurality of sample medical images and a first target area sketching image corresponding to each sample medical image;
training the first initial neural network based on the first training set, and taking the trained first initial neural network as a first segmentation model; the first segmentation model is used for carrying out first sketching treatment on a target part included in the image to be sketched to obtain a first image;
acquiring a second training set based on the first training set, wherein the second training set comprises a plurality of sample medical images after cutting processing and a second target area sketching image corresponding to each sample medical image after cutting processing;
Training a second initial neural network based on a second training set, and taking the trained second initial neural network as a second segmentation model; the second segmentation model is used for carrying out second delineating processing on the target part included in the first image after the clipping processing to obtain a target image for delineating the target part.
The training method, the training device, the computer equipment, the storage medium and the computer program product of the segmentation model acquire a first training set, train a first initial neural network based on the first training set, and take the trained first initial neural network as a first segmentation model; and acquiring a second training set based on the first training set, training a second initial neural network based on the second training set, and taking the trained second initial neural network as a second segmentation model. The first segmentation model and the second segmentation model are used for carrying out target region sketching on the image to be sketched twice, so that the problems of boundary positioning deviation, shrinkage, small broken points and the like in a target region segmentation task can be effectively solved, and the target region sketching accuracy is improved.
Drawings
FIG. 1 is a flow diagram of a method of image delineation in one embodiment;
FIG. 2 is a schematic diagram showing the effect of an image delineating method in one embodiment;
FIG. 3 is a flow chart of a training method of a segmentation model in one embodiment;
FIG. 4 is an overall training flow diagram of a training method for a segmentation model in one embodiment;
FIG. 5 is a block diagram of an image delineating device in one embodiment;
FIG. 6 is a block diagram of a training apparatus for segmentation models in one embodiment;
fig. 7 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
In one embodiment, as shown in fig. 1, an image delineating method is provided, which solves the problems of boundary positioning deviation, shrinkage, small broken points and the like by combining two rounds of delineation with target region delineation criteria on the basis of conventional medical image segmentation. This embodiment is described by taking the application of the method to a computer device as an example; it can be understood that the computer device may be a terminal or a server. The terminal may be, but is not limited to, a personal computer, a notebook computer, a smart phone, a tablet computer, an Internet of Things device or a portable wearable device; the Internet of Things device may be a smart speaker, a smart television, a smart air conditioner, smart medical equipment and the like, and the portable wearable device may be a smart watch, a smart bracelet, a headset, or the like. The server may be implemented as a stand-alone server or as a server cluster composed of a plurality of servers. In this embodiment, the method includes the following steps:
Step 102, acquiring an image to be sketched, and performing first sketching processing on a target part included in the image to be sketched to obtain a first image.
The image to be delineated can be a medical image, for example a single-modality image, including functional and structural images such as CT (Computed Tomography), MRI (Magnetic Resonance Imaging), PET (Positron Emission Tomography) and ultrasound images. The target part refers to a body part or body region of a patient for which a medical examination is required, including but not limited to organs, tissues, target regions and the like of a human or animal; for example, when a patient requires tumor radiation therapy, the target part may be a focal target region and/or a high-risk target region of the patient. The image to be sketched can also be a multi-modality image, that is, a collection of multi-modality medical image data. The image to be sketched may also be a non-medical image, including but not limited to an optical image, an infrared image and the like.
Optionally, the manner in which the computer device performs the first delineation process on the image to be delineated includes, but is not limited to, employing an image segmentation model. The computer equipment inputs the image to be sketched into an image segmentation model to obtain a first image with a sketching mark, and the first sketching processing is completed. The first delineating process is equivalent to rough target region segmentation of the target region in the image to be delineated.
And step 104, identifying the abnormal type corresponding to the image to be sketched, and acquiring a target area sketching criterion based on the abnormal type.
The anomaly type may be a lesion type of the target part; for example, the anomaly type may be a tumor, a muscle tear, a fracture or the like. The anomaly type may also be the category or size of the target part, for example different delineation criteria may be selected for different organs; it may also be the type of the image, for example a sketching criterion selected according to the image modality; or it may be determined from patient information corresponding to the image, such as the patient's age, sex and other patient information. The target region delineation criterion is determined according to the anomaly type and specifies how the target region boundary in the image to be sketched is cut; different delineation criteria are generally required for different medical requirements. For example, a target region delineation criterion acquired for the fracture anomaly type may only need to ensure that the contour of the damaged bone is within the cutting boundary, whereas a criterion acquired for the tumor anomaly type needs to ensure that the tumor and the entire treatment radiation region are within the cutting boundary. More specifically, the target region delineation criterion acquired for the breast cancer anomaly type uses the lower boundary of the collarbone head in the image to be sketched (a breast cancer CT image) as the upper cutting boundary, and the horizontal position of the locating point (or lead point) as the lower cutting boundary.
Optionally, the computer device may directly perform image recognition on the image to be sketched to obtain an anomaly type; the abnormal type can be identified according to text information (labels, image names and image description texts) corresponding to the images to be sketched; the anomaly type corresponding to the current image to be sketched can also be directly input from the outside. After the computer equipment determines the abnormality type corresponding to the image to be sketched, a target area sketching criterion matched with the current abnormality type is selected from a plurality of preset target area sketching criteria.
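By way of illustration only, such a correspondence between anomaly types and delineation criteria can be organized as a simple lookup table. The following Python sketch assumes hypothetical names (`TargetVolumeCriterion`, `select_criterion`) and paraphrases the boundary rules mentioned above as plain strings; it is not an implementation disclosed in the application.

```python
from dataclasses import dataclass

@dataclass
class TargetVolumeCriterion:
    """Hypothetical container for a target-region delineation criterion."""
    name: str
    # Textual stand-in for the real boundary logic attached to the criterion.
    boundary_rule: str

# Illustrative mapping from anomaly type to delineation criterion.
CRITERIA = {
    "fracture": TargetVolumeCriterion(
        name="fracture",
        boundary_rule="keep the contour of the damaged bone inside the crop"),
    "tumor": TargetVolumeCriterion(
        name="tumor",
        boundary_rule="keep the tumor and the whole treatment radiation "
                      "region inside the crop"),
    "breast_cancer": TargetVolumeCriterion(
        name="breast_cancer",
        boundary_rule="upper crop boundary at the lower edge of the collarbone "
                      "head; lower crop boundary at the locating-point level"),
}

def select_criterion(anomaly_type: str) -> TargetVolumeCriterion:
    """Look up the delineation criterion matching the identified anomaly type."""
    return CRITERIA[anomaly_type]
```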
And step 106, determining a target zone boundary in the first image based on a target zone sketching criterion, and clipping the first image based on the target zone boundary to obtain a second image.
Wherein the target area boundary range comprises a complete target part.
Optionally, the computer device determines a target boundary in the first image according to a target delineation criterion, where the target boundary may be a regular boundary including, but not limited to, an upper boundary, a lower boundary, a left boundary, and a right boundary, or may be an irregular boundary formed by an irregular curve surrounding the target portion, cuts the first image based on the target boundary, retains a portion within the target boundary, and removes a portion outside the target boundary as the second image.
In one possible embodiment, the computer device determines a target volume boundary and a center in the first image according to target volume delineation criteria, the target volume boundary including an upper boundary, a lower boundary, a left boundary, a right boundary, a front boundary, and a rear boundary. Firstly, an image block with a fixed size (or a specific distance from a boundary) is taken out of a first image according to the center of a target area, then, the part of the image block within a specific boundary range is reserved according to a target area sketching rule, and the part outside the specific boundary range of the target area is removed. The fixed size may be determined based on target delineation criteria, or may be determined in advance or manually.
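As a rough numerical sketch of this cropping step, the following NumPy-based helpers extract a fixed-size block centered on the target-volume center and then suppress voxels outside a boundary mask; the function names, the (D, H, W) axis convention and the boundary-mask representation are illustrative assumptions, not details taken from the application.

```python
import numpy as np

def crop_around_center(volume: np.ndarray, center: tuple, size: tuple) -> np.ndarray:
    """Extract a fixed-size block from a 3D (D, H, W) volume centered on the target-volume center.

    The block is clipped to the volume extent, so blocks near the edge may be smaller.
    """
    starts = [max(c - s // 2, 0) for c, s in zip(center, size)]
    ends = [min(st + s, dim) for st, s, dim in zip(starts, size, volume.shape)]
    return volume[starts[0]:ends[0], starts[1]:ends[1], starts[2]:ends[2]]

def mask_outside_boundary(block: np.ndarray, boundary_mask: np.ndarray,
                          fill_value: float = 0.0) -> np.ndarray:
    """Keep voxels inside the target-region boundary mask (same shape as block) and remove the rest."""
    out = block.copy()
    out[~boundary_mask.astype(bool)] = fill_value
    return out
```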
And step 108, performing second sketching processing on the target part included in the second image to obtain a target image sketching the target part.
Optionally, the manner in which the computer device performs the second delineation process on the second image includes, but is not limited to, employing an image segmentation model. The computer equipment inputs the second image into the image segmentation model to obtain a second image with a sketch mark, the sketch mark can sketch a target part, and the second image with the sketch mark is taken as a target image. The second delineating process is equivalent to fine target region segmentation of the target region in the image to be delineated.
In a possible embodiment, before the second sketching process is performed on the target portion included in the second image, the pixel value of the designated area in the second image may be set to a preset fixed value, where the designated area refers to an area that does not include the target portion, and the preset fixed value may be set to 0.
In one possible embodiment, the second delineation process may be followed by a further number of delineation processes on the target image. For example, a first sketch may be performed first, a second sketch may be performed on the basis of the first sketch, and a third sketch may be performed on the basis of the second sketch; the image after the first sketching treatment can be divided into two area images of a first part and a second part, the first part is sketched in a second way, and the second part is sketched in a third way; the image segmentation model chosen for each delineation may be different.
In the image sketching method, the image to be sketched is acquired, and first sketching processing is performed on the target part included in the image to be sketched to obtain the first image, so that the primary sketching is completed; the anomaly type corresponding to the image to be sketched is identified, and a target region sketching criterion is acquired based on the anomaly type; a target region boundary is determined in the first image based on the target region sketching criterion, and the first image is cropped based on the target region boundary to obtain the second image, so that redundant parts of the first image can be removed and only the core part image related to the anomaly type is retained; and second sketching processing is performed on the target part included in the second image to obtain a target image in which the target part is sketched, completing a finer secondary sketching. The problems of boundary positioning deviation, shrinkage, small broken points and the like in the target region segmentation task can thereby be effectively alleviated, and the accuracy of target region sketching is improved.
In one embodiment, the first delineating process of the target part included in the image to be delineated is performed by a first segmentation model, and the obtaining manner of the first segmentation model includes: acquiring a first training set, wherein the first training set comprises a plurality of sample medical images and a first target area sketching image corresponding to each sample medical image; and training the first initial neural network based on the first training set, and taking the trained first initial neural network as a first segmentation model.
Optionally, the computer device builds a plurality of cascaded neural networks (corresponding to the first initial neural network) with a 3D U-Net architecture, uses the plurality of images with sketching labels as the first training set, and trains each cascaded neural network on the first training set. An Adam optimizer may be employed to optimize the neural network parameters, with cross entropy employed as the loss function to guide the image segmentation toward high-specificity delineation information. A plurality of images with sketching labels are then used as a test set, each trained cascaded neural network is tested on the test set to obtain an evaluation parameter for each trained network, and the model with the best evaluation parameter is selected as the first segmentation model. The performance of each model may be evaluated using the DICE coefficient (a set similarity metric), and the model with the highest DICE coefficient on the test set is taken as the first segmentation model.
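For concreteness, a minimal PyTorch-style training routine consistent with the setup described above (Adam optimizer, voxel-wise cross-entropy loss) might look as follows; the model class, data loader, learning rate and epoch count are placeholder assumptions rather than values disclosed here.

```python
import torch
import torch.nn as nn

def train_segmentation_model(model, loader, epochs=100, lr=1e-4, device="cuda"):
    """Train one candidate 3D segmentation network with Adam and cross-entropy loss."""
    model = model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()              # voxel-wise segmentation loss
    for _ in range(epochs):
        model.train()
        for images, labels in loader:              # images: (B,1,D,H,W), labels: (B,D,H,W) int64
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            logits = model(images)                 # (B, num_classes, D, H, W)
            loss = criterion(logits, labels)
            loss.backward()
            optimizer.step()
    return model
```

The same routine would be applied to each cascaded network; the choice among the trained candidates is then made on the test set, as sketched further below.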
In this embodiment, a first training set is acquired, where the first training set includes a plurality of sample medical images and a first target area sketching image corresponding to each sample medical image; and training the first initial neural network based on the first training set, and taking the trained first initial neural network as a first segmentation model. The rough segmentation model can be obtained, and the first sketching of the target part in the image to be sketched can be realized through the rough segmentation model.
Furthermore, on the basis of this embodiment, delineation can be performed additional times using the same method, so as to obtain a more accurate delineation result.
In one embodiment, obtaining a first training set includes: extracting an image based on the medical image, and preprocessing the extracted image to obtain a sample medical image, wherein the preprocessing comprises at least one of image enhancement, overturning, translation and rotation; acquiring a first target area sketch image corresponding to a sample medical image; a training example is obtained based on the sample medical image and the first target region delineation image, and a first training set is constructed based on the plurality of training examples.
Optionally, the computer device pre-processes the medical image containing the desired delineated target site, wherein the pre-processing comprises: the image data format is converted, the image data is normalized, the data is divided into a training set and a testing set according to proportion, and the training set data is enhanced, including turning, translation and rotation. And taking the preprocessed medical images as sample medical images, adding a sketching label of a target area to each sample medical image, and obtaining first target area sketching images corresponding to each sample medical image. Each set of sample medical images and the first target volume delineation image are taken as one training instance (training sample), and a first training set is constructed based on a plurality of training instances.
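A simplified sketch of such a preprocessing pipeline is given below, assuming NumPy/SciPy-style operations on (D, H, W) volumes; the normalization scheme, the 80/20 split ratio and the augmentation magnitudes are illustrative assumptions only.

```python
import numpy as np
from scipy import ndimage

def normalize(volume: np.ndarray) -> np.ndarray:
    """Scale intensities to zero mean, unit variance (one common normalization)."""
    return (volume - volume.mean()) / (volume.std() + 1e-6)

def split_dataset(pairs, train_ratio=0.8, seed=0):
    """Split (image, label) pairs into training and test sets by proportion."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(pairs))
    cut = int(len(pairs) * train_ratio)
    return [pairs[i] for i in idx[:cut]], [pairs[i] for i in idx[cut:]]

def augment(image: np.ndarray, label: np.ndarray):
    """Random flip, small translation and rotation applied identically to image and label."""
    if np.random.rand() < 0.5:                          # left-right flip
        image, label = image[..., ::-1], label[..., ::-1]
    shift = np.random.uniform(-5, 5, size=image.ndim)   # translation in voxels
    image = ndimage.shift(image, shift, order=1)
    label = ndimage.shift(label, shift, order=0)        # nearest-neighbour for labels
    angle = np.random.uniform(-10, 10)                  # in-plane rotation in degrees
    image = ndimage.rotate(image, angle, axes=(1, 2), reshape=False, order=1)
    label = ndimage.rotate(label, angle, axes=(1, 2), reshape=False, order=0)
    return image, label
```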
In this embodiment, image extraction is performed based on a medical image, and an image obtained by the extraction is preprocessed to obtain a sample medical image, where the preprocessing includes at least one of image enhancement, overturning, translation and rotation; acquiring a first target area sketch image corresponding to a sample medical image; a training example is obtained based on the sample medical image and the first target region delineation image, and a first training set is constructed based on the plurality of training examples. By training the image segmentation model using the first training set, a rough segmentation model can be obtained.
In one embodiment, the second delineating the target portion included in the second image is performed by using a second segmentation model, where the second segmentation model is obtained by: acquiring a second training set based on the first training set, wherein the second training set comprises a plurality of sample medical images after cutting processing and a second target area sketching image corresponding to each sample medical image after cutting processing; and training the second initial neural network based on the second training set, and taking the trained second initial neural network as a second segmentation model.
Optionally, the computer device builds a plurality of cascaded neural networks again (corresponding to the second initial neural network) with a 3D U-Net architecture, performs boundary cropping on the plurality of images with sketching labels in the first training set so that, as far as possible, only the target part is retained in the cropped images, uses the cropped images as the second training set, and trains each cascaded neural network on the second training set. An Adam optimizer may be employed to optimize the neural network parameters, with cross entropy employed as the loss function to guide the image segmentation toward high-specificity delineation information. A plurality of cropped images with sketching labels are then used as a test set, each trained cascaded neural network is tested on the test set to obtain an evaluation parameter for each trained network, and the model with the best evaluation parameter is selected as the second segmentation model. The DICE coefficient may be used to evaluate the performance of each model, and the model with the highest DICE coefficient on the test set is taken as the second segmentation model.
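The DICE-based model selection mentioned above can be sketched as follows; `predict_fn` and the candidate/test-pair containers are illustrative assumptions, while the DICE formula itself is the standard set-similarity definition.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-6) -> float:
    """DICE = 2 * |A ∩ B| / (|A| + |B|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

def select_best_model(candidates, test_pairs, predict_fn):
    """Keep the trained candidate with the highest mean DICE on the test set.

    `candidates` are trained networks, `test_pairs` are (image, label-mask) tuples,
    and `predict_fn(model, image)` returns a binary mask (all illustrative names).
    """
    scores = []
    for model in candidates:
        dices = [dice_coefficient(predict_fn(model, img), gt) for img, gt in test_pairs]
        scores.append(float(np.mean(dices)))
    return candidates[int(np.argmax(scores))]
```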
In this embodiment, a second training set is acquired based on the first training set, where the second training set includes a plurality of cropped sample medical images and a second target region sketch image corresponding to each cropped sample medical image; the second initial neural network is trained based on the second training set, and the trained second initial neural network is taken as the second segmentation model. A fine segmentation model can thus be obtained, and the second sketching of the target part in the image to be sketched can be realized through this fine segmentation model.
In one embodiment, obtaining the second training set based on the first training set includes: determining target zone boundaries of each sample medical image based on the first target zone delineation image corresponding to each sample medical image in the first training set; cutting each sample medical image based on the target area boundary to obtain a plurality of sample segmentation images; acquiring a second target area sketch image corresponding to each sample segmentation image; and constructing a second training set based on the sample segmentation images and the second target region sketch images corresponding to the sample segmentation images.
Optionally, the computer device obtains one training instance in the first training set, that is, a pair of sample medical images and corresponding first target volume sketching images, determines target volume boundaries of the pair of images based on abnormality types corresponding to the pair of images, and cuts the sample medical images and the first target volume sketching images according to the target volume boundaries, respectively, to obtain one training instance (training sample) in the second training set. The computer device processes each training instance in the first training set using the same method to obtain a second training set.
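A minimal sketch of this construction of the second training set is shown below, assuming the crop boundary is available as axis-aligned index ranges; `boundary_fn` and the pair-based data layout are illustrative assumptions rather than details from the application.

```python
import numpy as np

def crop_with_boundary(volume: np.ndarray, boundary) -> np.ndarray:
    """Crop a 3D array with a boundary given as ((z0, z1), (y0, y1), (x0, x1))."""
    (z0, z1), (y0, y1), (x0, x1) = boundary
    return volume[z0:z1, y0:y1, x0:x1]

def build_second_training_set(first_training_set, boundary_fn):
    """Derive the fine-stage training set from the coarse-stage one.

    `first_training_set` holds (sample_image, target_volume_label) pairs and
    `boundary_fn(label)` returns the crop boundary implied by the matched
    delineation criterion (both names are illustrative).
    """
    second_training_set = []
    for image, label in first_training_set:
        boundary = boundary_fn(label)
        second_training_set.append(
            (crop_with_boundary(image, boundary), crop_with_boundary(label, boundary)))
    return second_training_set
```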
In this embodiment, a target region boundary of each sample medical image is determined based on a first target region delineating image corresponding to each sample medical image in a first training set; cutting each sample medical image based on the target area boundary to obtain a plurality of sample segmentation images; acquiring a second target area sketch image corresponding to each sample segmentation image; and constructing a second training set based on the sample segmentation images and the second target region sketch images corresponding to the sample segmentation images. A second training set associated with the first training set can be obtained, and by training the image segmentation model using the second training set, a sub-segmentation model associated with the coarse segmentation model can be obtained.
In one embodiment, identifying the anomaly type corresponding to the image to be delineated and obtaining the target region delineation criterion based on the anomaly type includes: identifying an abnormal region corresponding to the target part included in the image to be sketched, and determining the anomaly type of the image to be sketched based on the abnormal region; and acquiring, from a plurality of target region sketching criteria, the sketching criterion corresponding to the anomaly type based on the preset correspondence between anomaly types and sketching criteria, and taking the acquired criterion as the target region sketching criterion.
Wherein, the abnormal region refers to an organ, a small-range body part or a body region of a patient in need of medical examination, including but not limited to brain, heart, bone, blood vessel, liver, kidney, gall bladder, pancreas, thyroid, urinary system, uterus, appendages, teeth, etc. of a human body or an animal; for example, when a case requires a tumor examination, the abnormal region may be the tumor region of the case.
Optionally, a plurality of target region delineation criteria are preset in the computer device, and each preset target region delineation criteria corresponds to an anomaly type. For example, one target delineation criterion for fracture anomaly types may only ensure that the contour of the damaged bone is within the cut boundary, while another target delineation criterion for tumor anomaly types needs to ensure that the tumor and the entire treatment radiation area are within the cut boundary.
In this embodiment, an abnormal region corresponding to the target part included in the image to be sketched is identified, and the anomaly type of the image to be sketched is determined based on the abnormal region; the sketching criterion corresponding to the anomaly type is then acquired from a plurality of target region sketching criteria based on the preset correspondence between anomaly types and sketching criteria, and taken as the target region sketching criterion. A suitable target region sketching criterion can thus be selected automatically for the target part included in the image to be sketched, so that a suitable target region boundary is determined, the image is cropped based on the target region boundary, and the cropped image is convenient for secondary sketching of the target part; problems such as boundary positioning deviation, shrinkage and small broken points can be prevented, and the accuracy of target region sketching is improved.
In one embodiment, an image delineating method includes:
the computer equipment extracts an image based on the medical image, and pre-processes the extracted image to obtain a sample medical image, wherein the pre-processing comprises at least one of image enhancement, overturning, translation and rotation; acquiring a first target area sketch image corresponding to a sample medical image; a training example is obtained based on the sample medical image and the first target region delineation image, and a first training set is constructed based on the plurality of training examples. The first training set comprises a plurality of sample medical images and a first target area sketching image corresponding to each sample medical image.
The computer device trains the first initial neural network based on the first training set, and takes the trained first initial neural network as a first segmentation model.
The computer device determines target zone boundaries of each sample medical image based on the first target zone delineation image corresponding to each sample medical image in the first training set; cutting each sample medical image based on the target area boundary to obtain a plurality of sample segmentation images; acquiring a second target area sketch image corresponding to each sample segmentation image; and constructing a second training set based on the sample segmentation images and the second target region sketch images corresponding to the sample segmentation images. The second training set comprises a plurality of sample medical images after cutting processing and a second target area sketching image corresponding to each sample medical image after cutting processing.
The computer device trains the second initial neural network based on the second training set, and takes the trained second initial neural network as a second segmentation model.
The method comprises the steps that computer equipment obtains an image to be sketched, and performs first sketching treatment on a target part included in the image to be sketched through a first segmentation model to obtain a first image;
the computer equipment identifies an abnormal region corresponding to the target part included in the image to be sketched, and determines the anomaly type of the image to be sketched based on the abnormal region; it then acquires, from a plurality of target region sketching criteria, the sketching criterion corresponding to the anomaly type based on the preset correspondence between anomaly types and sketching criteria, and takes the acquired criterion as the target region sketching criterion.
The computer device determines a target zone boundary in the first image based on the target zone delineation criterion and crops the first image based on the target zone boundary to obtain a second image.
And the computer equipment performs second delineating processing on the target part included in the second image through the second segmentation model to obtain a target image on which the target part is delineated.
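Putting the steps of this embodiment together, a coarse-to-fine inference pipeline along these lines could be sketched as follows; all helper functions passed in (`identify_anomaly_type`, `select_criterion`, `crop_with_criterion`, `predict_fn`) are illustrative placeholders rather than components disclosed in the application.

```python
def delineate(image, first_model, second_model, identify_anomaly_type,
              select_criterion, crop_with_criterion, predict_fn):
    """Two-stage target-region delineation: coarse segmentation, criterion-guided
    cropping, then fine segmentation on the cropped block (illustrative sketch)."""
    # Stage 1: coarse delineation of the whole image to be sketched.
    first_image = predict_fn(first_model, image)

    # Match a delineation criterion to the identified anomaly type.
    anomaly_type = identify_anomaly_type(image)
    criterion = select_criterion(anomaly_type)

    # Crop the first result to the target-region boundary given by the criterion.
    second_image = crop_with_criterion(first_image, criterion)

    # Stage 2: fine delineation on the cropped image.
    return predict_fn(second_model, second_image)
```

Keeping the two models and the criterion lookup as separate components mirrors the two-stage structure of the method, so the fine model only ever sees the cropped, criterion-bounded region.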
In a possible embodiment, an image delineation method is used for delineating a breast cancer target area, and includes:
The computer equipment acquires a CT image of a chest region of a case, and performs rough segmentation processing on a target part included in the CT image through a first segmentation model to obtain a first image shown in a left diagram of fig. 2.
The computer equipment identifies an abnormal region corresponding to the target part included in the image to be sketched, and determines the anomaly type of the image to be sketched based on the abnormal region; it then acquires, from a plurality of target region sketching criteria, the sketching criterion corresponding to the anomaly type based on the preset correspondence between anomaly types and sketching criteria, and takes the acquired criterion as the target region sketching criterion.
The computer device determines a target volume boundary in the first image based on the target volume delineation criterion and crops the first image based on the target volume boundary to obtain a second image, as shown in fig. 2.
A collarbone truncation line is set in the second image, and the computer device sets the pixel values of the part of the image above the collarbone truncation line to 0 to obtain a third image as shown in the right diagram of fig. 2.
The computer device then performs fine segmentation processing on the target part included in the third image through the second segmentation model to obtain a target image in which the target part is delineated.
In one embodiment, as shown in FIG. 3, a method of training a segmentation model is provided. This embodiment is described by taking the application of the method to a computer device as an example; it can be understood that the computer device may be a terminal or a server. The terminal may be, but is not limited to, a personal computer, a notebook computer, a smart phone, a tablet computer, an Internet of Things device or a portable wearable device; the Internet of Things device may be a smart speaker, a smart television, a smart air conditioner, smart medical equipment and the like, and the portable wearable device may be a smart watch, a smart bracelet, a headset, or the like. The server may be implemented as a stand-alone server or as a server cluster composed of a plurality of servers. In this embodiment, the method includes the following steps:
Optionally, the computer device uses a plurality of images with sketched labels as the first training set.
Optionally, the computer device builds a plurality of cascaded neural networks (corresponding to the first initial neural network) with a 3D U-Net architecture and trains each cascaded neural network on the first training set. An Adam optimizer may be employed to optimize the neural network parameters, with cross entropy employed as the loss function to guide the image segmentation toward high-specificity delineation information. A plurality of images with sketching labels are then used as a test set, each trained cascaded neural network is tested on the test set to obtain an evaluation parameter for each trained network, and the model with the best evaluation parameter is selected as the first segmentation model. The DICE coefficient may be used to evaluate the performance of each model, and the model with the highest DICE coefficient on the test set is taken as the first segmentation model.
Optionally, the computer device performs boundary clipping on the plurality of images with the sketching labels in the first training set, clips the images to only keep target parts as far as possible, and takes the clipped images as the second training set.
Optionally, the computer device builds a plurality of cascaded neural networks again (corresponding to the second initial neural network) with a 3D U-Net architecture and trains each cascaded neural network on the second training set. An Adam optimizer may be employed to optimize the neural network parameters, with cross entropy employed as the loss function to guide the image segmentation toward high-specificity delineation information. A plurality of cropped images with sketching labels are then used as a test set, each trained cascaded neural network is tested on the test set to obtain an evaluation parameter for each trained network, and the model with the best evaluation parameter is selected as the second segmentation model. The DICE coefficient may be used to evaluate the performance of each model, and the model with the highest DICE coefficient on the test set is taken as the second segmentation model.
In this training method of the segmentation models, a first training set is obtained, a first initial neural network is trained based on the first training set, and the trained first initial neural network is used as a first segmentation model; a second training set is acquired based on the first training set, a second initial neural network is trained based on the second training set, and the trained second initial neural network is used as a second segmentation model. The first segmentation model and the second segmentation model are used for delineating the target region of the image to be sketched twice, which can effectively alleviate problems such as boundary positioning deviation, shrinkage, and small broken fragments in the target region segmentation task, and thereby improves the accuracy of target region delineation.
In one embodiment, obtaining the first training set includes: performing image extraction based on a medical image, and preprocessing the extracted image to obtain a sample medical image, wherein the preprocessing includes at least one of image enhancement, flipping, translation, and rotation; acquiring a first target area sketch image corresponding to the sample medical image; obtaining a training instance based on the sample medical image and the first target region delineation image, and constructing the first training set based on a plurality of training instances.
Optionally, the computer device preprocesses the medical images containing the target site to be delineated, where the preprocessing includes: converting the image data format, normalizing the image data, proportionally dividing the data into a training set and a test set, and enhancing the training set data, including flipping, translation, and rotation. The preprocessed medical images are taken as sample medical images, a delineation label of the target area is added to each sample medical image, and a first target area sketching image corresponding to each sample medical image is obtained. Each pair of a sample medical image and its first target volume delineation image is taken as one training instance (training sample), and the first training set is constructed based on a plurality of training instances.
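The following sketch illustrates the kind of preprocessing and training-set enhancement described above (intensity normalization, proportional train/test split, and flip/translation/rotation augmentation), assuming the volumes are numpy arrays in (z, y, x) order. The intensity window, shift range, and rotation angle range are illustrative assumptions, not values taken from this application.

```python
import numpy as np
from scipy import ndimage

def normalize(volume: np.ndarray) -> np.ndarray:
    """Clip and scale intensities to [0, 1]; the percentile window is illustrative only."""
    lo, hi = np.percentile(volume, (0.5, 99.5))
    return np.clip((volume - lo) / (hi - lo + 1e-6), 0.0, 1.0)

def augment(image: np.ndarray, label: np.ndarray):
    """Apply the same random flip / translation / rotation to an image and its
    delineation label, as described for the training-set enhancement."""
    if np.random.rand() < 0.5:                            # flip along a random axis
        axis = np.random.randint(image.ndim)
        image, label = np.flip(image, axis), np.flip(label, axis)
    shift = np.random.uniform(-5, 5, size=image.ndim)     # small translation in voxels
    image = ndimage.shift(image, shift, order=1)
    label = ndimage.shift(label, shift, order=0)          # nearest-neighbour for labels
    angle = np.random.uniform(-10, 10)                    # small in-plane rotation (degrees)
    image = ndimage.rotate(image, angle, axes=(1, 2), reshape=False, order=1)
    label = ndimage.rotate(label, angle, axes=(1, 2), reshape=False, order=0)
    return image, label

def split(samples, train_ratio=0.8):
    """Proportional split of (image, label) pairs into a training set and a test set."""
    idx = np.random.permutation(len(samples))
    cut = int(len(samples) * train_ratio)
    return [samples[i] for i in idx[:cut]], [samples[i] for i in idx[cut:]]
```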
In this embodiment, image extraction is performed based on a medical image, and the extracted image is preprocessed to obtain a sample medical image, where the preprocessing includes at least one of image enhancement, flipping, translation, and rotation; a first target area sketch image corresponding to the sample medical image is acquired; a training instance is obtained based on the sample medical image and the first target region delineation image, and the first training set is constructed based on a plurality of training instances. By training an image segmentation model using the first training set, a rough segmentation model can be obtained.
In one embodiment, obtaining the second training set based on the first training set includes: determining target zone boundaries of each sample medical image based on the first target zone delineation image corresponding to each sample medical image in the first training set; cutting each sample medical image based on the target area boundary to obtain a plurality of sample segmentation images; acquiring a second target area sketch image corresponding to each sample segmentation image; and constructing a second training set based on the sample segmentation images and the second target region sketch images corresponding to the sample segmentation images.
Optionally, the computer device obtains one training instance in the first training set, that is, a pair consisting of a sample medical image and its corresponding first target volume sketching image, determines the target volume boundary of the pair based on the abnormality type corresponding to the pair, and crops the sample medical image and the first target volume sketching image according to the target volume boundary, respectively, to obtain one training instance (training sample) of the second training set. The computer device processes each training instance in the first training set in the same way to obtain the second training set.
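A minimal sketch of building one cropped training instance is shown below. For simplicity it approximates the target volume boundary by the bounding box of the first delineation label plus a margin; in this embodiment the boundary is actually determined from the abnormality type and its delineation criterion, so the bounding-box shortcut and the `margin` value are assumptions for the example.

```python
import numpy as np

def crop_training_instance(image: np.ndarray, label: np.ndarray, margin: int = 8):
    """Derive a target-volume bounding box from the first delineation label and crop
    image and label identically, giving one instance of the second training set."""
    coords = np.argwhere(label > 0)                      # voxels inside the delineated target
    lo = np.maximum(coords.min(axis=0) - margin, 0)      # expand by a margin, clamp to volume
    hi = np.minimum(coords.max(axis=0) + margin + 1, label.shape)
    slices = tuple(slice(a, b) for a, b in zip(lo, hi))
    return image[slices], label[slices]
```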
In this embodiment, a target region boundary of each sample medical image is determined based on the first target region delineating image corresponding to each sample medical image in the first training set; each sample medical image is cut based on the target area boundary to obtain a plurality of sample segmentation images; a second target area sketch image corresponding to each sample segmentation image is acquired; and a second training set is constructed based on the sample segmentation images and the second target region sketch images corresponding to the sample segmentation images. In this way, a second training set associated with the first training set can be obtained, and by training an image segmentation model using the second training set, a fine segmentation model associated with the rough segmentation model can be obtained.
In one possible embodiment, as shown in fig. 4, a training method of a segmentation model includes:
The computer device performs image extraction based on a medical image and preprocesses the extracted image to obtain a sample medical image, wherein the preprocessing includes at least one of image enhancement, flipping, translation, and rotation; a first target area sketch image corresponding to the sample medical image is acquired; a training instance is obtained based on the sample medical image and the first target volume delineation image, and the plurality of training instances are divided into a first training set and a first test set. The first training set and the first test set each include a plurality of sample medical images and a first target area sketching image corresponding to each sample medical image.
The computer device determines target zone boundaries of each sample medical image based on the first target zone delineation image corresponding to each sample medical image in the first training set; cutting each sample medical image based on the target area boundary to obtain a plurality of sample segmentation images; acquiring a second target area sketch image corresponding to each sample segmentation image; and dividing each sample segmentation image and a second target region sketch image corresponding to each sample segmentation image into a second training set and a second testing set. The second training set and the second testing set respectively comprise a plurality of sample medical images after cutting processing and a second target area sketching image corresponding to each sample medical image after cutting processing.
The computer equipment trains a plurality of first initial neural networks based on a first training set, tests the plurality of trained first initial neural networks through a first testing set, and takes the neural network with the best testing result as a rough segmentation model; the rough segmentation model is used for carrying out first sketching treatment on a target part included in the image to be sketched to obtain a first image.
The computer device trains a plurality of second initial neural networks based on the second training set, tests the plurality of trained second initial neural networks through the second test set, and takes the neural network with the best test result as the fine segmentation model; the fine segmentation model is used for performing the second delineation processing on the target part included in the cropped first image to obtain a target image delineating the target part.
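Putting the two trained models together, the following sketch mirrors the coarse-then-fine flow of this embodiment: the rough segmentation model produces the first delineation, a crop box is derived from the delineation criterion, and the fine segmentation model refines the cropped region before the result is pasted back into a full-size mask. The `boundary_fn` callable, the model interfaces, and the single-channel input layout are assumptions for the example.

```python
import numpy as np
import torch

def delineate(volume: np.ndarray, coarse_model, fine_model, boundary_fn, device="cuda") -> np.ndarray:
    """Two-stage delineation of a (D, H, W) volume. `boundary_fn` is assumed to apply
    the target-volume delineation criterion and return a 3-tuple of slice objects."""
    x = torch.from_numpy(volume[None, None].astype(np.float32)).to(device)  # (1, 1, D, H, W)
    with torch.no_grad():
        first = coarse_model(x).argmax(dim=1)[0].cpu().numpy()   # first (coarse) delineation
        slices = boundary_fn(first)                              # crop box from the criterion
        x_crop = x[(slice(None), slice(None)) + slices]          # cropped second image
        fine = fine_model(x_crop).argmax(dim=1)[0].cpu().numpy() # second (fine) delineation
    target = np.zeros_like(first)
    target[slices] = fine                                        # paste the refined mask back
    return target
```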
It should be understood that, although the steps in the flowcharts of the embodiments described above are shown sequentially as indicated by the arrows, these steps are not necessarily performed in the order indicated by the arrows. Unless explicitly stated herein, the steps are not strictly limited to this order of execution and may be performed in other orders. Moreover, at least some of the steps in the flowcharts of the above embodiments may include a plurality of sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and these sub-steps or stages are not necessarily performed sequentially, but may be performed in turn or alternately with at least some of the other steps, sub-steps, or stages.
Based on the same inventive concept, the embodiments of the present application further provide an image sketching apparatus for implementing the image sketching method described above. The implementation of the solution provided by this apparatus is similar to that described in the method above, so for the specific limitations in the embodiments of the one or more image sketching apparatuses provided below, reference may be made to the limitations of the image sketching method above, which are not repeated here.
In one embodiment, as shown in fig. 5, there is provided an image delineating apparatus 500, comprising: a first sketching module 501, a criterion matching module 502, an image processing module 503 and a second sketching module 504, wherein:
the first sketching module 501 is configured to obtain an image to be sketched, and perform a first sketching process on a target part included in the image to be sketched to obtain a first image;
the criterion matching module 502 is configured to identify an anomaly type corresponding to an image to be sketched, and acquire a target area sketching criterion based on the anomaly type;
an image processing module 503, configured to determine a target region boundary in the first image based on a target region delineation criterion, and crop the first image based on the target region boundary to obtain a second image;
and a second outlining module 504, configured to perform a second outlining process on the target portion included in the second image, so as to obtain a target image outlining the target portion.
In one embodiment, the first sketching module 501 is further configured to obtain a first training set, where the first training set includes a plurality of sample medical images, and a first target area sketching image corresponding to each sample medical image; and training the first initial neural network based on the first training set, and taking the trained first initial neural network as a first segmentation model.
In one embodiment, the first sketching module 501 is further configured to perform image extraction based on the medical image, and perform preprocessing on the extracted image to obtain a sample medical image, where the preprocessing includes at least one of image enhancement, flipping, translation, and rotation; acquiring a first target area sketch image corresponding to a sample medical image; a training example is obtained based on the sample medical image and the first target region delineation image, and a first training set is constructed based on the plurality of training examples.
In one embodiment, the second sketching module 504 is further configured to obtain a second training set based on the first training set, where the second training set includes a plurality of cropped sample medical images, and a second target area sketching image corresponding to each cropped sample medical image; and training the second initial neural network based on the second training set, and taking the trained second initial neural network as a second segmentation model.
In one embodiment, the second delineation module 504 is further configured to determine a target boundary for each sample medical image based on the first target delineation image corresponding to each sample medical image in the first training set; cutting each sample medical image based on the target area boundary to obtain a plurality of sample segmentation images; acquiring a second target area sketch image corresponding to each sample segmentation image; and constructing a second training set based on the sample segmentation images and the second target region sketch images corresponding to the sample segmentation images.
In one embodiment, the criterion matching module 502 is configured to identify an abnormal region corresponding to the target part included in the image to be sketched, and determine the abnormality type of the image to be sketched based on the abnormal region; and to acquire, from a plurality of target region sketching criteria, the criterion corresponding to the abnormality type based on the preset correspondence between abnormality types and target region sketching criteria, and take the acquired criterion as the target region sketching criterion.
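As an illustration of the preset correspondence used by the criterion matching module, the sketch below keeps the mapping from abnormality type to delineation criterion in a simple lookup table. The type names and criterion fields are hypothetical placeholders; the real criteria would come from the clinical delineation guidelines, not from this sketch.

```python
# Hypothetical mapping from abnormality type to target-volume delineation criterion.
CRITERIA = {
    "type_a": {"boundary": "criterion_a", "truncation": "clavicle"},
    "type_b": {"boundary": "criterion_b", "truncation": None},
}

def get_criterion(anomaly_type: str) -> dict:
    """Return the delineation criterion preset for an abnormality type."""
    try:
        return CRITERIA[anomaly_type]
    except KeyError as exc:
        raise ValueError(f"no delineation criterion configured for {anomaly_type!r}") from exc
```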
The various modules in the image delineation apparatus described above may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in or independent of a processor in the computer device in hardware form, or may be stored in a memory in the computer device in software form, so that the processor may call and execute the operations corresponding to the above modules.
Based on the same inventive concept, the embodiments of the present application further provide a training apparatus for the segmentation model, which is used to implement the training method of the segmentation model described above. The implementation of the solution provided by this apparatus is similar to that described in the method above, so for the specific limitations in the embodiments of the training apparatus for one or more segmentation models provided below, reference may be made to the limitations of the training method of the segmentation model above, which are not repeated here.
In one embodiment, as shown in fig. 6, a training apparatus 600 for a segmentation model is provided, comprising: a first building module 601, a second building module 602, a first training module 603, and a second training module 604, wherein:
the first construction module 601 is configured to extract an image based on a medical image, and perform preprocessing on the extracted image to obtain a sample medical image, where the preprocessing includes at least one of image enhancement, flipping, translation, and rotation; acquiring a first target area sketch image corresponding to a sample medical image; a training example is obtained based on the sample medical image and the first target region delineation image, and a first training set is constructed based on the plurality of training examples.
A second construction module 602, configured to determine a target boundary of each sample medical image based on the first target delineation image corresponding to each sample medical image in the first training set; cutting each sample medical image based on the target area boundary to obtain a plurality of sample segmentation images; acquiring a second target area sketch image corresponding to each sample segmentation image; and constructing a second training set based on the sample segmentation images and the second target region sketch images corresponding to the sample segmentation images.
A first training module 603, configured to obtain a first training set, where the first training set includes a plurality of sample medical images, and a first target area sketch image corresponding to each sample medical image; training the first initial neural network based on the first training set, and taking the trained first initial neural network as a first segmentation model; the first segmentation model is used for carrying out first sketching treatment on a target part included in the image to be sketched to obtain a first image.
A second training module 604, configured to obtain a second training set, where the second training set includes a plurality of cut sample medical images, and a second target area sketch image corresponding to each cut sample medical image; training a second initial neural network based on a second training set, and taking the trained second initial neural network as a second segmentation model; the second segmentation model is used for carrying out second delineating processing on the target part included in the first image after the clipping processing to obtain a target image for delineating the target part.
The various modules in the training apparatus of the segmentation model described above may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in or independent of a processor in the computer device in hardware form, or may be stored in a memory in the computer device in software form, so that the processor may call and execute the operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a terminal, and whose internal structure may be as shown in fig. 7. The computer device includes a processor, a memory, an input/output interface, a communication interface, a display unit, and an input device. The processor, the memory, and the input/output interface are connected through a system bus, and the communication interface, the display unit, and the input device are connected to the system bus through the input/output interface. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The input/output interface of the computer device is used to exchange information between the processor and an external device. The communication interface of the computer device is used for wired or wireless communication with an external terminal, and the wireless mode may be implemented through WIFI, a mobile cellular network, NFC (near field communication), or other technologies. The computer program, when executed by the processor, implements an image delineation method or a training method of a segmentation model. The display unit of the computer device is used to form a visual picture, and may be a display screen, a projection device, or a virtual reality imaging device; the display screen may be a liquid crystal display screen or an electronic ink display screen. The input device of the computer device may be a touch layer covering the display screen, may be a key, a track ball, or a touch pad arranged on the housing of the computer device, or may be an external keyboard, touch pad, mouse, or the like.
It will be appreciated by those skilled in the art that the structure shown in fig. 7 is merely a block diagram of some of the structures associated with the present application and is not limiting of the computer device to which the present application may be applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, there is also provided a computer device comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the above-described embodiments of the image delineating method when the computer program is executed.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, implements the steps of the above-described image delineating method embodiments.
In one embodiment, a computer program product is provided comprising a computer program which, when executed by a processor, implements the steps of the above-described image delineating method embodiments.
In an embodiment, there is also provided a computer device comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps in the training method embodiments of the segmentation model described above when the computer program is executed.
In one embodiment, a computer readable storage medium is provided, having stored thereon a computer program which, when executed by a processor, implements the steps of the training method embodiments of the segmentation model described above.
In one embodiment, a computer program product is provided comprising a computer program which, when executed by a processor, implements the steps of the training method embodiment of the segmentation model described above.
It should be noted that, the user information (including, but not limited to, user equipment information, user personal information, etc.) and the data (including, but not limited to, data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party, and the collection, use and processing of the related data are required to comply with the related laws and regulations and standards of the related countries and regions.
Those skilled in the art will appreciate that implementing all or part of the methods described above may be accomplished by means of a computer program stored on a non-transitory computer-readable storage medium, which, when executed, may include the steps of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. The volatile memory may include random access memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM is available in a variety of forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of relational databases and non-relational databases. Non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be, but are not limited to, general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, quantum-computing-based data processing logic units, and the like.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction between the combinations of these technical features, they should be considered to be within the scope of this specification.
The above examples represent only a few embodiments of the present application, which are described specifically and in detail, but are not to be construed as limiting the scope of the present application. It should be noted that various modifications and improvements can be made by those skilled in the art without departing from the concept of the present application, and these all fall within the protection scope of the present application. Accordingly, the protection scope of the present application shall be subject to the appended claims.
Claims (10)
1. A method of image delineation, the method comprising:
acquiring an image to be sketched, and performing first sketching treatment on a target part included in the image to be sketched to obtain a first image;
identifying an abnormal type corresponding to the image to be sketched, and acquiring a target area sketching criterion based on the abnormal type;
determining a target zone boundary in the first image based on the target zone sketching criterion, and cutting the first image based on the target zone boundary to obtain a second image;
And carrying out second sketching processing on the target part included in the second image to obtain a target image for sketching the target part.
2. The method of claim 1, wherein the identifying the type of anomaly corresponding to the image to be delineated and obtaining target volume delineation criteria based on the type of anomaly comprises:
identifying an abnormal region corresponding to the target part included in the image to be sketched, and determining the abnormal type of the image to be sketched based on the abnormal region;
and acquiring, from a plurality of target region sketching criteria, the criterion corresponding to the abnormality type based on the correspondence between abnormality types and target region sketching criteria, and taking the acquired criterion as the target region sketching criterion.
3. The method of claim 1, wherein the determining a target boundary in the first image based on the target delineation criteria comprises:
determining a target zone boundary and a target zone center in the first image according to the target zone sketching criteria;
acquiring an image block with a fixed size from the first image according to the center of the target area;
and retaining the part of the image block within the target zone boundary according to the target zone sketching criteria, and removing the part outside the target zone boundary.
4. A method of training a segmentation model, the method comprising:
acquiring a first training set, wherein the first training set comprises a plurality of sample medical images and a first target area sketching image corresponding to each sample medical image;
training a first initial neural network based on the first training set, and taking the trained first initial neural network as a first segmentation model; the first segmentation model is used for carrying out first sketching treatment on a target part included in the image to be sketched to obtain a first image;
acquiring a second training set based on the first training set, wherein the second training set comprises a plurality of sample medical images after cutting processing and a second target area sketching image corresponding to each sample medical image after cutting processing;
training a second initial neural network based on the second training set, and taking the trained second initial neural network as a second segmentation model; and the second segmentation model is used for carrying out second sketching treatment on the target part included in the first image after the cutting treatment to obtain a target image for sketching the target part.
5. The method of claim 4, wherein the obtaining a second training set based on the first training set comprises:
Determining target zone boundaries of each sample medical image based on a first target zone delineation image corresponding to each sample medical image in the first training set;
cutting each sample medical image based on the target region boundary to obtain a plurality of sample segmentation images;
acquiring a second target area sketch image corresponding to each sample segmentation image;
and constructing the second training set based on the sample segmentation images and the second target region sketch image corresponding to the sample segmentation images.
6. The method of claim 5, wherein the determining the target boundary for each sample medical image based on the first target delineation image corresponding to each sample medical image in the first training set comprises:
identifying the abnormality types corresponding to the medical images of the samples respectively, and acquiring target area sketching criteria corresponding to the medical images of the samples respectively based on the abnormality types corresponding to the medical images of the samples respectively;
and determining target zone boundaries of the sample medical images based on target zone delineation criteria respectively corresponding to the sample medical images.
7. The method of claim 6, wherein the identifying the respective abnormality type for each of the sample medical images and obtaining the respective target region delineation criteria for each of the sample medical images based on the respective abnormality type for each of the sample medical images comprises:
Identifying abnormal areas corresponding to the sample medical images respectively, and determining abnormal types corresponding to the sample medical images based on the abnormal areas;
and respectively acquiring target region sketching criteria corresponding to the sample medical images from a plurality of target region sketching criteria based on the corresponding relation between the abnormal type and the target region sketching criteria, and taking the target region sketching criteria as the target region sketching criteria respectively corresponding to the sample medical images.
8. An image delineating apparatus, the apparatus comprising:
the first sketching module is used for acquiring an image to be sketched, and carrying out first sketching treatment on a target part included in the image to be sketched to obtain a first image;
the criterion matching module is used for identifying the abnormal type corresponding to the image to be sketched and acquiring a target area sketching criterion based on the abnormal type;
the image processing module is used for determining a target zone boundary in the first image based on the target zone sketching criterion, and cutting the first image based on the target zone boundary to obtain a second image;
and the second sketching module is used for carrying out second sketching processing on the target part included in the second image to obtain a target image for sketching the target part.
9. A training apparatus for a segmentation model, the apparatus comprising:
the first construction module is used for acquiring a first training set, wherein the first training set comprises a plurality of sample medical images and a first target area sketching image corresponding to each sample medical image;
the first training module is used for training the first initial neural network based on the first training set, and taking the trained first initial neural network as a first segmentation model; the first segmentation model is used for carrying out first sketching treatment on a target part included in the image to be sketched to obtain a first image;
the second construction module is used for acquiring a second training set based on the first training set, wherein the second training set comprises a plurality of sample medical images after cutting processing and a second target area sketch image corresponding to each sample medical image after cutting processing;
the second training module is used for training a second initial neural network based on the second training set, and taking the trained second initial neural network as a second segmentation model; and the second segmentation model is used for carrying out second sketching treatment on the target part included in the first image after the cutting treatment to obtain a target image for sketching the target part.
10. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 8 when the computer program is executed.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211092789.9A CN116228787A (en) | 2022-09-08 | 2022-09-08 | Image sketching method, device, computer equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211092789.9A CN116228787A (en) | 2022-09-08 | 2022-09-08 | Image sketching method, device, computer equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116228787A true CN116228787A (en) | 2023-06-06 |
Family
ID=86581194
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211092789.9A Pending CN116228787A (en) | 2022-09-08 | 2022-09-08 | Image sketching method, device, computer equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116228787A (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116580037A (en) * | 2023-07-10 | 2023-08-11 | 天津医科大学第二医院 | Nasopharyngeal carcinoma image segmentation method and system based on deep learning |
CN116580037B (en) * | 2023-07-10 | 2023-10-13 | 天津医科大学第二医院 | Nasopharyngeal carcinoma image segmentation method and system based on deep learning |
CN117095798A (en) * | 2023-07-28 | 2023-11-21 | 广州中医药大学第一附属医院 | Deep learning data preprocessing system and electronic equipment for automatic sketching of radiotherapy |
CN117152442A (en) * | 2023-10-27 | 2023-12-01 | 吉林大学 | Automatic image target area sketching method and device, electronic equipment and readable storage medium |
CN117152442B (en) * | 2023-10-27 | 2024-02-02 | 吉林大学 | Automatic image target area sketching method and device, electronic equipment and readable storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111161275B (en) | Method and device for segmenting target object in medical image and electronic equipment | |
CN116228787A (en) | Image sketching method, device, computer equipment and storage medium | |
Qi et al. | Automatic lacunae localization in placental ultrasound images via layer aggregation | |
CN114332132A (en) | Image segmentation method and device and computer equipment | |
CN117218133A (en) | Lung image processing method and device, electronic equipment and storage medium | |
WO2021097595A1 (en) | Method and apparatus for segmenting lesion area in image, and server | |
CN114998374A (en) | Image segmentation method, device and equipment based on position prior and storage medium | |
CN113888566B (en) | Target contour curve determination method and device, electronic equipment and storage medium | |
WO2021030995A1 (en) | Inferior vena cava image analysis method and product based on vrds ai | |
CN110992310A (en) | Method and device for determining partition where mediastinal lymph node is located | |
US20230420096A1 (en) | Document creation apparatus, document creation method, and document creation program | |
Kaibori et al. | Novel liver visualization and surgical simulation system | |
CN116128895A (en) | Medical image segmentation method, apparatus and computer readable storage medium | |
CN113177953B (en) | Liver region segmentation method, liver region segmentation device, electronic equipment and storage medium | |
CN115760813A (en) | Screw channel generation method, device, equipment, medium and program product | |
Chen et al. | Computer-aided liver surgical planning system using CT volumes | |
Wu et al. | Automatic segmentation of ultrasound tomography image | |
CN114419375A (en) | Image classification method, training method, device, electronic equipment and storage medium | |
CN113362350A (en) | Segmentation method and device for cancer medical record image, terminal device and storage medium | |
WO2021081839A1 (en) | Vrds 4d-based method for analysis of condition of patient, and related products | |
WO2021081772A1 (en) | Analysis method based on vrds ai brain image, and related apparatus | |
CN117058405B (en) | Image-based emotion recognition method, system, storage medium and terminal | |
CN111310669B (en) | Fetal head circumference real-time measurement method and device | |
US20240095916A1 (en) | Information processing apparatus, information processing method, and information processing program | |
CN115148341B (en) | AI structure sketching method and system based on body position recognition |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||