CN112396606B - Medical image segmentation method, system and device based on user interaction - Google Patents


Info

Publication number
CN112396606B
CN112396606B (application CN202011197897.3A)
Authority
CN
China
Prior art keywords: image, medical image, modified, segmented, medical
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011197897.3A
Other languages
Chinese (zh)
Other versions
CN112396606A (en)
Inventor
徐璐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai United Imaging Healthcare Co Ltd
Original Assignee
Shanghai United Imaging Healthcare Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai United Imaging Healthcare Co Ltd
Priority to CN202410218979.3A (publication CN117994263A)
Priority to CN202011197897.3A (publication CN112396606B)
Publication of CN112396606A
Priority to US17/452,795 (publication US20220138957A1)
Application granted
Publication of CN112396606B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing

Abstract

The specification discloses a medical image segmentation method, system and device based on user interaction, wherein the method comprises the following steps: acquiring a first image, wherein the first image is obtained based on a medical image to be segmented; taking the first image as an image to be modified, and executing a plurality of iterative processes until a target medical image is acquired, wherein each iterative process comprises: acquiring at least one modification of the image to be modified; inputting at least one of the medical image to be segmented, the image to be modified, and the at least one modification of the image to be modified into a medical image segmentation model, and outputting a second image; judging whether the second image meets a first preset condition; if yes, taking the second image as the target medical image; otherwise, taking the second image as a new image to be modified.

Description

Medical image segmentation method, system and device based on user interaction
Technical Field
The present disclosure relates to the field of medical image segmentation, and in particular, to a method, system, and apparatus for medical image segmentation based on user interaction.
Background
A medical image segmentation model can distinguish regions with complex distributions in a medical image, thereby providing reliable information for clinical diagnosis and treatment. However, training a medical image segmentation model on its own relies on a large number of training samples and gold-standard segmented medical images. Delineation of radiotherapy target regions (including the gross target volume, the clinical target volume, and the planning target volume) is particularly difficult: such regions have no obvious tissue boundaries and must be delineated with the help of a clinician's professional domain knowledge, so a direct automatic delineation cannot meet the physician's clinical requirements in a single pass, and interaction with the physician is needed to improve the final segmentation result. Moreover, although hospitals delineate target regions according to shared consensus and guidelines, clinical practice varies between institutions, so gold-standard data need to be collected separately for each hospital. Training sample sets collected in this way tend to be limited in size. Therefore, newly acquired data need to be continuously fed into the deep learning model for ongoing optimization during use, enlarging the training sample set and reducing the number of interactions required from the physician.
It is therefore desirable to provide a medical image segmentation method, system and apparatus based on user interaction.
Disclosure of Invention
One aspect of the present specification provides a medical image segmentation method based on user interaction, wherein the method is applied to a server and comprises: acquiring a first image based on the medical image to be segmented; taking the first image as an image to be modified, and executing a plurality of iterative processes until a target medical image is acquired, wherein each iterative process comprises: sending the image to be modified to a client, and receiving from the client at least one modification of the image to be modified by a user; inputting at least one of the medical image to be segmented, the image to be modified, and the at least one modification of the image to be modified into a medical image segmentation model, and outputting a second image; sending the second image to the client, and receiving from the client a judgment of whether the second image meets a first preset condition; if yes, outputting the second image as the target medical image, and updating the medical image segmentation model based on the target medical image; otherwise, taking the second image as a new image to be modified.
Another aspect of the present specification provides a medical image segmentation system based on user interaction, the system implemented on a server and comprising: a pre-segmentation module for acquiring a first image based on the medical image to be segmented; and a target medical image acquisition module for taking the first image as an image to be modified and executing a plurality of iterative processes until the target medical image is acquired, the target medical image acquisition module comprising: a modification receiving module for sending the image to be modified to a client and receiving from the client at least one modification of the image to be modified; an image segmentation module for inputting at least one of the medical image to be segmented, the image to be modified, and the at least one modification of the image to be modified into a medical image segmentation model, and outputting a second image; and an output module for sending the second image to the client and receiving from the client a judgment of whether the second image meets a first preset condition; if yes, outputting the second image as the target medical image and updating the medical image segmentation model based on the target medical image; otherwise, taking the second image as a new image to be modified.
Another aspect of the present specification provides a medical image segmentation method based on user interaction, wherein the method is applied to a client and comprises: receiving an image to be modified from a server; and, based on the image to be modified, performing a plurality of iterative processes until a target medical image is acquired, each iterative process comprising: acquiring at least one modification of the image to be modified by a user, and transmitting the at least one modification to the server; receiving a second image from the server; and acquiring the user's judgment of whether the second image meets a first preset condition, and sending the judgment to the server, so that the server performs the following processing based on the judgment: if yes, outputting the second image as the target medical image, and updating a medical image segmentation model based on the target medical image; otherwise, taking the second image as a new image to be modified.
Another aspect of the present specification provides a system for medical image segmentation based on user interaction, the system implemented on a client and comprising: an image-to-be-modified receiving module for receiving an image to be modified from a server; and an iteration module for performing a plurality of iterative processes based on the image to be modified until a target medical image is acquired, the iteration module comprising: a modification sending module for obtaining at least one modification of the image to be modified by the user and sending the at least one modification to the server; a second image receiving module for receiving a second image from the server; and a judging module for acquiring the user's judgment of whether the second image meets a first preset condition and sending the judgment to the server, so that the server performs the following processing based on the judgment: if yes, outputting the second image as the target medical image, and updating a medical image segmentation model based on the target medical image; otherwise, taking the second image as a new image to be modified.
Another aspect of the present specification provides a medical image segmentation method based on user interaction, characterized in that the method comprises: acquiring a first image, wherein the first image is obtained based on a medical image to be segmented; taking the first image as an image to be modified, and executing a plurality of iterative processes until a target medical image is acquired, wherein each iterative process comprises: acquiring at least one modification of the image to be modified; inputting at least one of the medical image to be segmented, the image to be modified, and the at least one modification of the image to be modified into a medical image segmentation model, and outputting a second image; judging whether the second image meets a first preset condition; if yes, taking the second image as the target medical image; otherwise, taking the second image as a new image to be modified.
Another aspect of the present specification provides a medical image segmentation system based on user interaction, the system comprising: a pre-segmentation module for acquiring a first image, the first image obtained based on the medical image to be segmented; and a target medical image acquisition module for performing a plurality of iterative processes until a target medical image is acquired, the target medical image acquisition module comprising: a modification receiving module for acquiring at least one modification of the image to be modified; an image segmentation module for inputting at least one of the medical image to be segmented, the image to be modified, and the at least one modification of the image to be modified into a medical image segmentation model, and outputting a second image; and an output module for judging whether the second image meets a first preset condition; if yes, taking the second image as the target medical image; otherwise, taking the second image as a new image to be modified.
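The iterative process summarized above can be sketched in code. This is an illustrative sketch only, not the patent's implementation; `pre_segment`, `get_user_modifications`, `segmentation_model`, and `meets_condition` are hypothetical callables supplied by the caller.

```python
# Illustrative sketch of the iterative interactive-segmentation loop; all
# callables here are placeholders, not part of the patent's disclosure.
def interactive_segmentation(to_segment, pre_segment, segmentation_model,
                             get_user_modifications, meets_condition,
                             max_iterations=10):
    """Iteratively refine a segmentation until the first preset condition holds."""
    image_to_modify = pre_segment(to_segment)            # the first image
    for _ in range(max_iterations):
        modifications = get_user_modifications(image_to_modify)
        # Feed the image to be segmented, the image to be modified, and the
        # user's modifications into the segmentation model.
        second_image = segmentation_model(to_segment, image_to_modify,
                                          modifications)
        if meets_condition(second_image):                # first preset condition
            return second_image                          # target medical image
        image_to_modify = second_image                   # next iteration
    return image_to_modify
```

The `max_iterations` cap is an added safeguard (an assumption); in the described system the loop ends when the user accepts the result.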
Another aspect of the embodiments of the present specification provides a computer-readable storage medium, characterized in that the storage medium stores computer instructions that, when executed by a processor, implement a medical image segmentation method based on user interaction.
Drawings
The present specification will be further described by way of exemplary embodiments, which are described in detail with reference to the accompanying drawings. These embodiments are not limiting; in these embodiments, like numerals represent like structures, wherein:
FIG. 1 is a schematic illustration of an application scenario of a medical image segmentation system shown in accordance with some embodiments of the present description;
FIG. 2 is an exemplary block diagram of a server shown in accordance with some embodiments of the present description;
FIG. 3 is an exemplary block diagram of a client shown in accordance with some embodiments of the present description;
FIG. 4 is an exemplary flow chart of a user interaction based medical image segmentation method applied to a server, according to some embodiments of the present description;
FIG. 5 is an exemplary flow chart of a medical image segmentation method based on user interaction applied to a client, shown in accordance with some embodiments of the present description;
FIG. 6 is an exemplary flow chart of updating a medical image segmentation model, shown in accordance with some embodiments of the present description;
FIG. 7 is a schematic illustration of a medical image segmentation method based on user interaction, shown in accordance with some embodiments of the present description;
fig. 8 is a schematic illustration of a medical image shown according to some embodiments of the present description.
Detailed Description
In order to more clearly illustrate the technical solutions of the embodiments of the present specification, the drawings used in the description of the embodiments are briefly introduced below. It is apparent that the drawings in the following description are only some examples or embodiments of the present specification, and those of ordinary skill in the art may, without inventive effort, apply the present specification to other similar situations according to these drawings. Unless otherwise apparent from the context or otherwise specified, like reference numerals in the figures refer to like structures or operations.
It should be appreciated that "system," "apparatus," "unit," and/or "module" as used in this specification are terms used to distinguish between different components, elements, parts, portions, or assemblies at different levels. However, these terms may be replaced by other expressions if the other expressions achieve the same purpose.
As used in this specification and the claims, the singular forms "a," "an," and "the" may include the plural unless the context clearly dictates otherwise. Generally, the terms "comprises" and "comprising" merely indicate that the explicitly identified steps and elements are included; they do not constitute an exclusive list, and a method or apparatus may also include other steps or elements.
A flowchart is used in this specification to describe the operations performed by the system according to embodiments of the present specification. It should be understood that the preceding or following operations are not necessarily performed precisely in order. Rather, the steps may be processed in reverse order or simultaneously. Also, other operations may be added to or removed from these processes.
Fig. 1 is a schematic view of an application scenario of a medical image segmentation system according to some embodiments of the present description.
The medical image segmentation system 100 may implement the methods and/or processes disclosed herein to obtain, with relatively little user interaction, a target medical image that meets the user's segmentation requirements, while also training a medical image segmentation model that matches the user's habits.
As shown in fig. 1, the medical image segmentation system 100 may include a server 110, a network 120, a client 130, a storage device 140, and the like.
In some embodiments, server 110 may be used to process information and/or data related to data processing.
In some embodiments, server 110 may access information and/or material stored in clients 130 and storage devices 140 over network 120. For example, the server 110 may send the image to be modified to the client 130 via the network 120. For another example, the server 110 may receive, via the network 120, a modification of the image to be modified by a user sent by the client 130. In some embodiments, server 110 may be directly connected to clients 130 and/or storage devices 140 to access information and/or material stored therein. For example, the server 110 may retrieve the medical image to be segmented directly from the storage device 140. For another example, the server 110 may save the target medical image to the storage device 140.
In some embodiments, the server 110 may be a stand-alone server or a server group. The server group may be centralized or distributed (e.g., the server 110 may be a distributed system). In some embodiments, the server 110 may be local or remote. In some embodiments, the server 110 may be implemented on a cloud platform. For example, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, or the like, or any combination thereof.
In some embodiments, the server 110 may include a processor 112. The processor 112 may process and perform one or more of the functions described herein. For example, the processor 112 may segment the medical image to be segmented and acquire a first image. For another example, the processor 112 may acquire a target medical image based on the image to be modified. For another example, the processor 112 may also update parameters of the medical image segmentation model.
In some embodiments, the processor 112 may include one or more sub-processors (e.g., single-core or multi-core processing devices). By way of example only, the processor 112 may include a Central Processing Unit (CPU), an Application-Specific Integrated Circuit (ASIC), an Application-Specific Instruction-set Processor (ASIP), a Graphics Processing Unit (GPU), a Physics Processing Unit (PPU), a Digital Signal Processor (DSP), a Field-Programmable Gate Array (FPGA), a Programmable Logic Device (PLD), a controller, a microcontroller unit, a Reduced Instruction Set Computer (RISC), a microprocessor, or the like, or any combination thereof.
The network 120 may facilitate exchange of data and/or information, which may include medical images to be segmented, first images, at least one modification to the first images, target medical images, and the like. In some embodiments, one or more components in system 100 (e.g., server 110, client 130, storage device 140) may send data and/or information to other components in system 100 over network 120. For example, the client 130 may send at least one modification to the first image to the server 110 over the network 120. In some embodiments, network 120 may be any type of wired or wireless network. For example, the network 120 may include a cable network, a wired network, a fiber optic network, a telecommunications network, an intranet, the Internet, a Local Area Network (LAN), a Wide Area Network (WAN), a Wireless Local Area Network (WLAN), a Metropolitan Area Network (MAN), a Public Switched Telephone Network (PSTN), a Bluetooth network, a ZigBee network, a Near Field Communication (NFC) network, and the like, or any combination thereof. In some embodiments, network 120 may include one or more network ingress and egress points. For example, network 120 may include wired or wireless network access points, such as base station 120-1 and/or Internet switching point 120-2, through which one or more components of system 100 may connect to network 120 to exchange data and/or information.
The client 130 may be any type of device having information receiving and/or transmitting capabilities. The user 150 may interact with the server 110 through the client 130. For example, the user 150 may receive the segmentation result of the medical image to be segmented through the client 130. For another example, the user 150 may send the medical image to be segmented and/or a modification of its segmentation result through the client 130. In some embodiments, the user 150 may be a hospital or a doctor at a hospital. In some embodiments, the client 130 may be a device with image display/annotation/modification functionality. For example, the client 130 may include a cell phone 130-1, a tablet computer 130-2, a personal computer 130-3, and other electronic devices.
Storage device 140 may be used to store data and/or instructions. For example, the storage device 140 may store medical images to be segmented, target medical images, medical image segmentation models, and the like. For another example, the storage device 140 may store algorithmic instructions to perform one or more functions. Storage device 140 may include one or more storage components, each of which may be a separate device or part of another device. In some embodiments, the storage device 140 may include Random Access Memory (RAM), Read-Only Memory (ROM), mass storage, removable memory, volatile read-write memory, and the like, or any combination thereof. By way of example, mass storage may include magnetic disks, optical disks, solid state disks, and the like. In some embodiments, the storage device 140 may be implemented on a cloud platform. For example only, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an internal cloud, a multi-layer cloud, or the like, or any combination thereof. Data refers to a digitized representation of information and may include various types such as binary data, text data, image data, video data, and the like. Instructions refer to programs that control a device or apparatus to perform a particular function.
In some embodiments, storage device 140 may be connected to network 120 to communicate with one or more components of system 100 (e.g., server 110, client 130, etc.). One or more components of system 100 may access materials or instructions stored in storage device 140 via network 120. In some embodiments, the storage device 140 may be directly connected to or in communication with one or more components in the system 100 (e.g., the server 110, the client 130). In some embodiments, the storage device 140 may be part of the server 110.
Fig. 2 is an exemplary block diagram of a server shown in accordance with some embodiments of the present description.
In some embodiments, the modules 200 of the server 110 may include a pre-segmentation module 210 and a target medical image acquisition module 220.
The pre-segmentation module 210 is configured to acquire a first image based on the medical image to be segmented.
In some embodiments, the pre-segmentation module 210 is further configured to pre-segment the medical image to be segmented, and obtain a third image; judging whether the third image meets a second preset condition or not; if yes, outputting the third image as a target medical image; otherwise, the third image is acquired as the first image.
In some embodiments, pre-segmentation of the medical image to be segmented is performed by a pre-segmentation model.
For more description of the pre-segmentation module 210, see step 410, which is not repeated here.
The target medical image acquisition module 220 is configured to perform a plurality of iterative processes with respect to the first image as the image to be modified until the target medical image is acquired. In some embodiments, the target medical image acquisition module 220 includes a modification receiving module 222, an image segmentation module 224, and an output module 226.
The modification receiving module 222 is configured to send the image to be modified to the client, and receive at least one modification of the image to be modified from the client. For more description of the modification receiving module 222, see step 422, which is not described in detail herein.
The image segmentation module 224 is configured to input at least one of the medical image to be segmented, the image to be modified, and the at least one modification of the image to be modified into the medical image segmentation model, and output a second image. For more description of the image segmentation module 224, see step 424, which is not described in detail herein.
The output module 226 is configured to send the second image to the client, and receive from the client the user's judgment of whether the second image meets a first preset condition; if yes, output the second image as the target medical image, and update the medical image segmentation model based on the target medical image; otherwise, take the second image as a new image to be modified.
In some embodiments, the output module is further configured to add the medical image to be segmented and the first image, as a training sample, together with the target medical image as its label, into a training sample set for updating the medical image segmentation model, and to update the parameters of the medical image segmentation model based on the training sample set.
In some embodiments, the training sample further comprises: the at least one modification of the image to be modified made by the user in at least one iteration.
In some embodiments, the parameters of the medical image segmentation model include parameters characterizing user habits. In some embodiments, the parameters of the medical image segmentation model may also include, for example, the model network architecture, neuron weights, loss functions, and the like; the present embodiment is not limited thereto. By optimizing these parameters, the medical image segmentation model can better conform to the user's delineation habits.
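As a rough illustration of how one accepted interaction round might be added to the training sample set described above, the following sketch uses an assumed sample layout; the patent does not specify a concrete data structure.

```python
# Hypothetical sketch (layout assumed, not specified by the patent): the
# medical image to be segmented, the first image, and the user's modifications
# form the training sample, and the accepted target medical image is its label.
def add_training_sample(sample_set, to_segment, first_image, modifications,
                        target_image):
    sample_set.append({
        "inputs": {
            "medical_image": to_segment,
            "first_image": first_image,
            "modifications": modifications,   # user edits are also an input
        },
        "label": target_image,                # accepted target medical image
    })
    return sample_set
```

With samples accumulated this way, the model's parameters could then be fine-tuned on the enlarged set.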
For more description of the output module 226, see step 426, which is not described in detail herein.
Fig. 3 is an exemplary block diagram of a client shown in accordance with some embodiments of the present description.
In some embodiments, the modules 300 of the client 130 may include an image-to-be-modified receiving module 310 and an iteration module 320.
The image to be modified receiving module 310 is configured to receive the image to be modified from the server. For more description of the image receiving module 310 to be modified, see step 510, which is not described in detail herein.
An iteration module 320, configured to perform a plurality of iteration processes based on the image to be modified until the target medical image is acquired. In some embodiments, the iteration module 320 includes a modification send module 322, a second image receive module 324, and a decision module 326.
The modification transmitting module 322 is configured to obtain at least one modification of the image to be modified by the user, and transmit the at least one modification to the server. For further description of the modification of the transmission module 322, reference is made to step 522, which is not described in detail herein.
A second image receiving module 324 for receiving the second image from the server. For more description of the second image receiving module 324, see step 524, which is not described herein.
A judging module 326, configured to obtain the user's judgment of whether the second image meets the first preset condition, and send the judgment to the server, so that the server performs the following processing based on the judgment: if yes, outputting the second image as the target medical image, and updating the medical image segmentation model based on the target medical image; otherwise, taking the second image as a new image to be modified. For more description of the judging module 326, see step 526, which is not described in detail herein.
FIG. 4 is an exemplary flow chart of a user interaction based medical image segmentation method applied to a server according to some embodiments of the present description. As shown in fig. 4, the method 400 may include:
in step 410, a first image is acquired based on the medical image to be segmented.
Specifically, step 410 may be performed by pre-segmentation module 210.
A medical image is an image of internal tissue acquired non-invasively from a target object for medical treatment or medical research. In some embodiments, the target object may be a human body, an organ, a body part, an object, a lesion site, a tumor, or the like.
The target object region is the image region of the target object in the medical image. Accordingly, the background region is the image region other than the target object in the medical image. For example, if the medical image is an image of a patient's brain, the target object region is the image of one or more diseased tissues in the patient's brain, and the background region may be the image other than the one or more diseased tissues in the patient's brain image.
The medical image to be segmented is a medical image that needs segmentation processing. The segmentation process distinguishes a target object region and a background region in the medical image to be segmented.
It will be appreciated that there is a boundary between the target object region and the background region in the medical image to be segmented. In some embodiments, the segmentation result may be represented by delineating a boundary between the target object region and the background region in the medical image to be segmented.
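As a minimal sketch of representing a segmentation result by its boundary, the following takes boundary pixels of a binary mask to be mask pixels with at least one 4-neighbour outside the mask. This particular representation is an assumption for illustration; the patent does not prescribe one.

```python
import numpy as np

# Illustrative sketch (not prescribed by the patent): delineate the boundary
# between target object region and background from a binary segmentation mask.
def mask_boundary(mask):
    padded = np.pad(mask.astype(bool), 1)  # pad with background (False)
    # A pixel is interior if all four of its neighbours are also in the mask.
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    return mask.astype(bool) & ~interior   # boundary = mask minus interior
```

For a solid 4x4 mask, this yields the outer ring of 12 pixels.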
In some embodiments, the medical image to be segmented may include, but is not limited to, a combination of one or more of an X-ray image, a Computed Tomography (CT) image, a Positron Emission Tomography (PET) image, a Single Photon Emission Computed Tomography (SPECT) image, a Magnetic Resonance Imaging (MRI) image, an Ultrasound (US) image, a Digital Subtraction Angiography (DSA) image, a Magnetic Resonance Angiography (MRA) image, a Time-of-Flight Magnetic Resonance Image (TOF-MRI), a Magnetoencephalography (MEG) image, and the like.
In some embodiments, the format of the medical image to be segmented may include the Joint Photographic Experts Group (JPEG) image format, the Tagged Image File Format (TIFF), the Graphics Interchange Format (GIF), the Kodak FlashPix (FPX) image format, the Digital Imaging and Communications in Medicine (DICOM) format, and the like.
In some embodiments, the medical image to be segmented may be a two-dimensional (2D) image or a three-dimensional (3D) image. In some embodiments, the three-dimensional image may be composed of a series of two-dimensional slices or layers.
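As noted above, a three-dimensional image can be composed of a series of two-dimensional slices; with arrays this is a simple stack. The shapes below are arbitrary example values.

```python
import numpy as np

# A 3D medical volume represented as a stack of 2D slices.
slices = [np.zeros((4, 4)) for _ in range(3)]  # three 4x4 2D slices
volume = np.stack(slices, axis=0)              # 3D volume with axes (z, y, x)
```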
The third image is the medical image obtained after the medical image to be segmented undergoes pre-segmentation processing. It will be appreciated that the boundary between the target object region and the background region is initially delineated in the third image by the pre-segmentation model. The type and format of the third image may refer to those of the medical image to be segmented, and are not described here again.
In some embodiments, pre-segmentation of the medical image to be segmented may be performed by a pre-segmentation model. The pre-segmentation module inputs the medical image to be segmented into a pre-segmentation model and outputs a third image.
The pre-segmentation model is a model for pre-segmenting a medical image to be segmented. In some embodiments, the pre-segmentation model is a pre-trained model.
In some embodiments, the pre-segmentation model may be a traditional segmentation algorithm model. In some embodiments, conventional segmentation algorithms may include, but are not limited to, a combination of one or more of thresholding, region growing, edge detection, and the like.
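A minimal thresholding sketch, one of the traditional pre-segmentation algorithms listed above; the threshold value below is an arbitrary example, not a value from the patent.

```python
import numpy as np

# Thresholding: label pixels above an intensity threshold as target object.
def threshold_segment(image, threshold):
    """Binary mask: 1 for target-object pixels, 0 for background."""
    return (image > threshold).astype(np.uint8)

img = np.array([[10, 200], [30, 180]])   # toy 2x2 intensity image
mask = threshold_segment(img, 100)
```

Region growing and edge detection would follow the same interface: image in, binary mask (or boundary) out.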
In some embodiments, the pre-segmentation model may also be an image segmentation algorithm model in conjunction with a particular tool. In some embodiments, the tool-specific image segmentation algorithm may include, but is not limited to, a combination of one or more of genetic algorithms, wavelet analysis, wavelet transformation, active contour models, and the like.
In some embodiments, the pre-segmentation model may also be a neural network model. In some embodiments, the pre-segmentation model may include, but is not limited to, a convolutional neural network (Convolutional Neural Network, CNN) model, a Long Short-Term Memory (LSTM) model, a Bi-directional Long Short-Term Memory (Bi-LSTM) model, and the like.
In some embodiments, the pre-segmentation of the medical image to be segmented may also be performed manually or in other ways; the present embodiment is not limited thereto.
It can be understood that the pre-segmentation performs only a preliminary segmentation of the medical image to be segmented. When the distribution of the target object region and the background region in the medical image to be segmented is simple and the contour is clear, the third image output by the pre-segmentation can meet the segmentation requirement; when the distribution of the target object region and the background region is complex and the contour is fuzzy, further segmentation processing of the third image based on user interaction is needed. Here, user interaction refers to the user participating in the further segmentation processing of the third image.
As shown in fig. 7, the pre-segmentation module further determines whether the third image satisfies a second preset condition; if yes, outputting the third image as a target medical image; otherwise, the third image is acquired as the first image.
The second preset condition is a condition under which the third image satisfies the segmentation requirement. It will be appreciated that the delineation by the pre-segmentation model in the third image may be erroneous and thus fail to meet the segmentation requirement. For example, part of the target object region may be delineated as background; conversely, part of the background region may be delineated as the target object. Therefore, the pre-segmentation module may determine whether the third image meets the segmentation requirement based on the second preset condition.
In some embodiments, the second preset condition may be that the user determines that the third image meets the segmentation requirement.
As previously described, the pre-segmentation model may be a pre-trained model. In some embodiments, the pre-segmentation model may be trained based on a delineation gold standard corresponding to the first image.
In some embodiments of training the pre-segmentation model, the third image may be evaluated by a similarity metric function. The second preset condition may be that the similarity metric function value between the delineation result of the third image and the delineation gold standard corresponding to the third image is greater than a second threshold. The similarity metric function is an evaluation index of the closeness between the delineation result of the third image and its corresponding delineation gold standard. In some embodiments, the similarity metric function value may be a numerical value, where a larger value indicates that the delineation result of the third image is closer to the corresponding gold standard. In some embodiments, the similarity metric function may include, but is not limited to, at least one of the Dice similarity coefficient, the intersection-over-union (IOU) coefficient, the Hausdorff distance, cross entropy, and the like, or a combination thereof. For example, if the second threshold is 80% and the similarity metric function value is 70%, the third image does not satisfy the second preset condition.
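As an illustration, the Dice and IOU coefficients mentioned above can be computed for binary delineation masks as follows. This is a hedged sketch with our own function names; the patent does not fix the metric's implementation:

```python
import numpy as np

def dice_coefficient(pred, gold):
    """Dice similarity between two binary masks (1.0 means identical)."""
    pred, gold = np.asarray(pred, bool), np.asarray(gold, bool)
    inter = np.logical_and(pred, gold).sum()
    denom = pred.sum() + gold.sum()
    return 2.0 * inter / denom if denom else 1.0

def iou(pred, gold):
    """Intersection over union between two binary masks."""
    pred, gold = np.asarray(pred, bool), np.asarray(gold, bool)
    inter = np.logical_and(pred, gold).sum()
    union = np.logical_or(pred, gold).sum()
    return inter / union if union else 1.0

def meets_second_preset_condition(pred, gold, second_threshold=0.8):
    """True when the pre-segmentation is close enough to the gold standard."""
    return dice_coefficient(pred, gold) > second_threshold
```

With the 80% second threshold from the example above, a delineation scoring 70% on the metric would return `False`.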
The target medical image is a medical image that meets the user segmentation requirements, i.e. a medical image that does not require further segmentation processing based on user interactions. The type and format of the target medical image may refer to the medical image to be segmented, and will not be described herein.
The first image is a medical image that does not meet the segmentation requirements. The type and format of the first image may refer to the medical image to be segmented, and will not be described herein.
In some embodiments, the server may determine whether the third image satisfies the second preset condition through the determination model. In some embodiments, the determination model may include, but is not limited to, a support vector machine model, a logistic regression model, a Naive Bayes classification model, a Gaussian Bayes classification model, a decision tree model, a random forest model, a KNN classification model, a neural network model, or the like.
In some embodiments, the server may also send the third image to the client, and determine whether the third image satisfies the second preset condition based on a user determination result received from the client.
As shown in fig. 8, after the pre-segmentation model performs only a rough segmentation of the medical image to be segmented "fig. 8a", a third image "fig. 8b" is obtained; the third image "fig. 8b" does not meet the second preset condition, so the third image "fig. 8b" is taken as the first image.
In step 420, the first image is used as the image to be modified, and a plurality of iterative processes are performed until the target medical image is obtained.
Specifically, step 420 may be performed by the target medical image acquisition module 220, and the iterative process includes:
step 422, the image to be modified is sent to the client, and at least one modification of the first image by the user is received from the client.
Specifically, step 422 may be performed by modification receiving module 222.
The image to be modified is a medical image that requires further segmentation processing based on user interactions.
In the first iteration process, the image to be modified is the first image. As previously described, there are errors in the delineation of the first image by the pre-segmentation model, so the first image requires modification.
In a subsequent iteration process, the image to be modified is the second image. For a detailed description of the second image, see step 424, which is not repeated here.
The modification refers to the correction of the delineation error of the boundary between the target object area and the background area in the image to be modified by the user. It will be appreciated that the foregoing user interactions may be effected by a user modifying an image to be modified. In some embodiments, there may be multiple points in the image to be modified where the delineation of the boundary between the target object region and the background region is wrong, and the modification may be one or more of them.
In some embodiments, the modification may include, but is not limited to, marking (e.g., box-selecting) a region with a delineation error, erasing an erroneously delineated boundary, delineating the correct boundary, and the like. Marking a region with a delineation error means that the user may mark a target object region that has been delineated as background, or mark a background region that has been delineated as the target object. Erasing an erroneously delineated boundary and delineating the correct boundary are direct corrections of the delineated boundary by the user.
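One plausible way to hand such a modification to the downstream model is as an extra channel aligned with the image. The encoding below (+1 inside a box marking "this belongs to the target object", -1 for "this belongs to the background") is an illustrative assumption of ours, not something the patent specifies:

```python
import numpy as np

def box_modification_channel(shape, box, positive=True):
    """Encode a user's box-selection modification as an extra input channel.

    box = (row0, col0, row1, col1), half-open. +1 marks a region the user
    says belongs to the target object, -1 a region belonging to background.
    """
    channel = np.zeros(shape, dtype=np.float32)
    r0, c0, r1, c1 = box
    channel[r0:r1, c0:c1] = 1.0 if positive else -1.0
    return channel
```

Several such channels (one per modification) could then be stacked with the image to be modified before being fed to the segmentation model.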
The user is the subject of modifying the image to be modified at the client. In some embodiments, the user may be a hospital, department of a hospital, or doctor. It will be appreciated that the modification of the image to be modified is different from user to user.
In some embodiments, modification receiving module 222 may send the image to be modified to the client over network 120.
For a detailed description of the user modifying at least one of the images to be modified, see step 520, which is not described in detail herein.
Step 424, inputting the medical image to be segmented, the image to be modified, and the at least one modification into the medical image segmentation model, and outputting a second image.
Specifically, step 424 may be performed by the image segmentation module 224.
The second image is a medical image obtained after the medical image segmentation model performs further segmentation processing on the image to be modified. As shown in fig. 7, the input of the medical image segmentation model includes the medical image to be segmented, the image to be modified, and the at least one modification, and the output is the second image.
In some embodiments, the medical image segmentation model may include an image block segmentation layer, a feature extraction layer, a fusion layer, and an output layer.
Specifically, the image block segmentation layer may segment a plurality of image blocks from the medical image to be segmented and the image to be modified, respectively, through multi-scale sliding windows (Sliding-window), Selective Search, a neural network, or other methods; the feature extraction layer may extract a feature vector of each image block and a feature vector of the modifications contained in each image block; further, the fusion layer fuses the feature vector of each image block with the modification feature vector contained in that image block into a probability corresponding to each image block, where the probability may represent the probability that the image block belongs to the target object region (or the background region); the output layer distinguishes the target object region and the background region on the medical image to be segmented based on the probability of each image block and a preset threshold value, and delineates the boundary between the target object region and the background region. The medical image to be segmented with this boundary delineated is the second image.
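The four layers described above can be sketched end to end as follows. The toy per-block scorer stands in for the feature-extraction and fusion layers, whose real form the patent leaves to a neural network; everything else here is an illustrative simplification:

```python
import numpy as np

def sliding_window_segment(image, block_prob, block=8, threshold=0.5):
    """Sketch of the four layers: split the image into blocks (image block
    segmentation layer), score each block with a probability of belonging
    to the target object (feature extraction + fusion layers), then label
    blocks above the preset threshold as object (output layer)."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    for r in range(0, h, block):
        for c in range(0, w, block):
            patch = image[r:r + block, c:c + block]
            if block_prob(patch) > threshold:
                mask[r:r + block, c:c + block] = True
    return mask

def toy_prob(patch):
    """Toy scorer: bright blocks are assumed to belong to the target object."""
    return patch.mean() / 255.0
```

The boundary of the resulting boolean mask corresponds to the delineated boundary between the target object region and the background region in the second image.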
In some embodiments, the medical image segmentation model may include, but is not limited to, a combination of one or more of a Fully Convolutional Network (FCN) model, a Visual Geometry Group network (VGG Net) model, an Efficient Neural Network (ENet) model, a Full-Resolution Residual Network (FRRN) model, a Mask Region-based Convolutional Neural Network (Mask R-CNN) model, a Multi-Dimensional Recurrent Neural Network (MDRNN) model, and the like.
Continuing with fig. 8 as an example, the pre-segmentation model delineates the right part of the target object region in the image to be modified of the first iteration (i.e., the first image "fig. 8b") as background, and the user box-selects the erroneously delineated region on "fig. 8b", yielding at least one modification of the image to be modified, namely the box-selection modification in "fig. 8c". During the first iteration, the image segmentation module outputs the second image "fig. 8d" based on the medical image to be segmented (i.e., "fig. 8a"), the image to be modified (i.e., the first image "fig. 8b"), and the at least one modification (i.e., the box-selection modification in "fig. 8c"). In this example, the user selects the erroneous region with a rectangular box, but the invention is not so limited; a region of any shape may be used to select the erroneous region.
Step 426, sending the second image to the client, and receiving from the client the judgment of whether the second image meets the first preset condition; if yes, outputting the second image as the target medical image, and updating the medical image segmentation model based on the target medical image; otherwise, taking the second image as the new image to be modified.
In particular, step 426 may be performed by output module 226.
In some embodiments, the output module 226 may send the second image to the client via the network 130 and receive a user determination from the client as to whether the second image satisfies the first preset condition.
As previously mentioned, the target medical image is a medical image that meets the segmentation requirements, i.e. a medical image that does not require further segmentation processing based on user interaction; the image to be modified is a medical image that requires further segmentation processing based on user interactions.
The first preset condition is a condition under which the second image satisfies the user segmentation requirement. It will be appreciated that the delineation by the medical image segmentation model in the second image may still be erroneous, or may not conform to the segmentation habits of the specific user, and thus may not meet the segmentation requirement. Accordingly, the server may transmit the second image to the client and determine whether the second image satisfies the first preset condition based on the user judgment received from the client.
The detailed description of the user determining whether the second image meets the first preset condition based on the client may refer to step 522, which is not described herein.
In some alternative embodiments, the server may also determine whether the second image satisfies the first preset condition through the determination model. The detailed description of the judgment model may refer to step 410, and will not be described herein.
Continuing with fig. 8 as an example, if it is judged that the second image "fig. 8d" does not meet the first preset condition, the second image "fig. 8d" is taken as the new image to be modified in the second iteration process.
Further, the server iteratively performs step 420 until the target medical image is acquired.
Continuing with fig. 8 as an example, the server iteratively executes step 420 again based on the new image to be modified "fig. 8d", obtains at least one modification of the image to be modified "fig. 8d" (i.e., the box-selection modification in "fig. 8e"), inputs the medical image to be segmented (i.e., "fig. 8a"), the image to be modified (i.e., "fig. 8d"), and the at least one modification (i.e., the box-selection modification in "fig. 8e") into the medical image segmentation model, and outputs a new second image "fig. 8f". If it is judged that "fig. 8f" satisfies the first preset condition, "fig. 8f" is output as the target medical image.
Further, the server updates parameters of the medical image segmentation model based on the target medical image.
In some embodiments, the medical image segmentation model is a model that is pre-trained based on an initial training sample set. In some embodiments, the initial training sample set includes at least one raw medical image and at least one standard medical segmentation image corresponding to the raw medical image.
The original medical image is a medical image that has not been segmented. In some embodiments, the raw medical image may be acquired by reading data from a storage device, invoking an associated interface, or otherwise. In some embodiments, the raw medical images may be acquired from a large sample library of different users. Such as a medical image database, etc.
The standard medical segmentation image is a medical image which is acquired after the original medical image is segmented and accords with the segmentation standard. In some embodiments, the standard medical segmentation image may be obtained by segmentation of the original medical image by a different user. In some embodiments, the standard medical segmentation image may be acquired by reading data from a storage device, invoking a correlation interface, or otherwise.
It will be appreciated that medical image segmentation models obtained based on training of an initial training sample set may be adapted to the segmentation requirements of a general user, but have poor adaptability to the specific segmentation requirements of a specific user. Therefore, the model updating module can further train and update parameters of the medical image segmentation model based on the training sample set acquired by the specific user interaction, and improve the adaptability of the medical image segmentation model to the specific segmentation requirement of the specific user.
In some embodiments, the parameters may include parameters that characterize user habits. For example, if a first hospital's habit when segmenting heart images is to take only the left ventricle as the target object, the updated parameters can change the general way the medical image segmentation model extracts heart image features, so that the updated model's heart segmentation better conforms to the first hospital's requirements.
In some embodiments, the parameters of the medical image segmentation model may also include, for example, the model network architecture, neuron weights, loss functions, and the like; the present embodiment is not limited thereto.
In some embodiments, parameters of the image segmentation model may be further updated based on the training sample set. Specifically, a training sample with a label is input into an image segmentation model, and parameters of the image segmentation model are updated through training.
Details of the training samples and labels, and detailed descriptions of the parameters for updating the image segmentation model can be found in fig. 6, and will not be described here again.
Fig. 5 is an exemplary flow chart of a medical image segmentation method based on user interaction applied to a client, according to some embodiments of the present description. As shown in fig. 5, the method 500 may include:
step 510, receiving an image to be modified from a server.
Specifically, step 510 may be performed by the to-be-modified receiving module 310.
In some embodiments, the to-be-modified receiving module 310 may receive the to-be-modified image from the server through the network 130.
As previously mentioned, the image to be modified is a medical image that requires further segmentation processing based on user interaction. In the first iteration process, the image to be modified is the first image; in subsequent iteration processes, the image to be modified is the second image.
For a detailed description of the medical image that requires further segmentation processing based on user interaction, see step 410, which is not repeated here.
Step 520, based on the image to be modified, performing a number of iterative processes until the target medical image is acquired.
Specifically, step 520 may be performed by iteration module 320, which includes:
Step 522, at least one modification of the image to be modified by the user is obtained, and the at least one modification is sent to the server.
Specifically, step 522 may be performed by modification transmission module 322.
As described above, the user is the subject who modifies the image to be modified at the client. Specifically, the user modifies the image to be modified by touching or clicking the screen of the client.
As previously described, modification refers to the user correcting a delineation error of the boundary between the target object region and the background region in the image to be modified. In some embodiments, the modification may include, but is not limited to, annotating (e.g., box) areas of delineating errors, erasing boundaries of erroneous delineations, delineating correct boundaries, and the like.
In some embodiments, the client may obtain the at least one modification by detecting a touch or click operation of the screen by the user on the client.
Further, the modification transmitting module 322 may transmit at least one modification to the server via the network 130.
Step 524, a second image is received from the server.
Specifically, step 524 may be performed by the second image receiving module 324.
The second image is a medical image obtained after the medical image segmentation model performs further segmentation processing on the image to be modified. The related description of acquiring the second image is referred to in fig. 4, and will not be described herein.
In some embodiments, the second image receiving module 324 may receive the second image from the server over the network 130.
Step 526, obtaining a determination of whether the second image meets the first preset condition by the user, and sending the determination to the server.
Specifically, step 526 may be performed by the determination module 326.
As described above, the first preset condition is a condition that the second image satisfies the user division requirement. Therefore, the user can judge whether the second image received by the client meets the requirement of user segmentation. Specifically, the client may obtain the judgment "yes" or "no" of the user through operations such as touch, clicking, or text input of the user on the client.
In some embodiments, the determination module 326 may send the determination to the server over the network 130.
Further, the server performs the following processing based on the judgment: if yes, the second image is taken as a target medical image to be output, and the medical image segmentation model is updated based on the target medical image; otherwise, the second image is used as a new image to be modified.
In some embodiments, the client and server may be located in the same device, which may perform the methods of fig. 2 and 3.
In summary, as shown in fig. 7, regardless of which subject executes the image segmentation steps, an exemplary flow 700 of the medical image segmentation method based on user interaction includes: pre-segmenting the image to be segmented to obtain a third image. If the third image meets the second preset condition, the third image is directly output as the target medical image; if the third image does not meet the second preset condition, the third image is taken as the first image and enters an iterative process for further segmentation. The iterative process comprises: the user modifies the image to be modified, and the image segmentation model obtains a second image based on the previously acquired image to be segmented, the image to be modified, and the user's modifications to the image to be modified. In the first iteration process, the image to be modified is the first image. If the user judges that the second image does not meet the first preset condition (i.e., does not meet the user requirement), the second image is taken as the new image to be modified and the iterative process is performed again; that is, in subsequent iteration processes, the image to be modified is the second image. If the user judges that the second image meets the first preset condition, the second image is output as the target medical image. Further, the image to be segmented, the first image, and the modifications acquired in previous iteration processes may be used as a training sample, and the target medical image may be used as a label, to train the image segmentation model.
FIG. 6 is an exemplary flow chart of updating a medical image segmentation model, shown according to some embodiments of the present description.
In particular, fig. 6 may be performed by the output module 226.
Step 610, taking the medical image to be segmented and the first image as a training sample, taking the target medical image as a label, and adding them to the training sample set for updating the medical image segmentation model.
In some embodiments, the training sample may include a medical image to be segmented and a first image.
Continuing with the example of fig. 8, the training sample includes the medical image to be segmented "fig. 8a" and the first image "fig. 8b".
In some embodiments, the training sample further comprises the at least one modification of the image to be modified by the user. It will be appreciated that during the iterative acquisition of the target medical image, each iteration acquires at least one modification of the image to be modified by the user.
In some embodiments, all modifications may be used as training samples.
Continuing with the example of fig. 8, at least one modification of the image to be modified in the first iteration (i.e., the first image "fig. 8b"), namely the modification in "fig. 8c", and at least one modification of the image to be modified in the second iteration (i.e., the second image "fig. 8d" output by the first iteration), namely the modification in "fig. 8e", may be used as training samples.
By way of example, the training sample may be [medical image to be segmented "fig. 8a", first image "fig. 8b", modification in "fig. 8c", modification in "fig. 8e"].
In some embodiments, some of the modifications may be made as training samples.
For example, if the modification in the first iteration process ("fig. 8c") was a user misoperation, only the modification in "fig. 8e" from the second iteration process may be taken as a training sample.
For example, the training sample may be [medical image to be segmented "fig. 8a", first image "fig. 8b", modification in "fig. 8e"].
The training sample set is a set of training samples and labels for training the medical image segmentation model.
In some embodiments, the training sample set may include training samples acquired based on user interactions. In some embodiments, the training sample set may further comprise the aforementioned initial training sample set. For a detailed description of the initial training sample set, see step 426, which is not repeated here.
As previously mentioned, the target medical image is a medical image that meets the user segmentation requirements. It will be appreciated that the target medical image is the target of modification of the first image in the iterative process.
Continuing with fig. 8 as an example, the medical image to be segmented "fig. 8a", the first image "fig. 8b", and the at least one modification of the image to be modified (i.e., the first image) in the first iteration process (i.e., the modification in "fig. 8c") may be used as one set of training samples, with the target medical image "fig. 8f" as the label: training sample: ["fig. 8a", "fig. 8b", modification in "fig. 8c"] | label: "fig. 8f". Alternatively, the medical image to be segmented "fig. 8a", the first image "fig. 8b", the modification in "fig. 8c" from the first iteration, and the modification in "fig. 8e" of the image to be modified "fig. 8d" from the second iteration may be used as one set of training samples, with the target medical image "fig. 8f" as the label: training sample: ["fig. 8a", "fig. 8b", modification in "fig. 8c", modification in "fig. 8e"] | label: "fig. 8f". Either set is added to the training sample set.
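To make the bookkeeping concrete, here is one way such (training sample, label) pairs might be assembled; the function names and dictionary layout are our own illustration, not the patent's data format:

```python
def build_training_example(image_to_segment, first_image, modifications, target_image):
    """Pack one interaction round into a (training sample, label) pair.

    `modifications` lists the user's modifications across iterations;
    misoperation modifications can simply be omitted from it (cf. step 610).
    """
    sample = {
        "image_to_segment": image_to_segment,
        "first_image": first_image,
        "modifications": list(modifications),
    }
    return sample, target_image

def add_to_training_set(training_set, sample, label):
    """Append the pair to the training sample set used to update the model."""
    training_set.append((sample, label))
    return training_set
```

For the fig. 8 example, `modifications` would hold the box-selections of "fig. 8c" and "fig. 8e" (or only "fig. 8e" when "fig. 8c" was a misoperation), and the label would be "fig. 8f".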
Step 620, updating parameters of the medical image segmentation model based on the training sample set.
As previously mentioned, the parameters may include parameters that characterize the user's modification habits. In some embodiments, the model update module may train the image segmentation model based on a training sample set, thereby updating parameters of the image segmentation model.
In some embodiments, training may be performed by conventional methods based on training samples. For example, training may be performed based on gradient descent methods, newton methods, quasi-newton methods, and the like.
In some embodiments, training ends when the trained model meets a preset condition, for example, that the loss function converges.
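As an illustration of updating parameters by gradient descent until the loss converges, here is a minimal sketch for a toy logistic model. The real model is a segmentation network; this only demonstrates the update-until-convergence loop, and all names are our own:

```python
import numpy as np

def update_parameters(w, X, y, lr=0.1, epochs=500, tol=1e-6):
    """Continue training (fine-tune) a toy logistic model on the new
    interaction-derived samples until the loss converges."""
    prev_loss = np.inf
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))          # predicted probabilities
        loss = -np.mean(y * np.log(p + 1e-12)
                        + (1 - y) * np.log(1 - p + 1e-12))
        if abs(prev_loss - loss) < tol:            # preset condition: convergence
            break
        prev_loss = loss
        w = w - lr * X.T @ (p - y) / len(y)        # gradient descent step
    return w
```

Starting from the pre-trained weights rather than from zero corresponds to fine-tuning the medical image segmentation model on the user's interaction data.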
From the foregoing, it can be seen that the more times a user acquires target medical images using the medical image segmentation model, the more training samples there are in the training sample set, the closer the output of the medical image segmentation model comes to the ideal result of the interacting user, and the higher the accuracy of the updated medical image segmentation model.
Possible benefits of embodiments of the present description include, but are not limited to: (1) based on user interaction, training samples and labels are acquired while the target medical image is acquired, so that the medical image segmentation model does not need to rely on a large number of training samples and standard medical segmentation images to update its parameters, and does not need to be trained separately, improving training efficiency; (2) through multiple user interactions, the medical image segmentation model can learn the segmentation operations of the corresponding user, yielding a model that conforms to that user's habits, so the output target medical image gradually approaches the user's ideal segmentation result, improving the adaptability of the medical image segmentation model; (3) modifications made during the iterative process can be selected as training samples and misoperation modifications excluded, avoiding the influence of misoperation training samples on updating the medical image segmentation model; (4) through pre-segmentation, on the one hand, the target medical image corresponding to a simple medical image to be segmented can be obtained directly, and on the other hand, the subsequent iteration process can converge more quickly, improving the efficiency of the medical image segmentation model. It should be noted that different embodiments may produce different advantages; in different embodiments, the advantages produced may be any one or a combination of the above, or any other possible advantage.
While the basic concepts have been described above, it will be apparent to those skilled in the art that the foregoing detailed disclosure is by way of example only and is not intended to be limiting. Although not explicitly described herein, various modifications, improvements, and adaptations to the present disclosure may occur to those skilled in the art. Such modifications, improvements, and adaptations are suggested in this specification, and therefore are intended to be within the spirit and scope of the exemplary embodiments of this specification.
Meanwhile, the specification uses specific words to describe the embodiments of the specification. Reference to "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic is associated with at least one embodiment of the present description. Thus, it should be emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various positions in this specification are not necessarily referring to the same embodiment. Furthermore, certain features, structures, or characteristics of one or more embodiments of the present description may be combined as suitable.
Furthermore, those skilled in the art will appreciate that the various aspects of the specification can be illustrated and described in terms of several patentable categories or circumstances, including any novel and useful process, machine, product, or material, or any novel and useful improvement thereof. Accordingly, aspects of the present description may be performed entirely by hardware, entirely by software (including firmware, resident software, micro-code, etc.), or by a combination of hardware and software. The above hardware or software may be referred to as a "data block," "module," "engine," "unit," "component," or "system." Furthermore, aspects of the present description may take the form of a computer product, comprising computer-readable program code, embodied in one or more computer-readable media.
A computer storage medium may contain a propagated data signal with the computer program code embodied therein, for example, in baseband or as part of a carrier wave. The propagated signal may take any of a variety of forms, including electromagnetic, optical, etc., or any suitable combination thereof. A computer storage medium may be any computer-readable medium that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer storage medium may be transmitted using any appropriate medium, including radio, cable, fiber-optic cable, RF, or the like, or any combination of the foregoing.
Computer program code for carrying out operations for portions of this specification may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, or Python; a conventional procedural programming language such as C, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, or ABAP; a dynamic programming language such as Python, Ruby, or Groovy; or other programming languages. The program code may execute entirely on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or processing device. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, such as a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet), or the software may be offered as a service in a cloud computing environment, such as software as a service (SaaS).
Furthermore, the recited order of processing elements or sequences, the use of numbers or letters, or the use of other designations in this specification is not intended to limit the order of the described processes and methods unless expressly recited in the claims. While various presently useful embodiments have been discussed in the foregoing disclosure by way of example, it is to be understood that such detail is merely illustrative and that the appended claims are not limited to the disclosed embodiments but, on the contrary, are intended to cover all modifications and equivalent arrangements within the spirit and scope of the disclosed embodiments. For example, although the system components described above may be implemented by hardware devices, they may also be implemented by software-only solutions, such as installing the described system on an existing processing device or mobile device.
Similarly, it should be appreciated that, in order to simplify the presentation of this disclosure and thereby aid in the understanding of one or more embodiments, various features are sometimes grouped together in a single embodiment, figure, or description thereof. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed subject matter requires more features than are expressly recited in each claim. Rather, claimed subject matter may lie in less than all features of a single disclosed embodiment.
In some embodiments, numbers describing quantities of components or attributes are used; it should be understood that such numbers used in the description of the embodiments are, in some examples, modified by the terms "about," "approximately," or "substantially." Unless otherwise stated, "about," "approximately," or "substantially" indicates that the number allows a variation of ±20%. Accordingly, in some embodiments, the numerical parameters set forth in the specification and claims are approximations that may vary depending upon the desired properties sought by a particular embodiment. In some embodiments, the numerical parameters should be construed in light of the number of reported significant digits and by applying ordinary rounding techniques. Notwithstanding that the numerical ranges and parameters used in some embodiments to confirm the breadth of a range are approximations, in particular embodiments such numerical values are set forth as precisely as practicable.
Each patent, patent application, patent application publication, and other material, such as an article, book, specification, publication, or document, referred to in this specification is hereby incorporated by reference in its entirety, except for any application history document that is inconsistent with or in conflict with the content of this specification, and except for any document (whether currently or later appended to this specification) that limits the broadest scope of the claims of this specification. It is noted that if the description, definition, and/or use of a term in material incorporated into this specification is inconsistent with or conflicts with what is stated herein, the description, definition, and/or use of the term in this specification shall control.
Finally, it should be understood that the embodiments described in this specification are merely illustrative of the principles of the embodiments of this specification. Other variations are possible within the scope of this description. Thus, by way of example, and not limitation, alternative configurations of embodiments of the present specification may be considered as consistent with the teachings of the present specification. Accordingly, the embodiments of the present specification are not limited to only the embodiments explicitly described and depicted in the present specification.

Claims (19)

1. A medical image segmentation method based on user interaction, wherein the method is applied to a server and comprises the following steps:
acquiring a first image based on the medical image to be segmented;
taking the first image as an image to be modified, and executing a plurality of iterative processes until a target medical image is acquired, wherein the iterative processes comprise:
sending the image to be modified to a client, and receiving, from the client, at least one modification of the image to be modified by a user, wherein the modification comprises delineating an erroneous region;
inputting the medical image to be segmented, the image to be modified, and the at least one modification of the image to be modified into a medical image segmentation model, and outputting a second image;
sending the second image to the client, and receiving, from the client, the user's judgment of whether the second image meets a first preset condition; if yes, outputting the second image as the target medical image and updating the medical image segmentation model based on the target medical image; otherwise, taking the second image as the new image to be modified.
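The iterative process recited in claim 1 can be sketched as a small control loop. The sketch below is illustrative only: `segment_fn`, `get_user_edits`, and `accepted` are hypothetical stand-ins for the segmentation model, the client-side editing step, and the first preset condition, none of which the claims tie to a particular implementation.

```python
# Illustrative sketch of the iterative process in claim 1. The names
# segment_fn, get_user_edits, and accepted are hypothetical stand-ins;
# the claims do not fix any implementation.

def iterative_segmentation(image, first_image, segment_fn, get_user_edits,
                           accepted, max_rounds=10):
    """Refine `first_image` until the user accepts it or rounds run out.

    segment_fn(image, mask, edits) -> refined mask (the "second image")
    get_user_edits(mask)           -> user's delineations of erroneous regions
    accepted(mask)                 -> True once the first preset condition holds
    """
    mask = first_image  # the image to be modified
    for _ in range(max_rounds):
        edits = get_user_edits(mask)           # received from the client
        mask = segment_fn(image, mask, edits)  # model outputs a second image
        if accepted(mask):                     # user's judgment
            return mask                        # the target medical image
    return mask  # stop after max_rounds without acceptance
```

With a toy `segment_fn` that merely increments a counter, the loop returns as soon as the acceptance predicate first holds.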
2. The method of claim 1, wherein the acquiring a first image based on the medical image to be segmented comprises:
pre-segmenting the medical image to be segmented to obtain a third image;
judging whether the third image meets a second preset condition;
if yes, outputting the third image as the target medical image;
otherwise, taking the third image as the first image.
3. The method of claim 1, wherein the updating the medical image segmentation model based on the target medical image comprises:
taking the medical image to be segmented and the first image as a training sample, taking the target medical image as a label, and adding them to a training sample set used for updating the medical image segmentation model;
based on the training sample set, parameters of the medical image segmentation model are updated.
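Claims 3 to 5 describe folding each accepted result back into a training set. Below is a minimal sketch of that bookkeeping, with hypothetical field names; the claims do not prescribe a sample format or storage layout.

```python
# Sketch of the training-set bookkeeping in claims 3-5: each accepted
# (target) image becomes a label for the raw image and the pre-segmented
# first image. The sample layout below is hypothetical.

training_set = []

def add_sample(image_to_segment, first_image, target_image, user_edits=None):
    """Append one (inputs, label) pair for a later model update.

    Per claim 4, the user's modifications of the first image may also
    be stored as part of the training sample.
    """
    sample = {
        "image": image_to_segment,    # medical image to be segmented
        "first_image": first_image,   # pre-segmentation result
        "edits": user_edits or [],    # optional user delineations (claim 4)
        "label": target_image,        # user-approved target medical image
    }
    training_set.append(sample)
    return sample
```

An actual update step would then fine-tune the model's parameters on `training_set`; per claim 5, those parameters may include ones characterizing the individual user's habits.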
4. The method of claim 3, wherein the training samples further comprise: the at least one modification of the first image by the user.
5. The method of claim 3, wherein the parameters of the medical image segmentation model comprise parameters characterizing the user's habits.
6. A medical image segmentation system based on user interaction, the system implemented on a server, the system comprising:
the pre-segmentation module is used for acquiring a first image based on the medical image to be segmented;
a target medical image acquisition module, configured to perform a plurality of iterative processes with the first image as an image to be modified until a target medical image is acquired, where the target medical image acquisition module includes:
the modification receiving module is used for sending the image to be modified to a client and receiving, from the client, at least one modification of the image to be modified by a user, wherein the modification comprises delineating an erroneous region;
the image segmentation module is used for inputting the medical image to be segmented, the image to be modified, and the at least one modification of the image to be modified into the medical image segmentation model, and outputting a second image;
the output module is used for sending the second image to the client and receiving, from the client, the user's judgment of whether the second image meets a first preset condition; if yes, outputting the second image as the target medical image and updating the medical image segmentation model based on the target medical image; otherwise, taking the second image as the new image to be modified.
7. The system of claim 6, wherein the pre-segmentation module is further to:
pre-segmenting the medical image to be segmented to obtain a third image;
judging whether the third image meets a second preset condition;
if yes, outputting the third image as the target medical image;
otherwise, taking the third image as the first image.
8. The system of claim 6, wherein the output module is further to:
taking the medical image to be segmented and the first image as a training sample, taking the target medical image as a label, and adding them to a training sample set used for updating the medical image segmentation model;
based on the training sample set, parameters of the medical image segmentation model are updated.
9. The system of claim 8, wherein the training samples further comprise: the at least one modification of the first image by the user.
10. The system of claim 8, wherein the parameters of the medical image segmentation model include parameters characterizing the user's habits.
11. A medical image segmentation method based on user interaction, wherein the method is applied to a client and comprises the following steps:
Receiving an image to be modified from a server, the image to be modified being determined based on the medical image to be segmented;
based on the image to be modified, performing a plurality of iterative processes until a target medical image is acquired, the iterative processes comprising:
acquiring at least one modification of the image to be modified by a user, wherein the modification comprises delineating an erroneous region, and sending the at least one modification to the server;
receiving a second image from the server, the second image being determined based on processing, by a medical image segmentation model, of the medical image to be segmented, the image to be modified, and the at least one modification of the image to be modified;
acquiring the user's judgment of whether the second image meets a first preset condition, and sending the judgment to the server, so that the server performs the following based on the judgment: if yes, outputting the second image as the target medical image and updating the medical image segmentation model based on the target medical image; otherwise, taking the second image as a new image to be modified.
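Claims 1 and 11 describe the two ends of the same loop: the server runs the model, while the client collects the user's edits and acceptance judgment. The toy sketch below replaces the network transport with direct method calls; all class and method names are illustrative, not prescribed by the claims.

```python
# Toy sketch of the client/server split in claims 1 and 11, with direct
# method calls standing in for the network transport.

class Server:
    """Holds the segmentation model (the claim 1 side)."""
    def __init__(self, segment_fn):
        self.segment_fn = segment_fn

    def refine(self, image, mask, edits):
        # Raw image + current mask + user's edits -> the "second image".
        return self.segment_fn(image, mask, edits)

class Client:
    """Collects user edits and the acceptance judgment (the claim 11 side)."""
    def __init__(self, server, judge, edit):
        self.server = server
        self.judge = judge   # user's "first preset condition" judgment
        self.edit = edit     # user delineates erroneous regions

    def run(self, image, mask, max_rounds=5):
        for _ in range(max_rounds):
            edits = self.edit(mask)
            mask = self.server.refine(image, mask, edits)
            if self.judge(mask):  # accepted -> target medical image
                break
        return mask
```

In a real deployment the two classes would sit on different machines and exchange images and edits over a network link, but the control flow is the same.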
12. A system for medical image segmentation based on user interaction, the system implemented on a client, comprising:
the image receiving module is used for receiving an image to be modified from the server, the image to be modified being determined based on the medical image to be segmented;
an iteration module for performing a plurality of iteration processes based on the image to be modified until a target medical image is acquired, the iteration module comprising:
the modification sending module is used for acquiring at least one modification of the image to be modified by a user, wherein the modification comprises delineating an erroneous region, and sending the at least one modification to the server;
the second image receiving module is used for receiving a second image from the server, the second image being determined based on processing, by a medical image segmentation model, of the medical image to be segmented, the image to be modified, and the at least one modification of the image to be modified;
the judging module is used for acquiring the user's judgment of whether the second image meets the first preset condition and sending the judgment to the server, so that the server performs the following based on the judgment: if yes, outputting the second image as the target medical image and updating the medical image segmentation model based on the target medical image; otherwise, taking the second image as the new image to be modified.
13. A medical image segmentation method, comprising:
acquiring a first image, wherein the first image is obtained based on a medical image to be segmented;
taking the first image as an image to be modified, and executing a plurality of iterative processes until a target medical image is acquired, wherein the iterative processes comprise:
acquiring at least one modification of the image to be modified, wherein the modification comprises delineating an erroneous region;
inputting the medical image to be segmented, the image to be modified, and the at least one modification of the image to be modified into a medical image segmentation model, and outputting a second image;
judging whether the second image meets a first preset condition or not; if yes, taking the second image as the target medical image; otherwise, the second image is used as the new image to be modified.
14. The method of claim 13, wherein obtaining the first image based on the medical image to be segmented comprises:
pre-segmenting the medical image to be segmented to obtain a third image;
judging whether the third image meets a second preset condition;
if yes, outputting the third image as the target medical image;
otherwise, taking the third image as the first image.
15. The method of claim 13, further comprising updating the medical image segmentation model based on the target medical image.
16. The method of claim 15, wherein the updating the medical image segmentation model based on the target medical image comprises:
taking the medical image to be segmented and the image to be modified as a training sample, taking the target medical image as a label, and adding them to a training sample set used for updating the medical image segmentation model;
based on the training sample set, parameters of the medical image segmentation model are updated.
17. The method of claim 16, wherein the training samples further comprise: the at least one modification of the first image by the user.
18. A medical image segmentation system, comprising:
the pre-segmentation module is used for acquiring a first image, and the first image is obtained based on the medical image to be segmented;
a target medical image acquisition module, configured to perform a plurality of iterative processes with the first image as an image to be modified until a target medical image is acquired, where the target medical image acquisition module includes:
the modification receiving module is used for acquiring at least one modification of the image to be modified, wherein the modification comprises delineating an erroneous region;
the image segmentation module is used for inputting the medical image to be segmented, the image to be modified, and the at least one modification of the image to be modified into a medical image segmentation model, and outputting a second image;
the output module is used for judging whether the second image meets a first preset condition or not; if yes, taking the second image as the target medical image; otherwise, the second image is used as the new image to be modified.
19. A computer readable storage medium storing computer instructions which, when executed by a processor, implement the method of any one of claims 1 to 5, 11 and 13 to 17.
CN202011197897.3A 2020-10-30 2020-10-30 Medical image segmentation method, system and device based on user interaction Active CN112396606B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202410218979.3A CN117994263A (en) 2020-10-30 2020-10-30 Medical image segmentation method, system and device based on user interaction
CN202011197897.3A CN112396606B (en) 2020-10-30 2020-10-30 Medical image segmentation method, system and device based on user interaction
US17/452,795 US20220138957A1 (en) 2020-10-30 2021-10-29 Methods and systems for medical image segmentation


Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202410218979.3A Division CN117994263A (en) 2020-10-30 2020-10-30 Medical image segmentation method, system and device based on user interaction

Publications (2)

Publication Number Publication Date
CN112396606A CN112396606A (en) 2021-02-23
CN112396606B true CN112396606B (en) 2024-01-05

Family

ID=74597808

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202011197897.3A Active CN112396606B (en) 2020-10-30 2020-10-30 Medical image segmentation method, system and device based on user interaction
CN202410218979.3A Pending CN117994263A (en) 2020-10-30 2020-10-30 Medical image segmentation method, system and device based on user interaction


Country Status (1)

Country Link
CN (2) CN112396606B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112802036A (en) * 2021-03-16 2021-05-14 上海联影医疗科技股份有限公司 Method, system and device for segmenting target area of three-dimensional medical image
CN113077445A (en) * 2021-04-01 2021-07-06 中科院成都信息技术股份有限公司 Data processing method and device, electronic equipment and readable storage medium
CN114119645B (en) * 2021-11-25 2022-10-21 推想医疗科技股份有限公司 Method, system, device and medium for determining image segmentation quality

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108345890A (en) * 2018-03-01 2018-07-31 腾讯科技(深圳)有限公司 Image processing method, device and relevant device
CN109389587A (en) * 2018-09-26 2019-02-26 上海联影智能医疗科技有限公司 A kind of medical image analysis system, device and storage medium
CN111127471A (en) * 2019-12-27 2020-05-08 之江实验室 Gastric cancer pathological section image segmentation method and system based on double-label loss


Also Published As

Publication number Publication date
CN117994263A (en) 2024-05-07
CN112396606A (en) 2021-02-23

Similar Documents

Publication Publication Date Title
EP3879485B1 (en) Tissue nodule detection and model training method and apparatus thereof, device and system
US11593943B2 (en) RECIST assessment of tumour progression
CN112396606B (en) Medical image segmentation method, system and device based on user interaction
US9959486B2 (en) Voxel-level machine learning with or without cloud-based support in medical imaging
US20200167928A1 (en) Segmentation of anatomical regions and lesions
US20200193594A1 (en) Hierarchical analysis of medical images for identifying and assessing lymph nodes
US20140341449A1 (en) Computer system and method for atlas-based consensual and consistent contouring of medical images
WO2021114130A1 (en) Unsupervised self-adaptive mammary gland lesion segmentation method
US10726948B2 (en) Medical imaging device- and display-invariant segmentation and measurement
JP7346553B2 (en) Determining the growth rate of objects in a 3D dataset using deep learning
US20220180516A1 (en) Identifying boundaries of lesions within image data
US11574717B2 (en) Medical document creation support apparatus, medical document creation support method, and medical document creation support program
US10762629B1 (en) Segmenting medical images
US20220301224A1 (en) Systems and methods for image segmentation
CN107567638A (en) The segmentation based on model to anatomical structure
CN114255235A (en) Method and arrangement for automatic localization of organ segments in three-dimensional images
CN112074912A (en) Interactive coronary artery labeling using interventional X-ray images and deep learning
CN107610772A (en) A kind of thyroid nodule CT image diagnostic system design methods
US20220138957A1 (en) Methods and systems for medical image segmentation
JPWO2019208130A1 (en) Medical document creation support devices, methods and programs, trained models, and learning devices, methods and programs
US11734849B2 (en) Estimating patient biographic data parameters
CN113316803A (en) Correcting segmentation of medical images using statistical analysis of historical corrections
CN112419339B (en) Medical image segmentation model training method and system
EP4339961A1 (en) Methods and systems for providing a template data structure for a medical report
US20220391599A1 (en) Information saving apparatus, method, and program and analysis record generation apparatus, method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant