CN112396606A - Medical image segmentation method, system and device based on user interaction - Google Patents
- Publication number
- CN112396606A (application number CN202011197897.3A)
- Authority
- CN
- China
- Prior art keywords
- image
- medical image
- modified
- segmentation
- medical
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T7/00—Image analysis
        - G06T7/10—Segmentation; Edge detection
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T2207/00—Indexing scheme for image analysis or image enhancement
        - G06T2207/30—Subject of image; Context of image processing
          - G06T2207/30004—Biomedical image processing
Abstract
The specification discloses a medical image segmentation method, system and device based on user interaction, wherein the method comprises the following steps: acquiring a first image, wherein the first image is obtained based on a medical image to be segmented; taking the first image as an image to be modified, and executing a plurality of iterative processes until a target medical image is obtained, wherein each iterative process comprises: acquiring at least one modification of the image to be modified; inputting the medical image to be segmented, the image to be modified and the at least one modification of the image to be modified into a medical image segmentation model, and outputting a second image; judging whether the second image meets a first preset condition; if so, taking the second image as the target medical image; and otherwise, taking the second image as a new image to be modified.
Description
Technical Field
The present disclosure relates to the field of medical image segmentation, and in particular, to a medical image segmentation method, system and apparatus based on user interaction.
Background
A medical image segmentation model can distinguish regions with complex distributions in a medical image, thereby providing reliable information for clinical diagnosis and treatment. However, training a medical image segmentation model from scratch relies on a large number of training samples with standard (gold-standard) medical segmentation images. In particular, delineation of radiotherapy target volumes (including the gross target volume, the clinical target volume and the planning target volume) involves no obvious tissue boundaries and requires the professional domain knowledge of the clinician; a single automatic delineation generally cannot satisfy all of a physician's clinical requirements at once, so interaction with the physician is needed to improve the final segmentation effect. Moreover, for radiotherapy target volumes, although each hospital delineates according to shared consensus and guidelines, clinical practice varies between hospitals, so gold-standard data must be collected separately for each hospital. The number of training samples collected in this way is necessarily limited. Therefore, newly acquired data needs to be continuously fed into the deep learning model for ongoing optimization while the user works, so that the training sample set grows and the number of physician interactions decreases.
It is therefore desirable to provide a method, system and apparatus for medical image segmentation based on user interaction.
Disclosure of Invention
One aspect of the present specification provides a medical image segmentation method based on user interaction, wherein the method is applied to a server and includes: acquiring a first image based on a medical image to be segmented; taking the first image as an image to be modified, and executing a plurality of iterative processes until a target medical image is obtained, wherein each iterative process comprises: sending the image to be modified to a client, and receiving from the client at least one modification of the image to be modified by a user; inputting the medical image to be segmented, the image to be modified and the at least one modification of the image to be modified into a medical image segmentation model, and outputting a second image; sending the second image to the client, and receiving from the client a judgment of whether the second image meets a first preset condition; if so, outputting the second image as the target medical image, and updating the medical image segmentation model based on the target medical image; and otherwise, taking the second image as a new image to be modified.
Another aspect of the present specification provides a medical image segmentation system based on user interaction, the system being implemented on a server and comprising: a pre-segmentation module used for acquiring a first image based on a medical image to be segmented; and a target medical image acquisition module used for performing a plurality of iterative processes with the first image as an image to be modified until a target medical image is acquired, the target medical image acquisition module comprising: a modification receiving module used for sending the image to be modified to a client and receiving from the client at least one modification of the image to be modified by a user; an image segmentation module used for inputting the medical image to be segmented, the image to be modified and the at least one modification into a medical image segmentation model and outputting a second image; and an output module used for sending the second image to the client and receiving from the client a judgment of whether the second image meets a first preset condition; if so, outputting the second image as the target medical image and updating the medical image segmentation model based on the target medical image; and otherwise, taking the second image as a new image to be modified.
Another aspect of the present specification provides a medical image segmentation method based on user interaction, wherein the method is applied to a client and includes: receiving an image to be modified from a server; and, based on the image to be modified, executing a plurality of iterative processes until a target medical image is obtained, wherein each iterative process comprises: acquiring at least one modification of the image to be modified by a user and sending the at least one modification to the server; receiving a second image from the server; and acquiring the user's judgment of whether the second image meets a first preset condition and sending the judgment to the server, so that the server executes the following processing based on the judgment: if so, outputting the second image as the target medical image and updating the medical image segmentation model based on the target medical image; and otherwise, taking the second image as a new image to be modified.
Another aspect of the present specification provides a system for medical image segmentation based on user interaction, the system implemented on a client, comprising: the image to be modified receiving module is used for receiving the image to be modified from the server; an iteration module for performing a plurality of iterations based on an image to be modified until a target medical image is acquired, the iteration module comprising: the modification sending module is used for acquiring at least one modification of the image to be modified by the user and sending the at least one modification to the server; the second image receiving module is used for receiving a second image from the server; the judging module is used for obtaining the judgment of whether the second image meets the first preset condition by the user and sending the judgment to the server, so that the server executes the following processing based on the judgment: if yes, outputting the second image as a target medical image, and updating the medical image segmentation model based on the target medical image; and otherwise, taking the second image as a new image to be modified.
Another aspect of the present specification provides a medical image segmentation method based on user interaction, characterized in that the method includes: acquiring a first image, wherein the first image is obtained based on a medical image to be segmented; taking the first image as an image to be modified, and executing a plurality of iterative processes until a target medical image is obtained, wherein each iterative process comprises: acquiring at least one modification of the image to be modified; inputting the medical image to be segmented, the image to be modified and the at least one modification into a medical image segmentation model, and outputting a second image; judging whether the second image meets a first preset condition; if so, taking the second image as the target medical image; and otherwise, taking the second image as a new image to be modified.
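The iterative process recited above can be illustrated with a short sketch. The following Python code is a hypothetical, minimal rendering of the claimed loop; all function names (`segment_fn`, `get_modifications_fn`, `is_acceptable_fn`) and the toy stand-in implementations are illustrative assumptions, not names or methods from this patent.

```python
def run_interactive_segmentation(medical_image, segment_fn,
                                 get_modifications_fn, is_acceptable_fn,
                                 max_iterations=10):
    """Iteratively refine a segmentation until the user accepts it.

    segment_fn stands in for the medical image segmentation model: it
    receives the raw image, the current image to be modified, and the
    user's modifications, and returns a second image.
    """
    # Pre-segmentation produces the first image (the initial image to modify).
    image_to_modify = segment_fn(medical_image, previous=None, modifications=None)
    for _ in range(max_iterations):
        # First preset condition: is the current result acceptable?
        if is_acceptable_fn(image_to_modify):
            return image_to_modify  # the target medical image
        modifications = get_modifications_fn(image_to_modify)
        # Feed raw image, current result, and user edits back into the model.
        image_to_modify = segment_fn(medical_image, previous=image_to_modify,
                                     modifications=modifications)
    return image_to_modify


# Toy stand-ins so the loop runs end to end: "segmentation" simply adopts
# the user's latest edit, and the result is accepted once it reaches 3.
def segment_fn(image, previous=None, modifications=None):
    return modifications if modifications is not None else 0

def get_modifications_fn(current):
    return current + 1

def is_acceptable_fn(current):
    return current >= 3

result = run_interactive_segmentation(None, segment_fn,
                                      get_modifications_fn, is_acceptable_fn)
print(result)  # 3
```

With these stand-ins the loop performs three refinement rounds before the acceptance condition holds, mirroring the modify-resegment-judge cycle of the claims.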
Another aspect of the present specification provides a medical image segmentation system based on user interaction, the system comprising: a pre-segmentation module used for acquiring a first image, the first image being obtained based on a medical image to be segmented; and a target medical image acquisition module used for performing a plurality of iterative processes until a target medical image is acquired, the target medical image acquisition module comprising: a modification receiving module used for acquiring at least one modification of the image to be modified; an image segmentation module used for inputting the medical image to be segmented, the image to be modified and the at least one modification into a medical image segmentation model and outputting a second image; and an output module used for judging whether the second image meets a first preset condition; if so, taking the second image as the target medical image; and otherwise, taking the second image as a new image to be modified.
Another aspect of embodiments of the present specification provides a computer-readable storage medium characterized in that the storage medium stores computer instructions that, when executed by a processor, implement a medical image segmentation method based on user interaction.
Drawings
The present description will be further described by way of exemplary embodiments, which will be described in detail by way of the accompanying drawings. These embodiments are not intended to be limiting, and in these embodiments like numerals are used to indicate like structures, wherein:
FIG. 1 is a schematic diagram of an application scenario of a medical image segmentation system shown in accordance with some embodiments of the present description;
FIG. 2 is an exemplary block diagram of a server shown in accordance with some embodiments of the present description;
FIG. 3 is an exemplary block diagram of a client shown in accordance with some embodiments of the present description;
FIG. 4 is an exemplary flow diagram illustrating a method for user interaction-based medical image segmentation applied to a server in accordance with some embodiments of the present description;
FIG. 5 is an exemplary flow diagram illustrating a method for user interaction-based medical image segmentation applied to a client in accordance with some embodiments of the present description;
FIG. 6 is an exemplary flow diagram illustrating updating a medical image segmentation model according to some embodiments of the present description;
FIG. 7 is a schematic diagram of a medical image segmentation method based on user interaction, shown in accordance with some embodiments of the present description;
fig. 8 is a schematic illustration of a medical image shown in accordance with some embodiments of the present description.
Detailed Description
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings used in the description of the embodiments will be briefly described below. It is obvious that the drawings in the following description are only examples or embodiments of the present description, and that for a person skilled in the art, the present description can also be applied to other similar scenarios on the basis of these drawings without inventive effort. Unless otherwise apparent from the context, or otherwise indicated, like reference numbers in the figures refer to the same structure or operation.
It should be understood that "system", "device", "unit" and/or "module" as used in this specification are terms for distinguishing different components, elements, parts or assemblies at different levels. However, these terms may be replaced by other expressions that accomplish the same purpose.
As used in this specification and the appended claims, the singular forms "a", "an" and "the" include plural referents unless the context clearly dictates otherwise. In general, the terms "comprises" and "comprising" merely indicate that the specifically identified steps and elements are included and do not constitute an exclusive list; a method or apparatus may also include other steps or elements.
Flow charts are used in this description to illustrate operations performed by a system according to embodiments of the present description. It should be understood that the operations are not necessarily performed in the exact order shown. Rather, various steps may be processed in reverse order or simultaneously. Meanwhile, other operations may be added to the processes, or one or more steps may be removed from them.
Fig. 1 is a schematic diagram of an application scenario of a medical image segmentation system according to some embodiments of the present description.
The medical image segmentation system 100 can implement the methods and/or processes disclosed in the present specification, so as to obtain a target medical image that satisfies the user's segmentation requirements with less user interaction, and to train a medical image segmentation model that conforms to the user's habits.
As shown in fig. 1, the medical image segmentation system 100 may include a server 110, a network 120, a client 130, a storage device 140, and the like.
In some embodiments, server 110 may be used to process information and/or data related to data processing.
In some embodiments, server 110 may access information and/or data stored in clients 130 and storage devices 140 via network 120. For example, the server 110 may send the image to be modified to the client 130 via the network 120. As another example, server 110 may receive, via network 120, a modification sent by client 130 to an image to be modified by a user. In some embodiments, server 110 may interface directly with clients 130 and/or storage devices 140 to access information and/or material stored therein. For example, the server 110 may retrieve the medical image to be segmented directly from the storage device 140. As another example, the server 110 may save the target medical image to the storage device 140.
In some embodiments, the server 110 may be a stand-alone server or a server group. The server group can be centralized or distributed (e.g., the server 110 can be a distributed system). In some embodiments, the server 110 may be local or remote. In some embodiments, the server 110 may execute on a cloud platform. For example, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, or the like, or any combination thereof.
In some embodiments, the server 110 may include a processor 112. The processor 112 may process and perform one or more of the functions described herein. For example, the processor 112 may segment the medical image to be segmented and acquire a first image. As another example, the processor 112 may acquire a target medical image based on the image to be modified. As another example, the processor 112 may also update parameters of the medical image segmentation model.
In some embodiments, the processor 112 may include one or more sub-processors (e.g., a single-core processing device or a multi-core processing device). Merely by way of example, the processor 112 may include a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), an Application Specific Instruction-set Processor (ASIP), a Graphics Processing Unit (GPU), a Physical Processing Unit (PPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), a Programmable Logic Device (PLD), a controller, a microcontroller unit, a Reduced Instruction Set Computer (RISC), a microprocessor, or the like, or any combination thereof.
The network 120 may facilitate the exchange of data and/or information, which may include the medical image to be segmented, the first image, at least one modification to the first image, the target medical image, and so forth. In some embodiments, one or more components in the system 100 (e.g., the server 110, the client 130, the storage device 140) may send data and/or information to other components in the system 100 over the network 120. For example, the client 130 may send at least one modification to the first image to the server 110 over the network 120. In some embodiments, the network 120 may be any type of wired or wireless network. For example, the network 120 may include a cable network, a wired network, a fiber optic network, a telecommunications network, an intranet, the internet, a Local Area Network (LAN), a Wide Area Network (WAN), a Wireless Local Area Network (WLAN), a Metropolitan Area Network (MAN), a Public Switched Telephone Network (PSTN), a Bluetooth network, a ZigBee network, a Near Field Communication (NFC) network, or the like, or any combination thereof. In some embodiments, the network 120 may include one or more network access points. For example, the network 120 may include wired or wireless network access points, such as a base station 120-1 and/or an internet exchange point 120-2, through which one or more components of the system 100 may connect to the network 120 to exchange data and/or information.
In some embodiments, storage device 140 may be connected to network 120 to communicate with one or more components of system 100 (e.g., server 110, client 130, etc.). One or more components of system 100 may access data or instructions stored in storage device 140 via network 120. In some embodiments, storage device 140 may be directly connected to or in communication with one or more components (e.g., server 110, client 130) in system 100. In some embodiments, storage device 140 may be part of server 110.
Fig. 2 is an exemplary block diagram of a server shown in accordance with some embodiments of the present description.
In some embodiments, the modules 200 of the server 110 may include a pre-segmentation module 210 and a target medical image acquisition module 220.
The pre-segmentation module 210 is configured to obtain a first image based on a medical image to be segmented.
In some embodiments, the pre-segmentation module 210 is further configured to pre-segment the medical image to be segmented to obtain a third image; judge whether the third image meets a second preset condition; if so, output the third image as the target medical image; and otherwise, take the third image as the first image.
In some embodiments, the pre-segmentation of the medical image to be segmented is performed by a pre-segmentation model.
For more description of the pre-segmentation module 210, refer to step 410, which is not described herein.
The target medical image obtaining module 220 is configured to perform a plurality of iterative processes on the first image as the image to be modified until the target medical image is obtained. In some embodiments, the target medical image acquisition module 220 includes a modification receiving module 222, an image segmentation module 224, and an output module 226.
The modification receiving module 222 is configured to send the image to be modified to the client, and receive from the client at least one modification of the image to be modified. For more description of the modification receiving module 222, refer to step 422, which is not repeated herein.
The image segmentation module 224 is used for inputting the medical image to be segmented, the image to be modified and the at least one modification into the medical image segmentation model, and outputting a second image. For more description of the image segmentation module 224, refer to step 424, which is not repeated herein.
The output module 226 is configured to send the second image to the client, and receive from the client a judgment of whether the second image meets the first preset condition; if so, output the second image as the target medical image and update the medical image segmentation model based on the target medical image; and otherwise, take the second image as a new image to be modified.
In some embodiments, the output module is further configured to take the medical image to be segmented and the first image as training samples, take the target medical image as a label, and add the training samples to a training sample set for updating the medical image segmentation model; and updating parameters of the medical image segmentation model based on the training sample set.
In some embodiments, the training samples further comprise: at least one time, the user modifies at least one place of the image to be modified.
In some embodiments, the parameters of the medical image segmentation model comprise parameters characterizing user habits. In some embodiments, the parameters of the medical image segmentation model may further include, for example, a model network architecture, neuron weights, a loss function, and the like, which is not limited by the embodiment. By optimizing the parameters, the medical image segmentation model can better conform to the drawing habit of the user.
For more description of the output module 226, refer to step 426, which is not described herein.
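As a concrete illustration of the update step described above, the following hypothetical Python sketch accumulates each accepted result as a new training sample and performs a placeholder parameter update. The data layout and the scalar "fine-tune" rule are assumptions for illustration only, standing in for the gradient-based training the specification describes; they are not the patent's actual procedure.

```python
training_set = []

def add_training_sample(raw_image, first_image, modifications, target_image):
    """Store (raw image, pre-segmentation, user edits) as the inputs of a
    training sample, with the accepted target image as the gold-standard label."""
    training_set.append({
        "inputs": (raw_image, first_image, modifications),
        "label": target_image,
    })

def update_model(model_param, samples, lr=0.1):
    """Placeholder update: nudge a scalar parameter toward the mean label,
    standing in for fine-tuning the segmentation model on the new samples."""
    if not samples:
        return model_param
    mean_label = sum(s["label"] for s in samples) / len(samples)
    return model_param + lr * (mean_label - model_param)

# One accepted segmentation (label encoded as a number for this toy example).
add_training_sample("raw", "pre", "edits", 1.0)
new_param = update_model(0.0, training_set)
print(new_param)  # 0.1
```

In a real system the label would be the accepted target medical image and the update would be a training step of the neural network, but the flow, append the accepted result to the sample set, then update the parameters, is the same.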
Fig. 3 is an exemplary block diagram of a client shown in accordance with some embodiments of the present description.
In some embodiments, the modules 300 of the client 130 may include an image to be modified receiving module 310 and an iteration module 320.
A to-be-modified image receiving module 310, configured to receive an image to be modified from a server. For more description of the image receiving module 310 to be modified, refer to step 510, and will not be described herein.
And the iteration module 320 is used for executing a plurality of iteration processes based on the image to be modified until the target medical image is obtained. In some embodiments, the iteration module 320 includes a modification sending module 322, a second image receiving module 324, and a determination module 326.
A modification sending module 322, configured to obtain at least one modification of the image to be modified by the user, and send the at least one modification to the server. For more description of modifying the sending module 322, refer to step 522, which is not described herein.
A second image receiving module 324, configured to receive a second image from the server. For more description of the second image receiving module 324, refer to step 524, which is not described herein.
A determining module 326, configured to obtain a determination of whether the second image satisfies a first preset condition by the user, and send the determination to the server, so that the server performs the following processing based on the determination: if yes, outputting the second image as a target medical image, and updating the medical image segmentation model based on the target medical image; and otherwise, taking the second image as a new image to be modified. For more description of the decision module 326, refer to step 526, which is not described herein.
Fig. 4 is an exemplary flow diagram illustrating a method for user interaction-based medical image segmentation applied to a server according to some embodiments of the present description. As shown in fig. 4, the method 400 may include:
In particular, step 410 may be performed by the pre-segmentation module 210.
Medical images are internal tissue images that are acquired non-invasively with respect to a target object for medical treatment or medical research. In some embodiments, the target object may be a human body, an organ, a body, an object, a lesion, a tumor, or the like.
The target object region is an image of a target object in the medical image. Accordingly, the background region is an image other than the target object in the medical image. For example, the medical image is an image of a patient's brain, the target object region is an image of one or more diseased tissues in the patient's brain, and the background region may be an image of the patient's brain other than the one or more diseased tissues.
The medical image to be segmented is a medical image that needs to be subjected to segmentation processing. The segmentation process is to distinguish a target object region from a background region in a medical image to be segmented.
It is understood that a boundary exists between the target object region and the background region in the medical image to be segmented. In some embodiments, the segmentation result may be represented by delineating a boundary between the target object region and the background region in the medical image to be segmented.
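One simple way to delineate such a boundary, sketched below with NumPy, is to mark every mask pixel that has at least one background 4-neighbour. This is an illustrative implementation only, not the patent's method of representing segmentation results.

```python
import numpy as np

def mask_boundary(mask):
    """Return the pixels of a binary mask that touch the background,
    i.e. a simple delineation of the target/background boundary."""
    padded = np.pad(mask, 1, mode="constant")
    # A mask pixel lies on the boundary if any 4-neighbour is background.
    neighbours_bg = (
        (padded[:-2, 1:-1] == 0) | (padded[2:, 1:-1] == 0) |
        (padded[1:-1, :-2] == 0) | (padded[1:-1, 2:] == 0)
    )
    return (mask == 1) & neighbours_bg

mask = np.array([[0, 0, 0, 0],
                 [0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0]], dtype=np.uint8)
print(int(mask_boundary(mask).sum()))  # 4: every object pixel touches background
```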
In some embodiments, the medical image to be segmented may include, but is not limited to, a combination of one or more of an X-ray image, a Computed Tomography (CT) image, a Positron Emission Tomography (PET) image, a Single Photon Emission Computed Tomography (SPECT) image, a Magnetic Resonance Image (MRI), an Ultrasound (US) image, a Digital Subtraction Angiography (DSA) image, a Magnetic Resonance Angiography (MRA) image, a time-of-flight magnetic resonance image (TOF-MRI), a Magnetoencephalogram (MEG), and the like.
In some embodiments, the format of the medical image to be segmented may include the Joint Photographic Experts Group (JPEG) image format, the Tagged Image File Format (TIFF), the Graphics Interchange Format (GIF), the Kodak FlashPix (FPX) image format, the Digital Imaging and Communications in Medicine (DICOM) format, and the like.
In some embodiments, the medical image to be segmented may be a two-dimensional (2D) image, or a three-dimensional (3D) image. In some embodiments, the three-dimensional image may be made up of a series of two-dimensional slices or layers.
The third image is a medical image obtained by pre-segmenting the medical image to be segmented. It will be appreciated that the third image carries a preliminary delineation, by the pre-segmentation model, of the boundary between the target object region and the background region. The type and format of the third image may refer to the medical image to be segmented, which is not described herein again.
In some embodiments, the pre-segmentation of the medical image to be segmented may be performed by a pre-segmentation model. The pre-segmentation module inputs the medical image to be segmented into the pre-segmentation model and outputs a third image.
The pre-segmentation model is a model for pre-segmenting the medical image to be segmented. In some embodiments, the pre-segmentation model is a pre-trained model.
In some embodiments, the pre-segmentation model may be a conventional segmentation algorithm model. In some embodiments, conventional segmentation algorithms may include, but are not limited to, combinations of one or more of thresholding, region growing, edge detection, and the like.
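As an illustration of the thresholding approach mentioned above, the following is a minimal sketch in plain Python on toy data. In practice, pre-segmentation would operate on full CT/MRI volumes and the threshold would be chosen per modality; the function name and the toy image are assumptions for illustration only.

```python
def threshold_presegment(image, threshold):
    """Return a binary mask: 1 for pixels at/above the threshold (treated
    as the target object region), 0 for pixels below it (background).
    `image` is a 2D list of pixel intensities."""
    return [[1 if px >= threshold else 0 for px in row] for row in image]

# Toy 4x4 "image": a bright square on a dark background.
img = [
    [10, 12, 11, 10],
    [11, 200, 210, 12],
    [10, 205, 198, 11],
    [12, 11, 10, 13],
]
mask = threshold_presegment(img, threshold=100)
```

Here the four bright pixels are assigned to the target object region and everything else to the background, which is exactly the rough delineation a pre-segmentation step produces.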
In some embodiments, the pre-segmentation model may also be an image segmentation algorithm model in conjunction with a particular tool. In some embodiments, the image segmentation algorithm in combination with the particular tool may include, but is not limited to, a combination of one or more of genetic algorithms, wavelet analysis, wavelet transforms, active contour models, and the like.
In some embodiments, the pre-segmentation model may also be a neural network model. In some embodiments, the pre-segmentation model may include, but is not limited to, a combination of one or more of a Convolutional Neural Network (CNN) model, a Long Short-Term Memory (LSTM) model, a Bi-directional Long Short-Term Memory (Bi-LSTM) model, and the like.
In some embodiments, the pre-segmentation of the medical image to be segmented may be manual segmentation or other ways, and the embodiment is not limited.
It can be understood that pre-segmentation only performs rough segmentation on the medical image to be segmented. When the target object region and the background region in the medical image to be segmented are simple in distribution and clear in outline, the third image output by pre-segmentation can meet the segmentation requirement; when they are complex in distribution and blurry in outline, the third image needs to be further segmented based on user interaction. Here, user interaction means that the user participates in the further segmentation processing of the third image.
As shown in fig. 7, the pre-segmentation module further determines whether the third image satisfies a second preset condition; if yes, outputting the third image as a target medical image; otherwise, the third image is acquired as the first image.
The second preset condition is that the third image satisfies the segmentation requirement. It will be appreciated that the delineation produced by the pre-segmentation model in the third image may be erroneous and thus fail to meet the segmentation requirement. For example, part of the target object region may be delineated as the background region; for another example, part of the background region may be delineated as the target object region. Therefore, the pre-segmentation module may determine whether the third image satisfies the segmentation requirement based on the second preset condition.
In some embodiments, the second preset condition may be that the user judges that the third image satisfies the segmentation requirement.
As previously described, the pre-segmentation model may be a pre-trained model. In some embodiments, the pre-segmentation model may be trained based on the delineation gold standard corresponding to the first image.
In some embodiments of training the pre-segmentation model, the third image may be evaluated by a similarity metric function. The second preset condition may be that the similarity metric function value between the delineation result of the third image and the delineation gold standard corresponding to the third image is greater than a second threshold. The similarity metric function is an evaluation index of the relationship between the delineation result of the third image and its corresponding delineation gold standard. In some embodiments, the similarity metric function value may be a numerical value, and the larger the value, the closer the delineation result of the third image is to the gold standard corresponding to the third image. In some embodiments, the similarity metric function may include, but is not limited to, at least one of a Dice similarity coefficient, an IoU (Intersection over Union) coefficient, a Hausdorff Distance, a cross entropy, and the like, or a combination thereof. For example, if the second threshold is 80% and the value of the similarity metric function is 70%, the third image does not satisfy the second preset condition.
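For concreteness, the Dice and IoU measures named above can be computed on binary masks as follows. This is a minimal sketch over flattened 0/1 masks; the mask values and the 80% threshold are illustrative, matching the numerical example in the text.

```python
def dice_coefficient(pred, gold):
    """Dice = 2|P ∩ G| / (|P| + |G|), with masks given as flat 0/1 lists."""
    inter = sum(p * g for p, g in zip(pred, gold))
    total = sum(pred) + sum(gold)
    return 2.0 * inter / total if total else 1.0

def iou(pred, gold):
    """IoU = |P ∩ G| / |P ∪ G|."""
    inter = sum(p * g for p, g in zip(pred, gold))
    union = sum(1 for p, g in zip(pred, gold) if p or g)
    return inter / union if union else 1.0

# Delineation result vs. gold standard for six pixels.
pred = [1, 1, 1, 0, 0, 0]
gold = [1, 1, 0, 0, 0, 1]
second_threshold = 0.80
meets_condition = dice_coefficient(pred, gold) > second_threshold
```

With this data the Dice value is about 0.67, below the 80% second threshold, so the third image would not satisfy the second preset condition.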
The target medical image is a medical image that satisfies the user segmentation requirements, i.e. a medical image that does not require further segmentation processing based on user interaction. The type and format of the target medical image may refer to the medical image to be segmented, which is not described herein again.
The first image is a medical image that does not meet the segmentation requirements. The type and format of the first image may refer to a medical image to be segmented, which is not described herein again.
In some embodiments, the server may determine whether the third image satisfies the second preset condition by the determination model. In some embodiments, the decision model may include, but is not limited to, a combination of one or more of a support vector machine model, a Logistic regression model, a naive bayes classification model, a gaussian distributed bayes classification model, a decision tree model, a random forest model, a KNN classification model, a neural network model, and the like.
In some embodiments, the server may also transmit the third image to the client, and determine whether the third image satisfies the second preset condition based on a user determination result received from the client.
As shown in fig. 8, after the pre-segmentation model performs only the coarse segmentation processing on the medical image to be segmented "fig. 8 a", a third image "fig. 8 b" is obtained, and if the "fig. 8 b" does not satisfy the second preset condition, the third image is taken as the first image.
And step 420, taking the first image as an image to be modified, and executing a plurality of iterative processes until a target medical image is obtained.
Specifically, step 420 may be performed by the target medical image acquisition module 220, and the iterative process includes:
step 422, the image to be modified is sent to the client, and at least one modification of the first image by the user is received from the client.
In particular, step 422 may be performed by modification receiving module 222.
The image to be modified is a medical image that needs further segmentation processing based on user interaction.
In the first iteration process, the image to be modified is the first image. As previously described, there is an error in the delineation of the first image by the pre-segmentation model. Thus, the image to be modified is the first image.
In a subsequent iteration process, the image to be modified is the second image. For a detailed description of the second image, refer to step 424, which is not repeated herein.
The modification means that the user corrects the delineation error of the boundary between the target object area and the background area in the image to be modified. It will be appreciated that the foregoing user interaction may be achieved by a user modifying the image to be modified. In some embodiments, there may be multiple delineation errors of the boundary between the target object region and the background region in the image to be modified, and the modification may be one or more of them.
In some embodiments, the modification may include, but is not limited to, a combination of one or more of marking (e.g., with a box or circle) an erroneously delineated region, erasing an erroneously delineated boundary, delineating the correct boundary, and the like. Marking an erroneously delineated region means that the user marks a target object region that was delineated as background, or a background region that was delineated as the target object. Erasing an erroneously delineated boundary and delineating the correct boundary mean that the user directly corrects the delineated boundary.
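One plausible way to encode a box-marking modification for a downstream model is as a binary mask aligned with the image. The specification does not fix a concrete encoding, so the function name and the (row, col) box convention below are assumptions for illustration.

```python
def box_modification_mask(height, width, box):
    """Encode a user's box-marking modification as a binary mask the same
    size as the image. `box` = (row0, col0, row1, col1), inclusive corners;
    pixels inside the box are marked 1."""
    r0, c0, r1, c1 = box
    return [[1 if r0 <= r <= r1 and c0 <= c <= c1 else 0
             for c in range(width)]
            for r in range(height)]

# A user box-selects a 2x2 erroneously delineated region on a 4x4 image.
mod = box_modification_mask(4, 4, (1, 1, 2, 2))
```

Such a mask can then be passed to the segmentation model alongside the image, which is one common way interactive segmentation systems represent user clicks and boxes.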
The user is the subject of modifying the image to be modified at the client. In some embodiments, the user may be a hospital, a department of a hospital, or a doctor. It will be appreciated that the modification of the image to be modified will differ from user to user.
In some embodiments, the modification receiving module 222 may send the image to be modified to the client through the network 120.
For a detailed description of at least one modification of the image to be modified by the user, refer to step 520, which is not described herein again.
In particular, step 424 may be performed by image segmentation module 224.
The second image is a medical image obtained after the medical image segmentation model further segments the image to be modified. As shown in fig. 7, the input of the medical image segmentation model includes the medical image to be segmented, the image to be modified, and at least one modification of the image to be modified; the output is the second image.
In some embodiments, the medical image segmentation model may include an image block segmentation layer, a feature extraction layer, a fusion layer, and an output layer.
Specifically, the image block segmentation layer may extract a plurality of image blocks from the medical image to be segmented and from the image to be modified through multi-scale sliding windows, Selective Search, a neural network, or other methods. The feature extraction layer may extract a feature vector of each image block and a modification feature vector for the modifications contained in each image block. Further, the fusion layer fuses the feature vector of each image block and its modification feature vector into a probability corresponding to that image block, where the probability may represent the probability that the image block belongs to the target object region (or the background region). The output layer distinguishes the target object region from the background region on the medical image to be segmented based on the probability of each image block and a preset threshold, and delineates the boundary between the two. The medical image to be segmented with this boundary delineated on it is the second image.
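The per-block fusion described above can be sketched as follows. This is a toy stand-in, not the patented model: `intensity_score` replaces the learned feature vector, and the 0.5/0.5 weighting and 0.3 threshold are illustrative assumptions, not values from the specification.

```python
def classify_blocks(blocks, mod_masks, threshold):
    """For each image block (a flat list of pixel intensities), fuse an
    intensity-based score with the fraction of user-modified pixels the
    block contains, then threshold the fused probability to decide whether
    the block belongs to the target object region (1) or background (0)."""
    labels = []
    for block, mod in zip(blocks, mod_masks):
        n = len(block)
        intensity_score = sum(block) / (255.0 * n)      # crude appearance feature
        mod_score = sum(mod) / n                        # fraction marked by the user
        prob = 0.5 * intensity_score + 0.5 * mod_score  # simple fusion
        labels.append(1 if prob >= threshold else 0)
    return labels

# Three blocks: bright/unmarked, dark/user-marked, dark/unmarked.
blocks = [[200] * 4, [10] * 4, [10] * 4]
mods = [[0] * 4, [1] * 4, [0] * 4]
labels = classify_blocks(blocks, mods, threshold=0.3)
```

The second block is dark (appearance alone says background), but the user's modification raises its fused probability above the threshold, which is exactly how a modification is meant to override an erroneous delineation.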
In some embodiments, the medical image segmentation model may include, but is not limited to, a combination of one or more of a Fully Convolutional Network (FCN) model, a Visual Geometry Group Network (VGG Net) model, an Efficient Neural Network (ENet) model, a Full Resolution Residual Network (FRRN) model, a Mask Region Convolutional Neural Network (Mask R-CNN) model, a Multi-Dimensional Recurrent Neural Network (MDRNN) model, and the like.
Continuing with fig. 8 as an example, the pre-segmentation model delineates the right-hand portion of the target object region in the image to be modified in the first iteration (i.e., the first image, "fig. 8 b") as a background region. The user box-selects the erroneously delineated region on "fig. 8 b", yielding at least one modification of the image to be modified, i.e., the box-selection modification in "fig. 8 c". During the first iteration, the image segmentation module outputs the second image "fig. 8 d" based on the medical image to be segmented (i.e., "fig. 8 a"), the image to be modified (i.e., the first image, "fig. 8 b"), and at least one modification of the image to be modified (i.e., the box-selection modification in "fig. 8 c"). In this example, the user selects the erroneously delineated region with a rectangular box, but the present invention is not limited thereto; the region may be selected with an arbitrary shape.
In particular, step 426 may be performed by output module 226.
In some embodiments, the output module 226 may send the second image to the client via the network 130, and receive a user's determination from the client whether the second image satisfies the first preset condition.
As mentioned before, the target medical image is a medical image that meets the segmentation requirements, i.e. a medical image that does not need to be further segmented based on user interaction; the image to be modified is a medical image that needs further segmentation processing based on user interaction.
The first preset condition is that the second image meets the user segmentation requirement. It will be appreciated that the delineation produced by the medical image segmentation model in the second image may still be erroneous, or may not conform to the segmentation habits of a particular user, and thus fail to meet the segmentation requirement. Accordingly, the output module may transmit the second image to the client and determine whether the second image satisfies the first preset condition based on the user's determination received from the client.
For the detailed description that the user determines whether the second image satisfies the first preset condition based on the client, refer to step 522, which is not described herein again.
In some alternative embodiments, the server may also determine whether the second image satisfies the first preset condition by the determination model. For a detailed description of the judgment model, refer to step 410, which is not described herein again.
Continuing to take fig. 8 as an example, if the second image "fig. 8 d" is determined not to satisfy the first preset condition, the second image "fig. 8 d" is taken as a new image to be modified obtained in the second iteration process.
Further, the server iteratively performs step 420 until a target medical image is acquired.
Continuing to take fig. 8 as an example, the server iteratively executes step 420 again based on the new image to be modified "fig. 8 d": it acquires at least one modification of the image to be modified "fig. 8 d", i.e., the box-selection modification in "fig. 8 e"; inputs the medical image to be segmented (i.e., "fig. 8 a"), the image to be modified (i.e., "fig. 8 d"), and at least one modification of the image to be modified (i.e., the box-selection modification in "fig. 8 e") into the medical image segmentation model; and outputs a new second image "fig. 8 f". If "fig. 8 f" is determined to satisfy the first preset condition, "fig. 8 f" is output as the target medical image.
Further, the server updates parameters of the medical image segmentation model based on the target medical image.
In some embodiments, the medical image segmentation model is a pre-trained model based on an initial training sample set. In some embodiments, the initial training sample set includes at least one original medical image and at least one standard medical segmentation image corresponding to the original medical image.
The original medical image is a medical image that has not been subjected to segmentation processing. In some embodiments, the raw medical image may be acquired by reading data from a storage device, invoking an associated interface, or otherwise. In some embodiments, the raw medical images may be obtained from a large-scale sample library of different users. Such as medical image databases, etc.
The standard medical segmented image is a medical image which is obtained after an original medical image is segmented and meets the segmentation standard. In some embodiments, the standard medical segmentation image may be obtained by segmentation of the original medical image by a different user. In some embodiments, the standard medical segmentation image may be acquired by reading data from a storage device, invoking an associated interface, or otherwise.
It is understood that the medical image segmentation model obtained by training based on the initial training sample set may be suitable for the segmentation requirements of the general user, but is less adaptive to the specific segmentation requirements of the specific user. Therefore, the model updating module can further train and update parameters of the medical image segmentation model based on a training sample set obtained by interaction of a specific user, and the adaptability of the medical image segmentation model to specific segmentation requirements of the specific user is improved.
In some embodiments, the parameters may include parameters characterizing user habits. For example, the habit of the first hospital to segment the cardiac image is to use only the left ventricle as a target object, and the updated parameters can change the general way of extracting the cardiac image features from the medical image segmentation model, so that the updated medical image segmentation model can better meet the requirement of the first hospital on the habit of segmenting the cardiac image.
In some embodiments, the parameters of the medical image segmentation model may further include, for example, a model network architecture, neuron weights, a loss function, and the like, which is not limited by the embodiment.
In some embodiments, the parameters of the image segmentation model may be further updated based on the training sample set. Specifically, a training sample with a label is input into the image segmentation model, and parameters of the image segmentation model are updated through training.
For a detailed description of the specific contents of the training samples and the labels and the parameters for updating the image segmentation model, reference may be made to fig. 6, which is not described herein again.
Fig. 5 is an exemplary flow diagram illustrating a method for user interaction-based medical image segmentation applied to a client in accordance with some embodiments of the present description. As shown in fig. 5, the method 500 may include:
step 510, receiving an image to be modified from a server.
In particular, step 510 may be performed by the receiving module to be modified 310.
In some embodiments, the to-be-modified receiving module 310 may receive the to-be-modified image from a server through the network 130.
As mentioned before, the image to be modified is a medical image that requires further segmentation processing based on user interaction. In the first iteration, the image to be modified is the first image; in subsequent iterations, the image to be modified is the second image.
For a detailed description of the medical image that needs to be further segmented based on user interaction, see step 410, it is not described here in detail.
Based on the image to be modified, a plurality of iterations are performed until a target medical image is obtained, step 520.
In particular, step 520 may be performed by the iteration module 320, the iteration process including:
step 522, at least one modification of the image to be modified by the user is acquired, and the at least one modification is sent to the server.
In particular, step 522 may be performed by modification sending module 322.
As previously mentioned, the user is the subject that modifies the image to be modified at the client. Specifically, the user modifies the image to be modified by touching or clicking the screen of the client.
As mentioned above, modifying means that the user corrects a drawing error of a boundary between the target object region and the background region in the image to be modified. In some embodiments, the modification may include, but is not limited to, a combination of one or more of marking (e.g., box, circle) a region delineated by an error, erasing a boundary delineated by an error, delineating a correct boundary, and the like.
In some embodiments, the client may obtain the at least one modification by detecting a touch or click operation of the user on the screen on the client.
Further, the modification transmission module 322 may transmit the at least one modification to the server via the network 130.
In particular, step 524 may be performed by the second image receiving module 324.
The second image is a medical image obtained after the medical image segmentation model carries out further segmentation processing on the image to be modified. For a description of the second image acquisition, reference is made to fig. 4, which is not repeated here.
In some embodiments, the second image receiving module 324 may receive the second image from the server over the network 130.
Step 526, obtaining the judgment of the user whether the second image meets the first preset condition, and sending the judgment to the server.
In particular, step 526 may be performed by decision module 326.
As described above, the first preset condition is that the second image satisfies the user segmentation requirement. Therefore, the user can judge whether the second image received by the client meets the user segmentation requirement. Specifically, the client may obtain the "yes" or "no" judgment of the user through operations of the user in the client, such as touch, click, or text input.
In some embodiments, the determination module 326 may send the determination to the server over the network 130.
Further, the server performs the following processing based on the determination: if yes, outputting the second image as a target medical image, and updating the medical image segmentation model based on the target medical image; and otherwise, taking the second image as a new image to be modified.
In some embodiments, the client and server may be located in the same device, which may perform the methods of fig. 2 and 3.
In summary, as shown in fig. 7, an exemplary flow 700 of the user interaction-based medical image segmentation method, regardless of which subject performs the image segmentation steps, includes: pre-segmenting the image to be segmented to obtain a third image. If the third image meets the second preset condition, the third image is directly output as the target medical image; if the third image does not meet the second preset condition, the third image is taken as the first image, which enters an iterative process for further segmentation. The iterative process includes: the user modifies the image to be modified, and the image segmentation model acquires a second image based on the previously acquired image to be segmented, the image to be modified, and the user's modification of the image to be modified. In the first iteration, the image to be modified is the first image. If the user judges that the second image does not meet the first preset condition, the second image is taken as the new image to be modified and the iterative process starts again; that is, in subsequent iterations, the image to be modified is the second image. If the user judges that the second image meets the first preset condition, the second image is output as the target medical image. Further, the first image, the modifications, and the image to be segmented acquired in the preceding iterations can be used as training samples, with the target medical image as the label, to train the image segmentation model.
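Flow 700 can be summarized in code. The sketch below uses placeholder callables for the pre-segmentation model, the segmentation model, the user's modifications, and the user's judgment; all names, the `max_iters` cap, and the integer stand-ins in the example are hypothetical, introduced only to make the control flow concrete.

```python
def interactive_segmentation(image, presegment, segment, get_modifications,
                             meets_requirement, max_iters=10):
    """Sketch of flow 700: pre-segment, then iterate user-guided refinement
    until the result meets the (user-judged) preset condition. Returns the
    target image and the (image, modifications) history kept for training."""
    third = presegment(image)
    if meets_requirement(third):            # second preset condition
        return third, []
    to_modify, history = third, []          # third image becomes the first image
    for _ in range(max_iters):
        mods = get_modifications(to_modify)         # user interaction at the client
        history.append((to_modify, mods))           # later reused as training samples
        second = segment(image, to_modify, mods)    # medical image segmentation model
        if meets_requirement(second):               # first preset condition
            return second, history
        to_modify = second                          # new image to be modified
    return to_modify, history

# Toy stand-ins: the "image" is ignored and segmentation quality is an integer
# that each refinement pass increments; quality >= 2 satisfies the user.
presegment = lambda img: 0
segment = lambda img, to_modify, mods: to_modify + 1
get_modifications = lambda to_modify: ["box"]
meets_requirement = lambda result: isinstance(result, int) and result >= 2

target, history = interactive_segmentation("img", presegment, segment,
                                           get_modifications, meets_requirement)
```

With these stubs the loop runs twice before the condition is met, mirroring the two-iteration fig. 8 example in the text.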
Fig. 6 is an exemplary flow diagram illustrating updating a medical image segmentation model according to some embodiments of the present description.
In particular, fig. 6 may be performed by output module 226.
In some embodiments, the training sample may include the medical image to be segmented and the first image.
Continuing with the example of fig. 8, the training sample includes the medical image "8 a" to be segmented and the first image is "fig. 8 b".
In some embodiments, the training samples further comprise: at least one modification of the image to be modified by the user. It will be appreciated that, in iteratively acquiring the target medical image, at least one modification of the image to be modified by the user is acquired in each iteration.
In some embodiments, all modifications may be taken as training samples.
Continuing with fig. 8 as an example, at least one modification of "fig. 8 b", i.e. the modification in "fig. 8 c", of the image to be modified (i.e. the first image) during the first iteration and at least one modification of "fig. 8 d", i.e. the modification in "fig. 8 e", of the image to be modified (i.e. the second image output from the first iteration) during the second iteration can be used as training samples.
Illustratively, the training sample may be [medical image to be segmented "fig. 8 a", first image "fig. 8 b", modification in "fig. 8 c", modification in "fig. 8 e"].
In some embodiments, some of the modifications may be used as training samples.
For example, the modification in "fig. 8 c" in the first iteration is a user's misoperation, and only the modification in "fig. 8 e" in the second iteration may be used as a training sample.
Illustratively, the training sample may be [medical image to be segmented "fig. 8 a", first image "fig. 8 b", modification in "fig. 8 e"].
The training sample set is a set of training samples and labels used to train the medical image segmentation model, where the label is the target medical image.
In some embodiments, the training sample set may include training samples acquired based on user interaction. In some embodiments, the training sample set may also include an initial training sample set for acquiring the target medical image. For a detailed description of the initial training sample set, refer to step 426, which is not repeated herein.
As mentioned before, the target medical image is a medical image that meets the user segmentation requirements. It is to be understood that the target medical image is a modification target of the first image in the iterative process.
Continuing with fig. 8 as an example, the medical image to be segmented "fig. 8 a", the first image "fig. 8 b", and the modification of the image to be modified in the first iteration (i.e., the modification in "fig. 8 c") may be used as a set of training samples, with the target medical image "fig. 8 f" as the label, i.e., [training sample: "fig. 8 a", "fig. 8 b", modification in "fig. 8 c" | label: "fig. 8 f"], which is added to the training sample set. Alternatively, the medical image to be segmented "fig. 8 a", the first image "fig. 8 b", the modification in the first iteration (i.e., the modification in "fig. 8 c"), and the modification of the image to be modified "fig. 8 d" in the second iteration (i.e., the modification in "fig. 8 e") may be used as a set of training samples, with the target medical image "fig. 8 f" as the label, i.e., [training sample: "fig. 8 a", "fig. 8 b", modification in "fig. 8 c", modification in "fig. 8 e" | label: "fig. 8 f"], which is added to the training sample set.
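Assembling such sample/label pairs can be sketched as follows. The helper name, the dictionary layout, and the `include` parameter (which lets modifications from a misoperation be excluded, as discussed above) are hypothetical conventions for illustration.

```python
def build_training_sample(image, first_image, modifications, target, include=None):
    """Pair a training sample with its label. `modifications` holds the
    user's modification from each iteration; `include` optionally selects
    which iterations' modifications to keep (e.g., to drop a misoperation).
    Returns (sample, label), where the label is the target medical image."""
    if include is None:
        include = range(len(modifications))
    kept = [modifications[i] for i in include]
    sample = {"to_segment": image, "first_image": first_image, "modifications": kept}
    return sample, target

# Both variants from the fig. 8 example: keep all modifications, or only
# the second iteration's modification (treating "mod 8c" as a misoperation).
all_mods, _ = build_training_sample("8a", "8b", ["mod 8c", "mod 8e"], "8f")
some_mods, label = build_training_sample("8a", "8b", ["mod 8c", "mod 8e"], "8f",
                                         include=[1])
```

Either pair would then be appended to the training sample set used in step 620.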
And step 620, updating parameters of the medical image segmentation model based on the training sample set.
As previously mentioned, the parameters may include parameters characterizing the user's modification habits. In some embodiments, the model update module may train the image segmentation model based on the training sample set, thereby updating parameters of the image segmentation model.
In some embodiments, training may be performed by a commonly used method based on the training samples. For example, training may be based on a gradient descent method, a newton method, a quasi-newton method, and the like.
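A minimal illustration of the gradient-descent updates mentioned above, on a one-parameter least-squares stand-in rather than a real segmentation network (the model, data, and learning rate are all toy assumptions):

```python
def gradient_descent_step(w, samples, lr=0.1):
    """One gradient-descent update for the 1-parameter model y ≈ w * x with
    mean squared error loss; a stand-in for updating segmentation-model
    parameters. Training stops when the loss converges."""
    grad = sum(2 * (w * x - y) * x for x, y in samples) / len(samples)
    return w - lr * grad

samples = [(1.0, 2.0), (2.0, 4.0)]  # consistent with the ideal parameter w = 2
w = 0.0
for _ in range(50):
    w = gradient_descent_step(w, samples)
```

After 50 updates `w` has converged to the ideal value 2, at which point the loss stops decreasing, matching the loss-convergence stopping criterion described next.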
In some embodiments, the training ends when the trained model satisfies a preset condition, for example, convergence of the loss function.
From the foregoing, the more times that the user acquires the target medical image by using the medical image segmentation model, the more training samples in the training sample set, the closer the output result of the medical image segmentation model is to the ideal result of the user participating in the interaction, and the higher the accuracy of the updated medical image segmentation model.
The beneficial effects that may be brought by the embodiments of the present description include, but are not limited to: (1) based on user interaction, the training samples and labels are obtained while the target medical image is obtained, so that the medical image segmentation model does not need to rely on a large number of training samples and standard medical segmentation images to update its parameters, nor does it need to be trained separately, which improves training efficiency; (2) based on multiple user interactions, the medical image segmentation model can learn the segmentation operations of the corresponding user, yielding a model that conforms to that user's habits, so that the output target medical image gradually approaches the user's ideal segmentation result and the adaptability of the medical image segmentation model is improved; (3) the modifications made in the iterative process can be selected as training samples, excluding modifications caused by misoperation, thereby avoiding the influence of misoperation training samples on the update of the medical image segmentation model; (4) roughly segmenting the medical image to be segmented with the pre-segmentation model allows, on the one hand, the target medical image corresponding to a simple medical image to be obtained directly and, on the other hand, the subsequent iterative process to converge more quickly, improving the efficiency of the medical image segmentation model. It should be noted that different embodiments may produce different advantages; in different embodiments, the advantages may be any one or a combination of the above, or any other advantages that may be obtained.
Having thus described the basic concept, it will be apparent to those skilled in the art that the foregoing detailed disclosure is to be regarded as illustrative only and not as limiting the present specification. Various modifications, improvements and adaptations to the present description may occur to those skilled in the art, although not explicitly described herein. Such modifications, improvements and adaptations are proposed in the present specification and thus fall within the spirit and scope of the exemplary embodiments of the present specification.
Also, the description uses specific words to describe embodiments of the description. Reference throughout this specification to "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic described in connection with at least one embodiment of the specification is included. Therefore, it is emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, some features, structures, or characteristics of one or more embodiments of the specification may be combined as appropriate.
Moreover, those skilled in the art will appreciate that aspects of this specification may be illustrated and described in terms of several patentable species or situations, including any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof. Accordingly, aspects of this specification may be performed entirely by hardware, entirely by software (including firmware, resident software, micro-code, etc.), or by a combination of hardware and software. The above hardware or software may be referred to as a "data block," "module," "engine," "unit," "component," or "system." Furthermore, aspects of this specification may be embodied as a computer product, including computer-readable program code, located in one or more computer-readable media.
The computer storage medium may contain a propagated data signal with the computer program code embodied therewith, for example, in baseband or as part of a carrier wave. The propagated signal may take any of a variety of forms, including electromagnetic, optical, etc., or any suitable combination. A computer storage medium may be any computer-readable medium that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code located on a computer storage medium may be propagated over any suitable medium, including radio, cable, fiber optic cable, RF, or the like, or any combination of the preceding.
Computer program code required for the operation of various portions of this specification may be written in any one or more programming languages, including object-oriented programming languages such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, and VB.NET, conventional procedural programming languages such as C, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, and ABAP, dynamic programming languages such as Python, Ruby, and Groovy, or other programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or a processing device. In the latter scenario, the remote computer may be connected to the user's computer through any form of network, such as a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet), or in a cloud computing environment, or as a service, such as software as a service (SaaS).
Additionally, the order in which the elements and sequences of the process are recited in the specification, the use of alphanumeric characters, or other designations, is not intended to limit the order in which the processes and methods of the specification occur, unless otherwise specified in the claims. While various presently contemplated embodiments of the invention have been discussed in the foregoing disclosure by way of example, it is to be understood that such detail is solely for that purpose and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover all modifications and equivalent arrangements that are within the spirit and scope of the embodiments herein. For example, although the system components described above may be implemented by hardware devices, they may also be implemented by software-only solutions, such as installing the described system on an existing processing device or mobile device.
Similarly, it should be noted that in the preceding description of embodiments of the present specification, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more embodiments. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed subject matter requires more features than are expressly recited in each claim. Rather, claimed subject matter may lie in less than all features of a single foregoing disclosed embodiment.
Some embodiments use numbers to describe quantities of components and attributes; it should be understood that such numbers used in the description of the embodiments are qualified in some instances by the modifier "about," "approximately," or "substantially." Unless otherwise indicated, "about," "approximately," or "substantially" indicates that the number allows a variation of ±20%. Accordingly, in some embodiments, the numerical parameters used in the specification and claims are approximations that may vary depending upon the desired properties of the individual embodiments. In some embodiments, the numerical parameters should take into account the specified significant digits and apply ordinary rounding. Although the numerical ranges and parameters setting forth the broad scope of some embodiments are approximations, the numerical values set forth in specific examples are reported as precisely as practicable.
For each patent, patent application, patent application publication, and other material cited in this specification, such as articles, books, specifications, publications, and documents, the entire contents are hereby incorporated by reference into this specification. Excluded are application history documents that are inconsistent with or conflict with the contents of this specification, as well as any document (currently or later appended to this specification) that would limit the broadest scope of the claims of this specification. It should be noted that if the description, definition, and/or use of a term in the accompanying materials of this specification is inconsistent with or contrary to that in this specification, the description, definition, and/or use of the term in this specification shall prevail.
Finally, it should be understood that the embodiments described herein are merely illustrative of the principles of the embodiments of the present disclosure. Other variations are also possible within the scope of the present description. Thus, by way of example, and not limitation, alternative configurations of the embodiments of the specification can be considered consistent with the teachings of the specification. Accordingly, the embodiments of the present description are not limited to only those embodiments explicitly described and depicted herein.
Claims (19)
1. A medical image segmentation method based on user interaction, applied to a server, the method comprising:
acquiring a first image based on a medical image to be segmented;
taking the first image as an image to be modified, and executing a plurality of iterative processes until a target medical image is obtained, wherein the iterative processes comprise:
sending the image to be modified to a client, and receiving, from the client, at least one modification made by a user to the image to be modified;
inputting the medical image to be segmented, the image to be modified, and the at least one modification of the image to be modified into a medical image segmentation model, and outputting a second image;
sending the second image to the client, and receiving, from the client, the user's judgment of whether the second image meets a first preset condition; if yes, outputting the second image as the target medical image, and updating the medical image segmentation model based on the target medical image; and if not, taking the second image as the new image to be modified.
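The iterative process recited in claim 1 can be sketched as a server-side loop. This is only an illustrative sketch: the names (`model`, `send_to_client`, `receive_modifications`, `receive_judgment`, `max_iters`) are hypothetical placeholders for the claimed interactions, not part of the disclosed implementation, and the iteration cap is an added safeguard — in the claim the loop ends only when the user's judgment is affirmative.

```python
def interactive_segmentation(raw_image, first_image, model,
                             send_to_client, receive_modifications,
                             receive_judgment, max_iters=10):
    """Hypothetical sketch of the iterative process of claim 1.

    raw_image   -- the medical image to be segmented
    first_image -- the initial image to be modified
    model       -- the medical image segmentation model
    The send_*/receive_* callables stand in for the client interaction.
    """
    image_to_modify = first_image
    for _ in range(max_iters):
        send_to_client(image_to_modify)
        modifications = receive_modifications()  # at least one user modification
        # Input the image to be segmented, the image to be modified, and the
        # modifications into the segmentation model to obtain the second image.
        second_image = model(raw_image, image_to_modify, modifications)
        send_to_client(second_image)
        if receive_judgment():           # first preset condition satisfied
            return second_image          # output as the target medical image
        image_to_modify = second_image   # otherwise, the new image to be modified
    return image_to_modify
```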
2. The method of claim 1, wherein acquiring the first image based on the medical image to be segmented comprises:
pre-segmenting the medical image to be segmented to obtain a third image;
judging whether the third image meets a second preset condition or not;
if yes, outputting the third image as the target medical image;
otherwise, the third image is taken as the first image.
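The pre-segmentation branch of claim 2 is essentially a gate on the rough result. Below is a minimal sketch under assumed names: `presegment` stands for the pre-segmentation model and `meets_condition` for the second preset condition, which the claim leaves unspecified (it might, for example, be a confidence or quality-score threshold).

```python
def acquire_first_image(raw_image, presegment, meets_condition):
    """Hypothetical gate for claim 2.

    Returns (image, is_final): is_final=True means the third image already
    satisfies the second preset condition and is output directly as the
    target medical image; otherwise the third image becomes the first image
    and seeds the iterative process of claim 1.
    """
    third_image = presegment(raw_image)   # rough pre-segmentation
    if meets_condition(third_image):
        return third_image, True
    return third_image, False
```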
3. The method of claim 1, wherein updating the medical image segmentation model based on the target medical image comprises:
taking the medical image to be segmented and the first image as a training sample, taking the target medical image as a label, and adding them to a training sample set for updating the medical image segmentation model;
updating parameters of the medical image segmentation model based on the training sample set.
4. The method of claim 3, wherein the training sample further comprises: the at least one modification made by the user to the first image in at least one of the iterative processes.
5. The method of claim 3, wherein the parameters of the medical image segmentation model include parameters characterizing the user's habits.
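Claims 3-5 describe accumulating interaction-derived (sample, label) pairs and updating the model from them. The buffer below is a hypothetical sketch of that bookkeeping; the batch size and the `update_fn` hook (standing in for one parameter-update step, which might also adjust the user-habit parameters of claim 5) are illustrative assumptions, not recited in the claims.

```python
class TrainingSampleSet:
    """Hypothetical training-sample buffer for claims 3-5."""

    def __init__(self, update_fn, batch_size=4):
        self.samples = []           # (raw image, first image, modifications)
        self.labels = []            # user-approved target medical images
        self.update_fn = update_fn  # one model parameter-update step
        self.batch_size = batch_size

    def add(self, raw_image, first_image, modifications, target_image):
        # Per claim 3 the sample includes the image to be segmented and the
        # first image; per claim 4 it may also include the user's modifications.
        self.samples.append((raw_image, first_image, modifications))
        self.labels.append(target_image)  # the target medical image is the label
        if len(self.samples) >= self.batch_size:
            self.update_fn(self.samples, self.labels)
            self.samples, self.labels = [], []  # start a fresh batch
```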
6. A medical image segmentation system based on user interaction, the system implemented on a server, the system comprising:
the pre-segmentation module is used for acquiring a first image based on a medical image to be segmented;
a target medical image acquisition module, configured to perform a plurality of iterative processes with the first image as an image to be modified until a target medical image is acquired, the target medical image acquisition module including:
the modification receiving module is used for sending the image to be modified to a client and receiving, from the client, at least one modification made by a user to the image to be modified;
the image segmentation module is used for inputting the medical image to be segmented, the image to be modified, and the at least one modification of the image to be modified into the medical image segmentation model, and outputting a second image;
the output module is used for sending the second image to the client and receiving, from the client, the user's judgment of whether the second image meets a first preset condition; if yes, outputting the second image as the target medical image, and updating the medical image segmentation model based on the target medical image; and if not, taking the second image as the new image to be modified.
7. The system of claim 6, wherein the pre-segmentation module is further to:
pre-segmenting the medical image to be segmented to obtain a third image;
judging whether the third image meets a second preset condition or not;
if yes, outputting the third image as the target medical image;
otherwise, the third image is taken as the first image.
8. The system of claim 6, wherein the output module is further to:
taking the medical image to be segmented and the first image as a training sample, taking the target medical image as a label, and adding them to a training sample set for updating the medical image segmentation model;
updating parameters of the medical image segmentation model based on the training sample set.
9. The system of claim 8, wherein the training sample further comprises: the at least one modification made by the user to the first image in at least one of the iterative processes.
10. The system of claim 8, wherein the parameters of the medical image segmentation model include parameters characterizing the user's habits.
11. A medical image segmentation method based on user interaction, applied to a client, the method comprising:
receiving an image to be modified from a server;
based on the image to be modified, executing a plurality of iterative processes until a target medical image is obtained, wherein the iterative processes comprise:
acquiring at least one modification of the image to be modified by a user, and sending the at least one modification to the server;
receiving a second image from the server;
acquiring the user's judgment of whether the second image meets a first preset condition, and sending the judgment to the server, so that the server executes the following processing based on the judgment: if yes, outputting the second image as the target medical image, and updating the medical image segmentation model based on the target medical image; and if not, taking the second image as a new image to be modified.
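The client-side method of claim 11 mirrors the server loop. A hypothetical sketch follows, with placeholder callables for the network and user-interface interactions (`receive_from_server`, `send_to_server`, `get_user_modifications`, and `get_user_judgment` are assumed names, not part of the disclosure).

```python
def client_loop(receive_from_server, send_to_server,
                get_user_modifications, get_user_judgment):
    """Hypothetical client-side counterpart of the iterative process."""
    while True:
        image_to_modify = receive_from_server()
        # Collect at least one user modification and forward it to the server.
        send_to_server(get_user_modifications(image_to_modify))
        second_image = receive_from_server()
        approved = get_user_judgment(second_image)  # first preset condition met?
        send_to_server(approved)
        if approved:
            return second_image  # the server outputs it as the target image
```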
12. A system for medical image segmentation based on user interaction, the system implemented on a client, comprising:
the image to be modified receiving module is used for receiving the image to be modified from the server;
an iteration module, configured to perform a plurality of iteration processes based on the image to be modified until a target medical image is obtained, where the iteration module includes:
the modification sending module is used for acquiring at least one modification of the image to be modified by the user and sending the at least one modification to the server;
a second image receiving module for receiving a second image from the server;
a judging module, configured to obtain the user's judgment of whether the second image meets a first preset condition, and send the judgment to the server, so that the server performs the following processing based on the judgment: if yes, outputting the second image as the target medical image, and updating the medical image segmentation model based on the target medical image; and if not, taking the second image as the new image to be modified.
13. A medical image segmentation method, comprising:
acquiring a first image, wherein the first image is obtained based on a medical image to be segmented;
taking the first image as an image to be modified, and executing a plurality of iterative processes until a target medical image is obtained, wherein the iterative processes comprise:
acquiring at least one modification of the image to be modified;
inputting the medical image to be segmented, the image to be modified, and the at least one modification of the image to be modified into a medical image segmentation model, and outputting a second image;
judging whether the second image meets a first preset condition or not; if so, taking the second image as the target medical image; and if not, taking the second image as the new image to be modified.
14. The method of claim 13, wherein obtaining the first image based on the medical image to be segmented comprises:
pre-segmenting the medical image to be segmented to obtain a third image;
judging whether the third image meets a second preset condition or not;
if yes, outputting the third image as the target medical image;
otherwise, the third image is taken as the first image.
15. The method of claim 13, further comprising updating the medical image segmentation model based on the target medical image.
16. The method of claim 15, wherein updating the medical image segmentation model based on the target medical image comprises:
taking the medical image to be segmented and the image to be modified as a training sample, taking the target medical image as a label, and adding them to a training sample set for updating the medical image segmentation model;
updating parameters of the medical image segmentation model based on the training sample set.
17. The method of claim 16, wherein the training sample further comprises: the at least one modification made by the user to the first image in at least one of the iterative processes.
18. A medical image segmentation system, comprising:
the pre-segmentation module is used for acquiring a first image, and the first image is obtained based on a medical image to be segmented;
a target medical image acquisition module for performing a plurality of iterative processes until a target medical image is acquired, the target medical image acquisition module comprising:
the modification receiving module is used for acquiring at least one modification of the image to be modified;
the image segmentation module is used for inputting the medical image to be segmented, the image to be modified, and the at least one modification of the image to be modified into a medical image segmentation model, and outputting a second image;
the output module is used for judging whether the second image meets a first preset condition or not; if so, taking the second image as the target medical image; and if not, taking the second image as the new image to be modified.
19. A computer readable storage medium, wherein the storage medium stores computer instructions which, when executed by a processor, implement the method of any of claims 1-5, 11 and 13-17.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011197897.3A CN112396606B (en) | 2020-10-30 | 2020-10-30 | Medical image segmentation method, system and device based on user interaction |
CN202410218979.3A CN117994263A (en) | 2020-10-30 | 2020-10-30 | Medical image segmentation method, system and device based on user interaction |
US17/452,795 US20220138957A1 (en) | 2020-10-30 | 2021-10-29 | Methods and systems for medical image segmentation |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011197897.3A CN112396606B (en) | 2020-10-30 | 2020-10-30 | Medical image segmentation method, system and device based on user interaction |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410218979.3A Division CN117994263A (en) | 2020-10-30 | 2020-10-30 | Medical image segmentation method, system and device based on user interaction |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112396606A true CN112396606A (en) | 2021-02-23 |
CN112396606B CN112396606B (en) | 2024-01-05 |
Family
ID=74597808
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011197897.3A Active CN112396606B (en) | 2020-10-30 | 2020-10-30 | Medical image segmentation method, system and device based on user interaction |
CN202410218979.3A Pending CN117994263A (en) | 2020-10-30 | 2020-10-30 | Medical image segmentation method, system and device based on user interaction |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410218979.3A Pending CN117994263A (en) | 2020-10-30 | 2020-10-30 | Medical image segmentation method, system and device based on user interaction |
Country Status (1)
Country | Link |
---|---|
CN (2) | CN112396606B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112802036A (en) * | 2021-03-16 | 2021-05-14 | 上海联影医疗科技股份有限公司 | Method, system and device for segmenting target area of three-dimensional medical image |
CN113077445A (en) * | 2021-04-01 | 2021-07-06 | 中科院成都信息技术股份有限公司 | Data processing method and device, electronic equipment and readable storage medium |
CN114119645A (en) * | 2021-11-25 | 2022-03-01 | 推想医疗科技股份有限公司 | Method, system, device and medium for determining image segmentation quality |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108345890A (en) * | 2018-03-01 | 2018-07-31 | 腾讯科技(深圳)有限公司 | Image processing method, device and relevant device |
CN109389587A (en) * | 2018-09-26 | 2019-02-26 | 上海联影智能医疗科技有限公司 | A kind of medical image analysis system, device and storage medium |
CN111127471A (en) * | 2019-12-27 | 2020-05-08 | 之江实验室 | Gastric cancer pathological section image segmentation method and system based on double-label loss |
Also Published As
Publication number | Publication date |
---|---|
CN112396606B (en) | 2024-01-05 |
CN117994263A (en) | 2024-05-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Khamparia et al. | Internet of health things-driven deep learning system for detection and classification of cervical cells using transfer learning | |
US11423540B2 (en) | Segmentation of anatomical regions and lesions | |
CN111008984B (en) | Automatic contour line drawing method for normal organ in medical image | |
Deb et al. | Brain tumor detection based on hybrid deep neural network in MRI by adaptive squirrel search optimization | |
CN112396606B (en) | Medical image segmentation method, system and device based on user interaction | |
US20200074634A1 (en) | Recist assessment of tumour progression | |
CN107563983A (en) | Image processing method and medical imaging devices | |
US20220301224A1 (en) | Systems and methods for image segmentation | |
JP7346553B2 (en) | Determining the growth rate of objects in a 3D dataset using deep learning | |
CN111275762B (en) | System and method for patient positioning | |
Sadad et al. | Internet of medical things embedding deep learning with data augmentation for mammogram density classification | |
Naga Srinivasu et al. | Variational Autoencoders‐BasedSelf‐Learning Model for Tumor Identification and Impact Analysis from 2‐D MRI Images | |
EP3286728B1 (en) | Model-based segmentation of an anatomical structure | |
EP3657514A1 (en) | Interactive iterative image annotation | |
CN113724185B (en) | Model processing method, device and storage medium for image classification | |
US20240087697A1 (en) | Methods and systems for providing a template data structure for a medical report | |
Jadwaa | X‐Ray Lung Image Classification Using a Canny Edge Detector | |
CN112419339A (en) | Medical image segmentation model training method and system | |
CN115861716B (en) | Glioma classification method and device based on twin neural network and image histology | |
US20220138957A1 (en) | Methods and systems for medical image segmentation | |
CN116433734A (en) | Registration method for multi-mode image guided radiotherapy | |
CN113614788A (en) | Deep reinforcement learning for computer-aided reading and analysis | |
CN116188412A (en) | Heart blood vessel branch identification method, system and storage medium | |
CN115860087A (en) | Model training method, system and storage medium | |
CN113177953B (en) | Liver region segmentation method, liver region segmentation device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||