CN116758286A - Medical image segmentation method, system, device, storage medium and product - Google Patents

Medical image segmentation method, system, device, storage medium and product

Info

Publication number
CN116758286A
CN116758286A (application number CN202310752469.XA)
Authority
CN
China
Prior art keywords
data
domain data
medical image
target domain
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310752469.XA
Other languages
Chinese (zh)
Other versions
CN116758286B (en)
Inventor
吴东东
陈子航
张诗慧
车贺宾
庄严
缪学磊
汪安安
徐洪丽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chinese PLA General Hospital
Original Assignee
Chinese PLA General Hospital
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chinese PLA General Hospital filed Critical Chinese PLA General Hospital
Priority to CN202310752469.XA priority Critical patent/CN116758286B/en
Publication of CN116758286A publication Critical patent/CN116758286A/en
Application granted granted Critical
Publication of CN116758286B publication Critical patent/CN116758286B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10088Magnetic resonance imaging [MRI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03Recognition of patterns in medical or anatomical images

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

The application discloses a medical image segmentation method, a system, a device, a storage medium and a product, wherein the medical image segmentation method comprises the following steps: generating image data based on pre-acquired medical image data and clinical medical background information, wherein the image data comprises source domain data or target domain data; constructing a domain self-adaptive module, distributing a preset label to the target domain data, and generating source domain data, target domain data and distance information of the source domain data and the target domain data based on the domain self-adaptive module so as to improve data distribution consistency; constructing an attention inheritance module, and inputting the source domain data into the attention inheritance module to output inheritance characteristic data; and constructing a cross-modal model, inputting the target domain data into the cross-modal model, and generating a segmented medical image based on the distance information and the inheritance characteristic data. By the method, the accuracy of image segmentation is improved, and the segmentation process is simplified.

Description

Medical image segmentation method, system, device, storage medium and product
Technical Field
The present application relates generally to the field of medical treatment, and in particular, to a method, system, apparatus, storage medium and product for medical image segmentation.
Background
Modern medicine often uses a variety of medical images in diagnostic procedures, which are the basis for various medical image applications, and medical image segmentation techniques have shown increasing clinical value in clinically assisted diagnosis, image-guided surgery and radiation therapy. However, when analyzing medical images, the requirements on the authenticity and accuracy of the images are extremely high, and the analysis result may be directly affected by the change of a few pixels in the images.
In the prior art, medical image segmentation relies on manual segmentation by experienced doctors; this purely manual approach is time-consuming and labor-intensive and is strongly affected by the doctor's subjective judgment. Deep learning, in turn, typically relies on massive amounts of high-quality labeled data, while medical image data are scarce and high-quality labels are difficult to obtain.
Disclosure of Invention
In view of the foregoing drawbacks or shortcomings of the prior art, it is desirable to provide a medical image segmentation method, system, apparatus, storage medium and product.
In one aspect, the present application provides a medical image segmentation method, including:
generating image data based on pre-acquired medical image data and clinical medical background information, wherein the image data comprises source domain data or target domain data;
constructing a domain self-adaptive module, distributing a preset label to the target domain data, and generating source domain data, target domain data and distance information of the source domain data and the target domain data based on the domain self-adaptive module so as to improve data distribution consistency;
constructing an attention inheritance module, and inputting the source domain data into the attention inheritance module to output inheritance characteristic data;
and constructing a cross-modal model, inputting the target domain data into the cross-modal model, and generating a segmented medical image based on the distance information and the inheritance characteristic data.
In some embodiments, a domain adaptive module is constructed, a preset label is allocated to the target domain data, and distance information of source domain data, target domain data and source domain data-target domain data is generated based on the domain adaptive module, so as to improve data distribution consistency, and the method further includes:
distributing a preset label to the target domain data;
and generating a target domain data score based on the preset label and the target domain data, and selecting the target domain data.
In some embodiments, constructing a cross-modal model, inputting the source domain data or the target domain data into the cross-modal model, generating a segmented medical image based on the distance information and the inherited feature data, and further comprising:
generating loss function information;
and generating the segmented medical image based on the loss function information, the distance information and the inherited characteristic data.
In some embodiments, image data is generated based on pre-acquired medical image data and clinical medical context information, the image data including source domain data or target domain data, further comprising:
and processing medical image information, wherein the preprocessing comprises one or more of cutting, rotating, deforming, zooming and noise reduction.
In some embodiments, image data is generated based on pre-acquired medical image data and clinical medical context information, the image data including source domain data or target domain data, further comprising:
and enhancing the medical image information.
In a second aspect, the present application provides a medical image segmentation system comprising:
the acquisition module is used for generating image data based on the medical image data and clinical medical background information acquired in advance, wherein the image data comprises source domain data or target domain data;
the domain self-adaptive module is used for distributing a preset label to the target domain data and generating source domain data, target domain data and distance information of the source domain data and the target domain data based on the domain self-adaptive module so as to improve data distribution consistency;
the attention inheritance module is used for inputting the source domain data into the attention inheritance module to output inheritance characteristic data;
and the cross-modal model is used for inputting the target domain data into the cross-modal model and generating the segmented medical image based on the distance information and the inheritance characteristic data.
In some embodiments, the domain adaptation module is further to:
distributing a preset label to the target domain data;
and generating a target domain data score based on the preset label and the target domain data, and selecting the target domain data.
In a third aspect, the present application provides a medical image segmentation apparatus, including a processor and a memory, where at least one instruction, at least one program, a code set, or an instruction set is stored in the memory, where the instruction, the program, the code set, or the instruction set is loaded and executed by the processor to implement the medical image segmentation method described above.
In a fourth aspect, the present application provides a non-transitory computer-readable storage medium storing instructions which, when executed by a processor of a mobile terminal, cause the mobile terminal to perform the above-described medical image segmentation method.
In another aspect, the application provides a computer program product which, when executed by a processor of a mobile terminal, enables the mobile terminal to perform the above-described medical image segmentation method.
In summary, according to the medical image segmentation method provided by the application, the domain adaptive module and the attention inheritance module are constructed so that their data serve cross-modal segmentation, and a segmented medical image is generated from the medical image data.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the detailed description of non-limiting embodiments, made with reference to the accompanying drawings in which:
FIG. 1 is a flowchart of a medical image segmentation method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a constructed domain adaptation module provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of a constructed attention inheritance module provided by an embodiment of the present application;
FIG. 4 is a block diagram illustrating a medical image segmentation system according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a medical image segmentation apparatus according to an embodiment of the present application.
Detailed Description
The application is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting of the application. It should be noted that, for convenience of description, only the portions related to the application are shown in the drawings.
It should be noted that, without conflict, the embodiments of the present application and features of the embodiments may be combined with each other. The application will be described in detail below with reference to the drawings in connection with embodiments.
The present application relates generally to the medical field and aims to improve the accuracy of image segmentation and to simplify the segmentation process.
Referring to fig. 1 in detail, the present application provides a medical image segmentation method, which includes:
s101, generating image data based on pre-acquired medical image data and clinical medical background information, wherein the image data comprises source domain data or target domain data.
Specifically, organ images from CT and MRI are obtained in advance from public data sets and the medical record system, image data are generated based on the relevant clinical medical background information, and the image data are labeled. Image data labeled with existing information are the source domain data, denoted D_S; unlabeled image data are the target domain data, denoted D_T.
In some embodiments, image data is generated based on pre-acquired medical image data and clinical medical context information, the image data including source domain data or target domain data, further comprising:
and processing medical image information, wherein the preprocessing comprises one or more of cutting, rotating, deforming, zooming and noise reduction.
In particular, the medical image is preprocessed to make it easier to recognize, the preprocessing mode including one or more of cropping, rotation, deformation, scaling and noise reduction.
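Two of the listed preprocessing steps can be sketched with plain numpy; the function name and argument conventions are illustrative assumptions, and deformation and noise reduction are omitted for brevity:

```python
import numpy as np

def preprocess(image, crop=None, rotations=0, zoom=1):
    """Apply a subset of the listed preprocessing steps: crop a region,
    rotate by 90-degree increments, and zoom by an integer factor
    (nearest-neighbour upscaling)."""
    if crop is not None:
        top, left, height, width = crop
        image = image[top:top + height, left:left + width]
    image = np.rot90(image, rotations)
    if zoom > 1:
        # repeat pixels along both axes for nearest-neighbour zoom
        image = np.repeat(np.repeat(image, zoom, axis=0), zoom, axis=1)
    return image
```

In practice a medical imaging library would be used for resampling and denoising; this sketch only illustrates how the listed operations compose into one pipeline.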
In some embodiments, image data is generated based on pre-acquired medical image data and clinical medical context information, the image data including source domain data or target domain data, further comprising: and enhancing the medical image information.
Specifically, the image is processed based on preset formulas, as follows:

X' = (X - MinBound) / (MaxBound - MinBound), clipped to the range [0, 1]

wherein X represents the signal value in the original medical image, and MinBound and MaxBound are the lower and upper limits of the acquired signals;

X'' = (X' - μ) / σ

wherein μ and σ respectively represent the mean value and standard deviation of the X' values. The processed image is thereby obtained based on these formulas.
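The two-step normalization described above can be sketched as follows; the default bounds are typical CT Hounsfield limits chosen for illustration, not values taken from the application:

```python
import numpy as np

def normalize(x, min_bound=-1000.0, max_bound=400.0):
    """Clamp raw signal values X to [MinBound, MaxBound], rescale to [0, 1]
    to obtain X', then standardize X' by its own mean and standard deviation."""
    x = np.clip(x, min_bound, max_bound)
    x_prime = (x - min_bound) / (max_bound - min_bound)
    return (x_prime - x_prime.mean()) / x_prime.std()
```

The output has zero mean and unit standard deviation regardless of the original intensity scale, which keeps CT and MRI inputs comparable.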
S102, constructing a domain self-adaptive module, distributing a preset label to the target domain data, and generating distance information of source domain data, target domain data and source domain data-target domain data based on the domain self-adaptive module so as to improve data distribution consistency.
Specifically, as shown in fig. 2, the constructed domain adaptive module constructs a mapping space G_f in advance, and maps the labeled source domain data D_S and the unlabeled target domain data D_T into G_f, where semantic features of the image data of different modalities are extracted, and unsupervised learning is used to make the distributions of the source domain data and the target domain data in this space as consistent as possible, i.e., to reduce the distribution difference. On this basis, another mapping space G_d is designed for predicting the target domain data D_T. G_f and G_d are combined to construct a domain adaptive module that minimizes the feature-distribution and domain gaps during training. After the unsupervised domain adaptive network framework is constructed, a segmentation loss function containing domain offset information is designed. According to the pseudo labels assigned to the target domain data, the source domain data, the target domain data and the source domain-target domain distance information {U_s, U_t, U_st} are generated, and the constructed domain adaptive module minimizes the distance D of {U_s, U_t, U_st}, as shown in the following formula:

d_H(D_S, D_T) = 2 sup_{h ∈ H} | Pr_{x ~ D_S}[h(x) = 1] - Pr_{x ~ D_T}[h(x) = 1] |

wherein H represents a hypothesis class; D_S and D_T represent the source domain data and the target domain data, respectively.
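For intuition, the supremum over the hypothesis class can be evaluated exactly for a toy class of one-dimensional threshold classifiers; this is a didactic sketch of the distance being minimized, not the module's actual implementation:

```python
def h_distance_1d(source, target, thresholds):
    """Empirical H-divergence 2 * sup_h |Pr_S[h(x)=1] - Pr_T[h(x)=1]| for the
    toy hypothesis class of threshold classifiers h_t(x) = 1 if x > t else 0."""
    def frac_above(samples, t):
        # fraction of samples that the classifier h_t labels as 1
        return sum(1 for x in samples if x > t) / len(samples)
    return 2 * max(abs(frac_above(source, t) - frac_above(target, t))
                   for t in thresholds)
```

Identical distributions give distance 0; fully separable ones give the maximum value 2, which is the quantity the module drives down during training.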
In some embodiments, a domain adaptive module is constructed, a preset label is allocated to the target domain data, and distance information of source domain data, target domain data and source domain data-target domain data is generated based on the domain adaptive module, so as to improve data distribution consistency, and the method further includes:
distributing a preset label to the target domain data;
and generating a target domain data score based on the preset label and the target domain data, and selecting the target domain data.
Specifically, a score is generated based on the label and the target domain data, and the target domain data with the highest score is selected for analysis.
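A common concrete choice, assumed here for illustration, is to score each target sample by the confidence of its pseudo label (the maximum softmax probability) and keep the highest-scoring samples:

```python
import numpy as np

def select_targets(probs, k):
    """probs: (N, C) class-probability matrix for N unlabeled target samples.
    Score each sample by its maximum class probability (confidence in the
    assigned pseudo label) and return the indices of the k highest-scoring
    samples, best first."""
    scores = probs.max(axis=1)
    return np.argsort(scores)[::-1][:k].tolist()
```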
S103, constructing an attention inheritance module, inputting the source domain data into the attention inheritance module, and outputting inheritance feature data.
Specifically, as shown in fig. 3, the constructed attention inheritance module inputs the prediction matrix m_p that the pre-trained model produces for the source domain data D_S into a softmax activation function σ(·), and computes the maximum probability over all classes except the background, thereby obtaining the inherited guide map g:

g = A(max_{c ≠ background} σ(m_p)_c)

wherein c indexes the connected-domain information other than the background, and A(·) is the attention module. Taking g as the attention weight, attention interaction is performed with the features of the unlabeled target domain data output by the backbone network, to obtain the inherited features f_s:

f_s = M(g ⊗ f_t) ⊕ f_t

wherein f_t denotes the backbone features of the target domain data, M consists of several convolution layers for feature fusion, and ⊗ and ⊕ respectively represent element-wise multiplication and summation; f_s is the inherited features enhanced by the attention module.
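The guide map and the inherited features can be sketched with numpy as follows; as simplifying assumptions, the attention module A(·) is omitted and the fusion network M is reduced to a single scalar weight:

```python
import numpy as np

def softmax(logits, axis=0):
    """Numerically stable softmax along the given axis."""
    e = np.exp(logits - logits.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def inherit_features(pred_logits, target_feats, fuse_weight=1.0):
    """pred_logits: (C, H, W) prediction matrix m_p of the pre-trained model,
    with class 0 as background. target_feats: (H, W) backbone features of the
    unlabeled target data. Computes the guide map g as the maximum
    non-background probability, then f_s = M(g * f_t) + f_t, with the
    fusion M reduced to one scalar weight."""
    probs = softmax(pred_logits, axis=0)
    g = probs[1:].max(axis=0)  # max probability over non-background classes
    return fuse_weight * (g * target_feats) + target_feats
```

The residual addition of f_t means that, even where the guide map is near zero, the target features pass through unchanged; the guide map only amplifies regions the pre-trained model considers foreground.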
S104, constructing a cross-modal model, inputting the target domain data into the cross-modal model, and generating a segmented medical image based on the distance information and the inheritance characteristic data.
Specifically, the labeled source domain data D_S and the unlabeled target domain data D_T are input, the two types of images are projected into a high-dimensional space by the domain adaptive module, and the source domain data are adjusted according to the target domain data to serve as the current training input, so that the domain gap between the current training data and the target domain data is quantified; high-level semantic information in the pre-trained model is then inherited through the attention inheritance module; finally, during training, the source domain data are continuously adjusted and the domain-gap features are introduced into the loss function, so that the domain gap is gradually reduced and the original high-level semantic information is inherited while the generalization performance of the model on the target domain data is improved, thereby realizing end-to-end cross-modal segmentation.
In some embodiments, constructing a cross-modal model, inputting the source domain data or the target domain data into the cross-modal model, generating a segmented medical image based on the distance information and the inherited feature data, and further comprising:
generating loss function information;
and generating the segmented medical image based on the loss function information, the distance information and the inherited characteristic data.
Specifically, the loss function (SP loss) can sense the domain adaptation state: it integrates the domain-gap feature distance D from the domain adaptive module, the class-granularity term L1 and the sample-granularity term L2 of the prediction score distribution, and the Dice loss (dloss) evaluating the segmentation result, each with a corresponding weight:

SP = w1*D + w2*(L1 + L2) + w3*dloss
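The weighted combination can be written as a one-line function; note that the bare "3" in the printed formula is assumed here to be the intended third weight w3:

```python
def sp_loss(domain_gap, class_l1, sample_l2, dice_loss, w1=1.0, w2=1.0, w3=1.0):
    """SP = w1*D + w2*(L1 + L2) + w3*dloss: weighted sum of the domain-gap
    distance D, the class- and sample-granularity terms L1 and L2, and the
    Dice segmentation loss."""
    return w1 * domain_gap + w2 * (class_l1 + sample_l2) + w3 * dice_loss
```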
in summary, according to the medical image segmentation method provided by the application, the domain self-adaptive module and the attention integration module are constructed, so that the data of the domain self-adaptive module and the attention integration module are subjected to cross-modal service, and a segmented medical image is formed with medical image data.
Referring further to fig. 4, a schematic diagram of a medical image segmentation system 200 according to one embodiment of the application is shown, comprising: the system comprises an acquisition module 201, a domain adaptation module 202, an attention inheritance module 203 and a cross-modal model 204.
An acquisition module 201, configured to generate image data based on medical image data and clinical medical background information acquired in advance, where the image data includes source domain data or target domain data;
the domain adaptive module 202 is configured to allocate a preset tag to the target domain data, and generate source domain data, target domain data, and distance information of the source domain data and the target domain data based on the domain adaptive module, so as to improve data distribution consistency;
an attention inheritance module 203 for inputting the source domain data to the attention inheritance module to output inheritance feature data;
and a cross-modal model 204, configured to input the target domain data into the cross-modal model, and generate a segmented medical image based on the distance information and the inheritance feature data.
In some embodiments, the domain adaptation module 202 is further configured to:
distributing a preset label to the target domain data;
and generating a target domain data score based on the preset label and the target domain data, and selecting the target domain data.
The division of the modules or units mentioned in the above detailed description is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit in accordance with embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special-purpose hardware-based systems which perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions. The above description is only illustrative of the preferred embodiments of the present application and of the principles of the technology employed. It will be appreciated by persons skilled in the art that the scope of the disclosure referred to in the present application is not limited to the specific combinations of the technical features described above, but also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the spirit of the disclosure, for example, solutions in which the above features are interchanged with (but not limited to) technical features having similar functions disclosed in the present application.
Referring further to fig. 5, a schematic diagram of a medical image segmentation apparatus 300 according to an embodiment of the present application is shown.
The main execution body of the medical image segmentation method in this embodiment is a medical image segmentation apparatus, and the medical image segmentation apparatus may be implemented in software and/or hardware.
The electronic device in this embodiment may include, but is not limited to, a personal computer, a tablet computer, a smart phone, and the like, and the embodiment does not particularly limit the electronic device.
The medical image segmentation apparatus 300 according to the present embodiment includes a processor and a memory, the processor and the memory being connected to each other, wherein the memory is configured to store a computer program, the computer program including program instructions, the processor being configured to invoke the program instructions to perform the method according to any of the preceding claims.
In an embodiment of the present application, the processor is a processing device that performs logic operations and has data processing capability and/or program execution capability, such as a central processing unit (CPU), a field-programmable gate array (FPGA), a digital signal processor (DSP), a microcontroller unit (MCU), an application-specific integrated circuit (ASIC), or a graphics processing unit (GPU). It will be readily appreciated that the processor is typically communicatively coupled to a memory, on which is stored any combination of one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. Volatile memory can include, for example, random access memory (RAM) and/or cache memory (cache). Non-volatile memory may include, for example, read-only memory (ROM), hard disk, erasable programmable read-only memory (EPROM), USB memory, flash memory, and the like. One or more computer instructions may be stored on the memory and executed by the processor to perform the relevant analysis functions. Various applications and various data, such as data used and/or generated by the applications, may also be stored in the computer-readable storage medium.
In the embodiment of the application, each module can be realized by executing related computer instructions by a processor, for example, the acquisition module can be realized by executing acquired instructions by the processor, the input module can be realized by executing instructions of a rule model by the processor, and the neural network can be realized by executing instructions of a neural network algorithm by the processor.
In the embodiment of the application, each module can run on the same processor or can run on a plurality of processors; the modules may be run on processors of the same architecture, e.g., all on processors of the X86 system, or on processors of different architectures, e.g., the image processing module runs on the CPU of the X86 system and the machine learning module runs on the GPU. The modules may be packaged in one computer product, for example, the modules are packaged in one computer software and run in one computer (server), or may be packaged separately or partially in different computer products, for example, the image processing modules are packaged in one computer software and run in one computer (server), and the machine learning modules are packaged separately in separate computer software and run in another computer (server); the computing platform when each module executes may be local computing, cloud computing, or hybrid computing composed of local computing and cloud computing.
The computer system includes a Central Processing Unit (CPU) 301 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 302 or a program loaded from a storage section 308 into a Random Access Memory (RAM) 303. The RAM 303 also stores various programs and data required for the operation of the system. The CPU 301, ROM 302, and RAM 303 are connected to each other through a bus 304. An input/output (I/O) interface 305 is also connected to the bus 304.
The following components are connected to the I/O interface 305: an input section 306 including a keyboard, a mouse, and the like; an output section 307 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage section 308 including a hard disk or the like; and a communication section 309 including a network interface card such as a LAN card, a modem, or the like. The communication section 309 performs communication processing via a network such as the Internet. A drive 310 is also connected to the I/O interface 305 as needed. A removable medium 311 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory is installed on the drive 310 as needed, so that a computer program read therefrom is installed into the storage section 308 as needed.
In particular, according to an embodiment of the application, the process described above with reference to the flowchart of fig. 1 may be implemented as a computer software program. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication section 309, and/or installed from the removable medium 311. When the computer program is executed by the Central Processing Unit (CPU) 301, the above-described functions defined in the system of the present application are performed.
An embodiment of the application further provides an electronic device with a computer readable storage medium storing a computer program; when executed by a processor, the computer program implements any of the methods described above.
The computer readable medium shown in the present application may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. A computer readable signal medium, by contrast, may include a data signal propagated in baseband or as part of a carrier wave, with the computer readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electromagnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium, other than a computer readable storage medium, that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wire, fiber optic cable, RF, and the like, or any suitable combination of the foregoing.
In one embodiment, a computer program product is provided which, when executed by a processor of an electronic device, causes a medical image segmentation apparatus to perform the steps of: generating image data based on pre-acquired medical image data and clinical medical background information, wherein the image data comprises source domain data or target domain data; constructing a domain adaptation module, assigning a preset label to the target domain data, and generating, based on the domain adaptation module, source domain data, target domain data, and distance information between the source domain data and the target domain data, so as to improve data distribution consistency; constructing an attention inheritance module, and inputting the source domain data into the attention inheritance module to output inherited feature data; and constructing a cross-modal model, inputting the target domain data into the cross-modal model, and generating a segmented medical image based on the distance information and the inherited feature data.
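The embodiments do not disclose the concrete form of the distance information or of the preset-label assignment. As one illustrative reading only, the domain gap could be measured by a mean-discrepancy-style statistic over feature vectors, and preset labels could be taken from nearest source-class prototypes. The plain-NumPy sketch below shows that reading; every function and variable name is hypothetical and nothing here should be taken as the patented implementation:

```python
import numpy as np

def domain_distance(source_feats: np.ndarray, target_feats: np.ndarray) -> float:
    """Squared distance between the mean feature vectors of the two domains
    (a linear-kernel, MMD-style discrepancy; an assumption, not the patent's metric)."""
    diff = source_feats.mean(axis=0) - target_feats.mean(axis=0)
    return float(diff @ diff)

def assign_preset_labels(target_feats: np.ndarray, prototypes: np.ndarray) -> np.ndarray:
    """Assign each target sample the label of its nearest source-class prototype.

    prototypes: (n_classes, n_features) class-mean feature vectors from the source domain.
    """
    # Squared Euclidean distance from every target sample to every prototype.
    d = ((target_feats[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    return d.argmin(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    src = rng.normal(0.0, 1.0, size=(64, 8))   # source-domain features
    tgt = rng.normal(0.5, 1.0, size=(64, 8))   # shifted target-domain features
    protos = np.stack([src[:32].mean(axis=0), src[32:].mean(axis=0)])
    print(domain_distance(src, tgt) > domain_distance(src, src))  # True: domains differ
    print(assign_preset_labels(tgt, protos).shape)
```

In such a reading, the distance value would be minimized during training to pull the two feature distributions together, which is one standard way to "improve data distribution consistency".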
It is to be understood that the terms "length," "width," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like are based on the orientation or positional relationship shown in the drawings and are used merely for convenience and simplicity of description; they do not indicate or imply that the devices or elements referred to must have a particular orientation or be constructed and operated in a particular orientation, and thus are not to be construed as limiting the application.
Furthermore, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more such features. In the description of the present application, "a plurality" means two or more, unless explicitly defined otherwise.
Unless defined otherwise, technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application pertains. The terminology used herein is for the purpose of describing particular implementations only and is not intended to be limiting of the application. Terms such as "disposed" or the like as used herein may refer to either one element being directly attached to another element or one element being attached to another element through an intermediate member. Features described herein in one embodiment may be applied to another embodiment alone or in combination with other features unless the features are not applicable or otherwise indicated in the other embodiment.
The present application has been described in terms of the above embodiments, but it should be understood that the above embodiments are for purposes of illustration and description only and are not intended to limit the application to the embodiments described. Those skilled in the art will appreciate that many variations and modifications are possible in light of the teachings of the application, which variations and modifications are within the scope of the application as claimed.
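The attention inheritance module is described only at the level of its inputs and outputs. One common way such a module is realized in the literature, borrowed from attention-transfer knowledge distillation and offered here purely as an illustrative assumption, is to compute a normalized spatial attention map from source-branch features and penalize the target branch for deviating from it:

```python
import numpy as np

def attention_map(feats: np.ndarray) -> np.ndarray:
    """Spatial attention map: channel-wise sum of squared activations, L2-normalized.

    feats: array of shape (channels, height, width). Hypothetical layout.
    """
    a = (feats ** 2).sum(axis=0)
    return a / (np.linalg.norm(a) + 1e-8)

def inheritance_loss(source_feats: np.ndarray, target_feats: np.ndarray) -> float:
    """Mean squared difference between source and target attention maps."""
    return float(((attention_map(source_feats) - attention_map(target_feats)) ** 2).mean())

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    f_src = rng.normal(size=(16, 8, 8))
    # A target branch with identical features inherits the attention exactly.
    print(inheritance_loss(f_src, f_src))  # 0.0
```

Minimizing such a loss lets the target (cross-modal) branch inherit where the source branch attends, without copying the source features themselves; again, this is one plausible sketch, not the disclosed design.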

Claims (10)

1. A medical image segmentation method, comprising:
generating image data based on pre-acquired medical image data and clinical medical background information, wherein the image data comprises source domain data or target domain data;
constructing a domain adaptation module, assigning a preset label to the target domain data, and generating, based on the domain adaptation module, source domain data, target domain data, and distance information between the source domain data and the target domain data, so as to improve data distribution consistency;
constructing an attention inheritance module, and inputting the source domain data into the attention inheritance module to output inherited feature data;
and constructing a cross-modal model, inputting the target domain data into the cross-modal model, and generating a segmented medical image based on the distance information and the inherited feature data.
2. The medical image segmentation method according to claim 1, wherein constructing a domain adaptation module, assigning a preset label to the target domain data, and generating source domain data, target domain data, and distance information between the source domain data and the target domain data based on the domain adaptation module, so as to improve data distribution consistency, further comprises:
assigning a preset label to the target domain data;
and generating a target domain data score based on the preset label and the target domain data, and selecting the target domain data.
3. The medical image segmentation method according to claim 1, wherein constructing a cross-modal model, inputting the target domain data into the cross-modal model, and generating a segmented medical image based on the distance information and the inherited feature data further comprises:
generating loss function information;
and generating the segmented medical image based on the loss function information, the distance information, and the inherited feature data.
4. The medical image segmentation method according to claim 1, wherein generating image data based on pre-acquired medical image data and clinical medical background information, the image data comprising source domain data or target domain data, further comprises:
preprocessing the medical image information, wherein the preprocessing comprises one or more of cropping, rotation, deformation, scaling, and noise reduction.
5. The medical image segmentation method according to claim 1, wherein generating image data based on pre-acquired medical image data and clinical medical background information, the image data comprising source domain data or target domain data, further comprises:
and enhancing the medical image information.
6. A medical image segmentation system, comprising:
the acquisition module is used for generating image data based on pre-acquired medical image data and clinical medical background information, wherein the image data comprises source domain data or target domain data;
the domain adaptation module is used for assigning a preset label to the target domain data and generating source domain data, target domain data, and distance information between the source domain data and the target domain data, so as to improve data distribution consistency;
the attention inheritance module is used for receiving the source domain data as input and outputting inherited feature data;
and the cross-modal model is used for receiving the target domain data as input and generating the segmented medical image based on the distance information and the inherited feature data.
7. The medical image segmentation system as set forth in claim 6, wherein the domain adaptation module is further configured to:
assigning a preset label to the target domain data;
and generating a target domain data score based on the preset label and the target domain data, and selecting the target domain data.
8. A medical image segmentation apparatus, comprising a processor and a memory, wherein the memory stores at least one instruction, at least one program, a set of codes, or a set of instructions, and the at least one instruction, the at least one program, the set of codes, or the set of instructions is loaded and executed by the processor to implement the medical image segmentation method according to any one of claims 1-5.
9. A non-transitory computer readable storage medium, characterized in that instructions in the storage medium, when executed by a processor of a mobile terminal, enable the mobile terminal to perform the medical image segmentation method according to any one of claims 1-5.
10. A computer program product, characterized in that instructions in the computer program product, when executed by a processor of a mobile terminal, enable the mobile terminal to perform the medical image segmentation method according to any one of claims 1-5.
CN202310752469.XA 2023-06-25 2023-06-25 Medical image segmentation method, system, device, storage medium and product Active CN116758286B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310752469.XA CN116758286B (en) 2023-06-25 2023-06-25 Medical image segmentation method, system, device, storage medium and product


Publications (2)

Publication Number Publication Date
CN116758286A true CN116758286A (en) 2023-09-15
CN116758286B CN116758286B (en) 2024-02-06

Family

ID=87958708

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310752469.XA Active CN116758286B (en) 2023-06-25 2023-06-25 Medical image segmentation method, system, device, storage medium and product

Country Status (1)

Country Link
CN (1) CN116758286B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110322446A (en) * 2019-07-01 2019-10-11 华中科技大学 A kind of domain adaptive semantic dividing method based on similarity space alignment
AU2020103905A4 (en) * 2020-12-04 2021-02-11 Chongqing Normal University Unsupervised cross-domain self-adaptive medical image segmentation method based on deep adversarial learning
CN112434754A (en) * 2020-12-14 2021-03-02 前线智能科技(南京)有限公司 Cross-modal medical image domain adaptive classification method based on graph neural network
CN112784879A (en) * 2020-12-31 2021-05-11 前线智能科技(南京)有限公司 Medical image segmentation or classification method based on small sample domain self-adaption
CN113205528A (en) * 2021-04-02 2021-08-03 上海慧虎信息科技有限公司 Medical image segmentation model training method, segmentation method and device
CN113222985A (en) * 2021-06-04 2021-08-06 中国人民解放军总医院 Image processing method, image processing device, computer equipment and medium
US20210312674A1 (en) * 2020-04-02 2021-10-07 GE Precision Healthcare LLC Domain adaptation using post-processing model correction
CN114048474A (en) * 2021-11-05 2022-02-15 中南大学 Group intelligence-based image recognition backdoor defense method, device and medium
CN114723950A (en) * 2022-01-25 2022-07-08 南京大学 Cross-modal medical image segmentation method based on symmetric adaptive network
WO2023065070A1 (en) * 2021-10-18 2023-04-27 中国科学院深圳先进技术研究院 Multi-domain medical image segmentation method based on domain adaptation

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
MASOOMEH RAHIMPOUR; JEROEN BERTELS: "Cross-Modal Distillation to Improve MRI-Based Brain Tumor Segmentation With Missing MRI Sequences", IEEE Transactions on Biomedical Engineering *
YANG Changchun; YE Zanting; LIU Banteng; WANG Ke; CUI Haidong: "Medical image segmentation method based on multi-source information fusion", Journal of Zhejiang University (Engineering Science) *
JIA Yingxia; LANG Congyan; FENG Songhe: "Category-related domain adaptive semantic segmentation method for traffic images", Journal of Computer Research and Development, no. 04 *


Similar Documents

Publication Publication Date Title
EP3511942B1 (en) Cross-domain image analysis using deep image-to-image networks and adversarial networks
Mali et al. Making radiomics more reproducible across scanner and imaging protocol variations: a review of harmonization methods
US11568533B2 (en) Automated classification and taxonomy of 3D teeth data using deep learning methods
CN107665736B (en) Method and apparatus for generating information
CN112102321A (en) Focal image segmentation method and system based on deep convolutional neural network
CN110517254B (en) Deep learning-based automatic clinical target area delineation method and device and related equipment
CN113256592B (en) Training method, system and device of image feature extraction model
Yasutomi et al. Shadow estimation for ultrasound images using auto-encoding structures and synthetic shadows
CN112767505B (en) Image processing method, training device, electronic terminal and storage medium
CN116664713B (en) Training method of ultrasound contrast image generation model and image generation method
EP3973508A1 (en) Sampling latent variables to generate multiple segmentations of an image
CN114897756A (en) Model training method, medical image fusion method, device, equipment and medium
CN113658175A (en) Method and device for determining symptom data
Gheorghiță et al. Improving robustness of automatic cardiac function quantification from cine magnetic resonance imaging using synthetic image data
Vaiyapuri et al. Design of Metaheuristic Optimization‐Based Vascular Segmentation Techniques for Photoacoustic Images
CN116681790B (en) Training method of ultrasound contrast image generation model and image generation method
CN113850796A (en) Lung disease identification method and device based on CT data, medium and electronic equipment
CN111507950B (en) Image segmentation method and device, electronic equipment and computer-readable storage medium
CN116758286B (en) Medical image segmentation method, system, device, storage medium and product
WO2020078252A1 (en) Method, apparatus and system for automatic diagnosis
CN115239655A (en) Thyroid ultrasonic image tumor segmentation and classification method and device
Toosi et al. State-of-the-art object detection algorithms for small lesion detection in PSMA PET: use of rotational maximum intensity projection (MIP) images
Dinh et al. Medical image fusion based on transfer learning techniques and coupled neural P systems
CN113052930A (en) Chest DR dual-energy digital subtraction image generation method
Wemmert et al. Deep learning for histopathological image analysis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant