CN116205289B - Animal organ segmentation model training method, segmentation method and related products - Google Patents


Info

Publication number
CN116205289B
Authority
CN
China
Prior art keywords
animal
animal organ
segmentation model
module
training
Prior art date
Legal status
Active
Application number
CN202310491998.9A
Other languages
Chinese (zh)
Other versions
CN116205289A (en)
Inventor
张雨萌
池琛
贾泽涵
杨晶晶
周凡渝
祁霞
罗富良
黄乾富
Current Assignee
Hygea Medical Technology Co Ltd
Original Assignee
Hygea Medical Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hygea Medical Technology Co Ltd
Priority to CN202310491998.9A
Publication of CN116205289A
Application granted
Publication of CN116205289B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30204 Marker
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an animal organ segmentation model training method, a segmentation method, and related products. The training method comprises the following steps: training an initial animal organ segmentation model using a transfer learning method based on a pre-trained human organ segmentation model; acquiring a second animal medical image with unlabeled animal organs, and obtaining animal organ pre-labeling data for the second animal medical image based on the initial animal organ segmentation model; and training the initial animal organ segmentation model based on the first animal medical image, the second animal medical image, and the corresponding animal organ labeling data to obtain an optimized animal organ segmentation model, wherein the animal organ labeling data corresponding to the second animal medical image are obtained by correcting the animal organ pre-labeling data. This not only realizes transfer-learning-based training of both the animal organ segmentation model and its data, but also solves the problems that, when the amount of data is insufficient, parameters migrated from a human organ segmentation model into an animal organ segmentation model are difficult to adapt and the resulting segmentation is inaccurate.

Description

Animal organ segmentation model training method, segmentation method and related products
Technical Field
The invention relates to the technical field of medical image segmentation, in particular to an animal organ segmentation model training method, a segmentation method and related products.
Background
This section is intended to provide a background or context for the embodiments recited in the claims. The description herein is not admitted to be prior art by inclusion in this section.
With the continuous improvement of living standards, keeping pets has become an indispensable part of many people's lives. However, pets also fall ill, for example with tumors, fractures, and pulmonary infections; in addition, a large number of animal experiments require preoperative planning. Animal CT reconstruction techniques are therefore becoming particularly important.
Because animal organ annotation data are very limited compared with the enormous annotation data available for human organs, an animal organ segmentation model cannot be trained with the traditional supervised training method.
Disclosure of Invention
The invention provides an animal organ segmentation model training method, a segmentation method and related products.
In a first aspect, an embodiment of the present invention provides a method for training an animal organ segmentation model, including:
acquiring a first animal medical image with labeled animal organs and corresponding animal organ labeling data, and training an initial animal organ segmentation model using a transfer learning method based on the first animal medical image, the corresponding animal organ labeling data, and a pre-trained human organ segmentation model;
acquiring a second animal medical image with unlabeled animal organs, and obtaining animal organ pre-labeling data for the second animal medical image based on the initial animal organ segmentation model;
training the initial animal organ segmentation model based on the first animal medical image with its corresponding animal organ labeling data and the second animal medical image with its corresponding animal organ labeling data, to obtain an optimized animal organ segmentation model, wherein the animal organ labeling data corresponding to the second animal medical image are obtained by correcting the animal organ pre-labeling data.
In some implementations, the animal organ segmentation model training method further comprises: and performing fine tuning training on the optimized animal organ segmentation model by using a reinforcement learning method.
In some implementations, the human organ segmentation model includes an EfficientNet neural network comprising a first convolution module, an EfficientNet module, and a first segmentation head connected in sequence; the initial animal organ segmentation model comprises a first convolution module, an EfficientNet module, an up-sampling layer, a second convolution module, a lightweight convolutional neural network module, and a second segmentation head connected in sequence, wherein the first convolution module and the EfficientNet module are migrated from the EfficientNet neural network.
In some implementations, the lightweight convolutional neural network module includes an scSE-MobileNetV3 module, which is obtained by replacing all cSE attention mechanism modules in the MobileNetV3 module with scSE attention mechanism modules.
In some implementations, training the initial animal organ segmentation model using a transfer learning method based on the first animal medical image, the corresponding animal organ labeling data, and the pre-trained human organ segmentation model includes performing a preset training process on the initial animal organ segmentation model for a second preset number of rounds;
the preset training process comprises the following steps:
inputting the first animal medical image and computing a first feature map through the first convolution module, the EfficientNet module, and the up-sampling layer;
inputting the first animal medical image and the corresponding animal organ labeling data and computing a second feature map with the first convolution module;
fusing the first feature map and the second feature map to obtain a third feature map;
inputting the third feature map into the second convolution module to compute a fourth feature map;
inputting the fourth feature map into the lightweight convolutional neural network module to compute a fifth feature map;
and inputting the fifth feature map into the second segmentation head for decoding, to obtain the animal organ labeling data.
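The flow of feature-map sizes through the steps above can be sketched at the shape level (a sketch only: the 1×16×512×512 and 1×32×512×512 sizes are taken from the detailed description, and the function names are illustrative, not from the patent):

```python
# Shape-level sketch of the preset training forward pass.
# Shapes are (batch, channels, height, width) tuples.

def concat_channels(shape_a, shape_b):
    """Fuse two feature maps by concatenating along the channel dimension."""
    assert shape_a[0] == shape_b[0] and shape_a[2:] == shape_b[2:]
    return (shape_a[0], shape_a[1] + shape_b[1]) + shape_a[2:]

def forward_shapes():
    first = (1, 16, 512, 512)   # first conv + EfficientNet module + up-sampling
    second = (1, 16, 512, 512)  # first convolution module only
    third = concat_channels(first, second)  # feature fusion
    fourth = (1, 16, 512, 512)  # after the second convolution module
    return first, second, third, fourth
```

Concatenating two 16-channel maps of equal spatial size yields the 32-channel third feature map described in the embodiment.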
In some implementations, all parameters of the EfficientNet module are fixed during the first N rounds of the training process; during the last M rounds, all parameters of the EfficientNet module are opened so that they participate in feedback regulation together with the parameters of the lightweight convolutional neural network module; the sum of N and M equals the second preset number of rounds.
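The two-stage schedule can be expressed as a small helper (a sketch; the parameter-group names are illustrative, not from the patent):

```python
def params_in_feedback(round_idx, n, m):
    """Parameter groups participating in feedback regulation at a given
    training round (1-indexed). For the first N rounds the migrated
    EfficientNet module stays fixed; during the last M rounds its
    parameters are opened and updated together with the lightweight
    module's parameters."""
    assert 1 <= round_idx <= n + m  # N + M equals the preset number of rounds
    groups = ["upsampling", "second_conv", "lightweight_cnn", "second_head"]
    if round_idx > n:
        groups = ["efficientnet"] + groups
    return groups
```

In a deep-learning framework this would typically correspond to toggling the trainability of the EfficientNet module's weights at round N.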
In some implementations, the loss function used to train both the human organ segmentation model and the initial animal organ segmentation model is the sum of a BCE loss function and a Tversky loss function;
in the Tversky loss function used for training the human organ segmentation model, the parameters α and β are both 0.5; in the Tversky loss function used for training the initial animal organ segmentation model, the parameter α ranges from 0.2 to 0.35 and the parameter β from 0.6 to 0.75.
In some implementations, the fine tuning training of the optimized animal organ segmentation model using reinforcement learning methods includes:
copying the optimized animal organ segmentation model into two copies, a first animal organ segmentation model and a second animal organ segmentation model;
feeding the first animal medical image and the second animal medical image as input data into the first animal organ segmentation model and the second animal organ segmentation model to obtain a first animal organ labeling result and a second animal organ labeling result, respectively;
calculating a first Dice precision value between the first animal organ labeling result and the animal organ labeling data corresponding to the first animal medical image, and a second Dice precision value between the second animal organ labeling result and the animal organ labeling data corresponding to the second animal medical image;
calculating a penalty term based on the first and second Dice precision values;
taking the second Dice precision value as a score, calculating a reinforcement learning reward based on the score and the penalty term, and feeding the reward back to the second animal organ segmentation model for parameter updating;
every training period, copying the most recently updated parameters of the second animal organ segmentation model over the previously updated parameters of the first animal organ segmentation model.
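One step of the two-copy loop can be sketched as follows. This is a sketch under stated assumptions: `penalty_fn` and `update_fn` are placeholders (the patent's exact penalty formula and update rule are not reproduced here), and combining score and penalty by subtraction is an illustrative choice, not the patent's stated formula.

```python
def dice(pred, target):
    """Dice precision between two flat binary masks."""
    inter = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return (2.0 * inter) / total if total else 1.0

def finetune_step(model_1, model_2, image, labels, penalty_fn, update_fn):
    """One step of the two-copy fine-tuning loop: model_1 is the
    periodically synchronized reference copy, model_2 the copy being
    updated with the reinforcement learning reward."""
    d1 = dice(model_1(image), labels)   # first Dice precision value
    d2 = dice(model_2(image), labels)   # second Dice precision value (the score)
    reward = d2 - penalty_fn(d1, d2)    # illustrative score/penalty combination
    update_fn(model_2, reward)          # feed the reward back to model 2
    return reward
```

Every training period, the updated parameters of `model_2` would then be copied over `model_1`, as the last step above describes.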
In some implementations, the penalty term is calculated from the first and second Dice precision values according to a preset formula, in which λ denotes the penalty term, D₁ denotes the first Dice precision value, and D₂ denotes the second Dice precision value.
In some implementations, the fine-tuning training of the optimized animal organ segmentation model using the reinforcement learning method further comprises:
constructing a reinforcement learning experience pool for offline learning, based on the animal organ labeling result generated by the first animal organ segmentation model with its last-updated parameters, the animal organ labeling result generated by the second animal organ segmentation model with its last-updated parameters, and the final reward.
In a second aspect, an embodiment of the present invention provides a method for segmenting an animal organ, including:
obtaining a to-be-segmented animal medical image with unlabeled animal organs;
and performing animal organ segmentation on the to-be-segmented animal medical image using an animal organ segmentation model trained in advance by the animal organ segmentation model training method of the first aspect, to obtain an animal organ segmentation result.
In a third aspect, an embodiment of the present invention provides an animal organ segmentation model training apparatus, including:
a first training module, configured to acquire a first animal medical image with labeled animal organs and corresponding animal organ labeling data, and to train an initial animal organ segmentation model using a transfer learning method based on the first animal medical image, the corresponding animal organ labeling data, and a pre-trained human organ segmentation model;
a pre-labeling module, configured to acquire a second animal medical image with unlabeled animal organs and to obtain animal organ pre-labeling data for the second animal medical image based on the initial animal organ segmentation model;
a second training module, configured to train the initial animal organ segmentation model based on the first animal medical image with its corresponding animal organ labeling data and the second animal medical image with its corresponding animal organ labeling data, to obtain an optimized animal organ segmentation model, wherein the animal organ labeling data corresponding to the second animal medical image are obtained by correcting the animal organ pre-labeling data.
In a fourth aspect, an embodiment of the present invention provides an animal organ segmentation apparatus comprising:
an acquisition module, configured to acquire a to-be-segmented animal medical image with unlabeled animal organs;
a segmentation module, configured to perform animal organ segmentation on the to-be-segmented animal medical image using an animal organ segmentation model trained in advance by the animal organ segmentation model training apparatus of the third aspect, to obtain an animal organ segmentation result.
In a fifth aspect, an embodiment of the present invention provides a computer readable storage medium having stored thereon a computer program which, when executed by at least one processor, implements a method as described in the first or second aspect.
In a sixth aspect, an embodiment of the present invention provides an electronic device, including a memory and at least one processor, the memory having stored thereon a computer program which, when executed by the at least one processor, implements a method according to the first or second aspect.
In a seventh aspect, embodiments of the present invention provide a computer program product which, when run on a processor, performs the method according to the first or second aspect.
One or more embodiments of the present invention can provide at least the following advantages:
A human organ segmentation model is trained, and an initial animal organ segmentation model is then obtained with a transfer learning method; a new second animal medical image with unlabeled animal organs is introduced, animal organ pre-labeling data for it are obtained from the initial animal organ segmentation model, the pre-labeling data are corrected by a doctor, and the initial animal organ segmentation model is trained again on both parts of data to optimize it. This realizes transfer-learning-based training of both the animal organ segmentation model and its data, and solves the problems that the parameters of a human organ segmentation model are difficult to adapt and the segmentation is inaccurate when the amount of data is insufficient.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate certain embodiments of the present invention and therefore should not be considered as limiting the scope.
FIG. 1 is a flowchart of an animal organ segmentation model training method provided by an embodiment of the present invention;
FIG. 2 is a schematic diagram of a human organ segmentation model architecture according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an initial animal organ segmentation model architecture according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a cSE attention mechanism module architecture according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of an architecture of a sSE attention mechanism module provided by an embodiment of the present invention;
FIG. 6 is a schematic diagram of an architecture of a scSE attention mechanism module provided by an embodiment of the present invention;
FIG. 7 is a schematic flow chart of fine tuning training of an optimized animal organ segmentation model using reinforcement learning method according to an embodiment of the present invention;
FIG. 8 is an example of a training process for an animal organ segmentation model provided by an embodiment of the present invention;
FIG. 9 is a flow chart of a method for segmenting an animal organ provided by an embodiment of the invention;
FIG. 10 is a block diagram of an animal organ segmentation model training apparatus according to an embodiment of the present invention;
Fig. 11 is a block diagram of an animal organ segmentation apparatus according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be made by a person skilled in the art without making any inventive effort, are intended to be within the scope of the present invention.
With the integration of digital technology and medicine, deep learning plays an increasingly important role in processing medical images. Deep learning requires annotated data sets, and the higher their quality and the larger their quantity, the better the model performs. A representative example in the CV (Computer Vision) field is ImageNet; in the NLP (Natural Language Processing) field, BERT (Bidirectional Encoder Representations from Transformers). In the field of medical image processing, a supervised learning method is often adopted, with the training set built from gold-standard annotations made by doctors. Because the cost of data annotation is very high, various weak-annotation methods (annotating only part of the data, or with labels insufficient to complete the whole task) and annotation-free methods (plain text or plain images) were born, such as contrastive learning and energy-based models, giving rise to a large number of weakly supervised and unsupervised models and methods. A representative example in the CV field is DC-GAN; in the NLP field, GPT-2. However, methods such as contrastive learning still require training on large amounts of data, and the cost of selecting data can even exceed that of supervised learning, so being able to generalize to new tasks without any data becomes all the more important. For this reason, transfer learning (domain adaptation) techniques have developed rapidly in recent years. Transfer learning methods are essentially the same in CV and NLP: there is no special model; instead, various tuning techniques adapt an existing model so that it suits new tasks, domains, modalities, data, or languages.
In the related art, the application of the transfer learning in the medical image processing mainly includes: 1) Tumor detection: by utilizing the transfer learning technology, useful characteristics can be learned from other data sets, so that the accuracy of tumor detection is improved; 2) Pathological image analysis: by utilizing the transfer learning technology, useful features can be learned from other data sets, so that the accuracy of pathological image analysis is improved; 3) Brain magnetic resonance imaging: by using the transfer learning technology, useful features can be learned from other data sets, thereby improving the accuracy of brain magnetic resonance imaging.
However, the amount of animal organ labeling data is usually small, so a segmentation model trained on it alone cannot meet the accuracy requirements of segmentation; moreover, with the available animal organ labeling data being insufficient, even directly migrating human organ model parameters into an animal organ segmentation model can leave the model poorly adapted to animal organ segmentation and its output inaccurate.
Example 1
The embodiment provides a training method for an animal organ segmentation model, as shown in fig. 1, at least comprising steps S101 to S103:
step S101, a first animal medical image of the marked animal organ and corresponding animal organ marking data are obtained, and an initial animal organ segmentation model is trained by using a transfer learning method based on the first animal medical image, the corresponding animal organ marking data and a pre-trained human organ segmentation model. Wherein the animal is, for example, a rabbit.
And step S102, acquiring a second animal medical image of the unlabeled animal organ, and acquiring animal organ pre-labeling data of the second animal medical image based on the initial animal segmentation model.
Step S103, training an initial animal organ segmentation model based on the first animal medical image and corresponding animal organ labeling data and the second animal medical image and corresponding animal organ labeling data to obtain an optimized animal organ segmentation model, wherein the animal organ labeling data corresponding to the second animal medical image is corrected based on the animal organ pre-labeling data.
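Steps S101 to S103 can be summarized as a pipeline sketch. Every callable below is a placeholder standing in for an operation the embodiment performs (transfer training, model inference, doctor correction, retraining); none of these names come from the patent itself.

```python
def train_animal_model(labeled_set, unlabeled_images, human_model,
                       transfer_train, predict, doctor_correct, train):
    """S101: transfer-learn an initial model from the human model and the
    small labeled set. S102: pre-label new images with it. S103: retrain
    on the labeled data plus the doctor-corrected pre-labels."""
    initial_model = transfer_train(human_model, labeled_set)              # S101
    pre_labels = [predict(initial_model, im) for im in unlabeled_images]  # S102
    corrected = [doctor_correct(lb) for lb in pre_labels]                 # doctor fixes pre-labels
    second_set = list(zip(unlabeled_images, corrected))
    return train(initial_model, labeled_set + second_set)                 # S103
```

The key point the sketch captures is that the second training pass sees both the originally labeled images and the newly pre-labeled, doctor-corrected ones.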
In some implementations, the human organ segmentation model includes an EfficientNet neural network, for example an EfficientNet-B1 neural network. The EfficientNet neural network adopted by the human organ segmentation model comprises a first convolution module, an EfficientNet module, and a first segmentation head connected in sequence.
Prior to step S101, the method of the present embodiment further includes:
step S100, training a human organ segmentation model.
In this embodiment, an EfficientNet neural network is used as the backbone network for training the human organ segmentation model; the architecture is shown in fig. 2. The training process of the human organ segmentation model can be as follows:
Step S100a, performing zero-mean normalization on the input image.
Step S100b, performing data augmentation on the normalized data.
In some examples, in the data augmentation process the probability of motion blur is 0.04-0.1, the probability of horizontal image flipping is 0.1-0.3, the probability of vertical image flipping is 0.1-0.3, the probability of random translation, scaling, and rotation of the image is 0.2-0.4, the probability of grid distortion, optical distortion, or elastic transformation of the image is 0.1-0.3, and the probability of adding Gaussian noise to the image is 0.05-0.1.
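A minimal sketch of applying such per-transform probabilities: each transform fires independently with a probability drawn from the ranges above. The probability table values and transform names are illustrative placeholders, not the patent's implementation.

```python
import random

# One illustrative value picked from each range given above.
AUG_PROBS = {
    "motion_blur": 0.07,            # range 0.04-0.1
    "horizontal_flip": 0.2,         # range 0.1-0.3
    "vertical_flip": 0.2,           # range 0.1-0.3
    "shift_scale_rotate": 0.3,      # range 0.2-0.4
    "grid_optical_or_elastic": 0.2, # range 0.1-0.3
    "gaussian_noise": 0.07,         # range 0.05-0.1
}

def sample_augmentations(probs=AUG_PROBS, rng=None):
    """Independently decide, per transform, whether it fires on one sample."""
    rng = rng or random.Random()
    return [name for name, p in probs.items() if rng.random() < p]
```

In practice an augmentation library that accepts a per-transform probability parameter would play this role; the sketch only shows the independent-Bernoulli selection logic.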
Step S100c, dividing no fewer than a first preset number of CT images into a training set, a validation set, and a test set according to a preset ratio, training for a first preset number of rounds, and then selecting the final model according to the Dice values the model obtains on the validation set.
In a specific example, the first preset number may be 500 sets, the preset ratio may be 8:1:1, and the first preset number of rounds may be 10: at least 500 sets of CT images are divided into a training set, a validation set, and a test set at a ratio of 8:1:1, training runs for 10 rounds, and the final model is selected according to the Dice results obtained on the validation set.
In some implementations, after training for the first preset number of rounds, if the Dice value is greater than 0.95, the model corresponding to the maximum Dice value is selected as the trained human organ segmentation model; if the Dice value is smaller than 0.95, training can continue for 1-2 more rounds, and if the Dice value is still smaller than 0.95 afterwards, the model corresponding to the maximum Dice value is selected directly. The 0.95 threshold can also be set to other values according to actual requirements; this embodiment does not limit it to a unique value.
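The selection rule above reduces to two small checks (a sketch; function names are illustrative):

```python
def needs_extra_rounds(val_dice_history, threshold=0.95):
    """True if no round reached the threshold, i.e. train 1-2 more rounds."""
    return max(val_dice_history) < threshold

def select_final_model(val_dice_history):
    """Index of the round whose model has the maximum validation Dice;
    per the rule above, this is the final model whether or not the
    threshold was ever reached."""
    return max(range(len(val_dice_history)), key=val_dice_history.__getitem__)
```

Note that whichever branch is taken, the model kept is the argmax over the (possibly extended) Dice history; the threshold only decides whether extra rounds run first.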
In some examples, model training may be performed using an Adam optimizer, with the learning rate set to 10⁻⁴ to 10⁻⁵ and the weight decay set to 10⁻⁴ to 10⁻⁶.
In some implementations, the loss function employed to train the human organ segmentation model is the sum of the BCE loss function and the Tversky loss function;
the loss function is expressed as:
loss = BCE loss + Tversky loss,
where the Tversky loss parameters are α = 0.5 and β = 0.5; when both α and β are 0.5, the Tversky loss corresponds to the Dice loss (Dice loss function).
The BCE loss function $loss_{BCE}$ is calculated as:

$$loss_{BCE} = -\frac{1}{n}\sum_{x}\left[A(x)\log B(x) + \left(1 - A(x)\right)\log\left(1 - B(x)\right)\right]$$

where $n$ denotes the total number of samples, $x$ denotes a sample, the set $A(x)$ denotes the labeling data drawn by doctors, and the set $B(x)$ denotes the labeling data generated by the model.
The Tversky loss function has two parameters, α and β; by adjusting these two parameters, the balance between false positives (FP) and false negatives (FN) can be controlled, which in turn affects the segmentation accuracy for smaller and larger segmentation regions.
The Tversky loss function $loss_{Tversky}$ is calculated as:

$$loss_{Tversky} = 1 - \frac{|A \cap B| + s}{|A \cap B| + \alpha\,|A - B| + \beta\,|B - A| + s}$$

where the set $A$ denotes the labeling data generated by the model, the set $B$ denotes the labeling data drawn by doctors, $|A - B|$ denotes the false positives FP, $|B - A|$ denotes the false negatives FN, and $s$ denotes a smoothing coefficient that keeps the calculation numerically stable.
The Dice loss function $loss_{Dice}$ is calculated as:

$$loss_{Dice} = 1 - \frac{1}{n}\sum_{x}\frac{2\,|A(x) \cap B(x)| + s}{|A(x)| + |B(x)| + s}$$

where $n$ denotes the total number of samples, $x$ denotes a sample, the set $A$ denotes the true labels, i.e. the labeling data drawn by doctors, the set $B$ denotes the predicted labels obtained by segmentation, i.e. the labeling data generated by the model, $A \cap B$ denotes the elements common to $A$ and $B$, $|A|$ denotes the number of elements in $A$, $|B|$ denotes the number of elements in $B$, and $s$ denotes a smoothing coefficient that keeps the calculation numerically stable.
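The combined objective can be sketched in a few lines of pure Python over flat binary masks. This is a minimal sketch, not the patent's implementation: the `smooth` default of 1.0 is an assumption (the description's preferred smoothing value is not reproduced here), and real training would operate on tensors.

```python
import math

def bce_loss(pred, target, eps=1e-7):
    """Binary cross-entropy over flat lists: pred holds per-pixel model
    probabilities, target holds binary doctor labels."""
    total = 0.0
    for p, t in zip(pred, target):
        p = min(max(p, eps), 1.0 - eps)   # clamp to avoid log(0)
        total += t * math.log(p) + (1.0 - t) * math.log(1.0 - p)
    return -total / len(pred)

def tversky_loss(pred, target, alpha=0.5, beta=0.5, smooth=1.0):
    """Tversky loss on flat binary masks. With alpha = beta = 0.5 it
    behaves like the Dice loss, as the description states."""
    tp = sum(p * t for p, t in zip(pred, target))        # |A ∩ B|
    fp = sum(p * (1 - t) for p, t in zip(pred, target))  # |A - B| (FP)
    fn = sum((1 - p) * t for p, t in zip(pred, target))  # |B - A| (FN)
    return 1.0 - (tp + smooth) / (tp + alpha * fp + beta * fn + smooth)

def combined_loss(pred, target, alpha=0.5, beta=0.5):
    """BCE loss + Tversky loss, the training objective described above."""
    return bce_loss(pred, target) + tversky_loss(pred, target, alpha, beta)
```

Raising β relative to α (as in the animal-model setting, α 0.2-0.35 and β 0.6-0.75) weights false negatives more heavily, pushing the model to miss less of the organ.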
Because the number of existing animal medical images with animal organs labeled by doctors is very limited compared with the enormous labeling data for human organs, the traditional supervised training method cannot be used to train the animal organ segmentation model. This embodiment therefore trains an initial animal organ segmentation model from a human organ segmentation model using a transfer learning method, so as to obtain a preliminary animal organ segmentation model.
In step S101, a small number of first animal medical images with animal organs already labeled by doctors and the corresponding animal organ labeling data are obtained, and these data are then used for transfer learning. It should be noted that the animal organ types correspond one-to-one with the human organ types.
When training the initial animal organ segmentation model with the transfer learning method, an initial animal organ segmentation model framework needs to be constructed first. As shown in fig. 3, the initial animal organ segmentation model comprises a first convolution module, an EfficientNet module, an up-sampling layer, a second convolution module, a lightweight convolutional neural network module, and a second segmentation head connected in sequence, wherein the first convolution module and the EfficientNet module are migrated from the EfficientNet neural network.
When the neural network of the initial animal organ segmentation model is constructed, the first segmentation head of the EfficientNet neural network is first removed, and the first convolution module and the EfficientNet module of EfficientNet are retained. An up-sampling layer is added after the EfficientNet module to restore images to a size of 1×16×512×512. After the input first animal medical image passes through the first convolution module, the EfficientNet module and the up-sampling layer in turn, a first feature map is output. Concatenating the first feature map with the second feature map output by the first convolution module along the channel dimension yields a third feature map of size 1×32×512×512, achieving feature fusion and enriching the animal organ features. The third feature map is fed into the second convolution module for calculation, and a fourth feature map of size 1×16×512×512 is obtained. After the lightweight convolutional neural network module is connected to the second convolution module, a second segmentation head is added, and the final output image can then be calculated based on the fourth feature map.
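The channel-dimension fusion described above can be illustrated with NumPy arrays standing in for the feature maps (the module computations themselves are omitted; shapes follow the text):

```python
import numpy as np

# Stand-ins for the feature maps, in N x C x H x W layout.
first_feature = np.zeros((1, 16, 512, 512), dtype=np.float32)   # EfficientNet branch after up-sampling
second_feature = np.zeros((1, 16, 512, 512), dtype=np.float32)  # output of the first convolution module

# Concatenating along the channel axis fuses the two branches.
third_feature = np.concatenate([first_feature, second_feature], axis=1)
print(third_feature.shape)  # (1, 32, 512, 512), matching the text
```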
In some implementations, the lightweight convolutional neural network module may include an scSE MobileNetV3 module. Specifically, the scSE MobileNetV3 module is obtained by replacing all cSE attention mechanism modules in the MobileNetV3 module with scSE attention mechanism modules.
The scSE MobileNetV3 module is thus a modified MobileNetV3 module: only the cSE attention mechanism modules in the MobileNetV3 module need to be replaced by scSE attention mechanism modules. The output of the scSE attention mechanism module is the sum of the outputs of the cSE attention mechanism module and the sSE attention mechanism module. The architectures of the cSE and sSE attention mechanism modules are shown in fig. 4 and fig. 5, respectively, and the architecture of the scSE attention mechanism module is shown in fig. 6.
The cSE attention mechanism module has good immunity to disturbances, such as strong resistance to Gaussian noise, but it converges slowly and does not focus on the region of interest quickly. The sSE attention mechanism focuses on the region of interest quickly, but it is also sensitive to noise and easily disturbed by it, producing scattered, unconcentrated segmentation regions; for example, an image with added Gaussian noise is often not correctly identified by the sSE attention mechanism, which yields scattered points and fails to focus. Replacing the original cSE attention mechanism modules in the MobileNetV3 module with scSE attention mechanism modules therefore improves both the anti-interference capability of the model and its convergence speed, making the model more sensitive to the region of interest. The MobileNetV3 module has few parameters, so with a small amount of data, rapid fitting can easily be achieved through this improvement of the MobileNetV3 module.
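The scSE combination (output = cSE output + sSE output) can be sketched in NumPy; the weight shapes and the ReLU/sigmoid choices follow the common scSE formulation and are assumptions here:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cse(x, w1, w2):
    """Channel squeeze-and-excitation: global average pool -> two FC layers
    -> per-channel gate. x has layout N x C x H x W."""
    squeezed = x.mean(axis=(2, 3))                       # (N, C)
    gate = sigmoid(np.maximum(squeezed @ w1, 0.0) @ w2)  # (N, C)
    return x * gate[:, :, None, None]

def sse(x, w_spatial):
    """Spatial squeeze-and-excitation: a 1x1 'convolution' (a weighted
    channel sum here) -> per-pixel gate. w_spatial has shape (C,)."""
    gate = sigmoid(np.einsum("c,nchw->nhw", w_spatial, x))  # (N, H, W)
    return x * gate[:, None, :, :]

def scse(x, w1, w2, w_spatial):
    """scSE output is the sum of the cSE and sSE outputs, as in the text."""
    return cse(x, w1, w2) + sse(x, w_spatial)
```

With all-zero weights both gates are sigmoid(0) = 0.5, so the scSE output reduces to the input itself, which makes a convenient sanity check.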
Based on the first animal medical image, corresponding animal organ labeling data and a pre-trained human organ segmentation model, training by using a transfer learning method to obtain an initial animal organ segmentation model, wherein the method comprises the following steps:
step S101a, performing zero-equalizing processing on the input image.
Step S101b, performing data enhancement processing on the data after the zero-equalization processing.
In some examples, the probability of motion blur in the data enhancement processing is 0.04-0.1, the probability of horizontal image flipping is 0.1-0.3, the probability of vertical image flipping is 0.1-0.3, the probability of random translation, scaling and rotation of the image is 0.2-0.4, the probability of grid distortion, optical distortion or elastic transformation of the image is 0.1-0.3, and the probability of adding Gaussian noise to the image is 0.05-0.1.
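A minimal sketch of such independent probabilistic augmentation, covering only the flips and Gaussian noise from the list above (probability defaults are taken from the stated ranges; a real pipeline would add the blur and distortion transforms):

```python
import numpy as np

def augment(image, rng, p_hflip=0.2, p_vflip=0.2, p_noise=0.05, noise_std=0.01):
    """Apply each transform independently with its configured probability."""
    if rng.random() < p_hflip:
        image = image[:, ::-1]  # horizontal flip
    if rng.random() < p_vflip:
        image = image[::-1, :]  # vertical flip
    if rng.random() < p_noise:
        image = image + rng.normal(0.0, noise_std, size=image.shape)
    return image
```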
Step S101c, dividing no fewer than a second preset number of CT images into a training set, a validation set and a test set according to a preset ratio.
Step S101d, executing a preset training process on the initial animal organ segmentation model for a second preset number of rounds.
In some examples, the second preset number may be 30 sets, the preset ratio may be 8:1:1, and the second preset number of rounds may be 100.
In some examples, model training may be performed using an Adam optimizer with preset values for the learning rate and the weight decay.
The preset training process comprises the following steps:
based on the input first animal medical image, a first feature map is obtained through calculation by the first convolution module, the EfficientNet module and the up-sampling layer;
calculating a second feature map by using a first convolution module based on the input first animal medical image;
fusing the first feature map and the second feature map to obtain a third feature map;
inputting the third feature map into a second convolution module for calculation to obtain a fourth feature map;
inputting the fourth feature map into a lightweight convolutional neural network module for calculation to obtain a fifth feature map; and
and inputting the fifth characteristic diagram into a second segmentation head for decoding to obtain animal organ labeling data.
The EfficientNet module is a module trained in the human organ segmentation model and has learned the characteristics of human organs. In some implementations, during the second preset number of rounds of initial animal organ segmentation model training, all parameters of the EfficientNet module are fixed in the first N training rounds and are not allowed to participate in feedback regulation; in the later M training rounds, all parameters of the EfficientNet module are opened, allowing them and the parameters of the lightweight convolutional neural network module to participate in feedback regulation at the same time, where the sum of N and M equals the second preset number of rounds.
Taking 100 rounds of training as an example, in the first 1-50 rounds (N=50), all parameters of the EfficientNet module are fixed and not allowed to participate in feedback regulation. In rounds 51-100 (M=50), all parameters of the EfficientNet module are opened, allowing them to be adjusted simultaneously with the parameters of the MobileNetV3 module.
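This staged freezing can be sketched as a per-round trainability flag applied to a parameter table (the names and the dict-based parameter representation are illustrative):

```python
def apply_freeze_schedule(params, round_index, n_frozen_rounds=50):
    """Freeze EfficientNet parameters for the first n_frozen_rounds rounds
    (1-based); afterwards all parameters participate in feedback regulation."""
    efficientnet_trainable = round_index > n_frozen_rounds
    for name, param in params.items():
        if name.startswith("efficientnet."):
            param["trainable"] = efficientnet_trainable
        else:
            param["trainable"] = True  # e.g. the MobileNetV3 branch
    return params
```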
During rounds 51-100, the trained initial animal organ segmentation model is determined according to the Dice value the model obtains on the validation set; specifically, the model with the highest Dice value is selected as the initial animal organ segmentation model.
Embedding the small MobileNetV3 model greatly improves the convergence rate of the whole model and improves the effect of migrating the human organ segmentation model's parameters into the animal organ segmentation model.
In some implementations, the loss function employed to train the initial animal organ segmentation model is a sum of a BCE loss function and a Tversky loss function;
the expression of the loss function is:
BCE loss + Tversky loss,
where the parameter α of the Tversky loss ranges from 0.2 to 0.35 and β ranges from 0.6 to 0.75; in one example, α=0.3 and β=0.7. This setting of parameters α and β increases the weight of segmentation sensitivity, which benefits the segmentation of small regions (organs) such as animal organs and bones.
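A NumPy sketch of this combined loss on soft predictions (the smoothing term is an assumption; α=0.3 and β=0.7 follow the example in the text):

```python
import numpy as np

def bce_loss(pred, target, eps=1e-7):
    """Binary cross-entropy on probabilities in (0, 1)."""
    pred = np.clip(pred, eps, 1.0 - eps)
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

def tversky_loss(pred, target, alpha=0.3, beta=0.7, smooth=1e-5):
    """Tversky loss: beta > alpha weights false negatives more heavily,
    raising sensitivity for small regions such as organs and bones."""
    tp = np.sum(pred * target)
    fp = np.sum(pred * (1 - target))
    fn = np.sum((1 - pred) * target)
    return float(1.0 - (tp + smooth) / (tp + alpha * fp + beta * fn + smooth))

def total_loss(pred, target):
    """Sum of BCE loss and Tversky loss, as in the text."""
    return bce_loss(pred, target) + tversky_loss(pred, target)
```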
After the human organ segmentation model is migrated to the initial animal organ segmentation model, the resulting segmentation effect is still not ideal. By continuously obtaining new animal medical image data, using the initial animal organ model for pre-labeling, and training again after the doctor modifies the pre-labeling results, an optimized animal organ segmentation model is obtained and the segmentation effect is effectively improved.
In this embodiment, second animal medical images of unlabeled animal organs may be introduced to continue training the animal organ segmentation model. Specifically, the initial animal organ segmentation model is used to calculate animal organ pre-labeling data for the second animal medical image, and the doctor then modifies the pre-labeling data. Based on the first animal medical image and its corresponding animal organ labeling data, together with the second animal medical image and its corresponding animal organ labeling data, the initial animal organ segmentation model obtained by transfer learning is trained again to obtain an optimized animal organ segmentation model. This effectively supplements the data required for training: the animal organ labeling data corresponding to the second animal medical image is obtained after the doctor corrects the animal organ pre-labeling data, and combining the doctor-corrected data with the original first animal medical images and their labeling data for retraining helps improve the segmentation effect of the animal organ segmentation model.
The optimized animal organ segmentation model can serve as the initial animal organ segmentation model that continues to provide pre-labeling data for doctors in subsequent model training; it can be fine-tuned with a reinforcement learning method in subsequent steps to form the final animal organ segmentation model; or it can be deployed directly as the final animal organ segmentation model to segment animal organs in unlabeled animal medical images.
The model architecture of the optimized animal organ segmentation model is unchanged compared to the initial animal organ segmentation model, as shown in fig. 3. In some implementations, training of the optimized animal organ segmentation model includes:
step S103a, performing zero-equalizing processing on the input image.
Step S103b, performing data enhancement processing on the data after the zero-equalization processing.
The probability of motion blur in the data enhancement processing is 0.04-0.1, the probability of horizontal image flipping is 0.1-0.3, the probability of vertical image flipping is 0.1-0.3, the probability of random translation, scaling and rotation of the image is 0.2-0.4, the probability of grid distortion, optical distortion or elastic transformation of the image is 0.1-0.3, and the probability of adding Gaussian noise to the image is 0.05-0.1.
Model training can again be performed using an Adam optimizer with preset values for the learning rate and the weight decay.
Likewise, the loss function is: BCE loss + Tversky loss,
wherein, the value range of the parameter α of Tversky loss is 0.2-0.35, the value range of β is 0.6-0.75, in one example, α=0.3, and β=0.7.
The expressions of the loss function BCE loss and the loss function Tversky loss are the same as before.
Step S103c, dividing the first animal medical image and the corresponding animal organ labeling data and the second animal medical image and the corresponding (doctor corrected) animal organ labeling data into a training set, a verification set and a test set.
Let m be the number of raw data sets in the database (first animal medical images and corresponding animal organ labeling data), with mtr in the training set, mv in the validation set and mts in the test set, and let n be the number of newly introduced data sets (second animal medical images and corresponding animal organ labeling data). All data when retraining (optimizing) the initial animal organ segmentation model thus comprise m+n sets of CT images, divided into a training set of mtr + (n × 0.8) sets, a validation set of mv + (n × 0.1) sets, and a test set of mts + (n × 0.1) sets.
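The resulting set sizes can be sketched as follows (integer truncation of the n·0.8 and n·0.1 shares is an assumption):

```python
def split_sizes(mtr, mv, mts, n):
    """Append the n newly introduced sets, divided 8:1:1, to the existing
    training/validation/test counts."""
    return (mtr + int(n * 0.8), mv + int(n * 0.1), mts + int(n * 0.1))
```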
Training the optimized animal organ segmentation model requires executing the aforementioned preset training process for a third preset number of rounds; for the specifics of the preset training process, refer to the foregoing, which is not repeated here. It should be understood that this round of training takes as input both the newly introduced second animal medical images, whose labels have been revised by the doctor, and the existing first animal medical images.
In one example, the third preset number of rounds is 30 rounds.
Similarly, in the third preset number of rounds of re-optimization training of the initial animal organ segmentation model, all parameters of the EfficientNet module are fixed in the first N' training rounds and are not allowed to participate in feedback regulation; in the later M' training rounds, all parameters of the EfficientNet module are opened, allowing them and the parameters of the lightweight convolutional neural network module to participate in feedback regulation at the same time, where the sum of N' and M' equals the third preset number of rounds.
Taking 30 rounds of training as an example, in the first 1-20 rounds (N'=20), all parameters of the EfficientNet module are fixed and not allowed to participate in feedback regulation. In rounds 21-30 (M'=10), all parameters of the EfficientNet module are opened, allowing them to be adjusted simultaneously with the parameters of the MobileNetV3 module.
During rounds 21-30, the trained initial animal organ segmentation model is determined according to the Dice value the model obtains on the validation set. When the Dice value is greater than 0.95, the model corresponding to the largest Dice value is selected as the optimized animal organ model; when the Dice value is smaller than 0.95, training can continue for another 5-10 rounds; if the Dice value is still smaller than 0.95, the model corresponding to the largest Dice value is selected directly. In some cases the judgment threshold may also be 0.9 rather than 0.95; the threshold can be set according to the accuracy requirements of the actual model, and this embodiment does not limit it to a unique value.
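The selection rule can be sketched as follows (the interface, a mapping from training round to validation Dice, is illustrative):

```python
def select_model(dice_by_round, threshold=0.95):
    """Pick the round with the highest validation Dice. The flag reports
    whether the threshold was missed, i.e. whether 5-10 extra rounds of
    training would be requested before falling back to the best round."""
    best_round = max(dice_by_round, key=dice_by_round.get)
    needs_more_training = dice_by_round[best_round] < threshold
    return best_round, needs_more_training
```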
In practical application, if the segmentation effect of the current optimized model cannot meet requirements, the optimized animal organ segmentation model can be used as the initial animal organ segmentation model and new data introduced for a new round of training. The new data can be pre-labeling data obtained from new second animal medical images of unlabeled animal organs using the initial animal organ segmentation model, corrected by a doctor, and then collated with the first animal medical images and the corresponding animal organ labeling data.
In some cases, relying only on a limited amount of data, the model may not learn enough features for the segmentation task; therefore, the optimized animal organ segmentation model can be further fine-tuned by means of a reinforcement learning method.
In some implementations, the animal organ segmentation model training method of the present embodiment further includes:
and step S104, performing fine tuning training on the optimized animal organ segmentation model by using a reinforcement learning method.
In some implementations, step S104 may further include:
step S104a, copying the optimized animal organ segmentation model into two parts, namely a first animal organ segmentation model and a second animal organ segmentation model;
step S104b, taking the first animal medical image and the second animal medical image as input data, and inputting a first animal organ segmentation model and a second animal organ segmentation model to respectively obtain a first animal organ labeling result and a second animal organ labeling result;
step S104c, respectively calculating a first Dice precision value between the first animal organ labeling result and the animal organ labeling data corresponding to the first animal medical image, and a second Dice precision value between the second animal organ labeling result and the animal organ labeling data corresponding to the second animal medical image;
Step S104d, calculating a penalty term based on the first and second Dice precision values.
In some implementations, the penalty term is calculated from the difference between the two Dice precision values:

λ = |D1 − D2|,

where λ represents the penalty term, D1 represents the first Dice precision value, and D2 represents the second Dice precision value. In practical applications, the expression of the Dice coefficient (DSC) may be as follows:

DSC = (1/N) · Σᵢ (2|Aᵢ ∩ Bᵢ| + ε) / (|Aᵢ| + |Bᵢ| + ε),

where N represents the total number of samples; i indexes the samples; set A is the labeling result drawn by the doctor; set B is the labeling result generated by the model; A∩B represents the common elements between sets A and B; |A| represents the number of elements in A and |B| the number of elements in B; and ε represents a smoothing coefficient, preferably a small positive value, which makes the calculated Dice value more accurate.
Step S104e, using the second Dice precision value as the score s, calculating the reinforcement-learning reward based on the score and the penalty term, and feeding the reward back to the second animal organ segmentation model for parameter updating. The reward r calculated from the score and the penalty term may take the form:

r = s − λ.
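Since the formulas here are not fully legible in this copy, the following sketch assumes the penalty is the absolute difference of the two Dice precision values and the reward is the score minus the penalty:

```python
def penalty_term(dice_a, dice_b):
    """Assumed penalty: absolute difference between the two models' Dice
    precision values (the text says the penalty compares their outputs)."""
    return abs(dice_a - dice_b)

def reinforcement_reward(score, lam):
    """Assumed reward: the score (second Dice precision) minus the penalty."""
    return score - lam

lam = penalty_term(0.90, 0.88)
r = reinforcement_reward(0.88, lam)
```

A large gap between the two models thus shrinks the reward, discouraging violent parameter updates.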
step S104f, after every certain period of training, copying the parameters of the most recently updated second animal organ segmentation model to overwrite the parameters of the last-updated first animal organ segmentation model.
By adopting this asynchronous update strategy over two identical models, the training speed can be increased, the speed and direction of model optimization can be controlled, and each stage can be adjusted stably. The penalty term λ is the means of controlling the update amplitude: by comparing the difference between the output results of the two models, the update amplitude is evaluated and thereby controlled, ensuring that updates are not violent and improving training stability. Through continuous reward feedback, the model can learn richer features than under supervised learning; these features are the result of the model's own exploration, which improves labeling diversity when data is insufficient and lets the model achieve a better effect.
In some implementations, fine tuning training of the optimized animal organ segmentation model using reinforcement learning methods may further include:
and constructing a reinforcement learning experience pool for offline learning based on the animal organ labeling result generated by the first animal organ segmentation model of the last updated parameter, the animal organ labeling result generated by the second animal organ segmentation model of the latest updated parameter and the final rewards.
In this embodiment, a reinforcement learning experience pool (containing these three kinds of data) is constructed from the data labels generated by the first animal organ segmentation model with the last-updated parameters, the data labels generated by the second animal organ segmentation model with the most recently updated parameters, and the final rewards, and is used for offline learning. Replaying these experiences helps improve sample utilization and increases training stability; it reduces the correlation among samples, avoids biased model learning, and reduces model forgetting; it allows the model to learn offline, avoiding sample waste during online training; and it supports multi-task learning, improving the generalization capability of the model.
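A minimal experience-pool sketch holding the three kinds of data named above (the capacity and sampling interface are illustrative):

```python
import random
from collections import deque

class ExperiencePool:
    """Replay buffer of (label from model A, label from model B, final reward)
    triples, sampled randomly for offline learning to break sample correlation."""

    def __init__(self, capacity=1000):
        self.buffer = deque(maxlen=capacity)  # oldest triples fall off automatically

    def add(self, label_a, label_b, reward):
        self.buffer.append((label_a, label_b, reward))

    def sample(self, batch_size):
        return random.sample(list(self.buffer), min(batch_size, len(self.buffer)))
```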
In practical applications, the reinforcement learning method may employ a PPO (Proximal Policy Optimization) architecture. The finally formed animal organ segmentation model can be finely adjusted through reinforcement learning and can be deployed as a final model.
In the model fine-tuning process, as shown in fig. 7, the optimized animal organ model is first duplicated into two models, a last-updated model (A) and a latest-updated model (B), where updating refers to updating of model parameters. The first animal medical images with their corresponding labeling data and the second animal medical images with their corresponding labeling data are used as input data and fed into the last-updated model (A) and the latest-updated model (B), each of which generates its own labeling data. The penalty term λ is calculated from the labeling data generated by the two models and the corresponding doctor-drawn animal organ labeling data. The Dice precision of the labeling data of the latest-updated model (B), computed against the doctor-drawn labeling data, serves as the score for the reward ultimately fed back to the reinforcement learning process. The parameters of the last-updated model (A) are kept fixed, and only the parameters of the latest-updated model (B) are updated. After training for a certain period, the parameters of the latest-updated model (B) are copied to overwrite the parameters of the last-updated model (A), updating that model; the period of this update depends on the specific training situation. After the parameters of model (A) have been overwritten, they are again kept unchanged while the parameters of the latest-updated model (B) continue to be trained and adjusted, further optimizing and fine-tuning the model.
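The asynchronous update loop described above can be sketched with plain parameter dictionaries (the update function and sync period are illustrative stand-ins for real optimizer steps):

```python
def sync_models(params_a, params_b):
    """Overwrite model A's parameters with model B's latest parameters."""
    params_a.clear()
    params_a.update(params_b)

def fine_tune(params_a, params_b, num_steps, sync_period, update_b):
    """Only model B is updated each step; every sync_period steps its
    parameters are copied over model A, which is otherwise kept fixed."""
    for step in range(1, num_steps + 1):
        update_b(params_b)
        if step % sync_period == 0:
            sync_models(params_a, params_b)
    return params_a, params_b
```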
Fig. 8 shows an example of a training procedure for the animal organ segmentation model. First, the human organ segmentation model is transferred into an initial animal organ segmentation model using a small amount of image data with labeled animal organs. Unlabeled image data is then introduced, and pre-labeling results are calculated with the initial animal organ segmentation model. The doctor continuously modifies the pre-labeling results to form doctor-drawn animal organ labeling data. The new animal organ labeling data is collated with the old data on the server, and the initial animal organ segmentation model is trained again to obtain an optimized animal organ segmentation model. The optimized animal organ segmentation model can serve as the initial animal organ segmentation model to keep providing pre-labeling data for doctors, or it can be fine-tuned with a reinforcement learning method, finally forming a fine-tuned animal organ segmentation model that is deployed as the final model.
Example two
The present embodiment provides an animal organ segmentation method, as shown in fig. 9, comprising:
step S201, obtaining an animal medical image to be segmented of an unlabeled animal organ;
step S202, performing animal organ segmentation on the animal medical image to be segmented by using an animal organ segmentation model which is obtained by training in advance based on the animal organ segmentation model training method of the first embodiment, so as to obtain an animal organ segmentation result.
In the embodiment, the animal organ segmentation model obtained by training in advance based on the animal organ segmentation model training method of the embodiment is utilized to segment the animal organ of the animal medical image to be segmented, so that an accurate animal organ segmentation result can be obtained, and preoperative planning of an animal experiment is facilitated.
Example III
The present embodiment provides an animal organ segmentation model training apparatus, as shown in fig. 10, including:
the first training module 301 is configured to acquire a first animal medical image of a labeled animal organ and corresponding animal organ labeling data, and train an initial animal organ segmentation model by using a migration learning method based on the first animal medical image, the corresponding animal organ labeling data, and a pre-trained human organ segmentation model;
The pre-labeling module 302 is configured to obtain a second animal medical image of an unlabeled animal organ, and obtain animal organ pre-labeling data of the second animal medical image based on the initial animal organ segmentation model;
the second training module 303 is configured to train the initial animal organ segmentation model based on the first animal medical image and corresponding animal organ labeling data, and on the second animal medical image and corresponding animal organ labeling data, to obtain an optimized animal organ segmentation model, where the animal organ labeling data corresponding to the second animal medical image is corrected based on the animal organ pre-labeling data.
The specific implementation manner of each module may be referred to in the first embodiment, and will not be described in detail in this embodiment.
It should be noted that the device of this embodiment has all the advantages of the first embodiment.
Example IV
The present embodiment provides an animal organ segmentation apparatus, as shown in fig. 11, comprising:
an acquisition module 401, configured to acquire a medical image of an animal to be segmented of an unlabeled animal organ;
the segmentation module 402 is configured to perform animal organ segmentation on the to-be-segmented animal medical image by using the animal organ segmentation model obtained by training in advance by using the animal organ segmentation model training device according to the third embodiment, so as to obtain an animal organ segmentation result.
The specific implementation manner of each module may be referred to in the second embodiment, and will not be described in detail in this embodiment.
It should be noted that the device of this embodiment has all the advantages of the second embodiment.
Example five
The present embodiment provides a computer-readable storage medium, on which a computer program is stored, which when executed by at least one processor, implements the method of embodiment one or embodiment two.
It should be noted that this embodiment has all the advantageous effects of the first or second embodiment.
Example six
The present embodiment provides an electronic device including a memory and at least one processor, the memory storing a computer program that when executed by the at least one processor implements the method of embodiment one or embodiment two.
It should be noted that this embodiment has all the advantageous effects of embodiment one or embodiment two.
Example seven
The present embodiment provides a computer program product which, when run on a processor, performs the method of embodiment one or embodiment two.
It should be noted that this embodiment has all the advantageous effects of embodiment one or embodiment two.
The processor may be an application specific integrated circuit (Application Specific Integrated Circuit, abbreviated as ASIC), a digital signal processor (Digital Signal Processor, abbreviated as DSP), a digital signal processing device (Digital Signal Processing Device, abbreviated as DSPD), a programmable logic device (Programmable Logic Device, abbreviated as PLD), a field programmable gate array (Field Programmable Gate Array, abbreviated as FPGA), a controller, a microcontroller (Microcontroller Unit, MCU), a microprocessor, or other electronic components for executing the methods in the above embodiments.
The aforementioned computer-readable storage medium may be implemented by any type of volatile or non-volatile memory device or combination thereof, such as static random access memory (Static Random Access Memory, SRAM for short), electrically erasable programmable read-only memory (Electrically Erasable Programmable Read-Only Memory, EEPROM for short), programmable read-only memory (Programmable Read-Only Memory, PROM for short), read-only memory (Read-Only Memory, ROM for short), magnetic memory, flash memory, magnetic disk, or optical disk.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other manners. The apparatus and method embodiments described above are merely illustrative.
It should be noted that, in this document, the terms "first," "second," and the like in the description and the claims of the present application and the above drawings are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. The terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Although the embodiments of the present invention are described above, the embodiments are only used for facilitating understanding of the present invention, and are not intended to limit the present invention. Any person skilled in the art can make any modification and variation in form and detail without departing from the spirit and scope of the present disclosure, but the scope of the present disclosure is still subject to the scope of the appended claims.

Claims (12)

1. A method for training an animal organ segmentation model, comprising:
acquiring a first animal medical image of a marked animal organ and corresponding animal organ marking data, and training an initial animal organ segmentation model by using a transfer learning method based on the first animal medical image, the corresponding animal organ marking data and a pre-trained human organ segmentation model;
acquiring a second animal medical image of an unlabeled animal organ, and acquiring animal organ pre-labeling data of the second animal medical image based on the initial animal organ segmentation model;
training the initial animal organ segmentation model based on the first animal medical image and corresponding animal organ labeling data and the second animal medical image and corresponding animal organ labeling data to obtain an optimized animal organ segmentation model, wherein the animal organ labeling data corresponding to the second animal medical image is corrected based on the animal organ pre-labeling data;
the human organ segmentation model comprises an EfficientNet neural network, wherein the EfficientNet neural network comprises a first convolution module, an EfficientNet module and a first segmentation head connected in sequence; the initial animal organ segmentation model comprises a first convolution module, an EfficientNet module, an up-sampling layer, a second convolution module, a lightweight convolutional neural network module and a second segmentation head connected in sequence, wherein the first convolution module and the EfficientNet module are migrated from the EfficientNet neural network;
the lightweight convolutional neural network module comprises an scSE-MobileNetV3 module, wherein the scSE-MobileNetV3 module is obtained by replacing the cSE attention mechanism module in a MobileNetV3 module with an scSE attention mechanism module;
the training of the initial animal organ segmentation model using a transfer learning method based on the first animal medical image, the corresponding animal organ labeling data and the pre-trained human organ segmentation model comprises executing a preset training process on the initial animal organ segmentation model for a second preset number of rounds, wherein the preset training process comprises:
based on the input first animal medical image, calculating a first feature map through the first convolution module, the EfficientNet module and the up-sampling layer;
calculating a second feature map through the first convolution module based on the input first animal medical image;
fusing the first feature map and the second feature map to obtain a third feature map;
inputting the third feature map into the second convolution module to calculate a fourth feature map;
inputting the fourth feature map into the lightweight convolutional neural network module to calculate a fifth feature map;
and inputting the fifth feature map into the second segmentation head for decoding to obtain animal organ labeling data.
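The per-round data flow recited in claim 1 (first conv → EfficientNet encoder → upsample, fused with a parallel first-conv branch, then a second conv, the lightweight module, and the segmentation head) can be sketched as below. Every module body here is a placeholder stand-in, and element-wise addition is an assumed fusion operator — the claim fixes the wiring, not the internals:

```python
import numpy as np

def first_conv(x):
    # stand-in for the first convolution module migrated from the
    # human-organ model; a simple scaling for illustration only
    return x * 0.5

def efficientnet_encoder(x):
    # stand-in for the migrated EfficientNet module: downsamples by 2
    return x[:, ::2, ::2]

def upsample(x):
    # nearest-neighbour up-sampling by 2, restoring the spatial size
    return x.repeat(2, axis=1).repeat(2, axis=2)

def forward(image):
    f1 = upsample(efficientnet_encoder(first_conv(image)))  # first feature map
    f2 = first_conv(image)                                  # second feature map
    f3 = f1 + f2                # fusion (assumed: element-wise addition)
    f4 = f3 * 1.0               # stand-in for the second convolution module
    f5 = f4                     # stand-in for the scSE-MobileNetV3 module
    return (f5 > 0.25).astype(np.uint8)  # stand-in for the segmentation head

mask = forward(np.random.rand(1, 8, 8).astype(np.float32))
print(mask.shape)  # (1, 8, 8)
```

The only point the sketch is meant to make is structural: the fused map keeps the input resolution because the encoder's downsampling is undone by the up-sampling layer before fusion with the shallow branch.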
2. The method for training an animal organ segmentation model according to claim 1, further comprising: performing fine-tuning training on the optimized animal organ segmentation model by using a reinforcement learning method.
3. The method for training an animal organ segmentation model according to claim 1, wherein:
in the first N training processes, all parameters of the EfficientNet module are fixed;
in the last M training processes, all parameters of the EfficientNet module are unfrozen, so that they and the parameters of the lightweight convolutional neural network module participate in feedback adjustment simultaneously;
wherein the sum of N and M is equal to the second preset number of rounds.
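The two-stage schedule of claim 3 — freeze the migrated encoder for the first N rounds, then train everything for the last M — might look like the following. The dict-based parameter containers and the `trainable` flag are illustrative assumptions standing in for a framework's `requires_grad` mechanism:

```python
# Hypothetical two-stage fine-tuning schedule (not the patent's code).
def make_module(names):
    return {name: {"value": 0.0, "trainable": True} for name in names}

efficientnet = make_module(["enc.w1", "enc.w2"])    # migrated encoder params
lightweight  = make_module(["mnv3.w1", "mnv3.w2"])  # scSE-MobileNetV3 params

def set_trainable(module, flag):
    for p in module.values():
        p["trainable"] = flag

def train_round(rnd, n_frozen):
    # first N rounds: encoder frozen; last M rounds: everything trains
    set_trainable(efficientnet, rnd >= n_frozen)
    return [name for mod in (efficientnet, lightweight)
            for name, p in mod.items() if p["trainable"]]

N, M = 3, 2                       # N + M = the second preset number of rounds
for rnd in range(N + M):
    updated = train_round(rnd, N)

print(len(train_round(0, N)))     # 2: only the lightweight params update
print(len(train_round(N, N)))     # 4: all params update after unfreezing
```

In a real PyTorch-style setup the same effect would come from toggling `requires_grad` on the encoder's parameters at round N.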
4. The method of claim 1, wherein the loss function used to train the human organ segmentation model and the initial animal organ segmentation model is the sum of a BCE loss function and a Tversky loss function;
in the Tversky loss function used for training the human organ segmentation model, the parameters alpha and beta are both 0.5; in the Tversky loss function used for training the initial animal organ segmentation model, the parameter alpha ranges from 0.2 to 0.35 and the parameter beta ranges from 0.6 to 0.75.
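A minimal NumPy sketch of the combined loss in claim 4; the smoothing constant and mean reduction are assumptions not stated in the claim. With alpha (≈0.2–0.35) below beta (≈0.6–0.75), false negatives cost more than false positives, which biases the animal-organ model against missing organ voxels:

```python
import numpy as np

def bce_loss(pred, target, eps=1e-7):
    # binary cross-entropy, averaged over all voxels
    pred = np.clip(pred, eps, 1.0 - eps)
    return float(-np.mean(target * np.log(pred)
                          + (1 - target) * np.log(1 - pred)))

def tversky_loss(pred, target, alpha, beta, eps=1e-7):
    # alpha weights false positives, beta weights false negatives
    tp = float(np.sum(pred * target))
    fp = float(np.sum(pred * (1 - target)))
    fn = float(np.sum((1 - pred) * target))
    return 1.0 - (tp + eps) / (tp + alpha * fp + beta * fn + eps)

def combined_loss(pred, target, alpha=0.3, beta=0.7):
    # claim 4: total loss = BCE loss + Tversky loss
    return bce_loss(pred, target) + tversky_loss(pred, target, alpha, beta)

pred   = np.array([0.9, 0.8, 0.2, 0.4])   # soft predictions
target = np.array([1.0, 1.0, 0.0, 1.0])   # reference labels
print(round(combined_loss(pred, target), 4))
```

With alpha = beta = 0.5 (the human-organ setting) the Tversky term reduces to the familiar Dice loss; the asymmetric setting only changes how the same fp/fn counts are weighted.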
5. The method of claim 2, wherein the performing fine-tuning training on the optimized animal organ segmentation model using a reinforcement learning method comprises:
duplicating the optimized animal organ segmentation model to obtain two copies, namely a first animal organ segmentation model and a second animal organ segmentation model;
inputting the first animal medical image and the second animal medical image, as input data, into the first animal organ segmentation model and the second animal organ segmentation model to obtain a first animal organ labeling result and a second animal organ labeling result, respectively;
calculating a first Dice precision value between the first animal organ labeling result and the animal organ labeling data corresponding to the first animal medical image, and a second Dice precision value between the second animal organ labeling result and the animal organ labeling data corresponding to the second animal medical image;
calculating a penalty term based on the first and second Dice precision values;
taking the second Dice precision value as a score, calculating a reinforcement learning reward based on the score and the penalty term, and feeding the reward back to the second animal organ segmentation model for parameter updating;
and, every training period, copying the most recently updated parameters of the second animal organ segmentation model to overwrite the previously updated parameters of the first animal organ segmentation model.
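The Dice precision values used in claim 5 can be computed as below, assuming binary masks. The reward combination at the end is purely illustrative: the patent's actual penalty formula appears only as an image (claim 6) and is not reproduced here, so a simple gap term stands in for it:

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    # Dice coefficient between a predicted and a reference binary mask
    inter = float(np.sum(pred * target))
    return (2.0 * inter + eps) / (float(np.sum(pred) + np.sum(target)) + eps)

labelled = np.array([1, 1, 0, 0])   # reference annotation
result_1 = np.array([1, 1, 0, 0])   # first model copy's labelling result
result_2 = np.array([1, 0, 0, 0])   # second model copy's labelling result

d1 = dice(result_1, labelled)       # first Dice precision value
d2 = dice(result_2, labelled)       # second Dice precision value

# Claim 5 uses d2 as the score and combines it with a penalty computed
# from (d1, d2); |d1 - d2| is an assumed stand-in for that penalty.
penalty = abs(d1 - d2)
reward = d2 - penalty
print(round(d1, 3), round(d2, 3))   # 1.0 0.667
```

Only the second copy is updated from this reward; the first copy lags one period behind, which is what makes the d1-versus-d2 comparison informative.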
6. The method of claim 5, wherein the penalty term is calculated from the first and second Dice precision values by the following formula:
[formula omitted: rendered as an image in the original publication]

where λ denotes the penalty term, computed from the first Dice precision value and the second Dice precision value.
7. The method of claim 5, wherein the fine-tuning training of the optimized animal organ segmentation model using the reinforcement learning method further comprises:
constructing a reinforcement learning experience pool for offline learning, based on the animal organ labeling result generated by the first animal organ segmentation model whose parameters were most recently updated, and on the animal organ labeling result and final reward generated by the second animal organ segmentation model whose parameters were most recently updated.
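The experience pool of claim 7 resembles a standard replay buffer. The tuple layout, capacity, and uniform sampling below are assumptions for illustration; the claim only requires that the two copies' labelling results and the final reward be stored for offline learning:

```python
from collections import deque
import random

class ExperiencePool:
    """Minimal replay buffer: stores (labelling_1, labelling_2, reward)
    tuples from the two model copies, for offline learning."""
    def __init__(self, capacity=1000):
        self.buffer = deque(maxlen=capacity)  # oldest entries evicted first

    def add(self, labelling_1, labelling_2, reward):
        self.buffer.append((labelling_1, labelling_2, reward))

    def sample(self, batch_size):
        # uniform random minibatch for an offline update step
        return random.sample(list(self.buffer), min(batch_size, len(self.buffer)))

pool = ExperiencePool(capacity=3)
for step in range(5):                # capacity 3: only the last 3 survive
    pool.add(f"mask1_{step}", f"mask2_{step}", reward=0.1 * step)

print(len(pool.buffer))              # 3
print(pool.buffer[0][0])             # mask1_2 (oldest surviving entry)
```

`deque(maxlen=...)` gives the eviction behaviour for free, which is why it is a common choice for bounded experience pools.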
8. A method for segmenting an animal organ, comprising:
obtaining a medical image of an animal to be segmented of an unlabeled animal organ;
performing animal organ segmentation on the animal medical image to be segmented by using an animal organ segmentation model trained in advance by the animal organ segmentation model training method according to any one of claims 1 to 7, to obtain an animal organ segmentation result.
9. An animal organ segmentation model training apparatus, comprising:
the first training module is used for acquiring a first animal medical image of the marked animal organ and corresponding animal organ marking data, and training an initial animal organ segmentation model by using a transfer learning method based on the first animal medical image, the corresponding animal organ marking data and a pre-trained human organ segmentation model;
the pre-labeling module is used for acquiring a second animal medical image of an unlabeled animal organ and obtaining animal organ pre-labeling data of the second animal medical image based on the initial animal organ segmentation model;
the second training module is used for training the initial animal organ segmentation model based on the first animal medical image and corresponding animal organ labeling data and the second animal medical image and corresponding animal organ labeling data to obtain an optimized animal organ segmentation model, and the animal organ labeling data corresponding to the second animal medical image is corrected based on the animal organ pre-labeling data;
the human organ segmentation model comprises an EfficientNet neural network, wherein the EfficientNet neural network comprises a first convolution module, an EfficientNet module and a first segmentation head which are connected in sequence; the initial animal organ segmentation model comprises the first convolution module, the EfficientNet module, an up-sampling layer, a second convolution module, a lightweight convolutional neural network module and a second segmentation head which are connected in sequence, wherein the first convolution module and the EfficientNet module are migrated from the EfficientNet neural network;
the lightweight convolutional neural network module comprises an scSE-MobileNetV3 module, wherein the scSE-MobileNetV3 module is obtained by replacing the cSE attention mechanism module in a MobileNetV3 module with an scSE attention mechanism module;
the training of the initial animal organ segmentation model using a transfer learning method based on the first animal medical image, the corresponding animal organ labeling data and the pre-trained human organ segmentation model comprises executing a preset training process on the initial animal organ segmentation model for a second preset number of rounds, wherein the preset training process comprises:
based on the input first animal medical image, calculating a first feature map through the first convolution module, the EfficientNet module and the up-sampling layer;
calculating a second feature map through the first convolution module based on the input first animal medical image;
fusing the first feature map and the second feature map to obtain a third feature map;
inputting the third feature map into the second convolution module to calculate a fourth feature map;
inputting the fourth feature map into the lightweight convolutional neural network module to calculate a fifth feature map;
and inputting the fifth feature map into the second segmentation head for decoding to obtain animal organ labeling data.
10. An animal organ segmentation apparatus, comprising:
the acquisition module is used for acquiring medical images of the animal to be segmented of the unlabeled animal organ;
the segmentation module is used for performing animal organ segmentation on the animal medical image to be segmented by using the animal organ segmentation model trained in advance by the animal organ segmentation model training apparatus of claim 9, so as to obtain an animal organ segmentation result.
11. A computer-readable storage medium, on which a computer program is stored which, when executed by at least one processor, implements the method according to any one of claims 1 to 8.
12. An electronic device comprising a memory and at least one processor, the memory having stored thereon a computer program which, when executed by the at least one processor, implements the method of any of claims 1-8.
CN202310491998.9A 2023-05-05 2023-05-05 Animal organ segmentation model training method, segmentation method and related products Active CN116205289B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310491998.9A CN116205289B (en) 2023-05-05 2023-05-05 Animal organ segmentation model training method, segmentation method and related products


Publications (2)

Publication Number Publication Date
CN116205289A CN116205289A (en) 2023-06-02
CN116205289B true CN116205289B (en) 2023-07-04

Family

ID=86514981

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310491998.9A Active CN116205289B (en) 2023-05-05 2023-05-05 Animal organ segmentation model training method, segmentation method and related products

Country Status (1)

Country Link
CN (1) CN116205289B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117911797A (en) * 2024-03-19 2024-04-19 武汉理工大学 Crop CT image semiautomatic labeling method and system

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111507993B (en) * 2020-03-18 2023-05-19 南方电网科学研究院有限责任公司 Image segmentation method, device and storage medium based on generation countermeasure network
JP7396482B2 (en) * 2020-06-09 2023-12-12 富士通株式会社 Judgment program, judgment device, and judgment method
CN112070777B (en) * 2020-11-10 2021-10-08 中南大学湘雅医院 Method and device for organ-at-risk segmentation under multiple scenes based on incremental learning
CN113205528B (en) * 2021-04-02 2023-07-07 上海慧虎信息科技有限公司 Medical image segmentation model training method, segmentation method and device
AU2021101851A4 (en) * 2021-04-12 2021-06-03 Vishwakarma Institute Of Information Technology A deep learning based system for automatic segmentation and quantification of covid-19 in CT images
CN115439486A (en) * 2022-05-27 2022-12-06 陕西科技大学 Semi-supervised organ tissue image segmentation method and system based on dual-countermeasure network
CN115018865A (en) * 2022-06-30 2022-09-06 西安理工大学 Medical image segmentation method based on transfer learning
CN115131565B (en) * 2022-07-20 2023-05-02 天津大学 Histological image segmentation model based on semi-supervised learning
CN115908451A (en) * 2022-11-04 2023-04-04 北京航空航天大学 Heart CT image segmentation method combining multi-view geometry and transfer learning

Also Published As

Publication number Publication date
CN116205289A (en) 2023-06-02


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant