CN114332129A - Model training method and device and image segmentation method and device

Info

Publication number
CN114332129A
Authority
CN
China
Prior art keywords
sampling
lesion
samples
sample
sub
Prior art date
Legal status
Pending
Application number
CN202111663382.2A
Other languages
Chinese (zh)
Inventor
孙岩峰
张欢
潘明阳
王少康
陈宽
Current Assignee
Infervision Medical Technology Co Ltd
Original Assignee
Infervision Medical Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Infervision Medical Technology Co Ltd
Priority to CN202111663382.2A
Publication of CN114332129A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

The application relates to the technical field of image processing, and in particular to a model training method and apparatus, an image segmentation method and apparatus, a computer-readable storage medium, and an electronic device, which address the problem that network models segment nodules in medical image sequences poorly. In the model training method provided by the embodiments of the application, each combined sample corresponds to one sampling distribution weight, so that combined samples obtained under a variety of different sampling distribution weights can be used to train the initial network model, and the optimal nodule segmentation model can be selected. In addition, because the sampling distribution weight is the proportion of the number of lesion samples corresponding to each of the S lesion attributes in the total number of lesion samples included in the combined sample, the influence of the various lesion attributes on training of the initial network model can be balanced, differentiated sampling is realized, and the segmentation effect of the nodule segmentation model obtained by the model training method is improved.

Description

Model training method and device and image segmentation method and device
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a model training method and a model training apparatus, an image segmentation method and an image segmentation apparatus, and a computer-readable storage medium and an electronic device.
Background
Image segmentation with a network model refers to classifying each pixel in an image and recording the classification probability of each pixel. Because nodules in medical image sequences have complex and varied shapes, and the learning capacity of a network model is limited, the training samples used to train the network model cannot be increased without bound; as a result, existing network models segment nodules in medical image sequences poorly.
Disclosure of Invention
In view of this, embodiments of the present application provide a model training method and a model training apparatus, an image segmentation method and an image segmentation apparatus, as well as a computer-readable storage medium and an electronic device, which address the problem that network models segment nodules in medical image sequences poorly.
In a first aspect, an embodiment of the present application provides a model training method, including: sampling P lesion sampling regions contained in a medical image sequence sample set N times to obtain N combined samples, and training an initial network model based on each of the N combined samples to obtain loss results and nodule segmentation models corresponding to the N trainings, wherein each combined sample corresponds to one sampling distribution weight, each combined sample includes lesion samples of S lesion attributes, the sampling distribution weight is the proportion of the number of lesion samples corresponding to each of the S lesion attributes in the total number of lesion samples contained in the combined sample, and P, N, and S are positive integers; and determining an optimal nodule segmentation model among the nodule segmentation models corresponding to the N trainings based on the loss results corresponding to the N trainings, wherein the nodule segmentation models corresponding to the N trainings include at least one non-overfitted nodule segmentation model, and the optimal nodule segmentation model is the non-overfitted nodule segmentation model corresponding to the smallest of the loss results corresponding to the at least one non-overfitted nodule segmentation model.
With reference to the first aspect of the present application, in some embodiments, sampling the P lesion sampling regions contained in the medical image sequence sample set N times to obtain N combined samples, and training the initial network model based on the N combined samples to obtain the loss results and nodule segmentation models corresponding to the N trainings, includes: sampling to obtain the ith combined sample based on the P lesion sampling regions contained in the medical image sequence sample set and the ith sampling distribution weight, wherein i ∈ [2, N]; and performing the ith training on the initial network model based on the ith combined sample, and determining the loss result and nodule segmentation model corresponding to the ith training, wherein the ith sampling distribution weight is determined based on the loss result corresponding to the (i-1)th training, the (i-1)th sampling distribution weight, and a preset weight adjustment parameter.
With reference to the first aspect of the present application, in some embodiments, before sampling to obtain the ith combined sample based on the P lesion sampling regions contained in the medical image sequence sample set and the ith sampling distribution weight, the method further includes: determining the 1st sampling distribution weight and the preset weight adjustment parameter based on M trial trainings.
With reference to the first aspect of the present application, in some embodiments, determining the 1st sampling distribution weight and the preset weight adjustment parameter based on the M trial trainings includes: sampling to obtain trial combined samples corresponding to M preset trial sampling distribution weights, based on the P lesion sampling regions corresponding to the medical image sequence sample set and the M preset trial sampling distribution weights, wherein each trial combined sample includes lesion samples of the S lesion attributes, each preset trial sampling distribution weight is the proportion of the number of lesion samples corresponding to each of the S lesion attributes in the total number of lesion samples contained in the trial combined sample, and M is a positive integer; performing M trial trainings on the initial network model based on the trial combined samples corresponding to the M preset trial sampling distribution weights, and determining the loss results corresponding to the M trial trainings; and determining the 1st sampling distribution weight and the preset weight adjustment parameter based on the loss results corresponding to the M trial trainings.
With reference to the first aspect of the present application, in some embodiments, sampling to obtain the ith combined sample based on the P lesion sampling regions contained in the medical image sequence sample set and the ith sampling distribution weight includes: determining Z batches of sub-combined samples based on the P lesion sampling regions contained in the medical image sequence sample set, the ith sampling distribution weight, and a preset batch sampling number, wherein the ith combined sample includes the Z batches of sub-combined samples. Performing the ith training on the initial network model based on the ith combined sample, and determining the loss result and nodule segmentation model corresponding to the ith training, includes: for each batch of sub-combined samples in the Z batches, performing batch training on the initial network model or on the nodule segmentation model obtained from the previous batch training, based on the current sub-combined sample, and determining the sub-loss result and nodule segmentation model corresponding to the current sub-combined sample; and determining the loss result and nodule segmentation model corresponding to the ith training based on the sub-loss results and nodule segmentation models corresponding to the Z batches of sub-combined samples.
With reference to the first aspect of the present application, in some embodiments, determining the Z batches of sub-combined samples based on the P lesion sampling regions contained in the medical image sequence sample set, the ith sampling distribution weight, and the preset batch sampling number includes: determining a preliminary sampling grid corresponding to each of the P lesion sampling regions, based on the P lesion sampling regions contained in the medical image sequence sample set; and determining the Z batches of sub-combined samples based on the P lesion sampling regions contained in the medical image sequence sample set, the ith sampling distribution weight, the preliminary sampling grids corresponding to the P lesion sampling regions, the preset batch sampling number, and preset grid enhancement parameters for each batch.
With reference to the first aspect of the present application, in some embodiments, determining the Z batches of sub-combined samples based on the P lesion sampling regions contained in the medical image sequence sample set, the ith sampling distribution weight, the preliminary sampling grids corresponding to the P lesion sampling regions, the preset batch sampling number, and the grid enhancement parameters for each batch includes: determining Z batches of preset sampling data based on the P lesion sampling regions contained in the medical image sequence sample set, the ith sampling distribution weight, and the preset batch sampling number, wherein each batch of preset sampling data includes P1 lesion sampling regions and grid enhancement parameters corresponding to the P1 lesion sampling regions, with P1 ≤ P; for each batch of preset sampling data in the Z batches, performing an enhancement operation on the preliminary sampling grids corresponding to the P1 lesion sampling regions included in the current preset sampling data, based on those preliminary sampling grids and the corresponding grid enhancement parameters, and determining Q enhanced sampling grids corresponding to each of the P1 lesion sampling regions included in the current preset sampling data; determining the sub-combined sample corresponding to the current preset sampling data, based on the P1 lesion sampling regions included in the current preset sampling data and the Q enhanced sampling grids corresponding to those P1 lesion sampling regions; and determining the Z batches of sub-combined samples based on the sub-combined samples corresponding to the Z batches of preset sampling data.
With reference to the first aspect of the present application, in some embodiments, determining the sub-combined sample corresponding to the current preset sampling data, based on the P1 lesion sampling regions included in the current preset sampling data and the Q enhanced sampling grids corresponding to those P1 lesion sampling regions, includes: determining an unlabeled sub-combined sample corresponding to the current preset sampling data, based on the P1 unlabeled lesion sampling regions included in the current preset sampling data and the Q enhanced sampling grids corresponding to those P1 unlabeled lesion sampling regions; determining a labeled sub-combined sample corresponding to the current preset sampling data, based on the P1 labeled lesion sampling regions included in the current preset sampling data and the Q enhanced sampling grids corresponding to those P1 labeled lesion sampling regions, wherein the Q enhanced sampling grids corresponding to the P1 unlabeled lesion sampling regions are the same as the Q enhanced sampling grids corresponding to the P1 labeled lesion sampling regions; and determining the sub-combined sample corresponding to the current preset sampling data based on the unlabeled sub-combined sample and the labeled sub-combined sample corresponding to the current preset sampling data.
With reference to the first aspect of the present application, in some embodiments, the medical image sequence sample set includes a plurality of medical image sequence samples, each containing at least one lesion sampling region, and determining the preliminary sampling grid corresponding to each of the P lesion sampling regions based on the P lesion sampling regions corresponding to the medical image sequence sample set includes: for each of the P lesion sampling regions, determining the physical spatial resolution of the preliminary sampling grid corresponding to the current lesion sampling region, based on the image volume and physical spatial resolution corresponding to the current lesion sampling region; performing isotropic processing on the physical spatial resolution of the preliminary sampling grid to determine the spatial resolution of the preliminary sampling grid; determining the preliminary sampling grid corresponding to the current lesion sampling region, based on the current lesion sampling region, the spatial resolution of the preliminary sampling grid, and a preset grid size; and determining the preliminary sampling grids corresponding to the P lesion sampling regions based on the preliminary sampling grid corresponding to each lesion sampling region.
With reference to the first aspect of the present application, in some embodiments, a lesion sampling region includes a nodule lesion image, and determining the preliminary sampling grid corresponding to the lesion sampling region, based on the medical image sequence sample to which the current lesion sampling region belongs, the spatial resolution of the preliminary sampling grid, and the preset grid size, includes: determining a sampling center point based on the current lesion sampling region, wherein the sampling center point is located within the current lesion sampling region; and determining the preliminary sampling grid corresponding to the lesion sampling region, based on the sampling center point, the medical image sequence sample to which the current lesion sampling region belongs, the spatial resolution of the preliminary sampling grid, and the preset grid size.
With reference to the first aspect of the present application, in some embodiments, determining the sub-loss result and nodule segmentation model corresponding to the current sub-combined sample includes: inputting the current unlabeled sub-combined sample into the initial network model or the nodule segmentation model obtained from the previous batch training, to obtain the nodule probability of each pixel corresponding to the current unlabeled sub-combined sample and the nodule segmentation model; detecting the nodule edges in the current unlabeled sub-combined sample with an edge detection algorithm, based on the current labeled sub-combined sample, to obtain the gradient value of each pixel corresponding to the current labeled sub-combined sample; assigning a loss weight to each pixel corresponding to the current unlabeled sub-combined sample with a Gaussian algorithm, based on the gradient value of each pixel corresponding to the current labeled sub-combined sample, to obtain the loss weight of each pixel corresponding to the current unlabeled sub-combined sample; and determining the sub-loss result corresponding to the current sub-combined sample based on the nodule probability and loss weight of each pixel corresponding to the current unlabeled sub-combined sample.
In combination with the first aspect of the present application, in some embodiments, the S lesion attributes include: s1 kinds of preset nodule sizes and S2 kinds of preset nodule shapes.
In a second aspect, an embodiment of the present application provides an image segmentation method, including: obtaining an optimal nodule segmentation model based on the model training method mentioned in the first aspect; and performing nodule segmentation on the medical image blocks to be detected by using the optimal nodule segmentation model to obtain a characteristic probability map corresponding to the medical image blocks to be detected, wherein the characteristic probability map comprises the nodule segmentation probability of each pixel in the medical image blocks to be detected.
In a third aspect, an embodiment of the present application provides a model training apparatus, including: the segmentation model training module is configured to perform N-time sampling on P lesion sampling areas contained in a medical image sequence sample set to obtain N combined samples, train an initial network model based on the N combined samples respectively to obtain loss results and nodule segmentation models corresponding to the N-time training, wherein each combined sample corresponds to one sampling distribution weight, each combined sample comprises lesion samples with S kinds of lesion attributes, the sampling distribution weights are proportions of the number of the lesion samples corresponding to the S kinds of lesion attributes in the total number of the lesion samples contained in the combined samples, and P, N, S are positive integers; and the segmentation model determination module is configured to determine an optimal nodule segmentation model in the nodule segmentation models corresponding to the N times of training based on loss results corresponding to the N times of training, wherein the nodule segmentation models corresponding to the N times of training comprise at least one non-over-fitted nodule segmentation model, and the optimal nodule segmentation model is the non-over-fitted nodule segmentation model corresponding to the smallest loss result in the loss results corresponding to the at least one non-over-fitted nodule segmentation model.
In a fourth aspect, an embodiment of the present application provides an image segmentation apparatus, including: a nodule segmentation model acquisition module configured to obtain an optimal nodule segmentation model based on the model training method mentioned in the first aspect; and the nodule segmentation module is configured to perform nodule segmentation on the medical image blocks to be detected by using the optimal nodule segmentation model to obtain a characteristic probability map corresponding to the medical image blocks to be detected, wherein the characteristic probability map comprises the nodule segmentation probability of each pixel in the medical image blocks to be detected.
In a fifth aspect, an embodiment of the present application provides a computer-readable storage medium, which stores instructions that, when executed by a processor of an electronic device, enable the electronic device to perform the model training method mentioned in the first aspect and/or the image segmentation method mentioned in the second aspect.
In a sixth aspect, an embodiment of the present application provides an electronic device, including: a processor; and a memory for storing computer-executable instructions, wherein the processor is configured to execute the computer-executable instructions to implement the model training method of the first aspect and/or the image segmentation method of the second aspect.
The model training method provided by the embodiments of the present application samples the P lesion sampling regions contained in a medical image sequence sample set N times to obtain N combined samples, trains an initial network model based on each of the N combined samples to obtain loss results and nodule segmentation models corresponding to the N trainings, and determines the optimal nodule segmentation model among the nodule segmentation models corresponding to the N trainings based on the loss results corresponding to the N trainings. Each combined sample corresponds to one sampling distribution weight, each combined sample includes lesion samples of S lesion attributes, and the sampling distribution weight is the proportion of the number of lesion samples corresponding to each of the S lesion attributes in the total number of lesion samples included in the combined sample. The initial network model can therefore be trained with a plurality of combined samples, so that the optimal nodule segmentation model can be selected. In addition, because the combined samples are determined according to the lesion attributes and the sampling distribution weights, the influence of the various lesion attributes on training of the initial network model can be balanced, differentiated sampling is realized, and the segmentation effect of the nodule segmentation model obtained by the model training method is improved.
Drawings
Fig. 1 is a schematic view of an application scenario of a model training method according to an embodiment of the present application.
Fig. 2 is a schematic flow chart of a model training method according to an embodiment of the present application.
Fig. 3 is a schematic flow chart of a model training method according to another embodiment of the present application.
Fig. 4 is a schematic flowchart of a model training method according to another embodiment of the present application.
Fig. 5 is a schematic flow chart illustrating a model training method according to another embodiment of the present application.
Fig. 6 is a schematic flow chart illustrating a model training method according to another embodiment of the present application.
Fig. 7 is a schematic flowchart of a model training method according to another embodiment of the present application.
Fig. 8 is a schematic flowchart of a model training method according to another embodiment of the present application.
Fig. 9 is a schematic flowchart of a model training method according to another embodiment of the present application.
Fig. 10 is a schematic flowchart of a model training method according to another embodiment of the present application.
Fig. 11 is a schematic flowchart of a model training method according to another embodiment of the present application.
Fig. 12 is a schematic flowchart of a model training method according to another embodiment of the present application.
Fig. 13 is a schematic flowchart illustrating an image segmentation method according to an embodiment of the present application.
Fig. 14 is a schematic structural diagram of a model training apparatus according to an embodiment of the present application.
Fig. 15 is a schematic structural diagram of a model training apparatus according to another embodiment of the present application.
Fig. 16 is a schematic structural diagram of a model training apparatus according to another embodiment of the present application.
Fig. 17 is a schematic structural diagram of a model training apparatus according to another embodiment of the present application.
Fig. 18 is a schematic structural diagram of a model training apparatus according to another embodiment of the present application.
Fig. 19 is a schematic structural diagram of a model training apparatus according to another embodiment of the present application.
Fig. 20 is a schematic structural diagram of a model training apparatus according to another embodiment of the present application.
Fig. 21 is a schematic structural diagram of a model training apparatus according to another embodiment of the present application.
Fig. 22 is a schematic structural diagram of a model training apparatus according to another embodiment of the present application.
Fig. 23 is a schematic structural diagram of a model training apparatus according to another embodiment of the present application.
Fig. 24 is a schematic structural diagram of a model training apparatus according to another embodiment of the present application.
Fig. 25 is a schematic structural diagram of an image segmentation apparatus according to an embodiment of the present application.
Fig. 26 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Exemplary scenarios
Fig. 1 is a schematic view of an application scenario of a model training method according to an embodiment of the present application. The scenario shown in fig. 1 includes a server 110 and an image acquisition device 120 communicatively coupled to the server 110. Specifically, the server 110 is configured to sample P lesion sampling regions contained in a medical image sequence sample set N times to obtain N combined samples, and to train an initial network model based on each of the N combined samples to obtain loss results and nodule segmentation models corresponding to the N trainings, wherein each combined sample corresponds to one sampling distribution weight, each combined sample includes lesion samples of S lesion attributes, the sampling distribution weight is the proportion of the number of lesion samples corresponding to each of the S lesion attributes in the total number of lesion samples included in the combined sample, and P, N, and S are positive integers; and to determine an optimal nodule segmentation model among the nodule segmentation models corresponding to the N trainings based on the loss results corresponding to the N trainings, wherein the nodule segmentation models corresponding to the N trainings include at least one non-overfitted nodule segmentation model, and the optimal nodule segmentation model is the non-overfitted nodule segmentation model corresponding to the smallest of the loss results corresponding to the at least one non-overfitted nodule segmentation model. The image acquisition device 120 is configured to acquire the medical image sequence sample set and send it to the server 110, so that the server 110 can perform the above operations.
Exemplary method
Fig. 2 is a schematic flow chart of a model training method according to an embodiment of the present application. As shown in fig. 2, the model training method includes the following steps.
Step 210, sampling the P lesion sampling regions contained in the medical image sequence sample set N times to obtain N combined samples, and training the initial network model based on the N combined samples to obtain the loss results and nodule segmentation models corresponding to the N trainings.
Specifically, the P lesion sampling regions may be regions determined by detection frames. A region determined by a detection frame may be a region of limited accuracy obtained by preliminary lesion segmentation; that is, it may contain an entire nodule or only part of a nodule. Each combined sample corresponds to one sampling distribution weight, each combined sample includes lesion samples of S lesion attributes, the sampling distribution weight is the proportion of the number of lesion samples corresponding to each of the S lesion attributes in the total number of lesion samples contained in the combined sample, and P, N, and S are positive integers. The initial network model may be a network model whose parameters have been reset (re-initialized). The loss result may be a loss value computed from a loss function. The nodule segmentation model may be the model obtained by training the initial network model.
In one embodiment of the present application, the S lesion attributes include S1 preset nodule sizes and S2 preset nodule shapes, where S, S1, and S2 are all positive integers and S may be the sum of S1 and S2. A preset nodule size may be 1 mm or 2 mm; the preset nodule sizes can be set according to actual conditions, and this application does not specifically limit them. A preset nodule shape may be an irregular shape, such as a spiculated (burr-like) shape, or a regular shape, such as a sphere or an ellipsoid.
Illustratively, N may be 5, with each of the 5 combined samples containing 1000 lesion samples. The first combined sample may include 500 lesion samples containing large nodules and 500 lesion samples containing small nodules. The second combined sample may include 300 lesion samples containing large nodules and 700 lesion samples containing small nodules. The third combined sample may include 800 lesion samples containing large nodules and 200 lesion samples containing small nodules. The fourth combined sample may include 100 lesion samples containing large nodules and 900 lesion samples containing small nodules. The fifth combined sample may include 650 lesion samples containing large nodules and 350 lesion samples containing small nodules.
Illustratively, N may be 3, with each of the 3 combined samples containing 2000 lesion samples. The first combined sample may include 800 lesion samples containing spiculated nodules, 600 lesion samples containing spherical nodules, and 600 lesion samples containing ellipsoidal nodules. The second combined sample may include 700 lesion samples containing spiculated nodules, 700 lesion samples containing spherical nodules, and 600 lesion samples containing ellipsoidal nodules. The third combined sample may include 600 lesion samples containing spiculated nodules, 500 lesion samples containing spherical nodules, and 900 lesion samples containing ellipsoidal nodules. A sketch of drawing such a combined sample follows.
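As a minimal sketch of how a sampling distribution weight could translate into a combined sample (the pool structure, function name, and use of Python's random module are illustrative assumptions, not part of the patent):

```python
import random

def sample_combined(lesion_pools, weights, total):
    """Draw a combined sample of `total` lesion samples; weights[attr] is the
    fraction of the combined sample taken from the pool for attribute attr."""
    assert abs(sum(weights.values()) - 1.0) < 1e-6  # weights form proportions
    combined = []
    for attr, w in weights.items():
        # Sample with replacement so a small pool can still fill its quota.
        combined += random.choices(lesion_pools[attr], k=round(total * w))
    random.shuffle(combined)
    return combined

# The first combined sample above: 50% large nodules, 50% small nodules.
# combined = sample_combined(pools, {"large": 0.5, "small": 0.5}, 1000)
```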
Step 220, determining the optimal nodule segmentation model among the nodule segmentation models corresponding to the N trainings, based on the loss results corresponding to the N trainings.
Specifically, the nodule segmentation models corresponding to the N trainings include at least one non-overfitted nodule segmentation model, and the optimal nodule segmentation model is the non-overfitted nodule segmentation model corresponding to the smallest of the loss results corresponding to the at least one non-overfitted nodule segmentation model.
Specifically, the value of N may be set according to actual requirements. For example, N may be 60; that is, the optimal nodule segmentation model among the nodule segmentation models corresponding to the 60 trainings is determined based on the loss results corresponding to the 60 trainings. The value of N may also be set according to the convergence of the loss function during model training. For example, if the convergence condition of the loss function is already satisfied by the 40th training, N may be 40. The convergence condition may be a convergence threshold or a convergence speed, and this application does not specifically limit it. The selection in step 220 is sketched below.
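A minimal illustration of the model selection, assuming each training round reports a loss value and an overfitting flag (e.g. from a held-out validation set); none of these names come from the patent:

```python
def select_optimal_model(results):
    """results: one (loss, model, overfitted) tuple per training round.
    The patent guarantees at least one non-overfitted model exists; the
    optimal model is the non-overfitted one with the smallest loss."""
    candidates = [(loss, model) for loss, model, overfitted in results
                  if not overfitted]
    return min(candidates, key=lambda c: c[0])[1]
```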
In the model training method provided by the embodiments of the present application, each combined sample corresponds to one sampling distribution weight, so that combined samples obtained under a variety of different sampling distribution weights can be used to train the initial network model, and the nodule segmentation model with the best nodule segmentation effect can be selected. In addition, because the sampling distribution weight is the proportion of the number of lesion samples corresponding to each of the S lesion attributes in the total number of lesion samples included in the combined sample, the influence of the various lesion attributes on training of the initial network model can be balanced, differentiated sampling is realized, and the segmentation effect of the nodule segmentation model is improved.
Fig. 3 is a schematic flow chart of a model training method according to another embodiment of the present application. The embodiment shown in fig. 3 is extended based on the embodiment shown in fig. 2, and the differences between the embodiment shown in fig. 3 and the embodiment shown in fig. 2 will be emphasized below, and the descriptions of the same parts will not be repeated.
As shown in fig. 3, in the embodiment of the present application, the step of sampling the P lesion sampling regions contained in the medical image sequence sample set N times to obtain N combined samples, and training the initial network model based on the N combined samples to obtain the loss results and nodule segmentation models corresponding to the N trainings, includes the following steps.
Step 310, sampling to obtain the ith combined sample based on the P lesion sampling regions contained in the medical image sequence sample set and the ith sampling distribution weight.
Step 320, performing the ith training on the initial network model based on the ith combined sample, and determining the loss result and nodule segmentation model corresponding to the ith training.
Specifically, the ith sampling distribution weight is determined based on the loss result corresponding to the (i-1)th training, the (i-1)th sampling distribution weight, and a preset weight adjustment parameter.
Specifically, i ∈ [2, N], and steps 310 and 320 are performed in a loop. When i is 2, i-1 is 1: in step 310, the 2nd combined sample is obtained by sampling based on the P lesion sampling regions contained in the medical image sequence sample set and the 2nd sampling distribution weight, where the 1st sampling distribution weight may be preset; in step 320, the 2nd training is performed on the initial network model based on the 2nd combined sample, and the loss result and nodule segmentation model corresponding to the 2nd training are determined. The 2nd sampling distribution weight may be determined based on the loss result corresponding to the 1st training, the 1st sampling distribution weight, and the preset weight adjustment parameter.
Illustratively, when i is 3, i-1 is 2: in step 310, the 3rd combined sample is obtained by sampling based on the P lesion sampling regions contained in the medical image sequence sample set and the 3rd sampling distribution weight; in step 320, the 3rd training is performed on the initial network model based on the 3rd combined sample, and the loss result and nodule segmentation model corresponding to the 3rd training are determined. The 3rd sampling distribution weight may be determined based on the loss result corresponding to the 2nd training, the 2nd sampling distribution weight, and the preset weight adjustment parameter. That is, each sampling distribution weight is determined based on the loss result of the previous training, the sampling distribution weight of the previous training, and the preset weight adjustment parameter, as in the sketch below.
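The patent does not spell out the update formula. One plausible reading of "loss result, previous weight, and preset weight adjustment parameter" is a step-size update that shifts sampling weight toward the attributes the previous round segmented worst; the sketch below is that assumption, not the patent's definition:

```python
def update_weights(prev_weights, prev_losses, eta):
    """prev_weights / prev_losses: dicts keyed by lesion attribute; eta is the
    preset weight adjustment parameter (a small step size). Attributes whose
    loss was above average gain weight; the result is renormalized to 1."""
    mean_loss = sum(prev_losses.values()) / len(prev_losses)
    raw = {a: max(w + eta * (prev_losses[a] - mean_loss), 0.0)
           for a, w in prev_weights.items()}
    norm = sum(raw.values())
    return {a: v / norm for a, v in raw.items()}
```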
By executing steps 310 and 320 in a loop, with each sampling distribution weight determined from the loss result of the previous training, the sampling distribution weight of the previous training, and the preset weight adjustment parameter, the overall model training can develop in a better direction, further improving the model training effect.
Fig. 4 is a schematic flowchart of a model training method according to another embodiment of the present application. The embodiment shown in fig. 4 is extended based on the embodiment shown in fig. 3, and the differences between the embodiment shown in fig. 4 and the embodiment shown in fig. 3 will be emphasized below, and the descriptions of the same parts will not be repeated.
As shown in fig. 4, in the embodiment of the present application, before the step of sampling to obtain the ith combined sample based on the P lesion sampling regions contained in the medical image sequence sample set and the ith sampling distribution weight, the method further includes the following step.
Step 410, determining the 1st sampling distribution weight and the preset weight adjustment parameter based on M trial trainings.
Specifically, the trial training may proceed as follows: preset M trial sampling distribution weights, sample according to each of them to obtain the trial combined samples corresponding to the M trial sampling distribution weights, and finally perform trial training with those trial combined samples to obtain the loss results corresponding to the M trial trainings.
By analyzing the loss results corresponding to the M trial trainings, the variation trend of those loss results can be obtained, so the 1st sampling distribution weight can be determined from the trial sampling distribution weights used in the M trial trainings. In addition, from this variation trend, the adjustment direction and adjustment step size of the sampling distribution weight can be obtained, and thus the preset weight adjustment parameter can be determined. The preset weight adjustment parameter may specify the adjustment direction and step size of the sampling distribution weight: the direction may be to increase or decrease the weight, and the step size may be a specific value by which the weight is changed.
Determining the 1st sampling distribution weight and the preset weight adjustment parameter through M trial trainings means that the data obtained from those trainings provide a reference for this determination, thereby providing a more accurate data basis for subsequent model training.
Fig. 5 is a schematic flow chart illustrating a model training method according to another embodiment of the present application. The embodiment shown in fig. 5 is extended based on the embodiment shown in fig. 4, and the differences between the embodiment shown in fig. 5 and the embodiment shown in fig. 4 will be emphasized below, and the descriptions of the same parts will not be repeated.
As shown in fig. 5, in the embodiment of the present application, the step of determining the 1st sampling distribution weight and the preset weight adjustment parameter based on the M trial trainings includes the following steps.
Step 510, sampling to obtain the trial combined samples corresponding to the M preset trial sampling distribution weights, based on the P lesion sampling regions corresponding to the medical image sequence sample set and the M preset trial sampling distribution weights.
Specifically, each trial combined sample includes lesion samples of the S lesion attributes. Each preset trial sampling distribution weight is the proportion of the number of lesion samples corresponding to each of the S lesion attributes in the total number of lesion samples included in the trial combined sample. M is a positive integer.
Step 520, performing M trial trainings on the initial network model based on the trial combined samples corresponding to the M preset trial sampling distribution weights, and determining the loss results corresponding to the M trial trainings.
Step 530, determining the 1st sampling distribution weight and the preset weight adjustment parameter based on the loss results corresponding to the M trial trainings.
Because the 1st sampling distribution weight and the preset weight adjustment parameter are determined from the loss results corresponding to the M trial trainings, those loss results provide a more accurate reference for this determination, and thus a more accurate data basis for subsequent model training, as sketched below.
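Steps 510–530 can be sketched as a warm-up loop. The helper names are hypothetical, and the heuristic for deriving the adjustment parameter from the loss trend is an assumption, since the patent only says the trend provides a reference:

```python
def warm_start(trial_weights, train_once):
    """trial_weights: M preset trial sampling distribution weights.
    train_once(w): samples a trial combined sample under w, trial-trains the
    initial network model, and returns that trial's loss result."""
    losses = [train_once(w) for w in trial_weights]        # M trial losses
    best = min(range(len(losses)), key=losses.__getitem__)
    first_weight = trial_weights[best]                     # 1st sampling weight
    # Assumed heuristic: a wider loss spread across trials suggests a larger
    # adjustment step for the preset weight adjustment parameter.
    eta = 0.1 * (max(losses) - min(losses)) / (abs(min(losses)) + 1e-8)
    return first_weight, eta
```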
Fig. 6 is a schematic flow chart illustrating a model training method according to another embodiment of the present application. The embodiment shown in fig. 6 is extended based on the embodiment shown in fig. 3, and the differences between the embodiment shown in fig. 6 and the embodiment shown in fig. 3 will be emphasized below, and the descriptions of the same parts will not be repeated.
As shown in fig. 6, in the embodiment of the present application, the step of sampling to obtain the ith combined sample based on the P lesion sampling regions contained in the medical image sequence sample set and the ith sampling distribution weight includes the following steps.
Step 610, determining Z batches of sub-combined samples based on the P lesion sampling regions contained in the medical image sequence sample set, the ith sampling distribution weight, and a preset batch sampling number.
Specifically, the ith combined sample includes the Z batches of sub-combined samples; that is, each combined sample includes Z batches of sub-combined samples. For example, if a combined sample includes 1000 lesion samples and Z equals 40, then each batch of sub-combined samples includes 25 lesion samples.
In the embodiment of the present application, the step of performing the ith training on the initial network model based on the ith combined sample, and determining the loss result and nodule segmentation model corresponding to the ith training, includes the following steps.
Step 620, for each batch of sub-combined samples in the Z batches, performing batch training on the initial network model or on the nodule segmentation model obtained from the previous batch training, based on the current sub-combined sample, and determining the sub-loss result and nodule segmentation model corresponding to the current sub-combined sample.
Specifically, if the current sub-combined sample is the first batch used for batch training, batch training is performed on the initial network model based on the current sub-combined sample, and the sub-loss result and nodule segmentation model corresponding to the current sub-combined sample are determined. If the current sub-combined sample is not the first batch, batch training is performed on the nodule segmentation model obtained from the previous batch training, and the sub-loss result and nodule segmentation model corresponding to the current sub-combined sample are determined.
Step 630, determining the loss result and nodule segmentation model corresponding to the ith training based on the sub-loss results and nodule segmentation models corresponding to the Z batches of sub-combined samples.
By dividing the combined sample into Z batches of sub-combined samples, the variation trend across the sub-loss results corresponding to the batches can be analyzed, providing a rich data basis for subsequent model training. A minimal sketch of this batch loop follows.
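A minimal PyTorch-style sketch of steps 620–630, assuming a standard optimizer/loss interface, which the patent does not specify; averaging the sub-losses is one reasonable way to aggregate them into the round's loss result:

```python
def train_round(model, sub_batches, loss_fn, optimizer):
    """sub_batches: the Z batches of sub-combined samples for this round.
    Each batch updates the model left by the previous batch; the sub-loss
    results are collected and averaged into the round's loss result."""
    sub_losses = []
    for images, masks in sub_batches:          # Z batches
        optimizer.zero_grad()
        loss = loss_fn(model(images), masks)   # sub-loss for this batch
        loss.backward()
        optimizer.step()
        sub_losses.append(loss.item())
    return model, sub_losses, sum(sub_losses) / len(sub_losses)
```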
Fig. 7 is a schematic flowchart of a model training method according to another embodiment of the present application. The embodiment shown in fig. 7 is extended based on the embodiment shown in fig. 6, and the differences between the embodiment shown in fig. 7 and the embodiment shown in fig. 6 will be emphasized below, and the descriptions of the same parts will not be repeated.
As shown in fig. 7, in the embodiment of the present application, the step of determining the Z batches of sub-combined samples based on the P lesion sampling regions contained in the medical image sequence sample set, the ith sampling distribution weight, and the preset batch sampling number includes the following steps.
Step 710, determining the preliminary sampling grid corresponding to each of the P lesion sampling regions, based on the P lesion sampling regions contained in the medical image sequence sample set.
Specifically, each lesion sampling region may correspond to one preliminary sampling grid. A preliminary sampling grid is a sampling grid that has not undergone enhancement operations such as rotation or warping.
Step 720, determining the Z batches of sub-combined samples based on the P lesion sampling regions contained in the medical image sequence sample set, the ith sampling distribution weight, the preliminary sampling grids corresponding to the P lesion sampling regions, the preset batch sampling number, and the preset grid enhancement parameters for each batch.
Sampling with the preliminary sampling grids, the preset batch sampling number, and the preset grid enhancement parameters for each batch yields the Z batches of sub-combined samples. Different preset grid enhancement parameters can be set for each batch, which increases the diversity of the Z batches of sub-combined samples, provides a differentiated data basis for subsequent model training, improves the nodule segmentation model's ability to learn lesion samples with different lesion attributes, and improves the training effect of the model.
Fig. 8 is a schematic flowchart of a model training method according to another embodiment of the present application. The embodiment shown in fig. 8 is extended based on the embodiment shown in fig. 7, and the differences between the embodiment shown in fig. 8 and the embodiment shown in fig. 7 will be emphasized below, and the descriptions of the same parts will not be repeated.
As shown in fig. 8, in the embodiment of the present application, the step of determining the Z batches of sub-combined samples based on the P lesion sampling regions contained in the medical image sequence sample set, the ith sampling distribution weight, the preliminary sampling grids corresponding to the P lesion sampling regions, the preset batch sampling number, and the grid enhancement parameters for each batch includes the following steps.
Step 810, determining Z batches of preset sampling data based on the P lesion sampling regions contained in the medical image sequence sample set, the ith sampling distribution weight, and the preset batch sampling number.
Specifically, each batch of preset sampling data includes P1 lesion sampling regions and the grid enhancement parameters corresponding to the P1 lesion sampling regions, with P1 ≤ P. The grid enhancement parameters include a spatial rotation angle and a spatial twist angle. For example, a grid enhancement parameter may be a spatial rotation of 30 degrees followed by a spatial twist of 50 degrees; a sketch of such a rotation follows.
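As an illustration of applying a rotation-type grid enhancement parameter, the sketch below rotates the points of a sampling grid about one axis with NumPy. The twist could be modeled similarly as a depth-dependent rotation per slice; this formulation is an assumption, since the patent does not define the operators:

```python
import numpy as np

def rotate_grid(points, angle_deg, axis="z"):
    """points: (N, 3) array of sampling-grid coordinates centered on the
    sampling center point. Returns the grid rotated about the given axis."""
    t = np.deg2rad(angle_deg)
    c, s = np.cos(t), np.sin(t)
    rot = {"z": np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]]),
           "y": np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]]),
           "x": np.array([[1, 0, 0], [0, c, -s], [0, s, c]])}[axis]
    return points @ rot.T  # rotate every grid point
```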
Step 820, for each batch of preset sampling data in the Z batches, performing an enhancement operation on the preliminary sampling grids corresponding to the P1 lesion sampling regions included in the current preset sampling data, based on those preliminary sampling grids and the corresponding grid enhancement parameters, and determining the Q enhanced sampling grids corresponding to each of the P1 lesion sampling regions included in the current preset sampling data.
Illustratively, P may be 200 and P1 may be 30; that is, 30 lesion sampling regions are selected from the 200 lesion sampling regions according to the sampling distribution weight and the preset batch sampling number, and the grid enhancement parameters corresponding to those 30 lesion sampling regions are then determined.
Step 830, determining the sub-combined sample corresponding to the current preset sampling data, based on the P1 lesion sampling regions included in the current preset sampling data and the Q enhanced sampling grids corresponding to those P1 lesion sampling regions.
Specifically, the P1 lesion sampling regions are each sampled with the Q enhanced sampling grids to determine the sub-combined sample corresponding to the current preset sampling data. For example, if Q is 10 and P1 is 5, the sub-combined sample corresponding to the current preset sampling data may contain 50 samples.
Step 840, determining the Z batches of sub-combined samples based on the sub-combined samples corresponding to the Z batches of preset sampling data.
Because the enhanced sampling grids are obtained by applying the enhancement operations to the preliminary sampling grids, and sampling is then performed with the enhanced grids, there is no need to repeatedly apply data enhancement to the sampled sub-combined samples, which reduces the amount of computation. Moreover, repeatedly applying data enhancement to sampled sub-combined samples can cause problems such as loss of sample data and black image borders.
Fig. 9 is a schematic flowchart of a model training method according to another embodiment of the present application. The embodiment shown in fig. 9 is extended based on the embodiment shown in fig. 8, and the differences between the embodiment shown in fig. 9 and the embodiment shown in fig. 8 will be emphasized below, and the descriptions of the same parts will not be repeated.
As shown in fig. 9, in the embodiment of the present application, the step of determining the sub-combined sample corresponding to the current preset sampling data, based on the P1 lesion sampling regions included in the current preset sampling data and the Q enhanced sampling grids corresponding to those P1 lesion sampling regions, includes the following steps.
Step 910, determining the unlabeled sub-combined sample corresponding to the current preset sampling data, based on the P1 unlabeled lesion sampling regions included in the current preset sampling data and the Q enhanced sampling grids corresponding to those P1 unlabeled lesion sampling regions.
Specifically, the lesion sampling regions include unlabeled lesion sampling regions and labeled lesion sampling regions.
Step 920, determining the labeled sub-combined sample corresponding to the current preset sampling data, based on the P1 labeled lesion sampling regions included in the current preset sampling data and the Q enhanced sampling grids corresponding to those P1 labeled lesion sampling regions.
Specifically, the Q enhanced sampling grids corresponding to the P1 unlabeled lesion sampling regions are the same as the Q enhanced sampling grids corresponding to the P1 labeled lesion sampling regions.
Step 930, determining the sub-combined sample corresponding to the current preset sampling data, based on the unlabeled sub-combined sample and the labeled sub-combined sample corresponding to the current preset sampling data.
Because the unlabeled sub-combined samples and the labeled sub-combined samples can be obtained directly with the same Q enhanced sampling grids, the amount of computation is reduced, as illustrated in the sketch below.
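In a PyTorch-style implementation this pairing could look as follows; grid_sample and its modes are real PyTorch API, while the pairing function itself and the assumption that grids are already in grid_sample's normalized convention are illustrative:

```python
import torch.nn.functional as F

def sample_pair(image, mask, grid):
    """image/mask: 5-D tensors (N, C, D, H, W); grid: (N, D', H', W', 3) in
    [-1, 1]. Using the SAME enhanced grid for both keeps the unlabeled patch
    and its annotation aligned; nearest-neighbor keeps the mask binary."""
    img_patch = F.grid_sample(image, grid, mode="bilinear", align_corners=True)
    msk_patch = F.grid_sample(mask.float(), grid, mode="nearest",
                              align_corners=True)
    return img_patch, msk_patch
```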
Fig. 10 is a schematic flowchart of a model training method according to another embodiment of the present application. The embodiment shown in fig. 10 is extended based on the embodiment shown in fig. 7, and the differences between the embodiment shown in fig. 10 and the embodiment shown in fig. 7 will be emphasized below, and the descriptions of the same parts will not be repeated.
As shown in fig. 10, in the embodiment of the present application, the step of determining the preliminary sampling grid corresponding to each of the P lesion sampling regions, based on the P lesion sampling regions corresponding to the medical image sequence sample set, includes the following steps.
Step 1010, for each of the P lesion sampling regions, determining the physical spatial resolution of the preliminary sampling grid corresponding to the current lesion sampling region, based on the image volume and physical spatial resolution corresponding to the current lesion sampling region.
Specifically, the medical image sequence sample set includes a plurality of medical image sequence samples, and each medical image sequence sample includes at least one lesion sampling region. The product of the image volume corresponding to the current lesion sampling region and its physical spatial resolution is the physical volume corresponding to the current lesion sampling region. Dividing this physical volume by the volume of the preliminary sampling grid gives the physical spatial resolution of the preliminary sampling grid. The volume of the preliminary sampling grid may be 64 × 128 × 128 or another size; this application does not specifically limit it.
Step 1020, performing isotropic processing on the physical spatial resolution of the preliminary sampling grid to determine the spatial resolution of the preliminary sampling grid.
Specifically, isotropic processing adjusts the pixel pitches in the three directions x, y, and z to the same pitch. For example, if the physical spatial resolution of the preliminary sampling grid is 0.5 × 0.5 × 1, the spatial resolution determined by isotropic processing may be 0.5 × 0.5 × 0.5. A sketch of this computation follows.
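Steps 1010–1020 amount to a few lines of arithmetic. In this sketch, taking the minimum pitch as the common isotropic pitch matches the 0.5 × 0.5 × 1 → 0.5 × 0.5 × 0.5 example above but is an assumption, as are all the names:

```python
import numpy as np

def preliminary_grid_spacing(region_shape_vox, voxel_spacing_mm, grid_shape):
    """Physical volume of the region = voxel extent * voxel spacing; dividing
    by the grid size gives the grid's physical spatial resolution per axis,
    which is then made isotropic (one common pitch on x, y, and z)."""
    physical_mm = np.asarray(region_shape_vox) * np.asarray(voxel_spacing_mm)
    per_axis = physical_mm / np.asarray(grid_shape)  # per-axis resolution
    return np.full(3, per_axis.min())                # isotropic common pitch

# e.g. preliminary_grid_spacing((64, 256, 256), (1.0, 0.5, 0.5), (64, 128, 128))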
Step 1030, determining the preliminary sampling grid corresponding to the lesion sampling region, based on the current lesion sampling region, the spatial resolution of the preliminary sampling grid, and a preset grid size.
Step 1040, determining the preliminary sampling grids corresponding to the P lesion sampling regions, based on the preliminary sampling grid corresponding to each lesion sampling region.
Isotropic processing of the physical spatial resolution of the preliminary sampling grid brings the grid's spatial resolution closer to the resolution of an actual nodule lesion, so that the samples obtained by sampling are closer to the actual state of the nodule lesion, reducing the computation and learning cost of the model.
Fig. 11 is a schematic flowchart of a model training method according to another embodiment of the present application. The embodiment shown in fig. 11 is extended based on the embodiment shown in fig. 10, and the differences between the embodiment shown in fig. 11 and the embodiment shown in fig. 10 will be emphasized below, and the descriptions of the same parts will not be repeated.
As shown in fig. 11, in the embodiment of the present application, the step of determining the preliminary sampling grid corresponding to the lesion sampling area based on the medical image sequence sample to which the current lesion sampling area belongs, the spatial resolution of the preliminary sampling grid, and the size of the preset grid includes the following steps.
Step 1110, determining a sampling center point based on the current lesion sampling region.
Specifically, the sampling center point is located in the current lesion sampling region. The sampling center point may be randomly selected within the current lesion sampling area. The lesion sampling region includes a nodule lesion image.
Step 1120, determining the preliminary sampling grid corresponding to the lesion sampling region based on the sampling center point, the medical image sequence sample to which the current lesion sampling region belongs, the spatial resolution of the preliminary sampling grid, and the preset grid size.
Specifically, the preliminary sampling grid may be mapped to the [-1, 1] interval to match the input interface of the initial network model.
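This [-1, 1] convention matches grid-based samplers such as PyTorch's F.grid_sample; the sketch below assumes that interface (the embodiment does not name a framework) and omits spacing adjustment and bounds handling.

import torch
import torch.nn.functional as F

def sample_patch(volume, center_vox, grid_shape):
    """Sample a patch through a grid normalized to [-1, 1].

    volume:     (1, 1, D, H, W) float tensor of the medical image sequence sample
    center_vox: (z, y, x) voxel coordinates of the sampling center point
    grid_shape: (d, h, w) size of the preliminary sampling grid
    """
    _, _, D, H, W = volume.shape
    d, h, w = grid_shape
    # voxel indices of the grid, centered on the sampling center point
    zs = torch.arange(d) - d // 2 + center_vox[0]
    ys = torch.arange(h) - h // 2 + center_vox[1]
    xs = torch.arange(w) - w // 2 + center_vox[2]
    # map voxel indices into [-1, 1], the interval grid_sample expects
    zs = 2 * zs / (D - 1) - 1
    ys = 2 * ys / (H - 1) - 1
    xs = 2 * xs / (W - 1) - 1
    gz, gy, gx = torch.meshgrid(zs, ys, xs, indexing="ij")
    grid = torch.stack((gx, gy, gz), dim=-1).unsqueeze(0)  # (1, d, h, w, 3)
    return F.grid_sample(volume, grid.float(), align_corners=True)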
Setting the sampling center point inside the current lesion sampling region increases the probability that the sampled patch contains the nodule lesion.
Fig. 12 is a schematic flowchart of a model training method according to another embodiment of the present application. The embodiment shown in fig. 12 is extended based on the embodiment shown in fig. 6, and the differences between the embodiment shown in fig. 12 and the embodiment shown in fig. 6 will be emphasized below, and the descriptions of the same parts will not be repeated.
As shown in fig. 12, in the embodiment of the present application, the step of performing batch training on the initial network model or the nodule segmentation model obtained by the previous batch training based on the current sub-combined sample, and determining the sub-loss result and the nodule segmentation model corresponding to the current sub-combined sample, includes the following steps.
Step 1210, inputting the current unlabeled sub-combined sample into the initial network model or the nodule segmentation model obtained by the previous batch of training, to obtain the nodule probability of each pixel corresponding to the current unlabeled sub-combined sample and the corresponding nodule segmentation model.
In particular, the current sub-combined sample includes a currently unlabeled sub-combined sample and a currently labeled sub-combined sample.
And step 1220, detecting the nodule edge in the currently unlabeled sub-combined sample based on the currently labeled sub-combined sample by using an edge detection algorithm to obtain a gradient value of each pixel corresponding to the currently labeled sub-combined sample.
Step 1230, distributing a loss weight to each pixel corresponding to the currently unlabeled sub-combined sample based on the gradient value of each pixel corresponding to the currently labeled sub-combined sample by using a Gaussian algorithm, so as to obtain the loss weight of each pixel corresponding to the currently unlabeled sub-combined sample.
Specifically, the gradient value of each pixel corresponding to the currently labeled sub-combined sample may be mapped to the [1, 3] interval, so as to obtain a weight distribution that varies smoothly at the nodule edge.
Step 1240, determining a sub-loss result corresponding to the current sub-combined sample based on the nodule probability of each pixel corresponding to the current unlabeled sub-combined sample and the loss weight of each pixel corresponding to the current unlabeled sub-combined sample.
Distributing a loss weight to each pixel corresponding to the current unlabeled sub-combined sample by using a Gaussian algorithm, based on the gradient value of each pixel corresponding to the current labeled sub-combined sample, increases the weight of the nodule edge region. This improves the model's learning of the nodule edge and thus the segmentation precision of the nodule segmentation model.
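As a sketch of this weighting scheme (the embodiment prescribes only "an edge detection algorithm" and "a Gaussian algorithm"; the Sobel operator and the sigma value below are illustrative stand-ins):

import numpy as np
from scipy import ndimage

def edge_loss_weights(label_mask, sigma=2.0):
    """Per-pixel loss weights that emphasize the nodule edge region.

    label_mask: binary nodule annotation from the currently labeled
                sub-combined sample
    """
    # edge detection on the annotation yields gradient values at nodule edges
    edges = ndimage.generic_gradient_magnitude(label_mask.astype(float),
                                               ndimage.sobel)
    # Gaussian smoothing spreads the edge response to neighbouring pixels
    edges = ndimage.gaussian_filter(edges, sigma=sigma)
    if edges.max() > 0:
        edges = edges / edges.max()
    # map the gradient values into the [1, 3] interval: background pixels keep
    # weight 1, pixels at the nodule edge are weighted up to 3
    return 1.0 + 2.0 * edges

The sub-loss result is then a per-pixel loss over the nodule probabilities (for example, cross-entropy) multiplied by these weights before reduction.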
Fig. 13 is a schematic flowchart illustrating an image segmentation method according to an embodiment of the present application. As shown in fig. 13, the image segmentation method provided in the embodiment of the present application includes the following steps.
Step 1310, obtaining an optimal nodule segmentation model based on the model training method in the above embodiments.
Step 1320, performing nodule segmentation on the medical image block to be detected by using the optimal nodule segmentation model to obtain a feature probability map corresponding to the medical image block to be detected.
Specifically, the feature probability map comprises a nodule segmentation probability for each pixel in the medical image block to be detected.
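For illustration only, inference with the optimal nodule segmentation model might look as follows; the single-channel sigmoid output is an assumption about the model's head, not something the embodiment fixes.

import torch

@torch.no_grad()
def segment_block(model, block):
    """Nodule segmentation of one medical image block to be detected.

    block: (D, H, W) array or tensor. Returns the feature probability map,
    i.e. a nodule segmentation probability for each pixel of the block.
    """
    x = torch.as_tensor(block, dtype=torch.float32)[None, None]  # (1,1,D,H,W)
    logits = model(x)
    return torch.sigmoid(logits)[0, 0]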
Method embodiments of the present application are described in detail above in conjunction with fig. 1-13, and apparatus embodiments of the present application are described in detail below in conjunction with fig. 14-25. It is to be understood that the description of the method embodiments corresponds to the description of the apparatus embodiments, and therefore reference may be made to the preceding method embodiments for parts not described in detail.
Exemplary devices
Fig. 14 is a schematic structural diagram of a model training apparatus according to an embodiment of the present application. As shown in fig. 14, the model training apparatus 1400 of the present embodiment includes a segmentation model training module 1410 and a segmentation model determination module 1420.
Specifically, the segmentation model training module 1410 is configured to sample P lesion sampling regions included in the medical image sequence sample set N times to obtain N combined samples, and to train the initial network model based on the N combined samples respectively to obtain the loss results and nodule segmentation models corresponding to the N times of training, where each combined sample corresponds to one sampling allocation weight, each combined sample includes lesion samples of S kinds of lesion attributes, the sampling allocation weight is the proportion of the number of lesion samples corresponding to each of the S kinds of lesion attributes in the total number of lesion samples included in the combined sample, and P, N and S are positive integers. The segmentation model determination module 1420 is configured to determine an optimal nodule segmentation model among the nodule segmentation models respectively corresponding to the N times of training, based on the loss results respectively corresponding to the N times of training, where the nodule segmentation models respectively corresponding to the N times of training include at least one non-over-fitted nodule segmentation model, and the optimal nodule segmentation model is the non-over-fitted nodule segmentation model corresponding to the smallest loss result among the loss results respectively corresponding to the at least one non-over-fitted nodule segmentation model.
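As a rough sketch of how one sampling allocation weight could drive the assembly of one combined sample (the dict layout and sampling with replacement are illustrative assumptions, not details of the embodiment):

import random

def draw_combined_sample(lesion_pools, allocation_weight, total_count):
    """Assemble one combined sample under one sampling allocation weight.

    lesion_pools:      maps each of the S lesion attributes to the lesion
                       samples available for that attribute
    allocation_weight: maps each attribute to its proportion of the total
                       number of lesion samples in the combined sample
    total_count:       total number of lesion samples in the combined sample
    """
    combined = []
    for attribute, pool in lesion_pools.items():
        k = round(allocation_weight[attribute] * total_count)
        combined.extend(random.choices(pool, k=k))  # sample with replacement
    random.shuffle(combined)
    return combined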
Fig. 15 is a schematic structural diagram of a model training apparatus according to another embodiment of the present application. The embodiment shown in fig. 15 is extended based on the embodiment shown in fig. 14, and the differences between the embodiment shown in fig. 15 and the embodiment shown in fig. 14 will be emphasized below, and the descriptions of the same parts will not be repeated.
As shown in fig. 15, the segmentation model training module 1410 of the embodiment of the present application includes: a combined sample determination unit 1411 and a model training unit 1412.
Specifically, the combined sample determination unit 1411 is configured to assign weights to the ith sample based on P lesion sampling regions and the ith sample contained in the medical image sequence sample set, and sample to obtain the ith combined sample, where i belongs to [2, N ]. The model training unit 1412 is configured to perform an ith training on the initial network model based on the ith combined sample, and determine a loss result and a nodule segmentation model corresponding to the ith training, wherein the ith sample distribution weight is determined based on the loss result corresponding to the (i-1) th training, the (i-1) th sample distribution weight and a preset weight adjustment parameter.
Fig. 16 is a schematic structural diagram of a model training apparatus according to another embodiment of the present application. The embodiment shown in fig. 16 is extended based on the embodiment shown in fig. 15, and the differences between the embodiment shown in fig. 16 and the embodiment shown in fig. 15 will be emphasized below, and the descriptions of the same parts will not be repeated.
As shown in fig. 16, the model training apparatus 1400 of the embodiment of the present application further includes: a training module 1430.
Specifically, the training module 1430 is configured to determine the 1st sampling distribution weight and the preset weight adjustment parameter based on M times of trial training.
Fig. 17 is a schematic structural diagram of a model training apparatus according to another embodiment of the present application. The embodiment shown in fig. 17 is extended based on the embodiment shown in fig. 16, and the differences between the embodiment shown in fig. 17 and the embodiment shown in fig. 16 will be emphasized below, and the descriptions of the same parts will not be repeated.
As shown in fig. 17, the training module 1430 of the embodiment of the present application includes: a trial combination sample determination unit 1431, a trial training unit 1432, and a trial training result determination unit 1433.
Specifically, the trial combination sample determination unit 1431 is configured to obtain, by sampling, trial combination samples corresponding to M preset trial sampling allocation weights based on P lesion sampling regions corresponding to the medical image sequence sample set and the M preset trial sampling allocation weights, where each trial combination sample includes lesion samples of the S kinds of lesion attributes, the preset trial sampling allocation weight is the proportion of the number of lesion samples corresponding to each of the S kinds of lesion attributes in the total number of lesion samples included in the trial combination sample, and M is a positive integer. The trial training unit 1432 is configured to perform M times of trial training on the initial network model respectively based on the trial combination samples corresponding to the M preset trial sampling allocation weights, and determine the loss results corresponding to the M times of trial training. The trial training result determination unit 1433 is configured to determine the 1st sampling allocation weight and the preset weight adjustment parameter based on the loss results corresponding to the M times of trial training.
Fig. 18 is a schematic structural diagram of a model training apparatus according to another embodiment of the present application. The embodiment shown in fig. 18 is extended based on the embodiment shown in fig. 15, and the differences between the embodiment shown in fig. 18 and the embodiment shown in fig. 15 will be emphasized below, and the descriptions of the same parts will not be repeated.
As shown in fig. 18, the combined sample determination unit 1411 of the embodiment of the present application includes: the sub-combined sample determination subunit 1810. The model training unit 1412 of the embodiment of the present application includes: batch training subunit 1820 and batch training result determination subunit 1830.
Specifically, the sub-combined sample determining subunit 1810 is configured to determine Z batches of sub-combined samples based on the P lesion sampling regions included in the medical image sequence sample set, the ith sampling assignment weight, and the preset batch sampling number, where the ith combined sample includes the Z batches of sub-combined samples. The batch training subunit 1820 is configured to, for each batch of sub-combined samples in the Z batches of sub-combined samples, perform batch training on the initial network model or the nodule segmentation model obtained by the previous batch training based on the current sub-combined sample, and determine the sub-loss result and the nodule segmentation model corresponding to the current sub-combined sample. The batch training result determining subunit 1830 is configured to determine the loss result and the nodule segmentation model corresponding to the ith training based on the sub-loss results and nodule segmentation models respectively corresponding to the Z batches of sub-combined samples.
Fig. 19 is a schematic structural diagram of a model training apparatus according to another embodiment of the present application. The embodiment shown in fig. 19 is extended based on the embodiment shown in fig. 18, and the differences between the embodiment shown in fig. 19 and the embodiment shown in fig. 18 will be emphasized below, and the descriptions of the same parts will not be repeated.
As shown in fig. 19, the sub-combination sample determination subunit 1810 of the embodiment of the present application includes: a preliminary mesh determination subunit 1811 and a Z-batch combined sample determination subunit 1812.
Specifically, the preliminary grid determining subunit 1811 is configured to determine, based on P lesion sampling regions included in the sample set of the medical image sequence, a preliminary sampling grid corresponding to each of the P lesion sampling regions. The Z batch sub-combined sample determination subunit 1812 is configured to determine Z batch sub-combined samples based on P lesion sampling regions, the ith sampling allocation weight, preliminary sampling grids corresponding to the P lesion sampling regions, the number of samples in a preset batch, and a preset grid enhancement parameter of each batch included in the medical image sequence sample set.
Fig. 20 is a schematic structural diagram of a model training apparatus according to another embodiment of the present application. The embodiment shown in fig. 20 is extended based on the embodiment shown in fig. 19, and the differences between the embodiment shown in fig. 20 and the embodiment shown in fig. 19 will be emphasized below, and the descriptions of the same parts will not be repeated.
As shown in fig. 20, the Z lot sub-combination sample determination subunit 1812 of the embodiment of the present application includes: the preset sampling data determining subunit 2010, the enhancement grid determining subunit 2020, the single batch sub-combined sample determining subunit 2030, and the Z batch sub-combined sample synthesizing subunit 2040.
Specifically, the preset sampling data determining subunit 2010 is configured to determine Z batches of preset sampling data based on the P lesion sampling regions included in the medical image sequence sample set, the ith sampling allocation weight, and the preset batch sampling number, where each batch of preset sampling data includes P1 lesion sampling regions and grid enhancement parameters corresponding to the P1 lesion sampling regions, where P1 ≤ P. The enhancement grid determining subunit 2020 is configured to, for each batch of preset sampling data in the Z batches of preset sampling data, perform an enhancement operation on the preliminary sampling grids respectively corresponding to the P1 lesion sampling regions included in the current preset sampling data, based on the preliminary sampling grids and grid enhancement parameters respectively corresponding to those P1 lesion sampling regions, and determine Q enhanced sampling grids respectively corresponding to the P1 lesion sampling regions included in the current preset sampling data. The single-batch sub-combined-sample determining subunit 2030 is configured to determine the sub-combined sample corresponding to the current preset sampling data based on the P1 lesion sampling regions included in the current preset sampling data and the Q enhanced sampling grids respectively corresponding to those P1 lesion sampling regions. The Z batch sub-combined sample synthesis subunit 2040 is configured to determine the Z batches of sub-combined samples based on the sub-combined samples respectively corresponding to the Z batches of preset sampling data.
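A sketch of one possible enhancement operation on a preliminary sampling grid follows; the rotation-only enhancement and the angle bound are stand-ins, since the embodiment leaves the preset grid enhancement parameters open. Applying the same Q enhanced grids to the unlabeled and labeled lesion sampling regions keeps image and annotation aligned.

import math
import random
import torch

def enhance_grids(preliminary_grid, Q, max_angle=0.2):
    """Derive Q enhanced sampling grids from one preliminary sampling grid.

    preliminary_grid: (d, h, w, 3) grid of normalized coordinates in [-1, 1]
    max_angle:        illustrative bound (radians) on a random rotation
    """
    enhanced = []
    for _ in range(Q):
        a = random.uniform(-max_angle, max_angle)
        c, s = math.cos(a), math.sin(a)
        # rotate the (x, y) components of each grid coordinate about the z axis
        rot = torch.tensor([[c, -s, 0.0],
                            [s, c, 0.0],
                            [0.0, 0.0, 1.0]])
        enhanced.append(preliminary_grid @ rot.T)
    return enhanced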
Fig. 21 is a schematic structural diagram of a model training apparatus according to another embodiment of the present application. The embodiment shown in fig. 21 is extended based on the embodiment shown in fig. 20, and the differences between the embodiment shown in fig. 21 and the embodiment shown in fig. 20 will be emphasized below, and the descriptions of the same parts will not be repeated.
As shown in fig. 21, the single-batch sub-combined-sample determining subunit 2030 of the embodiment of the present application includes: an unlabeled sub-combined sample determining subunit 2031, a labeled sub-combined sample determining subunit 2032, and a current sub-combined sample determining subunit 2033.
Specifically, the lesion sampling regions include unlabeled lesion sampling regions and labeled lesion sampling regions. The unlabeled sub-combined sample determining subunit 2031 is configured to determine the unlabeled sub-combined sample corresponding to the current preset sampling data based on the P1 unlabeled lesion sampling regions included in the current preset sampling data and the Q enhanced sampling grids respectively corresponding to those P1 unlabeled lesion sampling regions. The labeled sub-combined sample determining subunit 2032 is configured to determine the labeled sub-combined sample corresponding to the current preset sampling data based on the P1 labeled lesion sampling regions included in the current preset sampling data and the Q enhanced sampling grids respectively corresponding to those P1 labeled lesion sampling regions, where the Q enhanced sampling grids corresponding to the P1 unlabeled lesion sampling regions are the same as the Q enhanced sampling grids corresponding to the P1 labeled lesion sampling regions. The current sub-combined sample determining subunit 2033 is configured to determine the sub-combined sample corresponding to the current preset sampling data based on the unlabeled sub-combined sample and the labeled sub-combined sample corresponding to the current preset sampling data.
Fig. 22 is a schematic structural diagram of a model training apparatus according to another embodiment of the present application. The embodiment shown in fig. 22 is extended based on the embodiment shown in fig. 19, and the differences between the embodiment shown in fig. 22 and the embodiment shown in fig. 19 will be emphasized below, and the descriptions of the same parts will not be repeated.
As shown in fig. 22, the preliminary mesh determination subunit 1811 of the embodiment of the present application includes: a grid physical resolution determination subunit 2210, a grid spatial resolution determination subunit 2220, a single grid determination subunit 2230, and a grid synthesis subunit 2240.
In particular, the set of medical image sequence samples comprises a plurality of medical image sequence samples. Each medical image sequence sample comprises at least one lesion sampling area. The grid physical resolution determining subunit 2210 is configured to, for each of the P lesion sampling areas, determine a physical spatial resolution of a preliminary sampling grid corresponding to the current lesion sampling area based on an image volume and a physical spatial resolution corresponding to the current lesion sampling area. The grid spatial resolution determination subunit 2220 is configured to perform isotropic processing on the physical spatial resolution of the preliminary sampling grid, and determine the spatial resolution of the preliminary sampling grid. The single grid determining subunit 2230 is configured to determine a preliminary sampling grid corresponding to the lesion sampling region based on the current lesion sampling region, the spatial resolution of the preliminary sampling grid, and the preset grid size. The grid synthesis subunit 2240 is configured to determine the preliminary sampling grids respectively corresponding to the P lesion sampling areas based on the preliminary sampling grid corresponding to each lesion sampling area.
Fig. 23 is a schematic structural diagram of a model training apparatus according to another embodiment of the present application. The embodiment shown in fig. 23 is extended based on the embodiment shown in fig. 22, and the differences between the embodiment shown in fig. 23 and the embodiment shown in fig. 22 will be emphasized below, and the descriptions of the same parts will not be repeated.
As shown in fig. 23, the single grid determination subunit 2230 of the embodiment of the present application includes: a sampling center point determining subunit 2231 and a preliminary sampling grid determining subunit 2232.
Specifically, the lesion sampling region includes a nodule lesion image. The sampling center point determining subunit 2231 is configured to determine a sampling center point based on the current lesion sampling region, where the sampling center point is located within the current lesion sampling region. The preliminary sampling grid determining subunit 2232 is configured to determine the preliminary sampling grid corresponding to the lesion sampling region based on the sampling center point, the medical image sequence sample to which the current lesion sampling region belongs, the spatial resolution of the preliminary sampling grid, and the preset grid size.
Fig. 24 is a schematic structural diagram of a model training apparatus according to another embodiment of the present application. The embodiment shown in fig. 24 is extended based on the embodiment shown in fig. 18, and the differences between the embodiment shown in fig. 24 and the embodiment shown in fig. 18 will be emphasized below, and the descriptions of the same parts will not be repeated.
As shown in fig. 24, the batch training subunit 1820 of the embodiment of the present application includes: a single batch training subunit 1821, an edge detection subunit 1822, a gaussian computation subunit 1823, and a loss determination subunit 1824.
In particular, the current sub-combined sample includes a currently unlabeled sub-combined sample and a currently labeled sub-combined sample. The single-batch training subunit 1821 is configured to input the current unlabeled sub-combined sample into the initial network model or the nodule segmentation model obtained through training in the previous batch, and obtain the nodule probability of each pixel corresponding to the current unlabeled sub-combined sample and the corresponding nodule segmentation model. The edge detection subunit 1822 is configured to detect a nodule edge in the currently unlabeled sub-combined sample based on the currently labeled sub-combined sample by using an edge detection algorithm, so as to obtain a gradient value of each pixel corresponding to the currently labeled sub-combined sample. The Gaussian calculating subunit 1823 is configured to, by using a Gaussian algorithm, allocate a loss weight to each pixel corresponding to the currently unlabeled sub-combined sample based on the gradient value of each pixel corresponding to the currently labeled sub-combined sample, so as to obtain a loss weight of each pixel corresponding to the currently unlabeled sub-combined sample. The loss determination subunit 1824 is configured to determine a sub-loss result corresponding to the current sub-combined sample based on the nodule probability of each pixel corresponding to the current unlabeled sub-combined sample and the loss weight of each pixel corresponding to the current unlabeled sub-combined sample.
Fig. 25 is a schematic structural diagram of an image segmentation apparatus according to an embodiment of the present application. As shown in fig. 25, the image segmentation apparatus 2500 includes: a nodule segmentation model acquisition module 2510 and a nodule segmentation module 2520.
Specifically, the nodule segmentation model obtaining module 2510 is configured to obtain an optimal nodule segmentation model based on the model training method mentioned in the first aspect. The nodule segmentation module 2520 is configured to perform nodule segmentation on the medical image block to be detected by using the optimal nodule segmentation model to obtain a feature probability map corresponding to the medical image block to be detected, where the feature probability map includes the nodule segmentation probability of each pixel in the medical image block to be detected.
Exemplary electronic device
Fig. 26 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 26, the electronic device 2600 includes: one or more processors 2601 and memory 2602; and computer program instructions stored in the memory 2602, which when executed by the processor 2601, cause the processor 2601 to perform the model training method and/or the image segmentation method of any of the embodiments described above.
The processor 2601 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device to perform desired functions.
Memory 2602 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. Volatile memory may include, for example, Random Access Memory (RAM) and/or cache memory. Non-volatile memory may include, for example, Read-Only Memory (ROM), a hard disk, flash memory, and the like. One or more computer program instructions may be stored on a computer-readable storage medium and executed by the processor 2601 to implement the steps in the model training method and/or the image segmentation method of the various embodiments of the present application above and/or other desired functions.
In one example, electronic device 2600 can further comprise: an input device 2603 and an output device 2604, which are interconnected by a bus system and/or other form of connection mechanism (not shown in fig. 26).
The input device 2603 may include, for example, a keyboard, a mouse, a microphone, and the like.
The output device 2604 may output various information to the outside, and may include, for example, a display, speakers, a printer, and a communication network and remote output devices connected thereto.
Of course, for simplicity, only some of the components of the electronic device 2600 related to the present application are shown in fig. 26, and components such as buses, input devices/output interfaces, and the like are omitted. In addition, electronic device 2600 may include any other suitable components, depending on the particular application.
Exemplary computer readable storage Medium
In addition to the above-described methods and apparatus, embodiments of the present application may also be a computer program product comprising computer program instructions which, when executed by a processor, cause the processor to perform the steps of the model training method and/or the image segmentation method of any of the above-described embodiments.
The computer program product may include program code for carrying out operations of embodiments of the present application, written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present application may also be a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, cause the processor to perform the steps in the model training method and/or the image segmentation method according to various embodiments of the present application described in the "exemplary methods" section above in this specification.
A computer-readable storage medium may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing describes the general principles of the present application in conjunction with specific embodiments, however, it is noted that the advantages, effects, etc. mentioned in the present application are merely examples and are not limiting, and they should not be considered essential to the various embodiments of the present application. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the foregoing disclosure is not intended to be exhaustive or to limit the disclosure to the precise details disclosed.
The block diagrams of devices, apparatuses, and systems referred to in this application are given only as illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. These devices, apparatuses, and systems may be connected, arranged, and configured in any manner, as will be appreciated by those skilled in the art. Words such as "including," "comprising," "having," and the like are open-ended words that mean "including, but not limited to," and are used interchangeably therewith. The word "or" as used herein means, and is used interchangeably with, the word "and/or," unless the context clearly dictates otherwise. The phrase "such as" as used herein means, and is used interchangeably with, the phrase "such as but not limited to."
It should also be noted that in the devices, apparatuses, and methods of the present application, the components or steps may be decomposed and/or recombined. These decompositions and/or recombinations are to be considered as equivalents of the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, the description is not intended to limit embodiments of the application to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (17)

1. A method of model training, comprising:
sampling P focus sampling areas contained in a medical image sequence sample set for N times to obtain N combined samples, training an initial network model based on the N combined samples respectively to obtain loss results and a nodule segmentation model corresponding to the N times of training, wherein each combined sample corresponds to a sampling distribution weight, each combined sample comprises focus samples with S kinds of focus attributes, the sampling distribution weight is the proportion of the number of the focus samples corresponding to the S kinds of focus attributes in the total number of the focus samples contained in the combined sample, and P, N, S is a positive integer;
and determining an optimal nodule segmentation model in the nodule segmentation models respectively corresponding to the N times of training based on the loss results respectively corresponding to the N times of training, wherein the nodule segmentation models respectively corresponding to the N times of training comprise at least one non-over-fitted nodule segmentation model, and the optimal nodule segmentation model is the non-over-fitted nodule segmentation model corresponding to the minimum loss result in the loss results respectively corresponding to the at least one non-over-fitted nodule segmentation model.
2. The model training method according to claim 1, wherein the sampling P lesion sampling regions included in the medical image sequence sample set N times to obtain N combined samples, and training the initial network model based on the N combined samples respectively to obtain the loss results and nodule segmentation models corresponding to the N times of training comprises:
obtaining the ith combined sample by sampling based on P lesion sampling regions contained in the medical image sequence sample set and the ith sampling distribution weight, wherein i belongs to [2, N];
and performing ith training on the initial network model based on the ith combined sample, and determining a loss result and a nodule segmentation model corresponding to the ith training, wherein the ith sampling distribution weight is determined based on the loss result corresponding to the (i-1) th training, the (i-1) th sampling distribution weight and a preset weight adjustment parameter.
3. The model training method according to claim 2, wherein before the obtaining the ith combined sample by sampling based on the P lesion sampling regions contained in the medical image sequence sample set and the ith sampling distribution weight, the method further comprises:
determining the 1st sampling distribution weight and the preset weight adjustment parameter based on M times of trial training.
4. The model training method of claim 3, wherein the determining the 1st sampling distribution weight and the preset weight adjustment parameter based on the M times of trial training comprises:
sampling to obtain trial combination samples corresponding to M preset trial sampling distribution weights based on P lesion sampling regions corresponding to the medical image sequence sample set and the M preset trial sampling distribution weights, wherein each trial combination sample comprises lesion samples of the S kinds of lesion attributes, the preset trial sampling distribution weight is the proportion of the number of lesion samples corresponding to each of the S kinds of lesion attributes in the total number of lesion samples contained in the trial combination sample, and M is a positive integer;
performing the M times of trial training on the initial network model respectively based on the trial combination samples corresponding to the M preset trial sampling distribution weights, and determining the loss results corresponding to the M times of trial training;
And determining the 1st sampling distribution weight and the preset weight adjustment parameter based on the loss results corresponding to the M times of trial training.
5. The model training method of claim 2, wherein the obtaining the ith combined sample by sampling based on the P lesion sampling regions included in the medical image sequence sample set and the ith sampling distribution weight comprises:
determining Z batches of sub-combined samples based on P lesion sampling regions contained in the medical image sequence sample set, the ith sampling distribution weight and a preset batch sampling number, wherein the ith combined sample comprises the Z batches of sub-combined samples;
wherein, the training an initial network model for the ith time based on the ith combined sample, and determining the loss result and the nodule segmentation model corresponding to the ith training time comprises:
for each of the Z sets of sub-combined samples,
based on the current sub-combination sample, carrying out batch training on the initial network model or the nodule segmentation model obtained by the last batch training, and determining the sub-loss result and the nodule segmentation model corresponding to the current sub-combination sample;
and determining a loss result and a nodule segmentation model corresponding to the ith training based on the sub-loss result and the nodule segmentation model corresponding to each Z batch of sub-combined samples.
6. The model training method of claim 5, wherein the determining Z batches of sub-combined samples based on the P lesion sampling regions, the ith sampling distribution weight and the preset batch sampling number contained in the medical image sequence sample set comprises:
determining a preliminary sampling grid corresponding to each P lesion sampling areas based on the P lesion sampling areas contained in the medical image sequence sample set;
and determining Z batches of combined samples based on P focus sampling areas, the ith sampling distribution weight, preliminary sampling grids corresponding to the P focus sampling areas, the sampling quantity of the preset batches and the preset grid enhancement parameters of each batch contained in the medical image sequence sample set.
7. The model training method of claim 6, wherein the determining the Z batches of sub-combined samples based on the P lesion sampling regions, the ith sampling distribution weight, the preliminary sampling grids corresponding to the P lesion sampling regions, the preset batch sampling number, and the preset grid enhancement parameters of each batch contained in the medical image sequence sample set comprises:
determining Z batches of preset sampling data based on the P lesion sampling regions contained in the medical image sequence sample set, the ith sampling distribution weight and the preset batch sampling number, wherein each batch of preset sampling data comprises P1 lesion sampling regions and grid enhancement parameters corresponding to the P1 lesion sampling regions, wherein P1 ≤ P;
for each batch of preset sampling data in the Z batches of preset sampling data,
performing an enhancement operation on the preliminary sampling grids respectively corresponding to the P1 lesion sampling regions included in the current preset sampling data, based on the preliminary sampling grids and grid enhancement parameters respectively corresponding to the P1 lesion sampling regions included in the current preset sampling data, and determining Q enhanced sampling grids respectively corresponding to the P1 lesion sampling regions included in the current preset sampling data;
determining the sub-combined sample corresponding to the current preset sampling data based on the P1 lesion sampling regions included in the current preset sampling data and the Q enhanced sampling grids respectively corresponding to the P1 lesion sampling regions included in the current preset sampling data;
and determining Z batches of sub-combined samples based on the sub-combined samples corresponding to the Z batches of preset sampling data respectively.
8. The model training method of claim 7, wherein the lesion sampling regions comprise unlabeled lesion sampling regions and labeled lesion sampling regions, and the determining the sub-combined sample corresponding to the current preset sampling data based on the P1 lesion sampling regions included in the current preset sampling data and the Q enhanced sampling grids respectively corresponding to the P1 lesion sampling regions comprises:
determining an unlabeled sub-combined sample corresponding to the current preset sampling data based on the P1 unlabeled lesion sampling regions included in the current preset sampling data and the Q enhanced sampling grids respectively corresponding to the P1 unlabeled lesion sampling regions;
determining a labeled sub-combined sample corresponding to the current preset sampling data based on the P1 labeled lesion sampling regions included in the current preset sampling data and the Q enhanced sampling grids respectively corresponding to the P1 labeled lesion sampling regions, wherein the Q enhanced sampling grids corresponding to the P1 unlabeled lesion sampling regions are the same as the Q enhanced sampling grids corresponding to the P1 labeled lesion sampling regions;
And determining the sub-combined sample corresponding to the current preset sampling data based on the unlabeled sub-combined sample and the labeled sub-combined sample corresponding to the current preset sampling data.
9. The model training method according to claim 6, wherein the medical image sequence sample set comprises a plurality of medical image sequence samples, each of the medical image sequence samples comprises at least one of the lesion sampling regions, and the determining a preliminary sampling grid corresponding to each of the P lesion sampling regions based on P lesion sampling regions corresponding to the medical image sequence sample set comprises:
for each of the P lesion sampling areas,
determining the physical spatial resolution of a preliminary sampling grid corresponding to the current focus sampling region based on the image volume and the physical spatial resolution corresponding to the current focus sampling region;
isotropic processing is carried out on the physical spatial resolution of the preliminary sampling grid, and the spatial resolution of the preliminary sampling grid is determined;
determining a preliminary sampling grid corresponding to the focus sampling area based on the current focus sampling area, the spatial resolution of the preliminary sampling grid and the size of a preset grid;
and determining the initial sampling grids corresponding to the P lesion sampling areas respectively based on the initial sampling grid corresponding to each lesion sampling area.
10. The model training method of claim 9, wherein the lesion sampling region comprises a nodule lesion image, and the determining a preliminary sampling grid corresponding to the lesion sampling region based on the medical image sequence sample to which the current lesion sampling region belongs, the spatial resolution of the preliminary sampling grid, and a preset grid size comprises:
determining a sampling center point based on the current lesion sampling area, the sampling center point being located within the current lesion sampling area;
and determining a preliminary sampling grid corresponding to the focus sampling area based on the sampling central point, the medical image sequence sample to which the current focus sampling area belongs, the spatial resolution of the preliminary sampling grid and the size of a preset grid.
11. The model training method according to claim 5, wherein the current sub-combined sample comprises a currently unlabeled sub-combined sample and a currently labeled sub-combined sample, and the performing batch training on the initial network model or the nodule segmentation model obtained by the previous batch training based on the current sub-combined sample and determining the sub-loss result and the nodule segmentation model corresponding to the current sub-combined sample comprises:
inputting the current unlabeled sub-combination sample into the initial network model or the nodule segmentation model obtained by the last batch training to obtain the nodule probability and the nodule segmentation model of each pixel corresponding to the current unlabeled sub-combination sample;
detecting the nodule edge in the currently unlabeled sub-combined sample based on the currently labeled sub-combined sample by using an edge detection algorithm to obtain a gradient value of each pixel corresponding to the currently labeled sub-combined sample;
distributing loss weight to each pixel corresponding to the current unlabeled sub-combined sample based on the gradient value of each pixel corresponding to the current labeled sub-combined sample by using a Gaussian algorithm to obtain the loss weight of each pixel corresponding to the current unlabeled sub-combined sample;
determining a sub-loss result corresponding to the current sub-combined sample based on the nodule probability of each pixel corresponding to the current unlabeled sub-combined sample and the loss weight of each pixel corresponding to the current unlabeled sub-combined sample.
12. The model training method of any one of claims 1 to 11, wherein the S lesion attributes comprise: S1 preset nodule sizes and S2 preset nodule shapes.
13. An image segmentation method, comprising:
obtaining an optimal nodule segmentation model based on the model training method of any one of claims 1 to 12;
and performing nodule segmentation on the medical image blocks to be detected by using the optimal nodule segmentation model to obtain a characteristic probability map corresponding to the medical image blocks to be detected, wherein the characteristic probability map comprises the nodule segmentation probability of each pixel in the medical image blocks to be detected.
14. A model training apparatus, comprising:
a segmentation model training module configured to perform N-time sampling on P lesion sampling regions included in a medical image sequence sample set to obtain N combined samples, respectively train an initial network model based on the N combined samples to obtain loss results and a nodule segmentation model corresponding to the N-time training, wherein each combined sample corresponds to one sampling distribution weight, each combined sample includes lesion samples with S kinds of lesion attributes, the sampling distribution weights are ratios of the number of the lesion samples corresponding to the S kinds of lesion attributes to the total number of the lesion samples included in the combined sample, and P, N, S are positive integers;
and a segmentation model determination module configured to determine an optimal nodule segmentation model in the nodule segmentation models corresponding to the N times of training based on the loss results corresponding to the N times of training, where the nodule segmentation models corresponding to the N times of training include at least one non-over-fit nodule segmentation model, and the optimal nodule segmentation model is the non-over-fit nodule segmentation model corresponding to the smallest loss result in the loss results corresponding to the at least one non-over-fit nodule segmentation model.
15. An image segmentation apparatus, comprising:
a nodule segmentation model acquisition module configured to obtain an optimal nodule segmentation model based on the model training method according to any one of claims 1 to 12;
and the nodule segmentation module is configured to perform nodule segmentation on the medical image cut block to be detected by using the optimal nodule segmentation model to obtain a characteristic probability map corresponding to the medical image cut block to be detected, wherein the characteristic probability map comprises the nodule segmentation probability of each pixel in the medical image cut block to be detected.
16. A computer-readable storage medium, characterized in that the storage medium stores instructions that, when executed by a processor of an electronic device, enable the electronic device to perform the model training method of any of the preceding claims 1 to 12 and/or the image segmentation method of claim 13.
17. An electronic device, characterized in that the electronic device comprises:
a processor;
a memory for storing computer executable instructions;
the processor is configured to execute the computer-executable instructions to implement the model training method of any one of the preceding claims 1 to 12 and/or the image segmentation method of claim 13.
CN202111663382.2A 2021-12-30 2021-12-30 Model training method and device and image segmentation method and device Pending CN114332129A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111663382.2A CN114332129A (en) 2021-12-30 2021-12-30 Model training method and device and image segmentation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111663382.2A CN114332129A (en) 2021-12-30 2021-12-30 Model training method and device and image segmentation method and device

Publications (1)

Publication Number Publication Date
CN114332129A true CN114332129A (en) 2022-04-12

Family

ID=81020958

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111663382.2A Pending CN114332129A (en) 2021-12-30 2021-12-30 Model training method and device and image segmentation method and device

Country Status (1)

Country Link
CN (1) CN114332129A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115147420A (en) * 2022-09-05 2022-10-04 北方健康医疗大数据科技有限公司 Inter-slice correlation detection model training method, detection method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination