CN115081621A - Model training method, lesion segmentation apparatus, computer device, and medium - Google Patents
Model training method, lesion segmentation apparatus, computer device, and medium
- Publication number: CN115081621A (Application No. CN202210710434.5A)
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and not a legal conclusion; Google has not performed a legal analysis)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
Abstract
The application relates to a model training method, a lesion segmentation method and apparatus, a computer device, and a storage medium. The model training method is applied to lesion segmentation of PET images and comprises the following steps: acquiring a pre-trained neural network model, wherein the pre-trained neural network model is obtained by training on initial data; acquiring target task data; and training the pre-trained neural network model based on the target task data to obtain a target neural network model. The method addresses the high false positive rate, high missed detection rate, and low accuracy that a segmentation model suffers when little training data is available, thereby improving the accuracy of the segmentation model.
Description
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a model training method, a lesion segmentation apparatus, a computer device, and a medium.
Background
Existing deep-learning-based medical image segmentation algorithms face the problem that lesion pixels account for only a small fraction of the pixels in a whole Positron Emission Tomography (PET) image sequence. For a deep learning algorithm to achieve a low false positive rate, a low missed detection rate, and high accuracy, the training data set must contain a large amount of effective data, and the data for a single disease category must be sufficient to enlarge the volume of positive samples. Meanwhile, medical imaging systems of different models and different reconstruction parameters produce images with different characteristics, so existing data cannot be fully exploited.
No effective solution has yet been proposed for the problems in the related art of scarce available training data and the resulting high false positive rate, high missed detection rate, and low accuracy of segmentation models.
Disclosure of Invention
The present embodiments provide a model training method, a lesion segmentation method and apparatus, a computer device, and a medium, so as to solve the problems in the related art of scarce available training data and the resulting high false positive rate, high missed detection rate, and low accuracy of segmentation models.
In a first aspect, in this embodiment, a model training method is provided, which is applied to lesion segmentation of a PET image, and the method includes:
acquiring a pre-training neural network model, wherein the pre-training neural network model is obtained based on initial data training;
acquiring target task data;
and training the pre-training neural network model based on the target task data to obtain a target neural network model.
In some of these embodiments, said training said pre-trained neural network model based on said target task data comprises:
comparing the target task data with the initial data;
preprocessing the target task data based on a comparison result to obtain training data;
training the pre-trained neural network model based on the training data.
In some of these embodiments, said comparing said target task data to said initial data comprises:
determining the degree of coincidence between the target task data and the initial data;
and comparing the degree of coincidence with a preset threshold to obtain a comparison result.
In some embodiments, the preprocessing the target task data based on the comparison result to obtain training data includes:
if the degree of coincidence is greater than the preset threshold, eliminating data in the target task data that does not coincide with the initial data to obtain the training data;
and if the degree of coincidence is not greater than the preset threshold, interpolating the target task data into the initial data to obtain the training data.
In some of these embodiments, said comparing said target task data to said initial data comprises:
and comparing the distribution of the target task data with that of the initial data, wherein the distribution comprises at least one of the slice spacing of the images, the pixel size of each slice, and the maximum, minimum, mean, and variance of the pixel values of each image sequence.
In some embodiments, the training the pre-trained neural network model based on the target task data to obtain a target neural network model further includes:
determining task parameters based on the target task data;
determining a target output layer of a neural network model based on the task parameters;
and replacing the target output layer with the output layer of the pre-training neural network model to obtain the target neural network model.
In a second aspect, in this embodiment, a lesion segmentation method applied to a PET image is provided, the method including:
inputting the image to be segmented into a target neural network model to obtain a focus segmentation image, wherein the target neural network model is obtained by training through the model training method of the first aspect.
In a third aspect, there is provided in this embodiment a model training apparatus, the apparatus comprising:
the model acquisition module is used for acquiring a pre-training neural network model, and the pre-training neural network model is obtained based on initial data training;
the data acquisition module is used for acquiring target task data;
and the training module is used for training the pre-training neural network model based on the target task data to obtain a target neural network model.
In a fourth aspect, there is provided a computer device comprising a memory and a processor, wherein the memory stores a computer program, and wherein the processor implements the steps of the method according to the first or second aspect when executing the computer program.
In a fifth aspect, there is provided in this embodiment a computer readable storage medium having a computer program stored thereon, wherein the computer program when executed by a processor implements the steps of the method of the first or second aspect.
Compared with the related art, the model training method provided in this embodiment pre-trains a neural network model on initial data, so that the feature information of the initial data is used effectively, and then trains the pre-trained neural network model on the target task data to obtain a target neural network model. The pre-trained model adjusts its parameters according to the training requirements of the target task data, so a small amount of target task data is sufficient to train it effectively in a transfer learning manner. As a result, the false positive rate and missed detection rate of the target neural network model are reduced, and its accuracy in segmenting lesions in PET images is improved.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below to provide a more thorough understanding of the application.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a block diagram of a hardware structure of a terminal of a model training method according to an embodiment of the present disclosure.
Fig. 2 is a flowchart of a model training method according to an embodiment of the present disclosure.
Fig. 3 is a flowchart of training a target neural network model according to an embodiment of the present disclosure.
Fig. 4 is a block diagram of a model training apparatus according to an embodiment of the present application.
Detailed Description
For a clearer understanding of the objects, technical solutions and advantages of the present application, reference is made to the following description and accompanying drawings.
Unless defined otherwise, technical or scientific terms used herein shall have the same general meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The use of the terms "a" and "an" and "the" and similar referents in the context of this application do not denote a limitation of quantity, either in the singular or the plural. The terms "comprises," "comprising," "has," "having," and any variations thereof, as referred to in this application, are intended to cover non-exclusive inclusions; for example, a process, method, and system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to the listed steps or modules, but may include other steps or modules (elements) not listed or inherent to such process, method, article, or apparatus. Reference throughout this application to "connected," "coupled," and the like is not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. Reference to "a plurality" in this application means two or more. "and/or" describes the association relationship of the associated object, indicating that there may be three relationships, for example, "a and/or B" may indicate: a exists alone, A and B exist simultaneously, and B exists alone. In general, the character "/" indicates a relationship in which the objects associated before and after are an "or". The terms "first," "second," "third," and the like in this application are used for distinguishing between similar items and not necessarily for describing a particular sequential or chronological order.
The method embodiments provided in the present embodiment may be executed in a terminal, a computer, or a similar computing device. For example, the method is executed on a terminal, and fig. 1 is a block diagram of the hardware structure of the terminal for the model training method provided in the embodiment of the present application. As shown in fig. 1, the terminal may include one or more processors 102 (only one is shown in fig. 1) and a memory 104 for storing data, wherein the processors 102 may include, but are not limited to, a processing device such as a microcontroller unit (MCU) or a programmable logic device such as a field-programmable gate array (FPGA). The terminal may also include a transmission device 106 for communication functions and an input-output device 108. It will be understood by those of ordinary skill in the art that the structure shown in fig. 1 is merely an illustration and is not intended to limit the structure of the terminal described above. For example, the terminal may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1.
The memory 104 can be used for storing computer programs, for example, software programs and modules of application software, such as a computer program corresponding to the model training method in the present embodiment, and the processor 102 executes various functional applications and data processing by running the computer programs stored in the memory 104, so as to implement the above-mentioned method. The memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some embodiments, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or transmit data via a network. The network described above includes a wireless network provided by a communication provider of the terminal. In one embodiment, the transmission device 106 includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one embodiment, the transmission device 106 may be a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
In this embodiment, a model training method is provided, which is applied to lesion segmentation of a PET image, and an execution subject of the method may be an electronic device, and optionally, the electronic device may be a server or a terminal device, but the application is not limited thereto.
Fig. 2 is a flowchart of a model training method provided in an embodiment of the present application, and as shown in fig. 2, the flowchart includes the following steps:
step S201, a pre-training neural network model is obtained.
The pre-training neural network model is obtained based on initial data training.
Illustratively, the neural network model is trained according to the initial data to obtain a pre-trained neural network model.
Specifically, the initial data may be a large number of labeled PET images, and the initial data may be used to train the neural network model to obtain the target network model, where the type of label in the initial data determines the role of the target network model. For example, the initial data may be a PET image labeled with a lesion type, and the initial data may be used to train a neural network model to obtain a lesion classification model, which is used to classify a lesion of the PET image to be classified. The initial data may also be a PET image marked with a lesion position, and the initial data may be used to train the neural network model to obtain a lesion segmentation model, which is used to segment the lesion position of the PET image to be segmented.
It should be noted that the embodiments of the present application are described only with these two examples: a labeling type of lesion category, in which case the target network model classifies lesions in PET images, and a labeling type of lesion position, in which case the target network model segments lesion positions in PET images.
It should be noted that the neural network model in the embodiments of the present application may be one or more of a U-Net, a Convolutional Neural Network (CNN), a Generative Adversarial Network (GAN), a Recurrent Neural Network (RNN), and a Deep Residual Network (DRN), and may also be another type of neural network, which is not limited herein.
Step S202, target task data is obtained.
Illustratively, the target task data may be a small number of PET images labeled with a lesion location, which may be used to train a pre-trained neural network model, resulting in a lesion segmentation network model. For example, the target task data is a small number of PET images labeled with gastric lesions.
Specifically, a volunteer's body is scanned with a PET device to obtain PET images, and medical staff then annotate the gastric lesions in these PET images to obtain the target task data. Manual annotation of images is inefficient and yields a small amount of annotated data, so the number of annotated gastric-lesion PET images obtained in this way is small; that is, the data volume of the target task data is small.
It should be noted that the embodiments of the present application take gastric lesions only as an example; in practical applications, the lesion may also be a lung lesion, a pancreatic lesion, or another type of lesion, which is not limited herein.
Step S203, training the pre-trained neural network model based on the target task data to obtain the target neural network model.
Illustratively, the pre-trained neural network model is trained on a small number of PET images annotated with lesion positions to obtain a lesion segmentation network model, which is used to segment lesions in a PET image to be segmented. During training, some of the parameters of the pre-trained neural network model are adjusted to obtain the target neural network model, that is, the lesion segmentation network model.
Specifically, the pre-trained neural network model is trained on a small number of PET images annotated with gastric lesions to obtain a gastric lesion segmentation model, which can be used to segment gastric lesions in a PET image to be segmented.
Illustratively, fig. 3 is a flowchart of a training process of a target neural network model provided in an embodiment of the present application, and as shown in fig. 3, the process includes the following steps:
step 301: initial data is acquired.
Specifically, a large amount of initial data with a label is acquired, and the initial data may be a PET image with a lesion type label or a PET image with a lesion position label.
Step 302: and training the neural network model based on the initial data to obtain a pre-trained neural network model.
Specifically, if the initial data is a PET image labeled with a lesion type, the obtained pre-trained neural network model is a lesion classification model, and the lesion classification model is used for performing lesion classification on the PET image to be classified.
If the initial data is the PET image marked with the focus position, the obtained pre-trained neural network model is a focus segmentation model, and the focus segmentation model is used for performing focus position segmentation on the PET image to be segmented.
Step 303: and acquiring target task data.
Specifically, the target task data may be a small number of PET images annotated with lesion positions, and can be used to train the pre-trained neural network model to obtain a lesion segmentation network model for lesion segmentation.
Step 304: and training the pre-training neural network model according to the target task data.
Specifically, when the pre-trained neural network model is trained on the target task data, the training can be implemented through steps 3041 to 3043.
Step 3041: part of the network structure and parameters are frozen.
Specifically, the network structure and parameters of the pre-trained neural network model are obtained, and part of the network structure and parameters are frozen.
Step 3042: the unfrozen network structure and parameters are trained.
Specifically, the unfrozen network structure and parameters are trained according to target task data.
Step 3043: and updating the structure and parameters of the output layer according to the training requirements of the target task data.
Specifically, since the training requirement of the target task data differs from the function of the pre-trained neural network model, the structure and parameters of the output layer of the pre-trained neural network model are updated according to the requirement of the target task data; that is, the structure and parameters of the output layer are trained on the target task data.
Step 305: and obtaining a target neural network model.
Specifically, the target neural network model is obtained from the training results of steps 3041 to 3043. If the target task data is a small number of PET images annotated with lesion positions, the target neural network model is used to segment lesion positions in a PET image to be segmented.
As an embodiment, the network structures of all convolutional layers in the pre-trained neural network model are frozen, the fully connected layer of the network is trained using the target task data, and some of the model parameters are adjusted to obtain the target neural network model.
As another embodiment, part of the convolutional layers in the pre-trained neural network model are frozen, the remaining convolutional layers and the fully connected layers are trained using the target task data, and some of the model parameters are adjusted to obtain the target neural network model.
As another embodiment, the feature vectors obtained by inputting the initial data into the pre-trained neural network model are determined, and a preset fully connected layer is trained on the target task data and these feature vectors to obtain the target neural network model, wherein the preset fully connected layer is used to execute the target task, and the target task can also be understood as the training requirement of the model training. For example, if the target task data is a small number of PET images annotated with gastric lesions, the preset fully connected layer is used to output gastric lesion images.
It should be noted that the embodiments of the present application are described only with the above examples: freezing all convolutional layers and training the fully connected layer; freezing part of the convolutional layers and training the remaining convolutional layers and the fully connected layer; and fixing the feature vectors obtained from the pre-trained neural network model and training a fully connected layer on those feature vectors.
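Illustratively, and purely as a non-limiting sketch, the freezing strategies described above may be expressed in PyTorch-style code as follows; the framework, layer types, loss function, and hyperparameters are assumptions made only for illustration and are not part of the present disclosure.

```python
import torch
import torch.nn as nn

def finetune(pretrained_model: nn.Module, train_loader, freeze_all_conv: bool = True,
             lr: float = 1e-4, epochs: int = 10) -> nn.Module:
    """Fine-tune a pre-trained network on the target task data.

    If freeze_all_conv is True, every convolutional layer is frozen and only the
    remaining layers (e.g. the fully connected / output head) are trained;
    otherwise only part of the convolutional layers is frozen.
    """
    conv_layers = [m for m in pretrained_model.modules() if isinstance(m, nn.Conv2d)]
    to_freeze = conv_layers if freeze_all_conv else conv_layers[: len(conv_layers) // 2]
    for layer in to_freeze:
        for p in layer.parameters():
            p.requires_grad = False          # frozen parameters keep their pre-trained values

    trainable = [p for p in pretrained_model.parameters() if p.requires_grad]
    optimizer = torch.optim.Adam(trainable, lr=lr)
    criterion = nn.BCEWithLogitsLoss()       # placeholder loss for a binary lesion mask

    pretrained_model.train()
    for _ in range(epochs):
        for images, masks in train_loader:   # target task data, e.g. gastric-lesion PET slices
            optimizer.zero_grad()
            loss = criterion(pretrained_model(images), masks)
            loss.backward()
            optimizer.step()
    return pretrained_model
```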
In the above implementation, a neural network model is first trained on a large amount of initial data to obtain a pre-trained neural network model. A small number of PET images annotated with lesion positions are then used as input, the training task is determined from the annotation type, and the pre-trained neural network model is trained to obtain a target neural network model suited to the training requirement, for example lesion segmentation of PET images. Because only part of the network structure and parameters of the pre-trained model are adjusted during training, rather than the whole structure and all parameters, less training data and less training time are needed and training efficiency is improved. Training the pre-trained neural network model with the target task data by means of transfer learning thus yields a target neural network model with an effectively reduced false positive rate and missed detection rate and improved accuracy.
In some of these embodiments, training the pre-trained neural network model based on the target task data may include the steps of:
step 1: and comparing the target task data with the initial data.
Illustratively, the image features of the small number of PET images annotated with gastric lesions are compared with those of the large number of annotated PET images to obtain the image feature similarity between the two sets of images, that is, the comparison result.
Step 2: and preprocessing the target task data based on the comparison result to obtain training data.
Illustratively, the small number of PET images annotated with gastric lesions are preprocessed according to the comparison result so as to increase their image feature similarity with the large number of annotated PET images, yielding the gastric-lesion PET images to be used for training.
It should be noted that, in the embodiments of the present application, the preprocessing may include at least one of bias field correction, image resampling, random cropping, flipping, and data normalization, but the present application is not limited thereto; an illustrative sketch of such preprocessing is given below.
And step 3: the pre-trained neural network model is trained based on the training data.
Illustratively, the pre-trained neural network model is trained using the gastric-lesion PET images to be trained, and part of its network structure and parameters are adjusted to obtain the target neural network model, which is used to segment gastric lesions in PET images.
In the above implementation, the image feature similarity between the target task data and the initial data is determined by comparing the two; the target task data is then preprocessed to obtain training data whose image features are more similar to those of the initial data; and the pre-trained neural network model is trained on this training data to obtain the target neural network model. This reduces the false positive rate and missed detection rate of the target neural network model and improves its accuracy in segmenting lesions in PET images.
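Illustratively, and only as an assumed sketch, the resampling, random cropping, flipping, and normalization steps mentioned above could be implemented as follows; the target voxel spacing and crop size are assumed values, and bias field correction is omitted because it normally relies on a dedicated toolkit.

```python
import numpy as np

def preprocess(volume, spacing, target_spacing=(2.0, 2.0, 2.0),
               crop_size=(96, 96, 96), rng=None):
    """Resample, randomly crop/flip, and z-score normalize a PET volume of shape (z, y, x)."""
    rng = np.random.default_rng() if rng is None else rng

    # Nearest-neighbour resampling to the target voxel spacing (simplified).
    factors = [s / t for s, t in zip(spacing, target_spacing)]
    idx = [np.clip((np.arange(int(n * f)) / f).astype(int), 0, n - 1)
           for n, f in zip(volume.shape, factors)]
    volume = volume[np.ix_(*idx)]

    # Random crop to a fixed patch size (assumes the volume is at least crop_size).
    starts = [rng.integers(0, max(n - c, 0) + 1) for n, c in zip(volume.shape, crop_size)]
    volume = volume[tuple(slice(s, s + c) for s, c in zip(starts, crop_size))]

    # Random left-right flip.
    if rng.random() < 0.5:
        volume = volume[:, :, ::-1]

    # Z-score normalization.
    return (volume - volume.mean()) / (volume.std() + 1e-8)
```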
In some of these embodiments, comparing the target task data to the initial data may include the steps of:
Step 1: determining the degree of coincidence between the target task data and the initial data.
Step 2: comparing the degree of coincidence with a preset threshold to obtain a comparison result.
Illustratively, the small number of PET images annotated with gastric lesions are compared with the large number of annotated PET images, and the degree of coincidence between the two sets of images is determined.
The degree of coincidence may refer to the percentage of the target task data that falls within the distribution range of the initial data, to the ratio of the data volume of the target task data to that of the initial data, or to another quantity, which is not limited herein.
Further, the degree of coincidence is compared with a preset threshold to obtain the comparison result.
In the above implementation, the degree of coincidence between the target task data and the initial data effectively reflects the similarity between their data features.
In some embodiments, preprocessing the target task data based on the comparison result to obtain the training data may include the following steps:
Step 1: if the degree of coincidence is greater than the preset threshold, eliminating data in the target task data that does not coincide with the initial data to obtain the training data.
Step 2: if the degree of coincidence is not greater than the preset threshold, interpolating the target task data into the initial data to obtain the training data.
Illustratively, taking the preset threshold as 60%: if the degree of coincidence between the target task data and the initial data is greater than 60%, data in the target task data that does not coincide with the initial data is eliminated to obtain the training data.
If the degree of coincidence between the target task data and the initial data is not greater than 60%, the per-slice pixel size and the slice spacing of the annotated gastric-lesion PET images are adjusted by interpolation to fall within the distribution range of the large number of annotated PET images, thereby obtaining the training data.
It should be noted that the preset threshold of 60% is only an example; in practical applications, the preset threshold may also be 70% or 75%, or may be set according to the training requirement, which is not limited herein.
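Illustratively, the decision between rejection and interpolation could be sketched as follows; the concrete definition of the degree of coincidence used here (the fraction of target volumes whose maximum pixel value lies within the range spanned by the initial data) and the resampling target are assumptions made only for illustration.

```python
import numpy as np
from scipy.ndimage import zoom

def build_training_data(target_images, initial_images, threshold=0.6):
    """Preprocess target task data according to its degree of coincidence with the initial data.

    Both arguments are lists of 3-D numpy volumes. The coincidence measure and
    the interpolation target below are illustrative assumptions, not fixed by the text.
    """
    lo = min(v.max() for v in initial_images)
    hi = max(v.max() for v in initial_images)
    inside = [lo <= v.max() <= hi for v in target_images]
    overlap = sum(inside) / len(target_images)

    if overlap > threshold:
        # Sufficient overlap: discard target volumes outside the initial distribution.
        return [v for v, ok in zip(target_images, inside) if ok]

    # Insufficient overlap: interpolate each target volume to the median grid size of the
    # initial data, i.e. bring its slice spacing and per-slice pixel size into range.
    ref_shape = np.median([v.shape for v in initial_images], axis=0)
    return [zoom(v, ref_shape / np.array(v.shape), order=1) for v in target_images]
```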
In the above implementation, the target task data is processed according to the degree of coincidence and the preset threshold to obtain training data with higher data feature similarity to the initial data. Specifically, when the degree of coincidence is greater than the preset threshold, data in the target task data that does not coincide with the initial data, i.e. data whose features are dissimilar to the initial data, is removed to obtain the training data. When the degree of coincidence is not greater than the preset threshold, the target task data is interpolated into the initial data, so that data feature information shared with the initial data is introduced and the training data is obtained. In both cases, the data feature similarity between the training data and the initial data is improved.
In some of these embodiments, comparing the target task data to the initial data may include:
and comparing the target task data with the distribution condition of the initial data, wherein the distribution condition comprises at least one of the layer spacing of the image, the pixel size of each layer, the maximum value, the minimum value, the average value and the variance of the pixels of each image sequence.
Illustratively, the distributions of the PET images annotated with gastric lesions and of the large number of annotated PET images are determined separately, where each distribution may include at least one of the slice spacing of the images, the pixel size of each slice, and the maximum, minimum, mean, and variance of the pixel values of each image sequence.
As an embodiment, the PET images annotated with gastric lesions are acquired under the same acquisition conditions as the large number of annotated PET images. The image resolution is determined from the slice spacing, slice thickness, and pixel size of the annotated PET images and of the gastric-lesion PET images, and the degree of coincidence between the two sets of images is then determined from the image resolutions.
Specifically, the maximum pixel value of each image sequence in the PET images annotated with gastric lesions is calculated, and the average A of these per-sequence maxima is then calculated. Likewise, the maximum pixel value of each image sequence in the annotated PET images is calculated, and their average B is obtained. The degree of coincidence N is then determined as the ratio of the average A to the average B, that is, N = A/B.
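Illustratively, this particular coincidence measure, the ratio N = A/B of the mean per-sequence pixel maxima, could be computed as in the following assumed sketch; the data representation (a list of arrays per sequence) is an assumption for illustration.

```python
import numpy as np

def coincidence_ratio(target_sequences, initial_sequences):
    """Compute N = A / B, where A is the mean of the per-sequence pixel maxima of the
    target task data and B is the same statistic for the initial data."""
    a = np.mean([np.max(seq) for seq in target_sequences])
    b = np.mean([np.max(seq) for seq in initial_sequences])
    return a / b

# Example usage: n = coincidence_ratio(gastric_lesion_sequences, annotated_sequences)
```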
In the embodiment of the present application, the determination of the degree of coincidence of the images is performed only by taking the average value of the maximum pixel values of the image sequences as an example, and in practical applications, the degree of coincidence of the images may also be determined according to one or more combinations of the layer spacing, the layer thickness, the pixel value size of each layer, the minimum pixel value of each image sequence, and the variance of the images, which is not limited herein.
The target task data is then preprocessed according to the degree of coincidence N and the preset threshold, thereby obtaining the training data.
In the above implementation, the distributions of the target task data and of the initial data are determined from at least one of the slice spacing, the per-slice pixel size, and the per-sequence maximum, minimum, mean, and variance of the pixel values, so that the two distributions are effectively quantified and the data feature similarity between the target task data and the initial data can be determined from the quantified distributions.
In some embodiments, training the pre-trained neural network model based on the target task data to obtain the target neural network model may further include:
step 1: task parameters are determined based on the target task data.
Step 2: a target output layer of the neural network model is determined based on the task parameters.
And step 3: and replacing the target output layer with the output layer of the pre-training neural network model to obtain the target neural network model.
Illustratively, the task parameters, that is, the target task to be trained, are determined from the target task data; for example, if the target task data is PET images annotated with gastric lesions, the target task is gastric lesion segmentation of PET images.
Further, an output layer of the neural network model is adjusted according to the task parameters to obtain a target output layer, wherein the output layer of the neural network model can be used for outputting the target task.
Specifically, the output layer of the neural network model is adjusted according to the target task to obtain the target output layer, and the target output layer is used to output a gastric lesion segmentation image.
And further, replacing the target output layer with an output layer in the pre-training neural network model to obtain the target neural network model.
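Illustratively, replacing the output layer could be sketched as follows, assuming a PyTorch model whose final layer is an attribute named `out`; the attribute name and the 1×1 convolutional head are assumptions made only for illustration.

```python
import torch.nn as nn

def replace_output_layer(pretrained_model: nn.Module, num_target_classes: int) -> nn.Module:
    """Swap the output layer of a pre-trained model for one matched to the target task,
    e.g. a 1x1 convolution producing one channel per lesion class to segment."""
    in_channels = pretrained_model.out.in_channels   # `out` is the assumed name of the old head
    pretrained_model.out = nn.Conv2d(in_channels, num_target_classes, kernel_size=1)
    return pretrained_model
```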
In the above implementation, the task parameters are determined from the target task data so that the target task is identified; the output layer of the neural network model is adjusted according to the task parameters to obtain a target output layer capable of executing the target task; and the target output layer replaces the output layer of the pre-trained neural network model to obtain the target neural network model, so that the target neural network model fits the training requirement.
In this embodiment, a lesion segmentation method applied to PET images is further provided. The method includes: inputting an image to be segmented into a target neural network model to obtain a lesion segmentation image, wherein the target neural network model can be obtained by training with the model training method shown in fig. 2.
Specifically, the image to be segmented may be obtained by scanning the body of the patient through a PET device, and further, the image to be segmented is input into the target neural network model trained by the model training method provided in any of the embodiments, so as to obtain the lesion segmentation image.
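Illustratively, inference with the trained target neural network model might look like the following assumed sketch, which presumes a single-channel 2-D slice and a 0.5 probability cut-off.

```python
import torch

@torch.no_grad()
def segment_lesion(model, image_slice):
    """Return a binary lesion mask for one PET slice (H, W) using the trained model."""
    model.eval()
    x = torch.as_tensor(image_slice, dtype=torch.float32)[None, None]  # shape (1, 1, H, W)
    prob = torch.sigmoid(model(x))[0, 0]
    return (prob > 0.5).cpu().numpy()
```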
In the above implementation, the target neural network model obtained by training based on the transfer learning method is used to segment lesions in the image to be segmented, which can effectively improve the accuracy of lesion segmentation.
It should be noted that the steps illustrated in the above-described flow diagrams or in the flow diagrams of the figures may be performed in a computer system, such as a set of computer-executable instructions, and that, although a logical order is illustrated in the flow diagrams, in some cases, the steps illustrated or described may be performed in an order different than here.
In this embodiment, a model training apparatus is further provided. The apparatus is used to implement the above embodiments and preferred implementations, and what has already been described is not repeated here. The terms "module," "unit," "subunit," and the like as used below may implement a combination of software and/or hardware for a predetermined function. Although the apparatus described in the following embodiments is preferably implemented in software, an implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
Fig. 4 is a block diagram of a model training apparatus according to an embodiment of the present application, and as shown in fig. 4, the apparatus includes:
a model acquisition module 401, configured to acquire a pre-trained neural network model, where the pre-trained neural network model is obtained by training on initial data;
a data obtaining module 402, configured to obtain target task data;
the training module 403 is configured to train the pre-trained neural network model based on the target task data to obtain a target neural network model.
In some embodiments, the training module 403 is specifically configured to:
comparing the target task data with the initial data;
preprocessing the target task data based on the comparison result to obtain training data;
the pre-trained neural network model is trained based on the training data.
In some embodiments, the training module 403 is specifically configured to:
determining the degree of coincidence between the target task data and the initial data;
and comparing the degree of coincidence with a preset threshold to obtain a comparison result.
In some embodiments, the training module 403 is specifically configured to:
if the degree of coincidence is greater than the preset threshold, eliminating data in the target task data that does not coincide with the initial data to obtain training data;
and if the degree of coincidence is not greater than the preset threshold, interpolating the target task data into the initial data to obtain training data.
In some embodiments, the training module 403 is specifically configured to:
and comparing the distribution of the target task data with that of the initial data, where the distribution includes at least one of the slice spacing of the images, the pixel size of each slice, and the maximum, minimum, mean, and variance of the pixel values of each image sequence.
In some of these embodiments, training module 403 is further configured to:
determining task parameters based on the target task data;
determining a target output layer of the neural network model based on the task parameters;
and replacing the target output layer with the output layer of the pre-training neural network model to obtain the target neural network model.
The above modules may be functional modules or program modules, and may be implemented by software or hardware. For a module implemented by hardware, the modules may be located in the same processor; or the modules can be respectively positioned in different processors in any combination.
There is also provided in this embodiment an electronic device comprising a memory having a computer program stored therein and a processor configured to execute the computer program to perform the steps of any of the method embodiments described above.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
and S1, obtaining a pre-training neural network model, wherein the pre-training neural network model is obtained based on initial data training.
And S2, acquiring target task data.
And S3, training the pre-trained neural network model based on the target task data to obtain the target neural network model.
It should be noted that, for specific examples in this embodiment, reference may be made to the examples described in the foregoing embodiments and optional implementations, and details are not described again in this embodiment.
In addition, in combination with the model training method provided in the foregoing embodiment, a storage medium may also be provided in this embodiment. The storage medium having stored thereon a computer program; the computer program, when executed by a processor, implements any of the model training methods in the above embodiments.
Illustratively, the above model training method and the above lesion segmentation method may be applied to the following lesion recognition method, specifically, the lesion recognition method includes:
step 1, based on the acquired first image, a first target organ in the first image is sketched.
In this embodiment, the first image is an anatomical image, which may be an X-ray image, a CT image, or an MR image. Firstly, an MR sequence, a CT sequence or an X-ray sequence is obtained by scanning through corresponding scanning equipment, and then the MR sequence, the CT sequence or the X-ray sequence is reconstructed to obtain an MR image, a CT image or an X-ray image.
The liver is usually chosen as the first target organ because it provides a good reference. It should be noted that, in some other embodiments, another organ may be used as the first target organ and delineated.
In this embodiment, a deep learning algorithm or other recognition algorithms may be used to recognize the first target organ in the first image, and the first target organ may be delineated after being recognized.
And 2, determining a second target organ corresponding to the first target organ in the acquired second image based on the first target organ.
In this embodiment, the second image is a PET image, the second image and the first image are images of different modalities, and the detection time and the detection location of the second image and the first image are the same or partially the same.
Specifically, the images of different modalities are medical images that can provide information from a plurality of layers due to different imaging mechanisms.
Based on the first target organ having been determined in the first image, registering the first image with the second image may result in a second target organ corresponding to the first target organ.
Taking the liver as an example, the region of the liver in the second image, i.e. the second target organ, may be obtained by registering the first image with the second image.
And 3, determining a Standardized Uptake Value (SUV) threshold based on at least one SUV sample value in the second target organ.
At least one SUV sample value in the second target organ is acquired and processed to obtain the SUV threshold, which is more accurate than an SUV threshold obtained by manual VOI delineation or taken from an empirical value.
And 4, determining a lesion area in the second image based on the SUV threshold.
After the SUV threshold is determined, a lesion area in the second image is automatically delineated by an automatic delineation method.
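Illustratively, automatic delineation by thresholding could be sketched as follows; treating connected components above the SUV threshold as candidate lesion regions, and the minimum component size, are assumptions made only for illustration.

```python
import numpy as np
from scipy import ndimage

def delineate_lesion(suv_volume: np.ndarray, suv_threshold: float, min_voxels: int = 10):
    """Delineate candidate lesion regions as connected components of voxels whose SUV
    exceeds the threshold; components smaller than min_voxels are discarded."""
    mask = suv_volume > suv_threshold
    labeled, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labeled, index=range(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_voxels]
    return np.isin(labeled, keep)
```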
Based on the above steps 1-4, a first target organ in the first image is delineated based on the acquired first image, a second target organ corresponding to the first target organ in the acquired second image is determined based on the first target organ, an SUV threshold is determined based on at least one SUV sample value in the second target organ, and a lesion area in the second image is determined based on the SUV threshold. The lesion area is thereby determined accurately, which solves the technical problem of inaccurate determination of the lesion area in the prior art.
In an embodiment, based on the acquired first image, the method for delineating the first target organ in the first image comprises:
step 1, inputting a first image into a trained organ classification model, and determining a first target organ;
and 2, delineating the first target organ.
The organ classification model can be obtained by training based on the model training method.
Specifically, the training of the organ classification model comprises the following steps:
step 1, obtaining a pre-training neural network model.
The pre-training neural network model is obtained based on initial data training.
Illustratively, the neural network model is trained according to the initial data to obtain a pre-trained neural network model.
Specifically, the initial data may be a large number of labeled PET images, and the initial data may be used to train the neural network model to obtain the target network model, where the type of label in the initial data determines the role of the target network model. For example, the initial data may be a PET image labeled with a lesion type, and the initial data may be used to train a neural network model to obtain a lesion classification model, which is used to classify a lesion of the PET image to be classified. The initial data may also be a PET image marked with a lesion position, and the initial data may be used to train the neural network model to obtain a lesion segmentation model, which is used to segment the lesion position of the PET image to be segmented.
It should be noted that the embodiments of the present application are described only with these two examples: a labeling type of lesion category, in which case the target network model classifies lesions in PET images, and a labeling type of lesion position, in which case the target network model segments lesion positions in PET images.
It should be noted that the neural network model in the embodiments of the present application may be one or more of a U-Net, a Convolutional Neural Network (CNN), a Generative Adversarial Network (GAN), a Recurrent Neural Network (RNN), and a Deep Residual Network (DRN), and may also be another type of neural network, which is not limited herein.
And 2, acquiring target task data.
Illustratively, the target task data may be a small number of PET images labeled with a lesion location, which may be used to train a pre-trained neural network model, resulting in a lesion segmentation network model. For example, the target task data is a small number of PET images labeled with gastric lesions.
Specifically, a volunteer's body is scanned with a PET device to obtain PET images, and medical staff then annotate the liver lesions in these PET images to obtain the target task data. Manual annotation of images is inefficient and yields a small amount of annotated data, so the number of annotated liver-lesion PET images obtained in this way is small; that is, the data volume of the target task data is small.
It should be noted that the embodiments of the present application take liver lesions only as an example; in practical applications, the lesion may also be a lung lesion, a pancreatic lesion, or another type of lesion, which is not limited herein.
And 3, training the pre-training neural network model based on the target task data to obtain an organ classification model.
Illustratively, the pre-trained neural network model is trained on a small number of PET images annotated with lesion positions to obtain a lesion segmentation network model, which is used to segment lesions in a PET image to be segmented. During training, some of the parameters of the pre-trained neural network model are adjusted to obtain the organ classification model.
Taking the liver region as the first target organ as an example, a plurality of sample images containing the liver are acquired and used to train the organ classification model with a deep learning algorithm, so that the model can automatically identify the liver in an image. The first image is input into the trained organ classification model to determine the liver region.
In an embodiment, the SUV threshold is determined based on at least one SUV sample value in the second target organ and a comparison coefficient, where the comparison coefficient is determined as the ratio of the mean SUV of the remaining image, obtained after non-target organs are removed from the second image, to the mean SUV of the second image.
Specifically, the organs in the first image are first segmented using a deep learning algorithm or another recognition algorithm and the segmented organ information is stored; the first image is then registered with the second image, non-target organs are removed, and the comparison coefficient is determined from the mean SUV of the remaining image and that of the second image. The SUV threshold is then determined based on at least one SUV sample value in the second target organ and the comparison coefficient; an accurate SUV threshold is obtained in this way.
Note that a non-target organ refers to an organ without a lesion, and a target organ refers to an organ containing a lesion. In an embodiment, determining the SUV threshold based on the at least one SUV sample value in the second target organ comprises the following steps:
Step 1, determining the mean SUV and the standard deviation of the second target organ based on the at least one SUV sample value;
and Step 2, determining the SUV threshold based on the mean SUV, the standard deviation, and the comparison coefficient of the second target organ.
Specifically, the SUV threshold is determined by the following equation:
threshold = weight × (SUV_mean + n × SUV_SD)
where threshold denotes the SUV threshold; weight denotes the ratio of the mean SUV of the remaining image, obtained after non-target organs are removed from the second image, to the mean SUV of the second image; SUV_mean denotes the mean SUV of the second target organ; SUV_SD denotes the standard deviation of the at least one SUV sample value; and n denotes an adjustment coefficient, n > 0.
It should be noted that, in this embodiment, the mean SUV of the second target organ is added to n times the standard deviation, and the sum is then multiplied by the comparison coefficient, thereby obtaining an accurate SUV threshold.
The adjustment coefficient n is determined based on the focus type and the drug type corresponding to the focus type.
In an exemplary embodiment, PSMA (Prostate-Specific Membrane Antigen) drug imaging is particularly effective in prostate cancer diagnosis, and multiple trials have determined that when n is 3, the lesion region in the image can be determined most accurately.
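Illustratively, the threshold formula and the comparison coefficient could be computed as in the following assumed sketch, where the input arrays and the default n = 3 (the PSMA example above) are illustrative assumptions.

```python
import numpy as np

def suv_threshold(organ_suv_samples, remaining_image_suv, whole_image_suv, n: float = 3.0):
    """Compute threshold = weight * (SUV_mean + n * SUV_SD).

    organ_suv_samples   : SUV sample values taken inside the second target organ
    remaining_image_suv : SUV values of the second image after non-target organs are removed
    whole_image_suv     : SUV values of the whole second image
    n                   : adjustment coefficient (n > 0)
    """
    weight = np.mean(remaining_image_suv) / np.mean(whole_image_suv)
    suv_mean = np.mean(organ_suv_samples)
    suv_sd = np.std(organ_suv_samples)
    return weight * (suv_mean + n * suv_sd)
```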
In an embodiment, in case the determined lesion area does not meet the requirement, the adjustment coefficient n is adjusted until the requirement is met.
Specifically, if the finally determined lesion region is too large, the value of the adjustment coefficient n may be increased; if the finally determined lesion region is too small, the value of n may be reduced, so that the lesion region in the second image can finally be determined accurately.
The overall flow of the lesion identification method is as follows: the MR/CT sequence is reconstructed to obtain an MR/CT image, and the PET sequence is reconstructed to obtain a PET image. For the MR/CT image, the organs are segmented using the organ classification model, the segmented organ information is stored, and the first target organ is delineated based on the segmented organs. The MR/CT image and the PET image are registered using the organ information to obtain the registered second target organ, and the mean SUV, SUV_mean, is calculated from the registered second target organ and the PET image. After non-target organs are removed from the second image, the SUV threshold is calculated from the mean SUV of the remaining image and that of the second image, and the lesion area in the second image is determined based on the SUV threshold.
It should be understood that the specific embodiments described herein are merely illustrative of this application and are not intended to be limiting. All other embodiments, which can be derived by a person skilled in the art from the examples provided herein without any inventive step, shall fall within the scope of protection of the present application.
It is obvious that the drawings are only examples or embodiments of the present application, and it is obvious to those skilled in the art that the present application can be applied to other similar cases according to the drawings without creative efforts. Moreover, it should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another.
The term "embodiment" is used herein to mean that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the present application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is to be expressly and implicitly understood by one of ordinary skill in the art that the embodiments described in this application may be combined with other embodiments without conflict.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the patent protection. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.
Claims (10)
1. A model training method applied to lesion segmentation of a PET image, characterized by comprising the following steps:
acquiring a pre-trained neural network model, wherein the pre-trained neural network model is obtained by training based on initial data;
acquiring target task data;
and training the pre-trained neural network model based on the target task data to obtain a target neural network model.
2. The method of claim 1, wherein training the pre-trained neural network model based on the target task data comprises:
comparing the target task data with the initial data;
preprocessing the target task data based on a comparison result to obtain training data;
training the pre-trained neural network model based on the training data.
3. The method of claim 2, wherein comparing the target task data to the initial data comprises:
determining a degree of coincidence between the target task data and the initial data;
and comparing the degree of coincidence with a preset threshold value to obtain the comparison result.
4. The method of claim 3, wherein preprocessing the target task data based on the comparison result to obtain the training data comprises:
if the degree of coincidence is greater than the preset threshold value, removing, from the target task data, the data that does not coincide with the initial data to obtain the training data;
and if the degree of coincidence is not greater than the preset threshold value, interpolating the target task data into the initial data to obtain the training data.
5. The method of any one of claims 2 to 4, wherein comparing the target task data with the initial data comprises:
comparing the distribution of the target task data with the distribution of the initial data, wherein the distribution comprises at least one of the inter-slice spacing of the images, the pixel size of each slice, and the maximum pixel value, minimum pixel value, mean value and variance of each image sequence.
6. The method of claim 1, wherein training the pre-trained neural network model based on the target task data to obtain the target neural network model further comprises:
determining task parameters based on the target task data;
determining a target output layer of a neural network model based on the task parameters;
and replacing the output layer of the pre-trained neural network model with the target output layer to obtain the target neural network model.
7. A lesion segmentation method applied to a PET image, the method comprising:
inputting an image to be segmented into a target neural network model to obtain a lesion segmentation image, wherein the target neural network model is obtained by training with the model training method of any one of claims 1 to 6.
8. A model training apparatus, the apparatus comprising:
a model acquisition module, configured to acquire a pre-trained neural network model, wherein the pre-trained neural network model is obtained by training based on initial data;
a data acquisition module, configured to acquire target task data;
and a training module, configured to train the pre-trained neural network model based on the target task data to obtain a target neural network model.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements the model training method of any one of claims 1-6 or the lesion segmentation method of claim 7.
10. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the model training method of any one of claims 1 to 6 or the lesion segmentation method of claim 7.
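As a non-authoritative illustration of the training procedure recited in claims 1 to 6, the Python sketch below replaces the output layer of a pre-trained network with one sized for the target task and then fine-tunes it on the target task data. The coincidence-based preprocessing of claims 2 to 5 is omitted, and all names, layer assumptions and hyperparameters are hypothetical.

```python
import torch
import torch.nn as nn

def replace_output_layer(pretrained: nn.Sequential, num_target_classes: int) -> nn.Sequential:
    # Claim 6, simplified: determine a target output layer from the task
    # parameters (here just the number of target classes) and swap it in.
    layers = list(pretrained.children())
    in_features = layers[-1].in_features              # assumes the last layer is nn.Linear
    layers[-1] = nn.Linear(in_features, num_target_classes)
    return nn.Sequential(*layers)

def fine_tune(model: nn.Module, loader, epochs: int = 5, lr: float = 1e-4) -> nn.Module:
    # Claim 1, simplified: train the adapted model on the (preprocessed) target task data.
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for inputs, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(inputs), labels)
            loss.backward()
            optimizer.step()
    return model
```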
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210710434.5A CN115081621A (en) | 2022-06-22 | 2022-06-22 | Model training method, focus segmentation device, computer device, and medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115081621A true CN115081621A (en) | 2022-09-20 |
Family
ID=83254133
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210710434.5A Pending CN115081621A (en) | 2022-06-22 | 2022-06-22 | Model training method, focus segmentation device, computer device, and medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115081621A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117892139A (en) * | 2024-03-14 | 2024-04-16 | 中国医学科学院医学信息研究所 | Large language model training and using method based on interlayer comparison and related device |
CN117892139B (en) * | 2024-03-14 | 2024-05-14 | 中国医学科学院医学信息研究所 | Large language model training and using method based on interlayer comparison and related device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3979198A1 (en) | Image segmentation model training method and apparatus, computer device, and storage medium | |
CN109670532B (en) | Method, device and system for identifying abnormality of biological organ tissue image | |
US11593943B2 (en) | RECIST assessment of tumour progression | |
CN110310287B (en) | Automatic organ-at-risk delineation method, equipment and storage medium based on neural network | |
US20220092789A1 (en) | Automatic pancreas ct segmentation method based on a saliency-aware densely connected dilated convolutional neural network | |
US9959486B2 (en) | Voxel-level machine learning with or without cloud-based support in medical imaging | |
EP3611699A1 (en) | Image segmentation using deep learning techniques | |
US8494238B2 (en) | Redundant spatial ensemble for computer-aided detection and image understanding | |
CN110929728B (en) | Image region-of-interest dividing method, image segmentation method and device | |
CN113826143A (en) | Feature point detection | |
Masood et al. | Brain tumor localization and segmentation using mask RCNN. | |
US10878564B2 (en) | Systems and methods for processing 3D anatomical volumes based on localization of 2D slices thereof | |
US9311703B2 (en) | Method and system for categorizing heart disease states | |
CN114255235A (en) | Method and arrangement for automatic localization of organ segments in three-dimensional images | |
CN113327225A (en) | Method for providing airway information | |
US20210303930A1 (en) | Model training using fully and partially-annotated images | |
CN115081621A (en) | Model training method, focus segmentation device, computer device, and medium | |
Stough et al. | Regional appearance in deformable model segmentation | |
US20230115927A1 (en) | Systems and methods for plaque identification, plaque composition analysis, and plaque stability detection | |
CN116433976A (en) | Image processing method, device, equipment and storage medium | |
Sun et al. | Automatic thoracic anatomy segmentation on CT images using hierarchical fuzzy models and registration | |
Sree et al. | Ultrasound fetal image segmentation techniques: a review | |
CN111079617A (en) | Poultry identification method and device, readable storage medium and electronic equipment | |
CN115187521A (en) | Focus identification method, device, computer equipment and storage medium | |
Longuefosse et al. | Lung CT Synthesis Using GANs with Conditional Normalization on Registered Ultrashort Echo-Time MRI |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||