WO2020143309A1 - Segmentation model training method, OCT image segmentation method and apparatus, device and medium - Google Patents

Segmentation model training method, OCT image segmentation method and apparatus, device and medium

Info

Publication number
WO2020143309A1
WO2020143309A1 (application PCT/CN2019/117733)
Authority
WO
WIPO (PCT)
Prior art keywords
image
model
segmentation
updated
lesion
Prior art date
Application number
PCT/CN2019/117733
Other languages
French (fr)
Chinese (zh)
Inventor
吕彬
郭晏
吕传峰
谢国彤
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2020143309A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection

Definitions

  • the present application relates to the field of image detection, in particular to a segmentation model training method, an OCT image segmentation method, device, equipment, and medium.
  • an OCT image is an optical coherence tomography (Optical Coherence Tomography) image. OCT mainly uses the basic principle of the weak-coherence-light interferometer to detect the reflection and scattering signals of biological tissue at different depths for incident weakly coherent light, and then reconstructs an image of the internal structure of the biological tissue; it is a non-contact, non-invasive form of biological tissue tomography.
  • OCT imaging equipment is used in ophthalmology to assist doctors in observing the normal tissue structure of the posterior segment of the eye (such as the macula, optic disc, or retinal nerve fiber layer, etc.) and pathological changes.
  • in order to provide a more accurate imaging basis for the diagnosis of related ophthalmic diseases, the OCT images need to be segmented, so as to provide technical assistance for related medical procedures.
  • traditionally, segmentation techniques include histogram-based, boundary-based, or region-based methods, most of which build on image processing algorithms.
  • however, because OCT images are noisy and lesions vary in size and have irregular boundary contours, traditional segmentation techniques often fail to balance the completeness and accuracy of the lesions in the OCT image: some lesion details may be missing, or extraneous, useless information may be included, leading to unsatisfactory segmentation results.
  • Embodiments of the present application provide a segmentation model training method, device, computer equipment, and storage medium to address the low segmentation performance of existing segmentation models.
  • the embodiments of the present application provide an OCT image segmentation method, device, computer equipment, and storage medium to solve the problem of low lesion segmentation accuracy.
  • a segmentation model training method, including: acquiring a training sample image set, the training sample image set including an original OCT image and a gold standard image; inputting the original OCT image into a preset generator model for segmentation processing to obtain a first segmented image; comparing the first segmented image with the gold standard image through a preset discriminator model to obtain a comparison result, calculating a loss function of the generator model according to the comparison result, and updating the generator model according to the loss function; using the updated generator model to convert the first segmented image into a second segmented image, inputting the second segmented image and the gold standard image into the preset discriminator model, and updating the preset discriminator model according to a binary cross entropy to obtain an updated discriminator model; and iteratively training the updated generator model and the updated discriminator model until the loss function of the updated discriminator model converges, then stopping the iterative training, and determining the updated generator model after the iterative training is stopped as the image lesion segmentation model.
  • a segmentation model training device including:
  • a sample image set acquisition module for acquiring a training sample image set, the training sample image set including the original OCT image and the gold standard image;
  • a segmented image acquisition module, configured to input the original OCT image into a preset generator model for segmentation processing to obtain a first segmented image;
  • a generator update module, configured to compare the first segmented image with the gold standard image through a preset discriminator model to obtain a comparison result, calculate a loss function of the generator model according to the comparison result, and update the generator model according to the loss function;
  • a discriminator update module, configured to use the updated generator model to convert the first segmented image into a second segmented image, input the second segmented image and the gold standard image into the preset discriminator model, and update the preset discriminator model according to a binary cross entropy to obtain an updated discriminator model;
  • a lesion segmentation model training module, configured to iteratively train the updated generator model and the updated discriminator model until the loss function of the updated discriminator model converges, then stop the iterative training and determine the updated generator model after the iterative training is stopped as the image lesion segmentation model.
  • An OCT image segmentation method including:
  • the OCT image to be processed is input into an image lesion segmentation model for segmentation to obtain a lesion image, wherein the image lesion segmentation model is obtained by training using a segmentation model training method.
  • An OCT image segmentation device including:
  • a to-be-processed image acquisition module, configured to acquire an OCT image to be processed;
  • the lesion image acquisition module is configured to input the to-be-processed OCT image into an image lesion segmentation model for segmentation to obtain a lesion image, wherein the image lesion segmentation model is obtained by training using a segmentation model training method.
  • a computer device includes a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor; the processor implements the above segmentation model training method when executing the computer-readable instructions, or the processor implements the above OCT image segmentation method when executing the computer-readable instructions.
  • One or more readable storage media storing computer-readable instructions, which when executed by one or more processors, cause the one or more processors to perform the above-mentioned OCT image segmentation method.
  • FIG. 1 is a schematic diagram of an application environment of a segmentation model training method or an OCT image segmentation method provided by an embodiment of the present application;
  • FIG. 2 is an example diagram of a method for training a segmentation model provided by an embodiment of the present application
  • FIG. 3 is another example diagram of a segmentation model training method provided by an embodiment of the present application.
  • FIG. 4 is a schematic block diagram of a segmentation model training device provided by an embodiment of the present application.
  • FIG. 5 is another schematic block diagram of a segmentation model training device provided by an embodiment of the present application.
  • FIG. 6 is an example diagram of an OCT image segmentation method provided by an embodiment of the present application.
  • FIG. 7 is another example diagram of an OCT image segmentation method provided by an embodiment of the present application.
  • FIG. 8 is a schematic block diagram of an OCT image segmentation device provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a computer device provided by an embodiment of the present application.
  • the segmentation model training method provided in this application can be applied in the application environment shown in FIG. 1, in which a client communicates with a server through a network. The server receives the training sample image set sent by the client, and then inputs the original OCT image in the training sample image set into a preset generator model for segmentation processing to obtain a first segmented image. The first segmented image is compared with the gold standard image through a preset discriminator model to obtain a comparison result; the loss function of the generator model is calculated according to the comparison result, and the generator model is updated according to the loss function. Next, the updated generator model is used to convert the first segmented image into a second segmented image, the second segmented image and the gold standard image are input into the preset discriminator model, and the preset discriminator model is updated according to the binary cross entropy to obtain an updated discriminator model. Finally, the updated generator model and the updated discriminator model are trained iteratively until the loss function of the updated discriminator model converges; the iterative training is then stopped, and the updated generator model after the iterative training is stopped is determined as the image lesion segmentation model.
  • the method is applied to the server in FIG. 1 as an example for illustration, including the following steps:
  • the training sample image set is a set of sample images used for deep learning, including the original OCT image and the gold standard image.
  • the original OCT image refers to an unprocessed OCT image, and the original OCT image can be acquired after being scanned by an OCT scanner.
  • the gold standard image refers to the pre-segmented lesion image.
  • the expert outlines the location of the lesion to be segmented from the unprocessed OCT image based on professional medical knowledge, that is, the gold standard image is pre-marked for the lesion.
  • the gold standard image can be obtained by annotating the location of each lesion in the unprocessed OCT image by experts.
  • optionally, the training sample image set can be obtained by selecting a preset number of images from a public fundus retina data set (such as DRIVE or STARE).
  • it should be noted that the gold standard image has the same size as the original OCT image; the pixels in the lesion area are set to a preset pixel value, and the pixel value of the non-lesion area is 0, so as to enhance the distinction between the lesion area and the non-lesion area of the OCT image.
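As a hedged illustration of how such a gold standard image might be prepared (the pixel value 255 and the expert mask input are assumptions for illustration, not specified by the application), a minimal NumPy sketch:

```python
import numpy as np

def make_gold_standard(oct_image: np.ndarray, lesion_mask: np.ndarray,
                       lesion_value: int = 255) -> np.ndarray:
    """Build a gold standard image the same size as the original OCT image.

    Pixels inside the expert-annotated lesion region get a preset value,
    all other pixels are set to 0, as described above.
    """
    gold = np.zeros_like(oct_image, dtype=np.uint8)
    gold[lesion_mask.astype(bool)] = lesion_value  # lesion area -> preset value
    return gold                                    # non-lesion area stays 0
```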
  • S20 Input the original OCT image into a preset generator model for segmentation processing to obtain a first segmented image.
  • the generator model is a model for segmenting the image.
  • the model can be a convolutional neural network model, such as a U-shaped convolutional neural network (U-Net).
  • specifically, the model can be obtained by pre-training a convolutional neural network.
  • the first segmented image refers to the result graph output by the preset generator model, that is, the image obtained by segmenting the image input to the generator model.
  • the generator model includes a down-sampling stage and an up-sampling stage, where the down-sampling stage is composed of multiple convolutional layers and pooling layers.
  • the down-sampling stage is used for feature extraction of the image input to the generator model.
  • the up-sampling stage is composed of multiple deconvolution layers and is used to gradually restore image details; at the same time, skip connections are added between feature layers of the same resolution to achieve segmentation of the target object in the image.
  • specifically, the original OCT image is input into the generator model; images of different scales are formed by sampling and fed into the convolutional layers of the corresponding scales in the down-sampling stage. At the output end of the generator model, the outputs of the deconvolution layers at the different scales are upsampled and then concatenated to obtain the first segmented image.
  • because the preset generator model fully considers the effect of images at different scales on image segmentation, the performance of the convolutional neural network model is improved, and thus the accuracy of segmenting the original OCT image is improved.
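A minimal PyTorch-style sketch of a generator of this U-shaped kind, assuming a two-level network with illustrative channel widths (the application does not fix the depth or layer sizes):

```python
import torch
import torch.nn as nn

class TinyUNetGenerator(nn.Module):
    """Two-level U-Net-style generator: conv + pooling for downsampling,
    a deconvolution for upsampling, and a skip connection at equal resolution."""
    def __init__(self):
        super().__init__()
        self.down1 = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.pool = nn.MaxPool2d(2)
        self.down2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2)
        self.out = nn.Conv2d(32, 1, 1)  # 32 = 16 (upsampled) + 16 (skip)

    def forward(self, x):
        f1 = self.down1(x)                      # feature extraction
        f2 = self.down2(self.pool(f1))          # lower-resolution features
        up = self.up(f2)                        # gradually restore image detail
        merged = torch.cat([up, f1], dim=1)     # skip connection at the same resolution
        return torch.sigmoid(self.out(merged))  # first segmented image in [0, 1]
```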
  • the discriminator model is a classification network used to judge whether the image output by the generator model is consistent with the marked gold standard image.
  • the discriminator model includes a convolutional layer, a ReLU (Rectified Linear Unit) activation layer, and a batch normalization layer (Batch Normalization).
  • in each convolutional layer, a nonlinear activation function is used to classify the obtained output, so as to realize the comparison of the images.
  • the comparison result is used to reflect how close the first segmented image is to the gold standard image.
  • the loss function is used to measure the degree of inconsistency between the predicted value and the true value of the model. It is a non-negative real value function. The smaller the value of the loss function, the higher the accuracy of the discriminator model.
  • specifically, the first segmented image and the gold standard image are each multiplied with the original OCT image to obtain a processed segmented image and a processed gold standard image; a preset number of neural network modules at different scales are then used to extract feature representations of the processed segmented image and the processed gold standard image, yielding feature maps of different scales. Because the discriminator model includes convolutional layers, ReLU activation layers, and batch normalization layers, and uses pooling layers, the feature maps shrink from large to small across scales. Finally, the feature maps of the different scales are flattened and concatenated into a fully connected layer, which is connected to a single neuron as the final output layer to produce the comparison result.
  • optionally, the comparison result takes a value between 0 and 1: if the result is 1, the first segmented image output by the generator model is determined to be completely consistent with the gold standard image; if the result is 0, the first segmented image and the gold standard image are determined to be completely inconsistent.
  • preferably, in this embodiment, if the comparison result is greater than 0.5, the first segmented image and the gold standard image are judged to be consistent; if the comparison result is less than or equal to 0.5, they are judged to be inconsistent, in which case the loss function needs to be updated and the generator model is updated according to the loss function.
  • it should be noted that the discriminator model in this step combines convolutional layers of different scales during classification and discrimination. Because the spatial dependence of image pixels over long and short distances is fully considered (large-scale images capture short-distance spatial dependencies, while small-scale images capture long-distance spatial dependencies), the performance of the discriminator model is improved.
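A hedged sketch of a discriminator along these lines, assuming two scales, shared weights across scales, and illustrative channel widths; the application only fixes the convolution / ReLU / batch-normalization / pooling building blocks, the multiplication with the original OCT image, and the single-neuron output:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleDiscriminator(nn.Module):
    """Classifies whether a candidate segmentation matches the gold standard.

    The candidate is first multiplied with the original OCT image, features
    are extracted at two scales, the pooled feature maps are flattened and
    concatenated, and a single output neuron gives the comparison result.
    """
    def __init__(self, image_size=64):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.BatchNorm2d(8), nn.ReLU(),
            nn.MaxPool2d(2))
        feat = 8 * (image_size // 2) ** 2 + 8 * (image_size // 4) ** 2
        self.fc = nn.Linear(feat, 1)

    def forward(self, segmentation, oct_image):
        x = segmentation * oct_image                # condition on the original image
        small = F.interpolate(x, scale_factor=0.5)  # second, coarser scale
        f1 = self.block(x).flatten(1)               # short-distance dependencies
        f2 = self.block(small).flatten(1)           # long-distance dependencies
        merged = torch.cat([f1, f2], dim=1)         # splice the scales together
        return torch.sigmoid(self.fc(merged))       # comparison result in [0, 1]
```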
  • S40 Use the updated generator model to convert the first segmented image into a second segmented image, input the second segmented image and the gold standard image into the preset discriminator model, and update the preset discriminator model according to the binary cross entropy to obtain the updated discriminator model.
  • the second segmented image refers to the result output by the updated generator model, that is, the image obtained by segmenting the first segmented image fed to its input.
  • Binary Cross Entropy is a way to measure the difference between the predicted value and the actual value of the discriminator model.
  • specifically, a second segmented image is generated by the updated generator model; the second segmented image and the gold standard image are input into the preset discriminator model and discriminated, and the discriminator model is updated according to the binary cross entropy output by the preset discriminator model. That is, the loss function of the preset discriminator model is defined in advance, and this loss function loss includes two parts: one part is the loss function loss1 of the segmentation network (the updated generator model), and the other part is the loss function loss2 of the classification network (the preset discriminator model). The two are combined by a weighted sum, i.e. loss = λ1*loss1 + λ2*loss2, where λ1 and λ2 are the weights of loss1 and loss2 respectively and loss is the loss function of the whole network. The preset discriminator model is updated according to the binary cross entropy calculated from this loss function, yielding the updated discriminator model.
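A brief hedged sketch of this weighted combination in code (the λ values and the exact form taken for loss2 are illustrative assumptions, not fixed by the application):

```python
import torch
import torch.nn as nn

bce = nn.BCELoss()                 # binary cross entropy on probabilities in [0, 1]
lambda_1, lambda_2 = 1.0, 0.5      # illustrative weights for loss1 and loss2

def total_loss(second_segmented, gold_standard, d_fake, d_real):
    # loss1: segmentation network (updated generator) against the gold standard
    loss_1 = bce(second_segmented, gold_standard)
    # loss2: classification network (preset discriminator) on real/generated pairs
    loss_2 = bce(d_real, torch.ones_like(d_real)) + \
             bce(d_fake, torch.zeros_like(d_fake))
    return lambda_1 * loss_1 + lambda_2 * loss_2   # loss = λ1*loss1 + λ2*loss2
```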
  • iterative training is a model training method in deep learning, which is used to optimize the model.
  • specifically, the iterative training in this step proceeds as follows: first, the objective loss functions of the generator model and the discriminator model are constructed, and an optimization algorithm, such as SGD (stochastic gradient descent), is used for loop training; in each training loop, all training samples are read in sequence and the current loss function of the discriminator model is calculated; based on the optimization algorithm, the gradient descent direction is determined, so that the objective loss function is gradually reduced and reaches a stable state, thereby optimizing the parameters of the constructed network models.
  • convergence of the loss function means that the loss function is close to 0 (for example, less than 0.1), i.e. the value output by the discriminator model for a given sample (positive or negative) is close to 0.5 and the discriminator can no longer distinguish positive from negative samples. At that point the output of the discriminator has converged, training is stopped, and the model parameters from the last training round are used as the parameters of the generator model, which then serves as the lesion segmentation model.
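A hedged sketch of one way such an SGD-based training loop could look; the model interfaces, learning rate, and the 0.1 convergence threshold mirror the description above, but the details are illustrative assumptions:

```python
import torch

def train(generator, discriminator, loader, epochs=100, lr=0.01, eps=0.1):
    """Alternating adversarial training; stops once the discriminator loss
    stays close to 0 (its output hovers around 0.5 for any sample)."""
    bce = torch.nn.BCELoss()
    opt_g = torch.optim.SGD(generator.parameters(), lr=lr)
    opt_d = torch.optim.SGD(discriminator.parameters(), lr=lr)
    for _ in range(epochs):
        d_losses = []
        for oct_img, gold in loader:                 # read all training samples in turn
            seg = generator(oct_img)
            # discriminator step: tell gold standard pairs from generated ones
            d_real = discriminator(gold, oct_img)
            d_fake = discriminator(seg.detach(), oct_img)
            loss_d = bce(d_real, torch.ones_like(d_real)) + \
                     bce(d_fake, torch.zeros_like(d_fake))
            opt_d.zero_grad(); loss_d.backward(); opt_d.step()
            # generator step: fool the discriminator and match the gold standard
            d_fake = discriminator(seg, oct_img)
            loss_g = bce(seg, gold) + bce(d_fake, torch.ones_like(d_fake))
            opt_g.zero_grad(); loss_g.backward(); opt_g.step()
            d_losses.append(loss_d.item())
        if sum(d_losses) / len(d_losses) < eps:      # discriminator loss has converged
            break
    return generator                                 # kept as the lesion segmentation model
```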
  • it should be noted that the preset generator model (G) learns the mapping relationship between the original OCT image x and the gold standard image y, i.e. G: x -> y, and outputs the segmented original OCT image, while the discriminator model (D) learns the distribution difference between the input image pairs {x, y} and {x, G(x)}, so as to evaluate the updated generator model.
  • the updating continues until the parameters of the generator model reach the optimum, that is, until the loss function of the updated discriminator model converges, and the updated generator model is then determined as the image lesion segmentation model. Understandably, the segmentation accuracy is improved during model training, no additional post-processing steps are required, and an end-to-end OCT image lesion segmentation algorithm is realized, thereby improving the accuracy of the lesion segmentation model.
  • in the embodiment of the present application, a training sample image set is obtained, the training sample image set including the original OCT image and the gold standard image. The original OCT image is then input into a preset generator model for segmentation processing to obtain a first segmented image; because the preset generator model fully considers the impact of images at different scales on image segmentation, the performance of the convolutional neural network model is improved, and thus the accuracy of segmenting the original OCT image is improved. Next, the first segmented image is compared with the gold standard image through the preset discriminator model to obtain a comparison result, the loss function of the generator model is calculated according to the comparison result, and the generator model is updated according to the loss function. The updated generator model then converts the first segmented image into a second segmented image; the second segmented image and the gold standard image are input into the preset discriminator model, and the preset discriminator model is updated according to the binary cross entropy to obtain the updated discriminator model. Finally, the updated generator model and the updated discriminator model are trained iteratively until the loss function of the updated discriminator model converges, after which training stops and the updated generator model is determined as the image lesion segmentation model.
  • further, in step S20, inputting the original OCT image into the preset generator model for segmentation processing to obtain the first segmented image specifically includes the following steps:
  • S21 Input the original OCT image into a set of down-sampling blocks of a preset generator model to obtain a feature map corresponding to the original OCT image, where the down-sampling block set is composed of N down-sampling blocks connected in sequence, and N is a positive integer.
  • the down-sampling blocks form the first convolutional stage of the preset generator model and are used to extract the basic features (such as edges and textures) of the original OCT image through convolution. Because the down-sampling block set is composed of N down-sampling blocks connected in sequence, the N extracted basic features are fused to obtain the feature map corresponding to the original OCT image.
  • S22 Input the feature map into an abstract arrangement block to obtain the first segmented image, where the abstract arrangement block is composed of M permutation-and-combination units connected in sequence, and M is a positive integer.
  • the permutation and combination unit refers to the second convolutional layer in the preset generator model.
  • the abstract arrangement block is composed of M permutation-and-combination units connected in sequence and is used to permute and recombine the feature maps through convolution operations, so as to obtain more abstract features carrying semantic information and thereby a more accurate first segmented image.
  • the activation layer in the preset generator model can increase the nonlinearity of the convolutional neural network, which is conducive to the convergence of the convolutional neural network.
  • the activation layer can use rectified linear unit, sigmoid function, etc. as the activation function.
  • the activation layer may use a rectified linear unit as an activation function to accelerate the convergence speed of the convolutional neural network.
  • the pooling layer is used to reduce the length and width of the input feature map, reducing the connection parameters and computation of the preset generator model, providing displacement invariance and capturing more global information. Because a filter of constant size is applied to the shrunken feature map produced by the pooling layer, the relative local receptive field of each neuron becomes larger, so that each neuron of the next convolutional layer can extract more global features; this makes the first segmented image more accurate and enhances the sensitivity of segmentation.
  • in the embodiment of the present application, the original OCT image is input into the set of down-sampling blocks of the preset generator model to obtain the feature map corresponding to the original OCT image, and the feature map is input into the abstract arrangement block to obtain the first segmented image, making the first segmented image more accurate and enhancing the sensitivity of image segmentation.
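A hedged sketch of this front-end layout with N down-sampling blocks and M permutation-and-combination units (channel widths and kernel sizes are illustrative assumptions):

```python
import torch.nn as nn

def downsampling_block(in_ch, out_ch):
    """One down-sampling block: convolution for basic features (edges,
    textures), ReLU activation, then pooling to shrink the feature map."""
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1),
                         nn.ReLU(), nn.MaxPool2d(2))

def permutation_combination_unit(ch):
    """One permutation-and-combination unit: a second-stage convolution that
    recombines feature maps into more abstract, semantic features."""
    return nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())

def build_generator_front_end(n_blocks=3, m_units=2, base_ch=16):
    downs = [downsampling_block(1 if i == 0 else base_ch, base_ch)
             for i in range(n_blocks)]          # N down-sampling blocks in sequence
    units = [permutation_combination_unit(base_ch)
             for _ in range(m_units)]           # M permutation-and-combination units
    return nn.Sequential(*downs, *units)
```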
  • further, step S50, iteratively training the updated generator model and the updated discriminator model, includes:
  • reverse adjustment is a training method in which model parameters are adjusted by back propagation. Specifically, after the network structures of the updated discriminator model and the updated generator model are determined, the networks are trained. Over several training iterations, the weights and biases of the updated generator model and the updated discriminator model are trained by back propagation: the updated discriminator model learns to find the real lesion image among the training samples, while the updated generator model, through this feedback, learns how to generate an image close to the gold standard image so that it is not recognized by the updated discriminator model. Finally, an optimal updated generator model and updated discriminator model are obtained, realizing the segmentation of the OCT image and subsequently improving the accuracy of the model for image segmentation.
  • the updated discriminator model is used to reversely adjust the updated generator model to realize the segmentation of the OCT image, so as to subsequently improve the accuracy of the model in segmenting the image.
  • a segmentation model training device is provided, and the segmentation model training device corresponds to the segmentation model training method in the above embodiment in one-to-one correspondence.
  • the segmentation model training device includes a sample image set acquisition module 10, a segmentation image acquisition module 20, a generator update module 30, a discriminator update module 40, and a lesion segmentation model training module 50.
  • the detailed description of each functional module is as follows:
  • the sample image set acquisition module 10 is used to obtain a training sample image set, the training sample image set includes the original OCT image and the gold standard image;
  • the segmented image acquisition module 20 is used to input the original OCT image into a preset generator model for segmentation processing to obtain a first segmented image;
  • the generator update module 30 is used to compare the first segmented image with the gold standard image through the preset discriminator model to obtain the comparison result, calculate the loss function of the generator model according to the comparison result, and update the generator model according to the loss function;
  • the discriminator update module 40 is used to convert the first segmented image into the second segmented image using the updated generator model, input the second segmented image and the gold standard image into the preset discriminator model, and update the preset discriminator model according to the binary cross entropy to obtain the updated discriminator model;
  • the lesion segmentation model training module 50 is used to iteratively train the updated generator model and the updated discriminator model until the loss function of the updated discriminator model converges, then stop the iterative training, and determine the updated generator model after the iterative training is stopped as the image lesion segmentation model.
  • the segmented image acquisition module 20 includes a feature map acquisition unit 21 and a segmented image acquisition unit 22.
  • the feature map acquisition unit 21 is used to input the original OCT image into a set of down-sampling blocks of a preset generator model to obtain a feature map corresponding to the original OCT image, where the down-sampling block set is composed of N down-sampling blocks connected in sequence , N is a positive integer;
  • the segmented image acquisition unit 22 is configured to input the feature map into an abstract arrangement block to obtain the first segmented image, where the abstract arrangement block is composed of M permutation-and-combination units connected in sequence, and M is a positive integer.
  • the lesion segmentation model training module includes an iterative training unit for reversely adjusting the updated generator model using the updated discriminator model.
  • an OCT image segmentation method is provided.
  • the OCT image segmentation method can also be applied in the application environment as shown in FIG. 1, in which the client communicates with the server through the network.
  • the server receives the to-be-processed OCT image sent by the client, and then inputs the to-be-processed OCT image into an image lesion segmentation model for segmentation to obtain a lesion image.
  • the client may be, but not limited to, various personal computers, notebook computers, smart phones, tablet computers, and portable wearable devices.
  • the server can be implemented with an independent server or a server cluster composed of multiple servers.
  • the method is applied to the server in FIG. 1 as an example for illustration, including the following steps:
  • the OCT image to be processed refers to the OCT image that needs to be segmented.
  • the OCT image to be processed can be obtained by the server from the database of the client, directly from the system database of the client, or by a third-party image acquisition tool on the client that retrieves the OCT image to be processed from a system data interface.
  • S70 Input the to-be-processed OCT image into an image lesion segmentation model for segmentation to obtain a lesion image, wherein the image lesion segmentation model is obtained by training using a segmentation model training method.
  • the OCT image to be processed is input into the image lesion segmentation model, and the output of the lesion segmentation model is the lesion image. Understandably, since the training method of the lesion image segmentation model has higher segmentation accuracy, the accuracy of the lesion image output by the image lesion segmentation model can be improved.
  • in the embodiment of the present application, the OCT image to be processed is obtained and then input into the image lesion segmentation model for segmentation to obtain the lesion image. Because the segmentation model training method yields higher segmentation accuracy, the accuracy of the lesion image output by the image lesion segmentation model is improved.
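A minimal hedged sketch of this inference step (the function name and tensor shape are assumptions for illustration):

```python
import torch

def segment_oct_image(model: torch.nn.Module, oct_tensor: torch.Tensor) -> torch.Tensor:
    """Feed an OCT image to be processed into the trained image lesion
    segmentation model and return the lesion image (a probability map)."""
    model.eval()                             # inference mode
    with torch.no_grad():
        lesion_image = model(oct_tensor)     # e.g. shape (1, 1, H, W), values in [0, 1]
    return lesion_image
```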
  • the OCT image segmentation method further includes:
  • S81 Calculate the area of each lesion image to obtain the area of the lesion area.
  • the area of the lesion region refers to the area of the region where the lesion is located in the lesion image. Specifically, the area of the lesion region can be calculated from the position parameters of the lesion. For example, if a lesion region in the image is circular with a radius of 1.5 mm, the area of that lesion region is π × 1.5² ≈ 7.07 mm².
  • S82 Use the weighted sum calculation for the area of each lesion area to obtain the lesion area.
  • weighted summation refers to a calculation method in which each parameter is given a corresponding weight, and then the parameter and the weight are multiplied and added. Understandably, the impact of lesions in different parts is different. Therefore, the weighted sum calculation is used for the area of each lesion area to obtain the lesion area, which makes the calculation of the lesion area more accurate, so as to provide a reference for the subsequent assessment of the disease condition according to the lesion area.
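A hedged sketch of this weighted-sum calculation (the per-region weights and the pixel-to-mm² conversion are illustrative assumptions):

```python
import numpy as np

def lesion_area(lesion_masks, pixel_area_mm2, weights):
    """Weighted sum of individual lesion region areas.

    lesion_masks   : list of binary masks, one per lesion region
    pixel_area_mm2 : physical area of one pixel in mm^2
    weights        : relative weight of each region (lesions in different
                     parts have different impact)
    """
    areas = [mask.sum() * pixel_area_mm2 for mask in lesion_masks]
    return sum(w * a for w, a in zip(weights, areas))

# e.g. a circular lesion of radius 1.5 mm has area pi * 1.5**2 ≈ 7.07 mm^2,
# consistent with the example above.
```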
  • in the embodiment of the present application, the area of each lesion image is calculated to obtain the area of each lesion region; the areas of the lesion regions are then combined by a weighted sum to obtain the lesion area, making the calculation of the lesion area more accurate and providing a reference for subsequently evaluating the disease condition according to the lesion area.
  • an OCT image segmentation device is provided, and the OCT image segmentation device corresponds to the OCT image segmentation method in the above embodiment in one-to-one correspondence.
  • the OCT image segmentation device includes a to-be-processed image acquisition module 60 and a lesion image acquisition module 70. The detailed description of each functional module is as follows:
  • the to-be-processed image acquisition module 60 is used to acquire to-be-processed OCT images
  • the lesion image acquisition module 70 is configured to input the to-be-processed OCT image into an image lesion segmentation model for segmentation to obtain a lesion image, wherein the image lesion segmentation model is obtained by training using a segmentation model training method.
  • further, the OCT image segmentation device also includes a region area calculation module and a lesion area acquisition module.
  • the region area calculation module is used to calculate the area of each lesion image to obtain the area of each lesion region;
  • the lesion area acquisition module is used to apply a weighted sum to the areas of the lesion regions to obtain the lesion area.
  • Each module in the above OCT image segmentation device may be implemented in whole or in part by software, hardware, and a combination thereof.
  • the above modules may be embedded in, or independent of, the processor of the computer device in the form of hardware, or may be stored in the memory of the computer device in the form of software, so that the processor can call and execute the operations corresponding to the above modules.
  • a computer device is provided.
  • the computer device may be a server, and its internal structure may be as shown in FIG. 9.
  • the computer device includes a processor, memory, network interface, and database connected by a system bus. Among them, the processor of the computer device is used to provide computing and control capabilities.
  • the memory of the computer device includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system, computer-readable instructions, and a database.
  • the internal memory provides an environment for the operation of the operating system and computer-readable instructions in the non-volatile storage medium.
  • the database of the computer device is used to store data used by the OCT image segmentation method.
  • the network interface of the computer device is used to communicate with external terminals through a network connection.
  • a computer device which includes a memory, a processor, and computer-readable instructions stored on the memory and executable on the processor.
  • when executing the computer-readable instructions, the processor implements the segmentation model training method in the above embodiments, or the processor implements the OCT image segmentation method in the above embodiments.
  • one or more readable storage media storing computer-readable instructions are provided, which when executed by one or more processors cause the one or more processors to execute The segmentation model training method in the above embodiment, or when the computer readable instructions are executed by one or more processors, causes the one or more processors to execute the OCT image segmentation method in the above embodiments.
  • the readable storage medium includes a non-volatile readable storage medium and a volatile readable storage medium.
  • Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory can include random access memory (RAM) or external cache memory.
  • RAM is available in many forms, such as dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link (Synchlink) DRAM (SLDRAM), memory-bus direct RAM (RDRAM), direct memory-bus dynamic RAM (DRDRAM), and memory-bus dynamic RAM (RDRAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

A segmentation model training method, an OCT image segmentation method and apparatus, a device and a medium. The segmentation model training method comprises: acquiring a training sample image set, wherein the training sample image set comprises an original OCT image and a gold standard image (S10); inputting the original OCT image into a pre-set generator model for segmentation processing to obtain a first segmented image (S20); comparing the first segmented image with the gold standard image by means of a pre-set discriminator model to obtain a comparison result, calculating a loss function of the generator model according to the comparison result, and updating the generator model according to the loss function (S30); using the updated generator model to convert the first segmented image into a second segmented image, inputting the second segmented image and the gold standard image into the pre-set discriminator model, and updating the discriminator model according to a binary cross entropy to obtain an updated discriminator model (S40); and carrying out iterative training on the updated generator model and the updated discriminator model until a loss function of the updated discriminator model converges, then stopping the iterative training, and determining the updated generator model after the iterative training is stopped as an image lesion segmentation model (S50). The segmentation model training method improves the performance of a segmentation model by means of adversarial training, and improves the accuracy of the segmentation model.

Description

Segmentation model training method, OCT image segmentation method, device, equipment and medium

This application is based on the Chinese invention patent application with application number 201910019566.1, filed on January 9, 2019 and titled "Segmentation Model Training Method, OCT Image Segmentation Method, Device, Equipment, and Medium", and claims its priority.

Technical field

The present application relates to the field of image detection, and in particular to a segmentation model training method, an OCT image segmentation method, a device, equipment, and a medium.

Background

An "OCT image" is an optical coherence tomography (Optical Coherence Tomography) image. OCT mainly uses the basic principle of the weak-coherence-light interferometer to detect the reflection and scattering signals of biological tissue at different depths for incident weakly coherent light, and then reconstructs an image of the internal structure of the biological tissue; it is a non-contact, non-invasive form of biological tissue tomography. OCT imaging equipment is used in ophthalmology to assist doctors in observing the normal tissue structure of the posterior segment of the eye (such as the macula, optic disc, or retinal nerve fiber layer) and pathological changes. In order to provide a more accurate imaging basis for the diagnosis of related ophthalmic diseases, the OCT images need to be segmented, so as to provide technical assistance for related medical procedures.

Traditionally, segmentation techniques include histogram-based, boundary-based, or region-based methods, most of which build on image processing algorithms. However, because OCT images are noisy and lesions vary in size and have irregular boundary contours, traditional segmentation techniques often fail to balance the completeness and accuracy of the lesions in the OCT image: some lesion details may be missing, or extraneous, useless information may be included, leading to unsatisfactory segmentation results.
Summary of the invention

Embodiments of the present application provide a segmentation model training method, device, computer equipment, and storage medium to address the low segmentation performance of existing segmentation models.

In addition, embodiments of the present application provide an OCT image segmentation method, device, computer equipment, and storage medium to address the low accuracy of lesion segmentation.

A segmentation model training method includes:

acquiring a training sample image set, the training sample image set including an original OCT image and a gold standard image;

inputting the original OCT image into a preset generator model for segmentation processing to obtain a first segmented image;

comparing the first segmented image with the gold standard image through a preset discriminator model to obtain a comparison result, calculating a loss function of the generator model according to the comparison result, and updating the generator model according to the loss function;

using the updated generator model to convert the first segmented image into a second segmented image, inputting the second segmented image and the gold standard image into the preset discriminator model, and updating the preset discriminator model according to a binary cross entropy to obtain an updated discriminator model;

iteratively training the updated generator model and the updated discriminator model until the loss function of the updated discriminator model converges, then stopping the iterative training, and determining the updated generator model after the iterative training is stopped as an image lesion segmentation model.
A segmentation model training device includes:

a sample image set acquisition module, configured to acquire a training sample image set, the training sample image set including an original OCT image and a gold standard image;

a segmented image acquisition module, configured to input the original OCT image into a preset generator model for segmentation processing to obtain a first segmented image;

a generator update module, configured to compare the first segmented image with the gold standard image through a preset discriminator model to obtain a comparison result, calculate a loss function of the generator model according to the comparison result, and update the generator model according to the loss function;

a discriminator update module, configured to use the updated generator model to convert the first segmented image into a second segmented image, input the second segmented image and the gold standard image into the preset discriminator model, and update the preset discriminator model according to a binary cross entropy to obtain an updated discriminator model;

a lesion segmentation model training module, configured to iteratively train the updated generator model and the updated discriminator model until the loss function of the updated discriminator model converges, then stop the iterative training, and determine the updated generator model after the iterative training is stopped as an image lesion segmentation model.
An OCT image segmentation method includes:

acquiring an OCT image to be processed;

inputting the OCT image to be processed into an image lesion segmentation model for segmentation to obtain a lesion image, wherein the image lesion segmentation model is obtained by training using the segmentation model training method.

An OCT image segmentation device includes:

a to-be-processed image acquisition module, configured to acquire an OCT image to be processed;

a lesion image acquisition module, configured to input the OCT image to be processed into an image lesion segmentation model for segmentation to obtain a lesion image, wherein the image lesion segmentation model is obtained by training using the segmentation model training method.

A computer device includes a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor; the processor implements the above segmentation model training method when executing the computer-readable instructions, or the processor implements the above OCT image segmentation method when executing the computer-readable instructions.

One or more readable storage media storing computer-readable instructions are provided; when executed by one or more processors, the computer-readable instructions cause the one or more processors to perform the above OCT image segmentation method.
The details of one or more embodiments of the present application are set forth in the following drawings and description, and other features and advantages of the present application will become apparent from the description, the drawings, and the claims.

Brief description of the drawings

In order to explain the technical solutions of the embodiments of the present application more clearly, the drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative work.
FIG. 1 is a schematic diagram of the application environment of the segmentation model training method or the OCT image segmentation method provided by an embodiment of the present application;

FIG. 2 is an example diagram of the segmentation model training method provided by an embodiment of the present application;

FIG. 3 is another example diagram of the segmentation model training method provided by an embodiment of the present application;

FIG. 4 is a schematic block diagram of the segmentation model training device provided by an embodiment of the present application;

FIG. 5 is another schematic block diagram of the segmentation model training device provided by an embodiment of the present application;

FIG. 6 is an example diagram of the OCT image segmentation method provided by an embodiment of the present application;

FIG. 7 is another example diagram of the OCT image segmentation method provided by an embodiment of the present application;

FIG. 8 is a schematic block diagram of the OCT image segmentation device provided by an embodiment of the present application;

FIG. 9 is a schematic diagram of the computer device provided by an embodiment of the present application.
Detailed description

The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only a part of the embodiments of the present application, not all of them. Based on the embodiments in this application, all other embodiments obtained by a person of ordinary skill in the art without creative work fall within the scope of protection of this application.

The segmentation model training method provided in this application can be applied in the application environment shown in FIG. 1, in which a client communicates with a server through a network. The server receives the training sample image set sent by the client, and then inputs the original OCT image in the training sample image set into a preset generator model for segmentation processing to obtain a first segmented image. The first segmented image is compared with the gold standard image through a preset discriminator model to obtain a comparison result; the loss function of the generator model is calculated according to the comparison result, and the generator model is updated according to the loss function. Next, the updated generator model is used to convert the first segmented image into a second segmented image, the second segmented image and the gold standard image are input into the preset discriminator model, and the preset discriminator model is updated according to the binary cross entropy to obtain an updated discriminator model. Finally, the updated generator model and the updated discriminator model are trained iteratively until the loss function of the updated discriminator model converges; the iterative training is then stopped, and the updated generator model after the iterative training is stopped is determined as the image lesion segmentation model. The client may be, but is not limited to, a personal computer, a notebook computer, a smartphone, a tablet computer, or a portable wearable device. The server can be implemented as an independent server or as a server cluster composed of multiple servers.
In one embodiment, as shown in FIG. 2, the method is applied to the server in FIG. 1 as an example for illustration and includes the following steps:

S10: Obtain a training sample image set, the training sample image set including an original OCT image and a gold standard image.

The training sample image set is a set of sample images used for deep learning, including the original OCT image and the gold standard image. The original OCT image refers to an unprocessed OCT image, which can be acquired by scanning with an OCT scanner. The gold standard image refers to a pre-segmented lesion image; for example, an expert outlines, based on professional medical knowledge, the location of the lesion to be segmented in the unprocessed OCT image, i.e. the gold standard image carries lesion annotations made in advance. The gold standard image can be obtained by having experts annotate the location of each lesion in the unprocessed OCT image. Optionally, the training sample image set can be obtained by selecting a preset number of images from a public fundus retina data set (such as DRIVE or STARE). It should be noted that the gold standard image has the same size as the original OCT image; the pixels in the lesion area are set to a preset pixel value, and the pixel value of the non-lesion area is 0, so as to enhance the distinction between the lesion area and the non-lesion area of the OCT image.
S20: Input the original OCT image into a preset generator model for segmentation processing to obtain a first segmented image.

The generator model is a model for segmenting images. It can be a convolutional neural network model, such as a U-shaped convolutional neural network (U-Net); specifically, the model can be obtained by pre-training a convolutional neural network. The first segmented image refers to the result output by the preset generator model, that is, the image obtained by segmenting the image input to the generator model. The generator model includes a down-sampling stage and an up-sampling stage. The down-sampling stage is composed of multiple convolutional layers and pooling layers and is used for feature extraction on the image input to the generator model; the up-sampling stage is composed of multiple deconvolution layers and is used to gradually restore image details. At the same time, skip connections are added between feature layers of the same resolution to achieve segmentation of the target object in the image.

Specifically, the original OCT image is input into the generator model; images of different scales are formed by sampling and fed into the convolutional layers of the corresponding scales in the down-sampling stage. At the output end of the generator model, the outputs of the deconvolution layers at the different scales are upsampled and then concatenated to obtain the first segmented image. Understandably, because the preset generator model fully considers the effect of images at different scales on image segmentation, the performance of the convolutional neural network model is improved, and thus the accuracy of segmenting the original OCT image is improved.
S30:通过预设的判别器模型将第一分割图像与金标准图像进行比对,得到比对结果,根据比对结果计算生成器模型的损失函数,并根据损失函数更新生成器模型。S30: Compare the first segmented image with the gold standard image through a preset discriminator model to obtain a comparison result, calculate a loss function of the generator model according to the comparison result, and update the generator model according to the loss function.
其中,判别器模型是一种分类网络,用于判断生成器模型输出的图像与标注的金标准图像是否达到一致。该判别器模型包括卷积层、ReLU(Rectified Linear Unit)激活层和批量归一化层(Batch Normalization),在每一个卷积层中使用非线性激活函数对得到的输出进行分类,实现对图像的比对。比对结果用于反映第一分割图像与金标准图像接近程度。损失函数(loss function)是用来估量模型的预测值与真实值的不一致程度,是非负实值函数,损失函数的值越小,判别器模型的准确性越高。Among them, the discriminator model is a classification network used to judge whether the image output by the generator model is consistent with the marked gold standard image. The discriminator model includes a convolutional layer, a ReLU (Rectified Linear Unit) activation layer, and a batch normalization layer (Batch Normalization). In each convolution layer, a nonlinear activation function is used to classify the obtained output to realize the image. Comparison. The comparison result is used to reflect how close the first divided image is to the gold standard image. The loss function is used to measure the degree of inconsistency between the predicted value and the true value of the model. It is a non-negative real value function. The smaller the value of the loss function, the higher the accuracy of the discriminator model.
Specifically, the first segmented image and the gold standard image are each multiplied with the original OCT image to obtain a processed segmented image and a processed gold standard image; a preset number of neural network modules at different scales are then used to express the features of the processed segmented image and the processed gold standard image, yielding feature maps at different scales. Because the discriminator model includes convolutional layers, ReLU activation layers and batch normalization layers, and uses pooling layers, the feature maps shrink from large scales to small scales. Finally, the feature maps at different scales are converted into a fully connected layer by a concatenation operation and connected to a single neuron as the final output layer, which produces the comparison result. Optionally, the comparison result takes a value between 0 and 1: if the value is 1, the first segmented image output by the generator model is determined to be completely consistent with the gold standard image; if the comparison result is 0, the first segmented image is determined to be completely inconsistent with the gold standard image. Preferably, in this embodiment of the present application, if the comparison result is greater than 0.5, the first segmented image is judged to be consistent with the gold standard image; if the comparison result is less than or equal to 0.5, the first segmented image is judged to be inconsistent with the gold standard image, in which case the loss function needs to be updated and the generator model is updated according to the loss function.
It should be noted that, during classification and discrimination, the discriminator model in this step adds together convolutional layers at different scales. Because the spatial dependence of image pixels over long and short distances is fully considered, that is, large-scale images represent short-distance spatial dependencies and small-scale images represent long-distance spatial dependencies, the performance of the discriminator model is improved.
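A minimal sketch of such a discriminator is shown below, assuming the candidate mask is multiplied with the original OCT image before feature extraction, as described above. The class name MaskDiscriminator, the number of conv + BatchNorm + ReLU blocks and the channel widths are illustrative assumptions rather than values fixed by the application.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskDiscriminator(nn.Module):
    """Sketch: score in (0, 1) for how consistent a mask is with the gold standard."""
    def __init__(self, in_ch=1, base_ch=16, num_blocks=3):
        super().__init__()
        blocks, ch = [], in_ch
        for _ in range(num_blocks):
            blocks += [nn.Conv2d(ch, base_ch, 3, padding=1),
                       nn.BatchNorm2d(base_ch),
                       nn.ReLU(inplace=True),
                       nn.MaxPool2d(2)]      # feature maps shrink scale by scale
            ch = base_ch
        self.features = nn.Sequential(*blocks)
        self.classifier = nn.Linear(base_ch, 1)   # single output neuron

    def forward(self, image, mask):
        x = image * mask                          # focus the network on the masked region
        x = self.features(x)
        x = F.adaptive_avg_pool2d(x, 1).flatten(1)  # pool to a fixed-length vector
        return torch.sigmoid(self.classifier(x))   # score > 0.5 is read as "consistent"
```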
S40: Use the updated generator model to convert the first segmented image into a second segmented image, input the second segmented image and the gold standard image into the preset discriminator model, and update the preset discriminator model according to the binary cross entropy to obtain an updated discriminator model.
The second segmented image refers to the result output by the updated generator model, that is, the image obtained by segmenting the first segmented image fed to its input. Binary cross entropy is a way of measuring the difference between the predicted value and the actual value of the discriminator model. Specifically, a second result image is generated by the updated generator model, the second segmented image and the gold standard image are input into the preset discriminator model, the second segmented image is discriminated against the gold standard image, and the preset discriminator model is updated according to the binary cross entropy it outputs. That is, the loss function of the preset discriminator model is defined in advance, and the loss function loss consists of two parts: one part is the loss function loss1 of the segmentation network (the updated generator model), and the other part is the loss function loss2 of the classification network (the preset discriminator model). The two are summed with weights, i.e. loss = λ1*loss1 + λ2*loss2, where λ1 and λ2 are the weights of loss1 and loss2 respectively, and loss is the loss function of the entire network. The preset discriminator model is updated according to the calculated binary cross entropy of the loss function to obtain the updated discriminator model. The binary cross entropy is calculated as follows:
J(y) = −Σ_j [ y_j · log(ŷ_j) + (1 − y_j) · log(1 − ŷ_j) ]

where ŷ_j denotes the probability that the lesion in the second segmented image is correctly segmented, y_j denotes the preset probability that the lesion segmentation of the second segmented image is consistent with the gold standard image, and J(y) is the expression of the binary cross entropy.
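For illustration, a short sketch of the weighted loss loss = λ1*loss1 + λ2*loss2 built on this binary cross entropy is given below. The weight values, and the assumption that both terms are themselves binary cross entropies over probabilities in (0, 1), are placeholders rather than values fixed by the application.

```python
import torch

def binary_cross_entropy(y_hat, y):
    """J(y) = -mean_j [ y_j*log(y_hat_j) + (1 - y_j)*log(1 - y_hat_j) ]"""
    eps = 1e-7
    y_hat = y_hat.clamp(eps, 1 - eps)            # avoid log(0)
    return -(y * torch.log(y_hat) + (1 - y) * torch.log(1 - y_hat)).mean()

def combined_loss(pred_mask, gold_mask, d_score, d_target, lam1=1.0, lam2=0.1):
    loss1 = binary_cross_entropy(pred_mask, gold_mask)   # segmentation (generator) term
    loss2 = binary_cross_entropy(d_score, d_target)      # classification (discriminator) term
    return lam1 * loss1 + lam2 * loss2                   # loss = λ1*loss1 + λ2*loss2
```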
S50: Iteratively train the updated generator model and the updated discriminator model until the loss function of the updated discriminator model converges, then stop the iterative training and determine the updated generator model obtained when the iterative training stops as the image lesion segmentation model.
Iterative training is a model training method in deep learning used to optimize a model. The iterative training in this step is implemented as follows: first, the objective loss functions of the generator model and the discriminator model are constructed, and an optimization algorithm such as SGD (stochastic gradient descent) is used for cyclic training; in each training cycle, all training samples are read in sequence, the current loss function of the discriminator model is calculated, and the gradient descent direction is determined based on the optimization algorithm, so that the objective loss function gradually decreases and reaches a stable state, thereby optimizing the parameters of the constructed network model.
Convergence of the loss function means that the loss function approaches 0, for example is less than 0.1; that is, the value output by the discriminator model for a given sample (a positive sample or a negative sample) approaches 0.5, and the discriminator is considered unable to distinguish positive samples from negative samples. In other words, the output of the discriminator converges, the training is stopped, and the model parameters of the last training round are used as the parameters of the generator model, thereby obtaining the lesion segmentation model.
Specifically, the original OCT image x is input into the preset generator model (G), which learns the mapping relationship from the original OCT image x to the gold standard image y, that is, G: x->y, and outputs the segmented original OCT image; the discriminator model (D) learns the distribution difference between the input image pairs {x, y} and {G(x, y)}, and on this basis the updated generator model is updated until its parameters are optimal, that is, until the loss function of the updated discriminator model converges, at which point the updated generator model is determined as the image lesion segmentation model. Understandably, this model training process improves the accuracy of the model's segmentation, requires no additional post-processing steps, and implements an end-to-end OCT image lesion segmentation algorithm, thereby improving the accuracy of the lesion segmentation model.
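A condensed training-loop sketch under the scheme described above follows. It interprets the discriminator's task as separating real pairs (x, y) from generated pairs (x, G(x)); this reading, the use of plain SGD, the learning rate and the stopping criterion around a discriminator score of 0.5 are all assumptions. The sketch reuses the MultiScaleGenerator, MaskDiscriminator, binary_cross_entropy and combined_loss sketches above and assumes a data loader yielding (oct_image, gold_mask) batches.

```python
import torch

def train(G, D, loader, epochs=50, lr=1e-3, tol=0.05):
    opt_g = torch.optim.SGD(G.parameters(), lr=lr)
    opt_d = torch.optim.SGD(D.parameters(), lr=lr)
    for epoch in range(epochs):
        d_scores = []
        for x, y in loader:
            # update the discriminator on real (x, y) and generated (x, G(x)) pairs
            fake = G(x).detach()
            d_real, d_fake = D(x, y), D(x, fake)
            loss_d = binary_cross_entropy(d_real, torch.ones_like(d_real)) + \
                     binary_cross_entropy(d_fake, torch.zeros_like(d_fake))
            opt_d.zero_grad(); loss_d.backward(); opt_d.step()

            # update the generator so that D(x, G(x)) is pushed towards 1
            pred = G(x)
            d_pred = D(x, pred)
            loss_g = combined_loss(pred, y, d_pred, torch.ones_like(d_pred))
            opt_g.zero_grad(); loss_g.backward(); opt_g.step()

            d_scores.append(d_fake.mean().item())
        # stop when the discriminator can no longer separate the samples
        # (its score for generated masks hovers around 0.5)
        if abs(sum(d_scores) / len(d_scores) - 0.5) < tol:
            break
    return G   # the converged generator is kept as the lesion segmentation model
```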
In this embodiment, first, a training sample image set is obtained, the training sample image set including the original OCT image and the gold standard image; then, the original OCT image is input into a preset generator model for segmentation processing to obtain a first segmented image. Because the preset generator model fully considers the influence of images at different scales on image segmentation, the performance of the convolutional neural network model is improved, which in turn improves the accuracy of segmenting the original OCT image. Next, the first segmented image is compared with the gold standard image through a preset discriminator model to obtain a comparison result, the loss function of the generator model is calculated according to the comparison result, and the generator model is updated according to the loss function. After that, the updated generator model is used to convert the first segmented image into a second segmented image, the second segmented image and the gold standard image are input into the preset discriminator model, and the preset discriminator model is updated according to the binary cross entropy to obtain an updated discriminator model. Finally, the updated generator model and the updated discriminator model are trained iteratively until the loss function of the updated discriminator model converges; the iterative training is then stopped, and the updated generator model obtained when the iterative training stops is determined as the image lesion segmentation model. This improves the accuracy of the model's segmentation, requires no additional post-processing steps, and implements an end-to-end OCT image lesion segmentation algorithm, thereby improving the accuracy of the lesion segmentation model.
In an embodiment, as shown in FIG. 3, in step S20, inputting the original OCT image into the preset generator model for segmentation processing to obtain the first segmented image specifically includes the following steps:
S21: Input the original OCT image into the set of downsampling blocks of the preset generator model to obtain a feature map corresponding to the original OCT image, where the set of downsampling blocks is composed of N downsampling blocks connected in sequence and N is a positive integer.
A downsampling block is a first convolutional layer in the preset generator model, used to extract the basic features (such as edges and textures) of the original OCT image through convolution. Because the set of downsampling blocks is obtained by connecting N downsampling blocks in sequence, the N extracted basic features are fused to obtain the feature map corresponding to the original OCT image. N is a positive integer, and the size of N can be selected according to actual needs, for example N = 5.
S22: Input the feature map into an abstract arrangement block to obtain the first segmented image, where the abstract arrangement block is composed of M arrangement-combination units connected in sequence and M is a positive integer.
An arrangement-combination unit refers to a second convolutional layer in the preset generator model. The abstract arrangement block is composed of M arrangement-combination units connected in sequence and is used to arrange and combine the feature map through convolution operations, so as to obtain more abstract features carrying semantic information and thereby obtain a more accurate first segmented image. M is a positive integer, and the size of M can be selected according to actual needs, for example M = 4. Meanwhile, the activation layer in the preset generator model can increase the nonlinearity of the convolutional neural network, which is conducive to its convergence. The activation layer can use a rectified linear unit, a sigmoid function or the like as the activation function; preferably, the activation layer uses a rectified linear unit as the activation function to accelerate the convergence of the convolutional neural network. The pooling layer is used to reduce the length and width of the input feature map, which reduces the connection parameters and the amount of computation of the preset generator model, conforms to displacement invariance and captures more global information. Because a filter of constant size is applied to the feature map shrunk by the pooling layer, the relative local receptive field of each neuron becomes larger, so that each neuron of the next convolutional layer can extract more global features, which makes the obtained first segmented image more accurate and enhances the sensitivity of the segmentation.
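Purely as a structural illustration of S21/S22, the sketch below chains N downsampling blocks (convolution + ReLU + pooling) into a feature map and then M arrangement-combination units into a coarse segmentation map. N = 5 and M = 4 follow the examples in the text, while the channel widths, the final 1×1 convolution with sigmoid and the block internals are assumptions; a full model would additionally upsample the result back to the input resolution.

```python
import torch
import torch.nn as nn

def downsample_block(in_ch, out_ch):
    # convolution + ReLU + pooling: extracts edges/textures and halves the size
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1),
                         nn.ReLU(inplace=True),
                         nn.MaxPool2d(2))

def arrangement_unit(ch):
    # second-stage convolution that re-combines features into more abstract,
    # semantic representations
    return nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True))

N, M, ch = 5, 4, 16
encoder = nn.Sequential(downsample_block(1, ch),
                        *[downsample_block(ch, ch) for _ in range(N - 1)])
head = nn.Sequential(*[arrangement_unit(ch) for _ in range(M)],
                     nn.Conv2d(ch, 1, 1), nn.Sigmoid())

feature_map = encoder(torch.randn(1, 1, 256, 256))   # S21: fused feature map
first_segmentation = head(feature_map)               # S22: coarse segmentation map
```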
In this embodiment, the original OCT image is input into the set of downsampling blocks of the preset generator model to obtain the feature map corresponding to the original OCT image, and the feature map is input into the abstract arrangement block to obtain the first segmented image, which makes the obtained first segmented image more accurate and enhances the sensitivity of image segmentation.
In an embodiment, in step S50, iteratively training the updated generator model and the updated discriminator model includes:
reversely adjusting the updated generator model by using the updated discriminator model.
Reverse adjustment is a training method that back-propagates the model parameters. Specifically, after the network structures of the updated discriminator model and the updated generator model are determined, the network is trained. Over the course of several training iterations, the weights and biases of the updated generator model and the updated discriminator model are both trained by back propagation. The updated discriminator model learns to find the real lesion image from the training samples; at the same time, through this feedback, the updated generator model learns how to generate an image close to the gold standard image, preventing it from being recognized by the updated discriminator model. Finally, the optimal updated generator model and updated discriminator model are obtained, thereby realizing the segmentation of the OCT image, so as to subsequently improve the model's accuracy in image segmentation.
In this embodiment, the updated discriminator model is used to reversely adjust the updated generator model to realize the segmentation of the OCT image, so as to subsequently improve the model's accuracy in image segmentation.
It should be understood that the magnitude of the sequence numbers of the steps in the above embodiments does not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
In an embodiment, a segmentation model training apparatus is provided, and the segmentation model training apparatus corresponds one-to-one to the segmentation model training method in the above embodiment. As shown in FIG. 4, the segmentation model training apparatus includes a sample image set acquisition module 10, a segmented image acquisition module 20, a generator update module 30, a discriminator update module 40 and a lesion segmentation model training module 50. Each functional module is described in detail as follows:
the sample image set acquisition module 10 is configured to obtain a training sample image set, the training sample image set including the original OCT image and the gold standard image;
the segmented image acquisition module 20 is configured to input the original OCT image into a preset generator model for segmentation processing to obtain a first segmented image;
the generator update module 30 is configured to compare the first segmented image with the gold standard image through a preset discriminator model to obtain a comparison result, calculate the loss function of the generator model according to the comparison result, and update the generator model according to the loss function;
the discriminator update module 40 is configured to use the updated generator model to convert the first segmented image into a second segmented image, input the second segmented image and the gold standard image into the preset discriminator model, and update the preset discriminator model according to the binary cross entropy to obtain an updated discriminator model;
the lesion segmentation model training module 50 is configured to iteratively train the updated generator model and the updated discriminator model until the loss function of the updated discriminator model converges, then stop the iterative training and determine the updated generator model obtained when the iterative training stops as the image lesion segmentation model.
Preferably, as shown in FIG. 5, the segmented image acquisition module 20 includes a feature map acquisition unit 21 and a segmented image acquisition unit 22.
The feature map acquisition unit 21 is configured to input the original OCT image into the set of downsampling blocks of the preset generator model to obtain a feature map corresponding to the original OCT image, where the set of downsampling blocks is composed of N downsampling blocks connected in sequence and N is a positive integer;
the segmented image acquisition unit 22 is configured to input the feature map into an abstract arrangement block to obtain the first segmented image, where the abstract arrangement block is composed of M arrangement-combination units connected in sequence and M is a positive integer.
Preferably, the lesion segmentation model training module includes an iterative training unit configured to reversely adjust the updated generator model by using the updated discriminator model.
In an embodiment, an OCT image segmentation method is provided. The OCT image segmentation method can also be applied in the application environment shown in FIG. 1, in which the client communicates with the server through a network. The server receives the to-be-processed OCT image sent by the client, and then inputs the to-be-processed OCT image into the image lesion segmentation model for segmentation to obtain a lesion image. The client may be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers and portable wearable devices. The server may be implemented as an independent server or as a server cluster composed of multiple servers.
In one embodiment, as shown in FIG. 6, the method is described by taking its application to the server in FIG. 1 as an example, and includes the following steps:
S60: Obtain the to-be-processed OCT image.
The to-be-processed OCT image refers to an OCT image on which lesion segmentation needs to be performed. The to-be-processed OCT image may be obtained by the server from the client's database, obtained directly from the client's system database, or obtained from a system data interface through a third-party image acquisition tool of the client.
S70: Input the to-be-processed OCT image into the image lesion segmentation model for segmentation to obtain a lesion image, where the image lesion segmentation model is obtained by training with the segmentation model training method.
Specifically, the to-be-processed OCT image is input into the image lesion segmentation model, and the output of the lesion segmentation model is the lesion image. Understandably, because the lesion image segmentation model training method has high segmentation precision, the accuracy of the lesion image output by the image lesion segmentation model is improved.
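A minimal inference sketch for S60/S70 is shown below; the checkpoint file name, the 0.5 binarisation threshold and the absence of any preprocessing are assumptions, and the MultiScaleGenerator sketch from above stands in for the trained image lesion segmentation model.

```python
import torch

def segment_lesions(model, oct_image, threshold=0.5):
    """Run one OCT image (C, H, W tensor) through the trained model, return a binary mask."""
    model.eval()
    with torch.no_grad():
        prob = model(oct_image.unsqueeze(0))      # add a batch dimension
    return (prob.squeeze(0) > threshold).float()  # binary lesion mask

# usage sketch:
# model = MultiScaleGenerator()
# model.load_state_dict(torch.load("lesion_segmentation_model.pt"))
# mask = segment_lesions(model, torch.randn(1, 256, 256))
```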
In this embodiment, first, the to-be-processed OCT image is obtained; then, the to-be-processed OCT image is input into the image lesion segmentation model for segmentation to obtain the lesion image. Because the lesion image segmentation model training method has high segmentation precision, the accuracy of the lesion image output by the image lesion segmentation model is improved.
In an embodiment, as shown in FIG. 7, after the to-be-processed OCT image is input into the image lesion segmentation model for segmentation to obtain the lesion image, the OCT image segmentation method further includes:
S81: Calculate the region area of each lesion image to obtain the lesion region area.
The lesion region area refers to the area of the region in the lesion image where the lesion is located. Specifically, the lesion region area can be calculated according to the position parameters of the lesion. For example, if a lesion region in an image is circular with a radius of 1.5 mm, the lesion region area of that lesion image is 7.07 mm².
S82: Apply a weighted sum calculation to each lesion region area to obtain the lesion area.
Weighted summation refers to a calculation method in which each parameter is assigned a corresponding weight, the parameters are multiplied by their weights, and the products are then added up. Understandably, lesions at different locations have different effects; therefore, a weighted sum calculation is applied to each lesion region area to obtain the lesion area, which makes the calculation of the lesion area more accurate, so as to provide a reference for the subsequent assessment of the condition according to the lesion area.
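A small sketch of S81/S82 follows: the binary lesion mask is split into connected regions, each region's pixel count is converted to an area, and the areas are combined with a weighted sum. The pixel area and the per-region weights are illustrative assumptions; in practice the weights would depend on where each lesion is located.

```python
import numpy as np
from scipy import ndimage

def lesion_areas(mask, pixel_area_mm2=0.01):
    """S81: label connected lesion regions and return each region's area in mm²."""
    labels, n = ndimage.label(mask)
    return [float((labels == i).sum()) * pixel_area_mm2 for i in range(1, n + 1)]

def weighted_lesion_area(areas, weights=None):
    """S82: weighted sum of the region areas; equal weights if none are given."""
    weights = weights or [1.0] * len(areas)
    return sum(w * a for w, a in zip(weights, areas))

# usage sketch:
# areas = lesion_areas(np.array(mask, dtype=np.uint8))
# total = weighted_lesion_area(areas)
```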
In this embodiment, first, the region area of each lesion image is calculated to obtain the lesion region area; then, a weighted sum calculation is applied to each lesion region area to obtain the lesion area, which makes the calculation of the lesion area more accurate, so as to provide a reference for the subsequent assessment of the condition according to the lesion area.
In an embodiment, an OCT image segmentation apparatus is provided, and the OCT image segmentation apparatus corresponds one-to-one to the OCT image segmentation method in the above embodiment. As shown in FIG. 8, the OCT image segmentation apparatus includes a to-be-processed image acquisition module 60 and a lesion image acquisition module 70. Each functional module is described in detail as follows:
the to-be-processed image acquisition module 60 is configured to obtain the to-be-processed OCT image;
the lesion image acquisition module 70 is configured to input the to-be-processed OCT image into the image lesion segmentation model for segmentation to obtain a lesion image, where the image lesion segmentation model is obtained by training with the segmentation model training method.
Preferably, the OCT image segmentation apparatus further includes a region area calculation module and a lesion area acquisition module.
The region area calculation module is configured to calculate the region area of each lesion image to obtain the lesion region area;
the lesion area acquisition module is configured to apply a weighted sum calculation to each lesion region area to obtain the lesion area.
For the specific limitations of the OCT image segmentation apparatus, reference may be made to the above limitations of the OCT image segmentation method, which are not repeated here. Each module in the above OCT image segmentation apparatus may be implemented in whole or in part by software, hardware or a combination thereof. The above modules may be embedded in, or independent of, the processor in the computer device in the form of hardware, or may be stored in the memory of the computer device in the form of software, so that the processor can call and execute the operations corresponding to the above modules.
In one embodiment, a computer device is provided. The computer device may be a server, and its internal structure may be as shown in FIG. 9. The computer device includes a processor, a memory, a network interface and a database connected through a system bus. The processor of the computer device is used to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer-readable instructions and a database. The internal memory provides an environment for the operation of the operating system and the computer-readable instructions in the non-volatile storage medium. The database of the computer device is used to store the data used by the OCT image segmentation method. The network interface of the computer device is used to communicate with external terminals through a network connection. When the computer-readable instructions are executed by the processor, a segmentation model training method is implemented.
In one embodiment, a computer device is provided, including a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor. When the processor executes the computer-readable instructions, the segmentation model training method in the above embodiments is implemented; or, when the processor executes the computer-readable instructions, the OCT image segmentation method in the above embodiments is implemented.
In one embodiment, one or more readable storage media storing computer-readable instructions are provided. When the computer-readable instructions are executed by one or more processors, the one or more processors execute the segmentation model training method in the above embodiments; or, when the computer-readable instructions are executed by one or more processors, the one or more processors execute the OCT image segmentation method in the above embodiments. The readable storage media include non-volatile readable storage media and volatile readable storage media.
A person of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be implemented by instructing the relevant hardware through computer-readable instructions, and the computer-readable instructions can be stored in a non-volatile computer-readable storage medium. When executed, the computer-readable instructions may include the processes of the embodiments of the above methods. Any reference to the memory, storage, database or other media used in the embodiments provided in the present application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDRSDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM).
Those skilled in the art can clearly understand that, for convenience and conciseness of description, only the above division of functional units and modules is used as an example for illustration. In practical applications, the above functions may be allocated to different functional units and modules as needed; that is, the internal structure of the apparatus may be divided into different functional units or modules to complete all or part of the functions described above.
The above embodiments are only used to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced; and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and shall all be included within the scope of protection of the present application.

Claims (20)

  1. A segmentation model training method, characterized in that the segmentation model training method comprises:
    obtaining a training sample image set, the training sample image set comprising an original OCT image and a gold standard image;
    inputting the original OCT image into a preset generator model for segmentation processing to obtain a first segmented image;
    comparing the first segmented image with the gold standard image through a preset discriminator model to obtain a comparison result, calculating a loss function of the generator model according to the comparison result, and updating the generator model according to the loss function;
    using the updated generator model to convert the first segmented image into a second segmented image, inputting the second segmented image and the gold standard image into the preset discriminator model, and updating the preset discriminator model according to binary cross entropy to obtain an updated discriminator model;
    iteratively training the updated generator model and the updated discriminator model until the loss function of the updated discriminator model converges, then stopping the iterative training and determining the updated generator model obtained when the iterative training stops as an image lesion segmentation model.
  2. The segmentation model training method according to claim 1, characterized in that the inputting of the original OCT image into the preset generator model for segmentation processing to obtain the first segmented image comprises:
    inputting the original OCT image into a set of downsampling blocks of the preset generator model to obtain a feature map corresponding to the original OCT image, wherein the set of downsampling blocks is composed of N downsampling blocks connected in sequence, and N is a positive integer;
    inputting the feature map into an abstract arrangement block to obtain the first segmented image, wherein the abstract arrangement block is composed of M arrangement-combination units connected in sequence, and M is a positive integer.
  3. The segmentation model training method according to claim 1, characterized in that the iterative training of the updated generator model and the updated discriminator model comprises:
    reversely adjusting the updated generator model by using the updated discriminator model.
  4. An OCT image segmentation method, characterized in that the OCT image segmentation method comprises:
    obtaining a to-be-processed OCT image;
    inputting the to-be-processed OCT image into an image lesion segmentation model for segmentation to obtain a lesion image, wherein the image lesion segmentation model is obtained by training with the segmentation model training method according to any one of claims 1 to 3.
  5. The OCT image segmentation method according to claim 4, characterized in that after the inputting of the to-be-processed OCT image into the image lesion segmentation model for segmentation to obtain the lesion image, the OCT image segmentation method further comprises:
    calculating a region area of each lesion image to obtain a lesion region area;
    applying a weighted sum calculation to each lesion region area to obtain a lesion area.
  6. A segmentation model training apparatus, characterized in that the segmentation model training apparatus comprises:
    a sample image set acquisition module, configured to obtain a training sample image set, the training sample image set comprising an original OCT image and a gold standard image;
    a segmented image acquisition module, configured to input the original OCT image into a preset generator model for segmentation processing to obtain a first segmented image;
    a generator update module, configured to compare the first segmented image with the gold standard image through a preset discriminator model to obtain a comparison result, calculate a loss function of the generator model according to the comparison result, and update the generator model according to the loss function;
    a discriminator update module, configured to use the updated generator model to convert the first segmented image into a second segmented image, input the second segmented image and the gold standard image into the preset discriminator model, and update the preset discriminator model according to binary cross entropy to obtain an updated discriminator model;
    a lesion segmentation model training module, configured to iteratively train the updated generator model and the updated discriminator model until the loss function of the updated discriminator model converges, then stop the iterative training and determine the updated generator model obtained when the iterative training stops as an image lesion segmentation model.
  7. The segmentation model training apparatus according to claim 6, characterized in that the segmented image acquisition module comprises:
    a feature map acquisition unit, configured to input the original OCT image into a set of downsampling blocks of the preset generator model to obtain a feature map corresponding to the original OCT image, wherein the set of downsampling blocks is composed of N downsampling blocks connected in sequence, and N is a positive integer;
    a segmented image acquisition unit, configured to input the feature map into an abstract arrangement block to obtain the first segmented image, wherein the abstract arrangement block is composed of M arrangement-combination units connected in sequence, and M is a positive integer.
  8. The segmentation model training apparatus according to claim 6, characterized in that the lesion segmentation model training module comprises an iterative training unit configured to reversely adjust the updated generator model by using the updated discriminator model.
  9. An OCT image segmentation apparatus, characterized in that the OCT image segmentation apparatus comprises:
    a to-be-processed image acquisition module, configured to obtain a to-be-processed OCT image;
    a lesion image acquisition module, configured to input the to-be-processed OCT image into an image lesion segmentation model for segmentation to obtain a lesion image, wherein the image lesion segmentation model is obtained by training with the segmentation model training method according to any one of claims 1 to 3.
  10. The OCT image segmentation apparatus according to claim 9, characterized in that the OCT image segmentation apparatus further comprises a region area calculation module and a lesion area acquisition module;
    the region area calculation module is configured to calculate a region area of each lesion image to obtain a lesion region area;
    the lesion area acquisition module is configured to apply a weighted sum calculation to each lesion region area to obtain a lesion area.
  11. A computer device, comprising a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, characterized in that when the processor executes the computer-readable instructions, the following steps are implemented:
    obtaining a training sample image set, the training sample image set comprising an original OCT image and a gold standard image;
    inputting the original OCT image into a preset generator model for segmentation processing to obtain a first segmented image;
    comparing the first segmented image with the gold standard image through a preset discriminator model to obtain a comparison result, calculating a loss function of the generator model according to the comparison result, and updating the generator model according to the loss function;
    using the updated generator model to convert the first segmented image into a second segmented image, inputting the second segmented image and the gold standard image into the preset discriminator model, and updating the preset discriminator model according to binary cross entropy to obtain an updated discriminator model;
    iteratively training the updated generator model and the updated discriminator model until the loss function of the updated discriminator model converges, then stopping the iterative training and determining the updated generator model obtained when the iterative training stops as an image lesion segmentation model.
  12. The computer device according to claim 11, characterized in that the inputting of the original OCT image into the preset generator model for segmentation processing to obtain the first segmented image comprises:
    inputting the original OCT image into a set of downsampling blocks of the preset generator model to obtain a feature map corresponding to the original OCT image, wherein the set of downsampling blocks is composed of N downsampling blocks connected in sequence, and N is a positive integer;
    inputting the feature map into an abstract arrangement block to obtain the first segmented image, wherein the abstract arrangement block is composed of M arrangement-combination units connected in sequence, and M is a positive integer.
  13. The computer device according to claim 11, characterized in that the iterative training of the updated generator model and the updated discriminator model comprises:
    reversely adjusting the updated generator model by using the updated discriminator model.
  14. A computer device, comprising a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, characterized in that when the processor executes the computer-readable instructions, the following steps are implemented:
    obtaining a to-be-processed OCT image;
    inputting the to-be-processed OCT image into an image lesion segmentation model for segmentation to obtain a lesion image, wherein the image lesion segmentation model is obtained by training with the segmentation model training method according to any one of claims 1 to 3.
  15. The computer device according to claim 14, characterized in that after the inputting of the to-be-processed OCT image into the image lesion segmentation model for segmentation to obtain the lesion image, the processor further implements the following steps when executing the computer-readable instructions:
    calculating a region area of each lesion image to obtain a lesion region area;
    applying a weighted sum calculation to each lesion region area to obtain a lesion area.
  16. One or more readable storage media storing computer-readable instructions, characterized in that when the computer-readable instructions are executed by one or more processors, the one or more processors execute the following steps:
    obtaining a training sample image set, the training sample image set comprising an original OCT image and a gold standard image;
    inputting the original OCT image into a preset generator model for segmentation processing to obtain a first segmented image;
    comparing the first segmented image with the gold standard image through a preset discriminator model to obtain a comparison result, calculating a loss function of the generator model according to the comparison result, and updating the generator model according to the loss function;
    using the updated generator model to convert the first segmented image into a second segmented image, inputting the second segmented image and the gold standard image into the preset discriminator model, and updating the preset discriminator model according to binary cross entropy to obtain an updated discriminator model;
    iteratively training the updated generator model and the updated discriminator model until the loss function of the updated discriminator model converges, then stopping the iterative training and determining the updated generator model obtained when the iterative training stops as an image lesion segmentation model.
  17. The readable storage media according to claim 16, characterized in that the inputting of the original OCT image into the preset generator model for segmentation processing to obtain the first segmented image comprises:
    inputting the original OCT image into a set of downsampling blocks of the preset generator model to obtain a feature map corresponding to the original OCT image, wherein the set of downsampling blocks is composed of N downsampling blocks connected in sequence, and N is a positive integer;
    inputting the feature map into an abstract arrangement block to obtain the first segmented image, wherein the abstract arrangement block is composed of M arrangement-combination units connected in sequence, and M is a positive integer.
  18. The readable storage media according to claim 16, characterized in that the iterative training of the updated generator model and the updated discriminator model comprises:
    reversely adjusting the updated generator model by using the updated discriminator model.
  19. One or more readable storage media storing computer-readable instructions, characterized in that when the computer-readable instructions are executed by one or more processors, the one or more processors execute the following steps:
    obtaining a to-be-processed OCT image;
    inputting the to-be-processed OCT image into an image lesion segmentation model for segmentation to obtain a lesion image, wherein the image lesion segmentation model is obtained by training with the segmentation model training method according to any one of claims 1 to 3.
  20. The readable storage media according to claim 19, characterized in that after the inputting of the to-be-processed OCT image into the image lesion segmentation model for segmentation to obtain the lesion image, when the computer-readable instructions are executed by one or more processors, the one or more processors further execute the following steps:
    calculating a region area of each lesion image to obtain a lesion region area;
    applying a weighted sum calculation to each lesion region area to obtain a lesion area.
PCT/CN2019/117733 2019-01-09 2019-11-13 Segmentation model training method, oct image segmentation method and apparatus, device and medium WO2020143309A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910019566.1 2019-01-09
CN201910019566.1A CN109829894B (en) 2019-01-09 2019-01-09 Segmentation model training method, OCT image segmentation method, device, equipment and medium

Publications (1)

Publication Number Publication Date
WO2020143309A1 true WO2020143309A1 (en) 2020-07-16

Family

ID=66860177

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/117733 WO2020143309A1 (en) 2019-01-09 2019-11-13 Segmentation model training method, oct image segmentation method and apparatus, device and medium

Country Status (2)

Country Link
CN (1) CN109829894B (en)
WO (1) WO2020143309A1 (en)

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112348825A (en) * 2020-10-16 2021-02-09 佛山科学技术学院 DR-U-net network method and device for retinal blood flow image segmentation
CN112434631A (en) * 2020-12-01 2021-03-02 天冕信息技术(深圳)有限公司 Target object identification method and device, electronic equipment and readable storage medium
CN112435256A (en) * 2020-12-11 2021-03-02 北京大恒普信医疗技术有限公司 CNV active focus detection method and device based on image and electronic equipment
CN112508974A (en) * 2020-12-14 2021-03-16 北京达佳互联信息技术有限公司 Training method and device of image segmentation model, electronic equipment and storage medium
CN112634257A (en) * 2020-12-31 2021-04-09 常州奥创医疗科技有限公司 Fungus fluorescence detection method
CN112700408A (en) * 2020-12-28 2021-04-23 中国银联股份有限公司 Model training method, image quality evaluation method and device
CN112749746A (en) * 2021-01-12 2021-05-04 云南电网有限责任公司电力科学研究院 Method, system and device for iteratively updating defect sample
CN112884782A (en) * 2021-03-02 2021-06-01 深圳市瑞图生物技术有限公司 Biological object segmentation method, apparatus, computer device and storage medium
CN112990218A (en) * 2021-03-25 2021-06-18 北京百度网讯科技有限公司 Optimization method and device of image semantic segmentation model and electronic equipment
CN113269721A (en) * 2021-04-21 2021-08-17 上海联影智能医疗科技有限公司 Model training method and device, electronic equipment and storage medium
CN113344896A (en) * 2021-06-24 2021-09-03 鹏城实验室 Breast CT image focus segmentation model training method and system
CN113361535A (en) * 2021-06-30 2021-09-07 北京百度网讯科技有限公司 Image segmentation model training method, image segmentation method and related device
CN113743410A (en) * 2021-02-09 2021-12-03 京东数字科技控股股份有限公司 Image processing method, apparatus and computer-readable storage medium
CN114841878A (en) * 2022-04-27 2022-08-02 广东博迈医疗科技股份有限公司 Speckle denoising method and device for optical coherence tomography image and electronic equipment
CN114926471A (en) * 2022-05-24 2022-08-19 北京医准智能科技有限公司 Image segmentation method and device, electronic equipment and storage medium
CN115481736A (en) * 2022-11-10 2022-12-16 富联裕展科技(深圳)有限公司 Training method of welding slag map model, generation method of welding slag cutting model and equipment
CN116934747A (en) * 2023-09-15 2023-10-24 江西师范大学 Fundus image segmentation model training method, fundus image segmentation model training equipment and glaucoma auxiliary diagnosis system
CN117274278A (en) * 2023-09-28 2023-12-22 武汉大学人民医院(湖北省人民医院) Retina image focus part segmentation method and system based on simulated receptive field
CN117726642A (en) * 2024-02-07 2024-03-19 中国科学院宁波材料技术与工程研究所 High reflection focus segmentation method and device for optical coherence tomography image
CN117726642B (en) * 2024-02-07 2024-05-31 中国科学院宁波材料技术与工程研究所 High reflection focus segmentation method and device for optical coherence tomography image

Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109829894B (en) * 2019-01-09 2022-04-26 平安科技(深圳)有限公司 Segmentation model training method, OCT image segmentation method, device, equipment and medium
CN110189341B (en) * 2019-06-05 2021-08-10 北京青燕祥云科技有限公司 Image segmentation model training method, image segmentation method and device
CN110363782B (en) * 2019-06-13 2023-06-16 平安科技(深圳)有限公司 Region identification method and device based on edge identification algorithm and electronic equipment
CN110414526B (en) * 2019-07-31 2022-04-08 达闼科技(北京)有限公司 Training method, training device, server and storage medium for semantic segmentation network
CN110428579B (en) * 2019-08-08 2022-01-18 刘宝鑫 Indoor monitoring system, method and device based on image recognition
CN112418255A (en) * 2019-08-21 2021-02-26 东北大学秦皇岛分校 Unsupervised anomaly detection scheme based on one-dimensional convolution generation type countermeasure network
CN110599492B (en) * 2019-09-19 2024-02-06 腾讯科技(深圳)有限公司 Training method and device for image segmentation model, electronic equipment and storage medium
CN110889826B (en) * 2019-10-30 2024-04-19 平安科技(深圳)有限公司 Eye OCT image focus region segmentation method, device and terminal equipment
CN112836701A (en) * 2019-11-25 2021-05-25 中国移动通信集团浙江有限公司 Face recognition method and device and computing equipment
CN111080592B (en) * 2019-12-06 2021-06-01 广州柏视医疗科技有限公司 Rib extraction method and device based on deep learning
CN111340819B (en) * 2020-02-10 2023-09-12 腾讯科技(深圳)有限公司 Image segmentation method, device and storage medium
CN111311565A (en) * 2020-02-11 2020-06-19 平安科技(深圳)有限公司 Eye OCT image-based detection method and device for positioning points of optic cups and optic discs
CN111462263B (en) * 2020-03-16 2023-08-11 云知声智能科技股份有限公司 Image generation method and device
CN112348774A (en) * 2020-09-29 2021-02-09 深圳市罗湖区人民医院 CT image segmentation method, terminal and storage medium suitable for bladder cancer
CN112232360A (en) * 2020-09-30 2021-01-15 上海眼控科技股份有限公司 Image retrieval model optimization method, image retrieval device and storage medium
CN112435212A (en) * 2020-10-15 2021-03-02 杭州脉流科技有限公司 Brain focus region volume obtaining method and device based on deep learning, computer equipment and storage medium
CN112508097B (en) * 2020-12-08 2024-01-19 深圳市优必选科技股份有限公司 Image conversion model training method and device, terminal equipment and storage medium
CN113140291B (en) * 2020-12-17 2022-05-10 慧影医疗科技(北京)股份有限公司 Image segmentation method and device, model training method and electronic equipment
CN112884770B (en) * 2021-04-28 2021-07-02 腾讯科技(深圳)有限公司 Image segmentation processing method and device and computer equipment
CN113326851B (en) * 2021-05-21 2023-10-27 中国科学院深圳先进技术研究院 Image feature extraction method and device, electronic equipment and storage medium
CN113421270B (en) * 2021-07-05 2022-07-19 上海市精神卫生中心(上海市心理咨询培训中心) Method, system, device, processor and storage medium for realizing medical image domain adaptive segmentation based on single-center calibration data
CN114240954B (en) * 2021-12-16 2022-07-08 推想医疗科技股份有限公司 Network model training method and device and image segmentation method and device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108537801A (en) * 2018-03-29 2018-09-14 山东大学 Retinal angioma image segmentation method based on generative adversarial network
CN108665463A (en) * 2018-03-30 2018-10-16 哈尔滨理工大学 Cervical cell image segmentation method based on adversarial generative network
CN108764342A (en) * 2018-05-29 2018-11-06 广东技术师范学院 Semantic segmentation method for optic disc and optic cup in fundus images
US20180330187A1 (en) * 2017-05-11 2018-11-15 Digitalglobe, Inc. Shape-based segmentation using hierarchical image representations for automatic training data generation and search space specification for machine learning algorithms
CN109166126A (en) * 2018-08-13 2019-01-08 苏州比格威医疗科技有限公司 Method for segmenting lacquer cracks in ICGA images based on a conditional generative adversarial network
CN109829894A (en) * 2019-01-09 2019-05-31 平安科技(深圳)有限公司 Segmentation model training method, OCT image segmentation method, device, equipment and medium

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112348825A (en) * 2020-10-16 2021-02-09 佛山科学技术学院 DR-U-net network method and device for retinal blood flow image segmentation
CN112434631A (en) * 2020-12-01 2021-03-02 天冕信息技术(深圳)有限公司 Target object identification method and device, electronic equipment and readable storage medium
CN112435256A (en) * 2020-12-11 2021-03-02 北京大恒普信医疗技术有限公司 Image-based CNV active lesion detection method and device, and electronic equipment
CN112508974A (en) * 2020-12-14 2021-03-16 北京达佳互联信息技术有限公司 Training method and device of image segmentation model, electronic equipment and storage medium
CN112508974B (en) * 2020-12-14 2024-06-11 北京达佳互联信息技术有限公司 Training method and device for image segmentation model, electronic equipment and storage medium
CN112700408A (en) * 2020-12-28 2021-04-23 中国银联股份有限公司 Model training method, image quality evaluation method and device
CN112700408B (en) * 2020-12-28 2023-09-08 中国银联股份有限公司 Model training method, image quality evaluation method and device
CN112634257B (en) * 2020-12-31 2023-10-27 常州奥创医疗科技有限公司 Fungus fluorescence detection method
CN112634257A (en) * 2020-12-31 2021-04-09 常州奥创医疗科技有限公司 Fungus fluorescence detection method
CN112749746A (en) * 2021-01-12 2021-05-04 云南电网有限责任公司电力科学研究院 Method, system and device for iteratively updating defect sample
CN113743410A (en) * 2021-02-09 2021-12-03 京东数字科技控股股份有限公司 Image processing method, apparatus and computer-readable storage medium
CN113743410B (en) * 2021-02-09 2024-04-09 京东科技控股股份有限公司 Image processing method, apparatus and computer readable storage medium
CN112884782B (en) * 2021-03-02 2024-01-05 深圳市瑞图生物技术有限公司 Biological object segmentation method, apparatus, computer device, and storage medium
CN112884782A (en) * 2021-03-02 2021-06-01 深圳市瑞图生物技术有限公司 Biological object segmentation method, apparatus, computer device and storage medium
CN112990218A (en) * 2021-03-25 2021-06-18 北京百度网讯科技有限公司 Optimization method and device of image semantic segmentation model and electronic equipment
CN113269721A (en) * 2021-04-21 2021-08-17 上海联影智能医疗科技有限公司 Model training method and device, electronic equipment and storage medium
CN113269721B (en) * 2021-04-21 2024-05-17 上海联影智能医疗科技有限公司 Model training method and device, electronic equipment and storage medium
CN113344896A (en) * 2021-06-24 2021-09-03 鹏城实验室 Breast CT image focus segmentation model training method and system
CN113344896B (en) * 2021-06-24 2023-01-17 鹏城实验室 Breast CT image focus segmentation model training method and system
CN113361535A (en) * 2021-06-30 2021-09-07 北京百度网讯科技有限公司 Image segmentation model training method, image segmentation method and related device
CN113361535B (en) * 2021-06-30 2023-08-01 北京百度网讯科技有限公司 Image segmentation model training, image segmentation method and related device
CN114841878A (en) * 2022-04-27 2022-08-02 广东博迈医疗科技股份有限公司 Speckle denoising method and device for optical coherence tomography image and electronic equipment
CN114926471A (en) * 2022-05-24 2022-08-19 北京医准智能科技有限公司 Image segmentation method and device, electronic equipment and storage medium
CN115481736A (en) * 2022-11-10 2022-12-16 富联裕展科技(深圳)有限公司 Training method of welding slag map model, generation method of welding slag cutting model and equipment
CN116934747B (en) * 2023-09-15 2023-11-28 江西师范大学 Fundus image segmentation model training method, fundus image segmentation model training equipment and glaucoma auxiliary diagnosis system
CN116934747A (en) * 2023-09-15 2023-10-24 江西师范大学 Fundus image segmentation model training method, fundus image segmentation model training equipment and glaucoma auxiliary diagnosis system
CN117274278A (en) * 2023-09-28 2023-12-22 武汉大学人民医院(湖北省人民医院) Retinal image lesion area segmentation method and system based on simulated receptive field
CN117274278B (en) * 2023-09-28 2024-04-02 武汉大学人民医院(湖北省人民医院) Retinal image lesion area segmentation method and system based on simulated receptive field
CN117726642A (en) * 2024-02-07 2024-03-19 中国科学院宁波材料技术与工程研究所 High reflection focus segmentation method and device for optical coherence tomography image
CN117726642B (en) * 2024-02-07 2024-05-31 中国科学院宁波材料技术与工程研究所 High reflection focus segmentation method and device for optical coherence tomography image

Also Published As

Publication number Publication date
CN109829894A (en) 2019-05-31
CN109829894B (en) 2022-04-26

Similar Documents

Publication Publication Date Title
WO2020143309A1 (en) Segmentation model training method, oct image segmentation method and apparatus, device and medium
WO2020215672A1 (en) Method, apparatus, and device for detecting and locating lesion in medical image, and storage medium
EP3674968A1 (en) Image classifying method, server and computer readable storage medium
BR112021001576A2 (en) system and method for eye condition determinations based on ia
Kumar et al. Deep transfer learning approaches to predict glaucoma, cataract, choroidal neovascularization, diabetic macular edema, drusen and healthy eyes: an experimental review
JP2022540634A (en) 3D Point Cloud Object Detection and Instance Segmentation Based on Deep Learning
Mayya et al. Automated microaneurysms detection for early diagnosis of diabetic retinopathy: A Comprehensive review
WO2021082691A1 (en) Segmentation method and apparatus for lesion area of eye OCT image, and terminal device
CN111709485B (en) Medical image processing method, device and computer equipment
Sharafeldeen et al. Precise higher-order reflectivity and morphology models for early diagnosis of diabetic retinopathy using OCT images
Kim et al. Few-shot learning using a small-sized dataset of high-resolution fundus images for glaucoma diagnosis
Singh et al. Deep learning system applicability for rapid glaucoma prediction from fundus images across various data sets
Kumar et al. Redefining Retinal Lesion Segmentation: A Quantum Leap With DL-UNet Enhanced Auto Encoder-Decoder for Fundus Image Analysis
Xiao et al. Major automatic diabetic retinopathy screening systems and related core algorithms: a review
WO2021114623A1 (en) Method, apparatus, computer device, and storage medium for identifying persons having deformed spinal columns
WO2021120753A1 (en) Method and apparatus for recognition of luminal area in choroidal vessels, device, and medium
Hassan et al. Multilayered deep structure tensor delaunay triangulation and morphing based automated diagnosis and 3D presentation of human macula
Cheng Sparse range-constrained learning and its application for medical image grading
Sadhana et al. An intelligent technique for detection of diabetic retinopathy using improved alexnet model based convolutional neural network
US20210374955A1 (en) Retinal color fundus image analysis for detection of age-related macular degeneration
Gautam et al. An adaptive localization of pupil degraded by eyelash occlusion and poor contrast
Yenegeta et al. TrachomaNet: Detection and grading of trachoma using texture feature based deep convolutional neural network
Lee et al. Grading diabetic retinopathy severity using modern convolution neural networks (CNN)
Opoku et al. CLAHE-CapsNet: Efficient retina optical coherence tomography classification using capsule networks with contrast limited adaptive histogram equalization
US20230298175A1 (en) Machine learning model based method and analysis system for performing covid-19 testing according to eye image captured by smartphone

Legal Events

Date Code Title Description
121 Ep: The EPO has been informed by WIPO that EP was designated in this application
Ref document number: 19909312
Country of ref document: EP
Kind code of ref document: A1

NENP Non-entry into the national phase
Ref country code: DE

122 Ep: PCT application non-entry in European phase
Ref document number: 19909312
Country of ref document: EP
Kind code of ref document: A1