CN116071622B - Stomach image recognition model construction method and system based on deep learning
- Publication number: CN116071622B (application CN202310357698.1A)
- Authority
- CN
- China
- Prior art keywords: image, target, stomach, training, determining
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
- G06V10/765—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, using rules for classification or partitioning the feature space
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
- G06V20/70—Labelling scene content, e.g. deriving syntactic or semantic representations
- G06V2201/031—Recognition of patterns in medical or anatomical images of internal organs
Abstract
The invention relates to the field of computer technology and provides a stomach image recognition model construction method and system based on deep learning. The method comprises the following steps: acquiring a first training image set; determining a first size based on the size of the target training image; determining network parameters based on the first size and constructing a first network to be trained based on the network parameters; and training the first network to be trained on the first training image set to obtain a stomach image recognition model. With the method provided by the invention, each part of a stomach image can be labeled by the stomach image recognition model, so medical staff no longer need to label the images manually, which reduces their workload. At the same time, because the network parameters of the stomach image recognition model are determined from the first size, the parameters are set automatically and each part of the stomach can be labeled more accurately by the model.
Description
Technical Field
The invention relates to the field of computer technology, and in particular to a stomach image recognition model construction method and system based on deep learning.
Background
At present, identifying the parts shown in capsule gastroscopy images relies mainly on manual work: doctors must mark each part of the stomach on the capsule gastroscope images, which increases their workload. Existing approaches that identify and mark the stomach parts with the instance segmentation network Mask R-CNN suffer from inaccurate labeling.
Disclosure of Invention
The invention provides a stomach image recognition model construction method and system based on deep learning, aiming to reduce the burden on doctors and improve the labeling precision for each stomach part.
In a first aspect, the present invention provides a method for constructing a stomach image recognition model based on deep learning, including:
acquiring a first training image set;
determining a first size based on the size of the target training image; the target training image is any image in the first training image set;
determining network parameters based on the first size, and constructing a first network to be trained based on the network parameters;
training the first network to be trained based on the first training image set to obtain a stomach image recognition model;
The first training image set is a set of annotated stomach images.
In one embodiment, the determining the network parameter based on the first size includes:
determining a second dimension of the minimum feature map of the downsampling layer; the downsampling layer is a pooling layer of the first network to be trained;
determining a sampling number based on the first size and the second size, and determining an up-sampling layer number and a down-sampling layer number based on the sampling number;
determining a third size of the sampled image based on the number of samples and the first size;
and determining the up-sampling layer number, the down-sampling layer number and the third size as the network parameters.
Training the first network to be trained based on the first training image set to obtain a stomach image recognition model, including:
determining a target weight of a target upsampling layer based on the number of layers of the target upsampling layer; the target upsampling layer is any upsampling layer of the first network to be trained;
determining a loss value of the target upsampling layer based on the target weight and a loss function;
superposing the loss values of all the target up-sampling layers to obtain a training loss value of the target model; the target model is a model obtained when the first network to be trained is subjected to any iteration training;
And determining the target model with the minimum training loss value as the stomach image recognition model.
The training the first network to be trained based on the first training image set, after obtaining the stomach image recognition model, further includes:
acquiring a first test image set; the first test image set is a set of unlabeled stomach images;
and labeling all stomach images in the first test image set based on the stomach image recognition model to obtain a labeled first test image set.
Labeling all stomach images in the first test image set based on the stomach image recognition model to obtain a labeled first test image set, wherein the labeling comprises the following steps:
determining a first area occupation ratio of a target part in a first target image; the first target image is any image in the first training image set, and the target part is any part of the stomach;
determining an area occupation ratio range of the target part based on the first area occupation ratio;
determining a second area occupation ratio of a target part in a second target image; the second target image is any image in the first test image set;
If the second area ratio is within the area ratio range, marking a target part in the second target image through the stomach image recognition model to obtain a marked second target image;
and collecting each marked second target image to obtain the marked first test image set.
The first network to be trained is a U-Net network.
The acquiring a first training image set includes:
acquiring a second training image set and a second test image set;
cutting and normalizing all images in the second training image set to obtain a preprocessed image data set;
training a second network to be trained based on the image data set to obtain a quality difference image recognition model; the second network to be trained is a classified neural network;
classifying the stomach images in the second test image set based on the quality difference image recognition model to obtain good-quality training images;
collecting the good-quality training images to obtain the first training image set;
the second training image set is a set of stomach images meeting a preset standard, the second test image set is a set of stomach images to be classified according to the preset standard, and the preset standard is: the brightness of the image is lower than a first preset brightness, and/or the brightness of the image is higher than a second preset brightness, and/or the sharpness of the image is lower than a preset sharpness, and/or the image contains ghosting.
In a second aspect, the present invention provides a stomach image recognition model building system based on deep learning, comprising:
the image acquisition module is used for acquiring a first training image set;
the size determining module is used for determining a first size based on the size of the target training image; the target training image is any image in the first training image set;
the network construction module is used for determining network parameters based on the first size and constructing a first network to be trained based on the network parameters;
the model training module is used for training the first network to be trained based on the first training image set to obtain a stomach image recognition model;
the first training image set is a set of annotated stomach images.
In a third aspect, the present invention further provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the deep learning-based stomach image recognition model construction method of the first aspect when the program is executed.
In a fourth aspect, the present invention also provides a non-transitory computer-readable storage medium storing a computer program which, when executed by a processor, implements the deep learning-based stomach image recognition model construction method of the first aspect.
The invention provides a stomach image recognition model construction method and system based on deep learning: a first training image set is acquired; a first size is determined based on the size of the target training image; network parameters are determined based on the first size, and a first network to be trained is constructed based on the network parameters; and the first network to be trained is trained on the first training image set to obtain a stomach image recognition model.
When the parts of a stomach image are identified, each part is labeled by the stomach image recognition model, so medical staff no longer need to label the images manually, which reduces their workload. Meanwhile, the network parameters of the first network to be trained are determined from the first size, so the parameters are set automatically without additional manual configuration, which further lightens the burden on medical staff; the stomach images and the stomach image recognition model are also better matched, so each part of the stomach can be labeled more accurately by the model.
Drawings
In order to illustrate the technical solutions of the present invention more clearly, the drawings used in the embodiments or in the description of the prior art are briefly introduced below. It is obvious that the drawings in the following description show some embodiments of the present invention, and that other drawings can be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a flow chart of a method for constructing a deep learning-based stomach image recognition model;
FIG. 2 is a block diagram of the deep learning-based stomach image recognition model building system provided by the invention;
FIG. 3 is a schematic diagram of the physical structure of an electronic device.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The embodiments of the present invention provide a method for constructing a deep learning-based stomach image recognition model. It should be noted that although a logical order is shown in the flowchart, in some cases the steps shown or described may be performed in a different order.
Referring to fig. 1, fig. 1 is a flowchart of a method for constructing a deep learning-based stomach image recognition model provided by the invention. The stomach image recognition model construction method based on deep learning provided by the embodiment of the invention comprises the following steps:
Step 101, acquiring a first training image set;
step 102, determining a first size based on the size of the target training image;
step 103, determining network parameters based on the first size, and constructing a first network to be trained based on the network parameters;
and step 104, training the first network to be trained based on the first training image set to obtain a stomach image recognition model.
It should be noted that, in the embodiment of the present invention, the model building system is used as an execution subject to describe the stomach image recognition model building method based on deep learning, and the execution subject is not limited to the model building system in actual operation.
Specifically, when the parts of a stomach image need to be identified and labeled, the user inputs a first training image set into the model building system. The stomach parts to be labeled include the cardia, the fundus, the greater curvature, the lesser curvature, the gastric angle and the antrum, and the first training image set is a set of labeled stomach images.
In the embodiment of the invention, the stomach image is obtained through the capsule endoscope and is manually marked to obtain the first training image set.
Further, after the model building system obtains the first training image set, it obtains the size of each first training image in the set and determines the first size from these sizes. An image size is expressed as x-axis size × y-axis size; for example, an image size of 17 × 9 means an x-axis size of 17 and a y-axis size of 9. The x-axis size can be understood as the number of horizontal pixels and the y-axis size as the number of vertical pixels.
It should be further noted that, after the model building system obtains the size of each first training image, there are three ways to determine the first size: from the median of the first training image sizes, from the mean of the first training image sizes, or from the mode of the first training image sizes.
Determining the first size from the median of the first training image sizes, specifically: the model building system obtains the x-axis size of each first training image and determines the median of these x-axis sizes as the x-axis median. Further, it obtains the y-axis size of each first training image and determines the median of these y-axis sizes as the y-axis median. Finally, the model building system determines x-axis median × y-axis median as the first size.
Determining the first size from the mean of the first training image sizes, specifically: the model building system obtains the x-axis size of each first training image and determines the mean of these x-axis sizes as the x-axis mean. Further, it obtains the y-axis size of each first training image and determines the mean of these y-axis sizes as the y-axis mean. Finally, the model building system determines x-axis mean × y-axis mean as the first size.
Determining the first size from the mode of the first training image sizes, specifically: the model building system obtains the x-axis size of each first training image and determines the mode of these x-axis sizes as the x-axis mode. Further, it obtains the y-axis size of each first training image and determines the mode of these y-axis sizes as the y-axis mode. Finally, the model building system determines x-axis mode × y-axis mode as the first size.
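As a minimal sketch of these three options, assuming each image size is given as an (x, y) tuple (the helper below and its name are illustrative, not part of the invention):

```python
import statistics

def first_size(sizes, how="median"):
    """Determine the first size from the (x, y) sizes of the first
    training images, using the median, mean or mode of each axis."""
    pick = {"median": statistics.median,
            "mean": statistics.mean,
            "mode": statistics.mode}[how]
    xs, ys = zip(*sizes)
    return pick(xs), pick(ys)

print(first_size([(16, 9), (16, 10), (18, 9)]))  # (16, 9)
```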
Further, after the model building system obtains the first size, the sampling times are determined according to the x-axis size and the y-axis size of the first size, and the network parameters are determined according to the sampling times.
Further, the model building system builds a first network to be trained according to the network parameters.
Further, the model building system trains the first network to be trained through the first training image set to obtain a stomach image recognition model.
The invention provides a stomach image recognition model construction method and system based on deep learning: a first training image set is acquired; a first size is determined based on the size of the target training image; network parameters are determined based on the first size, and a first network to be trained is constructed based on the network parameters; and the first network to be trained is trained on the first training image set to obtain a stomach image recognition model.
When the parts of a stomach image are identified, each part is labeled by the stomach image recognition model, so medical staff no longer need to label the images manually, which reduces their workload. Meanwhile, the network parameters of the first network to be trained are determined from the first size, so the parameters are set automatically without additional manual configuration, which further lightens the burden on medical staff; the stomach images and the stomach image recognition model are also better matched, so each part of the stomach can be labeled more accurately by the model.
Further, determining the network parameter based on the first size in step 103 includes:
determining a second dimension of the minimum feature map of the downsampling layer; the downsampling layer is a pooling layer of the first network to be trained;
determining a sampling number based on the first size and the second size, and determining an up-sampling layer number and a down-sampling layer number based on the sampling number;
determining a third size of the sampled image based on the number of samples and the first size;
and determining the up-sampling layer number, the down-sampling layer number and the third size as the network parameters.
Specifically, after obtaining the first size, the model building system obtains the second size of the minimum feature map of the downsampling layer, where the downsampling layer is a pooling layer of the first network to be trained and the second size is set by the user according to actual needs. It should be noted that the sampling effect of the downsampling layer is best when the second size is 4 × 4, so a stomach image recognition model obtained with this setting can mark each part of the stomach more accurately.
Further, the model building system obtains a first x-axis dimension and a first y-axis dimension in the first dimension, and a second x-axis dimension and a second y-axis dimension in the second dimension.
Further, the model building system determines the maximum x-axis downsampling count from the first x-axis size and the second x-axis size. The maximum x-axis downsampling count is calculated as:
x₁ / 2^x ≥ x₂ ①
where x₁ is the first x-axis size, x₂ is the second x-axis size, and x is the first parameter; the maximum value of x for which formula ① holds is the maximum x-axis downsampling count.
Further, the model building system determines the maximum y-axis downsampling count from the first y-axis size and the second y-axis size. The maximum y-axis downsampling count is calculated as:
y₁ / 2^y ≥ y₂ ②
where y₁ is the first y-axis size, y₂ is the second y-axis size, and y is the second parameter; the maximum value of y for which formula ② holds is the maximum y-axis downsampling count.
Further, after the model building system obtains the maximum x-axis downsampling count and the maximum y-axis downsampling count, it determines the smaller of the two as the sampling number.
In one embodiment, the model building system obtains a first size of 16 × 9 and a second size of 4 × 4, so the first x-axis size is 16, the first y-axis size is 9, the second x-axis size is 4 and the second y-axis size is 4. From the first and second x-axis sizes, the maximum x-axis downsampling count is 2; from the first and second y-axis sizes, the maximum y-axis downsampling count is 1. The model building system therefore determines the smaller of the two values as the sampling number, that is, the sampling number is 1.
Further, after obtaining the sampling number, the model building system determines it as both the number of upsampling layers and the number of downsampling layers of the first network to be trained.
Further, the model building system determines the third x-axis size of the sampled image from the sampling number and the first x-axis size. The third x-axis size is calculated as:
x₃ = i · 2^n ≥ x₁ ③
where x₁ is the first x-axis size, n is the sampling number, i is the third parameter (a positive integer), and x₃ is the first value; the minimum value of x₃ for which formula ③ holds is the third x-axis size.
Further, the model building system determines the third y-axis size of the sampled image from the sampling number and the first y-axis size. The third y-axis size is calculated as:
y₃ = j · 2^n ≥ y₁ ④
where y₁ is the first y-axis size, n is the sampling number, j is the fourth parameter (a positive integer), and y₃ is the second value; the minimum value of y₃ for which formula ④ holds is the third y-axis size.
Further, after the model building system obtains the third x-axis size and the third y-axis size, the third size of the sampled image is determined as third x-axis size × third y-axis size.
Further, the model building system determines the obtained up-sampling layer number, down-sampling layer number and third size as network parameters.
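The parameter derivation of formulas ① to ④ can be sketched in Python as follows; this is a minimal sketch assuming a second size of 4 × 4, and the function name and return layout are illustrative:

```python
import math

def derive_network_parameters(first_size, second_size=(4, 4)):
    """Derive the sampling number and the third (padded) size of the
    sampled image from the first size, per formulas (1)-(4) above."""
    x1, y1 = first_size    # first x-axis / y-axis size
    x2, y2 = second_size   # second size: minimal feature map of the pooling layer
    # Formulas (1)/(2): largest x with x1 / 2**x >= x2, same for y.
    max_down_x = int(math.floor(math.log2(x1 / x2)))
    max_down_y = int(math.floor(math.log2(y1 / y2)))
    n = min(max_down_x, max_down_y)  # sampling number
    # Formulas (3)/(4): smallest multiple of 2**n not below the first size.
    x3 = math.ceil(x1 / 2 ** n) * 2 ** n
    y3 = math.ceil(y1 / 2 ** n) * 2 ** n
    return {"up_layers": n, "down_layers": n, "third_size": (x3, y3)}

print(derive_network_parameters((16, 9)))
# {'up_layers': 1, 'down_layers': 1, 'third_size': (16, 10)}
```

On the 16 × 9 example above, this reproduces a sampling number of 1 and a third size of 16 × 10.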
According to the embodiment of the invention, the sampling number is determined from the first size and the second size, so the number of sampling layers is determined by all stomach images in the first training image set. This improves the fit between the first training image set and the first network to be trained, so that the features of the first training image set are extracted efficiently and completely and the resulting stomach image recognition model can label the first test image set accurately. In addition, the third size of the sampled image is determined from the sampling number and the first size, so the third size stays close to the size of each first training image; when the feature maps of the first training images are extracted, the images are enlarged or reduced as little as possible, which reduces feature loss and lets the trained stomach image recognition model label stomach images more accurately.
Further, training the first network to be trained based on the first training image set in step 104 to obtain a stomach image recognition model, including:
determining a target weight of a target upsampling layer based on the number of layers of the target upsampling layer; the target upsampling layer is any upsampling layer of the first network to be trained;
Determining a loss value of the target upsampling layer based on the target weight and a loss function;
superposing the loss values of all the target up-sampling layers to obtain a training loss value of the target model; the target model is a model obtained when the first network to be trained is subjected to any iteration training;
and determining the target model with the minimum training loss value as the stomach image recognition model.
It should be noted that, in the first network to be trained of the embodiment of the present invention, each upsampling layer is provided with a separate convolution layer with a 1 × 1 kernel, which is used for loss calculation.
Specifically, the model building system obtains the layer number of a target upsampling layer and determines the target weight from that layer number, where the target upsampling layer is any upsampling layer of the first network to be trained. The target weight is calculated as:
k = 2^(n-1) / Σ_{x=1}^{m} 2^(x-1)
where k is the target weight, n is the layer number of the target upsampling layer, m is the total number of upsampling layers, and x is the summation variable.
Further, in each iterative training performed on the first network to be trained, the model building system calculates an initial loss value of the target upsampling layer through the loss function, and multiplies the initial loss value by the target weight to obtain the loss value of the target upsampling layer.
In one embodiment, the model building system obtains the loss value of the target upsampling layer through the Dice coefficient and the cross entropy loss function, specifically: the model construction system calculates a first loss value of the target upsampling layer through a Dice coefficient, calculates a second loss value of the target upsampling layer through a cross entropy loss function, and adds the first loss value and the second loss value to obtain an initial loss value of the target upsampling layer. Further, the model building system multiplies the initial loss value by the target weight to obtain a loss value of the target upsampling layer.
Further, after the model construction system obtains the loss values of all the target upsampling layers, the loss values of all the target upsampling layers are overlapped to obtain a training loss value of a target model, wherein the target model is a model obtained through the iterative training.
Further, the model construction system determines the target model with the minimum training loss value as the stomach image recognition model. It should be noted that the model building system may instead determine the model with the best test metrics as the stomach image recognition model.
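A minimal PyTorch sketch of this weighted deep-supervision loss, assuming the weight formula above and that every side output has already been resized to the label resolution (all names are illustrative):

```python
import torch
import torch.nn.functional as F

def dice_loss(probs, target_one_hot, eps=1e-6):
    """Soft Dice loss; probs and target_one_hot are (N, C, H, W)."""
    inter = (probs * target_one_hot).sum(dim=(2, 3))
    denom = probs.sum(dim=(2, 3)) + target_one_hot.sum(dim=(2, 3))
    return 1.0 - ((2 * inter + eps) / (denom + eps)).mean()

def training_loss(side_outputs, target_one_hot, target_idx):
    """Weighted sum of the per-upsampling-layer losses. side_outputs[n-1]
    holds the logits from the 1x1 convolution of upsampling layer n."""
    m = len(side_outputs)
    norm = sum(2 ** (x - 1) for x in range(1, m + 1))
    total = 0.0
    for n, logits in enumerate(side_outputs, start=1):
        k = 2 ** (n - 1) / norm  # target weight of layer n
        probs = logits.softmax(dim=1)
        # initial loss = Dice loss + cross-entropy loss, then scaled by k
        initial = dice_loss(probs, target_one_hot) + F.cross_entropy(logits, target_idx)
        total = total + k * initial
    return total
```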
According to the embodiment of the invention, the target weight is determined from the layer number of the target upsampling layer, so the feature loss of each upsampling layer contributes to network training in proportion and the resulting training loss value is more accurate; the model building system can therefore reliably determine the target model with the best training effect as the stomach image recognition model.
Further, training the first network to be trained based on the first training image set, and after obtaining the stomach image recognition model, further including:
acquiring a first test image set; the first test image set is a set of unlabeled stomach images;
and labeling all stomach images in the first test image set based on the stomach image recognition model to obtain a labeled first test image set.
Specifically, the user inputs the stomach images to be annotated into the model building system, which determines them as the first test image set.
Further, the model building system marks all stomach images in the first test image set through the stomach image recognition model to obtain a marked first test image set.
In one embodiment, the model building system determines a stomach image acquired by the capsule endoscope as a first training image set and a first test image set, and trains a first network to be trained through the first training image set to obtain a stomach image recognition model. Further, the model building system marks all stomach images in the first test image set through the stomach image recognition model to obtain a marked first test image set.
According to the embodiment of the invention, the first test image set is marked through the stomach image recognition model, so that manual marking by medical staff is not needed, and the burden of the medical staff is reduced.
Further, labeling all stomach images in the first test image set based on the stomach image recognition model to obtain a labeled first test image set, including:
determining a first area occupation ratio of a target part in a first target image; the first target image is any image in the first training image set, and the target part is any part of the stomach;
determining an area occupation ratio range of the target part based on the first area occupation ratio;
determining a second area occupation ratio of a target part in a second target image; the second target image is any image in the first test image set;
if the second area ratio is within the area ratio range, marking a target part in the second target image through the stomach image recognition model to obtain a marked second target image;
and collecting each marked second target image to obtain the marked first test image set.
Specifically, the model building system determines the position of a target part in a first target image and obtains the area of the target part in the first target image, where the first target image is any image in the first training image set and the target part is any part of the stomach.
Further, the model building system divides the area of the target part in the first target image by the area of the first target image to obtain a first area ratio of the target part.
Further, the model building system aggregates all the first area duty ratios to obtain a first area duty ratio set.
Further, the model building system determines the maximum value in the first area ratio set as the upper limit of the area ratio range of the target portion, and determines the minimum value in the first area ratio set as the lower limit of the area ratio range of the target portion, so that the area ratio range of the target portion can be obtained.
Further, the model building system determines the position of the target part in a second target image, and obtains the area of the target part in the second target image, wherein the second target image is any image in the first test image set.
Further, the model building system divides the area of the target part in the second target image by the area of the second target image to obtain a second area ratio of the target part.
Further, if the second area ratio is within the area ratio range, the model building system marks the target part in the second target image through the stomach image recognition model, obtaining the marked second target image. It should be noted that the model building system may label multiple target parts in the second target image through the stomach image recognition model.
Further, the model building system gathers each annotated second target image to obtain a annotated first test image set.
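A minimal sketch of this area-ratio screening, assuming binary numpy masks for the target part (the helper names are illustrative):

```python
import numpy as np

def area_ratio(mask):
    """Fraction of image pixels covered by the target part's mask."""
    return float(mask.sum()) / mask.size

def area_ratio_range(train_masks):
    """Area-ratio range of a target part over the first training image set."""
    ratios = [area_ratio(m) for m in train_masks]
    return min(ratios), max(ratios)

def keep_annotation(pred_mask, ratio_range):
    """Keep an annotation only if its second area ratio lies in the range."""
    low, high = ratio_range
    return low <= area_ratio(pred_mask) <= high
```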
In one embodiment, the model building system trains the first network to be trained by adopting five-fold cross validation to obtain five stomach image recognition models, wherein the five stomach image recognition models are respectively: a first model, a second model, a third model, a fourth model, and a fifth model. Further, the model construction system determines a first area where a target part in the second target image is located through the first model, determines a second area where the target part in the second target image is located through the second model, determines a third area where the target part in the second target image is located through the third model, determines a fourth area where the target part in the second target image is located through the fourth model, and determines a fifth area where the target part in the second target image is located through the fifth model. Further, the model building system determines a region where the first region, the second region, the third region, the fourth region and the fifth region overlap in the second target image as a final region where the target part is located in the second target image. Further, the model building system performs contour searching on a final area where the target part is located in the second target image, and fills holes in the contour to obtain a filled final area. Further, the model building system calculates a second area duty ratio of the final area after filling, and determines whether the second area duty ratio is within the area duty ratio range. If the second area ratio is within the area ratio range, the model building system marks a final area in the second target image through the stomach image recognition model.
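The overlap-and-fill step of this embodiment can be sketched as follows; scipy's binary_fill_holes stands in for the contour search and hole filling, whose exact method the source does not specify:

```python
import numpy as np
from scipy.ndimage import binary_fill_holes

def final_region(masks_from_five_models):
    """Intersect the regions predicted by the five cross-validation models,
    then fill any holes inside the resulting region."""
    overlap = np.logical_and.reduce(masks_from_five_models)
    return binary_fill_holes(overlap)
```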
According to the embodiment of the invention, the area occupation ratio range of the target part is determined from the first training image set, and the first test image set is screened against this range so that only first test images whose target part area meets the requirement are kept, avoiding useless labeling of target parts whose area is too small by the stomach image recognition model.
Further, the first network to be trained is a U-Net network.
Specifically, after the model building system determines the network parameters, it takes the U-Net network as the basic network structure of the training model, sets the initial channel number to 32 and sets the maximum channel number to 480.
Further, the model building system obtains the number of upsampling layers in the network parameters and determines twice that number as the number of upsampling convolution layers.
Further, the model building system obtains the number of downsampling layers in the network parameters and determines twice that number as the number of downsampling convolution layers.
Further, the model building system obtains a third size of the network parameters and determines it as the size of the sampled image.
Further, the model building system sets the number of epochs to 1000, with 500 batches per epoch.
Further, the model building system randomly applies elastic deformation, mirroring, rotation, scaling and gamma transformation to the stomach images in the first training image set by way of online augmentation, obtaining an augmented first training image set, for example as sketched below.
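A minimal sketch of such an online augmentation pipeline, using the albumentations library; the probability and limit values are illustrative, as the source does not specify them:

```python
import albumentations as A

# Randomly applied elastic deformation, mirroring, rotation, scaling
# and gamma transformation, kept consistent between image and mask.
augment = A.Compose([
    A.ElasticTransform(p=0.3),
    A.HorizontalFlip(p=0.5),
    A.Rotate(limit=30, p=0.5),
    A.RandomScale(scale_limit=0.2, p=0.5),
    A.RandomGamma(p=0.3),
])

# image and mask are numpy arrays for one training sample
augmented = augment(image=image, mask=mask)
```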
Further, the model building system trains the U-Net network on the augmented first training image set to obtain the stomach image recognition model.
The stomach image recognition model of the embodiment of the invention is trained on a U-Net network with a simple structure; on the premise of preserving the deep learning effect, reducing the learning depth of the model shortens training time, and the stomach image recognition model trained on the U-Net network labels stomach images more accurately.
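A minimal PyTorch sketch of a U-Net built from the derived parameters, with an initial channel number of 32, channels capped at 480, two 3 × 3 convolutions per resolution level, and a 1 × 1 side head per upsampling layer for the deep-supervision loss; the use of BatchNorm/ReLU and three input channels are assumptions, not prescribed by the source:

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    # two 3x3 convolutions per resolution level
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
    )

class StomachUNet(nn.Module):
    def __init__(self, n_down, n_classes, c0=32, c_max=480):
        super().__init__()
        chans = [min(c0 * 2 ** i, c_max) for i in range(n_down + 1)]
        self.enc = nn.ModuleList()
        c_in = 3  # assumed RGB capsule-endoscope input
        for c in chans:
            self.enc.append(conv_block(c_in, c))
            c_in = c
        self.pool = nn.MaxPool2d(2)
        self.up, self.dec, self.heads = nn.ModuleList(), nn.ModuleList(), nn.ModuleList()
        for i in range(n_down, 0, -1):
            self.up.append(nn.ConvTranspose2d(chans[i], chans[i - 1], 2, stride=2))
            self.dec.append(conv_block(chans[i - 1] * 2, chans[i - 1]))
            self.heads.append(nn.Conv2d(chans[i - 1], n_classes, 1))  # 1x1 loss head

    def forward(self, x):
        skips = []
        for i, block in enumerate(self.enc):
            x = block(x)
            if i < len(self.enc) - 1:
                skips.append(x)
                x = self.pool(x)
        side_outputs = []
        for up, dec, head in zip(self.up, self.dec, self.heads):
            x = dec(torch.cat([up(x), skips.pop()], dim=1))
            side_outputs.append(head(x))
        return side_outputs  # last entry is the full-resolution prediction
```

The returned side outputs feed directly into the weighted training loss sketched earlier.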
Further, the acquiring the first training image set in step 101 includes:
acquiring a second training image set and a second test image set;
cutting and normalizing all images in the second training image set to obtain a preprocessed image data set;
training a second network to be trained based on the image data set to obtain a quality difference image recognition model; the second network to be trained is a classified neural network;
classifying the stomach images in the second test image set based on the quality difference image recognition model to obtain good-quality training images;
collecting the good-quality training images to obtain the first training image set;
the second training image set is a set of stomach images meeting a preset standard, the second test image set is a set of stomach images to be classified according to the preset standard, and the preset standard is: the brightness of the image is lower than a first preset brightness, and/or the brightness of the image is higher than a second preset brightness, and/or the sharpness of the image is lower than a preset sharpness, and/or the image contains ghosting.
Specifically, after the captured stomach images are obtained through the capsule endoscope, the user needs to divide them into first stomach images and second stomach images.
Further, the user needs to select the stomach images meeting the preset standard from the first stomach images and input them into the model building system as the second training image set, where the preset standard is: the brightness of the image is lower than a first preset brightness, and/or the brightness of the image is higher than a second preset brightness, and/or the sharpness of the image is lower than a preset sharpness, and/or the image contains ghosting. It should be noted that the user determines the remaining first stomach images as the second test image set and inputs them into the model building system.
Further, after the model building system obtains the second training image set, all images in the second training image set are cut and normalized to obtain a preprocessed image data set.
Further, the model building system trains a second network to be trained through the image data set to obtain a quality difference image recognition model, wherein the second network to be trained is a classified neural network.
Further, after the model building system obtains the quality difference image recognition model, it classifies the stomach images in the second test image set through the model, obtaining poor-quality training images that meet the preset standard and good-quality training images that do not.
Further, the model building system gathers the good-quality training images to obtain the first training image set.
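A minimal sketch of this quality screening, assuming the second network to be trained is a binary classifier; ResNet-18, the crop size and the class indices are illustrative assumptions, as the source specifies only a classification neural network with crop-and-normalize preprocessing:

```python
import torch
from torchvision import models, transforms

# Crop-and-normalize preprocessing applied to the stomach images.
preprocess = transforms.Compose([
    transforms.CenterCrop(224),  # crop size is an assumption
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# The second network to be trained: a binary quality classifier.
classifier = models.resnet18(num_classes=2)

def screen_good_quality(images):
    """Keep only images the quality difference image recognition model
    classifies as good quality (class 0: does not meet the defect standard)."""
    classifier.eval()
    good = []
    with torch.no_grad():
        for img in images:  # img: PIL image
            x = preprocess(img).unsqueeze(0)
            if classifier(x).argmax(dim=1).item() == 0:
                good.append(img)
    return good  # the first training image set is built from these
```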
Further, the model building system obtains a size of each first training image in the first training image set, and determines the first size according to the size of each first training image.
Further, after the model building system obtains the first size, the sampling times are determined according to the x-axis size and the y-axis size of the first size, and the network parameters are determined according to the sampling times.
Further, the model building system builds a first network to be trained according to the network parameters.
Further, the model building system trains the first network to be trained through the first training image set to obtain a stomach image recognition model.
Further, after the stomach image recognition model is obtained, the model construction system classifies the second stomach images through the quality difference image recognition model, obtaining poor-quality test images that meet the preset standard and good-quality test images that do not.
Further, the model building system gathers the good-quality test images to obtain the first test image set.
Further, the model building system marks all stomach images in the first test image set through the stomach image recognition model to obtain a marked first test image set.
According to the embodiment of the invention, the second network to be trained is trained on stomach images meeting the preset standard to obtain the quality difference image recognition model, which can screen out stomach images of poor quality. This avoids the adverse effect of poor-quality images on the first network to be trained and ensures that the stomach image recognition model labels stomach images accurately.
The deep learning-based stomach image recognition model construction system provided by the invention is described below; it corresponds to the deep learning-based stomach image recognition model construction method described above, and the two may be referred to in correspondence with each other.
Referring to fig. 2, fig. 2 is a block diagram of a deep learning-based stomach image recognition model construction system according to the present invention, the deep learning-based stomach image recognition model construction system comprising:
an image acquisition module 201, configured to acquire a first training image set;
a size determination module 202 for determining a first size based on the size of the target training image; the target training image is any image in the first training image set;
a network construction module 203, configured to determine a network parameter based on the first size, and construct a first network to be trained based on the network parameter;
the model training module 204 is configured to train the first network to be trained based on the first training image set to obtain a stomach image recognition model;
the first training image set is a set of annotated stomach images.
Further, the network construction module 203 is further configured to:
determining a second dimension of the minimum feature map of the downsampling layer; the downsampling layer is a pooling layer of the first network to be trained;
Determining a sampling number based on the first size and the second size, and determining an up-sampling layer number and a down-sampling layer number based on the sampling number;
determining a third size of the sampled image based on the number of samples and the first size;
and determining the up-sampling layer number, the down-sampling layer number and the third size as the network parameters.
Further, the model training module 204 is further configured to:
determining a target weight of a target upsampling layer based on the number of layers of the target upsampling layer; the target upsampling layer is any upsampling layer of the first network to be trained;
determining a loss value of the target upsampling layer based on the target weight and a loss function;
superposing the loss values of all the target up-sampling layers to obtain a training loss value of the target model; the target model is a model obtained when the first network to be trained is subjected to any iteration training;
and determining the target model with the minimum training loss value as the stomach image recognition model.
Further, the stomach image recognition model building system based on deep learning is further used for:
acquiring a first test image set; the first test image set is a set of unlabeled stomach images;
And labeling all stomach images in the first test image set based on the stomach image recognition model to obtain a labeled first test image set.
Further, the stomach image recognition model building system based on deep learning is further used for:
determining a first area occupation ratio of a target part in a first target image; the first target image is any image in the first training image set, and the target part is any part of the stomach;
determining an area occupation ratio range of the target part based on the first area occupation ratio;
determining a second area occupation ratio of a target part in a second target image; the second target image is any image in the first test image set;
if the second area ratio is within the area ratio range, marking a target part in the second target image through the stomach image recognition model to obtain a marked second target image;
and collecting each marked second target image to obtain the marked first test image set.
Further, the first network to be trained is a U-Net network.
Further, the image acquisition module 201 is further configured to:
Acquiring a second training image set and a second test image set;
cutting and normalizing all images in the second training image set to obtain a preprocessed image data set;
training a second network to be trained based on the image data set to obtain a quality difference image recognition model; the second network to be trained is a classified neural network;
classifying the stomach images in the second test image set based on the quality difference image recognition model to obtain good-quality training images;
collecting the good-quality training images to obtain the first training image set;
the second training image set is a set of stomach images meeting a preset standard, the second test image set is a set of stomach images to be classified according to the preset standard, and the preset standard is: the brightness of the image is lower than a first preset brightness, and/or the brightness of the image is higher than a second preset brightness, and/or the sharpness of the image is lower than a preset sharpness, and/or the image contains ghosting.
The specific embodiment of the stomach image recognition model construction system based on the deep learning is basically the same as the above-mentioned stomach image recognition model construction method based on the deep learning, and is not described herein.
Fig. 3 illustrates a physical schematic diagram of an electronic device. As shown in fig. 3, the electronic device may include: a processor 310, a communication interface (Communications Interface) 320, a memory 330 and a communication bus 340, where the processor 310, the communication interface 320 and the memory 330 communicate with each other through the communication bus 340. The processor 310 may invoke logic instructions in the memory 330 to perform the deep learning-based stomach image recognition model construction method, comprising:
acquiring a first training image set;
determining a first size based on the size of the target training image; the target training image is any image in the first training image set;
determining network parameters based on the first size, and constructing a first network to be trained based on the network parameters;
training the first network to be trained based on the first training image set to obtain a stomach image recognition model;
the first training image set is a set of annotated stomach images.
Further, the logic instructions in the memory 330 may be implemented in the form of software functional units and may be stored in a computer-readable storage medium when sold or used as a stand-alone product. Based on this understanding, the technical solution of the present invention, in essence or the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium and comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
In yet another aspect, the present invention further provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, is implemented to perform the above provided deep learning based stomach image recognition model construction method, the method comprising:
acquiring a first training image set;
determining a first size based on the size of the target training image; the target training image is any image in the first training image set;
determining network parameters based on the first size, and constructing a first network to be trained based on the network parameters;
training the first network to be trained based on the first training image set to obtain a stomach image recognition model;
the first training image set is a set of annotated stomach images.
The processor-readable storage medium may be any available medium or data storage device that can be accessed by a processor, including but not limited to magnetic storage (e.g., floppy disks, hard disks, magnetic tape, magneto-optical disks (MOs), etc.), optical storage (e.g., CD, DVD, BD, HVD, etc.), and semiconductor storage (e.g., ROM, EPROM, EEPROM, non-volatile memory (NAND FLASH), solid-state disk (SSD)), and the like.
The apparatus embodiments described above are merely illustrative; units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units, that is, they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the invention without inventive effort.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a necessary general hardware platform, or of course by means of hardware. Based on this understanding, the foregoing technical solution, in essence or the part contributing to the prior art, may be embodied in the form of a software product stored in a computer-readable storage medium such as ROM/RAM, a magnetic disk or an optical disk, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute the method described in the respective embodiments or in some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.
Claims (6)
1. The stomach image recognition model construction method based on deep learning is characterized by comprising the following steps of:
acquiring a first training image set;
determining a first size based on the size of the target training image; the target training image is any image in the first training image set;
wherein, based on the size of the target training image, determining the first size includes: determining a first size according to a median of the sizes of the first training images, and/or determining the first size according to an average of the sizes of the first training images, and/or determining the first size according to a mode of the sizes of the first training images;
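By way of illustration only (not part of the claim language), a minimal Python sketch of this size-determination step could read as follows; the function name and the assumption of square images are hypothetical:

```python
from collections import Counter
import numpy as np

def determine_first_size(image_sizes, strategy="median"):
    """Derive the first size from the sizes of the first training images.

    image_sizes: iterable of integer side lengths (square images assumed).
    strategy:    'median', 'mean', or 'mode', matching the three
                 alternatives recited in the claim.
    """
    sizes = np.asarray(list(image_sizes))
    if strategy == "median":
        return int(np.median(sizes))
    if strategy == "mean":
        return int(round(float(sizes.mean())))
    if strategy == "mode":
        return Counter(sizes.tolist()).most_common(1)[0][0]
    raise ValueError(f"unknown strategy: {strategy!r}")

# Example: determine_first_size([512, 576, 512, 480], "median") returns 512.
```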
determining network parameters based on the first size, and constructing a first network to be trained based on the network parameters;
wherein the determining network parameters based on the first size comprises:
determining a second dimension of the minimum feature map of the downsampling layer; the downsampling layer is a pooling layer of the first network to be trained;
determining a sampling number based on the first size and the second size, and determining an up-sampling layer number and a down-sampling layer number based on the sampling number;
determining a third size of the sampled image based on the number of samples and the first size;
determining the up-sampling layer number, the down-sampling layer number and the third size as the network parameters;
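Purely as an illustrative sketch (the claim fixes the relationships between the sizes, not the arithmetic), and assuming each down-sampling layer halves the resolution:

```python
import math

def determine_network_parameters(first_size, second_size):
    """first_size:  representative training-image side length.
    second_size: side length of the smallest feature map of the
                 down-sampling (pooling) path.

    Assumes factor-2 pooling, so the sampling number is the count of
    halvings that map first_size onto second_size.
    """
    sampling_number = round(math.log2(first_size / second_size))
    # A symmetric network: equal numbers of down- and up-sampling layers.
    down_sampling_layers = up_sampling_layers = sampling_number
    # Third size: the size images are resampled to so that sampling_number
    # halvings land exactly on second_size.
    third_size = second_size * 2 ** sampling_number
    return down_sampling_layers, up_sampling_layers, third_size

# Example: determine_network_parameters(500, 32) gives (4, 4, 512), i.e.
# images of roughly 500 px are resampled to 512 px before training.
```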
training the first network to be trained based on the first training image set to obtain a stomach image recognition model; the first training image set is a set of marked stomach images;
wherein the training of the first network to be trained based on the first training image set to obtain a stomach image recognition model comprises the following steps:
determining a target weight of a target upsampling layer based on the layer number of the target upsampling layer; the target upsampling layer is any upsampling layer of the first network to be trained;
determining a loss value of the target upsampling layer based on the target weight and a loss function;
superposing the loss values of all the target upsampling layers to obtain a training loss value of a target model; the target model is the model obtained at any iteration of training the first network to be trained;
determining a target model with the minimum training loss value as the stomach image recognition model;
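This layer-weighted superposition of losses resembles deep supervision. Below is a minimal PyTorch-flavoured sketch; the particular weighting scheme (weight proportional to layer number) is an assumption, since the claim states only that the weight is determined from the layer number:

```python
import torch.nn as nn

criterion = nn.BCEWithLogitsLoss()  # stand-in for the claimed loss function

def training_loss(upsampling_outputs, target):
    """upsampling_outputs: list of per-layer logits, ordered from the first
    to the last upsampling layer, each resized to the target's shape.

    Each layer's loss is scaled by a target weight derived from its layer
    number; the weighted losses are then superposed into one loss value.
    """
    n = len(upsampling_outputs)
    weights = [(i + 1) / n for i in range(n)]  # assumed: deeper layers weigh more
    return sum(w * criterion(out, target)
               for w, out in zip(weights, upsampling_outputs))
```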
the training the first network to be trained based on the first training image set, after obtaining the stomach image recognition model, further includes:
acquiring a first test image set; the first test image set is a set of unlabeled stomach images;
labeling all stomach images in the first test image set based on the stomach image recognition model to obtain a labeled first test image set;
labeling all stomach images in the first test image set based on the stomach image recognition model to obtain a labeled first test image set, wherein the labeling comprises the following steps:
determining a first area ratio of a target part in a first target image; the first target image is any image in the first training image set, and the target part is any part of the stomach;
determining an area ratio range of the target part based on the first area ratio;
determining a second area ratio of the target part in a second target image; the second target image is any image in the first test image set;
if the second area ratio is within the area ratio range, marking the target part in the second target image through the stomach image recognition model to obtain a marked second target image;
collecting each marked second target image to obtain the labeled first test image set;
wherein, if the second area ratio is within the area ratio range, marking the target part in the second target image through the stomach image recognition model to obtain the marked second target image comprises the following steps:
searching for the contour of the final area where the target part is located in the second target image, and filling holes within the contour to obtain a filled final area;
calculating the second area ratio of the filled final area, and judging whether the second area ratio is within the area ratio range;
and if the second area ratio is within the area ratio range, marking the target part in the second target image through the stomach image recognition model to obtain the marked second target image.
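As a hedged illustration of the contour search and hole filling (the claim names no concrete operators; OpenCV's findContours/drawContours are one plausible realization):

```python
import cv2
import numpy as np

def fill_final_area(mask):
    """mask: binary uint8 mask (255 where the model predicts the target part).

    Finds the external contour(s) of the final area, fills interior holes,
    and returns the filled mask together with its area ratio in the image.
    """
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    filled = np.zeros_like(mask)
    cv2.drawContours(filled, contours, -1, 255, thickness=cv2.FILLED)
    second_area_ratio = float((filled > 0).sum()) / filled.size
    return filled, second_area_ratio

def mark_if_in_range(mask, area_ratio_range):
    """Accept the model's annotation only when the filled final area's
    ratio lies inside the range derived from the training images."""
    filled, ratio = fill_final_area(mask)
    low, high = area_ratio_range
    return filled if low <= ratio <= high else None
```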
2. The deep learning-based stomach image recognition model construction method according to claim 1, wherein the first network to be trained is a U-Net network.
3. The deep learning-based stomach image recognition model construction method according to claim 1 or 2, wherein the acquiring a first training image set comprises:
acquiring a second training image set and a second test image set;
cropping and normalizing all images in the second training image set to obtain a preprocessed image data set;
training a second network to be trained based on the image data set to obtain a poor-quality image recognition model; the second network to be trained is a classification neural network;
classifying the stomach images in the second test image set based on the poor-quality image recognition model to obtain high-quality training images;
collecting the high-quality training images to obtain the first training image set;
wherein the second training image set is a set of stomach images meeting a preset standard, the second test image set is a set of stomach images to be classified against the preset standard, and the preset standard is: the brightness of the image is lower than a first preset brightness, and/or the brightness of the image is higher than a second preset brightness, and/or the definition of the image is lower than a preset definition, and/or the image has afterimages.
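A sketch of this quality pre-filtering step, assuming a small torchvision classifier as the second network to be trained (the claim fixes neither the architecture nor the framework):

```python
import torch
from torchvision import models

# Assumed binary classifier: class 1 = image meets the preset (poor-quality)
# standard, class 0 = usable image; its training on the second training
# image set is omitted here for brevity.
quality_net = models.resnet18(num_classes=2)
quality_net.eval()

@torch.no_grad()
def select_first_training_set(images):
    """images: tensor batch (N, 3, H, W), already cropped and normalized.

    Returns the subset classified as high quality, i.e. the images that
    would be collected into the first training image set.
    """
    predictions = quality_net(images).argmax(dim=1)
    return images[predictions == 0]
```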
4. A deep learning-based stomach image recognition model construction system, comprising:
an image acquisition module for acquiring a first training image set;
a size determining module for determining a first size based on the size of a target training image; the target training image is any image in the first training image set;
wherein, based on the size of the target training image, determining the first size includes: determining a first size according to a median of the sizes of the first training images, and/or determining the first size according to an average of the sizes of the first training images, and/or determining the first size according to a mode of the sizes of the first training images;
a network construction module for determining network parameters based on the first size and constructing a first network to be trained based on the network parameters;
wherein the determining network parameters based on the first size comprises:
determining a second dimension of the minimum feature map of the downsampling layer; the downsampling layer is a pooling layer of the first network to be trained;
determining a sampling number based on the first size and the second size, and determining an up-sampling layer number and a down-sampling layer number based on the sampling number;
determining a third size of the sampled image based on the number of samples and the first size; determining the up-sampling layer number, the down-sampling layer number and the third size as the network parameters;
a model training module for training the first network to be trained based on the first training image set to obtain a stomach image recognition model; the first training image set is a set of marked stomach images;
wherein the training of the first network to be trained based on the first training image set to obtain a stomach image recognition model comprises the following steps:
determining a target weight of a target upsampling layer based on the layer number of the target upsampling layer; the target upsampling layer is any upsampling layer of the first network to be trained;
determining a loss value of the target upsampling layer based on the target weight and a loss function;
superposing the loss values of all the target upsampling layers to obtain a training loss value of a target model; the target model is the model obtained at any iteration of training the first network to be trained;
determining a target model with the minimum training loss value as the stomach image recognition model;
wherein after the training of the first network to be trained based on the first training image set to obtain the stomach image recognition model, the method further comprises:
acquiring a first test image set; the first test image set is a set of unlabeled stomach images;
labeling all stomach images in the first test image set based on the stomach image recognition model to obtain a labeled first test image set;
labeling all stomach images in the first test image set based on the stomach image recognition model to obtain a labeled first test image set, wherein the labeling comprises the following steps:
determining a first area ratio of a target part in a first target image; the first target image is any image in the first training image set, and the target part is any part of the stomach;
determining an area ratio range of the target part based on the first area ratio;
determining a second area ratio of the target part in a second target image; the second target image is any image in the first test image set;
if the second area ratio is within the area ratio range, marking the target part in the second target image through the stomach image recognition model to obtain a marked second target image;
collecting each marked second target image to obtain the labeled first test image set;
wherein, if the second area ratio is within the area ratio range, marking the target part in the second target image through the stomach image recognition model to obtain the marked second target image comprises the following steps:
searching for the contour of the final area where the target part is located in the second target image, and filling holes within the contour to obtain a filled final area;
calculating the second area ratio of the filled final area, and judging whether the second area ratio is within the area ratio range;
and if the second area ratio is within the area ratio range, marking the target part in the second target image through the stomach image recognition model to obtain the marked second target image.
5. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the deep learning-based stomach image recognition model construction method of any one of claims 1 to 3.
6. A non-transitory computer-readable storage medium having stored thereon a computer program, characterized in that the computer program, when executed by a processor, implements the deep learning-based stomach image recognition model construction method of any one of claims 1 to 3.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310357698.1A | 2023-04-06 | 2023-04-06 | Stomach image recognition model construction method and system based on deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116071622A CN116071622A (en) | 2023-05-05 |
CN116071622B (en) | 2024-01-12 |
Family
ID=86180556
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310357698.1A (Active) | Stomach image recognition model construction method and system based on deep learning | 2023-04-06 | 2023-04-06 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116071622B (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108230339A (en) * | 2018-01-31 | 2018-06-29 | 浙江大学 | A kind of gastric cancer pathological section based on pseudo label iteration mark marks complementing method |
CN108364025A (en) * | 2018-02-11 | 2018-08-03 | 广州市碳码科技有限责任公司 | Gastroscope image-recognizing method, device, equipment and medium based on deep learning |
CN109118491A (en) * | 2018-07-30 | 2019-01-01 | 深圳先进技术研究院 | A kind of image partition method based on deep learning, system and electronic equipment |
CN109584218A (en) * | 2018-11-15 | 2019-04-05 | 首都医科大学附属北京友谊医院 | A kind of construction method of gastric cancer image recognition model and its application |
CN110598600A (en) * | 2019-08-27 | 2019-12-20 | 广东工业大学 | Remote sensing image cloud detection method based on UNET neural network |
CN110969627A (en) * | 2019-11-29 | 2020-04-07 | 北京达佳互联信息技术有限公司 | Image processing method and device |
CN112967287A (en) * | 2021-01-29 | 2021-06-15 | 平安科技(深圳)有限公司 | Gastric cancer focus identification method, device, equipment and storage medium based on image processing |
CN113096020A (en) * | 2021-05-08 | 2021-07-09 | 苏州大学 | Calligraphy font creation method for generating confrontation network based on average mode |
CN114022679A (en) * | 2021-11-29 | 2022-02-08 | 重庆赛迪奇智人工智能科技有限公司 | Image segmentation method, model training device and electronic equipment |
CN114913327A (en) * | 2022-05-17 | 2022-08-16 | 河海大学常州校区 | Lower limb skeleton CT image segmentation algorithm based on improved U-Net |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA3041140C (en) * | 2018-04-26 | 2021-12-14 | NeuralSeg Ltd. | Systems and methods for segmenting an image |
CN116012589A (en) * | 2023-02-20 | 2023-04-25 | 苏州国科康成医疗科技有限公司 | Image segmentation method, device, equipment and storage medium |
Similar Documents
Publication | Title
---|---
CN111291825B (en) | Focus classification model training method, apparatus, computer device and storage medium
CN111784671B (en) | Pathological image focus region detection method based on multi-scale deep learning
CN108805134B (en) | Construction method and application of aortic dissection model
CN108596904B (en) | Method for generating positioning model and method for processing spine sagittal position image
CN107451615A (en) | Thyroid papillary carcinoma Ultrasound Image Recognition Method and system based on Faster RCNN
CN110110808B (en) | Method and device for performing target labeling on image and computer recording medium
CN113793348B (en) | Retinal blood vessel segmentation method and device
CN110706233A (en) | Retina fundus image segmentation method and device
US20200372654A1 (en) | Sampling latent variables to generate multiple segmentations of an image
CN113012155A (en) | Bone segmentation method in hip image, electronic device, and storage medium
CN110992370B (en) | Pancreas tissue segmentation method and device and terminal equipment
CN116030259B (en) | Abdominal CT image multi-organ segmentation method and device and terminal equipment
CN112613471B (en) | Face living body detection method, device and computer readable storage medium
CN111626379B (en) | X-ray image detection method for pneumonia
CN110781831A (en) | Hyperspectral optimal waveband selection method and device based on self-adaption
CN115439651A (en) | DSA cerebrovascular segmentation system and method based on space multi-scale attention network
CN112750137A (en) | Liver tumor segmentation method and system based on deep learning
CN117422880A (en) | Segmentation method and system combining improved attention mechanism and CV model
US9305243B2 (en) | Adaptable classification method
CN114693671A (en) | Lung nodule semi-automatic segmentation method, device, equipment and medium based on deep learning
CN116071622B (en) | Stomach image recognition model construction method and system based on deep learning
CN110189332 (en) | Prostate Magnetic Resonance Image Segmentation method and system based on weight G- Design
CN116468702A (en) | Chloasma assessment method, device, electronic equipment and computer readable storage medium
US12027270B2 (en) | Method of training model for identification of disease, electronic device using method, and non-transitory storage medium
CN113379770B (en) | Construction method of nasopharyngeal carcinoma MR image segmentation network, image segmentation method and device
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |