CN111612789A - Defect detection method based on improved U-net network - Google Patents


Info

Publication number
CN111612789A
CN111612789A (application CN202010611491.9A)
Authority
CN
China
Prior art keywords
network
feature map
training
size
U-net network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202010611491.9A
Other languages
Chinese (zh)
Inventor
都卫东
王岩松
和江镇
龙仕玉
张海宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Focusight Technology Co Ltd
Original Assignee
Focusight Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Focusight Technology Co Ltd filed Critical Focusight Technology Co Ltd
Priority to CN202010611491.9A
Publication of CN111612789A
Current legal status: Withdrawn


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0004 Industrial image inspection
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/253 Fusion techniques of extracted features
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]

Abstract

The invention relates to a defect detection method, based on an improved U-net network, for detecting defects on flat products. The method comprises the following steps: a batch of training images, comprising a number of normal samples and a number of defect samples, is trained with a U-net network whose encoder (encode part) has been replaced by a Resnet network. The Resnet network comprises several residual modules; each residual module performs one max pooling operation, and each residual module together with its max pooling operation yields a corresponding residual feature map. The method improves the prediction accuracy of the trained model, enlarges the receptive field of the network, retains more feature information from the original image, preserves low-level features more fully, and makes defect features stand out more quickly; it is particularly well suited to accurate defect detection on flat products such as glass.

Description

Defect detection method based on improved U-net network
Technical Field
The invention relates to the technical field of defect detection in product images using a U-net network, and in particular to defect detection on flat products such as glass.
Background
A U-net network (a segmentation network) is commonly used for image segmentation tasks. As shown in fig. 1, the left half of fig. 1 is the feature extraction part: each time a pooling layer is passed, the feature map is halved, and fig. 1 contains 5 scales. The right half of fig. 1 is the up-sampling part: at each up-sampling step the feature map is fused with the feature map of the same number of channels from the corresponding stage of the feature extraction part. The U-net network has two main advantages for image segmentation: it supports multi-scale input, and it is well suited to segmenting higher-resolution images.
At present, the U-net network is mainly used for medical image segmentation. The encoder (encode part) of the U-net network downsamples 4 times, for 16x total downsampling, and the decoder (decode part) correspondingly upsamples 4 times, restoring the resulting high-level feature map to the resolution of the original input image. Compared with other segmentation networks, the U-net network does not supervise or compute the loss directly on high-level semantic features; instead, through the 4 upsampling steps it concatenates feature maps of the same size. This ensures that the restored feature map fuses features from both the low-level and high-level parts of the network, enabling multi-scale prediction. The 4 upsamplings of the U-net network also make edge information finer. Medical images, being basically images of fixed organs, are comparatively simple and structurally stable; at the same time, medical datasets are small, so an overly deep network model cannot be used: too many parameters easily cause overfitting. The scenes suited to the U-net network combine high-resolution information (providing accurate segmentation and localization) with low-resolution information (determining whether a target is present).
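The 4x-down / 4x-up arithmetic described above can be sketched in a few lines of Python (an illustrative sketch, not code from the patent):

```python
def unet_sizes(input_size, stages=4):
    """Spatial sizes through a U-net: each pooling halves the size,
    each upsampling doubles it back to the input resolution."""
    encoder = [input_size]
    for _ in range(stages):
        encoder.append(encoder[-1] // 2)   # pooling halves the size
    decoder = [encoder[-1]]
    for _ in range(stages):
        decoder.append(decoder[-1] * 2)    # upsampling doubles it
    return encoder, decoder

enc, dec = unet_sizes(256)
print(enc)  # [256, 128, 64, 32, 16] -> 16x total downsampling
print(dec)  # [16, 32, 64, 128, 256] -> restored to input resolution
```

At each decoder scale, the feature map of that size from the encoder is what gets concatenated in.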
When the U-net network is applied to defect detection on flat products such as glass, defects often appear in high-resolution images with complex structure. Across a batch of samples the defect features occur at different positions, the shapes and sizes of the defects vary greatly, and there is no correlation among the defects, so more low-level features are needed to compensate for the information lost during up-sampling. The traditional U-net network extracts features through convolutional and pooling layers; with few convolutional layers, the receptive field of the feature map is small, while simply stacking more layers can cause problems such as vanishing or exploding gradients.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: to provide a defect detection method based on an improved U-net network that retains more feature information from the original product image and improves the accuracy of product defect detection.
The technical scheme adopted by the invention to solve this problem is as follows: a defect detection method based on an improved U-net network, for performing defect detection on flat products, the method comprising: training a batch of training images with a U-net network, wherein the training images comprise a number of normal samples and a number of defect samples, and the encoder (encode part) of the U-net network is replaced by a Resnet network.
Preferably, the method comprises the steps of:
(1) training image production: cutting the input original image of the flat product into training images of a preset size;
Preferably, replacing the encoder of the U-net network with the Resnet network comprises the following steps:
(2) first passing the training images through a first convolutional layer to extract feature maps, obtaining the first-layer feature maps, and reducing the size of each feature map by 1/2 through a max pooling operation on the first-layer feature maps;
(3) passing the feature maps obtained in step (2) through a Resnet network, the Resnet network comprising several residual modules, each residual module performing one max pooling operation, each max pooling operation reducing the size of the corresponding feature maps by 1/2; a corresponding residual feature map is obtained after each residual module and its max pooling operation.
Preferably, the method comprises the steps of:
(4) the decoder (decode part) of the U-net network: deconvolving the total residual feature map obtained after all residual modules of the Resnet network and the corresponding max pooling operations to obtain an up-sampled feature map, and fusing it with the residual feature map of the same size from step (3) to obtain a fused feature map;
(5) deconvolving the fused feature map so obtained, then fusing it with the residual feature map of the same size from step (3) to obtain a new fused feature map;
(6) repeating step (5) until a fused feature map with the same size as the training image is obtained, and finally performing a deconvolution to obtain the total fused feature map.
Preferably, the method comprises the steps of:
(7) calculating the cross-entropy loss of the total fused feature map against the label map corresponding to the training image, and updating the parameters of the U-net and Resnet networks through back-propagation training to reduce the loss value, finally obtaining a prediction model;
(8) cutting the original image of the flat product under test to the size of the training images and feeding it into the prediction model to obtain, for each pixel of the original image, the probability that it is a defect.
The invention has the following beneficial effects: the invention uses a Resnet network to extract features, improving the prediction accuracy of the trained model. Compared with the original U-net network, the Resnet network enlarges the receptive field, retains more feature information from the original image, and preserves low-level features more fully; during training it avoids the loss of low-level features caused by network degradation, so that the decoder can fuse more low-level information. Compared with the original U-net network, the method has higher defect detection accuracy, highlights defect features quickly, converges quickly while preserving the characteristics of the original image, and adapts well both to defect detection on flat products such as glass and to low-contrast defects in the original image.
Drawings
The invention is further described below with reference to the accompanying drawings.
FIG. 1 is a diagram of a conventional U-net network architecture;
FIG. 2 is a diagram of a resnet network architecture employed in the present invention;
FIG. 3 is a diagram of a resnet plus U-net network architecture in accordance with an embodiment of the present invention;
FIG. 4 is a diagram of the effect of final size segmentation in an embodiment of the present invention;
the device comprises a first residual error module, a second residual error module, a third residual error module, a fourth residual error module and a fourth residual error module, wherein the first residual error module, the second residual error module, the third residual error module and the fourth residual error module are respectively arranged in the device 1.
Detailed Description
The invention will now be further described with reference to the accompanying drawings. The drawings are simplified schematic diagrams that illustrate only the basic structure of the invention, and therefore show only the components relevant to it.
A defect detection method based on an improved U-net network is used for detecting defects of a planar product, and comprises the following steps: training a batch of training images by utilizing a U-net network, wherein the training images comprise a plurality of normal samples and a plurality of defect samples; the encode part of the U-net network is replaced by a Resnet network. The method is suitable for being applied to flat scenes such as glass and the like.
The convolutional layers of the original U-net network form its feature extraction part; the original U-net network is shown in fig. 1, with the encoder on the left half and the decoder on the right half. The resnet network used in the present invention may be the resnet-34 network shown in fig. 2. As shown in fig. 2, the resnet network has 4 residual modules: a first residual module 1, a second residual module 2, a third residual module 3 and a fourth residual module 4. Each residual module contains convolution operations with 3 x 3 kernels over a different number of feature maps, and each residual module ends with a max pooling operation (maxpooling), which halves the feature map size each time. When the application scene is complex, resnet-50 or resnet-101 can be selected as the backbone network to extract more low-level features.
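The core idea of a residual module, learning the residual F(x) and adding it back to the input, can be illustrated with a toy sketch in plain Python (the helper name is hypothetical; real residual modules operate on tensors of feature maps):

```python
def residual_block(x, transform):
    """y = F(x) + x: the block learns only the residual F rather than
    the full mapping, which keeps gradients flowing in deep networks."""
    fx = transform(x)
    return [xi + fi for xi, fi in zip(x, fx)]

# If F learns to output zeros, the block reduces to the identity,
# which is why extra residual layers do not degrade the network.
x = [1.0, 2.0, 3.0]
identity = residual_block(x, lambda v: [0.0] * len(v))  # -> [1.0, 2.0, 3.0]
doubled = residual_block(x, lambda v: v)                # -> [2.0, 4.0, 6.0]
```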
As shown in fig. 3, the specific embodiment is as follows:
(1) training image production: cutting the input original image of the flat product into training images of a preset size. The size of the original image of a flat product is variable; n normal samples and m samples containing defects are cut out, and the training image size is preferably 256 x 256. n and m are natural numbers greater than zero.
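Step (1) can be sketched with an illustrative helper that computes the top-left corners of non-overlapping 256 x 256 crops (the patent does not specify the cropping scheme, so non-overlapping crops with any remainder discarded are an assumption here):

```python
def crop_patches(height, width, patch=256):
    """Top-left corners of non-overlapping patch x patch crops that
    fit fully inside an image of the given size."""
    return [(r, c)
            for r in range(0, height - patch + 1, patch)
            for c in range(0, width - patch + 1, patch)]

corners = crop_patches(512, 768)
# 2 rows x 3 columns of 256 x 256 crops -> 6 patches
```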
(2) Training begins: the training images first pass through a first convolutional layer that extracts feature maps, yielding the first-layer feature maps, and a max pooling operation on the first-layer feature maps reduces the size of each feature map by 1/2. Preferably, for example, feature maps are extracted with 64 convolution kernels of size 7 x 7 to obtain the first-layer feature maps, and a max pooling calculation then converts them into 64 feature maps of size 128 x 128.
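These sizes are consistent with the standard convolution output-size formula. The sketch below is illustrative: the patent does not state the stride or padding of the 7 x 7 layer, so stride 1 with padding 3 is assumed here so that the numbers match the text (256 stays 256 after the convolution, then a 2 x 2 stride-2 max pooling gives 128):

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Output spatial size of a convolution or pooling window."""
    return (size + 2 * pad - kernel) // stride + 1

s = conv_out(256, 7, stride=1, pad=3)  # 7x7 conv, "same" padding -> 256
p = conv_out(s, 2, stride=2)           # 2x2 max pooling, stride 2 -> 128
```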
(3) The feature maps obtained in step (2) pass through a Resnet network. The Resnet network comprises several residual modules; each residual module performs one max pooling operation, and each max pooling operation reduces the size of the corresponding feature maps by 1/2. A corresponding residual feature map is obtained after each residual module and its max pooling operation. The max pooling operation can be placed at the very beginning or the very end of each residual module.
By learning the residual between output and input, the residual modules of the Resnet network greatly accelerate network training and improve the training effect without increasing the amount of computation, and the network degradation problem that comes with increasing the number of layers is well resolved.
As shown in fig. 3 and fig. 2, the first residual module 1 stage is entered: first a max pooling (maxpooling) operation, then 6 convolution operations with 64 feature maps and 3 x 3 kernels, giving residual feature maps of size 64 x 64. In the second residual module 2 stage, a max pooling operation is followed by 8 convolution operations with 128 feature maps and 3 x 3 kernels, giving residual feature maps of size 32 x 32. In the third residual module 3 stage, a max pooling operation is followed by 12 convolution operations with 256 feature maps and 3 x 3 kernels, giving residual feature maps of size 16 x 16. In the fourth residual module 4 stage, a max pooling operation is followed by 6 convolution operations with 512 feature maps and 3 x 3 kernels, giving the total residual feature map of size 8 x 8.
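The encoder stage shapes listed above can be tabulated with a short sketch (illustrative only; the channel counts and sizes are taken from the text):

```python
def encoder_shapes(input_size=256):
    """(channels, spatial size) after the stem and after each of the
    four residual modules, each module halving the size once."""
    shapes = [(64, input_size // 2)]        # stem conv + pool: 64 @ 128
    size = input_size // 2
    for channels in (64, 128, 256, 512):    # the four residual modules
        size //= 2                          # one max pooling per module
        shapes.append((channels, size))
    return shapes

print(encoder_shapes())
# [(64, 128), (64, 64), (128, 32), (256, 16), (512, 8)]
```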
(4) The decoder (decode part) of the U-net network: the total residual feature map obtained after all residual modules of the Resnet network and the corresponding max pooling operations is deconvolved to obtain up-sampled feature maps, which are fused with the residual feature maps of the same size from step (3) to obtain fused feature maps. Preferably, as shown in fig. 3, the total residual feature maps are deconvolved with 256 kernels of size 3 x 3; the resulting 256 up-sampled feature maps of size 16 x 16 are fused with the corresponding 256 residual feature maps of size 16 x 16 from step (3), giving 512 fused feature maps of size 16 x 16.
(5) The fused feature maps so obtained are deconvolved and fused with the residual feature maps of the same size from step (3) to obtain new fused feature maps. For example, the 512 fused 16 x 16 feature maps obtained above are deconvolved with 128 kernels of size 3 x 3, and the resulting 128 feature maps of size 32 x 32 are fused with the corresponding 128 residual feature maps of size 32 x 32 from step (3), giving 256 fused feature maps of size 32 x 32.
(6) Step (5) is repeated until fused feature maps with the same size as the training image are obtained; a final deconvolution then yields the total fused feature map. For example, following step (5), the 256 fused 32 x 32 feature maps are deconvolved with 256 kernels of size 3 x 3, and the resulting 256 feature maps of size 64 x 64 are fused with the 256 residual feature maps of size 64 x 64 from step (3), giving 512 fused feature maps of size 64 x 64. The 512 fused 64 x 64 feature maps are then deconvolved with 512 kernels of size 3 x 3, and the resulting 512 feature maps of size 128 x 128 are fused with the corresponding 512 residual feature maps of size 128 x 128 from step (3), giving 1024 fused feature maps of size 128 x 128. Finally, the 1024 fused 128 x 128 feature maps are deconvolved to obtain a total fused feature map of size 256 x 256.
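The decoder shape arithmetic in steps (4) to (6) can be checked with a small sketch (illustrative; it assumes, as the numbers above do, that each deconvolution doubles the spatial size and outputs as many channels as the skip feature maps it is fused with):

```python
def decode_step(in_shape, skip_shape):
    """One decoder step: deconvolve (channels, size) up to the skip's
    resolution, then concatenate, which adds the channel counts."""
    in_ch, in_size = in_shape
    skip_ch, skip_size = skip_shape
    up = (skip_ch, in_size * 2)           # deconv to the skip's channels
    assert up[1] == skip_size             # sizes must match to fuse
    return (up[0] + skip_ch, skip_size)   # channels add on concatenation

f = decode_step((512, 8), (256, 16))      # step (4) -> (512, 16)
f = decode_step(f, (128, 32))             # step (5) -> (256, 32)
f = decode_step(f, (256, 64))             # step (6) -> (512, 64)
f = decode_step(f, (512, 128))            # step (6) -> (1024, 128)
```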
(7) The cross-entropy loss of the total fused feature map is calculated against the label map corresponding to the training image, and the parameters of the U-net and Resnet networks are updated through back-propagation training to reduce the loss value, finally yielding a prediction model;
(8) the original image of the flat product under test is cut to the size of the training images and fed into the prediction model, giving the probability that each pixel of the original image is a defect.
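A minimal sketch of the pixel-wise cross-entropy of step (7), written for flat lists of probabilities and 0/1 labels (illustrative; a real implementation operates on tensors and includes the backward pass):

```python
import math

def pixel_bce(pred, label, eps=1e-7):
    """Mean binary cross-entropy over per-pixel defect probabilities
    against 0/1 labels from the label map."""
    total = 0.0
    for p, y in zip(pred, label):
        p = min(max(p, eps), 1 - eps)   # clamp for numerical safety
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(pred)

loss = pixel_bce([0.9, 0.2, 0.8], [1, 0, 1])  # small: mostly correct
```

Back-propagation then adjusts the network parameters in the direction that reduces this value.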
The trained model of the invention gives the probability that each pixel of the original image is a defect. At the same time, the characteristics of human vision are exploited: as shown in fig. 4, defects usually appear with relatively high contrast and a certain connectivity. By searching the original image of the flat product for regions of higher contrast according to the probability heat map predicted by the trained model, accurate defect detection can be achieved. The probability heat map reflects how likely each pixel is to be a defect. The invention uses a Resnet network to extract features, improving the prediction accuracy of the trained model. Compared with the original U-net network, the Resnet network enlarges the receptive field, retains more feature information from the original image, and preserves low-level features more fully; during training it avoids the loss of low-level features caused by network degradation, so that the decoder can fuse more low-level information. Compared with the original U-net network, the method has higher defect detection accuracy. It highlights defect features quickly, converges quickly while preserving the characteristics of the original image, and adapts well both to defect detection on flat products such as glass and to low-contrast defects in the original image. Compared with strided convolution (stride) and average pooling (avgpooling), the max pooling operation used here makes the features of defective and non-defective regions easier to distinguish during network iteration, helping the model converge.
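Reading defect locations out of the probability heat map can be sketched as a simple threshold (illustrative; the threshold value 0.5 is an assumption, and the contrast and connectivity checks described above are omitted):

```python
def defect_pixels(heatmap, thresh=0.5):
    """Coordinates whose predicted defect probability exceeds thresh,
    scanned row-major over a 2-D list of probabilities."""
    return [(r, c)
            for r, row in enumerate(heatmap)
            for c, p in enumerate(row)
            if p > thresh]

hm = [[0.1, 0.9],
      [0.2, 0.7]]
hits = defect_pixels(hm)  # -> [(0, 1), (1, 1)]
```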
In light of the foregoing description of the preferred embodiment of the present invention, many modifications and variations will be apparent to those skilled in the art without departing from the spirit and scope of the invention. The technical scope of the present invention is not limited to the content of the specification, and must be determined according to the scope of the claims.

Claims (4)

1. A defect detection method based on an improved U-net network, for performing defect detection on flat products, characterized in that the method comprises the following steps: training a batch of training images with a U-net network, wherein the training images comprise a number of normal samples and a number of defect samples, and the encoder (encode part) of the U-net network is replaced by a Resnet network.
2. The defect detection method of claim 1, characterized in that the method comprises the following steps:
(1) training image production: cutting the input original image of the flat product into training images of a preset size;
wherein replacing the encoder of the U-net network with the Resnet network as claimed in claim 1 comprises the following steps:
(2) first passing the training images through a first convolutional layer to extract feature maps, obtaining the first-layer feature maps, and reducing the size of each feature map by 1/2 through a max pooling operation on the first-layer feature maps;
(3) passing the feature maps obtained in step (2) through a Resnet network, the Resnet network comprising several residual modules, each residual module performing one max pooling operation, each max pooling operation reducing the size of the corresponding feature maps by 1/2; a corresponding residual feature map is obtained after each residual module and its max pooling operation.
3. The defect detection method of claim 2, characterized in that the method comprises the following steps:
(4) the decoder (decode part) of the U-net network: deconvolving the total residual feature map obtained after all residual modules of the Resnet network and the corresponding max pooling operations to obtain an up-sampled feature map, and fusing it with the residual feature map of the same size from step (3) to obtain a fused feature map;
(5) deconvolving the fused feature map so obtained, then fusing it with the residual feature map of the same size from step (3) to obtain a new fused feature map;
(6) repeating step (5) until a fused feature map with the same size as the training image is obtained, and finally performing a deconvolution to obtain the total fused feature map.
4. The defect detection method of claim 3, characterized in that the method comprises the following steps:
(7) calculating the cross-entropy loss of the total fused feature map against the label map corresponding to the training image, and updating the parameters of the U-net and Resnet networks through back-propagation training to reduce the loss value, finally obtaining a prediction model;
(8) cutting the original image of the flat product under test to the size of the training images and feeding it into the prediction model to obtain, for each pixel of the original image, the probability that it is a defect.
Application CN202010611491.9A (priority and filing date 2020-06-30): Defect detection method based on improved U-net network. Published as CN111612789A; status: Withdrawn.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010611491.9A CN111612789A (en) 2020-06-30 2020-06-30 Defect detection method based on improved U-net network


Publications (1)

Publication Number: CN111612789A; Publication Date: 2020-09-01

Family

ID=72198995

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010611491.9A Withdrawn CN111612789A (en) 2020-06-30 2020-06-30 Defect detection method based on improved U-net network

Country Status (1)

Country Link
CN (1) CN111612789A (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160265969A1 (en) * 2015-03-10 2016-09-15 Pixart Imaging Inc. Image processing method capable of detecting noise and related navigation device
CN109118491A (en) * 2018-07-30 2019-01-01 深圳先进技术研究院 A kind of image partition method based on deep learning, system and electronic equipment
CN109840471A (en) * 2018-12-14 2019-06-04 天津大学 A kind of connecting way dividing method based on improvement Unet network model
CN110334645A (en) * 2019-07-02 2019-10-15 华东交通大学 A kind of moon impact crater recognition methods based on deep learning
CN111145188A (en) * 2019-12-25 2020-05-12 西安电子科技大学 Image segmentation method based on ResNet and UNet models
CN111275653A (en) * 2020-02-28 2020-06-12 北京松果电子有限公司 Image denoising method and device


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112070762A (en) * 2020-09-18 2020-12-11 惠州高视科技有限公司 Mura defect detection method and device for liquid crystal panel, storage medium and terminal
CN112614113A (en) * 2020-12-26 2021-04-06 北京工业大学 Strip steel defect detection method based on deep learning
WO2022236876A1 (en) * 2021-05-14 2022-11-17 广州广电运通金融电子股份有限公司 Cellophane defect recognition method, system and apparatus, and storage medium
CN114119640A (en) * 2022-01-27 2022-03-01 广东皓行科技有限公司 Model training method, image segmentation method and image segmentation system
CN114119640B (en) * 2022-01-27 2022-04-22 广东皓行科技有限公司 Model training method, image segmentation method and image segmentation system
CN114782387A (en) * 2022-04-29 2022-07-22 苏州威达智电子科技有限公司 Surface defect detection system

Similar Documents

Publication Publication Date Title
CN111325751B (en) CT image segmentation system based on attention convolution neural network
CN111612789A (en) Defect detection method based on improved U-net network
CN110097554B (en) Retina blood vessel segmentation method based on dense convolution and depth separable convolution
CN110111334B (en) Crack segmentation method and device, electronic equipment and storage medium
CN112381097A (en) Scene semantic segmentation method based on deep learning
CN110046550B (en) Pedestrian attribute identification system and method based on multilayer feature learning
CN111145181B (en) Skeleton CT image three-dimensional segmentation method based on multi-view separation convolutional neural network
CN111784671A (en) Pathological image focus region detection method based on multi-scale deep learning
CN111291825A (en) Focus classification model training method and device, computer equipment and storage medium
CN111353544A (en) Improved Mixed Pooling-Yolov 3-based target detection method
CN111914654A (en) Text layout analysis method, device, equipment and medium
CN114332133A (en) New coronary pneumonia CT image infected area segmentation method and system based on improved CE-Net
CN110599495B (en) Image segmentation method based on semantic information mining
CN112700460A (en) Image segmentation method and system
CN116645592A (en) Crack detection method based on image processing and storage medium
CN113468946A (en) Semantically consistent enhanced training data for traffic light detection
CN115170801A (en) FDA-deep Lab semantic segmentation algorithm based on double-attention mechanism fusion
CN113870286A (en) Foreground segmentation method based on multi-level feature and mask fusion
CN114418987A (en) Retinal vessel segmentation method and system based on multi-stage feature fusion
CN113762265A (en) Pneumonia classification and segmentation method and system
CN110517272B (en) Deep learning-based blood cell segmentation method
CN115272242B (en) YOLOv 5-based optical remote sensing image target detection method
CN111612803A (en) Vehicle image semantic segmentation method based on image definition
CN116486156A (en) Full-view digital slice image classification method integrating multi-scale feature context
CN116091763A (en) Apple leaf disease image semantic segmentation system, segmentation method, device and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20200901