CN112329771A — Building material sample identification method based on deep learning — Google Patents

Info

Publication number: CN112329771A
Application number: CN202011201983.7A
Authority: CN (China)
Prior art keywords: building material, material sample, image, model, ROI
Legal status: Pending
Inventors: 赵力, 程荣, 张亦明
Assignees: Suzhou Institute Of Building Science Group Co ltd; Yuanzhun Intelligent Technology Suzhou Co ltd
Filing date: 2020-11-02
Publication date: 2021-02-05
Other languages: Chinese (zh)

Classifications

    • G06V 10/25 — Determination of region of interest [ROI] or a volume of interest [VOI] (image or video recognition or understanding; image preprocessing)
    • G06F 18/214 — Generating training patterns; bootstrap methods, e.g. bagging or boosting (pattern recognition; design or setup of recognition systems)
    • G06N 3/045 — Combinations of networks (computing arrangements based on biological models; neural networks; architecture)
    • G06N 3/08 — Learning methods (neural networks)
    • G06T 7/0004 — Industrial image inspection (image analysis; inspection of images, e.g. flaw detection)
    • G06T 7/11 — Region-based segmentation (image analysis; segmentation; edge detection)
    • G06T 2207/20081 — Training; learning (indexing scheme for image analysis or image enhancement; special algorithmic details)
    • G06T 2207/20084 — Artificial neural networks [ANN] (indexing scheme for image analysis or image enhancement; special algorithmic details)

Abstract

The invention provides a building material sample identification method based on deep learning, comprising a model training stage and a sample identification stage. The model training stage includes producing a building material sample data set, constructing a multi-scale information-fusion convolutional neural network for sample identification, and applying data enhancement to the sample data set to obtain the best model performance. The sample identification stage includes inputting the processed building material sample image into the model, performing feature extraction to generate feature maps of optimal size, correcting the generated candidate regions into ROIs, passing the ROIs of different scales into an ROI pooling layer, mapping them to proposals of the same size, projecting the proposals onto the original building material sample image to generate proposal feature maps, and processing these through the BBOX and CLS branches to generate an accurately positioned building material detection frame and identify the material performance state of the sample. The method extracts information through multi-scale feature maps, learns target feature information at different scales well, offers good identification performance and universality, and has broad application prospects in the field of construction engineering.

Description

Building material sample identification method based on deep learning
Technical Field
The invention belongs to the field of construction engineering, and particularly relates to a building material sample identification method based on deep learning.
Background
With growing data-processing demands and the rapid development of artificial intelligence, various machine learning and deep learning methods have been tried for identifying building material samples, such as shallow network algorithms like clustering neural networks, support vector machines, and wavelet-transform neural networks. However, shallow network algorithms require a variety of complex algorithms to extract and determine sample identification features from the echo information; their computational complexity and resource consumption are high, so their versatility is low. The convolutional neural network, one of the important models in deep learning, has a network structure that is highly invariant to image transformations such as translation, flipping, and affine transformation, and has therefore been widely applied across computer vision in recent years with excellent results. However, during identification a conventional single linear convolutional neural network produces output only from its last layer; that is, information is extracted from a feature map at a single scale, which clearly cannot learn target feature information at different scales well, so it is difficult to achieve good identification performance in complex building material scenes.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a building material sample identification method based on deep learning.
The purpose of the invention is realized by the following technical scheme:
A building material sample identification method based on deep learning comprises a model training stage and a sample identification stage.
The model training stage comprises the following steps:
S1, collecting and labeling building material samples and producing a building material sample data set, wherein the data set contains building material samples of each category and is divided into a training set, a test set, and an evaluation set;
S2, constructing a multi-scale information-fusion convolutional neural network for building material sample identification;
S3, performing data enhancement on the sample data set of S1 to obtain the best model performance.
The sample identification stage comprises the following steps:
S1, inputting the processed building material sample image into the model and extracting features through a ResNet that was pre-trained on ImageNet and has its top layer removed, generating feature maps of optimal size;
S2, passing the feature maps generated in S1 through RPN2, RPN3, RPN4, and RPN5 respectively to generate candidate anchors of different sizes; the anchor areas are set, and the anchors generated by all RPNs uniformly adopt the three aspect ratios 1:1, 1:2, and 2:1, producing multiple candidate anchors; using binary classification and the bounding-box regression function, each RPN screens, according to the ground-truth labels, the anchors that most completely cover the target and corrects them to serve as ROIs;
S3, for ROIs of different scales, using the feature map layers output by different residual convolution modules as the input of the ROI pooling layer; large-scale ROIs use the output feature map of a deep convolution module, and the ROI level serves as the criterion for which layer's convolution module output is used:

$$k = \left\lfloor k_0 + \log_2\!\left(\frac{\sqrt{wh}}{224}\right) \right\rfloor$$

where w and h are the width and height of the ROI and k0 is the reference level;
S4, passing the ROIs generated in S3 into the ROI pooling layer, which uniformly maps the multi-scale ROIs to proposals of the same size; the proposals are projected onto the original building material sample image to generate proposal feature maps, facilitating subsequent processing by the BBOX and CLS branches;
S5, the CLS branch computes the class of each sample from the proposal feature map through a fully connected layer and softmax, outputting the highest class probability as the confidence;
S6, the BBOX branch corrects the proposal region using the bounding-box regression function, generating a more accurately positioned building material detection frame and identifying the material performance state of the building material sample.
Preferably, the model training phase S3 includes the following steps:
S31, constructing building material sample images at different scales and in different scenes by combining multiple data enhancement methods, and expanding the existing data to simulate complex recognition scenes, thereby improving the model's learning of detailed feature information and enhancing its universality;
S32, setting the initial weights to weights pre-trained on ImageNet, the initial learning rate to 0.001, the learning-rate decay factor to 0.1, and the batch_size to 16, and setting the input image size;
S33, in the loss function, the RPN-series modules adopt binary classification loss and regression loss; the CLS branch adopts multi-class classification loss and the BBOX branch adopts regression loss;
S34, training on the training and test sets with an SGD (stochastic gradient descent) optimizer until model performance is best.
Preferably, the method for enhancing data in S31 includes:
S311, Random Erasing:
(1) IRE: randomly selecting an occlusion position on the whole target image;
(2) ORE: randomly selecting an occlusion position within the bounding-box area of the target;
(3) combining both IRE and ORE;
S312, Hide and Seek:
the picture is divided into S×S grid cells, each cell is hidden with a certain probability, and a different group of cells is hidden in each batch for the same picture during training;
S313, Grid Mask:
to avoid deleting the complete target or losing context information through over-large deleted areas in S311 and S312, four parameters x, y, r, and d are set through Grid Mask:

$$k = \frac{M}{H \times W} = 1 - (1 - r)^2$$

where k is the keep ratio, r determines the size of the mask within each unit, M is the number of reserved pixels, and H and W are the image height and width; x and y are region coordinates randomly generated on the image. The unmasked area takes the value 1 and the masked area the value 0, producing a mask with the same resolution as the original image, which is then multiplied with the original image to obtain the augmented image;
S314, Mixup:
performing mixed enhancement on the images, mixing images across different classes; the algorithm can be summarized as:

$$\tilde{x} = \lambda x_1 + (1 - \lambda) x_2$$
$$\tilde{y} = \lambda y_1 + (1 - \lambda) y_2$$

where x1 and x2 are pixels of different images, y1 and y2 are the corresponding labels, and λ is the mixing weight;
S315, Cutmix:
a portion of the image area is randomly cropped and filled with pixel values from the corresponding region of other data in the training set.
Preferably, the step of generating the optimal feature maps after feature extraction in the sample identification stage S1 includes:
S11, denoting the last n residual convolution modules in ResNet as {C1, C2, C3, …, Cn} and extracting the output feature maps of these residual modules, denoted {P1, P2, P3, …, Pn};
S12, performing 2× nearest-neighbor upsampling on the deepest feature map Pn;
S13, extracting the output feature map Pn-1 of the residual convolution module Cn-1 adjacent to Cn and applying a 1 × 1 convolution for dimension reduction;
S14, fusing with the upsampled feature map Pn by adding the pixel values at corresponding positions;
S15, applying a 3 × 3 convolution to the fused feature map to reduce the aliasing effect caused by upsampling;
S16, iterating this process until the feature maps of optimal size are generated.
The invention has the following beneficial effects: the method extracts information through multi-scale feature maps, learns target feature information at different scales well, has good identification performance and universality, and has broad application prospects in the field of construction engineering.
Detailed Description
The technical scheme of the invention is described in detail below with reference to an embodiment. The invention discloses a building material sample identification method based on deep learning, comprising a model training stage and a sample identification stage, wherein the model training stage comprises the following steps:
S1, collecting and labeling building material samples and producing a building material sample data set, wherein the data set contains building material samples of each category and is divided into a training set, a test set, and an evaluation set;
S2, constructing a multi-scale information-fusion convolutional neural network for building material sample identification;
S3, performing data enhancement on the sample data set of S1 to obtain the best model performance.
Specifically:
S31, constructing building material sample images at different scales and in different scenes by combining multiple data enhancement methods, and expanding the existing data to simulate complex recognition scenes, thereby improving the model's learning of detailed feature information and enhancing its universality;
S32, setting the initial weights to weights pre-trained on ImageNet, the initial learning rate to 0.001, the learning-rate decay factor to 0.1, the batch_size to 16, and the input image size to 224 × 224.
S33, in the loss function, the RPN-series modules adopt binary classification loss and regression loss; the CLS branch adopts multi-class classification loss and the BBOX branch adopts regression loss.
S34, training for 20 epochs on the training and test sets with an SGD (stochastic gradient descent) optimizer until model performance is best.
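As a minimal sketch of the training setup in S32–S34, assuming a PyTorch implementation (the ResNet-50 backbone choice, the 10-class head, the scheduler step size, and the dummy data are illustrative stand-ins; the patent publishes no code):

```python
import torch
from torch import nn, optim
from torchvision import models

# ImageNet pre-trained backbone (S32); the 10-class head is a hypothetical placeholder.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 10)

optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)          # SGD, initial lr 0.001
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.1)   # decay factor 0.1 (step size assumed)
criterion = nn.CrossEntropyLoss()

for epoch in range(20):                       # 20 epochs, as in S34
    images = torch.randn(16, 3, 224, 224)     # batch_size 16, 224 x 224 inputs (dummy batch)
    labels = torch.randint(0, 10, (16,))
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    scheduler.step()
```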
The enhancement methods in S31 are as follows:
S311, Random Erasing:
(1) IRE: randomly selecting an occlusion position on the whole target image;
(2) ORE: randomly selecting an occlusion position within the bounding-box area of the target;
(3) combining both IRE and ORE;
S312, Hide and Seek:
the picture is divided into S×S grid cells, each cell is hidden with a certain probability, and a different group of cells is hidden in each batch for the same picture during training;
S313, Grid Mask:
to avoid deleting the complete target or losing context information through over-large deleted areas in S311 and S312, four parameters x, y, r, and d are set through Grid Mask:

$$k = \frac{M}{H \times W} = 1 - (1 - r)^2$$

where k is the keep ratio, r determines the size of the mask within each unit, M is the number of reserved pixels, and H and W are the image height and width; x and y are region coordinates randomly generated on the image. The unmasked area takes the value 1 and the masked area the value 0, producing a mask with the same resolution as the original image, which is then multiplied with the original image to obtain the augmented image;
S314, Mixup:
performing mixed enhancement on the images, mixing images across different classes; the algorithm can be summarized as:

$$\tilde{x} = \lambda x_1 + (1 - \lambda) x_2$$
$$\tilde{y} = \lambda y_1 + (1 - \lambda) y_2$$

where x1 and x2 are pixels of different images, y1 and y2 are the corresponding labels, and λ is the mixing weight.
S315, Cutmix:
a portion of the image area is randomly cropped and filled with pixel values from the corresponding region of other data in the training set.
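As an illustration of the Mixup step above, here is a minimal sketch (drawing λ from a Beta distribution is a common convention assumed here, not stated in the text; the images and labels are dummies):

```python
import torch

def mixup(x1, y1, x2, y2, alpha=0.2):
    """Blend two images and their one-hot labels: x~ = lam*x1 + (1-lam)*x2, y~ = lam*y1 + (1-lam)*y2."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()  # mixing weight λ
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

# Dummy 224 x 224 RGB images from two different classes (10 hypothetical classes)
img_a, img_b = torch.rand(3, 224, 224), torch.rand(3, 224, 224)
lab_a, lab_b = torch.eye(10)[3], torch.eye(10)[7]
mixed_img, mixed_lab = mixup(img_a, lab_a, img_b, lab_b)
```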
The sample identification stage comprises the following steps:
S1, inputting the processed building material sample image into the model and performing feature extraction through a ResNet that was pre-trained on ImageNet and has its top layer removed.
S2, denoting the last 5 residual convolution modules in ResNet as {C1, C2, C3, C4, C5} and extracting the output feature maps of these 5 residual modules, denoted {P1, P2, P3, P4, P5}; generating the feature maps of optimal size;
the generation of the feature map comprises the following steps:
S21, performing 2× nearest-neighbor upsampling on the deepest feature map P5;
S22, extracting the output feature map P4 of the residual convolution module C4 adjacent to C5 and applying a 1 × 1 convolution for dimension reduction;
S23, fusing with the upsampled feature map P5 by adding the pixel values at corresponding positions;
S24, applying a 3 × 3 convolution to the fused feature map to reduce the aliasing effect caused by upsampling;
S25, iterating this process until the feature maps of optimal size are generated;
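A minimal PyTorch sketch of this S21–S25 top-down fusion (an FPN-style merge; the channel counts follow ResNet-50 stages and are assumptions, not taken from the patent):

```python
import torch
from torch import nn
import torch.nn.functional as F

class TopDownFusion(nn.Module):
    """Fuse backbone maps C2..C5 into a feature pyramid, as in steps S21-S25."""
    def __init__(self, in_channels=(256, 512, 1024, 2048), out_channels=256):
        super().__init__()
        # 1 x 1 convolutions reduce each backbone output to a common width (S22)
        self.lateral = nn.ModuleList(nn.Conv2d(c, out_channels, 1) for c in in_channels)
        # 3 x 3 convolutions smooth the fused maps to reduce upsampling aliasing (S24)
        self.smooth = nn.ModuleList(nn.Conv2d(out_channels, out_channels, 3, padding=1)
                                    for _ in in_channels)

    def forward(self, feats):              # feats: [C2, C3, C4, C5], shallow to deep
        laterals = [l(f) for l, f in zip(self.lateral, feats)]
        outs = [laterals[-1]]              # start from the deepest map (P5)
        for i in range(len(laterals) - 2, -1, -1):
            up = F.interpolate(outs[0], scale_factor=2, mode="nearest")  # S21: 2x nearest neighbor
            outs.insert(0, laterals[i] + up)        # S23: element-wise addition
        return [s(o) for s, o in zip(self.smooth, outs)]

feats = [torch.randn(1, c, s, s) for c, s in [(256, 56), (512, 28), (1024, 14), (2048, 7)]]
pyramid = TopDownFusion()(feats)           # four fused maps with 256 channels each
```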
S3, passing the feature maps generated above through RPN2, RPN3, RPN4, and RPN5 respectively to generate candidate anchors of different sizes; the anchor areas are set to 32 × 32, 64 × 64, 128 × 128, and 256 × 256 respectively, and the anchors generated by all RPNs uniformly adopt the three aspect ratios 1:1, 1:2, and 2:1, producing multiple candidate anchors; using binary classification and the bounding-box regression function, each RPN screens, according to the ground-truth labels, the anchors that most completely cover the target and corrects them to serve as ROIs;
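As a small illustration of the anchor shapes implied by these areas and aspect ratios (a sketch, not code from the patent):

```python
def make_anchors(base_size, ratios=(1.0, 0.5, 2.0)):
    """Return (w, h) anchor shapes of fixed area base_size**2 at ratios 1:1, 1:2, 2:1."""
    area = float(base_size * base_size)
    shapes = []
    for r in ratios:                 # r = h / w
        w = (area / r) ** 0.5
        shapes.append((round(w), round(w * r)))
    return shapes

# One anchor set per RPN level, with areas 32^2 .. 256^2 as stated above
for base in (32, 64, 128, 256):
    print(base, make_anchors(base))  # e.g. 32 -> [(32, 32), (45, 23), (23, 45)]
```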
S4, for ROIs of different scales, using the feature map layers output by different residual convolution modules as the input of the ROI pooling layer; large-scale ROIs use the output feature map of a deep convolution module, and the ROI level serves as the criterion for which layer's convolution module output is used:

$$k = \left\lfloor k_0 + \log_2\!\left(\frac{\sqrt{wh}}{224}\right) \right\rfloor$$

where w and h are the width and height of the ROI and k0 is the reference level; small-scale ROIs use the output feature map of a shallower convolution module, and k0 is set to 5, corresponding to the size of feature map P5.
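A one-function sketch of this level-assignment rule (the clamp to levels 2–5 is an assumption for out-of-range ROIs):

```python
import math

def roi_level(w, h, k0=5, k_min=2, k_max=5):
    """Map an ROI of width w and height h to a pyramid level k per the formula above."""
    k = math.floor(k0 + math.log2(math.sqrt(w * h) / 224))
    return max(k_min, min(k_max, k))

print(roi_level(224, 224))  # reference-sized ROI -> level 5 (deep map)
print(roi_level(56, 56))    # small ROI -> level 3 (shallower, higher-resolution map)
```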
S5, passing the ROIs generated in S4 into the ROI pooling layer, which uniformly maps the multi-scale ROIs to 7 × 7 proposals; the proposals are projected onto the original building material sample image to generate proposal feature maps, facilitating subsequent processing by the BBOX and CLS branches;
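A sketch of the 7 × 7 pooling step using torchvision's ROI-align operator as a close stand-in for the ROI pooling described (the feature map, stride, and box are dummies):

```python
import torch
from torchvision.ops import roi_align

feat = torch.randn(1, 256, 50, 50)                      # dummy pyramid feature map
rois = torch.tensor([[0.0, 64.0, 64.0, 320.0, 256.0]])  # (batch_idx, x1, y1, x2, y2) in image coords
pooled = roi_align(feat, rois, output_size=(7, 7), spatial_scale=1 / 16)  # stride-16 assumption
print(pooled.shape)                                     # torch.Size([1, 256, 7, 7])
```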
S6, the CLS branch computes the class of each sample from the proposal feature map through a fully connected layer and softmax, outputting the highest class probability as the confidence;
S7, the BBOX branch corrects the proposal region using the bounding-box regression function, generating a more accurately positioned building material detection frame and identifying the material performance state of the building material sample.
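To make S6–S7 concrete, a minimal sketch of the two branches (the 256-channel 7 × 7 proposal size matches the pooling above; the 10-class count and hidden width are hypothetical placeholders):

```python
import torch
from torch import nn

class DetectionHead(nn.Module):
    """CLS branch (class + confidence) and BBOX branch (box corrections), as in S6-S7."""
    def __init__(self, num_classes=10, in_features=256 * 7 * 7):
        super().__init__()
        self.fc = nn.Sequential(nn.Flatten(), nn.Linear(in_features, 1024), nn.ReLU())
        self.cls = nn.Linear(1024, num_classes)        # CLS branch: class scores
        self.bbox = nn.Linear(1024, num_classes * 4)   # BBOX branch: per-class box deltas

    def forward(self, proposals):
        x = self.fc(proposals)
        probs = torch.softmax(self.cls(x), dim=-1)     # softmax over classes (S6)
        conf, label = probs.max(dim=-1)                # highest class probability = confidence
        deltas = self.bbox(x).view(-1, probs.size(-1), 4)  # regression corrections (S7)
        return label, conf, deltas

label, conf, deltas = DetectionHead()(torch.randn(4, 256, 7, 7))  # 4 dummy pooled proposals
```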
There are, of course, many other specific embodiments of the invention, and the above is not to be considered limiting. All technical solutions formed by equivalent substitution or equivalent transformation fall within the scope of protection of the invention.

Claims (4)

1. A building material sample identification method based on deep learning, characterized in that it comprises a model training stage and a sample identification stage,
the model training stage comprising the following steps:
S1, collecting and labeling building material samples and producing a building material sample data set, wherein the data set contains building material samples of each category and is divided into a training set, a test set, and an evaluation set;
S2, constructing a multi-scale information-fusion convolutional neural network for building material sample identification;
S3, performing data enhancement on the sample data set of S1 to obtain the best model performance;
and the sample identification stage comprising the following steps:
S1, inputting the processed building material sample image into the model and extracting features through a ResNet that was pre-trained on ImageNet and has its top layer removed, generating feature maps of optimal size;
S2, passing the feature maps generated in S1 through RPN2, RPN3, RPN4, and RPN5 respectively to generate candidate anchors of different sizes; the anchor areas are set, and the anchors generated by all RPNs uniformly adopt the three aspect ratios 1:1, 1:2, and 2:1, producing multiple candidate anchors; using binary classification and the bounding-box regression function, each RPN screens, according to the ground-truth labels, the anchors that most completely cover the target and corrects them to serve as ROIs;
S3, for ROIs of different scales, using the feature map layers output by different residual convolution modules as the input of the ROI pooling layer; large-scale ROIs use the output feature map of a deep convolution module, and the ROI level serves as the criterion for which layer's convolution module output is used:

$$k = \left\lfloor k_0 + \log_2\!\left(\frac{\sqrt{wh}}{224}\right) \right\rfloor$$

where w and h are the width and height of the ROI and k0 is the reference level;
S4, passing the ROIs generated in S3 into the ROI pooling layer, which uniformly maps the multi-scale ROIs to proposals of the same size; the proposals are projected onto the original building material sample image to generate proposal feature maps, facilitating subsequent processing by the BBOX and CLS branches;
S5, the CLS branch computes the class of each sample from the proposal feature map through a fully connected layer and softmax, outputting the highest class probability as the confidence;
S6, the BBOX branch corrects the proposal region using the bounding-box regression function, generating a more accurately positioned building material detection frame and identifying the material performance state of the building material sample.
2. The building material sample identification method based on deep learning of claim 1, wherein the model training stage S3 comprises the following steps:
S31, constructing building material sample images at different scales and in different scenes by combining multiple data enhancement methods, and expanding the existing data to simulate complex recognition scenes, thereby improving the model's learning of detailed feature information and enhancing its universality;
S32, setting the initial weights to weights pre-trained on ImageNet, the initial learning rate to 0.001, the learning-rate decay factor to 0.1, and the batch_size to 16, and setting the input image size;
S33, in the loss function, the RPN-series modules adopt binary classification loss and regression loss; the CLS branch adopts multi-class classification loss and the BBOX branch adopts regression loss;
S34, training on the training and test sets with an SGD (stochastic gradient descent) optimizer until model performance is best.
3. The building material sample identification method based on deep learning of claim 2, wherein the data enhancement methods in S31 comprise:
S311, Random Erasing:
(1) IRE: randomly selecting an occlusion position on the whole target image;
(2) ORE: randomly selecting an occlusion position within the bounding-box area of the target;
(3) combining both IRE and ORE;
S312, Hide and Seek:
the picture is divided into S×S grid cells, each cell is hidden with a certain probability, and a different group of cells is hidden in each batch for the same picture during training;
S313, Grid Mask:
to avoid deleting the complete target or losing context information through over-large deleted areas in S311 and S312, four parameters x, y, r, and d are set through Grid Mask:

$$k = \frac{M}{H \times W} = 1 - (1 - r)^2$$

where k is the keep ratio, r determines the size of the mask within each unit, M is the number of reserved pixels, and H and W are the image height and width; x and y are region coordinates randomly generated on the image; the unmasked area takes the value 1 and the masked area the value 0, producing a mask with the same resolution as the original image, which is then multiplied with the original image to obtain the augmented image;
S314, Mixup:
performing mixed enhancement on the images, mixing images across different classes; the algorithm can be summarized as:

$$\tilde{x} = \lambda x_1 + (1 - \lambda) x_2$$
$$\tilde{y} = \lambda y_1 + (1 - \lambda) y_2$$

where x1 and x2 are pixels of different images, y1 and y2 are the corresponding labels, and λ is the mixing weight;
S315, Cutmix:
a portion of the image area is randomly cropped and filled with pixel values from the corresponding region of other data in the training set.
4. The building material sample identification method based on deep learning of claim 1, wherein the step of generating the optimal feature maps after feature extraction in the sample identification stage S1 comprises:
S11, denoting the last n residual convolution modules in ResNet as {C1, C2, C3, …, Cn} and extracting the output feature maps of these residual modules, denoted {P1, P2, P3, …, Pn};
S12, performing 2× nearest-neighbor upsampling on the deepest feature map Pn;
S13, extracting the output feature map Pn-1 of the residual convolution module Cn-1 adjacent to Cn and applying a 1 × 1 convolution for dimension reduction;
S14, fusing with the upsampled feature map Pn by adding the pixel values at corresponding positions;
S15, applying a 3 × 3 convolution to the fused feature map to reduce the aliasing effect caused by upsampling;
S16, iterating this process until the feature maps of optimal size are generated.
Priority and Publication Information

Application CN202011201983.7A, "Building material sample identification method based on deep learning", filed 2020-11-02 with priority date 2020-11-02; published as CN112329771A (en) on 2021-02-05; family ID 74323985; country: CN (China); legal status: Pending.

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112967296A (en) * 2021-03-10 2021-06-15 重庆理工大学 Point cloud dynamic region graph convolution method, classification method and segmentation method
CN113657202A (en) * 2021-07-28 2021-11-16 万翼科技有限公司 Component identification method, training set construction method, device, equipment and storage medium
CN113762229A (en) * 2021-11-10 2021-12-07 山东天亚达新材料科技有限公司 Intelligent identification method and system for building equipment in building site
CN113762229B (en) * 2021-11-10 2022-02-08 山东天亚达新材料科技有限公司 Intelligent identification method and system for building equipment in building site


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination