CN114821328A - Electric power image processing method and device based on complete learning - Google Patents

Electric power image processing method and device based on complete learning

Info

Publication number
CN114821328A
CN114821328A
Authority
CN
China
Prior art keywords
layer
power image
power
image
output
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210502905.3A
Other languages
Chinese (zh)
Inventor
罗旺
陈骏
郝运河
张佩
夏源
琚小明
钱莹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nari Information and Communication Technology Co
Original Assignee
Nari Information and Communication Technology Co
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nari Information and Communication Technology Co filed Critical Nari Information and Communication Technology Co
Priority to CN202210502905.3A priority Critical patent/CN114821328A/en
Publication of CN114821328A publication Critical patent/CN114821328A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method and a device for processing a power image based on complete learning, wherein a feature map F_out and the calibration values of the feature map F_out are used as a training set, and the training set is used to train a power image abnormal value detection model; a power image is then input into the trained power image abnormal value detection model, which outputs the abnormality prediction result for the power equipment in the power image. The method effectively exploits the local feature extraction of the convolution process and the global feature extraction of self-attention calculation, constructing an efficient image feature learning method: the hybrid-learned feature map can completely represent the feature information of the original image and effectively learn the feature information in the power image. The invention improves identification accuracy, reduces manual inspection cost, automatically detects defects on the power transmission line, and ensures the safe operation of the national power system.

Description

Electric power image processing method and device based on complete learning
Technical Field
The invention relates to a method and a device for processing an electric power image based on complete learning, and belongs to the technical field of power grid operation and maintenance intellectualization.
Background
With the continuous development of national power grids, outdoor power equipment keeps increasing, and so does the maintenance burden of circuit equipment. To reduce the manpower consumed by maintenance, automatically capturing inspection images with unmanned aerial vehicles and automatically detecting abnormal values in those images is the future direction of power operation and maintenance.
In recent years, with the development of machine learning and deep learning, deep learning methods have been adopted across industries. Convolutional Neural Networks (CNNs) show good feature learning performance in tasks such as image recognition, semantic segmentation and object detection. The Transformer was first introduced in the field of natural language processing and is mostly applied to tasks such as machine translation and semantic relation recognition.
Researchers have since introduced the Transformer into the field of computer vision, finding that it also shows great potential in image generation and super-resolution. Although both approaches have achieved great success, the modules of convolution calculation and self-attention calculation follow quite different design paradigms: a traditional convolutional network is biased toward learning local information of the image and obtaining weighted features, while the Transformer can learn global information features of the image.
Therefore, how to combine the convolutional network and the Transformer so as to handle non-power-equipment objects in power images, such as trees, and conveniently identify power equipment is a technical problem that needs to be solved by those skilled in the art.
Disclosure of Invention
The purpose is as follows: to overcome the deficiencies of the prior art, the invention provides a method and a device for processing a power image based on complete learning. They effectively exploit the local feature extraction of the convolution process and the global feature extraction of self-attention calculation to construct an efficient image feature learning method: the hybrid-learned feature map can completely represent the feature information of the original image and effectively learn the feature information in the power image.
The technical scheme is as follows: in order to solve the above technical problems, the invention adopts the following technical scheme.
In a first aspect, a power image processing method based on complete learning includes the following steps:
Use the feature map F_out and the calibration values of the feature map F_out as a training set, and train the power image abnormal value detection model with the training set.
Input the power image into the trained power image abnormal value detection model, and output the power equipment abnormality prediction result in the power image.
Preferably, the feature map F_out is obtained by calculation with the following formula:

F_out = α·F_conv + β·F_tran

where F_conv is the local feature map, F_tran is the global feature map, and α and β are weighting parameters.
Preferably, the local feature map F_conv is obtained as follows:
Input the original image of the power image into N layers of convolutional neural networks, where the output of each layer of convolutional neural network is the input of the next layer.
After N iterations, the local feature map F_conv is obtained.
The convolutional neural network of each layer at least comprises a convolutional layer, a normalization layer, an activation layer and a pooling layer.
The convolutional neural network of each layer operates as follows:
Pass the image through the convolution filtering operation of the convolutional layer to obtain the convolutional layer output.
Pass the convolutional layer output through the normalization operation of the normalization layer to obtain the normalization layer output.
Pass the normalization layer output through the activation function of the activation layer to obtain the activation layer output.
Pass the activation layer output through the compression operation of the pooling layer to obtain the output of this layer of the convolutional neural network.
As a preferred scheme, the global feature map F_tran is obtained as follows:
Divide the original image of the power image into K small patches, flatten the values of each patch to obtain a linear projection, and add position information to the linear projection as the feature information of each patch, denoted X_i.
Input the feature information X_i of each patch into N layers of Transformer encoders, where the output of each layer of Transformer encoder is the input of the next layer.
After N iterations, the global feature map F_tran is obtained.
The Transformer encoder of each layer at least comprises a normalization layer, a self-attention calculation layer, a residual connection layer and a multi-layer perceptron module.
The Transformer encoder of each layer operates as follows:
Input the feature information X_i of each patch into the normalization layer for linear normalization to obtain the normalized result X_i′.
Input the normalized result X_i′ into the self-attention calculation layer: multiply each normalized image feature X_i′ by the three weight matrices W_Q, W_K and W_V respectively to calculate the linear projection matrices Q, K and V. From Q, K and V, calculate the multi-head attention MSA = [SA_1, SA_2, ..., SA_k]·U_msa, where SA_k represents the attention value of the k-th attention head and U_msa represents the transformation matrix.
Input the multi-head attention MSA into the residual connection layer, and add the normalized result X_i′ to the multi-head attention MSA to obtain the output of the residual connection layer.
Input the output of the residual connection layer into the multi-layer perceptron module for learning and dropping parameter weights to obtain the output of this layer's Transformer encoder.
Preferably, SA_k is calculated by the following formula:

SA_k = softmax(Q·K^T / √D)·V

where D represents the dimension of the input and softmax(·) is the activation function.
Preferably, the power image abnormal value detection model at least comprises a multi-layer perceptron module and a full connection layer.
Preferably, the power equipment abnormality prediction result in the power image includes: the power equipment has a problem, or the power equipment has no problem.
In a second aspect, a power image processing apparatus based on complete learning includes the following modules:
a training module, configured to use the feature map F_out and the calibration values of the feature map F_out as a training set, and to train the power image abnormal value detection model with the training set;
a prediction module, configured to input the power image into the trained power image abnormal value detection model and to output the power equipment abnormality prediction result in the power image.
Beneficial effects: the method and the device for processing the power image based on complete learning provided by the invention combine the advantages of a convolutional neural network and a Transformer, and can effectively learn and express both the local and the global characteristics of the image. The invention improves identification accuracy, reduces manual inspection cost, automatically detects defects on the power transmission line, and ensures the safe operation of the national power system.
Drawings
FIG. 1 is a schematic flow chart of the method of the present invention.
Detailed Description
The present invention will be further described with reference to the following examples.
As shown in FIG. 1, a power image processing method based on complete learning includes the following steps:
Extract local features of the input power equipment image through N layers of convolutional neural networks to obtain a local feature map.
Extract global features of the input power equipment image through an N-layer Transformer encoder to obtain a global feature map.
Perform weighted fusion of the local feature map and the global feature map to obtain the final image feature map.
Pass the final image feature map through the power image abnormal value detection model to obtain the final power equipment identification result.
The specific implementation method comprises the following steps:
and inputting the original image into an N-layer convolutional neural network, and extracting local features of the image. The convolutional neural network of each layer comprises a convolutional layer, a normalization layer, an activation layer and a pooling layer. The output of the convolutional neural network of each layer is the input of the convolutional neural network of the next layer. The convolutional neural network processing method of each layer is as follows:
convolutional layers are the most important layers of computation for convolutional neural networks. The convolution layer includes a plurality of convolution kernels, and the feature map of the image can be obtained through computational filtering of the convolution kernels. This process is formulated as:
Figure BDA0003636010630000041
p is the original image, and P belongs to R h×w ,R h×w The pixel matrices for the length h and width W of the image, W and b represent the convolution network parameter matrix and offset values, respectively. f (×) denotes the operation of convolution filtering.
Figure BDA0003636010630000042
Representing the output of the ith convolutional layer.
After the convolution layer, the obtained output undergoes a normalization operation. Batch Normalization (BN) normalizes the input values and reduces the differences between dimensions to the same range. The BN layer first computes the mean and variance of each batch of data, then subtracts the mean and divides by the standard deviation to obtain the normalized data.
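The batch normalization step described above (subtract the batch mean, divide by the batch standard deviation) can be sketched as follows; this is an illustrative NumPy fragment, not code from the patent, and it omits the learnable scale and shift parameters of a full BN layer:

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    """Per-feature batch normalization: subtract the batch mean and divide
    by the batch standard deviation (eps guards against division by zero)."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    return (x - mean) / np.sqrt(var + eps)

# A toy batch of two samples with two features each.
batch = np.array([[1.0, 2.0],
                  [3.0, 6.0]])
normed = batch_norm(batch)
```

After normalization, each feature column has (approximately) zero mean and unit variance, which brings the two dimensions, originally on different scales, into the same range.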
After the normalization operation, the normalized data is input into the activation layer. The activation function used by this layer is the LeakyReLU activation function, given by:

LeakyReLU(x) = x if x > 0, and α·x otherwise

where x represents the input data and α is a learnable parameter variable.
The result of the activation layer is input into the max pooling layer, which compresses the data by keeping the maximum value within each pooling window.
The above operations constitute one layer of the convolutional neural network, and the output of the i-th convolutional layer is the input of the (i+1)-th convolutional layer. After N such iterations, the local feature map of the image is obtained, denoted F_conv.
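The iterated per-layer pipeline (convolution, then normalization, then activation, then pooling, with each layer's output feeding the next) might be sketched as below. This is a minimal single-channel NumPy illustration with N = 2 layers, a shared random 3×3 kernel, and whole-map normalization standing in for batch normalization; none of it is the patent's actual network:

```python
import numpy as np

def conv2d(x, w, b):
    """Valid cross-correlation of a single-channel image with one kernel."""
    kh, kw = w.shape
    out = np.zeros((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w) + b
    return out

def norm(x, eps=1e-5):
    """Simplified normalization over the whole map (stand-in for BN)."""
    return (x - x.mean()) / np.sqrt(x.var() + eps)

def leaky_relu(x, alpha=0.01):
    return np.where(x > 0, x, alpha * x)

def max_pool(x, k=2):
    """Non-overlapping k x k max pooling (trailing rows/cols cropped)."""
    h, w = x.shape[0] // k, x.shape[1] // k
    return x[:h * k, :w * k].reshape(h, k, w, k).max(axis=(1, 3))

def conv_block(x, w, b):
    # one layer: convolution -> normalization -> activation -> pooling
    return max_pool(leaky_relu(norm(conv2d(x, w, b))))

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 16))   # toy "image"
w = rng.standard_normal((3, 3))     # toy kernel, shared across layers
f = x
for _ in range(2):                  # N = 2; each layer's output feeds the next
    f = conv_block(f, w, 0.0)       # f ends as the local feature map F_conv
```

With a 16×16 input, layer 1 yields a 7×7 map (14×14 after convolution, pooled by 2) and layer 2 a 2×2 map, showing how the spatial extent shrinks while local structure is summarized.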
The original image is divided into K small patches. This is achieved by setting the convolution kernel's kernel_size equal to its stride. The values of each patch are then flattened to obtain a linear projection, and position information is added as the feature information of each patch, which can be represented as X_i. An N-layer Transformer encoder is then constructed to learn the global features of the image. The Transformer encoder of each layer comprises a normalization layer, a self-attention calculation layer, a residual connection layer and a multi-layer perceptron module, and the output of each layer's Transformer encoder is the input of the next layer's. The Transformer encoder of each layer processes as follows:
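The patch-partition-and-flatten step can be illustrated as follows. This NumPy sketch assumes a square single-channel image, partitions it with pure array reshaping (equivalent to a strided convolution whose kernel_size equals its stride), and uses a small random vector in place of a learned positional embedding; it is not code from the patent:

```python
import numpy as np

def patch_embed(img, patch=4):
    """Split an image into non-overlapping patch x patch blocks, flatten
    each block into a token, and add a (here random) positional embedding."""
    h, w = img.shape
    ph, pw = h // patch, w // patch
    # Group rows and columns into patches, then bring patch axes together.
    patches = img.reshape(ph, patch, pw, patch).transpose(0, 2, 1, 3)
    tokens = patches.reshape(ph * pw, patch * patch)   # K tokens of dim patch^2
    pos = np.random.default_rng(0).standard_normal(tokens.shape) * 0.02
    return tokens + pos                                # feature X_i per patch

x = patch_embed(np.arange(64, dtype=float).reshape(8, 8), patch=4)
```

For an 8×8 image with 4×4 patches this produces K = 4 tokens of dimension 16, one feature vector X_i per patch.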
and inputting the characteristic information of each image block into a Layer Normalization (LN) Layer for linear Normalization processing. Its function is consistent with Batch Normalization, which reduces the data characteristics to a certain extent.
The normalized result X_i′ undergoes self-attention calculation, obtaining the semantic relation between each image feature and the other image features. First, each normalized image feature X_i′ is multiplied by the three weight matrices W_Q, W_K and W_V respectively to compute the linear projection matrices Q, K and V, which represent projections of X_i′ into different subspaces. The attention value is then calculated from these three linear projection matrices as:

SA = softmax(Q·K^T / √D)·V

where D represents the dimension of the input and softmax(·) is the activation function. A multi-head attention mechanism is used, so the final multi-head attention result is MSA = [SA_1, SA_2, ..., SA_K]·U_msa, where K and U_msa denote the number of attention heads and the transformation matrix respectively, and SA_k denotes the attention result of the k-th head.
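The single-head attention computation SA = softmax(Q·K^T/√D)·V, with Q, K and V obtained by multiplying the normalized features by W_Q, W_K and W_V, might look like this in NumPy; the weight matrices here are random placeholders, not trained parameters:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(x, wq, wk, wv):
    """One attention head: Q = x Wq, K = x Wk, V = x Wv,
    SA = softmax(Q K^T / sqrt(D)) V."""
    q, k, v = x @ wq, x @ wk, x @ wv
    d = q.shape[-1]                       # dimension D of the projections
    return softmax(q @ k.T / np.sqrt(d)) @ v

rng = np.random.default_rng(1)
x = rng.standard_normal((4, 8))           # 4 patch tokens, dimension 8
wq, wk, wv = (rng.standard_normal((8, 8)) for _ in range(3))
sa = self_attention(x, wq, wk, wv)
```

Each output row mixes the value vectors of all tokens according to the softmax weights, which is how a token's representation picks up global context from every other patch.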
After the multi-head attention calculation, the features from the previous step are added back as a residual connection, which can be expressed as X_i′ ← X_i′ + MSA. After the residual connection, a multi-layer perceptron module further learns the residual-connected features and drops parameter weights. This step effectively improves the fitting of the model.
The above operations constitute one layer of the Transformer encoder, and the output of the i-th encoding layer is the input of the (i+1)-th encoding layer. After N such iterations, the global feature map of the image is obtained, denoted F_tran.
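The per-layer encoder pipeline (LayerNorm, self-attention, residual connection, multi-layer perceptron) and its N-fold iteration can be sketched as follows. A single attention head stands in for the k heads combined by U_msa, a ReLU MLP stands in for the perceptron module, and all weights are random placeholders rather than the patent's parameters:

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    """Per-token layer normalization over the feature axis."""
    mu = x.mean(-1, keepdims=True)
    return (x - mu) / np.sqrt(x.var(-1, keepdims=True) + eps)

def softmax(z):
    e = np.exp(z - z.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def encoder_layer(x, wq, wk, wv, u, w1, w2):
    """One encoder layer: LayerNorm -> single-head self-attention (scaled by
    sqrt(D), mixed by a transformation matrix u) -> residual add -> two-layer
    ReLU MLP with a second residual."""
    xn = layer_norm(x)
    q, k, v = xn @ wq, xn @ wk, xn @ wv
    msa = softmax(q @ k.T / np.sqrt(q.shape[-1])) @ v @ u
    h = xn + msa                                        # residual connection
    return h + np.maximum(layer_norm(h) @ w1, 0) @ w2   # MLP + residual

rng = np.random.default_rng(2)
d = 8
x = rng.standard_normal((4, d))                 # 4 patch tokens
mats = [rng.standard_normal((d, d)) * 0.1 for _ in range(6)]
out = x
for _ in range(2):          # N stacked layers; each output feeds the next
    out = encoder_layer(out, *mats)             # out ends as F_tran
```

Stacking the layer N times (here N = 2, with shared toy weights) mirrors the iteration in the text: the token matrix keeps its shape while each pass refines the global features.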
For the local feature map F_conv and the global feature map F_tran, two learnable parameters α and β are used to adjust the weight ratio between the two feature maps, which can be formulated as F_out = α·F_conv + β·F_tran. This yields the final image feature map F_out.
For the final image feature map F_out, a power image abnormal value detection model is set up, comprising a multi-layer perceptron module for representation learning followed by a final fully connected layer, through which the final identification result is obtained for recognizing abnormal conditions in the power image.
An abnormal value detection model is constructed through the above steps. The input of the model is a photograph taken by an unmanned aerial vehicle; after feature extraction by the intermediate layers and the final fully connected layer, the model's output is obtained. The output is 0 or 1, where 0 indicates that the circuit equipment in the inspection photograph has no problem and 1 indicates that it has a problem, such as rusting or breakage of the equipment. The inspection photographs taken by the unmanned aerial vehicle are divided into a training set, a validation set and a test set. The model is first trained with the pictures in the training set, then validated on the validation set while the hyper-parameters of the network are fine-tuned, and finally its predictive ability is tested on the test set. After training, validation and testing, the final model for detecting defects on the power transmission line is obtained. When one picture or a group of pictures is input, one result value or a group of result values (0 or 1) is output directly as the prediction.
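The weighted fusion F_out = α·F_conv + β·F_tran followed by the perceptron-plus-fully-connected head that emits 0 (no problem) or 1 (problem) might be sketched as follows; all weights and shapes are illustrative placeholders, not trained parameters from the patent:

```python
import numpy as np

def detect(f_conv, f_tran, alpha, beta, w_mlp, w_fc):
    """Fuse the two feature maps as F_out = alpha*F_conv + beta*F_tran, then
    score the image with a one-hidden-layer perceptron and a final fully
    connected layer; 1 flags a defective device, 0 a normal one."""
    f_out = alpha * f_conv + beta * f_tran        # weighted feature fusion
    h = np.maximum(f_out.ravel() @ w_mlp, 0)      # perceptron layer (ReLU)
    score = h @ w_fc                              # final fully connected layer
    return int(score > 0)                         # thresholded 0/1 prediction

rng = np.random.default_rng(3)
f_conv = rng.standard_normal((2, 2))   # toy local feature map
f_tran = rng.standard_normal((2, 2))   # toy global feature map
w_mlp = rng.standard_normal((4, 8))    # placeholder perceptron weights
w_fc = rng.standard_normal(8)          # placeholder FC weights
pred = detect(f_conv, f_tran, 0.5, 0.5, w_mlp, w_fc)
```

In a trained model the threshold decision would come from the learned fully connected layer; the sketch only shows how the fused feature map flows through the head to a binary result.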
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only of the preferred embodiments of the present invention, and it should be noted that: it will be apparent to those skilled in the art that various modifications and adaptations can be made without departing from the principles of the invention and these are intended to be within the scope of the invention.

Claims (8)

1. A power image processing method based on complete learning, characterized by comprising the following steps:
using the feature map F_out and the calibration values of the feature map F_out as a training set, and training the power image abnormal value detection model with the training set;
inputting the power image into the trained power image abnormal value detection model, and outputting the power equipment abnormality prediction result in the power image.
2. The power image processing method based on complete learning according to claim 1, characterized in that: the feature map F_out is obtained by calculation with the following formula:

F_out = α·F_conv + β·F_tran

wherein F_conv is the local feature map, F_tran is the global feature map, and α and β are parameters.
3. The power image processing method based on complete learning according to claim 2, characterized in that: the local feature map F_conv is obtained as follows:
inputting the original image of the power image into N layers of convolutional neural networks, wherein the output of each layer of convolutional neural network is the input of the next layer of convolutional neural network;
obtaining the local feature map F_conv after N iterations;
the convolutional neural network of each layer at least comprises a convolutional layer, a normalization layer, an activation layer and a pooling layer;
the convolutional neural network of each layer operates through the following steps:
passing the image through the convolution filtering operation of the convolutional layer to obtain the convolutional layer output;
passing the convolutional layer output through the normalization operation of the normalization layer to obtain the normalization layer output;
passing the normalization layer output through the activation function of the activation layer to obtain the activation layer output;
passing the activation layer output through the compression operation of the pooling layer to obtain the output of this layer of the convolutional neural network.
4. The power image processing method based on complete learning according to claim 2, characterized in that: the global feature map F_tran is obtained as follows:
dividing the original image of the power image into K small patches, flattening the values of each patch to obtain a linear projection, and adding position information to the linear projection as the feature information of each patch, denoted X_i;
inputting the feature information X_i of each patch into N layers of Transformer encoders, wherein the output of each layer of Transformer encoder is the input of the next layer of Transformer encoder;
obtaining the global feature map F_tran after N iterations;
The Transformer encoder of each layer at least comprises a normalization layer, a self-attention calculation layer, a residual error connection layer and a multi-layer perceptron module;
the operation method of the Transformer encoder of each layer comprises the following steps:
characteristic information X of each small block i Inputting the data into a normalization layer for linear normalization to obtain a normalized result X i ′;
Normalized result X i ' input to the self-attention computing layer, normalize the result X of each image feature i ' respectively with W Q ,W K And w V Multiplying the three weight matrixes, and calculating to obtain linear projection matrixes Q, K and V; calculating multi-head attention MSA, MSA ═ SA from linear projection matrices Q, K and V 1 ,SA 2 ,...,SA k ]U msa Wherein SA k Represents the attention value of the kth attention head, U msa Representing a transformation matrix;
inputting the multi-head attention MSA into a residual error connection layer according to the normalized result X i Adding the multi-head attention MSA to obtain the output of a residual connecting layer;
and inputting the output of the residual connecting layer into a multi-layer perceptron module for learning and discarding the parameter weight to obtain the output of the transform encoder of the layer.
5. The power image processing method based on complete learning according to claim 4, characterized in that: SA_k is calculated by the following formula:

SA_k = softmax(Q·K^T / √D)·V

wherein D represents the dimension of the input and softmax(·) is the activation function.
6. The power image processing method based on complete learning according to claim 1, characterized in that: the power image abnormal value detection model at least comprises a multilayer perceptron module and a full connection layer.
7. The power image processing method based on complete learning according to claim 1, characterized in that: the power equipment abnormality prediction result in the power image includes: the power equipment has a problem, or the power equipment has no problem.
8. An electric power image processing apparatus based on complete learning, characterized by comprising the following modules:
a training module, configured to use the feature map F_out and the calibration values of the feature map F_out as a training set, and to train the power image abnormal value detection model with the training set;
a prediction module, configured to input the power image into the trained power image abnormal value detection model and to output the power equipment abnormality prediction result in the power image.
CN202210502905.3A 2022-05-10 2022-05-10 Electric power image processing method and device based on complete learning Pending CN114821328A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210502905.3A CN114821328A (en) 2022-05-10 2022-05-10 Electric power image processing method and device based on complete learning


Publications (1)

Publication Number Publication Date
CN114821328A true CN114821328A (en) 2022-07-29

Family

ID=82513172

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210502905.3A Pending CN114821328A (en) 2022-05-10 2022-05-10 Electric power image processing method and device based on complete learning

Country Status (1)

Country Link
CN (1) CN114821328A (en)


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115272131A (en) * 2022-08-22 2022-11-01 苏州大学 Image Moire pattern removing system and method based on self-adaptive multi-spectral coding
CN115272131B (en) * 2022-08-22 2023-06-30 苏州大学 Image mole pattern removing system and method based on self-adaptive multispectral coding
CN117848515A (en) * 2024-03-07 2024-04-09 国网吉林省电力有限公司长春供电公司 Switch cabinet temperature monitoring method and system
CN117848515B (en) * 2024-03-07 2024-05-07 国网吉林省电力有限公司长春供电公司 Switch cabinet temperature monitoring method and system
CN118505687A (en) * 2024-07-17 2024-08-16 合肥中科类脑智能技术有限公司 Photovoltaic panel defect detection method, storage medium and electronic device

Similar Documents

Publication Publication Date Title
CN114821328A (en) Electric power image processing method and device based on complete learning
CN110930378B (en) Emphysema image processing method and system based on low data demand
CN112818969A (en) Knowledge distillation-based face pose estimation method and system
CN117557775B (en) Substation power equipment detection method and system based on infrared and visible light fusion
CN114187590A (en) Method and system for identifying target fruits under homochromatic system background
CN114818826A (en) Fault diagnosis method based on lightweight Vision Transformer module
CN114170154A (en) Remote sensing VHR image change detection method based on Transformer
CN116503398B (en) Insulator pollution flashover detection method and device, electronic equipment and storage medium
CN118015332A (en) Remote sensing image saliency target detection method
CN117132910A (en) Vehicle detection method and device for unmanned aerial vehicle and storage medium
CN115723280B (en) Polyimide film production equipment with adjustable thickness
CN117076983A (en) Transmission outer line resource identification detection method, device, equipment and storage medium
CN117079099A (en) Illegal behavior detection method based on improved YOLOv8n
CN117115616A (en) Real-time low-illumination image target detection method based on convolutional neural network
CN116403090A (en) Small-size target detection method based on dynamic anchor frame and transducer
CN116523888A (en) Pavement crack detection method, device, equipment and medium
CN115761268A (en) Pole tower key part defect identification method based on local texture enhancement network
CN116109868A (en) Image classification model construction and small sample image classification method based on lightweight neural network
CN111696070A (en) Multispectral image fusion power internet of things fault point detection method based on deep learning
CN112926619B (en) High-precision underwater laser target recognition system
CN117764969B (en) Lightweight multi-scale feature fusion defect detection method
CN113627556B (en) Method and device for realizing image classification, electronic equipment and storage medium
Zhang et al. Data visualization and fault detection using Bi-Kernel t-distributed stochastic neighbor embedding in semiconductor etching processes
CN118298149A (en) Target detection method for parts on power transmission line
CN118608792A (en) Mamba-based ultra-light image segmentation method and computer device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination