CN112561885A - YOLOv4-tiny-based gate valve opening detection method - Google Patents

YOLOv4-tiny-based gate valve opening detection method Download PDF

Info

Publication number
CN112561885A
CN112561885A (application CN202011502843.3A; granted publication CN112561885B)
Authority
CN
China
Prior art keywords
yolov4
tiny
training
gate valve
data set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011502843.3A
Other languages
Chinese (zh)
Other versions
CN112561885B (en)
Inventor
Li Ming (李明)
Lu Peng (鹿朋)
Zhu Meiqiang (朱美强)
Liang Jian (梁健)
Wang Jun (王军)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China University of Mining and Technology CUMT
Original Assignee
China University of Mining and Technology CUMT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China University of Mining and Technology CUMT filed Critical China University of Mining and Technology CUMT
Priority to CN202011502843.3A priority Critical patent/CN112561885B/en
Publication of CN112561885A publication Critical patent/CN112561885A/en
Application granted granted Critical
Publication of CN112561885B publication Critical patent/CN112561885B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75Determining position or orientation of objects or cameras using feature-based methods involving models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02EREDUCTION OF GREENHOUSE GAS [GHG] EMISSIONS, RELATED TO ENERGY GENERATION, TRANSMISSION OR DISTRIBUTION
    • Y02E10/00Energy generation through renewable energy sources
    • Y02E10/20Hydro energy

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a detection method, in particular to a method for detecting the opening of a gate valve based on YOLOv4-tiny. A YOLOv4-tiny basic detection model is constructed from the YOLOv4-tiny configuration file, and after training a YOLOv4-tiny target detection model is obtained, with which the opening of the gate valve is detected. The method is highly robust and generalizes well, achieves effective detection of the gate valve, provides a scientific and effective guarantee for subsequent control of the gate valve opening, and, while realizing automatic detection, saves labor and raises the degree of intelligence.

Description

YOLOv4-tiny-based gate valve opening detection method
Technical Field
The invention relates to a detection method, in particular to a method for detecting the opening of a gate valve based on YOLOv4-tiny.
Background
The gate valve plays an important role in industry; for example, the water and coal flows of many coal washing plants in China are controlled by gate valves, which open and close flexibly and are convenient to control. Gate valves in important locations are generally equipped with proximity switches, displacement sensors and the like for opening control, but the additionally fitted sensors and connecting cables require a large capital investment.
Video monitoring systems are already installed in most working areas of the gate valve, which motivates a gate valve opening detection algorithm based on machine vision. In complex environments, however, the detection precision of conventional image processing algorithms is not high, so feature extraction can be performed with a laser-sensor-assisted template matching algorithm to obtain the coordinates of the plugboard and the plugboard outer frame, determine the current plugboard opening value, and finally feed the opening value back to a centralized control system for unified control; for a concrete opening detection scheme see published references such as: Laser-assisted visual monitoring of the opening of water pump plugboard valves in coal preparation plants [J/OL]. Industry and Mine Automation, 2020, 46(9): 1-7. This plugboard opening detection method is sensitive to illumination intensity, and the robustness and detection precision of the algorithm still need to be improved.
With the continuous development of deep learning, algorithms based on deep-learning target detection and semantic segmentation are gradually succeeding in industrial applications. On the basis of an existing video monitoring system, a target detection algorithm can therefore quickly locate the working area of the gate valve, a semantic segmentation algorithm can then accurately segment the plugboard and the gate valve, and finally the opening value of the plugboard relative to the valve is solved; see: Research on the opening and control system of coal-preparation-flow plugboard groups based on machine vision [D]. China University of Mining and Technology, 2020. Such detection pipelines usually require multi-step processing and require the camera to keep a fixed pose, otherwise large detection errors easily occur, and their generalization is weak.
Disclosure of Invention
The invention aims to overcome the defects in the prior art, and provides a method for detecting the opening of a gate valve based on YOLOv4-tiny, which can effectively realize the real-time detection of the opening of the gate valve and improve the robustness and generalization capability in the detection process.
According to the technical scheme provided by the invention, the method for detecting the opening of the gate valve based on YOLOv4-tiny comprises the following steps:
s1, acquiring an initial sample data set of the gate valve, and performing required preprocessing on the acquired initial sample data set to obtain a standard sample data set of the gate valve;
s2, carrying out object class marking and position information marking on the standard sample dataset of the gate valve to obtain a standard sample marking dataset of the gate valve, and dividing the standard sample marking dataset of the gate valve to obtain a training dataset, a verification dataset and a test dataset;
s3, reading a configuration file of YOLOv4-tiny, and configuring network layer parameter information in the YOLOv4-tiny configuration file to obtain a YOLOv4-tiny basic detection model; for the YOLOv4-tiny basic detection model, using CSPDarknet53 as a background to perform a main feature extractor, using sub-stage as a rock to perform feature fusion, using YOLO as a head to predict target position and category information, and respectively connecting the constructed space pyramid pooling module with a constraint module with index m and a constraint module with index m + 1;
step S4, training the above-mentioned YOLOv4-tiny basic detection model with the training data set, and stopping when the set number of training rounds is reached or overfitting appears on the verification data set; in each round of training, the weight information of the YOLOv4-tiny basic detection model is updated, so that a corresponding YOLOv4-tiny basic detection training model is obtained after each round of training;
step S5, testing all the YOLOv4-tiny basic detection training models obtained in step S4 with the test data set; after testing, each YOLOv4-tiny basic detection training model yields corresponding multi-scale prediction information, from which the detection precision of that model is determined, and the YOLOv4-tiny basic detection training model with the highest detection precision is taken as the YOLOv4-tiny target detection model;
and S6, processing the gate valve image with the opening to be calculated by using a YOLOv4-tiny target detection model to obtain the opening of the gate valve image.
In step S1, the initial sample data set of the gate valve includes image sequences of the gate valve under different scenes, different time periods, different angles, and/or different illuminations; the pre-processing of the initial sample data set includes data augmentation processing and/or data enhancement processing.
In step S2, the object class labels of the standard sample data set include labels for the plugboard outer frame and labels for the plugboard; the LabelImg labeling tool is used for labeling; after labeling, a data set in the VOC2007 standard data format is generated; the training, verification and test data sets obtained by division have an 8:1:1 image-count ratio.
step S4 includes the following steps:
s4.1, clustering the sizes of the plugboard and the plugboard outer frame of the gate valve in the training data set with the K-means clustering method to obtain M prior boxes;
s4.2, configuring hyper-parameters of a YOLOv4-tiny basic detection model, and randomly changing the size of a corresponding image input into the YOLOv4-tiny basic detection model in a training data set in the training process of the YOLOv4-tiny basic detection model so as to realize multi-scale training of the YOLOv4-tiny basic detection model;
s4.3, updating the weight information of the YOLOv4-tiny basic detection model, so that the numerical value of the loss function is in a descending trend when the YOLOv4-tiny basic detection model after updating the weight information is trained by utilizing a training data set;
and 4.4, stopping training when the training reaches the set times or overfitting appears on the verification data set, and obtaining a corresponding YOLOv4-tiny basic detection training model after each training according to the updated different weight information.
The loss function loss is:

$$
\begin{aligned}
loss ={}& \lambda_{coord}\sum_{i=0}^{S^{2}}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\left[1-IoU\left(\hat{b}_{ij},b_{ij}\right)+\frac{A_{c}-U}{A_{c}}\right]\\
&+\sum_{i=0}^{S^{2}}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\left(\hat{C}_{i}-C_{i}\right)^{2}+\lambda_{noobj}\sum_{i=0}^{S^{2}}\sum_{j=0}^{B}\mathbb{1}_{ij}^{noobj}\left(\hat{C}_{i}-C_{i}\right)^{2}\\
&+\sum_{i=0}^{S^{2}}\mathbb{1}_{i}^{obj}\sum_{c\in classes}\left(\hat{p}_{i}(c)-P_{i}(c)\right)^{2}
\end{aligned}
$$

where S denotes that the image is logically divided into S×S grids, each grid predicting objects with B kinds of bounding boxes; $\mathbb{1}_{ij}^{obj}$ indicates that the bounding box contains a target object and $\mathbb{1}_{ij}^{noobj}$ that it does not; $\hat{b}_{ij}$ denotes the bounding box of the predicted object and $b_{ij}$ the bounding box of the actual object; $A_{c}$ denotes the area of the minimum rectangular region enclosing the rectangular frame at a certain position in the predicted image and the rectangular frame at that position in the actual image, and U denotes the area of their union region; $\hat{C}_{i}$ denotes the confidence of the predicted object and $C_{i}$ the confidence of the actual object; classes denotes all the class information of the actual object; $\hat{p}_{i}(c)$ denotes the class of the predicted object and $P_{i}(c)$ the class of the actual object.
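The coordinate term of the loss penalizes both low overlap and the normalized enclosing-area gap (Ac - U)/Ac, i.e. a GIoU-style measure. A minimal sketch of computing it for axis-aligned boxes (the `(x1, y1, x2, y2)` convention and the function name are illustrative, not from the patent):

```python
def giou(a, b):
    """Generalized IoU of two boxes (x1, y1, x2, y2):
    IoU - (Ac - U) / Ac, where Ac is the area of the smallest enclosing
    rectangle and U is the union area of the two boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    # Smallest rectangle enclosing both boxes (the Ac region).
    cx1, cy1 = min(a[0], b[0]), min(a[1], b[1])
    cx2, cy2 = max(a[2], b[2]), max(a[3], b[3])
    ac = (cx2 - cx1) * (cy2 - cy1)
    return inter / union - (ac - union) / ac
```

Identical boxes score 1.0; disjoint boxes score below 0, which is what drives separated predictions toward the ground truth.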
In step S4, after each round of training of the YOLOv4-tiny basic detection model on the training data set, the verification data set is used to evaluate the mean average precision (mAP) of the current YOLOv4-tiny basic detection training model; if the mAP of the current model on the verification data set is lower than its mAP on the training data set, it is judged to be overfitting.
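The stopping rule described above (a fixed round budget, or verification-set mAP falling below training-set mAP) can be sketched as follows; the function names are my own:

```python
def is_overfit(val_map, train_map):
    """Overfitting criterion from the description: mAP on the verification
    data set has dropped below mAP on the training data set."""
    return val_map < train_map

def should_stop(epoch, max_epochs, val_map, train_map):
    """Stop when the set number of rounds is reached or overfitting appears."""
    return epoch >= max_epochs or is_overfit(val_map, train_map)
```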
In step S5, the method specifically includes the following steps:
s5.1, testing all the YOLOv4-tiny basic detection training models obtained in step S4 with the test data set; each YOLOv4-tiny basic detection training model yields corresponding multi-scale prediction information, comprising the position and confidence of the plugboard outer frame and the position and confidence of the plugboard;
s5.2, processing and counting the multi-scale prediction information to obtain the number of true positives TP, the number of false positives FP and the number of false negatives FN;
s5.3, obtaining the mean average precision (mAP) of each YOLOv4-tiny basic detection training model from TP, FP and FN;
s5.4, taking the YOLOv4-tiny basic detection training model with the highest mAP as the YOLOv4-tiny target detection model.
Step 5.2, when processing the multi-scale prediction information, the maximum suppression processing is carried out on the plugboard outer frame and the plugboard, and after the maximum suppression processing is carried out on the plugboard outer frame and the plugboard, the number TP of correct positive samples, the number FP of wrong positive samples and the number FN of wrong negative samples can be obtained through statistics;
Figure BDA0002843966640000041
Figure BDA0002843966640000042
Figure BDA0002843966640000043
Figure BDA0002843966640000044
wherein Precision represents Precision and Recall represents Recall; p (R) represents a function of a certain object with recall as an independent variable and precision as a dependent variable; n represents a total of N classes of objects.
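A sketch of this evaluation step, assuming greedy per-class NMS and AP computed as the area under the P(R) curve; all names and the IoU threshold are illustrative assumptions, not values fixed by the patent:

```python
def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def nms(detections, iou_thresh=0.45):
    """Greedy non-maximum suppression over (box, confidence) pairs,
    applied per class (plugboard / plugboard outer frame)."""
    kept = []
    for box, score in sorted(detections, key=lambda d: -d[1]):
        if all(iou(box, kb) < iou_thresh for kb, _ in kept):
            kept.append((box, score))
    return kept

def precision(tp, fp):
    return tp / (tp + fp) if tp + fp else 0.0

def recall(tp, fn):
    return tp / (tp + fn) if tp + fn else 0.0

def average_precision(pr_points):
    """AP approximated as the sum of precision * delta-recall over
    recall-sorted (recall, precision) points."""
    ap, prev_r = 0.0, 0.0
    for r, p in sorted(pr_points):
        ap += p * (r - prev_r)
        prev_r = r
    return ap

def mean_average_precision(aps):
    """mAP over the N object classes (here N = 2)."""
    return sum(aps) / len(aps)
```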
In step S6, the opening O of the gate valve image is:

$$O=\frac{x-x_{empty}}{x_{full}-x_{empty}}\times 100\%$$

where $x_{full}$ is the fully open position of the plugboard, $x_{empty}$ is the fully closed position of the plugboard, and x is the current position of the plugboard.
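Under this reading of the formula, the opening computation reduces to a linear normalization of the detected plugboard position between its fully closed and fully open positions; a sketch (the function name is my own, and the direction of normalization is an assumption):

```python
def opening_percent(x, x_full, x_empty):
    """Gate valve opening O: the current plugboard position x normalized
    between fully closed (x_empty) and fully open (x_full), as a percentage."""
    return (x - x_empty) / (x_full - x_empty) * 100.0
```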
The invention has the advantages that: a YOLOv4-tiny basic detection model is constructed from the YOLOv4-tiny configuration file, and after training a YOLOv4-tiny target detection model is obtained, with which the opening of the gate valve is detected. The method is highly robust and generalizes well, achieves effective detection of the gate valve, provides a scientific and effective guarantee for subsequent control of the gate valve opening, and, while realizing automatic detection, saves labor and raises the degree of intelligence.
Drawings
FIG. 1 is a flow chart of the present invention.
Fig. 2 is a schematic diagram of the gate valve during sampling according to the present invention.
Fig. 3 is a schematic diagram of a spatial pyramid pooling module constructed by the present invention.
FIG. 4 is a schematic diagram of the network layer of the YOLOv4-tiny basic detection model of the present invention.
Description of reference numerals: 1: plugboard; 2: plugboard outer frame; 3: camera; 4: conveying pipeline; 5: electric valve push rod.
Detailed Description
The invention is further illustrated by the following specific figures and examples.
As shown in fig. 1: in order to effectively realize the real-time detection of the opening of the gate valve and improve the robustness and generalization capability in the detection process, the method for detecting the opening of the gate valve comprises the following steps:
s1, acquiring an initial sample data set of the gate valve, and performing required preprocessing on the acquired initial sample data set to obtain a standard sample data set of the gate valve;
specifically, the initial sample data set of the gate valve comprises image sequences of the gate valve under different scenes, different time periods, different angles and/or different illuminations; the pre-processing of the initial sample data set includes data augmentation processing and/or data enhancement processing.
During specific implementation, the camera device, which can be a camera 3 or a video camera, is placed near the gate valve at a height of about 1 to 2 meters above the plugboard 1 and a horizontal distance of about 1 meter from it, with a viewing angle of about 45 to 90 degrees to the plane of the plugboard 1, so that the overall outlines of the plugboard 1 and its plugboard outer frame 2 can be captured and working videos of the gate valve in different scenes and time periods obtained. The gate valve is also adaptively connected with a conveying pipeline 4 and an electric valve push rod 5.
After the working video of the gate valve is obtained by the camera equipment, video framing processing is required to be carried out, and images of each frame are segmented to obtain an image sequence set of the gate valve. And screening the obtained image sequence set of the gate valve, selecting the image sequence set of the gate valve under different scenes, different time periods, different angles and different illumination, and obtaining an initial sample data set of the gate valve.
Performing data augmentation and data enhancement processing on the initial sample data set to obtain a final standard sample data set; of course, the preprocessing performed on the initial sample data set further includes, but is not limited to, operations of flipping, translating, cropping, and increasing contrast of the image, and the type of the specific operation and the like may be selected according to actual needs.
In the embodiment of the invention, when the initial sample data set is screened during preprocessing, the number of images of each scene is kept the same. Data augmentation and data enhancement improve the diversity of the samples and thereby the anti-interference capability, generalization performance and robustness; such processing is routinely used in deep-learning-based target detection, and the specific image processing is well known to those skilled in the art and is not repeated here. Generally, data augmentation is performed first, followed by data enhancement.
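Two of the mentioned enhancement operations (flipping and contrast adjustment) might look like the following NumPy sketch; this is illustrative, not the patent's implementation:

```python
import numpy as np

def flip_horizontal(img):
    """Horizontal flip, one of the augmentation operations mentioned above."""
    return img[:, ::-1]

def adjust_contrast(img, factor):
    """Scale pixel deviations from the image mean by `factor`,
    clipped back to the valid [0, 255] range."""
    mean = img.mean()
    out = (img.astype(float) - mean) * factor + mean
    return np.clip(out, 0, 255).astype(np.uint8)
```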
S2, carrying out object class marking and position information marking on the standard sample dataset of the gate valve to obtain a standard sample marking dataset of the gate valve, and dividing the standard sample marking dataset of the gate valve to obtain a training dataset, a verification dataset and a test dataset;
specifically, the object class labels of the standard sample data set include labels for the plugboard outer frame and labels for the plugboard; the LabelImg labeling tool is used for labeling; after labeling, a data set in the VOC2007 standard data format is generated; the training, verification and test data sets obtained by division have an 8:1:1 image-count ratio.
The LabelImg labeling tool is a commonly used labeling tool in this technical field, and the specific labeling process using LabelImg is well known to those skilled in the art. The specific process of generating a data set in the VOC2007 standard data format with the LabelImg labeling tool is consistent with the prior art and is not repeated here.
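The 8:1:1 division into training, verification and test data sets could be sketched as follows (the helper name and the deterministic seed are assumptions):

```python
import random

def split_dataset(sample_ids, train=0.8, val=0.1, seed=0):
    """Shuffle labeled sample IDs and split them into train/val/test
    subsets; the patent specifies an 8:1:1 image-count ratio."""
    ids = list(sample_ids)
    random.Random(seed).shuffle(ids)  # deterministic shuffle for repeatability
    n_train = int(len(ids) * train)
    n_val = int(len(ids) * val)
    return (ids[:n_train],
            ids[n_train:n_train + n_val],
            ids[n_train + n_val:])
```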
S3, reading the configuration file of YOLOv4-tiny and configuring the network layer parameter information in it to obtain a YOLOv4-tiny basic detection model; the YOLOv4-tiny basic detection model uses CSPDarknet53 as the backbone for main feature extraction, an FPN-style structure as the neck for feature fusion, and a YOLO head to predict target position and class information, and the constructed spatial pyramid pooling module is connected between the convolutional module with index m and the convolutional module with index m+1;
specifically, the configuration file of YOLOv4-tiny, i.e. yolov4-tiny.cfg, has the same form as the existing one; the YOLOv4-tiny model can be configured through this file, which is well known to those skilled in the art and is not repeated here. The corresponding network layer parameter information can be configured in the YOLOv4-tiny configuration file to obtain the required YOLOv4-tiny basic detection model; the specific process and manner of configuring the network parameter information are consistent with the prior art, are known to those skilled in the art, and are not repeated here.
In specific implementation, the configuration file of YOLOv4-tiny further includes a convolutional module, a route module, a maxpooling module, an upsample module, and a yolo module; the convolutional module is used for feature extraction, the route module mainly for concatenating feature maps, the maxpooling module for reducing the dimension of the feature map, the upsample module for increasing it, and the yolo module for predicting the class and position of the target object. The concrete functions of these modules are consistent with the prior art and their numbers can be selected according to actual needs; fig. 4 is a network layer schematic diagram of the YOLOv4-tiny basic detection model. With CSPDarknet53 as the backbone for main feature extraction, an FPN-style neck for feature fusion, and a YOLO head predicting target position and class information, the connection relations among the convolutional, route, maxpooling, upsample and yolo modules in the network layer information can be selected as needed, as is known to those skilled in the art, and are not repeated here.
In order to increase the receptive field of the detection network and improve its detection performance on small objects, in the embodiment of the present invention a spatial pyramid pooling (SPP) module is constructed from convolutions with kernels of 3×3, 5×5 and 7×7; its specific structure is shown in fig. 3, and for background see: He K., Zhang X., Ren S., et al. Spatial pyramid pooling in deep convolutional networks for visual recognition [J]. IEEE Transactions on Pattern Analysis & Machine Intelligence, 2014, 37(9): 1904-16. In fig. 3, A and B denote the width and height of the feature map; C1 and C2 denote the numbers of channels of the feature map; and 3×3, 5×5 and 7×7 denote the convolution kernel sizes.
In the embodiment of the present invention, the constructed spatial pyramid pooling module is inserted after the convolutional module with index 9 and before the convolutional module with index 10, i.e. m = 9, as shown in fig. 4; of course, the specific value of m can be selected as needed. In a specific implementation, the spatial pyramid pooling module is placed before the multi-scale output of the YOLOv4-tiny basic detection model, as is known to those skilled in the art and not repeated here.
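Some shape bookkeeping for the SPP variant of fig. 3, assuming stride-1 branches whose outputs are concatenated with the input along the channel axis; the example channel counts are assumptions, not values from the patent:

```python
def same_padding(kernel_size):
    """Padding that keeps an A x B feature map the same spatial size under an
    odd-kernel, stride-1 convolution, as needed for the parallel SPP branches."""
    return kernel_size // 2

def spp_output_channels(c1, branch_channels):
    """Channels C2 after concatenating the shortcut (C1 channels) with the
    three branch outputs along the channel axis."""
    return c1 + sum(branch_channels)

# The three branch kernels named in the description, with their paddings:
paddings = {k: same_padding(k) for k in (3, 5, 7)}
# Hypothetical example: a 256-channel input with three 128-channel branches.
c2 = spp_output_channels(256, [128, 128, 128])
```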
Step S4, training the above-mentioned YOLOv4-tiny basic detection model by using training data set, stopping training when the training reaches the set times or overfitting appears on the verification data set, in each training, updating the weight information of the YOLOv4-tiny basic detection model, so as to obtain a corresponding YOLOv4-tiny basic detection training model after each training;
in specific implementation, step S4 includes the following steps:
s4.1, clustering the sizes of the plugboard and the plugboard outer frame of the gate valve in the training data set with the K-means clustering method to obtain M prior boxes;
specifically, the training data set is read, width and height values of the gate valve are randomly selected from it as the initial cluster centers, and the K-means clustering method iterates until the number of iterations is reached or the cluster sizes no longer change, yielding six prior boxes of different sizes, i.e. M = 6; of course, in specific implementations different values of M can be obtained for different training data sets. The specific clustering process of the K-means method is consistent with the prior art, is well known to those skilled in the art, and is not repeated here. After clustering, the six prior boxes obtained are {[105,54], [202,198], [352,227], [523,308], [533,325], [534,329]}.
In the embodiment of the present invention, the M prior boxes indicate that there are mainly M plugboard or plugboard outer frame sizes (width and height) in the images of the training data set. The M prior boxes reflect the main sizes of the plugboard and the plugboard outer frame in the current training data set; given these prior boxes, the YOLOv4-tiny basic detection model only needs to fine-tune around these main sizes when predicting the size of a plugboard or plugboard outer frame, which gives more accurate predicted sizes and improves localization precision.
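Anchor clustering of this kind is commonly done with 1 - IoU as the distance between (width, height) pairs; a pure-Python sketch under that assumption (the patent does not spell out its distance metric):

```python
import random

def iou_wh(box, centroid):
    """IoU between two (w, h) boxes assumed to share the same top-left corner."""
    inter = min(box[0], centroid[0]) * min(box[1], centroid[1])
    return inter / (box[0] * box[1] + centroid[0] * centroid[1] - inter)

def kmeans_anchors(boxes, k=6, iters=100, seed=0):
    """K-means over labeled box sizes, assigning each box to the centroid
    with the highest IoU (i.e. smallest 1 - IoU distance)."""
    rng = random.Random(seed)
    centroids = rng.sample(boxes, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for b in boxes:
            best = max(range(k), key=lambda i: iou_wh(b, centroids[i]))
            clusters[best].append(b)
        new = [
            (sum(b[0] for b in c) / len(c), sum(b[1] for b in c) / len(c))
            if c else centroids[i]          # keep centroid of an empty cluster
            for i, c in enumerate(clusters)
        ]
        if new == centroids:                # clusters stopped changing
            break
        centroids = new
    return sorted(centroids, key=lambda c: c[0] * c[1])
```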
S4.2, configuring hyper-parameters of a YOLOv4-tiny basic detection model, and randomly changing the size of a corresponding image input into the YOLOv4-tiny basic detection model in a training data set in the training process of the YOLOv4-tiny basic detection model so as to realize multi-scale training of the YOLOv4-tiny basic detection model;
specifically, the hyper-parameters are values necessary for the operation of the YOLOv4-tiny basic detection model; the model can run only after both the hyper-parameters and the network layers are set. The relationship between the hyper-parameters and the YOLOv4-tiny basic detection model is consistent with the prior art, and the specific settings of the hyper-parameters can follow the existing literature or empirical values, as known to those skilled in the art. In the embodiment of the present invention, the configured hyper-parameters may be: 104 epochs; batch size 64 and mini-batch size 16; initial learning rate 0.01 and final learning rate 0.0005; a cosine annealing learning rate decay strategy; and a weight decay parameter of 0.000484. The specific manner and process of configuring the hyper-parameters are consistent with the prior art, well known to those skilled in the art, and not repeated here.
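The cosine annealing decay between the stated initial (0.01) and final (0.0005) learning rates can take the standard form below; the exact schedule used by the patent is not given, so this is an assumption:

```python
import math

def cosine_annealed_lr(epoch, total_epochs, lr0=0.01, lr_min=0.0005):
    """Standard cosine-annealing schedule: starts at lr0, decays smoothly
    to lr_min over total_epochs."""
    return lr_min + 0.5 * (lr0 - lr_min) * (1 + math.cos(math.pi * epoch / total_epochs))
```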
In the training process, the size of the input image is changed randomly for multi-scale training, which increases the robustness of the YOLOv4-tiny basic detection model to the size of the input image. In a specific implementation, the resize() function of the opencv module can be used to change the image size randomly; the changed size should be neither too small nor too large and should be a multiple of the downsampling of the YOLOv4-tiny basic detection model, here a multiple of 32. The specific method of randomly changing the input image size is consistent with the prior art, is known to those skilled in the art, and is not repeated here. The downsampling multiple of the YOLOv4-tiny basic detection model specifically refers to the size of the input image divided by the size of the minimum output feature map; "minimum" because the YOLOv4-tiny basic detection model has a dual-scale output and produces feature maps of two sizes. The selectable range of input image sizes may be [352, 384, 416, 448, 480, 512, 544, 576, 608, 640], where the height value of the input image equals the width value.
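The multi-scale size selection described above can be sketched as follows; `pick_training_size` is an illustrative name, and in a real loader the chosen size would be passed to opencv's resize():

```python
import random

# Candidate square resolutions listed in the description; each is a multiple
# of 32, the overall downsampling stride of YOLOv4-tiny.
SCALES = [352, 384, 416, 448, 480, 512, 544, 576, 608, 640]

def downsample_multiple(input_size, min_feature_map_size):
    """Downsampling multiple = input size / smallest output feature-map size."""
    return input_size // min_feature_map_size

def pick_training_size(rng=random):
    """Randomly pick an equal-height-and-width training resolution."""
    s = rng.choice(SCALES)
    return (s, s)
```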
S4.3, updating the weight information of the YOLOv4-tiny basic detection model, so that the value of the loss function shows an overall downward trend when the YOLOv4-tiny basic detection model with updated weight information is trained using the training data set;
Specifically, the loss function loss is:

$$\begin{aligned}
\mathrm{loss} = {} & \sum_{i=0}^{S^2}\sum_{j=0}^{B} I_{ij}^{obj}\left[1 - IoU\!\left(b, b^{gt}\right) + \frac{A^c - U}{A^c}\right] \\
& - \sum_{i=0}^{S^2}\sum_{j=0}^{B} I_{ij}^{obj}\left[C_i \log \hat{C}_i + \left(1 - C_i\right)\log\!\left(1 - \hat{C}_i\right)\right] \\
& - \sum_{i=0}^{S^2}\sum_{j=0}^{B} I_{ij}^{noobj}\left[C_i \log \hat{C}_i + \left(1 - C_i\right)\log\!\left(1 - \hat{C}_i\right)\right] \\
& - \sum_{i=0}^{S^2} I_{i}^{obj}\sum_{c \in classes}\left[P_i(c) \log \hat{P}_i(c) + \left(1 - P_i(c)\right)\log\!\left(1 - \hat{P}_i(c)\right)\right]
\end{aligned}$$

where S² denotes dividing the image logically into S×S grids; each grid uses B kinds of bounding boxes to predict objects, so in theory the image can contain at most S²·B predicted objects. I_{ij}^{obj} indicates that the bounding box contains a target object, the target object being a gate plate and/or a gate outer frame; I_{ij}^{noobj} indicates that the bounding box contains no target object, i.e., neither a gate plate nor a gate outer frame. b denotes the bounding box of the predicted object and b^{gt} the bounding box of the actual object; A^c denotes the area of the smallest rectangular region enclosing both the predicted rectangular frame at a certain position and the actual rectangular frame at that position, and U denotes the area of the union region of those two rectangular frames. \hat{C}_i denotes the confidence of the predicted object and C_i the confidence of the actual object; classes denotes all numerical category information of the actual objects. \hat{P}_i denotes the category of the predicted object and P_i the category of the actual object; the predicted object is specifically a gate plate or a gate outer frame.
For any image in the training data set, the category P_i of the actual object, all numerical category information classes of the actual objects, and the confidence C_i of the actual object are obtained from the labeling in step S2; the bounding box b^{gt} of the actual object can likewise be specified from the image, as is well known to those skilled in the art. The category \hat{P}_i of the predicted object, the bounding box b of the predicted object, and the confidence \hat{C}_i of the predicted object are obtained from the output of the YOLOv4-tiny basic detection model, in a manner consistent with the prior art and well known to those skilled in the art. The specific determination of the smallest enclosing rectangular region A^c and of the union region U of the predicted and actual rectangular frames at a given position is likewise consistent with the prior art, is known to those skilled in the art, and is not described again here.
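The bounding-box term of the loss, 1 − IoU + (A^c − U)/A^c, can be sketched for axis-aligned boxes as follows (an illustrative sketch; the (x1, y1, x2, y2) box format and the function names are assumptions, not the patented implementation):

```python
def box_area(b):
    """Area of a box given as (x1, y1, x2, y2)."""
    return max(0.0, b[2] - b[0]) * max(0.0, b[3] - b[1])

def giou_loss(pred, gt):
    """Bounding-box loss term: 1 - IoU + (Ac - U) / Ac."""
    # Intersection of the two boxes.
    ix1, iy1 = max(pred[0], gt[0]), max(pred[1], gt[1])
    ix2, iy2 = min(pred[2], gt[2]), min(pred[3], gt[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = box_area(pred) + box_area(gt) - inter  # U: union area
    # Ac: area of the smallest rectangle enclosing both boxes.
    cx1, cy1 = min(pred[0], gt[0]), min(pred[1], gt[1])
    cx2, cy2 = max(pred[2], gt[2]), max(pred[3], gt[3])
    ac = (cx2 - cx1) * (cy2 - cy1)
    iou = inter / union
    return 1.0 - iou + (ac - union) / ac
```

When the predicted and actual boxes coincide the term is 0; for disjoint boxes it grows with the gap between them, which is what drives the box regression.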
In the embodiment of the invention, the updated weight information comprises the weights w and the biases b; with the updated weight information, the features of the gate valve in the image can be extracted, so that the category of the gate valve and its position in the image can be output accurately. The role of the weight information in the YOLOv4-tiny basic detection model is consistent with the prior art and well known to those skilled in the art. In a specific implementation, the weights w and biases b can be learned as the parameters of a multi-layer perceptron (MLP) and updated by gradient descent; that is, the multi-layer perceptron is the mathematical model parameterized by w and b. The specific manner and process of updating the weight information are well known to those skilled in the art and are not described again here.
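The gradient-descent update of a weight w and bias b mentioned above can be sketched for the scalar case as follows (illustrative only; the real model updates whole tensors layer by layer, and the weight-decay value 0.000484 is the one quoted earlier in the text):

```python
def sgd_step(w, b, grad_w, grad_b, lr, weight_decay=0.000484):
    """One gradient-descent update of weight w and bias b.

    Weight decay is applied to the weight only, as is conventional;
    this scalar form is purely illustrative.
    """
    w_new = w - lr * (grad_w + weight_decay * w)
    b_new = b - lr * grad_b
    return w_new, b_new

# With zero gradients, only weight decay shrinks w; b is unchanged.
w1, b1 = sgd_step(1.0, 0.5, 0.0, 0.0, lr=0.1)
```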
And S4.4, stopping training when the set number of training iterations is reached or overfitting appears on the verification data set; after each training iteration, a corresponding YOLOv4-tiny basic detection training model is obtained from the updated weight information.
In the embodiment of the invention, the number of training iterations can generally be set to 50-100; each iteration trains on all images in the training data set. After the training data set has been used to train the YOLOv4-tiny basic detection model, the verification data set is used to evaluate the mean average precision of the current YOLOv4-tiny basic detection training model; if its mean average precision on the verification data set is lower than its mean average precision on the training data set, overfitting is judged to have occurred.
In a specific implementation, the mean average precision of the YOLOv4-tiny basic detection training model on the verification data set and on the training data set can be determined by conventional and commonly used technical means, which are well known to those skilled in the art and are not described again here.
Step S5, testing all the YOLOv4-tiny basic detection training models obtained in the step S4 by using a test data set; after testing, any YOLOv4-tiny basic detection training model obtains corresponding multi-scale prediction information, the detection precision of the corresponding YOLOv4-tiny basic detection training model is determined according to the multi-scale prediction information of each YOLOv4-tiny basic detection training model, and the YOLOv4-tiny basic detection training model with the highest detection precision is used as a YOLOv4-tiny target detection model;
specifically, step S5 specifically includes the following steps:
S5.1, testing all the YOLOv4-tiny basic detection training models obtained in step S4 with the test data set; each YOLOv4-tiny basic detection training model obtains corresponding multi-scale prediction information, the multi-scale prediction information comprising the position of the gate outer frame, the confidence of the gate outer frame, the position of the gate plate, and the confidence of the gate plate;
In the embodiment of the invention, each YOLOv4-tiny basic detection training model obtains corresponding multi-scale prediction information comprising the position of the gate outer frame, the confidence of the gate outer frame, the position of the gate plate, and the confidence of the gate plate. The actual confidences and actual categories of the gate plate and the gate outer frame are all obtained by labeling with the LabelImage labeling tool. For example, 0 and 1 may respectively denote the categories of the gate plate and the gate outer frame; when labeled, 0 and 1 serve both as classes (numerical labels) and as the values of P_i (the category of the actual object). The multi-scale prediction output gives, for a given position, whether a gate plate or a gate outer frame is present there and the corresponding likelihood (confidence): for example, the upper-left corner of an image may be predicted as a gate plate with likelihood 0.4, and the middle of the image as a gate outer frame with likelihood 0.9; the higher the confidence value, the more likely an object exists in that region. However, only objects above the confidence threshold (typically 0.5) are kept; for example, the object with confidence 0.4 is discarded and the object with confidence 0.9 is retained.
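The confidence-threshold filtering described above (keep the 0.9 detection, discard the 0.4 detection at a 0.5 threshold) can be sketched as follows (the tuple layout and threshold constant name are assumptions):

```python
CONF_THRESHOLD = 0.5  # typical threshold quoted in the text

# Hypothetical predictions as (class_id, confidence, box);
# class 0 = gate plate, class 1 = gate outer frame.
predictions = [
    (0, 0.4, (5, 5, 40, 40)),       # gate plate in the upper-left corner: discarded
    (1, 0.9, (100, 80, 300, 260)),  # gate outer frame in the middle: kept
]

kept = [p for p in predictions if p[1] >= CONF_THRESHOLD]
```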
S5.2, processing and counting the multi-scale prediction information to obtain the number TP of correct positive samples, the number FP of incorrect positive samples, and the number FN of incorrect negative samples;
In step 5.2, processing the multi-scale prediction information comprises performing non-maximum suppression on both the gate outer frame and the gate plate predictions; after non-maximum suppression has been performed on both, the number TP of correct positive samples, the number FP of incorrect positive samples, and the number FN of incorrect negative samples can be obtained by statistics;
In the embodiment of the invention, the non-maximum suppression process is as follows: first, an intersection-over-union threshold is set, which may specifically be 0.5; second, the output multi-scale prediction results are grouped by category, and the prediction results of each category are sorted in descending order of the confidence of their bounding boxes (i.e., of the gate plates and gate outer frames); then the following operations are repeated for each category of predicted object until all categories have been processed. In the embodiment of the present invention, the predicted object is a gate plate or a gate outer frame.
1) Select the prediction with the highest confidence that is not yet marked as a candidate in the prediction result list, and mark it as a candidate.
2) Then compute, in turn, the intersection-over-union between each prediction in this category that is not marked as a candidate and the candidate from 1). If the intersection-over-union between an unmarked prediction and the candidate exceeds the threshold, remove that unmarked prediction from the prediction result list; otherwise, leave it unchanged.
3) Repeat operations 1)-2) until no unmarked predictions remain in this category.
In a specific implementation, the intersection-over-union is calculated as:

$$IoU = \frac{\mathrm{DetectionResult} \cap \mathrm{GroundTruth}}{\mathrm{DetectionResult} \cup \mathrm{GroundTruth}}$$

where DetectionResult denotes the bounding box predicted by the detector and GroundTruth denotes the real bounding box of the target object. The larger the IoU, the better the performance of the corresponding YOLOv4-tiny basic detection training model.
In the embodiment of the invention, non-maximum suppression yields the most probable gate plate or gate outer frame predictions from the output multi-scale prediction information.
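The per-class non-maximum suppression procedure of steps 1)-3) can be sketched as follows (an illustrative sketch; detections are assumed to be (confidence, box) pairs with boxes as (x1, y1, x2, y2)):

```python
def _area(r):
    """Area of a box given as (x1, y1, x2, y2)."""
    return (r[2] - r[0]) * (r[3] - r[1])

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = _area(a) + _area(b) - inter
    return inter / union if union > 0 else 0.0

def nms(dets, iou_threshold=0.5):
    """Non-maximum suppression for one class of detections.

    Repeatedly take the highest-confidence remaining detection as a
    candidate, drop all remaining detections whose IoU with it exceeds
    the threshold, and continue until none remain.
    """
    dets = sorted(dets, key=lambda d: d[0], reverse=True)
    kept = []
    while dets:
        best = dets.pop(0)  # highest remaining confidence
        kept.append(best)
        dets = [d for d in dets if iou(d[1], best[1]) <= iou_threshold]
    return kept

# Two heavily overlapping boxes and one distant box (hypothetical values):
kept_dets = nms([(0.9, (0, 0, 10, 10)),
                 (0.8, (1, 1, 11, 11)),
                 (0.7, (50, 50, 60, 60))])
```

The 0.8 detection overlaps the 0.9 candidate with IoU ≈ 0.68 > 0.5 and is suppressed; the distant 0.7 detection survives.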
S5.3, obtaining the mean average precision of each YOLOv4-tiny basic detection training model according to the number TP of correct positive samples, the number FP of incorrect positive samples, and the number FN of incorrect negative samples;
Specifically,

$$Precision = \frac{TP}{TP + FP}$$

$$Recall = \frac{TP}{TP + FN}$$

$$AP = \int_0^1 P(R)\,dR$$

$$mAP = \frac{1}{N}\sum_{n=1}^{N} AP_n$$

where mAP is the mean average precision and AP is the average precision; Precision denotes precision and Recall denotes recall; P(R) denotes, for a certain class of object, precision as a function of recall; and N denotes the total number of object classes. In the embodiment of the invention, N is 2, namely the gate plate and the gate outer frame of the gate valve.
In the embodiment of the invention, after non-maximum suppression, the multi-scale prediction information of each image is obtained and then compared one by one with the real information of the gate valve in the image. If the category of an object (gate plate or gate outer frame) at a position in the image is consistent with the category predicted at that position by the multi-scale prediction information, it is counted as TP; if not, as FP; if there is an object at a position in the image but the multi-scale prediction information predicts nothing there, it is counted as FN. In a specific implementation, a positive sample means the object in the image is a gate plate or a gate outer frame, and a negative sample means it is neither.
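The precision, recall, and mAP computations can be sketched as follows (illustrative counts; the per-class AP values here are assumed inputs rather than computed from a full P(R) curve):

```python
def precision_recall(tp, fp, fn):
    """Precision = TP/(TP+FP), Recall = TP/(TP+FN)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

def mean_average_precision(ap_per_class):
    """mAP = (1/N) * sum of AP over the N classes
    (here N = 2: gate plate and gate outer frame)."""
    return sum(ap_per_class) / len(ap_per_class)

# Hypothetical counts: 8 correct detections, 2 false positives, 2 misses.
p, r = precision_recall(tp=8, fp=2, fn=2)
m = mean_average_precision([0.9, 0.7])  # assumed per-class AP values
```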
And S5.4, taking the YOLOv4-tiny basic detection training model with the highest mean average precision as the YOLOv4-tiny target detection model.
And S6, processing the gate valve image with the opening to be calculated by using a YOLOv4-tiny target detection model to obtain the opening of the gate valve image.
In the embodiment of the invention, when the YOLOv4-tiny target detection model processes a gate valve image, it can only predict the confidences of the gate plate and the gate outer frame and their positions in the image (i.e., the x and y coordinate values in the image coordinate system). Therefore, after the horizontal coordinate x of the gate plate and of the gate outer frame in the image coordinate system is obtained, the current opening value is calculated with the opening algorithm, as described in detail below.
In the embodiment of the invention, the coordinates of the gate plate and the gate outer frame can be obtained for each image or each frame, and the proportion of the gate plate's length relative to the length of the gate outer frame is calculated, thereby indirectly obtaining the opening of the gate plate. Specifically, the opening O of the gate valve image is:

$$O = \frac{x - x_{empty}}{x_{full} - x_{empty}}$$

where x_{full} is the position of the fully open gate plate, x_{empty} is the position of the fully closed gate plate, and x is the current position of the gate plate.
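The opening calculation can be sketched as follows (illustrative; the coordinate values are hypothetical and x is assumed to increase toward the fully open position):

```python
def opening(x, x_full, x_empty):
    """Opening degree of the gate plate as a fraction of its full travel,
    reconstructed from the x coordinates described in the text."""
    return (x - x_empty) / (x_full - x_empty)

# Gate plate halfway between fully closed (x_empty=100) and fully open (x_full=300):
o = opening(x=200, x_full=300, x_empty=100)
```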
The method has high robustness and strong generalization ability, achieves effective detection of the gate valve, and provides a scientific and effective basis for subsequent control of the gate valve opening; while realizing automatic detection, it saves labor and improves the degree of intelligence.
It should be noted that the above embodiments should be interpreted as illustrative and not as limiting the scope of the present invention, which is defined by the claims. It will be apparent to those skilled in the art that certain insubstantial modifications and adaptations of the present invention can be made without departing from the spirit and scope of the invention.

Claims (9)

1. A method for detecting the opening degree of a gate valve based on YOLOv4-tiny is characterized by comprising the following steps:
s1, acquiring an initial sample data set of the gate valve, and performing required preprocessing on the acquired initial sample data set to obtain a standard sample data set of the gate valve;
s2, carrying out object class marking and position information marking on the standard sample dataset of the gate valve to obtain a standard sample marking dataset of the gate valve, and dividing the standard sample marking dataset of the gate valve to obtain a training dataset, a verification dataset and a test dataset;
S3, reading a configuration file of YOLOv4-tiny, and configuring the network layer parameter information in the YOLOv4-tiny configuration file to obtain a YOLOv4-tiny basic detection model; in the YOLOv4-tiny basic detection model, CSPDarknet53 is used as the backbone to perform main feature extraction, the neck performs feature fusion, and the YOLO head predicts target position and category information; the constructed spatial pyramid pooling module is respectively connected with the module with index m and the module with index m+1;
step S4, training the YOLOv4-tiny basic detection model with the training data set, stopping training when the set number of training iterations is reached or overfitting appears on the verification data set, and updating the weight information of the YOLOv4-tiny basic detection model in each training iteration, so that a corresponding YOLOv4-tiny basic detection training model is obtained after each training iteration;
step S5, testing all the YOLOv4-tiny basic detection training models obtained in the step S4 by using a test data set; after testing, any YOLOv4-tiny basic detection training model obtains corresponding multi-scale prediction information, the multi-scale prediction information of each YOLOv4-tiny basic detection training model is utilized to determine the detection precision of the corresponding YOLOv4-tiny basic detection training model, and the YOLOv4-tiny basic detection training model with the highest detection precision is used as a YOLOv4-tiny target detection model;
and S6, processing the gate valve image with the opening to be calculated by using a YOLOv4-tiny target detection model to obtain the opening of the gate valve image.
2. The method for detecting the opening degree of the gate valve based on YOLOv4-tiny according to claim 1, wherein the method comprises the following steps: in step S1, the initial sample data set of the gate valve includes image sequences of the gate valve under different scenes, different time periods, different angles, and/or different illuminations; the pre-processing of the initial sample data set includes data augmentation processing and/or data enhancement processing.
3. The method for detecting the opening degree of the gate valve based on YOLOv4-tiny according to claim 1, wherein: in step S2, the object category labeling of the standard sample data set comprises labeling the gate outer frame and labeling the gate plate; the labeling is performed with the LabelImage labeling tool; after labeling, a data set in the VOC2007 standard data format is generated; the ratio of the numbers of images in the training data set, the verification data set, and the test data set obtained by division is 8:1:1.
4. the method for detecting the opening degree of the gate valve based on YOLOv4-tiny according to claim 1, wherein the step S4 comprises the steps of:
S4.1, applying the K-means clustering method to the training data set to cluster the sizes of the gate plate and the gate outer frame in the gate valve, obtaining M prior frames;
s4.2, configuring hyper-parameters of a YOLOv4-tiny basic detection model, and randomly changing the size of a corresponding image input into the YOLOv4-tiny basic detection model in a training data set in the training process of the YOLOv4-tiny basic detection model so as to realize multi-scale training of the YOLOv4-tiny basic detection model;
s4.3, updating the weight information of the YOLOv4-tiny basic detection model, so that the numerical value of the loss function is in a descending trend overall when the YOLOv4-tiny basic detection model after updating the weight information is trained by utilizing a training data set;
and S4.4, stopping training when the set number of training iterations is reached or overfitting appears on the verification data set, a corresponding YOLOv4-tiny basic detection training model being obtained after each training iteration from the respectively updated weight information.
5. The method for detecting the opening degree of the gate valve based on YOLOv4-tiny according to claim 4, wherein the loss function loss is:

$$\begin{aligned}
\mathrm{loss} = {} & \sum_{i=0}^{S^2}\sum_{j=0}^{B} I_{ij}^{obj}\left[1 - IoU\!\left(b, b^{gt}\right) + \frac{A^c - U}{A^c}\right] \\
& - \sum_{i=0}^{S^2}\sum_{j=0}^{B} I_{ij}^{obj}\left[C_i \log \hat{C}_i + \left(1 - C_i\right)\log\!\left(1 - \hat{C}_i\right)\right] \\
& - \sum_{i=0}^{S^2}\sum_{j=0}^{B} I_{ij}^{noobj}\left[C_i \log \hat{C}_i + \left(1 - C_i\right)\log\!\left(1 - \hat{C}_i\right)\right] \\
& - \sum_{i=0}^{S^2} I_{i}^{obj}\sum_{c \in classes}\left[P_i(c) \log \hat{P}_i(c) + \left(1 - P_i(c)\right)\log\!\left(1 - \hat{P}_i(c)\right)\right]
\end{aligned}$$

wherein S² denotes dividing the image logically into S×S grids, each grid having B kinds of bounding boxes to predict objects; I_{ij}^{obj} indicates that the bounding box contains a target object and I_{ij}^{noobj} indicates that the bounding box does not contain a target object; b denotes the bounding box of the predicted object and b^{gt} the bounding box of the actual object; A^c denotes the smallest rectangular region enclosing both the predicted rectangular frame at a certain position and the actual rectangular frame at that position, and U denotes the union region of those two rectangular frames; \hat{C}_i denotes the confidence of the predicted object and C_i the confidence of the actual object; classes denotes all numerical category information of the actual object; \hat{P}_i denotes the category of the predicted object and P_i denotes the category of the actual object.
6. The method for detecting the opening degree of the gate valve based on YOLOv4-tiny according to one of claims 1 to 5, wherein in step S4, after the training data set has been used to train the YOLOv4-tiny basic detection model, the verification data set is used to evaluate the mean average precision of the current YOLOv4-tiny basic detection training model, and if the mean average precision of the current YOLOv4-tiny basic detection training model on the verification data set is lower than its mean average precision on the training data set, overfitting is determined to have occurred.
7. The method for detecting the opening degree of the gate valve based on YOLOv4-tiny according to claim 1, wherein the step S5 specifically comprises the following steps:
S5.1, testing all the YOLOv4-tiny basic detection training models obtained in step S4 with the test data set, each YOLOv4-tiny basic detection training model obtaining corresponding multi-scale prediction information, the multi-scale prediction information comprising the position of the gate outer frame, the confidence of the gate outer frame, the position of the gate plate, and the confidence of the gate plate;
S5.2, processing and counting the multi-scale prediction information to obtain the number TP of correct positive samples, the number FP of incorrect positive samples, and the number FN of incorrect negative samples;
S5.3, obtaining the mean average precision of each YOLOv4-tiny basic detection training model according to the number TP of correct positive samples, the number FP of incorrect positive samples, and the number FN of incorrect negative samples;
and S5.4, taking the YOLOv4-tiny basic detection training model with the highest mean average precision as the YOLOv4-tiny target detection model.
8. The method for detecting the opening degree of the gate valve based on YOLOv4-tiny according to claim 7, wherein in step S5.2, processing the multi-scale prediction information comprises performing non-maximum suppression on both the gate outer frame and the gate plate predictions, after which the number TP of correct positive samples, the number FP of incorrect positive samples, and the number FN of incorrect negative samples are counted;

$$Precision = \frac{TP}{TP + FP}$$

$$Recall = \frac{TP}{TP + FN}$$

$$AP = \int_0^1 P(R)\,dR$$

$$mAP = \frac{1}{N}\sum_{n=1}^{N} AP_n$$

wherein Precision denotes precision and Recall denotes recall; P(R) denotes, for a certain class of object, precision as a function of recall; and N denotes the total number of object classes.
9. The method for detecting the opening degree of the gate valve based on YOLOv4-tiny according to claim 1, wherein in step S6, the opening degree O of the gate valve image is:

$$O = \frac{x - x_{empty}}{x_{full} - x_{empty}}$$

wherein x_{full} is the position of the fully open gate plate, x_{empty} is the position of the fully closed gate plate, and x is the current position of the gate plate.
CN202011502843.3A 2020-12-17 2020-12-17 YOLOv 4-tiny-based gate valve opening detection method Active CN112561885B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011502843.3A CN112561885B (en) 2020-12-17 2020-12-17 YOLOv 4-tiny-based gate valve opening detection method

Publications (2)

Publication Number Publication Date
CN112561885A true CN112561885A (en) 2021-03-26
CN112561885B CN112561885B (en) 2023-04-18

Family

ID=75063879

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011502843.3A Active CN112561885B (en) 2020-12-17 2020-12-17 YOLOv 4-tiny-based gate valve opening detection method

Country Status (1)

Country Link
CN (1) CN112561885B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112926685A (en) * 2021-03-30 2021-06-08 济南大学 Industrial steel oxidation zone target detection method, system and equipment
CN113327240A (en) * 2021-06-11 2021-08-31 国网上海市电力公司 Visual guidance-based wire lapping method and system and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014183275A1 (en) * 2013-05-15 2014-11-20 中国科学院自动化研究所 Detection method and system for locally deformable object based on on-line learning
CN110929577A (en) * 2019-10-23 2020-03-27 桂林电子科技大学 Improved target identification method based on YOLOv3 lightweight framework
WO2020155518A1 (en) * 2019-02-03 2020-08-06 平安科技(深圳)有限公司 Object detection method and device, computer device and storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
姜瑾: "《基于机器视觉的选煤流程闸板群开度及控制系统研究》", 《中国优秀硕士学位论文全文数据库》 *


Also Published As

Publication number Publication date
CN112561885B (en) 2023-04-18

Similar Documents

Publication Publication Date Title
CN112199993B (en) Method for identifying transformer substation insulator infrared image detection model in any direction based on artificial intelligence
CN106875381B (en) Mobile phone shell defect detection method based on deep learning
CN113469177B (en) Deep learning-based drainage pipeline defect detection method and system
CN111080693A (en) Robot autonomous classification grabbing method based on YOLOv3
CN111444921A (en) Scratch defect detection method and device, computing equipment and storage medium
CN112967243A (en) Deep learning chip packaging crack defect detection method based on YOLO
CN111950453A (en) Optional-shape text recognition method based on selective attention mechanism
CN110929795B (en) Method for quickly identifying and positioning welding spot of high-speed wire welding machine
CN111539330B (en) Transformer substation digital display instrument identification method based on double-SVM multi-classifier
CN112561885B (en) YOLOv 4-tiny-based gate valve opening detection method
CN111652225A (en) Non-invasive camera reading method and system based on deep learning
CN115439458A (en) Industrial image defect target detection algorithm based on depth map attention
CN115880260A (en) Method, device and equipment for detecting base station construction and computer readable storage medium
CN115995056A (en) Automatic bridge disease identification method based on deep learning
CN113487610B (en) Herpes image recognition method and device, computer equipment and storage medium
CN113705564B (en) Pointer type instrument identification reading method
CN111178405A (en) Similar object identification method fusing multiple neural networks
CN114331961A (en) Method for defect detection of an object
CN117437647A (en) Oracle character detection method based on deep learning and computer vision
CN111738264A (en) Intelligent acquisition method for data of display panel of machine room equipment
CN111539931A (en) Appearance abnormity detection method based on convolutional neural network and boundary limit optimization
CN116245882A (en) Circuit board electronic element detection method and device and computer equipment
CN115830302A (en) Multi-scale feature extraction and fusion power distribution network equipment positioning identification method
CN112991280B (en) Visual detection method, visual detection system and electronic equipment
CN112699898B (en) Image direction identification method based on multi-layer feature fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant