CN115564031A - Detection network for glass defect detection - Google Patents

Detection network for glass defect detection

Info

Publication number
CN115564031A
CN115564031A (application CN202211300522.4A)
Authority
CN
China
Prior art keywords
information
network
video
defect
detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211300522.4A
Other languages
Chinese (zh)
Inventor
李浩程
胡秋桂
王远扬
王宗辉
方正
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Dinnar Automation Technology Co Ltd
Original Assignee
Suzhou Dinnar Automation Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Dinnar Automation Technology Co Ltd filed Critical Suzhou Dinnar Automation Technology Co Ltd
Priority to CN202211300522.4A
Publication of CN115564031A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G06N3/084 - Backpropagation, e.g. using gradient descent
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/50 - Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G06T7/0004 - Industrial image inspection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10016 - Video; Image sequence
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20084 - Artificial neural networks [ANN]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20212 - Image combination
    • G06T2207/20221 - Image fusion; Image merging
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30108 - Industrial image inspection

Abstract

The invention discloses a detection network for glass defect detection, which collects video data and performs semi-supervised labeling to obtain video information and label information; inputs the video information and the label information into a neural network for training to obtain a trained network; inputs a video to be tested into the trained network to obtain bounding box information and defect type information of the defects; and outputs and displays the bounding box information and the defect type information in a predetermined mode. A two-stream network is adopted in which the spatial stream emphasizes image features and the temporal stream emphasizes change features; by imitating the change information through which the human eye catches slight defects under a changing light source and angle, the training of the network is assisted and the network accuracy is improved.

Description

Detection network for glass defect detection
Technical Field
The invention relates to the field of machine vision, in particular to a detection network for detecting glass defects.
Background
In the field of defect detection, deep learning plays an increasingly important role. In practical industrial scenarios, the detection of transparent objects has always been a great challenge: a fixed light source and lens can hardly produce a clear image of the slight, complex, and variable defects inside a transparent object. Only by imitating the human eye, which sees a slight defect clearly under a specific light source and at a specific angle, can such defects be imaged for subsequent deep-learning defect detection.
In industrial defect detection, the existing technology generally takes high-resolution pictures shot by an industrial camera as the supervision signal to train a deep learning model and detect defects in the picture. However, there is a huge gap between the limited angles of the light source and industrial lens on the one hand and the complex and variable defects on the other, which makes defect detection projects very difficult. The most intuitive test is whether the human eye can judge the defect accurately from the picture; whatever is ambiguous to the eye will also be ambiguous to the algorithm. With a fixed lighting scheme and a fixed lens angle, the captured features cannot match a constantly changing defect scene, and the existing fixed-point shooting technology often cannot image the defect at all. The existing technology therefore struggles to meet the high index requirements of defect detection in such scenes.
Disclosure of Invention
The invention aims to provide a detection network for glass defect detection, whose use comprises the following steps:
S1, collecting video data and performing semi-supervised labeling to obtain video information and label information;
S2, inputting the video information and the label information into a neural network for training to obtain a trained network;
S3, inputting the video to be tested into the trained network to obtain bounding box information and defect type information of the defects;
S4, outputting and displaying the bounding box information and the defect type information of the defects in a predetermined mode.
Preferably, the video collection task specifically includes:
placing the glass product in a dark box to shield it from contamination by other light sources, providing 360-degree illumination around the glass with a rotating light source, and acquiring 30 frames of video information with a video lens for the subsequent defect detection task.
Further, step S1 also includes making label images for the original defect data as follows: an image labeling tool is used to make the corresponding label images, and the labeling software LabelImg is used to mark the bounding box and category information of defective video frames to generate the annotation information.
Preferably, the information labeling method adopts semi-supervised learning: a small part of the glass defect pictures are labeled by hand and a neural network is trained on them to detect the remaining glass defects; the detected defect bounding box and category information is converted into annotation information, so the remaining labels are generated automatically and, after manual review, are added to the training set used to train the model.
Preferably, the model is a two-stream neural network model, the two-stream neural network comprising a spatial stream network and a temporal stream network.
Preferably, the training method of the two-stream neural network model is as follows:
the spatial stream network takes an RGB image as input, and the temporal stream network extracts optical flow with the TV-L1 algorithm to represent the motion information of the video;
DarkNet is adopted as the backbone network to extract features, and an FPN + PAN neck is used to fuse high-level and low-level information;
since the captured defect characteristics are mostly low-level semantic features, the two streams are fused at the low-level feature layers;
the fused features are further convolved to extract features and are fused a second time;
feature enhancement is performed on the extracted features to obtain high-level semantic features that fuse the RGB information, the video motion information, and the high-level and low-level feature information;
and a detection head is added on this feature layer to perform the target detection task, finally yielding the bounding box information and category information of the defects.
Due to the application of the above technical scheme, the invention has the following advantages over the prior art:
(1) Compared with traditional defect detection methods, a two-stream network is adopted in which the spatial stream emphasizes image features and the temporal stream emphasizes change features; the change information, imitating how the human eye catches slight defects under a changing light source and angle, assists the training of the network and improves its accuracy.
(2) Most of the defect characteristics captured by this application are texture information, which belongs to the low-level features; the two branches are therefore fused on the low-level feature maps, realizing the interaction of the low-level information.
(3) This application uses semi-supervision to label the glass defects, so most labels are generated automatically, which greatly reduces the difficulty of manual labeling.
Drawings
FIG. 1 is a block diagram of the process by which the detection network of the invention performs glass defect detection.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in the present specification, the technical solutions in the embodiments of the present specification will be clearly and completely described below with reference to the drawings in the embodiments of the present specification, and it is obvious that the described embodiments are only a part of the embodiments of the present specification, and not all of the embodiments. All other embodiments obtained by a person skilled in the art based on the embodiments in the present specification without any inventive step should fall within the scope of protection of the present specification.
Example 1
As shown in FIG. 1, the present invention discloses a detection network for glass defect detection, comprising:
s1, collecting video data and performing semi-supervised labeling to obtain video information and label information;
further, a label image of the original defect data is produced, and the production method comprises: and (3) using an image labeling tool to manufacture a corresponding label image, and marking frame and category information of the video frame with defects by using labeling software LabelImg to generate labeling information.
Furthermore, LabelImg is a tool for labeling images; the labeled data serve as GT (ground truth) supervision signals that supervise the back-propagation training of the network.
The annotation information is stored in txt, json, or xml format.
Further, the video collection task specifically includes:
placing the glass product in a dark box to shield it from contamination by other light sources, providing 360-degree illumination around the glass with a rotating light source, and acquiring 30 frames of video information with a video lens for the subsequent defect detection task.
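For illustration, a minimal acquisition sketch in Python with OpenCV follows; the camera index, frame count, and file naming are assumptions, and the dark box and rotating light source are assumed to be driven by separate hardware control:

```python
import cv2

# Minimal acquisition sketch (assumptions: the video lens is exposed as
# camera device 0; the rotating light source is triggered elsewhere).
cap = cv2.VideoCapture(0)
frames = []
while len(frames) < 30:          # 30 frames per product, as described above
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(frame)
cap.release()

# Persist the clip for the subsequent defect detection task.
for i, frame in enumerate(frames):
    cv2.imwrite(f"product_frame_{i:02d}.png", frame)
```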
Further, the long-video detection scheme for glass defects is as follows. When a glass defect is filmed, consecutive video frames are highly redundant: the change information of the defect evolves slowly and adjacent frames are highly similar. Sparse sampling can therefore be used; even if the extracted frames look different on the surface, their highest-level semantic information describes the same thing, so the whole long video can be modeled. Previously only one short clip after another could be fed into the network, and once the video to be detected exceeded 20 s the available computing power could not bear it. The invention breaks through this time limit: the original video is cut into 3 segments, so that 60 s of video can be detected, meeting the industrial time requirement for defect detection.
Furthermore, because the videos in the data set are short, a video frame is taken at random as input during training and testing. The spatial stream is responsible for extracting the appearance features of the static frame, while the added optical flow makes up for the difficulty a neural network has in capturing temporal motion information and directly provides the model with the motion information between frames, making the recognition task easier.
Sparse sampling: only 1 frame out of every 10 is taken to represent the semantic spatial features of the current slice. A single frame lacks an understanding of the contextual motion changes, so the optical flow information of these 10 frames is used for the optical-flow training of the method above.
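A sketch of this sampling scheme is given below, assuming the frames are BGR numpy arrays and that opencv-contrib-python is installed for the TV-L1 implementation; the 10-frame slice length follows the text, everything else is an assumption:

```python
import cv2
import numpy as np

def sample_and_flow(frames, stride=10):
    """Sparse sampling sketch: keep 1 RGB frame per 10-frame slice for the
    spatial stream, and compute TV-L1 optical flow over the slice for the
    temporal stream. Requires opencv-contrib-python for cv2.optflow."""
    tvl1 = cv2.optflow.DualTVL1OpticalFlow_create()
    rgb_samples, flow_stacks = [], []
    for start in range(0, len(frames) - stride + 1, stride):
        clip = frames[start:start + stride]
        rgb_samples.append(clip[0])  # representative frame of this slice
        grays = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in clip]
        flows = [tvl1.calc(grays[i], grays[i + 1], None)
                 for i in range(len(grays) - 1)]
        # Stack the (stride - 1) two-channel flow fields along channels.
        flow_stacks.append(np.concatenate(flows, axis=-1))
    return rgb_samples, flow_stacks
```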
Given a video V, it is divided into K segments $\{S_1, S_2, \ldots, S_K\}$ at equal intervals, and the segment network models the segment sequence as

$$\mathrm{TSN}(T_1, T_2, \ldots, T_K) = H\big(G(F(T_1; W), F(T_2; W), \ldots, F(T_K; W))\big)$$
Here $(T_1, T_2, \ldots, T_K)$ is the segment sequence. Each snippet $T_k$ is randomly sampled from its corresponding segment $S_k$. $F(T_k; W)$ is the function with parameters $W$ representing the ConvNet, which operates on the short snippet $T_k$ and produces class scores for all classes. The segmental consensus function

$$G = g\big(F(T_1; W), F(T_2; W), \ldots, F(T_K; W)\big)$$

combines the outputs of the multiple short snippets to obtain a consensus of the class hypotheses among them. Based on this consensus, the prediction function $H$ predicts the probability of each class for the entire video; the widely used Softmax function is chosen for $H$ here. Combined with the standard categorical cross-entropy loss, the final loss function regarding the segmental consensus $G$ is:
$$L(y, G) = -\sum_{i=1}^{C} y_i \Big( G_i - \log \sum_{j=1}^{C} \exp G_j \Big)$$
where $C$ is the number of classes and $y_i$ the ground-truth label for class $i$. This segment network is differentiable, or at least has subgradients, depending on the choice of the consensus function $g$. This allows the model parameters $W$ to be optimized jointly over multiple snippets with the standard back-propagation algorithm, where the gradient of the loss value $L$ with respect to $W$ during back-propagation is derived as:
$$\frac{\partial L(y, G)}{\partial W} = \frac{\partial L}{\partial G} \sum_{k=1}^{K} \frac{\partial G}{\partial F(T_k)} \frac{\partial F(T_k)}{\partial W}$$
When the model parameters are learned with a gradient-based optimization method such as stochastic gradient descent (SGD), this equation guarantees that the parameter updates use the segmental consensus $G$ obtained from all snippet-level predictions. Optimized in this way, the temporal segment network learns the model parameters from the whole video rather than from a short clip. Meanwhile, by fixing the value of K for all videos, a sparse temporal sampling strategy is constructed in which the sampled snippets contain only a small fraction of the frames, drastically reducing the computational cost of evaluating the ConvNet on frames compared with previous works that use densely sampled frames.
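The following PyTorch sketch illustrates the segment-consensus training just described; the toy backbone, average consensus, and hyperparameters are assumptions (the aggregation could equally be max or weighted average, as noted further below):

```python
import torch
import torch.nn as nn

class SegmentConsensusNet(nn.Module):
    """Sketch of the segment network described above: F(T_k; W) scores each
    sampled snippet with shared weights W, the aggregation g averages them,
    and H (Softmax) is folded into the cross-entropy loss below."""
    def __init__(self, backbone: nn.Module, num_classes: int, feat_dim: int):
        super().__init__()
        self.backbone = backbone                 # shared across all snippets
        self.fc = nn.Linear(feat_dim, num_classes)

    def forward(self, snippets):                 # snippets: (B, K, C, H, W)
        b, k = snippets.shape[:2]
        x = snippets.flatten(0, 1)               # (B*K, C, H, W)
        scores = self.fc(self.backbone(x))       # F(T_k; W) for every snippet
        return scores.view(b, k, -1).mean(dim=1) # consensus G = average over K

# One training step: cross-entropy on the video-level consensus, so the
# gradient of W combines all K snippet-level predictions (equation above).
model = SegmentConsensusNet(
    nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128)),  # toy backbone
    num_classes=5, feat_dim=128)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
video = torch.randn(2, 3, 3, 32, 32)             # B=2 videos, K=3 snippets
labels = torch.tensor([0, 2])
loss = nn.functional.cross_entropy(model(video), labels)
loss.backward()
opt.step()
```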
This method goes beyond the time limit and extracts features of the whole video. It is likewise a two-stream model: one stream carries information of the image dimension, the other information of the time dimension. Previous networks lacked global modeling capability because all operations were performed on small clips; here the video is divided evenly into several segments, each segment outputs a result, and the scores of all segments are fused to obtain the final result. During learning, instead of the snippet-level prediction losses used in a two-stream convolutional neural network, the video-level prediction loss is optimized by iteratively updating the model parameters. With this training method, the mAP of the whole network on the industrial defect detection data set increases by 8.2 points.
In experiments, fusing the temporal stream and the spatial stream at the low-level feature layers raised the defect detection rate from 81.2% to 98.6%. The reason is that the two networks begin exchanging information while extracting defect edges, textures, and similar information, so the whole network is trained more accurately and efficiently.
Further, this method reduces the training time by 40%, and the network learns the characteristics of the defects more easily, so convergence is achieved.
The loss function is:
$$L(y, G) = -\sum_{i=1}^{C} y_i \Big( G_i - \log \sum_{j=1}^{C} \exp G_j \Big)$$
expression for gradient update:
$$\frac{\partial L(y, G)}{\partial W} = \frac{\partial L}{\partial G} \sum_{k=1}^{K} \frac{\partial G}{\partial F(T_k)} \frac{\partial F(T_k)}{\partial W}$$
the above equation can be optimized by the SGD algorithm.
The choices for the aggregation function include: maximum, average, and weighted average.
In this way, the model parameters can be updated with information from the entire video rather than from only a very short clip.
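As one-line sketches over a (videos, segments, classes) score tensor, the three aggregation choices might look as follows; the learnable segment weights in the weighted variant are an assumption:

```python
import torch

scores = torch.randn(2, 3, 5)                   # (videos, K segments, classes)

g_max = scores.max(dim=1).values                # maximum consensus
g_avg = scores.mean(dim=1)                      # average consensus
w = torch.softmax(torch.randn(3), dim=0)        # assumed learnable segment weights
g_wavg = (scores * w.view(1, 3, 1)).sum(dim=1)  # weighted-average consensus
```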
Furthermore, semi-supervision is used to label the glass defects; most labels are generated automatically, which greatly reduces the difficulty of manual labeling.
Preferably, the information labeling method adopts semi-supervised learning: a small part of the glass defect pictures are labeled by hand and a neural network is trained on them to detect the remaining glass defects; the detected defect bounding box and category information is converted into annotation information, so the remaining labels are generated automatically and, after manual review, are added to the training set used to train the model.
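A minimal sketch of this pseudo-labeling loop follows; the detector interface, the 0.5 confidence cut-off, and the YOLO-style txt output are assumptions, and the generated files remain subject to the manual review step described above:

```python
from pathlib import Path

CONF_THRESHOLD = 0.5  # assumed cut-off for accepting a detection as a pseudo-label

def write_pseudo_labels(detector, image_paths, out_dir):
    """Run a detector trained on the small hand-labeled set over the
    unlabeled images and emit YOLO-style txt labels for manual review.
    `detector(path)` is an assumed interface returning
    (class_id, confidence, x_center, y_center, width, height) tuples,
    with box coordinates normalized to [0, 1]."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for path in image_paths:
        lines = [f"{cls} {x:.6f} {y:.6f} {w:.6f} {h:.6f}"
                 for cls, conf, x, y, w, h in detector(path)
                 if conf >= CONF_THRESHOLD]
        if lines:  # only images with confident detections get a label file
            (out / (Path(path).stem + ".txt")).write_text("\n".join(lines))
```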
S2, inputting the video information and the label information into a neural network for training to obtain a trained network;
S3, inputting the video to be tested into the trained network to obtain bounding box information and defect type information of the defects;
S4, outputting and displaying the bounding box information and the defect type information of the defects in a predetermined mode.
Further, the model is a two-stream neural network model comprising a spatial stream network and a temporal stream network, and the training method of the two-stream neural network model is as follows:
the spatial stream network takes an RGB image as input, and the temporal stream network extracts optical flow with the TV-L1 algorithm to represent the motion information of the video;
DarkNet is adopted as the backbone network to extract features, and an FPN + PAN neck is used to fuse high-level and low-level information;
since the captured defect characteristics are mostly low-level semantic features, the two streams are fused at the low-level feature layers;
the fused features are further convolved to extract features and are fused a second time; because most of the defect characteristics captured by this application are texture information belonging to the low-level features, the two branches are fused on the low-level feature maps, realizing the interaction of the low-level information;
feature enhancement is performed on the extracted features to obtain high-level semantic features that fuse the RGB information, the video motion information, and the high-level and low-level feature information;
and a detection head is added on this feature layer to perform the target detection task, finally yielding the bounding box information and category information of the defects.
Further, the low-level semantic features of an image include contour, edge, color, texture, and shape features.
Edges and contours can reflect the image content; if edges and key points can be extracted reliably, many vision problems are largely solved. Low-level image features carry little semantic information, but they localize the target accurately.
High-level semantic features of an image refer to what we actually see: from a face image, for example, the low-level features extract the contours of the face, the nose, the glasses, and so on, while the high-level features present the face itself. High-level features are semantically rich, but their target localization is coarse.
Further, the feature enhancement concatenates (concat) the feature layers from which different features were extracted, and then a 1×1 convolution reduces the dimension back to the original one, so that the new features blend the features of the different feature layers.
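In PyTorch, this concat-plus-1×1-convolution enhancement might read as follows; the channel count and spatial size are assumptions:

```python
import torch
import torch.nn as nn

# Feature enhancement sketch: concatenate two feature maps along channels,
# then a 1x1 convolution reduces back to the original dimension, so the new
# features blend both feature layers (assumed channel count C=256).
C = 256
rgb_feat = torch.randn(1, C, 40, 40)      # spatial-stream feature map
flow_feat = torch.randn(1, C, 40, 40)     # temporal-stream feature map

fuse = nn.Conv2d(2 * C, C, kernel_size=1)
enhanced = fuse(torch.cat([rgb_feat, flow_feat], dim=1))  # back to (1, C, 40, 40)
```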
Feature enhancement also acts like feature interaction: for example, concatenating the RGB features and the temporal-stream features gives them interoperability, so they can guide each other and train the network better.
Furthermore, the video data are processed with a two-stream network because a single convolutional neural network cannot handle the temporal information well.
The two-stream network adopted by this application uses one network to learn the image features of the current frame and another network to learn the change features of the defects; the corresponding optical-flow images are extracted from the original video and cover much of the object-change information and the temporal information of the video. The spatial stream network learns the mapping from an RGB image to the final classification, and the temporal stream network learns the mapping from optical-flow images to the final classification; in the temporal stream network, TV-L1 is used to extract the optical flow representing the motion information of the video. One network handles the feature information of the image and the other handles the change information of the video, without interfering with each other. Adding the temporal stream greatly improves the performance of the model, enabling it to capture the changes of an object like the human eye.
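The skeleton below sketches this two-stream layout with the low-level fusion described earlier; the layer sizes, the 9-frame flow stack (18 channels), and the classification head are assumptions, not the patent's exact architecture (which uses a DarkNet backbone with an FPN + PAN neck and a detection head):

```python
import torch
import torch.nn as nn

class TwoStreamLowFusion(nn.Module):
    """Sketch of the two-stream layout: one branch takes the RGB frame, the
    other a stack of TV-L1 flow fields, and the branches are fused at a
    low-level (edge/texture) feature map. Sizes are illustrative only."""
    def __init__(self, flow_channels=18, num_classes=5):
        super().__init__()
        self.spatial_low = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())
        self.temporal_low = nn.Sequential(
            nn.Conv2d(flow_channels, 32, 3, padding=1), nn.ReLU())
        self.fuse = nn.Conv2d(64, 32, kernel_size=1)      # low-level fusion point
        self.high = nn.Sequential(
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(64, num_classes)            # assumed toy head

    def forward(self, rgb, flow):
        low = torch.cat([self.spatial_low(rgb), self.temporal_low(flow)], dim=1)
        return self.head(self.high(self.fuse(low)))

model = TwoStreamLowFusion()
out = model(torch.randn(1, 3, 64, 64), torch.randn(1, 18, 64, 64))  # (1, 5)
```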
Furthermore, a two-stream network is adopted in which the spatial stream emphasizes image features and the temporal stream emphasizes change features; by imitating the change information through which the human eye catches slight defects under a changing light source and angle, the training of the network is assisted and the network accuracy is improved.
Further, the output display in the predetermined mode may be transmitted to a computer as a wireless signal and shown on the computer screen.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment. In the description of the specification, reference to the description of "one embodiment," "some embodiments," "an example," "a specific example," or "some examples" or the like means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the embodiments of the specification. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art to which the present application pertains. Any modification, equivalent replacement, improvement or the like made within the spirit and principle of the present application shall be included in the scope of the claims of the present application.

Claims (4)

1. A detection network for glass defect detection, comprising:
S1, collecting video data and performing semi-supervised labeling to obtain video information and label information;
S2, inputting the video information and the label information into a neural network for training to obtain a trained network;
S3, inputting the video to be tested into the trained network to obtain bounding box information and defect type information of the defects;
S4, outputting and displaying the bounding box information and the defect type information of the defects in a predetermined mode;
the video collection task specifically comprises:
placing the glass product in a dark box, shielding the pollution of other light sources, providing a 360-degree lighting scheme around the glass by using a rotating light source, and acquiring video information of 30 frames by using a video lens for a subsequent defect detection task;
step S1 further includes making a label image of the original defect data, and the making method includes: using an image labeling tool to manufacture a corresponding label image, and using labeling software LabelImg to mark a defective frame and category information for a video frame to generate labeling information;
the method for marking information adopts a semi-supervised learning mode, a part of glass defect pictures are marked, a neural network is trained to detect the rest glass defects, detected defect frame information and category information are converted into marking information, the rest marks are automatically generated, and after manual examination and verification, a training set is added to train a model for use;
the model is a two-stream neural network model, the two-stream neural network comprising a spatial stream network and a temporal stream network;
the training method of the two-stream neural network model is as follows:
the spatial stream network takes an RGB image as input, and the temporal stream network extracts optical flow with the TV-L1 algorithm to represent the change information of the defects;
DarkNet is used as the backbone network to extract features, and an FPN + PAN neck is used to fuse high-level and low-level information so that the high-level and low-level feature information can interact;
since the captured defects are mostly low-level semantic features, the features are fused at the low-level feature layers, which helps the network make effective use of the low-level features;
the fused features are further convolved to extract features and are fused a second time;
feature enhancement is performed on the extracted features to obtain high-level semantic features that fuse the RGB information, the video motion information, and the high-level and low-level feature information;
and a detection head is added on this feature layer to perform the target detection task, finally yielding the bounding box information and category information of the defects.
2. The detection network for glass defect detection of claim 1, further comprising: cutting the original video into multiple segments, wherein through the sparse-sampling segment operation each short segment predicts the bounding box and category information for that segment, and the defect prediction for the whole video is obtained from the prediction results of all the short segments.
3. The detection network for glass defect detection of claim 2, wherein given a video V, it is divided into K segments $\{S_1, S_2, \ldots, S_K\}$ at equal intervals, and the segment network models the segment sequence as

$$\mathrm{TSN}(T_1, T_2, \ldots, T_K) = H\big(G(F(T_1; W), F(T_2; W), \ldots, F(T_K; W))\big)$$

where $(T_1, T_2, \ldots, T_K)$ is the segment sequence; each snippet $T_k$ is randomly sampled from its corresponding segment $S_k$; $F(T_k; W)$ is the function with parameters $W$ representing the ConvNet, which operates on the short snippet $T_k$ and produces class scores for all classes; the segmental consensus function

$$G = g\big(F(T_1; W), F(T_2; W), \ldots, F(T_K; W)\big)$$

combines the outputs of the multiple short snippets to obtain a consensus of the class hypotheses among them; the prediction function $H$ predicts the probability of each class for the whole video, $H$ being a Softmax function; combined with the standard categorical cross-entropy loss, the final loss function regarding the segmental consensus $G$ is:

$$L(y, G) = -\sum_{i=1}^{C} y_i \Big( G_i - \log \sum_{j=1}^{C} \exp G_j \Big)$$

wherein $W$ denotes the model parameters jointly optimized by the back-propagation algorithm.
4. The detection network for glass defect detection of claim 3, wherein the gradient of the loss value $L$ with respect to the model parameters $W$ is derived as:

$$\frac{\partial L(y, G)}{\partial W} = \frac{\partial L}{\partial G} \sum_{k=1}^{K} \frac{\partial G}{\partial F(T_k)} \frac{\partial F(T_k)}{\partial W}$$
CN202211300522.4A 2022-10-24 2022-10-24 Detection network for glass defect detection Pending CN115564031A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211300522.4A CN115564031A (en) 2022-10-24 2022-10-24 Detection network for glass defect detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211300522.4A CN115564031A (en) 2022-10-24 2022-10-24 Detection network for glass defect detection

Publications (1)

Publication Number Publication Date
CN115564031A true CN115564031A (en) 2023-01-03

Family

ID=84747387

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211300522.4A Pending CN115564031A (en) 2022-10-24 2022-10-24 Detection network for glass defect detection

Country Status (1)

Country Link
CN (1) CN115564031A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116342952A (en) * 2023-03-29 2023-06-27 北京西清能源科技有限公司 Transformer bushing abnormality identification method and system
CN116342952B (en) * 2023-03-29 2024-01-23 北京西清能源科技有限公司 Transformer bushing abnormality identification method and system

Similar Documents

Publication Publication Date Title
CN112017189B (en) Image segmentation method and device, computer equipment and storage medium
CN109508671B (en) Video abnormal event detection system and method based on weak supervision learning
CN110569700B (en) Method and device for optimizing damage identification result
CN111091109B (en) Method, system and equipment for predicting age and gender based on face image
CN105243356B (en) A kind of method and device that establishing pedestrian detection model and pedestrian detection method
CN110705412A (en) Video target detection method based on motion history image
CN110796018A (en) Hand motion recognition method based on depth image and color image
CN108710893A (en) A kind of digital image cameras source model sorting technique of feature based fusion
CN109614896A (en) A method of the video content semantic understanding based on recursive convolution neural network
CN115082254A (en) Lean control digital twin system of transformer substation
CN114170144A (en) Power transmission line pin defect detection method, equipment and medium
CN113516146A (en) Data classification method, computer and readable storage medium
CN113569981A (en) Power inspection bird nest detection method based on single-stage target detection network
CN107948586B (en) Trans-regional moving target detecting method and device based on video-splicing
CN115564031A (en) Detection network for glass defect detection
Viraktamath et al. Comparison of YOLOv3 and SSD algorithms
CN116824641B (en) Gesture classification method, device, equipment and computer storage medium
CN111461772A (en) Video advertisement integration system and method based on generation countermeasure network
CN115546689A (en) Video time sequence abnormal frame detection method based on unsupervised frame correlation
CN112069997B (en) Unmanned aerial vehicle autonomous landing target extraction method and device based on DenseHR-Net
CN111354028B (en) Binocular vision-based power transmission channel hidden danger identification and tracking method
CN115700737A (en) Oil spill detection method based on video monitoring
CN115393252A (en) Defect detection method and device for display panel, electronic equipment and storage medium
Sun et al. Kinect depth recovery via the cooperative profit random forest algorithm
CN112488015B (en) Intelligent building site-oriented target detection method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination