CN115035311A - Carrier roller detection method based on fusion of visible light and thermal infrared - Google Patents

Carrier roller detection method based on fusion of visible light and thermal infrared

Info

Publication number
CN115035311A
CN115035311A (application CN202210616564.2A)
Authority
CN
China
Prior art keywords
image
thermal infrared
visible light
profile
roller
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210616564.2A
Other languages
Chinese (zh)
Inventor
杨伟忠
朱恩东
雷凌
徐晨鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Beixin Intelligent Technology Co ltd
Original Assignee
Nanjing Beixin Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Beixin Intelligent Technology Co ltd filed Critical Nanjing Beixin Intelligent Technology Co ltd
Priority to CN202210616564.2A
Publication of CN115035311A
Legal status: Pending

Classifications

    • G (PHYSICS); G06 (COMPUTING; CALCULATING OR COUNTING); G06V (IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING)
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 10/28: Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • G06V 10/30: Noise filtering
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V 10/77: Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/7715: Feature extraction, e.g. by transforming the feature space, e.g. multi-dimensional scaling [MDS]; Mappings, e.g. subspace methods
    • G06V 10/774: Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to the field of image fusion in computer vision, and in particular to a carrier roller detection method based on the fusion of visible light and thermal infrared. A camera reads a video frame and retrieves a visible light image and a thermal infrared image; a carrier roller contour is extracted from the visible light image to obtain a first contour; a carrier roller contour is extracted from the thermal infrared image to obtain a second contour; the visible light image and the thermal infrared image are preprocessed to obtain new image data; the first contour, the second contour and the new image data are combined to obtain the final carrier roller contour; cropping and alarming are then performed by combining the final carrier roller contour with the temperature information of the thermal infrared image. By fusing the visible light and thermal infrared images, the method effectively solves the problem that carrier roller extraction based on thermal infrared images alone is severely limited, and also the problem that carrier roller anomalies cannot be effectively detected from visible light images alone.

Description

Carrier roller detection method based on fusion of visible light and thermal infrared
Technical Field
The invention relates to the field of image fusion in computer vision, and in particular to a carrier roller detection method based on the fusion of visible light and thermal infrared.
Background
Carrier rollers are basic components of a belt conveyor and are distributed along its entire length in large numbers. To guarantee the safety and stability of the belt conveyor, the rollers must be inspected regularly along the line. The traditional approach relies on manual patrols that distinguish abnormal rollers mainly by sight and hearing; this approach suffers from severe missed detections, high labor intensity and low efficiency.
At present, research on online monitoring and automatic diagnosis of belt conveyor roller faults is mainly based on the analysis of information such as sound signals, temperature sensing and image features, and most commonly used roller detection algorithms rely on thermal infrared image detection.
However, the field environment is far more complex than the experimental environment, and roller extraction from thermal infrared images alone is severely limited.
Disclosure of Invention
The invention aims to provide a carrier roller detection method based on the fusion of visible light and thermal infrared, so as to solve the problems that the field environment is complex and that carrier roller extraction from thermal infrared image detection alone is limited.
In order to achieve this purpose, the invention provides a carrier roller detection method based on the fusion of visible light and thermal infrared, comprising the following steps:
reading a video frame through a camera, and retrieving a visible light image and a thermal infrared image from the video frame;
extracting a carrier roller contour from the visible light image to obtain a first contour;
extracting a carrier roller contour from the thermal infrared image to obtain a second contour;
preprocessing the visible light image and the thermal infrared image to obtain new image data;
combining the first contour, the second contour and the new image data to obtain a final carrier roller contour;
and cropping and alarming by combining the final carrier roller contour with the temperature information of the thermal infrared image.
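The steps above can be summarized as a short sketch. The function bodies below are illustrative stand-ins (simple thresholds and averaging) for the detection, equalization and fusion steps described later in the disclosure; the function names, the 60-degree alarm threshold and the mask-intersection fusion are assumptions for illustration, not the patented implementations.

```python
import numpy as np

def first_contour(visible):
    # stand-in for YOLOv5 detection followed by Canny edge detection
    return visible > visible.mean()

def second_contour(thermal):
    # stand-in for histogram equalization plus linear transformation
    return thermal > thermal.mean()

def preprocess(visible, thermal):
    # stand-in for Gaussian filtering, scaling, AKAZE registration and fusion
    return (visible.astype(float) + thermal.astype(float)) / 2.0

def detect(visible, thermal, temp_threshold=60.0):
    c1 = first_contour(visible)
    c2 = second_contour(thermal)
    fused = preprocess(visible, thermal)
    # combine the three information sources into the final contour mask
    final = c1 & c2 & (fused > fused.mean())
    # alarm if any pixel inside the final contour exceeds the temperature limit
    alarm = bool(np.any(thermal[final] > temp_threshold))
    return final, alarm

visible = np.zeros((8, 8)); visible[2:6, 2:6] = 200.0
thermal = np.zeros((8, 8)); thermal[2:6, 2:6] = 80.0   # hot roller region
mask, alarm = detect(visible, thermal)
```

Here the intersection of the two contour masks plays the role of the "combined" final contour; the real method fuses registered images rather than raw masks.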
The specific steps of extracting a carrier roller contour from the visible light image to obtain the first contour are as follows:
performing target recognition on the carrier roller with the YOLOv5 algorithm on the visible light image to obtain a detection result;
and performing edge detection on the detection result to obtain the first contour.
The specific steps of extracting a carrier roller contour from the thermal infrared image to obtain the second contour are as follows:
equalizing the histogram of the thermal infrared image and removing the background to obtain the belt conveyor edges;
and performing a linear transformation on the belt conveyor edges to obtain the second contour.
The visible light image and the thermal infrared image are preprocessed to obtain new image data as follows:
performing Gaussian filtering on the visible light image and the thermal infrared image to obtain processed images;
scaling the processed images to the same resolution by bilinear interpolation to obtain secondary images;
performing feature point detection on the secondary images with the AKAZE operator to obtain a transformation matrix;
and performing image matching and information fusion with the transformation matrix to obtain new image data.
For the carrier roller detection method based on the fusion of visible light and thermal infrared, the required environmental condition is a clear lens. Because the resolution of the infrared camera is low, the target object used for ROI extraction when obtaining the transformation matrix must have obvious, easily extracted features. Once these conditions are met, the camera reads a video frame and retrieves a visible light image and a thermal infrared image; a carrier roller contour is extracted from the visible light image to obtain a first contour; a carrier roller contour is extracted from the thermal infrared image to obtain a second contour; the visible light image and the thermal infrared image are preprocessed to obtain new image data; the first contour, the second contour and the new image data are combined to obtain the final carrier roller contour; and cropping and alarming are performed by combining the final carrier roller contour with the temperature information of the thermal infrared image. By fusing the visible light and thermal infrared images, the method effectively solves the problem that carrier roller extraction based on thermal infrared images alone is severely limited, as well as the problem that carrier roller anomalies cannot be effectively detected from visible light images alone in a complex field environment.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 illustrates image scaling by bilinear interpolation.
Fig. 2 illustrates the main improvements of YOLOv5.
Fig. 3 illustrates histogram equalization.
Fig. 4 is a schematic diagram of a roller detection method based on visible light and thermal infrared fusion.
Fig. 5 is a flow chart of the roller detection method based on visible light and thermal infrared fusion.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
Referring to figs. 1 to 5, the present invention provides a carrier roller detection method based on the fusion of visible light and thermal infrared, comprising the following steps:
S1, reading a video frame through a camera, and retrieving a visible light image and a thermal infrared image from the video frame;
S2, extracting a carrier roller contour from the visible light image to obtain a first contour;
The specific mode is as follows:
S21, performing target recognition on the carrier roller with the YOLOv5 algorithm on the visible light image to obtain a detection result;
Specifically, carrier roller target recognition is performed with YOLOv5, a single-stage target detection algorithm that adds a number of new improvements on top of YOLOv4, greatly increasing both the speed and the accuracy of the algorithm. The main improvements, shown in fig. 2, are:
Input: in the model training stage, the improvements mainly comprise Mosaic data augmentation, adaptive anchor box calculation and adaptive image scaling;
Backbone network: new ideas from other detection algorithms are incorporated, mainly the Focus structure and the CSP structure;
Neck network: layers are often inserted between the backbone and the final head output layer of a target detection network; YOLOv5 adds an FPN + PAN structure here;
Head output layer: the anchor box mechanism of the output layer is the same as in YOLOv4; the main improvements are the GIoU loss (GIoU_Loss) used during training and DIoU-NMS used for prediction box filtering.
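The GIoU loss mentioned for the head output layer is built on the generalized IoU between a predicted box and a ground-truth box. A minimal sketch, assuming axis-aligned boxes in (x1, y1, x2, y2) form; YOLOv5's actual loss is 1 minus GIoU computed over batched tensors:

```python
def giou(box_a, box_b):
    """Generalized IoU for axis-aligned boxes (x1, y1, x2, y2).

    GIoU = IoU - (enclosing-box area not covered by the union) / (enclosing-box
    area); it lies in (-1, 1] and is 1 only for identical boxes.
    """
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # intersection rectangle (clamped to zero if the boxes are disjoint)
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    iou = inter / union
    # smallest enclosing box of both boxes
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    c_area = cw * ch
    return iou - (c_area - union) / c_area

print(giou((0, 0, 2, 2), (0, 0, 2, 2)))   # 1.0 for identical boxes
print(giou((0, 0, 1, 1), (2, 0, 3, 1)))   # negative for disjoint boxes
```

Unlike plain IoU, GIoU still provides a gradient signal when boxes do not overlap, which is why it is preferred as a training loss.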
S22, carrying out edge detection on the detection result to obtain the first contour.
Specifically, contour edges are extracted from the noise-reduced image with the Canny operator; the optimal double-threshold parameters of the Canny operator are chosen according to the actual image, and all contour edge information, including the belt edges, is extracted. The Canny operator comprises three main steps: gradient calculation, non-maximum suppression and edge determination.
Gradient calculation: the gradient direction is perpendicular to the edge direction. Eight directions are usually considered (horizontal left/right, vertical up/down, and the four diagonals), and the edge detection operator returns the horizontal component Gx and the vertical component Gy. From these two values, the gradient magnitude and the gradient angle (representing the gradient direction) are obtained as follows:
G = \sqrt{G_x^2 + G_y^2}

\theta = \operatorname{atan2}(G_y, G_x)

where atan2(·) denotes the two-argument arctangent function.
Non-maximum suppression: this is the edge-thinning step. After the gradient magnitude and direction are obtained, the pixels of the image are traversed; each pixel is checked to see whether it is the maximum along the gradient direction among its neighbors, it is suppressed if it is not, and all non-edge points are thereby removed.
Edge determination: the steps above generally yield all edge information, including weak candidate edges. Strong edges and weak edges are distinguished by comparing the gradient value with the high and low thresholds, and the weak edges are then kept or discarded according to whether they are connected to a strong edge.
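The gradient-calculation step of the Canny pipeline can be sketched in plain NumPy with Sobel kernels (one common choice of edge detection operator; border pixels are left at zero for simplicity):

```python
import numpy as np

def sobel_gradients(img):
    """Horizontal/vertical Sobel responses, then gradient magnitude and angle.

    A plain-NumPy sketch of the gradient-calculation step described above.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)  # horizontal
    ky = kx.T                                                   # vertical
    h, w = img.shape
    gx = np.zeros((h, w)); gy = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            gx[y, x] = np.sum(kx * patch)
            gy[y, x] = np.sum(ky * patch)
    magnitude = np.sqrt(gx ** 2 + gy ** 2)   # G = sqrt(Gx^2 + Gy^2)
    angle = np.arctan2(gy, gx)               # theta = atan2(Gy, Gx)
    return magnitude, angle

# vertical step edge: strong horizontal gradient, angle close to zero
img = np.zeros((5, 6)); img[:, 3:] = 1.0
mag, ang = sobel_gradients(img)
```

Non-maximum suppression and the double-threshold step would then operate on `mag` and `ang`.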
S3, extracting a carrier roller contour from the thermal infrared image to obtain a second contour;
The specific mode is as follows:
S31, equalizing the histogram of the thermal infrared image and removing the background to obtain the belt conveyor edges;
specifically, the histogram reflects the gray scale distribution rule in the image. It describes the number of pixels that each gray level has, but does not contain information about the position of these pixels in the image. The image histogram does not care about the spatial position of the pixel, so the image histogram is not influenced by the rotation and translation change of the image and can be used as the characteristic of the image.
Histogram Equalization (Histogram Equalization) is a method for enhancing Image Contrast (Image Contrast), and the main idea is to make the Histogram distribution of one Image into an approximately uniform distribution through a cumulative distribution function, thereby enhancing the Image Contrast. In order to expand the brightness range of the original image, a mapping function is needed to map the pixel values of the original image into a new histogram in an equalized manner, and the mapping function has two conditions:
firstly, the original pixel value order cannot be disturbed, and the relationship between brightness and darkness after mapping cannot be changed;
secondly, the mapping must be in the original range, namely the value range of the pixel mapping function should be between 0 and 255;
combining the above two conditions, the cumulative distribution function is a good choice, because the cumulative distribution function is a monotonically increasing function (controlling magnitude relationship) and the range is 0 to 1 (controlling out-of-range problem), so what is used in histogram equalization is the cumulative distribution function. In the histogram equalization process, the mapping method comprises the following steps:
S_k = (L - 1) \sum_{j=0}^{k} \frac{n_j}{n}

where S_k is the value of gray level k after the cumulative-distribution-function mapping, n is the total number of pixels in the image, n_j is the number of pixels with gray level j, and L is the total number of gray levels in the image.
As shown in fig. 3, the original image is dark overall; after histogram equalization, the contrast in the image is much more pronounced, which helps in extracting the edge contour.
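The cumulative-distribution mapping above can be sketched in plain NumPy for an 8-bit image (a simplified stand-in for a library routine such as OpenCV's equalizeHist):

```python
import numpy as np

def equalize_hist(img, levels=256):
    """Histogram equalization via the cumulative distribution function.

    Implements S_k = (L - 1) * sum_{j<=k} n_j / n for an 8-bit image.
    """
    hist = np.bincount(img.ravel(), minlength=levels)  # n_j for each gray level
    cdf = np.cumsum(hist) / img.size                   # cumulative n_j / n in [0, 1]
    mapping = np.round((levels - 1) * cdf).astype(np.uint8)
    return mapping[img]                                # apply the mapping per pixel

# a dark image occupying only levels 50..53 is spread over a much wider range
img = np.repeat(np.arange(50, 54, dtype=np.uint8), 16).reshape(8, 8)
out = equalize_hist(img)
```

Because the CDF is monotonic, the ordering of gray levels is preserved while the brightness range expands.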
S32, carrying out linear transformation on the edge of the belt conveyor to obtain the second contour.
S4, preprocessing the visible light image and the thermal infrared image to obtain new image data;
the specific mode is as follows:
s41, carrying out Gaussian filtering processing on the visible light image and the thermal infrared image to obtain a processed image;
specifically, the visible light image and the thermal infrared image are preprocessed by Gaussian filtering, the image quality is improved, the images are used for subsequent ROI extraction, and noise elimination is carried out on original RGB and the thermal infrared image data by a Gaussian filter; gaussian filtering is a weighted average filter whose convolution kernel has a coefficient for achieving averaging. The reciprocal of the sum of all values in the matrix is the coefficient of the convolution kernel. In the actual filtering, traversing the visible light image and the thermal infrared image, taking a certain point in the image as a convolution kernel center, utilizing convolution to check neighborhood pixels around the pixel point for weighted average, and taking a calculation result as a new pixel value of the current pixel point. And finally, Gaussian filtering and denoising of the image are realized, and high-quality image data are provided for subsequent ROI extraction and feature point detection.
The Gaussian weights are chosen according to the Gaussian function; its one-dimensional and two-dimensional forms with mean μ = 0 are given below, where σ is the standard deviation of the normal distribution, whose value determines how fast the function decays.

One-dimensional form (μ = 0):

G(x) = \frac{1}{\sqrt{2\pi}\,\sigma} \, e^{-x^2 / (2\sigma^2)}

Two-dimensional form (μ = 0):

G(x, y) = \frac{1}{2\pi\sigma^2} \, e^{-(x^2 + y^2) / (2\sigma^2)}
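The two-dimensional Gaussian above directly yields the convolution kernel. A sketch that builds a normalized kernel (the kernel size and σ here are illustrative choices):

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """2-D Gaussian kernel G(x, y) with mu = 0, normalized so the
    coefficients sum to 1 (the weighted-average property described above)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    g = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2)) / (2.0 * np.pi * sigma ** 2)
    return g / g.sum()   # normalize so the filter preserves overall brightness

k = gaussian_kernel(5, 1.0)
```

Convolving an image with `k` then performs the weighted averaging described in the filtering step.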
S42, scaling the processed images to the same resolution by bilinear interpolation to obtain secondary images;
Specifically, the target-object ROI region is extracted from the image and scaled to a common resolution according to the characteristics of the ROI region. OTSU binarization threshold segmentation and Canny contour extraction are applied to the visible light and thermal infrared images for subsequent feature point detection and image information fusion, and shape and scale features such as the aspect ratio are used for screening to obtain the required target ROI region. The contour shape features of the ROI are extracted, a minimum enclosing rectangle is fitted, the scaling factor is determined from the size of the enclosing rectangle, and the image is scaled by bilinear interpolation. Bilinear interpolation is linear interpolation performed twice, once in the x direction and once in the y direction. Let the pixel values at the four points Q11(x1, y1), Q12(x1, y2), Q21(x2, y1) and Q22(x2, y2) be f(Q11), f(Q12), f(Q21) and f(Q22); for any point P(x, y) between the four points, interpolating linearly first in the x direction and then in the y direction yields

f(P) \approx \frac{f(Q_{11})(x_2 - x)(y_2 - y) + f(Q_{21})(x - x_1)(y_2 - y) + f(Q_{12})(x_2 - x)(y - y_1) + f(Q_{22})(x - x_1)(y - y_1)}{(x_2 - x_1)(y_2 - y_1)}

which gives the pixel value at P and guarantees the quality of the scaled image.
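The relational expression above can be sketched as a small function; the argument order and the unit-cell example are illustrative:

```python
def bilerp(x, y, q11, q12, q21, q22, x1, y1, x2, y2):
    """Bilinear interpolation at P(x, y) from the four corner values
    f(Q11), f(Q12), f(Q21), f(Q22): interpolate in x first, then in y."""
    # linear interpolation along x at rows y1 and y2
    fx_y1 = ((x2 - x) * q11 + (x - x1) * q21) / (x2 - x1)
    fx_y2 = ((x2 - x) * q12 + (x - x1) * q22) / (x2 - x1)
    # then linear interpolation along y between the two intermediate values
    return ((y2 - y) * fx_y1 + (y - y1) * fx_y2) / (y2 - y1)

# midpoint of a unit cell is the average of the four corner values
print(bilerp(0.5, 0.5, 0.0, 2.0, 4.0, 6.0, 0, 0, 1, 1))   # 3.0
```

Image scaling evaluates this expression at every output pixel mapped back into the source grid.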
S43, performing feature point detection on the secondary images with the AKAZE operator to obtain a transformation matrix;
Specifically, the AKAZE operator detects feature points in the secondary images, the transformation matrix is obtained, and image information fusion is performed. The AKAZE algorithm solves the nonlinear diffusion equation with the FED (Fast Explicit Diffusion) scheme, which converges for arbitrary step sizes to a stable scale space and improves repeatability and distinctiveness; in describing the feature direction, both calculation accuracy and calculation speed are improved. The FED scheme advances the nonlinear diffusion equation as:

L_{i+1} = (I + \tau A(L_i)) \, L_i, \quad i = 0, 1, \ldots, n-1

where I is the identity matrix, A(L_i) is the conduction matrix of the image at scale i, constructed from the gradient histogram after Gaussian filtering, and τ is the time step, i.e. the difference t_{i+1} − t_i between evolution times.
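The FED evolution step can be sketched on a 1-D signal. For illustration the conduction matrix A(L_i) is replaced here by a constant discrete Laplacian (i.e. linear diffusion); AKAZE's actual conduction matrix depends on the image gradients at each scale:

```python
import numpy as np

def fed_step(L, tau):
    """One explicit diffusion step L_{i+1} = (I + tau * A) L_i on a 1-D signal.

    A is taken as the constant tridiagonal Laplacian (linear diffusion), a
    simplification of the image-dependent conduction matrix used by AKAZE.
    """
    n = L.size
    A = np.zeros((n, n))
    for j in range(n):
        A[j, j] = -2.0            # center coefficient of the 1-D Laplacian
        if j > 0:
            A[j, j - 1] = 1.0     # left neighbor
        if j < n - 1:
            A[j, j + 1] = 1.0     # right neighbor
    return (np.eye(n) + tau * A) @ L

signal = np.array([0.0, 0.0, 4.0, 0.0, 0.0])
smoothed = fed_step(signal, tau=0.25)   # the sharp peak diffuses outward
```

Iterating this step with the FED sequence of step sizes builds the stable scale space mentioned above.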
S44, performing image matching and information fusion with the transformation matrix to obtain new image data.
S5, combining the first contour, the second contour and the new image data to extract the carrier roller contour, obtaining the final carrier roller contour;
S6, cropping and alarming by combining the final carrier roller contour with the temperature information of the thermal infrared image.
While the invention has been described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (4)

1. A carrier roller detection method based on the fusion of visible light and thermal infrared, characterized by comprising the following steps:
reading a video frame through a camera, and retrieving a visible light image and a thermal infrared image from the video frame;
extracting a carrier roller contour from the visible light image to obtain a first contour;
extracting a carrier roller contour from the thermal infrared image to obtain a second contour;
preprocessing the visible light image and the thermal infrared image to obtain new image data;
combining the first contour, the second contour and the new image data to obtain a final carrier roller contour;
and cropping and alarming by combining the final carrier roller contour with the temperature information of the thermal infrared image.
2. The carrier roller detection method based on the fusion of visible light and thermal infrared according to claim 1, wherein the first contour is obtained from the visible light image as follows:
performing target recognition on the carrier roller with the YOLOv5 algorithm on the visible light image to obtain a detection result;
and performing edge detection on the detection result to obtain the first contour.
3. The carrier roller detection method based on the fusion of visible light and thermal infrared according to claim 1, wherein the second contour is obtained from the thermal infrared image as follows:
equalizing the histogram of the thermal infrared image and removing the background to obtain the belt conveyor edges;
and performing a linear transformation on the belt conveyor edges to obtain the second contour.
4. The carrier roller detection method based on the fusion of visible light and thermal infrared according to claim 1, wherein the visible light image and the thermal infrared image are preprocessed to obtain new image data as follows:
performing Gaussian filtering on the visible light image and the thermal infrared image to obtain processed images;
scaling the processed images to the same resolution by bilinear interpolation to obtain secondary images;
performing feature point detection on the secondary images with the AKAZE operator to obtain a transformation matrix;
and performing image matching and information fusion with the transformation matrix to obtain new image data.
CN202210616564.2A 2022-06-01 2022-06-01 Carrier roller detection method based on fusion of visible light and thermal infrared Pending CN115035311A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210616564.2A CN115035311A (en) 2022-06-01 2022-06-01 Carrier roller detection method based on fusion of visible light and thermal infrared


Publications (1)

Publication Number Publication Date
CN115035311A true CN115035311A (en) 2022-09-09

Family

ID=83122647

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210616564.2A Pending CN115035311A (en) 2022-06-01 2022-06-01 Carrier roller detection method based on fusion of visible light and thermal infrared

Country Status (1)

Country Link
CN (1) CN115035311A (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107043000A (en) * 2017-06-15 2017-08-15 西安科技大学 A kind of belt conveyer safe and intelligent safeguards system based on machine vision
CN109300161A (en) * 2018-10-24 2019-02-01 四川阿泰因机器人智能装备有限公司 A kind of localization method and device based on binocular vision



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination