CN115035311A - Carrier roller detection method based on fusion of visible light and thermal infrared - Google Patents
- Publication number
- CN115035311A (application CN202210616564.2A)
- Authority
- CN
- China
- Prior art keywords
- image
- thermal infrared
- visible light
- profile
- roller
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
All classifications fall under G—PHYSICS; G06—COMPUTING; G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING; G06V10/00—Arrangements for image or video recognition or understanding:
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
- G06V10/28—Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
- G06V10/30—Noise filtering
- G06V10/7715—Feature extraction, e.g. by transforming the feature space, e.g. multi-dimensional scaling [MDS]; Mappings, e.g. subspace methods
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
Abstract
The invention relates to the technical field of image fusion in computer vision, and in particular to a carrier roller detection method based on the fusion of visible light and thermal infrared. A camera reads video frames and retrieves a visible light image and a thermal infrared image; a carrier roller contour is extracted from the visible light image to obtain a first contour; a carrier roller contour is extracted from the thermal infrared image to obtain a second contour; the visible light image and the thermal infrared image are preprocessed to obtain new image data; the first contour, the second contour and the new image data are combined to extract the final carrier roller contour; and cropping and alarming are carried out by combining the final carrier roller contour with the temperature information of the thermal infrared image. By fusing the visible light image with the thermal infrared image, the method effectively solves the problem that carrier roller extraction is severely limited when based on the thermal infrared image alone, as well as the problem that carrier roller abnormalities cannot be effectively detected when based on the visible light image alone.
Description
Technical Field
The invention relates to the technical field of image fusion of computer vision, in particular to a carrier roller detection method based on visible light and thermal infrared fusion.
Background
Carrier rollers are basic components of a belt conveyor and are distributed in large numbers along its entire length. Regular manual inspection of the carrier rollers along the belt conveyor is needed to guarantee the safety and stability of the conveyor. The traditional approach relies on manual patrols that distinguish abnormal carrier rollers mainly by sight and hearing; this approach suffers from frequent missed inspections, high labor intensity, low efficiency and other defects.
At present, research on online monitoring and automatic diagnosis of belt conveyor carrier roller faults is mainly based on the analysis of information such as sound signals, temperature sensing and image features. Most commonly used carrier roller detection algorithms rely on thermal infrared image detection.
However, field environments are far more complex than experimental environments, and carrier roller extraction based on thermal infrared image detection alone is severely limited.
Disclosure of Invention
The invention aims to provide a carrier roller detection method based on the fusion of visible light and thermal infrared, so as to solve the problems that the field environment is complex and that carrier roller extraction by thermal infrared image detection alone is limited.
In order to achieve the purpose, the invention provides a carrier roller detection method based on visible light and thermal infrared fusion, which comprises the following steps:
reading a video frame through a camera, and calling a visible light image and a thermal infrared image from the video frame;
extracting a carrier roller contour based on the visible light image to obtain a first contour;
extracting a carrier roller contour based on the thermal infrared image to obtain a second contour;
preprocessing the visible light image and the thermal infrared image to obtain new image data;
combining the first contour, the second contour and the new image data to extract a carrier roller contour and obtain a final carrier roller contour;
and carrying out cropping and alarming by combining the final carrier roller contour with the temperature information of the thermal infrared image.
The specific steps of extracting a carrier roller contour based on the visible light image to obtain the first contour are as follows:
carrying out target recognition and detection of the carrier roller on the visible light image by using the YOLOv5 algorithm to obtain a detection result;
and carrying out edge detection on the detection result to obtain the first contour.
The specific steps of extracting a carrier roller contour based on the thermal infrared image to obtain the second contour are as follows:
equalizing the histogram of the thermal infrared image to remove the background and obtain the edge of the belt conveyor;
and carrying out a linear transformation on the edge of the belt conveyor to obtain the second contour.
The specific way of preprocessing the visible light image and the thermal infrared image to obtain new image data is as follows:
carrying out Gaussian filtering on the visible light image and the thermal infrared image to obtain processed images;
scaling the processed images to the same resolution by bilinear interpolation to obtain secondary images;
carrying out feature point detection on the secondary images by using the AKAZE operator to obtain a transformation matrix;
and carrying out image matching and information fusion with the transformation matrix to obtain new image data.
In the carrier roller detection method based on visible light and thermal infrared fusion, the required environmental condition is that the lens remains clear. Because the resolution of the infrared camera is low, the target object used for ROI extraction should have distinct, easily extracted features when the transformation matrix is computed. Once these conditions are met, the camera reads a video frame and retrieves a visible light image and a thermal infrared image; a carrier roller contour is extracted from the visible light image to obtain a first contour; a carrier roller contour is extracted from the thermal infrared image to obtain a second contour; the two images are preprocessed to obtain new image data; the first contour, the second contour and the new image data are combined to obtain the final carrier roller contour; and cropping and alarming are carried out by combining the final carrier roller contour with the temperature information of the thermal infrared image. By fusing the visible light image with the thermal infrared image, the method effectively solves the problem that carrier roller extraction is severely limited when based on the thermal infrared image alone, the problem that carrier roller abnormalities cannot be effectively detected when based on the visible light image alone, and the problems of a complex field environment limiting thermal infrared extraction.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a bilinear interpolated scaled image.
Fig. 2 is a diagram of the YOLOv5 improvements.
Fig. 3 is a histogram equalization graph.
Fig. 4 is a schematic diagram of a roller detection method based on visible light and thermal infrared fusion.
Fig. 5 is a flow chart of the roller detection method based on visible light and thermal infrared fusion.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
Referring to fig. 1 to 5, the present invention provides a roller detecting method based on visible light and thermal infrared fusion, including the following steps:
s1, reading a video frame through a camera, and calling a visible light image and a thermal infrared image from the video frame;
s2, extracting a roller outline based on the visible light image to obtain a first outline;
the specific mode is as follows:
s21, carrying out target recognition detection on the dragging roller by utilizing the YOLOv5 algorithm based on the visible light image to obtain a detection result;
specifically, the idler target identification is carried out by using YOLOV5, YOLOv5 is a single-stage target detection algorithm, and a plurality of new improvement ideas are added to the algorithm on the basis of YOLOv4, so that the speed and the precision of the algorithm are greatly improved. The main improvement idea is shown in fig. 2.
Input end: in a model training stage, some improvement ideas are provided, and mainly comprise Mosaic data enhancement, self-adaptive anchor frame calculation and self-adaptive picture scaling;
reference network: some new ideas in other detection algorithms are fused, and the method mainly comprises the following steps: focus structure and CSP structure;
the hack network: some layers are often inserted between the BackBone and the final Head output layer of the target detection network, and an FPN + PAN structure is added into the Yolov 5;
head output layer: the output layer's anchor box mechanism is the same as YOLOv4, and the main improvements are the Loss function GIOU _ Loss during training and the DIOU _ nms of the prediction box filtering.
S22, carrying out edge detection on the detection result to obtain the first contour.
Specifically, contour edge extraction is carried out on the noise-reduced image with the Canny operator; the optimal double-threshold parameters of the Canny operator are chosen according to the actual conditions of the image, and all contour edge information, including the belt edges, is extracted. The Canny operator mainly comprises three steps: gradient calculation, non-maximum suppression and edge determination.
Gradient calculation: the gradient direction is perpendicular to the edge direction and is usually quantized to eight directions: horizontal (left, right), vertical (up, down) and the four diagonals. The edge detection operator returns the horizontal component Gx and the vertical component Gy. From these, two values are obtained, the gradient magnitude and the gradient angle (representing the gradient direction):

G = √(Gx² + Gy²)

θ = atan2(Gy, Gx)

in the formula, atan2(·) denotes the arctangent function with two arguments.
Non-maximum suppression: non-maximum suppression is the edge-thinning step. After the gradient magnitude and direction are obtained, the pixels of the image are traversed; for each pixel, it is checked whether it is the maximum among its neighbors along the gradient direction, the pixel is suppressed if it is not, and all non-edge points are thereby removed.
Determining the edge: the steps above generally yield all edge information, including weak (candidate) edges. Strong edges and weak edges are distinguished according to the relation between the gradient value and the high and low thresholds; the weak edges are then screened according to whether they are connected to strong edges.
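As a concrete illustration of the gradient-calculation step above, the following is a minimal sketch; the patent does not name the gradient operator, so 3×3 Sobel kernels are assumed here:

```python
import numpy as np

def sobel_gradients(img):
    """Compute gradient magnitude and direction of a grayscale image
    using 3x3 Sobel kernels (a common choice of edge-detection operator)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # horizontal Gx
    ky = kx.T                                                          # vertical Gy
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(kx * patch)
            gy[i, j] = np.sum(ky * patch)
    magnitude = np.sqrt(gx ** 2 + gy ** 2)   # G = sqrt(Gx^2 + Gy^2)
    direction = np.arctan2(gy, gx)           # theta = atan2(Gy, Gx)
    return magnitude, direction

# A vertical step edge: the gradient points horizontally (theta ~ 0).
img = np.zeros((5, 5))
img[:, 3:] = 255.0
mag, theta = sobel_gradients(img)
```

For a vertical edge, Gy vanishes and atan2 returns an angle of zero, confirming that the gradient direction is perpendicular to the edge, as the text states.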
S3, extracting a carrier roller contour based on the thermal infrared image to obtain a second contour;
the specific mode is as follows:
S31, equalizing the histogram of the thermal infrared image to remove the background and obtain the edge of the belt conveyor;
specifically, the histogram reflects the gray scale distribution rule in the image. It describes the number of pixels that each gray level has, but does not contain information about the position of these pixels in the image. The image histogram does not care about the spatial position of the pixel, so the image histogram is not influenced by the rotation and translation change of the image and can be used as the characteristic of the image.
Histogram Equalization (Histogram Equalization) is a method for enhancing Image Contrast (Image Contrast), and the main idea is to make the Histogram distribution of one Image into an approximately uniform distribution through a cumulative distribution function, thereby enhancing the Image Contrast. In order to expand the brightness range of the original image, a mapping function is needed to map the pixel values of the original image into a new histogram in an equalized manner, and the mapping function has two conditions:
firstly, it must not disturb the order of the original pixel values; the relative brightness relationships must be preserved after mapping;
secondly, the mapping must stay within the original range, i.e. the value range of the pixel mapping function should be between 0 and 255;
Combining these two conditions, the cumulative distribution function is a good choice: it is monotonically increasing (preserving the order of magnitudes) and its range is 0 to 1 (preventing out-of-range values), so histogram equalization uses the cumulative distribution function. The mapping is:

S_k = (L − 1) · Σ_{j=0}^{k} n_j / n

where S_k is the value of the current gray level after the cumulative-distribution mapping, n is the total number of pixels in the image, n_j is the number of pixels at gray level j, and L is the total number of gray levels in the image.
As shown in fig. 3, the original image is dark as a whole, and after histogram equalization, the contrast in the image is more obvious, which is helpful for extracting the edge contour.
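The cumulative-distribution mapping above can be sketched as a minimal NumPy implementation for 8-bit images (the patent does not specify an implementation; this is an illustration of the formula):

```python
import numpy as np

def equalize_hist(img):
    """Histogram equalization of an 8-bit grayscale image via the
    cumulative distribution function: S_k = (L-1) * sum_{j<=k} n_j / n."""
    L = 256
    hist = np.bincount(img.ravel(), minlength=L)   # n_j for each gray level
    cdf = np.cumsum(hist) / img.size               # cumulative distribution in [0, 1]
    mapping = np.floor((L - 1) * cdf).astype(np.uint8)
    return mapping[img]

# A low-contrast image using only levels 100 and 101 spreads out
# toward the full 0..255 range after equalization.
img = np.array([[100, 100], [101, 101]], dtype=np.uint8)
out = equalize_hist(img)
```

Because the CDF is monotonically increasing, the mapping preserves the order of gray levels, satisfying the first condition above; because its values lie in [0, 1], scaling by L − 1 keeps the output in range, satisfying the second.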
S32, carrying out linear transformation on the edge of the belt conveyor to obtain the second contour.
S4, preprocessing the visible light image and the thermal infrared image to obtain new image data;
the specific mode is as follows:
s41, carrying out Gaussian filtering processing on the visible light image and the thermal infrared image to obtain a processed image;
specifically, the visible light image and the thermal infrared image are preprocessed by Gaussian filtering, the image quality is improved, the images are used for subsequent ROI extraction, and noise elimination is carried out on original RGB and the thermal infrared image data by a Gaussian filter; gaussian filtering is a weighted average filter whose convolution kernel has a coefficient for achieving averaging. The reciprocal of the sum of all values in the matrix is the coefficient of the convolution kernel. In the actual filtering, traversing the visible light image and the thermal infrared image, taking a certain point in the image as a convolution kernel center, utilizing convolution to check neighborhood pixels around the pixel point for weighted average, and taking a calculation result as a new pixel value of the current pixel point. And finally, Gaussian filtering and denoising of the image are realized, and high-quality image data are provided for subsequent ROI extraction and feature point detection.
The weights of the Gaussian kernel are taken from the Gaussian function. With mean μ = 0, its one-dimensional and two-dimensional forms are:

G(x) = (1 / (√(2π)·σ)) · exp(−x² / (2σ²))

G(x, y) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²))

where σ is the standard deviation of the normal distribution; its value determines how fast the function decays.
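A minimal sketch of building a normalized 2-D Gaussian kernel from the formula above and applying it by traversal-and-weighted-average, as the text describes (the kernel size and border handling are assumptions; the patent does not fix them):

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """2-D Gaussian kernel G(x, y) ~ exp(-(x^2 + y^2) / (2*sigma^2)),
    normalized so that the weights sum to 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()   # normalization coefficient

def gaussian_filter(img, size=3, sigma=1.0):
    """Traverse the image, center the kernel on each pixel and take the
    weighted average of its neighborhood (border pixels left untouched)."""
    k = gaussian_kernel(size, sigma)
    r = size // 2
    out = img.astype(float).copy()
    for i in range(r, img.shape[0] - r):
        for j in range(r, img.shape[1] - r):
            out[i, j] = np.sum(k * img[i - r:i + r + 1, j - r:j + r + 1])
    return out

k3 = gaussian_kernel(3, 1.0)

# A constant image is unchanged by filtering, since the weights sum to 1.
flat = np.full((5, 5), 10.0)
smoothed = gaussian_filter(flat)
```

The normalization is what makes this a pure smoothing operation: it preserves the mean brightness of the image while attenuating high-frequency noise.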
S42, scaling the processed images to the same resolution by bilinear interpolation to obtain secondary images;
Specifically, the target object ROI area is extracted from the image, and the images are scaled to the same resolution according to the characteristics of the ROI area. OTSU binarization threshold segmentation and Canny contour extraction are carried out on the visible light image and the thermal infrared image for the subsequent feature point detection and image information fusion, and shape-scale features such as the aspect ratio are then used for screening to obtain the required target object ROI area. The contour shape features of the ROI area are extracted, a minimum enclosing rectangle is fitted, the scaling ratio is determined from the size of the enclosing rectangle, and the image is scaled by bilinear interpolation. Bilinear interpolation performs linear interpolation twice, once in the x direction and once in the y direction. Let the pixel values at the four points Q11(x1, y1), Q12(x1, y2), Q21(x2, y1) and Q22(x2, y2) of the image be f(Q11), f(Q12), f(Q21) and f(Q22). For any point P(x, y) between these four points, the pixel value f(P) is obtained by first interpolating linearly in the x direction:

f(x, y1) = ((x2 − x)/(x2 − x1))·f(Q11) + ((x − x1)/(x2 − x1))·f(Q21)

f(x, y2) = ((x2 − x)/(x2 − x1))·f(Q12) + ((x − x1)/(x2 − x1))·f(Q22)

and then in the y direction:

f(P) = ((y2 − y)/(y2 − y1))·f(x, y1) + ((y − y1)/(y2 − y1))·f(x, y2)

which guarantees the quality of the scaled image.
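The two-pass bilinear relation described above can be sketched directly (a toy illustration of the interpolation formula, not the full image-scaling routine):

```python
def bilinear(x, y, x1, y1, x2, y2, f11, f12, f21, f22):
    """Bilinear interpolation: linear interpolation in x, then in y.
    f11 = f(Q11) at (x1, y1), f21 = f(Q21) at (x2, y1),
    f12 = f(Q12) at (x1, y2), f22 = f(Q22) at (x2, y2)."""
    # interpolate along x at rows y1 and y2
    fxy1 = (x2 - x) / (x2 - x1) * f11 + (x - x1) / (x2 - x1) * f21
    fxy2 = (x2 - x) / (x2 - x1) * f12 + (x - x1) / (x2 - x1) * f22
    # interpolate along y between the two intermediate values
    return (y2 - y) / (y2 - y1) * fxy1 + (y - y1) / (y2 - y1) * fxy2

# The center of a unit square whose left corners hold 0 and whose
# right corners hold 10 interpolates to the average, 5.0.
center = bilinear(0.5, 0.5, 0, 0, 1, 1, 0.0, 0.0, 10.0, 10.0)
```

At the corner points the formula degenerates to the corner value itself, so bilinear scaling is exact wherever a target pixel lands on a source pixel.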
S43, carrying out feature point detection on the secondary images by using the AKAZE operator to obtain a transformation matrix;
Specifically, the AKAZE operator is used to detect feature points in the secondary images and obtain a transformation matrix for image information fusion. The AKAZE algorithm solves a nonlinear diffusion equation with the FED (Fast Explicit Diffusion) scheme; FED converges for an arbitrary step size, yielding a stable scale space and improving repeatability and distinctiveness. In the description of the feature orientation, both calculation precision and calculation speed are improved. The FED scheme solves the nonlinear diffusion equation as:
L_{i+1} = (I + τ·A(L_i))·L_i,  i = 0, 1, …, n − 1

wherein I is the identity matrix, A(L_i) is the conduction matrix of the image at scale i, constructed from the gradient histogram after scale-dependent Gaussian filtering, and τ is the time step, equal to the difference t_{i+1} − t_i of the evolution times.
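The explicit diffusion update above can be sketched as a plain matrix iteration; this is a toy illustration of the formula only, not the full AKAZE scale-space construction, and the constant conduction matrices used below are assumptions for demonstration:

```python
import numpy as np

def fed_step(L_i, A, tau):
    """One Fast Explicit Diffusion step: L_{i+1} = (I + tau * A(L_i)) L_i.
    Here A is passed in directly; in AKAZE it would be rebuilt from the
    image gradients at each scale."""
    I = np.eye(L_i.shape[0])
    return (I + tau * A) @ L_i

L0 = np.array([1.0, 2.0, 3.0])

# With a zero conduction matrix, the update reduces to L_{i+1} = I @ L_i,
# so the signal is unchanged.
unchanged = fed_step(L0, np.zeros((3, 3)), tau=0.25)

# With A = -I and tau = 0.5, the update uniformly damps the signal by half.
halved = fed_step(L0, -np.eye(3), tau=0.5)
```

Iterating this step with the varying step sizes τ_i = t_{i+1} − t_i is what lets FED remain stable while taking larger effective diffusion steps than a fixed-step explicit scheme.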
S44, image matching and information fusion are carried out on the transformation matrix to obtain new image data.
S5, combining the first contour, the second contour and the new image data to extract the carrier roller contour and obtain the final carrier roller contour;
S6, carrying out cropping and alarming by combining the final carrier roller contour with the temperature information of the thermal infrared image.
While the invention has been described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (4)
1. A carrier roller detection method based on visible light and thermal infrared fusion is characterized by comprising the following steps:
reading a video frame through a camera, and calling a visible light image and a thermal infrared image from the video frame;
extracting a carrier roller contour based on the visible light image to obtain a first contour;
extracting a carrier roller contour based on the thermal infrared image to obtain a second contour;
preprocessing the visible light image and the thermal infrared image to obtain new image data;
combining the first contour, the second contour and the new image data to extract a carrier roller contour and obtain a final carrier roller contour;
and carrying out cropping and alarming by combining the final carrier roller contour with the temperature information of the thermal infrared image.
2. A roller detection method based on visible light and thermal infrared fusion as claimed in claim 1,
the specific mode of extracting the carrier roller contour based on the visible light image to obtain the first contour is as follows:
carrying out target recognition and detection of the carrier roller on the visible light image by using the YOLOv5 algorithm to obtain a detection result;
and carrying out edge detection on the detection result to obtain the first contour.
3. A roller detection method based on visible light and thermal infrared fusion as claimed in claim 1,
the specific mode of extracting the carrier roller contour based on the thermal infrared image to obtain the second contour is as follows:
equalizing the histogram of the thermal infrared image to remove the background and obtain the edge of the belt conveyor;
and carrying out a linear transformation on the edge of the belt conveyor to obtain the second contour.
4. A roller detection method based on visible light and thermal infrared fusion as claimed in claim 1,
the specific mode of preprocessing the visible light image and the thermal infrared image to obtain new image data is as follows:
carrying out Gaussian filtering on the visible light image and the thermal infrared image to obtain processed images;
scaling the processed images to the same resolution by bilinear interpolation to obtain secondary images;
carrying out feature point detection on the secondary images by using the AKAZE operator to obtain a transformation matrix;
and carrying out image matching and information fusion with the transformation matrix to obtain new image data.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210616564.2A CN115035311A (en) | 2022-06-01 | 2022-06-01 | Carrier roller detection method based on fusion of visible light and thermal infrared |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115035311A true CN115035311A (en) | 2022-09-09 |
Family
ID=83122647
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210616564.2A Pending CN115035311A (en) | 2022-06-01 | 2022-06-01 | Carrier roller detection method based on fusion of visible light and thermal infrared |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115035311A (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107043000A (en) * | 2017-06-15 | 2017-08-15 | 西安科技大学 | A kind of belt conveyer safe and intelligent safeguards system based on machine vision |
CN109300161A (en) * | 2018-10-24 | 2019-02-01 | 四川阿泰因机器人智能装备有限公司 | A kind of localization method and device based on binocular vision |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||