CN115641273A - Coal mine underground robot position perception image enhancement method based on luminosity compensation - Google Patents


Info

Publication number
CN115641273A
Authority
CN
China
Prior art keywords
data
point cloud
image data
image
depth image
Prior art date
Legal status
Withdrawn
Application number
CN202211284691.3A
Other languages
Chinese (zh)
Inventor
满洋
陈广立
刘志强
张少帅
汤明东
徐鹏飞
Current Assignee
Suzhou Fengyihe Intelligent Technology Co ltd
Original Assignee
Suzhou Fengyihe Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Suzhou Fengyihe Intelligent Technology Co ltd filed Critical Suzhou Fengyihe Intelligent Technology Co ltd
Priority to CN202211284691.3A
Publication of CN115641273A

Landscapes

  • Image Processing (AREA)

Abstract

The invention relates to a luminosity-compensation-based method for enhancing the position-perception images of an underground coal mine robot, comprising the following steps: processing the laser point cloud data and the original depth image data, and performing plane fitting and segmentation on the noisy dense depth image data to obtain small regions; establishing a Bayesian-kriging-based data fusion model within each small region to obtain the optimal weight coefficients, and substituting the optimal weight coefficients into the model to obtain the reconstructed dense high-precision point cloud; and restoring and enhancing the image in the diffusely reflecting regions of the underground environment. The beneficial effects of the invention are that the method restores and enhances the image in the diffusely reflecting regions of the underground environment, provides good data support for robot positioning, and addresses the perception and accurate modeling of complex coal mine environments. The invention provides an image enhancement algorithm based on active-light-source luminosity compensation, which copes with the low-illumination underground environment and recovers the image texture.

Description

Coal mine underground robot position perception image enhancement method based on luminosity compensation
Technical Field
The invention belongs to the field of accurate environment perception in coal mines, and particularly relates to a method for enhancing the position-perception images of an underground coal mine robot based on active-light-source luminosity compensation.
Background
Illumination conditions in the underground environment are poor; texture features are sparse and monotonous in color, and particulate dust is heavy. Under these special conditions a single sensor cannot support simultaneous localization and mapping (SLAM).
Vision-based positioning cannot capture stable, effective feature points in a dark, lightless environment; its accuracy is low and must be assisted by an active light source, additional marker points, or other sensors. The accuracy of positioning based on laser point clouds depends on strong environmental structure features, such as the sharp edge points of intersections. In summary, the existing methods based on lidar + IMU, vision + infrared + IMU, and laser + visible + infrared + IMU share the defect that traditional feature extraction algorithms cannot extract stable features for fusion in the downhole environment.
In the prior art, perception and accurate modeling of complex coal mine environments remain difficult: in a weakly illuminated, GPS-denied mine environment, laser point cloud information with accurate and effective range measurements occupies a crucial position, but a sparse laser point cloud cannot provide enough environmental feature information. Besides lidar, sensors that acquire environmental depth information, such as depth cameras based on binocular matching, can acquire dense and rich depth images, but the depth values of those images are disturbed by significant noise, so the measurement accuracy of such cameras falls far short of that of laser point clouds. A single depth sensor therefore cannot accurately perceive the geometric characteristics of the mine environment.
The low-illumination, weak-texture tunnel image features of the prior art are difficult to identify: in an almost completely dark coal mine environment, an unmanned aerial vehicle must acquire images of the underground environment with its onboard active light source. The propagation characteristics of the light source make image regions near the light source bright and regions far from it dark, which causes large variations in texture features. The image features obtained by existing keypoint descriptor extraction algorithms are therefore quite unstable and prone to mismatching; accurate registration between consecutive image frames cannot be achieved, which greatly hinders vision-based positioning and navigation.
The visual image is the most intuitive reflection of the underground environment, especially after a disaster, and is of great significance for robot navigation and for workers. An image enhancement algorithm based on active-light-source luminosity compensation that copes with the low-illumination underground environment and recovers image texture is therefore of important research significance.
Disclosure of Invention
The invention aims to overcome the defects in the prior art and provides a coal mine underground robot position perception image enhancement method based on active light source luminosity compensation.
The coal mine underground robot position perception image enhancement method based on luminosity compensation comprises the following working steps:
step 1, processing the laser point cloud data and the original depth image data, and performing plane fitting and segmentation on the noisy dense depth image data in the original depth image data to obtain small regions; establishing a Bayesian-kriging-based data fusion model within each small region to obtain the optimal weight coefficients, and substituting the optimal weight coefficients into the model to obtain the reconstructed dense high-precision point cloud;
and step 2, eliminating the highlight regions of the original RGB image, estimating the reflectivity of the underground environment surface by combining the depth information of the dense high-precision point cloud, re-rendering the image by combining the illumination coefficient, and restoring and enhancing the image in the diffusely reflecting regions of the underground environment.
Preferably, step 1 specifically comprises the following steps:
step 1.1, adopting a geometric-space optimal approximation algorithm for the laser point cloud data to preprocess the laser point cloud data and the original depth image data: with the extrinsic relationship between the sensors known, converting the laser point cloud data and the original depth image data into the same reference coordinate system;
step 1.2, assuming the original depth image data follow a Gaussian distribution, performing plane fitting and segmentation on the noisy dense depth image data in the original depth image data with the RANSAC (random sample consensus) algorithm, and estimating the Gaussian parameters of each pixel point from the plane-fitting equation of each small region obtained after segmentation;
step 1.3, establishing a Bayesian-kriging-based data fusion model in the small regions obtained after the segmentation in step 1.2, and, on the premise of unbiased optimal estimation, minimizing the estimation variance to establish an optimization equation;
step 1.4, performing spatial covariance correlation analysis on the original depth image data and the point cloud data within each small region;
and step 1.5, fitting the variogram and the conditional variogram, solving for the optimal weight coefficients, and substituting them back into the Bayesian-kriging-based data fusion model established in step 1.3 to obtain the reconstructed dense high-precision point cloud.
Preferably, step 2 specifically comprises the following steps:
step 2.1, assuming the underground environment surface is Lambertian, detecting the highlight regions of the original RGB image and then removing them;
step 2.2, establishing an SFS (shape-from-shading) model, and preliminarily estimating the reflectivity of the underground environment surface by combining the depth information of the dense high-precision point cloud obtained in step 1;
and step 2.3, establishing an optimization equation based on the Retinex image enhancement algorithm model, solving the optimized reflectivity and illumination coefficient from the optimization equation, re-rendering the image, and performing brightness recovery and texture enhancement on the image in the diffusely reflecting regions of the underground environment.
The invention has the beneficial effects that:
the method comprises the steps of analyzing an incidence relation between sparse high-precision laser point cloud information and dense fuzzy depth image information double-channel depth data in combination with the sparse high-precision laser point cloud information and the dense fuzzy depth image information, establishing a redundant depth data fusion model, estimating underground high-precision dense point cloud data, estimating the reflectivity of the surface of an underground environment in combination with the depth information of the dense high-precision point cloud, re-rendering the image in combination with an illumination coefficient, and recovering and enhancing the image in a diffuse reflection area of the underground environment; good data support is provided for the positioning of the robot, and the problems of sensing and accurate modeling of complex coal mine environments are solved. The invention provides an image enhancement algorithm based on active light source luminosity compensation, which overcomes the low-illumination underground environment and recovers the image texture.
Detailed Description
The present invention will be further described with reference to the following examples. The examples are set forth merely to aid understanding of the invention. It should be noted that a person skilled in the art can make several modifications to the invention without departing from its principle, and such modifications and improvements also fall within the protection scope of the claims of the present invention.
As an embodiment, the method for enhancing the position perception image of the coal mine underground robot based on the active light source luminosity compensation comprises the following working steps:
step 1, processing the laser point cloud data and the original depth image data, and performing plane fitting and segmentation on the noisy dense depth image data in the original depth image data to obtain small regions; establishing a Bayesian-kriging-based data fusion model within each small region to obtain the optimal weight coefficients, and substituting the optimal weight coefficients into the model to obtain the reconstructed dense high-precision point cloud;
step 1.1, adopting a geometric-space optimal approximation algorithm for the laser point cloud data to preprocess the laser point cloud data and the original depth image data: taking the extrinsic relationship between the sensors as known, converting the laser point cloud data and the original depth image data into the same reference coordinate system;
the laser point cloud data and the original depth image data are preprocessed as follows: acquire laser points with the lidar and form a first group of polar coordinate strings from them; acquire a depth image with the depth camera, the depth image containing first pixel points; each first pixel point is a two-dimensional data point whose pixel value corresponds to depth information and whose position in the depth image corresponds to spatial information; compute a first angle from each first pixel point to the lidar; read the depth information of the first pixel points and obtain from it a first distance to the lidar for each column of first pixel points; combine each first distance with its first angle into a point, thereby obtaining a second group of polar coordinate strings; perform sequence fusion of the first and second groups of polar coordinate strings by angle, converting the laser point cloud data and the original depth image data into the same reference coordinate system;
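A minimal sketch of this preprocessing, assuming the camera and lidar are co-located after extrinsic alignment so the bearing of each depth column can be computed from pinhole intrinsics fx and cx; the intrinsics and function names are illustrative assumptions, not values specified by the method:

```python
import numpy as np

def depth_image_to_polar(depth, fx, cx):
    """Convert each column of a depth image into a (range, bearing) pair.

    depth: float array (H, W) of depth values (0 = invalid);
    fx, cx: assumed pinhole intrinsics (focal length, principal point, pixels).
    """
    w = depth.shape[1]
    angles = np.arctan2(np.arange(w) - cx, fx)   # first angle per pixel column
    valid = np.where(depth > 0, depth, np.nan)
    ranges = np.nanmin(valid, axis=0)            # first distance per column
    return np.stack([ranges, angles], axis=1)    # second group of polar coordinates

def fuse_by_angle(lidar_polar, cam_polar):
    """Sequence-fuse two (range, bearing) scans by sorting on bearing angle."""
    merged = np.vstack([lidar_polar, cam_polar])
    return merged[np.argsort(merged[:, 1])]
```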
step 1.2, assuming the original depth image data follow a Gaussian distribution, performing plane fitting and segmentation on the noisy dense depth image data in the original depth image data with the RANSAC (random sample consensus) algorithm, and estimating the Gaussian parameters of each pixel point from the plane-fitting equation of each small region obtained after segmentation;
the RANSAC random consensus algorithm is commonly used for feature point matching in SLAM, and specifically comprises the following steps:
the RANSAC random consensus algorithm searches for effective data from original depth image data of the dense depth image data containing noise, selects a minimum data set capable of estimating a fitting plane, estimates the fitting plane by using the data set, substitutes all the dense depth image data containing the noise into the fitting plane, and calculates the original depth image data without the noise belonging to the fitting plane; comparing the number of the original depth image data without noise of the current fitting plane and the best fitting plane deduced before, and recording the model parameters of the original depth image data without noise at the maximum and the number of the original depth image data without noise; repeating the previous steps until the fitting plane meets the requirement; then dividing the fitting plane meeting the requirements into a plurality of small areas;
step 1.3, establishing a Bayesian-kriging-based data fusion model in the small regions obtained after the segmentation in step 1.2, and, on the premise of unbiased optimal estimation, minimizing the estimation variance to establish an optimization equation;
let Z(x) denote a regionalized variable on region D and denote the corresponding random function by {Z(x), x ∈ D}; Z(x) represents the observed data, called hard data for short. Let M(x) denote another regionalized variable on region D and denote the corresponding random function by {M(x), x ∈ D}; M(x) represents the guess data, called soft data for short. Given a set of observations [Z(x_i), i = 1, 2, …, N] of the random function {Z(x), x ∈ D}, define the random function

$$Z_T(x) = Z(x) - \mu_M(x), \quad x \in D$$

where $\mu_M(x)$ is the mathematical expectation of M(x). For any set of observations $\{Z_T(x_i) = Z(x_i) - \mu_M(x_i),\ i = 1, 2, \ldots, N\}$, the estimate $Z^*(x_0)$ obtained by Bayesian kriging has the form

$$Z^*(x_0) = \mu_M(x_0) + \sum_{i=1}^{N} \lambda_i\, Z_T(x_i)$$

where $x_0$ is a point in region D and $\lambda_i$ (i = 1, 2, …, N) are the weight coefficients to be determined.

From the unbiasedness of the estimate and the minimization of the estimation variance, the Lagrange method yields the Bayesian kriging system of equations for the weight coefficients $\lambda_i$:

$$\begin{cases} \displaystyle\sum_{j=1}^{N} \lambda_j\, \gamma_{Z|M}(x_i, x_j) + \beta = \gamma_Z(x_i, x_0), & i = 1, 2, \ldots, N \\ \displaystyle\sum_{i=1}^{N} \lambda_i = 1 \end{cases}$$

where β is the Lagrange constant, $\gamma_{Z|M}$ is the conditional variogram of the random function Z(x) given M(x), and $\gamma_Z$ is the variogram of Z(x). Solving the Bayesian kriging system yields the values of $\lambda_i$ and β; with the $\lambda_i$ found, the minimum estimation error variance follows as

$$\sigma_E^2(x_0) = \sum_{i=1}^{N} \lambda_i\, \gamma_Z(x_i, x_0) + \beta$$
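A numerical sketch of solving this kriging system for the weights, assuming an exponential variogram with illustrative sill and range and, for simplicity, using the same model for both $\gamma_{Z|M}$ and $\gamma_Z$ (the method fits these functions from the data in steps 1.4 and 1.5):

```python
import numpy as np

def variogram(h, sill=1.0, corr_range=5.0):
    """Exponential variogram model (sill and range are illustrative)."""
    return sill * (1.0 - np.exp(-h / corr_range))

def bayesian_kriging_weights(xs, x0):
    """xs: (N, d) observation coordinates; x0: (d,) estimation point.

    Returns the weights lambda_i, the Lagrange constant beta, and the
    minimum estimation error variance.
    """
    n = len(xs)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = variogram(np.linalg.norm(xs[:, None] - xs[None, :], axis=-1))
    A[:n, n] = 1.0                      # Lagrange column (beta)
    A[n, :n] = 1.0                      # unbiasedness constraint: sum(lambda) = 1
    g0 = variogram(np.linalg.norm(xs - x0, axis=-1))
    sol = np.linalg.solve(A, np.append(g0, 1.0))
    lam, beta = sol[:n], sol[n]
    return lam, beta, lam @ g0 + beta   # error variance as in the formula above
```

With these weights, the estimate at $x_0$ is $Z^*(x_0) = \mu_M(x_0) + \sum_i \lambda_i Z_T(x_i)$ as given above.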
step 1.4, performing spatial covariance correlation analysis on the original depth image data and the point cloud data within each independent small region;
step 1.5, fitting the variogram and the conditional variogram, solving for the optimal weight coefficients, and substituting them back into the Bayesian-kriging-based data fusion model established in step 1.3 to obtain the reconstructed dense high-precision point cloud;
step 2, eliminating the highlight regions of the original RGB image, estimating the reflectivity of the underground environment surface by combining the depth information of the dense high-precision point cloud, and re-rendering the image by combining the illumination coefficient;
step 2.1, assuming the underground environment surface is Lambertian, detecting the highlight regions of the original RGB image and then removing them;
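A sketch of one possible highlight detection and removal, assuming an OpenCV pipeline; the brightness and saturation thresholds are illustrative assumptions, as the method does not fix a particular highlight detector:

```python
import cv2
import numpy as np

def remove_highlights(bgr, v_thresh=230, s_thresh=40):
    """Mask specular (very bright, weakly saturated) pixels and inpaint over them."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    _, s, v = cv2.split(hsv)
    mask = ((v > v_thresh) & (s < s_thresh)).astype(np.uint8) * 255
    mask = cv2.dilate(mask, np.ones((3, 3), np.uint8))  # cover highlight fringes
    return cv2.inpaint(bgr, mask, 5, cv2.INPAINT_TELEA)
```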
step 2.2, establishing an SFS (shape-from-shading) model, and preliminarily estimating the reflectivity of the underground environment surface by combining the depth information of the dense high-precision point cloud obtained in step 1;
under the Lambertian assumption, the reflected intensity satisfies

$$E \propto \cos\theta = \frac{\vec{l} \cdot \vec{n}}{|\vec{l}|\,|\vec{n}|}$$

where E is the reflection intensity (the scattered intensity, identical in every direction), $\vec{l}$ is the direction vector of the incident light, and $\vec{n}$ is the normal vector of the object surface. After the reflection intensity is normalized and mapped into [0, 1] according to the gray scale, the proportionality sign can be replaced by an equality sign, so that

$$E = \vec{l} \cdot \vec{n}$$

for unit vectors $\vec{l}$ and $\vec{n}$. Leaving aside for now the case in which the cosine of the angle between the two vectors is negative, the unit vectors are written in gradient form as

$$\vec{n} = \frac{(-p,\, -q,\, 1)}{\sqrt{1 + p^2 + q^2}}, \qquad \vec{l} = \frac{(-p_s,\, -q_s,\, 1)}{\sqrt{1 + p_s^2 + q_s^2}}$$

where p and q are the surface depth gradients and $(p_s, q_s)$ characterizes the illuminant direction; substituting them into $E = \vec{l} \cdot \vec{n}$ gives

$$E = \frac{1 + p\,p_s + q\,q_s}{\sqrt{1 + p^2 + q^2}\,\sqrt{1 + p_s^2 + q_s^2}}$$
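A sketch of the preliminary reflectivity estimate of step 2.2 under this model, assuming the direction of the onboard light source is known (the default vector below is an illustrative assumption) and that the dense depth map from step 1 has been aligned with the gray image:

```python
import numpy as np

def estimate_reflectance(gray, depth, light=(0.0, 0.0, 1.0)):
    """gray: intensity image in [0, 1]; depth: aligned dense depth map."""
    l = np.asarray(light, dtype=float)
    l /= np.linalg.norm(l)
    p = np.gradient(depth, axis=1)                 # surface gradient dz/dx
    q = np.gradient(depth, axis=0)                 # surface gradient dz/dy
    n = np.dstack([-p, -q, np.ones_like(depth)])   # normals in gradient form
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    shading = np.clip(n @ l, 1e-3, None)           # per-pixel E = l . n
    return np.clip(gray / shading, 0.0, 1.0)       # reflectivity = intensity / shading
```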
and step 2.3, establishing an optimization equation based on the Retinex image enhancement algorithm model, solving the optimized reflectivity and illumination coefficient from the optimization equation, re-rendering the image, and performing brightness recovery and texture enhancement on the image in the diffusely reflecting (weakly illuminated) regions of the underground environment.
The Retinex-based image enhancement algorithm is specifically as follows: the basic assumption of Retinex theory is that the original image S is the product of the illumination image L and the reflectance image R:
$$S(x,y) = R(x,y) \cdot L(x,y)$$

The purpose of Retinex-based image enhancement is to estimate the illumination L from the original image S, thereby decomposing out R and eliminating the influence of non-uniform illumination to improve the visual effect of the image. Transferring the image to the logarithmic domain,

$$s = \log S(x,y), \quad l = \log L(x,y), \quad r = \log R(x,y)$$

so that

$$\log S(x,y) = \log R(x,y) + \log L(x,y)$$

which converts into

$$L(x,y) = F(x,y) * S(x,y)$$

$$r(x,y) = \log S(x,y) - \log\left[F(x,y) * S(x,y)\right]$$

where r(x, y) is the output image and * denotes the convolution operation. F(x, y) is the center-surround function, expressed as

$$F(x,y) = \lambda\, e^{-\frac{x^2 + y^2}{c^2}}$$

where c is the Gaussian surround scale and λ is a normalizing scale satisfying

$$\iint F(x,y)\, dx\, dy = 1$$
convolution in the SSR algorithm is a calculation of an incident image, and its physical meaning is that the change of illumination in the image is estimated by calculating the pixel point and the surrounding area under the action of weighted average. L (x, y) is removed, leaving only the S (x, y) attribute.

Claims (3)

1. A position perception image enhancement method of a coal mine underground robot based on luminosity compensation is characterized by comprising the following working steps:
step 1, processing the laser point cloud data and the original depth image data, and performing plane fitting and segmentation on the noisy dense depth image data in the original depth image data to obtain small regions; establishing a Bayesian-kriging-based data fusion model within each small region to obtain the optimal weight coefficients, and substituting the optimal weight coefficients into the model to obtain the reconstructed dense high-precision point cloud;
and step 2, eliminating the highlight regions of the original RGB image, estimating the reflectivity of the underground environment surface by combining the depth information of the dense high-precision point cloud, re-rendering the image by combining the illumination coefficient, and restoring and enhancing the image in the diffusely reflecting regions of the underground environment.
2. The method for enhancing the position perception image of the coal mine underground robot based on the luminosity compensation as claimed in claim 1, wherein the step 1 specifically comprises the following steps:
step 1.1, adopting a geometric-space optimal approximation algorithm for the laser point cloud data to preprocess the laser point cloud data and the original depth image data: with the extrinsic relationship between the sensors known, converting the laser point cloud data and the original depth image data into the same reference coordinate system;
step 1.2, assuming the original depth image data follow a Gaussian distribution, performing plane fitting and segmentation on the noisy dense depth image data in the original depth image data with the RANSAC (random sample consensus) algorithm, and estimating the Gaussian parameters of each pixel point from the plane-fitting equation of each small region obtained after segmentation;
step 1.3, establishing a Bayesian-kriging-based data fusion model in the small regions obtained after the segmentation in step 1.2, and, on the premise of unbiased optimal estimation, minimizing the estimation variance to establish an optimization equation;
step 1.4, performing spatial covariance correlation analysis on the original depth image data and the point cloud data within each small region;
and step 1.5, fitting the variogram and the conditional variogram, solving for the optimal weight coefficients, and substituting them back into the Bayesian-kriging-based data fusion model established in step 1.3 to obtain the reconstructed dense high-precision point cloud.
3. The coal mine underground robot position perception image enhancement method based on luminosity compensation as claimed in claim 2, wherein step 2 specifically includes the following steps:
step 2.1, assuming the underground environment surface is Lambertian, detecting the highlight regions of the original RGB image and then removing them;
step 2.2, establishing an SFS (shape-from-shading) model, and preliminarily estimating the reflectivity of the underground environment surface by combining the depth information of the dense high-precision point cloud obtained in step 1;
and step 2.3, establishing an optimization equation based on the Retinex image enhancement algorithm model, solving the optimized reflectivity and illumination coefficient from the optimization equation, re-rendering the image, and performing brightness recovery and texture enhancement on the image in the diffusely reflecting regions of the underground environment.
CN202211284691.3A 2022-10-20 2022-10-20 Coal mine underground robot position perception image enhancement method based on luminosity compensation Withdrawn CN115641273A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211284691.3A CN115641273A (en) 2022-10-20 2022-10-20 Coal mine underground robot position perception image enhancement method based on luminosity compensation


Publications (1)

Publication Number Publication Date
CN115641273A (en) 2023-01-24

Family

ID=84957078

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211284691.3A Withdrawn CN115641273A (en) 2022-10-20 2022-10-20 Coal mine underground robot position perception image enhancement method based on luminosity compensation

Country Status (1)

Country Link
CN (1) CN115641273A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication
Application publication date: 20230124