CN115909025A - Terrain vision autonomous detection and identification method for small celestial body surface sampling point

Terrain vision autonomous detection and identification method for small celestial body surface sampling point

Info

Publication number: CN115909025A
Application number: CN202211211528.4A
Authority: CN (China)
Prior art keywords: point, image, terrain, pixel, camera
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 杜晓东, 陈磊, 刘雅芳, 郭宇, 满剑锋, 刘鑫, 张运, 郭璠, 曾福明
Current Assignee: Beijing Institute of Spacecraft System Engineering
Original Assignee: Beijing Institute of Spacecraft System Engineering

Abstract

The invention discloses a terrain vision autonomous detection and identification method for sampling points on the surface of a small celestial body, belonging to the technical field of small celestial body surface detection. A mechanical arm system is equipped with a binocular stereo vision camera to realize visual imaging of the sampling area and identification and measurement of the sampling point terrain. Noise in the images is filtered by a preprocessing algorithm; distortion correction is then applied to reduce imaging errors caused by the camera's optical system. After binocular epipolar correction, the left-eye and right-eye images are matched to obtain a dense disparity map, and the three-dimensional coordinates of the two-dimensional disparity map in the left-eye camera coordinate system are calculated point by point to obtain a three-dimensional point cloud map. The point cloud data are used to sense the overall terrain. Finally, the plane area of the ground is divided into grids, and the area information within a sliding window is calculated and judged for terrain, so that operable positions for the sampling device on the mechanical arm can be screened out.

Description

Terrain vision autonomous detection and identification method for small celestial body surface sampling point
Technical Field
The invention belongs to the technical field of small celestial body surface detection, and particularly relates to a method for automatically detecting and identifying the terrain of a small celestial body surface sampling point through vision.
Background
By collecting rock samples on the surface of a small celestial body and analyzing their composition, clues to the formation of the solar system and the origin of life can be explored. Carrying a mechanical arm on a small celestial body probe for autonomous surface sampling and return is an important means to this end. Owing to limitations in energy, propulsion and so on, the attachment time of the probe on the small celestial body is short. Moreover, the small celestial body is far from the earth, so the communication delay is long and real-time measurement and control are impossible. Therefore, during mechanical arm sampling, detection and identification of the sampling point terrain must be realized by on-orbit autonomous visual image processing in order to complete the sampling operation.
Detection and identification of non-cooperative targets is one of the difficult problems in the field of space vision, and the small celestial body surface rock serving as the sampling object is weakly textured and highly self-similar, which makes image processing even harder. Images of the sample rock differ very little in pixel value, so extracting features is very difficult. Identifying and measuring such unstructured, highly similar rock targets is therefore very challenging. The literature (Wang Yalin et al., surface topography analysis and simulation verification of asteroids with a gravel-pile structure, Journal of Deep Space Exploration, 2019, 6(5)) studies the surface topographic characteristics of asteroids with gravel-pile structures and proposes a method for generating an asteroid surface topography simulation model. Invention patent CN111721302A discloses a method for recognizing and sensing complex topographic features on the surface of an irregular asteroid, which can detect and distinguish craters and rock features from optical images taken by a deep space probe according to the geometric features of the asteroid's surface topography. That method is mainly applied to in-orbit navigation and obstacle avoidance of deep space probes, and is not suitable for detailed terrain identification of sampling points based on close-range imaging after landing.
Aiming at sampling point selection for the mechanical arm in small celestial body surface detection tasks, the invention provides a sampling point terrain autonomous detection and identification method based on binocular stereo vision, which has wide application prospects in such tasks.
Disclosure of Invention
The invention solves the following technical problem: aiming at the weak texture and high similarity of the small celestial body surface rock serving as the sampling object, a method based on binocular stereo vision for autonomously detecting and identifying the terrain of small celestial body surface sampling points is provided, solving the problem of a sampling mechanical arm autonomously detecting and identifying the terrain of surface sampling points in a small celestial body surface detection task.
The purpose of the invention is realized by the following technical scheme:
the mechanical arm system is provided with a binocular stereoscopic vision camera, the two-phase camera imaging common visual field covers the sample collection area so as to realize visual imaging of the sampling area and identification and measurement of the terrain of the sampling point, wherein the mechanical arm system is widely applied, which is the prior art and is not specifically explained herein. The binocular camera is provided with an active lighting source to ensure a better imaging effect. The two cameras perform synchronous imaging on the sampling area, and noise in the image is filtered by a preprocessing algorithm; then distortion correction is carried out on the image, and imaging errors caused by an optical system of the camera are reduced. And then, after binocular epipolar line correction is carried out on the images, matching is carried out on the left eye image and the right eye image, and a dense disparity map is obtained. And calculating the three-dimensional coordinates of the two-dimensional disparity map in the coordinate system of the left eye camera point by point so as to obtain a three-dimensional point cloud map. And sensing the overall terrain situation by using the point cloud data. And finally, carrying out meshing division on the plane area of the ground, calculating the area information in the sliding window, and judging the terrain of the area information, so that the operable position of the sampling device on the mechanical arm can be screened out.
A visual autonomous detection and identification method for small celestial body surface sampling points mainly comprises the following steps:
(1) Imaging the sampling area by using a binocular stereo vision camera; ensuring that the common view field of the left eye camera and the right eye camera can cover the sampling area;
(2) Preprocessing original images acquired by a left eye camera and a right eye camera, and reducing the influence of noise through a proper filtering algorithm;
(3) Distortion correction is carried out on the denoised image obtained in the step (2) after filtering processing, and errors caused by distortion of an optical system of a camera are reduced;
(4) Performing binocular epipolar line correction on the image subjected to distortion correction obtained in the step (3) to generate a left eye epipolar line corrected image and a right eye epipolar line corrected image;
(5) Searching matching points of the left-eye image and the right-eye image to calculate parallax, and thus obtaining a dense parallax image;
(6) Calculating the three-dimensional coordinates of the two-dimensional disparity map obtained in the step (5) in a coordinate system of a left eye camera point by point, and thus obtaining a three-dimensional point cloud map;
(7) Carrying out ground detection on the visible area by using the point cloud data, and determining the overall terrain situation in the sampling area;
(8) And carrying out meshing in a plane area where the ground is positioned, and calculating area information in a sliding window so as to judge the terrain.
Further, the preprocessing in the step (2) is to perform median filtering processing on the left eye image and the right eye image, and the size of a filtering window is m × m, and the calculation method includes the following steps:
a1 The width and height of the image are W, H respectively, the image boundary is expanded, the image width and height are changed into W +2 x [ m/2] and H +2 x [ m/2], and the expanded image pixel is set as 0;
a2 For a point (u) on the original image ori ,v ori ) The gray value is I (u) ori ,v ori ) The median filter calculation is as follows:
Figure BDA0003875400010000021
wherein G (u) f ,v f ) W is a filtering template of m × m, and i and j represent coordinates of pixel points on the template W.
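The median filtering step above can be sketched as follows (an illustrative numpy implementation, not the on-board code; `median_filter` is a name chosen here):

```python
import numpy as np

def median_filter(img, m=3):
    """m x m median filter with zero padding, following steps a1-a2:
    the border is padded by floor(m/2) zeros so the output keeps the
    input size, and each output pixel is the median of its window."""
    pad = m // 2
    padded = np.pad(img, pad, mode="constant", constant_values=0)
    out = np.empty_like(img)
    H, W = img.shape
    for v in range(H):
        for u in range(W):
            out[v, u] = np.median(padded[v:v + m, u:u + m])
    return out
```

For a salt-and-pepper spike, the window median suppresses the outlier while leaving uniform regions untouched, which is why median filtering suits weakly textured rock images better than mean filtering.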
Further, the distortion correction in the step (3) includes the following calculation steps:
the two-dimensional pixel homogeneous coordinate of the k-th pixel point of the image before distortion correction is:

p_k = [u_k, v_k, 1]^T
the two-dimensional physical homogeneous coordinate of the k-th pixel point of the image before distortion correction is:

[x_k, y_k, 1]^T = A^(-1) [u_k, v_k, 1]^T  ②

where matrix A is the internal parameter matrix of the camera and A^(-1) denotes the inverse of matrix A,
lens distortion horizontal component value:

x_d = x_k(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + 2 p_1 x_k y_k + p_2(r^2 + 2 x_k^2)  ③
lens distortion vertical component value:

y_d = y_k(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + p_1(r^2 + 2 y_k^2) + 2 p_2 x_k y_k  ④
where

r^2 = x_k^2 + y_k^2,

k_1, k_2, k_3 are the first-, second- and third-order radial distortion coefficients, and p_1, p_2 are the first- and second-order tangential distortion coefficients,
then the distortion-corrected two-dimensional physical homogeneous coordinate of the k-th pixel point is:

[x'_k, y'_k, 1]^T = [x_d, y_d, 1]^T
which is converted to the two-dimensional pixel homogeneous coordinate:

[u'_k, v'_k, 1]^T = A [x'_k, y'_k, 1]^T  ⑤
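The per-pixel distortion model can be sketched as follows (numpy; `distort_point` is an illustrative name, and the Brown radial/tangential model shown is the standard form the equations above follow):

```python
import numpy as np

def distort_point(u, v, A, k1, k2, k3, p1, p2):
    """Map one ideal pixel through the radial/tangential distortion
    model of equations (2)-(5) and back to pixel coordinates."""
    # pixel -> normalized physical coordinates: [x, y, 1]^T = A^(-1) [u, v, 1]^T
    x, y, _ = np.linalg.inv(A) @ np.array([u, v, 1.0])
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)  # horizontal component
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y  # vertical component
    # physical -> pixel coordinates: [u', v', 1]^T = A [x', y', 1]^T
    ud, vd, _ = A @ np.array([xd, yd, 1.0])
    return ud, vd
```

In practice the correction is usually implemented as a remap table built by evaluating this forward model over the output pixel grid.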
further, in the step (4), the correction formula of the epipolar line correction is:

λ [u, v, 1]^T = M′ R_rec R^(-1) M^(-1) [u_c, v_c, 1]^T  ⑥

where [u_c, v_c, 1]^T is the homogeneous pixel coordinate of a spatial point in the image before epipolar correction, [u, v, 1]^T is the homogeneous pixel coordinate of the spatial point in the epipolar-corrected image, M is the internal parameter matrix of the camera, R is the rotation matrix of the camera coordinate system, M′ is the corrected internal parameter matrix of the camera, R_rec is the corrected camera coordinate system rotation matrix, and λ ≠ 0 is a constant.
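The epipolar correction amounts to warping each image by a fixed homography built from the original and corrected camera parameters. A minimal sketch under the common composition H = M′·R_rec·Rᵀ·M⁻¹ (`rectify_pixel` is an illustrative name; the exact composition depends on convention):

```python
import numpy as np

def rectify_pixel(uc, vc, M, R, M_new, R_rec):
    """Map a pixel of the original image into the epipolar-rectified
    image. For a rotation matrix R, R.T equals R^(-1)."""
    H = M_new @ R_rec @ R.T @ np.linalg.inv(M)
    w = H @ np.array([uc, vc, 1.0])
    return w[0] / w[2], w[1] / w[2]  # divide out the scale lambda
```

With identical intrinsics and identity rotations the homography reduces to the identity, which gives a quick sanity check of the composition.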
Further, in the step (5), a block matching method is used to search the right-eye image for the matching point of each left-eye image pixel and calculate the disparity, comprising the following steps:
b1 Compute the difference between two pixels, i.e. for gray level similarity measurements at different disparities:
e(u, v, d) = |G_L(u, v) − G_R(u − d, v)|  ⑦
where G(u, v) is the gray value of the pixel with coordinate (u, v) in the pixel coordinate system, and d is the disparity.
B2) A window around the candidate matching point is selected as the similarity measurement area, with the corresponding pixel at the center of the window. Within the selected window, the matching costs of the corresponding pixels are accumulated, and the result is taken as the matching similarity measure of the point:

E(u, v, d) = Σ_{(i,j)∈S} e(u + i, v + j, d)  ⑧

where S is the similarity measurement area, generally an n × n rectangular region.
B3) Within the search range, the point with the minimum accumulated matching cost is selected as the final matching point.
B4) The disparity is calculated pixel by pixel over the whole image, finally yielding a dense disparity map.
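Steps B1-B4 describe sum-of-absolute-differences (SAD) block matching, which can be sketched as follows (numpy; parameters are illustrative, and real implementations typically add subpixel refinement and left-right consistency checks):

```python
import numpy as np

def block_match_disparity(left, right, n=5, max_disp=16):
    """Dense disparity by SAD block matching: for each left-image pixel,
    accumulate |G_L - G_R| over an n x n window (eq. 8) for every
    candidate disparity d and keep the d with minimum cost (step B3)."""
    h = n // 2
    H, W = left.shape
    disp = np.zeros((H, W), dtype=np.int32)
    L, R = left.astype(np.int64), right.astype(np.int64)
    for v in range(h, H - h):
        for u in range(h, W - h):
            best_cost, best_d = None, 0
            for d in range(min(max_disp, u - h + 1)):  # keep window in bounds
                cost = np.abs(L[v - h:v + h + 1, u - h:u + h + 1] -
                              R[v - h:v + h + 1, u - d - h:u - d + h + 1]).sum()
                if best_cost is None or cost < best_cost:
                    best_cost, best_d = cost, d
            disp[v, u] = best_d
    return disp
```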
Further, in the step (6), for any point (u, v) in the two-dimensional disparity map obtained by stereo matching, the three-dimensional coordinate (X, Y, Z) in the left-eye camera coordinate system is calculated according to the formula

Z = f_x B / d,  X = (u − u_0) Z / f_x,  Y = (v − v_0) Z / f_y  ⑨

where f_x, f_y are the equivalent focal lengths of the camera, (u_0, v_0) is the principal point pixel coordinate, d is the disparity at (u, v), and B is the baseline length.
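The triangulation of step (6) can be sketched as follows (numpy; `disparity_to_points` is an illustrative name):

```python
import numpy as np

def disparity_to_points(disp, fx, fy, u0, v0, B):
    """Convert a disparity map into 3-D points in the left-camera
    frame: Z = fx*B/d, X = (u-u0)*Z/fx, Y = (v-v0)*Z/fy (eq. 9).
    Zero disparity (no match) is mapped to infinite depth."""
    H, W = disp.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    d = disp.astype(float)
    Z = np.where(d > 0, fx * B / np.maximum(d, 1e-12), np.inf)
    X = (u - u0) * Z / fx
    Y = (v - v0) * Z / fy
    return np.stack([X, Y, Z], axis=-1)  # shape (H, W, 3)
```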
Further, in the step (7), the ground detection of the visible area comprises the following steps:
C1) Randomly select K points to fit a plane; with the plane equation Z = AX + BY + C, the parameters are solved by least squares:

[A, B, C]^T = (M^T M)^(-1) M^T Z_K  ⑩

where the l-th row of M is [X_l, Y_l, 1] and Z_K = [Z_1, …, Z_K]^T,
where (X_l, Y_l, Z_l) is the three-dimensional space coordinate of the l-th point.

C2) The remaining points are used to verify the plane fit; the distance of each point to the plane is calculated as:

d_l = |A X_l + B Y_l − Z_l + C| / √(A^2 + B^2 + 1)  ⑪
Set a flatness threshold T and, for each group of plane parameters, count the number N_c of points whose distance to the plane is less than T. After multiple cycles, take the maximum N_c and take the corresponding plane as the ground of the visible area.
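Steps C1-C2 form a RANSAC-style ground fit, which can be sketched with a minimal sample of K = 3 points per hypothesis (numpy; the iteration count and threshold are illustrative choices, and `ransac_ground_plane` is a name chosen here):

```python
import numpy as np

def ransac_ground_plane(points, iters=200, thresh=0.01, seed=0):
    """Repeatedly least-squares fit Z = A*X + B*Y + C to a random
    minimal sample (C1), count points within the flatness threshold
    (C2), and keep the plane with the largest inlier count N_c."""
    rng = np.random.default_rng(seed)
    best_count, best_plane = -1, None
    for _ in range(iters):
        idx = rng.choice(len(points), size=3, replace=False)
        Mk = np.column_stack([points[idx, 0], points[idx, 1], np.ones(3)])
        A, B, C = np.linalg.lstsq(Mk, points[idx, 2], rcond=None)[0]
        # distance of every point to the plane A*X + B*Y - Z + C = 0 (eq. 11)
        d = np.abs(A * points[:, 0] + B * points[:, 1] - points[:, 2] + C)
        d /= np.sqrt(A * A + B * B + 1.0)
        count = int((d < thresh).sum())
        if count > best_count:
            best_count, best_plane = count, (A, B, C)
    return best_plane, best_count
```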
Further, in the step (8), the process of judging the terrain by regions comprises the following steps:
S1) Determine the grid division area with reference to the operation area of the sampling device and similar factors, and divide the point cloud set into p × q grid areas.
S2) For each grid area, find the distance value D_i between each point in the area and the ground, and calculate the roughness of the current window, expressed by a histogram descriptor: histogram H is equally divided into several bins from 0 to the maximum distance D_max, and the distances are counted into it.
S3) Set thresholds and judgment rules, and judge from the roughness histogram whether the regional terrain allows the task to be executed.
S4) Set a sliding window and a sliding step, perform the above roughness calculation on the sliding window, and judge whether it can execute the task.
S5) Mark the grid areas meeting the task requirements, record the terrain parameters inside each area, and form an operable area information list.
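Steps S1-S5 can be sketched as follows (numpy; the cell size, bin count and the `flat_ratio` operability rule are illustrative assumptions, since the thresholds and judgment rules are left open above):

```python
import numpy as np

def grid_roughness(points, plane, cell=0.1, bins=15, thresh=0.02, flat_ratio=0.9):
    """Split points into grid cells (S1), build a point-to-ground
    distance histogram per cell (S2), and flag a cell operable when
    at least flat_ratio of its points lie within thresh of the plane
    (an illustrative S3 rule). Returns {cell: (histogram, operable)}."""
    A, B, C = plane
    d = np.abs(A * points[:, 0] + B * points[:, 1] - points[:, 2] + C)
    d /= np.sqrt(A * A + B * B + 1.0)
    gi = (points[:, 0] // cell).astype(int)
    gj = (points[:, 1] // cell).astype(int)
    result = {}
    for key in set(zip(gi.tolist(), gj.tolist())):
        mask = (gi == key[0]) & (gj == key[1])
        dist = d[mask]
        hist, _ = np.histogram(dist, bins=bins,
                               range=(0.0, max(float(dist.max()), 1e-9)))
        result[key] = (hist, float((dist < thresh).mean()) >= flat_ratio)
    return result
```

A sliding window (S4) is the same computation evaluated on overlapping cells offset by the step length.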
Compared with the prior art, the invention has the beneficial effects that:
1. aiming at the problem of topographic detection and identification of sampling points on the surface of a small celestial body, the invention provides an autonomous detection and identification method based on binocular stereovision, and the autonomous detection and identification problem of rocks on the surface of the small celestial body with weak textures and high similarity is effectively solved;
2. the terrain detection and identification method for the small celestial body surface sampling point can realize on-orbit autonomous data processing, and effectively solves the problems that the communication time is prolonged in a small celestial body detection task, and a ground system cannot process in real time;
3. the visual detection and identification method adopted by the invention has the characteristics of simplicity, high efficiency and reliability, and is suitable for the space environment with deficient computing resources.
Drawings
FIG. 1 is a schematic view of a configuration structure of a binocular stereo camera according to the present invention;
FIG. 2 is a flow chart of the sampling point terrain autonomous detection and identification method based on binocular stereo vision;
FIG. 3 is a schematic diagram of the three-dimensional planar grid area division according to the present invention.
Detailed Description
In order that the objects and advantages of the invention will be more clearly understood, the following description is given in conjunction with the accompanying examples. It is to be understood that the following text is merely illustrative of one or more specific embodiments of the invention and does not strictly limit the scope of the invention as specifically claimed.
Examples
As shown in fig. 1, the binocular stereo camera is mounted on the ground-facing surface of the small celestial body probe to image the sampling region and realize detection and identification of the terrain there. The optimal observation range is ensured by the design of the camera's position and attitude on the whole device. The binocular stereo camera is provided with an active light source so that scenery in the common field of view is imaged well, and synchronous acquisition ensures that the left-eye and right-eye cameras image at the same moment to enable stereo vision calculation.
As shown in fig. 2, the sampling point terrain autonomous detection and identification method based on binocular stereo vision mainly includes the following steps:
(1) Images of the binocular camera are preprocessed to reduce the effect of noise. Given the imaging characteristics of the sample rock, the preprocessing applies median filtering to the left-eye and right-eye images, which smooths the data while preserving fine sharp details. The filtering window size is 3 × 3, and the specific calculation process is as follows:
a1 W, H, the image boundary is expanded, the image width and height are changed into W +2 and H +2, and the expanded image pixel is set to 0.
A2) For a point (u_ori, v_ori) on the original image with gray value I(u_ori, v_ori), the 3 × 3 median filter is calculated as:

G(u_f, v_f) = med{ I(u_f + i, v_f + j) | (i, j) ∈ W }  ①

where G(u_f, v_f) is the filtered pixel gray value and W is the 3 × 3 filtering template, so i and j are both integers in the interval [−1, 1].
(2) Distortion correction is carried out on the left eye filtered image and the right eye filtered image, and the calculation process is as follows (taking the image of one of the cameras as an example):
the two-dimensional pixel homogeneous coordinate of the k-th pixel point of the image before distortion correction is:

p_k = [u_k, v_k, 1]^T
the two-dimensional physical homogeneous coordinate of the k-th pixel point of the image before distortion correction is:

[x_k, y_k, 1]^T = A^(-1) [u_k, v_k, 1]^T  ②

where matrix A is the internal parameter matrix of the camera and A^(-1) denotes the inverse of matrix A.
Lens distortion level component values:
Figure BDA0003875400010000064
lens distortion vertical component value:

y_d = y_k(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + p_1(r^2 + 2 y_k^2) + 2 p_2 x_k y_k  ④
where

r^2 = x_k^2 + y_k^2,

k_1, k_2, k_3 are the first-, second- and third-order radial distortion coefficients, and p_1, p_2 are the first- and second-order tangential distortion coefficients.
Then the distortion corrected two-dimensional physical homogeneous coordinate of the kth pixel point:
Figure BDA0003875400010000073
which is converted to the two-dimensional pixel homogeneous coordinate:

[u'_k, v'_k, 1]^T = A [x'_k, y'_k, 1]^T  ⑤
(3) Epipolar line correction is performed on the distortion-corrected image; the calculation process is as follows (taking the image of one of the cameras as an example):

λ [u, v, 1]^T = M′ R_rec R^(-1) M^(-1) [u_c, v_c, 1]^T  ⑥

where [u_c, v_c, 1]^T is the homogeneous pixel coordinate of a spatial point in the image before epipolar correction, [u, v, 1]^T is the homogeneous pixel coordinate of the spatial point in the epipolar-corrected image, M is the internal parameter matrix of the camera, R is the rotation matrix of the camera coordinate system, M′ is the corrected internal parameter matrix of the camera, R_rec is the corrected camera coordinate system rotation matrix, and λ ≠ 0 is a constant.
(4) The corresponding (homonymous) points in the left-eye and right-eye images are searched and matched; the calculation process is as follows:
B1) First compute the difference between two pixels:

e(u, v, d) = |G_L(u, v) − G_R(u − d, v)|  ⑦

where G(u, v) is the gray value of the pixel with coordinate (u, v) in the pixel coordinate system and d is the disparity.
B2) A window around the candidate matching point is selected as the similarity measurement area, with the corresponding pixel at the center of the window. Within the selected window, the matching costs of the corresponding pixels are accumulated, and the result is taken as the matching similarity measure of the point:

E(u, v, d) = Σ_{(i,j)∈S} e(u + i, v + j, d)  ⑧

where S is the similarity measurement area, generally an n × n rectangular region; in the present example n = 21, determined by the resolution of the test image.
B3) Within the search range, the point with the minimum accumulated matching cost is selected as the final matching point;
B4) The disparity is calculated pixel by pixel over the whole image, finally yielding a dense disparity map.
(5) For any point (u, v) in the two-dimensional disparity map obtained by stereo matching, the three-dimensional coordinate (X, Y, Z) in the left-eye camera coordinate system is calculated by the formula

Z = f_x B / d,  X = (u − u_0) Z / f_x,  Y = (v − v_0) Z / f_y  ⑨

where f_x, f_y are the equivalent focal lengths of the camera, (u_0, v_0) is the principal point pixel coordinate, d is the disparity at (u, v), and B is the baseline length.
(6) And performing ground detection on the visible area through the point cloud data, wherein the calculation process is as follows:
C1) Randomly select K points to fit a plane; with the plane equation Z = AX + BY + C, the parameters are solved by least squares:

[A, B, C]^T = (M^T M)^(-1) M^T Z_K  ⑩

where the l-th row of M is [X_l, Y_l, 1] and Z_K = [Z_1, …, Z_K]^T,
where (X_l, Y_l, Z_l) is the three-dimensional space coordinate of the l-th point.

C2) The remaining points are used to verify the plane fit; the distance of each point to the plane is calculated as:

d_l = |A X_l + B Y_l − Z_l + C| / √(A^2 + B^2 + 1)  ⑪

Set a flatness threshold T and, for each group of plane parameters, count the number N_c of points whose distance to the plane is less than T. After multiple cycles, take the maximum N_c and take the corresponding plane as the ground of the visible area.
(7) Carrying out mesh division in a plane area where the ground is located, judging the terrain of the ground, and carrying out the following calculation process:
s1) performing region division and sliding window presetting in the detected plane, as shown in fig. 3. The gridding size may be determined according to the single point operating area of the sampling device. For example, if the single-point operation area of the sampling device is 100mm × 100mm and the common field of view of the binocular stereo camera is 1.2m × 1.1m, 12 × 11 grid regions may be set, and each grid size is set to 100mm × 100mm.
S2) For each grid area, find the distance value D_i between each point in the area and the ground, and calculate the roughness of the current window, expressed by a histogram descriptor: histogram H is equally divided into 15 equal parts from 0 to the maximum distance D_max, and the distances are counted into it.
S3) Set histogram interval thresholds and corresponding judgment rules, and judge from the roughness histogram whether the region can execute a certain type of task.
S4) Set a sliding window and a sliding step; with reference to the grid size, the sliding window can be set to 100 mm × 100 mm with a sliding step of 50 mm. Perform the above roughness calculation on the sliding window and judge whether it can execute the task.
S5) Mark the grid areas meeting the task requirements, record the terrain parameters inside each area, and form an operable area information list.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "disposed" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; may be mechanically coupled, may be directly coupled, or may be indirectly coupled through an intermediary. To those of ordinary skill in the art, the specific meanings of the above terms in the present invention are understood according to specific situations. In the description herein, references to the description of "one embodiment," "an example," "a specific example" or the like are intended to mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that those skilled in the art can make various improvements and modifications without departing from the principle of the present invention, and these improvements and modifications should also be construed as the protection scope of the present invention. Structures, devices, and methods of operation not specifically described or illustrated herein are generally practiced in the art without specific recitation or limitation.

Claims (8)

1. A terrain vision autonomous detection and identification method for small celestial body surface sampling points comprises the following steps:
(1) Imaging the sampling area by using a binocular stereo vision camera; ensuring that the common view field of the left eye camera and the right eye camera can cover the sampling area;
(2) Preprocessing original images acquired by a left eye camera and a right eye camera, and reducing the influence of noise through a filtering algorithm;
(3) Distortion correction is carried out on the denoised image obtained in the step (2) after filtering processing, and errors caused by distortion of an optical system of a camera are reduced;
(4) Performing binocular epipolar line correction on the image subjected to distortion correction obtained in the step (3) to generate a left eye epipolar line corrected image and a right eye epipolar line corrected image;
(5) Searching matching points of the left eye image and the right eye image to calculate parallax, and obtaining a dense parallax image;
(6) Calculating the three-dimensional coordinates of the two-dimensional disparity map obtained in the step (5) in a coordinate system of a left eye camera point by point, and thus obtaining a three-dimensional point cloud map;
(7) Carrying out ground detection on the visible area by using the point cloud data, and determining the overall terrain situation in the sampling area;
(8) And carrying out grid division in a plane area where the ground is positioned, and calculating area information in a sliding window so as to judge the terrain.
2. The method for terrain visual autonomous detection and identification of small celestial surface sampling points according to claim 1, characterized in that: the preprocessing in the step (2) is to perform median filtering processing on the left eye image and the right eye image, the size of a filtering window is m multiplied by m, and the calculation method comprises the following steps:
a1) The width and height of the image are W and H respectively; the image boundary is expanded so that the width and height become W + 2×⌊m/2⌋ and H + 2×⌊m/2⌋, and the expanded image pixels are set to 0;
a2) For a point (u_ori, v_ori) on the original image with gray value I(u_ori, v_ori), the median filter is calculated as:

G(u_f, v_f) = med{ I(u_f + i, v_f + j) | (i, j) ∈ W }  ①

where G(u_f, v_f) is the gray value of the pixel after filtering, W is the m × m filtering template, and i, j represent the coordinates of pixel points on template W.
3. The method for terrain visual autonomous detection and identification of small celestial surface sampling points according to claim 2, characterized in that: the distortion correction in the step (3) includes the following calculation steps:
the two-dimensional pixel homogeneous coordinate of the k-th pixel point of the image before distortion correction is:

p_k = [u_k, v_k, 1]^T
the two-dimensional physical homogeneous coordinate of the k-th pixel point of the image before distortion correction is:

[x_k, y_k, 1]^T = A^(-1) [u_k, v_k, 1]^T  ②

where matrix A is the internal parameter matrix of the camera and A^(-1) denotes the inverse of matrix A,
lens distortion horizontal component value:

x_d = x_k(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + 2 p_1 x_k y_k + p_2(r^2 + 2 x_k^2)  ③
lens distortion vertical component value:

y_d = y_k(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + p_1(r^2 + 2 y_k^2) + 2 p_2 x_k y_k  ④
where

r^2 = x_k^2 + y_k^2,

k_1, k_2, k_3 are the first-, second- and third-order radial distortion coefficients, and p_1, p_2 are the first- and second-order tangential distortion coefficients,
then the distortion-corrected two-dimensional physical homogeneous coordinate of the k-th pixel point is:

[x'_k, y'_k, 1]^T = [x_d, y_d, 1]^T
which is converted to the two-dimensional pixel homogeneous coordinate:

[u'_k, v'_k, 1]^T = A [x'_k, y'_k, 1]^T  ⑤
4. The method for automatically detecting and identifying the terrain of the sampling point on the surface of the small celestial body according to claim 3, characterized in that: in the step (4), the correction formula of the epipolar line correction is:

λ [u, v, 1]^T = M′ R_rec R^(-1) M^(-1) [u_c, v_c, 1]^T  ⑥

where [u_c, v_c, 1]^T is the homogeneous pixel coordinate of a spatial point in the image before epipolar correction, [u, v, 1]^T is the homogeneous pixel coordinate of the spatial point in the epipolar-corrected image, M is the internal parameter matrix of the camera, R is the rotation matrix of the camera coordinate system, M′ is the corrected internal parameter matrix of the camera, R_rec is the corrected camera coordinate system rotation matrix, and λ ≠ 0 is a constant.
5. The method for terrain visual autonomous detection and identification of small celestial surface sampling points according to claim 4, characterized in that: in the step (5), searching for a matching point of the left eye image in the right eye image by using a block matching method to calculate the parallax, and the method comprises the following steps:
B1) Calculate the difference between two pixels, i.e. the gray level similarity measure at different disparities:
e(u,v,d) = |G_L(u,v) - G_R(u-d,v)| ⑦
wherein G(u,v) is the gray value of the pixel point with coordinate (u,v) in the pixel coordinate system (the subscripts L and R denote the left and right eye images), and d is the parallax.
B2) Select a window surrounding the matching point as the similarity measurement area, with the corresponding pixel at the center of the window; within the selected window, sum the matching costs of the pixels, and take the result as the matching similarity measure of the point:
C(u,v,d) = Σ_{(i,j)∈S} e(u+i, v+j, d)
wherein S is the similarity measurement area, generally an n × n rectangular region.
B3) Within the search range, select the point corresponding to the minimum summed matching cost as the final matching point;
B4) Calculate the parallax pixel by pixel over the whole image to finally obtain a dense disparity map.
6. The terrain vision autonomous detection and identification method for small celestial body surface sampling points according to claim 5, characterized in that: in the step (6), for any point (u, v) in the two-dimensional disparity map obtained by stereo matching, the three-dimensional coordinate (X, Y, Z) in the left eye camera coordinate system is calculated according to the following formulas:
Z = f_x B / d
X = (u - u_0) Z / f_x
Y = (v - v_0) Z / f_y
wherein f_x and f_y are the equivalent focal lengths of the camera, (u_0, v_0) is the pixel coordinate of the principal point, and B is the baseline length.
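The three formulas of claim 6 vectorise directly over the whole disparity map. An illustrative numpy sketch (function name and the zero-disparity masking convention are assumptions, not from the patent):

```python
import numpy as np

def disparity_to_points(disp, fx, fy, u0, v0, B):
    """Reproject a disparity map into left-camera 3-D coordinates:
    Z = fx*B/d, X = (u-u0)*Z/fx, Y = (v-v0)*Z/fy.
    Pixels with d <= 0 are left at Z = 0 (no depth)."""
    h, w = disp.shape
    v, u = np.mgrid[0:h, 0:w]                 # pixel coordinate grids
    valid = disp > 0
    Z = np.where(valid, fx * B / np.where(valid, disp, 1), 0.0)
    X = (u - u0) * Z / fx
    Y = (v - v0) * Z / fy
    return np.stack([X, Y, Z], axis=-1)       # (h, w, 3) point cloud map
```

For example, with f_x = 800 px, B = 0.1 m, and a uniform disparity of 4 px, every valid pixel reprojects to Z = 20 m, and the pixel column at u = u_0 has X = 0.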
7. The terrain vision autonomous detection and identification method for small celestial body surface sampling points according to claim 6, characterized in that: in the step (7), the ground detection of the visible area comprises the following steps:
C1) Randomly select K points to fit a plane; if the plane equation is Z = AX + BY + C, then
Z_l = A X_l + B Y_l + C, l = 1, 2, ..., K
and the plane parameters A, B and C are solved by least squares,
wherein (X_l, Y_l, Z_l) is the three-dimensional space coordinate of the l-th point.
C2) Verify the plane fitting effect with the remaining points, calculating the distance from each point to the plane:
D_l = |A X_l + B Y_l + C - Z_l| / sqrt(A^2 + B^2 + 1)
Set a flatness threshold T and, for each group of plane parameters, count the number N_c of points whose distance to the plane is less than the threshold T; perform multiple cycles, take the maximum N_c, and take the corresponding plane as the ground of the visible area.
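Claim 7 describes a RANSAC-style consensus loop. A minimal numpy sketch under stated assumptions (K = 3 points per sample, a fixed iteration count, and these function and variable names are all illustrative, not the patent's):

```python
import numpy as np

def ransac_ground(points, iters, T, rng=None):
    """Fit Z = A*X + B*Y + C by least squares on random point triples and
    keep the plane with the most points within distance threshold T."""
    rng = rng or np.random.default_rng(0)
    best, best_n = None, -1
    for _ in range(iters):
        idx = rng.choice(len(points), 3, replace=False)  # C1: random sample
        X, Y, Z = points[idx, 0], points[idx, 1], points[idx, 2]
        M = np.column_stack([X, Y, np.ones(3)])
        a, b, c = np.linalg.lstsq(M, Z, rcond=None)[0]   # least-squares A, B, C
        # C2: point-to-plane distances for ALL points, then consensus count N_c
        d = np.abs(a * points[:, 0] + b * points[:, 1] + c - points[:, 2]) \
            / np.sqrt(a * a + b * b + 1)
        n = int((d < T).sum())
        if n > best_n:
            best, best_n = (a, b, c), n
    return best, best_n  # plane with maximum N_c -> ground of the visible area
```

On data with 100 points lying exactly on a plane plus 10 far outliers, the loop recovers the plane parameters and a consensus count of 100.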
8. The terrain vision autonomous detection and identification method for small celestial body surface sampling points according to claim 1, characterized in that: in the step (8), the process of judging the terrain by regions comprises the following steps:
S1) Determine the grid division area with reference to factors such as the operation area of the sampling device, and divide the point cloud set into p × q grid areas;
S2) For each grid area, retrieve the distance value D_i from each point in the area to the ground, calculate the roughness of the current window, and represent it with a histogram descriptor, wherein the histogram H divides the interval from 0 to the maximum distance D_max into several equal parts and counts the distance values falling in each part;
S3) Set a threshold and a judgment rule, and judge from the roughness histogram whether the area terrain allows the task to be executed;
S4) Set a sliding window and a sliding step length, perform the above roughness calculation on the sliding window, and judge whether the task can be executed within it;
S5) Mark the grid areas meeting the task requirements, record the internal terrain parameters of each area, and form an operable area information list.
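Steps S2, S3, and S5 above can be sketched as follows. This is an illustrative sketch, not the patent's implementation: the sliding-window pass (S4) is omitted, and the "flat if enough points fall in the lowest-distance bin" judgment rule, the dict-of-cells input format, and all names are assumptions:

```python
import numpy as np

def roughness_histogram(dist, bins, d_max):
    """S2: histogram descriptor of point-to-ground distances in one grid cell,
    dividing [0, d_max] into equal-width bins and counting the distances."""
    h, _ = np.histogram(dist, bins=bins, range=(0.0, d_max))
    return h

def operable_cells(dist_grid, bins, d_max, flat_bin_ratio):
    """S3 + S5: mark a cell operable when the fraction of its points in the
    lowest-distance bin reaches flat_bin_ratio (an assumed judgment rule).
    dist_grid maps (row, col) grid indices to lists of distances D_i."""
    ok = []
    for (i, j), dist in dist_grid.items():
        h = roughness_histogram(np.asarray(dist), bins, d_max)
        if h.sum() and h[0] / h.sum() >= flat_bin_ratio:
            ok.append((i, j))          # operable area information list
    return ok
```

A cell whose points all sit within the first histogram bin (near the fitted ground plane) is marked operable; a cell whose points are far from the ground is rejected.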
CN202211211528.4A 2022-09-30 2022-09-30 Terrain vision autonomous detection and identification method for small celestial body surface sampling point Pending CN115909025A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211211528.4A CN115909025A (en) 2022-09-30 2022-09-30 Terrain vision autonomous detection and identification method for small celestial body surface sampling point


Publications (1)

Publication Number Publication Date
CN115909025A true CN115909025A (en) 2023-04-04

Family

ID=86488746

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211211528.4A Pending CN115909025A (en) 2022-09-30 2022-09-30 Terrain vision autonomous detection and identification method for small celestial body surface sampling point

Country Status (1)

Country Link
CN (1) CN115909025A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116758026A (en) * 2023-06-13 2023-09-15 河海大学 Dam seepage area measurement method based on binocular remote sensing image significance analysis
CN116758026B (en) * 2023-06-13 2024-03-08 河海大学 Dam seepage area measurement method based on binocular remote sensing image significance analysis
CN116524031A (en) * 2023-07-03 2023-08-01 盐城数智科技有限公司 YOLOV8-based large-range lunar rover positioning and mapping method
CN116524031B (en) * 2023-07-03 2023-09-22 盐城数智科技有限公司 YOLOV8-based large-range lunar rover positioning and mapping method

Similar Documents

Publication Publication Date Title
CN105225482B (en) Vehicle detecting system and method based on binocular stereo vision
CN104574393B (en) A kind of three-dimensional pavement crack pattern picture generates system and method
EP2313737B1 (en) System for adaptive three-dimensional scanning of surface characteristics
CN104156536B (en) The visualization quantitatively calibrating and analysis method of a kind of shield machine cutter abrasion
CN111260773B (en) Three-dimensional reconstruction method, detection method and detection system for small obstacle
CN115909025A (en) Terrain vision autonomous detection and identification method for small celestial body surface sampling point
CN107560592B (en) Precise distance measurement method for photoelectric tracker linkage target
CN105931234A (en) Ground three-dimensional laser scanning point cloud and image fusion and registration method
JP6858415B2 (en) Sea level measurement system, sea level measurement method and sea level measurement program
CN111260715B (en) Depth map processing method, small obstacle detection method and system
IL178299A (en) Fine stereoscopic image matching and dedicated instrument having a low stereoscopic coefficient
d'Angelo et al. Dense multi-view stereo from satellite imagery
CN102997891A (en) Device and method for measuring scene depth
CN105277144A (en) Land area rapid detection method based on binocular vision and detection device thereof
CN113112588A (en) Underground pipe well three-dimensional visualization method based on RGB-D depth camera reconstruction
CN109410234A (en) A kind of control method and control system based on binocular vision avoidance
CN113920183A (en) Monocular vision-based vehicle front obstacle distance measurement method
CN106709432B (en) Human head detection counting method based on binocular stereo vision
CN102682435B (en) Multi-focus image edge detection method based on space relative altitude information
JP5501084B2 (en) Planar area detection apparatus and stereo camera system
Al-Rawabdeh et al. A robust registration algorithm for point clouds from UAV images for change detection
Marsy et al. Monitoring mountain cryosphere dynamics by time-lapse stereo-photogrammetry
Gallego et al. A variational wave acquisition stereo system for the 3-d reconstruction of oceanic sea states
CN107610170B (en) Multi-view image refocusing depth acquisition method and system
Huang et al. An Innovative Approach of Evaluating the Accuracy of Point Cloud Generated by Photogrammetry-Based 3D Reconstruction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination