CN116563377A - Mars rock measurement method based on hemispherical projection model - Google Patents


Info

Publication number
CN116563377A
Authority
CN
China
Prior art keywords
mars
camera
coordinate system
rock
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310611126.1A
Other languages
Chinese (zh)
Inventor
刘雨
郑殿
魏琳慧
吕炜琨
望育梅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Posts and Telecommunications
Original Assignee
Beijing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Posts and Telecommunications
Priority: CN202310611126.1A
Publication of CN116563377A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G06T 7/50 Depth or shape recovery
    • G06T 7/55 Depth or shape recovery from multiple images
    • G06T 7/593 Depth or shape recovery from multiple images, from stereo images
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/36 Applying a local operator, i.e. means to operate on image points situated in the vicinity of a given point; Non-linear local filtering operations, e.g. median filtering
    • G06V 10/40 Extraction of image or video features
    • G06V 10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; Using context analysis; Selection of dictionaries
    • G06V 10/757 Matching configurations of points or features
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/70 Labelling scene content, e.g. deriving syntactic or semantic representations
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G06T 2207/10012 Stereo images
    • Y02A 90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/30 Assessment of water resources

Abstract

The invention discloses a Mars rock measurement method based on a hemispherical projection model, belonging to the field of scene reconstruction. The method comprises the following steps: first, the Mars surface is photographed with a binocular camera, models are built from the camera principle and the mechanical principle of the Mars rover, and the correspondence between the pixels of each image and the real Mars surface is established, together with the pose information of the camera in a Mars global coordinate system. Then, the intrinsic parameters of each image are analyzed, distortion is corrected, and the left and right image pairs of the binocular camera are placed in row correspondence, after which a disparity map is calculated from the matched feature points. Next, the rock contours in an existing Mars surface image segmentation dataset are used to determine the pixel positions and distribution edges of the rocks in the current disparity map, and the disparity map is optimized with a disparity filtering algorithm. Meanwhile, combining the poses of different cameras, the distances and positions of rocks that appear repeatedly are calculated and verified against each other, and the distribution of rocks in the current disparity map is determined. The invention improves disparity estimation accuracy.

Description

Mars rock measurement method based on hemispherical projection model
Technical Field
The invention belongs to the field of scene reconstruction, relates to technologies such as stereo image matching, three-dimensional scene modeling and Mars rock detection, and particularly relates to a Mars rock measurement method based on a hemispherical projection model.
Background
Mars is the planet most similar to Earth and is a focus of human deep space exploration. Many Mars exploration missions have collected Mars-related datasets. In the Mars scene, rock is one of the main objects, widely distributed on the Mars surface. Detection and measurement of Mars rocks are an important basis of a Mars exploration plan, and determining the size, distance, distribution and other information of Mars rocks is one of the preconditions of Mars surface exploration; it can provide guarantees and support for tasks such as probe landing, rover driving and path selection. On the other hand, rock information helps to judge the evolution of Mars geology, can provide data for research on water resource distribution, geographic transitions and the like, and supports determination of the evolution of the Mars surface environment.
The existing measurement methods mainly focus on single characteristics of the Mars scene, and verification between data is lacking. Mars rock detection tasks are usually based on remote sensing data or ground data; because of resolution limitations, remote sensing data can hardly capture the detailed characteristics of the Mars surface, so Mars rock measurement relies mainly on ground data. Ground data are mainly collected by payload cameras, and most Mars exploration missions are equipped with binocular cameras, so Mars ranging based on binocular stereo images is the main method for determining unknown Mars surface characteristics; however, existing binocular disparity matching algorithms such as BM and SGBM are insufficient to support high-precision Mars rock measurement requirements.
In a large Mars scene, determining the positional relationship between rocks is a challenge, and research on their sizes and distribution relationships is currently a blank in the related technology. The complexity of the Mars scene results in irregular camera sampling patterns and discrete data collection, making it difficult to recover the size and position of Mars rocks in a unified scene. On the other hand, the accuracy of the prior art in Mars rock measurement is not high: performance degrades when the Mars texture is complex or repetitive or the illumination is not ideal, and the corresponding measurement techniques are not optimized for rock, so the measurement accuracy in rock areas does not meet task requirements.
The prior art neither performs Mars rock measurement in a large-scene mode nor optimizes for rock. The method in document [1] studies image matching in the navigation terrain reconstruction work of a roving probe and proposes an improved dynamic programming matching algorithm for terrain reconstruction in a lunar environment, but it cannot meet the rock measurement requirements of a Mars scene. The method proposed in document [2] uses multi-source remote sensing image data from Mars orbiters to construct fine modeling and automatic classification of Mars surface morphology; combining a photogrammetry method and a shape-from-shading method, it produces a high-resolution three-dimensional terrain of the Tianwen-1 landing area, but it does not measure rocks, and its resolution is insufficient for ground task requirements.
In summary, the prior art proposes no specific technical route or solution for Mars rock measurement, so the measurement results obtained for Mars rocks by general techniques such as stereo matching leave room for improvement.
[1] Li M L, Liu S C, Peng S. Improved dynamic programming in the lunar terrain reconstruction [J]. Opto-Electronic Engineering, 2013, 40: 6-11.
[2] Liu S C, Tong X H, Liu S J, et al. Topography modeling, mapping and analysis of China's first Mars mission Tianwen-1 landing area from remote sensing images [J]. Journal of Deep Space Exploration, 2022, 9(3): 338-347.
Disclosure of Invention
The invention provides a Mars rock measurement method based on a hemispherical projection model. Focusing on the consistency and global nature of Mars features, a projection model (PM) is adopted to establish a Mars global coordinate system to support high-precision, pixel-level Mars rock positioning; disparity is optimized by the WLS disparity filtering algorithm, improving disparity estimation accuracy; and a semantic segmentation algorithm is combined to extract Mars features in weak-texture scenes, so that they are recovered in the three-dimensional environment, giving the method better accuracy.
The Mars rock measurement method based on the hemispherical projection model comprises the following steps:
Step one: photographing the Mars surface with a binocular camera, where the same object corresponds to a left image and a right image at each moment;
Step two: analyzing each captured image, taking the camera optical axis as the shooting direction, calculating the position in the Mars three-dimensional scene corresponding to each pixel of the current image within the camera's field of view, and establishing the correspondence between pixels and the real Mars surface;
For a target point P at position (u, v) in the pixel coordinate system, its projection into the image coordinate system is:

x_i = (u - u_0) d_x,  y_i = (v - v_0) d_y

where (x_i, y_i) is the corresponding position of the target point P in the image coordinate system, (u_0, v_0) are the coordinates of the image center point, and (d_x, d_y) is the size of a pixel on the camera's photosensitive element.
The image coordinate system is then converted to the camera coordinate system, expressed as:

x_i = f x_c / z_c,  y_i = f y_c / z_c

where (x_c, y_c, z_c) are the coordinates of the target point P in the camera coordinate system and f is the focal length of the camera;
finally, the conversion from the camera coordinate system to the world coordinate system is expressed as:
(x w ,y w ,z w ) Is the coordinate of the target point P in the world coordinate system, wherein R is a rotation matrix representing the rotation of the optical axis between the world coordinate system and the initial moment, t is the translation vector from the origin of the world coordinate system to the optical center of the camera, 0 T Is a three-dimensional column vector, R, t, which collectively describes the conversion relationship of pixel points between a camera coordinate system and a world coordinate system.
Finally, combining the camera parameters, the real position of the object in the current image within the Mars surface scene is recovered; the overall pixel correspondence is:

z_c [u, v, 1]^T = K [R | t] [x_w, y_w, z_w, 1]^T

where K is the camera intrinsic matrix determined by f, (u_0, v_0) and (d_x, d_y).
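The chain of transformations above (world coordinates, then camera, image, and pixel coordinates) can be sketched as a minimal numpy example; the intrinsic and extrinsic values below are illustrative assumptions, not the patent's calibration data.

```python
import numpy as np

def world_to_pixel(p_w, R, t, f, u0, v0, dx, dy):
    """Project a world point into pixel coordinates via the pinhole model."""
    p_c = R @ p_w + t                 # world -> camera coordinates
    x_i = f * p_c[0] / p_c[2]         # camera -> image plane (perspective divide)
    y_i = f * p_c[1] / p_c[2]
    u = x_i / dx + u0                 # image plane -> pixel grid
    v = y_i / dy + v0
    return u, v

# Example: camera at the world origin looking along +z, a point 10 m ahead
R = np.eye(3)
t = np.zeros(3)
u, v = world_to_pixel(np.array([0.5, 0.0, 10.0]), R, t,
                      f=0.02, u0=640, v0=512, dx=1e-5, dy=1e-5)
# u, v are approximately (740, 512): 0.5 m of lateral offset maps 100 px right of center
```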
Step three: calculating the rotation matrix and translation vector of the camera, and obtaining the pose information of the camera in the Mars global coordinate system;
Define the rotation order of the Mars global coordinate system as z-y-x: rotating it by the respective angles brings it into coincidence with the Mars rover body coordinate system, yielding the rotation matrix R_1, expressed as:

R_1 = R_z(θ_3) R_y(θ_2) R_x(θ_1)

where the yaw angle about the z-axis is θ_3, the pitch angle about the y-axis is θ_2, and the roll angle about the x-axis is θ_1.
Similarly, the rotation matrix R_2 from the Mars rover body coordinate system to the mast coordinate system and the rotation matrix R_3 from the mast coordinate system to the camera platform coordinate system are obtained.
Combining the three rotation matrices R_1, R_2 and R_3 gives the camera attitude; combining this with the odometer and the dimensions of the Mars rover gives the pose information of the camera in the Mars global coordinate system.
The translation vector t from the origin of the Mars global coordinate system to the camera center is:

t = t_1 + R_1 t_2

where t_1 is the vector from the origin of the Mars global coordinate system to the center of the rover, and t_2 is the vector from the origin of the rover coordinate system to the center of the camera;
Step four: for each image for which the correspondence between pixels and the real Mars surface has been established, correct distortion by analyzing the image intrinsic parameters, and place the selected left and right images of the same object in row correspondence so that they lie in the same plane;
the method comprises the following steps:
First, the pixels are re-projected according to intrinsic data such as the camera focal length, imaging origin and distortion coefficients, and the distortion errors caused by the camera lens are computed and corrected.
After the left and right images of the same object are corrected, the images are aligned according to extrinsic data such as the rotation matrix and translation vector between the two images, so that the epipolar lines of the two images lie exactly on the same horizontal line.
Step five: for the left and right images lying in the same plane, detect matching feature points in the overlapping area of the two images using stereoscopic vision matching, and calculate a disparity map based on the matched feature points.
For stereoscopic vision matching detection, feature descriptors such as the Scale-Invariant Feature Transform (SIFT), Speeded-Up Robust Features (SURF) and Oriented FAST and Rotated BRIEF (ORB) are used to extract feature points from the images and perform feature point matching.
The corrected left and right images are used as the input of the SGBM algorithm, and the disparity map is determined from the binocular camera data. The mapping points of P on the left and right image planes are denoted P_left and P_right, respectively. The disparity d is the difference between the positions of P_left and P_right along the image row, and is obtained by iterative solution.
Step six, detecting rock contours in the existing Mars surface image segmentation dataset by adopting a semantic segmentation method, and determining the pixel positions and the distribution edges of the rock in the current parallax map;
First, rock identification and segmentation are performed: according to the characteristics of different rocks, rock segmentation is carried out with a deep-learning-based semantic segmentation model, a shadow method or manual marking, obtaining the range of rock pixel distribution and generating a mask. The mask selects the rock-region pixels in the image and screens out the remaining pixels, yielding the target-region image.
And determining the pixel distribution range of the rock region in the image according to the rock mask obtained by semantic segmentation, and obtaining the parallax of the rock region in the parallax map.
Step seven: according to the positions of the rock pixels, calculate the pixel positions and sizes based on the imaging principle and the triangulation principle of the binocular camera, and optimize the current disparity map with a disparity filtering algorithm;
According to the positions of the rock pixels, the distribution orientation and boundary of the corresponding pixels are found in the disparity map, and the weighted least squares filtering algorithm is used to filter the disparity map.
Given a disparity map g of size n×m, let the filtered disparity map be u; the loss function is:

E(u) = (u - g)^T (u - g) + λ (u^T D_x^T A_x D_x u + u^T D_y^T A_y D_y u)

where A_x and A_y are diagonal matrices with the smoothness weights a_x and a_y as diagonal elements, D_x and D_y are forward difference matrices, and their transposes D_x^T and D_y^T are backward difference operators. λ is a scale factor: the larger λ is, the stronger the smoothing.
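A minimal sketch of the weighted least squares idea on a single disparity row (the full 2-D case adds the y-direction term analogously); the weights a and scale factor λ below are illustrative choices, not the patent's parameters.

```python
import numpy as np

def wls_smooth(g, a, lam):
    """Minimise ||u - g||^2 + lam * sum_i a_i (u_{i+1} - u_i)^2 in closed form."""
    n = len(g)
    D = np.zeros((n - 1, n))          # forward difference matrix
    for i in range(n - 1):
        D[i, i], D[i, i + 1] = -1.0, 1.0
    A = np.diag(a)                    # diagonal smoothness-weight matrix
    # Normal equations of the loss: (I + lam * D^T A D) u = g
    return np.linalg.solve(np.eye(n) + lam * D.T @ A @ D, np.asarray(g, float))

g = np.array([0.0, 5.0, 0.2, 0.1, 6.0, 0.0])     # noisy disparity row with spikes
u = wls_smooth(g, a=np.ones(len(g) - 1), lam=10.0)
# u is a smoothed version of g: the spikes at indices 1 and 4 are flattened out
```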
Step eight: taking the pose information of the cameras in the Mars global coordinate system as the reference and combining the positions of different cameras, calculate the distances and positions of rocks that appear repeatedly, verify them against each other, and determine the distribution of rocks in the current disparity map.
According to the triangle similarity principle, the depth Z from the rock to the camera is:

Z = f b / d

where b is the baseline of the binocular camera, i.e. the distance between the optical centers of the left and right cameras, f is the focal length of the camera, and d is the disparity.
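A minimal sketch of this triangulation step; the baseline, focal length (expressed in pixels) and disparity values are illustrative.

```python
def disparity_to_depth(f_px, b, d):
    """Depth from the triangle-similarity relation Z = f * b / d."""
    if d <= 0:
        raise ValueError("disparity must be positive")
    return f_px * b / d

# Illustrative values: 0.27 m baseline, 1200 px focal length, 40 px disparity
Z = disparity_to_depth(f_px=1200.0, b=0.27, d=40.0)   # -> 8.1 m
```

Note that halving the disparity doubles the recovered depth, which is why disparity errors matter most for distant rocks.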
And calculating the size of the rock according to the semantic segmentation mask edge information.
The invention has the advantages that:
1) The Mars rock measurement method based on the hemispherical projection model describes the attitude of the Mars rover's navigation terrain camera; by modeling the attitude of the navigation camera, the correspondence between data and scene is restored, so that the position of rock in the three-dimensional scene is determined, meeting the requirements of feature positioning and measurement at large scale.
2) The Mars rock measurement method based on the hemispherical projection model provides a method for acquiring the extrinsic parameters of the camera based on the hemispherical projection model; images can be preprocessed in scenes with large data volumes to obtain the positional relationships between images, preliminarily correct them and match feature points, yielding well-matched binocular camera data.
3) The Mars rock measurement method based on the hemispherical projection model provides a disparity matching and filtering module suited to Mars rock, so that the distance and size of Mars rocks can be accurately measured in large-scale scenes and the accuracy of the related measurements is improved; the distribution of Mars rocks can be restored across multiple scenes, providing support for Mars surface exploration.
Drawings
FIG. 1 is a flow chart of a Mars rock measurement method based on a hemispherical projection model;
FIG. 2 is a schematic illustration of modeling from the mechanical principle of a Mars vehicle in accordance with the present invention;
FIG. 3 is a schematic diagram showing the conversion of pixels from an image coordinate system, a camera coordinate system and a world coordinate system according to the present invention;
FIG. 4 is a schematic diagram of a hemispherical model modeled from the mechanical principle of a Mars vehicle according to the present invention;
fig. 5 is a schematic diagram of disparity estimation based on binocular camera matching data according to the present invention.
Detailed Description
Embodiments of the present invention will be described in detail and clearly with reference to the accompanying drawings.
To acquire the distribution information of rock in the Mars scene, where the prior art does not consider extracting information globally, the Mars rock measurement method based on the hemispherical projection model acquires global information by adopting the hemispherical projection model and improves matching accuracy with a filtering algorithm. Specifically, focusing on the consistency and global nature of Mars features, a projection model (PM) is adopted to establish a Mars global coordinate system; Mars rocks in multiple scenes can be described uniformly in this coordinate system according to information such as the position and attitude of the camera, the rotation angle of the mast, and the position and attitude of the Mars rover, supporting high-precision, pixel-level Mars rock positioning. Disparity matching is performed with the SGBM algorithm and the disparity is optimized by the WLS disparity filtering algorithm, improving disparity estimation accuracy; a semantic segmentation algorithm is combined to extract Mars features in weak-texture scenes so that they are recovered in the three-dimensional environment, which provides support for Mars surface exploration.
The Mars rock measurement method based on the hemispherical projection model is shown in fig. 1, and comprises the following specific steps:
shooting the Mars ground surface by using a binocular camera, wherein the same object corresponds to a left image and a right image at each moment;
analyzing each shot image respectively, and determining the corresponding relation between the pixel points and the surface of the real Mars by using a hemispherical projection model;
the modeling is carried out from a camera principle and a mechanical principle, and the specific process is as follows:
Step 101: based on the parameters of the navigation terrain camera, the spatial position and attitude angle of the camera at shooting time are analyzed from the image. An analysis method is provided for the mechanical structure of the Zhurong rover and the mounting of its cameras;
as shown in fig. 2, the camera, camera platform, mast and spark body all have a certain rotation and displacement when the spark image is acquired. When the blessing number Mars vehicle works, firstly, the mechanical arm of the mast is unfolded upwards to a vertical position, then the cradle head is pitched up and down, and finally the mechanical arm is yawed left and right, so that the sequence imaging of the scientific detection points is completed. To describe the imaging process of the current image, the imaging pose of the current camera is determined according to the current Mars coordinates, mast angle, camera height position, and camera shooting direction.
102, projecting an image by using the extracted camera information, determining a projection source according to the position of the navigation terrain camera, and calculating the shooting range of the camera at the position;
as shown in fig. 3, (u, v) is the corresponding position of the target point P in the pixel coordinate system, (x) i ,y i ) For the corresponding position of the target point P in the image coordinate system, where (u 0 ,v 0 ) Is the principal point of the image center, (d) x ,d y ) Representing the size of the pixels on the photosensitive element of the navigational terrain camera. The projection of the pixel coordinate system into the image coordinate system can be expressed as:
(x c ,y c ,z c ) For the coordinates of the P point in the camera coordinate system, f is the focal length of the navigation terrain camera, and the conversion from the image coordinate system to the camera coordinate system can be expressed as:
(x w ,y w ,z w ) Is the coordinate of the P point in the world coordinate system, and is seated from the cameraThe transformation of the system into the world coordinate system is expressed as:
wherein R is a rotation matrix representing rotation of the optical axis between the world coordinate system and the initial time, t is a translation vector from the origin of the world coordinate system to the optical center of the camera, 0 T Is a three-dimensional column vector, R and t, which together describe the transformation relationship of the pixel points between the camera coordinate system and the world coordinate system.
Taking the optical axis of the navigation terrain camera as the shooting direction, the camera imaging range is calculated within the camera's field of view, the position corresponding to each pixel in the Mars three-dimensional scene is calculated, and the correspondence between pixels and the real Mars surface is established.
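One way to sketch this back-projection step is to intersect a pixel's viewing ray with the local ground plane; the camera height and orientation below are assumed values, and the flat-ground plane z = 0 is a simplifying assumption for illustration.

```python
import numpy as np

def pixel_ground_point(u, v, f_px, u0, v0, R, cam_pos):
    """Intersect the viewing ray of pixel (u, v) with the ground plane z = 0."""
    ray_cam = np.array([u - u0, v - v0, f_px], float)  # ray in camera coordinates
    ray_w = R @ ray_cam                                # rotate into the world frame
    if ray_w[2] >= 0:
        return None                                    # ray never reaches the ground
    s = -cam_pos[2] / ray_w[2]                         # solve cam_pos.z + s * ray_w.z = 0
    return cam_pos + s * ray_w

# Camera 1.8 m above the ground with its optical axis tilted 45 degrees downward
c = np.sqrt(2) / 2
R = np.array([[1,  0,  0],
              [0, -c,  c],
              [0, -c, -c]])
gp = pixel_ground_point(640, 512, 1200.0, 640, 512, R, np.array([0.0, 0.0, 1.8]))
# the center pixel strikes the ground 1.8 m in front of the camera: gp is (0, 1.8, 0)
```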
Step 103, converting pixels from a pixel coordinate system to an image coordinate system, a camera coordinate system and a world coordinate system in sequence by combining camera parameters, and recovering the real position of an object in an image in a Mars ground surface scene, wherein the pixel correspondence is as follows:
step 104, each pixel in the image is projected into the world coordinate system, and the approximate extent of the image distribution and whether there is overlap between the different images are determined by projection.
Step three: calculate the rotation matrix and translation vector of the camera, obtain the pose information of the camera in the Mars global coordinate system, and provide an initial position for stereoscopic vision matching;
Modeling proceeds from the mechanical principle of the Mars rover. Extrinsic parameters of the camera such as the rotation matrix R and the translation vector t are calculated according to the hemispherical projection model and used in stereoscopic vision matching; the hemispherical model is built from the mechanical structure of the Zhurong rover and the mounting of its cameras, as shown in fig. 4. The method comprises the following steps:
Define the rotation order of the Mars global coordinate system as z-y-x; rotating it by a yaw angle θ_3 about the z-axis, a pitch angle θ_2 about the y-axis and a roll angle θ_1 about the x-axis brings it into coincidence with the Mars rover body coordinate system, yielding the rotation matrix R_1, expressed as:

R_1 = R_z(θ_3) R_y(θ_2) R_x(θ_1)
Similarly, the rotation matrix R_2 from the Mars rover body coordinate system to the mast coordinate system and the rotation matrix R_3 from the mast coordinate system to the camera platform coordinate system are obtained.
Combining the three rotation matrices R_1, R_2 and R_3 gives the attitude of the navigation terrain camera; combining the odometer and the dimensions of the rover gives the pose information of the camera in the Mars global coordinate system.
The translation vector t from the origin of the Mars global coordinate system to the camera center is:

t = t_1 + R_1 t_2

where t_1 is the vector from the origin of the Mars global coordinate system to the center of the rover, and t_2 is the vector from the origin of the rover coordinate system to the center of the camera; the rotation matrix of the rover coordinate system relative to the Mars global coordinate system is R_1.
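The chained rotations R_1, R_2, R_3 and the translation t = t_1 + R_1 t_2 can be sketched as follows; all angle and offset values are illustrative placeholders, not the rover's actual geometry.

```python
import numpy as np

def rot_z(a):
    return np.array([[np.cos(a), -np.sin(a), 0],
                     [np.sin(a),  np.cos(a), 0],
                     [0,          0,         1]])

def rot_y(a):
    return np.array([[ np.cos(a), 0, np.sin(a)],
                     [ 0,         1, 0        ],
                     [-np.sin(a), 0, np.cos(a)]])

def rot_x(a):
    return np.array([[1, 0,         0         ],
                     [0, np.cos(a), -np.sin(a)],
                     [0, np.sin(a),  np.cos(a)]])

# R1: global -> rover body, z-y-x order (yaw th3, pitch th2, roll th1)
th1, th2, th3 = np.deg2rad([2.0, -5.0, 30.0])     # illustrative rover attitude
R1 = rot_z(th3) @ rot_y(th2) @ rot_x(th1)

# R2: body -> mast, R3: mast -> camera platform (illustrative joint angles)
R2 = rot_y(np.deg2rad(-20.0))
R3 = rot_z(np.deg2rad(15.0))
R_cam = R1 @ R2 @ R3                              # full camera attitude

# Translation: t = t1 + R1 t2
t1 = np.array([12.0, 3.0, 0.0])                   # global origin -> rover centre
t2 = np.array([0.0, 0.0, 1.8])                    # rover origin -> camera centre
t = t1 + R1 @ t2
```

Each factor is a proper rotation, so their product R_cam remains orthonormal with determinant 1, as a valid attitude matrix must be.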
Step four: using a stereoscopic vision matching method, calculate the extrinsic parameters of the images based on the rotation matrix and translation vector between the binocular cameras, analyze the intrinsic parameters of the images, and correct image distortion so that the left and right images lie in the same plane; then detect matching points between the left and right images by stereo matching, and calculate the disparity map based on the matched feature points.
Each image for which the correspondence between pixels and the real Mars surface has been established is corrected by analyzing its intrinsic parameters, and the left and right images of the same part are selected so that they lie in the same plane;
the two images in the corrected left and right image pairs are on the same plane, but the relationship between different image pairs is not on the same straight line. Firstly, correcting each image, and then selecting binocular images to perform line correspondence; i.e. the object of correction is each image and the object corresponding to the line is the selected binocular image pair.
The method comprises the following steps:
First, the left and right images are corrected according to intrinsic data such as the camera focal length, imaging origin and distortion coefficients, eliminating the distortion of the left and right views. Second, the binocular images are aligned according to extrinsic data such as the rotation matrix and translation vector between the left and right image pairs of the camera.
The pixels are re-projected according to the camera intrinsic matrix, and the distortion errors caused by the camera lens are computed. Distortion divides into radial distortion and tangential distortion: radial distortion is caused by the shape of the lens itself, which affects light propagation, while tangential distortion is caused by the lens not being parallel to the imaging plane, which changes the position of a line as it is projected through the lens onto the imaging plane.
The two images after distortion removal correspond strictly by rows, so that the epipolar lines of the two images lie exactly on the same horizontal line; any point on one image and its corresponding point on the other image have the same row number, and the corresponding point can be matched by a one-dimensional search along that row. The imaging origin coordinates of the left and right views are consistent, the optical axes of the two cameras are parallel, the left and right imaging planes are coplanar and the epipolar lines are aligned, which reduces the computation needed to solve for disparity.
Step five: for the left and right images lying on the same plane, stereo matching is used to detect feature points in the overlapping region of the two images, and the disparity map is computed from the matched feature points.
Stereo rectification removes the image distortion so that identical points lie approximately on the same row. Feature points in the overlapping region of the two photographs are then detected and matched: feature descriptors such as the Scale-Invariant Feature Transform (SIFT), Speeded-Up Robust Features (SURF), and Oriented FAST and Rotated BRIEF (ORB) are used to extract feature points from each image and match them.
Disparity map estimation is performed based on binocular camera matching data, as shown in fig. 5:
The distance between the optical centers of the binocular camera is the baseline b, and the left and right images lie in the coordinate system O_w-X_wY_wZ_w. The optical axes Z_left and Z_right of the two cameras are parallel, and the projections of a point P onto the left and right image planes are denoted P_left and P_right, respectively. The disparity d is the difference between the horizontal (column) coordinates of P_left and P_right, and is solved iteratively.
The left and right images corrected via the hemispherical projection model are taken as input to the SGBM (semi-global block matching) algorithm, and the disparity map is determined from the binocular camera data.
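As a toy illustration of the matching step along rectified rows (not the full SGBM algorithm, which additionally aggregates smoothness costs over multiple scan directions), a single-row SAD disparity search might look like:

```python
import numpy as np

def row_disparity(left_row, right_row, patch=3, max_disp=8):
    """Toy disparity search along one rectified scanline: for each pixel
    in the left row, find the horizontal shift into the right row that
    minimizes the sum of absolute differences (SAD) over a small window."""
    n = len(left_row)
    half = patch // 2
    disp = np.zeros(n, dtype=int)
    for x in range(half, n - half):
        best, best_d = np.inf, 0
        for d in range(0, min(max_disp, x - half) + 1):
            cost = np.abs(left_row[x - half:x + half + 1] -
                          right_row[x - d - half:x - d + half + 1]).sum()
            if cost < best:
                best, best_d = cost, d
        disp[x] = best_d
    return disp

# The textured pattern appears 2 px further right in the left image,
# so the recovered disparity at the pattern is 2.
right = np.array([0, 0, 0, 9, 5, 7, 0, 0, 0, 0, 0, 0], dtype=float)
left = np.roll(right, 2)
d = row_disparity(left, right, patch=3, max_disp=4)
```

Because rectification constrains the search to one row, the match is a one-dimensional scan rather than a two-dimensional search over the whole image.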
Step six, detecting rock contours in the existing Mars surface image segmentation dataset by adopting a semantic segmentation method, and determining the pixel positions and the distribution edges of the rock in the current parallax map;
First, rock identification and segmentation are performed: according to the characteristics of different rocks, a deep-learning semantic segmentation model, a shadow method, or manual annotation is used to segment the rocks, yielding the range of the rock pixel distribution and generating a mask. The mask is formed by selecting the rock-region pixels in the image and masking them to obtain the target-region image.
The method uses the Mars surface image segmentation dataset TWMARS, a rock segmentation dataset produced by the team from Tianwen-1 mission data.
And determining the pixel distribution range of the rock region in the image according to the rock mask obtained by semantic segmentation, and obtaining the parallax of the rock region in the parallax map.
Step seven: according to the rock pixel positions, the position and size of the pixels are computed from the binocular camera imaging principle and the triangulation principle, and the current disparity map is optimized with a disparity filtering algorithm.
According to the positions of the rock pixels, the distribution direction and boundary of the corresponding pixels are located in the disparity map, and the disparity map is filtered with a weighted least-squares algorithm. Given a disparity map g of size n×m (flattened to a vector), the filtered disparity map u minimizes the loss function

E(u) = (u − g)^T (u − g) + λ (u^T d′_x^T A_x d′_x u + u^T d′_y^T A_y d′_y u)

where A_x, A_y are diagonal matrices with a_x, a_y as diagonal elements; d′_x, d′_y are forward-difference matrices, and d′_x^T, d′_y^T are the corresponding backward-difference operators; λ is a scale factor, and the larger λ is, the stronger the smoothing.
Filtering makes the disparity variation over the rock regions smoother, so the disparity map better reflects the mask outline.
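A minimal one-dimensional sketch of the weighted least-squares filter, solving (I + λ·DᵀAD)u = g in closed form with hypothetical uniform weights, illustrates the smoothing behavior:

```python
import numpy as np

def wls_filter_1d(g, weights, lam):
    """Weighted-least-squares smoothing of a 1-D signal g:
    minimize (u - g)^T (u - g) + lam * u^T D^T A D u,
    where D is the forward-difference matrix and A = diag(weights)
    holds per-edge smoothness weights (a small weight preserves an edge).
    Closed-form solution: u = (I + lam * D^T A D)^{-1} g."""
    n = len(g)
    D = np.zeros((n - 1, n))
    for i in range(n - 1):              # forward differences
        D[i, i], D[i, i + 1] = -1.0, 1.0
    A = np.diag(weights)                # one weight per adjacent-pixel edge
    return np.linalg.solve(np.eye(n) + lam * D.T @ A @ D, g)

# Uniform weights and a large lambda flatten the signal; the mean is
# preserved exactly because D^T A D annihilates constant vectors.
g = np.array([0.0, 10.0, 0.0, 10.0, 0.0])
u = wls_filter_1d(g, weights=np.ones(4), lam=100.0)
```

In the 2-D case of the patent, lowering the weights across mask boundaries is what keeps the rock outline sharp while smoothing the interior.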
Step eight: taking the camera pose in the Mars global coordinate system as a reference and combining the positions of the different cameras, the distances and positions of rocks that appear repeatedly are computed and cross-checked, determining the distribution of rocks in the current disparity map.
Because each rock measurement is relative to the current camera position, the relative position of a rock across different images must be recovered from the relative positions of the cameras.
Based on the stereoscopic imaging principle and the triangulation principle, the distance and size of the rock are computed from the binocular camera's baseline, focal length, and related parameters. The disparity d between the left and right photographs is inversely proportional to the depth Z; by triangle similarity the relationship Z = f·b/d can be established, where the depth Z from the rock to the camera is computed from the disparity d, the baseline b, and the focal length f. Z is the distance from the pixel to the optical center of the navigation terrain camera. The rock size is then computed from the semantic-segmentation mask edge information.
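The depth and size recovery can be sketched as follows; the baseline, focal length, and disparity values used here are hypothetical, not the rover's actual calibration:

```python
def stereo_depth(disparity_px, baseline_m, focal_px):
    """Depth from rectified-stereo triangle similarity: Z = f * b / d.
    disparity_px and focal_px are in pixels, baseline_m in meters."""
    return focal_px * baseline_m / disparity_px

def size_from_mask(extent_px, depth_m, focal_px):
    """Back-project a mask extent (in pixels) to meters at depth Z:
    the pinhole model gives size = Z * extent / f."""
    return depth_m * extent_px / focal_px

# Hypothetical numbers for illustration only:
Z = stereo_depth(disparity_px=50.0, baseline_m=0.27, focal_px=1200.0)
w = size_from_mask(extent_px=100.0, depth_m=Z, focal_px=1200.0)
```

The inverse relationship between d and Z means disparity errors grow quadratically in depth error at long range, which is why the disparity filtering step above matters for distant rocks.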
The method effectively acquires rock distribution information over a large scene, performs preprocessing such as correction of the binocular data efficiently and quickly, optimizes the matching accuracy of the rock regions, and obtains more accurate rock size and distance information.
First, the camera pose is modeled with the hemispherical projection model, establishing the correspondence between pixel points and the real Mars surface and determining the approximate coverage of each image and whether different images overlap.
Specifically, images from the Zhurong rover's navigation terrain camera are parsed to obtain the camera's spatial position, attitude angles, and other information at capture time; the images are projected using the extracted information, establishing the correspondence between pixels and the real Mars surface. Combined with the camera parameters, the pixel coordinate system, image coordinate system, camera coordinate system, and world coordinate system are converted in sequence, recovering the real position in the Mars surface scene of each object in an image; every pixel in the image is projected into the world coordinate system, and the location and coverage of each Mars image is determined by projection.
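The pixel → image → camera → world chain can be sketched as below; the R, t convention (X_w = R·X_c + t) and the sample intrinsics are illustrative assumptions:

```python
import numpy as np

def pixel_to_world(u, v, z_c, u0, v0, dx, dy, f, R, t):
    """Chain pixel -> image -> camera -> world coordinates for one pixel,
    given its depth z_c along the optical axis (pinhole-model sketch).
    R, t map camera coordinates into the world frame: X_w = R @ X_c + t."""
    # pixel -> image plane (meters on the sensor)
    x_i, y_i = (u - u0) * dx, (v - v0) * dy
    # image plane -> camera frame (similar triangles)
    x_c, y_c = x_i * z_c / f, y_i * z_c / f
    # camera frame -> world frame
    return R @ np.array([x_c, y_c, z_c]) + t

# Identity pose: the principal point at depth 5 m lands on the optical axis,
# and a pixel 100 columns to the right is displaced laterally in the world.
p = pixel_to_world(u=320, v=240, z_c=5.0, u0=320, v0=240,
                   dx=1e-5, dy=1e-5, f=0.02, R=np.eye(3), t=np.zeros(3))
p2 = pixel_to_world(u=420, v=240, z_c=5.0, u0=320, v0=240,
                    dx=1e-5, dy=1e-5, f=0.02, R=np.eye(3), t=np.zeros(3))
```

In the patent's pipeline the depth z_c is not known at this stage; the hemispherical projection model instead intersects each pixel ray with the Mars surface model to fix the scale.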
Then, using stereo matching, the image extrinsics are computed from the rotation matrix and translation vector between the binocular cameras; the image intrinsics are analyzed and the image distortion is corrected so that the left and right images lie on the same plane. Stereo matching then detects corresponding points between the left and right images, and the disparity map is computed from the matched feature points.
Specifically, stereo rectification removes the image distortion so that identical points lie approximately on the same row; feature points in the overlapping region of the two photographs are detected and matched, with feature descriptors such as the Scale-Invariant Feature Transform (SIFT), Speeded-Up Robust Features (SURF), and Oriented FAST and Rotated BRIEF (ORB) used to extract feature points from each image and match them. Disparity-map estimation is performed on the binocular matching data: at this stage, the images corrected by the hemispherical projection model are taken as input to the SGBM algorithm, and the disparity map is determined from the binocular camera data. The innovation is that the camera extrinsics are obtained with the proposed hemispherical projection model, and the images are preprocessed to yield binocular data with better matching quality.
Finally, the pixel positions and distribution edges of the rocks in the disparity map are determined from the rock contours detected by methods such as semantic segmentation; from the rock pixel positions, the position and size are computed based on the binocular camera imaging principle and the triangulation principle; the disparity map is optimized with a disparity filtering algorithm; and the distances and positions of rocks that appear repeatedly are computed from the positions of the different cameras and cross-checked to determine the rock distribution.
Specifically: first, rock identification and segmentation are performed; according to the characteristics of different rocks, a deep-learning semantic segmentation model, a shadow method, or manual annotation is used to segment the rocks, yielding the rock pixel distribution range and generating a mask. According to the rock pixel positions, the distribution direction and boundary of the corresponding pixels are located in the disparity map, and the disparity map is filtered with a weighted least-squares algorithm, making the disparity variation over the rock regions smoother and better representing the mask outline. Based on the stereoscopic imaging principle and the triangulation principle, the distance and size of the rock are computed from the binocular camera's baseline, focal length, and related parameters. The distances and positions of rocks that appear repeatedly are computed from the positions of the different cameras and cross-checked to determine the rock distribution. The innovation is that a disparity filtering module is designed for the characteristics of Mars rock data, improving the measurement accuracy of the rocks.
Examples:
Step 1: on a computer, the raw Mars data are parsed using C++ and the OpenCV library, and the per-frame parameters of the navigation terrain camera are extracted, including capture time, capture position, attitude angles (yaw, pitch, roll), camera parameters (focal length, pixel size, photosensor size, optical-axis direction), rotation angle, and so on.
Step 2: the parameters extracted from the camera are used for projection: a hemispherical projection model is built from the rover's shooting position, the rover's heading, the mast attitude angles, the camera-platform pitch angle, the rover's dimensions, the mounting structure, and other information; the image coverage is computed and projected onto the Mars surface model.
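The pose composition behind this step, R = R1·R2·R3 with R1 built in z-y-x order and the camera-center translation t = t1 + R1·t2, can be sketched as follows (the axis conventions are an assumption; the patent does not fix them):

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def camera_pose(yaw, pitch, roll, R2, R3, t1, t2):
    """Compose the global->body rotation R1 (z-y-x order) with the
    body->mast (R2) and mast->platform (R3) rotations, and translate
    the camera center: t = t1 + R1 @ t2 (a sketch of the chain in
    the text, not the patent's exact implementation)."""
    R1 = rot_z(yaw) @ rot_y(pitch) @ rot_x(roll)
    return R1 @ R2 @ R3, t1 + R1 @ t2

# A 90-degree yaw swings a camera offset along x into the y direction.
R, t = camera_pose(np.pi / 2, 0.0, 0.0, np.eye(3), np.eye(3),
                   t1=np.array([1.0, 0.0, 0.0]),
                   t2=np.array([1.0, 0.0, 0.0]))
```

With identity mast and platform rotations, only the rover attitude R1 rotates the camera frame, which matches the role of R1 in the claims.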
Step 3: extrinsic information such as the rotation matrix and translation vector between the binocular camera data is computed from the hemispherical projection model, and the images are corrected so that the same feature points lie at the same horizontal position in the corresponding images.
Step 4: the disparity map of the binocular data is computed according to the stereo disparity-matching principle; feature points are matched with the SGBM algorithm, their pixel distances are obtained, and the image disparity is computed.
Step 5: according to the rock identification and segmentation results, the rock regions of the disparity map are optimized with a weighted least-squares filtering algorithm, preserving image edge information and smoothing the rock-region pixels.
Step 6: the position and size of each rock are computed based on the triangulation principle and verified against the positions of the different cameras.
As shown in Table 1, the distances of rocks c to f measured by the present method are 9.877 m, 6.377 m, 5.208 m and 10.820 m, respectively, consistent with mainstream measurement methods and their results. The table shows that the measurement accuracy of the invention reaches the millimeter level, which is superior to current measurement methods.
TABLE 1
As shown in Table 2, the size of rock c measured by the present method is (3.470 m, 1.130 m), that of rock d is (0.110 m, 0.144 m), and that of rock e is (0.178 m, 0.109 m), consistent with mainstream measurement methods and their results. The table shows that the measurement accuracy of the invention reaches the millimeter level, which is superior to current measurement methods.
TABLE 2

Claims (6)

1. The Mars rock measurement method based on the hemispherical projection model is characterized by comprising the following steps of:
shooting the Mars ground surface by using a binocular camera, wherein the same object corresponds to a left image and a right image at each moment;
analyzing each shot image respectively, taking the optical axis of the camera as the shooting direction, calculating the corresponding position of each pixel in the current image in the Mars three-dimensional scene in the view angle range of the camera, and establishing the corresponding relation between the pixel point and the real Mars ground surface;
thirdly, calculating a rotation matrix and a translation vector of the camera, and obtaining pose information of the camera in a Mars global coordinate system;
defining a z-y-x rotation order for the Mars global coordinate system, the Mars global coordinate system is rotated by the respective angles until it coincides with the Mars body coordinate system, giving a rotation matrix R_1 expressed as R_1 = R_z(θ_3)·R_y(θ_2)·R_x(θ_1), where the yaw angle rotated about the z-axis is θ_3, the pitch angle rotated about the y-axis is θ_2, and the roll angle rotated about the x-axis is θ_1;
similarly, a rotation matrix R_2 from the Mars body coordinate system to the Mars mast coordinate system is obtained,
and a rotation matrix R_3 from the Mars mast coordinate system to the Mars camera-platform coordinate system;
combining the three rotation matrices R_1, R_2, R_3 yields the camera orientation, and combining the odometry and the dimensions of the rover yields the camera pose information in the Mars global coordinate system;
the translation vector t from the origin of the Mars coordinate system to the camera center is t = t_1 + R_1·t_2, where t_1 is the vector from the origin of the Mars coordinate system to the center of the rover and t_2 is the vector from the origin of the rover coordinate system to the center of the camera;
step four, correcting distortion of each image by analyzing image internal parameters of each image with which the corresponding relation between the pixel points and the real Mars ground surface is established, and selecting left and right images of the same object to correspond in rows so as to enable the left and right images to be located on the same plane;
step five, aiming at left and right images positioned on the same plane, utilizing stereoscopic vision matching to detect matching feature points of overlapping areas of the two images, and calculating a parallax image based on the matching feature points;
the corrected left and right images are used as input to the SGBM algorithm, and the disparity map is determined from the binocular camera data; the projections of a point P onto the left and right image planes are denoted P_left and P_right respectively, and the disparity d, the difference between the horizontal coordinates of P_left and P_right, is solved iteratively;
step six, detecting rock contours in the existing Mars surface image segmentation dataset by adopting a semantic segmentation method, and determining the pixel positions and the distribution edges of the rock in the current parallax map;
step seven, calculating the position and the size of the pixel based on the imaging original path and the triangulation principle of the binocular camera according to the position of the rock pixel; optimizing the current disparity map by utilizing a disparity filtering algorithm;
step eight, calculating repeated rock distances and positions by taking pose information of the cameras in a Mars global coordinate system as a reference and combining the positions of different cameras, verifying each other, and determining the distribution of rocks in a current parallax map;
the depth Z from the rock to the camera is given by the triangle-similarity principle as Z = f·b/d, where b is the baseline of the binocular camera, i.e. the distance between the optical centers of the left and right cameras, f is the focal length of the camera, and d is the disparity;
and calculating the size of the rock according to the semantic segmentation mask edge information.
2. The method for measuring Mars rock based on hemispherical projection model as claimed in claim 1, wherein said step two is specifically:
for a target point P at position (u, v) in the pixel coordinate system, its projection into the image coordinate system is

x_i = (u − u_0)·d_x, y_i = (v − v_0)·d_y

where (x_i, y_i) is the corresponding position of the target point P in the image coordinate system, (u_0, v_0) are the coordinates of the image center point, and (d_x, d_y) is the size of the pixels on the camera photosensor;
the image coordinate system is then converted to the camera coordinate system, expressed as

x_i = f·x_c/z_c, y_i = f·y_c/z_c

where (x_c, y_c, z_c) are the coordinates of the target point P in the camera coordinate system and f is the focal length of the camera;
finally, the relationship between the camera coordinate system and the world coordinate system is expressed in homogeneous form as

[x_c, y_c, z_c, 1]^T = [R t; 0^T 1]·[x_w, y_w, z_w, 1]^T

where (x_w, y_w, z_w) are the coordinates of the target point P in the world coordinate system, R is the rotation matrix describing the rotation of the optical axis between the world coordinate system and the initial moment, t is the translation vector from the origin of the world coordinate system to the optical center of the camera, and 0^T is the transpose of a three-dimensional zero column vector; R and t together describe the conversion of pixel points between the camera coordinate system and the world coordinate system;
finally, combining the camera parameters, the real position in the Mars surface scene of each object in the current image is recovered, establishing the correspondence of each pixel to the Mars surface.
3. The method for measuring Mars rock based on hemispherical projection model as claimed in claim 1, wherein said step four is specifically: first, the pixels are re-projected according to intrinsic data, namely the camera focal length, imaging origin and distortion coefficients, removing the distortion error caused by the camera lens and thereby correcting the pixels;
after the left and right images of the same object are corrected, the images are aligned according to extrinsic data such as the rotation matrix and translation vector between the two image pairs, so that the epipolar lines of the two images lie exactly on the same horizontal line.
4. The method for measuring Mars rock based on hemispherical projection model as claimed in claim 1, wherein in said step five, stereo matching detection uses the Scale-Invariant Feature Transform (SIFT), Speeded-Up Robust Features (SURF) and Oriented FAST and Rotated BRIEF (ORB) feature descriptors to extract feature points from each image and perform feature-point matching.
5. The Mars rock measurement method based on the hemispherical projection model as claimed in claim 1, wherein in the sixth step, firstly, rock identification and segmentation are performed, and for the characteristics of different rocks, a semantic segmentation model based on deep learning, a shadow method or a manual marking method is adopted to perform rock segmentation respectively, so as to obtain a rock pixel distribution range and generate a mask;
and determining the pixel distribution range of the rock region in the image according to the rock mask obtained by semantic segmentation, and obtaining the parallax of the rock region in the parallax map.
6. The method for measuring Mars rock based on hemispherical projection model as claimed in claim 1, wherein in the seventh step, according to the positions of the rock pixels, the distribution azimuth and the distribution boundary of the corresponding pixels are found in the parallax map, and the weighted least square filtering algorithm is adopted to filter the parallax map;
given a disparity map g of size n×m (flattened to a vector), the filtered disparity map u minimizes the loss function

E(u) = (u − g)^T (u − g) + λ (u^T d′_x^T A_x d′_x u + u^T d′_y^T A_y d′_y u)

where A_x, A_y are diagonal matrices with a_x, a_y as diagonal elements; d′_x, d′_y are forward-difference matrices, and d′_x^T, d′_y^T are the corresponding backward-difference operators; λ is a scale factor, and the larger λ is, the stronger the smoothing.
CN202310611126.1A 2023-05-26 2023-05-26 Mars rock measurement method based on hemispherical projection model Pending CN116563377A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310611126.1A CN116563377A (en) 2023-05-26 2023-05-26 Mars rock measurement method based on hemispherical projection model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310611126.1A CN116563377A (en) 2023-05-26 2023-05-26 Mars rock measurement method based on hemispherical projection model

Publications (1)

Publication Number Publication Date
CN116563377A true CN116563377A (en) 2023-08-08

Family

ID=87496393

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310611126.1A Pending CN116563377A (en) 2023-05-26 2023-05-26 Mars rock measurement method based on hemispherical projection model

Country Status (1)

Country Link
CN (1) CN116563377A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117132973A (en) * 2023-10-27 2023-11-28 武汉大学 Method and system for reconstructing and enhancing visualization of surface environment of extraterrestrial planet
CN117132973B (en) * 2023-10-27 2024-01-30 武汉大学 Method and system for reconstructing and enhancing visualization of surface environment of extraterrestrial planet


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination