CN115876206A - Double-system vision-fused navigation camera relative imaging measurement method - Google Patents

Double-system vision-fused navigation camera relative imaging measurement method

Info

Publication number: CN115876206A
Authority: CN (China)
Prior art keywords: lunar, camera, visible light, information, TOF
Prior art date: 2022-12-29
Legal status: Pending
Application number: CN202211717243.8A
Other languages: Chinese (zh)
Inventor: 姜丽辉 (Jiang Lihui), 毛晓楠 (Mao Xiaonan), 赵旸 (Zhao Yang), 曲伟智 (Qu Weizhi), 石峰源 (Shi Fengyuan)
Current Assignee: Shanghai Aerospace Control Technology Institute
Original Assignee: Shanghai Aerospace Control Technology Institute
Priority date: 2022-12-29
Filing date: 2022-12-29
Publication date: 2023-03-31
Application filed by Shanghai Aerospace Control Technology Institute
2022-12-29: Priority to CN202211717243.8A
2023-03-31: Publication of CN115876206A


Landscapes

  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention relates to a double-system vision-fused navigation camera relative imaging measurement method, used to acquire relative navigation information while a lunar rover drives autonomously on the lunar surface. The method uses the visible light binocular stereoscopic vision passive imaging principle to obtain lunar surface three-dimensional point cloud information and grayscale information, and the TOF active optical ranging principle to obtain lunar surface three-dimensional point cloud information; a TOF active light imaging system and a binocular visible light passive imaging system are fused to recover lunar surface depth in regions where the lunar surface texture is missing or repeating. The invention combines the advantages of active and passive optical imaging measurement systems and ensures that the lunar rover obtains stable and reliable lunar surface depth and grayscale information.

Description

Double-system vision-fused navigation camera relative imaging measurement method
Technical Field
The invention relates to the technical field of autonomous driving of lunar rovers, and in particular to a double-system vision-fused navigation camera relative imaging measurement method.
Background
When a lunar rover drives autonomously on the lunar surface, it must perceive its surroundings, including grayscale information and depth (distance) information. Common imaging measurement means include binocular stereo imaging and laser imaging radar. Binocular stereo imaging is passive optical imaging: its stereo recovery and lunar surface imaging quality are strongly affected by illumination, targets in shadowed areas image poorly, and depth recovery can fail outright. It is further limited by the missing and repetitive textures of the lunar surface, so depth data recovered from binocular grayscale images is of poor accuracy and prone to failure. Laser imaging radar, as an active optical measurement system, is unaffected by illumination and can stably and reliably acquire three-dimensional information of the lunar surface; however, traditional laser imaging radar is large in size and power consumption and cannot acquire lunar surface grayscale information, so its use on the lunar surface is relatively limited.
Disclosure of Invention
The invention aims to provide a double-system vision-fused navigation camera relative imaging measurement method that obtains lunar surface grayscale and three-dimensional information by fusing an active light measurement system and a passive light measurement system, so as to support the autonomous driving of a lunar rover. The invention also provides a navigation camera capable of using the method and a lunar rover equipped with the navigation camera.
In order to achieve the above object, one technical solution of the present invention is to provide a double-system vision-fused navigation camera relative imaging measurement method, used for acquiring relative navigation information when a lunar rover travels autonomously on the lunar surface. The method comprises:
acquiring lunar surface three-dimensional point cloud information and grayscale information from the data acquired by two visible light camera heads, using the visible light binocular stereoscopic vision passive imaging principle;
obtaining lunar surface three-dimensional point cloud information from the data acquired by a TOF camera head, using the TOF active optical ranging principle; and
fusing the TOF active light imaging system and the binocular visible light passive imaging system: the two sets of lunar surface three-dimensional point cloud information are fused, and lunar surface depth recovery is performed in regions where lunar surface texture is missing or repeating.
Optionally, the three-dimensional coordinates of a target point obtained by the TOF camera head are projected simultaneously onto the binocular visible light image planes of the two visible light camera heads, and an ROI region is then selected near each projected point for local feature point extraction and matching, yielding the three-dimensional information of the target point.
Optionally, the two visible light camera heads are respectively arranged on either side of the TOF camera head by means of a transverse mounting bracket.
The invention also provides a navigation camera, comprising two visible light camera heads, a TOF camera head and an algorithm functional module; the algorithm functional module comprises:
a binocular stereoscopic vision depth recovery module, which processes the data acquired by the two visible light camera heads using the visible light binocular stereoscopic vision passive imaging principle to obtain lunar surface three-dimensional point cloud information and grayscale information;
a TOF camera depth recovery module, which processes the data acquired by the TOF camera head using the TOF active optical ranging principle to obtain lunar surface three-dimensional point cloud information; and
a double-system fusion depth recovery module, which fuses the TOF active light imaging system and the binocular visible light passive imaging system, fuses the two sets of lunar surface three-dimensional point cloud information, and performs lunar surface depth recovery in regions of missing or repeating lunar surface texture.
Optionally, the double-system fusion depth recovery module projects the three-dimensional coordinates of a target point obtained by the TOF camera head simultaneously onto the binocular visible light image planes of the two visible light camera heads, and then selects an ROI region near each projected point for local feature point extraction and matching, yielding the three-dimensional information of the target point.
Optionally, the navigation camera further comprises a mounting bracket; the two visible light camera heads are respectively arranged on either side of the TOF camera head by means of transverse mounting brackets.
Optionally, the navigation camera further comprises a power supply and information processing box, in which a power supply module, a core processor and corresponding peripheral circuits are arranged; the data processing of the algorithm functional module is performed by the core processor.
Another technical solution of the present invention is to provide a lunar rover equipped with any one of the above navigation cameras; when the lunar rover drives autonomously on the lunar surface, it acquires relative navigation information through the navigation camera.
The disclosed method combines the advantages of a TOF active light imaging system and a binocular passive light imaging system, ensures that the lunar rover can acquire stable and reliable lunar surface depth and grayscale information, and is applied to autonomous lunar driving.
The advantages and technical effects of the invention are as follows: an action distance of not less than 50 m; low sensitivity to illumination; the ability to obtain lunar surface grayscale images and to recover lunar surface three-dimensional point cloud data within this range of the field of view; and, through fusion of the TOF active optical module and the binocular visible optical module, recovery of three-dimensional point cloud data in regions of missing or repeated lunar surface texture.
Drawings
FIG. 1 is a schematic diagram of the navigation camera;
FIG. 2 is a schematic view of binocular camera depth recovery;
FIG. 3 is a schematic diagram of TOF camera depth recovery;
FIG. 4 is a schematic illustration of double-system fusion.
Detailed Description
When the lunar rover drives autonomously, a navigation camera is used to acquire lunar surface grayscale and depth information, from which navigation information is then derived; the present method is a measurement means required for acquiring navigation information during autonomous lunar driving.
The invention provides a double-system vision-fused navigation camera relative imaging measurement method that fuses TOF (Time of Flight) active light and binocular passive light imaging mechanisms, used for acquiring relative navigation information when a lunar rover drives autonomously on the lunar surface. The invention also provides a navigation camera that can adopt the method and a lunar rover equipped with the navigation camera.
In the method of this embodiment, lunar surface three-dimensional point cloud information and grayscale information are obtained using the visible light binocular stereoscopic vision passive imaging principle; lunar surface three-dimensional point cloud information is also obtained using the TOF active optical ranging principle; and lunar surface depth is recovered in regions of missing or repeating lunar surface texture through the active-passive fusion of the binocular visible light cameras and the TOF camera. The method mainly achieves lunar surface environment perception, lunar surface three-dimensional reconstruction, scene monitoring and related functions within a range of not less than 50 m.
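Before the hardware and the individual modules are described, the overall flow can be summarized in the outline below. This is an illustrative sketch only: none of the function names come from the patent, and the placeholder bodies stand for the binocular, TOF and fusion modules detailed later in this section.

```python
import numpy as np

def stereo_depth_recovery(left_img, right_img):
    # Placeholder: feature matching + triangulation (binocular module).
    return np.empty((0, 3))

def tof_depth_recovery(dcs_frames):
    # Placeholder: four-phase demodulation (TOF module).
    return np.empty((0, 3))

def fuse_point_clouds(cloud_stereo, cloud_tof):
    # Placeholder: TOF points fill regions where lunar surface texture
    # is missing or repetitive and stereo matching fails.
    return np.vstack([cloud_stereo, cloud_tof])

def measurement_cycle(left_img, right_img, dcs_frames):
    """One relative-imaging measurement: grayscale image + fused point cloud."""
    gray = left_img  # grayscale comes directly from the passive channel
    cloud = fuse_point_clouds(stereo_depth_recovery(left_img, right_img),
                              tof_depth_recovery(dcs_frames))
    return gray, cloud
```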
In a navigation camera using the above method, as shown in Fig. 1, the hardware part includes a TOF camera module, a binocular visible light module, a mounting bracket 4, and a power supply and information processing box 5.
The binocular visible light module comprises two visible light camera heads 2; the TOF camera module comprises a TOF camera head 3. The two visible light camera heads 2 are arranged on either side of the TOF camera head 3 by means of transverse mounting brackets 4. Illustratively, the visible light camera head 2 mainly includes an optical lens, a light shield, electronics and the like; the TOF camera head 3 mainly includes an optical lens, a light shield, a laser, electronics and the like. These parts can be configured according to common knowledge in the field and are not enumerated here. Reference numeral 1 schematically denotes the lunar rover 1.
The power supply and information processing box 5 houses a power supply and information processing module comprising a power supply module, a core processor and corresponding peripheral circuits. The algorithm functional module in the core processor comprises a binocular stereoscopic vision depth recovery module, a TOF camera depth recovery module and a double-system fusion depth recovery module.
The working principle of binocular stereoscopic vision depth recovery is shown in Fig. 2. Using the binocular vision system formed by the two visible light camera heads (the left and right cameras) together with a feature point extraction and matching technique, the binocular vision depth recovery module can recover the three-dimensional data of a scene from the disparity x_l - x_r of a target between the left and right cameras, the binocular baseline length b and the camera focal length f. The three-dimensional coordinates (x, y, z) of a target point are obtained as

    z = b·f / (x_l - x_r)
    x = z·x_l / f
    y = z·y_l / f

where (x_l, y_l) and (x_r, y_r) are the positions of the target point in the left and right camera image planes, respectively.
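For concreteness, the triangulation above can be written out in a few lines of code. The sketch below is illustrative only (it is not taken from the patent); it assumes rectified images, pixel coordinates measured from the principal point, a focal length expressed in pixels, and all names are hypothetical.

```python
import numpy as np

def triangulate(x_l, y_l, x_r, baseline_b, focal_f):
    """Recover (x, y, z) of a matched feature pair from two rectified
    cameras; coordinates are relative to the principal point, in pixels."""
    disparity = x_l - x_r
    if disparity <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    z = baseline_b * focal_f / disparity   # depth along the optical axis
    x = z * x_l / focal_f                  # lateral position
    y = z * y_l / focal_f                  # vertical position
    return np.array([x, y, z])

# Example: 0.3 m baseline, 1000 px focal length, 20 px disparity -> z = 15 m
point = triangulate(x_l=120.0, y_l=-40.0, x_r=100.0, baseline_b=0.3, focal_f=1000.0)
```

This also shows why texture matters: the computation is only as good as the feature match that supplies x_l and x_r, which is exactly the failure mode discussed next.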
Because the accuracy of a binocular vision system is affected by surface texture (imaging depends on the texture of the object surface), binocular vision is better suited to texture-rich scenes; alternatively, an active imaging method, namely projection of patterned structured light, can be adopted, solving the corresponding-point matching problem through the deformation of the reflected pattern. For the relatively texture-poor lunar surface, extracting and matching feature points from the scene images of the left and right cameras suffers from matching failures and long algorithm run times, so recovering the scene's three-dimensional point cloud with the binocular camera alone has clear limitations; in particular, at long scene distances texture information is lost and the binocular vision algorithm fails.
Therefore, the invention also obtains lunar surface three-dimensional point cloud information using the TOF active optical ranging principle, and recovers lunar surface depth in regions of missing or repeating texture through active-passive system fusion.
The TOF camera head uses a TOF detector and measures target distance by controlling the exposure and sampling circuits; during operation it acquires image data at four sampling phases of 0°, 90°, 180° and 270°. As shown in Fig. 3, the TOF camera depth recovery module calculates from these four frames of image data the phase difference φ between the emitted light and the received light.

The light intensities of the four frames DCS0, DCS1, DCS2 and DCS3 are:

    DCS0 = (A/2)·[1 + cos(φ)]
    DCS1 = (A/2)·[1 + cos(φ + π/2)]
    DCS2 = (A/2)·[1 + cos(φ + π)]
    DCS3 = (A/2)·[1 + cos(φ + 3π/2)]

where A is the amplitude of the received signal energy, so that

    φ = atan2(DCS3 - DCS1, DCS0 - DCS2)

The calculation formula of the azimuth function atan2 is given in formula (1); its output angle lies in the range (-π, π].

    atan2(y, x) = arctan(y/x)         for x > 0
                  arctan(y/x) + π     for x < 0, y ≥ 0
                  arctan(y/x) - π     for x < 0, y < 0
                  +π/2                for x = 0, y > 0
                  -π/2                for x = 0, y < 0        (1)

The time of flight of the light can then be calculated according to formula (2), and the distance according to formula (3):

    t = φ / (2π·f_LED) + t_OFFSET        (2)

    d = c·t / 2 + D_OFFSET               (3)

where f_LED is the modulation frequency of the illumination, c is the speed of light, DCS0 to DCS3 are the detector data, and t_OFFSET and D_OFFSET are calibration offsets in time and distance, respectively.
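As a minimal per-pixel sketch of formulas (1) to (3) (illustrative only, not the patent's implementation; the 12 MHz modulation frequency and zero offsets are assumed values):

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def tof_depth(dcs0, dcs1, dcs2, dcs3, f_led, t_offset=0.0, d_offset=0.0):
    """Depth from four phase-shifted TOF frames via formulas (1)-(3).
    Accepts scalars or equally shaped NumPy arrays (one value per pixel)."""
    phi = np.arctan2(dcs3 - dcs1, dcs0 - dcs2)  # formula (1), range (-pi, pi]
    phi = np.mod(phi, 2.0 * np.pi)              # map to [0, 2*pi) so t, d >= 0
    t = phi / (2.0 * np.pi * f_led) + t_offset  # formula (2): round-trip time
    return C * t / 2.0 + d_offset               # formula (3): half the round trip

# A 12 MHz modulation gives an unambiguous range of c / (2 * f_led) = 12.5 m
depth = tof_depth(0.8, 0.3, 0.2, 0.7, f_led=12e6)
```

The unambiguous-range comment also shows the role of f_LED: the lower the modulation frequency, the longer the usable range, at the cost of range resolution.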
The TOF camera head can thus provide three-dimensional coordinate information of scene target points, and the double-system fusion depth recovery module further fuses the data of the TOF camera head and the binocular visible light module. The principle is shown in Fig. 4: the TOF camera head 3 obtains the three-dimensional coordinates of a target point and projects them onto the binocular visible light image planes 21; an ROI region 6 is then selected near the projection point 7 for local feature point extraction and matching, yielding the three-dimensional information of that point.
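A minimal sketch of this fusion step is given below; it is illustrative only and assumes pinhole camera models and known extrinsics (R, t) from the TOF head to each visible camera, with all names hypothetical. The local feature extraction and matching inside the two ROIs is not shown.

```python
import numpy as np

def project_point(p_tof, R, t, K):
    """Project a 3D point from TOF-camera coordinates into one visible
    camera's image plane, given extrinsics (R, t) and intrinsics K."""
    p_cam = R @ p_tof + t            # TOF frame -> visible-camera frame
    u, v, w = K @ p_cam              # homogeneous pixel coordinates
    return np.array([u / w, v / w])

def roi_around(center, half_size, image_shape):
    """Square ROI around the projected point, clamped to the image bounds."""
    cx, cy = np.round(center).astype(int)
    x0, x1 = max(cx - half_size, 0), min(cx + half_size, image_shape[1])
    y0, y1 = max(cy - half_size, 0), min(cy + half_size, image_shape[0])
    return x0, y0, x1, y1

# Hypothetical example: identity rotation, 10 cm offset, simple intrinsics.
K = np.array([[1000.0, 0.0, 512.0], [0.0, 1000.0, 512.0], [0.0, 0.0, 1.0]])
uv = project_point(np.array([1.0, 0.5, 8.0]), np.eye(3), np.array([0.1, 0.0, 0.0]), K)
roi = roi_around(uv, half_size=24, image_shape=(1024, 1024))
```

Restricting the search to an ROI around each projection is what makes the fusion cheap: the binocular matcher no longer searches the whole image, and the TOF point supplies depth where the match would otherwise fail.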
While the present invention has been described in detail with reference to the preferred embodiments, it should be understood that the above description should not be taken as limiting the invention. Various modifications and alterations to this invention will become apparent to those skilled in the art upon reading the foregoing description. Accordingly, the scope of the invention should be determined from the following claims.

Claims (8)

1. A double-system vision-fused navigation camera relative imaging measurement method, used for obtaining relative navigation information when a lunar rover drives autonomously on the lunar surface, characterized in that the method comprises:
acquiring lunar surface three-dimensional point cloud information and grayscale information from the data acquired by two visible light camera heads, using the visible light binocular stereoscopic vision passive imaging principle;
obtaining lunar surface three-dimensional point cloud information from the data acquired by a TOF camera head, using the TOF active optical ranging principle; and
fusing a TOF active light imaging system and a binocular visible light passive imaging system: the two sets of lunar surface three-dimensional point cloud information are fused, and lunar surface depth recovery is performed in regions of missing or repeating lunar surface texture.
2. The double-system vision-fused navigation camera relative imaging measurement method of claim 1, wherein the three-dimensional coordinates of a target point obtained by the TOF camera head are projected simultaneously onto the binocular visible light image planes of the two visible light camera heads, and an ROI region is then selected near each projected point for local feature point extraction and matching to obtain the three-dimensional information of the target point.
3. The double-system vision-fused navigation camera relative imaging measurement method according to claim 1 or 2, wherein the two visible light camera heads are respectively arranged on either side of the TOF camera head by means of a transverse mounting bracket.
4. A navigation camera, characterized by comprising two visible light camera heads, a TOF camera head and an algorithm functional module, the algorithm functional module comprising:
a binocular stereoscopic vision depth recovery module, which processes the data acquired by the two visible light camera heads using the visible light binocular stereoscopic vision passive imaging principle to obtain lunar surface three-dimensional point cloud information and grayscale information;
a TOF camera depth recovery module, which processes the data acquired by the TOF camera head using the TOF active optical ranging principle to obtain lunar surface three-dimensional point cloud information; and
a double-system fusion depth recovery module, which fuses the TOF active light imaging system and the binocular visible light passive imaging system, fuses the two sets of lunar surface three-dimensional point cloud information, and performs lunar surface depth recovery in regions of missing or repeating lunar surface texture.
5. The navigation camera of claim 4, wherein the double-system fusion depth recovery module projects the three-dimensional coordinates of a target point obtained by the TOF camera head simultaneously onto the binocular visible light image planes of the two visible light camera heads, and then selects an ROI region near each projected point for local feature point extraction and matching to obtain the three-dimensional information of the target point.
6. The navigation camera of claim 4, further comprising a mounting bracket, wherein the two visible light camera heads are respectively arranged on either side of the TOF camera head by means of transverse mounting brackets.
7. The navigation camera according to claim 4 or 5, further comprising a power supply and information processing box, in which a power supply module, a core processor and corresponding peripheral circuits are arranged; the data processing of the algorithm functional module is performed by the core processor.
8. A lunar rover, characterized in that it is provided with the navigation camera of any one of claims 4 to 7; when the lunar rover drives autonomously on the lunar surface, relative navigation information is acquired through the navigation camera.
CN202211717243.8A 2022-12-29 2022-12-29 Double-system vision-fused navigation camera relative imaging measurement method Pending CN115876206A (en)

Priority Applications (1)

Application Number: CN202211717243.8A · Priority Date: 2022-12-29 · Filing Date: 2022-12-29 · Title: Double-system vision-fused navigation camera relative imaging measurement method


Publications (1)

Publication Number: CN115876206A · Publication Date: 2023-03-31

Family

ID=85757391

Family Applications (1)

Application Number: CN202211717243.8A · Priority Date: 2022-12-29 · Filing Date: 2022-12-29 · Title: Double-system vision-fused navigation camera relative imaging measurement method

Country Status (1)

Country Link
CN (1) CN115876206A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
CN116563186A * · Priority date: 2023-05-12 · Publication date: 2023-08-08 · Assignee: Sun Yat-sen University (中山大学) · Title: Real-time panoramic sensing system and method based on special AI sensing chip


Similar Documents

Publication Publication Date Title
EP3469306B1 (en) Geometric matching in visual navigation systems
CN105115445A (en) Three-dimensional imaging system and imaging method based on combination of depth camera and binocular vision
CN111492265A (en) Multi-resolution, simultaneous localization and mapping based on 3D lidar measurements
Kim et al. Stereo camera localization in 3d lidar maps
CN109444916B (en) Unmanned driving drivable area determining device and method
CN104197928A (en) Multi-camera collaboration-based method for detecting, positioning and tracking unmanned aerial vehicle
KR20120058828A (en) System for extracting 3-dimensional coordinate and method thereof
CN112698306A (en) System and method for solving map construction blind area by combining multiple laser radars and camera
CN114998499A (en) Binocular three-dimensional reconstruction method and system based on line laser galvanometer scanning
JP6727539B2 (en) Distance sensor, running body, robot and three-dimensional measuring device
CN111260773A (en) Three-dimensional reconstruction method, detection method and detection system for small obstacles
CN101234601A (en) Automobile cruise control method based on monocular vision and implement system thereof
CN105004324A (en) Monocular vision sensor with triangulation ranging function
CN115876206A (en) Double-system vision-fused navigation camera relative imaging measurement method
CN112255639B (en) Depth perception sensor and depth perception sensing module for region of interest
WO2023070113A1 (en) Validating an sfm map using lidar point clouds
English et al. TriDAR: A hybrid sensor for exploiting the complimentary nature of triangulation and LIDAR technologies
CN109785431A (en) A kind of road ground three-dimensional feature acquisition method and device based on laser network
Dang et al. Self-calibration for active automotive stereo vision
US11620832B2 (en) Image based locationing
CN108195291B (en) Moving vehicle three-dimensional detection method and detection device based on differential light spots
CN113030960B (en) Vehicle positioning method based on monocular vision SLAM
CN112364741B (en) Monocular remote obstacle detection method and device for unmanned aerial vehicle and unmanned aerial vehicle
CN103697825A (en) System and method of utilizing super-resolution 3D (three-dimensional) laser to measure
CN112804515A (en) Omnidirectional stereoscopic vision camera configuration system and camera configuration method

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination