CN114359329A - Binocular stereo camera-based motion estimation method and system and intelligent terminal - Google Patents

Binocular stereo camera-based motion estimation method and system and intelligent terminal

Info

Publication number
CN114359329A
Authority
CN
China
Prior art keywords
time
moment
information
feature point
binocular
Prior art date
Legal status
Pending
Application number
CN202110504583.1A
Other languages
Chinese (zh)
Inventor
张欣
Current Assignee
Beijing Union University
Original Assignee
Beijing Union University
Priority date
Filing date
Publication date
Application filed by Beijing Union University filed Critical Beijing Union University
Priority to CN202110504583.1A
Publication of CN114359329A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a binocular stereo camera-based motion estimation method, a motion estimation system and an intelligent terminal. The method comprises the following steps: acquiring view information and binocular parallax information collected by a reference lens of the binocular stereo camera at two different moments; obtaining matched feature point pixel coordinates and world coordinates at the two moments based on the view information and the binocular parallax information; and estimating a motion relation from the matched feature point pixel coordinates and world coordinates to obtain the real physical distance the camera moves along the optical axis. By introducing binocular parallax information, an estimate of the physical scale is obtained, the accuracy of motion estimation is improved, and the prior-art problem of inaccurate vehicle information acquisition caused by large camera motion estimation errors is solved.

Description

Binocular stereo camera-based motion estimation method and system and intelligent terminal
Technical Field
The invention relates to the technical field of camera motion estimation, and in particular to a binocular stereo camera-based motion estimation method, a binocular stereo camera-based motion estimation system and an intelligent terminal.
Background
In recent years, with the development of automatic driving and driver-assistance technologies, the demand for in-vehicle sensors has kept increasing. In the fields of automatic driving and assisted driving, a monocular camera is mostly used for image acquisition; however, in the motion estimation process of a traditional monocular camera, the physical scale constraint is lost, so errors in camera motion estimation are amplified and the information acquired for the assisted-driving vehicle becomes inaccurate.
Disclosure of Invention
Therefore, embodiments of the invention provide a binocular stereo camera-based motion estimation method, a binocular stereo camera-based motion estimation system and an intelligent terminal, so as to solve the prior-art problem of inaccurate vehicle information acquisition caused by large camera motion estimation errors.
In order to achieve the above object, the embodiments of the present invention provide the following technical solutions:
a binocular stereo camera based motion estimation method, the method comprising:
respectively acquiring view information and binocular parallax information acquired by a reference lens of the binocular stereo camera at two different moments;
respectively acquiring matched feature point pixel coordinates and world coordinates at two different moments based on the view information and the binocular parallax information;
and estimating a motion relation according to the matched feature point pixel coordinates and world coordinates to acquire the real physical distance of the camera moving in the optical axis direction.
Further, the obtaining of the view information and the binocular disparity information acquired by the reference lens of the binocular stereo camera at two different times respectively specifically includes:
acquiring first time view information and first time binocular parallax information acquired by a reference lens at a first time;
acquiring second moment view information and second moment binocular parallax information acquired by the reference lens at a second moment;
the first time and the second time have a preset time interval.
Further, the obtaining of the pixel coordinates and the world coordinates of the feature points matched at two different times based on the view information and the binocular disparity information specifically includes:
extracting a first time characteristic point of the first time view information to obtain a pixel coordinate of the first time characteristic point;
acquiring the depth information of the feature points at the first moment;
and calculating to obtain the three-dimensional world coordinate of the first moment based on the depth information of the feature point of the first moment.
Further, the obtaining pixel coordinates and world coordinates of the feature points matched at two different moments based on the view information and the binocular disparity information further includes:
extracting second moment feature points of the second moment view information to obtain pixel coordinates of the second moment feature points;
acquiring the depth information of the feature points at the second moment;
and calculating to obtain the three-dimensional world coordinate at the second moment based on the depth information of the feature points at the second moment.
Further, the obtaining pixel coordinates and world coordinates of the feature points matched at two different moments based on the view information and the binocular disparity information further includes:
and obtaining the feature point pixel coordinates and the feature point world coordinates which are matched with each other in the first time and the second time through feature matching.
Further, the estimating a motion relationship according to the matched feature point pixel coordinates and world coordinates to obtain a real physical distance that the camera moves in the optical axis direction specifically includes:
estimating a motion relation according to the pixel coordinates of the characteristic points at the first moment and the pixel coordinates of the characteristic points at the second moment as follows:
Ri × pts0_ics + Ti = pts1_ics
where Ri is a 3×3 rotation matrix and Ti is a 3×1 translation vector representing the motion from the first moment to the second moment.
Further, the estimating a motion relationship according to the matched feature point pixel coordinates and world coordinates to obtain a real physical distance that the camera moves in the optical axis direction, further includes:
estimating a motion relation according to the world coordinates of the feature points at the first moment and the world coordinates of the feature points at the second moment as follows:
Rw × pts0_wcs + Tw = pts1_wcs
where Rw is a 3×3 rotation matrix and Tw is a 3×1 translation vector representing the motion from time t to time t1.
The present invention also provides a binocular stereo camera-based motion estimation system, the system comprising:
the binocular vision image acquisition unit is used for respectively acquiring view information and binocular parallax information acquired by a reference lens of the binocular stereo camera at two different moments;
the coordinate extraction unit is used for respectively acquiring matched feature point pixel coordinates and world coordinates at two different moments based on the view information and the binocular parallax information;
and the motion estimation unit is used for estimating a motion relation according to the matched characteristic point pixel coordinate and the world coordinate so as to obtain the real physical distance of the camera moving in the optical axis direction.
The present invention also provides an intelligent terminal, including: the device comprises a data acquisition device, a processor and a memory;
the data acquisition device is used for acquiring data; the memory is to store one or more program instructions; the processor is configured to execute one or more program instructions to perform the method as described above.
The present invention also provides a computer readable storage medium having embodied therein one or more program instructions for executing the method as described above.
The binocular stereo camera-based motion estimation method provided by the invention acquires the view information and binocular parallax information collected by a reference lens of the binocular stereo camera at two different moments; obtains matched feature point pixel coordinates and world coordinates at the two moments based on the view information and the binocular parallax information; and estimates a motion relation from the matched feature point pixel coordinates and world coordinates to obtain the real physical distance the camera moves along the optical axis. By introducing binocular parallax information, an estimate of the physical scale is obtained, the accuracy of motion estimation is further improved, and the prior-art problem of inaccurate vehicle information acquisition caused by large camera motion estimation errors is solved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It should be apparent that the drawings in the following description are merely exemplary, and that those of ordinary skill in the art can derive other embodiments from the provided drawings without inventive effort.
The structures, proportions, and sizes shown in this specification are used only to complement the content disclosed in the specification so that those skilled in the art can understand and read it; they are not intended to limit the conditions under which the invention can be implemented and therefore carry no essential technical significance. Any structural modification, change of proportional relationship, or adjustment of size that does not affect the effects achievable by the invention or the objectives it can attain shall still fall within the scope covered by the technical content disclosed herein.
Fig. 1 is a flowchart of a binocular stereo camera-based motion estimation method according to an embodiment of the present invention;
fig. 2 is a block diagram of a binocular stereo camera-based motion estimation system according to an embodiment of the present invention.
Detailed Description
The present invention is described below through particular embodiments, and other advantages and effects of the invention will be readily apparent to those skilled in the art from the disclosure herein. It should be understood that the described embodiments are merely some, not all, of the embodiments of the invention, and they are not intended to limit the invention to the particular forms disclosed. All other embodiments obtained by a person skilled in the art based on the embodiments herein without creative effort shall fall within the protection scope of the present invention.
In one embodiment, as shown in fig. 1, the method for estimating motion based on a binocular stereo camera provided by the present invention includes the following steps:
s1: and respectively acquiring view information and binocular parallax information acquired by a reference lens of the binocular stereo camera at two different moments. The reference lens may be a left-eye camera or a right-eye camera, and the left-eye camera is taken as the reference lens in this embodiment. Step S1 is specifically to obtain first time view information and first time binocular disparity information acquired by the reference lens at a first time; acquiring second moment view information and second moment binocular parallax information acquired by the reference lens at a second moment; the first time and the second time have a preset time interval. For example, in an actual usage scenario, at time t, the left eye monocular view and the disparity information are acquired, and at time t1, the left eye monocular view and the disparity information are acquired.
S2: and respectively acquiring matched feature point pixel coordinates and world coordinates at two different moments based on the view information and the binocular parallax information. Specifically, step S2 includes: performing first-moment feature point extraction on the first-moment view information to obtain the pixel coordinates of the feature points at the first moment; acquiring the feature point depth information at the first moment; and calculating the three-dimensional world coordinates at the first moment based on that depth information. Likewise, second-moment feature point extraction is performed on the second-moment view information to obtain the pixel coordinates of the feature points at the second moment, the feature point depth information at the second moment is acquired, and the three-dimensional world coordinates at the second moment are calculated. Then, the feature point pixel coordinates and feature point world coordinates matched between the first moment and the second moment are obtained through feature matching.
Still taking the above specific use scenario as an example, after the left-eye monocular view and the disparity information at time t are obtained, feature point extraction is performed on the left-eye monocular view to obtain the pixel coordinates pts0_ics of the corresponding feature points (i.e., the feature point pixel coordinates at the first moment), and the three-dimensional world coordinates pts0_wcs (i.e., the three-dimensional world coordinates at the first moment) are calculated using the corresponding point depth information disp0 (i.e., the feature point depth information at the first moment).
After the left-eye monocular view and the disparity information at time t1 are obtained, feature point extraction is performed on the left-eye monocular view to obtain the pixel coordinates pts1_ics of the corresponding feature points (i.e., the feature point pixel coordinates at the second moment), and the three-dimensional world coordinates pts1_wcs (i.e., the three-dimensional world coordinates at the second moment) are calculated using the corresponding point depth information disp1 (i.e., the feature point depth information at the second moment).
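As a minimal sketch of this back-projection step, assuming a rectified stereo rig with focal length f, baseline B, and principal point (cx, cy) (all hypothetical parameters), depth follows the standard relation Z = f·B/d and the world coordinates follow from the pinhole model:

    import numpy as np

    def backproject(pts_ics, disp, f, B, cx, cy):
        """Lift matched pixels plus disparities to 3-D points (camera frame).

        pts_ics : (N, 2) array of (u, v) feature pixel coordinates
        disp    : (N,) disparities sampled at those pixels (assumed > 0)
        """
        d = np.asarray(disp, dtype=np.float64)
        z = f * B / d                      # depth from the rectified-stereo relation
        x = (pts_ics[:, 0] - cx) * z / f   # pinhole back-projection, x axis
        y = (pts_ics[:, 1] - cy) * z / f   # pinhole back-projection, y axis
        return np.stack([x, y, z], axis=1)

    # e.g. pts0_wcs = backproject(pts0_ics, disp0_at_pts, f, B, cx, cy)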
Through feature matching, the mutually matched feature point pixel coordinates pts0_ics and pts1_ics and feature point world coordinates pts0_wcs and pts1_wcs between the earlier time t and the later time t1 are obtained.
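The patent does not name a particular detector or matcher; the sketch below uses ORB features with brute-force Hamming matching as one plausible choice for obtaining the mutually matched pixel coordinates pts0_ics and pts1_ics.

    import cv2
    import numpy as np

    def match_features(view0, view1):
        """Extract and match feature points between the views at t and t1."""
        orb = cv2.ORB_create(nfeatures=2000)   # detector choice is an assumption
        kp0, des0 = orb.detectAndCompute(view0, None)
        kp1, des1 = orb.detectAndCompute(view1, None)
        # crossCheck=True keeps only mutually consistent matches.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des0, des1), key=lambda m: m.distance)
        pts0_ics = np.float32([kp0[m.queryIdx].pt for m in matches])
        pts1_ics = np.float32([kp1[m.trainIdx].pt for m in matches])
        return pts0_ics, pts1_ics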
S3: and estimating a motion relation according to the matched feature point pixel coordinates and world coordinates to acquire the real physical distance of the camera moving in the optical axis direction. Specifically, the motion relation estimated from the feature point pixel coordinates at the first moment and at the second moment is: Ri × pts0_ics + Ti = pts1_ics, where Ri is a 3×3 rotation matrix and Ti is a 3×1 translation vector representing the motion from the first moment to the second moment. The motion relation estimated from the feature point world coordinates at the first moment and at the second moment is: Rw × pts0_wcs + Tw = pts1_wcs, where Rw is a 3×3 rotation matrix and Tw is a 3×1 translation vector representing the motion from time t to time t1.
Still taking the above usage scenario as an example, in the pose estimation, the motion relation can be estimated from the feature point pixel coordinates pts0_ics and pts1_ics as: Ri × pts0_ics + Ti = pts1_ics, where Ri is a 3×3 rotation matrix and Ti is a 3×1 translation vector, representing the motion from time t to time t1. From the feature point world coordinates pts0_wcs and pts1_wcs, the motion relation can be estimated as: Rw × pts0_wcs + Tw = pts1_wcs, where Rw is a 3×3 rotation matrix and Tw is a 3×1 translation vector, representing the motion from time t to time t1. Ti and Tw are then normalized simultaneously, i.e., T_i = a × Ti and T_w = b × Tw, where a and b are scale coefficients and T_i and T_w are both 3×1 homogeneous coordinate vectors. Let m = a/b be the motion scale, which represents the actual physical distance the camera moves in the optical axis direction from time t to time t1.
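One plausible realization of the two estimates, sketched under stated assumptions rather than as the patent's exact procedure: Ri and Ti can be recovered from the essential matrix of the pixel correspondences (translation known only up to scale), Rw and Tw from a rigid 3D-3D (Kabsch/SVD) alignment of the world coordinates, after which the metric scale m follows by comparing the two translations; K denotes the assumed camera intrinsic matrix.

    import cv2
    import numpy as np

    def estimate_motion(pts0_ics, pts1_ics, pts0_wcs, pts1_wcs, K):
        # 2D-2D: essential matrix gives Ri and a unit-norm Ti (scale-free).
        E, _ = cv2.findEssentialMat(pts0_ics, pts1_ics, K, method=cv2.RANSAC)
        _, Ri, Ti, _ = cv2.recoverPose(E, pts0_ics, pts1_ics, K)

        # 3D-3D: Kabsch (SVD) alignment gives Rw and a metric Tw.
        c0, c1 = pts0_wcs.mean(axis=0), pts1_wcs.mean(axis=0)
        H = (pts0_wcs - c0).T @ (pts1_wcs - c1)
        U, _, Vt = np.linalg.svd(H)
        Rw = Vt.T @ U.T
        if np.linalg.det(Rw) < 0:          # guard against a reflection solution
            Vt[-1] *= -1
            Rw = Vt.T @ U.T
        Tw = c1 - Rw @ c0

        # recoverPose normalizes Ti, so the motion scale m reduces to |Tw| / |Ti|.
        m = np.linalg.norm(Tw) / np.linalg.norm(Ti)
        return Ri, Ti.ravel(), Rw, Tw, m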
Furthermore, in order to improve accuracy, the method further includes a result self-verification step. Specifically, if the norm Terr = ||Ti - Tw|| is approximately equal to 0 and the norm Rerr = ||Ri - Rw|| is approximately equal to 0, the aforementioned pose estimation result is correct, and R = (Ri + Rw)/2 and T = m × (Ti + Tw)/2 are the finally estimated motion parameters. If the constraint is not satisfied, the pose calculation result is considered incorrect, an error flag is returned, all data are cleared, and the calculation is performed again.
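A sketch of this self-verification step follows; the tolerance eps is an assumed value, since the patent gives no numeric threshold, and the translations are brought to a common unit scale before differencing.

    import numpy as np

    def self_verify(Ri, Ti, Rw, Tw, m, eps=1e-2):
        """Cross-check the two pose estimates and fuse them if consistent."""
        Ti_u = Ti / np.linalg.norm(Ti)     # compare translations at unit scale
        Tw_u = Tw / np.linalg.norm(Tw)
        Terr = np.linalg.norm(Ti_u - Tw_u)
        Rerr = np.linalg.norm(Ri - Rw)
        if Terr < eps and Rerr < eps:
            R = (Ri + Rw) / 2.0            # fused rotation estimate
            T = m * (Ti_u + Tw_u) / 2.0    # fused, metrically scaled translation
            return True, R, T
        return False, None, None           # caller clears all data and retries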
In the above specific embodiment, the binocular stereo camera-based motion estimation method provided by the invention acquires the view information and binocular parallax information collected by the reference lens of the binocular stereo camera at two different moments; obtains matched feature point pixel coordinates and world coordinates at the two moments based on the view information and the binocular parallax information; and estimates a motion relation from the matched feature point pixel coordinates and world coordinates to obtain the real physical distance the camera moves along the optical axis. By introducing binocular parallax information, an estimate of the physical scale is obtained, the accuracy of motion estimation is further improved, and the prior-art problem of inaccurate vehicle information acquisition caused by large camera motion estimation errors is solved.
In addition to the above method, the present invention also provides a binocular stereo camera based motion estimation system, which in one embodiment, as shown in fig. 2, includes:
the binocular vision image obtaining unit 100 is configured to obtain view information and binocular parallax information collected by a reference lens of the binocular stereo camera at two different times, respectively. The reference lens may be a left-eye camera or a right-eye camera, and the left-eye camera is taken as the reference lens in this embodiment. The binocular vision image acquisition unit 100 is specifically configured to acquire first time view information and first time binocular parallax information acquired by the reference lens at a first time; acquiring second moment view information and second moment binocular parallax information acquired by the reference lens at a second moment; the first time and the second time have a preset time interval. For example, in an actual usage scenario, at time t, the left eye monocular view and the disparity information are acquired, and at time t1, the left eye monocular view and the disparity information are acquired.
And the coordinate extraction unit 200 is used for respectively acquiring matched feature point pixel coordinates and world coordinates at two different moments based on the view information and the binocular parallax information. Specifically, the coordinate extraction unit 200 is configured to perform first-moment feature point extraction on the first-moment view information to obtain the pixel coordinates of the feature points at the first moment; acquire the feature point depth information at the first moment; and calculate the three-dimensional world coordinates at the first moment based on that depth information. It likewise performs second-moment feature point extraction on the second-moment view information to obtain the pixel coordinates of the feature points at the second moment, acquires the feature point depth information at the second moment, and calculates the three-dimensional world coordinates at the second moment. Then, the feature point pixel coordinates and feature point world coordinates matched between the first moment and the second moment are obtained through feature matching.
Still taking the above specific use scenario as an example, after the left-eye monocular view and the disparity information at time t are obtained, feature point extraction is performed on the left-eye monocular view to obtain the pixel coordinates pts0_ics of the corresponding feature points (i.e., the feature point pixel coordinates at the first moment), and the three-dimensional world coordinates pts0_wcs (i.e., the three-dimensional world coordinates at the first moment) are calculated using the corresponding point depth information disp0 (i.e., the feature point depth information at the first moment).
After the left-eye monocular view and the disparity information at time t1 are obtained, feature point extraction is performed on the left-eye monocular view to obtain the pixel coordinates pts1_ics of the corresponding feature points (i.e., the feature point pixel coordinates at the second moment), and the three-dimensional world coordinates pts1_wcs (i.e., the three-dimensional world coordinates at the second moment) are calculated using the corresponding point depth information disp1 (i.e., the feature point depth information at the second moment).
Through feature matching, the mutually matched feature point pixel coordinates pts0_ics and pts1_ics and feature point world coordinates pts0_wcs and pts1_wcs between the earlier time t and the later time t1 are obtained.
And the motion estimation unit 300 is configured to estimate a motion relation according to the matched feature point pixel coordinates and world coordinates to obtain the real physical distance the camera moves in the optical axis direction. Specifically, the motion estimation unit 300 estimates the motion relation from the feature point pixel coordinates at the first moment and at the second moment as: Ri × pts0_ics + Ti = pts1_ics, where Ri is a 3×3 rotation matrix and Ti is a 3×1 translation vector representing the motion from the first moment to the second moment. It estimates the motion relation from the feature point world coordinates at the first moment and at the second moment as: Rw × pts0_wcs + Tw = pts1_wcs, where Rw is a 3×3 rotation matrix and Tw is a 3×1 translation vector representing the motion from time t to time t1.
Still taking the above usage scenario as an example, in the pose estimation, the motion relation can be estimated from the feature point pixel coordinates pts0_ics and pts1_ics as: Ri × pts0_ics + Ti = pts1_ics, where Ri is a 3×3 rotation matrix and Ti is a 3×1 translation vector, representing the motion from time t to time t1. From the feature point world coordinates pts0_wcs and pts1_wcs, the motion relation can be estimated as: Rw × pts0_wcs + Tw = pts1_wcs, where Rw is a 3×3 rotation matrix and Tw is a 3×1 translation vector, representing the motion from time t to time t1. Ti and Tw are then normalized simultaneously, i.e., T_i = a × Ti and T_w = b × Tw, where a and b are scale coefficients and T_i and T_w are both 3×1 homogeneous coordinate vectors. Let m = a/b be the motion scale, which represents the actual physical distance the camera moves in the optical axis direction from time t to time t1.
Furthermore, to improve accuracy, the system further includes a result self-verification unit, which is specifically configured to: if the norm Terr = ||Ti - Tw|| is approximately equal to 0 and the norm Rerr = ||Ri - Rw|| is approximately equal to 0, judge the aforementioned pose estimation result correct, with R = (Ri + Rw)/2 and T = m × (Ti + Tw)/2 as the finally estimated motion parameters. If the constraint is not satisfied, the pose calculation result is considered incorrect, an error flag is returned, all data are cleared, and the calculation is performed again.
In the above embodiment, the binocular stereo camera-based motion estimation system provided by the invention acquires the view information and binocular parallax information collected by the reference lens of the binocular stereo camera at two different moments; obtains matched feature point pixel coordinates and world coordinates at the two moments based on the view information and the binocular parallax information; and estimates a motion relation from the matched feature point pixel coordinates and world coordinates to obtain the real physical distance the camera moves along the optical axis. By introducing binocular parallax information, an estimate of the physical scale is obtained, the accuracy of motion estimation is further improved, and the prior-art problem of inaccurate vehicle information acquisition caused by large camera motion estimation errors is solved.
The present invention also provides an intelligent terminal, including: the device comprises a data acquisition device, a processor and a memory;
the data acquisition device is used for acquiring data; the memory is to store one or more program instructions; the processor is configured to execute one or more program instructions to perform the method as described above.
In correspondence with the above embodiments, embodiments of the present invention also provide a computer storage medium containing one or more program instructions, wherein the one or more program instructions are used by the binocular stereo camera-based motion estimation system to execute the method described above.
In an embodiment of the invention, the processor may be an integrated circuit chip having signal processing capability. The processor may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
The various methods, steps and logic blocks disclosed in the embodiments of the present invention may be implemented or performed. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the embodiments of the present invention may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in a decoding processor. The software module may be located in RAM, flash memory, ROM, PROM, EPROM, registers, or another storage medium well known in the art. The processor reads the information in the storage medium and completes the steps of the method in combination with its hardware.
The storage medium may be a memory, for example, which may be volatile memory or nonvolatile memory, or which may include both volatile and nonvolatile memory.
The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory.
The volatile memory may be a random access memory (RAM), which serves as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and direct Rambus RAM (DRRAM).
The storage media described in connection with the embodiments of the invention are intended to comprise, without being limited to, these and any other suitable types of memory.
Those skilled in the art will appreciate that the functionality described in the present invention may be implemented in a combination of hardware and software in one or more of the examples described above. When implemented in software, the corresponding functionality may be stored on, or transmitted over, a computer-readable medium as one or more instructions or code. Computer-readable media include both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer.
The above embodiments are only for illustrating the embodiments of the present invention and are not to be construed as limiting the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made on the basis of the embodiments of the present invention shall be included in the scope of the present invention.

Claims (10)

1. A binocular stereo camera-based motion estimation method is characterized by comprising the following steps:
respectively acquiring view information and binocular parallax information acquired by a reference lens of the binocular stereo camera at two different moments;
respectively acquiring matched feature point pixel coordinates and world coordinates at two different moments based on the view information and the binocular parallax information;
and estimating a motion relation according to the matched feature point pixel coordinates and world coordinates to acquire the real physical distance of the camera moving in the optical axis direction.
2. The motion estimation method according to claim 1, wherein the obtaining of the view information and the binocular disparity information collected by the reference lens of the binocular stereo camera at two different times respectively comprises:
acquiring first time view information and first time binocular parallax information acquired by a reference lens at a first time;
acquiring second moment view information and second moment binocular parallax information acquired by the reference lens at a second moment;
the first time and the second time have a preset time interval.
3. The motion estimation method according to claim 2, wherein the obtaining pixel coordinates and world coordinates of the feature point matched at two different times based on the view information and the binocular disparity information respectively comprises:
extracting a first time characteristic point of the first time view information to obtain a pixel coordinate of the first time characteristic point;
acquiring the depth information of the feature points at the first moment;
and calculating to obtain the three-dimensional world coordinate of the first moment based on the depth information of the feature point of the first moment.
4. The motion estimation method according to claim 3, wherein the obtaining of the matched feature point pixel coordinates and world coordinates at two different times based on the view information and the binocular disparity information, respectively, further comprises:
extracting second moment feature points of the second moment view information to obtain pixel coordinates of the second moment feature points;
acquiring the depth information of the feature points at the second moment;
and calculating to obtain the three-dimensional world coordinate at the second moment based on the depth information of the feature points at the second moment.
5. The motion estimation method according to claim 4, wherein the obtaining of the matched feature point pixel coordinates and world coordinates at two different times based on the view information and the binocular disparity information respectively further comprises:
and obtaining the feature point pixel coordinates and the feature point world coordinates which are matched with each other in the first time and the second time through feature matching.
6. The motion estimation method according to claim 5, wherein the estimating a motion relationship according to the matched feature point pixel coordinates and world coordinates to obtain a true physical distance that the camera moves in the optical axis direction specifically comprises:
estimating a motion relation according to the pixel coordinates of the characteristic points at the first moment and the pixel coordinates of the characteristic points at the second moment as follows:
Ri × pts0_ics + Ti = pts1_ics
where Ri is a 3×3 rotation matrix and Ti is a 3×1 translation vector representing the motion from the first moment to the second moment.
7. The motion estimation method according to claim 6, wherein the estimating a motion relationship based on the matched feature point pixel coordinates and world coordinates to obtain a true physical distance that the camera moves in the optical axis direction further comprises:
estimating a motion relation according to the world coordinates of the feature points at the first moment and the world coordinates of the feature points at the second moment as follows:
Rw × pts0_wcs + Tw = pts1_wcs
where Rw is a 3×3 rotation matrix and Tw is a 3×1 translation vector representing the motion from time t to time t1.
8. A binocular stereo camera based motion estimation system, the system comprising:
the binocular vision image acquisition unit is used for respectively acquiring view information and binocular parallax information acquired by a reference lens of the binocular stereo camera at two different moments;
the coordinate extraction unit is used for respectively acquiring matched feature point pixel coordinates and world coordinates at two different moments based on the view information and the binocular parallax information;
and the motion estimation unit is used for estimating a motion relation according to the matched characteristic point pixel coordinate and the world coordinate so as to obtain the real physical distance of the camera moving in the optical axis direction.
9. An intelligent terminal, characterized in that, intelligent terminal includes: the device comprises a data acquisition device, a processor and a memory;
the data acquisition device is used for acquiring data; the memory is to store one or more program instructions; the processor, configured to execute one or more program instructions to perform the method of any of claims 1-7.
10. A computer-readable storage medium having one or more program instructions embodied therein for performing the method of any of claims 1-7.
CN202110504583.1A, priority and filing date 2021-05-10: Binocular stereo camera-based motion estimation method and system and intelligent terminal. Status: Pending. Publication: CN114359329A (en).

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202110504583.1A | 2021-05-10 | 2021-05-10 | Binocular stereo camera-based motion estimation method and system and intelligent terminal (published as CN114359329A)


Publications (1)

Publication Number | Publication Date
CN114359329A (en) | 2022-04-15

Family

ID=81095399

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202110504583.1A (pending, published as CN114359329A) | Binocular stereo camera-based motion estimation method and system and intelligent terminal | 2021-05-10 | 2021-05-10

Country Status (1)

Country Link
CN (1) CN114359329A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination