CN112033352A - Robot with multiple cameras for ranging and visual ranging method

Info

Publication number
CN112033352A
CN112033352A (application CN202010905113.1A)
Authority
CN
China
Prior art keywords
camera, image, feature point, preset feature point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010905113.1A
Other languages
Chinese (zh)
Other versions
CN112033352B (en)
Inventor
赖钦伟
肖刚军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuhai Amicro Semiconductor Co Ltd
Original Assignee
Zhuhai Amicro Semiconductor Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuhai Amicro Semiconductor Co Ltd filed Critical Zhuhai Amicro Semiconductor Co Ltd
Priority to CN202010905113.1A
Publication of CN112033352A
Application granted
Publication of CN112033352B
Legal status: Active
Anticipated expiration

Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01C — MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C3/00 — Measuring distances in line of sight; Optical rangefinders
    • G01C3/10 — Measuring distances in line of sight; Optical rangefinders using a parallactic triangle with variable angles and a base of fixed length in the observation station, e.g. in the instrument
    • G01C3/18 — Measuring distances in line of sight; Optical rangefinders using a parallactic triangle with variable angles and a base of fixed length in the observation station, e.g. in the instrument, with one observation point at each end of the base

Abstract

The invention discloses a robot with multiple cameras for ranging and a visual ranging method. Three or more cameras that do not lie on the same preset straight line are mounted on the bearing base of the robot, which solves the problem that the binocular cameras of the prior art cannot detect the depth information of parallel-line scenes (a threshold, a step edge, an edge contour line). Compared with the parallel binocular stereo vision model of the prior art, the disclosed visual ranging method combines the binocular ranging result of a first camera and a second camera lying on the same preset straight line with the binocular ranging result of one camera selected from those two and a third camera that is not on that straight line, and computes an optimal depth value from the two results. This avoids the misjudgment of the depth distance that occurs when the binocular ranging of the first and second cameras yields a first depth value of 0, and improves the accuracy of distance detection.

Description

Robot with multiple cameras for ranging and visual ranging method
Technical Field
The invention belongs to the technical field of binocular vision ranging, and particularly relates to a multi-camera ranging robot and a vision ranging method.
Background
Distance measurement, and especially the acquisition of three-dimensional information, is a necessary function for many robots to perceive the outside world. Binocular stereo vision ranging has the advantages of being non-contact, automatic, and harmless to human eyes. In the actual ranging process, however, the two cameras of a robot may capture step edge contour lines and threshold lines that are parallel to the cameras' baseline, and the depth of an object parallel to that baseline cannot be computed from image lines parallel to the baseline, because such lines produce no usable disparity.
Disclosure of Invention
The invention adopts three or more cameras that are not arranged on the same straight line to solve the problem that a binocular camera cannot detect the depth of lines parallel to its baseline: at least one camera lies off the straight line along which binocular vision ranging is normally performed, which improves the precision of distance detection. The specific technical scheme is as follows:
the utility model provides a robot of many cameras range finding, this robot is including bearing the base, should bear the at least three camera that does not distribute on same predetermined straight line of pedestal mounting. Compared with the parallel binocular stereo vision model in the prior art, the technical scheme has the advantages that at least two cameras are distributed on the same straight line, one camera is not necessarily distributed on the straight line, the problem that the binocular camera in the prior art cannot detect the depth information of a parallel line scene (a threshold, a step edge and an object horizontal edge contour line) is solved, and the distance detection precision is improved.
Further, on the bearing base, at least two cameras are installed on the same preset straight line, and at least one camera is arranged above and/or below that preset straight line. This ensures that the two cameras on the same preset straight line can perform binocular vision ranging normally, and that when those two cameras cannot detect a parallel line, a camera that is geometrically associated with them but not on the same preset straight line can complete the binocular vision ranging of the parallel line.
Furthermore, on the bearing base, one camera is arranged vertically above and one vertically below the center point of the connecting line of the two cameras installed on the preset straight line. Compared with the parallel binocular stereoscopic vision model of the prior art, this improves the efficiency of matching feature points along the epipolar lines of the cameras.
Furthermore, the preset straight line is a horizontal line, which makes binocular vision ranging convenient to implement and simplifies the calculation of depth information.
Further, two or more cameras with the same horizontal acquisition direction are provided on the preset straight line, such that on this line the spacing between one pair of cameras differs from the spacing between another pair; that is, the baseline lengths of the two pairs differ. The robot is thereby compatible with both long-range and short-range visual ranging scenes.
Further, on the preset straight line there are a left camera and a right camera horizontally placed one baseline apart, and the camera installed above and/or below the preset straight line is located on the perpendicular of the baseline between the left camera and the right camera. This makes it convenient to construct a parallel binocular stereoscopic vision model to calculate depth values and improves computational efficiency.
A visual ranging method based on the above robot with multiple cameras for ranging comprises the following steps: searching, from an image shot by a first camera and an image shot by a second camera, for a transverse matching feature point pair based on the same space point of an object to be measured, solving a first parallax of the transverse matching feature point pair, and then constructing a triangle geometric relationship that combines the baseline distance between the first camera and the second camera with the camera focal length to calculate a first depth value; selecting one camera from the first camera and the second camera as a reference camera, searching, from an image shot by the reference camera and an image shot by a third camera, for a longitudinal matching feature point pair based on the same space point of the object to be measured, solving a second parallax of the longitudinal matching feature point pair, and then constructing a triangle geometric relationship that combines the baseline distance between the reference camera and the third camera with the camera focal length to calculate a second depth value; judging whether the first parallax is 0; if so, determining the second depth value as the optimal depth value representing the distance between the space point of the object to be measured and the robot, and otherwise processing the first depth value and the second depth value according to the magnitude relationship of the first parallax and the second parallax to obtain the optimal depth value. Here, the two cameras placed one baseline apart on the same preset straight line are respectively set as the first camera and the second camera, and a camera above or below the preset straight line is set as the third camera.
Compared with the parallel binocular stereoscopic vision model of the prior art, this technical scheme uses both the binocular ranging result of the first and second cameras on the same preset straight line and the binocular ranging result of the reference camera and the third camera that is not on that line to calculate the optimal depth value. It overcomes the problem that the binocular camera of the prior art cannot detect the depth information of a space point of the object to be measured in a parallel-line scene (a threshold, a step edge parallel to the baseline), avoids the misjudgment of the depth distance that arises when the binocular ranging of the first and second cameras yields a first depth value of 0, and improves the precision of distance detection.
Further, the method for processing the first depth value and the second depth value according to the magnitude relationship between the first parallax and the second parallax to obtain the optimal depth value includes: when the first parallax is larger than the second parallax, determining the optimal depth value as the first depth value; when the second parallax is larger than the first parallax, determining the optimal depth value as the second depth value. The optimal depth value so chosen is closer to the actual distance between the space point of the object to be measured and the robot (as represented by the pixel value in the depth map).
Further, the method for processing the first depth value and the second depth value according to the magnitude relationship between the first parallax and the second parallax to obtain the optimal depth value includes: calculating the first parallax's proportion of the sum of the first parallax and the second parallax, to serve as the averaging weight assigned to the first depth value; calculating the second parallax's proportion of the same sum, to serve as the averaging weight assigned to the second depth value; and then performing a weighted average of the first depth value and the second depth value using the first proportion and the second proportion to obtain the optimal depth value. In this technical scheme, the depth values obtained by three cameras that are not all on the same preset straight line are fused by a weighted average, so that the optimal depth value has better stability and the influence of environmental factors is weakened.
Further, when the first parallax is 0, the robot recognizes that the space point of the object to be measured lies on a line parallel to the baseline between the first camera and the second camera, so the optimal depth value is derived from the perpendicular distance between that space point and the connecting line between the projection center of the third camera and the projection center of the reference camera (the projection center of the first camera or of the second camera).
Further, the method for searching out the transverse matching feature point pair based on the same space point of the object to be measured includes: setting the image shot by the first camera as the left image and the image shot by the second camera as the right image; first, taking the left image as the reference image and the right image as the image to be matched, selecting a first preset feature point in the left image and searching the right image, based on the epipolar constraint, for a second preset feature point matching it; then, taking the right image as the reference image and the left image as the image to be matched, selecting the second preset feature point in the right image and, when a first preset feature point matching it is found in the left image based on the epipolar constraint, forming a transverse matching feature point pair from the first preset feature point and the second preset feature point; the first preset feature point is the projection of the same space point of the object to be measured on the left image, and the second preset feature point is its projection on the right image. Alternatively: setting the image shot by the first camera as the reference image and the image shot by the second camera as the image to be matched, selecting a first preset feature point in the reference image, searching the image to be matched, based on the epipolar constraint, for a second preset feature point matching it, and forming a transverse matching feature point pair from the two; here the first preset feature point is the projection of the same space point of the object to be measured in the reference image, and the second preset feature point is its projection in the image to be matched. This narrows the matching range between feature points of the left image shot by the first camera and feature points of the right image shot by the second camera, and improves the efficiency of transverse (horizontal) feature point matching.
Further, the method for searching out the longitudinal matching feature point pair based on the same space point of the object to be measured includes: setting the image shot by the reference camera as the lower image and the image shot by the third camera as the upper image; first, taking the lower image as the reference image and the upper image as the image to be matched, selecting a third preset feature point in the lower image and searching the upper image, based on the epipolar constraint, for a fourth preset feature point matching it; then, taking the upper image as the reference image and the lower image as the image to be matched, selecting the fourth preset feature point in the upper image and, when a third preset feature point matching it is found in the lower image based on the epipolar constraint, forming a longitudinal matching feature point pair from the third preset feature point and the fourth preset feature point; the third preset feature point is the projection of the same space point of the object to be measured on the lower image, and the fourth preset feature point is its projection on the upper image. Alternatively: setting the image shot by the reference camera as the reference image and the image shot by the third camera as the image to be matched, selecting a third preset feature point in the reference image, searching the image to be matched, based on the epipolar constraint, for a fourth preset feature point matching it, and forming a longitudinal matching feature point pair from the two; here the third preset feature point is the projection of the same space point of the object to be measured in the reference image, and the fourth preset feature point is its projection in the image to be matched. This narrows the matching range between feature points of the image to be matched shot by the third camera and feature points of the reference image shot by the reference camera (the first camera or the second camera), and improves the efficiency of feature point matching in the longitudinal direction.
Drawings
Fig. 1 is a schematic structural diagram of a multi-camera ranging robot in an embodiment of the present invention.
Fig. 2 is a flowchart of the visual ranging method based on the robot shown in Fig. 1 according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described in detail below with reference to the accompanying drawings. In the description of the present invention, it is to be understood that the terms "center," "length," "width," "thickness," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," "axial," "radial," "circumferential," and the like indicate orientations or positional relationships based on those shown in the drawings, are used only for convenience in describing the invention and to simplify the description, and do not indicate or imply that the device or element referred to must have a particular orientation or be constructed and operated in a particular orientation; they are therefore not to be construed as limiting the invention.
It is noted that relational terms such as "first" and "second" may be used solely to distinguish one entity or action from another without necessarily requiring or implying any actual such relationship or order between those entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises it.
The embodiment of the invention discloses a robot for multi-camera ranging, which comprises a bearing base 101 and driving wheels 102. The bearing base 101 is provided with at least three cameras that are not on the same preset straight line; in this embodiment, on the bearing base of the robot at least two cameras are distributed on the same straight line and one camera is not distributed on that line. Compared with the parallel binocular stereo vision model of the prior art, mounting three or more cameras that are not on the same preset straight line on the bearing base 101 solves the problem that the binocular cameras of the prior art cannot detect the depth information of parallel-line scenes (threshold and step edge areas) and improves the precision of distance detection.
In the embodiment of the present invention, at least two cameras are installed on the same preset straight line on the bearing base 101, and at least one camera is arranged above and/or below that preset straight line. This ensures that the two cameras on the same preset straight line of the robot can perform binocular vision ranging normally, and that when those two cameras cannot detect a parallel line, a camera with a geometric position association to them can be called on to complete the binocular vision ranging of the parallel line. The geometric position association here includes a camera that is not set on the same preset straight line but that can perform binocular vision ranging together with a camera installed on that line, so as to establish a new binocular stereoscopic vision model.
Preferably, on the bearing base, one camera is arranged vertically above and one vertically below the center point of the connecting line of two of the cameras installed on the preset straight line, so that 4 cameras arranged in a cross structure are formed on the bearing base. Compared with the parallel binocular stereo vision model of the prior art, this improves the efficiency of matching feature points along the epipolar line of each camera.
Preferably, the preset straight line is a horizontal line in the horizontal plane of the bearing base 101, which makes binocular vision ranging convenient to implement and simplifies the calculation of depth information.
As an embodiment, as shown in Fig. 1, two or more cameras with the same horizontal acquisition direction are provided on the preset straight line (understood as a horizontal line), such that the spacing between one pair of cameras differs from the spacing between another pair; equivalently, the baseline length of one pair of cameras distributed on the same preset straight line differs from that of another pair. The robot is thereby compatible with both long-range and short-range visual ranging scenes: the larger the baseline length of two cameras distributed on the same preset straight line, the better suited they are to measuring object depth information in a farther scene, and the smaller that baseline length, the better suited they are to measuring object depth information in a closer scene.
As an example, as shown in Fig. 1, on the preset straight line there are a left camera A and a right camera B horizontally placed one baseline apart, and a camera C installed above the preset straight line is located on the perpendicular of the baseline between the left camera A and the right camera B, that is, on the perpendicular of the connecting line of the left camera A and the right camera B. The internal parameters of the left camera A, the right camera B and the camera C are the same, and their optical axes are parallel to one another and perpendicular to the baseline between the left camera A and the right camera B, so that the left image shot by the left camera A and the right image shot by the right camera B are coplanar and differ only by a horizontal parallax; the image shot by camera C, however, is not necessarily coplanar with the image shot by camera A or with the image shot by camera B. The distance along the connecting line between the projection center of the left camera A and the projection center of the right camera B is the baseline distance. This makes it convenient to construct a parallel binocular stereoscopic vision model to calculate depth values and improves computational efficiency.
It should be noted that, in the foregoing embodiments, the bearing base of the robot and the cameras mounted on it together constitute a visual ranging module.
The invention further discloses a visual ranging method for the multi-camera ranging robot of the foregoing embodiments. In that robot, the first camera and the second camera are two cameras selected on the same preset straight line and placed one baseline apart according to the size of the robot's actual ranging scene range, since the camera baseline constrains the measurement range: the larger the baseline (the distance between the two cameras), the farther the measurement range; the smaller the baseline, the closer the measurement range. The baseline thus limits, to some extent, the measurement range of the depth camera.
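To make this baseline-range tradeoff concrete, the standard parallel-stereo triangulation relation can be written out (a worked illustration added for clarity, not a formula reproduced from the patent; Z is the depth, f the focal length in pixels, B the baseline, d the disparity and Δd the disparity matching error):

```latex
Z = \frac{f\,B}{d},
\qquad
\Delta Z \;\approx\; \left|\frac{\partial Z}{\partial d}\right|\,\Delta d
\;=\; \frac{Z^{2}}{f\,B}\,\Delta d .
```

For example, with f = 500 px and Δd = 1 px at Z = 2 m, a baseline of B = 0.1 m gives ΔZ ≈ 0.08 m while B = 0.05 m gives ΔZ ≈ 0.16 m; doubling the baseline halves the depth error at a given distance, which is why a longer baseline suits farther scenes and a shorter one closer scenes.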
Of course, a plurality of cameras with the same image acquisition direction can be installed so that the baseline lengths of any two of them differ, making the robot compatible with visual ranging scenes at different distances. A camera above the preset straight line or a camera below it is set as the third camera; alternatively, one or more cameras are arranged above and/or below the preset straight line. This improves the efficiency of feature point matching, enriches the matching feature points searched out to participate in the depth value computation, increases the probability of overcoming the prior-art binocular camera's inability to detect the depth information of parallel-line scenes (threshold, corridor, step edge areas), and improves the precision of ranging detection.
As an embodiment, as shown in Fig. 2, the visual ranging method includes the following steps:
Step S101: search, from an image shot by the first camera and an image shot by the second camera, for a transverse matching feature point pair based on the same space point of the object to be measured; solve the first parallax of the transverse matching feature point pair; then construct a triangle geometric relationship combining the baseline distance between the first camera and the second camera with the camera focal length to calculate the first depth value; then proceed to step S102.
Preferably, the first camera and the second camera are horizontally arranged, with optical axes parallel to each other and perpendicular to the baseline, and with the camera focal length of the first camera equal to that of the second camera. When the first camera and the second camera simultaneously shoot the same space point of the object to be measured, each acquires an image of that space point; two matched feature points based on that space point are then searched out along the same epipolar line, and their coordinates are subtracted in the same direction to obtain the horizontal parallax. By the triangle geometric relationship, the ratio of the baseline length between the first camera and the second camera to the horizontal parallax equals the ratio of the depth of the space point to the camera focal length; this is the classical binocular vision parallax ranging principle of a dual-camera system placed in the standard configuration. Therefore, in step S101, the first depth value, i.e. the distance between the space point of the object to be measured and the baseline between the first camera and the second camera, can be calculated from the baseline distance, the first parallax and the camera focal length through this proportional relationship, and a depth map can be obtained by using the depth value Z as the pixel value of the corresponding pixel in the image. If the space point lies on a line parallel to the baseline between the first camera and the second camera (such as a step edge), the transverse matching feature point pair based on that space point may coincide on the same epipolar line, making an accurate parallax difficult to obtain; in this case the obtained parallax is generally zero.
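As a minimal sketch of this triangulation (illustrative Python added by the editor, not code from the patent; the function name, its arguments and the rectified-pair assumption are all assumptions):

```python
from typing import Optional

def depth_from_disparity(x_left: float, x_right: float,
                         baseline_m: float, focal_px: float) -> Optional[float]:
    """First depth value of step S101: Z = f * B / d for a rectified,
    horizontally placed camera pair. Returns None when the disparity is 0,
    which is exactly the parallel-line case the patent guards against."""
    disparity = x_left - x_right  # coordinates subtracted in the same direction
    if disparity == 0:
        return None  # depth undefined; fall back to the vertical camera pair
    return focal_px * baseline_m / disparity
```

For instance, with a 0.1 m baseline, a 500 px focal length and a 25 px disparity, the sketch returns 500 * 0.1 / 25 = 2 m.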
Step S102: select one of the first camera and the second camera as the reference camera; search, from an image shot by the reference camera and an image shot by the third camera, for a longitudinal matching feature point pair based on the same space point of the object to be measured; solve the second parallax of the longitudinal matching feature point pair; then construct a triangle geometric relationship combining the baseline distance between the reference camera and the third camera with the camera focal length to calculate the second depth value; then proceed to step S103.
Preferably, the optical axis of the reference camera and the optical axis of the third camera are parallel to each other and perpendicular to the baseline between the reference camera and the third camera, and the camera focal length of the reference camera equals that of the third camera. When the reference camera and the third camera simultaneously shoot the same space point of the object to be measured as in step S101, then, following the classical binocular vision parallax ranging principle of a dual-camera system placed in the standard configuration, each camera acquires an image of that space point, which projects to one point in the image shot by the reference camera and one point in the image shot by the third camera. Matched feature points are searched for these two projection points along the epipolar direction to determine the longitudinal matching feature point pair based on the same space point, and the coordinates of the two matched feature points are subtracted in the same direction to obtain the second parallax, from which the second depth value, i.e. the distance between this space point and the baseline between the reference camera and the third camera, is calculated. The second parallax here is not a horizontal parallax and is not affected by lines parallel to the baseline between the first camera and the second camera of step S101: because the epipolar plane formed by this space point, the reference camera and the third camera is perpendicular to the baseline between the first camera and the second camera, an edge line of the object to be measured that is parallel to the baseline between the first camera and the second camera will not be parallel to the baseline between the reference camera and the third camera in the imaging planes of the third camera and the reference camera.
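The vertical pair of step S102 can be sketched the same way, with the disparity taken between row coordinates instead of column coordinates (again an illustrative, assumption-laden sketch rather than the patent's code):

```python
from typing import Optional

def depth_from_vertical_disparity(y_ref: float, y_third: float,
                                  baseline_v_m: float,
                                  focal_px: float) -> Optional[float]:
    """Second depth value of step S102: the same triangulation, but along
    the vertical epipolar direction, so an edge parallel to the horizontal
    baseline still yields a usable (non-zero) disparity here."""
    disparity = y_ref - y_third
    if disparity == 0:
        return None
    return focal_px * baseline_v_m / disparity
```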
In the foregoing steps, the optical axes of the two cameras of each pair are parallel, the two imaging planes are coplanar, and the epipolar lines are aligned. Any point in one image and its corresponding point in the other image therefore have the same line number, and the corresponding point can be matched by a one-dimensional search along that line alone. Binocular matching (the transverse matching feature point pair searched in step S101 and the longitudinal matching feature point pair searched in step S102) matches the corresponding projections of the same scene in the imaging views of the two cameras, the purpose being to obtain a disparity map.
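The one-dimensional search described here can be sketched as brute-force block matching along a single scanline (a hypothetical Python helper, assuming grayscale rows and an interior pixel; production systems would add subpixel refinement and robust costs):

```python
import numpy as np

def match_on_epipolar_line(ref_row: np.ndarray, tgt_row: np.ndarray,
                           x: int, patch: int = 5, max_disp: int = 64) -> int:
    """For the patch centred at column x of the reference scanline, return
    the best-matching column of the corresponding scanline in the other
    image by minimising the sum of absolute differences (SAD)."""
    half = patch // 2
    template = ref_row[x - half: x + half + 1].astype(np.int32)
    best_x, best_cost = x, float("inf")
    for cand in range(max(half, x - max_disp), x + 1):  # 1-D search on the line
        window = tgt_row[cand - half: cand + half + 1].astype(np.int32)
        cost = int(np.abs(template - window).sum())
        if cost < best_cost:
            best_x, best_cost = cand, cost
    return best_x  # the disparity at x is then x - best_x
```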
Step S103: judge whether the first parallax is 0; if yes, go to step S104, otherwise go to step S105. In this embodiment, when the first parallax is 0, the first depth value is also 0, and the robot recognizes that the space point of the object to be measured lies on a line parallel to the baseline between the first camera and the second camera, such a parallel line being a horizontal edge line of a threshold or step.
Step S104: determine the second depth value as the optimal depth value representing the distance between the space point of the object to be measured and the robot. In this case the first parallax of the transverse matching feature point pair searched in step S101 from the images shot by the first and second cameras is 0, and this parallax obviously cannot be used to obtain the depth information between this space point and the robot: because the baseline between the first camera and the second camera is parallel to the edge line on which this space point lies, the first and second cameras cannot acquire mutually matching projection points, only lines, even though the robot may still detect and identify the step edge contour line or threshold line parallel to their baseline. Only the second depth value obtained in step S102 can therefore serve as the optimal depth value representing the distance between the space point of the object to be measured and the robot.
Step S105: process the first depth value and the second depth value according to the magnitude relationship between the first parallax and the second parallax to obtain the optimal depth value. The purpose of this step is to obtain a more accurate and stable optimal depth value by processing the first and second depth values acquired through the cooperation of three cameras that are not on the same horizontal or preset straight line.
Compared with the parallel binocular stereoscopic vision model of the prior art, the foregoing steps use the binocular ranging result of the first and second cameras on the same preset straight line together with the binocular ranging result of the reference camera and the third camera that is not on that line to calculate the optimal depth value. This overcomes the problem that the binocular camera of the prior art cannot detect the depth information of a space point of the object to be measured in a parallel-line scene (a threshold, a step edge parallel to the baseline), avoids the misjudgment of the depth distance produced when the binocular ranging of the first and second cameras yields a first depth value of 0, and improves the precision of distance detection.
As an embodiment, the method for processing the first depth value and the second depth value according to the magnitude relationship between the first parallax and the second parallax to obtain the optimal depth value includes: when the first parallax is larger than the second parallax, determining the optimal depth value as the first depth value; when the second parallax is larger than the first parallax, determining the optimal depth value as the second depth value. The optimal depth value obtained in step S105 is then closer to the actual distance between the space point of the object to be measured and the robot (as represented by the pixel value in the depth map). If a plurality of third cameras above and/or below the preset straight line each participate, together with the reference camera, in the search for longitudinal matching feature point pairs based on the same space point of the object to be measured according to the classical binocular vision parallax ranging principle, a plurality of parallaxes and corresponding depth values are obtained and form a candidate depth information mapping table; the depth value corresponding to the largest parallax among all the parallaxes in this table and the first parallax is then selected as the optimal depth value.
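A sketch of this selection rule, generalized to any number of camera pairs as just described (the function and variable names are the editor's assumptions):

```python
def select_best_depth(candidates: list) -> float:
    """candidates holds (parallax, depth) tuples: the entry from the
    horizontal pair plus one entry per vertical pair in the candidate
    depth information mapping table. The depth belonging to the largest
    parallax is returned, a larger parallax being measured more reliably."""
    parallax, depth = max(candidates, key=lambda entry: abs(entry[0]))
    return depth
```

With only the horizontal and one vertical pair, candidates would be [(first_parallax, first_depth), (second_parallax, second_depth)].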
As an embodiment, the method for processing the first depth value and the second depth value according to the magnitude relationship between the first parallax and the second parallax to obtain the optimal depth value includes: calculating the first parallax's proportion of the sum of the first parallax and the second parallax, to serve as the averaging weight assigned to the first depth value; calculating the second parallax's proportion of the same sum, to serve as the averaging weight assigned to the second depth value; and then performing a weighted average of the first depth value and the second depth value using the first proportion and the second proportion to obtain the optimal depth value. If a plurality of third cameras each participate, together with the reference camera, in the search for longitudinal matching feature point pairs based on the same space point of the object to be measured according to the classical binocular vision parallax ranging principle, so that a plurality of parallaxes and corresponding depth values form a candidate depth information mapping table, then the proportion of each parallax (including the first parallax) in the table is calculated, the corresponding depth values participate in the weighted average with those proportions as weights, and the resulting weighted average is the optimal depth value. In this embodiment, the depth values obtained by three or more cameras that are not all on the same preset straight line (at least two of which are necessarily distributed on the same straight line) are fused by a weighted average, so that the optimal depth value has better stability and the influence of environmental factors is weakened.
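The weighted-average fusion of this embodiment admits an equally short sketch (same hedges as above):

```python
def fuse_depths_by_parallax(candidates: list) -> float:
    """Disparity-weighted average: each depth value's weight is its
    parallax's proportion of the sum of all parallaxes, i.e. the
    first-proportion/second-proportion scheme described above. Zero
    parallaxes are dropped, matching the step S103/S104 handling."""
    weighted = [(abs(p), z) for p, z in candidates if abs(p) > 0]
    total = sum(p for p, _ in weighted)
    return sum((p / total) * z for p, z in weighted)
```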
In the foregoing embodiment, the method for searching out the transverse matching feature point pair based on the same space point of the object to be measured includes: setting the image shot by the first camera as the left image and the image shot by the second camera as the right image; first, taking the left image as the reference image and the right image as the image to be matched, selecting a first preset feature point in the left image and searching the right image, based on the epipolar constraint, for a second preset feature point matching it; then, taking the right image as the reference image and the left image as the image to be matched, selecting the second preset feature point in the right image and, when a first preset feature point matching it is found in the left image based on the epipolar constraint, forming a transverse matching feature point pair from the first and second preset feature points. Alternatively: setting the image shot by the first camera as the reference image and the image shot by the second camera as the image to be matched, selecting a first preset feature point in the reference image, searching the image to be matched, based on the epipolar constraint, for a second preset feature point matching it, and forming a transverse matching feature point pair from the two; here the first preset feature point is the projection of the same space point of the object to be measured in the reference image, and the second preset feature point is its projection in the image to be matched. This bidirectional (left-right consistency) check narrows the matching range between feature points of the left image shot by the first camera and feature points of the right image shot by the second camera, and improves the efficiency of transverse (horizontal) feature point matching.
It should be noted that the first preset feature point is the projection of the same space point of the object to be measured on the left image, and the second preset feature point is its projection on the right image. When the left image and the right image are parallel, the first and second preset feature points lie on the same epipolar line, so along the epipolar direction of the left image there exists in the right image a second preset feature point matching the first preset feature point of the left image; the corresponding first parallax is obtained by subtracting the two feature points' coordinates in the same direction, from which the first depth value follows. When the left image and the right image are not parallel, the first and second preset feature points are not on the same epipolar line; the left and right images are then projectively transformed onto the same plane through a homography matrix, so that along the horizontal epipolar direction of the transformed left image there exists in the transformed right image a second preset feature point matching the first preset feature point, and the corresponding first parallax is obtained by subtracting the two feature points' coordinates along the horizontal epipolar direction, from which the first depth value follows.
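The projective re-alignment described in this paragraph could be sketched with OpenCV as follows; this is a simplified stand-in for full stereo rectification, and the choice of ORB features, RANSAC and every identifier below are the editor's assumptions rather than the patent's method:

```python
import cv2
import numpy as np

def warp_to_reference_plane(img_ref: np.ndarray, img_tgt: np.ndarray) -> np.ndarray:
    """Estimate a homography from matched feature points and warp the
    second image onto the plane of the first, after which matching can
    proceed along horizontal epipolar lines as in the parallel case."""
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(img_ref, None)
    kp2, des2 = orb.detectAndCompute(img_tgt, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # maps tgt -> ref plane
    h, w = img_ref.shape[:2]
    return cv2.warpPerspective(img_tgt, H, (w, h))
```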
In the foregoing embodiment, the method for searching out the longitudinal matching feature point pair based on the same space point of the object to be measured includes: setting the image shot by the reference camera as the lower image and the image shot by the third camera as the upper image; first, taking the lower image as the reference image and the upper image as the image to be matched, selecting a third preset feature point in the lower image and searching the upper image, based on the epipolar constraint, for a fourth preset feature point matching it; then, taking the upper image as the reference image and the lower image as the image to be matched, selecting the fourth preset feature point in the upper image and, when a third preset feature point matching it is found in the lower image based on the epipolar constraint, forming a longitudinal matching feature point pair from the third and fourth preset feature points; the third preset feature point is the projection of the same space point of the object to be measured on the lower image, and the fourth preset feature point is its projection on the upper image. Alternatively: setting the image shot by the reference camera as the reference image and the image shot by the third camera as the image to be matched, selecting a third preset feature point in the reference image, searching the image to be matched, based on the epipolar constraint, for a fourth preset feature point matching it, and forming a longitudinal matching feature point pair from the two; here the third preset feature point is the projection of the same space point of the object to be measured in the reference image, and the fourth preset feature point is its projection in the image to be matched. This narrows the matching range between feature points of the image to be matched shot by the third camera and feature points of the reference image shot by the reference camera (the first camera or the second camera), and improves the efficiency of feature point matching in the longitudinal direction.
Similarly, when the lower image and the upper image are parallel, the third and fourth preset feature points lie on the same epipolar line, so along the epipolar direction of the lower image there exists in the upper image a fourth preset feature point matching the third preset feature point; the corresponding second parallax is obtained by subtracting the two feature points' coordinates in the same direction, from which the second depth value follows. When the lower image and the upper image are not parallel and the third and fourth preset feature points are not on the same epipolar line, the lower and upper images are projectively transformed onto the same plane through a homography matrix, so that along the epipolar direction of the transformed lower image there exists in the transformed upper image a fourth preset feature point matching the third preset feature point, and the corresponding second parallax is obtained by subtracting the two feature points' coordinates along the transformed epipolar direction, from which the second depth value follows.
Finally, it should be noted that the above embodiments are intended only to illustrate the technical solution of the invention, not to limit it. Although the invention has been described in detail with reference to preferred embodiments, those skilled in the art will understand that modifications to the specific embodiments of the invention, or equivalent substitutions of some of the technical features, may be made without departing from the spirit of the invention, and all such changes are intended to be covered by the appended claims.

Claims (12)

1. A robot with multiple cameras for ranging, the robot comprising a bearing base, characterized in that at least three cameras not distributed on the same preset straight line are mounted on the bearing base.
2. A multi-camera ranging robot according to claim 1, characterized in that, on the bearing base, at least two cameras are mounted on the same preset straight line, and in addition at least one camera is arranged above and/or below the preset straight line.
3. The multi-camera ranging robot according to claim 2, characterized in that, on the bearing base, one camera is arranged vertically above and one vertically below the center point of the connecting line of the two cameras arranged on the preset straight line.
4. A multi-camera ranging robot according to any of claims 1 to 3, wherein the preset straight line is a horizontal line.
5. A robot for multi-camera ranging according to claim 4, wherein the preset straight line is provided with two or more cameras in the same horizontal acquisition direction so that: on the preset straight line, the distance between two cameras is different from the distance between the other two cameras, or the length of the base line of the two cameras is different from the length of the base line of the other two cameras.
6. A multi-camera ranging robot according to claim 5, wherein on the preset straight line, there are a left camera and a right camera horizontally placed at a distance of a base line, and the cameras installed above and/or below the preset straight line are located on a vertical line of a connecting line of the left camera and the right camera.
7. A visual ranging method of a robot based on multi-camera ranging of any one of claims 1 to 6, wherein the visual ranging method comprises the following steps:
searching, from an image shot by a first camera and an image shot by a second camera, for a transverse matching feature point pair based on the same space point of an object to be measured, solving a first parallax of the transverse matching feature point pair, and then constructing a triangle geometric relationship combining the baseline distance between the first camera and the second camera with the camera focal length to calculate a first depth value;
selecting one camera from the first camera and the second camera as a reference camera, searching, from an image shot by the reference camera and an image shot by a third camera, for a longitudinal matching feature point pair based on the same space point of the object to be measured, solving a second parallax of the longitudinal matching feature point pair, and then constructing a triangle geometric relationship combining the baseline distance between the reference camera and the third camera with the camera focal length to calculate a second depth value;
judging whether the first parallax is 0; if so, determining the second depth value as the optimal depth value representing the distance between the space point of the object to be measured and the robot; otherwise, processing the first depth value and the second depth value according to the magnitude relationship between the first parallax and the second parallax to obtain the optimal depth value;
wherein, according to the actual ranging scene range of the robot with multiple cameras for ranging of any one of claims 1 to 5, the first camera and the second camera are two cameras selected on the same preset straight line and placed one baseline apart; and a camera above the preset straight line or a camera below the preset straight line is set as the third camera.
8. The visual ranging method of claim 7, wherein processing the first depth value and the second depth value according to the magnitude relationship between the first parallax and the second parallax to obtain the optimal depth value comprises:
when the first parallax is larger than the second parallax, the optimal depth value is determined as the first depth value;
when the second parallax is larger than the first parallax, the optimal depth value is determined as the second depth value.
9. The visual ranging method of claim 8, wherein processing the first depth value and the second depth value according to the magnitude relationship between the first parallax and the second parallax to obtain the optimal depth value comprises:
calculating a first proportion of the first parallax in a sum of the first parallax and the second parallax to serve as an average processing weight distributed by the first depth value;
calculating a second proportion of the second parallax in the sum of the first parallax and the second parallax to serve as an average processing weight distributed by the second depth value;
and then, respectively carrying out weighted average processing on the first depth value and the second depth value by utilizing the first proportion and the second proportion to obtain the optimal depth value.
10. The visual ranging method of claim 8 or 9, wherein when the first parallax is 0, the robot recognizes that the object space point to be measured is located on a parallel line of a base line between the first camera and the second camera;
wherein the parallel line is a horizontal edge line of a threshold or a step.
11. The visual ranging method of claim 10, wherein the method for searching out the pair of transversely matched feature points based on the same object space point to be measured comprises the following steps:
setting an image shot by a first camera as a left image, and an image shot by a second camera as a right image; first, taking the left image as the reference image and the right image as the image to be matched, selecting a first preset feature point in the left image and searching the right image, based on the epipolar constraint, for a second preset feature point matching the first preset feature point; then, taking the right image as the reference image and the left image as the image to be matched, selecting the second preset feature point in the right image and, when a first preset feature point matching the second preset feature point is found in the left image based on the epipolar constraint, forming a transverse matching feature point pair from the first preset feature point and the second preset feature point; the first preset feature point is a projection point of the same space point of the object to be measured on the left image, and the second preset feature point is a projection point of that space point on the right image;
or setting an image shot by a first camera as a reference image, setting an image shot by a second camera as an image to be matched, selecting a first preset feature point in the reference image, searching a second preset feature point matched with the first preset feature point from the image to be matched based on the epipolar constraint, and forming a transverse matching feature point pair from the first preset feature point and the second preset feature point; the first preset feature point is a projection point of the same space point of the object to be measured in the reference image, and the second preset feature point is a projection point of that space point in the image to be matched.
12. The visual ranging method of claim 10, wherein the method for searching out the pair of longitudinally matched feature points based on the same object space point to be measured comprises the following steps:
setting an image shot by the reference camera as a lower image, and an image shot by the third camera as an upper image; first, taking the lower image as the reference image and the upper image as the image to be matched, selecting a third preset feature point in the lower image and searching the upper image, based on the epipolar constraint, for a fourth preset feature point matching the third preset feature point; then, taking the upper image as the reference image and the lower image as the image to be matched, selecting the fourth preset feature point in the upper image and, when a third preset feature point matching the fourth preset feature point is found in the lower image based on the epipolar constraint, forming a longitudinal matching feature point pair from the third preset feature point and the fourth preset feature point; the third preset feature point is a projection point of the same space point of the object to be measured on the lower image, and the fourth preset feature point is a projection point of that space point on the upper image;
or setting an image shot by the reference camera as a reference image, setting an image shot by the third camera as an image to be matched, selecting a third preset feature point in the reference image, searching a fourth preset feature point matched with the third preset feature point from the image to be matched based on the epipolar constraint, and forming a longitudinal matching feature point pair from the third preset feature point and the fourth preset feature point; the third preset feature point is a projection point of the same space point of the object to be measured in the reference image, and the fourth preset feature point is a projection point of that space point in the image to be matched.
CN202010905113.1A 2020-09-01 2020-09-01 Multi-camera ranging robot and visual ranging method Active CN112033352B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010905113.1A CN112033352B (en) 2020-09-01 2020-09-01 Multi-camera ranging robot and visual ranging method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010905113.1A CN112033352B (en) 2020-09-01 2020-09-01 Multi-camera ranging robot and visual ranging method

Publications (2)

Publication Number Publication Date
CN112033352A true CN112033352A (en) 2020-12-04
CN112033352B CN112033352B (en) 2023-11-07

Family

ID=73590878

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010905113.1A Active CN112033352B (en) 2020-09-01 2020-09-01 Multi-camera ranging robot and visual ranging method

Country Status (1)

Country Link
CN (1) CN112033352B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040207831A1 (en) * 2003-04-15 2004-10-21 Honda Motor Co., Ltd. Ranging apparatus, ranging method, and ranging program
CN102385237A (en) * 2010-09-08 2012-03-21 微软公司 Depth camera based on structured light and stereo vision
CN103292710A (en) * 2013-05-27 2013-09-11 华南理工大学 Distance measuring method applying binocular visual parallax error distance-measuring principle
CN109813251A (en) * 2017-11-21 2019-05-28 蒋晶 Method, apparatus and system for three-dimensional measurement
CN109974659A (en) * 2019-03-31 2019-07-05 徐州工程学院 A kind of embedded range-measurement system based on binocular machine vision
CN212363177U (en) * 2020-09-01 2021-01-15 珠海市一微半导体有限公司 Robot with multiple cameras for distance measurement

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023115561A1 (en) * 2021-12-24 2023-06-29 深圳市大疆创新科技有限公司 Movement control method and apparatus for movable platform, and movable platform

Also Published As

Publication number Publication date
CN112033352B (en) 2023-11-07

Similar Documents

Publication Publication Date Title
CN102410811B (en) Method and system for measuring parameters of bent pipe
CN107093195B (en) A kind of locating mark points method of laser ranging in conjunction with binocular camera
EP3869797B1 (en) Method for depth detection in images captured using array cameras
CN111243002A (en) Monocular laser speckle projection system calibration and depth estimation method applied to high-precision three-dimensional measurement
CN103604417B (en) The multi-view images bi-directional matching strategy that object space is information constrained
CN111210468A (en) Image depth information acquisition method and device
CN109447908A (en) A kind of coil of strip recognition positioning method based on stereoscopic vision
CN110425996A (en) Workpiece size measurement method based on binocular stereo vision
KR20160121509A (en) Structured light matching of a set of curves from two cameras
CN104794717A (en) Binocular vision system based depth information comparison method
CN108180888A (en) A kind of distance detection method based on rotating pick-up head
CN111780678A (en) Method for measuring diameter of track slab embedded sleeve
CN107504917B (en) Three-dimensional size measuring method and device
CN212363177U (en) Robot with multiple cameras for distance measurement
CN112633035B (en) Driverless vehicle-based lane line coordinate true value acquisition method and device
CN116188558A (en) Stereo photogrammetry method based on binocular vision
CN112033352B (en) Multi-camera ranging robot and visual ranging method
CN112164099A (en) Self-checking and self-calibrating method and device based on monocular structured light
CN112129262B (en) Visual ranging method and visual navigation chip of multi-camera group
CN106447709A (en) Rapid high-precision binocular parallax matching method
US20240085448A1 (en) Speed measurement method and apparatus based on multiple cameras
CN109815966A (en) A kind of mobile robot visual odometer implementation method based on improvement SIFT algorithm
CN111986248A (en) Multi-view visual perception method and device and automatic driving automobile
CN112785647A (en) Three-eye stereo image detection method and system
CN114663486A (en) Building height measurement method and system based on binocular vision

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 519000 2706, No. 3000, Huandao East Road, Hengqin new area, Zhuhai, Guangdong

Applicant after: Zhuhai Yiwei Semiconductor Co.,Ltd.

Address before: Room 105-514, No.6 Baohua Road, Hengqin New District, Zhuhai City, Guangdong Province

Applicant before: AMICRO SEMICONDUCTOR Co.,Ltd.

CB02 Change of applicant information
GR01 Patent grant
GR01 Patent grant