CN108460797B - Method and device for calculating relative pose of depth camera and height of scene plane


Info

Publication number
CN108460797B
CN108460797B (application CN201710095626.9A)
Authority
CN
China
Prior art keywords
value
depth camera
rotation angle
calculating
range
Prior art date
Legal status
Active
Application number
CN201710095626.9A
Other languages
Chinese (zh)
Other versions
CN108460797A (en)
Inventor
郑江红
霍澄平
Current Assignee
Shenzhen Honghe Innovation Information Technology Co Ltd
Original Assignee
Shenzhen Honghe Innovation Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Honghe Innovation Information Technology Co Ltd
Priority to CN201710095626.9A
Publication of CN108460797A
Application granted
Publication of CN108460797B
Legal status: Active

Abstract

The invention discloses a method and a device for calculating the relative pose of a depth camera and the height of a scene plane. In the method, the coordinate information contained in the scene depth image data is converted, and statistics and classification are performed on the converted coordinate information, so that the top view angle of the depth camera is obtained; this angle can then be used to calculate the heights of scene planes in the scene. The device is connected to the depth camera and comprises a power supply module and an information processing module that processes the data according to the calculation method. With the method and the device, the relative pose of the depth camera and the height of the scene plane can be calculated accurately and fully automatically, providing good support for applications of the depth camera.

Description

Method and device for calculating relative pose of depth camera and height of scene plane
Technical Field
The invention relates to the field of depth image data processing, in particular to a method and a device for calculating a relative pose of a depth camera and a scene plane height.
Background
Detecting scene planes is a common task in computer vision. Scene plane detection has many practical applications, ranging from robotics and autonomous driving to personal entertainment and video surveillance.
Most conventional image-based ground detection methods detect the ground from prior color information, color consistency and strong edges, for example the dark pavement of a road and the white markings at the roadside. Such approaches are only suitable for relatively simple scenes and are not reliable in complex environments.
Because a depth camera can directly acquire the depth information that is lost in ordinary perspective imaging, it has clear technical advantages over a common visible-light camera for scene plane detection. Depth cameras include, but are not limited to, those based on binocular vision, time of flight and structured light coding.
In practical applications, however, the depth camera often observes the scene obliquely downward and therefore has a top view angle; the pose of the depth camera may also change during operation, so that the height and pose of the depth camera relative to the ground, and the heights of other scene planes relative to the ground, cannot be readily estimated at any time.
Disclosure of Invention
The invention aims to provide a method and a device for calculating the relative pose of a depth camera and the height of a scene plane, with which the relative pose of the depth camera and the height of the scene plane can be calculated accurately and fully automatically.
In one embodiment, a depth camera relative pose and scene plane height calculation method is provided for processing depth image data, comprising the steps of:
calculating the coordinates B(x1, y1, z1) of the point cloud data in the depth image data output by the depth camera after rotation around the horizontal axis X of the depth camera by a plurality of angles θr;
counting the coordinate values of the point cloud data at the same rotation angle θr, classifying them according to the y1 values, defining a range unit H, counting the proportion of the points falling in each range unit H to the point cloud data, and listing the maximum proportion value Q;
comparing the proportion values Q of all the rotation angles θr, and listing the rotation angle θc corresponding to the maximum proportion value Qmax.
Preferably, the calculated rotation angle θc can be used to calculate the vertical distance D between the ground and the depth camera, by the following steps:
sorting the point cloud data rotated by the rotation angle θc according to the y1 value and the range unit H;
calculating the vertical distance D between the ground and the depth camera according to the range unit H, the position value K of the range in which the proportion value Q falls in the size ordering, and the maximum or minimum y1 value of the point cloud data.
Preferably, when the y1 values are ordered from large to small, the vertical distance D between the ground and the depth camera is calculated as D = H × K − y1max, where y1max is the maximum y1 value in the point cloud data rotated by the rotation angle θc;
when the y1 values are ordered from small to large, the vertical distance D between the ground and the depth camera is calculated as D = −y1min − H × K, where y1min is the minimum y1 value in the point cloud data rotated by the rotation angle θc.
Preferably, the rotation angle θr ranges from 0° to 80° and takes values in steps of a precision unit Δ1; the precision unit Δ1 ranges from 0.1° to 10°, and the range unit H ranges from 1 mm to 500 mm.
Preferably, after the rotation angle θc is obtained, the method further comprises the following step of fine-tuning the rotation angle θc:
setting the range of the rotation angle θr1 to the rotation angle θc ± P and rotating the coordinate values of the point cloud data in steps of a precision unit Δ2, the precision unit Δ2 preferably being smaller than the precision unit Δ1 of the rotation angle θr;
calculating the maximum proportion value Q1 with a range unit H1, the range unit H1 preferably being smaller than the range unit H;
calculating the corresponding rotation angle θc1 from the proportion value Q1;
assigning the rotation angle θc1 to the rotation angle θc.
Preferably, the value of P is in the range of 1 ° to 20 °.
Preferably, the method further comprises the following steps:
listing the maximum proportion value Q and a proportion value Qp corresponding to the size ranking of the scene plane area in the scene;
adding the proportion value Q and the proportion value Qp of each rotation angle θr to obtain an F value, comparing the F values of all the rotation angles θr, and listing the rotation angle θc corresponding to the maximum value Fmax.
Preferably, the calculating step further comprises:
sorting the point cloud data rotated by the rotation angle θc according to the y1 value and the range unit H;
calculating the vertical distance D between the ground and the depth camera, the vertical distance D2 between the scene plane and the depth camera, and the scene plane height Dp according to the range unit H, the position value K of the range in which the proportion value Q falls in the size ordering, the position value Kp of the range in which the proportion value Qp falls in the size ordering, and the maximum or minimum y1 value of the point cloud data.
Preferably, when the y1 values are ordered from large to small, the vertical distance D between the ground and the depth camera is calculated as D = H × K − y1max, the vertical distance D2 between the scene plane and the depth camera as D2 = H × Kp − y1max, and the scene plane height Dp as Dp = H × K − H × Kp, where y1max is the maximum y1 value in the point cloud data rotated by the rotation angle θc;
when the y1 values are ordered from small to large, the vertical distance D between the ground and the depth camera is calculated as D = −y1min − H × K, the vertical distance D2 between the scene plane and the depth camera as D2 = −y1min − H × Kp, and the scene plane height Dp as Dp = H × Kp − H × K, where y1min is the minimum y1 value in the point cloud data rotated by the rotation angle θc.
Preferably, the rotation angle θc may also be calculated by comparing the proportion values Qp of all the rotation angles θr and listing the rotation angle θc corresponding to the maximum proportion value Qpmax.
Preferably, the statistical range of the proportional value Qp is outside the range of E-M to E + M, where E is the range value within which the proportional value Q falls.
Preferably, the value of M ranges from 200mm to 1200 mm.
In one embodiment, a depth camera relative pose and scene plane height calculation apparatus is provided, comprising:
a power module for supplying power to a computing device;
and the information processing module is used for calculating the relative pose of the depth camera and the height of the scene plane according to the depth image data.
The information processing module includes:
a storage unit for storing the range of the rotation angle θr;
a calculation processing unit for processing the depth image data output by the depth camera according to the information stored in the storage unit.
The calculation step of the calculation processing unit includes:
calculating, from the coordinates A(x, y, z) of the point cloud data in the depth image data output by the depth camera, the coordinates B(x1, y1, z1) after rotation around the horizontal axis X of the depth camera by a plurality of angles θr;
counting the coordinate values of the point cloud data at the same rotation angle θr, classifying them according to the y1 values, counting the proportion of the points falling in each range unit H to the point cloud data, and listing the maximum proportion value Q;
comparing the proportion values Q of all the rotation angles θr, and listing the rotation angle θc corresponding to the maximum proportion value Qmax.
Preferably, the calculation processing unit is further configured to execute the following calculation steps, including:
sorting the point cloud data rotated by the rotation angle θc according to the y1 value and the range unit H;
calculating the vertical distance D between the ground and the depth camera, the vertical distance D2 between the scene plane and the depth camera, and the scene plane height Dp according to the range unit H, the position value K of the range in which the proportion value Q falls in the size ordering, the position value Kp of the range in which the proportion value Qp falls in the size ordering, and the maximum or minimum y1 value of the point cloud data.
Preferably, the storage unit is configured to store the range of the rotation angle θr and the precision unit Δ1; the rotation angle θr ranges from 0° to 80°, the precision unit Δ1 ranges from 0.1° to 10°, and the range unit H ranges from 1 mm to 500 mm.
With the method and the device for calculating the relative pose of the depth camera and the height of the scene plane, the top view angle and the height of the depth camera can be calculated automatically from the depth image data output by the depth camera alone, the heights of scene planes in the scene can also be calculated, and good support is provided for subsequent applications of the depth camera.
Drawings
FIG. 1 is a diagram of an application scenario of an embodiment of the computing method of the present invention;
FIG. 2 is a depth image data histogram of an embodiment of the present invention;
FIG. 3 is a flow chart of an embodiment of a computing method of the present invention;
FIG. 4 is a flow chart of the method for calculating the angle θ c;
FIG. 5 is a flowchart of an embodiment of the calculation method of the present invention for calculating the vertical distance D between the ground and the depth camera;
FIG. 6 is a histogram of depth image data for a classroom scene for use with the computing method of the present invention;
FIG. 7 is a flow chart of another embodiment of the calculation method of the present invention applied to a classroom scene;
FIG. 8 is another statistical graph of depth image data for a classroom scene used with the calculation method of the present invention;
FIG. 9 is a flowchart of an embodiment of the calculation method of the present invention for calculating the vertical distance D between the ground and the depth camera and the vertical distance D1 between the desk and the depth camera;
FIG. 10 is a functional block diagram of an embodiment of a computing device.
Detailed Description
In order to better understand the present invention, some concepts are first explained.
A depth camera: depth cameras currently on the market adopt one of the following three technologies, or a fusion of them: structured light coding, binocular vision and time of flight. The depth camera may be a camera that acquires a single depth image at a time, or a video camera that continuously acquires multiple frames of depth images.
Depth image data: image data output by a depth camera that contains scene depth information. Unlike ordinary image data, which represents the gray scale or color of a scene, depth image data represents the distance between scene points and the depth camera.
The depth image data output by the depth camera consists of points with three-dimensional coordinate information, and all the points together form the point cloud data. The coordinate system of the point cloud data is based on the depth camera: in the scene shown in Fig. 1, the point cloud coordinates obtained by the depth camera are expressed in a coordinate system whose origin is the position of the depth camera and whose axes are the horizontal axis X, the longitudinal axis Y and the distance (depth) axis Z of the depth camera. Applications such as scene modeling and scene plane detection can be carried out with this point cloud coordinate information.
In practical applications, many depth cameras are installed with a downward tilt, so there is a top view angle θ between the camera and the horizontal plane of the ground. The coordinate values of the point cloud data obtained by such a depth camera therefore differ from those in the real world: the plane formed by the horizontal axis X and the distance axis Z of the depth camera coordinate system differs from the real-world horizontal plane by the top view angle θ, so the y value of points on the same scene plane, such as the ground, changes with the z value.
To obtain the heights of scene planes in the real scene, and for convenience in later calculation and application, the point cloud data returned by the depth camera need to be converted into a coordinate system referenced to the ground.
Among common environmental features, the ground usually occupies the largest area in a scene. In the point cloud of the corresponding depth image data, the points constituting the ground should therefore account for the largest proportion of the point cloud; and if the point cloud is expressed in a coordinate system whose reference plane is parallel to the ground, the points constituting the ground not only account for the largest proportion but also share the same y value.
The original depth data obtained by the depth camera differ from the real world in that there is a top view angle θ between the depth camera and the ground plane. By rotating the point cloud coordinates in the original depth data around the horizontal axis X of the depth camera by an angle equal to the top view angle θ, point cloud coordinates expressed in a reference coordinate system parallel to the ground can be obtained, and the vertical distance D between the depth camera and the ground can then be calculated from the converted coordinates.
Since the top view angle θ is unknown, a plurality of rotation angles θr can be set in order to find it. The point cloud data for a given rotation angle θr are divided into ranges of size H according to the y value; when the proportion of points falling in a certain range to all points is largest, the points falling in that range can be regarded as the points constituting the ground. Among the point clouds obtained at different rotation angles θr, the closer the rotation angle θr is to the actual top view angle θ, the larger the proportion of ground points; therefore, by comparing the proportion values Q of the ground points at the several rotation angles θr, the rotation angle θc corresponding to the maximum proportion value Qmax can be regarded as the closest to the actual top view angle θ.
Specifically, as shown in Fig. 2, the horizontal axis is the rotation angle θr and the vertical axis is the proportion value Q. The proportion values Q at the different rotation angles θr are counted; the variation of Q is clearly visible, and the maximum proportion value Qmax and the corresponding rotation angle θc can be obtained by comparison.
In one embodiment, a method for calculating the relative pose of a depth camera and the height of a scene plane is provided, as shown in fig. 3, and includes the following steps:
S100: using a rotation matrix T, the coordinates B(x1, y1, z1) obtained by rotating the original coordinates A(x, y, z) of the points of the point cloud data, in the depth image data output by the depth camera, around the horizontal axis X of the depth camera by a plurality of angles θr are calculated.
The formula for the coordinates B is B = A × T, where T is the matrix for a rotation about the horizontal axis X of the depth camera by the angle θr.
specifically, the range of the rotation angle θ r may be set to 0 ° to 80 °, by Δ1The coordinate values of the point cloud data are rotated by 1 ° as a unit of precision. In other embodiments, the rotation angle θ r and the accuracy unit Δ are1The specific value can also be determined according to the specific application environment scene and the required operation speed and precisionTo make it.
S200: counting the coordinate values of the point cloud data at the same rotation angle θr, classifying them according to the y1 values with a range unit H, counting the proportion of the points falling in each range unit H to the point cloud data, and listing the maximum proportion value Q.
Specifically, the range unit H is defined as 20 mm, and a minimum value Ymin and a maximum value Ymax of the selected y1 values are set to remove the influence of noise in the point cloud data and to improve the calculation speed and accuracy.
In an application scene, most environmental features, including the ground, lie below the depth camera, so the y1 values of the obtained point cloud data are mostly negative, and the points constituting the ground have the smallest y1 values; therefore Ymin is set to be smaller than the installation height of the depth camera, i.e. the distance between the ground and the depth camera, and Ymax may be set to 0.
S300: comparing the proportion values Q of all the rotation angles θr, and listing the rotation angle θc corresponding to the maximum proportion value Qmax.
Through the above steps, a rotation angle θc whose error from the real top view angle θ is within the precision unit Δ1 can be obtained. If the accuracy requirement is not high, θc may be regarded as the true top view angle θ of the depth camera.
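Steps S200 and S300 can be sketched as follows. The sketch keeps only the rotated y1 coordinate and uses the embodiment's values (H = 20 mm, a 0°–80° sweep in 1° steps); the [Ymin, Ymax] noise window and all function names are illustrative assumptions.

    import numpy as np

    def rotated_y1(points, theta_deg):
        # y1 coordinate after rotating (x, y, z) about the camera X axis (assumed sign convention)
        t = np.deg2rad(theta_deg)
        return points[:, 1] * np.cos(t) - points[:, 2] * np.sin(t)

    def max_bin_proportion(y1, H=20.0, y_min=-4000.0, y_max=0.0):
        """Largest fraction of points falling in any single range unit H (step S200)."""
        y1 = y1[(y1 >= y_min) & (y1 <= y_max)]   # remove noise outside the [Ymin, Ymax] window
        if y1.size == 0:
            return 0.0
        counts, _ = np.histogram(y1, bins=np.arange(y_min, y_max + H, H))
        return counts.max() / y1.size            # the proportion value Q over the retained points

    def coarse_top_view_angle(points, angles_deg=np.arange(0.0, 81.0, 1.0)):
        """Pick the rotation angle theta_c whose rotated cloud has the largest Q (step S300)."""
        q_per_angle = [max_bin_proportion(rotated_y1(points, a)) for a in angles_deg]
        best = int(np.argmax(q_per_angle))
        return angles_deg[best], q_per_angle[best]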
S400: calculating the vertical distance D between the ground and the depth camera from the rotation angle θc.
Further, the rotation angle θc obtained in step S300 may be fine-tuned so that it is closer to the top view angle θ.
The fine-tuning of the preliminarily obtained rotation angle θc, shown in Fig. 4, comprises the following steps:
S301: using the rotation matrix T, the coordinates C(x2, y2, z2) obtained by rotating the original coordinates A(x, y, z) of the points of the point cloud data around the horizontal axis X of the depth camera by a plurality of angles θr1 are calculated; the range of the rotation angle θr1 is set to θc ± P, and the coordinate values of the point cloud data are rotated in steps of the precision unit Δ2 = 0.1°.
In particular, the range of the rotation angle θr1 used for fine-tuning is based on the preliminarily obtained rotation angle θc, which is increased and decreased by a small P value to reduce the amount of calculation; the P value is generally preferably 1° to 20°. The precision unit Δ2 is generally smaller than the precision unit Δ1, and its specific value depends on the required precision.
S302: counting the coordinate values of the point cloud data at the same rotation angle θr1, classifying them according to the y2 values with a range unit H1 of 5 mm, setting the minimum value Ymin and the maximum value Ymax of the selected y2 values, counting the proportion of the points falling in each range unit to the point cloud data, and listing the maximum proportion value Q1.
In practice, the range unit H1 defined for fine-tuning is usually smaller than the range unit H used to obtain the preliminary rotation angle θc, so as to obtain a higher-precision value. The criteria for selecting the minimum value Ymin and the maximum value Ymax of the y2 values in this step are the same as those for selecting the minimum and maximum y1 values in the preceding steps.
S303: comparing the proportion values Q1 of all the rotation angles θr1, and listing the rotation angle θc1 corresponding to the maximum proportion value Q1max.
S304: assigning the obtained θc1 to the final rotation angle θc.
Finally, the rotation angle θc is input to step S400 to calculate the vertical distance D between the ground and the depth camera. If a more precise rotation angle θc is required, steps S301–S303 are repeated; a more precise result can be achieved simply by narrowing the range of θr1, the precision unit, and the range unit H1 used to classify the y2 values.
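The fine-tuning loop S301–S304 could look like the following sketch. It reuses the embodiment's values Δ2 = 0.1° and H1 = 5 mm; P = 5° is an assumed value within the stated 1°–20° range, and the noise window and function name are illustrative.

    import numpy as np

    def refine_top_view_angle(points, theta_c, P=5.0, delta2=0.1, H1=5.0,
                              y_min=-4000.0, y_max=0.0):
        """Repeat the proportion statistics on the narrow range theta_c +/- P (steps S301-S303)."""
        best_q, best_angle = -1.0, theta_c
        for a in np.arange(theta_c - P, theta_c + P + delta2, delta2):
            t = np.deg2rad(a)
            y2 = points[:, 1] * np.cos(t) - points[:, 2] * np.sin(t)
            y2 = y2[(y2 >= y_min) & (y2 <= y_max)]
            if y2.size == 0:
                continue
            counts, _ = np.histogram(y2, bins=np.arange(y_min, y_max + H1, H1))
            q1 = counts.max() / y2.size
            if q1 > best_q:
                best_q, best_angle = q1, a
        return best_angle      # step S304: the refined angle is assigned back to theta_c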
As shown in fig. 5, the specific calculation process of step S400 includes the following steps:
S401: sorting the point cloud data rotated by the rotation angle θc according to the y1 value and the range unit H.
Specifically, the order of arranging the y1 values may be from small to large, or from large to small.
S402: calculating the vertical distance D between the ground and the depth camera according to the range unit H, the position value K of the range in which the proportion value Q falls in the size ordering, and the maximum or minimum y1 value of the point cloud data.
Specifically, when the y1 values are sorted from large to small, the vertical distance D between the ground and the depth camera is calculated as D = H × K − y1max, where y1max is the maximum y1 value in the point cloud data rotated by the rotation angle θc.
When the y1 values are sorted from small to large, the vertical distance D between the ground and the depth camera is calculated as D = −y1min − H × K, where y1min is the minimum y1 value in the point cloud data rotated by the rotation angle θc.
In a specific embodiment, the accuracy of the distance D is related to the value of the selected range unit H, and can be changed as needed.
In other embodiments, for convenience of calculation, the minimum value Ymin and the maximum value Ymax of the selected y1 values may be set according to the practical situation; in that case y1max equals Ymax and y1min equals Ymin, and the criteria for setting them are those described for Ymax and Ymin in the foregoing embodiments.
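As an illustration of step S400, the following sketch uses the small-to-large ordering and the formula D = −y1min − H × K, treating the position value K as the 1-based index of the range unit that holds the largest proportion Q; H = 20 mm, the noise window and the function name are assumptions.

    import numpy as np

    def ground_distance(points, theta_c, H=20.0, y_min=-4000.0, y_max=0.0):
        """Vertical distance D between the ground and the depth camera (steps S401-S402)."""
        t = np.deg2rad(theta_c)
        y1 = points[:, 1] * np.cos(t) - points[:, 2] * np.sin(t)
        y1 = y1[(y1 >= y_min) & (y1 <= y_max)]
        y1_min = y1.min()                                   # or the preset Ymin
        counts, _ = np.histogram(y1, bins=np.arange(y1_min, y1.max() + H, H))
        K = int(np.argmax(counts)) + 1                      # 1-based position of the ground range
        return -y1_min - H * K                              # D = -y1min - H x K, in mm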
The method of the present invention can also be applied to other scenes containing several large scene planes; a teaching classroom containing many desks is taken as an example below.
In a classroom scene, the desk tops form the scene plane occupying the largest area apart from the ground. In the depth point cloud data obtained in such a scene, the points on the desk tops should account for the second largest proportion of the point cloud, and the desk tops are parallel to the ground. The vertical distance D1 between the desks and the depth camera can therefore be calculated by steps analogous to those for calculating the vertical distance D between the ground and the depth camera, and the desk height Dt is obtained as the difference between the distance D and the distance D1.
Referring to Fig. 6, the data in Fig. 6 are obtained by processing the raw depth data captured by a depth camera installed at a top view angle θ in a classroom scene, together with the depth data obtained after rotation by several angles; the horizontal axis is the range unit H = 20 mm and the vertical axis is the number of points in the point cloud data. It can clearly be observed that the point cloud data have two peaks, one larger than the other: the largest peak corresponds to points on the ground, and the second largest peak corresponds to points on the desk tops. The peak values of the point cloud data also differ considerably between different rotation angles.
Further, calculating the rotation angle θc for the classroom scene requires, in addition to all the steps used in the normal scene, calculating the second largest proportion value Q2 of the point cloud data, which corresponds to the desk tops.
Specifically, an embodiment flow for calculating the vertical distance D between the floor and the depth camera and the vertical distance D1 between the desk and the depth camera for a classroom scene is shown in fig. 7, and includes:
S710: using the rotation matrix T, the coordinates D(x3, y3, z3) obtained by rotating the original coordinates A(x, y, z) of the points of the point cloud data, in the depth image data output by the depth camera, around the horizontal axis X of the depth camera by a plurality of angles θr are calculated.
Specifically, the rotation matrix T is the same as in step S100, and the range of the rotation angle θr and the precision unit Δ1 selected in step S100 can also be used in this step.
S720: counting the coordinate values of the point cloud data at the same rotation angle θr, classifying them according to the y3 values, defining a range unit H, counting the proportion of the points falling in each range unit H to the point cloud data, and listing the maximum proportion value Q and the second largest proportion value Q2.
This step can also refer to the specific implementation method of step S200.
Referring to Fig. 8, Fig. 8 shows a statistical graph of depth data in a classroom scene, with a range unit H of 5 mm on the horizontal axis and the number of points in the point cloud data on the vertical axis. The highest peak of the point cloud data is obvious, but the second peak has several similar values. Therefore, when calculating the second proportion value Q2, in order to prevent errors caused by the chosen size of the range unit H or by the data acquisition, the statistical data may be further partitioned: after the proportion value Q has been calculated, the data range used for selecting the second proportion value Q2 should fall outside the range corresponding to the proportion value Q, expanded on each side by a value M. That is, when the proportion value Q falls within the range E, the data used to select the second proportion value Q2 are taken outside the range E − M to E + M. In the desk-height scenario, M may be set to less than the desk height of 600 mm; the value of M may also be set differently depending on the scene information of interest.
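A minimal sketch of these statistics for the classroom scene is given below: it finds the largest proportion value Q, masks out the band from E − M to E + M around the ground range E, and then takes the second proportion value Q2 from the remaining ranges. The values H = 5 mm and M = 600 mm follow the discussion above, and the input y1 is assumed to hold the rotated y coordinates of one frame; the helper name is illustrative.

    import numpy as np

    def first_and_second_peaks(y1, H=5.0, M=600.0):
        """Proportion values Q (ground) and Q2 (desk tops) with the E-M..E+M band excluded."""
        counts, edges = np.histogram(y1, bins=np.arange(y1.min(), y1.max() + H, H))
        k = int(np.argmax(counts))                  # range unit holding Q (the ground)
        Q = counts[k] / y1.size
        E = edges[k]                                # representative value of the ground range
        centres = 0.5 * (edges[:-1] + edges[1:])
        masked = counts.copy()
        masked[np.abs(centres - E) <= M] = 0        # exclude the E - M to E + M band
        k2 = int(np.argmax(masked))                 # range unit holding Q2 (the desk tops)
        Q2 = masked[k2] / y1.size
        return (Q, k), (Q2, k2)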
S730: adding the proportion values Q and Q2 of each rotation angle θr to obtain a J value, comparing the J values of all the rotation angles θr, and listing the rotation angle θc corresponding to the maximum value Jmax.
Specifically, the rotation angle θc may also be calculated by comparing the proportion values Q and selecting the rotation angle θr corresponding to the maximum proportion value Qmax as the rotation angle θc, or by comparing the proportion values Q2 and selecting the rotation angle θr corresponding to the maximum proportion value Q2max as the rotation angle θc.
Meanwhile, the rotation angle θc may be further refined by repeating steps S301 to S304, the difference being that the second largest proportion value Q2 is also calculated.
S740: calculating the vertical distance D between the ground and the depth camera and the vertical distance D1 between the desks and the depth camera from the rotation angle θc.
As shown in FIG. 9, the specific steps of calculating the vertical distance D between the ground and the depth camera and the vertical distance D1 between the desk and the depth camera are as follows:
S741: sorting the point cloud data rotated by the rotation angle θc according to the y3 value and the range unit H.
Specifically, the order of arranging the y3 values may be from small to large, or from large to small.
S742: calculating the vertical distance D between the ground and the depth camera and the vertical distance D1 between the desks and the depth camera according to the range unit H, the position value K of the range in which the proportion value Q falls in the size ordering, the position value K2 of the range in which the proportion value Q2 falls in the size ordering, and the maximum or minimum y3 value of the point cloud data.
Specifically, when the y3 values are sorted from large to small, the vertical distance D between the ground and the depth camera is calculated as D = H × K − y3max, and the vertical distance D1 between the desks and the depth camera as D1 = H × K2 − y3max, where y3max is the maximum y3 value in the point cloud data rotated by the rotation angle θc.
In this case the desk height Dt is calculated as Dt = D − D1 = H × K − H × K2.
When the y3 values are sorted from small to large, the vertical distance D between the ground and the depth camera is calculated as D = −y3min − H × K, and the vertical distance D1 between the desks and the depth camera as D1 = −y3min − H × K2, where y3min is the minimum y3 value in the point cloud data rotated by the rotation angle θc.
In this case the desk height Dt is calculated as Dt = D − D1 = H × K2 − H × K.
Further, a minimum value Ymin and a maximum value Ymax of the selected y3 coordinate values may be set, in which case y3max equals Ymax and y3min equals Ymin; the criteria for setting them are those described for selecting Ymax and Ymin of the corresponding y values in step S200 or S302 of the general scenario above.
In particular, in some scenes the desks or another scene plane may have the largest area and the ground the second largest area. As long as the desks, or the other scene plane, are parallel to the ground, the calculation method of the above embodiment can still be used; only the final distance results need to be compared, the larger of the two distance values being the vertical distance D between the ground and the depth camera and the other being the vertical distance between the desks (or the scene plane) and the depth camera.
In other embodiments, when depth information for a scene plane in the scene can be obtained and the scene plane is parallel to the ground, the vertical distance between that scene plane and the depth camera can also be calculated to obtain the height of the scene plane. Specifically, the proportion value Qp corresponding to the scene plane is determined from the size ranking of its area among the scene planes in the scene; for example, if the scene plane has the third largest area, it corresponds to the third largest proportion value Q3. The vertical distance D2 between the scene plane and the depth camera and the height Dp of the scene plane are then calculated from the proportion value Qp according to the calculation method described in the above embodiments.
In other embodiments of the present invention, in order to obtain a more accurate result, 3 frames of depth image data may be taken from the depth image data output by the depth camera and their point cloud data superimposed to form one group of calculation data; repeating this operation yields several groups, and performing the calculation on the several groups from a statistical point of view makes the result more accurate. Taking 20 groups of calculation data as an example, the 20 groups of point cloud data are processed according to the method of the present invention; when calculating the rotation angle θc, the rotation angles θc obtained from the 20 groups are arranged by size and the median is taken as the finally determined rotation angle. When calculating the vertical distance D between the ground and the depth camera or the vertical distance D1 between the desks and the depth camera, the D or D1 values of the 20 groups are likewise collected and the median is taken as the final result.
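The multi-group statistics described above could be organised as in the following sketch; the use of the median and the per-group estimator callbacks (estimate_angle, estimate_distance) stand for the per-group calculations described above and are illustrative placeholders.

    import numpy as np

    def aggregate_groups(groups, estimate_angle, estimate_distance):
        """groups: list of point clouds, each formed by superimposing 3 depth frames."""
        angles = np.array([estimate_angle(g) for g in groups])           # theta_c per group
        theta_c = float(np.median(angles))                               # middle of the sorted angles
        distances = np.array([estimate_distance(g, theta_c) for g in groups])
        return theta_c, float(np.median(distances))                      # median D (or D1) as final result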
In the embodiment shown in Fig. 10, an apparatus 800 for calculating the relative pose of a depth camera and the height of a scene plane is provided. It obtains depth image data containing the depth information of a scene from a depth camera 900, and comprises an information processing module 810, which processes the depth image data according to the above method for calculating the relative pose of the depth camera and the height of the scene plane, and a power module 820 for supplying power to the apparatus; the power module 820 may be a battery.
The information processing module 810 may include the following elements:
the storage unit 811: for storing the range of the rotation angle thetar and the unit of precision delta1And a range unit H.
The calculation processing unit 812: processes the depth image data output by the depth camera 900 according to the information stored in the storage unit, in the following steps:
calculating the coordinates B(x1, y1, z1) of all points of the point cloud data in the depth image data output by the depth camera 900 after rotation around the horizontal axis X of the depth camera 900 by a plurality of angles θr;
counting the coordinate values of the point cloud data at the same rotation angle θr, classifying them according to the y1 values, counting the proportion of the points falling in each range unit H to the point cloud data, and listing the maximum proportion value Q and the proportion value Qp corresponding to the size ranking of the scene plane area in the scene;
adding the proportion values Q and Qp of each rotation angle θr to obtain an F value, comparing the F values of all the rotation angles θr, and listing the rotation angle θc corresponding to the maximum value Fmax.
The calculation processing unit 812 can also calculate the vertical distance D between the ground and the depth camera, the vertical distance D2 between the scene plane and the depth camera, and the height Dp of the scene plane from the calculated rotation angle θc, in the following steps:
sorting the point cloud data rotated by the rotation angle θc according to the y1 value and the range unit H;
calculating the vertical distance D between the ground and the depth camera, the vertical distance D2 between the scene plane and the depth camera, and the scene plane height Dp according to the range unit H, the position value K of the range in which the proportion value Q falls in the size ordering, the position value Kp of the range in which the proportion value Qp falls in the size ordering, and the maximum or minimum y1 value of the point cloud data.
Specifically, the depth camera 900 is any depth camera capable of acquiring the depth information of a scene; the technology it adopts includes, but is not limited to, one of, or a combination of, structured light coding, binocular vision and time of flight. The information processing module 810 can be implemented by software together with a necessary general hardware platform, and of course its corresponding functions can also be implemented by hardware; both implementations can be clearly understood and carried out by those skilled in the art.
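Purely as an illustration of how the storage unit 811 and the calculation processing unit 812 might be organised in software, the following sketch wires the parameters of the storage unit to the angle search and distance calculation of the earlier sketches; all class and function names are illustrative assumptions, not part of the invention.

    import numpy as np
    from dataclasses import dataclass

    @dataclass
    class StorageUnit:                      # plays the role of storage unit 811
        theta_r_lo: float = 0.0             # degrees
        theta_r_hi: float = 80.0            # degrees
        delta1: float = 1.0                 # precision unit, degrees
        H: float = 20.0                     # range unit, mm

    class CalculationProcessingUnit:        # plays the role of calculation processing unit 812
        def __init__(self, storage: StorageUnit):
            self.storage = storage

        def process(self, points):
            """Return (theta_c, D) for one point cloud, using the earlier sketches."""
            s = self.storage
            angles = np.arange(s.theta_r_lo, s.theta_r_hi + s.delta1, s.delta1)
            theta_c, _ = coarse_top_view_angle(points, angles)    # sketch after step S300
            return theta_c, ground_distance(points, theta_c, H=s.H)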
It should be understood that the above preferred embodiments are only for illustrating the technical solutions of the present invention, and not for limiting the same, and those skilled in the art can modify the technical solutions described in the above preferred embodiments or substitute some technical features thereof; and all such modifications and alterations are intended to fall within the scope of the appended claims.

Claims (15)

1. A method for calculating the relative pose of a depth camera and the height of a scene plane is used for processing depth image data and is characterized by comprising the following steps of:
calculating the coordinates B(x1, y1, z1) of the point cloud data in the depth image data output by the depth camera after rotation around the horizontal axis X of the depth camera by a plurality of angles θr;
counting the coordinate values of the point cloud data at the same rotation angle θr, classifying them according to the y1 values, defining a range unit H, counting the proportion of the points falling in each range unit H to the point cloud data, and listing the maximum proportion value Q;
comparing the proportion values Q of all the rotation angles θr, and listing the rotation angle θc corresponding to the maximum proportion value Qmax;
and calculating the vertical distance D between the ground and the depth camera according to the rotation angle θc.
2. The method for calculating the relative pose of the depth camera and the height of the scene plane according to claim 1, wherein the step of calculating the vertical distance D between the ground and the depth camera according to the rotation angle θc comprises the steps of:
sorting the point cloud data rotated by the rotation angle θc according to the y1 value and the range unit H;
and calculating the vertical distance D between the ground and the depth camera according to the range unit H, the position value K of the range in which the proportion value Q falls in the size ordering, and the maximum or minimum y1 value of the point cloud data.
3. The depth camera relative pose and scene plane height calculation method of claim 2, wherein when the y1 values are ordered from large to small, the vertical distance D between the ground and the depth camera is calculated as D = H × K − y1max, where y1max is the maximum y1 value in the point cloud data rotated by the rotation angle θc;
when the y1 values are ordered from small to large, the vertical distance D between the ground and the depth camera is calculated as D = −y1min − H × K, where y1min is the minimum y1 value in the point cloud data rotated by the rotation angle θc.
4. The method for calculating the relative pose of the depth camera and the height of the scene plane according to any one of claims 1, 2 or 3, wherein the rotation angle θr ranges from 0° to 80° and takes values in steps of a precision unit Δ1, the precision unit Δ1 ranges from 0.1° to 10°, and the range unit H ranges from 1 mm to 500 mm.
5. The depth camera relative pose and scene plane height calculation method of any one of claims 1, 2, or 3, further comprising the steps of:
setting the range of the rotation angle θr1 to the rotation angle θc ± P and rotating the coordinate values of the point cloud data in steps of a precision unit Δ2, the precision unit Δ2 being smaller than the precision unit of the rotation angle θr;
calculating a maximum proportion value Q1 with a range unit H1, the range unit H1 being smaller than the range unit H;
calculating the corresponding rotation angle θc1 from the proportion value Q1;
assigning the rotation angle θc1 to the rotation angle θc.
6. The method for calculating the relative pose of the depth camera and the height of the scene plane according to claim 5, wherein the value of P ranges from 1° to 20°.
7. The depth camera relative pose and scene plane height calculation method of claim 1, further comprising the steps of:
listing the maximum proportion value Q and a proportion value Qp corresponding to the size ranking of the scene plane area in the scene;
and adding the proportion value Q and the proportion value Qp of each rotation angle θr to obtain an F value, comparing the F values of all the rotation angles θr, and listing the rotation angle θc corresponding to the maximum value Fmax.
8. The depth camera relative pose and scene plane height calculation method of claim 7, further comprising the steps of:
sorting the point cloud data rotated by the rotation angle θc according to the y1 value and the range unit H;
calculating the vertical distance D between the ground and the depth camera, the vertical distance D2 between the scene plane and the depth camera, and the height Dp of the scene plane according to the range unit H, the position value K of the range in which the proportion value Q falls in the size ordering, the position value Kp of the range in which the proportion value Qp falls in the size ordering, and the maximum or minimum y1 value of the point cloud data.
9. The depth camera relative pose and scene plane height calculation method of claim 8, wherein when the y1 values are ordered from large to small, the vertical distance D between the ground and the depth camera is calculated as D = H × K − y1max, the vertical distance D2 between the scene plane and the depth camera as D2 = H × Kp − y1max, and the scene plane height Dp as Dp = H × K − H × Kp, where y1max is the maximum y1 value in the point cloud data rotated by the rotation angle θc;
when the y1 values are ordered from small to large, the vertical distance D between the ground and the depth camera is calculated as D = −y1min − H × K, the vertical distance D2 between the scene plane and the depth camera as D2 = −y1min − H × Kp, and the scene plane height Dp as Dp = H × Kp − H × K, where y1min is the minimum y1 value in the point cloud data rotated by the rotation angle θc.
10. The depth camera relative pose and scene plane height calculation method of any one of claims 7, 8, or 9, wherein the step of calculating the rotation angle θ c further comprises:
comparing the proportion values Qp of all the rotation angles θr, and listing the rotation angle θc corresponding to the maximum proportion value Qpmax.
11. The depth camera relative pose and scene plane height calculation method of any one of claims 7, 8, or 9, wherein the statistical range of the scale value Qp is outside the range of E-M to E + M, where the E value is a range value within which the scale value Q falls.
12. The method for calculating the relative pose of the depth camera and the height of the scene plane according to claim 11, wherein the value range of the M value is 200mm to 1200 mm.
13. A depth camera relative pose and scene plane height calculation apparatus for processing depth image data output by a depth camera, comprising:
a power module for supplying power to the computing device;
the information processing module is used for calculating the relative pose of the depth camera and the height of a scene plane according to the depth image data;
the information processing module includes:
a storage unit for storing a range unit H;
the calculation processing unit is used for processing the depth image data output by the depth camera according to the information stored in the storage unit, and comprises the following steps:
calculating the coordinates B(x1, y1, z1) of the point cloud data in the depth image data after rotation around the horizontal axis X of the depth camera by a plurality of angles θr;
counting the coordinate values of the point cloud data at the same rotation angle θr, classifying them according to the y1 values, counting the proportion of the points falling in each range unit H to the point cloud data, and listing the maximum proportion value Q and the proportion value Qp corresponding to the size ranking of the scene plane area in the scene;
adding the proportion values Q and Qp of each rotation angle θr to obtain an F value, comparing the F values of all the rotation angles θr, and listing the rotation angle θc corresponding to the maximum value Fmax;
and calculating the vertical distance D between the ground and the depth camera according to the rotation angle θc.
14. The depth camera relative pose and scene plane height calculation apparatus of claim 13, wherein the computation processing unit is further configured to perform the following calculation steps:
sorting the point cloud data rotated by the rotation angle θc according to the y1 value and the range unit H;
calculating the vertical distance D between the ground and the depth camera, the vertical distance D2 between the scene plane and the depth camera, and the height Dp of the scene plane according to the range unit H, the position value K of the range in which the proportion value Q falls in the size ordering, the position value Kp of the range in which the proportion value Qp falls in the size ordering, and the maximum or minimum y1 value of the point cloud data.
15. The depth camera relative pose and scene plane height calculation apparatus according to claim 13 or 14, wherein the storage unit is configured to store the range of the rotation angle θr and a precision unit Δ1, the rotation angle θr ranges from 0° to 80°, the precision unit Δ1 ranges from 0.1° to 10°, and the range unit H ranges from 1 mm to 500 mm.
CN201710095626.9A 2017-02-22 2017-02-22 Method and device for calculating relative pose of depth camera and height of scene plane Active CN108460797B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710095626.9A CN108460797B (en) 2017-02-22 2017-02-22 Method and device for calculating relative pose of depth camera and height of scene plane


Publications (2)

Publication Number Publication Date
CN108460797A CN108460797A (en) 2018-08-28
CN108460797B true CN108460797B (en) 2020-08-25

Family

ID=63229137

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710095626.9A Active CN108460797B (en) 2017-02-22 2017-02-22 Method and device for calculating relative pose of depth camera and height of scene plane

Country Status (1)

Country Link
CN (1) CN108460797B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109741399A (en) * 2018-12-07 2019-05-10 苏州中科广视文化科技有限公司 Precomputation camera calibration method based on rotary taking
CN112445208A (en) * 2019-08-15 2021-03-05 纳恩博(北京)科技有限公司 Robot, method and device for determining travel route, and storage medium
CN113965701B (en) * 2021-09-10 2023-11-14 苏州雷格特智能设备股份有限公司 Multi-target space coordinate corresponding binding method based on two depth cameras


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8401242B2 (en) * 2011-01-31 2013-03-19 Microsoft Corporation Real-time camera tracking using depth maps

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104361575A (en) * 2014-10-20 2015-02-18 湖南戍融智能科技有限公司 Automatic ground testing and relative camera pose estimation method in depth image
CN104376558A (en) * 2014-11-13 2015-02-25 浙江大学 Cuboid-based intrinsic parameter calibration method for Kinect depth camera
CN105045263A (en) * 2015-07-06 2015-11-11 杭州南江机器人股份有限公司 Kinect-based robot self-positioning method
CN105976353A (en) * 2016-04-14 2016-09-28 南京理工大学 Spatial non-cooperative target pose estimation method based on model and point cloud global matching

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Radu Bogdan Rusu et al., "Fast 3D Recognition and Pose Using the Viewpoint Feature Histogram", The 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2010-10-22, pp. 2155-2162. *
李雅娜 (Li Yana), "Research on the Calibration Algorithm of the Kinect Depth Camera" (Kinect深度相机标定算法研究), China Master's Theses Full-text Database, Information Science and Technology, 2016-03-15 (No. 03), I138-6606. *

Also Published As

Publication number Publication date
CN108460797A (en) 2018-08-28


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant