CN111025330B - Target inclination angle detection method and device based on depth map - Google Patents
- Publication number
- CN111025330B (application CN201911295839.1A)
- Authority
- CN
- China
- Prior art keywords
- information
- target
- acquisition unit
- coordinate system
- depth map
- Prior art date
- Legal status: Active (assumed; Google has not performed a legal analysis)
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/89—Lidar systems specially adapted for specific applications for mapping or imaging
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C1/00—Measuring angles
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/10—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
- G01C21/12—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
- G01C21/16—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
- G01C21/165—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
Abstract
The invention discloses a target inclination angle detection method based on a depth map, which comprises the following steps: S1, acquiring 3D information of the target to be measured with a 3D information acquisition unit; S2, acquiring gravitational acceleration information of the 3D information acquisition unit; S3, calculating the inclination angle of the target to be measured from the 3D information obtained in step S1 and the gravitational acceleration information obtained in step S2. The method uses a 3D information acquisition unit to acquire the 3D information of the target to be measured, acquires the gravitational acceleration information of the 3D information acquisition unit through an IMU (Inertial Measurement Unit), and calculates the inclination angle of the target to be measured from the obtained 3D information and gravitational acceleration information, so that accurate three-dimensional information of the object can be obtained and the spatial pose of the object can be determined.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to a target inclination angle detection method and device based on a depth map.
Background
A depth camera can acquire depth information of a target to generate a depth map of the target object, enabling functions such as 3D scanning, scene modeling and gesture interaction. Depth cameras are currently classified by working principle into ToF (Time of Flight) depth cameras, depth cameras based on binocular vision, and structured light depth cameras. When a depth camera scans an object to obtain a depth map, it obtains the two-dimensional pixel information of the object, from which three-dimensional coordinate information can be obtained through coordinate transformation. However, when the object is placed obliquely with respect to the ground, the depth camera cannot obtain the inclination angle of the object with respect to the ground, so the three-dimensional information of the object restored from the depth map is not accurate enough, and it is difficult to determine the spatial pose of the object from this information.
The above background disclosure is only for the purpose of assisting understanding of the inventive concept and technical solutions of the present invention, and does not necessarily belong to the prior art of the present patent application, and should not be used for evaluating the novelty and inventive step of the present application in the case that there is no clear evidence that the above content is disclosed at the filing date of the present patent application.
Disclosure of Invention
The present invention is directed to a method and an apparatus for detecting a target tilt angle based on a depth map, so as to solve at least one of the above-mentioned problems.
In order to achieve the above purpose, the technical solution of the embodiment of the present invention is realized as follows:
a target inclination angle detection method based on a depth map comprises the following steps:
s1, acquiring the 3D information of the target to be detected by using a 3D information acquisition unit;
acquiring a depth map of a target to be detected through a 3D information acquisition unit, and converting information extracted from the depth map into 3D information of the target to be detected after corresponding processing;
step S2, acquiring gravity acceleration information of the 3D information acquisition unit;
acquiring the gravitational acceleration information of the 3D information acquisition unit through an IMU (Inertial Measurement Unit), wherein the IMU and the 3D information acquisition unit are installed on the same platform, or the IMU is fixedly arranged on the 3D information acquisition unit;
and step S3, calculating the inclination angle of the target to be measured according to the 3D information of the target to be measured obtained in the step S1 and the gravity acceleration information of the 3D information acquisition unit obtained in the step S2.
In some embodiments, the step S1 includes:
s11, collecting a depth map of the area where the target to be detected is located;
s12, preprocessing the depth map to obtain the depth information of the target to be detected;
and S13, calculating the 3D information of the target to be measured according to the depth information of the target to be measured.
In some embodiments, in step S13, a camera coordinate system is established, an image coordinate system and a pixel coordinate system are established in the acquired depth map, and the transformation relation from the camera coordinate system to the image coordinate system and from the image coordinate system to the pixel coordinate system is calculated as follows:

X = (μ − μ₀)·Z/f_x,  Y = (ν − ν₀)·Z/f_y

where μ and ν are the pixel coordinates of a single pixel point in the depth map, Z is the corresponding depth value, X, Y and Z are the spatial coordinates of the pixel point in the camera coordinate system, i.e. the 3D information, Δx and Δy are the sizes of a single pixel point in the horizontal and vertical directions, and f_x = f/Δx and f_y = f/Δy are the focal length f expressed in pixels in the horizontal and vertical directions, respectively; according to the depth information of the target to be measured, the 3D information of the target to be measured is obtained through this transformation relation.
In some embodiments, the step S2 includes:
s21, calibrating the relative poses of the IMU unit and the 3D information acquisition unit in advance;
s22, acquiring data information of the IMU unit test;
the IMU unit comprises a three-axis accelerometer and a three-axis gyroscope, and three-dimensional attitude angle information and linear acceleration of the 3D information acquisition unit are obtained by combining measurement data of the accelerometer and the gyroscope;
and S23, acquiring the gravity acceleration information of the 3D information acquisition unit according to the data information of the IMU unit.
In some embodiments, in step S3, the three-dimensional attitude information of the target is calculated using the 3D information, the gravitational acceleration information of the 3D information acquisition unit is obtained from the measurements of the IMU unit, and the inclination angle of the target relative to the ground is calculated using the three-dimensional attitude information of the target and the gravitational acceleration information.
In some embodiments, step S3 includes: and solving a normal vector of the plane to be measured, taking the gravity acceleration as a normal vector of the ground, solving a cosine value of an included angle between the two planes according to the normal vector of the plane to be measured and the normal vector of the ground, and calculating the inclination angle of the plane to be measured relative to the ground according to the cosine value.
The other technical scheme of the invention is as follows:
a target inclination angle detection device based on a depth map comprises a 3D information acquisition unit, an IMU unit and a processing unit; the 3D information acquisition unit is used for acquiring a depth map of a target to be detected; the IMU unit is used for acquiring the gravity acceleration information of the 3D information acquisition unit, wherein the IMU unit and the 3D information acquisition unit are arranged on the same platform, or the IMU unit is fixedly arranged on the 3D information acquisition unit; and the processing unit is used for processing the depth map acquired by the 3D information acquisition unit so as to acquire the 3D information of the target to be detected, and calculating the inclination angle of the target to be detected according to the 3D information and the gravity acceleration information.
In some embodiments, the processing unit is further configured to establish an image coordinate system and a pixel coordinate system in the acquired depth map, and to calculate the conversion relation from the camera coordinate system to the image coordinate system and from the image coordinate system to the pixel coordinate system as follows:

X = (μ − μ₀)·Z/f_x,  Y = (ν − ν₀)·Z/f_y

where μ and ν are the pixel coordinates of a single pixel point in the depth map, Z is the corresponding depth value, X, Y and Z are the spatial coordinates of the pixel point in the camera coordinate system, i.e. the 3D information, Δx and Δy are the sizes of a single pixel point in the horizontal and vertical directions, and f_x = f/Δx and f_y = f/Δy are the focal length f expressed in pixels in the horizontal and vertical directions, respectively; according to the depth information of the target to be measured, the 3D information of the target to be measured is obtained through this conversion relation.
In some embodiments, the IMU unit includes a three-axis accelerometer and a three-axis gyroscope; the accelerometer is used for acquiring acceleration information of the IMU unit, the gyroscope is used for acquiring rotation angular velocity change rates of the IMU unit on three axes, and attitude angle information is obtained through integration.
The other technical scheme of the invention is as follows:
a terminal electronic device, comprising: the device comprises a shell, a screen and the depth map-based target inclination angle detection equipment in the technical scheme; the 3D information acquisition unit of the inclination angle detection device is arranged on a first plane of the terminal electronic device and used for acquiring depth map information; the screen is arranged on a second plane of the electronic equipment and is used for displaying image information; the first plane and the second plane are the same plane or the first plane and the second plane are opposite planes.
The technical scheme of the invention has the beneficial effects that:
the target inclination angle detection method based on the depth map utilizes a 3D information acquisition unit to acquire 3D information of a target to be detected; acquiring gravity acceleration information of a 3D information acquisition unit through an IMU (inertial measurement Unit); and calculating the inclination angle of the target to be detected according to the obtained 3D information of the target to be detected and the gravity acceleration information of the 3D information acquisition unit, so that accurate three-dimensional information of the object can be obtained to determine the spatial pose of the object.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a flowchart illustration of a depth map-based target tilt angle detection method according to an embodiment of the present invention.
Fig. 2a and 2b are schematic diagrams of a coordinate system constructed by a target inclination angle detection method based on a depth map according to an embodiment of the invention.
Fig. 3 is a schematic diagram of a depth map-based target tilt angle detection apparatus according to an embodiment of the present invention.
Fig. 4 is a schematic diagram of a terminal electronic device including the device shown in fig. 3 according to an embodiment of the present invention.
Detailed Description
In order to make the technical problems, technical solutions and advantageous effects to be solved by the embodiments of the present invention more clearly apparent, the present invention is further described in detail below with reference to the accompanying drawings and the embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
It will be understood that when an element is referred to as being "secured to" or "disposed on" another element, it can be directly on the other element or be indirectly on the other element. When an element is referred to as being "connected to" another element, it can be directly connected to the other element or be indirectly connected to the other element. The connection may be for fixation or for circuit connection.
It is to be understood that the terms "length," "width," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like indicate orientations or positional relationships based on those shown in the drawings, are used merely for convenience in describing the embodiments of the present invention and to simplify the description, and are not intended to indicate or imply that the referenced device or element must have a particular orientation or be constructed and operated in a particular orientation; they are therefore not to be construed as limiting the present invention.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the embodiments of the present invention, "a plurality" means two or more unless specifically limited otherwise.
Fig. 1 is a flowchart illustrating a depth map-based target tilt angle detection method according to an embodiment of the present invention, where the detection method includes the following steps:
s1, acquiring the 3D information of the target to be detected by using a 3D information acquisition unit;
and acquiring a depth map of the target to be detected through the 3D information acquisition unit, and converting the information extracted from the depth map into 3D information of the target to be detected after corresponding processing. Specifically, the method comprises the following steps:
s11, collecting a depth map of the area where the target to be detected is located;
specifically, a depth map of an area where a target to be measured is located is acquired by using a 3D information acquisition unit. The 3D information acquisition unit may be a depth camera or other devices capable of acquiring a depth map of a target object, and the depth camera may include a TOF depth camera, a binocular vision-based depth camera, a structured light depth camera, and the like, which are not particularly limited in the present invention. In one embodiment, the 3D information acquisition unit is a structured light depth camera, and the structured light depth camera includes a transmitting unit, a receiving unit, and a processing unit; and transmitting the coded structured light pattern light beam to the area where the target is located through the transmitting unit, receiving the structured light pattern reflected from the area where the target is located through the receiving unit, and inputting the structured light pattern into the processing unit for processing to obtain the depth map of the area where the target to be detected is located.
It should be noted that the depth map is a two-dimensional matrix of pixels, each pixel corresponding one-to-one with a position in the scene, and the value of each pixel indicating the distance between its position in the scene and a reference position, i.e. the depth. In practical applications, however, not all pixels in the depth map contain valid information: when the position of the target to be measured is not within the distance range acquired by the depth camera, the corresponding pixels acquired by the depth camera are assigned a predetermined zero value, indicating that they contain no depth information. Thus, in some embodiments, the depth map needs to be preprocessed.
S12, preprocessing the depth map to obtain the depth information of the target to be detected;
background information and other interference information are usually included in the depth map of the region where the target to be detected is acquired by using the 3D information acquisition unit, and therefore, the depth map needs to be processed to remove the background and the interference information and extract the depth information of the target to be detected, so as to obtain the depth information, i.e., the depth value, of the target to be detected.
In an embodiment, taking a structured light depth camera as an example for explanation, after receiving a structured light pattern, a processing unit of the structured light depth camera first performs preprocessing, where the preprocessing generally includes denoising, optimization and the like for removing coarse error points and random error points in a depth map, and processing image contrast, brightness and the like; and then, calculating the depth information of the target to be measured based on the image matching result and the structured light trigonometry principle. In some embodiments, the image matching algorithm may be a mean absolute difference algorithm, a sum of squared errors algorithm, or the like. In some embodiments, the preprocessing further includes a background removal process, and the depth of each pixel belonging to the background in the acquired depth map is set to zero, and the specific method for removing the background is not limited in the present invention.
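As an illustration of this preprocessing stage, the following sketch (not taken from the patent; the range threshold and filter window are illustrative assumptions) masks zero-depth and out-of-range pixels and applies a simple median filter to suppress random error points:

```python
import numpy as np

def preprocess_depth(depth, max_range=5000.0, blur_kernel=3):
    """Mask invalid pixels and crudely remove background from a depth map.

    depth: 2-D array of depth values (e.g. millimetres); 0 marks 'no depth'.
    max_range: depths beyond this (assumed background) are zeroed.
    blur_kernel: odd window size for a naive median filter (denoising).
    """
    d = depth.astype(np.float32).copy()
    d[(d <= 0) | (d > max_range)] = 0.0      # drop invalid / background pixels

    # Naive median denoising to suppress random error points.
    k = blur_kernel // 2
    padded = np.pad(d, k, mode="edge")
    out = np.empty_like(d)
    h, w = d.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + 2 * k + 1, j:j + 2 * k + 1])
    return out
```

A production pipeline would use a vectorized or OpenCV median filter; the loop form is kept here only to make the windowing explicit.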
S13, calculating the 3D information of the target to be detected according to the depth information of the target to be detected;
Specifically, a camera coordinate system is established, and an image coordinate system and a pixel coordinate system are established in the acquired depth map. As shown in FIG. 2a, the camera coordinate system takes the depth camera optical center O_c as the coordinate origin, with the optical axis direction as the Z_c axis; the plane spanned by the X_c axis and the Y_c axis is the plane of the depth camera, and the X_c and Y_c axes are mutually perpendicular.

The imaging plane of the depth camera is parallel to the X_cO_cY_c plane, and its distance from the origin O_c of the camera coordinate system equals the focal length f of the camera. The intersection of the imaging plane and the optical axis is the central point o; the image coordinate system is established with o as its origin, and the pixel coordinates of o are (μ₀, ν₀). The horizontal direction is the x axis, parallel to the X_c axis of the camera coordinate system, and the vertical direction is the y axis, parallel to the Y_c axis of the camera coordinate system.

The pixel coordinate system takes the top-left vertex of the image as its origin; the horizontal direction is the μ axis, parallel to the x axis of the image coordinate system, and the vertical direction is the ν axis, parallel to the y axis of the image coordinate system. According to the projection geometry principle, taking a point P with pixel coordinates (μ, ν) in the depth map as an example, the following describes how to calculate the spatial coordinates (X, Y, Z) of P in the camera coordinate system.
The transformation from the camera coordinate system to the image coordinate system is from three dimensions to two, a perspective projection relation, which satisfies:

x = f·X/Z,  y = f·Y/Z    (1)

and the conversion from the image coordinate system to the pixel coordinate system satisfies:

μ = x/Δx + μ₀,  ν = y/Δy + ν₀    (2)

Combining relations (1) and (2), the conversion relation of the point P from the camera coordinate system to the pixel coordinate system is obtained:

μ = f_x·X/Z + μ₀,  ν = f_y·Y/Z + ν₀    (3)

According to the above transformation relation (3), for any point P in the depth map of the target to be measured with pixel coordinates (μ, ν) and corresponding depth value Z, the spatial coordinates (X, Y, Z) of the point P in the camera coordinate system, i.e. the 3D information of the point P, can be obtained from formula (3). Here Δx and Δy are the sizes of a single pixel point in the horizontal and vertical directions, and f_x = f/Δx and f_y = f/Δy are the focal length f in pixel units in the horizontal and vertical directions, respectively. According to the depth information of the target to be measured, its 3D information is obtained through this transformation relation.
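Inverting relation (3) gives the back-projection used in step S13. A minimal sketch, with illustrative intrinsic values (f_x, f_y, μ₀, ν₀ are assumptions, not values from the patent):

```python
import numpy as np

def pixel_to_camera(u, v, z, fx, fy, u0, v0):
    """Back-project one depth-map pixel (u, v) with depth z into camera
    coordinates (X, Y, Z), inverting relation (3): u = fx*X/Z + u0, etc."""
    x = (u - u0) * z / fx
    y = (v - v0) * z / fy
    return np.array([x, y, z])

def depth_map_to_points(depth, fx, fy, u0, v0):
    """Back-project every valid (non-zero) pixel of a depth map to an
    (N, 3) array of camera-frame points."""
    vs, us = np.nonzero(depth)                # rows -> v, columns -> u
    zs = depth[vs, us].astype(np.float64)
    xs = (us - u0) * zs / fx
    ys = (vs - v0) * zs / fy
    return np.column_stack([xs, ys, zs])
```

For example, the principal-point pixel back-projects onto the optical axis: `pixel_to_camera(u0, v0, z, ...)` yields (0, 0, z).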
As will be understood by those skilled in the art, f, μ₀, ν₀, Δx and Δy above all belong to the intrinsic parameters of the depth camera and are determined by its internal structure; in some embodiments, the depth camera may be calibrated using Zhang's calibration method.
Step S2, acquiring gravity acceleration information of the 3D information acquisition unit;
acquiring gravity acceleration information of a 3D information acquisition unit through an Inertial Measurement Unit (IMU); the IMU unit and the 3D information acquisition unit are installed on the same platform, or the IMU unit is fixedly arranged on the 3D information acquisition unit. In one embodiment, the IMU unit includes a three-axis accelerometer and a three-axis gyroscope, and three-dimensional attitude angle information of the 3D information acquisition unit is obtained by combining the accelerometer and gyroscope measurement data. Wherein, step S2 specifically includes:
s21, calibrating the relative poses of the IMU unit and the 3D information acquisition unit in advance;
when the IMU unit is placed horizontally, the central point L is the origin of coordinates, the direction perpendicular to the ground and upward is the positive direction of the Z axis, the direction parallel to the optical axis of the depth camera is the Y axis, and the XOY plane is parallel to the ground, specifically referring to fig. 2b, the description will be given by taking the 3D information acquisition unit as the depth camera as an example.
The depth camera 10 is calibrated to obtain the intrinsic and extrinsic parameters of the depth camera 10. Coordinates in the camera coordinate system and in the IMU coordinate system satisfy the following transformation relation:

T_c = R·T_I + S    (4)

where T_c denotes coordinates in the depth camera coordinate system, T_I denotes coordinates in the IMU coordinate system, and R and S are the rotation and translation relationship matrices between the camera and IMU coordinate systems, respectively. In one embodiment, R and S may be solved by collecting IMU data and depth camera extrinsic parameters at different poses, combined with least squares.
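Once R and S are calibrated, relation (4) can be applied as follows. This is a hedged sketch, not the patent's implementation; note that a free vector such as gravity is transformed by the rotation R alone, since translation does not affect directions:

```python
import numpy as np

def imu_point_to_camera(p_imu, R, S):
    """Relation (4): T_c = R @ T_I + S, mapping an IMU-frame point into the
    depth-camera frame using the pre-calibrated R (3x3) and S (3,)."""
    return R @ np.asarray(p_imu, dtype=float) + np.asarray(S, dtype=float)

def imu_vector_to_camera(v_imu, R):
    """Free vectors (e.g. the gravity vector) rotate but do not translate."""
    return R @ np.asarray(v_imu, dtype=float)
```

In step S23 it is the second form that matters: the gravity vector measured in the IMU frame is rotated into the camera frame before being compared with the plane normal.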
S22, acquiring data information of the IMU unit test;
the three-axis accelerometer of the IMU unit outputs data a of three axesx,ay,azAcceleration is obtained from the data of three axesProjection on three axes, whereinWhich may be the acceleration of gravity or the sum of the acceleration external to the IMU unit and the acceleration of gravity, then:
in one embodiment, if the IMU unit and the 3D information acquisition unit are placed horizontally stationary relative to the ground or are kept in uniform motion, the acceleration components of the three-axis accelerometer in the X and Y directions are zero, and the acceleration vector obtained in the Z axis isThe acceleration measured at this time is the gravity vector of the 3D information acquisition unit.
In one embodiment, if the IMU unit and the 3D information acquisition unit are placed in a tilted and stationary manner or move at a uniform speed relative to the ground, the tri-axial accelerometer outputs data g of three axesx,gy,gzThe gravitational acceleration at this time is:
the inclination angles of the IMU unit and the 3D information acquisition unit at this time, i.e. the included angle Azr with the z-axis, can be obtained from the data of the accelerometer:
in the same way, included angles Axr and Ayr between the x axis and the y axis can be obtained, wherein the Axr and the Ayr respectively satisfy the following conditions:
in one embodiment, the IMU unit and the 3D information acquisition unit are combinedMoving together, during measurement, the data of three axes output by the three-axis accelerometer not only contains the component of gravity acceleration but also contains the component of linear acceleration generated by movement, namely the measured accelerationIs acceleration of gravityAnd linear acceleration generated by the movementAnd, noted as:
at the moment, the accelerometer cannot accurately distinguish the gravity vector of the 3D information acquisition unit, the gravity acceleration information can be obtained by adopting a low-pass filtering algorithm, but because the accelerometer has errors during real-time measurement, the errors need to be corrected by using the stability of the gyroscope during short-time measurement. The gravity vector of the 3D information acquisition unit is obtained by using the gravity acceleration value of the triaxial accelerometer in a static state as an initial value and combining the measurement value correction error calculation of the triaxial gyroscopeIn some embodiments, the algorithm for the correction includes a kalman filter, a complementary filter, a kalman filter based on euler angles, a multiplicative extended kalman filter, or the like.
On the other hand, the three-dimensional attitude angle information and the current linear acceleration of the 3D information acquisition unit can be calculated by combining the gyroscope and the accelerometer. The three-axis gyroscope can be used for measuring the rotation angular speed change rate of the IMU unit on three axes, and the rotation angular speed change rate is respectively recorded as wx,wy,wzThe attitude angle information is obtained by integration, so that the influence of linear acceleration or linear motion can be avoided. But due to the need to calculate the angle using angular velocityThe integration process is introduced, angle drift can be inevitably caused due to zero offset and noise, and the gyroscope cannot directly measure the horizontal plane, so that the gravity acceleration at the moment can be obtained by combining the measurement of the accelerometer and the gyroscope, and further the three-dimensional attitude angle information and the linear acceleration of the 3D information acquisition unit can be obtained.
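The accelerometer/gyroscope fusion described above is commonly realized with a complementary filter; the following single-axis sketch shows one common form (the blend factor α and 1-D simplification are illustrative assumptions, not details from the patent):

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One step of a single-axis complementary filter.

    angle:       previous fused tilt estimate (degrees)
    gyro_rate:   angular velocity about the axis (deg/s); drift-free short-term
                 but integrates to a drifting angle long-term
    accel_angle: tilt computed from the accelerometer (degrees); noisy
                 short-term but unbiased long-term
    alpha:       blend factor; large alpha trusts the integrated gyro
    """
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle
```

Iterated over time, the estimate follows the gyro through fast motion while the accelerometer term slowly pulls out the accumulated drift.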
S23, acquiring the gravity acceleration information of the 3D information acquisition unit according to the data information of the IMU unit;
specifically, the gravity acceleration information of the 3D information acquisition unit is calculated according to a pre-calibrated relation and the acceleration information and the angular velocity information measured by the IMU unit.
And step S3, calculating the inclination angle of the target to be measured according to the 3D information and the gravity acceleration information of the target to be measured in the camera coordinate system.
Specifically, the 3D information of the target to be measured in the camera coordinate system is calculated from the information obtained from its depth map (the specific calculation process is as in step S13), and the three-dimensional attitude information of the target is then calculated from this 3D information. The gravitational acceleration vector g of the 3D information acquisition unit is calculated from the IMU unit, and the inclination angle of the target to be measured relative to the ground is calculated using the three-dimensional attitude information and the gravitational acceleration information.
In one embodiment, assuming that the target to be measured is a plane which is obliquely arranged relative to the ground and is recorded as the plane to be measured, the inclination angle of the plane to be measured relative to the ground is θ, the three-dimensional attitude information of the target to be measured, which is calculated according to the 3D information, includes a plane normal vector of the plane to be measured, and the inclination angle can be calculated according to the plane normal vector and the gravity acceleration information. The specific calculation steps are as follows:
s31, extracting coordinate information of at least any three pixel points, namely p, from the depth map according to the depth map of the target to be detected acquired by the 3D information acquisition unit and the principle that at least 3 points can determine one plane1(μ1,ν1),p2(μ2,ν2),p3(μ3,ν3) Solving three corresponding points P according to the coordinate transformation matrix1(x1,y1,z1),P2(x2,y2,z2) And P3(x3,y3,z3) Spatial coordinates in the camera coordinate system.
S32, according to the three points P₁(x₁, y₁, z₁), P₂(x₂, y₂, z₂) and P₃(x₃, y₃, z₃) on the plane to be measured, respectively calculating the vectors formed by the two point pairs [P₁, P₂] and [P₁, P₃]; the normal vector of the plane to be measured is obtained from these two vectors by the cross product:

n = (P₂ − P₁) × (P₃ − P₁)
S33, taking the calculated normal vector n of the plane to be measured and the gravitational acceleration vector g, which serves as a normal vector of the ground, the cosine of the included angle between the two planes (the plane to be measured and the ground) is calculated according to the formula:

cos θ = (n · g) / (|n| · |g|)
The inclination angle θ of the plane to be measured relative to the ground is then obtained from this cosine value.
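As a concrete illustration of steps S31–S33, the following sketch computes the tilt angle from three camera-space points and a gravity vector; the function name and structure are illustrative, not from the patent:

```python
import numpy as np

def tilt_angle_from_plane_points(P1, P2, P3, g):
    """Angle (degrees) between the plane through P1, P2, P3 (camera
    coordinates) and the ground, whose normal is the gravity vector g
    measured by the IMU."""
    v1 = np.asarray(P2, dtype=float) - np.asarray(P1, dtype=float)
    v2 = np.asarray(P3, dtype=float) - np.asarray(P1, dtype=float)
    n = np.cross(v1, v2)  # normal vector of the plane to be measured
    cos_t = np.dot(n, g) / (np.linalg.norm(n) * np.linalg.norm(g))
    # abs() folds the two possible normal orientations into [0°, 90°];
    # clip guards against rounding slightly outside [-1, 1]
    return np.degrees(np.arccos(np.clip(abs(cos_t), -1.0, 1.0)))
```

For example, three points on a plane parallel to the ground yield an angle of 0°, and a plane rotated 30° about the camera's X axis yields 30°.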
It should be noted that solving the inclination angle from the normal vector is only a specific example illustrating the design idea of the present invention; in some other embodiments, other calculation methods may be used to solve the inclination angle of the plane to be measured, and all such methods that embody the design idea of the present invention fall within its protection scope.
Another embodiment of the present invention is a target tilt angle detection apparatus based on a depth map, and fig. 3 is a schematic diagram of a target tilt angle detection apparatus based on a depth map according to an embodiment of the present invention. The device comprises a 3D information acquisition unit 10, an IMU unit 12 and a processing unit 14. The 3D information acquisition unit 10 is configured to acquire a depth map of the target 16 to be measured, and the IMU unit 12 is configured to acquire gravitational acceleration information of the 3D information acquisition unit.
The 3D information acquisition unit 10 and the IMU unit 12 are integrally arranged on the same platform or the IMU unit 12 is fixed on the 3D information acquisition unit 10, so that the 3D information acquisition unit and the IMU unit are kept in a relatively static state in the moving process of the equipment. The processing unit 14 includes a processor and a memory therein, and is configured to perform a series of processing on the depth map acquired by the 3D information acquisition unit 10 to acquire 3D information of the target to be measured, and calculate an included angle θ between the target to be measured 16 and the ground 18 according to the 3D information and the gravitational acceleration information.
The 3D information acquisition unit 10 is used to obtain a depth map of the target to be measured. In some embodiments, the 3D information acquisition unit includes a depth camera or another device that can acquire depth information; specifically, the depth camera may be a binocular-vision depth camera, a ToF (Time of Flight) depth camera, a structured light depth camera, and the like. The present invention places no limitation on which type of depth camera is used; any of these forms can be adopted in the embodiments of the present invention. Taking a structured light depth camera as an example, it includes a transmitting unit, a receiving unit and a processing unit: the transmitting unit projects a coded structured light pattern beam onto the area where the target is located, the receiving unit captures the structured light pattern reflected from that area, and the received pattern is input to the processing unit and processed to obtain the depth map of the area where the target to be detected is located.
A camera coordinate system is established within the depth camera, and an image coordinate system and a pixel coordinate system are established in the acquired depth map. The value of each pixel in the depth map represents the distance of the corresponding space point from the depth camera, i.e., the Z value in the camera coordinate system; the X and Y values are calculated from the coordinates of each pixel point in the pixel coordinate system together with the intrinsic parameters of the depth camera. The depth camera can therefore acquire the three-dimensional information (also referred to as 3D information), i.e., the actual spatial position, of the target to be measured in the camera coordinate system.
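The pixel-to-camera conversion described above can be sketched as follows under the standard pinhole model; the intrinsic parameters fx, fy, cx, cy (focal lengths in pixels and principal point) and the function name are illustrative assumptions, not values from the patent:

```python
import numpy as np

def pixels_to_camera_points(pixels, depths, fx, fy, cx, cy):
    """Back-project depth-map pixels (u, v) with depth Z into
    camera-coordinate points (X, Y, Z) via the pinhole model.
    fx, fy: focal length in pixels; cx, cy: principal point."""
    pts = []
    for (u, v), z in zip(pixels, depths):
        x = (u - cx) * z / fx  # X grows with horizontal pixel offset
        y = (v - cy) * z / fy  # Y grows with vertical pixel offset
        pts.append((x, y, z))  # Z is read directly from the depth map
    return np.array(pts)
```

For instance, the pixel at the principal point back-projects to (0, 0, Z), directly on the optical axis.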
The IMU unit 12 is configured to acquire the gravitational acceleration information of the 3D information acquisition unit. In one embodiment, the IMU unit 12 includes a three-axis accelerometer and a three-axis gyroscope: the accelerometer obtains the acceleration information of the IMU unit, while the gyroscope obtains the rates of change of the rotation angular velocity of the IMU unit about the three axes and yields attitude angle information through integration, the gyroscope itself being unaffected by linear acceleration or linear motion. In some embodiments, the IMU unit 12 may instead include any combination of one or more of an accelerometer, a gyroscope and a magnetometer for obtaining the gravitational acceleration information, and has the advantages of small size and low power consumption.
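As a minimal illustration of extracting a gravity vector from accelerometer data, the sketch below applies an exponential low-pass filter to raw samples. This is a deliberate simplification (the IMU described above additionally fuses gyroscope data), and all names are assumptions for illustration:

```python
import numpy as np

def estimate_gravity(accel_samples, alpha=0.9):
    """Estimate the gravity vector from raw accelerometer samples
    with an exponential low-pass filter: gravity is the slowly
    varying component, motion the high-frequency component."""
    g = np.asarray(accel_samples[0], dtype=float)
    for a in accel_samples[1:]:
        # alpha close to 1 -> heavier smoothing, slower response
        g = alpha * g + (1.0 - alpha) * np.asarray(a, dtype=float)
    return g
```

When the device is stationary the filter converges to the accelerometer reading itself, roughly (0, 0, ±9.81) m/s² depending on axis conventions.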
The processing unit 14 is further configured to receive the depth map and perform further processing, calculate 3D information of the target to be measured in the camera coordinate system according to the information acquired from the depth map, further calculate three-dimensional posture information of the target to be measured by using the 3D information, and calculate an inclination angle of the target to be measured with respect to the ground according to the gravitational acceleration information of the 3D information acquisition unit.
In some embodiments, the processing unit 14 includes a memory 20 and a processor 22, the memory 20 being operable to store a computer program for execution by the processor 22. The memory 20 may be composed of one or more memory units, and each memory unit is used for storing different kinds of data. In the present invention, the memory 20 can be used to store camera parameters needed by the depth camera when calculating the depth, and a corresponding algorithm program for calculating the tilt angle of the object to be measured.
In some embodiments, the processing unit 14 may be a Central Processing Unit (CPU), but may also be another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and the like. The memory 20 may be an internal memory unit or an external memory device connected to the processing unit, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card.
Another embodiment of the present invention further provides a terminal electronic device, where the terminal electronic device may be a mobile phone, a tablet, a computer, a television, a robot, and the like, and as shown in fig. 4, the terminal electronic device 300 is described by taking the mobile phone as an example, and includes a housing 31, a screen 32, and the target tilt angle detection device based on the depth map according to the foregoing embodiment; the 3D information acquisition unit of the inclination angle detection device is arranged on a first plane of the terminal electronic device and used for acquiring depth map information; a screen 32 mounted on a second plane of the electronic device for displaying image information; the first plane and the second plane are the same plane or the first plane and the second plane are opposite planes. In some embodiments, the processing unit of the tilt angle detection device may share the processor of the terminal electronic device; in some embodiments, the tilt angle detection device may also share the same IMU unit with the terminal electronic device, when the terminal electronic device itself is provided with the IMU unit.
By integrating the tilt angle detection device into terminal electronic equipment, the functions of the terminal electronic equipment are continuously expanded and its applications become increasingly broad, for example face-swipe payment and intelligent unlocking developed using face recognition technology.
It is to be understood that the foregoing is a more detailed description of the invention, and that specific embodiments are not to be considered as limiting the invention. It will be apparent to those skilled in the art that various substitutions and modifications can be made to the described embodiments without departing from the spirit of the invention, and these substitutions and modifications should be considered to fall within the scope of the invention. In the description herein, references to the description of the term "one embodiment," "some embodiments," "preferred embodiments," "an example," "a specific example," or "some examples" or the like are intended to mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention.
In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction. Although embodiments of the present invention and their advantages have been described in detail, it should be understood that various changes, substitutions and alterations can be made herein without departing from the scope of the invention as defined by the appended claims.
Moreover, the scope of the present application is not intended to be limited to the particular embodiments of the process, machine, manufacture, composition of matter, means, methods and steps described in the specification. One of ordinary skill in the art will readily appreciate that the above-disclosed, presently existing or later to be developed, processes, machines, manufacture, compositions of matter, means, methods, or steps, that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be utilized. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or steps.
Claims (10)
1. A target inclination angle detection method based on a depth map is characterized by comprising the following steps:
S1, acquiring the 3D information of the target to be detected by using a 3D information acquisition unit;
acquiring a depth map of a target to be detected through a 3D information acquisition unit, and converting information extracted from the depth map into 3D information of the target to be detected after processing;
step S2, acquiring gravity acceleration information of the 3D information acquisition unit;
acquiring gravity acceleration information of the 3D information acquisition unit through an IMU (Inertial Measurement Unit), wherein the IMU unit and the 3D information acquisition unit are installed on the same platform, or the IMU unit is fixedly arranged on the 3D information acquisition unit;
and step S3, calculating the inclination angle of the target to be measured according to the 3D information of the target to be measured obtained in the step S1 and the gravity acceleration information of the 3D information acquisition unit obtained in the step S2.
2. The depth map-based target inclination angle detection method according to claim 1, characterized in that: the step S1 includes:
S11, collecting a depth map of the area where the target to be detected is located;
S12, preprocessing the depth map to acquire depth information of the target to be detected;
and S13, calculating the 3D information of the target to be detected according to the depth information of the target to be detected.
3. The depth map-based target inclination angle detection method according to claim 2, characterized in that: in step S13, a camera coordinate system is established, an image coordinate system and a pixel coordinate system are established in the acquired depth map, and according to the transformation relationship from the camera coordinate system to the image coordinate system and from the image coordinate system to the pixel coordinate system, the transformation relationship from the camera coordinate system to the pixel coordinate system is obtained as:

X = (μ − μ₀) · Z / fx,  Y = (ν − ν₀) · Z / fy

wherein μ and ν are the pixel coordinates of a single pixel point in the depth map and Z is its corresponding depth value; X, Y and Z are the spatial coordinates of that pixel point in the camera coordinate system, namely the 3D information; (μ₀, ν₀) are the pixel coordinates of the principal point; Δx and Δy are the sizes of a single pixel in the horizontal and vertical directions; and fx = f/Δx and fy = f/Δy are the sizes of the focal length f in the horizontal and vertical directions, respectively; according to the depth information of the target to be detected, the 3D information of the target to be detected is obtained through this transformation relation.
4. The depth map-based target inclination angle detection method according to claim 1, characterized in that: the step S2 includes:
S21, calibrating the relative poses of the IMU unit and the 3D information acquisition unit in advance;
S22, acquiring the data information measured by the IMU unit;
the IMU unit comprises a three-axis accelerometer and a three-axis gyroscope, and three-dimensional attitude angle information and linear acceleration of the 3D information acquisition unit are obtained by combining measurement data of the accelerometer and the gyroscope;
and S23, acquiring the gravity acceleration information of the 3D information acquisition unit according to the data information of the IMU unit.
5. The depth map-based target inclination angle detection method according to claim 1, characterized in that: in step S3, the three-dimensional attitude information of the target to be measured is calculated using the 3D information, the gravitational acceleration information of the 3D information acquisition unit is obtained from the IMU unit, and the inclination angle of the target to be measured relative to the ground is calculated from the three-dimensional attitude information of the target to be measured and the gravitational acceleration information.
6. The depth map-based target inclination angle detection method according to claim 1, characterized in that: the step S3 includes: and solving a normal vector of a plane to be measured, taking the gravity acceleration as a normal vector of the ground, solving a cosine value of an included angle between the two planes according to the normal vector of the plane to be measured and the normal vector of the ground, and calculating the inclination angle of the plane to be measured relative to the ground according to the cosine value.
7. A target inclination angle detection apparatus based on a depth map, characterized in that: the system comprises a 3D information acquisition unit, an IMU unit and a processing unit; wherein the content of the first and second substances,
the 3D information acquisition unit is used for acquiring a depth map of a target to be detected;
the IMU unit is used for acquiring the gravity acceleration information of the 3D information acquisition unit; the IMU unit and the 3D information acquisition unit are installed on the same platform, or the IMU unit is fixedly arranged on the 3D information acquisition unit;
the processing unit is used for processing the depth map acquired by the 3D information acquisition unit to acquire the 3D information of the target to be detected, and calculating the inclination angle of the target to be detected according to the 3D information and the gravity acceleration information.
8. The depth-map-based target inclination angle detection apparatus according to claim 7, characterized in that: the processing unit is further configured to establish a camera coordinate system, establish an image coordinate system and a pixel coordinate system in the collected depth map, and, according to the transformation relationship from the camera coordinate system to the image coordinate system and from the image coordinate system to the pixel coordinate system, obtain the transformation relationship from the camera coordinate system to the pixel coordinate system as:

X = (μ − μ₀) · Z / fx,  Y = (ν − ν₀) · Z / fy

wherein μ and ν are the pixel coordinates of a single pixel point in the depth map and Z is its corresponding depth value; X, Y and Z are the spatial coordinates of that pixel point in the camera coordinate system, namely the 3D information; (μ₀, ν₀) are the pixel coordinates of the principal point; Δx and Δy are the sizes of a single pixel in the horizontal and vertical directions; and fx = f/Δx and fy = f/Δy are the sizes of the focal length f in the horizontal and vertical directions, respectively; according to the depth information of the target to be detected, the 3D information of the target to be detected is obtained through this transformation relation.
9. The depth-map-based target inclination angle detection apparatus according to claim 7, characterized in that: the IMU unit comprises a three-axis accelerometer and a three-axis gyroscope; the accelerometer is used for acquiring acceleration information of the IMU unit, and the gyroscope is used for acquiring the rates of change of the rotation angular velocity of the IMU unit about its three axes, attitude angle information being obtained through integration.
10. A terminal electronic device, comprising: a housing, a screen, and the depth map-based target tilt angle detection apparatus of any one of claims 7-9; the 3D information acquisition unit of the inclination angle detection device is arranged on a first plane of the terminal electronic device and used for acquiring depth map information; the screen is arranged on a second plane of the electronic equipment and is used for displaying image information; the first plane and the second plane are the same plane or the first plane and the second plane are opposite planes.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911295839.1A CN111025330B (en) | 2019-12-16 | 2019-12-16 | Target inclination angle detection method and device based on depth map |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111025330A CN111025330A (en) | 2020-04-17 |
CN111025330B true CN111025330B (en) | 2022-04-26 |
Family
ID=70209662
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113848964A (en) * | 2021-09-08 | 2021-12-28 | 金华市浙工大创新联合研究院 | Non-parallel optical axis binocular distance measuring method |
CN116912805B (en) * | 2023-09-07 | 2024-02-02 | 山东博昂信息科技有限公司 | Well lid abnormity intelligent detection and identification method and system based on unmanned sweeping vehicle |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1833156A (en) * | 2003-08-08 | 2006-09-13 | 卡西欧计算机株式会社 | Inclination angle detection device and inclination angle detection method |
CN103256920A (en) * | 2012-02-15 | 2013-08-21 | 天宝导航有限公司 | Determining tilt angle and tilt direction using image processing |
DE102016110461A1 (en) * | 2016-06-07 | 2017-12-07 | Connaught Electronics Ltd. | Method for detecting an inclination in a roadway for a motor vehicle, driver assistance system and motor vehicle |
CN109764856A (en) * | 2019-02-28 | 2019-05-17 | 中国民航大学 | Road face Slope-extraction method based on MEMS sensor |
CN110392845A (en) * | 2017-03-15 | 2019-10-29 | Zf 腓德烈斯哈芬股份公司 | For determining the facility and method of grade signal in the car |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI612276B (en) * | 2017-02-13 | 2018-01-21 | 國立清華大學 | Object pose measurement system based on mems imu and method thereof |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| CB02 | Change of applicant information | Address after: 11-13 / F, joint headquarters building, high tech Zone, 63 Xuefu Road, Yuehai street, Nanshan District, Shenzhen, Guangdong 518000. Applicant after: Obi Zhongguang Technology Group Co., Ltd. Address before: 11-13 / F, joint headquarters building, high tech Zone, 63 Xuefu Road, Yuehai street, Nanshan District, Shenzhen, Guangdong 518000. Applicant before: SHENZHEN ORBBEC Co.,Ltd.
| GR01 | Patent grant |