CN112750205B - Plane dynamic detection system and detection method

Plane dynamic detection system and detection method

Info

Publication number
CN112750205B
CN112750205B (application CN201911046410.9A)
Authority
CN
China
Prior art keywords
depth
continuously
plane
inertial sensor
camera
Prior art date
Legal status
Active
Application number
CN201911046410.9A
Other languages
Chinese (zh)
Other versions
CN112750205A (en)
Inventor
萧淳泽
Current Assignee
Nanjing Shenshi Optical Point Technology Co ltd
Original Assignee
Nanjing Shenshi Optical Point Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Nanjing Shenshi Optical Point Technology Co ltd
Priority to CN201911046410.9A
Publication of CN112750205A
Application granted
Publication of CN112750205B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T7/593 Depth or shape recovery from multiple images from stereo images
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Remote Sensing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Automation & Control Theory (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Length Measuring Devices With Unspecified Measuring Means (AREA)
  • Gyroscopes (AREA)

Abstract

The invention provides a plane dynamic detection system and detection method. An inertial sensor continuously acquires inertial data, and a depth camera continuously acquires depth images of a physical object (such as a plane or the ground) within its viewing range. A computing device continuously judges whether the acceleration and angular velocity acquired by the inertial sensor exceed a threshold value, so as to determine the motion state of the inertial sensor or of the device carrying it. When the inertial sensor is in a stable state, the computing device initializes or continuously updates a plane equation of the physical object in a camera coordinate system according to the acceleration, the depth image coordinates, the depth values and an internal parameter matrix; when the inertial sensor moves rapidly, the computing device obtains pose information of the depth camera through a VIO algorithm so as to continuously correct the plane equation.

Description

Plane dynamic detection system and detection method
Technical Field
The present invention relates to computer vision technology, and more particularly to a plane dynamic detection system and detection method that can refer to depth images, color images and inertial data to detect planes accurately and to dynamically update the relative position of a plane in three-dimensional space.
Background
In order to provide more realistic interaction for applications that require 3D information (e.g., AR/VR services), detecting planes in the real world is quite critical. If the plane to be detected is the ground, known ground detection methods include: (a) assuming that the ground is the largest plane, and using a RANSAC (Random Sample Consensus) algorithm or a Hough Transform algorithm to find the largest plane in three-dimensional space and define it as the ground; (b) assuming that the ground has the largest Z value on each scan line in the image and, after correcting the camera pose (roll rotation), defining as the ground the set of pixels in the image that have the largest Z values and fit a curve C.
However, in many cases the largest plane assumed by method (a) is often not the ground (for example, the largest plane in the image may be the wall of a corridor), so RANSAC or Hough Transform can make erroneous determinations; moreover, the RANSAC algorithm is limited by requiring that correct data (inliers) make up at least 50% of the samples, and the Hough Transform algorithm is quite time-consuming. Method (b) may likewise select a set of pixels that has the largest Z values and fits curve C yet does not belong to the ground.
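For concreteness, method (a) can be illustrated with a minimal RANSAC plane-fit sketch (background prior art, not the invention; NumPy is assumed, and the iteration count and inlier tolerance are illustrative):

```python
import numpy as np

def ransac_largest_plane(points, iters=200, tol=0.02):
    """Method (a) baseline: find the plane supported by the most 3D points.
    points: (N, 3) array; tol: inlier distance threshold (same unit as points).
    Returns (n, d) with n . p = d for inliers, plus the inlier mask."""
    rng = np.random.default_rng(0)
    best_inliers, best_plane = None, None
    for _ in range(iters):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p2 - p1, p3 - p1)           # normal from 3 samples
        norm = np.linalg.norm(n)
        if norm < 1e-9:                           # degenerate (collinear) draw
            continue
        n /= norm
        d = n @ p1
        inliers = np.abs(points @ n - d) < tol    # point-to-plane distances
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (n, d)
    return best_plane, best_inliers
```

Its weakness is exactly what is noted above: it returns the best-supported plane, which need not be the ground, and it degrades once inliers fall below roughly half of the data.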
Furthermore, no matter which method is used to detect the plane in the image, after a depth sensor (e.g., the depth camera) acquires a depth image, the conventional Point Cloud Library (PCL) practice is to multiply each pixel acquired by the depth sensor, in turn, by the camera projection inverse matrix (inverse camera matrix) and the depth value, converting it into one of many three-dimensional coordinates in the point cloud coordinate system, as in the relation:

(X, Y, Z)^T = Z·K^{-1}·(x, y, 1)^T

where (X, Y, Z)^T is the three-dimensional coordinate in the point cloud coordinate system, Z is the depth value, K^{-1} is the camera projection inverse matrix (K is usually the internal parameter matrix, an intrinsic property of the depth sensor mainly related to the conversion between camera coordinates and image coordinates), and (x, y, 1)^T is the homogeneous image coordinate of each pixel in the depth image (in the image coordinate system). The feature point set of these three-dimensional coordinates is then presented in the form of a point cloud, and the plane in the cloud image is detected by method (a) or (b); however, performing a matrix multiplication for every pixel entails a huge amount of computation and poor computational efficiency.
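The per-pixel back-projection criticized above can be sketched as follows (a minimal NumPy illustration under a standard pinhole model; the nested loop makes the per-pixel matrix multiplications explicit):

```python
import numpy as np

def depth_to_point_cloud(depth, K):
    """Conventional PCL-style conversion: for every pixel,
    (X, Y, Z)^T = Z * K^{-1} * (x, y, 1)^T, one matrix product per pixel."""
    K_inv = np.linalg.inv(K)
    h, w = depth.shape
    points = np.empty((h, w, 3))
    for y in range(h):
        for x in range(w):
            points[y, x] = depth[y, x] * (K_inv @ np.array([x, y, 1.0]))
    return points.reshape(-1, 3)
```

For a 640x480 depth image this amounts to 307,200 matrix-vector products per frame, which is the computational burden the invention seeks to avoid.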
In summary, conventionally known methods for detecting planes in three-dimensional space must first make strong assumptions about the different plane types (such as floors and walls), which risks misjudging the plane type, and they also suffer from poor computational performance. There is therefore a need for a plane detection system and detection method capable of detecting planes more accurately while saving computing resources.
Disclosure of Invention
In order to achieve the above object, the present invention provides a plane dynamic detection system, comprising an inertial sensor, a depth camera and a computing device. The inertial sensor comprises an accelerometer and a gyroscope. The depth camera continuously acquires a depth image so as to continuously input depth image coordinates and depth values of the depth camera for one or more physical objects within a viewing range. The computing device is coupled to the inertial sensor and the depth camera, respectively, and has a motion state judging unit and a plane detection unit. The motion state judging unit continuously judges whether the acceleration information and the angular velocity information acquired by the inertial sensor exceed a threshold value. If the threshold value is not exceeded, the plane detection unit calculates a normal vector and a distance constant according to the acceleration information, the depth image coordinates, the depth values and an internal parameter matrix, and, according to the normal vector and the distance constant, initializes or continuously updates a plane equation of the physical object in a camera coordinate system while the inertial sensor is in a stable state. Otherwise, if the threshold value is exceeded, the plane detection unit executes a visual-inertial odometry algorithm according to a gravitational acceleration of the acceleration information to obtain pose information of the depth camera, and continuously corrects the plane equation while the inertial sensor moves rapidly, based on a rotation matrix and displacement information of the pose information. The meaning of the plane equation is that any point on a plane, together with a normal perpendicular to the plane, uniquely defines that plane in three-dimensional space.
In order to achieve the above object, the present invention also provides a method for dynamically detecting a plane, comprising:
(1) An inertial data detection step: an inertial sensor continuously acquires inertial data such as acceleration information and angular velocity information;
(2) A motion state judging step: a computing device continuously judges whether the acceleration information and the angular velocity information acquired by the inertial sensor exceed a threshold value, so as to judge the motion state of the inertial sensor;
(3) A first plane equation updating step: if the threshold value is not exceeded, the computing device calculates a normal vector and a distance constant according to the acceleration information, the depth image coordinates, the depth values and an internal parameter matrix, and initializes or continuously updates a plane equation of the physical object in a camera coordinate system while the inertial sensor is in a stable state, according to the normal vector and the distance constant; and
(4) A second plane equation updating step: if the threshold value is exceeded, the computing device executes a visual-inertial odometry algorithm according to a gravitational acceleration of the acceleration information to obtain pose information of the depth camera, and continuously corrects the plane equation while the inertial sensor moves rapidly, based on a rotation matrix and displacement information of the pose information.
For the purpose of making the objective, technical features and effects of the present invention clear, the following description will be presented with reference to the drawings.
Drawings
FIG. 1 is a system architecture diagram of the present invention;
FIG. 2 is a flowchart (I) of the plane dynamic detection method according to the present invention;
FIG. 3 is a flowchart (II) of the plane dynamic detection method according to the present invention;
FIG. 4 is a system architecture diagram of another preferred embodiment of the present invention.
Detailed Description
The present application will be described below with reference to the drawings in the embodiments of the present application.
Referring to fig. 1, which is a system architecture diagram of the present invention: the present invention provides a plane dynamic detection system 1, which mainly includes an inertial sensor 10, a depth camera 20 and a computing device 30, wherein:
(1) The inertial sensor (Inertial Measurement Unit, IMU) 10 includes an accelerometer (G-sensor) 101 and a gyroscope 102, and can continuously obtain acceleration information and angular velocity information;
(2) The depth camera 20 can continuously acquire a depth image so as to continuously input depth image coordinates and depth values of the depth camera 20 for one or more physical objects within its viewing range. The depth camera 20 may be a depth sensor that measures the physical object using a time-of-flight (TOF), structured-light, or binocular-vision (stereo vision) scheme. In the time-of-flight scheme, the depth camera 20 serves as a TOF camera and emits infrared light from a light-emitting diode (LED) or laser diode (LD); since the speed of light is known, when the light striking the object surface is reflected, a depth image sensor can measure the reflection times from different depth positions of the physical object and thereby calculate the depth of the physical object at different positions and its depth image. In the structured-light scheme, the depth camera 20 uses a laser diode (LD) or a digital light processor (DLP) to produce different light patterns and diffracts them onto the surface of the physical object through a specific grating, forming a speckle pattern; the pattern reflected from different depth positions is distorted, so after the reflected light enters an infrared image sensor, the three-dimensional structure of the physical object and its depth image can be inferred. In the binocular-vision scheme, the depth camera 20 serves as a stereo camera that photographs the physical object with at least two lenses and, from the resulting disparity, measures the three-dimensional information (depth image) of the physical object by the principle of triangulation;
(3) The computing device 30 is coupled to the inertial sensor 10 and the depth camera 20, respectively, and has a motion state judging unit 301 and a plane detection unit 302 that are communicatively connected. The motion state judging unit 301 is configured to continuously judge whether the acceleration information and the angular velocity information obtained by the inertial sensor 10 exceed a threshold value, so as to judge the motion state of the inertial sensor 10 itself or of the device carrying it. It should be noted that the computing device 30 may have at least one processor (not shown; e.g., a CPU or MCU) for running the computing device 30, performing logical operations, temporarily storing operation results, and storing the positions of execution instructions, and that the motion state judging unit 301 and the plane detection unit 302 may run on a plane dynamic detection device (not shown; e.g., a head-mounted display such as a VR or MR helmet), a host, a physical server or a virtual machine (VM), without limitation;
(4) If the threshold value is not exceeded, the plane detection unit 302 is configured to calculate a normal vector and a distance constant (the d value) according to the acceleration information, the depth image coordinates (in the pixel domain), the depth values and an internal parameter matrix (intrinsic parameter matrix), and, according to that normal vector and distance constant (which are obtained in the image coordinate system), to initialize or continuously update a plane equation of the physical object in a camera coordinate system while the inertial sensor 10 is in a stable state; the meaning of a plane equation is that any point on a plane, together with a normal perpendicular to the plane, uniquely defines that plane in three-dimensional space;
(5) Conversely, if the threshold value has been exceeded, the plane detection unit 302 is configured to execute a filtering-based or optimization-based visual-inertial odometry (VIO) algorithm according to a gravitational acceleration in the acceleration information to obtain pose information of the depth camera 20, and to continuously correct the plane equation while the inertial sensor 10 moves rapidly, based on a rotation matrix (orientation) and displacement information (translation) of the pose information;
(6) The image coordinate system mentioned above is introduced to describe the perspective projection of the physical object from the camera coordinate system onto the image plane during imaging; it is the coordinate system of the image actually read from the depth camera 20, in units of pixels. The camera coordinate system mentioned above is the coordinate system established with the depth camera 20 as the origin, defined to describe object positions from the viewpoint of the depth camera 20.
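To make the two coordinate systems concrete, the pinhole relations between them can be sketched as follows (f_x, f_y are the focal lengths and (O_x, O_y) the principal point of the internal parameter matrix K; a sketch under that standard assumption, with our own function names):

```python
def camera_to_image(X, Y, Z, fx, fy, ox, oy):
    """Project a camera-coordinate point (X, Y, Z) to pixel coordinates."""
    return fx * X / Z + ox, fy * Y / Z + oy

def image_to_camera(x, y, Z, fx, fy, ox, oy):
    """Invert the projection, given the depth Z measured at pixel (x, y)."""
    return (x - ox) * Z / fx, (y - oy) * Z / fy, Z
```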
With continued reference to fig. 1, in a preferred embodiment of the present invention, the plane detection unit 302 of the computing device 30 may also perform an inner product operation on the depth image coordinates and the depth values of the physical object to continuously generate a three-dimensional coordinate of the physical object in an image coordinate system, and calculate the plane equation according to the three-dimensional coordinate and the internal parameter matrix.
With continued reference to fig. 1, in a preferred embodiment of the present invention, the plane detection unit 302 of the computing device 30 may also perform an iterative optimization algorithm or a Gauss-Newton algorithm on the normal vector to obtain an optimal normal vector and its corresponding distance constant (the d value), and replace the normal vector with the optimal normal vector to calculate a more accurate plane equation.
Please refer to fig. 2 and fig. 3, which are flowcharts (I) and (II) of the plane dynamic detection method according to the present invention, together with fig. 1. The present invention provides a plane dynamic detection method S, which comprises the following steps:
(1) An image acquisition step (step S10): a depth camera 20 continuously acquires a depth image so as to continuously input depth image coordinates and depth values of the depth camera 20 for one or more physical objects within a viewing range;
(2) An inertial data detection step (step S20): an inertial sensor 10 continuously acquires inertial data such as acceleration information and angular velocity information;
(3) A motion state judging step (step S30): a computing device 30 continuously judges whether the acceleration information and the angular velocity information obtained by the inertial sensor 10 exceed a threshold value, so as to judge the motion state of the inertial sensor 10 itself or of the device carrying it;
(4) A first plane equation updating step (step S40): if the threshold value is not exceeded, the computing device 30 calculates a normal vector and a distance constant (in the image coordinate system) according to the acceleration information, the depth image coordinates, the depth values and an internal parameter matrix, and initializes or continuously updates a plane equation of the physical object in a camera coordinate system while the inertial sensor 10 is in a stable state, according to the normal vector and the distance constant;
(5) A second plane equation updating step (step S50): if in step S30 the threshold value has been exceeded, the computing device 30 executes a visual-inertial odometry algorithm according to a gravitational acceleration of the acceleration information to obtain pose information of the depth camera 20, and continuously corrects the plane equation while the inertial sensor 10 moves rapidly, based on a rotation matrix and displacement information of the pose information.
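Steps S10 to S50 amount to a control loop of the following shape (a minimal sketch only: the sensor interfaces, thresholds, and the helpers update_plane_from_gravity and vio_relative_pose are hypothetical placeholders, while transform_plane is sketched further below):

```python
import numpy as np

def plane_detection_loop(imu, depth_camera, K, acc_thresh, gyro_thresh):
    """Dispatch between the two plane-update paths based on IMU motion state."""
    plane = None  # (normal n, distance d) in the camera coordinate system
    while True:
        depth_image = depth_camera.read()                      # step S10
        acc, gyro = imu.read()                                 # step S20
        stable = (abs(np.linalg.norm(acc) - 9.8) < acc_thresh  # step S30
                  and np.linalg.norm(gyro) < gyro_thresh)
        if stable:                                             # step S40
            plane = update_plane_from_gravity(acc, depth_image, K)
        elif plane is not None:                                # step S50
            R, t = vio_relative_pose(depth_image, acc, gyro)
            plane = transform_plane(plane, R, t)
        yield plane
```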
In addition, referring to fig. 2 to 3 together with fig. 1: if the plane type to be detected is the ground, and the inertial data of the inertial sensor 10 do not exceed the threshold value, i.e., the inertial sensor 10 itself or its carrying device is in a stable state (e.g., stationary), the inertial sensor 10 reads only the static acceleration value g (the direction of gravity), and the direction opposite to g is the normal vector n of the plane equation of the physical object in camera coordinates. The following relations apply:
(1) Static acceleration value of the inertial sensor 10: ||g|| = 9.8 m/s² (or approximately 10 m/s²);
(2) Normal vector of the plane equation in camera coordinates: n = -g/||g|| = (n_1, n_2, n_3);
(3) Accordingly, the normal vector n' of the physical object (the ground) in the depth image under image coordinates can be expressed, up to scale and consistently with the camera/image coordinate conversion derived below, as:

n' = K^{-T}·n = (n_1/f_x, n_2/f_y, n_3 - (O_x/f_x)·n_1 - (O_y/f_y)·n_2)
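Under the assumption that the IMU axes are aligned with the camera axes (the patent treats the stationary reading directly as the gravity direction), the two normals above can be computed as in this sketch:

```python
import numpy as np

def ground_normal_from_gravity(acc, K):
    """Stable state: the accelerometer reads only gravity g, so the camera-
    coordinate ground normal is n = -g/||g||; its image-coordinate counterpart
    follows from the camera/image conversion as n' = K^{-T} n (up to scale)."""
    n = -acc / np.linalg.norm(acc)      # assumes IMU and camera axes aligned
    n_img = np.linalg.inv(K).T @ n
    return n, n_img
```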
in addition, referring to fig. 2 to 3, and referring to fig. 1, if the type of plane to be detected is taken as the ground, in step S50, since the normal vector of the plane equation cannot be estimated by the reading of the accelerometer 101 when the inertial sensor 10 is in a severe or rapid motion, the above-mentioned step S50 is performed,the plane equation of the physical object (ground) may be updated, for example, using a filtering-based or optimization-based VIO algorithm, assuming that the relative pose (Relative Pose Motion) of the VIO predicted depth camera 20 is
Figure BDA0002254252910000072
And assuming that the plane equation before update is n t-1 ·p=d t-1 The following plane equations are updated according to the following relations, but the following are only examples and not limiting:
n t ·p=d t
Figure BDA0002254252910000073
Figure BDA0002254252910000074
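Under the stated convention (the relative pose maps frame-t coordinates into frame t-1), the update is a few lines (a minimal NumPy sketch):

```python
def transform_plane(plane, R, tau):
    """Carry the plane n_{t-1} . p = d_{t-1} into camera frame t, given the
    VIO-estimated relative pose p_{t-1} = R @ p_t + tau."""
    n_prev, d_prev = plane
    return R.T @ n_prev, d_prev - n_prev @ tau
```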
in addition, referring to fig. 2 to 3 and fig. 1, in a preferred embodiment, if the system targets the type of plane to be detected, since the motion state determining unit 301 determines that the inertial sensor 10 is in a stable state when executing the step S40, the inertial sensor 10 itself or the device mounted thereon may not be completely stationary, and there is a situation that the physical object (the ground itself) is inclined, the computing device 30 may further perform an iterative optimization algorithm or a gaussian newton algorithm (e.g. gauss newton least square) on the normal vector to obtain an optimal normal vector n 'when executing the step S40' opt And its corresponding distance constant (hereinafter referred to as d 'value) and expressed as an optimal normal vector n' opt The plane equation is calculated in place of the normal vector n ', more specifically, the plane detection unit 302 of the arithmetic device 30 calculates the optimal normal vector n' opt The following formulas may be referred to, but are not limited thereto:
(1) First, the depth value in the depth image exceeds a certain value d' th Is eliminated by the pixels of (2) and thenThe aforementioned normal vector n '(herein referred to temporarily as normal vector n' 0 Corresponding to the image coordinate system) and the n depth image coordinates after the exclusion, the corresponding d' value is calculated, as shown in the following relation:
n′ 0 ·p 1 =d′ 1
n′ 0 ·p 2 =d′ 2
n′ 0 ·p n =d′ n
(2) Then, assuming that d ' value of the physical object (ground) is n ' as normal vector in all depth images ' 0 Since the ground should be the plane furthest from the depth camera 20, the corresponding d' value from the plane furthest from the depth camera 20 is calculated according to the following relationship:
d′ g (n′ 0 )=min(d 1 ,d 2 ,...,d n )
(3) Thereafter, the plane detection unit 302 further performs an iterative optimization algorithm or a gaussian newton algorithm on the normal vector to obtain an optimal normal vector n 'with minimum Error Function (also called as evaluation Function)' opt An error function E (n') and a threshold delta are defined before g The following is shown:
E(n′):S 2 →R
Figure BDA0002254252910000081
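The refinement can be sketched as follows; this is a crude numerical descent on the unit sphere standing in for the Gauss-Newton step, using the truncated error function reconstructed above (step size and iteration count are illustrative):

```python
import numpy as np

def refine_ground_normal(n0, pixels, delta_g, iters=10, eps=1e-4):
    """Minimize E(n') = sum_i min((n'.p_i - d'_g(n'))^2, delta_g^2) over unit n'.
    pixels: (N, 3) image-coordinate 3D points p_i = (x_i Z_i, y_i Z_i, Z_i)."""
    def energy(n):
        n = n / np.linalg.norm(n)
        d = pixels @ n
        r = d - d.min()                 # d'_g(n') = min over all pixels
        return np.sum(np.minimum(r * r, delta_g * delta_g))

    n = n0 / np.linalg.norm(n0)
    for _ in range(iters):
        grad = np.array([(energy(n + eps * e) - energy(n - eps * e)) / (2 * eps)
                         for e in np.eye(3)])
        n = n - 0.1 * grad              # fixed-step descent ...
        n = n / np.linalg.norm(n)       # ... re-projected onto the sphere
    return n, (pixels @ n).min()        # n'_opt and its d' value
```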
with continued reference to fig. 2 to 3, and with reference to fig. 1, if the plane type to be detected is taken as the ground, the calculation formula of the normal vector calculated by the plane detection unit 302 of the computing device 30 may be referred to as follows, but not limited thereto, and it is specifically stated that:
A. n pixels belonging to the ground part in the depth image are assumed;
Figure BDA0002254252910000091
three-dimensional camera coordinates of the i-th point
B. Assume that a pixel point coordinate in the depth image is
Figure BDA0002254252910000092
Depth value Z i Then:
Figure BDA0002254252910000093
depth image coordinates of the i-th point
for i∈[1…N]
C. The Z values of the three-dimensional coordinates of the ith point in the two different coordinate systems are the same, and the conversion relationship between the two three-dimensional coordinates in the camera coordinate system and the image coordinate system is as follows:
Figure BDA0002254252910000094
D. therefore, the camera coordinate system is related to the three-dimensional image coordinate of the image coordinate system, and the x and y values of the depth image coordinate of the ith point in the image coordinate system can be obtained by developing the above formula by correlating the internal parameter matrix K of the depth camera 20:
Figure BDA0002254252910000095
Figure BDA0002254252910000096
E. By the definition of a plane equation, and assuming the i-th point lies on the plane of the physical object, P_i in the camera coordinate system satisfies the plane equation

n_1·X + n_2·Y + n_3·Z = n·(X, Y, Z) = d;

F. here n = (n_1, n_2, n_3) is the normal vector in the camera coordinate system, and d is the distance of the plane from the origin of the camera coordinate system;
G. Likewise, by the definition of a plane equation, and assuming the i-th point lies on the plane of the physical object, p_i in the image coordinate system satisfies the plane equation

n'_1·x + n'_2·y + n'_3·z = d';

H. here n' = (n'_1, n'_2, n'_3) is the normal vector in the image coordinate system, and d' is the distance of the plane from the origin of the image coordinate system;
I. Next, to calculate the normal vector of the physical object (the plane) in the camera coordinate system, assume that the points p_1 and p_2 both lie on the plane, so both satisfy the plane equation of item G; substituting them gives:

n'_1·x_1·Z_1 + n'_2·y_1·Z_1 + n'_3·Z_1 = d'
n'_1·x_2·Z_2 + n'_2·y_2·Z_2 + n'_3·Z_2 = d'

J. Subtracting the two plane equations yields:

n'·(p_1 - p_2) = 0

K. Then, substituting the x and y relations of item D into the equation of item J yields:

n'_1·f_x·(X_1 - X_2) + n'_2·f_y·(Y_1 - Y_2) + (n'_3 + n'_1·O_x + n'_2·O_y)·(Z_1 - Z_2) = 0

L. Since (X_1 - X_2, Y_1 - Y_2, Z_1 - Z_2) is an arbitrary vector lying in the plane, the normal vector of the plane equation of the physical object in the camera coordinate system is:

n = (n'_1·f_x, n'_2·f_y, n'_3 + n'_1·O_x + n'_2·O_y) / ||(n'_1·f_x, n'_2·f_y, n'_3 + n'_1·O_x + n'_2·O_y)||
in addition, please refer to fig. 2 to 3, and refer to fig. 1, and after the computing device 30 calculates the normal vector n of the plane equation of the physical object corresponding to the camera coordinate system, the following algorithm for calculating the d value may be referred to, but not limited to, the following algorithm is specifically stated:
m, first, let a constant c= II n' 1 f x ,n′ 2 f y ,n′ 3 +n′ 1 O x +n′ 2 O y
N, pixel point p in image coordinate system 1 Substituting the plane equation for the G point can yield:
Figure BDA0002254252910000105
o, substituting the aforementioned D point 'x, y values of the depth image coordinates of the ith point in the image coordinate system' into the plane equation of the aforementioned N point can be obtained:
n′ 1 (f x X 1 +O x Z 1 )+n′ 2 (f y Y 1 +O y Z 1 )+n′ 3 Z 1 =d′
→n′ 1 f x X 1 +n′ 2 f y Y 1 +(n′ 3 +n′ 1 O x +n′ 2 O y )Z 1 =d′
p, the equation for the plane equation for the O-th point above is divided by c:
Figure BDA0002254252910000111
q, here, the d value of the plane equation of the physical object in the camera coordinates can be obtained as:
Figure BDA0002254252910000112
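Items M through Q reduce to a direct conversion from the image-coordinate plane (n', d') to the camera-coordinate plane (n, d); a transcription of the formulas above into NumPy:

```python
import numpy as np

def image_plane_to_camera_plane(n_img, d_img, fx, fy, ox, oy):
    """n = (n'_1 fx, n'_2 fy, n'_3 + n'_1 ox + n'_2 oy) / c,  d = d' / c."""
    n1, n2, n3 = n_img
    v = np.array([n1 * fx, n2 * fy, n3 + n1 * ox + n2 * oy])
    c = np.linalg.norm(v)
    return v / c, d_img / c
```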
in addition, please refer to fig. 2 to 3, and fig. 1, in a preferred embodiment, a step of obtaining three-dimensional coordinates (step S25) is performed before the step S30: the computing device 30 performs an inner product operation on the depth image coordinates and the depth values of the physical object to continuously generate a three-dimensional coordinate of the physical object in an image coordinate system, so that the normal vector and the distance constant can be calculated according to the three-dimensional coordinate, the internal parameter matrix and the acceleration information when the step S40 or the step S50 is performed, and further a plane equation of the physical object can be calculated, and more specifically, the calculation formula for generating the three-dimensional coordinate can be referred to as follows:
Figure BDA0002254252910000113
wherein (1)>
Figure BDA0002254252910000114
Z is depth value for three-dimensional coordinates in image coordinate system, < >>
Figure BDA0002254252910000115
The depth image coordinates (in the image coordinate system) are obtained by multiplying each pixel obtained by the depth camera 20 with a camera projection inverse matrix (K) and a depth value in sequence compared with the conventional known Point Cloud Library (PCL) to convert the pixel, the depth value and the camera projection inverse matrix into a plurality of three-dimensional coordinates in the point cloud coordinate system.
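The savings are visible in a vectorized sketch of step S25, which forms all image-coordinate 3D points with element-wise products only (NumPy assumed; the function name is ours):

```python
import numpy as np

def depth_to_image_coords(depth):
    """Step S25: per-pixel image-coordinate 3D points (x*Z, y*Z, Z),
    with no per-pixel matrix multiplication."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    return np.stack([xs * depth, ys * depth, depth], axis=-1).reshape(-1, 3)
```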
Referring to fig. 4, which is a system architecture diagram of another preferred embodiment of the present invention: the main difference between this embodiment and the techniques disclosed in fig. 1 to 3 is that the plane dynamic detection system 1 of this embodiment further includes a color camera 40 (e.g., an RGB camera) coupled to the depth camera 20 and the computing device 30, respectively, for continuously acquiring a color image of the physical object, so that when executing step S10 (the image acquisition step) the computing device 30 can establish a correspondence between the depth image coordinates and the color image coordinates of the physical object to improve the accuracy of plane detection. The color camera 40 and the depth camera 20 of this embodiment may together form an RGB-D camera, as shown in the figure, and the depth camera 20 of this embodiment may be a binocular camera, but is not limited thereto.
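One common way to establish such a depth-to-color correspondence (sketched here under the assumption of a known depth-to-color extrinsic pose (R, t) and a color intrinsic matrix K_c, neither of which the patent prescribes) is to back-project the depth pixel, transform it into the color camera frame, and reproject:

```python
import numpy as np

def depth_pixel_to_color_pixel(x, y, Z, K_d_inv, K_c, R, t):
    """Map a depth pixel (x, y) with depth Z to its color-image coordinate."""
    p_depth = Z * (K_d_inv @ np.array([x, y, 1.0]))  # 3D point in depth frame
    p_color = R @ p_depth + t                        # into the color frame
    u = K_c @ p_color
    return u[0] / u[2], u[1] / u[2]                  # perspective divide
```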
In summary, the present invention solves the problem that detecting planes in three-dimensional space by making strong assumptions about different plane types can misjudge the plane, and it remedies the poor computational efficiency of conventionally known plane detection methods, thereby achieving the beneficial effects of more accurate plane detection and greater savings in computing resources.
The foregoing description is only of the preferred embodiments of the present invention and is not intended to limit the scope of the invention; it is contemplated that equivalent changes and modifications will occur to those skilled in the art without departing from the spirit and scope of the invention.
[Symbol description]
1. Plane dynamic detection system
10. Inertial sensor
101. Accelerometer
102. Gyroscope
20. Depth camera
30. Computing device
301. Motion state judging unit
302. Plane detection unit
40. Color camera
S. Plane dynamic detection method
S10. Image acquisition step
S20. Inertial data detection step
S25. Three-dimensional coordinate acquisition step
S30. Motion state judging step
S40. First plane equation updating step
S50. Second plane equation updating step

Claims (8)

1. A planar dynamic detection system, comprising:
an inertial sensor comprising an accelerometer and a gyroscope;
a depth camera for continuously acquiring a depth image to continuously input a depth image coordinate and a depth value of the depth camera for one or more physical objects in a viewing range;
a computing device coupled to the inertial sensor and the depth camera, respectively, and having a motion state judging unit and a plane detection unit, the motion state judging unit being communicatively connected to the plane detection unit, wherein the motion state judging unit is configured to continuously judge whether acceleration information and angular velocity information acquired by the inertial sensor exceed a threshold value;
if the threshold value is not exceeded, the plane detection unit is configured to calculate a normal vector and a distance constant according to the acceleration information, the depth image coordinates, the depth values and an internal parameter matrix, and initialize or continuously update a plane equation of the physical object in a camera coordinate system when the inertial sensor is in a stable state according to the normal vector and the distance constant; and
if the threshold value is exceeded, the plane detection unit is configured to execute a visual-inertial odometry algorithm according to a gravitational acceleration of the acceleration information to obtain pose information of the depth camera, and to continuously correct the plane equation while the inertial sensor moves rapidly, based on a rotation matrix and displacement information of the pose information.
2. The planar dynamic detection system as claimed in claim 1, wherein said planar detection unit is also configured to perform an inner product operation on said depth image coordinates and said depth values to continuously generate a three-dimensional coordinate of said physical object in an image coordinate system, and calculate the planar equation based on said three-dimensional coordinate, said internal parameter matrix and said acceleration information.
3. The planar dynamic detection system as claimed in claim 1 or 2, wherein said computing device is also configured to perform an iterative optimization algorithm or a Gauss-Newton algorithm on said normal vector to obtain an optimal normal vector, and to replace said normal vector with said optimal normal vector to calculate said plane equation.
4. The planar dynamic detection system as claimed in claim 1 or 2, further comprising a color camera coupled to the depth camera and the computing device, respectively, for continuously acquiring a color image of the physical object, so that the computing device establishes a correspondence between the depth image coordinates and color image coordinates of the physical object.
5. A method for dynamic planar detection comprising:
an image acquisition step: a depth camera continuously acquires a depth image so as to continuously input depth image coordinates and depth values of the depth camera for one or more physical objects within a viewing range;
an inertial data detection step: an inertial sensor continuously acquires acceleration information and angular velocity information;
a motion state judging step: a computing device continuously judges whether the acceleration information and the angular velocity information acquired by the inertial sensor exceed a threshold value, so as to judge the motion state of the inertial sensor;
a first plane equation updating step: if the threshold value is not exceeded, the computing device calculates a normal vector and a distance constant according to the acceleration information, the depth image coordinates, the depth values and an internal parameter matrix, and initializes or continuously updates a plane equation of the physical object in a camera coordinate system while the inertial sensor is in a stable state, according to the normal vector and the distance constant; and
a second plane equation updating step: if the threshold value is exceeded, the computing device executes a visual-inertial odometry algorithm according to a gravitational acceleration of the acceleration information to obtain pose information of the depth camera, and continuously corrects the plane equation while the inertial sensor moves rapidly, based on a rotation matrix and displacement information of the pose information.
6. The method of claim 5, further comprising a three-dimensional coordinate acquisition step before the motion state judging step: the computing device performs an inner product operation on the depth image coordinates and the depth values of the physical object to continuously generate three-dimensional coordinates of the physical object in an image coordinate system, so as to calculate the normal vector and the distance constant according to the three-dimensional coordinates, the internal parameter matrix and the acceleration information when the first plane equation updating step or the second plane equation updating step is performed.
7. The method according to claim 5 or 6, wherein, when the first plane equation updating step is performed, the computing device also performs an iterative optimization algorithm or a Gauss-Newton algorithm on the normal vector to obtain an optimal normal vector, and replaces the normal vector with the optimal normal vector to calculate the plane equation.
8. The method of claim 5 or 6, wherein the image acquisition step further comprises a color camera continuously acquiring a color image of the physical object, so as to continuously input color image coordinates of the color camera for the physical object.
CN201911046410.9A 2019-10-30 2019-10-30 Plane dynamic detection system and detection method Active CN112750205B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911046410.9A CN112750205B (en) 2019-10-30 2019-10-30 Plane dynamic detection system and detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911046410.9A CN112750205B (en) 2019-10-30 2019-10-30 Plane dynamic detection system and detection method

Publications (2)

Publication Number Publication Date
CN112750205A CN112750205A (en) 2021-05-04
CN112750205B true CN112750205B (en) 2023-05-16

Family

ID=75640751

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911046410.9A Active CN112750205B (en) 2019-10-30 2019-10-30 Plane dynamic detection system and detection method

Country Status (1)

Country Link
CN (1) CN112750205B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103389042A (en) * 2013-07-11 2013-11-13 夏东 Ground automatic detecting and scene height calculating method based on depth image
CN104361575A (en) * 2014-10-20 2015-02-18 湖南戍融智能科技有限公司 Automatic ground testing and relative camera pose estimation method in depth image
CN108885791A (en) * 2018-07-06 2018-11-23 深圳前海达闼云端智能科技有限公司 ground detection method, related device and computer readable storage medium
CN109785444A (en) * 2019-01-07 2019-05-21 深圳增强现实技术有限公司 Recognition methods, device and the mobile terminal of real plane in image

Also Published As

Publication number Publication date
CN112750205A (en) 2021-05-04


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant