CN112750205A - Plane dynamic detection system and detection method - Google Patents

Plane dynamic detection system and detection method

Info

Publication number
CN112750205A
Authority
CN
China
Prior art keywords
depth
continuously
plane
camera
normal vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911046410.9A
Other languages
Chinese (zh)
Inventor
萧淳泽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Shenshi Optical Point Technology Co ltd
Original Assignee
Nanjing Shenshi Optical Point Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Shenshi Optical Point Technology Co ltd filed Critical Nanjing Shenshi Optical Point Technology Co ltd
Priority to CN201911046410.9A priority Critical patent/CN112750205A/en
Publication of CN112750205A publication Critical patent/CN112750205A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/50 Depth or shape recovery
    • G06T 7/55 Depth or shape recovery from multiple images
    • G06T 7/593 Depth or shape recovery from multiple images from stereo images
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C 21/10 Navigation by using measurements of speed or acceleration
    • G01C 21/12 Navigation by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C 21/16 Navigation by integrating acceleration or speed, i.e. inertial navigation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras

Abstract

The invention provides a plane dynamic detection system and a detection method. An inertial sensor continuously obtains inertial data, and a depth camera continuously obtains depth images of a solid object (such as a plane or the ground) within its viewing range. A computing device continuously judges whether the acceleration and angular velocity obtained by the inertial sensor exceed a configured threshold, so as to determine the motion state of the inertial sensor or of the device carrying it. When the inertial sensor is in a stable state, the computing device initializes or continuously updates a plane equation of the solid object in the camera coordinate system according to the acceleration, the depth image coordinates, the depth values and an intrinsic parameter matrix; and when the sensor is moving rapidly, the computing device obtains pose information of the depth camera through a visual-inertial odometry (VIO) algorithm and uses it to continuously correct the plane equation.

Description

Plane dynamic detection system and detection method
Technical Field
The present invention relates to computer vision, and more particularly to a plane dynamic detection system and method that can accurately detect a plane and dynamically update its relative position in three-dimensional space by referring to depth images, color images and inertial data.
Background
In order to provide more realistic interaction for applications that require 3D information (e.g., AR/VR services), detecting a real plane is critical. Taking detection of the ground plane as the target, known methods include: (a) assuming that the ground is the largest plane, and using the RANSAC (Random Sample Consensus) algorithm or the Hough Transform algorithm to find the largest plane in three-dimensional space and define it as the ground; (b) assuming that the ground has the maximum Z value on each scan line in the image, and, after correcting the camera attitude (roll rotation), defining the set of pixels in the image that have the maximum Z value and fit a curve C as the ground.
However, in many cases the largest plane assumed by method (a) is not the ground (for example, the largest plane in the image may be a corridor wall), and the RANSAC or Hough Transform algorithm may misjudge; moreover, the RANSAC algorithm requires at least about 50% of the data to be inliers, and the Hough Transform algorithm is time-consuming. Method (b) can likewise fail when the pixel set with the largest Z value that fits curve C is not actually the ground.
Furthermore, no matter which method is used to detect the plane in the image, after a depth sensor (such as a depth camera) acquires a depth image, the conventional Point Cloud Library (PCL) approach requires each pixel acquired by the depth sensor to be sequentially multiplied by the inverse camera projection matrix and its depth value to convert it into a three-dimensional coordinate in a point cloud coordinate system, as in the relation:

(X, Y, Z)ᵀ = Z · K⁻¹ · (u, v, 1)ᵀ

where (X, Y, Z) is a three-dimensional coordinate in the point cloud coordinate system, Z is the depth value, K⁻¹ is the inverse camera projection matrix, K is the intrinsic parameter matrix (the intrinsic parameters are inherent properties of the depth sensor, mainly describing the transformation between camera coordinates and image coordinates), and (u, v) is the image coordinate of each pixel in the depth image (in the image coordinate system). The feature points of these three-dimensional coordinates are then presented in the form of a point cloud, and the plane in the point cloud is detected with method (a) or (b); however, performing this matrix multiplication for every pixel involves a huge amount of computation and has poor computational performance.
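For comparison, the conventional per-pixel back-projection described above can be sketched in a few lines of numpy (a vectorized illustration; the intrinsic values, function and variable names are assumptions for the example, not taken from the patent):

```python
import numpy as np

def depth_to_pointcloud(depth, K):
    """Back-project every pixel: (X, Y, Z)^T = Z * K^-1 * (u, v, 1)^T."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))          # pixel grids
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
    pts = (np.linalg.inv(K) @ pix) * depth.reshape(-1)      # scale rays by depth
    return pts.T                                            # (h*w) x 3 points

# assumed intrinsics: focal lengths 500 px, principal point (320, 240)
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])
depth = np.full((480, 640), 2.0)        # synthetic frontal wall, 2 m away
cloud = depth_to_pointcloud(depth, K)
```

Even vectorized, this produces and stores h·w three-dimensional points before any plane fitting can start, which is the cost the invention aims to avoid.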
In summary, conventional methods for detecting a plane in three-dimensional space must first make a strong assumption for each plane type (e.g., ground, wall), which can cause the plane type to be misjudged, and they also suffer from poor computational performance.
Disclosure of Invention
To achieve the above object, the present invention provides a plane dynamic detection system comprising an inertial sensor, a depth camera and a computing device. The inertial sensor comprises an accelerometer and a gyroscope. The depth camera continuously acquires depth images, so as to continuously input depth image coordinates and depth values for one or more physical objects within its viewing range. The computing device is coupled to the inertial sensor and the depth camera, respectively, and has a motion state judgment unit and a plane detection unit. The motion state judgment unit continuously judges whether the acceleration information and angular velocity information acquired by the inertial sensor exceed a threshold. If the threshold is not exceeded, the plane detection unit calculates a normal vector and a distance constant from the acceleration information, the depth image coordinates, the depth values and an intrinsic parameter matrix, and uses the normal vector and the distance constant to initialize or continuously update a plane equation of the object in a camera coordinate system while the inertial sensor is in a stable state. Conversely, if the threshold is exceeded, the plane detection unit executes a visual-inertial odometry algorithm based on the gravitational acceleration in the acceleration information to obtain pose information of the depth camera, and continuously corrects the plane equation during rapid movement of the inertial sensor based on the rotation matrix and displacement information of that pose. The meaning of the plane equation is that any point on a plane, together with a normal perpendicular to the plane, uniquely defines that plane in three-dimensional space.
To achieve the above object, the present invention also provides a plane dynamic detection method, comprising:
(1) a step of detecting inertial data: an inertial sensor continuously obtains inertial data such as acceleration information and angular velocity information;
(2) a step of judging the motion state: a computing device continuously judges whether the acceleration information and angular velocity information acquired by the inertial sensor exceed a threshold, so as to determine the motion state of the inertial sensor;
(3) a first step of updating the plane equation: if the threshold is not exceeded, the computing device calculates a normal vector and a distance constant from the acceleration information, the depth image coordinates, the depth value and an intrinsic parameter matrix, and initializes or continuously updates a plane equation of the physical object in a camera coordinate system while the inertial sensor is in a stable state with the normal vector and the distance constant; and
(4) a second step of updating the plane equation: if the threshold is exceeded, the computing device executes a visual-inertial odometry algorithm according to the gravitational acceleration of the acceleration information to obtain pose information of the depth camera, and continuously corrects the plane equation during rapid movement of the inertial sensor based on a rotation matrix and displacement information of the pose.
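The four steps above can be sketched as a minimal dispatch loop; the threshold values and function names below are illustrative assumptions, as the patent leaves the concrete values open:

```python
import numpy as np

# illustrative thresholds; the patent does not fix concrete values
ACC_THRESHOLD = 0.5    # tolerated deviation from gravity magnitude, m/s^2
GYRO_THRESHOLD = 0.1   # tolerated angular rate, rad/s

def is_stable(accel, gyro, g=9.8):
    """Step (2): judge the motion state from the inertial readings."""
    return (abs(np.linalg.norm(accel) - g) < ACC_THRESHOLD
            and np.linalg.norm(gyro) < GYRO_THRESHOLD)

def choose_update(accel, gyro):
    """Steps (3)/(4): pick which plane-equation update branch to run."""
    if is_stable(accel, gyro):
        return "update_from_gravity"   # stable: normal from acceleration
    return "correct_with_vio"          # moving: VIO pose-based correction
```

In a real system the two returned labels would dispatch to the gravity-based update of step (3) and the VIO correction of step (4), respectively.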
In order to make the objects, technical features and effects of the invention clear to the examiner, the following description is provided together with the drawings.
Drawings
FIG. 1 is a system architecture diagram of the present invention;
FIG. 2 is a flow chart (a) of the plane dynamic detection method according to the present invention;
FIG. 3 is a flow chart (b) of the plane dynamic detection method according to the present invention;
FIG. 4 is a system architecture diagram of another preferred embodiment of the present invention.
Detailed Description
The present application will be described below with reference to the drawings in the embodiments of the present application.
Referring to fig. 1, which is a system architecture diagram of the present invention, the present invention provides a plane dynamic detection system 1, which mainly includes an inertial sensor 10, a depth camera 20 and a computing device 30, wherein:
(1) the inertial sensor 10, e.g., an Inertial Measurement Unit (IMU), includes an accelerometer (G-sensor) 101 and a gyroscope 102, and continuously obtains acceleration information and angular velocity information;
(2) the depth camera 20 continuously acquires depth images so as to continuously input depth image coordinates and depth values for one or more physical objects within its viewing range. The depth camera 20 may be configured as a depth sensor that measures the depth of the physical object using a Time-of-Flight (TOF) scheme, a structured light scheme, or a binocular (stereo) vision scheme. In the TOF scheme, the depth camera 20 acts as a TOF camera that emits infrared light from a Light Emitting Diode (LED) or Laser Diode (LD); since the speed of light is known, an infrared image sensor measures the time taken for light reflected from different depth positions on the object surface to return, from which the depth of the solid object at each position and its depth image are calculated. In the structured light scheme, the depth camera 20 uses a Laser Diode (LD) or a Digital Light Processing (DLP) source to emit light patterns that are diffracted through a specific grating onto the surface of the solid object, forming a spot pattern; the spot pattern reflected from different depths is distorted, so when the reflected light enters the infrared image sensor, the three-dimensional structure of the solid object and its depth image can be inferred. In the binocular vision scheme, the depth camera 20 is a stereo camera that captures the object with at least two cameras and measures its three-dimensional information (depth image) from the resulting disparity via the triangulation principle;
(3) the computing device 30 is coupled to the inertial sensor 10 and the depth camera 20, and has a motion state judgment unit 301 and a plane detection unit 302 in communication connection with each other. The motion state judgment unit 301 continuously judges whether the acceleration information and the angular velocity information obtained by the inertial sensor 10 exceed a threshold, so as to determine the motion state of the inertial sensor 10 itself or of the device carrying it. It is noted that the computing device 30 may have at least one processor (not shown, such as a CPU or an MCU) for operating the computing device 30, providing logic operations, temporary storage of operation results, and storage of the positions of instructions under execution. In addition, the motion state judgment unit 301 and the plane detection unit 302 may run on a plane dynamic device (not shown, such as a head-mounted display, which may be a VR helmet, an MR helmet, or another head-mounted display), a host, a physical server, or a virtualized server (VM) serving as the computing device 30, but are not limited thereto;
(4) as mentioned above, if the threshold is currently not exceeded, the plane detection unit 302 is configured to calculate a normal vector and a distance constant (d value) according to the acceleration information, the depth image coordinates (pixel domain), the depth values and an intrinsic parameter matrix, and to initialize or continuously update a plane equation of the physical object in a camera coordinate system while the inertial sensor 10 is in a stable state with the normal vector and the distance constant (both in the image coordinate system); the meaning of the plane equation is that any point on a plane, together with a normal perpendicular to the plane, uniquely defines that plane in three-dimensional space;
(5) on the contrary, if the threshold is exceeded, the plane detection unit 302 is configured to execute a filter-based or optimization-based Visual Inertial Odometry (VIO) algorithm according to the gravitational acceleration in the acceleration information to obtain pose information of the depth camera 20, and to continuously correct the plane equation during fast movement of the inertial sensor 10 based on a rotation matrix and displacement information of the pose;
(6) in addition, the aforementioned image coordinate system is introduced to describe the projective relationship of the physical object from the camera coordinate system to the image plane during imaging; it is the coordinate system of the image actually read from the depth camera 20, in units of pixels. The aforementioned camera coordinate system is established with the depth camera 20 as the origin and is defined to describe object positions from the perspective of the depth camera 20.
Referring to fig. 1, in a preferred embodiment of the present invention, the plane detection unit 302 of the computing device 30 can also perform an inner product operation on the depth image coordinates and the depth values of the physical objects to continuously generate three-dimensional coordinates of the physical objects in an image coordinate system, and calculate the plane equation according to the three-dimensional coordinates and the internal parameter matrix.
Referring to fig. 1, in a preferred embodiment of the present invention, the plane detection unit 302 of the computing device 30 may also perform an iterative optimization algorithm or a Gauss-Newton algorithm on the normal vector to obtain an optimal normal vector and its corresponding distance constant (d′ value), and substitute the optimal normal vector for the normal vector to calculate a more accurate plane equation.
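The patent does not fix the exact optimization; as one hedged illustration, a least-squares re-fit of the normal over inlier points (via SVD, a standard surrogate for a Gauss-Newton step on planar residuals; all names below are ours) could look like:

```python
import numpy as np

def refine_normal(points, n0, d0, inlier_tol=0.05):
    """Keep points close to the current plane estimate (n0, d0), then re-fit
    the plane to those inliers in the least-squares sense via SVD."""
    dist = np.abs(points @ n0 - d0)
    inliers = points[dist < inlier_tol]
    centroid = inliers.mean(axis=0)
    _, _, vt = np.linalg.svd(inliers - centroid)
    n = vt[-1]                         # direction of smallest variance
    if n @ n0 < 0:                     # keep orientation of the prior normal
        n = -n
    return n, float(n @ centroid)

# example: exact samples of the plane z = 1
xs, ys = np.meshgrid(np.linspace(-1.0, 1.0, 10), np.linspace(-1.0, 1.0, 10))
pts = np.stack([xs.ravel(), ys.ravel(), np.ones(100)], axis=-1)
n_opt, d_opt = refine_normal(pts, np.array([0.0, 0.0, 1.0]), 1.0)
```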
Referring to fig. 2 to fig. 3, which are flow charts (a) and (b) of the plane dynamic detection method of the present invention, and referring also to fig. 1, the present invention provides a plane dynamic detection method S, which includes the following steps:
(1) image acquisition step (step S10): a depth camera 20 continuously acquires a depth image to continuously input a depth image coordinate and a depth value of the depth camera 20 for one or more physical objects within a viewing range;
(2) inertial data detection step (step S20): an inertial sensor 10 continuously obtains inertial data such as acceleration information and angular velocity information;
(3) motion state determination step (step S30): a computing device 30 continuously determines whether the acceleration information and angular velocity information obtained by the inertial sensor 10 exceed a threshold, so as to determine the motion state of the inertial sensor 10 itself or of the device carrying it;
(4) first update plane equation step (step S40): in step S30, if the threshold is not exceeded, the computing device 30 calculates a normal vector and a distance constant (which corresponds to the image coordinate system) according to the acceleration information, the depth image coordinates, the depth value and an internal parameter matrix, and initializes or continuously updates a plane equation of the physical object in a camera coordinate system when the inertial sensor 10 is in a stable state with the normal vector and the distance constant;
(5) second update plane equation step (step S50): in step S30, if the threshold is exceeded, the computing device 30 executes a visual inertial odometry algorithm according to a gravitational acceleration of the acceleration information to obtain a pose information of the depth camera 20, and continuously corrects the plane equation of the inertial sensor 10 during fast movement based on a rotation matrix and a displacement information of the pose information.
In step S40, take the plane type to be detected as the ground. When the inertial data of the inertial sensor 10 do not exceed the threshold, i.e., the inertial sensor 10 itself or its carrying device is in a stable state (e.g., stationary), the inertial sensor 10 reads only a static acceleration value g, and the direction opposite to g is the normal vector n of the object's plane equation in camera coordinates; the relations are:
(1) static acceleration reading of the stationary inertial sensor 10: ‖g‖ ≈ 9.8 m/s² (or approximately 10 m/s²);
(2) normal vector of the plane equation in camera coordinates: n = −g/‖g‖ = (n1, n2, n3);
(3) in this way, the normal vector n′ of the physical object (ground) in the depth image, i.e., in image coordinates, can be expressed (consistently with the mapping derived in steps I to L below) as: n′ ∝ (n1/fx, n2/fy, n3 − n1·Ox/fx − n2·Oy/fy).
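The gravity-to-normal relation above is a one-liner; a small sketch keeping the patent's sign convention that the normal points opposite the measured acceleration (the function name is ours):

```python
import numpy as np

def normal_from_gravity(accel):
    """Normal vector of the ground-plane equation in camera coordinates,
    per the patent's convention n = -g / |g| for a static sensor."""
    a = np.asarray(accel, dtype=float)
    return -a / np.linalg.norm(a)

n = normal_from_gravity([0.0, 0.0, 9.8])   # static reading along +Z
```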
In summary, with continuing reference to fig. 2 to fig. 3 and with added reference to fig. 1, when step S50 is executed and the plane type to be detected is, for example, the ground, the normal vector of the plane equation cannot be estimated from the readings of the accelerometer 101 while the inertial sensor 10 is in severe or rapid motion. Step S50 therefore updates the plane equation of the physical object (ground) using, for example, a filter-based or optimization-based VIO algorithm. Suppose the relative pose of the depth camera 20 estimated by VIO consists of a rotation matrix R and a displacement t (mapping current-frame coordinates into the previous frame), and the plane equation before updating is nt−1·p = dt−1; the plane equation is then updated according to, for example (without limitation), the relation:
nt·p = dt, with nt = Rᵀ·nt−1 and dt = dt−1 − nt−1·t.
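A sketch of such a pose-based correction; the sign convention is an assumption (here (R, t) maps current-frame coordinates into the previous frame, p_prev = R·p_cur + t), and the names are illustrative:

```python
import numpy as np

def transform_plane(n, d, R, t):
    """Transfer the plane n.p = d from the previous camera frame to the
    current one, given a relative pose (R, t) with p_prev = R @ p_cur + t:
    n.(R p + t) = d  =>  (R^T n).p = d - n.t"""
    return R.T @ n, d - n @ t

# example: translating 1 unit toward a frontal plane 2 units away
n1, d1 = transform_plane(np.array([0.0, 0.0, 1.0]), 2.0,
                         np.eye(3), np.array([0.0, 0.0, 1.0]))
```

After the motion the plane should appear 1 unit away with an unchanged normal, which is what the transform returns.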
In addition, with continuing reference to fig. 2 to fig. 3 and with further reference to fig. 1, in a preferred embodiment of the present invention, if the plane type to be detected is the ground, then even when the motion state judgment unit 301 determines in step S40 that the inertial sensor 10 is in a stable state, the inertial sensor 10 itself or its carrying device may not be completely stationary, and the physical object (the ground itself) may be inclined. The computing device 30 may therefore further apply an iterative optimization algorithm or a Gauss-Newton algorithm (e.g., Gauss-Newton least squares) to the normal vector to obtain an optimal normal vector n′opt and its corresponding distance constant (hereinafter the d′ value), and calculate the plane equation with the optimal normal vector n′opt in place of the normal vector n′. More specifically, the plane detection unit 302 of the computing device 30 may calculate the optimal normal vector n′opt with reference to the following formulas, which are only examples and not limitations:
(1) First, pixels whose depth value in the depth image exceeds a predetermined value d′th are excluded; then, with the aforementioned normal vector n′ (temporarily denoted n′0, corresponding to the image coordinate system), a corresponding d′ value is calculated for each of the n remaining depth image coordinates, as in the relations:
n′0·p1 = d′1
n′0·p2 = d′2
…
n′0·pn = d′n
(2) Next, the d′ value of the physical object (ground) is assumed to be the smallest among all planes in the depth image whose normal vector is n′0, because the ground should be the plane farthest from the depth camera 20; the d′ value corresponding to the plane farthest from the depth camera 20 is calculated according to the relation:
d′g(n′0) = min(d′1, d′2, …, d′n)
(3) Thereafter, the plane detection unit 302 further performs an iterative optimization algorithm or a Gauss-Newton algorithm on the normal vector to obtain the optimal normal vector n′opt that minimizes an error function (also called an evaluation function); for this purpose an error function E(n′) and a threshold δg are first defined, with:
E(n′): S² → ℝ
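Relations (1) and (2) amount to one dot product per remaining pixel followed by a minimum; a sketch in which the depth cutoff value and the function name are illustrative assumptions:

```python
import numpy as np

def ground_d_value(points_img, n0, depth_cutoff=5.0):
    """Exclude points whose depth exceeds the preset cutoff d'_th, evaluate
    d'_i = n'_0 . p_i for the rest, and return the minimum, which is
    assumed to belong to the ground (the farthest such plane)."""
    keep = points_img[:, 2] <= depth_cutoff
    return float((points_img[keep] @ n0).min())

pts = np.array([[0.0, 2.0, 1.0],
                [0.0, 5.0, 2.0],
                [0.0, 9.0, 8.0]])          # last point exceeds the cutoff
d_g = ground_d_value(pts, np.array([0.0, 1.0, 0.0]))
```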
With reference to fig. 2 to fig. 3 and fig. 1, if the plane type to be detected is the ground, the plane detection unit 302 of the computing device 30 calculates the above-mentioned normal vector with reference to the following formulas, which are stated as examples and not limitations:
A. Assume there are N pixels in the depth image belonging to the ground;
three-dimensional camera coordinate of the ith point: Pi = (Xi, Yi, Zi), for i ∈ [1…N].
B. Assume a pixel coordinate in the depth image is (ui, vi) with depth value Zi; then:
depth image coordinate of the ith point: pi = (xi, yi, zi) = (ui·Zi, vi·Zi, Zi), for i ∈ [1…N].
C. The Z value of the ith point is the same in the three-dimensional coordinates of the two coordinate systems, and the conversion between the two three-dimensional coordinates in the camera coordinate system and the image coordinate system is:
pi = K·Pi
D. The camera coordinate system and the three-dimensional image coordinates are thus related through the intrinsic parameter matrix K of the depth camera 20 (with focal lengths fx, fy and principal point (Ox, Oy)); expanding the above formula gives the x and y values of the depth image coordinate of the ith point in the image coordinate system:
xi = fx·Xi + Ox·Zi, yi = fy·Yi + Oy·Zi
E. By the definition of the plane equation, assuming the physical object lies on the plane containing the above ith point, the plane equation at Pi in the camera coordinate system is:
n1X + n2Y + n3Z = n·(X, Y, Z) = d
F. Here the normal vector in the camera coordinate system is n = (n1, n2, n3), and d is the distance from the origin of the camera coordinate system.
G. Likewise, by the definition of the plane equation, assuming the physical object lies on the plane containing the above ith point, the plane equation at pi in the image coordinate system is:
n′1x + n′2y + n′3z = d′
H. In the image coordinate system, the normal vector is n′ = (n′1, n′2, n′3), and d′ is the distance from the origin of the image coordinate system.
I. Next, the normal vector of the physical object (plane) in the camera coordinate system is calculated. Assume two points p1, p2 both lie on the plane, so each satisfies the plane equation of point G; substituting them gives:
n′1x1 + n′2y1 + n′3z1 = d′
n′1x2 + n′2y2 + n′3z2 = d′
J. Subtracting the two plane equations gives:
n′1(x1 − x2) + n′2(y1 − y2) + n′3(z1 − z2) = 0
K. Then, substituting the x and y values of point D into the equation of point J gives:
n′1fx(X1 − X2) + n′2fy(Y1 − Y2) + (n′3 + n′1Ox + n′2Oy)(Z1 − Z2) = 0
L. Therefore, the normal vector of the plane equation of the physical object in the camera coordinate system is proportional to:
n ∝ (n′1fx, n′2fy, n′3 + n′1Ox + n′2Oy)
With reference to fig. 2 to fig. 3 and fig. 1, after the computing device 30 computes the normal vector n of the plane equation in the camera coordinate system, the d value is computed with reference to the following formulas, stated as examples and not limitations:
M. First, let a constant c = ‖(n′1fx, n′2fy, n′3 + n′1Ox + n′2Oy)‖, so that n = (n′1fx, n′2fy, n′3 + n′1Ox + n′2Oy)/c is a unit vector.
N. Substituting the pixel point p1 in the image coordinate system into the plane equation of point G gives:
n′1x1 + n′2y1 + n′3z1 = d′
O. Substituting point D (the x and y values of the depth image coordinate in the image coordinate system) into the plane equation of point N gives:
n′1(fxX1 + OxZ1) + n′2(fyY1 + OyZ1) + n′3Z1 = d′
→ n′1fxX1 + n′2fyY1 + (n′3 + n′1Ox + n′2Oy)Z1 = d′
P. Dividing both sides of the equation of point O by c gives:
n·(X1, Y1, Z1) = d′/c
Q. The d value of the plane equation of the physical object in camera coordinates is thus obtained as:
d = d′/c
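Steps I through Q can be checked numerically; the sketch below assumes, as in step M, that c is the Euclidean norm of the mapped normal so that n comes out as a unit vector (function name and intrinsic values are illustrative):

```python
import numpy as np

def plane_image_to_camera(n_img, d_img, fx, fy, ox, oy):
    """Convert the plane n'.(x, y, z) = d' in image coordinates (where
    x = fx*X + ox*Z, y = fy*Y + oy*Z, z = Z) to camera coordinates:
    n = (n'1*fx, n'2*fy, n'3 + n'1*ox + n'2*oy) / c and d = d'/c."""
    n1, n2, n3 = n_img
    v = np.array([n1 * fx, n2 * fy, n3 + n1 * ox + n2 * oy])
    c = np.linalg.norm(v)              # normalizing constant from step M
    return v / c, d_img / c

# example: the camera-frame plane X = 1 seen with fx = 500, ox = 320
# appears in image coordinates as x - 320*z = 500
n_cam, d_cam = plane_image_to_camera(np.array([1.0, 0.0, -320.0]), 500.0,
                                     fx=500.0, fy=500.0, ox=320.0, oy=240.0)
```

For this example the conversion recovers the camera-frame plane X = 1, i.e. a unit normal (1, 0, 0) at distance 1.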
In addition, with reference to fig. 2 to fig. 3 and fig. 1, in a preferred embodiment of the present invention, before step S30 is executed, a step of obtaining three-dimensional coordinates (step S25) is executed: the computing device 30 performs an inner product (element-wise) operation on the depth image coordinates and the depth values of the physical object to continuously generate three-dimensional coordinates of the physical object in the image coordinate system. Thus, when step S40 or step S50 is performed, the normal vector and the distance constant are computed from these three-dimensional coordinates, the intrinsic parameter matrix and the acceleration information to further compute the plane equation of the physical object. More specifically, the equation for generating the three-dimensional coordinate may be: p = (x, y, z) = Z·(u, v, 1), where p is the three-dimensional coordinate in the image coordinate system, Z is the depth value, and (u, v) is the pixel coordinate in the depth image. Compared with the conventional Point Cloud Library (PCL) approach, this embodiment eliminates the step of matrix-multiplying each pixel and its depth value by the inverse camera projection matrix and detects the physical object (plane) directly in these three-dimensional coordinates, thereby saving computation and the time needed to convert the depth image into a point cloud.
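A minimal sketch of this shortcut (function name is ours): each pixel's image-coordinate point is just the pixel index scaled by its depth, with no K⁻¹ involved:

```python
import numpy as np

def depth_to_image_coords(depth):
    """Generate per-pixel three-dimensional image coordinates
    p = (u*Z, v*Z, Z) directly from the depth map."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    return np.stack([u * depth, v * depth, depth], axis=-1).reshape(-1, 3)

coords = depth_to_image_coords(np.ones((2, 2)))   # tiny 2 x 2 depth map
```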
Referring to fig. 4, a system architecture diagram of another preferred embodiment of the present invention, this embodiment is similar to the technology disclosed in fig. 1 to fig. 3; the main difference is that the plane dynamic detection system 1 of this embodiment further includes a color camera 40 (e.g., an RGB camera) coupled to the depth camera 20 and the computing device 30, respectively, for continuously acquiring a color image of the physical object. Thus, when the computing device 30 executes step S10 (the image acquisition step), the correspondence between the depth image coordinates and the color image coordinates of the physical object can be determined to improve the accuracy of plane detection. The color camera 40 of this embodiment may also form an RGB-D camera together with the depth camera 20; that is, as shown in the figure, the depth camera 20 of this embodiment may be a binocular camera, but is not limited thereto.
In summary, the present invention solves the problem that plane misjudgment may occur due to the strong assumptions made for different plane types when detecting planes in three-dimensional space, and remedies the poor computational performance of conventional plane detection methods, thereby detecting planes more accurately while consuming fewer computational resources.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention; all equivalent changes and modifications that can be made by one skilled in the art without departing from the spirit and scope of the present invention should be covered by the appended claims.
[Description of reference numerals]
1 plane dynamic detection system
10 inertial sensor
101 accelerometer
102 Gyroscope
20 depth camera
30 arithmetic unit 301 motion state judging unit
302 plane detection unit
40 color camera
S plane dynamic detection method
S10 image acquisition step
S20 inertial data detection step
S25 three-dimensional coordinate acquisition step
S30 motion state judging step
S40 first plane equation updating step
S50 second plane equation updating step

Claims (8)

1. A plane dynamic detection system, comprising:
an inertial sensor comprising an accelerometer and a gyroscope;
a depth camera for continuously acquiring a depth image, so as to continuously input a depth image coordinate and a depth value of one or more physical objects within a viewing range of the depth camera;
a computing device, respectively coupled to the inertial sensor and the depth camera, and provided with a motion state judging unit and a plane detection unit, wherein the motion state judging unit is communicatively connected to the plane detection unit and is configured to continuously judge whether acceleration information and angular velocity information acquired by the inertial sensor exceed a threshold;
wherein, if the threshold is not exceeded, the plane detection unit is configured to calculate a normal vector and a distance constant according to the acceleration information, the depth image coordinates, the depth value and an internal parameter matrix, and to initialize or continuously update, according to the normal vector and the distance constant, a plane equation of the physical object in a camera coordinate system while the inertial sensor is in a stable state; and
wherein, if the threshold is exceeded, the plane detection unit is configured to execute a visual inertial odometry algorithm according to a gravitational acceleration of the acceleration information to obtain pose information of the depth camera, and to continuously correct the plane equation, while the inertial sensor moves rapidly, based on a rotation matrix and displacement information of the pose information.
2. The system of claim 1, wherein the plane detection unit is further configured to perform an inner product operation on the depth image coordinates and the depth values to continuously generate a three-dimensional coordinate of the physical object in an image coordinate system, and calculate the plane equation according to the three-dimensional coordinate, the internal parameter matrix and the acceleration information.
3. The system of claim 1 or 2, wherein the computing device is further configured to perform an iterative optimization algorithm or a Gauss-Newton algorithm on the normal vector to obtain an optimal normal vector, and to replace the normal vector with the optimal normal vector when calculating the plane equation.
4. The system of claim 1 or 2, further comprising a color camera, coupled to the depth camera and the computing device respectively, for continuously acquiring a color image of the physical object, so that the computing device can determine a correspondence between the depth image coordinates and the color image coordinates of the physical object.
5. A plane dynamic detection method, comprising:
an image acquisition step: a depth camera continuously acquires a depth image, so as to continuously input a depth image coordinate and a depth value of one or more physical objects within a viewing range of the depth camera;
a step of detecting inertial data: an inertial sensor continuously obtains acceleration information and angular velocity information;
a motion state judging step: a computing device continuously judges whether the acceleration information and the angular velocity information acquired by the inertial sensor exceed a threshold, so as to judge the motion state of the inertial sensor;
a first plane equation updating step: if the threshold is not exceeded, the computing device calculates a normal vector and a distance constant according to the acceleration information, the depth image coordinate, the depth value and an internal parameter matrix, and initializes or continuously updates, with the normal vector and the distance constant, a plane equation of the physical object in a camera coordinate system while the inertial sensor is in a stable state; and
a second plane equation updating step: if the threshold is exceeded, the computing device executes a visual inertial odometry algorithm according to the gravitational acceleration of the acceleration information to obtain pose information of the depth camera, and continuously corrects the plane equation, while the inertial sensor moves rapidly, based on a rotation matrix and displacement information of the pose information.
6. The plane dynamic detection method of claim 5, further comprising a step of obtaining three-dimensional coordinates, performed before the motion state judging step: the computing device performs an inner product operation on the depth image coordinate and the depth value of the physical object to continuously generate a three-dimensional coordinate of the physical object in an image coordinate system, and calculates the normal vector and the distance constant with the three-dimensional coordinate, the internal parameter matrix and the acceleration information when the first or the second plane equation updating step is performed.
7. The method of claim 5 or 6, wherein, when the first plane equation updating step is executed, the computing device also executes an iterative optimization algorithm or a Gauss-Newton algorithm on the normal vector to obtain an optimal normal vector, and replaces the normal vector with the optimal normal vector to calculate the plane equation.
8. The method of claim 5 or 6, wherein the image acquisition step is further performed by a color camera, which continuously inputs a color image coordinate of the physical object.
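The two update branches of the claimed method can be read as a single decision on the IMU readings, assuming a plane equation n·P + d = 0 in the camera frame. The sketch below is an interpretation under stated assumptions: the threshold value, the gravity-based normal, and the propagation rule n' = Rn, d' = d - n'·t are this sketch's choices, not the patent's exact formulas.

```python
import numpy as np

THRESH = 0.5  # illustrative IMU-magnitude threshold (an assumption of this sketch)

def update_plane(accel, gyro, points, plane, pose):
    """Return (n, d) for the plane equation n.P + d = 0 in the camera frame.

    Stable state (IMU readings within the threshold): take the normal from
    the gravity direction in `accel` and fit d over the observed points.
    Fast motion: propagate the previous plane with the pose (R, t) from a
    visual-inertial odometry step, using n' = R n and d' = d - n'.t.
    """
    stable = (abs(np.linalg.norm(accel) - 9.81) < THRESH
              and np.linalg.norm(gyro) < THRESH)
    if stable:
        n = -accel / np.linalg.norm(accel)   # gravity-aligned unit normal
        d = -np.mean(points @ n)             # mean offset of the plane points
        return n, d
    R, t = pose
    n, d = plane
    n_new = R @ n                            # rotate normal into the new frame
    return n_new, d - n_new @ t              # shift by the camera displacement
```

The propagation rule follows from substituting P = Rᵀ(P' - t) into n·P + d = 0, which keeps the plane fixed in the world while the camera moves.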
CN201911046410.9A 2019-10-30 2019-10-30 Plane dynamic detection system and detection method Pending CN112750205A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911046410.9A CN112750205A (en) 2019-10-30 2019-10-30 Plane dynamic detection system and detection method

Publications (1)

Publication Number Publication Date
CN112750205A 2021-05-04

Family

ID=75640751

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911046410.9A Pending CN112750205A (en) 2019-10-30 2019-10-30 Plane dynamic detection system and detection method

Country Status (1)

Country Link
CN (1) CN112750205A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination