CN112198528A - Reference plane adjustment and obstacle detection method, depth camera and navigation equipment - Google Patents


Info

Publication number
CN112198528A
Authority
CN
China
Prior art keywords
depth value
value
plane
depth
measured
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202011061523.9A
Other languages
Chinese (zh)
Inventor
胡洪伟 (Hu Hongwei)
梅健 (Mei Jian)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Opnous Smart Sensing & Ai Technology
Original Assignee
Opnous Smart Sensing & Ai Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Opnous Smart Sensing & Ai Technology
Priority to CN202011061523.9A
Publication of CN112198528A
Legal status: Withdrawn

Classifications

    • G01S17/894 — 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
    • G01S17/10 — Systems determining position data of a target for measuring distance only using transmission of interrupted, pulse-modulated waves
    • G01S17/32 — Systems determining position data of a target for measuring distance only using transmission of continuous waves, whether amplitude-, frequency-, or phase-modulated, or unmodulated
    • G01S17/93 — Lidar systems specially adapted for anti-collision purposes
    • G01S7/4802 — Details of lidar systems using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G06T7/521 — Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • G06T2207/10004 — Still image; Photographic image

Abstract

The application discloses a reference plane adjustment method, an obstacle detection method, a depth camera and a navigation device. The reference plane adjustment method comprises: acquiring a standard depth value D0 of a standard plane; measuring, with a depth camera, the plane where the camera is actually located to obtain a measured depth value F1; obtaining an equivalent rotation angle α of the depth camera from the measured depth value F1 and the standard depth value D0, where the equivalent rotation angle α is the equivalent tilt of the depth camera within the plane of its sensing surface; and obtaining, from the equivalent rotation angle α and the standard depth value D0, a reference depth value D1 of the reference plane corresponding to the plane actually located. The method yields a more accurate reference plane despite the environmental interference of the actual detection scene.

Description

Reference plane adjustment and obstacle detection method, depth camera and navigation equipment
Technical Field
The application relates to the technical field of distance sensing, and in particular to methods for reference plane adjustment and obstacle detection, a depth camera, and ground navigation equipment.
Background
A Time-of-Flight (ToF) camera measures the distance, three-dimensional structure or three-dimensional profile of an object either by the time interval between emission and reception of a pulse signal, or by the phase shift accumulated by laser light over one round trip to the object. A ToF sensor can obtain a grayscale image and a distance image simultaneously, and is widely applied in fields such as somatosensory control, behavior analysis, monitoring, automatic driving, artificial intelligence, machine vision and automatic 3D modeling.
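For background, the pulse time-of-flight relation — distance equals half the round-trip time multiplied by the speed of light — can be sketched as:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(round_trip_s: float) -> float:
    """One-way distance from the round-trip time of a light pulse:
    the pulse travels to the target and back, so d = c * t / 2."""
    return C * round_trip_s / 2.0

# A round trip of ~6.67 ns corresponds to a target roughly 1 m away.
```

This is why, as discussed below, any extra path length picked up by multipath reflections directly inflates the measured distance.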
ToF cameras are widely used in the field of AGV (Automated Guided Vehicle) ground navigation, for example in sweeping robots. During ground navigation the ground environment can be complex: differing materials and scenes produce large reflectivity differences between areas, so factors such as multipath reflections strongly affect ranging accuracy, the ground position cannot be identified accurately, and obstacle judgment consequently suffers errors.
Improving the accuracy of obstacle detection during ground navigation is therefore a pressing problem.
Disclosure of Invention
In view of this, the present application provides a reference plane adjustment method, an obstacle detection method, a depth camera and a ground navigation device, to address the inaccuracy of ground obstacle detection caused by ground environment differences in the prior art.
The application provides a method for adjusting a reference plane, comprising: acquiring a standard depth value D0 of a standard plane R; measuring, with a depth camera, the plane where the camera is actually located to obtain a measured depth value F1; obtaining an equivalent rotation angle α of the depth camera from the measured depth value F1 and the standard depth value D0, where the equivalent rotation angle α is an equivalent rotation of the depth camera within the plane of its sensing surface; and obtaining, from the equivalent rotation angle α and the standard depth value D0, a reference depth value D1 of the reference plane corresponding to the plane actually located.
Optionally, the method for determining the equivalent rotation angle α comprises: obtaining a pixel characteristic value res1 from the measured depth value F1 and the standard depth value D0, where
res1 = (F1 − D0) · H / D0
H is the height difference between the depth camera and the standard plane corresponding to the standard depth value D0; calculating, column by column, a pixel column characteristic value res1_colavg(j) in the pixel coordinate system, where res1_colavg(j) is the arithmetic mean of the pixel characteristic values res1 of the pixel units in the j-th column; performing point cloud conversion on the measured depth value F1 of each pixel to obtain a spatial characteristic value PCL_F1 in the corresponding spatial coordinate system; calculating, column by column, a spatial column characteristic value res1_colavg_dx(j) in the spatial coordinate system, where res1_colavg_dx(j) is the arithmetic mean of the spatial characteristic values PCL_F1 of the pixels in the j-th column; performing a linear fit between the pixel column characteristic values res1_colavg(j) and the spatial column characteristic values res1_colavg_dx(j) to obtain a fitting function res1_colavg = k1 · res1_colavg_dx + b1; and calculating the equivalent rotation angle α = arctan(k1) from the slope k1 of the fitting function.
Optionally, the reference depth value D1 is calculated from the equivalent rotation angle α and the standard depth value D0 as D1 = D0 · cos α.
Optionally, the method further comprises: compensating the reference plane according to the actual topographic features of the plane where the depth camera is located, to obtain a compensated reference depth value D1'.
Optionally, when the actual ground has a recess or a protrusion, the reference plane is moved by a compensation distance d along the height direction of the depth camera to obtain a compensated reference plane, where the compensation distance d is greater than or equal to the height of the recess or protrusion.
Optionally, the reference depth value corresponding to the compensated reference plane is D1' = D0 · cos α · (H − d) / H.
Optionally, when the height difference between the plane where the camera is located and the depth camera is H', the compensated reference depth value of the reference plane is D1' = D0 · cos α · H' / H.
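A sketch of the compensation, with the caveat that the original compensated-depth formulas appear only as images in the patent; the scaling D1' = D0 · cos α · (H − d) / H and the shift direction (toward the camera) are assumptions read from the geometry:

```python
import math

def reference_depth(d0: float, alpha_rad: float) -> float:
    # D1 = D0 * cos(alpha), the adjusted reference depth value.
    return d0 * math.cos(alpha_rad)

def compensated_reference_depth(d0: float, alpha_rad: float,
                                H: float, d: float) -> float:
    """Assumed compensation: shifting the reference plane by d toward
    the camera rescales each reference depth by (H - d) / H, so small
    protrusions of height <= d still compare as passable."""
    return reference_depth(d0, alpha_rad) * (H - d) / H
```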
Optionally, the depth camera is a ToF camera.
The application also provides a ground obstacle detection method, comprising: obtaining, according to any of the methods above, the reference depth value corresponding to the ground where the device is located; performing distance detection, and comparing the measured depth value at each position to be detected with the corresponding reference depth value; and, if the measured depth value is greater than or equal to the reference depth value, replacing the measured depth value with a characteristic value and marking the corresponding position as a passable area.
Optionally, the method further comprises: if the measured depth value is smaller than the reference depth value, taking the measured depth value at the corresponding position as the depth value of an obstacle.
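A minimal sketch of this comparison step, assuming the stored characteristic value is 0.0 (the patent does not fix its value):

```python
import numpy as np

PASSABLE = 0.0  # assumed characteristic value marking a passable area

def detect_obstacles(measured: np.ndarray, reference: np.ndarray):
    """Compare measured depths against the reference depths.

    Positions where measured >= reference are replaced by the
    characteristic value (passable area); smaller measured depths are
    kept as the obstacle's depth value.  Returns (depth_map, obstacle_mask).
    """
    passable = measured >= reference
    depth_map = np.where(passable, PASSABLE, measured)
    return depth_map, ~passable
```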
The present application further provides a depth camera, comprising: a detection module for acquiring a detection signal; a processor for controlling the detection module to perform distance detection and obtaining a measured depth value from the detection signal; and a storage module for storing a standard depth value and a characteristic value, the characteristic value being used to mark a passable area. The storage module further stores a computer program for execution by the processor; when executed, the program implements the reference plane adjustment method of any of the above to obtain a reference depth value.
Optionally, when executed by the processor, the computer program further performs the following steps: controlling the detection module to perform distance detection and obtain a detection signal; obtaining the measured depth value at the measured position from the detection signal; comparing the measured depth value with the corresponding reference depth value; and, if the measured depth value is greater than or equal to the reference depth value, replacing it with the characteristic value and marking the corresponding position as a passable area.
Optionally: if the measured depth value is smaller than the reference depth value, the measured depth value at the corresponding position is taken as the depth value of an obstacle.
The application also provides a ground navigation device comprising a depth camera as in any one of the above.
The reference plane adjustment method starts from the standard depth value of an interference-free standard plane and combines it with the equivalent rotation angle of the depth camera to obtain a corrected reference plane and its reference depth value. This reference plane represents the plane where the camera is located more accurately and avoids the influence of the measurement environment on identifying that plane. Furthermore, the reference plane can be compensated: translating it according to the topographic features of the plane allows measurement noise from small terrain fluctuations to be ignored.
In the obstacle detection method, the reference plane represents the actual ground and the reference depth value is not disturbed by the measurement environment, so the accuracy of ground obstacle detection improves. Obstacles and passable areas are distinguished through parameter calibration; the algorithm is simple and easy to implement.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1a and 1b are schematic diagrams of ToF camera ranging in an embodiment of the present application;
fig. 2 is a schematic flowchart of a method for adjusting a reference plane in an embodiment of the present application;
fig. 3 is a schematic diagram of obtaining the standard depth of a standard plane in an embodiment of the present application;
fig. 4 is a schematic diagram of a depth camera with an equivalent rotation angle in an embodiment of the present application;
fig. 5a is a schematic diagram of the distribution of pixel characteristic values in the pixel coordinate system in an embodiment of the present application;
fig. 5b is a graph of the pixel column characteristic value and the spatial column characteristic value in an embodiment of the present application;
fig. 6 is a schematic flowchart of a method for detecting a ground obstacle in an embodiment of the present application;
fig. 7 is a schematic diagram of a depth camera performing ground obstacle detection in an embodiment of the present application;
fig. 8 is a schematic diagram of a depth camera performing ground obstacle detection in an embodiment of the present application;
fig. 9 is a schematic diagram of a depth camera performing ground obstacle detection in an embodiment of the present application;
fig. 10a and fig. 10b are schematic diagrams of an actual distance measurement effect in an embodiment of the present application;
fig. 11 is a schematic structural diagram of a depth camera in an embodiment of the present application.
Detailed Description
As described in the Background, in the prior art the complexity of ground materials and ground scenes makes corresponding ground-identification algorithms difficult to develop. Close to the ground in particular, the reflectivity distribution is complex and effects such as multipath reflection are significant, which can cause large depth-measurement errors — especially for a ToF camera that ranges by timing emitted and reflected light. As a result, the ground position cannot be identified accurately and the measured height of obstacles deviates.
Please refer to fig. 1a and fig. 1b, which are schematic structural diagrams of a ToF camera.
The ToF camera includes a light source module 11 and a sensing module 12. The light source module 11 emits detection light toward the object M to be measured; the sensing module 12 receives the light reflected from the surface of the object M. The distance between the object and the depth camera can be calculated from the times at which the detection light is emitted and, after reflection, received.
In fig. 1a the detection light reaches the object M directly, is reflected, and is received by the sensing module 12; the light's travel time corresponds to the distance between the depth camera and the object M. In fig. 1b another object N is present near the object M: part of the detection light first reaches the surface of object N, then the surface of object M, and is received by the sensing module 12 after reflection. Part of the received light has thus travelled a multipath route, producing a larger measurement result. This typically occurs when the reflectivity of object N's surface exceeds that of object M, in which case the multipath reflection is stronger and affects the measurement more.
The accuracy of ground identification determines the accuracy of judging obstacles on the plane where the depth camera is located, and in complex scenes the identification algorithm is harder to develop. The inventors found that a reference depth value calibrated by external parameters can be used instead: the reference plane corresponding to this reference depth value represents the plane where the depth camera is located, the measured depth value is compared with the reference depth value, and based on the comparison the plane is separated from obstacles through characteristic-value calibration. Obstacle identification is then unaffected by the ground scene and its accuracy improves.
Further research found that, because of the complex measurement environment, a certain rotation angle exists between the calibrated reference plane and the actual plane. For example, installation errors of the depth camera, or uneven pressure on the equipment carrying it, can rotate the camera within the plane of its sensing surface, so that the depth values of the current reference plane no longer represent the plane where the camera is located. Alternatively, during actual measurement, problems such as MPI (multipath interference) or scattering can tilt the detected plane relative to the reference plane, which is equivalent to the depth camera rotating within the plane of its sensing surface. In all these cases the reference plane deviates from the actual plane, and the plane and obstacles can no longer be distinguished accurately.
Therefore, the reference plane needs to be further adjusted during the actual detection process.
To solve the above problems, the present application provides a new method for adjusting a reference plane.
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only a part of the embodiments of the present application, not all of them; all other embodiments obtained by those skilled in the art without creative effort fall within the protection scope of the present application. The following embodiments and their technical features may be combined with each other provided there is no conflict.
Fig. 2 is a schematic flow chart illustrating a method for adjusting a reference plane according to an embodiment of the invention.
In this embodiment, the method for adjusting the reference plane includes the following steps:
In step S201, a standard depth value D0 of a standard plane is obtained.
The standard depth value is acquired as follows: select a standard plane R, fix the depth camera at a calibration position, detect the depth at each position of the standard plane R, and obtain and store the standard depth value D0.
The standard plane R is a flat plane whose detected standard depth values are free of interference from the external environment; that is, the standard depth value D0 is unaffected by MPI, reflectivity variation or scattered light, and accurately reflects the actual depth of a plane in the specific relative position to the depth camera. The depth camera may be a ToF camera, a structured-light depth camera, an RGB binocular camera, or any other camera with a depth measurement function.
Referring to fig. 3, a schematic diagram of detecting the standard depth data of a standard plane in an embodiment. The standard plane R is flat ground with uniform surface reflectivity, and no other object or reflecting surface lies in the measured area, which eliminates the influence of the external environment on the measured standard depth value D0.
Fix the depth camera: this embodiment employs ToF camera 301 as the depth camera, since the external environment affects the depth results of a ToF camera more strongly than those of other depth camera types. The ToF camera 301 may be a camera module, or may be mounted on an electronic device. The depth camera is fixed at a calibration position, which corresponds to the relative position of the depth camera and the ground in the actual ground navigation equipment. In this embodiment, taking the acquisition of a standard plane characterizing the ground as an example, when ToF camera 301 is at the calibration position its sensing plane is perpendicular to the standard plane R, and the perpendicular distance between ToF camera 301 and the standard plane is H; specifically, the height difference between the center of the sensing plane of ToF camera 301 and the standard plane R is H. In other embodiments, the calibration position may be set according to the actual situation.
The sensing module of ToF camera 301 includes a pixel array, each pixel of which corresponds to a position on the standard plane R. With the pixel positions fixed and the height H of ToF camera 301 fixed, the distance between each pixel and its corresponding position on the standard plane R is a fixed value. The ToF camera 301 obtains and stores the depth data at each position of the standard plane R as the standard depth value D0 of the standard plane R.
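For intuition only, these fixed pixel-to-ground distances can be derived from a hypothetical pinhole model (the parameters fy and cy and the horizontal-axis geometry are assumptions, not from the patent): with the optical axis parallel to the ground at height H, the ray through a pixel row below the principal row drops at a known angle and meets the ground after a computable ray length.

```python
import math

def standard_depth(v: int, cy: float, fy: float, H: float) -> float:
    """Ray length from camera to ground for pixel row v.

    Assumed pinhole model: the ray through row v drops at angle theta
    with tan(theta) = (v - cy) / fy, so it meets the ground (H below
    the camera) after a ray length of H / sin(theta).  Rows at or
    above the principal row cy never intersect the ground."""
    dv = v - cy
    if dv <= 0:
        raise ValueError("pixel row looks at or above the horizon")
    return H * math.sqrt(fy * fy + dv * dv) / dv
```

Rows closer to the principal row (the horizon) map to larger depths, which matches the intuition that the ground recedes toward the top of the image.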
Step S202: measure the plane where the camera is actually located using the depth camera to obtain a measured depth value F1.
Here the measured depth value F1 is taken only at positions of the actual plane where no obstacle is present.
Step S203: obtain the equivalent rotation angle α of the depth camera from the measured depth value F1 and the standard depth value D0.
The equivalent rotation angle α is the rotation angle of the depth camera within the plane where its sensing surface lies; the sensing surface is the pixel-array surface of the depth camera.
Please refer to fig. 4, which is a schematic diagram illustrating an equivalent rotation angle of a depth camera according to an embodiment of the present invention.
In this embodiment, taking the ToF camera as an example, the sensing plane of the ToF camera is parallel to the direction of the fixed axis and an equivalent rotation angle α exists relative to the calibration position. The equivalent rotation angle α may be an actual tilt produced by an installation error during mounting, or by tilting of the device carrying the depth camera; alternatively, the detected scene environment may cause the plane the ToF camera detects to be rotated relative to the actual plane, which is equivalent to a rotation of the depth camera within the plane of its sensing surface.
The magnitude of the equivalent rotation angle α can be obtained from the measured depth value F1 of the plane where the camera is located and the standard depth value D0.
In one embodiment, the method of obtaining the equivalent rotation angle α is as follows:
step 1, obtaining a pixel characteristic value res1, wherein
res1 = (F1 − D0) · H / D0
H is the height difference between the standard plane R of the standard depth value D0 and the depth camera center. res1 represents the offset between the measured depth value F1 and the standard depth value D0 of each pixel in the direction perpendicular to the standard plane. The pixel characteristic value res1 can be calculated for every pixel, or only for pixels within a target area.
Step 2: calculate, column by column, the pixel column characteristic value res1_colavg(j) in the pixel coordinate system, where res1_colavg(j) is the arithmetic mean of the characteristic values res1 of the pixel units in the j-th column. The specific formula is:
res1_colavg(j) = (1/n) · Σ_{i=1..n} res1(i, j)
where j is the column index, i indexes the selected pixels (rows) in the j-th column, and n is at most the total number of pixels in one column. The pixel characteristic values of at least two pixels in the j-th column may be chosen to obtain the pixel column characteristic value res1_colavg(j).
Either every column of the pixel array is processed, i.e. j = 1 … col_size, where col_size is the number of pixel columns of the ToF camera, giving a pixel column characteristic value res1_colavg for every column; or only several specific columns are processed, i.e. j takes several values between 1 and col_size, giving the pixel column characteristic values for those columns.
Step 3: perform point cloud conversion on the measured depth value F1 to obtain the spatial characteristic value PCL_F1 in the corresponding spatial coordinate system.
Convert the measured depth value F1 of each pixel measured in step 1 into the spatial characteristic value PCL_F1 in the spatial coordinate system. The point cloud conversion can be performed by methods known to those skilled in the art and is not described here.
The point cloud conversion may be applied to the measured depth values F1 of all pixels in the pixel array, yielding the corresponding spatial characteristic values PCL_F1, or only to the measured depth values F1 of pixels within a region of interest (ROI) or target region.
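The conversion method is not specified in the patent; a common choice is standard pinhole back-projection with assumed intrinsics fx, fy, cx, cy, treating the stored value as depth along the optical axis (radial ToF ranges would first need converting to axial depth):

```python
import numpy as np

def depth_to_pointcloud(depth: np.ndarray, fx: float, fy: float,
                        cx: float, cy: float) -> np.ndarray:
    """Back-project an H x W depth map (axial depth z per pixel)
    into camera-frame XYZ points of shape H x W x 3."""
    h, w = depth.shape
    u = np.arange(w)[None, :]          # column index, broadcast over rows
    v = np.arange(h)[:, None]          # row index, broadcast over columns
    x = (u - cx) / fx * depth          # lateral offset from the optical axis
    y = (v - cy) / fy * depth          # vertical offset from the optical axis
    return np.stack([x, y, depth.astype(float)], axis=-1)
```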
Step 4: calculate, column by column, the spatial column characteristic value res1_colavg_dx(j) in the spatial coordinate system, where res1_colavg_dx(j) is the arithmetic mean of the spatial characteristic values PCL_F1 of the pixels in the j-th column.
The specific calculation formula is as follows:
res1_colavg_dx(j) = (1/n) · Σ_{i=1..n} PCL_F1(i, j)
where j is the column index, i indexes the selected pixels (rows) in the j-th column, and n is at most the total number of pixels in one column. The spatial characteristic values of at least two pixels in the j-th column may be selected to obtain the spatial column characteristic value res1_colavg_dx(j).
Either every column of the pixel array is processed, i.e. j = 1 … col_size, where col_size is the number of pixel columns of the ToF camera, giving a spatial column characteristic value res1_colavg_dx for every column; or only several specific columns are processed, i.e. j takes several values between 1 and col_size, giving the spatial column characteristic values for those columns.
Step 5: perform a linear fit between the pixel column characteristic values res1_colavg(j) and the spatial column characteristic values res1_colavg_dx(j) to obtain the fitting function:
res1_colavg=k1·res1_colavg_dx+b1。
Preferably, the linear fitting is performed using a least-squares method. The pixel column characteristic values and spatial column characteristic values corresponding to every column of the pixel array may be fitted to obtain the fitting function; alternatively, only the values corresponding to a number of specific columns may be selected for fitting, which improves calculation efficiency.
Step 6: obtain the equivalent rotation angle α = arctan(k1) from the slope k1.
In other embodiments, the value of the equivalent rotation angle α may also be obtained according to other calculation manners.
Referring to fig. 5a, an embodiment of the distribution of the pixel characteristic value res1 over the column pixels in pixel space is shown. Fig. 5b plots the pixel column characteristic value res1_colavg in the pixel coordinate system against the spatial column characteristic value res1_colavg_dx in the spatial coordinate system; res1_colavg and res1_colavg_dx are in a linear relationship, and linear fitting gives a fitted line with slope k1 = 0.086363 and intercept b1 = 1.8148, hence the equivalent rotation angle α = 4.936°.
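Steps 5 and 6 can be reproduced with a standard least-squares fit. The sketch below generates synthetic column-feature pairs using the slope and intercept from the example above (the data itself is fabricated for illustration):

```python
import numpy as np

# Least-squares line through the paired column features, then
# alpha = arctan(k1), as in steps 5 and 6.
res1_colavg_dx = np.linspace(-0.3, 0.3, 50)           # spatial column feature
res1_colavg = 0.086363 * res1_colavg_dx + 1.8148      # pixel column feature

k1, b1 = np.polyfit(res1_colavg_dx, res1_colavg, 1)   # least-squares fit
alpha_deg = np.degrees(np.arctan(k1))                 # about 4.936 degrees
```

With noisy real data the same `np.polyfit` call still returns the least-squares slope, so the equivalent rotation angle degrades gracefully with measurement noise.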
Step S204: and obtaining a reference plane corresponding to the actually located plane and a reference depth value D1 of the reference plane according to the equivalent rotation angle alpha and the standard depth value D0.
Due to the existence of the equivalent rotation angle α, the standard plane and the standard depth value D0 thereof cannot accurately represent the depth information of the actual ground, and therefore, a reference plane capable of representing the actual plane is required to be acquired based on the standard plane R. The reference depth value D1 of the reference plane is obtained through the standard depth value D0, the reference depth value D1 is not affected by the detection environment, and the actual depth value of the plane where the reference plane is located can be accurately reflected.
Referring to fig. 4, at the calibration position the sensing surface of the ToF camera is perpendicular to the standard plane R, and the standard plane R may be used to represent the actual plane; during actual measurement, however, the camera is in a tilted position with an equivalent rotation angle α. In this tilted state the standard plane R no longer represents the actual plane where the ToF camera is located, and the reference depth value D1 needs to be obtained from the equivalent rotation angle α and the standard depth value D0.
According to fig. 4, the geometric relationship H1 = H · cos α holds, where H is the nominal height of the center of the sensing plane of the ToF camera and H1 is the actual height of the center of the sensing plane in the presence of the equivalent rotation angle α.
By similar triangles, D1 / D0 = H1 / H = cos α, so that D1 = D0 · cos α.
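The relation D1 = D0 · cos α is a one-line computation; the numeric values of D0 and H below are illustrative assumptions, while the angle reuses the 4.936° example:

```python
import math

# Step S204 sketch: the reference depth value shrinks by cos(alpha)
# relative to the standard depth value.
alpha = math.radians(4.936)   # equivalent rotation angle from the example
D0 = 1.500                    # standard depth value, metres (assumed)
H = 0.300                     # sensing-plane center height, metres (assumed)

H1 = H * math.cos(alpha)      # actual height under the rotation (fig. 4)
D1 = D0 * math.cos(alpha)     # reference depth value, about 1.4944 m
```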
In some embodiments, further comprising: and compensating the reference plane according to the actual topography characteristic of the plane where the depth camera is located to obtain a compensated reference depth value D1'.
In some embodiments, the ground contains undulations such as low-amplitude bumps or shallow depressions. In practical ground navigation applications these must be ignored. For example, in the application scenario of a sweeping robot, the floor usually contains small-height protrusions such as carpets, floor tile gaps and door frames; the robot needs to pass over these areas normally, and they must not be identified as obstacles. The reference plane may be compensated so that such protrusions or depressions lie below the compensated reference plane and are not identified as obstacles. When the actual ground has a recess or a protrusion, the reference plane is translated toward the depth camera along its height direction by a compensation distance d, giving the compensated reference plane, where d is greater than or equal to the height of the recess or protrusion. The reference depth value D1' corresponding to the compensated reference plane is
D1' = ((H1 − d) / H1) · D1 = ((H · cos α − d) / H) · D0.
The compensation distance d is an adjustable parameter that can be dynamically tuned to the actual ground conditions. For example, on ground with large undulations (poor flatness), d may be set to 10 mm, raising the reference plane by 10 mm; in regions where the ground is flatter, the translation distance can be appropriately reduced, for example d = 5 mm.
In some embodiments, when there is no equivalent rotation angle, the distance between the depth camera and the plane where it lies is H0; when the equivalent rotation angle exists, after the reference plane is obtained, the compensated reference depth value of the reference plane is
D1' = ((H0 · cos α − d) / H0) · D0.
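The compensation can be sketched under the similar-triangles assumption used above, namely that raising the reference plane by d scales the reference depth by the remaining height fraction. All numeric values are illustrative assumptions:

```python
import math

# Compensated reference depth sketch (assumed geometry, illustrative values).
alpha = math.radians(4.936)
D0 = 1.500                    # standard depth value, metres (assumed)
H0 = 0.300                    # camera height with no rotation, metres (assumed)
d = 0.010                     # 10 mm compensation for rough ground

D1 = D0 * math.cos(alpha)     # reference depth value
H1 = H0 * math.cos(alpha)     # camera height under the rotation
D1_comp = D1 * (H1 - d) / H1  # compensated reference depth value D1'
```

Lowering d to 5 mm for flatter ground, as the text suggests, moves D1_comp correspondingly closer to D1.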
The standard plane and the reference plane can both be horizontal planes or inclined planes, set according to actual conditions.
By the above adjustment method, the reference plane and its reference depth value D1 (or the compensated reference plane and its compensated reference depth value D1'), which represent the actual depth of the plane where the depth camera is located, are obtained from the standard depth value D0 without interference from the environment.
Based on the above adjusting method of the reference plane, the embodiment of the invention also provides a ground obstacle detecting method.
Referring to fig. 6, the ground obstacle detection method includes the following steps:
step S601, obtaining a reference depth value corresponding to the ground.
The reference depth value may be obtained according to the reference plane adjustment method described in the above embodiments.
Step S602, distance detection is carried out.
Distance detection is performed in the actual measurement scene using a depth camera, such as a ToF camera, to obtain measured depth values of the measured object at various locations within the detection field of view.
And step S603, comparing the measured depth value at the measured position with the corresponding reference depth value.
The reference depth value corresponds to the reference plane, which is equivalent to an externally calibrated reference ground free of environmental interference. Comparing the actually measured depth value with the reference depth value therefore compares the distance of the measured object against the reference ground, distinguishing actual obstacles among the measured objects from the reference ground.
In step S604, it is determined whether the measured depth value is smaller than the reference depth value.
If not, i.e. the measured depth value is greater than or equal to the reference depth value, step S605 is executed: the measured depth value is replaced with a feature value, and the corresponding position is marked as a passable area.
When the actually measured depth value equals the reference depth value, the height of the position is consistent with the reference plane, i.e. it is the ground. When the measured depth value is greater than the reference depth value, the position may still be ground: interference from multipath reflections or other factors inflates the measured depth value beyond the actual depth of the ground, so the detected ground appears erroneously sunken. To solve this problem, the measured depth value is replaced with a feature value, and the detected actual ground is marked as a passable area, so that the entire actual ground corresponds to one and the same feature value. In the distance detection image rendered in grayscale or color from the measured depth values, the measured depth values of the actual ground are replaced by the feature value and thus share the same grayscale or color; other regions, whose measured depths differ from the feature value, display different grayscales or colors and correspond to obstacles, which are therefore clearly distinguished from the actual ground. In the resulting depth map, only the obstacles are displayed, with the actual ground as background. In one embodiment, the feature value is 0, and any position with a measured depth value greater than 0 is an obstacle, which improves the accuracy of obstacle identification. In other embodiments, the feature value may be set to other values; no limitation is imposed here.
If the measured depth value is smaller than the reference depth value, step S606 is executed: the measured depth value at the position is taken as the depth value of the obstacle. A measured depth value smaller than the reference depth value indicates that an obstacle exists at the position and blocks the flight of the detection light, making the measured depth value smaller. The height and horizontal distance of the obstacle can then be judged from this depth value.
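Steps S603 to S606 reduce to an element-wise comparison against the reference depth map. The sketch below uses the feature value 0 from the embodiment; the data is fabricated for illustration:

```python
import numpy as np

FEATURE_VALUE = 0.0  # feature value marking passable areas (per the embodiment)

def classify(measured, reference):
    """Replace depths at or beyond the reference with the feature value
    (passable ground, including multipath-inflated readings); keep
    closer depths as obstacle depth values."""
    measured = np.asarray(measured, dtype=float)
    out = measured.copy()
    out[measured >= reference] = FEATURE_VALUE
    return out                            # nonzero pixels are obstacles

reference = np.array([2.0, 2.0, 2.0, 2.0])
measured = np.array([2.0, 2.3, 1.2, 1.9])  # 2.3: multipath-inflated ground
depthmap = classify(measured, reference)   # -> [0.0, 0.0, 1.2, 1.9]
```

Rendering `depthmap` in grayscale then shows only the obstacles against a uniform ground background, as described above.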
In the above scheme, calibrating the reference depth value of the reference plane sets a calibrated reference ground for the actual measurement scene, avoiding the influence of the measurement environment on ground identification and improving the accuracy of obstacle detection; thanks to the parameter calibration, the algorithm is simple and easy to implement.
Please refer to fig. 7, which is a schematic diagram of ground obstacle detection according to an embodiment of the present invention.
In this embodiment, the actual ground M is a flat plane and has a vertical distance H from the ToF camera 601.
Due to the presence of multiple objects, for example object A and object N, the multipath reflections caused by each object may inflate the measured depth values of the ground, so that the measured ground P lies below the actual ground M.
By the reference plane adjustment method of the above embodiment, the reference depth value corresponding to the actual ground M is obtained. In this embodiment, when the measured depth value is greater than or equal to the corresponding reference depth value, the feature value replaces the measured depth value, eliminating the interference of the external environment with the detection result. Using the reference depth value of the reference plane as the measured depth value of the actual ground M removes external interference from ground detection.
When an obstacle such as object A is detected, the ground position originally corresponding to the pixel is point C; with object A present, its blockage of the detection light makes the measured depth value obtained by the corresponding pixel smaller than the reference depth value of point C. Therefore, when the measured depth value is smaller than the reference depth value, it is retained as the depth value of the obstacle.
By the method, the barrier can be separated from the ground without being influenced by complex ground materials and surrounding environment, so that the barrier can be accurately identified.
In the case where the ground of the measured area has protrusions or depressions of a certain height, these must be ignored in practical ground navigation applications. For example, in the application scenario of a sweeping robot, the floor usually contains small-height protrusions such as carpets, floor tile gaps and door frames; the robot needs to pass over these areas normally, and they must not be identified as obstacles. Adjusting the height of the reference plane avoids identifying such protrusions or depressions as obstacles.
Please refer to fig. 8, which is a schematic diagram illustrating ground obstacle detection according to another embodiment of the present invention.
In this embodiment, the actual ground M has a protrusion 701 with height d1. The reference plane representing the actual ground M, after adjustment according to the equivalent rotation angle α, is raised by d1 to form the compensated reference plane M'; taking M' as the reference ground is equivalent to raising the ground by d1.
During actual measurement, only objects rising above the compensated reference plane M' are considered obstacles. Since M' is higher than the actual ground M, the measured height of object A is its actual height minus the height difference d1. Because obstacle heights are usually large, the smaller measured height does not affect the obstacle judgment.
In other embodiments, where the ground has multiple protrusions, the height difference d1 may be set equal to, or slightly greater than, the highest protrusion height.
Fig. 9 is a schematic view of ground obstacle detection according to another embodiment of the present invention.
In this embodiment, the actual ground M has a recess 801 with depth d2. The reference plane representing the actual ground M, after adjustment according to the equivalent rotation angle α, is raised by d2 relative to the standard plane R to form the compensated reference plane M'.
When ToF camera 601 is located at a flat position of actual ground surface M, compensated reference plane M' of ToF camera 601 is higher than the actual ground surface, and can mask the entire actual ground surface, including said depression 801.
When the ToF camera 601 moves into the recess 801, for example to its lowest point, the compensated reference plane M' descends with the camera toward the actual ground M, still masking it and preventing the actual ground M outside the recess 801 from being detected as an obstacle.
In other embodiments, in order to completely mask the actual ground when the ToF camera 601 enters the recess 801, the reference plane M' of the ToF camera 601 may be raised by a distance greater than d2.
In other embodiments of the present invention, the ToF camera 601 can dynamically compensate the reference plane according to the topography of the actual ground during the movement process.
Please refer to fig. 10a and 10b, which are schematic diagrams illustrating an actual ranging effect according to an embodiment of the present invention.
In this embodiment, the environment is a checkerboard floor of tiles in different colors. Fig. 10a shows the depth map obtained by grayscale rendering of measured depth values from a conventional distance measurement method: because of the differing reflectivities of tiles of different colors, the tile gap positions, and factors such as multipath reflection, the measured distances on the floor vary periodically with the tile color, and reflections of ground wires off the tiles make projections of the wires appear on the floor.
Referring to fig. 10b, by calibrating the reference depth value of the reference plane of the ToF camera, the method of the present invention shields the influence of the material and reflectivity of the ground during the measurement process, eliminates the reflection of the wire, separates the wire from the actual ground, and accurately identifies the wire on the ground.
The embodiment of the invention also provides a depth camera.
Fig. 11 is a schematic structural diagram of a depth camera according to an embodiment of the invention.
The depth camera includes a storage module 1001, a detection module 1002, and a processor 1003.
The detection module 1002 is configured to acquire a distance detection signal at a measured position; the processor 1003 is configured to obtain a measured depth value at the measured position according to the distance detection signal.
In this embodiment, the depth camera is a ToF camera, and the detection module 1002 includes a light emitting unit and a sensing unit, wherein the light emitting unit is configured to emit detection light, and the sensing unit is configured to receive reflected light of the detection light. By detecting the time of flight of the light, a measured depth value of the measured position can be obtained. In other embodiments, the detection module may have other configurations depending on the type of depth camera.
The storage module 1001 is configured to store a standard depth value and a feature value, the feature value being used to mark passable areas. The storage module 1001 further stores a computer program to be executed by the processor 1003; when executed by the processor 1003, the program implements the reference plane adjustment method of the foregoing embodiments and obtains the reference depth value from the measured depth value obtained by the processor 1003 and the standard depth value.
In some embodiments, the computer program when executed by the processor 1003 is further capable of performing the steps of: controlling the detection module 1002 to perform distance detection; comparing the measured depth value at the measured position with the corresponding reference depth value; if the measured depth value is larger than or equal to the reference depth value, replacing the measured depth value with a characteristic value, and marking the corresponding position as a passable area. And if the actually measured depth value is smaller than the reference depth value, taking the actually measured depth value of the corresponding position as the depth value of the obstacle.
The depth camera can accurately separate the obstacle from the ground, and accuracy of obstacle identification is improved.
The embodiment of the invention also provides a ground navigation device, which comprises the depth camera of the above embodiment, can accurately identify obstacles during ground navigation, and is suitable for complex ground scenes.
The above-mentioned embodiments are only examples of the present application, and not intended to limit the scope of the present application, and all equivalent structures or equivalent flow transformations made by the contents of the specification and the drawings, such as the combination of technical features between the embodiments and the direct or indirect application to other related technical fields, are also included in the scope of the present application.

Claims (14)

1. A method for adjusting a reference plane, comprising:
acquiring a standard depth value D0 of a standard plane;
measuring the actual plane by using a depth camera to obtain a measured depth value F1;
obtaining an equivalent rotation angle alpha of the depth camera according to the measured depth value F1 and the standard depth value D0, wherein the equivalent rotation angle alpha is a rotation angle of the depth camera in a plane where a sensing surface is located;
and obtaining a reference depth value D1 of the reference plane corresponding to the actually located plane according to the equivalent rotation angle alpha and the standard depth value D0.
2. Method for the adjustment of a reference plane according to claim 1, characterized in that the method for determining the equivalent rotation angle α comprises:
obtaining a pixel characteristic value res1 according to the measured depth value F1 and the standard depth value D0, wherein
res1 = H · (D0 − F1) / D0,
H is the height difference between the standard plane corresponding to the standard depth value D0 and the depth camera;
calculating a pixel column characteristic value res1_colavg(j) in a pixel coordinate system column by column, wherein res1_colavg(j) is an arithmetic mean of the pixel characteristic values res1 of the pixel units in the j-th column;
performing point cloud conversion on the measured depth value F1 to obtain a spatial characteristic value PCL _ F1 in a corresponding spatial coordinate system;
calculating a spatial column characteristic value res1_colavg_dx(j) in a spatial coordinate system column by column, wherein the spatial column characteristic value res1_colavg_dx(j) is an arithmetic mean of the spatial characteristic values PCL_F1 corresponding to the pixels in the j-th column;
performing linear fitting on the pixel column characteristic value res1_colavg(j) and the spatial column characteristic value res1_colavg_dx(j) to obtain a fitting function
res1_colavg=k1·res1_colavg_dx+b1;
and calculating the equivalent rotation angle α = arctan(k1) according to the slope k1 of the fitting function.
3. The method for adjusting a reference plane according to claim 2, wherein the reference depth value D1 is calculated according to an equivalent rotation angle α and a standard depth value D0:
D1=D0·cosα。
4. the method for adjusting a reference plane according to claim 1, further comprising: and compensating the reference plane according to the actual topographic characteristics of the plane where the depth camera is located to obtain a compensated reference depth value D1'.
5. The method for adjusting the reference plane according to claim 4, wherein when the actual ground has a depression or a protrusion, the reference plane is translated toward the depth camera along its height direction by a compensation distance d to obtain a compensated reference plane, the compensation distance d being greater than or equal to the height of the depression or the protrusion.
6. The method of claim 5, wherein the reference depth value corresponding to the compensated reference plane is
D1' = ((H1 − d) / H1) · D1.
7. The method according to claim 4, wherein, when the height difference between the plane and the depth camera is H, the compensated reference depth value of the reference plane is
D1' = ((H · cos α − d) / H) · D0.
8. The method of claim 1, wherein the depth camera is a ToF camera.
9. A method of detecting a ground obstacle, comprising:
the method according to any one of claims 1 to 8, obtaining a reference depth value corresponding to the ground;
distance detection is carried out, and the actually measured depth value at the position to be detected is compared with the corresponding reference depth value;
if the measured depth value is larger than or equal to the reference depth value, replacing the measured depth value with a characteristic value, and marking the corresponding position as a passable area.
10. The ground obstacle detection method according to claim 9, further comprising: and if the actually measured depth value is smaller than the reference depth value, taking the actually measured depth value at the corresponding position as the depth value of the obstacle.
11. A depth camera, comprising:
the detection module is used for acquiring a detection signal;
the processor is used for controlling the detection module to carry out distance detection and acquiring a measured depth value according to a detection signal;
a storage module for storing standard depth values and feature values, the feature values being used for marking a passable area, and a computer program being stored in the storage module for execution by the processor, the computer program being capable of implementing the method for adjusting a reference plane according to any one of claims 1 to 8 when executed by the processor to obtain a reference depth value.
12. The depth camera of claim 11, wherein the computer program, when executed by the processor, is further operable to perform the steps of: obtaining an actually measured depth value at the measured position according to the detection signal; comparing the measured depth value with a corresponding reference depth value; if the measured depth value is larger than or equal to the reference depth value, replacing the measured depth value with a characteristic value, and marking the corresponding position as a passable area.
13. The depth camera of claim 12, further comprising: and if the actually measured depth value is smaller than the reference depth value, taking the actually measured depth value of the corresponding position as the depth value of the obstacle.
14. A ground navigation device, comprising a depth camera according to any one of claims 12 to 13.
CN202011061523.9A 2020-09-30 2020-09-30 Reference plane adjustment and obstacle detection method, depth camera and navigation equipment Withdrawn CN112198528A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011061523.9A CN112198528A (en) 2020-09-30 2020-09-30 Reference plane adjustment and obstacle detection method, depth camera and navigation equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011061523.9A CN112198528A (en) 2020-09-30 2020-09-30 Reference plane adjustment and obstacle detection method, depth camera and navigation equipment

Publications (1)

Publication Number Publication Date
CN112198528A true CN112198528A (en) 2021-01-08

Family

ID=74013152

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011061523.9A Withdrawn CN112198528A (en) 2020-09-30 2020-09-30 Reference plane adjustment and obstacle detection method, depth camera and navigation equipment

Country Status (1)

Country Link
CN (1) CN112198528A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113192144A (en) * 2021-04-22 2021-07-30 上海炬佑智能科技有限公司 ToF module parameter correction method, ToF device and electronic equipment
CN113689391A (en) * 2021-08-16 2021-11-23 炬佑智能科技(苏州)有限公司 ToF device installation parameter acquisition method and system and ToF device


Similar Documents

Publication Publication Date Title
CN112198526A (en) Reference plane adjustment and obstacle detection method, depth camera and navigation equipment
US10291904B2 (en) Auto commissioning system and method
US7436522B2 (en) Method for determining the 3D coordinates of the surface of an object
US11602850B2 (en) Method for identifying moving object in three-dimensional space and robot for implementing same
JP3070953B2 (en) Method and system for point-by-point measurement of spatial coordinates
US8321167B2 (en) Surveying instrument and surveying compensation method
US20190128665A1 (en) Method for the three dimensional measurement of a moving objects during a known movement
US20110010033A1 (en) Autonomous mobile robot, self position estimation method, environmental map generation method, environmental map generation apparatus, and data structure for environmental map
US9134117B2 (en) Distance measuring system and distance measuring method
CN112198529B (en) Reference plane adjustment and obstacle detection method, depth camera and navigation equipment
CN112198528A (en) Reference plane adjustment and obstacle detection method, depth camera and navigation equipment
US10733740B2 (en) Recognition of changes in a detection zone
JPH07104127B2 (en) Position detection method and device
CN112198527B (en) Reference plane adjustment and obstacle detection method, depth camera and navigation equipment
US10444398B2 (en) Method of processing 3D sensor data to provide terrain segmentation
CN114035584B (en) Method for detecting obstacle by robot, robot and robot system
EP3550326A1 (en) Calibration of a sensor arrangement
CN111862182B (en) ToF camera, ground obstacle detection method thereof and ground navigation equipment
CN102401901B (en) Distance measurement system and distance measurement method
US7026620B2 (en) Method and device for the geometrical measurement of a material strip
EP4050377A1 (en) Three-dimensional image sensing system and related electronic device, and time-of-flight ranging method
Mader et al. An integrated flexible self-calibration approach for 2D laser scanning range finders applied to the Hokuyo UTM-30LX-EW
CN103852031A (en) Electronic device, and method for measuring shape of object
JP7363545B2 (en) Calibration judgment result presentation device, calibration judgment result presentation method and program
CN113050073B (en) Reference plane calibration method, obstacle detection method and distance detection device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication
WW01 Invention patent application withdrawn after publication

Application publication date: 20210108