CN117872385A - Combined depth measuring device - Google Patents

Combined depth measuring device

Info

Publication number
CN117872385A
Authority
CN
China
Prior art keywords
receiving end
range
transmitting end
depth
target object
Prior art date
Legal status
Pending
Application number
CN202410046376.XA
Other languages
Chinese (zh)
Inventor
杨煦
李文浩
葛启杰
黄龙祥
汪博
朱力
吕方璐
Current Assignee
Chongqing Guangjian Aoshen Technology Co ltd
Shenzhen Guangjian Technology Co Ltd
Original Assignee
Chongqing Guangjian Aoshen Technology Co ltd
Shenzhen Guangjian Technology Co Ltd
Priority date: 2023-07-18
Filing date: 2024-01-12
Publication date: 2024-04-12
Application filed by Chongqing Guangjian Aoshen Technology Co ltd and Shenzhen Guangjian Technology Co Ltd
Publication of CN117872385A

Landscapes

  • Measurement Of Optical Distance (AREA)

Abstract

The combined depth measuring device comprises a first transmitting end, a first receiving end, a second transmitting end and a second receiving end. The first receiving end receives the reflected signal of the light emitted by the first transmitting end and generates a first image, so as to measure a target object in a first range. The second receiving end receives the reflected signal of the light emitted by the second transmitting end and generates a second image, so as to measure a target object in a second range. The values of the first range are greater than the values of the second range. When the first transmitting end and the first receiving end detect that the target object is smaller than the first range, the second transmitting end and the second receiving end are started to perform measurement. The invention solves the problem that the measurement distance falls outside the depth-of-field range and, in particular at an extremely close measurement distance, exceeds the effective measurement range, so that the combined depth measuring device can measure quickly over the whole range, with fast response speed and high precision.

Description

Combined depth measuring device
Technical Field
The invention relates to the technical field of depth reconstruction, in particular to a combined depth measuring device.
Background
A 3D camera (also called a depth camera) can detect distance information of the photographed scene, which is its greatest difference from an ordinary camera. A picture taken by an ordinary color camera records every object within the camera's field of view, but the recorded data does not contain the distance of those objects from the camera. Which objects are far and which are near can only be judged by semantic analysis of the image; there is no exact data.
A 3D camera solves this problem: from the data it acquires, the distance between each point in the image and the camera is known exactly, so that, combined with the (x, y) coordinates of the point in the 2D image, the three-dimensional space coordinates of each point can be obtained. The real scene can then be restored from the three-dimensional coordinates, enabling applications such as scene modeling. The principles of several depth cameras are described below:
Structured light usually uses invisible laser light of a specific wavelength as the light source. Light carrying coded information is emitted and projected onto the object, and the distortion of the returned coded pattern is calculated by an algorithm to obtain the position and depth information of the object.
The time-of-flight (TOF) method obtains distance from the measured flight time of light: a processed light signal is emitted, is reflected back when it hits the object, and the round-trip time is captured; since the speed of light and the wavelength of the modulated light are known, the distance to the object can be calculated quickly and accurately.
Binocular stereo vision is an important form of machine vision. Based on the parallax principle, imaging devices acquire two images of the measured object from different positions, and the three-dimensional information of the object is obtained by calculating the position deviation between corresponding points of the two images.
A laser radar (lidar) is a radar system that detects characteristic quantities of a target, such as position and speed, by emitting a laser beam. Its working principle is to emit a detection signal (a laser beam) toward the target and to compare the received signal reflected from the target (the target echo) with the emitted signal; after suitable processing, information about the target can be obtained, such as its distance, azimuth, altitude, speed, attitude and even shape, so that targets such as aircraft and missiles can be detected, tracked and identified.
Different measurement technologies have different measurement ranges, but more and more application scenarios require a larger measurement range, in which measurement can be performed not only at a far distance but also at close range. The prior-art solution is to combine several different depth cameras to achieve full-range measurement, but this undoubtedly adds cost, occupies volume and increases power consumption.
The foregoing background is provided only to aid understanding of the inventive concept and technical solution of the present invention. It does not necessarily belong to the prior art of the present application, and, in the absence of clear evidence that the above content was disclosed before the filing date of the present application, it should not be used to evaluate the novelty and inventiveness of the present application.
Disclosure of Invention
Therefore, the invention performs reconstruction from the characteristics of the received-signal area, solving the problem that the measurement distance falls outside the depth-of-field range and, in particular at an extremely close measurement distance, exceeds the effective measurement range, so that the combined depth measuring device can measure quickly over the whole range, with fast response speed and high precision.
In a first aspect, the present invention provides a combined depth measuring device, which comprises a first transmitting end, a first receiving end, a second transmitting end and a second receiving end;
the first receiving end receives the reflected signal of the light emitted by the first transmitting end and generates a first image, so as to measure a target object in a first range;
the second receiving end receives the reflected signal of the light emitted by the second transmitting end and generates a second image, so as to measure a target object in a second range; wherein the values of the first range are greater than the values of the second range;
and when the first transmitting end and the first receiving end detect that the target object is smaller than the first range, the second transmitting end and the second receiving end are started to perform measurement.
Optionally, the combined depth measuring device is characterized in that, when the first transmitting end and the first receiving end detect that the target object is smaller than the first range, a bright area and a dark area of the first image are identified, the boundary between the bright area and the dark area is calculated, the calibration position obtained in the calibration stage is subtracted from the position of the boundary to obtain the parallax, and the depth of the target object is reconstructed according to the reconstruction principle, so that the second transmitting end and the second receiving end can be focused rapidly.
Optionally, the combined depth measurement device is characterized in that the second transmitting end, the first receiving end and the second receiving end are arranged in sequence.
Optionally, the combined depth measuring device is characterized in that the processing of the bright area and the dark area comprises:
step S31: calculating the bright area and the dark area according to the rows to obtain boundary pixel points of each row;
step S32: subtracting the positions of the boundary pixel points from the calibration positions to obtain parallax;
Step S33: reconstructing the depth of the target object according to the parallax.
Optionally, the combined depth measurement device is characterized by further comprising a third transmitting end and a third receiving end. The third transmitting end and the third receiving end have a smaller measurement range than the first transmitting end, the first receiving end, the second transmitting end and the second receiving end.
Optionally, the combined depth measurement device is characterized in that when the first transmitting end and the first receiving end detect that the target object is smaller than the first range, the second transmitting end and the second receiving end or the third transmitting end and the third receiving end are selectively started according to the depth value of the target object.
Optionally, the combined depth measurement device is characterized in that the third transmitting end, the second transmitting end, the first receiving end, the second receiving end and the third receiving end are arranged in sequence.
Optionally, the combined depth measuring device is characterized in that the rows are lines parallel to the baseline.
Optionally, the combined depth measuring device is characterized in that the depth D is reconstructed from the parallax Δx as D = f·B·H/(f·B + H·Δx), where H is the calibration depth, B is the baseline between the transmitting end and the receiving end, and f is the focal length of the receiving end.
In a second aspect, the present invention provides a combined depth measuring device, which comprises a first transmitting end, a first receiving end and a second transmitting end;
the first receiving end receives the reflected signal of the light emitted by the first transmitting end and generates a first image, so as to measure a target object in a first range;
the first receiving end receives the reflected signal of the light emitted by the second transmitting end and generates a second image, so as to measure a target object in a second range; wherein the values of the first range are greater than the values of the second range; the first transmitting end and the second transmitting end do not transmit at the same time;
when the first transmitting end and the first receiving end detect that the target object is smaller than the first range, the second transmitting end and the first receiving end are started to perform measurement.
Compared with the prior art, the invention has the following beneficial effects:
For the extremely close distances that cannot be effectively measured by a depth measurement technology, the boundary between the bright area and the dark area that the infrared signal of the depth measurement technology forms at the receiving end is compared with the calibration position obtained in the calibration stage, and the depth of the target object is reconstructed, so that the depth measurement technology can still obtain the depth value of the target object at an extremely close distance; this enlarges the measurement range of the depth measurement technology and suits more application scenarios.
By measuring depth at an extremely close distance, the invention can start the transmitting end and receiving end whose measurement range is closer, so that the measuring device can measure over the whole range and has a larger measurement range and a higher level of integration.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present invention, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art. Other features, objects and advantages of the present invention will become more apparent upon reading of the detailed description of non-limiting embodiments, given with reference to the accompanying drawings in which:
fig. 1 is a schematic structural diagram of a depth camera according to an embodiment of the present invention;
FIG. 2 is a graph showing a single speckle as a function of distance in an embodiment of the present invention;
FIG. 3 is a schematic view showing a speckle pattern according to a distance according to an embodiment of the present invention;
FIG. 4 is a speckle image at very close range in an embodiment of the invention;
FIG. 5 is a schematic diagram of a camera imaging according to an embodiment of the invention;
FIG. 6 is a graph of parallax versus depth according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of an object under test according to an embodiment of the present invention;
FIG. 8 is a calibration result of a range finder in an embodiment of the present invention;
FIG. 9 is a flowchart illustrating a method for deep reconstruction at very close range according to an embodiment of the present invention;
FIG. 10 is a flowchart illustrating steps for reconstructing a depth of a target object according to an embodiment of the present invention;
FIG. 11 is a flowchart illustrating another embodiment of a method for depth reconstruction at very close range;
FIG. 12 is a flowchart illustrating another step of reconstructing a depth of a target object according to an embodiment of the present invention;
FIG. 13 is a diagram of a lidar transmitter according to an embodiment of the present invention;
FIG. 14 is a schematic diagram of an array laser according to an embodiment of the present invention;
FIG. 15 is a schematic view of a combined depth measurement device according to an embodiment of the present invention;
fig. 16 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the present invention, but are not intended to limit the invention in any way. It should be noted that variations and modifications could be made by those skilled in the art without departing from the inventive concept. These are all within the scope of the present invention.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented, for example, in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The embodiment of the invention provides a novel depth camera measuring method, which aims to solve the problems in the prior art.
The following describes the technical scheme of the present invention and how the technical scheme of the present application solves the above technical problems in detail with specific embodiments. The following embodiments may be combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments. Embodiments of the present invention will be described below with reference to the accompanying drawings.
As shown in fig. 1, a depth camera typically includes two ends. A single-receiver depth camera includes a transmitting end and a receiving end. A multi-receiver depth camera includes a first sensor and a second sensor. It should be noted that if the depth camera has multiple receivers and also uses an active light source, processing may follow either the single-receiver scheme or the multi-receiver scheme; the choice between the two may be based on the imaging quality of the obtained depth image. A depth camera also typically includes a processor to complete the processing of the image and data. A single-receiver depth camera is described below as an example.
The transmitting end and the receiving end face in the same direction, so that the receiving end can receive the reflected signal to the greatest extent. Here, the transmitting end and the receiving end are located on the same plane. It should be noted that if the transmitting end and the receiver are not located in the same plane, but a certain depth difference exists, the receiving end is taken as a reference for measurement.
As shown in fig. 1, there is a certain effective range between the transmitting end and the receiving end. The transmitting end has a transmitting range and the receiving end has a receiving range. Taking the plane of the receiving end as the reference, two distances d1 and d2 are obtained from the intersection of the transmitting range and the receiving range. d1 is the depth value at which the transmitting range and the receiving range first intersect. d2 is the depth value at which the transmitting range completely covers the receiving range. In the prior art, d2 is the minimum measurement distance of the depth camera, and only depth values greater than d2 are effective measured depths. As the distance increases, the light from the transmitting end becomes weaker and the signal received by the receiving end becomes weaker, which defines the maximum measurement range. Of course, for iTOF, phase wrapping is also one criterion that determines the maximum measurement range. The invention, by contrast, can obtain depth values smaller than d2.
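For illustration only (the patent gives no numerical field-of-view values), the geometry of d1 and d2 can be sketched in Python under the assumption of two parallel, symmetric cones with hypothetical half-angles and baseline:

```python
import math

def coverage_distances(baseline_m, tx_half_angle_deg, rx_half_angle_deg):
    """Rough d1/d2 for two parallel, symmetric cones separated by a baseline.

    d1: depth at which the transmitting range and the receiving range first intersect.
    d2: depth at which the transmitting range fully covers the receiving range
        (finite only if the transmitting cone is wider than the receiving cone).
    """
    tt = math.tan(math.radians(tx_half_angle_deg))
    tr = math.tan(math.radians(rx_half_angle_deg))
    d1 = baseline_m / (tt + tr)
    d2 = baseline_m / (tt - tr) if tt > tr else float("inf")
    return d1, d2

# hypothetical values: 5 cm baseline, 35 deg transmit half-angle, 30 deg receive half-angle
print(coverage_distances(0.05, 35.0, 30.0))
```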
As shown in fig. 2, when the transmitting end emits structured light, the speckle becomes larger as the measurement distance becomes smaller, and becomes blurred when the distance is close enough. Because a single light spot grows as the distance decreases, the speckle pattern, as shown in fig. 3, gradually blurs as the measured distance decreases; the speckles disperse, blur and alias together and form a continuous bright area, so the deformation of a single speckle can no longer be obtained effectively and structured-light measurement fails. At an extremely close distance, when the receiving end exposes normally, an overexposed area often appears because the signal intensity is too high; this can be used to determine whether the target object is at an extremely close distance.
For TOF technology, when the measurement distance becomes smaller the light propagation time becomes shorter, so the error ratio at the receiver rises sharply; limited by the precision and frequency of devices such as the single-photon avalanche diode (SPAD), TOF cannot obtain valid measurement data at close distance.
For the binocular technique, when the measured distance is less than d2, the common field-of-view area of the first sensor and the second sensor is too small and the obtained image quality cannot be guaranteed, so the complete images cannot be matched; as a result, the images obtained by the first sensor and the second sensor cannot be computed reliably and no valid data can be obtained.
The invention is used for depth reconstruction beyond the depth of field of the depth-camera lens, where, for the reasons above, reconstruction cannot be completed by conventional depth-reconstruction methods. The invention completes depth reconstruction at extremely close distance by an algorithm, without changing any hardware.
The invention reconstructs depth using the property that the smaller the depth, the larger the parallax. This property is a fundamental property of visual measurement with binocular cameras or other depth cameras and is independent of camera type, camera parameters or structured-light type.
For a single-receiver depth camera, as the depth of the target object decreases, the FOV overlap between the transmitting end and the receiving end becomes smaller and smaller. The image received by the receiving end is therefore bright on one side and dark on the other. Taking a speckle pattern as an example, as shown in fig. 4, the bright side is caused by speckle illumination; at this point the speckles cannot be distinguished in the image because of dispersion, blurring and aliasing. The dark side lies outside the projection range of the projector, and its brightness is significantly lower because of the lack of speckle illumination.
A distinct boundary appears between the bright and dark areas. The relation between the position of this boundary in the image and the depth of the measured object is exactly the same as the relation between parallax and the depth of the measured object. That is, the fully blended bright area can be treated as a whole, and the boundary of the bright area in the reference image can be acquired during the calibration stage.
As shown in fig. 5, the calibration depth is H and the measurement depth is D. The light projected by the projection end Tx is reflected by the target object and received on the imaging plane of the receiving end Rx. The baseline between the projection end Tx and the receiving end Rx is B, and the focal length of the receiving end Rx is f. The deviation between the target object and the calibration object on the imaging plane is the parallax Δx. Since Δx = f·B·(H − D)/(H·D), the measurement depth in this example is D = f·B·H/(f·B + H·Δx).
The parallax is obtained by subtracting the boundary position obtained in the calibration stage from the boundary position located during measurement, and the depth of the target object is reconstructed from the parallax according to the reconstruction principle. The relation between parallax and depth is shown in fig. 6: the smaller the depth, the greater the parallax, which means that the closer the distance, the more accurate the measurement of this embodiment, so it complements existing depth measurement techniques well.
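As a purely illustrative sketch of this triangulation relation (the baseline, focal length in pixels and calibration depth below are hypothetical values, not taken from the patent):

```python
def depth_from_parallax(parallax_px, baseline_m, focal_px, calib_depth_m):
    """Depth D from the boundary parallax: dx = f*B*(H - D)/(H*D)  =>  D = f*B*H/(f*B + H*dx)."""
    fb = focal_px * baseline_m
    return fb * calib_depth_m / (fb + calib_depth_m * parallax_px)

# hypothetical: 5 cm baseline, 600 px focal length, calibration plane at 0.5 m
for dx in (0, 30, 120, 480):           # parallax in pixels
    print(dx, round(depth_from_parallax(dx, 0.05, 600.0, 0.5), 4))
```

The printed depths shrink as the parallax grows, which is exactly the property used here: the closer the object, the larger the parallax.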
The light projected by the transmitting end is rectangular, and the receiver is also rectangular, while the target object may have any shape. As shown in fig. 7, when the object to be measured is arc-shaped, the image received by the receiver has no vertical boundary but a boundary line of irregular shape. Therefore, when calculating the depth value, the depth value is calculated row by row for the pixels in the image. What this embodiment obtains is the depth value at the intersection of the boundary of the transmitting range of the transmitting end with the target object, and this depth value at the intersection is taken to represent the depth value of the target object.
The surface reflectivity and surface flatness of the target object affect the reflection of light. Since the intensity of the light emitted by the transmitter is relatively high in this embodiment, the reflectivity of the target object has little effect on the imaging of the receiver and is not discussed further here. When the surface of the target object is flat, diffuse reflection is weak and the boundary in the image generated by the receiver is clearer. When the surface of the target object is uneven, diffuse reflection is obvious and the boundary in the image generated by the receiver is blurred.
In some embodiments, the boundary is compensated by calibration to obtain an accurate boundary value. This embodiment is suitable for fixed measurement objects, for example specific outer packaging in logistics, and for scenes with similar materials or surfaces in various industrial sites such as warehouses and garages. Fig. 8 is a table comparing depth values measured with the structured-light camera against depth values measured with a range finder. From the data in the table of fig. 8, the boundary can be calibrated and compensated to obtain a more accurate boundary value.
In some embodiments, the boundary of the image is made clearer by adjusting the exposure time of the receiving end, so as to obtain an accurate boundary value. When the depth camera is close enough to the target object, the light intensity received by the receiving end is too high, which causes overexposure. In the image the bright area and the boundary lie on the same surface, so the brightness at the boundary should equal that of the bright area; in practice, however, when a large number of pixels are overexposed, the boundary points are overexposed as well. The exposure time of the receiving end is therefore adjusted so that a predetermined proportion of the pixels in the bright area of the image are overexposed. The predetermined proportion is any value in the interval (90%, 100%). If the predetermined proportion is less than 90%, the boundary value is not clear enough. If the predetermined proportion reaches 100%, the power of the transmitting end is too high, and even diffuse reflection at the boundary may overexpose pixels outside the boundary, so that a wrong boundary point is identified. The exposure adjustment of the receiving end is carried out row by row, so that the proportion of overexposed pixels in each row of the image equals the predetermined proportion.
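A minimal control-loop sketch of this adjustment, for illustration only; the capture function, step sizes and saturation level are assumptions, not values from the patent:

```python
import numpy as np

def adjust_exposure(capture_bright_row, exposure_us, sat=255, max_iters=10):
    """Nudge exposure until the overexposed fraction of a bright-area row falls in (90%, 100%).

    `capture_bright_row(exposure_us)` is assumed to return the bright-area pixels of
    one image row as a uint8 array; the multiplicative step sizes are arbitrary.
    """
    for _ in range(max_iters):
        row = capture_bright_row(exposure_us)
        ratio = float(np.mean(row >= sat))
        if 0.90 < ratio < 1.00:
            break
        exposure_us *= 1.2 if ratio <= 0.90 else 0.8
    return exposure_us
```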
In some embodiments, the boundary of the image is made clearer by adjusting the transmitting power of the transmitting end, so as to obtain an accurate boundary value. The principle is the same as in the foregoing embodiment; here the power of the transmitting end is adjusted so that the predetermined proportion of the bright-area pixels is overexposed. The power adjustment of the transmitting end is carried out row by row, so that the proportion of overexposed pixels in each row of the image equals the predetermined proportion.
In some embodiments, an average value is calculated row by row over the bright area of the image, and the point at the boundary whose value equals this average is taken as the boundary pixel point. Before the boundary between the bright and dark areas is determined, the calculation may be performed with the boundary region excluded from the bright area. For example, in a certain row of the image the position range is [1, 1024], [1, 200] is the dark area, (200, 240) is the boundary region and [240, 1024] is the bright area; the average brightness a is calculated from the pixels in [240, 1024], and within (200, 240), searching from 240 toward 200, the first pixel whose brightness reaches a is the boundary pixel point of that row. The boundary region can be determined in various ways, for example as the region whose brightness lies within a certain range, or by expanding a fixed width from the dark area.
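A row-wise sketch of this average-based search, using the column indices of the example above purely as illustrative assumptions:

```python
import numpy as np

def boundary_by_average(row, dark_end=200, bright_start=240):
    """Boundary pixel of one row: scanning from the bright side, the first pixel whose
    value has dropped to (or below) the bright-area average brightness."""
    a = row[bright_start:].mean()                   # average brightness of the bright area
    for col in range(bright_start, dark_end, -1):   # search from 240 down toward 200
        if row[col] <= a:
            return col
    return dark_end                                 # fallback: no crossing found in the window
```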
In some embodiments, the gray-level change rate is calculated in the boundary region, and the point of maximum change rate is taken as the boundary point. The calculation is carried out row by row. The boundary pixel point is the dividing point between reflection and diffuse reflection, so the gray-level change rate there is the largest. By solving for the gray-level change rate, an accurate boundary point can be obtained without any other adjustment, which makes this method adaptive and low in cost.
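The corresponding sketch, taking the largest absolute gray-level gradient within an assumed boundary window as the boundary column:

```python
import numpy as np

def boundary_by_gradient(row, win_start=200, win_end=240):
    """Boundary column = position of the steepest gray-level change inside the window."""
    window = row[win_start:win_end].astype(np.float32)
    return win_start + int(np.argmax(np.abs(np.diff(window))))
```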
In some embodiments, an adaptive thresholding method is used to convert the speckle image into a binary image, and the boundary is then obtained through connected-domain labelling and an edge-finding algorithm. This embodiment can be carried out with various existing algorithms and has the characteristics of maturity and stability.
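One possible realization of this variant uses OpenCV; the block size, offset and the choice of the largest connected component are illustrative assumptions:

```python
import cv2
import numpy as np

def boundary_by_threshold(speckle_img):
    """Adaptively binarize an 8-bit grayscale speckle image, keep the largest connected
    bright region, and return its outer contour as the bright/dark boundary."""
    binary = cv2.adaptiveThreshold(speckle_img, 255,
                                   cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY, 31, -5)
    num, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    if num < 2:                                           # nothing but background
        return None
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    mask = (labels == largest).astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    return max(contours, key=cv2.contourArea)
```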
Fig. 9 is a flowchart illustrating steps of a depth reconstruction method under a very close distance according to an embodiment of the present invention. As shown in fig. 9, a depth reconstruction method under a very close range includes the following steps:
step S1: and transmitting infrared light through the transmitting end, receiving a reflected signal of the infrared light through the receiving end, and generating an image.
In this step, the transmitting end and the receiving end are used in pairs, and the infrared light emitted by the transmitting end may be either structured light or floodlight or laser. Namely, the embodiment is suitable for various depth measurement technologies such as structured light, TOF, laser radar and the like.
Step S2: and detecting the image, and if a bright area and a dark area exist in the image and the bright area is a connected complete area, executing step S3.
In this step, unlike the various depth measurement techniques such as structured light, TOF and lidar, what is recognized is an image from which those techniques cannot obtain a depth value. For a structured-light camera at a very close distance, the structured-light spots merge and are difficult to distinguish, so a depth value cannot be obtained with the structured-light technique. For a TOF camera at a very close distance, part of the receiving-end area receives the reflected signal and is exposed, while the other part receives no reflected signal and its exposed image is very dark. The same holds for lidar as for TOF: there is a very dark region. Unlike the bright and dark areas formed by normal exposure, the dark area in this step is an area with very weak exposure, i.e. the brightness level of the dark area is very low, typically less than 10, while the bright area forms a complete connected region.
Step S3: calculating the boundary between the bright area and the dark area, subtracting the position of the boundary from the calibration position obtained in the calibration stage to obtain parallax, and reconstructing the depth of the target object according to the reconstruction principle.
In this step, the step of calculating the depth of the target object is referred to the foregoing embodiments, and will not be described herein.
In this embodiment, recognition is carried out at an extremely close distance by actively emitting light, depth is identified from the signal boundary at the receiving end, and reconstruction is carried out from the characteristics of the received-signal area. This solves the problem that the measurement distance lies outside the depth-of-field range and, in particular at an extremely close measurement distance, exceeds the effective measurement range so that three-dimensional reconstruction is impossible. The measurement range of the structured-light camera can thus be enlarged without changing any optical element or other hardware, and a robot can still maintain high-precision measurement under such extreme conditions.
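A minimal sketch of the check in step S2; the saturation and darkness thresholds and the use of OpenCV connected components are illustrative assumptions:

```python
import cv2
import numpy as np

def is_extremely_close(img, bright_thresh=200, dark_level=10):
    """Step S2 heuristic: a single connected bright blob plus a very dark remainder."""
    bright = (img >= bright_thresh).astype(np.uint8) * 255
    dark = img < dark_level
    num_labels, _ = cv2.connectedComponents(bright)
    one_bright_blob = (num_labels == 2)      # label 0 is background, label 1 the blob
    return one_bright_blob and bool(dark.any())
```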
FIG. 10 is a flowchart illustrating a method for reconstructing a depth of a target object according to an embodiment of the present invention. As shown in fig. 10, the step of reconstructing the depth of the target object according to the embodiment of the present invention includes:
step S31: calculating the bright area and the dark area according to the rows to obtain boundary pixel points of each row;
in this step, the transmitting end or the receiving end may be controlled to obtain a more accurate image; the specific methods are described in the foregoing embodiments and are not repeated here. The image is processed to obtain an accurate boundary between the bright area and the dark area; any of the boundary-identification methods of the foregoing embodiments may be used. The image is processed row by row to obtain the boundary pixel point of each row. In this specification, when an image is processed, a row refers to the direction in which the receiving end is parallel to the baseline. The definition of a row is unchanged regardless of any correction, warping or other post-processing applied to the image. If the image is rotated, the rows in this specification follow the rotation; that is, the rows in this specification are lines parallel to the baseline, defined with respect to the direction at the time of image generation rather than the coordinate system of the image.
Step S32: subtracting the positions of the boundary pixel points from the calibration positions to obtain parallax;
in this step, the calibration position is the position of the calibration pixel point of that row at the calibration depth. The parallax is obtained by subtracting the calibration position from the position of the boundary pixel point.
Step S33: reconstructing the depth of the target object according to the parallax.
In this step, the depth of the target object is obtained by calculation from the formula D = f·B·H/(f·B + H·Δx).
In this embodiment, the image is processed row by row to obtain the boundary pixel points and the parallax, and hence the depth of the target object, so the method can handle various irregular shapes and has wider applicability.
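Putting steps S31 to S33 together, a per-row sketch for illustration; the calibration columns, baseline, focal length and calibration depth are hypothetical inputs, and boundary detection here simply reuses the gradient idea above:

```python
import numpy as np

def reconstruct_depth_rows(img, calib_cols, baseline_m, focal_px, calib_depth_m):
    """S31-S33 sketch: per-row boundary -> parallax -> depth. Returns one depth per row."""
    depths = np.empty(img.shape[0], dtype=np.float32)
    fb = focal_px * baseline_m
    for r in range(img.shape[0]):
        row = img[r].astype(np.float32)
        boundary_col = int(np.argmax(np.abs(np.diff(row))))            # S31: steepest change
        dx_px = abs(boundary_col - calib_cols[r])                      # S32: parallax vs. calibration
        depths[r] = fb * calib_depth_m / (fb + calib_depth_m * dx_px)  # S33: triangulation
    return depths
```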
For a multi-receiver depth camera, as the depth of the target object keeps decreasing, the common field-of-view area of the first sensor and the second sensor becomes smaller and smaller, and the image regions that appear to overlap in the first image generated by the first sensor and the second image generated by the second sensor become smaller and smaller. By comparing the first image with the second image, the overlapping region and the comparison boundary of the overlapping region can be identified. The comparison boundary corresponds to the boundary between the bright and dark areas in the previous embodiment. The parallax can be obtained by comparing the position of the boundary with the calibration position from the calibration process, and the depth of the target object can then be obtained.
FIG. 11 is a flowchart illustrating another embodiment of a method for deep reconstruction at very close range. As shown in fig. 11, another method for reconstructing depth under a very short distance in an embodiment of the present invention, which is applicable to a multi-receiver depth camera, such as a binocular camera, includes the following steps:
step A1: the first sensor and the second sensor are exposed simultaneously to generate a first image and a second image;
in this step, the first sensor and the second sensor are sensors of the same type; they may be infrared sensors, RGB sensors or other types of sensors. The first sensor and the second sensor may refer not only to two sensors but to several: for example, with three sensors, any two of the three constitute the first sensor and the second sensor of this embodiment. The exposure parameters of the first sensor and the second sensor are also the same.
Step A2: comparing the first image with the second image, if only partial areas in the images are successfully compared and the partial areas are connected complete areas, marking the partial areas as overlapping areas, and executing a step A3;
In this step, the binocular camera calculates depth values from the parallax between the first image and the second image; but since the imaging quality of a binocular camera drops significantly at a very close distance, large areas are prone to overexposure or underexposure, and it is difficult to obtain a valid depth value simply by calculating parallax over the overlapping area of the first image and the second image. This step uses the size of the overlapping area of the images to identify the range of distances between the target object and the camera. If the target object is at a very close distance from the camera, i.e. only partial regions of the first image and the second image match successfully, step A3 is executed and the depth value is obtained from the comparison boundary. The connected complete region is a closed region in which all pixel points enclosed by its boundary are valid pixel points, i.e. there are no holes, bubbles or the like inside it.
Step A3: and reconstructing the depth of the target object according to a reconstruction principle by utilizing the overlapping region and the comparison boundary of the overlapping region and according to the parallax of the comparison boundary between the first image and the second image.
In this step, the comparison boundary of the overlapping region is identified. The subsequent calculation is similar to step S3, except that in step S3 the parallax is calculated between the boundary pixel points and those recorded at calibration, whereas in this step the parallax is calculated directly by comparing the boundary pixel points in the first image and the second image.
In this embodiment, according to the characteristics of a multi-receiver depth camera, a first image and a second image are obtained respectively and compared to obtain the comparison boundary, thereby establishing the junction of the sensing ranges of the first sensor and the second sensor, so that the depth value of the target object can be determined from the parallax.
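One simple way to realize the comparison in step A2 is a brute-force search for the column shift that best aligns the two images; this is only an illustrative assumption about the matching step, not necessarily how the device implements it:

```python
import numpy as np

def find_overlap_shift(img1, img2, max_shift=512):
    """Step A2 sketch: the column shift with the smallest mean absolute difference over
    the shared region. The left edge of the shifted overlap is the comparison boundary."""
    w = img1.shape[1]
    best_shift, best_err = 0, np.inf
    for s in range(1, min(max_shift, w)):
        err = np.mean(np.abs(img1[:, s:].astype(np.float32) -
                             img2[:, :w - s].astype(np.float32)))
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift
```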
FIG. 12 is a flowchart illustrating another step of reconstructing a depth of a target object according to an embodiment of the present invention. As shown in fig. 12, another step of reconstructing a depth of a target object according to an embodiment of the present invention includes:
step A31: calculating the overlapping area according to the rows to obtain comparison boundary pixel points of each row;
in this step, the first sensor and the second sensor also adjust their exposure parameters so as to maximize the sharpness of the first image and the second image, thereby improving the accuracy with which the overlapping region is determined. The images are processed row by row to obtain the comparison-boundary pixel point of each row. Unlike the foregoing embodiment, in this step the comparison-boundary pixel points are determined from the matching result over the overlapping region.
In some embodiments, the exposure time of the first sensor and the second sensor is further lengthened or shortened according to the imaging content of the first image and the second image, so that the comparison boundary of the overlapping region becomes clearer. For this embodiment the image content of the overlapping region is what matters, while the content of the other regions is irrelevant; taking the overlapping region as the target of the adjustment allows the comparison boundary to be identified better.
Step A32: subtracting the positions of the contrast boundary pixel points in the first image and the second image to obtain parallax;
step A33: reconstructing the depth of the target object according to the parallax.
In this step, the depth of the target object can be calculated from the parallax according to the binocular triangulation relation D = f·B/Δx, where B is the baseline between the first sensor and the second sensor and f is the focal length.
In this embodiment, the overlapping region is calculated and the boundary pixel points are processed row by row, which gives better results for target objects with uneven surfaces and yields a more accurate and finer depth.
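A per-row sketch of steps A31 to A33 under the same simplifying assumptions as above (rectified images, hypothetical baseline and focal length):

```python
import numpy as np

def depth_from_two_images(img1, img2, baseline_m, focal_px):
    """A31-A33 sketch: comparison-boundary columns in both images -> disparity -> depth."""
    depths = []
    for r1, r2 in zip(img1.astype(np.float32), img2.astype(np.float32)):
        c1 = int(np.argmax(np.abs(np.diff(r1))))     # A31: boundary column in image 1
        c2 = int(np.argmax(np.abs(np.diff(r2))))     #      and in image 2
        d = abs(c1 - c2)                             # A32: disparity in pixels
        depths.append(focal_px * baseline_m / d if d else float("inf"))   # A33
    return np.array(depths)
```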
Lidar is a special depth measurement technique and is one form of d-TOF, so the description of the previous embodiments also applies to lidar.
Because a lidar has a long measuring distance and a small measuring angle, applying this technique to lidar can greatly increase the effective measuring range of the lidar.
As shown in fig. 13, when the lidar transmitter covers the target space multiple times, the lidar transmitter includes at least two transmitting units, each at a different distance from the receiver. The transmitting units transmit in sequence to achieve full coverage of the target space. The transmitting order of the units is not simply left to right or right to left in their arrangement; it shows a certain jumping order, and eventually all transmitting units have transmitted. A transmitting unit may be a single laser or several lasers. Each transmitting unit has a different baseline to the receiver, so the baseline value corresponding to that transmitting unit must be used when calculating depth. The multiple transmitting units and multiple receivers measure different ranges respectively; compared with transmitting from the whole lidar transmitter at once, more accurate results and a larger measurement range can be obtained.
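A minimal sketch of the per-unit baseline bookkeeping; the unit identifiers, baselines and default parameters are illustrative assumptions only:

```python
# hypothetical baselines (metres) from each transmitting unit to the receiver
UNIT_BASELINES_M = {0: 0.020, 1: 0.035, 2: 0.050}

def depth_for_unit(parallax_px, unit_id, focal_px=600.0, calib_depth_m=0.5):
    """Use the baseline of the transmitting unit that produced this return."""
    fb = focal_px * UNIT_BASELINES_M[unit_id]
    return fb * calib_depth_m / (fb + calib_depth_m * parallax_px)
```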
In some embodiments, the device further includes a cylindrical lens, configured to collimate the light beams emitted by the at least two transmitting units in the fast-axis direction and to merge the several beams leaving the light outlets into one uniform beam on the target surface.
Specifically, the cylindrical lens is a lens with a fixed position. This embodiment collimates the beam only in the fast-axis direction and does not need to collimate it in the slow-axis or other directions, which lowers the requirements on the cylindrical lens while ensuring the projection effect and keeping the cost low. The cylindrical lens collimates and shapes the laser beams emitted by all the transmitting units so that, after passing through it, they merge into one uniform beam on the target surface. The light-emitting surface of the transmitting unit is located at the focus of the cylindrical lens; the light-emitting surface is the surface formed by the several light outlets. The cylindrical lens has a certain curvature to shape the beam. The distance between the cylindrical lens and the transmitting unit is 0.1-1 mm, so as to minimize the size of the lidar light source. The cylindrical lens may be an ordinary lens, a superlens, or the like; when it is a superlens, the size of the lidar light source can be greatly reduced.
In some embodiments, as shown in fig. 14, the plurality of transmitting units form an array laser and share an anode or a cathode. The light outlets of the active areas are arranged adjacently; the cathodes or the anodes of the active areas share one electrode, while the anodes or the cathodes are separate. Taking the common-cathode case as an example, a plurality of lasers 10 are arranged on the substrate 13. The cathodes of the lasers 10 are connected to the plate 11 through gold wires 12, so that the lasers 10 share a cathode, which is convenient for control. The anodes 14 of the lasers 10 are connected to the laser emitters in the substrate 13 for controlling the emission of the lasers. A light outlet 15 is provided on the other side of the substrate 13, one for each laser. The gold wires 12 and the plate 11 are made of conductive material, and the substrate 13 is an insulating material. The plate 11 not only conducts electricity but also constrains the lasers 10, so that the outgoing beam of the array laser 1 is more stable. The lasers are closely attached to one another, which makes their relative positions firmer and the directions of the beams emitted from the light outlets 15 more consistent and uniform.
As shown in fig. 15, a combined depth measuring device includes at least two transmitting ends and at least two receiving ends, arranged at intervals. One transmitting end works synchronously with at least two receiving ends, and one receiving end can receive the reflected signals of at least two transmitting ends. The receiving end processes data according to structured-light or TOF technology, and when normal data acquisition cannot be completed, the method of the foregoing embodiments is executed.
In some embodiments, the data are processed in single-receiver fashion: the depth information measured by each receiving end is obtained first, and the depth information obtained by the two receiving ends is then combined. If either of the at least two receiving ends obtains depth data by the structured-light or TOF technology, that depth data is taken as the final depth data. If no valid structured-light or TOF depth data is obtained at either of the at least two receiving ends, the two receiving ends are each processed according to the foregoing embodiments to obtain a first depth and a second depth, and the first depth and the second depth are averaged row by row to obtain the final depth value of the target object.
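An illustrative sketch of this fusion rule; the convention that an invalid result is represented by None is an assumption made for the example:

```python
import numpy as np

def fuse_depths(tof_depth1, tof_depth2, boundary_depth1, boundary_depth2):
    """Prefer a valid structured-light/TOF result; otherwise average, row by row,
    the two boundary-based reconstructions from the two receiving ends."""
    if tof_depth1 is not None:
        return tof_depth1
    if tof_depth2 is not None:
        return tof_depth2
    return (np.asarray(boundary_depth1) + np.asarray(boundary_depth2)) / 2.0
```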
In some embodiments, the data are processed in multi-receiver fashion: at least two receiving ends form a binocular system, and the first image and the second image are evaluated and compared. When at least one of the first image and the second image is completely clear, the completely clear image is taken as the valid depth image. When the first image and the second image are both incomplete and unclear, they are compared to identify the overlapping region and the comparison boundary, and the depth of the target object is then calculated from the parallax of the comparison boundary between the first image and the second image.
In some embodiments, a combined depth measuring device includes a first transmitting end, a first receiving end, a second transmitting end and a second receiving end. The first receiving end receives the reflected signal of the light emitted by the first transmitting end; the second receiving end receives the reflected signal of the light emitted by the second transmitting end. The first receiving end and the first transmitting end measure target objects in a first range, and the second receiving end and the second transmitting end measure target objects in a second range. The values of the first range are greater than the values of the second range; for example, the first range is 5 m-200 m and the second range is 0.5 m-5 m. The combined depth measuring device is arranged in the order of the second transmitting end, the first receiving end and the second receiving end, so as to improve measurement precision. When the first transmitting end and the first receiving end detect that the target object is smaller than the first range, the depth value of the target object is obtained according to the method of the foregoing embodiments, and the second transmitting end and the second receiving end are started for more accurate measurement, so that the second transmitting end and the second receiving end can be focused rapidly and the target object can be acquired accurately over a larger range. Compared with a simple superposition of depth cameras with two different measurement ranges, this embodiment focuses faster and therefore responds faster, which is of great significance for many scenes with high requirements on measurement speed.
In some embodiments, a third transmitting end and a third receiving end are further included. The third transmitting end and the third receiving end have a smaller measurement range than the first transmitting end, the first receiving end, the second transmitting end and the second receiving end. When the first transmitting end and the first receiving end detect that the target object is smaller than the first range, the depth value of the target object is obtained according to the method of the foregoing embodiments, and the second transmitting end and the second receiving end, or the third transmitting end and the third receiving end, can be selectively started according to that depth value, achieving finer division and data acquisition.
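A minimal sketch of this selection logic; the 5 m and 0.5 m limits echo the example above, and the range of the third pair is an assumption:

```python
def select_pair(coarse_depth_m, first_min_m=5.0, second_min_m=0.5):
    """Pick which transmitting/receiving pair to start from the coarse boundary-based depth."""
    if coarse_depth_m >= first_min_m:
        return "first"    # already inside the first range, keep measuring with the first pair
    if coarse_depth_m >= second_min_m:
        return "second"   # start the second transmitting end and second receiving end
    return "third"        # extremely close: start the third pair (assumed range below 0.5 m)
```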
This embodiment includes at least two transmitting ends and at least two receiving ends; through combinations of transmitting and receiving ends it can measure over a larger range than the prior art while also obtaining valid depth data at shorter distances, which greatly increases the breadth and depth of depth measurement and benefits the popularization and application of depth measurement technology.
Fig. 16 shows a schematic structural diagram of an electronic device in an embodiment of the present invention. The photographing end of the electronic device 801 is provided with a depth camera module 802, which is the depth camera of any of the foregoing embodiments. Depending on the device, the depth camera module 802 may be located differently, either at the edge or in the middle. Although fig. 16 shows a drone, those skilled in the art will appreciate that the electronic device described in this embodiment may be a mobile phone, a scanner, a drone, a robot, an automobile, a ship, or the like.
Components of electronic device 801 may include, but are not limited to: at least one processing unit, at least one memory unit, a bus connecting the different platform components (including the memory unit and the processing unit), a display unit, etc.
The memory unit stores program code that can be executed by the processing unit.
The bus may be one or more of several types of bus structures including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The display unit may be configured to display the depth map acquired in any one of the foregoing embodiments, thereby completing the presentation of the depth map. The display unit may be integrated on the electronic device 801 or may be externally connected, and a corresponding interface is provided by the electronic device 801. The interface may be of various interface types such as HDMI, USB, etc.
The electronic device may also communicate with one or more external devices (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device, and/or with any device (e.g., router, modem, etc.) that enables the electronic device to communicate with one or more other computing devices. Such communication may be through an input/output (I/O) interface. And, the electronic device may also communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet, through a network adapter. The network adapter may communicate with other modules of the electronic device via a bus.
In this specification, the embodiments are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the identical and similar parts of the embodiments may be referred to one another. The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing describes specific embodiments of the present invention. It is to be understood that the invention is not limited to the particular embodiments described above, and that various changes and modifications may be made by one skilled in the art within the scope of the claims without affecting the spirit of the invention.

Claims (10)

1. A combined depth measuring device, characterized by comprising a first transmitting end, a first receiving end, a second transmitting end and a second receiving end;
the first receiving end receives the reflected signal of the light emitted by the first transmitting end and generates a first image, so as to measure a target object in a first range;
the second receiving end receives the reflected signal of the light emitted by the second transmitting end and generates a second image, so as to measure a target object in a second range; wherein the values of the first range are greater than the values of the second range;
and when the first transmitting end and the first receiving end detect that the target object is smaller than the first range, the second transmitting end and the second receiving end are started to perform measurement.
2. The combined depth measurement device according to claim 1, wherein when the first transmitting end and the first receiving end detect that the target object is smaller than the first range, a bright area and a dark area of the first image are identified, boundaries of the bright area and the dark area are calculated, a parallax is obtained by subtracting the positions of the boundaries from the calibration positions obtained in the calibration stage, and the depth of the target object is reconstructed according to a reconstruction principle, so that the second transmitting end and the second receiving end can be focused rapidly.
3. The combined depth measuring device of claim 1, wherein the second transmitting end, the first receiving end, and the second receiving end are arranged in that order.
4. A combined depth measurement device according to claim 1, wherein the processing of bright and dark areas comprises:
step S31: calculating the bright area and the dark area according to the rows to obtain boundary pixel points of each row;
step S32: subtracting the positions of the boundary pixel points from the calibration positions to obtain parallax;
step S33: reconstructing the depth of the target object according to the parallax.
5. The combined depth measuring device according to claim 1, further comprising a third transmitting end and a third receiving end, wherein the third transmitting end and the third receiving end have a smaller measurement range than the first transmitting end, the first receiving end, the second transmitting end and the second receiving end.
6. The combined depth measuring device according to claim 5, wherein, when the first transmitting end and the first receiving end detect that the target object is smaller than the first range, the second transmitting end and the second receiving end, or the third transmitting end and the third receiving end, are selectively started based on the depth value of the target object.
7. The combined depth measuring apparatus of claim 5, wherein the third transmitting end, the second transmitting end, the first receiving end, the second receiving end, and the third receiving end are arranged in this order.
8. A combined depth measuring device according to claim 4, wherein the rows are lines parallel to the baseline.
9. A combined depth measuring device according to claim 4, wherein the depth D of the target object is reconstructed from the parallax Δx as D = f·B·H/(f·B + H·Δx), where H is the calibration depth, B is the baseline between the transmitting end and the receiving end, and f is the focal length of the receiving end.
10. A combined depth measuring device, characterized by comprising a first transmitting end, a first receiving end and a second transmitting end;
the first receiving end receives the reflected signal of the light emitted by the first transmitting end and generates a first image, so as to measure a target object in a first range;
the first receiving end receives the reflected signal of the light emitted by the second transmitting end and generates a second image, so as to measure a target object in a second range; wherein the values of the first range are greater than the values of the second range; the first transmitting end and the second transmitting end do not transmit at the same time;
when the first transmitting end and the first receiving end detect that the target object is smaller than the first range, the second transmitting end and the first receiving end are started to perform measurement.
CN202410046376.XA 2023-07-18 2024-01-12 Combined depth measuring device Pending CN117872385A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202310882092 2023-07-18
CN202310882092X 2023-07-18

Publications (1)

Publication Number Publication Date
CN117872385A 2024-04-12

Family

ID=90591425

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410046376.XA Pending CN117872385A (en) 2023-07-18 2024-01-12 Combined depth measuring device

Country Status (1)

Country Link
CN (1) CN117872385A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination