WO2021218196A1 - Depth imaging method and device and computer-readable storage medium - Google Patents

Depth imaging method and device and computer-readable storage medium

Info

Publication number
WO2021218196A1
WO2021218196A1 PCT/CN2020/138118 CN2020138118W WO2021218196A1 WO 2021218196 A1 WO2021218196 A1 WO 2021218196A1 CN 2020138118 W CN2020138118 W CN 2020138118W WO 2021218196 A1 WO2021218196 A1 WO 2021218196A1
Authority
WO
WIPO (PCT)
Prior art keywords
light source
speckle
module
image
pixel
Prior art date
Application number
PCT/CN2020/138118
Other languages
English (en)
French (fr)
Inventor
徐玉华
肖振中
徐彬
余宇山
Original Assignee
奥比中光科技集团股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 奥比中光科技集团股份有限公司 (Orbbec Inc.)
Publication of WO2021218196A1
Priority to US17/830,010 (published as US20220299314A1)

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/22Measuring arrangements characterised by the use of optical techniques for measuring depth
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/24Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B11/2513Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object with several lines being projected in more than one direction, e.g. grids, patterns
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/24Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/24Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B11/2518Projection by scanning of the object
    • G01B11/2527Projection by scanning of the object with phase change by in-plane movement of the patern
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B26/00Optical devices or arrangements for the control of light using movable or deformable optical elements
    • G02B26/08Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light
    • G02B26/0808Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light by means of one or more diffracting elements
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B26/00Optical devices or arrangements for the control of light using movable or deformable optical elements
    • G02B26/08Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light
    • G02B26/0875Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light by means of one or more refracting elements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/521Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/593Depth or shape recovery from multiple images from stereo images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10141Special mode during image acquisition
    • G06T2207/10152Varying illumination

Definitions

  • The present invention relates to the field of three-dimensional imaging technology, and in particular to a depth imaging method, a device, and a computer-readable storage medium.
  • The structured light method is an active optical ranging technique. Its basic principle is that a structured light projector projects a controllable speckle or fringe pattern onto the surface of the object under measurement, an image sensor captures the image, and the depth of the object is obtained by triangulation from the geometric relationships of the system.
  • Current structured light 3D reconstruction techniques include single-frame structured light reconstruction and multi-frame structured light reconstruction.
  • Among single-frame techniques, the structured light 3D reconstruction method based on speckle matching (as used in products such as the Kinect V1 and the Orbbec Astra) typically matches a captured speckle image of the target scene against a pre-stored reference image to obtain a disparity map, and computes the depth or 3D structure of the target scene from the disparity map and the calibration parameters of the measurement system.
  • The advantages of this method are low cost and higher achievable frame rates, making it suitable for 3D reconstruction of moving objects; the disadvantage is limited measurement accuracy.
  • Among multi-frame techniques, the structured light 3D reconstruction method based on Gray code is widely used. It usually requires projecting three or more frames of phase-shifted fringe patterns onto the target scene; because a single-frequency phase-shift pattern yields only the relative phase, obtaining the absolute phase additionally requires projecting several frames at different frequencies.
  • The advantage of this method is higher measurement accuracy, making it better suited to high-precision 3D reconstruction of static objects; the disadvantages are a complex transmitter structure and a complex algorithm, which lead to higher cost.
  • The prior art therefore lacks a depth imaging method and device that combine high measurement accuracy with low cost.
  • To solve this problem, the present invention provides a depth imaging method, a device, and a computer-readable storage medium.
  • A depth imaging method includes the following steps. S1: controlling a transmitting module to emit at least two time-sequenced speckle patterns toward a target object. S2: controlling a collection module to collect the speckle patterns reflected by the target object. S3: performing space-time stereo matching between the collected speckle patterns and at least two pre-stored reference speckle patterns to calculate the offset of each pixel, and calculating the depth of each pixel from the offset.
  • In one embodiment, the emission module contains a plurality of discrete sub-light-source arrays, and the individual sub-arrays are controlled to turn on independently, or several simultaneously, so that at least two time-sequenced speckle patterns are emitted toward the target object.
  • In another embodiment, the beam emitted by the emission module is controlled to be deflected before at least two time-sequenced speckle patterns are emitted toward the target object.
  • Normalized cross-correlation matching is used to calculate the offset of each pixel. The specific formula is:

$$\mathrm{ncc}(x,y,d)=\frac{\sum_{i=1}^{K}\sum_{(u,v)\in\Omega(x,y)}\left(I_{i,R}(u,v)-\bar{I}_{R}\right)\left(I_{i,O}(u-d,v)-\bar{I}_{O}\right)}{\sqrt{\sum_{i=1}^{K}\sum_{(u,v)\in\Omega(x,y)}\left(I_{i,R}(u,v)-\bar{I}_{R}\right)^{2}\;\sum_{i=1}^{K}\sum_{(u,v)\in\Omega(x,y)}\left(I_{i,O}(u-d,v)-\bar{I}_{O}\right)^{2}}}$$

where d is the disparity value; i is the image sequence index; ncc(x, y, d) is the normalized correlation between the image blocks centered at pixel (x, y) in the K time-sequenced reference speckle images $I_{i,R}$ and the image blocks centered at pixel (x − d, y) in the K time-sequenced captured speckle images $I_{i,O}$; Ω(x, y) is the neighborhood centered at (x, y); and $\bar{I}_{R}$ and $\bar{I}_{O}$ are the mean pixel gray levels over the three-dimensional windows of the reference and captured speckle images, respectively. The depth value of the pixel is then calculated from the offset by triangulation:

$$Z=\frac{b\,f\,Z_{0}}{b\,f+d\,Z_{0}}$$

where d is the disparity value; b is the baseline length from the light source of the emission module to the camera of the acquisition module; $Z_0$ is the distance from the plane containing the emission and acquisition modules to the reference plane; f is the focal length of the camera; and Z is the depth value of the pixel.
  • The speckle patterns are collected with a forward-backward (sliding) frame acquisition scheme to calculate the depth values of the pixels.
  • The present invention also provides a depth imaging device, including: a transmitting module for emitting at least two time-sequenced speckle patterns toward a target object; a collection module for collecting the speckle patterns reflected by the target object; and a control and processor connected to the transmitting module and the collection module, respectively, configured to implement any of the methods above.
  • The emission module includes a light source array comprising a plurality of discrete sub-light-source arrays; each sub-array is independently controlled in groups, and the grouped control turns the sub-arrays on individually or several synchronously to generate the time-sequenced speckle patterns.
  • A beam deflection unit connected to the emission module is used to deflect the beam emitted by the light source array of the emission module to generate the time-sequenced speckle patterns.
  • The emission module includes a light source and an optical element connected in sequence, the optical element being a lens or a diffractive optical element; the beam deflection unit is attached to any one of the light source, the lens, or the diffractive optical element and is configured to move or deflect that component in one or more directions.
  • The present invention further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of any of the foregoing methods.
  • The beneficial effects of the present invention are: a depth imaging method, device, and computer-readable storage medium in which the emission module is controlled to emit at least two time-sequenced speckle patterns, the collection module collects the patterns reflected by the target object, and the captured patterns are matched against pre-stored reference speckle patterns to compute a per-pixel offset from which each pixel's depth value is calculated. On the basis of the traditional stereo matching method, time-sequence information is added and, following the space-time stereo matching principle, a three-dimensional window is used for stereo matching, achieving low-cost, high-accuracy, high-frame-rate depth imaging.
  • Further, the present invention provides a purely software improvement of the control and processor that obtains low-cost, high-accuracy, high-frame-rate depth imaging.
  • Still further, the present invention provides a combined hardware and software improvement that obtains low-cost, high-accuracy, high-frame-rate depth imaging.
  • Fig. 1 is a schematic structural diagram of a depth imaging device in an embodiment of the present invention.
  • FIG. 2 is a schematic diagram of a speckle pattern emitted by a VCSEL sub-array in a transmitting module according to an embodiment of the present invention.
  • Fig. 3 is a schematic structural diagram of a first emission module in an embodiment of the present invention.
  • Fig. 4 is a schematic structural diagram of a second emission module in an embodiment of the present invention.
  • Fig. 5(a) is a schematic diagram of the stereo matching principle of the prior art.
  • Fig. 5(b) is a schematic diagram of the space-time stereo matching principle used in an embodiment of the present invention.
  • Fig. 6 is a schematic diagram of the principle of calculating depth values from forward-backward frame offsets in an embodiment of the present invention.
  • Fig. 7 is a schematic diagram of forward-backward (sliding) image acquisition in an embodiment of the present invention.
  • Fig. 8 is a schematic diagram of a depth imaging method in an embodiment of the present invention.
  • In this description, a connection may serve either a fixing function or a circuit-connecting function.
  • The terms "first" and "second" are used only for description and cannot be understood as indicating or implying relative importance or the number of the indicated technical features; features defined with "first" or "second" may therefore explicitly or implicitly include one or more such features.
  • "A plurality of" means two or more, unless specifically defined otherwise.
  • FIG. 1 is a schematic structural diagram of a depth imaging device 10 based on time series speckle according to an embodiment of the present invention.
  • the depth imaging device 10 includes a transmission module 11, an acquisition module 12, and a control and processor 13 connected to the transmission module 11 and the acquisition module 12, respectively.
  • the transmitting module 11 is used to transmit at least two speckle patterns that change in time sequence to the target object 20;
  • the collection module 12 is used to collect the corresponding speckle patterns reflected by the target object 20;
  • The control and processor 13 performs space-time stereo matching between the captured speckle patterns and at least two pre-stored reference speckle patterns to calculate the offset of each pixel, and calculates the depth value of each pixel from the offset. It can be understood that the disparity of the at least two emitted speckle patterns can be preset.
  • the emission module 11 includes a light source 111 and an optical element 112.
  • The light source 111 may be a light-emitting diode (LED), an edge-emitting laser (EEL), a vertical-cavity surface-emitting laser (VCSEL), or an array light source composed of multiple such sources; preferably, multiple VCSELs form a VCSEL array light source. VCSELs are small, have a narrow emission angle and good stability, and many of them can be arranged on a single semiconductor substrate, so the resulting VCSEL array chip is compact, low-power, and well suited to generating spot-pattern beams.
  • Because the individual VCSELs occupy different spatial positions, the corresponding speckle patterns differ, so a speckle pattern that changes in time sequence can be generated.
  • The light source array includes a plurality of discrete sub-light-source arrays, each of which can be independently controlled in groups. Through this grouped control, the control and processor 13 makes the emission module 11 emit at least two time-sequenced speckle patterns toward the target object 20, with the sub-arrays turned on individually or several simultaneously.
  • Fig. 2 is a schematic diagram of a VCSEL light source array according to an embodiment of the present invention.
  • In the embodiment shown in Fig. 2, a first sub-light-source array is formed by a plurality of sub-light sources 201 (shown as hollow dots); it forms a first two-dimensional pattern and can emit a first speckle pattern on its own under the control of the control and processor 13. A second sub-light-source array is formed by a plurality of sub-light sources 202 (shown as black dots); it forms a second two-dimensional pattern and can likewise emit a second speckle pattern on its own, the two sub-arrays being spatially separated.
  • The hollow dots 201 and black dots 202 are drawn differently only for illustration; both are light sources and are indistinguishable when switched off. All hollow dots are controlled together and all black dots are controlled together, i.e., the sub-arrays represented by the black dots and the hollow dots can be controlled independently.
  • The first and second two-dimensional patterns may be the same or different, and it is understandable that the first and second sub-arrays can also be turned on simultaneously to form a third two-dimensional pattern. This is only an example: the light source array may include multiple sub-arrays, which may be turned on individually or two or more together.
  • A plurality of sub-light-source arrays may be arranged separately, interleaved, or combined in space; for example, the first sub-array occupies area A, the second occupies area B, the third occupies area A+B, and so on.
  • The arrangement of the sub-arrays can be set as needed, and their patterns, numbers, densities, and layouts may be the same or different. For example, the first sub-array may be arranged more densely than the second while containing fewer light sources; different arrangements produce different speckle patterns and therefore a speckle pattern that changes in time sequence, as in the simulation sketch below.
  • The optical element 112 receives the beam from the light source 111, modulates it (for example by diffraction or transmission), and then directs the modulated beam toward the target object 20.
  • The optical element may be one or a combination of a lens, a diffractive optical element (DOE), a microlens array, etc., chosen according to the specific usage scenario.
  • In another embodiment, the depth imaging device 10 includes a beam deflection unit connected to the emission module. The beam deflection unit may be a specific piece of hardware, or a combination of hardware, that deflects the beam emitted by the light source array of the emission module so that time-sequenced speckle patterns are projected onto the target object 20. The deflection angle and timing can be set according to specific needs; deflecting the emitted beam produces speckle patterns with different time-sequence variations, as described below.
  • In one embodiment, the transmitting module 11 is connected to one or more actuators 301; an actuator 301 can be attached to any one of the VCSEL array, the lens, or the DOE and is configured to move or deflect that component in one or more directions, thereby generating a time-sequenced speckle pattern.
  • As shown in Fig. 3, the VCSEL array emits a beam 303; the lens receives beam 303 and converges it into beam 304; the DOE receives beam 304 and diffracts it into a zero-order beam 305 and positive and negative first-order beams 306a-b to form a speckle pattern. For ease of explanation only the zero-order and ±first-order diffracted beams are shown; in practice a DOE can generate a far larger number of diffraction orders.
  • The actuator 301 is connected to the lens and to the control and processor 13, respectively, and is configured to translate the lens transversely to its optical axis, so that beam 303 is translated or deflected.
  • As shown in Fig. 4, when the actuator 301 translates the lens to the right, beam 304 rotates clockwise (arrow 402) by an angle θ to form beam 401; the angle is determined by the ratio of the lens translation to the lens focal length. The rotation propagates to the DOE, whose zero-order beam 305 and ±first-order beams 306a-b also rotate clockwise by θ (arrow 403), so the speckle pattern shifts laterally, realizing the time-sequence variation (see the sketch below).
  • The pivot of the diffraction orders also moves laterally, but this movement is negligible compared with the deflection angle. Translation to the right is taken only as an example; the lens may equally be translated to the left or in other directions, and the direction of movement is not limited here.
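The relationship stated above, deflection angle set by the ratio of lens translation to focal length, can be sketched numerically. The pattern-shift estimate at the reference distance is an added small-angle assumption, not a formula from the patent, and the example numbers are purely illustrative:

```python
import math

def deflection_angle(lens_shift_mm: float, focal_length_mm: float) -> float:
    """Beam deflection (radians) for a lens translated transversely to its axis:
    theta = atan(shift / f), i.e. set by the shift-to-focal-length ratio."""
    return math.atan(lens_shift_mm / focal_length_mm)

def pattern_shift_mm(lens_shift_mm: float, focal_length_mm: float, z0_mm: float) -> float:
    """Approximate lateral speckle-pattern shift at reference distance z0 (assumption)."""
    return z0_mm * math.tan(deflection_angle(lens_shift_mm, focal_length_mm))

# A 0.1 mm lens shift with a 4 mm focal length deflects the beam ~1.43 degrees,
# moving the pattern ~25 mm at a 1 m reference plane.
theta = deflection_angle(0.1, 4.0)
print(math.degrees(theta), pattern_shift_mm(0.1, 4.0, 1000.0))
```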
  • In yet another embodiment, the transmitting module 11 emits at least two time-sequenced speckle patterns toward the target object 20; the collection module 12 collects the corresponding speckle patterns reflected by the target object 20; and the control and processor 13 matches the captured patterns against the pre-stored reference speckle patterns to calculate the offset of each pixel, from which each pixel's depth value is calculated.
  • The multiple pre-stored reference speckle patterns are collected through a calibration process: a flat plate is placed at one or more preset distances, the speckle patterns are projected onto it, and the collection module 12 captures them; the captured patterns are then stored in a memory (not shown).
  • FIGS. 5(a)-5(b) are, respectively, a schematic diagram of ordinary stereo matching and a schematic diagram of the space-time stereo matching used in an embodiment of the present invention.
  • The stereo matching technique commonly used in the prior art establishes point correspondences between a pair of stereo images; from the pixel correspondences, the three-dimensional coordinates of the corresponding points can be obtained. As shown in Fig. 5(a), the arrows indicate the search direction for corresponding points.
  • In this embodiment, a space-time stereo matching method is adopted: time-sequence information is added on top of the traditional stereo matching method, and the depth of the target object is calculated by the triangulation principle from the point correspondences between the reference image and the captured target image.
  • As shown in Fig. 5(b), the left side is the reference image, the right side is the captured target image, and the arrows indicate the search direction for corresponding points.
  • The transmitting module 11 emits multiple time-sequenced speckle patterns toward the target object and, following the space-time stereo matching principle, a three-dimensional window is used for stereo matching.
  • Because the three-dimensional window contains rich image information, a dense disparity map can be obtained with the normalized cross-correlation (NCC) matching method even when the matching window radius is small (e.g., 5×5 or even 3×3).
  • Normalized cross-correlation matching compares the gray levels of the captured speckle pattern with those of the pre-stored speckle pattern through a normalized correlation measure.
  • The matching degree of the three-dimensional-window NCC is computed as:

$$\mathrm{ncc}(x,y,d)=\frac{\sum_{i=1}^{K}\sum_{(u,v)\in\Omega(x,y)}\left(I_{i,R}(u,v)-\bar{I}_{R}\right)\left(I_{i,O}(u-d,v)-\bar{I}_{O}\right)}{\sqrt{\sum_{i=1}^{K}\sum_{(u,v)\in\Omega(x,y)}\left(I_{i,R}(u,v)-\bar{I}_{R}\right)^{2}\;\sum_{i=1}^{K}\sum_{(u,v)\in\Omega(x,y)}\left(I_{i,O}(u-d,v)-\bar{I}_{O}\right)^{2}}}$$

where d is the disparity value; i is the image sequence index; ncc(x, y, d) is the normalized correlation between the image blocks centered at pixel (x, y) in the K time-sequenced reference speckle images $I_{i,R}$ and the image blocks centered at pixel (x − d, y) in the K time-sequenced captured speckle images $I_{i,O}$; Ω(x, y) is the neighborhood centered at (x, y); and $\bar{I}_{R}$ and $\bar{I}_{O}$ are the mean pixel gray levels over the three-dimensional windows of the reference and captured speckle images, respectively. A minimal implementation sketch follows.
  • To speed up matching, a pyramid search strategy is adopted to match from coarse to fine: a three-level pyramid is used, the image width and height of each level being half those of the level below.
  • Bidirectional matching is used to eliminate mismatched points: assuming a pixel $P_R$ in the reference speckle image finds the corresponding point $P_O$ in the captured target speckle image, the point $P_O$ is then reverse-matched back into the reference image to obtain the corresponding point $P_{R1}$, which must satisfy $|P_R - P_{R1}| \le 1$; otherwise the match is treated as a mismatch. A sketch of this check follows.
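Building on the match_pixel sketch above, the bidirectional consistency check can be written as follows; the reverse search range is an assumption:

```python
import numpy as np

def bidirectional_ok(ref, obs, x, y, d_max, half=2):
    """Accept the forward match only if reverse matching returns within 1 pixel."""
    d_fwd = match_pixel(ref, obs, x, y, d_max, half)   # P_O is at (x - d_fwd, y)
    x_o = x - d_fwd
    # Reverse match: search reference columns x_o .. x_o + d_max for point P_O.
    scores = [ncc_windows(obs, ref, x_o, x_o + d, y, half) for d in range(d_max + 1)]
    x_r1 = x_o + int(np.argmax(scores))                # P_R1
    return abs(x - x_r1) <= 1                          # |P_R - P_R1| <= 1
```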
  • The disparity map obtained by NCC is at integer-pixel level. To obtain sub-pixel matching accuracy, the 2-pixel interval centered on the matching position found by NCC is subdivided at 0.1-pixel steps, the NCC similarity is evaluated at these 21 positions, and the position with the highest NCC score is taken as the final sub-pixel matching result. With this method, a matching accuracy of 1/10 pixel can theoretically be obtained. A sketch of this refinement follows.
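A possible implementation of the 21-position sub-pixel search, continuing the sketches above; the linear interpolation used to resample the target window at fractional disparities is an assumption, since the patent does not specify a resampling scheme:

```python
import numpy as np

def refine_subpixel(ref, obs, x, y, d_int, half=2):
    """Search d in [d_int - 1, d_int + 1] at 0.1-px steps (21 positions) and
    return the disparity with the highest space-time NCC score."""
    K, H, W = obs.shape
    cols = np.arange(W)
    wa = ref[:, y - half:y + half + 1, x - half:x + half + 1].astype(float)
    wa = wa - wa.mean()
    best_d, best_s = float(d_int), -1.0
    for d in np.arange(d_int - 1.0, d_int + 1.0 + 1e-9, 0.1):
        xs = np.arange(-half, half + 1) + (x - d)      # fractional target columns
        wb = np.stack([[np.interp(xs, cols, obs[k, yy])
                        for yy in range(y - half, y + half + 1)]
                       for k in range(K)])
        wb = wb - wb.mean()
        den = np.sqrt((wa * wa).sum() * (wb * wb).sum())
        s = (wa * wb).sum() / den if den > 0 else -1.0
        if s > best_s:
            best_d, best_s = float(d), s
    return best_d
```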
  • As shown in Fig. 6, the transmitting module 11 emits speckle patterns toward the target object and the acquisition module 12 collects the patterns reflected back by the target object. After the disparity map is obtained by speckle matching, the depth value of each pixel can be calculated from its disparity by triangulation, as follows:
  • $$Z=\frac{b\,f\,Z_{0}}{b\,f+d\,Z_{0}}$$

where d is the disparity value; b is the baseline length from the light source of the emission module to the camera of the acquisition module; $Z_0$ is the distance from the plane containing the emission and acquisition modules to the reference plane; f is the focal length of the camera; and Z is the depth value of the pixel. A numerical sketch follows.
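The triangulation above maps disparity directly to depth. Here is a small sketch; the example numbers are purely illustrative, not calibration values from the patent:

```python
def depth_from_disparity(d_px: float, baseline_m: float, focal_px: float, z0_m: float) -> float:
    """Z = b*f*Z0 / (b*f + d*Z0): disparity in pixels, baseline and Z0 in meters,
    focal length in pixels. d = 0 returns the reference-plane distance Z0."""
    return baseline_m * focal_px * z0_m / (baseline_m * focal_px + d_px * z0_m)

# Illustrative example: 5 cm baseline, 600 px focal length, 1 m reference plane.
print(depth_from_disparity(0.0, 0.05, 600.0, 1.0))   # 1.0 (on the reference plane)
print(depth_from_disparity(3.0, 0.05, 600.0, 1.0))   # ~0.909 m (closer than reference)
```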
  • It can be understood that in the above embodiments the depth map of the target object 20 is calculated from multiple frames. For example, the transmitting module 11 alternately emits two time-sequenced speckle patterns A and B toward the target object 20, in the order A1, B1, A2, B2, A3, B3. If one depth image D is output per A+B pair, then A1 and B1 produce one depth frame, A2 and B2 a second, and A3 and B3 a third, three depth frames in total; the frame rate of the depth images is therefore half the frame rate of speckle image acquisition.
  • However, in one embodiment, a forward-backward (sliding) frame scheme can be adopted so that the number of depth frames is not reduced. As shown in Fig. 7, each depth frame is computed from two consecutive speckle frames: A1 and B1 yield one depth frame, then B1 and A2 yield the next, and so on. Except for the first frame A1, every speckle frame has a corresponding depth image, so the measurement frame rate is not reduced; the pairing sketch below illustrates this.
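A minimal sketch of the two pairing schemes, purely to illustrate the frame-rate difference; the frame labels follow the A1, B1, ... example above:

```python
def disjoint_pairs(frames):
    """Pair frames as (f0, f1), (f2, f3), ...: one depth frame per two captured frames."""
    return list(zip(frames[0::2], frames[1::2]))

def sliding_pairs(frames):
    """Forward-backward pairing (f0, f1), (f1, f2), ...: one depth frame per
    captured frame after the first, so the measurement frame rate is preserved."""
    return list(zip(frames[:-1], frames[1:]))

frames = ["A1", "B1", "A2", "B2", "A3", "B3"]
print(disjoint_pairs(frames))  # [('A1','B1'), ('A2','B2'), ('A3','B3')]
print(sliding_pairs(frames))   # [('A1','B1'), ('B1','A2'), ('A2','B2'), ('B2','A3'), ('A3','B3')]
```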
  • It can be understood that the two methods above are only examples, one being a software improvement of the control and processing unit and the other a combined hardware and software improvement of the depth imaging device. Any other method that realizes the idea of the present invention, namely having the emission module emit at least two time-sequenced speckle patterns toward the target object, adding time-sequence information on top of the traditional stereo matching method, and performing stereo matching with a three-dimensional window according to the space-time stereo matching principle, falls within the protection scope of the present invention.
  • Based on the time-sequenced-speckle depth imaging devices of the above embodiments, the present application also provides a corresponding depth imaging method. Fig. 8 shows a depth imaging method based on time-sequenced speckle according to an embodiment of the present invention, which includes the following steps.
  • S1: controlling the transmitting module to emit at least two time-sequenced speckle patterns toward the target object. In one embodiment, the emission module includes a VCSEL array, a lens, and a DOE, and each sub-array of the VCSEL array emits a different speckle pattern, thereby generating a time-sequenced speckle pattern. In another embodiment, the emission module includes a VCSEL array, a lens, a DOE, and an actuator, the actuator being connected to any one of the VCSEL array, the lens, or the DOE so as to move that component in one or more directions, thereby producing a time-sequenced speckle pattern.
  • S2: controlling the collection module to collect the speckle patterns reflected by the target object.
  • S3: performing space-time stereo matching between the collected speckle patterns and at least two pre-stored reference speckle patterns to calculate the offset of each pixel, and calculating the depth value of each pixel from the offset.
  • In one embodiment of the present invention, the multiple discrete sub-light-source arrays in the emission module are turned on independently, or several synchronously, to emit at least two time-sequenced speckle patterns toward the target object; the specific implementation is as described above and is not repeated here.
  • In another embodiment of the present invention, the beam emitted by the light source array of the emission module is deflected before at least two time-sequenced speckle patterns are emitted toward the target object; the specific implementation is likewise as described above.
  • The control and processor matches the reference speckle patterns against the captured speckle patterns using the NCC matching method to obtain the disparity map:

$$\mathrm{ncc}(x,y,d)=\frac{\sum_{i=1}^{K}\sum_{(u,v)\in\Omega(x,y)}\left(I_{i,R}(u,v)-\bar{I}_{R}\right)\left(I_{i,O}(u-d,v)-\bar{I}_{O}\right)}{\sqrt{\sum_{i=1}^{K}\sum_{(u,v)\in\Omega(x,y)}\left(I_{i,R}(u,v)-\bar{I}_{R}\right)^{2}\;\sum_{i=1}^{K}\sum_{(u,v)\in\Omega(x,y)}\left(I_{i,O}(u-d,v)-\bar{I}_{O}\right)^{2}}}$$

with the symbols defined as above.
  • After the disparity map is obtained, the depth value of each pixel is calculated from its disparity by triangulation:

$$Z=\frac{b\,f\,Z_{0}}{b\,f+d\,Z_{0}}$$

where d is the disparity value; b is the baseline length from the light source of the emission module to the camera of the acquisition module; $Z_0$ is the distance from the plane containing the emission and acquisition modules to the reference plane; f is the focal length of the camera; and Z is the depth value of the pixel.
  • An embodiment of the present application also provides a control device, including a processor and a storage medium storing a computer program, the processor being configured, when executing the computer program, to perform at least the method described above.
  • An embodiment of the present application further provides a storage medium storing a computer program which, when executed, performs at least the method described above.
  • An embodiment of the present application also provides a processor which executes a computer program to perform at least the method described above.
  • The storage medium may be implemented by any type of volatile or non-volatile storage device, or a combination thereof. The non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a ferromagnetic random access memory (FRAM), a flash memory, a magnetic surface memory, an optical disc, or a compact disc read-only memory (CD-ROM); the magnetic surface memory may be disk storage or tape storage.
  • The volatile memory may be a random access memory (RAM), used as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), synchronous static RAM (SSRAM), dynamic RAM (DRAM), synchronous dynamic RAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), SyncLink DRAM (SLDRAM), and direct Rambus RAM (DRRAM). The storage media described in the embodiments of the present invention are intended to include, but are not limited to, these and any other suitable types of memory.
  • In the several embodiments provided in this application, it should be understood that the disclosed system and method may be implemented in other ways. The device embodiments described above are only illustrative; for example, the division into units is only a logical functional division, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be indirect coupling or communication connection through interfaces, devices, or units, and may be electrical, mechanical, or of other forms.
  • The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the embodiments.
  • The functional units in the embodiments of the present invention may all be integrated into one processing unit, each unit may stand alone as a unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in hardware, or in hardware plus software functional units.
  • A person of ordinary skill in the art will understand that all or part of the steps of the above method embodiments may be implemented by program instructions executed on related hardware. The program may be stored in a computer-readable storage medium and, when executed, performs the steps of the method embodiments; the storage medium includes removable storage devices, read-only memory (ROM), random access memory (RAM), magnetic disks, optical discs, and other media capable of storing program code.
  • Alternatively, if the integrated unit of the present invention is implemented as a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium. On this understanding, the technical solution of the embodiments, in essence the part contributing over the prior art, can be embodied as a computer software product stored in a storage medium and including several instructions that cause a computer device (a personal computer, a server, a network device, etc.) to execute all or part of the methods described in the embodiments of the present invention; the storage medium includes removable storage devices, ROM, RAM, magnetic disks, optical discs, and other media capable of storing program code.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Optics & Photonics (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

A depth imaging method, a device (10), and a computer-readable storage medium. The method includes: controlling a transmitting module (11) to emit at least two time-sequenced speckle patterns toward a target object (20); controlling a collection module (12) to collect the speckle patterns reflected by the target object (20); and performing space-time stereo matching between the collected speckle patterns and at least two pre-stored reference speckle patterns to calculate the offset of each pixel, from which each pixel's depth value is calculated. On the basis of the traditional stereo matching method, time-sequence information is added and, following the space-time stereo matching principle, a three-dimensional window is used for stereo matching, achieving low-cost, high-accuracy, high-frame-rate depth imaging.

Description

Depth imaging method and device and computer-readable storage medium

TECHNICAL FIELD

The present invention relates to the field of three-dimensional imaging technology, and in particular to a depth imaging method, a device, and a computer-readable storage medium.

BACKGROUND

The structured light method is an active optical ranging technique. Its basic principle is that a structured light projector projects a controllable speckle or fringe pattern onto the surface of the object under measurement, an image sensor captures the image, and the depth of the object is obtained by triangulation from the geometric relationships of the system.

Current structured light 3D reconstruction techniques include single-frame and multi-frame reconstruction. Among single-frame techniques, the structured light 3D reconstruction method based on speckle matching (as used in products such as the Kinect V1 and the Orbbec Astra) typically matches a captured speckle image of the target scene against a pre-stored reference image to obtain a disparity map, and computes the depth or 3D structure of the target scene from the disparity map and the calibration parameters of the measurement system. The advantages of this method are low cost and higher achievable frame rates, making it suitable for 3D reconstruction of moving objects; the disadvantage is limited measurement accuracy.

Among multi-frame techniques, the structured light 3D reconstruction method based on Gray code is widely used. It usually requires projecting three or more frames of phase-shifted fringe patterns onto the target scene; since a single-frequency phase-shift pattern yields only the relative phase, obtaining the absolute phase additionally requires projecting several frames at different frequencies. The advantage of this method is higher measurement accuracy, making it better suited to high-precision 3D reconstruction of static objects; the disadvantages are a complex transmitter structure and a complex algorithm, which lead to higher cost.

The prior art lacks a depth imaging method and device with high measurement accuracy and low cost.

The above background is disclosed only to assist in understanding the concept and technical solution of the present invention; it does not necessarily belong to the prior art of this patent application, and in the absence of clear evidence that the above content was published before the filing date of this application, the above background shall not be used to evaluate the novelty or inventiveness of this application.
SUMMARY

To solve the existing problems, the present invention provides a depth imaging method, a device, and a computer-readable storage medium.

To solve the above problems, the technical solution adopted by the present invention is as follows:

A depth imaging method includes the following steps. S1: controlling a transmitting module to emit at least two time-sequenced speckle patterns toward a target object. S2: controlling a collection module to collect the speckle patterns reflected by the target object. S3: performing space-time stereo matching between the collected speckle patterns and at least two pre-stored reference speckle patterns to calculate the offset of each pixel, and calculating the depth value of each pixel from the offset.

In one embodiment of the invention, the discrete sub-light-source arrays in the emission module, which contains a plurality of discrete sub-light-source arrays, are controlled to turn on independently, or several synchronously, to emit at least two time-sequenced speckle patterns toward the target object.

In another embodiment of the invention, the beam emitted by the emission module is controlled to be deflected before at least two time-sequenced speckle patterns are emitted toward the target object.

In a further embodiment of the invention, normalized cross-correlation matching is used to calculate the offset of each pixel:

$$\mathrm{ncc}(x,y,d)=\frac{\sum_{i=1}^{K}\sum_{(u,v)\in\Omega(x,y)}\left(I_{i,R}(u,v)-\bar{I}_{R}\right)\left(I_{i,O}(u-d,v)-\bar{I}_{O}\right)}{\sqrt{\sum_{i=1}^{K}\sum_{(u,v)\in\Omega(x,y)}\left(I_{i,R}(u,v)-\bar{I}_{R}\right)^{2}\;\sum_{i=1}^{K}\sum_{(u,v)\in\Omega(x,y)}\left(I_{i,O}(u-d,v)-\bar{I}_{O}\right)^{2}}}$$

where d is the disparity value; i is the image sequence index; ncc(x, y, d) is the normalized correlation between the image blocks centered at pixel (x, y) in the K time-sequenced reference speckle images $I_{i,R}$ and the image blocks centered at pixel (x − d, y) in the K time-sequenced captured speckle images $I_{i,O}$; Ω(x, y) is the neighborhood centered at (x, y); and $\bar{I}_{R}$ and $\bar{I}_{O}$ are the mean pixel gray levels over the three-dimensional windows of the reference and captured speckle images, respectively. The depth value of the pixel is then calculated from the offset by triangulation:

$$Z=\frac{b\,f\,Z_{0}}{b\,f+d\,Z_{0}}$$

where d is the disparity value; b is the baseline length from the light source of the emission module to the camera of the collection module; $Z_0$ is the distance from the plane containing the emission and collection modules to the reference plane; f is the focal length of the camera; and Z is the depth value of the pixel.

In yet another embodiment of the invention, a forward-backward (sliding) frame acquisition scheme is used to collect the speckle patterns and calculate the depth values of the pixels.

The present invention also provides a depth imaging device, including: a transmitting module for emitting at least two time-sequenced speckle patterns toward a target object; a collection module for collecting the speckle patterns reflected by the target object; and a control and processor connected to the transmitting module and the collection module, respectively, configured to implement any of the methods described above.

In one embodiment of the invention, the emission module includes a light source array comprising a plurality of discrete sub-light-source arrays, each sub-array being independently controlled in groups; the grouped independent control includes turning the sub-arrays on individually or several synchronously to generate the time-sequenced speckle patterns.

In another embodiment of the invention, a beam deflection unit connected to the emission module deflects the beam emitted by the light source array of the emission module to generate the time-sequenced speckle patterns. The emission module includes a light source and an optical element connected in sequence, the optical element including a lens or a diffractive optical element; the beam deflection unit is attached to any one of the light source, the lens, or the diffractive optical element and is configured to move or deflect the light source, the lens, or the diffractive optical element in one or more directions.

The present invention further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of any of the methods described above.

The beneficial effects of the present invention are: a depth imaging method, device, and computer-readable storage medium in which the emission module is controlled to emit at least two time-sequenced speckle patterns, the collection module collects the patterns reflected by the target object, and the captured patterns are matched against pre-stored reference speckle patterns to calculate the offset of each pixel, from which each pixel's depth value is calculated. On the basis of the traditional stereo matching method, time-sequence information is added and, following the space-time stereo matching principle, a three-dimensional window is used for stereo matching, achieving low-cost, high-accuracy, high-frame-rate depth imaging.

Further, the present invention provides a software improvement of the control and processor that obtains low-cost, high-accuracy, high-frame-rate depth imaging.

Still further, the present invention provides a combined hardware and software improvement that obtains low-cost, high-accuracy, high-frame-rate depth imaging.
BRIEF DESCRIPTION OF THE DRAWINGS

Fig. 1 is a schematic structural diagram of a depth imaging device in an embodiment of the present invention.

Fig. 2 is a schematic diagram of the speckle patterns emitted by the VCSEL sub-arrays of the emission module in an embodiment of the present invention.

Fig. 3 is a schematic structural diagram of a first emission module in an embodiment of the present invention.

Fig. 4 is a schematic structural diagram of a second emission module in an embodiment of the present invention.

Fig. 5(a) is a schematic diagram of the stereo matching principle of the prior art.

Fig. 5(b) is a schematic diagram of the space-time stereo matching principle used in an embodiment of the present invention.

Fig. 6 is a schematic diagram of the principle of calculating depth values from forward-backward frame offsets in an embodiment of the present invention.

Fig. 7 is a schematic diagram of forward-backward (sliding) image acquisition in an embodiment of the present invention.

Fig. 8 is a schematic diagram of a depth imaging method in an embodiment of the present invention.
DETAILED DESCRIPTION

To make the technical problems, technical solutions, and beneficial effects addressed by the embodiments of the present invention clearer, the invention is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here only explain the invention and do not limit it.

It should be noted that when an element is said to be "fixed to" or "disposed on" another element, it may be directly or indirectly on that other element; when an element is said to be "connected to" another element, it may be directly or indirectly connected to it. In addition, a connection may serve either a fixing function or a circuit-connecting function.

It should be understood that orientation or position terms such as "length", "width", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", and "outer" are based on the orientations or positions shown in the drawings, are used only to simplify the description of the embodiments of the invention, and do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation; they therefore cannot be understood as limiting the invention.

In addition, the terms "first" and "second" are used only for description and cannot be understood as indicating or implying relative importance or the number of the indicated technical features; features defined with "first" or "second" may explicitly or implicitly include one or more such features. In the description of the embodiments of the invention, "a plurality of" means two or more, unless specifically defined otherwise.
Fig. 1 is a schematic structural diagram of a depth imaging device 10 based on time-sequenced speckle according to an embodiment of the present invention. The depth imaging device 10 includes a transmitting module 11, a collection module 12, and a control and processor 13 connected to the transmitting module 11 and the collection module 12, respectively. The transmitting module 11 is used to emit at least two time-sequenced speckle patterns toward the target object 20; the collection module 12 is used to collect the corresponding speckle patterns reflected by the target object 20; the control and processor 13 performs space-time stereo matching between the captured speckle patterns and at least two pre-stored reference speckle patterns to calculate the offset of each pixel, and calculates each pixel's depth value from the offset. It can be understood that the disparity of the at least two emitted speckle patterns can be preset.

The transmitting module 11 includes a light source 111 and an optical element 112. The light source 111 may be a light-emitting diode (LED), an edge-emitting laser (EEL), a vertical-cavity surface-emitting laser (VCSEL), or an array light source composed of multiple light sources; preferably, multiple VCSELs form a VCSEL array light source. VCSELs are small, have a narrow emission angle and good stability, and multiple VCSELs can be arranged on a semiconductor substrate; the resulting VCSEL array chip is therefore compact, low-power, and well suited to generating spot-pattern beams. Moreover, because the individual VCSELs occupy different spatial positions, the corresponding speckle patterns differ, so a speckle pattern that changes in time sequence can be generated.

The light source array contains a plurality of discrete sub-light-source arrays, each of which can be independently controlled in groups. Through grouped independent control of the individual sub-arrays of the transmitting module 11, the control and processor 13 emits at least two time-sequenced speckle patterns toward the target object 20; the grouped independent control includes turning the sub-arrays on individually or several synchronously.

Fig. 2 is a schematic diagram of a VCSEL light source array according to an embodiment of the present invention. In the embodiment shown in Fig. 2, a first sub-light-source array is formed by a plurality of sub-light sources 201 (shown as hollow dots); it forms a first two-dimensional pattern and can emit a first speckle pattern on its own under the control of the control and processor 13. A second sub-light-source array is formed by a plurality of sub-light sources 202 (shown as black dots); it forms a second two-dimensional pattern and can emit a second speckle pattern on its own under the control of the control and processor 13, the first and second sub-arrays being spatially separated. The hollow dots 201 and black dots 202 in the figure are drawn differently only for illustration; both are light sources and may be indistinguishable when switched off. All hollow dots 201 are controlled together and all black dots 202 are controlled together, i.e., the sub-arrays represented by the black dots and the hollow dots can be controlled independently. The first and second two-dimensional patterns may be the same or different. It can be understood that the first and second sub-arrays can also be turned on synchronously to form a third two-dimensional pattern. This is only an example: the light source array may include multiple sub-arrays, which may be turned on individually or two or more together.

It can be understood that multiple sub-light-source arrays may be arranged separately, interleaved, or combined in space; for example, the first sub-array is area A, the second is area B, the third is area A+B, and so on. Moreover, the arrangement of the sub-arrays can be set reasonably as needed, and their patterns, numbers, densities, and layouts may be the same or different. For example, the first sub-array may be arranged more densely than the second while containing fewer light sources; different arrangements output different speckle patterns, which produces a speckle pattern that changes in time sequence.
The optical element 112 receives the beam from the light source 111, modulates it (for example by diffraction or transmission), and then emits the modulated beam toward the target object 20. The optical element may be one or a combination of a lens, a diffractive optical element (DOE), a microlens array, etc., set according to the specific usage scenario.

In another embodiment of the present invention, the depth imaging device 10 includes a beam deflection unit connected to the emission module. The beam deflection unit may be a specific piece of hardware, or a combination of hardware, that deflects the beam emitted by the light source array of the emission module so that time-sequenced speckle patterns are emitted toward the target object 20. It can be understood that the deflection angle and timing can be set according to specific needs, and deflecting the emitted beam produces speckle patterns with different time-sequence variations, as described below.

In one embodiment, the transmitting module 11 is connected to one or more actuators 301; an actuator 301 can be attached to any one of the VCSEL array, the lens, or the DOE and is configured to move or deflect the VCSEL array, the lens, or the DOE in one or more directions, thereby generating a time-sequenced speckle pattern.

As shown in Fig. 3, the VCSEL array emits a beam 303; the lens receives beam 303 and converges it into beam 304; the DOE receives beam 304 and diffracts it into a zero-order beam 305 and positive and negative first-order beams 306a-b to form a speckle pattern. This is only for ease of explanation, taking the zero-order and ±first-order diffracted beams as examples; in fact a DOE can generate a far larger number of diffraction orders. The actuator 301 is connected to the lens and to the control and processor 13, respectively, and is configured to translate the lens transversely to its optical axis, so that beam 303 is translated or deflected.

As shown in Fig. 4, the actuator 301 is configured to translate the lens to the right; beam 304 then rotates clockwise (arrow 402) by an angle θ to form beam 401, the angle being determined by the ratio of the lens translation to the lens focal length. The rotation propagates to the DOE, whose zero-order beam 305 and ±first-order beams 306a-b also rotate clockwise by θ (arrow 403), so that the speckle pattern moves laterally, realizing the time-sequence variation. It can be understood that the pivot of the diffraction orders also moves laterally, but this movement is negligible compared with the deflection angle. In this embodiment, translation of the lens to the right is taken only as an example; the lens may equally be translated to the left or in other directions, and its direction of movement is not limited here.

It can be understood that this is only an example; in practice, any hardware or hardware combination of the deflection unit that realizes a function similar to the actuator will do, such as a scanning assembly or a beam splitter, and hardware of a suitable size can be chosen for the specific application scenario.
In a further embodiment of the present invention, the transmitting module 11 emits at least two time-sequenced speckle patterns toward the target object 20; the collection module 12 collects the corresponding speckle patterns reflected by the target object 20; the control and processor 13 matches the captured speckle patterns against pre-stored reference speckle patterns to calculate the offset of each pixel, and calculates each pixel's depth value from the offset. It can be understood that the multiple pre-stored reference speckle patterns are collected through a calibration process: a flat plate is placed at one or more preset distances, the speckle patterns are projected onto it, and the collection module 12 collects the patterns and stores them in a memory (not shown).

Figs. 5(a)-5(b) are, respectively, a schematic diagram of ordinary stereo matching and a schematic diagram of the space-time stereo matching adopted in an embodiment of the present invention. The stereo matching technique commonly used in the prior art establishes point correspondences between a pair of stereo images, and the three-dimensional coordinates of the corresponding points can be obtained by computing the correspondences between pixels. By stereo-rectifying the images with pre-calibrated camera parameters, the search for corresponding points only needs to be performed along the horizontal line through the pixel. As shown in Fig. 5(a), the left side is the reference image, the right side is the captured target image, and the arrows indicate the search direction for corresponding points. When the target surface lacks sufficiently rich texture features, this method has difficulty establishing correct point correspondences, and accurate stereo matching becomes very difficult.

It can be understood that there are many ways to match the captured speckle patterns against the pre-stored speckle patterns, such as SSD (Sum of Squared Differences) similarity or normalized cross-correlation (NCC); the matching method is not limited here.
In this embodiment, a space-time stereo matching method is adopted: time-sequence information is added on top of the traditional stereo matching method. By establishing point correspondences between the reference image and the captured target image, the depth of the target object is calculated by the triangulation principle. As shown in Fig. 5(b), the left side is the reference image, the right side is the captured target image, and the arrows indicate the search direction for corresponding points. The transmitting module 11 emits multiple time-sequenced speckle patterns toward the target object and, according to the space-time stereo matching principle, a three-dimensional window is used for stereo matching. Because the three-dimensional window contains rich image information, a dense disparity map can be obtained with the normalized cross-correlation (NCC) matching method even when the matching window radius is small (e.g., 5×5 or even 3×3). Normalized cross-correlation matching uses the gray levels of the captured speckle pattern and the pre-stored speckle pattern and computes their degree of matching through a normalized correlation measure. The matching degree of the three-dimensional-window NCC is computed as:

$$\mathrm{ncc}(x,y,d)=\frac{\sum_{i=1}^{K}\sum_{(u,v)\in\Omega(x,y)}\left(I_{i,R}(u,v)-\bar{I}_{R}\right)\left(I_{i,O}(u-d,v)-\bar{I}_{O}\right)}{\sqrt{\sum_{i=1}^{K}\sum_{(u,v)\in\Omega(x,y)}\left(I_{i,R}(u,v)-\bar{I}_{R}\right)^{2}\;\sum_{i=1}^{K}\sum_{(u,v)\in\Omega(x,y)}\left(I_{i,O}(u-d,v)-\bar{I}_{O}\right)^{2}}}$$

where d is the disparity value; i is the image sequence index; ncc(x, y, d) is the normalized correlation between the image blocks centered at pixel (x, y) in the K time-sequenced reference speckle images $I_{i,R}$ and the image blocks centered at pixel (x − d, y) in the K time-sequenced captured speckle images $I_{i,O}$; Ω(x, y) is the neighborhood centered at (x, y); and $\bar{I}_{R}$ and $\bar{I}_{O}$ are the mean pixel gray levels over the three-dimensional windows of the reference and captured speckle images, respectively.

To speed up matching, a pyramid search strategy is adopted to match from coarse to fine: a three-level pyramid is used, the image width and height of each level being half those of the level below. Bidirectional matching is used to eliminate mismatched points: assuming a pixel $P_R$ in the reference speckle image finds the corresponding point $P_O$ in the captured target speckle image, the point $P_O$ is then reverse-matched back into the reference image to obtain the corresponding point $P_{R1}$, which must satisfy $|P_R - P_{R1}| \le 1$; otherwise the match is considered a mismatch.

The disparity map obtained by NCC is at integer-pixel level. To obtain sub-pixel matching accuracy, the 2-pixel interval centered on the matching position found by NCC is subdivided at 0.1-pixel steps; the NCC similarity is then evaluated at these 21 positions, and the position with the highest NCC score is the final sub-pixel matching result. With this method, a matching accuracy of 1/10 pixel can theoretically be obtained.
As shown in Fig. 6, the transmitting module 11 emits speckle patterns toward the target object, and the collection module 12 collects the speckle patterns reflected back by the target object. After the disparity map is obtained by speckle matching, the depth value of each pixel can be calculated from its disparity by triangulation:

$$Z=\frac{b\,f\,Z_{0}}{b\,f+d\,Z_{0}}$$

where d is the disparity value; b is the baseline length from the light source of the emission module to the camera of the collection module; $Z_0$ is the distance from the plane containing the emission and collection modules to the reference plane; f is the focal length of the camera; and Z is the depth value of the pixel. It can be understood that the emission module and the collection module are generally arranged on the same baseline; the plane containing them here is in fact the plane of the baseline.
It can be understood that in the above embodiments the depth map of the target object 20 is calculated by collecting multiple frames of images. For example, the transmitting module 11 alternately emits two time-sequenced speckle patterns A and B toward the target object 20, i.e., in the order A1, B1, A2, B2, A3, B3. If one depth image D is output per A+B pair, then one depth frame is computed from A1 and B1, one from A2 and B2, and one from A3 and B3, three frames in total; the frame rate of the depth images is therefore half the frame rate of speckle image acquisition. In one embodiment, however, a forward-backward (sliding) frame scheme can be adopted so that the number of acquired frames is not reduced. As shown in Fig. 7, in the forward-backward acquisition scheme according to an embodiment of the present invention, each depth frame is computed from two consecutive frames: one depth frame is computed from A1 and B1, the next from B1 and A2, and so on. Except for the first frame A1, every subsequent speckle frame has a corresponding depth image, so the measurement frame rate is not reduced.

It can be understood that the two methods above are only examples, one being a software improvement of the control and processing unit and the other a combined hardware and software improvement of the depth imaging device. Any other method that realizes the idea of the present invention, namely having the emission module emit at least two time-sequenced speckle patterns toward the target object, adding time-sequence information on top of the traditional stereo matching method, and performing stereo matching with a three-dimensional window according to the space-time stereo matching principle, shall fall within the protection scope of the present invention.
Based on the time-sequenced-speckle depth imaging devices of the above embodiments, the present application also provides a corresponding depth imaging method. Fig. 8 shows a depth imaging method based on time-sequenced speckle according to an embodiment of the present invention, including the following steps:

S1: controlling the transmitting module to emit at least two time-sequenced speckle patterns toward the target object.

In one embodiment, the emission module includes a VCSEL array, a lens, and a DOE, and each sub-array of the VCSEL array emits a different speckle pattern, thereby generating a time-sequenced speckle pattern. In another embodiment, the emission module includes a VCSEL array, a lens, a DOE, and an actuator, the actuator being connected to any one of the VCSEL array, the lens, or the DOE so as to move the VCSEL array, the lens, or the DOE in one or more directions, thereby producing a time-sequenced speckle pattern.

S2: controlling the collection module to collect the speckle patterns reflected by the target object.

S3: performing space-time stereo matching between the collected speckle patterns and at least two pre-stored reference speckle patterns to calculate the offset of each pixel, and calculating the depth value of each pixel from the offset.

In one embodiment of the present invention, the multiple discrete sub-light-source arrays in the emission module are controlled to turn on independently, or several synchronously, to emit at least two time-sequenced speckle patterns toward the target object; the specific implementation is as described above and is not repeated here.

In another embodiment of the present invention, the beam emitted by the light source array of the emission module is controlled to be deflected before at least two time-sequenced speckle patterns are emitted toward the target object; the specific implementation is likewise as described above and is not repeated here.
The control and processor matches the reference speckle patterns against the captured speckle patterns using the NCC matching method to obtain the disparity map; the NCC matching calculation is expressed as:

$$\mathrm{ncc}(x,y,d)=\frac{\sum_{i=1}^{K}\sum_{(u,v)\in\Omega(x,y)}\left(I_{i,R}(u,v)-\bar{I}_{R}\right)\left(I_{i,O}(u-d,v)-\bar{I}_{O}\right)}{\sqrt{\sum_{i=1}^{K}\sum_{(u,v)\in\Omega(x,y)}\left(I_{i,R}(u,v)-\bar{I}_{R}\right)^{2}\;\sum_{i=1}^{K}\sum_{(u,v)\in\Omega(x,y)}\left(I_{i,O}(u-d,v)-\bar{I}_{O}\right)^{2}}}$$

where d is the disparity value; i is the image sequence index; ncc(x, y, d) is the normalized correlation between the image blocks centered at pixel (x, y) in the K time-sequenced reference speckle images $I_{i,R}$ and the image blocks centered at pixel (x − d, y) in the K time-sequenced captured speckle images $I_{i,O}$; Ω(x, y) is the neighborhood centered at (x, y); and $\bar{I}_{R}$ and $\bar{I}_{O}$ are the mean pixel gray levels over the three-dimensional windows of the reference and captured speckle images, respectively.

After the disparity map is obtained by speckle matching, the depth value of each pixel can be calculated from its disparity by triangulation:

$$Z=\frac{b\,f\,Z_{0}}{b\,f+d\,Z_{0}}$$

where d is the disparity value; b is the baseline length from the light source of the emission module to the camera of the collection module; $Z_0$ is the distance from the plane containing the emission and collection modules to the reference plane; f is the focal length of the camera; and Z is the depth value of the pixel.
An embodiment of the present application also provides a control device, including a processor and a storage medium for storing a computer program, the processor being configured, when executing the computer program, to perform at least the method described above.

An embodiment of the present application also provides a storage medium for storing a computer program which, when executed, performs at least the method described above.

An embodiment of the present application also provides a processor which executes a computer program to perform at least the method described above.

The storage medium may be implemented by any type of volatile or non-volatile storage device, or a combination thereof. The non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a ferromagnetic random access memory (FRAM), a flash memory, a magnetic surface memory, an optical disc, or a compact disc read-only memory (CD-ROM); the magnetic surface memory may be disk storage or tape storage. The volatile memory may be a random access memory (RAM), used as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), synchronous static RAM (SSRAM), dynamic RAM (DRAM), synchronous dynamic RAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), SyncLink DRAM (SLDRAM), and direct Rambus RAM (DRRAM). The storage media described in the embodiments of the present invention are intended to include, but are not limited to, these and any other suitable types of memory.
In the several embodiments provided in this application, it should be understood that the disclosed system and method may be implemented in other ways. The device embodiments described above are only illustrative; for example, the division into units is only a logical functional division, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be indirect coupling or communication connection through interfaces, devices, or units, and may be electrical, mechanical, or of other forms.

The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of this embodiment.

In addition, the functional units in the embodiments of the present invention may all be integrated into one processing unit, each unit may stand alone as a unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in hardware, or in hardware plus software functional units.

A person of ordinary skill in the art will understand that all or part of the steps of the above method embodiments may be implemented by program instructions executed on related hardware; the program may be stored in a computer-readable storage medium and, when executed, performs the steps of the method embodiments. The storage medium includes removable storage devices, read-only memory (ROM), random access memory (RAM), magnetic disks, optical discs, and other media capable of storing program code.

Alternatively, if the integrated unit of the present invention is implemented as a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium. On this understanding, the technical solution of the embodiments of the present invention, in essence the part contributing over the prior art, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions that cause a computer device (a personal computer, a server, a network device, etc.) to execute all or part of the methods described in the embodiments of the present invention. The storage medium includes removable storage devices, ROM, RAM, magnetic disks, optical discs, and other media capable of storing program code.

The methods disclosed in the several method embodiments provided in this application may be combined arbitrarily without conflict to obtain new method embodiments. The features disclosed in the several product embodiments provided in this application may be combined arbitrarily without conflict to obtain new product embodiments. The features disclosed in the several method or device embodiments provided in this application may be combined arbitrarily without conflict to obtain new method or device embodiments.

The above is a further detailed description of the present invention in conjunction with specific preferred embodiments, and the specific implementation of the invention cannot be considered limited to these descriptions. For those skilled in the art to which the invention belongs, several equivalent substitutions or obvious variations with the same performance or use can be made without departing from the concept of the invention, and all of these shall be regarded as belonging to the protection scope of the invention.

Claims (10)

  1. A depth imaging method, characterized by including the following steps:
    S1: controlling a transmitting module to emit at least two time-sequenced speckle patterns toward a target object;
    S2: controlling a collection module to collect the speckle patterns reflected by the target object;
    S3: performing space-time stereo matching between the collected speckle patterns and at least two pre-stored reference speckle patterns to calculate the offset of each pixel, and calculating the depth value of each pixel from the offset.
  2. The depth imaging method of claim 1, characterized in that the discrete sub-light-source arrays in the emission module, which contains a plurality of discrete sub-light-source arrays, are controlled to turn on independently, or several synchronously, to emit at least two time-sequenced speckle patterns toward the target object.
  3. The depth imaging method of claim 1, characterized in that the beam emitted by the emission module is controlled to be deflected before at least two time-sequenced speckle patterns are emitted toward the target object.
  4. The depth imaging method of any one of claims 1-3, characterized in that normalized cross-correlation matching is used to calculate the offset of each pixel, with the specific formula:

    $$\mathrm{ncc}(x,y,d)=\frac{\sum_{i=1}^{K}\sum_{(u,v)\in\Omega(x,y)}\left(I_{i,R}(u,v)-\bar{I}_{R}\right)\left(I_{i,O}(u-d,v)-\bar{I}_{O}\right)}{\sqrt{\sum_{i=1}^{K}\sum_{(u,v)\in\Omega(x,y)}\left(I_{i,R}(u,v)-\bar{I}_{R}\right)^{2}\;\sum_{i=1}^{K}\sum_{(u,v)\in\Omega(x,y)}\left(I_{i,O}(u-d,v)-\bar{I}_{O}\right)^{2}}}$$

    where d is the disparity value; i is the image sequence index; ncc(x, y, d) is the normalized correlation between the image blocks centered at pixel (x, y) in the K time-sequenced reference speckle images $I_{i,R}$ and the image blocks centered at pixel (x − d, y) in the K time-sequenced captured speckle images $I_{i,O}$; Ω(x, y) is the neighborhood centered at (x, y); and $\bar{I}_{R}$ and $\bar{I}_{O}$ are the mean pixel gray levels over the three-dimensional windows of the reference and captured speckle images, respectively; the depth value of the pixel is calculated from the offset by triangulation, as follows:

    $$Z=\frac{b\,f\,Z_{0}}{b\,f+d\,Z_{0}}$$

    where d is the disparity value; b is the baseline length from the light source of the emission module to the camera of the collection module; $Z_0$ is the distance from the plane containing the emission and collection modules to the reference plane; f is the focal length of the camera; and Z is the depth value of the pixel.
  5. The depth imaging method of any one of claims 1-3, characterized in that the speckle patterns are collected with a forward-backward (sliding) frame acquisition scheme to calculate the depth values of the pixels.
  6. A depth imaging device, characterized by including:
    a transmitting module for emitting at least two time-sequenced speckle patterns toward a target object;
    a collection module for collecting the speckle patterns reflected by the target object; and
    a control and processor connected to the transmitting module and the collection module, respectively, configured to implement the method of any one of claims 1-5.
  7. The depth imaging device of claim 6, characterized in that the emission module includes a light source array comprising a plurality of discrete sub-light-source arrays, each sub-array being independently controlled in groups; the grouped independent control includes turning the sub-arrays on individually or several synchronously to generate the time-sequenced speckle patterns.
  8. The depth imaging device of claim 6, characterized by further including a beam deflection unit connected to the emission module for deflecting the beam emitted by the light source array of the emission module to generate the time-sequenced speckle patterns.
  9. The depth imaging device of claim 8, characterized in that the emission module includes a light source and an optical element connected in sequence, the optical element including a lens or a diffractive optical element;
    the beam deflection unit is attached to any one of the light source, the lens, or the diffractive optical element and is configured to move or deflect the light source, the lens, or the diffractive optical element in one or more directions.
  10. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method of any one of claims 1-5.
PCT/CN2020/138118 2020-04-29 2020-12-21 Depth imaging method and device and computer-readable storage medium WO2021218196A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/830,010 US20220299314A1 (en) 2020-04-29 2022-06-01 Depth imaging method and device and computer-readable storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010357591.3A 2020-04-29 2020-04-29 Depth imaging method and device and computer-readable storage medium
CN202010357591.3 2020-04-29

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/830,010 Continuation US20220299314A1 (en) 2020-04-29 2022-06-01 Depth imaging method and device and computer-readable storage medium

Publications (1)

Publication Number Publication Date
WO2021218196A1 (zh)

Family

ID=72382899

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/138118 WO2021218196A1 (zh) 2020-04-29 2020-12-21 一种深度成像方法、装置及计算机可读存储介质

Country Status (3)

Country Link
US (1) US20220299314A1 (zh)
CN (1) CN111664798B (zh)
WO (1) WO2021218196A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113793339A (zh) * 2021-11-18 2021-12-14 北京的卢深视科技有限公司 DOE detachment degree detection method, electronic device, and storage medium
CN114783041A (zh) * 2022-06-23 2022-07-22 合肥的卢深视科技有限公司 Target object recognition method, electronic device, and computer-readable storage medium

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210262787A1 (en) * 2020-02-21 2021-08-26 Hamamatsu Photonics K.K. Three-dimensional measurement device
CN111664798B (zh) 2020-04-29 2022-08-02 奥比中光科技集团股份有限公司 Depth imaging method and device and computer-readable storage medium
CN112184811B (zh) * 2020-09-22 2022-11-04 合肥的卢深视科技有限公司 Structural calibration method and device for a monocular spatial structured light system
CN112346075B (zh) * 2020-10-01 2023-04-11 奥比中光科技集团股份有限公司 Collector and light spot position tracking method

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103778643A (zh) * 2014-01-10 2014-05-07 深圳奥比中光科技有限公司 Method and device for generating target depth information in real time
US20150063674A1 (en) * 2013-08-28 2015-03-05 United Sciences, Llc Profiling a manufactured part during its service life
CN107424188A (zh) * 2017-05-19 2017-12-01 深圳奥比中光科技有限公司 Structured light projection module based on a VCSEL array light source
CN108333859A (zh) * 2018-02-08 2018-07-27 宁波舜宇光电信息有限公司 Structured light projection device and depth camera, and depth image imaging method based on the depth camera
CN109087382A (zh) * 2018-08-01 2018-12-25 宁波发睿泰科智能科技有限公司 Three-dimensional reconstruction method and three-dimensional imaging system
CN111664798A (zh) * 2020-04-29 2020-09-15 深圳奥比中光科技有限公司 Depth imaging method and device and computer-readable storage medium

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101272511B (zh) * 2007-03-19 2010-05-26 华为技术有限公司 Method and device for acquiring image depth information and image pixel information
KR20140075163A (ko) * 2012-12-11 2014-06-19 한국전자통신연구원 Pattern projecting method and apparatus using a structured light scheme
CN103247038B (zh) * 2013-04-12 2016-01-20 北京科技大学 Global image information synthesis method driven by a visual cognition model
CN103247053B (zh) * 2013-05-16 2015-10-14 大连理工大学 Accurate part positioning method based on binocular microscopic stereo vision
CN104637043B (zh) * 2013-11-08 2017-12-05 株式会社理光 Supporting pixel selection method and device, and disparity value determination method
CN104504688A (zh) * 2014-12-10 2015-04-08 上海大学 Method and system for passenger flow density estimation based on binocular stereo vision
CN105203044B (zh) * 2015-05-27 2019-06-11 珠海真幻科技有限公司 Stereo-vision three-dimensional measurement method and system using computed laser speckle as texture
CN104918035A (zh) * 2015-05-29 2015-09-16 深圳奥比中光科技有限公司 Method and system for acquiring a three-dimensional image of a target
CN108307179A (zh) * 2016-08-30 2018-07-20 姜汉龙 Method of 3D stereoscopic imaging
CN107169418A (zh) * 2017-04-18 2017-09-15 海信集团有限公司 Obstacle detection method and device
CN108171647B (zh) * 2017-11-24 2021-09-03 同济大学 Landsat 7 strip image repair method taking surface deformation into account
CN108765476B (zh) * 2018-06-05 2021-04-20 安徽大学 Polarized image registration method
CN109410207B (zh) * 2018-11-12 2023-05-02 贵州电网有限责任公司 NCC-feature-based power transmission line detection method for UAV line-patrol images
CN109655014B (zh) * 2018-12-17 2021-03-02 中国科学院上海光学精密机械研究所 VCSEL-based three-dimensional face measurement module and measurement method
CN110221273B (zh) * 2019-05-09 2021-07-06 奥比中光科技集团股份有限公司 Time-of-flight depth camera and single-frequency modulation-demodulation distance measurement method
CN110517307A (zh) * 2019-06-20 2019-11-29 福州瑞芯微电子股份有限公司 Stereo matching method based on laser speckle images implemented with convolution
CN110376602A (zh) * 2019-07-12 2019-10-25 深圳奥比中光科技有限公司 Multi-mode depth computing processor and 3D image device
CN111045029B (zh) * 2019-12-18 2022-06-28 奥比中光科技集团股份有限公司 Fused depth measurement device and measurement method

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150063674A1 (en) * 2013-08-28 2015-03-05 United Sciences, Llc Profiling a manufactured part during its service life
CN103778643A (zh) * 2014-01-10 2014-05-07 深圳奥比中光科技有限公司 Method and device for generating target depth information in real time
CN107424188A (zh) * 2017-05-19 2017-12-01 深圳奥比中光科技有限公司 Structured light projection module based on a VCSEL array light source
CN108333859A (zh) * 2018-02-08 2018-07-27 宁波舜宇光电信息有限公司 Structured light projection device and depth camera, and depth image imaging method based on the depth camera
CN109087382A (zh) * 2018-08-01 2018-12-25 宁波发睿泰科智能科技有限公司 Three-dimensional reconstruction method and three-dimensional imaging system
CN111664798A (zh) * 2020-04-29 2020-09-15 深圳奥比中光科技有限公司 Depth imaging method and device and computer-readable storage medium

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113793339A (zh) * 2021-11-18 2021-12-14 北京的卢深视科技有限公司 DOE detachment degree detection method, electronic device, and storage medium
CN113793339B (zh) * 2021-11-18 2022-08-26 合肥的卢深视科技有限公司 DOE detachment degree detection method, electronic device, and storage medium
CN114783041A (zh) * 2022-06-23 2022-07-22 合肥的卢深视科技有限公司 Target object recognition method, electronic device, and computer-readable storage medium
CN114783041B (zh) * 2022-06-23 2022-11-18 合肥的卢深视科技有限公司 Target object recognition method, electronic device, and computer-readable storage medium

Also Published As

Publication number Publication date
CN111664798B (zh) 2022-08-02
CN111664798A (zh) 2020-09-15
US20220299314A1 (en) 2022-09-22

Similar Documents

Publication Publication Date Title
WO2021218196A1 (zh) 一种深度成像方法、装置及计算机可读存储介质
CN108957911B (zh) 散斑结构光投影模组及3d深度相机
CN110596722B (zh) 直方图可调的飞行时间距离测量系统及测量方法
CN110596721B (zh) 双重共享tdc电路的飞行时间距离测量系统及测量方法
US9826216B1 (en) Systems and methods for compact space-time stereo three-dimensional depth sensing
WO2022262332A1 (zh) 一种距离测量装置与相机融合系统的标定方法及装置
US9501833B2 (en) Method and system for providing three-dimensional and range inter-planar estimation
CN110596725B (zh) 基于插值的飞行时间测量方法及测量系统
CN106548489B (zh) 一种深度图像与彩色图像的配准方法、三维图像采集装置
CN110596724B (zh) 动态直方图绘制飞行时间距离测量方法及测量系统
WO2021238214A1 (zh) 一种三维测量系统、方法及计算机设备
CN111856433B (zh) 一种距离测量系统及测量方法
JP2009300268A (ja) 3次元情報検出装置
CN110824490B (zh) 一种动态距离测量系统及方法
CN110596723A (zh) 动态直方图绘制飞行时间距离测量方法及测量系统
JP2015524050A (ja) ターゲット物体の表面の深さをプロファイリングするための装置及び方法
US10317684B1 (en) Optical projector with on axis hologram and multiple beam splitter
CN110716189A (zh) 一种发射器及距离测量系统
CN114898038A (zh) 一种图像重建方法、装置及设备
CN217085782U (zh) 一种结构光三维成像模块及深度相机
CN114705131A (zh) 一种用于3d测量的可定位多线扫描产生方法和系统
CN103697825A (zh) 一种超分辨3d激光测量系统及方法
CN211148903U (zh) 一种发射器及距离测量系统
EP3913754A1 (en) Light source for structured light, structured light projection apparatus and system
CN203687882U (zh) 一种超分辨3d激光测量系统

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20933102

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20933102

Country of ref document: EP

Kind code of ref document: A1