CN108881717A - Depth imaging method and system - Google Patents

Depth imaging method and system

Info

Publication number
CN108881717A
Authority
CN
China
Prior art keywords
light
image
depth
target structure
reference structure
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810618776.8A
Other languages
Chinese (zh)
Other versions
CN108881717B (en)
Inventor
许星
王兆民
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Orbbec Co Ltd
Original Assignee
Shenzhen Orbbec Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Orbbec Co Ltd filed Critical Shenzhen Orbbec Co Ltd
Priority to CN201810618776.8A priority Critical patent/CN108881717B/en
Publication of CN108881717A publication Critical patent/CN108881717A/en
Application granted granted Critical
Publication of CN108881717B publication Critical patent/CN108881717B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B17/00Details of cameras or camera bodies; Accessories therefor
    • G03B17/48Details of cameras or camera bodies; Accessories therefor adapted for combination with other photographic or optical apparatus
    • G03B17/54Details of cameras or camera bodies; Accessories therefor adapted for combination with other photographic or optical apparatus with projector

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The present invention relates to the field of optical measurement and fabrication and provides a depth imaging method and system. The depth imaging method includes: controlling a projector to emit a structured-light beam toward a target object; obtaining a target structured-light image of the target object captured by a light-field camera, the target structured-light image being formed by the structured-light beam illuminating the target object; and forming a first depth image from the target structured-light image and a reference structured-light image. Because the depth imaging method and system of embodiments of the present invention use a light-field camera in cooperation with a projector, the imaging system is compact and small in size.

Description

Depth imaging method and system
Technical field
The invention belongs to the field of optical measurement and fabrication, and in particular relates to a depth imaging method and system.
Background technique
Depth imaging based on structured-light techniques has received extensive attention and developed rapidly in recent years. Structured-light depth imaging using depth cameras is applied in devices such as televisions, robots, and mobile terminals to realize functions such as motion-sensing interaction, 3D modeling, obstacle avoidance, and face recognition.
However, whether current structured-light depth imaging uses a monocular structured-light depth camera or a binocular structured-light depth camera, the physical distance between the projection module and the acquisition camera, or between multiple acquisition cameras, is relatively large, so the imaging system still faces the technical problem of being insufficiently compact and bulky.
Summary of the invention
In view of this, embodiments of the present invention provide a depth imaging method and system to solve the problem that existing imaging systems are insufficiently compact and bulky.
A first aspect of the present invention provides a depth imaging method, including:
controlling a projector to emit a structured-light beam toward a target object;
obtaining a target structured-light image of the target object captured by a light-field camera, the target structured-light image being formed by the structured-light beam illuminating the target object; and
forming a first depth image from the target structured-light image and a reference structured-light image.
A second aspect of the present invention provides a depth imaging system, including a projector, a light-field camera, and a processing device, the projector and the light-field camera being arranged along a baseline;
the projector is configured to emit a structured-light beam toward a target object;
the light-field camera is configured to capture a target structured-light image of the target object, the target structured-light image being formed by the structured-light beam illuminating the target object;
the processing device is configured to control the projector to emit the structured-light beam toward the target object, control the light-field camera to capture a target structured-light light-field image of the target object, obtain the target structured-light image of the target object captured by the light-field camera, and form a first depth image from the target structured-light image and a reference structured-light image.
A third aspect of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method according to the first aspect.
Because the depth imaging method and system of the embodiments of the present invention use a light-field camera in cooperation with a projector, the imaging system is compact, small in size, and easier to integrate.
Detailed description of the invention
In order to describe the technical solutions in the embodiments of the present invention more clearly, the drawings needed for the embodiments or for the description of the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a structural schematic diagram of a depth imaging system provided by an embodiment of the present invention;
Fig. 2 is a flowchart of a depth imaging method provided by an embodiment of the present invention;
Fig. 3 is a flowchart of another depth imaging method provided by an embodiment of the present invention;
Fig. 4 is a flowchart of another depth imaging method provided by an embodiment of the present invention;
Fig. 5 is a flowchart of another depth imaging method provided by an embodiment of the present invention;
Fig. 6 is a flowchart of another depth imaging method provided by an embodiment of the present invention;
Fig. 7 is a flowchart of another depth imaging method provided by an embodiment of the present invention.
Specific embodiment
In the following description, specific details such as particular system structures and techniques are set forth for purposes of illustration rather than limitation, in order to provide a thorough understanding of the embodiments of the present invention. However, it will be clear to those skilled in the art that the present invention may also be practiced in other embodiments without these specific details. In other cases, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so that unnecessary detail does not obscure the description of the present invention.
Fig. 1 is a schematic diagram of a depth imaging system provided by an embodiment of the present invention. The depth imaging system 1 includes a light-field camera 10, a structured-light projector 20, and a processing device 40, and is used to perform depth imaging of a target object 30.
The main components of the light-field camera 10 include an image sensor 101, an optical filter (not shown in Fig. 1), a microlens array (MLA) 102, and a lens 103. The image sensor 101 may be a charge-coupled device (CCD) image sensor, a complementary metal-oxide-semiconductor (CMOS) image sensor, or the like. The optical filter may be a Bayer filter, an infrared filter, or the like.
According to the distances between the microlens array 102, the image sensor 101, and the lens 103, light-field cameras can be divided into conventional light-field cameras, such as the light-field camera products of Lytro, and focused light-field cameras, such as the light-field camera products of Raytrix. The present invention is described using a conventional light-field camera as an example, and it should be understood that any type of light-field camera is applicable to the present invention.
In a conventional light-field camera, the microlens array 102 is located on the focal plane of the lens 103, and the image sensor 101 is located on the focal plane of the microlens array 102. The light-field camera 10 differs from an ordinary camera in that it contains a microlens array for recording the directional information of the light; on this basis, multi-view imaging, digitally refocused imaging, digital-zoom imaging, and other effects can further be realized. The specific principles are not repeated here.
For ease of the following description, the raw image directly acquired by the pixels of the image sensor 101 in the light-field camera 10 is called a light-field image, for example a structured-light light-field image; an image of a particular viewpoint obtained by processing the raw image is called a multi-view image, which includes, for example, the image obtained by summing the pixel array corresponding to the microlens array 102; an image at a different focal length obtained by digitally processing the raw image is called a digital-zoom image; and an image at a different image plane obtained by digitally processing the raw image is called a digitally refocused image.
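To make the relationship between these image types concrete, the following sketch shows one way a raw frame from a conventional (plenoptic-1.0) light-field camera could be split into sub-aperture (multi-view) images and collapsed into a summed 2D image. It is only an illustration of the idea: the function names, the row-major microlens layout, and the square block of pixels per microlens are assumptions, not the actual sensor format.

import numpy as np

def extract_views(raw, mla_rows, mla_cols, block):
    # raw: 2D array of shape (mla_rows*block, mla_cols*block); each microlens
    # is assumed to cover a block x block patch of pixels (hypothetical layout).
    lf = raw.reshape(mla_rows, block, mla_cols, block)
    # Reorder to (angular_u, angular_v, spatial_row, spatial_col): views[u, v]
    # is the 2D image of one viewpoint, built from the pixel at position (u, v)
    # under every microlens.
    return lf.transpose(1, 3, 0, 2)

def summed_image(raw, mla_rows, mla_cols, block):
    # Summing all pixels under each microlens gives one conventional 2D image.
    lf = raw.reshape(mla_rows, block, mla_cols, block)
    return lf.sum(axis=(1, 3))

For example, extract_views(raw, 400, 600, 10) would yield a (10, 10, 400, 600) stack whose slice views[5, 5] is the central viewpoint; all numbers are illustrative.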
It should be understood that any type of light-field camera can be used in the present invention, for example a light-field camera array formed by multiple camera units, or a light-field camera in which a mask replaces the microlens array 102.
The main components of the structured-light projector 20 include a light source 201 and an optical assembly 202; the optical assembly 202 modulates the beam emitted by the light source 201 and then projects a structured-light beam outward. The light source 201 may be a laser diode or a semiconductor laser, and may also be an edge-emitting laser, a vertical-cavity surface-emitting laser, or a corresponding laser array; the wavelength of the light source may be infrared, ultraviolet, or the like. The optical assembly 202 may be a refractive optical element, a diffractive optical element, or a combination of the two; for example, in one embodiment of the present invention, the optical assembly 202 includes a refractive lens for converging the laser beam and a diffractive optical element that splits the converged beam by diffraction to form the structured light. The structured-light beam may carry a pattern such as speckles, spots, stripes, or another two-dimensional pattern.
It should be understood that when the wavelength projected by the structured-light projector 20 is λ, a corresponding filter that passes only light of wavelength λ generally needs to be provided in the light-field camera 10 in order to improve image quality.
The light-field camera 10 and the structured-light projector 20 are placed along a baseline direction, for example along the x direction as shown in Fig. 1; their optical axes may be parallel or set at a certain angle. In one embodiment of the present invention, the optical axes of the light-field camera 10 and the structured-light projector 20 are arranged in parallel, which simplifies the structured-light depth imaging algorithm.
Because the imaging system of the present invention uses a light-field camera in cooperation with a projector, the volume of the entire imaging system is reduced without increasing cost; the system is compact and small in size and can be better integrated into other devices, such as televisions, robots, and mobile terminals.
The processing device 40 is used to control the light-field camera 10 and the structured-light projector 20 and also to perform data-processing tasks, for example receiving the raw data from the light-field camera 10 and performing multi-view imaging, digital zooming, depth-image computation, and other processing. The processing device 40 may include one or more processors and one or more memories; in some embodiments of the present invention, at least some of the processors and memories may also be arranged in the light-field camera 10 and/or the structured-light projector 20. A processor may include one or a combination of a digital signal processor (DSP), a multimedia application processor (MAP), a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), and the like; a memory may include one or a combination of random-access memory (RAM), read-only memory (ROM), flash memory, and the like. The control and data-processing instructions executed by the processing device may be stored in the memory as software or firmware and called by the processor when needed, may be hard-wired into a circuit to form a dedicated circuit (or dedicated processor) that executes the corresponding instructions, or may be realized by a combination of software and dedicated circuits. The processing device 40 may also include an input/output interface and/or a network interface supporting network communication. In some embodiments of the present invention, the processed data are transmitted through an interface to other units 50 in other devices or systems, such as a display unit or an external terminal device. In some other embodiments of the present invention, the display unit may also be combined with one or more processors of the processing device.
Based on the depth imaging system shown in Fig. 1, the present invention can implement the following three depth imaging methods.
One. Monocular structured-light depth imaging
As shown in Fig. 2, an embodiment of the present invention provides a depth imaging method, which is used for performing depth imaging of a target object and is executed by the processing device 40. As shown in Fig. 2, the depth imaging method includes steps S201 to S203.
S201: control the projector to emit a structured-light beam toward the target object.
Specifically, under the control of the processing device 40, as shown in Fig. 1, the structured-light projector 20 projects a structured-light beam toward the target object 30 in space. In one embodiment of the present invention, the structured-light beam is an infrared speckle-pattern beam.
S202: obtain the target structured-light image of the target object captured by the light-field camera, the target structured-light image being formed by the structured-light beam illuminating the target object.
Specifically, while the processing device 40 controls the structured-light projector 20 to project the structured-light beam toward the target object 30 in space, it controls the light-field camera 10 to capture in real time the target structured-light image reflected back by the object in space, thereby obtaining the target structured-light image of the target object. It should be understood that the raw image directly captured by the pixels of the image sensor in the light-field camera 10 actually contains both the intensity and the direction information of the beam; this raw image can be further processed to obtain the required target structured-light image, which may be the target structured-light light-field image itself, a target structured-light multi-view image, a target structured-light digital-zoom image, a target structured-light digitally refocused image, or the like.
S203: form a first depth image from the target structured-light image and a reference structured-light image.
Specifically, the target structured-light image is matched against the reference structured-light image to obtain the disparity between image feature points, and depth values are computed from the disparity by structured-light triangulation to form the first depth image.
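A minimal sketch of this matching and triangulation step is given below, assuming the projector and camera are aligned along the x axis so that the pattern shifts only horizontally. The brute-force block matching, the window and search-range values, and the function names are illustrative assumptions; a practical implementation would add subpixel refinement and outlier filtering.

import numpy as np

def disparity_map(target, reference, block=9, max_disp=64):
    # Brute-force block matching along the baseline (x) direction using the
    # sum of absolute differences; returns the per-pixel horizontal offset.
    h, w = target.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.float32)
    tgt = target.astype(np.float32)
    ref = reference.astype(np.float32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = tgt[y - half:y + half + 1, x - half:x + half + 1]
            best_cost, best_d = np.inf, 0
            for d in range(-max_disp, max_disp + 1):
                xr = x + d
                if xr - half < 0 or xr + half >= w:
                    continue
                cand = ref[y - half:y + half + 1, xr - half:xr + half + 1]
                cost = np.abs(patch - cand).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

def depth_from_disparity(disp, z0, baseline, focal_px):
    # Structured-light triangulation against a reference plane at distance z0
    # (same length unit as baseline); focal_px is the focal length in pixels.
    # Derived from disp = baseline * focal_px * (1/z - 1/z0), an assumed sign
    # convention.
    denom = disp / (baseline * focal_px) + 1.0 / z0
    with np.errstate(divide='ignore'):
        z = 1.0 / denom
    return np.where(np.isfinite(z), z, np.nan)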
In the embodiment of the present invention, the reference structured-light image is acquired in advance during a calibration phase. In one embodiment, a reference screen, such as a flat plate, is placed at a known distance from the imaging system, and the structured-light projector 20 is synchronously controlled to project the structured-light pattern while the light-field camera 10 captures the reference structured-light image. The reference structured-light image may be a reference structured-light light-field image, a reference structured-light multi-view image, a reference structured-light digital-zoom image, or a reference structured-light digitally refocused image.
In some other embodiments of the present invention, the reference structured-light image can also be captured by another camera, for example a 2D camera with higher resolution and a larger field of view. The advantage of using an ordinary 2D image as the reference structured-light image is that the structured-light pattern can be recorded more completely and clearly.
Depending on the type of the target structured-light image, step S203, forming the first depth image from the target structured-light image and the reference structured-light image, roughly divides into the following cases; in each case, the reference structured-light image may be of any of the types listed above.
In some embodiments of the present invention, the target structured-light image is the raw image captured by the light-field camera, i.e. the target structured-light light-field image. Although the raw image has a relatively high resolution, it does not reflect the fine features of the structured-light pattern well, so the accuracy of the disparity obtained by matching is limited.
In some embodiments of the present invention, the target structured-light image is a 2D image obtained by further processing the raw image, i.e. a target structured-light multi-view image, for example the 2D image obtained by summing the pixels corresponding to each lens unit of the microlens array, or the 2D image of a certain viewpoint formed by collecting the pixel at the same position within the pixel block corresponding to each lens unit of the microlens array. Compared with matching the raw image directly, the multi-view image is a dimensionality-reduced version of the raw image with lower resolution, so the matching algorithm runs faster and the memory requirement is also lower. It should be noted that when a 2D image of a certain viewpoint is used, the intrinsic and extrinsic calibration between the projector and the light-field camera needs to be performed for that same viewpoint image.
In some embodiments of the present invention, the target structured-light image also includes 2D images obtained after digital zooming or digital refocusing, i.e. target structured-light digital-zoom or digitally refocused images. In these embodiments, the target structured-light digital-zoom or digitally refocused image is matched with the reference structured-light image to compute the first depth image. Because digital refocusing or zooming achieves a sharp image of the target object, this further improves imaging accuracy.
On the basis of the above embodiments, as shown in Fig. 3, the following steps are further included after step S203:
S204: detect a target region of interest in the first depth image.
S205: digitally zoom or refocus the target structured-light image according to the depth information of the target region of interest, to obtain a target structured-light digital-zoom image or a target structured-light digitally refocused image.
S206: form a second depth image from the target structured-light digital-zoom image or the target structured-light digitally refocused image and a reference structured-light image.
In this embodiment, any target structured-light image, such as the target structured-light light-field image, multi-view image, digital-zoom image, or digitally refocused image, is first matched with the reference structured-light image to compute the first depth image. The first depth image is then processed, for example by image background segmentation, to determine a target region of interest, such as a human-body region or an object region, and the target structured-light light-field image is digitally zoomed or refocused according to the depth information of the target region of interest, i.e. the object in the target region of interest is imaged sharply. The depth information here may be the depth value of a certain point in the target region of interest, or the average depth value of the target region of interest, etc. Finally, the digitally zoomed or refocused target structured-light image is matched with the reference structured-light image to obtain the second depth image. It should be understood that because digital zooming or refocusing yields a sharp image of the target object and thereby improves the matching accuracy, the second depth image has higher imaging accuracy than the first depth image.
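One way to picture the digital refocusing of step S205 is the shift-and-add sketch below over the sub-aperture views (for example those produced by the extract_views sketch earlier): each view is shifted in proportion to its angular coordinate and the results are averaged, so that objects at the chosen depth line up sharply. The mapping from the ROI depth to the shift slope, the integer-pixel shifts, and the function names are assumptions for illustration; a real pipeline would calibrate that mapping and interpolate sub-pixel shifts.

import numpy as np

def refocus(views, slope):
    # views: (U, V, H, W) stack of sub-aperture images; slope selects the
    # refocus plane (0 keeps the original focal plane). Each view is shifted
    # in proportion to its angular offset from the centre and then averaged.
    U, V, H, W = views.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            dy = int(round(slope * (u - cu)))
            dx = int(round(slope * (v - cv)))
            out += np.roll(views[u, v], shift=(dy, dx), axis=(0, 1))
    return out / (U * V)

# Hypothetical use with an ROI depth: the slope for a given depth would come
# from a calibrated depth-to-slope curve, e.g. slope = depth_to_slope(roi_depth).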
In this embodiment, because the focal length (digital zoom) or the image-plane position (digital refocusing) of the digital-zoom or refocused image has changed, the focal length or image-plane position in the corresponding depth-computation algorithm also needs to be adjusted accordingly when computing the second depth image; that is, the depth-computation algorithm used for the second depth image must be adapted relative to the one used for the first depth image in order to achieve high-accuracy depth computation.
On the basis of the embodiment shown in Fig. 3, optionally, step S206, forming the second depth image from the target structured-light digital-zoom image or the target structured-light digitally refocused image and the reference structured-light image, includes: forming the second depth image from the target structured-light digital-zoom image or the target structured-light digitally refocused image and a reference structured-light digital-zoom or digitally refocused image.
The method of forming the reference structured-light digital-zoom or digitally refocused image is similar to the method of forming the target structured-light digital-zoom image or target structured-light digitally refocused image in the embodiment shown in Fig. 3, and is not repeated here.
In this embodiment, any target structured-light image, such as the target structured-light light-field image or multi-view image, is first matched with any reference structured-light image, such as the reference structured-light light-field image or multi-view image, to obtain the first depth image. The first depth image is then processed, for example by image background segmentation, to determine the target region of interest, such as a human-body region or an object region, and the target structured-light light-field image is digitally refocused or zoomed according to the depth information of the target region of interest, while the reference structured-light light-field image is digitally zoomed or refocused at the same time, for example based on the distance at which the reference screen was placed in the calibration phase, or based on the depth information of the target region of interest. Finally, the target structured-light digital-zoom or refocused image is matched with the reference structured-light digital-zoom or refocused image to obtain the second depth image. Here, because digital zooming or refocusing produces sharp images of the target object or the reference screen, or unifies the focal lengths of the target-object image and the calibration image, the matching accuracy is further improved.
On the basis of the embodiment shown in Fig. 3, optionally, dimensionality reduction can additionally be applied when computing the first depth image to obtain a coarse first depth image; in this case, because the first depth image is computed with dimensionality reduction, the second depth image will also have a higher resolution than the first depth image.
It should be understood that the above embodiments only illustrate part of the functions of the depth imaging system of the present invention. With the depth imaging system of the present invention, the depth-computation mode can be adapted to different application requirements: for example, in an application where the depth-image accuracy requirement is not high, the target structured-light multi-view image can be used directly for depth computation, while in an application with a high depth-image accuracy requirement, the depth image is computed using the target structured-light multi-view image or light-field image in combination with the digital-zoom image.
Compared with a traditional monocular structured-light depth imaging system composed of a structured-light projector and an ordinary 2D camera, the monocular structured-light depth imaging system using a light-field camera described in the above embodiments has obvious advantages. On the one hand, it is more versatile: it can realize fast, low-accuracy depth-image acquisition as well as high-accuracy depth-image acquisition. On the other hand, through detection of the target region of interest and digital zooming or refocusing, the depth imaging system of the present invention achieves higher accuracy and can image sharply in far-field situations, solving the problem that the accuracy of conventional depth cameras declines sharply as distance increases.
Two. Multi-view structured-light depth imaging
Multi-view structured-light depth imaging is an extension of binocular structured-light depth imaging; for example, trinocular structured-light depth imaging can be regarded as a simple superposition of two binocular structured-light depth imaging setups, so the following explanation uses binocular structured-light depth imaging as an example.
As shown in Fig. 4, an embodiment of the present invention provides a depth imaging method, which is used for performing depth imaging of a target object and is executed by the processing device 40 shown in Fig. 1. As shown in Fig. 4, the depth imaging method includes steps S401 to S403.
S401: control the projector to emit a structured-light beam toward the target object.
Specifically, under the control of the processing device 40, as shown in Fig. 1, the structured-light projector 20 projects a structured-light beam toward the target object 30 in space. In one embodiment of the present invention, the structured-light beam is an infrared speckle-pattern beam.
S402: obtain the target structured-light light-field image of the target object captured by the light-field camera, the target structured-light light-field image being formed by the structured-light beam illuminating the target object.
Specifically, while the processing device 40 controls the structured-light projector 20 to project the structured-light beam toward the target object 30 in space, it controls the light-field camera 10 to capture in real time the target structured-light light-field image reflected back by the object in space, thereby obtaining the target structured-light light-field image of the target object.
S403: compute target structured-light multi-view images of at least two different viewpoints from the target structured-light light-field image, and form a first depth image from the at least two target structured-light multi-view images.
Specifically, the processing device 40 computes the target structured-light multi-view images of at least two different viewpoints from the target structured-light light-field image.
In the embodiment of the present invention, the processing device 40 matches two of the target structured-light multi-view images to obtain the disparity between image feature points, and computes depth values from the disparity by structured-light triangulation to form the first depth image.
It should be noted that computing depth values from the disparity requires the relative position relationship between the different viewpoints and the camera intrinsic parameters corresponding to the multi-view images to be obtained in advance; similar to a binocular-vision algorithm, which needs the intrinsic and extrinsic parameters of the left and right cameras, a calibration algorithm such as Zhang Zhengyou's calibration method can be used in advance to obtain the intrinsic and extrinsic parameters corresponding to the different viewpoints, and these parameters are saved in memory in advance and called when the processor computes the depth values.
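As a sketch of this multi-view computation, two sub-aperture views could be matched with the same block-matching routine shown earlier and converted to depth with the classic stereo relation z = f·b/d, where b is the calibrated millimetre-scale baseline between the two viewpoints and f the focal length in pixels. The reuse of the hypothetical disparity_map and the parameter values are assumptions for illustration.

import numpy as np

def stereo_depth(view_left, view_right, baseline_mm, focal_px,
                 block=9, max_disp=16):
    # Matches two sub-aperture views (hypothetical disparity_map from the
    # earlier sketch) and triangulates: z = focal_px * baseline_mm / disparity.
    disp = np.abs(disparity_map(view_left, view_right,
                                block=block, max_disp=max_disp))
    disp = np.where(disp < 1e-6, np.nan, disp)  # mask zero disparity
    return focal_px * baseline_mm / disp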
It should be understood that in some embodiments multi-view structured-light depth imaging needs neither a reference structured-light image nor even the projection of the structured-light projector: as long as the target object has sufficient texture and its texture image is captured by the light-field camera, a depth image can be computed in the same way. Therefore, the far-field imaging range of multi-view structured-light depth imaging is larger than that of monocular structured-light imaging.
Compared with a traditional binocular structured-light depth imaging system, the embodiment of the present invention performs depth computation between different viewpoints within the light-field camera. Because the relative position offset between viewpoints, i.e. the baseline, can reach the millimeter scale, depth imaging can be performed on objects as close as about 10 centimeters or even closer, which a traditional binocular structured-light depth imaging system cannot achieve. In addition, the imaging cameras in a traditional binocular structured-light depth imaging system exist separately and are connected to each other by a bracket, which can deform under the influence of heat or physical impact and eventually degrade the quality of the depth image, whereas the multiple images in the present invention are provided by a single camera, which avoids the deformation problem of traditional systems and can provide stable depth images to a large extent.
On the basis of the above embodiments, as shown in Fig. 5, the following steps are further included after step S403:
S404: detect a target region of interest in the first depth image.
S405: digitally zoom or refocus the target structured-light light-field image according to the depth information of the target region of interest, to obtain a target structured-light digital-zoom image or a target structured-light digitally refocused image.
S406: form a second depth image from the target structured-light digital-zoom image or the target structured-light digitally refocused image and a reference structured-light image.
In the embodiment of the present invention, the reference structured-light image is acquired in advance during a calibration phase. In one embodiment, a reference screen, such as a flat plate, is placed at a known distance from the imaging system, and the structured-light projector 20 is synchronously controlled to project the structured-light pattern while the light-field camera 10 captures the reference structured-light image. The reference structured-light image may be a reference structured-light light-field image, a reference structured-light multi-view image, a reference structured-light digital-zoom image, or a reference structured-light digitally refocused image.
In this embodiment, after the first depth image is computed, it is processed, for example by image background segmentation, to determine a target region of interest, such as a human-body region or an object region, and the target structured-light light-field image is digitally zoomed or refocused according to the depth information of the target region of interest, i.e. the object in the target region of interest is imaged sharply. The depth information here may be the depth value of a certain point in the target region of interest, or the average depth value of the target region of interest, etc. Finally, the digitally zoomed or refocused target structured-light image is matched with the reference structured-light image to obtain the second depth image. It should be understood that because digital zooming or refocusing yields a sharp image of the target object and thereby improves the matching accuracy, the second depth image has higher imaging accuracy than the first depth image.
In this embodiment, because the focal length (digital zoom) or the image-plane position (digital refocusing) of the digital-zoom or refocused image has changed, the focal length or image-plane position in the corresponding depth-computation algorithm also needs to be adjusted accordingly when computing the second depth image; that is, the depth-computation algorithm used for the second depth image must be adapted relative to the one used for the first depth image in order to achieve high-accuracy depth computation.
On the basis of the embodiment shown in Fig. 5, optionally, step S406, forming the second depth image from the target structured-light digital-zoom image or the target structured-light digitally refocused image and the reference structured-light image, includes: forming the second depth image from the target structured-light digital-zoom image or the target structured-light digitally refocused image and a reference structured-light digital-zoom or digitally refocused image.
In this embodiment, matching is first performed to obtain the first depth image; the first depth image is then processed, for example by image background segmentation, to determine the target region of interest, such as a human-body region or an object region, and the target structured-light light-field image is digitally refocused or zoomed according to the depth information of the target region of interest, while the reference structured-light light-field image is digitally zoomed or refocused at the same time, for example based on the distance at which the reference screen was placed in the calibration phase, or based on the depth information of the target region of interest; finally, the digitally refocused target structured-light image and the digitally refocused reference structured-light image are matched to obtain the second depth image. Here, because digital zooming or refocusing produces sharp images of the target object or the reference screen, or unifies the focal lengths of the target-object image and the calibration image, the matching accuracy is further improved.
On the basis of the embodiment shown in Fig. 5, optionally, dimensionality reduction can additionally be applied when computing the first depth image to obtain a coarse first depth image; in this case, because the first depth image is computed with dimensionality reduction, the second depth image will also have a higher resolution than the first depth image.
Three. Fused depth imaging
As shown in Fig. 6, an embodiment of the present invention provides a depth imaging method, which is used for performing depth imaging of a target object and is executed by the processing device 40 shown in Fig. 1. As shown in Fig. 6, the depth imaging method includes steps S601 to S606.
S601: control the projector to emit a structured-light beam toward the target object.
Specifically, under the control of the processing device 40, as shown in Fig. 1, the structured-light projector 20 projects a structured-light beam toward the target object 30 in space. In one embodiment of the present invention, the structured-light beam is an infrared speckle-pattern beam.
S602: obtain the target structured-light light-field image of the target object captured by the light-field camera, the target structured-light light-field image being formed by the structured-light beam illuminating the target object.
Specifically, while the processing device 40 controls the structured-light projector 20 to project the structured-light beam toward the target object 30 in space, it controls the light-field camera 10 to capture in real time the target structured-light light-field image reflected back by the object in space, thereby obtaining the target structured-light light-field image of the target object.
S603: compute target structured-light multi-view images of at least two different viewpoints from the target structured-light light-field image.
S604: form a first depth image from the target structured-light light-field image or the target structured-light multi-view image and a reference structured-light image.
In the embodiment of the present invention, the reference structured-light image is acquired in advance during a calibration phase. In one embodiment, a reference screen, such as a flat plate, is placed at a known distance from the imaging system; with reference to Fig. 1, the structured-light projector 20 is synchronously controlled to project the structured-light pattern while the light-field camera 10 captures the reference structured-light image. The reference structured-light image may be a reference structured-light light-field image, a reference structured-light multi-view image, a reference structured-light digital-zoom image, or a reference structured-light digitally refocused image.
Forming the first depth image from the target structured-light light-field image or the target structured-light multi-view image and the reference structured-light image includes:
matching the target structured-light light-field image or the target structured-light multi-view image with the reference structured-light image to obtain the disparity between image feature points, and computing depth values from the disparity by structured-light triangulation to form the first depth image.
S605: form a second depth image from at least two of the target structured-light multi-view images.
Specifically, two of the target structured-light multi-view images are matched to obtain the disparity between image feature points, and depth values are computed from the disparity by structured-light triangulation to form the second depth image.
In the embodiment of the present invention, steps S604 and S605 may be performed simultaneously or one after the other; their temporal order is not specifically limited.
S606: fuse the first depth image and the second depth image to obtain a third depth image.
Specifically, the first depth image and the second depth image are fused using a weighting algorithm to obtain the third depth image; or the first depth image and the second depth image are fused using a MAP-MRF algorithm to obtain the third depth image.
In the embodiment of the present invention, because the first depth image is based on the monocular structured-light depth imaging principle, its accuracy is relatively high compared with multi-view structured-light depth imaging, but its measurement range is limited by the baseline; the baseline of multi-view structured-light depth imaging can reach the millimeter scale, so closer objects can be measured and depth imaging at long distances can also be realized, which gives the second depth image obtained by that principle, as described above, a larger depth imaging range, although its accuracy decreases because the baseline is smaller. In this embodiment, fusing the first depth image with the second depth image therefore yields a depth image with both a large measurement range and high accuracy.
In one embodiment of the present invention, the fusion of the first depth image and the second depth image into the third depth image is performed with a weighting algorithm. Let D1(u, v), D2(u, v), and D3(u, v) denote the first, second, and third depth images at pixel (u, v), and let a1(u, v) and a2(u, v) denote the confidence weights of the depth values of the first and second depth images at that pixel. The third depth image can then be computed by the following formula:
D3(u, v) = [D1(u, v)·a1(u, v) + D2(u, v)·a2(u, v)] / [a1(u, v) + a2(u, v)].
The confidence weights can be set in various ways. For example, for the first depth image, the depth values at short distances, such as less than 0.2 m, and at relatively long distances, such as greater than 4 m, are more reliable, so such depth values are given a larger weight factor; for the second depth image, the depth values in the intermediate region, such as 0.2 m to 4 m, are more reliable, so depth values in the intermediate region are given a larger weight factor. In addition, the weight factors can be set with the help of other parameters; for example, when setting the weight factor of a pixel, the depth values of the surrounding pixels can also be taken into account: a smoothness factor is computed from the surrounding depth values and the weight factor is estimated from the smoothness factor.
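Applied per pixel, the weighting rule described above could look like the sketch below; the distance thresholds of 0.2 m and 4 m come from the example in the text, while the 0.8/0.2 weight values and the function name are illustrative assumptions.

import numpy as np

def fuse_weighted(d1, d2, near=0.2, far=4.0):
    # d1: first depth image, trusted outside the [near, far] band in the
    # example above; d2: second depth image, trusted inside that band.
    a1 = np.where((d1 < near) | (d1 > far), 0.8, 0.2)
    a2 = np.where((d2 >= near) & (d2 <= far), 0.8, 0.2)
    return (d1 * a1 + d2 * a2) / (a1 + a2)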
In another embodiment of the present invention, the fusion of the first depth image and the second depth image into the third depth image is treated as a MAP-MRF problem: a Markov random field (MRF) is used to model the observations, i.e. the first and second depth images, together with the estimate, i.e. the third depth image, and each pixel value of the third depth image is solved for by maximizing the posterior probability (MAP).
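As a toy illustration of the MAP-MRF idea, and not the patent's specific model or solver, the fusion can be written as a Gaussian MRF: quadratic data terms tie each pixel to the two observed depth maps with the confidence weights, and a quadratic pairwise term encourages neighbouring pixels to agree. The MAP estimate of that model minimises a quadratic energy and can be approached with simple Jacobi iterations, as sketched below; the smoothness weight, iteration count, and function name are assumptions.

import numpy as np

def fuse_map_mrf(d1, d2, a1, a2, lam=1.0, iters=200):
    # Minimises sum_p [a1*(d-d1)^2 + a2*(d-d2)^2] + lam * sum_{p~q} (d_p-d_q)^2
    # over 4-neighbour pairs; each Jacobi update sets d_p to the weighted mean
    # of its observations and neighbours. np.roll wraps at the image border,
    # which is acceptable for a sketch.
    d = (d1 * a1 + d2 * a2) / (a1 + a2)  # weighted average as initialisation
    for _ in range(iters):
        nb = (np.roll(d, 1, 0) + np.roll(d, -1, 0) +
              np.roll(d, 1, 1) + np.roll(d, -1, 1))
        d = (a1 * d1 + a2 * d2 + lam * nb) / (a1 + a2 + 4.0 * lam)
    return d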
As shown in Fig. 7, an embodiment of the present invention provides another depth imaging method, which is used for performing depth imaging of a target object and is executed by the processing device 40 shown in Fig. 1. As shown in Fig. 7, the depth imaging method includes steps S701 to S707.
S701: obtain the target structured-light light-field image of the target object captured by the light-field camera, the target structured-light light-field image being formed by natural light or the structured-light beam of the projector illuminating the target object.
In one embodiment of the present invention, under the control of the processing device 40, the projector 20 projects a structured-light pattern, such as an infrared speckle pattern, into space; meanwhile, the processing device 40 controls the light-field camera 10 to capture the target structured-light light-field image reflected back by the object in space.
In another embodiment of the present invention, the target structured-light light-field image can also be acquired without structured-light projection, obtaining the depth image by the passive binocular principle: under natural-light illumination, the processing device 40 controls the light-field camera 10 to capture the target light-field image reflected back by the object in space.
S702: compute target structured-light multi-view images of at least two different viewpoints from the target structured-light light-field image.
S703: form a first depth image from at least two of the target structured-light multi-view images.
S704: detect a target region of interest in the first depth image.
S705: digitally zoom or refocus the target structured-light light-field image according to the depth information of the target region of interest, to obtain a target structured-light digital-zoom image or a target structured-light digitally refocused image.
S706: form a second depth image from the target structured-light digital-zoom image or the target structured-light digitally refocused image and a reference structured-light image.
In the embodiment of the present invention, the reference structured-light image is acquired in advance during a calibration phase. In one embodiment, a reference screen, such as a flat plate, is placed at a known distance from the imaging system; with reference to Fig. 1, the structured-light projector 20 is synchronously controlled to project the structured-light pattern while the light-field camera 10 captures the reference structured-light image. The reference structured-light image may be a reference structured-light light-field image, a reference structured-light multi-view image, a reference structured-light digital-zoom image, or a reference structured-light digitally refocused image.
Forming the second depth image from the target structured-light digital-zoom image or the target structured-light digitally refocused image and the reference structured-light image includes:
matching the target structured-light digital-zoom image or the target structured-light digitally refocused image with the reference structured-light image to obtain the disparity between image feature points, and computing depth values from the disparity by structured-light triangulation to form the second depth image.
S707: fuse the first depth image and the second depth image to obtain a third depth image.
The fusion of the first depth image and the second depth image to obtain the third depth image is the same as in the embodiment shown in Fig. 6 and is not repeated here.
In the embodiment of the present invention, the first depth image is based on the multi-view depth imaging principle and the second depth image is based on the monocular depth imaging principle; the embodiment of the present invention thus fuses multi-view structured-light depth imaging and monocular structured-light depth imaging, achieving a depth image with both a large measurement range and high accuracy. In addition, the embodiment of the present invention matches the digitally refocused or zoomed target structured-light image with the reference structured-light image to obtain the second depth image; because digital zooming or refocusing produces a sharp image of the object, or unifies the focal lengths of the target-object image and the calibration image, the matching accuracy is further improved.
The above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments can still be modified, or some of their technical features can be replaced by equivalents, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A depth imaging method, characterized by comprising:
controlling a projector to emit a structured-light beam toward a target object;
obtaining a target structured-light image of the target object captured by a light-field camera, the target structured-light image being formed by the structured-light beam illuminating the target object; and
forming a first depth image from the target structured-light image and a reference structured-light image.
2. The depth imaging method according to claim 1, characterized in that the target structured-light image comprises a target structured-light light-field image, a target structured-light multi-view image, a target structured-light digital-zoom image, or a target structured-light digitally refocused image.
3. The depth imaging method according to claim 1 or 2, characterized in that the depth imaging method further comprises:
detecting a target region of interest in the first depth image;
digitally zooming or refocusing the target structured-light image according to depth information of the target region of interest, to obtain a target structured-light digital-zoom image or a target structured-light digitally refocused image; and
forming a second depth image from the target structured-light digital-zoom image or the target structured-light digitally refocused image and a reference structured-light image.
4. The depth imaging method according to claim 1 or 2, characterized in that forming the first depth image from the target structured-light image and the reference structured-light image comprises:
matching the target structured-light image with the reference structured-light image to obtain a disparity between image feature points, and computing depth values from the disparity by structured-light triangulation to form the first depth image.
5. The depth imaging method according to claim 1 or 2, characterized in that the reference structured-light image comprises: a reference structured-light light-field image, a reference structured-light multi-view image, a reference structured-light digital-zoom image, or a reference structured-light digitally refocused image.
6. A depth imaging system, characterized by comprising a projector, a light-field camera, and a processing device, the projector and the light-field camera being arranged along a baseline;
the projector is configured to emit a structured-light beam toward a target object;
the light-field camera is configured to capture a target structured-light image of the target object, the target structured-light image being formed by the structured-light beam illuminating the target object; and
the processing device is configured to control the projector to emit the structured-light beam toward the target object, control the light-field camera to capture a target structured-light light-field image of the target object, obtain the target structured-light image of the target object captured by the light-field camera, and form a first depth image from the target structured-light image and a reference structured-light image.
7. The depth imaging system according to claim 6, characterized in that the target structured-light image comprises a target structured-light light-field image, a target structured-light multi-view image, a target structured-light digital-zoom image, or a target structured-light digitally refocused image.
8. The depth imaging system according to claim 6 or 7, characterized in that the processing device is further configured to:
detect a target region of interest in the first depth image;
digitally zoom or refocus the target structured-light image according to depth information of the target region of interest, to obtain a target structured-light digital-zoom image or a target structured-light digitally refocused image; and
form a second depth image from the target structured-light digital-zoom image or the target structured-light digitally refocused image and a reference structured-light image.
9. The depth imaging system according to claim 6 or 7, characterized in that the reference structured-light image comprises: a reference structured-light light-field image, a reference structured-light multi-view image, a reference structured-light digital-zoom image, or a reference structured-light digitally refocused image.
10. A computer-readable storage medium storing a computer program, characterized in that when the computer program is executed by a processor, the steps of the method according to any one of claims 1 to 5 are implemented.
CN201810618776.8A 2018-06-15 2018-06-15 Depth imaging method and system Active CN108881717B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810618776.8A CN108881717B (en) 2018-06-15 2018-06-15 Depth imaging method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810618776.8A CN108881717B (en) 2018-06-15 2018-06-15 Depth imaging method and system

Publications (2)

Publication Number Publication Date
CN108881717A true CN108881717A (en) 2018-11-23
CN108881717B CN108881717B (en) 2020-11-03

Family

ID=64339001

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810618776.8A Active CN108881717B (en) 2018-06-15 2018-06-15 Depth imaging method and system

Country Status (1)

Country Link
CN (1) CN108881717B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109862334A (en) * 2019-01-10 2019-06-07 深圳奥比中光科技有限公司 A kind of structure light image obtains system and acquisition methods
TWI686747B (en) * 2018-11-29 2020-03-01 財團法人金屬工業研究發展中心 Method for avoiding obstacles of mobile vehicles all week
CN110874852A (en) * 2019-11-06 2020-03-10 Oppo广东移动通信有限公司 Method for determining depth image, image processor and storage medium
WO2021008209A1 (en) * 2019-07-12 2021-01-21 深圳奥比中光科技有限公司 Depth measurement apparatus and distance measurement method
CN112749610A (en) * 2020-07-27 2021-05-04 腾讯科技(深圳)有限公司 Depth image, reference structured light image generation method and device and electronic equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104079827A (en) * 2014-06-27 2014-10-01 中国科学院自动化研究所 Light field imaging automatic refocusing method
CN104410784A (en) * 2014-11-06 2015-03-11 北京智谷技术服务有限公司 Light field collecting control method and light field collecting control device
CN104918031A (en) * 2014-03-10 2015-09-16 联想(北京)有限公司 Depth recovery device and method
US20150347833A1 (en) * 2014-06-03 2015-12-03 Mark Ries Robinson Noncontact Biometrics with Small Footprint
CN106162153A (en) * 2015-04-03 2016-11-23 北京智谷睿拓技术服务有限公司 Display control method and device
CN106500629A (en) * 2016-11-29 2017-03-15 深圳大学 A kind of microscopic three-dimensional measurement apparatus and system

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104918031A (en) * 2014-03-10 2015-09-16 联想(北京)有限公司 Depth recovery device and method
US20150347833A1 (en) * 2014-06-03 2015-12-03 Mark Ries Robinson Noncontact Biometrics with Small Footprint
CN104079827A (en) * 2014-06-27 2014-10-01 中国科学院自动化研究所 Light field imaging automatic refocusing method
CN104410784A (en) * 2014-11-06 2015-03-11 北京智谷技术服务有限公司 Light field collecting control method and light field collecting control device
CN106162153A (en) * 2015-04-03 2016-11-23 北京智谷睿拓技术服务有限公司 Display control method and device
CN106500629A (en) * 2016-11-29 2017-03-15 深圳大学 A kind of microscopic three-dimensional measurement apparatus and system

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI686747B (en) * 2018-11-29 2020-03-01 財團法人金屬工業研究發展中心 Method for avoiding obstacles of mobile vehicles all week
US11468686B2 (en) 2018-11-29 2022-10-11 Metal Industries Research & Development Centre Omnidirectional obstacle avoidance method for vehicles
CN109862334A (en) * 2019-01-10 2019-06-07 深圳奥比中光科技有限公司 A kind of structure light image obtains system and acquisition methods
CN109862334B (en) * 2019-01-10 2021-04-30 奥比中光科技集团股份有限公司 Structured light image acquisition system and acquisition method
WO2021008209A1 (en) * 2019-07-12 2021-01-21 深圳奥比中光科技有限公司 Depth measurement apparatus and distance measurement method
CN110874852A (en) * 2019-11-06 2020-03-10 Oppo广东移动通信有限公司 Method for determining depth image, image processor and storage medium
CN112749610A (en) * 2020-07-27 2021-05-04 腾讯科技(深圳)有限公司 Depth image, reference structured light image generation method and device and electronic equipment

Also Published As

Publication number Publication date
CN108881717B (en) 2020-11-03

Similar Documents

Publication Publication Date Title
CN108881717A (en) A kind of Depth Imaging method and system
CN108924408A (en) A kind of Depth Imaging method and system
CN110036410B (en) Apparatus and method for obtaining distance information from view
EP3650807B1 (en) Handheld large-scale three-dimensional measurement scanner system simultaneously having photography measurement and three-dimensional scanning functions
US8718326B2 (en) System and method for extracting three-dimensional coordinates
WO2019100933A1 (en) Method, device and system for three-dimensional measurement
CN106454090B (en) Atomatic focusing method and system based on depth camera
CN103339651B (en) Image processing apparatus, camera head and image processing method
JP2017112602A (en) Image calibrating, stitching and depth rebuilding method of panoramic fish-eye camera and system thereof
EP3480648B1 (en) Adaptive three-dimensional imaging system
CN110044300A (en) Amphibious 3D vision detection device and detection method based on laser
CN107483774A (en) Filming apparatus and vehicle
CN108020175B (en) multi-grating projection binocular vision tongue surface three-dimensional integral imaging method
JP7378219B2 (en) Imaging device, image processing device, control method, and program
CN108020200A (en) A kind of depth measurement method and system
CN106355621A (en) Method for acquiring depth information on basis of array images
JP6009206B2 (en) 3D measuring device
CN108924407A (en) A kind of Depth Imaging method and system
CN111868474B (en) Distance measuring camera
CN114359406A (en) Calibration of auto-focusing binocular camera, 3D vision and depth point cloud calculation method
CN103426143A (en) Image editing method and correlation fuzzy parameter establishing method
JP2020194454A (en) Image processing device and image processing method, program, and storage medium
Kawasaki et al. Optimized aperture for estimating depth from projector's defocus
JP6039301B2 (en) IMAGING DEVICE, IMAGING SYSTEM, IMAGING DEVICE CONTROL METHOD, PROGRAM, AND STORAGE MEDIUM
CN106791335B (en) A kind of compact big visual field optical field acquisition system and its analysis optimization method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant