CN207854012U - Depth camera based on structure light - Google Patents
- Publication number: CN207854012U (application CN201721891688.2U)
- Authority: CN (China)
- Legal status: Withdrawn - After Issue (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Landscapes
- Length Measuring Devices By Optical Means (AREA)
Abstract
The utility model provides a depth camera based on structured light, comprising: a supporting body; a projection module adapted to project structured light toward a photographed object; a receiving module, the receiving module and the projection module being mounted on the supporting body, the receiving module comprising a lens and a photosensitive element; and a correction component adapted to move the lens, the photosensitive element, or the entire receiving module relative to the supporting body. The utility model also provides a corresponding depth camera correction method. The utility model can correct the depth camera, improve the accuracy of depth recognition or enlarge the confidence interval of the depth recognition data, and obtain better 3D recognition results.
Description
Technical field
The utility model relates to the field of optical technology, and in particular to a depth camera based on structured light.
Background technology
With the development of science and technology and the continuous improvement of living standards, technological progress keeps bringing remarkable consumer goods to market. Depth camera technology has matured considerably, and many depth-camera products are now sold as consumer goods. Common depth cameras include binocular (stereo) cameras, time-of-flight (TOF) cameras, and structured-light depth cameras. Structured-light depth cameras are already used in mobile phones; for example, Apple's iPhone X is equipped with one. The most important function of a depth camera is to provide three-dimensional information about an object. Higher recognition accuracy and a larger confidence interval of the recognition data are among the main directions of current depth camera development.
In a structured-light depth camera, the projection device and the receiving device (i.e., the imaging device) perform different functions, but the quality of the finally received image is the key factor determining the accuracy of the final recognition result. When the structured-light projection device projects, its optical feature information is usually cast onto objects of different kinds and shapes, or into different environments, all of which degrade the projection quality and cause varying degrees of information loss, distortion, and error. The projection quality therefore directly interferes with the image information received by the receiving module, for example through shading errors and distortion errors, all of which affect the final recognition accuracy.
When the projection device projects, if the optical information cast onto the object lies beyond the depth of field of the receiving device (the depth of field being the range over which imaging is sharp), the received image becomes unclear or blurred. This likewise introduces large errors into the recognition of the optical information projected onto the object, degrading recognition precision.
On the other hand, the structured-light receiving device may be a dedicated camera module that performs the image-receiving function. If the actually received image is blurred or exhibits ghosting, the optical feature information in the captured image may be disturbed, causing the final recognition result to deviate from expectations.
Furthermore, because the structured-light receiving device accumulates a series of errors during assembly, such as design errors, manufacturing errors, and assembly errors, it may deviate considerably from its theoretical design values, degrading its receiving performance.
Therefore, a solution that overcomes the above defects is urgently needed.
Utility model content
The utility model aims to provide a solution that overcomes at least one of the above defects of the prior art.
According to one aspect of the utility model, a depth camera based on structured light is provided, characterized in that it comprises: a supporting body; a projection module adapted to project structured light toward a photographed object; a receiving module, the receiving module and the projection module being mounted on the supporting body, the receiving module comprising a lens and a photosensitive element; and a correction component adapted to move the lens, the photosensitive element, or the entire receiving module relative to the supporting body.
Wherein the projection module and the receiving module have a predetermined spatial relationship.
Wherein the correction component is adapted to move the lens, the photosensitive element, or the entire receiving module relative to the supporting body so that the projection-reception angle of the depth camera remains stable for photographed objects at different distances, the projection-reception angle being the angle between the line from the light-exit surface of the projection module to the photographed object and the line from the photographed object to the light-entry surface of the receiving module.
Wherein the supporting body is a one-piece substrate, and the projection module, the lens, and the photosensitive element are mounted on a surface of the one-piece substrate.
Wherein the supporting body is a bracket having a first accommodating hole and a second accommodating hole, and the projection module and the receiving module bear against the inner side surfaces of the first accommodating hole and the second accommodating hole, respectively. Wherein the projection module and the receiving module are mounted on two separate substrates.
Wherein the supporting body is the housing of the depth camera.
Wherein the correction component is adapted to adjust the tilt angle of the optical axis of the lens relative to the supporting body.
Wherein the correction component is adapted to translate the lens or the photosensitive element, or the entire receiving module, relative to the supporting body.
Wherein the translation includes translation in a direction perpendicular to the optical axis of the lens.
Wherein the translation includes translation along the optical axis of the lens.
Wherein the depth camera further comprises an image data processing element adapted to provide a drive signal to the correction component according to the projection optical feature information in the image captured by the receiving module and the projection optical feature information of the structured light projected by the projection module toward the photographed object.
Wherein the depth camera further comprises an image data processing element adapted to provide a drive signal to the correction component according to the sharpness of the projection optical feature information in the image captured by the receiving module.
Wherein the drive signal is adapted to keep the angle between the projected light of the projection module and the received light of the receiving module stable. The drive signal may also keep the optical feature information sharp in the image.
Wherein the projection module has a corresponding first solid region, bounded by a certain object plane and depth-of-field range, within which it can project clearly; the receiving module has a corresponding second solid region, bounded by a certain object plane and depth-of-field range, within which it can image clearly; and the correction component is adapted to move the lens, the photosensitive element, or the entire receiving module relative to the supporting body so as to keep the first solid region and the second solid region overlapping.
Wherein the correction component is adapted to translate the lens or the photosensitive element, or the entire receiving module, relative to the supporting body so that the imaging region of the light carrying the projection optical feature information reflected by the photographed object is closer to the center of the photosensitive element.
Wherein the correction component is further adapted to keep the lens stable relative to the photographed object during the exposure time.
Wherein the drive signal is an electrical signal.
Wherein the correction component comprises a voice coil actuator (the voice coil actuator comprising a coil and a magnet).
Wherein the correction component comprises a micro-electro-mechanical system (MEMS), and the lens is a focusing liquid lens coupled with the MEMS.
Wherein the correction component comprises a piezoelectric actuator, the piezoelectric actuator comprising piezoelectric material arranged on at least one lens element of the lens.
Wherein the correction component comprises a pneumatic or hydraulic actuation device.
According to another aspect of the utility model, a correction method for the above structured-light depth camera is also provided, comprising:
the projection module projecting structured light carrying projection optical feature information toward the photographed object;
the receiving module capturing an image containing the photographed object; and
driving the correction component, according to the projection optical feature information captured by the receiving module and the original projection optical feature information projected by the projection module, so as to move the lens, the photosensitive element, or the entire receiving module relative to the supporting body.
Wherein the step of driving the correction component to move the lens, the photosensitive element, or the entire receiving module relative to the supporting body further comprises: adjusting the angle between the projected light of the projection module and the received light of the receiving module by moving the lens, the photosensitive element, or the entire receiving module relative to the supporting body.
Wherein the projection module has a corresponding first solid region, bounded by a certain object plane and depth-of-field range, within which it can project clearly, and the receiving module has a corresponding second solid region, bounded by a certain object plane and depth-of-field range, within which it can image clearly;
and the step of driving the correction component to move the lens, the photosensitive element, or the entire receiving module relative to the supporting body further comprises: keeping the first solid region and the second solid region overlapping by moving the lens, the photosensitive element, or the entire receiving module relative to the supporting body.
Wherein the imaging region of the light carrying the projection optical feature information reflected by the photographed object is brought closer to the center of the photosensitive element by translating the lens or the photosensitive element, or the entire receiving module, relative to the supporting body.
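As a minimal, non-authoritative sketch of the project, capture, compare, drive loop described by this method (all function hooks here are hypothetical placeholders, not part of the patent):

```python
def correction_step(projected_pattern, capture_image, match_features, drive):
    """One pass of the correction loop: capture an image of the projected
    pattern, compare it with the original pattern to get an offset, and
    feed that offset to the correction component's driver.

    `capture_image`, `match_features`, and `drive` are placeholder hooks
    for the camera hardware and the matching algorithm."""
    image = capture_image()
    offset = match_features(projected_pattern, image)  # e.g. centroid shift in px
    drive(offset)
    return offset

# Toy demonstration with stub hooks.
drive_log = []
offset = correction_step(
    "known-pattern",
    lambda: "captured-image",
    lambda pattern, image: 7.5,  # pretend the pattern drifted 7.5 px
    drive_log.append,
)
print(offset, drive_log)  # 7.5 [7.5]
```

In a real device the `drive` hook would actuate the correction component (e.g. a voice coil), and the loop would repeat until the offset falls below a threshold.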
Compared with the prior art, the utility model has at least one of the following technical effects:
1. The utility model can correct the depth camera by adjusting the angle between the line from the projection module to the photographed object and the line from the receiving module to the photographed object.
2. The utility model can improve the accuracy of depth recognition or enlarge the confidence interval of the depth recognition data.
3. The utility model can obtain better 3D recognition results by making the clear-imaging regions of the projection end and the capturing end overlap.
4. The utility model can restore depth information more accurately, and thereby obtain better 3D recognition results, by making the actually captured projection optical feature information sharper.
5. The utility model can further add an image-stabilization function to help the depth camera improve 3D recognition accuracy.
Description of the drawings
Exemplary embodiments are shown with reference to the accompanying drawings. The embodiments and drawings disclosed herein should be considered illustrative rather than restrictive.
Fig. 1 shows a schematic diagram of a depth camera based on structured light according to one embodiment of the utility model;
Fig. 2 shows an example in which the first solid region and the second solid region do not overlap;
Fig. 3 shows an example of adjusting the receiving module so that the first solid region and the second solid region overlap and the photographed object enters the overlap region;
Fig. 4 shows another example of adjusting the receiving module so that the first solid region and the second solid region overlap and the photographed object enters the overlap region;
Fig. 5 shows an example in which the photographed object is not in the central region of the receiving module's field of view;
Fig. 6 shows an example in which the photographed object is brought into the central region of the receiving module's field of view.
Specific implementation mode
For a better understanding of the application, its various aspects are described in more detail below with reference to the accompanying drawings. It should be understood that these detailed descriptions merely describe exemplary embodiments of the application and do not limit its scope in any way. Throughout the specification, identical reference numerals denote identical elements. The expression "and/or" includes any and all combinations of one or more of the associated listed items.
It should be noted that, in this specification, expressions such as "first" and "second" are used only to distinguish one feature from another and do not impose any restriction on the features. Thus, without departing from the teachings of the application, a first body discussed below may also be referred to as a second body.
In the drawings, for convenience of description, the thickness, size, and shape of objects are slightly exaggerated. The drawings are merely illustrative and are not strictly drawn to scale.
It will also be understood that the terms "comprising", "including", "having", and "containing", when used in this specification, indicate the presence of the stated features, integers, steps, operations, elements, and/or components, but do not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or combinations thereof. In addition, when an expression such as "at least one of ..." appears after a list of features, it modifies the entire list rather than individual elements in the list. Moreover, when describing embodiments of the application, "may" indicates "one or more embodiments of the application", and the term "exemplary" is intended to refer to an example or illustration.
As used herein, the terms "substantially", "about", and similar terms are used as terms of approximation rather than of degree, and are intended to account for the inherent variation in measured or calculated values that would be recognized by those skilled in the art.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meanings as commonly understood by those skilled in the art to which this application belongs. It will also be understood that terms (such as those defined in common dictionaries) should be interpreted as having meanings consistent with their meanings in the context of the relevant art, and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
It should be noted that, in the absence of conflict, the embodiments of the application and the features in the embodiments may be combined with one another. The application is described in detail below with reference to the drawings and in conjunction with the embodiments.
Fig. 1 shows a schematic diagram of a depth camera based on structured light according to one embodiment of the utility model. The depth camera comprises a supporting body 300, a projection module 100, a receiving module 200, and a correction component. The receiving module 200 and the projection module 100 are both mounted on the supporting body 300, which has sufficient structural strength to provide a firm reference plane for the projection module 100 and the receiving module 200. The receiving module 200 comprises a lens 202 and a photosensitive element 201. The correction component is adapted to move the lens 202 relative to the supporting body 300. The projection module 100 is adapted to project structured light toward the photographed object. In one example, at the projection module 100 end, the projection light source is encoded, that is, it forms structured light (including shapes such as points, lines, or planes). What the receiving module 200 captures is the image of the encoded light source projected onto the object and modulated by the depth of the object surface. Because the structured-light source carries many feature points or codes, it provides many matchable corner points or direct code words, which facilitates feature-point matching. Based on the feature points of the known image projected by the light source and the matched, depth-modulated feature points in the actually captured image (for example, based on the deviation between the feature points of the known projected image and the matched, depth-modulated feature points of the object surface in the actually captured image), the depth information of the object surface can be restored, and a 3D image obtained.
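The deviation-to-depth idea sketched above can be illustrated with a minimal triangulation formula. The following sketch assumes a horizontally rectified projector/camera pair, with purely illustrative baseline and focal-length values; none of these numbers or names come from the patent:

```python
import numpy as np

def depth_from_shift(ref_x, obs_x, baseline_m, focal_px):
    """Convert the horizontal shift (disparity, in pixels) between a
    feature's position in the known projected pattern (ref_x) and its
    matched position in the captured image (obs_x) into depth, using
    plain triangulation: depth = baseline * focal_length / disparity."""
    disparity = np.asarray(ref_x, dtype=float) - np.asarray(obs_x, dtype=float)
    # Unmatched or zero-disparity points map to "infinitely far".
    return np.where(disparity != 0.0,
                    baseline_m * focal_px / np.where(disparity != 0.0, disparity, 1.0),
                    np.inf)

# Illustrative numbers: 50 mm baseline, 500 px focal length, 25 px shift.
print(depth_from_shift([120.0, 130.0], [95.0, 105.0], 0.05, 500.0))  # [1. 1.]
```

A real structured-light pipeline adds pattern decoding and sub-pixel matching in front of this step, but the geometric conversion from feature deviation to depth is the same.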
In structured-light measurement, in order to obtain the three-dimensional information of the object, depth information is generally obtained by the principle of triangulation. The basic idea is to determine the three-dimensional information of the object from the optical feature information on the surface of the photographed object and the geometric relationship among the photographed object, the projection module, and the imaging module. Specifically, the angle between the line from the light-exit surface of the projection module to the photographed object and the line from the photographed object to the light-entry surface of the receiving module is preset. For ease of description, this angle is referred to herein as the projection-reception angle. The light-entry surface of the receiving module may be defined as the incident surface of the front lens element of the receiving module. The light-exit surface of the projection module may be defined as the exit surface of the front optical element of the projection module (the die end being the rear end). The line from the photographed object to the light-entry surface of the receiving module may be defined as the line from a reference point on the photographed object to the center of the incident surface. The line from the light-exit surface of the projection module to the photographed object may be defined as the line from the center of the exit surface to the reference point on the photographed object. The reference point of the photographed object may be the geometric center of the object surface, or may be set as needed to another point on the object surface.
After establishing a suitable coordinate system and relating the photographed object, the projection module, and the imaging module through the projection-reception angle, the depth information of the photographed object can be obtained by a certain structured-light depth information algorithm. In the prior art, a structured-light depth camera usually arranges the optical axis of the projection module (which may be understood as the normal of the light-exit surface) and the optical axis of the receiving module (which may be understood as the normal of the light-entry surface) at an angle to each other, and the corresponding structured-light depth algorithm assumes that the actually photographed object is located at the intersection of the two optical axes and calculates accordingly to obtain the depth information. In actual shooting, however, the distance between the photographed object and the depth camera may change. When the photographed object is far from the intersection of the two optical axes, the recognition accuracy of a depth algorithm that calculates with a preset angle declines. Therefore, configuring the receiving module to be movable and adjusting its relative position with the correction component helps keep the projection-reception angle of the depth camera stable for photographed objects at different distances, so that the projection-reception angle during actual shooting matches the preset angle of the algorithm and good depth recognition accuracy is obtained for objects at different positions. Within the permitted range, a larger projection-reception angle helps improve the recognition accuracy of the depth information.
In addition, some depth cameras are further provided with an RGB module, which is generally fixed on the supporting body together with the projection module and the imaging module, in the same depth camera plane. The RGB module can supplement color for the pixels of the depth image. Configuring an RGB module is beneficial in some cases: when used with suitable software, the RGB module can enrich the three-dimensional model built from the depth information with color. In one embodiment of the utility model, the receiving module may be an imaging module. In another embodiment of the utility model, the receiving module may be an RGB module. In yet another embodiment of the utility model, the receiving module includes both an imaging module and an RGB module; that is, the depth camera may have two receiving modules, one being the imaging module and the other the RGB module.
In one embodiment, the supporting body 300 is a one-piece substrate whose upper surface has a first area corresponding to the projection module 100 and a second area corresponding to the receiving module 200. The correction component is implemented with a voice coil motor. The voice coil motor comprises a coil and a magnet, and further comprises a cylindrical support body. The cylindrical support body and the photosensitive element 201 are mounted on the second area of the one-piece substrate, with the cylindrical support body surrounding the photosensitive element 201. The coil and the magnet may be arranged at different positions in the receiving module. In this way, by adjusting the current through the coil of the voice coil motor, the lens 202 can be moved relative to the one-piece substrate. This movement may adjust the tilt angle of the optical axis of the lens 202 relative to the supporting body 300, or may translate the lens 202 relative to the supporting body 300. The translation may include translation perpendicular to the optical axis of the lens 202, translation along the optical axis of the lens 202, or movement along a direction oblique to the optical axis; in other words, the translation includes movement in the spatial coordinate system formed by the X, Y, and Z directions. Through the above movements, the correction component can adjust the projection-reception angle and thereby correct the depth camera. Those skilled in the art will appreciate that the correction component may be implemented, for example, as an OIS translation-type optical image stabilization motor (pure shift) or an OIS tilt-shift-type optical image stabilization motor (tilt shift) to realize movement along the optical axis direction and tilting of the optical axis plane. The correction component may of course also be one or more of the following structures: a leaf-spring motor, a ball-type motor, or a friction motor. Those skilled in the art will appreciate that, under the core spirit of the utility model, the preferred implementation of the correction component does not limit the utility model.
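As an illustration of how a voice coil motor's drive current might be derived from a detected misalignment, here is a minimal proportional drive law; the function name, gain, and saturation limit are assumptions for illustration only, not values from the patent:

```python
def coil_current_for_offset(offset_px, gain_ma_per_px=0.8, max_ma=120.0):
    """Map the detected image-plane offset of the projected pattern
    (pixels from the center of the photosensitive element) to a voice
    coil drive current in mA: a proportional law with saturation so the
    actuator is never driven beyond its limit."""
    current = gain_ma_per_px * offset_px
    return max(-max_ma, min(max_ma, current))

print(coil_current_for_offset(40))    # 32.0 (within range)
print(coil_current_for_offset(400))   # 120.0 (saturated)
```

A production controller would typically add integral/derivative terms and a calibration from current to lens displacement, but the basic mapping from observed offset to coil current has this shape.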
It is worth noting that the way the correction component adjusts the lens movement in this embodiment can also adjust the depth-of-field range of the receiving module. Different depth-of-field ranges of the receiving module correspond to different working ranges of the depth camera of the utility model, thereby realizing switching between, for example, recognition of close-range gestures and recognition of distant objects.
In one example, the depth camera restores the depth information of the object surface from the projection optical feature information in the actually captured image (for example, the coding information of the projection light source) by means of a certain algorithm. Some algorithms compute depth information on the assumption that the structured light projected by the projection module 100 and the reflected light received by the receiving module 200 (the light reflected from the surface of the photographed object) form a fixed angle. In practice, however, the distance of the photographed object, its own shape and size, and its environment may change the angle between the projected light and the reflected light. A deviation in the projection-reception angle makes the angle during actual shooting inconsistent with the angle assumed by the depth calculation algorithm, which may reduce the depth recognition accuracy or shrink the confidence interval of the depth recognition data. Based on the above embodiment, the correction component can move the lens 202 in all directions relative to the supporting body 300, so that, for photographed objects at different distances, of different shapes and sizes, and in different environments, the projection-reception angle remains stable, thereby improving the depth recognition accuracy or enlarging the confidence interval of the depth recognition data.
It is worth noting that, in the present embodiment, by adjusting the lens 202 to different positions, the depth information in the received images can be supplemented, making it possible to synthesize depth information of large range, high precision, and high resolution. Moreover, shooting the detected object from different directions can fill in the hollow effect, namely, missing details of the detected object caused by shooting from a fixed receiving module position.
In one embodiment, the projection module 100 includes a laser emitter 101 (a VCSEL), a collimating lens 102 and a diffractive optical element 103 (DOE). The laser emitter emits light, which is collimated by the collimating lens 102 and then projected outward through the diffractive optical element 103.
Further, in one embodiment, the depth camera also includes an image-data processing element. According to the projection optical characteristic information in the image captured by the receiving module 200 and the projection optical characteristic information of the structured light projected by the projection module 100 onto the reference object, the image-data processing element provides a drive signal (which may be an electrical signal) to the correction component. The drive signal is adapted to keep the angle between the projected light of the projection module 100 and the received light of the receiving module 200 stable. In one embodiment, the drive signal may be output according to the offset image values of multiple preceding and following frames. The unitary substrate may include a wiring board; the image-data processing element may be embedded in the unitary substrate, or may be a separate element electrically connected through the unitary substrate to the actuator of the correction component. The image-data processing element may also be arranged in the mobile terminal, which is a good choice because the computing chip of the mobile terminal can be used to output the drive signal.
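The closed loop sketched above (measure the pattern offset over several frames, then output a drive signal to the actuator of the correction component) can be illustrated as a simple proportional controller. The function name, gain and clamping range are illustrative assumptions, not details from this patent:

```python
def drive_signal(offsets, gain=0.5, limit=1.0):
    """Derive an actuator drive value from the projected-pattern offsets
    measured in several consecutive frames (the embodiment's "offset image
    values of multiple preceding frames").

    offsets: per-frame displacement (pixels) of the projected pattern from
    its expected position; a nonzero mean indicates angle drift.
    Returns a clamped proportional correction in arbitrary actuator units.
    """
    avg = sum(offsets) / len(offsets)       # smooth over several frames
    signal = -gain * avg                    # drive against the drift
    return max(-limit, min(limit, signal))  # respect the actuator range

s = drive_signal([0.8, 1.0, 1.2])  # steady drift of about 1 px
```

A real implementation would map this value onto the coil current, MEMS voltage or piezoelectric drive of the particular correction component.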
In one embodiment, the supporting body 300 is a bracket with a first accommodating hole and a second accommodating hole; the projection module 100 and the receiving module 200 bear against the inner surfaces of the first accommodating hole and the second accommodating hole respectively. In this embodiment, the correction component moves the entire receiving module 200 relative to the bracket to adjust the angle between the projected light of the projection module 100 and the received light of the receiving module 200, thereby performing correction for the depth camera. The movement may be a tilting movement that adjusts the inclination of the optical axis of the camera lens 202 relative to the supporting body 300, or a translation of the camera lens 202, the photosensitive element 201, or the entire receiving module 200 relative to the supporting body 300. The translation may include translation perpendicular to the optical axis of the camera lens 202 as well as translation along the optical axis of the camera lens 202.
In the present embodiment, the projection-reception angle is adjusted by moving the entire receiving module 200. This adjustment does not change the relative positions of the internal components of the receiving module 200, so the parameters of the entire receiving module 200, for example its optical distortion parameters and its depth-of-field range, remain stable. Only the position of the receiving module 200 relative to the projection module 100, such as the angle between them, is changed. By adjusting only the angle between the optical axis of the receiving module 200 and the projection surface of the projection module 100, the details of the reference object can be supplemented for the receiving module 200, or greater recognition accuracy can be achieved by adjusting through a larger angle.
In other embodiments, the supporting body 300 may also be the housing of the depth camera or take another form, as long as it provides a reference plane for the projection module 100 and the receiving module 200, and the correction component can move the camera lens 202, the photosensitive element 201, or the entire receiving module 200 relative to the supporting body 300.
In another embodiment, the correction component drives the photosensitive chip of the receiving module 200 to move. This approach is advantageous: by moving the position of the photosensitive chip, the center of the photosensitive chip can be kept aligned with the central region of the optical characteristic information in the captured image. The optical characteristic information of the captured image is thus kept in the central area of the image, where better recognition accuracy can be achieved.
In the embodiment in which the correction component drives the photosensitive chip to move, the photosensitive chip is preferably curved. This is advantageous because a curved photosensitive chip can reduce the distortions in optical imaging (such as chromatic aberration and spherical aberration) generated when the photosensitive chip moves relative to the camera lens. Such distortions deform the captured optical characteristic information, for example the distances between spots change in the image because of field curvature, and therefore cause recognition errors in the imaging process.
Further, in another embodiment, the projection module 100 has a corresponding first solid region 110 that can be clearly projected, formed by a certain object plane and depth-of-field range, and the receiving module 200 has a corresponding second solid region 210 that can be clearly imaged, formed by a certain object plane and depth-of-field range. The correction component is adapted to move the camera lens, the photosensitive element, or the entire receiving module relative to the supporting body so that the first solid region 110 and the second solid region 210 overlap. The overlapping region of the first solid region 110 and the second solid region 210 is the effective 3D-recognition region of the receiving module.
Figs. 2 to 4 show the positional relationships among the projection module, the receiving module, the first solid region, the second solid region and the reference object in several different situations. Referring to Figs. 2 to 4, the first solid region is the three-dimensional clear zone formed by the object plane and the depth-of-field range of the projection module; likewise, the second solid region is the three-dimensional clear zone formed by the object plane and the depth-of-field range of the receiving module. The overlap of the projection clear zone and the imaging clear zone forms the effective 3D-recognition region. When the reference object 500 is in this overlapping region, a better 3D recognition effect can be obtained. In addition, when the correction component translates the receiving module (in the X, Y and Z directions of the space coordinate system mentioned above), the reference object can be brought as close as possible to the optical center region.
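A one-dimensional simplification of the first and second solid regions, reducing each to a (near, far) depth-of-field interval along the optical axis, shows how the effective 3D-recognition region is the overlap of the two. The function names and the distances used are illustrative assumptions:

```python
def dof_overlap(proj_range, recv_range):
    """Overlap of the projector's clear-projection range and the receiver's
    clear-imaging range along the optical axis (a 1-D simplification of the
    first/second solid regions). Ranges are (near, far) distances in mm.
    Returns the overlapping interval, or None when the regions do not
    overlap (the situation of Fig. 2, which the correction must remedy)."""
    near = max(proj_range[0], recv_range[0])
    far = min(proj_range[1], recv_range[1])
    return (near, far) if near < far else None

def in_effective_region(distance, proj_range, recv_range):
    """True when the reference object sits inside the effective
    3D-recognition region formed by the overlap."""
    ov = dof_overlap(proj_range, recv_range)
    return ov is not None and ov[0] <= distance <= ov[1]
```

The correction component's job, in these terms, is to shift `recv_range` (by moving the lens, the photosensitive element, or the whole receiving module) until the overlap exists and contains the reference object.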
In actual shooting, however, the reference object 500 is not always within the overlap of the projection and imaging clear zones; the two zones (that is, the first solid region and the second solid region) may not even overlap at all. For example, Fig. 2 shows an example in which the first solid region and the second solid region do not overlap. By moving the camera lens, the photosensitive element, or the entire receiving module relative to the supporting body, the first solid region 110 and the second solid region 210 can be made to overlap, giving a better 3D recognition effect. Fig. 3 shows one example of adjusting the receiving module so that the first and second solid regions overlap and the reference object enters the overlapping region; Fig. 4 shows another such example.
On the other hand, when the reference object is at the edge of the effective 3D-recognition region, in order to bring it closer to the central region of the receiving module, where the imaging effect is better, the correction component moves the camera lens, the photosensitive element, or the entire receiving module within the space coordinate system. In the present embodiment, the movement includes translation of the photosensitive surface along the optical-axis direction and tilting of the photosensitive surface relative to the optical-axis direction, so that the reference object is imaged better in the receiving module.
In one embodiment, the optical axis of the receiving module and the projection surface of the projection module are preset to an angle α in the depth camera. The space coordinate system of the photosensitive surface of the receiving module and the projection surface of the projection module is established from the angle α, and the angle α is then substituted into the depth calculation to obtain the depth information of the reference object.
It should be noted that it is advantageous in certain cases for the photosensitive surface of the photosensitive element of the receiving device to be curved: when the photosensitive surface rotates about the optical focus of the receiving module, the reference object remains imaged on a focal plane of the optical system of the receiving device without deviation. With a flat surface, the results of far-off-axis ray tracing and of paraxial ray tracing in the optical system are inconsistent, which causes errors such as chromatic aberration and coma. Choosing a photosensitive element with a curved photosensitive surface can eliminate such aberrations.
In still another aspect, the optical axis of the receiving module and the projection surface of the projection module are preset to an angle α in the depth camera; the space coordinate system of the photosensitive surface of the receiving module and the projection surface of the projection module is established from the angle α, and parameters such as the angle α are then substituted into the depth calculation to obtain the depth information of the reference object.
In one embodiment, the correction component is further adapted to translate the camera lens, the photosensitive element, or the entire receiving module relative to the supporting body, so that the imaging region of the light reflected by the reference object and carrying the projection optical characteristic information is closer to the center of the photosensitive element. Since the central area of the photosensitive element usually has better imaging quality (for example, the illumination intensity at the optical center is highest), this embodiment makes the actually captured projection optical characteristic information clearer, so that the depth information can be recovered more accurately and a better 3D recognition effect obtained. For example, Fig. 5 shows an example in which the reference object is not in the central region of the receiving module's field of view; the imaging region is then not at the center of the photosensitive element. Fig. 6 shows an example in which, after the receiving module is adjusted, the reference object is in the central region of the field of view; the imaging region is then at the center of the photosensitive element, the imaging is clearer, and a better 3D recognition effect is obtained. It should be noted that translating the camera lens, the photosensitive element, or the entire receiving module relative to the supporting body can also simultaneously adjust the projection-reception angle of the depth camera (in some examples, a larger angle gives a better 3D effect) and the mapping point of the reflected light.
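The centering translation described here can be sketched as follows: locate the imaged pattern features, compute their centroid, and return the shift that would move that centroid toward the sensor center. The function name and sensor dimensions are illustrative assumptions; a real correction component would convert this shift into an actuator drive:

```python
def centering_translation(spots, sensor_w, sensor_h):
    """Translation (in pixels) needed to bring the imaged pattern toward the
    sensor center, where imaging quality is best.

    spots: (x, y) pixel positions of detected pattern features (e.g. the
    centroids of the projected speckles or coded dots).
    Returns (dx, dy): the shift that moves the feature centroid to the
    sensor center.
    """
    cx = sum(x for x, _ in spots) / len(spots)  # feature centroid x
    cy = sum(y for _, y in spots) / len(spots)  # feature centroid y
    return (sensor_w / 2 - cx, sensor_h / 2 - cy)

# Pattern imaged off-center on a 640x480 sensor (Fig. 5 situation)
dx, dy = centering_translation([(100, 100), (120, 140)], 640, 480)
```

After the correction component applies the corresponding movement, the imaging region sits in the central area of the photosensitive element, as in Fig. 6.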
In the foregoing embodiments, the correction component is implemented with the electromagnetic interaction of a coil and a magnet. The utility model is not limited to this. For example, in another embodiment of the utility model, the correction component includes a MEMS (Micro-Electro-Mechanical System), and the camera lens is a focusing liquid lens coupled with the MEMS. The focusing liquid lens contains a conductive liquid; by deforming the conductive liquid, the lens can be moved. In this embodiment, the focusing liquid lens can be driven by an electrical signal, for example translated relative to the supporting body, or tilted so that the direction of its optical axis relative to the supporting body changes, so that the angle between the projected light of the projection module and the received light of the receiving module remains stable for reference objects at different distances, of different shapes and sizes, and in different environments, thereby improving the accuracy of depth recognition or enlarging the confidence interval of the depth-recognition data. In another embodiment, the correction component includes a piezoelectric polymer arranged on at least one lens element of the camera lens. Driven by an electrical signal, the piezoelectric polymer deforms and causes the camera lens to translate relative to the supporting body or to change the direction of its optical axis relative to the supporting body, with the same stabilizing effect for reference objects at different distances, of different shapes and sizes, and in different environments. Devices such as an OIS translation-type optical image-stabilization motor (pure shift) or an OIS tilt-type optical image-stabilization motor (tilt shift) can also serve as preferred implementations of the correction component. Those skilled in the art will appreciate that the correction component can be realized with different devices to produce the movement, and the examples given do not limit the utility model.
Further, in one embodiment, the correction component may also provide an image-stabilization function, that is, it prevents the camera lens from shaking during the exposure time; in other words, it keeps the camera lens stable relative to the reference object during the exposure time. The stabilization function can use the movement mechanism of the correction component (such as the electromagnetic mechanism, the focusing-liquid mechanism, or the piezoelectric mechanism). It should be noted, however, that the correction of the depth camera described above (for example, keeping the angle between the projected light of the projection module and the received light of the receiving module stable) is not a correction performed within the exposure time, and belongs to a different scope from the stabilization function. Lens shake may cause ghosting in the projection optical characteristic information of the captured image, making the recovered depth information inaccurate or causing the recovery to fail entirely. Adding the stabilization function to the correction component therefore helps the depth camera improve its 3D recognition accuracy.
It should be noted that, in order to obtain a larger amount of incoming light and thus receive more information, the receiving device preferably completes its exposure over a longer exposure time. In this case the correction component is beneficial, because shake occurring during a longer exposure is more likely to blur the image formed by the receiving module; the correction device can reduce phenomena such as blur and ghosting caused by shake of the receiving device during the exposure time.
In addition, in one embodiment, the depth camera consists of one projection module and multiple receiving modules, with the optical axis of each receiving module at an independent angle to the projection plane of the projection module. The receiving modules capture the depth information of the object from different angles, and this information is complementary: the depth images computed by the individual receiving modules can be superimposed on one another. The superposition means that the pixel values of the depth images are combined, for example by sub-pixel filling, to improve the resolution and precision of the superimposed depth image. This approach can also eliminate the hole effect of a receiving module in the depth camera.
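A minimal sketch of this superposition of depth maps from multiple receiving modules, assuming holes are marked with a sentinel value of 0.0; sub-pixel filling is omitted and all names are illustrative:

```python
def fuse_depth_maps(maps, invalid=0.0):
    """Superimpose depth maps from several receiving modules: where one map
    has a hole (an invalid pixel), take the value seen by another module;
    where several modules agree, average their values. Maps are equally
    sized 2-D lists of depths in mm, already registered to a common view."""
    h, w = len(maps[0]), len(maps[0][0])
    fused = [[invalid] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            vals = [m[r][c] for m in maps if m[r][c] != invalid]
            if vals:
                # Any module that saw this pixel fills the hole; multiple
                # observations are averaged to improve precision.
                fused[r][c] = sum(vals) / len(vals)
    return fused

a = [[1.0, 0.0], [2.0, 4.0]]  # module A: hole at (0, 1)
b = [[0.0, 3.0], [2.0, 0.0]]  # module B: holes at (0, 0) and (1, 1)
fused = fuse_depth_maps([a, b])
```

Each module's hole is covered by the other, so the fused map has no invalid pixels, which is the hole-elimination effect described above.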
Further, the utility model also provides a correction method for a structured-light depth camera, including:
Step 1: the projection module projects structured light carrying projection optical characteristic information onto the reference object;
Step 2: the receiving module captures an image containing the reference object; and
Step 3: the correction component is driven according to the projection optical characteristic information captured by the receiving module and the original projection optical characteristic information projected by the projection module, so as to move the camera lens, the photosensitive element, or the entire receiving module relative to the supporting body.
In one embodiment, step 3 further includes: moving the camera lens, the photosensitive element, or the entire receiving module relative to the supporting body to adjust the angle between the projected light of the projection module and the received light of the receiving module.
In one embodiment, the projection module has a corresponding first solid region that can be clearly projected, formed by a certain object plane and depth-of-field range, and the receiving module has a corresponding second solid region that can be clearly imaged, formed by a certain object plane and depth-of-field range; step 3 further includes: moving the camera lens, the photosensitive element, or the entire receiving module relative to the supporting body so that the first solid region and the second solid region overlap.
In one embodiment, step 3 further includes: translating the camera lens, the photosensitive element, or the entire receiving module relative to the supporting body so that the imaging region of the light carrying the projection optical characteristic information reflected by the reference object is closer to the center of the photosensitive element.
The utility model also provides a method for calibrating the relative angle between the receiving module and the projection module in a depth camera, including:
Step 1: the projection module projects structured light carrying projection optical characteristic information onto the reference object;
Step 2: the receiving module is adjusted to different angles and captures images containing the reference object;
Step 3: depth information is recognized from the images containing the optical characteristic information of the reference object at the different angles; and
Step 4: the depth information recognized at each angle is compared with the calibration parameters, and one identification parameter is selected and determined.
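Step 4 of this calibration method can be sketched as selecting, among the candidate angles, the one whose recognized depth deviates least from the known calibration distance. The function name, candidate angles and distances are illustrative assumptions:

```python
def select_identification_parameter(measurements, true_depth):
    """Step 4 sketch: compare the depth recognized at each receiver angle
    against the known calibration distance and keep the angle with the
    smallest error as the identification parameter.

    measurements: dict mapping candidate angle (degrees) to the depth (mm)
    recognized with that angle substituted into the depth calculation.
    true_depth: known distance of the calibration object (mm).
    Returns (best_angle, residual_error_mm).
    """
    best = min(measurements, key=lambda a: abs(measurements[a] - true_depth))
    return best, abs(measurements[best] - true_depth)

# Calibration target at 500 mm, three candidate receiver angles tried
angle, err = select_identification_parameter(
    {0: 510.0, 2: 500.5, 4: 495.0}, true_depth=500.0)
```

Here the 2-degree setting gives the smallest residual, so it would be stored as the identification parameter for later depth calculation.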
The above description is only a preferred embodiment of the application and an explanation of the technical principles applied. Those skilled in the art should appreciate that the scope of the utility model involved in the application is not limited to technical solutions formed by the specific combination of the above technical features; without departing from the concept of the utility model, it also covers other technical solutions formed by any combination of the above technical features or their equivalents, for example, solutions formed by replacing the above features with (but not limited to) technical features with similar functions disclosed in the application.
Claims (16)
1. A structured-light depth camera, characterized by comprising:
a supporting body;
a projection module adapted to project structured light onto a reference object;
a receiving module, the receiving module and the projection module being mounted on the supporting body, the receiving module comprising a camera lens and a photosensitive element; and
a correction component adapted to move the camera lens, the photosensitive element, or the entire receiving module relative to the supporting body.
2. The structured-light depth camera according to claim 1, wherein the correction component is adapted to move the camera lens, the photosensitive element, or the entire receiving module relative to the supporting body so that the projection-reception angle of the depth camera remains stable for reference objects at different distances, the projection-reception angle being the angle between the line from the light-exit surface of the projection module to the reference object and the line from the reference object to the light-entry surface of the receiving module.
3. The structured-light depth camera according to claim 1, wherein the supporting body is a unitary substrate, and the projection module, the camera lens and the photosensitive element are mounted on a surface of the unitary substrate.
4. The structured-light depth camera according to claim 1, wherein the supporting body is a bracket with a first accommodating hole and a second accommodating hole, and the projection module and the receiving module bear against the inner surfaces of the first accommodating hole and the second accommodating hole respectively.
5. The structured-light depth camera according to claim 1, wherein the supporting body is the housing of the depth camera.
6. The structured-light depth camera according to claim 2, wherein the correction component is adapted to adjust the inclination of the optical axis of the camera lens relative to the supporting body.
7. The structured-light depth camera according to claim 2, wherein the correction component is adapted to translate the camera lens, the photosensitive element, or the entire receiving module relative to the supporting body.
8. The structured-light depth camera according to claim 7, wherein the translation includes translation perpendicular to the optical-axis direction of the camera lens.
9. The structured-light depth camera according to claim 7, wherein the translation includes translation along the optical-axis direction of the camera lens.
10. The structured-light depth camera according to claim 1, further comprising an image-data processing element adapted to provide a drive signal to the correction component according to the projection optical characteristic information in the image captured by the receiving module and the projection optical characteristic information of the structured light projected by the projection module onto the reference object.
11. The structured-light depth camera according to claim 10, wherein the drive signal is adapted to keep the projection-reception angle of the depth camera stable for reference objects at different distances, keeping stable meaning that the deviation between the actual projection-reception angle during shooting and the preset projection-reception angle is less than a threshold.
12. The structured-light depth camera according to claim 2, wherein the projection module has a corresponding first solid region that can be clearly projected, formed by a certain object plane and depth-of-field range, the receiving module has a corresponding second solid region that can be clearly imaged, formed by a certain object plane and depth-of-field range, and the correction component is adapted to move the camera lens, the photosensitive element, or the entire receiving module relative to the supporting body so that the first solid region and the second solid region overlap.
13. The structured-light depth camera according to claim 2, wherein the correction component is adapted to translate the camera lens, the photosensitive element, or the entire receiving module relative to the supporting body so that the imaging region of the light carrying the projection optical characteristic information reflected by the reference object is closer to the center of the photosensitive element.
14. The structured-light depth camera according to claim 10, wherein the correction component is further adapted to keep the camera lens stable relative to the reference object during the exposure time.
15. The structured-light depth camera according to claim 2, wherein the correction component comprises a voice-coil actuator; or comprises a MEMS, the camera lens being a focusing liquid lens coupled with the MEMS; or comprises a piezoelectric actuator, the piezoelectric actuator comprising a piezoelectric material arranged on at least one lens element of the camera lens; or comprises a pneumatic or hydraulic actuating device.
16. The structured-light depth camera according to claim 6, wherein the photosensitive surface of the photosensitive element is curved.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201721891688.2U CN207854012U (en) | 2017-12-28 | 2017-12-28 | Depth camera based on structure light |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201721891688.2U CN207854012U (en) | 2017-12-28 | 2017-12-28 | Depth camera based on structure light |
Publications (1)
Publication Number | Publication Date |
---|---|
CN207854012U true CN207854012U (en) | 2018-09-11 |
Family
ID=63419804
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201721891688.2U Withdrawn - After Issue CN207854012U (en) | 2017-12-28 | 2017-12-28 | Depth camera based on structure light |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN207854012U (en) |
2017-12-28: CN application CN201721891688.2U granted as patent CN207854012U (en), status: not active (Withdrawn - After Issue)
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109981932A (en) * | 2017-12-28 | 2019-07-05 | 宁波舜宇光电信息有限公司 | Depth camera and its bearing calibration based on structure light |
CN109981932B (en) * | 2017-12-28 | 2024-03-08 | 宁波舜宇光电信息有限公司 | Depth camera based on structured light and correction method thereof |
WO2020063639A1 (en) * | 2018-09-30 | 2020-04-02 | 南昌欧菲生物识别技术有限公司 | 3d recognition module, 3d recognition apparatus and intelligent terminal |
CN109212879A (en) * | 2018-10-25 | 2019-01-15 | 深圳阜时科技有限公司 | A kind of functionalization mould group, sensing device and equipment |
CN109714536A (en) * | 2019-01-23 | 2019-05-03 | Oppo广东移动通信有限公司 | Method for correcting image, device, electronic equipment and computer readable storage medium |
CN109982074A (en) * | 2019-04-11 | 2019-07-05 | 歌尔股份有限公司 | A kind of method, apparatus and assemble method of the tilt angle obtaining TOF mould group |
CN109982074B (en) * | 2019-04-11 | 2021-01-15 | 歌尔光学科技有限公司 | Method and device for obtaining inclination angle of TOF module and assembling method |
WO2021007944A1 (en) * | 2019-07-15 | 2021-01-21 | 南昌欧菲生物识别技术有限公司 | Method for calibrating bracket of 3d structured light module, apparatus, and device |
CN111246073A (en) * | 2020-03-23 | 2020-06-05 | 维沃移动通信有限公司 | Imaging device, method and electronic equipment |
CN111246073B (en) * | 2020-03-23 | 2022-03-25 | 维沃移动通信有限公司 | Imaging device, method and electronic equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN207854012U (en) | Depth camera based on structure light | |
CN103988115B (en) | Optical image stabilization | |
KR102470757B1 (en) | Device for tilting an optical element, particularly a mirror | |
US20180343391A1 (en) | Auto focus and optical image stabilization with roll compensation in a compact folded camera | |
US7213926B2 (en) | Image projection system and method | |
CN109981932A (en) | Depth camera and its bearing calibration based on structure light | |
US20210044725A1 (en) | Camera-specific distortion correction | |
US20160127646A1 (en) | Optical image stabilization for thin cameras | |
US10007994B2 (en) | Stereodepth camera using VCSEL projector with controlled projection lens | |
US10326894B1 (en) | Self stabilizing projector | |
CN111540004A (en) | Single-camera polar line correction method and device | |
WO2017169186A1 (en) | Image projection system and correction method | |
US11412143B2 (en) | Electronic device and method for controlling camera motion | |
KR102635884B1 (en) | A camera module including an aperture | |
KR20200122013A (en) | A camera module and an optical instrument including the same | |
CN117170159A (en) | Light hole module, camera module and electronic device | |
US11991446B2 (en) | Method of image stabilization and electronic device therefor | |
US20230086178A1 (en) | Camera module and electronic device including the same | |
JP2015059988A (en) | Stereoscopic imaging device stereoscopic image generation method | |
KR20230001760A (en) | Method of image stabilization and electronic device therefor | |
US20230230210A1 (en) | Correcting distortion from camera pitch angle | |
CN114967170B (en) | Display processing method and device based on flexible naked eye three-dimensional display equipment | |
JP2022147145A (en) | Projector, projection system, method for correcting correction value, and program | |
KR101882977B1 (en) | Lens Module for Forming 360 Degree Image and Application for Forming 360 Degree Image | |
WO2021185085A1 (en) | Display method and display control device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
GR01 | Patent grant | ||
AV01 | Patent right actively abandoned | Granted publication date: 20180911; Effective date of abandoning: 20240308 ||