CN106170086A - Method, device, and system for rendering three-dimensional images - Google Patents

Method, device, and system for rendering three-dimensional images

Info

Publication number
CN106170086A
CN106170086A (application CN201610698004.0A)
Authority
CN
China
Prior art keywords
image
viewpoint
invisible light
color image
color
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610698004.0A
Other languages
Chinese (zh)
Other versions
CN106170086B (en)
Inventor
黄源浩
肖振中
刘龙
许星
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Orbbec Inc
Original Assignee
Shenzhen Orbbec Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Orbbec Co Ltd filed Critical Shenzhen Orbbec Co Ltd
Priority to CN201610698004.0A priority Critical patent/CN106170086B/en
Publication of CN106170086A publication Critical patent/CN106170086A/en
Priority to PCT/CN2017/085147 priority patent/WO2018032841A1/en
Application granted
Publication of CN106170086B publication Critical patent/CN106170086B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N2013/0074 Stereoscopic image analysis
    • H04N2013/0081 Depth or disparity estimation from stereoscopic image signals

Abstract

The invention discloses a method, device, and system for rendering three-dimensional images. The method includes: separately obtaining an invisible-light image captured of a target from a first viewpoint and a first color image captured of the target from a second viewpoint; calculating the parallax between the first viewpoint and the second viewpoint from the invisible-light image; shifting the pixel coordinates of the first color image according to the parallax to obtain a second color image for the first viewpoint; and forming a three-dimensional image from the first color image and the second color image. In this way, the three-dimensional display effect can be improved.

Description

Method, device, and system for rendering three-dimensional images
Technical field
The present invention relates to the field of three-dimensional display technology, and in particular to a method, device, and system for rendering three-dimensional images.
Background
Because the two human eyes occupy slightly different positions, they see an object at a given distance from slightly different angles; it is precisely this parallax that gives people the perception of depth. Three-dimensional display technology exploits this principle: a pair of simultaneously captured binocular images is delivered to the corresponding eyes, producing a stereoscopic effect. Since this technology offers a brand-new stereoscopic viewing experience, demand for three-dimensional image content has grown steadily in recent years.
One current way of obtaining a three-dimensional image is to convert a two-dimensional image into a three-dimensional one through image processing: the depth information of the scene is computed from an existing two-dimensional image, virtual images for other viewpoints are then rendered, and the existing image and the virtual viewpoint images together form the three-dimensional image.
Because the depth information used to render these other viewpoint images is itself the result of computation, the process loses image detail and degrades the three-dimensional display effect.
Summary of the invention
The main technical problem addressed by the present invention is to provide a method, device, and system for rendering three-dimensional images that can improve the three-dimensional display effect.
To solve the above technical problem, one technical scheme adopted by the present invention is to provide a method for rendering three-dimensional images, including: separately obtaining an invisible-light image captured of a target from a first viewpoint and a first color image captured of the target from a second viewpoint; calculating the parallax between the first viewpoint and the second viewpoint from the invisible-light image; shifting the pixel coordinates of the first color image according to the parallax to obtain a second color image for the first viewpoint; and forming a three-dimensional image from the first color image and the second color image.
In one aspect, the invisible-light image is obtained by a projection module projecting a structured-light pattern onto the target and an invisible-light image acquisition device arranged at the first viewpoint capturing the target, and the first color image is obtained by a color camera arranged at the second viewpoint capturing the target.
In one aspect, calculating the parallax between the first viewpoint and the second viewpoint from the invisible-light image includes: using a digital image matching algorithm to calculate the displacement between corresponding pixels of the invisible-light image containing the structured-light pattern and a preset reference structured-light image; and calculating the parallax between the first viewpoint and the second viewpoint from the displacement, where the displacement and the parallax are linearly related.
In one aspect, calculating the parallax between the first viewpoint and the second viewpoint from the displacement includes computing the parallax d by the following formula (1):
d = (B₂ / B₁) · (B₁·f / Z₀ + Δu)        (1)
where B₁ is the distance between the invisible-light image acquisition device and the projection module; B₂ is the distance between the invisible-light image acquisition device and the color camera; Z₀ is the depth of the plane containing the reference structured-light image relative to the invisible-light image acquisition device; f is the image-plane focal length of the invisible-light image acquisition device and the color camera; and Δu is the displacement between corresponding pixels of the invisible-light image and the preset reference structured-light image.
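As a concrete sketch of formula (1), the conversion from per-pixel shift to inter-view parallax can be written as below. The baseline, focal-length, and reference-depth values in the comment are illustrative assumptions, not values given by the patent.

```python
def parallax_from_shift(delta_u, B1, B2, f, Z0):
    """Parallax d between the two viewpoints, per formula (1):
    d = (B2 / B1) * (B1 * f / Z0 + delta_u).

    B1: baseline from the invisible-light image acquisition device to the
        projection module; B2: baseline from that device to the color
        camera (same length unit as the reference depth Z0);
    f: image-plane focal length in pixels;
    delta_u: per-pixel shift against the reference structured-light image,
        in pixels (may be a fractional multiple of 1/8).
    """
    return (B2 / B1) * (B1 * f / Z0 + delta_u)

# A point lying on the reference plane has delta_u = 0, so d = B2 * f / Z0.
```

Since d is linear in Δu, a per-pixel parallax map can be computed directly from the shift map without first recovering depth, which is the point the passage makes.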
In one aspect, shifting the pixel coordinates of the first color image according to the parallax to obtain the second color image of the first viewpoint includes: establishing, according to the parallax d, the correspondence I_ir(u_ir, v_ir) = I_r(u_r + d, v_r) between a first pixel coordinate I_ir(u_ir, v_ir) of the invisible-light image and a second pixel coordinate I_r(u_r, v_r) of the first color image; setting the pixel value at each first pixel coordinate of the invisible-light image to the pixel value at the second pixel coordinate of the first color image that corresponds to it, thereby forming the second color image of the target at the first viewpoint; and smoothing and denoising the second color image.
In one aspect, the method further includes: calculating a depth image of the first viewpoint from the invisible-light image; and using three-dimensional image transformation theory to calculate a third color image of the target at the first viewpoint from the depth image of the first viewpoint and the first color image.
In that case, forming the three-dimensional image from the first color image and the second color image includes: averaging, or taking a weighted average of, the corresponding pixel values of the second color image and the third color image to obtain a fourth color image of the first viewpoint; and forming the three-dimensional image from the first color image and the fourth color image.
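The averaging of the second and third color images described above can be sketched as follows; the helper name and the default weights are hypothetical, since the patent does not fix the weighting.

```python
import numpy as np

def fuse_views(second, third, w2=0.5, w3=0.5):
    """Fourth color image of the first viewpoint: per-pixel (weighted)
    average of the second and third color images, rounded back to 8-bit.
    w2 = w3 = 0.5 gives the plain average."""
    assert second.shape == third.shape and abs(w2 + w3 - 1.0) < 1e-9
    blended = w2 * second.astype(np.float64) + w3 * third.astype(np.float64)
    return np.clip(np.rint(blended), 0, 255).astype(np.uint8)
```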
In one aspect, the positional relationship between the first viewpoint and the second viewpoint matches that between the two human eyes; the color camera, the invisible-light image acquisition device, and the projection module lie on the same straight line; the invisible-light image is an infrared image; and the invisible-light image acquisition device is an infrared camera.
In one aspect, the color camera and the invisible-light image acquisition device have image sensor target surfaces of equal size, identical resolution and focal length, and mutually parallel optical axes.
To solve the above technical problem, another technical scheme adopted by the present invention is to provide a three-dimensional image rendering device, including: an acquisition module for separately obtaining an invisible-light image captured of a target from a first viewpoint and a first color image captured of the target from a second viewpoint; a computing module for calculating the parallax between the first viewpoint and the second viewpoint from the invisible-light image; an obtaining module for shifting the pixel coordinates of the first color image according to the parallax to obtain a second color image of the first viewpoint; and a forming module for forming a three-dimensional image from the first color image and the second color image.
To solve the above technical problem, yet another technical scheme adopted by the present invention is to provide a three-dimensional image rendering system, including a projection module, an invisible-light image acquisition device, a color camera, and an image processing device connected to the invisible-light image acquisition device and the color camera. The image processing device is configured to: separately obtain an invisible-light image captured of a target by the invisible-light image acquisition device at a first viewpoint and a first color image captured of the target by the color camera at a second viewpoint; calculate the parallax between the first viewpoint and the second viewpoint from the invisible-light image; shift the pixel coordinates of the first color image according to the parallax to obtain a second color image of the first viewpoint; and form a three-dimensional image from the first color image and the second color image.
The beneficial effects of the invention are as follows: the invisible-light image collected at the first viewpoint is used to obtain the parallax between the first viewpoint and the second viewpoint, and this parallax together with the first color image of the second viewpoint is used to obtain the second color image of the first viewpoint; the first and second color images then form the three-dimensional image. Because the parallax between the two viewpoints is obtained directly from the collected image data rather than through further image processing, the loss of image detail is reduced, the color images of the two viewpoints are obtained more accurately, the distortion of the synthesized three-dimensional image is reduced, and the three-dimensional display effect of images generated from two-dimensional input is improved. Moreover, unlike existing DIBR techniques, this embodiment does not need to compute the image's depth information, avoiding errors introduced by repeated calculation and further improving the three-dimensional display effect.
Brief description of the drawings
Fig. 1 is a flowchart of an embodiment of the method for rendering three-dimensional images of the present invention;
Fig. 2 is a schematic diagram of an application scenario of the method for rendering three-dimensional images of the present invention;
Fig. 3 is a partial flowchart of another embodiment of the method for rendering three-dimensional images of the present invention;
Fig. 4 is a partial flowchart of a further embodiment of the method for rendering three-dimensional images of the present invention;
Fig. 5 is a flowchart of yet another embodiment of the method for rendering three-dimensional images of the present invention;
Fig. 6 is a schematic structural diagram of an embodiment of the three-dimensional image rendering device of the present invention;
Fig. 7 is a schematic structural diagram of an embodiment of the three-dimensional image rendering system of the present invention;
Fig. 8 is a schematic structural diagram of another embodiment of the three-dimensional image rendering system of the present invention.
Detailed description of the embodiments
To better understand the technical scheme of the present invention, embodiments of the present invention are described in detail below with reference to the accompanying drawings.
The terms used in the embodiments of the present invention are for the purpose of describing specific embodiments only and are not intended to limit the invention. The singular forms "a", "the", and "said" used in the embodiments and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
Referring to Fig. 1, Fig. 1 is a flowchart of an embodiment of the method for rendering three-dimensional images of the present invention. In this embodiment, the method may be performed by a three-dimensional image rendering device and comprises the following steps:
S11: Separately obtain an invisible-light image captured of a target from a first viewpoint and a first color image captured of the target from a second viewpoint.
It should be noted that the invisible-light image and the color image of the present invention are both two-dimensional images. The invisible-light image is formed from the intensity of invisible light received from the target.
The first viewpoint and the second viewpoint are located at different positions relative to the target, so as to obtain images of the target from two viewpoints. Since the perception of depth is formed by superimposing the different images seen by the two eyes, the first and second viewpoints act as the two human eyes; that is, the positional relationship between the first viewpoint and the second viewpoint is that between the two human eyes. For example, if the typical distance between human eyes is t, the distance between the first viewpoint and the second viewpoint is set to t, with t being, for instance, 6.5 cm. Furthermore, to ensure that the image depths of the first and second viewpoints are the same or similar, the first and second viewpoints are set at equal distances from the target, or at distances whose difference is below a set threshold; in a concrete application, this threshold may be set to a value of no more than 10 cm or 20 cm.
In one specific application, as shown in Fig. 2, the invisible-light image is obtained by the projection module 25 projecting a structured-light pattern onto the target 23 and the invisible-light image acquisition device 21 arranged at the first viewpoint capturing the target 23, while the first color image is obtained by the color camera 22 arranged at the second viewpoint capturing the target 23. The images collected by the invisible-light image acquisition device 21 and the color camera are transmitted to the three-dimensional image rendering device 24 for the subsequent derivation of the three-dimensional image. Because the color camera and the invisible-light image acquisition device are at different positions, the spatial three-dimensional points corresponding to the same pixel coordinate in the first color image and in the invisible-light image also differ. In Fig. 2, the color camera 22, the invisible-light image acquisition device 21, and the projection module 25 lie on the same straight line, so that all three are at the same depth from the target. Of course, Fig. 2 is only one embodiment; in other applications the three need not be collinear.
Specifically, the projection module 25 typically consists of a laser and a diffractive optical element. The laser may be an edge-emitting laser or a vertical-cavity surface-emitting laser, and it emits invisible light that can be collected by the invisible-light image acquisition device. The diffractive optical element is configured for collimation, beam splitting, diffusion, and similar functions according to the required structured-light pattern. The structured-light pattern may be an irregularly distributed speckle pattern; the energy level at the speckle centers must meet eye-safety requirements, so the laser power and the configuration of the diffractive optical element must be considered together.
The density of the speckle pattern affects the speed and the precision of the depth computation: the more speckle particles, the slower the computation but the higher the precision. The projection module 25 can therefore select a suitable speckle density according to the approximate depth of the imaged target area, retaining high computational accuracy while keeping the computation fast. Alternatively, the speckle density may be determined by the three-dimensional image rendering device 24 according to its own computational requirements and sent to the projection module 25.
The projection module 25 projects the speckle pattern toward the target area, for example at a certain divergence angle, though the invention is not limited in this respect.
After the projection module 25 projects the structured-light image onto the target, the invisible-light image acquisition device 21 collects the invisible-light image of the target. Specifically, the invisible light may be any light outside the visible range: for example, the invisible-light image acquisition device 21 may be an infrared collector such as an infrared camera, in which case the invisible-light image is an infrared image; or it may be an ultraviolet collector such as an ultraviolet camera, in which case the invisible-light image is an ultraviolet image.
To achieve a good collection effect and avoid unnecessary subsequent computation, the color camera and the invisible-light image acquisition device can be configured to capture synchronously at the same frame rate, so that the color images and invisible-light images are guaranteed a one-to-one correspondence, simplifying subsequent calculation.
S12: Calculate the parallax between the first viewpoint and the second viewpoint from the invisible-light image.
For example, a digital image matching algorithm such as digital image correlation (DIC) is used to calculate the parallax between the image of the first viewpoint and the image of the second viewpoint, i.e. the relative positional relationship between the pixel coordinates of the two images.
S13: Shift the pixel coordinates of the first color image according to the parallax to obtain a second color image of the first viewpoint.
For example, each pixel coordinate of the first color image is moved by the parallax value d of the corresponding pixel, where the pixel value (also called the RGB value) at the moved coordinate (u₁ + d, v₁) is the pixel value at coordinate (u₁, v₁) of the first color image.
S14: Form a three-dimensional image from the first color image and the second color image.
For example, the first color image and the second color image are treated as the binocular pair for the human eyes and composited into a three-dimensional image, which may specifically be in top-down format, left-right format, or red-blue format for 3D display. After compositing, the three-dimensional image may be displayed directly or output to a connected external display device.
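The compositing step just described can be sketched as follows, with numpy arrays standing in for display frames; the function name and the red-blue channel assignment (red from the left-eye image) are illustrative choices, not details fixed by the patent.

```python
import numpy as np

def compose_stereo(left, right, layout="left-right"):
    """Pack a binocular pair (H x W x 3 arrays) into one frame for 3D
    display: left-right (side by side), top-down (stacked), or a red-blue
    anaglyph taking the red channel from the left-eye image."""
    if layout == "left-right":
        return np.concatenate([left, right], axis=1)
    if layout == "top-down":
        return np.concatenate([left, right], axis=0)
    if layout == "red-blue":
        frame = right.copy()
        frame[..., 0] = left[..., 0]  # red channel from the left eye
        return frame
    raise ValueError("unknown layout: " + layout)
```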
In this embodiment, the invisible-light image collected at the first viewpoint is used to obtain the parallax between the first viewpoint and the second viewpoint, and this parallax together with the first color image of the second viewpoint is used to obtain the second color image of the first viewpoint; the first and second color images then form the three-dimensional image. Because the parallax between the two viewpoints is obtained from the collected image data without further image processing, the loss of image detail is reduced, the color images of the two viewpoints are obtained more accurately, the distortion of the synthesized three-dimensional image is reduced, and the three-dimensional display effect of images generated from two-dimensional input is improved. Moreover, relative to existing depth-image-based rendering (DIBR) techniques, this embodiment does not need to compute the depth information of the image, avoiding errors introduced by repeated calculation and further improving the three-dimensional display effect.
Referring to Fig. 3, in another embodiment the invisible-light image is obtained by the projection module projecting a structured-light pattern onto the target and the invisible-light image acquisition device arranged at the first viewpoint capturing the target, and the first color image is obtained by the color camera arranged at the second viewpoint capturing the target. This embodiment differs from the one above in that S12 includes the following sub-steps:
S121: Using a digital image matching algorithm, calculate the displacement between corresponding pixels of the invisible-light image containing the structured-light pattern and a preset reference structured-light image.
The digital image matching algorithm is, for example, a digital image correlation algorithm. The reference structured-light pattern is obtained in advance by using the fixed projection module to project the reference pattern onto a plane at a set distance and collecting it with the fixed invisible-light image acquisition device; "fixed" here means that once set up, the image acquisition device and the projection module are not moved during subsequent collection of invisible-light images.
For example, with the reference structured-light pattern as the reference speckle image, the digital image correlation algorithm obtains the shift value Δu of each corresponding pixel between it and the invisible-light image. The measurement precision of current digital image correlation algorithms can reach the sub-pixel level, e.g. 1/8 pixel; that is, Δu can take values that are multiples of 1/8, in units of pixels.
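A heavily simplified sketch of the matching idea: normalized cross-correlation of a small window against the reference speckle image over integer shifts. Real DIC implementations refine this to sub-pixel (e.g. 1/8-pixel) accuracy; the window and search-range sizes here are illustrative assumptions.

```python
import numpy as np

def shift_for_pixel(ir, ref, y, x, win=5, search=16):
    """Estimate the horizontal shift of the speckle patch around (y, x) in
    the captured invisible-light image `ir` relative to the reference image
    `ref`, by maximizing normalized cross-correlation over integer shifts
    in [-search, search]."""
    h = win // 2
    patch = ir[y - h:y + h + 1, x - h:x + h + 1].astype(np.float64)
    patch -= patch.mean()
    best, best_du = -np.inf, 0
    for du in range(-search, search + 1):
        cand = ref[y - h:y + h + 1, x - h + du:x + h + 1 + du].astype(np.float64)
        if cand.shape != patch.shape:
            continue  # shifted window falls outside the reference image
        cand -= cand.mean()
        denom = np.linalg.norm(patch) * np.linalg.norm(cand)
        score = (patch * cand).sum() / denom if denom else -np.inf
        if score > best:
            best, best_du = score, du
    return best_du
```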
S122: Calculate the parallax between the first viewpoint and the second viewpoint from the displacement.
The displacement between each pixel of the invisible-light image and the reference structured-light image is linearly related to the parallax; the parallax between the first viewpoint and the second viewpoint is therefore calculated from the displacement via this linear relationship.
For example, the parallax d between the first viewpoint and the second viewpoint is calculated by the following formula (11):
d = (B₂ / B₁) · (B₁·f / Z₀ + Δu)        (11)
where B₁ is the distance between the invisible-light image acquisition device and the projection module; B₂ is the distance between the invisible-light image acquisition device and the color camera; Z₀ is the depth of the plane containing the reference structured-light image relative to the invisible-light image acquisition device; f is the image-plane focal length of the invisible-light image acquisition device and the color camera; and Δu is the displacement between corresponding pixels of the invisible-light image and the preset reference structured-light image. The plane containing the reference structured-light image is the plane onto which the reference pattern was previously projected, and Z₀ represents the distance between that plane and the image acquisition device, obtainable from the range information recorded when the reference structured-light image was measured. In this embodiment, the unit of f is pixels, and its value can be obtained in advance by calibration.
When the calculated parallax d is not an integer, it can be rounded up or rounded to the nearest integer.
Referring to Fig. 4, another embodiment differs from the one above in that S13 includes the following sub-steps:
S131: Establish, according to the parallax, the correspondence between the first pixel coordinates of the invisible-light image and the second pixel coordinates of the first color image.
For example, according to the parallax d, the correspondence between a first pixel coordinate I_ir(u_ir, v_ir) of the invisible-light image and a second pixel coordinate I_r(u_r, v_r) of the first color image is established as: I_ir(u_ir, v_ir) = I_r(u_r + d, v_r).
S132: Set the pixel value at each first pixel coordinate of the invisible-light image to the pixel value at the second pixel coordinate of the first color image that corresponds to it, thereby forming the second color image of the target at the first viewpoint.
For example, according to the correspondence, the pixel values (RGB values) of the first color image are assigned to the invisible-light image to generate the second color image. Taking one pixel coordinate as an illustration: if d is 1, pixel coordinate (1, 1) of the invisible-light image corresponds to pixel coordinate (2, 1) of the first color image, so the pixel value at coordinate (1, 1) of the invisible-light image is set to the pixel value (r, g, b) at coordinate (2, 1) of the first color image.
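The per-pixel assignment of sub-steps S131 and S132 can be sketched as follows, assuming a per-pixel disparity map is given; coordinates whose source falls outside the color image are left as zero-valued holes for the later smoothing step.

```python
import numpy as np

def synthesize_second_image(first_color, disparity):
    """Second color image at the first viewpoint: for each pixel (u, v) of
    the invisible-light image, copy the color at (u + d, v) of the first
    color image, per I_ir(u, v) = I_r(u + d, v). Pixels whose source column
    falls outside the image stay zero (holes)."""
    h, w = disparity.shape
    out = np.zeros((h, w, first_color.shape[2]), dtype=first_color.dtype)
    for v in range(h):
        for u in range(w):
            src = u + int(round(disparity[v, u]))
            if 0 <= src < w:
                out[v, u] = first_color[v, src]
    return out
```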
S133: Smooth and denoise the second color image.
Bad points often appear in the shift-value data Δu, causing problems such as holes in the resulting color image; further processing in subsequent steps would amplify these defects and seriously affect the three-dimensional display effect. To avoid the influence of bad points or bad regions, this sub-step denoises and smooths the second color image obtained.
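One minimal stand-in for the smoothing/denoising of S133 is to fill each hole from the nearest valid pixel on its row; a real implementation would more likely use median filtering or inpainting. Treating all-zero pixels as holes is an assumption of this sketch.

```python
import numpy as np

def fill_holes_row(img, hole_value=0):
    """Fill hole pixels (left unassigned by the disparity shift) with the
    nearest valid pixel on the same row. Pixels equal to hole_value in all
    channels are treated as holes."""
    out = img.copy()
    h, w = out.shape[:2]
    for v in range(h):
        valid = [u for u in range(w) if not np.all(out[v, u] == hole_value)]
        if not valid:
            continue  # whole row is holes; nothing to copy from
        for u in range(w):
            if np.all(out[v, u] == hole_value):
                nearest = min(valid, key=lambda uu: abs(uu - u))
                out[v, u] = out[v, nearest]
    return out
```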
Of course, in other embodiments, step S13 may include only sub-steps S131 and S132.
Referring to Fig. 5, in yet another embodiment, the following steps are additionally performed after S11:
S15: Calculate the depth image of the first viewpoint from the invisible-light image.
For example, the depth image of the first viewpoint is calculated from the infrared image; the specific calculation can use existing related algorithms.
S16: Using three-dimensional image transformation theory, calculate the third color image of the target at the first viewpoint from the depth image of the first viewpoint and the first color image.
According to three-dimensional image transformation (3D image warping) theory, any three-dimensional point in space can be mapped to a two-dimensional point in an image plane through the projection model; the pixel coordinates of the images of the first viewpoint and the second viewpoint can therefore be placed in correspondence, and according to this correspondence and the pixel values of the first color image of the second viewpoint, each pixel coordinate of the first-viewpoint image is assigned the pixel value of the corresponding pixel coordinate in the first color image of the second viewpoint.
For example, S16 includes the following sub-steps:
A: Use the following formula (12) to calculate the correspondence between a first pixel coordinate (u_D, v_D) of the depth image of the first viewpoint and a second pixel coordinate (u_R, v_R) of the first color image:
Z_R·Ū_R = Z_D·M_R·R·M_D⁻¹·Ū_D + M_R·T        (12)
where Z_D is the depth information in the first depth image, representing the depth value from the target to the depth camera; Z_R represents the depth value from the target to the color camera; Ū_R denotes the homogeneous pixel coordinates in the image coordinate system of the color camera; Ū_D denotes the homogeneous pixel coordinates in the image coordinate system of the depth camera; M_R is the intrinsic matrix of the color camera and M_D the intrinsic matrix of the depth camera; and R is the rotation matrix and T the translation vector in the extrinsic parameters of the depth camera relative to the color camera.
The intrinsic and extrinsic matrices of the cameras and the acquisition device can be set in advance: the intrinsic matrix can be calculated from the configuration parameters of the camera or acquisition device, and the extrinsic matrix is determined by the positional relationship between the invisible-light image acquisition device and the color camera. In one embodiment, the intrinsic matrix is composed of the pixel focal length of the lens and the coordinates of the center of the image sensor target surface. Since the positional relationship between the first and second viewpoints is set to that of the two human eyes, between which there is no relative rotation and only a separation of the set value t, the rotation matrix R of the color camera relative to the invisible-light image acquisition device is the identity matrix and the translation vector is T = [t, 0, 0]ᵀ.
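Formula (12) with these eye-like extrinsics (R = I, T = [t, 0, 0]ᵀ) can be sketched as below; the intrinsic values in the check are illustrative, not calibration data from the patent.

```python
import numpy as np

def warp_depth_pixel(u_d, v_d, Z_d, M_d, M_r, R, T):
    """Map a pixel of the first-viewpoint depth image into the color camera,
    per formula (12): Z_R * U_R = Z_D * M_R R M_D^{-1} U_D + M_R T.
    Returns the (u_R, v_R) pixel coordinates and the depth Z_R."""
    U_d = np.array([u_d, v_d, 1.0])
    rhs = Z_d * (M_r @ R @ np.linalg.inv(M_d) @ U_d) + M_r @ T
    Z_r = rhs[2]
    return rhs[:2] / Z_r, Z_r

# With identical intrinsics and R = I, the mapping collapses to
# u_r = u_d + f * t / Z_d, v_r = v_d, and Z_r = Z_d -- the linear
# shift-by-parallax relation the text uses elsewhere.
```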
Further, this setting value t can be carried out according to the distance of black light image acquisition device and color camera with target Adjust.In another embodiment, further comprising the steps of before above-mentioned S11: to obtain target and black light image acquisition device Distance with color camera;When judging that described target is equal with the distance of described black light image acquisition device and described color camera During more than the first distance value, described setting value t is tuned up;When judging described target and described black light image acquisition device and institute State the distance of color camera when being respectively less than second distance value, described setting value t is turned down.
Here, the first distance value is greater than or equal to the second distance value. For example, if the distance from the target to the invisible-light image acquisition device is 100 cm and the distance from the target to the color camera is also 100 cm, then since 100 cm is less than the second distance value of 200 cm, the set value is decreased by one step, or adjusted by a decrement computed from the current distances between the target and the invisible-light image acquisition device and the color camera. If the distance from the target to both devices is 300 cm, then since 300 cm is greater than the second distance value of 200 cm and less than the first distance value of 500 cm, the set value is left unchanged.
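The threshold logic just described can be sketched as follows; the 200 cm and 500 cm thresholds come from the example, while the step size is an assumed illustrative value:

```python
def adjust_baseline(t, distance, first_value=500.0, second_value=200.0, step=0.5):
    """Adjust the set value t from the target distance (cm).
    first_value >= second_value, as required; step is an assumed increment."""
    if distance > first_value:     # target far from both devices: increase t
        return t + step
    if distance < second_value:    # target close to both devices: decrease t
        return t - step
    return t                       # between the thresholds: leave t unchanged
```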
B: set the pixel value of each first pixel coordinate of the invisible-light image to the pixel value of the second pixel coordinate of the first color image that corresponds to that first pixel coordinate, so as to form the third color image of the target at the first viewpoint.
For example, after substituting the depth information ZD of the invisible-light image of the first viewpoint into formula 12 above, the depth information of the second viewpoint on the left-hand side of formula 12, i.e. the depth information ZR of the first color image, can be obtained, together with the homogeneous pixel coordinates in the image coordinate system of the first color image. In this embodiment, the invisible-light image acquisition device and the color camera are at the same distance from the target, so the ZR obtained equals ZD. From the homogeneous pixel coordinates, the second pixel coordinates (uR, vR) of the first color image corresponding one-to-one to the first pixel coordinates (uD, vD) of the invisible-light image are obtained; for example, the correspondence is (uR, vR) = (uD + d, vD). Then, according to this correspondence, the pixel values of the first color image are assigned to the invisible-light image to generate the third color image.
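A sketch of step B under the correspondence (uR, vR) = (uD + d, vD) given in the example; the function name and the scalar-disparity shortcut are illustrative:

```python
import numpy as np

def warp_to_first_viewpoint(first_color, disparity):
    """Step B: for every pixel (u_D, v_D) of the invisible-light image, copy the
    first-color-image pixel found at (u_R, v_R) = (u_D + d, v_D)."""
    h, w = first_color.shape[:2]
    third = np.zeros_like(first_color)
    d = np.broadcast_to(np.asarray(disparity), (h, w)).astype(int)
    for v in range(h):
        for u in range(w):
            u_src = u + d[v, u]
            if 0 <= u_src < w:     # drop correspondences outside the frame
                third[v, u] = first_color[v, u_src]
    return third
```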
In yet another embodiment, the above S14 comprises the following steps:
S141: average or weighted-average the pixel values of corresponding pixels in the second color image and the third color image to obtain the fourth color image of the first viewpoint.
Taking pixel coordinates in the color images as an example: if the pixel values at coordinate (uR, vR) in the second color image and the third color image are (r1, g1, b1) and (r2, g2, b2) respectively, then the pixel value at coordinate (uR, vR) in the fourth color image of the first viewpoint is set to ((r1 + r2)/2, (g1 + g2)/2, (b1 + b2)/2).
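S141 as a short NumPy operation; plain averaging with w2 = w3 = 0.5 gives the mean of the example, and other weights give the weighted average:

```python
import numpy as np

def fuse_views(second, third, w2=0.5, w3=0.5):
    """Fourth color image: per-pixel weighted average of the second and third
    color images; w2 = w3 = 0.5 reduces to the plain mean of S141."""
    second = np.asarray(second, dtype=np.float64)
    third = np.asarray(third, dtype=np.float64)
    return (w2 * second + w3 * third) / (w2 + w3)
```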
S142: form the three-dimensional image from the first color image and the fourth color image.
For example, the first color image and the fourth color image are used as the binocular images of the two human eyes to synthesize the three-dimensional image.
It can be understood that in the above embodiments, the image acquisition target surfaces of the invisible-light image acquisition device and the color camera may be set to be equal in size, with identical resolution and identical focal length. Alternatively, at least one of the target-surface size, resolution and focal length of the color camera and the invisible-light image acquisition device may differ; for example, both the target surface and the resolution of the color camera may be larger than those of the invisible-light image acquisition device. In that case, after the above S13, the method further includes: interpolating and/or segmenting the first color image and/or the second color image so that the first color image and the second color image correspond to the same target region and have the same image size and resolution. Since assembly errors exist between the color camera and the invisible-light image acquisition device, "equal target-surface size, identical resolution and identical focal length" above should be understood as equal within an allowable error range.
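One hedged sketch of the interpolation step that brings the two color images to a common resolution; nearest-neighbor resampling is used here purely for illustration, since the specification does not name a particular interpolation method:

```python
import numpy as np

def resize_nearest(img, new_h, new_w):
    """Resample img to (new_h, new_w) by nearest-neighbor interpolation, so the
    first and second color images can share one image size and resolution."""
    h, w = img.shape[:2]
    rows = np.arange(new_h) * h // new_h   # source row for each output row
    cols = np.arange(new_w) * w // new_w   # source column for each output column
    return img[rows][:, cols]
```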
Moreover, the above images include photographs or video. When the images are video, the capture frame rates of the invisible-light image acquisition device and the color camera are synchronized; if they are not synchronized, video images with matching frame rates are obtained by image interpolation.
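When the capture rates differ, each frame of one stream must be paired with a frame of the other on a common timeline. The specification only says "image interpolation"; the simplest stand-in is nearest-frame selection:

```python
def nearest_frame_indices(src_times, dst_times):
    """For each target timestamp, return the index of the nearest source frame,
    putting the two video streams on one shared timeline."""
    return [min(range(len(src_times)), key=lambda i: abs(src_times[i] - t))
            for t in dst_times]
```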
Please refer to Fig. 6, which is a structural schematic diagram of an embodiment of the three-dimensional image drawing apparatus of the present invention. In this embodiment, the drawing apparatus 60 includes an acquisition module 61, a computing module 62, a forming module 63 and an obtaining module 64. Specifically:
The acquisition module 61 is configured to obtain, respectively, an invisible-light image captured of a target from a first viewpoint and a first color image captured of the target from a second viewpoint;
The computing module 62 is configured to compute, from the invisible-light image, the parallax between the first viewpoint and the second viewpoint;
The obtaining module 64 is configured to shift the pixel coordinates of the first color image by the parallax to obtain a second color image of the first viewpoint;
The forming module 63 is configured to form a three-dimensional image from the first color image and the second color image.
Optionally, the invisible-light image is captured by an invisible-light image acquisition device arranged at the first viewpoint while a projection module projects a structured-light pattern onto the target, and the first color image is captured by a color camera arranged at the second viewpoint.
Optionally, the computing module 62 is specifically configured to compute, using a matching algorithm of digital image processing, the displacement between each pixel of the invisible-light image containing the structured-light pattern and a preset reference structured-light image, and to compute the parallax between the first viewpoint and the second viewpoint from that displacement, where the displacement and the parallax have a linear relationship.
Further optionally, when computing the parallax from the displacement, the computing module 62 computes the parallax d between the first viewpoint and the second viewpoint using formula 11 above.
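Assuming that formula 11 here is the same relation written out as formula (1) in claim 4, d = (B2/B1)(B1·f/Z0 + Δu), a direct transcription:

```python
def displacement_to_parallax(delta_u, B1, B2, f, Z0):
    """d = (B2 / B1) * (B1 * f / Z0 + delta_u): B1 is the projector-to-IR-camera
    baseline, B2 the IR-camera-to-color-camera baseline, f the shared image-plane
    focal length, Z0 the depth of the reference structured-light plane."""
    return (B2 / B1) * (B1 * f / Z0 + delta_u)
```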
Optionally, the obtaining module 64 is specifically configured to: according to the parallax d, establish the correspondence between the first pixel coordinates Iir(uir, vir) of the invisible-light image and the second pixel coordinates Ir(ur, vr) of the first color image as Iir(uir, vir) = Ir(ur + d, vr); set the pixel value of each first pixel coordinate of the invisible-light image to the pixel value of the second pixel coordinate of the first color image corresponding to it, so as to form the second color image of the target at the first viewpoint; and smooth and denoise the second color image.
Optionally, the computing module 62 is further configured to compute the depth image of the first viewpoint from the invisible-light image, and to compute, using three-dimensional image transformation theory, the third color image of the target at the first viewpoint from the depth image of the first viewpoint and the first color image. The forming module 63 is then specifically configured to average or weighted-average the pixel values of corresponding pixels in the second color image and the third color image to obtain the fourth color image of the first viewpoint, and to form the three-dimensional image from the first color image and the fourth color image.
Optionally, the positional relationship between the first viewpoint and the second viewpoint is the positional relationship between the two human eyes; the color camera, the invisible-light image acquisition device and the projection module lie on the same straight line; the invisible-light image is an infrared image, and the invisible-light image acquisition device is an infrared camera.
Optionally, the image acquisition target surfaces of the color camera and the invisible-light image acquisition device are equal in size, with identical resolution and focal length, and their optical axes are parallel to each other.
Optionally, the invisible-light image and the first color image are photographs or video. When they are video, the capture frame rates of the invisible-light image acquisition device and the color camera are synchronized; if they are not synchronized, video images with matching frame rates are obtained by image interpolation.
The above modules of the drawing apparatus are respectively used to perform the corresponding steps in the method embodiments above; for the specific execution process, refer to the description of the method embodiments, which is not repeated here.
Please refer to Fig. 7, which is a structural schematic diagram of an embodiment of the three-dimensional image drawing system of the present invention. In this embodiment, the system 70 includes a projection module 74, an invisible-light image acquisition device 71, a color camera 72, and an image processing device 73 connected to the invisible-light image acquisition device 71 and the color camera 72. The image processing device 73 includes an input interface 731, a processor 732 and a memory 733. Furthermore, the image processing device 73 may also be connected to the projection module 74.
The input interface 731 is used to obtain the images captured by the invisible-light image acquisition device 71 and the color camera 72.
The memory 733 is used to store a computer program and provide it to the processor 732; it may also store data used by the processor 732 during processing, such as the intrinsic and extrinsic matrices of the invisible-light image acquisition device 71 and the color camera 72, and the images obtained by the input interface 731.
The processor 732 is configured to:
obtain, respectively, via the input interface 731, an invisible-light image captured of a target by the invisible-light image acquisition device 71 at the first viewpoint and a first color image captured of the target by the color camera 72 at the second viewpoint;
compute, from the invisible-light image, the parallax between the first viewpoint and the second viewpoint;
shift the pixel coordinates of the first color image by the parallax to obtain a second color image of the first viewpoint;
form a three-dimensional image from the first color image and the second color image.
In this embodiment, the image processing device 73 may further include a display screen 734 for displaying the three-dimensional image, so as to realize three-dimensional display. Of course, in another embodiment the image processing device 73 does not itself display the three-dimensional image; as shown in Fig. 8, the three-dimensional image drawing system 70 further includes a display device 75 connected to the image processing device 73, which receives the three-dimensional image output by the image processing device 73 and displays it.
Optionally, the processor 732 is specifically configured to compute, using a matching algorithm of digital image processing, the displacement between each pixel of the invisible-light image containing the structured-light pattern and a preset reference structured-light image, and to compute the parallax between the first viewpoint and the second viewpoint from that displacement, where the displacement and the parallax have a linear relationship.
Further optionally, when computing the parallax from the displacement, the processor 732 computes the parallax d between the first viewpoint and the second viewpoint using formula 1.
Optionally, when shifting the pixel coordinates of the first color image by the parallax to obtain the second color image of the first viewpoint, the processor 732: according to the parallax d, establishes the correspondence between the first pixel coordinates Iir(uir, vir) of the invisible-light image and the second pixel coordinates Ir(ur, vr) of the first color image as Iir(uir, vir) = Ir(ur + d, vr); sets the pixel value of each first pixel coordinate of the invisible-light image to the pixel value of the second pixel coordinate of the first color image corresponding to it, so as to form the second color image of the target at the first viewpoint; and smooths and denoises the second color image.
Optionally, the processor 732 is further configured to compute the depth image of the first viewpoint from the invisible-light image, and to compute, using three-dimensional image transformation theory, the third color image of the target at the first viewpoint from the depth image of the first viewpoint and the first color image. When forming the three-dimensional image from the first color image and the second color image, the processor 732 averages or weighted-averages the pixel values of corresponding pixels in the second color image and the third color image to obtain the fourth color image of the first viewpoint, and forms the three-dimensional image from the first color image and the fourth color image.
Optionally, the positional relationship between the first viewpoint and the second viewpoint is the positional relationship between the two human eyes; the color camera 72, the invisible-light image acquisition device 71 and the projection module 74 lie on the same straight line; the invisible-light image is an infrared image, and the invisible-light image acquisition device 71 is an infrared camera.
Optionally, the image acquisition target surfaces of the color camera 72 and the invisible-light image acquisition device 71 are equal in size, with identical resolution and focal length, and their optical axes are parallel to each other.
Optionally, the invisible-light image and the first color image are photographs or video. When they are video, the capture frame rates of the invisible-light image acquisition device and the color camera are synchronized; if they are not synchronized, video images with matching frame rates are obtained by image interpolation.
The image processing device 73, acting as the three-dimensional image drawing apparatus above, can be used to perform the methods described in the above embodiments. For example, the methods disclosed in the above embodiments of the present invention may be applied in, or implemented by, the processor 732. The processor 732 may be an integrated circuit chip with signal processing capability. During implementation, each step of the above methods may be completed by integrated logic circuits in hardware or by instructions in software form within the processor 732. The processor 732 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and can implement or execute the methods, steps and logic diagrams disclosed in the embodiments of the present invention. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the methods disclosed in the embodiments of the present invention may be embodied directly as being executed by a hardware decoding processor, or executed by a combination of the hardware and software modules in a decoding processor. The software modules may reside in a storage medium mature in the art, such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or registers. The storage medium resides in the memory 733; the processor 732 reads the information in the memory and completes the steps of the above methods in combination with its hardware.
In the above solution, the invisible-light image collected at the first viewpoint is used to obtain the parallax between the first viewpoint and the second viewpoint, and the first color image of the second viewpoint together with this parallax is used to obtain the second color image of the first viewpoint; the three-dimensional image is then formed from the first color image and the second color image. Since the parallax between the first viewpoint and the second viewpoint is obtained from the collected image data without intermediate image processing, the loss of image detail information is reduced and the color images of the two viewpoints are obtained more accurately, thereby reducing the distortion of the synthesized three-dimensional image and improving the three-dimensional display effect generated from two-dimensional images. Moreover, compared with existing DIBR techniques, this embodiment does not need to compute the depth information of the image, avoiding the errors introduced by repeated computation and further improving the three-dimensional display effect.
The above are only embodiments of the present invention and do not thereby limit the scope of the present invention. Any equivalent structural or process transformation made using the contents of the description and drawings of the present invention, whether applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present invention.

Claims (10)

1. the method for a drawing three-dimensional image, it is characterised in that including:
Obtain respectively and target is acquired the black light image that obtains and with the second viewpoint to described target with the first viewpoint It is acquired the first coloured image obtained;
The parallax between described first viewpoint and described second viewpoint is calculated by described black light image;
According to the pixel coordinate of the first coloured image described in described parallactic movement, obtain the second coloured image of the first viewpoint;
3-D view is formed by described first coloured image and described second coloured image.
Method the most according to claim 1, it is characterised in that described black light image is to described mesh at projection module Mark projective structure light pattern, is acquired described target by the black light image acquisition device being arranged on described first viewpoint Arriving, described target is acquired obtaining by described first coloured image by the color camera being arranged on described second viewpoint.
Method the most according to claim 2, it is characterised in that described calculating by described black light image described first regards Parallax between point and described second viewpoint, including:
According to the matching algorithm of Digital Image Processing, calculate the described black light image comprising described structured light patterns with pre- If reference configuration light image each pixel between displacement;
Being calculated the parallax between the first viewpoint and described second viewpoint by described displacement, wherein, described displacement regards with described Difference has linear relationship.
Method the most according to claim 3, it is characterised in that described be calculated the first viewpoint and described by described displacement Parallax between second viewpoint, including:
Utilize the parallax d that following formula 1 is calculated between the first viewpoint and described second viewpoint,
d = (B2 / B1) · (B1 · f / Z0 + Δu)    (1)
wherein B1 is the distance between the invisible-light image acquisition device and the projection module, and B2 is the distance between the invisible-light image acquisition device and the color camera; Z0 is the depth value of the plane of the reference structured-light image relative to the invisible-light image acquisition device; f is the image-plane focal length of the invisible-light image acquisition device and the color camera; and Δu is the displacement between each pixel of the invisible-light image and the preset reference structured-light image.
5. The method according to claim 1, characterized in that shifting the pixel coordinates of the first color image by the parallax to obtain the second color image of the first viewpoint comprises:
according to the parallax d, establishing the correspondence between the first pixel coordinates Iir(uir, vir) of the invisible-light image and the second pixel coordinates Ir(ur, vr) of the first color image as:
Iir(uir, vir) = Ir(ur + d, vr);
setting the pixel value of each first pixel coordinate of the invisible-light image to the pixel value of the second pixel coordinate of the first color image corresponding to that first pixel coordinate, so as to form the second color image of the target at the first viewpoint;
smoothing and denoising the second color image.
6. The method according to claim 2, characterized by further comprising:
computing the depth image of the first viewpoint from the invisible-light image;
computing, using three-dimensional image transformation theory, the third color image of the target at the first viewpoint from the depth image of the first viewpoint and the first color image;
wherein forming the three-dimensional image from the first color image and the second color image comprises:
averaging or weighted-averaging the pixel values of corresponding pixels in the second color image and the third color image to obtain the fourth color image of the first viewpoint;
forming the three-dimensional image from the first color image and the fourth color image.
7. The method according to claim 2, characterized in that the positional relationship between the first viewpoint and the second viewpoint is the positional relationship between the two human eyes; the color camera, the invisible-light image acquisition device and the projection module lie on the same straight line; and the invisible-light image is an infrared image and the invisible-light image acquisition device is an infrared camera.
8. The method according to any one of claims 1 to 7, characterized in that the image acquisition target surfaces of the color camera and the invisible-light image acquisition device are equal in size, with identical resolution and focal length, and their optical axes are parallel to each other.
9. A three-dimensional image drawing apparatus, characterized by comprising:
an acquisition module, configured to obtain, respectively, an invisible-light image captured of a target from a first viewpoint and a first color image captured of the target from a second viewpoint;
a computing module, configured to compute, from the invisible-light image, the parallax between the first viewpoint and the second viewpoint;
an obtaining module, configured to shift the pixel coordinates of the first color image by the parallax to obtain a second color image of the first viewpoint;
a forming module, configured to form a three-dimensional image from the first color image and the second color image.
10. A three-dimensional image drawing system, characterized by comprising a projection module, an invisible-light image acquisition device, a color camera, and an image processing device connected to the invisible-light image acquisition device and the color camera;
the image processing device being configured to:
obtain, respectively, an invisible-light image captured of a target by the invisible-light image acquisition device at a first viewpoint and a first color image captured of the target by the color camera at a second viewpoint;
compute, from the invisible-light image, the parallax between the first viewpoint and the second viewpoint;
shift the pixel coordinates of the first color image by the parallax to obtain a second color image of the first viewpoint;
form a three-dimensional image from the first color image and the second color image.
CN201610698004.0A 2016-08-19 2016-08-19 Method and device thereof, the system of drawing three-dimensional image Active CN106170086B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201610698004.0A CN106170086B (en) 2016-08-19 2016-08-19 Method and device thereof, the system of drawing three-dimensional image
PCT/CN2017/085147 WO2018032841A1 (en) 2016-08-19 2017-05-19 Method, device and system for drawing three-dimensional image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610698004.0A CN106170086B (en) 2016-08-19 2016-08-19 Method and device thereof, the system of drawing three-dimensional image

Publications (2)

Publication Number Publication Date
CN106170086A true CN106170086A (en) 2016-11-30
CN106170086B CN106170086B (en) 2019-03-15

Family

ID=57375861

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610698004.0A Active CN106170086B (en) 2016-08-19 2016-08-19 Method and device thereof, the system of drawing three-dimensional image

Country Status (2)

Country Link
CN (1) CN106170086B (en)
WO (1) WO2018032841A1 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106875435A (en) * 2016-12-14 2017-06-20 深圳奥比中光科技有限公司 Obtain the method and system of depth image
CN107105217A (en) * 2017-04-17 2017-08-29 深圳奥比中光科技有限公司 Multi-mode depth calculation processor and 3D rendering equipment
WO2018032841A1 (en) * 2016-08-19 2018-02-22 深圳奥比中光科技有限公司 Method, device and system for drawing three-dimensional image
CN108460368A (en) * 2018-03-30 2018-08-28 百度在线网络技术(北京)有限公司 3-D view synthetic method, device and computer readable storage medium
CN113436129A (en) * 2021-08-24 2021-09-24 南京微纳科技研究院有限公司 Image fusion system, method, device, equipment and storage medium
CN114119680A (en) * 2021-09-09 2022-03-01 北京的卢深视科技有限公司 Image acquisition method and device, electronic equipment and storage medium

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101662695A (en) * 2009-09-24 2010-03-03 清华大学 Method and device for acquiring virtual viewport
US20100128129A1 (en) * 2008-11-26 2010-05-27 Samsung Electronics Co., Ltd. Apparatus and method of obtaining image
CN102289841A (en) * 2011-08-11 2011-12-21 四川虹微技术有限公司 Method for regulating audience perception depth of three-dimensional image
CN103004214A (en) * 2010-07-16 2013-03-27 高通股份有限公司 Vision-based quality metric for three dimensional video
US20140055574A1 (en) * 2012-08-27 2014-02-27 Samsung Electronics Co., Ltd. Apparatus and method for capturing color images and depth images
CN103796004A (en) * 2014-02-13 2014-05-14 西安交通大学 Active binocular depth sensing method of structured light
CN103824318A (en) * 2014-02-13 2014-05-28 西安交通大学 Multi-camera-array depth perception method
US20140187879A1 (en) * 2012-12-05 2014-07-03 Fred Wood System and Method for Laser Imaging and Ablation of Cancer Cells Using Fluorescence
CN104428624A (en) * 2012-06-29 2015-03-18 富士胶片株式会社 Three-dimensional measurement method, apparatus, and system, and image processing device
CN105120257A (en) * 2015-08-18 2015-12-02 宁波盈芯信息科技有限公司 Vertical depth sensing device based on structured light coding
CN105791662A (en) * 2014-12-22 2016-07-20 联想(北京)有限公司 Electronic device and control method

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4461091B2 (en) * 2004-11-30 2010-05-12 本田技研工業株式会社 Position detection apparatus and correction method thereof
CN102999939B (en) * 2012-09-21 2016-02-17 魏益群 Coordinate acquiring device, real-time three-dimensional reconstructing system and method, three-dimensional interactive device
JP2014230179A (en) * 2013-05-24 2014-12-08 ソニー株式会社 Imaging apparatus and imaging method
CN104918035A (en) * 2015-05-29 2015-09-16 深圳奥比中光科技有限公司 Method and system for obtaining three-dimensional image of target
CN106170086B (en) * 2016-08-19 2019-03-15 深圳奥比中光科技有限公司 Method and device thereof, the system of drawing three-dimensional image
CN106791763B (en) * 2016-11-24 2019-02-22 深圳奥比中光科技有限公司 A kind of application specific processor for 3D display and 3D interaction
CN106604020B (en) * 2016-11-24 2019-05-31 深圳奥比中光科技有限公司 A kind of application specific processor for 3D display



Also Published As

Publication number Publication date
CN106170086B (en) 2019-03-15
WO2018032841A1 (en) 2018-02-22

Similar Documents

Publication Publication Date Title
CN106170086A (en) Method, device and system for drawing three-dimensional image
CN105160680B (en) Design method of a noise-free depth camera based on structured light
CN106254854A (en) Method, apparatus and system for preparing three-dimensional images
Dufaux et al. Emerging technologies for 3D video: creation, coding, transmission and rendering
CN105678742B (en) Underwater camera calibration method
CN103868460B (en) Binocular stereo vision automatic measurement method based on a parallax optimization algorithm
CN102072706B (en) Multi-camera positioning and tracking method and system
CN101916455B (en) Method and device for reconstructing three-dimensional model of high dynamic range texture
CN109741405A (en) Depth information acquisition system based on a dual-structured-light RGB-D camera
CN105931240A (en) Three-dimensional depth sensing device and method
CN104021587B (en) Rapid generation method for true three-dimensional display of large scenes based on computer-generated holography
CN110390719A (en) Point cloud reconstruction apparatus based on time-of-flight
CN103115613B (en) Three-dimensional space positioning method
CN102445165B (en) Stereo vision measurement method based on a single-frame color-coded grating
CN103337094A (en) Method for three-dimensional reconstruction of motion using a binocular camera
CN107666606A (en) Binocular panoramic image acquisition method and device
CN107452031B (en) Virtual ray tracking method and light field dynamic refocusing display system
US8917317B1 (en) System and method for camera calibration
CN102980513B (en) Object-centered monocular panoramic stereo vision sensor
CN104677330A (en) Small binocular stereoscopic vision ranging system
CN104155765A (en) Method and equipment for correcting three-dimensional image in tiled integral imaging display
CN103299343A (en) Range image pixel matching method
CN105654547A (en) Three-dimensional reconstruction method
Aggarwal et al. Panoramic stereo videos with a single camera
CN106210474A (en) Image capture device and virtual reality device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: Room A808, China University of Geosciences Research Base Building, No. 8 Hing Road Three, Nanshan District, Shenzhen, Guangdong 518057

Patentee after: Orbbec Technology Group Co., Ltd.

Address before: Room A808, China University of Geosciences Research Base Building, No. 8 Hing Road Three, Nanshan District, Shenzhen, Guangdong 518057

Patentee before: Shenzhen Orbbec Co., Ltd.