CN109769109A - Method and system for rendering three-dimensional objects based on virtual view synthesis - Google Patents

Method and system for rendering three-dimensional objects based on virtual view synthesis

Info

Publication number
CN109769109A
CN109769109A
Authority
CN
China
Prior art keywords
virtual
color image
image
camera
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910164246.5A
Other languages
Chinese (zh)
Inventor
陈东岳 (Chen Dongyue)
宋园园 (Song Yuanyuan)
常兴亚 (Chang Xingya)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northeastern University China
Original Assignee
Northeastern University China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northeastern University China filed Critical Northeastern University China
Priority to CN201910164246.5A priority Critical patent/CN109769109A/en
Publication of CN109769109A publication Critical patent/CN109769109A/en
Pending legal-status Critical Current

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The present invention provides a method and system for rendering three-dimensional objects based on virtual view synthesis. The rendering method comprises the following steps: S1: calibrating and registering the reference camera; S2: selecting left and right reference viewpoints of the same scene and capturing with the reference camera a reference image at each of the two viewpoints; S3: preprocessing to remove the invalid points in the depth images; S4: mapping the color image of each reference image at its reference viewpoint to the virtual viewpoint with the 3D image warping technique; S5: fusing the left virtual color image with the right virtual color image; S6: registering the color images of the reference images and the virtual image into the real-environment background to complete the rendering of the virtual three-dimensional image of the object. The technical solution of the present invention solves the complexity problem of traditional modeling.

Description

Method and system for rendering three-dimensional objects based on virtual view synthesis
Technical field
The present invention relates to the technical field of virtual view reconstruction, and specifically to a method and system for rendering three-dimensional objects based on virtual view synthesis.
Background technique
Mixed reality (MR) is in effect a combination of virtual reality (VR) and augmented reality (AR). MR first virtualizes a real thing and then superimposes it onto the real world, producing a new visible environment in which virtual objects coexist with the real world and interact with it in real time. For MR to achieve realistic rendering, the full appearance information of an object or scene must be obtained, which is difficult to realize for certain large scenes. Virtual view reconstruction solves this problem: by acquiring only a small amount of data, the views of arbitrary viewpoints can be realized at the terminal.
Virtual view reconstruction synthesizes the image of a virtual viewpoint from the images of known viewpoints, and the techniques fall mainly into two kinds. One is model-based rendering (MBR), which uses computer graphics to establish a three-dimensional scene model and then generates the virtual-viewpoint image through computed shading, projection, and illumination rendering. However, because the precision of the three-dimensional model is usually low and the illumination of the registered scene environment is complex and hard to control, MBR imaging is not only slow to compute and costly in labor, but also unsatisfactory in realism. The other is image-based rendering (IBR), which generates the virtual-perspective effect from existing sequences of real images, avoiding complex geometric modeling and computation; the process is therefore fast and simple and the imaging effect is lifelike, although there is still room for improvement in robustness and in adaptability to the registration environment. According to the amount of three-dimensional geometric information used when rendering the image, IBR techniques are divided into three kinds: IBR without geometric information, IBR using implicit geometric information, and IBR using explicit geometric information.
Apart from the IBR technique that uses implicit geometric information, the now commonly used way to reconstruct a real three-dimensional object is three-dimensional reconstruction, which must truly rebuild a three-dimensional virtual model of the object surface in the computer and therefore requires complex modeling of the object. Although display from any viewpoint becomes possible once the three-dimensional scene model is established, the modeling process still has many problems.
The first disadvantage is that a scene model must be established from basic graphic primitives to draw the three-dimensional scene, and the model must be described mathematically; the objects that can be reconstructed three-dimensionally are therefore limited. The effect is good when modeling planar bodies, but poor when reconstructing complex solid models, especially bodies with complex curved surfaces.
The second disadvantage is that after the scene model is established, in order to obtain a three-dimensional model as lifelike as the real scene, the colors of all objects in the model must also be computed and the illumination conditions and texture-mapping method determined, so as to obtain a textured three-dimensional object and improve the realism of the model.
Summary of the invention
In view of the complexity problem of traditional modeling set forth above, a method and system for rendering three-dimensional objects based on virtual view synthesis are provided. The present invention mainly uses the IBR technique with implicit geometric information, namely depth-image-based rendering (DIBR), and realizes the reconstruction of the virtual object by dual-viewpoint projection, so that the reproduction of the entire three-dimensional object is completed with only a small amount of image data.
The technical means adopted by the present invention are as follows:
The present invention provides a method for rendering three-dimensional objects based on virtual view synthesis, comprising the following steps:
S1: calibrating and registering the reference camera;
S2: selecting left and right reference viewpoints of the same scene, and capturing with the reference camera a reference image at each of the two viewpoints, each reference image comprising a color image and a depth image corresponding to the color image;
S3: preprocessing each depth image with a median filter to remove the invalid points in the depth image;
S4: mapping the color image of each reference image at its reference viewpoint to the virtual viewpoint with the 3D image warping technique, which specifically includes:
after the position of the virtual viewpoint is determined, using formula 1, from the coordinates (u1, v1) of each pixel of the depth image of the reference image in two-dimensional space, calculating the coordinates (u2, v2) of the corresponding pixel of the virtual viewpoint in the two-dimensional plane and the depth value Z2 of the three-dimensional point relative to the virtual camera:
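Written out in the standard 3D image warping form consistent with the symbol definitions that follow, formula 1 is:
$$Z_2 \begin{bmatrix} u_2 \\ v_2 \\ 1 \end{bmatrix} = Z_1\, A_2\, R\, A_1^{-1} \begin{bmatrix} u_1 \\ v_1 \\ 1 \end{bmatrix} + A_2\, T \tag{1}$$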
where the virtual camera is the camera placed at the position of the virtual viewpoint; A1 is the intrinsic parameter matrix of the reference camera; Z1 is the depth value of the three-dimensional point relative to the reference camera; A2 is the intrinsic parameter matrix of the virtual camera; and R and T are the rotation matrix and translation vector of the virtual camera relative to the reference camera;
S5: from the coordinates (u2, v2) of the pixels mapped to the virtual viewpoint, obtaining the left virtual color image and the right virtual color image corresponding to the color images of the left and right reference images respectively, and fusing the left virtual color image with the right virtual color image;
combining the pixel position coordinates (u2, v2) and the corresponding pixel values Z2, the pixel value I(u, v) of the fused color image at pixel (u, v) is computed according to the following three cases, finally yielding the virtual color image:
(1) when the pixels (u2, v2) at the same position in the left virtual color image and the right virtual color image are both not holes, the pixel value I(u, v) of the virtual color image at pixel (u, v) equals the weighted value of the pixels at (u2, v2) in the two images, with weights as follows:
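A common baseline-ratio weighting, consistent with the translation vectors defined below and assumed here for concreteness, is:
$$\alpha_L = \frac{\lVert t - t_R \rVert}{\lVert t - t_L \rVert + \lVert t - t_R \rVert}, \qquad \alpha_R = \frac{\lVert t - t_L \rVert}{\lVert t - t_L \rVert + \lVert t - t_R \rVert}, \qquad I(u,v) = \alpha_L\, I_L(u_2, v_2) + \alpha_R\, I_R(u_2, v_2)$$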
where t, tL, tR are the translation vectors of the camera extrinsic parameters at the virtual viewpoint, the left reference viewpoint, and the right reference viewpoint, respectively;
(2) when only one of the left virtual color image and the right virtual color image is not a hole at coordinates (u2, v2), the pixel value I(u, v) of the virtual color image at pixel (u, v) equals the corresponding pixel value of the non-hole image;
(3) when the left virtual color image and the right virtual color image are both holes at (u2, v2), the fused virtual color image is also a hole at pixel (u, v);
where a hole denotes a pixel whose pixel value is zero;
S6: registering the color images of the reference images of the object and the virtual color image generated in S5 into the real-environment background with ARToolkit, completing the rendering of the virtual three-dimensional image of the object.
Further, in step S1 the reference camera is calibrated with Zhang Zhengyou's calibration method, in order to eliminate the distortion of the captured reference images.
Further, the reference camera is a Kinect V2 depth camera.
Further, step S4 specifically includes the following two parts:
(1) using the depth information of the reference image, i.e. the distance of the object point from the camera lens, projecting the coordinates (u1, v1) of each pixel of the depth image of the reference image in two-dimensional space according to formula 2 to obtain the corresponding point coordinates (X, Y, Z) in the world coordinate system of real space:
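Written out in the standard back-projection form consistent with the symbols defined below, formula 2 is:
$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = R_1^{-1} \left( Z_1\, A_1^{-1} \begin{bmatrix} u_1 \\ v_1 \\ 1 \end{bmatrix} - T_1 \right) \tag{2}$$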
where A1 is the intrinsic parameter matrix of the reference camera; [R1 T1] is the extrinsic parameter matrix of the reference camera; Z1 is the depth value of the three-dimensional point relative to the reference camera, i.e. the z-component in the reference camera coordinate system;
(2) re-projecting the three-dimensional points in three-dimensional space onto the two-dimensional plane of the virtual viewpoint according to formula 3, obtaining the pixels (u2, v2) of the virtual viewpoint image and the corresponding depth pixel values Z2:
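Written out in the standard projection form consistent with the symbols defined below, formula 3 is:
$$Z_2 \begin{bmatrix} u_2 \\ v_2 \\ 1 \end{bmatrix} = A_2 \begin{bmatrix} R_2 & T_2 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} \tag{3}$$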
where A2 is the intrinsic parameter matrix of the virtual camera; [R2 T2] is the extrinsic parameter matrix of the virtual camera.
The present invention also provides a system for rendering three-dimensional objects based on virtual view synthesis, which uses the described method and comprises:
a camera calibration and registration module, which calibrates the reference camera with Zhang Zhengyou's checkerboard calibration method and registers the reference camera;
a reference image acquisition module, which selects left and right reference viewpoints of the same scene and captures with the reference camera a reference image at each of the two viewpoints, each comprising a color image and a depth image corresponding to the color image;
a depth image preprocessing module, which preprocesses each depth image with a median filter to remove the invalid points in the depth image;
a reference image dual-viewpoint projection module, which maps the color image of each reference image at its reference viewpoint to the virtual viewpoint with the 3D image warping technique, obtaining the left virtual color image and the right virtual color image corresponding to the color images of the two viewpoints;
an image fusion module, which fuses the left virtual color image with the right virtual color image; combining the coordinates (u2, v2) of the corresponding pixels of the virtual viewpoint in the two-dimensional plane and the corresponding pixel values Z2, it computes the pixel value I(u, v) of the fused color image at pixel (u, v), finally obtaining the virtual color image;
a virtual view rendering and display module, which registers the color images of the reference images of the object and the generated virtual color image into the real-environment background with ARToolkit, completing the rendering of the virtual three-dimensional image of the object.
Compared with the prior art, the present invention has the following advantages:
The method and system for rendering three-dimensional objects based on virtual view synthesis provided by the present invention use dual-viewpoint projection based on the DIBR technique to realize the reconstruction of the virtual object, so that the entire three-dimensional object is reproduced with only a small amount of image data, which solves the complexity of traditional modeling; and because the images themselves directly contain rich scene information, a photo-realistic scene model with a strong sense of reality is easily obtained from the images.
In summary, the technical solution of the present invention realizes the reconstruction of the virtual object by dual-viewpoint projection, thereby completing the reproduction of the entire three-dimensional object with only a small amount of image data. The technical solution of the present invention therefore solves the complexity problem of traditional modeling.
For the above reasons, the present invention can be widely applied in fields such as three-dimensional reconstruction.
Brief description of the drawings
In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without any creative labor.
Fig. 1 is the flow chart of the method of the present invention for rendering three-dimensional objects.
Fig. 2 (a) and (b) are the Kinect V2 camera used by the present invention and a schematic diagram of its infrared light path, respectively.
Fig. 3 (a) and (b) are examples of camera calibration and registration, respectively.
Fig. 4 (a) and (b) show the effect before and after depth image preprocessing, respectively.
Fig. 5 is a schematic diagram of a virtual view synthesized from a single viewpoint by the conventional method.
Fig. 6 is a schematic diagram of the virtual image generated by the method of the present invention for rendering three-dimensional objects.
Fig. 7 (a) and (b) are the virtual images of the left and right reference viewpoints generated by the method of the present invention, respectively.
Fig. 8 is the block diagram of the system of the present invention for rendering three-dimensional objects.
Specific embodiment
In order to enable those skilled in the art to better understand the solution of the present invention, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative labor shall fall within the scope of protection of the present invention.
It should be noted that the terms "first", "second", etc. in the specification, claims, and the above drawings are used to distinguish similar objects, not to describe a particular order or precedence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments of the present invention described herein can be implemented in orders other than those illustrated or described herein. In addition, the terms "comprising" and "having" and any variations of them are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device comprising a series of steps or units is not necessarily limited to the steps or units clearly listed, but may include other steps or units not clearly listed or inherent to the process, method, product, or device.
Embodiment 1
As shown in Figs. 1-7, the present invention provides a method for rendering three-dimensional objects based on virtual view synthesis, comprising the following steps:
S1: This embodiment captures images with a Kinect V2 depth camera. Because the camera serves as a measuring component and its model is not an ideal pinhole model (the lens has distortion), the camera must first be calibrated and registered to improve the measurement precision and the accuracy of the whole work. Since the degree of distortion differs between cameras, each camera is calibrated in order to correct the data, yielding the parameter matrices of the depth camera and the color camera respectively. Thirty checkerboard pictures shot by the Kinect V2 from different angles are imported into the Camera Calibrator tool of MATLAB, as shown in Fig. 3, and the intrinsic and extrinsic parameters of the depth camera and the color camera are obtained according to Zhang Zhengyou's calibration method. Considering that the depth camera and the color camera are not at the same position, the fields of view of the two cameras cannot overlap completely, which makes the position information of the color image and the depth image inconsistent. In order to make the pixels of the depth image frame correspond directly to the color image frame, the cameras are registered: the pixel coordinates on the depth map are projected onto the coordinates of the color map through a mapping relation, completing the image registration. As shown in Fig. 2(a), the Kinect V2 camera used in this embodiment comprises a color camera 1, a depth sensor 2, an infrared emitter 3, and an infrared receiver 4; Fig. 2(b) is a schematic diagram of the infrared light path of the camera. Fig. 3(a) and (b) are examples of camera calibration and registration.
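The embodiment performs calibration in MATLAB's Camera Calibrator; purely as an illustrative sketch, the same Zhang-style checkerboard calibration can be written with OpenCV as follows (the image folder, file names, and the 9 x 6 inner-corner board size are assumptions, not details from this patent):

```python
import glob

import cv2
import numpy as np

# Zhang-style checkerboard calibration (sketch). The board is assumed to have
# 9 x 6 inner corners; one planar world-point set (Z = 0) is reused per view.
PATTERN = (9, 6)
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)

obj_pts, img_pts, img_size = [], [], None
for path in glob.glob("checkerboard/*.png"):  # hypothetical folder with the 30 captured views
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img_size = gray.shape[::-1]  # (width, height)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

# Intrinsic matrix A, lens distortion coefficients, and per-view extrinsics [R|T].
rms, A, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, img_size, None, None)

# Remove lens distortion from a captured reference image with the recovered parameters.
undistorted = cv2.undistort(cv2.imread("reference_color.png"), A, dist)
```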
S2: selecting left and right reference viewpoints of the same scene, and capturing with the reference camera a reference image at each of the two viewpoints, each reference image comprising a color image and a depth image corresponding to the color image;
S3: In this embodiment, owing to the cost constraints of the Kinect V2, the depth camera's capability in image acquisition and processing is limited, and the captured depth image contains many invalid points that carry no depth value. For the accuracy of the subsequent virtual image generation, the depth image needs a certain degree of preprocessing;
In this embodiment the invalid points in the depth image are removed by preprocessing with a median filter. First, a window containing several elements around the pixel to be processed is chosen as the filter window; then the pixel values of these elements are sorted. If the number of pixels in the neighborhood is odd, the middle value is taken; if it is even, the average of the two middle pixel values is taken. Finally, the median thus obtained is used as the pixel value of the point to be processed. The median filter used in this embodiment is defined as follows:
$$\hat{f}(x, y) = \underset{(s,t)\in S_{xy}}{\operatorname{median}} \{\, g(s, t) \,\}$$
where (x, y) denotes the central pixel point, f(x, y) is the gray value at that point, g(x, y) is the depth image to be processed, $\hat{f}(x, y)$ is the output of the median filtering, and Sxy denotes the filter window;
Fig. 4(a) and (b) show the effect before and after depth image preprocessing: Fig. 4(a) is the original depth image, which contains many invalid pixel points; Fig. 4(b) is the image obtained after median filtering, in which some of the noise points of the original depth image have been filtered out;
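A minimal sketch of this preprocessing step, assuming the depth map is a single-channel 16-bit image in which invalid points are zeros:

```python
import cv2
import numpy as np

def preprocess_depth(depth: np.ndarray, ksize: int = 5) -> np.ndarray:
    """Remove invalid (zero-valued) points from a depth map with a median filter.

    cv2.medianBlur sorts the ksize x ksize window S_xy around each pixel and
    outputs its middle value, i.e. the median filter defined above.
    """
    assert depth.ndim == 2 and depth.dtype == np.uint16  # Kinect V2 depth frames are 16-bit
    filtered = cv2.medianBlur(depth, ksize)
    # A gentler variant would write the median only into the invalid points:
    #   return np.where(depth == 0, filtered, depth)
    return filtered
```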
S4: using the 3D image warping technique, the color image of each reference image at its reference viewpoint is mapped to the virtual viewpoint, which specifically includes:
here the captured depth image provides the depth information;
after the position of the virtual viewpoint is determined, using formula 1, from the coordinates (u1, v1) of each pixel of the depth image of the reference image in two-dimensional space, the coordinates (u2, v2) of the corresponding pixel of the virtual viewpoint in the two-dimensional plane and the depth value Z2 of the three-dimensional point relative to the virtual camera are calculated;
where the virtual camera is the camera placed at the position of the virtual viewpoint; A1 is the intrinsic parameter matrix of the reference camera; Z1 is the depth value of the three-dimensional point relative to the reference camera; A2 is the intrinsic parameter matrix of the virtual camera; and R and T are the rotation matrix and translation vector of the virtual camera relative to the reference camera;
Z2 is the depth value of the spatial three-dimensional point relative to the virtual camera, i.e. the z-axis component in the camera coordinate system;
in this embodiment the depth value denotes the distance of a pixel in three-dimensional space from the reference camera lens;
The technique is broadly divided into two parts. First, using the depth information of the reference image, each pixel of the reference viewpoint image is projected to its corresponding position in real three-dimensional space, completing the conversion from two-dimensional space to three-dimensional space. Then, according to the parameters of the virtual viewpoint, these points in three-dimensional space are projected onto the two-dimensional plane of the virtual viewpoint, yielding the image of the virtual viewpoint, i.e. re-projection from three-dimensional space back to two-dimensional space. The matrix conversion between the world coordinate system and the pixel coordinate system is expressed as:
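In the standard pinhole form consistent with formulas 2 and 3, this conversion reads (with A the intrinsic matrix, [R T] the extrinsic matrix, and Zc the depth along the optical axis):
$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = A \begin{bmatrix} R & T \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}$$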
Further, step S4 specifically includes the following two parts:
(1) using the depth information of the reference image, i.e. the distance of the object point from the camera lens, the coordinates (u1, v1) of each pixel of the depth image of the reference image in two-dimensional space are projected according to formula 2 to obtain the corresponding point coordinates (X, Y, Z) in the world coordinate system of real space;
where A1 is the intrinsic parameter matrix of the reference camera; [R1 T1] is the extrinsic parameter matrix of the reference camera; Z1 is the depth value of the three-dimensional point relative to the reference camera, i.e. the z-component in the reference camera coordinate system;
(2) the three-dimensional points in three-dimensional space are re-projected onto the two-dimensional plane of the virtual viewpoint according to formula 3, obtaining the pixels (u2, v2) of the virtual viewpoint image and the corresponding depth pixel values Z2;
where A2 is the intrinsic parameter matrix of the virtual camera; [R2 T2] is the extrinsic parameter matrix of the virtual camera.
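A minimal NumPy sketch of the two-part warping just described, under the stated pinhole assumptions (matrix names follow the text; the per-pixel projection is vectorized):

```python
import numpy as np

def warp_to_virtual(depth_ref, A1, R1, T1, A2, R2, T2):
    """3D image warping (sketch): map each reference pixel (u1, v1) with depth Z1
    to the virtual view, returning its coordinates (u2, v2) and virtual depth Z2."""
    h, w = depth_ref.shape
    u1, v1 = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u1, v1, np.ones_like(u1)]).reshape(3, -1).astype(np.float64)
    Z1 = depth_ref.reshape(-1).astype(np.float64)

    # Part 1 (formula 2): back-project into world coordinates,
    # [X, Y, Z]^T = R1^{-1} (Z1 * A1^{-1} [u1, v1, 1]^T - T1)
    cam = Z1 * (np.linalg.inv(A1) @ pix)              # points in the reference camera frame
    world = np.linalg.inv(R1) @ (cam - T1.reshape(3, 1))

    # Part 2 (formula 3): re-project onto the virtual image plane,
    # Z2 [u2, v2, 1]^T = A2 (R2 [X, Y, Z]^T + T2)
    proj = A2 @ (R2 @ world + T2.reshape(3, 1))
    Z2 = proj[2]
    with np.errstate(divide="ignore", invalid="ignore"):
        u2 = proj[0] / Z2                             # invalid points (Z1 == 0) remain holes
        v2 = proj[1] / Z2
    return u2.reshape(h, w), v2.reshape(h, w), Z2.reshape(h, w)
```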
This embodiment also compares the effect of synthesizing the virtual image by the conventional method against the method described herein. Fig. 5 is a schematic diagram of a virtual view synthesized from a single viewpoint by the conventional method. It can be seen that when the virtual viewpoint is synthesized from a single view, a very large hole appears on the far side of the image. This is because the viewing angle of the original viewpoint differs from that of the virtual viewpoint: after warping, the original viewpoint located on one side of the virtual viewpoint cannot see the region on the other side of the virtual viewpoint, so the corresponding positions in the virtual view have no pixels and form a hole.
Fig. 6 is a schematic diagram of the virtual image generated by the method described herein: the images of the left and right viewpoints are projected and fused by dual-viewpoint projection, and adaptive median filtering is then applied to the result. The virtual view synthesized after dual-viewpoint projection effectively solves the hole problem in regions of unknown background.
Fig. 7(a)-(b) are the virtual images of the left and right reference viewpoints generated by the method of the present invention. As the figures show, the left and right reference viewpoints are rotated by a certain angle relative to each other, and the virtual image of any intermediate viewpoint can be produced. General three-dimensional reconstruction performs poorly when reconstructing complex solid models, especially bodies with complex curved surfaces, and texture mapping of the model is also needed to improve the realism. This system uses a cup, a body with complex curved surfaces, as the object to be reconstructed in the virtual image, and only the left and right viewpoint images of the object are processed. Since the images themselves contain rich scene information, the object image obtained by this system has a strong sense of reality compared with three-dimensional reconstruction, and complex modeling operations are avoided.
S5: from the coordinates (u2, v2) of the pixels mapped to the virtual viewpoint, the left virtual color image and the right virtual color image corresponding to the color images of the left and right reference images are obtained respectively, and the left virtual color image is fused with the right virtual color image;
The coordinates of the pixels of the fused image are identical to the pixel coordinates in the two images used for fusion, so the fused image is obtained simply by placing the pixel values at the corresponding coordinates. Combining the pixel position coordinates (u2, v2) and the corresponding pixel values Z2, the pixel value I(u, v) of the fused color image at pixel (u, v) is computed according to the following three cases, finally yielding the virtual color image:
(1) when the pixels (u2, v2) at the same position in the left virtual color image and the right virtual color image are both not holes, the pixel value I(u, v) of the virtual color image at pixel (u, v) equals the weighted value of the pixels at (u2, v2) in the two images, with the weights given above;
where t, tL, tR are the translation vectors of the camera extrinsic parameters at the virtual viewpoint, the left reference viewpoint, and the right reference viewpoint, respectively;
(2) when only one of the left virtual color image and the right virtual color image is not a hole at coordinates (u2, v2), the pixel value I(u, v) of the virtual color image at pixel (u, v) equals the corresponding pixel value of the non-hole image;
(3) when the left virtual color image and the right virtual color image are both holes at (u2, v2), the fused virtual color image is also a hole at pixel (u, v);
where a hole denotes a background region enclosed by a boundary of connected foreground pixels, and here refers to pixels whose pixel value is zero;
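A minimal sketch of this three-case fusion rule, assuming the left and right virtual color images are already warped onto the common virtual-view pixel grid and that the baseline-ratio weights given earlier are used:

```python
import numpy as np

def fuse_views(left, right, t, t_l, t_r):
    """Fuse left/right virtual color images per the three cases:
    both valid -> weighted blend; one valid -> copy it; both holes -> hole."""
    # Baseline-ratio weights (an assumption consistent with the vectors t, tL, tR):
    # the reference view nearer to the virtual viewpoint gets the larger weight.
    d_l = np.linalg.norm(t - t_l)
    d_r = np.linalg.norm(t - t_r)
    a_l = d_r / (d_l + d_r)
    a_r = d_l / (d_l + d_r)

    hole_l = np.all(left == 0, axis=-1)   # a hole is a zero-valued pixel
    hole_r = np.all(right == 0, axis=-1)

    fused = np.zeros_like(left, dtype=np.float64)
    both = ~hole_l & ~hole_r
    fused[both] = a_l * left[both] + a_r * right[both]     # case (1)
    fused[hole_r & ~hole_l] = left[hole_r & ~hole_l]       # case (2), only left valid
    fused[hole_l & ~hole_r] = right[hole_l & ~hole_r]      # case (2), only right valid
    # case (3): pixels that are holes in both images stay zero
    return fused.round().astype(left.dtype)
```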
S6: registering the color images of the reference images of the object and the virtual color image generated in S5 into the real-environment background with ARToolkit, completing the rendering of the virtual three-dimensional image of the object.
As shown in Fig. 8, the present invention also provides a system for rendering three-dimensional objects based on virtual view synthesis, which uses the described method and comprises:
a camera calibration and registration module, which calibrates the reference camera with Zhang Zhengyou's checkerboard calibration method and registers the reference camera;
a reference image acquisition module, which selects left and right reference viewpoints of the same scene and captures with the reference camera a reference image at each of the two viewpoints, each comprising a color image and a depth image corresponding to the color image;
a depth image preprocessing module, which preprocesses each depth image with a median filter to remove the invalid points in the depth image;
a reference image dual-viewpoint projection module, which maps the color image of each reference image at its reference viewpoint to the virtual viewpoint with the 3D image warping technique, obtaining the left virtual color image and the right virtual color image corresponding to the color images of the two viewpoints;
an image fusion module, which fuses the left virtual color image with the right virtual color image; combining the coordinates (u2, v2) of the corresponding pixels of the virtual viewpoint in the two-dimensional plane and the corresponding pixel values Z2, it computes the pixel value I(u, v) of the fused color image at pixel (u, v), finally obtaining the virtual color image;
a virtual view rendering and display module, which registers the color images of the reference images of the object and the virtual color image generated by the image fusion module into the real-environment background with ARToolkit, completing the rendering of the virtual three-dimensional image of the object.
The present invention improves on single-viewpoint mapping: the virtual view is obtained by dual-viewpoint projection, and the reference images of the left and right viewpoints are used to synthesize the image of any intermediate viewpoint, improving the synthesis quality of the image. Through the three-dimensional registration technology of ARToolkit, the original image data of the three-dimensional object and the generated image data are registered into the real environment, completing the rendering of the virtual object, and the image after virtual-real fusion is displayed in real time, enhancing the user's sense of use and sense of reality.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solution of the present invention, not to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that the technical solutions described in the foregoing embodiments can still be modified, or some or all of their technical features can be equivalently replaced, and such modifications or replacements do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (5)

1. A method for rendering three-dimensional objects based on virtual view synthesis, characterized by comprising the following steps:
S1: calibrating and registering the reference camera;
S2: selecting left and right reference viewpoints of the same scene, and capturing with the reference camera a reference image at each of the two viewpoints, each reference image comprising a color image and a depth image corresponding to the color image;
S3: preprocessing each depth image with a median filter to remove the invalid points in the depth image;
S4: mapping the color image of each reference image at its reference viewpoint to the virtual viewpoint with the 3D image warping technique, which specifically includes:
after the position of the virtual viewpoint is determined, using formula 1, from the coordinates (u1, v1) of each pixel of the depth image of the reference image in two-dimensional space, calculating the coordinates (u2, v2) of the corresponding pixel of the virtual viewpoint in the two-dimensional plane and the depth value Z2 of the three-dimensional point relative to the virtual camera;
wherein the virtual camera is the camera placed at the position of the virtual viewpoint; A1 is the intrinsic parameter matrix of the reference camera; Z1 is the depth value of the three-dimensional point relative to the reference camera; A2 is the intrinsic parameter matrix of the virtual camera; and R and T are the rotation matrix and translation vector of the virtual camera relative to the reference camera;
S5: from the coordinates (u2, v2) of the pixels mapped to the virtual viewpoint, obtaining the left virtual color image and the right virtual color image corresponding to the color images of the left and right reference images respectively, and fusing the left virtual color image with the right virtual color image;
combining the pixel position coordinates (u2, v2) and the corresponding pixel values Z2, computing the pixel value I(u, v) of the fused color image at pixel (u, v) according to the following three cases, finally obtaining the virtual color image:
(1) when the pixels (u2, v2) at the same position in the left virtual color image and the right virtual color image are both not holes, the pixel value I(u, v) of the virtual color image at pixel (u, v) equals the weighted value of the pixels at (u2, v2) in the two images;
wherein t, tL, tR are the translation vectors of the camera extrinsic parameters at the virtual viewpoint, the left reference viewpoint, and the right reference viewpoint, respectively;
(2) when only one of the left virtual color image and the right virtual color image is not a hole at coordinates (u2, v2), the pixel value I(u, v) of the virtual color image at pixel (u, v) equals the corresponding pixel value of the non-hole image;
(3) when the left virtual color image and the right virtual color image are both holes at (u2, v2), the fused virtual color image is also a hole at pixel (u, v);
wherein a hole denotes a pixel whose pixel value is zero;
S6: registering the color images of the reference images of the object and the virtual color image generated in S5 into the real-environment background with ARToolkit, completing the rendering of the virtual three-dimensional image of the object.
2. The method for rendering three-dimensional objects based on virtual view synthesis according to claim 1, characterized in that: in step S1 the reference camera is calibrated with Zhang Zhengyou's calibration method, in order to eliminate the distortion of the captured reference images.
3. The method for rendering three-dimensional objects based on virtual view synthesis according to claim 2, characterized in that: the reference camera is a Kinect V2 depth camera.
4. The method for rendering three-dimensional objects based on virtual view synthesis according to claim 1, characterized in that: step S4 specifically includes the following two parts:
(1) using the depth information of the reference image, i.e. the distance of the object point from the camera lens, projecting the coordinates (u1, v1) of each pixel of the depth image of the reference image in two-dimensional space according to formula 2 to obtain the corresponding point coordinates (X, Y, Z) in the world coordinate system of real space;
wherein A1 is the intrinsic parameter matrix of the reference camera; [R1 T1] is the extrinsic parameter matrix of the reference camera; Z1 is the depth value of the three-dimensional point relative to the reference camera, i.e. the z-component in the reference camera coordinate system;
(2) re-projecting the three-dimensional points in three-dimensional space onto the two-dimensional plane of the virtual viewpoint according to formula 3, obtaining the pixels (u2, v2) of the virtual viewpoint image and the corresponding depth pixel values Z2;
wherein A2 is the intrinsic parameter matrix of the virtual camera; [R2 T2] is the extrinsic parameter matrix of the virtual camera.
5. A system for rendering three-dimensional objects based on virtual view synthesis, using the method for rendering three-dimensional objects based on virtual view synthesis according to any one of claims 1-4, characterized by comprising:
a camera calibration and registration module, which calibrates the reference camera with Zhang Zhengyou's checkerboard calibration method and registers the reference camera;
a reference image acquisition module, which selects left and right reference viewpoints of the same scene and captures with the reference camera a reference image at each of the two viewpoints, each comprising a color image and a depth image corresponding to the color image;
a depth image preprocessing module, which preprocesses each depth image with a median filter to remove the invalid points in the depth image;
a reference image dual-viewpoint projection module, which maps the color image of each reference image at its reference viewpoint to the virtual viewpoint with the 3D image warping technique, obtaining the left virtual color image and the right virtual color image corresponding to the color images of the two viewpoints;
an image fusion module, which fuses the left virtual color image with the right virtual color image; combining the coordinates (u2, v2) of the corresponding pixels of the virtual viewpoint in the two-dimensional plane and the corresponding pixel values Z2, it computes the pixel value I(u, v) of the fused color image at pixel (u, v), finally obtaining the virtual color image;
a virtual view rendering and display module, which registers the color images of the reference images of the object and the virtual color image generated in S5 into the real-environment background with ARToolkit, completing the rendering of the virtual three-dimensional image of the object.
CN201910164246.5A 2019-03-05 2019-03-05 Method and system for rendering three-dimensional objects based on virtual view synthesis Pending CN109769109A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910164246.5A CN109769109A (en) 2019-03-05 2019-03-05 Method and system for rendering three-dimensional objects based on virtual view synthesis


Publications (1)

Publication Number Publication Date
CN109769109A true CN109769109A (en) 2019-05-17

Family

ID=66457734

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910164246.5A Pending CN109769109A (en) Method and system for rendering three-dimensional objects based on virtual view synthesis

Country Status (1)

Country Link
CN (1) CN109769109A (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102568026A (en) * 2011-12-12 2012-07-11 浙江大学 Three-dimensional enhancing realizing method for multi-viewpoint free stereo display
CN102592275A (en) * 2011-12-16 2012-07-18 天津大学 Virtual viewpoint rendering method
CN104756489A (en) * 2013-07-29 2015-07-01 北京大学深圳研究生院 Virtual viewpoint synthesis method and system
US20160150208A1 (en) * 2013-07-29 2016-05-26 Peking University Shenzhen Graduate School Virtual viewpoint synthesis method and system
CN107818580A (en) * 2016-09-12 2018-03-20 达索系统公司 3D reconstructions are carried out to real object according to depth map
CN106791774A (en) * 2017-01-17 2017-05-31 湖南优象科技有限公司 Virtual visual point image generating method based on depth map

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
陈思利 (CHEN Sili), "A Virtual Viewpoint Synthesis Algorithm Based on DIBR" (《一种基于DIBR的虚拟视点合成算法》), Journal of Chengdu Electromechanical College (《成都电子机械高等专科学校学报》) *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110246146A (en) * 2019-04-29 2019-09-17 北京邮电大学 Full parallax light field content generating method and device based on multiple deep image rendering
CN110322539A (en) * 2019-06-04 2019-10-11 贝壳技术有限公司 Threedimensional model cutting process display methods, device and the electronic equipment of three-dimension object
CN112116530A (en) * 2019-06-19 2020-12-22 杭州海康威视数字技术股份有限公司 Fisheye image distortion correction method and device and virtual display system
CN112116530B (en) * 2019-06-19 2023-08-18 杭州海康威视数字技术股份有限公司 Fisheye image distortion correction method, device and virtual display system
CN111540022A (en) * 2020-05-14 2020-08-14 深圳市艾为智能有限公司 Image uniformization method based on virtual camera
CN111540022B (en) * 2020-05-14 2024-04-19 深圳市艾为智能有限公司 Image unification method based on virtual camera
CN111988596A (en) * 2020-08-23 2020-11-24 咪咕视讯科技有限公司 Virtual viewpoint synthesis method and device, electronic equipment and readable storage medium
CN111988596B (en) * 2020-08-23 2022-07-26 咪咕视讯科技有限公司 Virtual viewpoint synthesis method and device, electronic equipment and readable storage medium
WO2022042413A1 (en) * 2020-08-24 2022-03-03 阿里巴巴集团控股有限公司 Image reconstruction method and apparatus, and computer readable storage medium, and processor
CN113450274A (en) * 2021-06-23 2021-09-28 山东大学 Self-adaptive viewpoint fusion method and system based on deep learning

Similar Documents

Publication Publication Date Title
CN109769109A (en) Method and system for rendering three-dimensional objects based on virtual view synthesis
CN110874864B (en) Method, device, electronic equipment and system for obtaining three-dimensional model of object
US20200219301A1 (en) Three dimensional acquisition and rendering
CN111062873B (en) Parallax image splicing and visualization method based on multiple pairs of binocular cameras
CN106101689B (en) The method that using mobile phone monocular cam virtual reality glasses are carried out with augmented reality
US9407904B2 (en) Method for creating 3D virtual reality from 2D images
US20150358612A1 (en) System and method for real-time depth modification of stereo images of a virtual reality environment
CN105931240A (en) Three-dimensional depth sensing device and method
GB2464453A (en) Determining Surface Normals from Three Images
CN104077808A (en) Real-time three-dimensional face modeling method used for computer graph and image processing and based on depth information
CN106296825B (en) A kind of bionic three-dimensional information generating system and method
CN104933704B (en) A kind of 3 D stereo scan method and system
CN106170086B (en) Method and device thereof, the system of drawing three-dimensional image
CN111047709A (en) Binocular vision naked eye 3D image generation method
CN109510975A (en) A kind of extracting method of video image, equipment and system
CA2540538C (en) Stereoscopic imaging
EP4073756A1 (en) A method for measuring the topography of an environment
CN114996814A (en) Furniture design system based on deep learning and three-dimensional reconstruction
CN106169179A (en) Image denoising method and image noise reduction apparatus
CN113989434A (en) Human body three-dimensional reconstruction method and device
CN105427302B (en) A kind of three-dimensional acquisition and reconstructing system based on the sparse camera collection array of movement
Knorr et al. From 2D-to stereo-to multi-view video
CN112200852B (en) Stereo matching method and system for space-time hybrid modulation
CN108540790A (en) It is a kind of for the three-dimensional image acquisition method of mobile terminal, device and mobile terminal
CN109089100B (en) Method for synthesizing binocular stereo video

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190517

RJ01 Rejection of invention patent application after publication