CN106993179A - Method for converting a 3D model into a stereoscopic dual-viewpoint view - Google Patents

Method for converting a 3D model into a stereoscopic dual-viewpoint view

Info

Publication number
CN106993179A
Authority
CN
China
Prior art keywords
eye
matrix
mvp
eyes
models
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710173870.2A
Other languages
Chinese (zh)
Inventor
Ma Huiwen
Yan Chenggang
Zhang Xin
Li Yafei
Li Ning
Chen Zelun
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University
Publication of CN106993179A
Current legal status: Pending


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/275Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a method for converting a 3D model into a stereoscopic dual-viewpoint view. The method is: 1) select the 3D model to be converted; calculate the frustum shift and the left- and right-eye frustum parameters of the 3D model from the input parameters, and build the projection and view matrices of the left eye and of the right eye; 2) obtain the left- and right-eye MVP matrices from the projection, view, and model matrices of each eye and pass them to the shaders; 3) left-multiply the vertex coordinates of the 3D model by the left-eye MVP and the right-eye MVP respectively to obtain new vertex coordinates; once every vertex of the 3D model has been transformed, the left- and right-eye images of the 3D model are obtained; 4) map the resulting left- and right-eye images onto the screen and stitch them together to obtain the dual-viewpoint disparity map of the 3D model. The present invention computes quickly, approximates the viewing habits of the human eye, and simulates binocular convergence.

Description

Method for converting a 3D model into a stereoscopic dual-viewpoint view
Technical field
The present invention relates to 3D display technology, and in particular to an OpenGL-based method for converting a 3D model into a stereoscopic dual-viewpoint view.
Background technology
Humans live in a three-dimensional world and perceive it with stereoscopic vision. With the rapid development of computer technology, the ways in which computers describe the real world have grown ever richer: from sound to images to video, the world a computer can represent has become increasingly complex. Yet most current display devices still offer only 2D display, ignoring depth information. In the digital, modern era, 2D display can no longer satisfy human needs, and 3D models have entered people's lives, studies, and work as a new media format, quickly accepted by the general public. They are already widely applied in film and entertainment, architecture, mechanical manufacturing, medicine, the military, e-commerce, virtual reality, archaeology, and many other fields.
In the field of computer display, stereoscopic display has become the direction of future development. Commercially, many hardware options are now available that let us obtain stereoscopic 3D visual information. 3D display is the most complex step of the whole 3D pipeline, because the playback platform is a flat display device on which the left- and right-eye material must appear together. This raises the question of how to separate the left- and right-eye material and deliver each accurately to the corresponding eye of the viewer. If the separation goes wrong, the 3D effect does not appear and the viewer sees confusing, ghosted images. We adopt a converging binocular projection observation model and map the 3D model onto the screen according to it. This model better matches the viewing habits of the human eye, making what one sees more realistic and natural.
The fundamental reason the human eye obtains stereoscopic vision is parallax: at the same moment the two eyes see slightly different images, and this parallax yields the perception of depth.
The content of the invention
Three-dimensional stereoscopic display is a key technology of virtual reality and an indispensable basic condition of any virtual reality system. Correctly forming depth perception is the key to stereoscopic display, and depth perception is formed from a binocular disparity map. The 3D display effect of the stereoscopic displays on the market today is directly determined by the dual-viewpoint view, and the disparity map contains the depth information of the scene. Starting from the principles of stereoscopic vision, the present invention therefore studies how to use OpenGL to extract multi-viewpoint images from a virtual 3D model on a computer, converting the model into a stereoscopic dual-viewpoint disparity map and thereby solving the 3D display problem.
The 3D-model-to-dual-viewpoint-view conversion of the present invention comprises reading the 3D model and generating the dual-viewpoint disparity map. Reading the 3D model includes reading the vertex information and drawing the vertices; generating the dual-viewpoint disparity map includes the per-eye transformation and drawing the dual-viewpoint view.
The technical solution adopted by the present invention to solve this problem comprises the following steps:
Step 1: Select the converging observation model.
Observation models are mainly of two kinds, converging and parallel. The present invention selects the converging observation model, in which top, bottom, Left, and Right are the distances from the upper, lower, left, and right edges of the near clipping plane of the frustum shared by the two eyes to its center, Near is the distance from the near clipping plane to the viewpoint, and Far is the distance from the far clipping plane to the viewpoint.
Step 2: Input the parameters and calculate the frustum shift of the converging observation model.
The input parameters are: the interocular distance IOD; the vertical field-of-view angle fov of the observed image; the vertical distance d(eye-nearZ) from the eye to the near clipping plane; the vertical distance d(eye-screen) from the eye to the screen; the vertical distance d(eye-farZ) from the eye to the far clipping plane; and the image aspect ratio ratio.
The frustum shift of the converging observation model follows from similar triangles:
Frustum_shift = (IOD/2) * d(eye-nearZ) / d(eye-screen)  (1)
Step 3: Calculate the left- and right-eye frustum parameters.
Left-eye frustum parameters:
top = tan(fov/2) * d(eye-nearZ)  (2)
Left_left_eye = -ratio * top - Frustum_shift  (3)
Right_left_eye = ratio * top - Frustum_shift  (4)
bottom = -top  (5)
Right-eye frustum parameters:
top = tan(fov/2) * d(eye-nearZ)  (6)
Right_right_eye = ratio * top + Frustum_shift  (7)
Left_right_eye = -ratio * top + Frustum_shift  (8)
bottom = -top  (9)
where fov is the vertical field-of-view angle of the observed image and ratio is the image aspect ratio;
top and bottom are the distances from the upper and lower edges of the near clipping plane of the shared frustum to its center;
Left_left_eye and Left_right_eye are the distances from the left edge of the near clipping plane of the left-eye and right-eye frusta, respectively, to the center;
Right_left_eye and Right_right_eye are the corresponding distances from the right edge to the center.
Step 4: Obtain the projection and view matrices of the two eyes.
Left-eye projection matrix Ml_proj:
Ml_proj = frustum(Left_left_eye, Right_left_eye, bottom, top, d(eye-nearZ), d(eye-farZ))  (10)
Right-eye projection matrix Mr_proj:
Mr_proj = frustum(Left_right_eye, Right_right_eye, bottom, top, d(eye-nearZ), d(eye-farZ))  (11)
Left-eye view matrix:
Ml_view = glm::lookAt(LeftCameraPosition, CameraTarget, upVector)  (12)
Right-eye view matrix:
Mr_view = glm::lookAt(RightCameraPosition, CameraTarget, upVector)  (13)
where LeftCameraPosition is the position of the left camera in world coordinates, RightCameraPosition is the position of the right camera in world coordinates, CameraTarget is the target position in world coordinates, and upVector takes the positive z axis as the up direction.
Step 5: Multiply each eye's projection, view, and model matrices to obtain two 4×4 MVP matrices, and pass each MVP matrix to the shader:
MVP = Projection * View * Model  (14)
where Projection is the projection matrix, View is the view matrix, and Model is the model matrix.
Because the two eyes have different projection and view matrices, the MVP matrices obtained from the triple product also differ. The left-eye MVP is therefore computed from the left-eye projection, view, and model matrices and passed to the shader, and the right-eye MVP is computed from the right-eye projection, view, and model matrices and passed to the shader.
Step 6: Load the 3D model file and store the vertex information in an array.
The 3D model file is loaded with a recursive algorithm, and the vertex information read out is stored in array form. Once all vertices of the model have been loaded, the array holding the vertex information is passed to the shader.
Step 7: Left-multiply each vertex coordinate by the corresponding eye's MVP matrix to obtain the new vertex coordinate.
Let the original vertex coordinate be P1 = (X1, Y1, Z1, W) and the transformed vertex coordinate be P2 = (X2, Y2, Z2, W). Then
P2 = MVP * P1  (15)
Step 8: After every vertex of the 3D model has been transformed, the new observed images, i.e., the left- and right-eye images, are obtained. The two images are mapped to the left and right halves of the screen respectively and stitched together to yield the dual-viewpoint view.
Compared with the prior art, the beneficial effects of the present invention are:
Unlike methods that capture information with a binocular camera, the present invention builds the dual-viewpoint view by reading a 3D model file, so it can also generate dual-viewpoint views of virtual scenes.
The method is more widely applicable than conventional approaches: feeding the converted dual-viewpoint view to an autostereoscopic (naked-eye) 3D screen yields a naked-eye 3D effect, and feeding it to a VR device yields a VR effect.
The converging projection model matches the viewing habits of the human eye and simulates binocular convergence, so the rendered effect is more lifelike. Moreover, the interocular distance IOD and the eye-to-screen distance d(eye-screen), both of which affect the viewing experience, are input parameters, which makes tuning the final effect more systematic and convenient.
Brief description of the drawings
Fig. 1 is a schematic diagram of the converging projection;
Fig. 2 is a schematic diagram of the converging projection frustum;
Fig. 3 is a flowchart of the conversion algorithm.
Embodiment
The present invention is described in detail below with reference to an embodiment.
As shown in Figs. 1-3, a method for converting a 3D model into a stereoscopic dual-viewpoint view comprises the following steps:
Step 1: Select the converging observation model.
Observation models are mainly of two kinds, converging and parallel. The present invention selects the converging observation model.
Fig. 1 is a schematic diagram of the converging projection and Fig. 2 of its frustum, where top, bottom, Left, and Right are the distances from the upper, lower, left, and right edges of the near clipping plane of the frustum shared by the two eyes to its center, Near is the distance from the near clipping plane to the viewpoint, and Far is the distance from the far clipping plane to the viewpoint.
Step 2: Calculate the frustum shift of the converging observation model from similar triangles (a code sketch covering Steps 2 and 3 is given after Step 3 below):
Frustum_shift = (IOD/2) * d(eye-nearZ) / d(eye-screen)  (1)
where IOD is the interocular distance, d(eye-nearZ) is the vertical distance from the eye to the near clipping plane, d(eye-screen) is the vertical distance from the eye to the screen, and d(eye-farZ) is the vertical distance from the eye to the far clipping plane. Taking IOD = 7, d(eye-nearZ) = 1, d(eye-screen) = 5, and d(eye-farZ) = 100 gives Frustum_shift = 0.7.
Step 3: Calculate the left- and right-eye frustum parameters from elementary geometry.
Left eye:
top = tan(fov/2) * d(eye-nearZ)  (2)
Left_left_eye = -ratio * top - Frustum_shift  (3)
Right_left_eye = ratio * top - Frustum_shift  (4)
bottom = -top  (5)
Right eye:
top = tan(fov/2) * d(eye-nearZ)  (6)
Right_right_eye = ratio * top + Frustum_shift  (7)
Left_right_eye = -ratio * top + Frustum_shift  (8)
bottom = -top  (9)
where fov is the vertical field-of-view angle of the observed image and ratio is the image aspect ratio; top and bottom are the distances from the upper and lower edges of the near clipping plane of the shared frustum to its center; Left_left_eye and Left_right_eye are the distances from the left edge of the near clipping plane of each eye's frustum to the center; Right_left_eye and Right_right_eye are the corresponding distances from the right edge to the center.
Taking fov = 120° and ratio = 1080/960 gives:
Left eye: Right_left_eye = 1.2485, Left_left_eye = -2.6485, top = 1.732, bottom = -1.732.
Right eye: Right_right_eye = 2.6485, Left_right_eye = -1.2485, top = 1.732, bottom = -1.732.
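A minimal C++ sketch of Steps 2 and 3, reproducing the example values above; the variable names are illustrative and not taken from the patent:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const double pi       = std::acos(-1.0);
    const double IOD      = 7.0;                  // interocular distance
    const double d_near   = 1.0;                  // d(eye-nearZ)
    const double d_screen = 5.0;                  // d(eye-screen)
    const double fov      = 120.0 * pi / 180.0;   // vertical field of view
    const double ratio    = 1080.0 / 960.0;       // image aspect ratio

    // Equation (1): frustum shift from similar triangles.
    const double shift = (IOD / 2.0) * d_near / d_screen;    // 0.7

    // Equations (2)-(9): near-plane bounds of each eye's frustum.
    const double top     = std::tan(fov / 2.0) * d_near;     // 1.732
    const double bottom  = -top;
    const double left_L  = -ratio * top - shift;             // -2.6485
    const double right_L =  ratio * top - shift;             //  1.2485
    const double left_R  = -ratio * top + shift;             // -1.2485
    const double right_R =  ratio * top + shift;             //  2.6485

    std::printf("shift=%.4f top=%.4f bottom=%.4f\n", shift, top, bottom);
    std::printf("left eye:  [%.4f, %.4f]\n", left_L, right_L);
    std::printf("right eye: [%.4f, %.4f]\n", left_R, right_R);
    return 0;
}
```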
Step 4: Obtain the projection and view matrices of the two eyes, as follows.
Writing f_near and f_far for d(eye-nearZ) and d(eye-farZ), t and b for top and bottom, and r_L, r_R, l_L, l_R for Right_left_eye, Right_right_eye, Left_left_eye, Left_right_eye respectively, the left-eye projection matrix Ml_proj is:
Ml_proj = frustum(Left_left_eye, Right_left_eye, bottom, top, d(eye-nearZ), d(eye-farZ))  (10)
The right-eye projection matrix Mr_proj is:
Mr_proj = frustum(Left_right_eye, Right_right_eye, bottom, top, d(eye-nearZ), d(eye-farZ))  (11)
The left-eye view matrix Ml_view is:
Ml_view = glm::lookAt(LeftCameraPosition, CameraTarget, upVector)  (12)
The right-eye view matrix Mr_view is:
Mr_view = glm::lookAt(RightCameraPosition, CameraTarget, upVector)  (13)
where LeftCameraPosition is the position of the left camera in world coordinates, RightCameraPosition is the position of the right camera in world coordinates, CameraTarget is the target position in world coordinates, and upVector takes the positive z axis as the up direction.
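A sketch of Step 4 using GLM. The frustum bounds are the example values from Step 3; the camera placement along the y axis is a hypothetical choice made so that the +z up vector is not parallel to the viewing direction:

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

int main() {
    const float IOD = 7.0f, d_near = 1.0f, d_screen = 5.0f, d_far = 100.0f;
    const float top = 1.732f, bottom = -top;
    const float left_L = -2.6485f, right_L = 1.2485f;   // left-eye bounds
    const float left_R = -1.2485f, right_R = 2.6485f;   // right-eye bounds

    // Equations (10)-(11): off-center (asymmetric) projection per eye.
    glm::mat4 Mlproj = glm::frustum(left_L, right_L, bottom, top, d_near, d_far);
    glm::mat4 Mrproj = glm::frustum(left_R, right_R, bottom, top, d_near, d_far);

    // Equations (12)-(13): both cameras converge on the same target and are
    // separated by IOD along the x axis (hypothetical placement).
    glm::vec3 cameraTarget(0.0f);
    glm::vec3 upVector(0.0f, 0.0f, 1.0f);                        // +z is "up"
    glm::vec3 leftCameraPosition (-IOD / 2.0f, -d_screen, 0.0f);
    glm::vec3 rightCameraPosition( IOD / 2.0f, -d_screen, 0.0f);
    glm::mat4 Mlview = glm::lookAt(leftCameraPosition,  cameraTarget, upVector);
    glm::mat4 Mrview = glm::lookAt(rightCameraPosition, cameraTarget, upVector);

    (void)Mlproj; (void)Mrproj; (void)Mlview; (void)Mrview;
    return 0;
}
```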
Step 5: Multiply each eye's projection, view, and model matrices to obtain two 4×4 MVP matrices, and pass each MVP matrix to the shader:
MVP = Projection * View * Model  (14)
where Projection is the projection matrix, View is the view matrix, and Model is the model matrix.
Because the two eyes have different projection and view matrices, the MVP matrices obtained from the triple product also differ. The left-eye MVP is therefore computed from the left-eye projection, view, and model matrices and passed to the shader, and the right-eye MVP from the right-eye matrices, likewise passed to the shader. The model matrix changes the position of the 3D model in the world coordinate system, e.g., by rotating or translating it. Since the present invention performs no rotation or translation, the model matrix is taken as the 4×4 identity matrix, i.e., the 3D model is by default loaded at the origin of the world coordinate system.
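A sketch of Step 5; the program handle and the uniform name "MVP" are assumptions about the host application:

```cpp
#include <GL/glew.h>   // any loader providing the GL 3+ API
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

// Form one eye's MVP and upload it to the shader program's "MVP" uniform.
// Assumes the program is already bound with glUseProgram(program).
void uploadMVP(GLuint program, const glm::mat4& proj, const glm::mat4& view) {
    glm::mat4 model(1.0f);                 // 4x4 identity: model stays at the origin
    glm::mat4 mvp = proj * view * model;   // equation (14): MVP = P * V * M
    GLint loc = glGetUniformLocation(program, "MVP");
    glUniformMatrix4fv(loc, 1, GL_FALSE, glm::value_ptr(mvp));
}

// Called once per eye before drawing:
//   uploadMVP(prog, Mlproj, Mlview);   // left eye
//   uploadMVP(prog, Mrproj, Mrview);   // right eye
```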
Step 6: Load the 3D model file and store the vertex information in an array.
The 3D model file is loaded with a recursive algorithm, and the vertex information read out is stored in array form. Once all vertices of the model have been loaded, the array holding the vertex information is passed to the shader.
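The patent does not name a loader; one common way to realize the recursive loading is to walk the node tree of a model imported with the Assimp library, as in this sketch:

```cpp
#include <vector>
#include <assimp/Importer.hpp>
#include <assimp/scene.h>
#include <assimp/postprocess.h>

// Recursively walk the node tree and collect vertex positions into one array.
static void collectVertices(const aiNode* node, const aiScene* scene,
                            std::vector<float>& out) {
    for (unsigned i = 0; i < node->mNumMeshes; ++i) {
        const aiMesh* mesh = scene->mMeshes[node->mMeshes[i]];
        for (unsigned v = 0; v < mesh->mNumVertices; ++v) {
            out.push_back(mesh->mVertices[v].x);
            out.push_back(mesh->mVertices[v].y);
            out.push_back(mesh->mVertices[v].z);
        }
    }
    for (unsigned c = 0; c < node->mNumChildren; ++c)   // recurse into children
        collectVertices(node->mChildren[c], scene, out);
}

std::vector<float> loadModel(const char* path) {
    Assimp::Importer importer;
    const aiScene* scene = importer.ReadFile(path, aiProcess_Triangulate);
    std::vector<float> vertices;
    if (scene && scene->mRootNode)
        collectVertices(scene->mRootNode, scene, vertices);
    return vertices;   // uploaded to the shader via a vertex buffer object
}
```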
Step 7: Left-multiply each vertex coordinate by the corresponding eye's MVP matrix to obtain the new vertex coordinate.
Let the original vertex coordinate be P1 = (X1, Y1, Z1, W) and the transformed vertex coordinate be P2 = (X2, Y2, Z2, W). Then
P2 = MVP * P1  (15)
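In an OpenGL implementation this left-multiplication is naturally done per vertex in the vertex shader. A sketch, with the shader source held in a C++ string (the attribute and uniform names are assumptions):

```cpp
// GLSL vertex shader implementing equation (15): P2 = MVP * P1, with W = 1.
const char* vertexShaderSrc = R"(
#version 330 core
layout(location = 0) in vec3 position;   // P1 = (X1, Y1, Z1), W taken as 1
uniform mat4 MVP;                        // per-eye MVP from Step 5
void main() {
    gl_Position = MVP * vec4(position, 1.0);   // P2 = MVP * P1
}
)";
```

The CPU-side equivalent with GLM would be glm::vec4 P2 = mvp * glm::vec4(X1, Y1, Z1, 1.0f);.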
Step 8: After every vertex of the 3D model has been transformed, the new observed images, i.e., the left- and right-eye images, are obtained. The two images are mapped to the left and right halves of the screen respectively and stitched together to yield the dual-viewpoint view.
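One way to realize the side-by-side mapping is to restrict the viewport to each half of the screen and draw the model once per eye. A sketch reusing the names from the earlier fragments; drawModel stands for the application's draw call:

```cpp
#include <GL/glew.h>
#include <glm/glm.hpp>

extern glm::mat4 Mlproj, Mlview, Mrproj, Mrview;   // from Step 4
void uploadMVP(GLuint program, const glm::mat4& proj, const glm::mat4& view);
void drawModel();                                   // issues the draw call

void renderStereoFrame(GLuint program, int width, int height) {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glViewport(0, 0, width / 2, height);            // left half: left-eye image
    uploadMVP(program, Mlproj, Mlview);
    drawModel();

    glViewport(width / 2, 0, width / 2, height);    // right half: right-eye image
    uploadMVP(program, Mrproj, Mrview);
    drawModel();
}
```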
Passing the precomputed MVP matrices into the shader greatly accelerates execution, improving the real-time performance and responsiveness of the program. The viewing effect is also easy to adjust: the observation parameters best suited to each external device can be chosen accordingly.

Claims (6)

1. A method for converting a 3D model into a stereoscopic dual-viewpoint view, characterized by comprising the following steps:
Step 1: Select the converging observation model, in which top, bottom, Left, and Right are the distances from the upper, lower, left, and right edges of the near clipping plane of the frustum shared by the two eyes to its center, Near is the distance from the near clipping plane to the viewpoint, and Far is the distance from the far clipping plane to the viewpoint;
Step 2: Calculate the frustum shift of the converging observation model from similar triangles;
Step 3: Calculate the left- and right-eye frustum parameters from elementary geometry;
Step 4: Calculate the projection and view matrices of the two eyes;
Step 5: Multiply each eye's projection, view, and model matrices to obtain two 4×4 MVP matrices, and pass each MVP matrix to the shader;
Step 6: Load the 3D model file and store the vertex information in an array: the 3D model file is loaded with a recursive algorithm and the vertex information read out is stored in array form; once all vertices of the model have been loaded, the array holding the vertex information is passed to the shader;
Step 7: Left-multiply each vertex coordinate by the corresponding eye's MVP matrix to obtain the new vertex coordinate;
Step 8: After every vertex of the 3D model has been transformed, the new observed images, i.e., the left- and right-eye images, are obtained; the two images are mapped to the left and right halves of the screen respectively and stitched together to yield the dual-viewpoint view.
2. The method for converting a 3D model into a stereoscopic dual-viewpoint view according to claim 1, characterized in that the frustum shift of the converging observation model in Step 2 is calculated from similar triangles as follows:
Frustum_shift = (IOD/2) * d(eye-nearZ) / d(eye-screen)  (1)
where IOD is the interocular distance, d(eye-nearZ) is the vertical distance from the eye to the near clipping plane, d(eye-screen) is the vertical distance from the eye to the screen, and d(eye-farZ) is the vertical distance from the eye to the far clipping plane.
3. The method for converting a 3D model into a stereoscopic dual-viewpoint view according to claim 2, characterized in that the left- and right-eye frustum parameters in Step 3 are calculated from elementary geometry as follows:
Left eye:
top = tan(fov/2) * d(eye-nearZ)  (2)
Left_left_eye = -ratio * top - Frustum_shift  (3)
Right_left_eye = ratio * top - Frustum_shift  (4)
bottom = -top  (5)
Right eye:
top = tan(fov/2) * d(eye-nearZ)  (6)
Right_right_eye = ratio * top + Frustum_shift  (7)
Left_right_eye = -ratio * top + Frustum_shift  (8)
bottom = -top  (9)
where fov is the vertical field-of-view angle of the observed image and ratio is the image aspect ratio; top and bottom are the distances from the upper and lower edges of the near clipping plane of the shared frustum to its center; Left_left_eye and Left_right_eye are the distances from the left edge of the near clipping plane of each eye's frustum to the center; Right_left_eye and Right_right_eye are the corresponding distances from the right edge to the center.
4. The method for converting a 3D model into a stereoscopic dual-viewpoint view according to claim 3, characterized in that the projection and view matrices of the two eyes in Step 4 are calculated as follows.
First, the following two matrices are formed, where f_near and f_far denote d(eye-nearZ) and d(eye-farZ), t and b denote top and bottom, and r_L, r_R, l_L, l_R denote Right_left_eye, Right_right_eye, Left_left_eye, Left_right_eye respectively:

$$ M_{l,proj} = \begin{bmatrix} \frac{2 f_{near}}{r_L - l_L} & 0 & \frac{r_L + l_L}{r_L - l_L} & 0 \\ 0 & \frac{2 f_{near}}{t - b} & \frac{t + b}{t - b} & 0 \\ 0 & 0 & \frac{f_{near} + f_{far}}{f_{near} - f_{far}} & \frac{2 f_{near} f_{far}}{f_{near} - f_{far}} \\ 0 & 0 & -1 & 0 \end{bmatrix} = \begin{bmatrix} \frac{2}{3.897} & 0 & \frac{-1.4}{3.897} & 0 \\ 0 & \frac{2}{3.464} & 0 & 0 \\ 0 & 0 & -\frac{101}{99} & -\frac{200}{99} \\ 0 & 0 & -1 & 0 \end{bmatrix} $$

$$ M_{r,proj} = \begin{bmatrix} \frac{2 f_{near}}{r_R - l_R} & 0 & \frac{r_R + l_R}{r_R - l_R} & 0 \\ 0 & \frac{2 f_{near}}{t - b} & \frac{t + b}{t - b} & 0 \\ 0 & 0 & \frac{f_{near} + f_{far}}{f_{near} - f_{far}} & \frac{2 f_{near} f_{far}}{f_{near} - f_{far}} \\ 0 & 0 & -1 & 0 \end{bmatrix} = \begin{bmatrix} \frac{2}{3.897} & 0 & \frac{1.4}{3.897} & 0 \\ 0 & \frac{2}{3.464} & 0 & 0 \\ 0 & 0 & -\frac{101}{99} & -\frac{200}{99} \\ 0 & 0 & -1 & 0 \end{bmatrix} $$

Then:
Left-eye projection matrix Ml_proj:
Ml_proj = frustum(Left_left_eye, Right_left_eye, bottom, top, d(eye-nearZ), d(eye-farZ))  (10)
Right-eye projection matrix Mr_proj:
Mr_proj = frustum(Left_right_eye, Right_right_eye, bottom, top, d(eye-nearZ), d(eye-farZ))  (11)
Left-eye view matrix Ml_view:
Ml_view = glm::lookAt(LeftCameraPosition, CameraTarget, upVector)  (12)
Right-eye view matrix Mr_view:
Mr_view = glm::lookAt(RightCameraPosition, CameraTarget, upVector)  (13)
where LeftCameraPosition is the position of the left camera in world coordinates, RightCameraPosition is the position of the right camera in world coordinates, CameraTarget is the target position in world coordinates, and upVector takes the positive z axis as the up direction.
5. The method for converting a 3D model into a stereoscopic dual-viewpoint view according to claim 4, characterized in that the matrix MVP is as follows:
MVP = Projection * View * Model  (14)
where Projection is the projection matrix, View is the view matrix, and Model is the model matrix;
because the two eyes have different projection and view matrices, the MVP matrices obtained from the triple product also differ; the left-eye MVP is therefore computed from the left-eye projection, view, and model matrices and passed to the shader, and the right-eye MVP is computed from the right-eye projection, view, and model matrices and passed to the shader.
6. The method for converting a 3D model into a stereoscopic dual-viewpoint view according to claim 5, characterized in that left-multiplying each vertex coordinate by the corresponding eye's MVP matrix in Step 7 to obtain the new vertex coordinate is calculated as follows:
Let the original vertex coordinate be P1 = (X1, Y1, Z1, W) and the transformed vertex coordinate be P2 = (X2, Y2, Z2, W); then
P2 = MVP * P1  (15).
CN201710173870.2A 2017-02-24 2017-03-22 Method for converting a 3D model into a stereoscopic dual-viewpoint view Pending CN106993179A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2017101030652 2017-02-24
CN201710103065 2017-02-24

Publications (1)

Publication Number Publication Date
CN106993179A true CN106993179A (en) 2017-07-28

Family

ID=59411718

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710173870.2A Pending CN106993179A (en) Method for converting a 3D model into a stereoscopic dual-viewpoint view

Country Status (1)

Country Link
CN (1) CN106993179A (en)



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040212612A1 (en) * 2003-04-28 2004-10-28 Michael Epstein Method and apparatus for converting two-dimensional images into three-dimensional images
CN101369348A (en) * 2008-11-07 2009-02-18 上海大学 Novel sight point reconstruction method for multi-sight point collection/display system of convergence type camera
CN103493102A (en) * 2011-03-14 2014-01-01 高通股份有限公司 Stereoscopic conversion for shader based graphics content
CN105913474A (en) * 2016-04-05 2016-08-31 清华大学深圳研究生院 Binocular three-dimensional reconstruction device and three-dimensional reconstruction method thereof, and Android application

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SHI Shaofei, "Research on Key Technologies of Converting 2D Video to 3D Video for Embedded Applications", China Master's Theses Full-text Database (Information Science and Technology) *
NIU Shengxiao, "Research on Techniques for Converting Flat 3D Games and 2D Video to Stereoscopic 3D", China Master's Theses Full-text Database (Information Science and Technology) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110324601A (en) * 2018-03-27 2019-10-11 京东方科技集团股份有限公司 Rendering method, computer product and display device
CN109640070A (en) * 2018-12-29 2019-04-16 上海曼恒数字技术股份有限公司 A kind of stereo display method, device, equipment and storage medium
CN110264393A (en) * 2019-05-15 2019-09-20 联想(上海)信息技术有限公司 A kind of information processing method, terminal and storage medium

Similar Documents

Publication Publication Date Title
JP4228646B2 (en) Stereoscopic image generation method and stereoscopic image generation apparatus
CN101895779B (en) Stereo display method and system
EP3712840A1 (en) Method and system for generating an image of a subject in a scene
CN101587386B (en) Method, device and system for processing cursor
AU2018249563B2 (en) System, method and software for producing virtual three dimensional images that appear to project forward of or above an electronic display
TWI584222B (en) Stereoscopic image processor, stereoscopic image interaction system, and stereoscopic image displaying method
US20130038600A1 (en) System and Method of Processing 3D Stereoscopic Image
WO2012036120A1 (en) Stereoscopic image generation device, stereoscopic image display device, stereoscopic image adjustment method, program for executing stereoscopic image adjustment method on computer, and recording medium on which program is recorded
US20120306860A1 (en) Image generation system, image generation method, and information storage medium
US9754379B2 (en) Method and system for determining parameters of an off-axis virtual camera
CN102157012B (en) Method for three-dimensionally rendering scene, graphic image treatment device, equipment and system
CN106993179A Method for converting a 3D model into a stereoscopic dual-viewpoint view
JP2012174238A5 (en)
WO2017062730A1 (en) Presentation of a virtual reality scene from a series of images
JPH0744701B2 (en) Three-dimensional superimpose device
JP2012010047A5 (en)
Baker Generating images for a time-multiplexed stereoscopic computer graphics system
CN104299258A (en) Solid figure processing method and equipment
KR100893381B1 (en) Methods generating real-time stereo images
KR101341597B1 (en) Method of generating depth map of 2-dimensional image based on camera location and angle and method of generating binocular stereoscopic image and multiview image using the same
KR100556830B1 (en) 3D graphical model rendering apparatus and method for displaying stereoscopic image
Froner et al. Implementing an Improved Stereoscopic Camera Model.
JP2003085593A (en) Interactive image operating apparatus and displaying method for image content
CN102014291B (en) Method for generating left-eye and right-eye picture pair at horizontal view angle of camera larger than 180 degrees
Li et al. Real time stereo rendering for augmented reality on 3DTV system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information
Inventors after change: Yan Chenggang; Ma Huiwen; Zhang Xin; Li Yafei; Li Ning; Chen Zelun
Inventors before change: Ma Huiwen; Yan Chenggang; Zhang Xin; Li Yafei; Li Ning; Chen Zelun
RJ01 Rejection of invention patent application after publication (application publication date: 20170728)