A method for converting a 3D model into a stereoscopic dual-viewpoint view
Technical field
The present invention relates to 3D display technology, and in particular to a method, based on OpenGL, for converting a 3D model into a stereoscopic dual-viewpoint view.
Background technology
Humans live in a three-dimensional world and perceive it with stereoscopic vision. With the rapid development of computer technology, the ways in which computers describe the real world have grown ever richer: from sound to images and on to video, the world a computer can represent has become increasingly complex. Yet most current display devices still perform 2D display and ignore depth information. In this digital, modern era, 2D display can no longer satisfy human needs, so 3D models, as a new media format, have entered people's lives, studies and work, and have quickly been accepted by the general public. They are now widely applied in many fields such as film and television entertainment, architecture, machine manufacturing, medicine, the military, e-commerce, virtual reality and archaeology.
In the field of computer displays, stereoscopic display has become the direction of future development. Commercially, many hardware options are now available that allow us to obtain stereoscopic 3D visual information. 3D display is the most complex step in the whole 3D pipeline, because the playback platform is a flat display device and the left-eye and right-eye material must appear on the same display. This raises the question of how to separate the left-eye and right-eye material and deliver each accurately to the corresponding eye of the viewer. Once the separation of the left-eye and right-eye material goes wrong, the 3D effect fails to appear and the viewer sees confused, ghosted image content. We adopt the convergent binocular projection observation model and map the 3D model onto the screen according to it. This model better matches the viewing habits of the human eye, so what one sees appears more real and natural.
The fundamental reason the human eye can obtain stereoscopic vision is the existence of parallax: at the same moment the two eyes see different images, and this parallax gives rise to stereoscopic vision.
Summary of the invention
Stereoscopic display technology is the key technology of virtual reality and an indispensable basic condition of a virtual reality system, and the correct formation of depth perception is the key to stereoscopic display. Depth perception is formed correctly by means of binocular parallax images. The 3D display effect of the stereoscopic displays currently on the market is determined directly by the dual-viewpoint view, and the parallax image contains the range information of the scene. Starting from the principles of stereoscopic vision, the present invention therefore mainly studies how to use OpenGL to extract multi-viewpoint images from a virtual 3D model in the computer, converting it into a stereoscopic dual-viewpoint parallax image and thereby solving the 3D display problem.
Converting a 3D model into a dual-viewpoint view according to the present invention comprises reading the 3D model and converting it into a dual-viewpoint parallax image. Reading the 3D model comprises reading the vertex information and drawing the vertices; converting into a dual-viewpoint parallax image comprises monocular conversion and drawing the dual-viewpoint view.
The technical solution adopted by the present invention to solve the technical problem comprises the following steps:
Step 1: Select the convergence-type observation model.
Observation models mainly include the convergence type and the parallel type. The present invention selects the convergence-type observation model, in which Top, Bottom, Left and Right are respectively the distances from the upper, lower, left and right edges of the near clipping plane of the frustum shared by the left and right eyes to its center, Near is the distance from the near clipping plane to the viewpoint, and Far is the distance from the far clipping plane to the viewpoint.
Step 2: Input the parameters and calculate the frustum shift of the convergence-type observation model.
The parameters are: the interocular distance IOD, the vertical field-of-view angle fov of the observed image, the vertical distance d(eye-nearZ) from the eye to the near clipping plane, the vertical distance d(eye-screen) from the eye to the screen, the vertical distance d(eye-farZ) from the eye to the far clipping plane, and the image aspect ratio ratio.
The frustum shift of the convergence-type observation model is calculated from similar triangles:
Frustum_shift = (IOD/2) * d(eye-nearZ) / d(eye-screen)   (1)
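As a minimal sketch of formula (1) (the function and variable names are ours, not from the patent):

```python
def frustum_shift(iod, d_eye_near, d_eye_screen):
    """Formula (1): half the interocular distance, scaled by the ratio of
    the eye-to-near-plane distance to the eye-to-screen distance."""
    return (iod / 2.0) * d_eye_near / d_eye_screen
```

With the values used later in the embodiment (IOD = 7, d(eye-nearZ) = 1, d(eye-screen) = 5), this yields a shift of 0.7.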
Step 3: Calculate the left- and right-eye frustum parameters.
Left-eye frustum parameters:
top = tan(fov/2) * d(eye-nearZ)   (2)
Left_left_eye = -ratio * top - Frustum_shift   (3)
Right_left_eye = ratio * top - Frustum_shift   (4)
bottom = -top   (5)
Right-eye frustum parameters:
top = tan(fov/2) * d(eye-nearZ)   (6)
Right_right_eye = ratio * top + Frustum_shift   (7)
Left_right_eye = -ratio * top + Frustum_shift   (8)
bottom = -top   (9)
Here fov denotes the vertical field-of-view angle of the observed image and ratio is the image aspect ratio;
top and bottom are respectively the distances from the upper and lower edges of the near clipping plane of the shared frustum to its center;
Left_left_eye and Left_right_eye are respectively the distances from the left edge of the near clipping plane of the left-eye and right-eye frustums to the center;
Right_left_eye and Right_right_eye are respectively the distances from the right edge of the near clipping plane of the left-eye and right-eye frustums to the center.
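Formulas (2)-(9) can be sketched together as follows, assuming fov is given in degrees (the function name and return layout are ours):

```python
import math

def eye_frustum_bounds(fov_deg, ratio, frustum_shift, d_eye_near):
    """Near-plane bounds of the left- and right-eye frustums, formulas (2)-(9)."""
    top = math.tan(math.radians(fov_deg) / 2.0) * d_eye_near  # (2), (6)
    bottom = -top                                             # (5), (9)
    left_eye = (-ratio * top - frustum_shift,                 # Left_left_eye,  (3)
                ratio * top - frustum_shift)                  # Right_left_eye, (4)
    right_eye = (-ratio * top + frustum_shift,                # Left_right_eye,  (8)
                 ratio * top + frustum_shift)                 # Right_right_eye, (7)
    return top, bottom, left_eye, right_eye
```

With fov = 120°, ratio = 1080/960, shift = 0.7 and d(eye-nearZ) = 1, this reproduces the values worked out in the embodiment (for example Right_left_eye ≈ 1.2485).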
Step 4: Obtain the projection matrices and view matrices of the left and right eyes.
Left-eye projection matrix Mlproj:
Mlproj = frustum(Left_left_eye, Right_left_eye, bottom, top, d(eye-nearZ), d(eye-farZ))   (10)
Right-eye projection matrix Mrproj:
Mrproj = frustum(Left_right_eye, Right_right_eye, bottom, top, d(eye-nearZ), d(eye-farZ))   (11)
Left-eye view matrix:
Mlview = glm::lookAt(LeftCameraPosition, CameraTarget, upVector)   (12)
Right-eye view matrix:
Mrview = glm::lookAt(RightCameraPosition, CameraTarget, upVector)   (13)
LeftCameraPosition is the position of the left camera in the world coordinate system, RightCameraPosition is the position of the right camera in the world coordinate system, CameraTarget is the target position in the world coordinate system, and upVector specifies the up direction, here taken along the positive z-axis.
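The glm::lookAt call in formulas (12)-(13) builds a view matrix from the camera position, target and up vector. A dependency-free sketch of the same right-handed construction (row-major nested lists; the helper names are ours, and the usage example below uses a conventional y-up camera rather than the patent's z-up choice):

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def look_at(eye, target, up):
    """Row-major 4x4 view matrix following the right-handed
    glm::lookAt convention: camera at `eye`, looking at `target`."""
    f = normalize([t - e for t, e in zip(target, eye)])  # forward axis
    s = normalize(cross(f, up))                          # right axis
    u = cross(s, f)                                      # true up axis
    return [
        [ s[0],  s[1],  s[2], -dot(s, eye)],
        [ u[0],  u[1],  u[2], -dot(u, eye)],
        [-f[0], -f[1], -f[2],  dot(f, eye)],
        [ 0.0,   0.0,   0.0,   1.0],
    ]
```

For a camera at (0, 0, 5) looking at the origin with y up, this places the origin at camera-space depth -5, as expected.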
Step 5: Multiply the projection matrix, view matrix and model matrix of each eye to obtain two 4×4 MVP matrices, and pass the matrices to the shader:
MVP = Projection * View * Model   (14)
where Projection is the projection matrix, View is the view matrix and Model is the model matrix.
Because the projection and view matrices obtained for the left and right eyes differ, the MVP matrices obtained by multiplying the three matrices also differ. The present invention therefore passes the left-eye MVP and the right-eye MVP to the shader separately: the left-eye MVP is obtained from the left-eye projection, view and model matrices and passed to the shader, and the right-eye MVP is obtained from the right-eye projection, view and model matrices and passed to the shader.
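Formula (14) is a pair of 4×4 matrix products. A plain-Python sketch (row-major nested lists; names ours):

```python
def mat_mul(a, b):
    """Product of two 4x4 matrices given as row-major nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def mvp_matrix(projection, view, model):
    """Formula (14): MVP = Projection * View * Model."""
    return mat_mul(projection, mat_mul(view, model))
```

With identity projection and view matrices, the MVP is simply the model matrix, which is a convenient sanity check.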
Step 6: Load the 3D model file and store the vertex information in an array.
The 3D model file is loaded with a recursive algorithm, and the vertex information read out is saved in the form of an array. After all vertices of the 3D model have been loaded, the array holding the vertex information is passed to the shader.
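The patent does not name a model format or loading library. As one illustration of the recursive loading idea, the sketch below walks a hypothetical scene-graph node tree (similar to what loaders such as Assimp expose; the Node class is our stand-in) and flattens all vertices into one array:

```python
class Node:
    """Hypothetical scene-graph node: holds this node's vertices
    and a list of child nodes, as a typical model loader exposes."""
    def __init__(self, vertices=(), children=()):
        self.vertices = list(vertices)
        self.children = list(children)

def collect_vertices(node, out=None):
    """Recursively walk the node tree and append every vertex to one
    flat array, which would then be handed to the shader as a buffer."""
    if out is None:
        out = []
    out.extend(node.vertices)
    for child in node.children:
        collect_vertices(child, out)
    return out
```

The traversal visits the node itself, then each child subtree in order, so the output array preserves a depth-first vertex order.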
Step 7: Left-multiply each vertex coordinate by the MVP matrix of each eye to obtain the new vertex coordinate.
Let the original vertex coordinate be P1 = (X1, Y1, Z1, W) and the transformed vertex coordinate be P2 = (X2, Y2, Z2, W). Then
P2 = MVP * P1   (15)
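Formula (15) applied to one homogeneous vertex, as a sketch (names ours):

```python
def transform_vertex(mvp, p):
    """Formula (15): new vertex P2 = MVP * P1 for a homogeneous
    vertex p = (X, Y, Z, W), with MVP as a row-major 4x4 matrix."""
    return tuple(sum(mvp[i][j] * p[j] for j in range(4)) for i in range(4))
```

An identity MVP leaves the vertex unchanged; a translation matrix shifts it, which mirrors what the shader does per vertex.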
Step 8: After every vertex of the 3D model has been transformed, a new observed image is obtained for each eye, namely the left-eye and right-eye images. The left-eye and right-eye images are mapped onto the left and right halves of the screen respectively and spliced together, yielding the dual-viewpoint view.
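The splicing in Step 8 can be illustrated as follows, treating each eye's image as a row-major list of pixel rows (a simplification of the actual screen mapping; the function name is ours):

```python
def side_by_side(left_img, right_img):
    """Splice the left-eye image onto the left half and the right-eye
    image onto the right half, row by row, giving one dual-viewpoint frame."""
    assert len(left_img) == len(right_img), "both eyes must have equal height"
    return [lrow + rrow for lrow, rrow in zip(left_img, right_img)]
```

Two images of width W produce one frame of width 2W, which is the usual side-by-side input format for glasses-free 3D screens and VR headsets.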
Compared with the prior art, the beneficial effects of the present invention are as follows:
Unlike methods that gather information with a binocular camera, the present invention builds the dual-viewpoint view by reading a 3D model file, so it can be applied to dual-viewpoint view generation for virtual scenes.
The present invention has strong applicability compared with conventional methods: feeding the converted dual-viewpoint view into a glasses-free 3D screen achieves a glasses-free 3D effect, and feeding the dual-viewpoint view into VR equipment achieves a VR effect.
The present invention uses the convergence-type projection model, which matches the viewing habits of the human eye and simulates the convergence of the two eyes, so the achieved effect is more lifelike. Moreover, the interocular distance IOD and the eye-to-screen distance d(eye-screen), which affect the visual experience, are both input parameters, making adjustment of the final effect more scientific and convenient.
Brief description of the drawings
Fig. 1 is a schematic diagram of the convergence-type projection;
Fig. 2 is a schematic diagram of the convergence-type projection frustum;
Fig. 3 is a flow chart of the conversion algorithm.
Embodiment
The present invention is described in detail below with reference to an embodiment.
As shown in Figs. 1-3, a method for converting a 3D model into a stereoscopic dual-viewpoint view comprises the following steps:
Step 1: Select the convergence-type observation model.
Observation models mainly include the convergence type and the parallel type. The present invention selects the convergence-type observation model.
Fig. 1 is a schematic diagram of the convergence-type projection, and Fig. 2 shows the frustum of the convergence-type projection, where Top, Bottom, Left and Right are respectively the distances from the upper, lower, left and right edges of the near clipping plane of the frustum shared by the left and right eyes to its center, Near is the distance from the near clipping plane to the viewpoint, and Far is the distance from the far clipping plane to the viewpoint.
Step 2: Calculate the frustum shift of the convergence-type observation model from similar triangles:
Frustum_shift = (IOD/2) * d(eye-nearZ) / d(eye-screen)   (1)
In the formula, IOD is the interocular distance, d(eye-nearZ) is the vertical distance from the eye to the near clipping plane, d(eye-screen) is the vertical distance from the eye to the screen, and d(eye-farZ) is the vertical distance from the eye to the far clipping plane. Taking IOD = 7, d(eye-nearZ) = 1, d(eye-screen) = 5 and d(eye-farZ) = 100, we calculate Frustum_shift = 0.7.
Step 3: Calculate the left- and right-eye frustum parameters from geometry.
Left eye:
top = tan(fov/2) * d(eye-nearZ)   (2)
Left_left_eye = -ratio * top - Frustum_shift   (3)
Right_left_eye = ratio * top - Frustum_shift   (4)
bottom = -top   (5)
Right eye:
top = tan(fov/2) * d(eye-nearZ)   (6)
Right_right_eye = ratio * top + Frustum_shift   (7)
Left_right_eye = -ratio * top + Frustum_shift   (8)
bottom = -top   (9)
Here fov denotes the vertical field-of-view angle of the observed image and ratio is the image aspect ratio;
top and bottom are respectively the distances from the upper and lower edges of the near clipping plane of the shared frustum to its center;
Left_left_eye and Left_right_eye are respectively the distances from the left edge of the near clipping plane of the left-eye and right-eye frustums to the center;
Right_left_eye and Right_right_eye are respectively the distances from the right edge of the near clipping plane of the left-eye and right-eye frustums to the center.
Taking fov = 120° and ratio = 1080/960, we calculate:
Left eye: Right_left_eye = 1.2485, Left_left_eye = -2.6485, bottom = -1.732.
Right eye: Right_right_eye = 2.6485, Left_right_eye = -1.2485, bottom = -1.732.
Step 4: Obtain the projection matrices and view matrices of the left and right eyes, as follows.
Let f_near and f_far denote d(eye-nearZ) and d(eye-farZ), let t and b denote top and bottom, and let r_L, r_R, l_L, l_R denote Right_left_eye, Right_right_eye, Left_left_eye and Left_right_eye respectively, giving the left-eye projection matrix Mlproj:
Mlproj = frustum(Left_left_eye, Right_left_eye, bottom, top, d(eye-nearZ), d(eye-farZ))   (10)
Right-eye projection matrix Mrproj:
Mrproj = frustum(Left_right_eye, Right_right_eye, bottom, top, d(eye-nearZ), d(eye-farZ))   (11)
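The frustum(left, right, bottom, top, near, far) call in formulas (10)-(11) corresponds to the off-axis perspective matrix of OpenGL's glFrustum / glm::frustum. For illustration only (returned row-major here, whereas OpenGL stores matrices column-major), its elements can be written out as:

```python
def frustum(left, right, bottom, top, near, far):
    """Off-axis perspective projection with the same element values as
    OpenGL's glFrustum / glm::frustum, as a row-major nested list."""
    return [
        [2.0 * near / (right - left), 0.0, (right + left) / (right - left), 0.0],
        [0.0, 2.0 * near / (top - bottom), (top + bottom) / (top - bottom), 0.0],
        [0.0, 0.0, -(far + near) / (far - near), -2.0 * far * near / (far - near)],
        [0.0, 0.0, -1.0, 0.0],
    ]
```

The asymmetric left/right bounds computed in Step 3 are what shift the two eye frustums horizontally; for a symmetric frustum the (right + left) term vanishes and the matrix reduces to an ordinary centered perspective projection.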
Left-eye view matrix Mlview:
Mlview = glm::lookAt(LeftCameraPosition, CameraTarget, upVector)   (12)
Right-eye view matrix Mrview:
Mrview = glm::lookAt(RightCameraPosition, CameraTarget, upVector)   (13)
LeftCameraPosition is the position of the left camera in the world coordinate system, RightCameraPosition is the position of the right camera in the world coordinate system, CameraTarget is the target position in the world coordinate system, and upVector specifies the up direction, here taken along the positive z-axis.
Step 5: Multiply the projection matrix, view matrix and model matrix of each eye to obtain two 4×4 MVP matrices, and pass the matrices to the shader:
MVP = Projection * View * Model   (14)
where Projection is the projection matrix, View is the view matrix and Model is the model matrix.
Because the projection and view matrices obtained for the left and right eyes differ, the MVP matrices obtained by multiplying the three matrices also differ. The present invention therefore passes the left-eye MVP and the right-eye MVP to the shader separately: the left-eye MVP is obtained from the left-eye projection, view and model matrices and passed to the shader, and the right-eye MVP is obtained from the right-eye projection, view and model matrices and passed to the shader. The model matrix changes the position of the 3D model in the world coordinate system, for example rotating or translating the model. Because the present invention applies no rotation or translation, the model matrix is taken as the 4×4 identity matrix; that is, the 3D model is loaded at the origin of the world coordinate system by default.
Step 6: Load the 3D model file and store the vertex information in an array.
The 3D model file is loaded with a recursive algorithm, and the vertex information read out is saved in the form of an array. After all vertices of the 3D model have been loaded, the array holding the vertex information is passed to the shader.
Step 7: Left-multiply each vertex coordinate by the MVP matrix of each eye to obtain the new vertex coordinate.
Let the original vertex coordinate be P1 = (X1, Y1, Z1, W) and the transformed vertex coordinate be P2 = (X2, Y2, Z2, W). Then
P2 = MVP * P1   (15)
Step 8: After every vertex of the 3D model has been transformed, a new observed image is obtained for each eye, namely the left-eye and right-eye images. The left-eye and right-eye images are mapped onto the left and right halves of the screen respectively and spliced together, yielding the dual-viewpoint view.
The present invention passes the computed MVP matrices to the shader, which greatly accelerates program execution and thus improves the real-time performance of the program. The observation effect is also easy to adjust: the most suitable observation parameters can be chosen for different external devices.