CN105072433A - Depth perception mapping method applied to head track virtual reality system - Google Patents

Depth perception mapping method applied to head track virtual reality system

Info

Publication number
CN105072433A
Authority
CN
China
Prior art keywords
camera
distance
perspective plane
depth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510519808.5A
Other languages
Chinese (zh)
Other versions
CN105072433B (en)
Inventor
李大锦
李一晴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Normal University
Original Assignee
Shandong Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Normal University filed Critical Shandong Normal University
Priority to CN201510519808.5A
Publication of CN105072433A
Application granted
Publication of CN105072433B
Expired - Fee Related

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a depth perception mapping method applied to a head-tracking virtual reality system, comprising the following steps. Step 1: obtain a stereoscopic image of the scene objects and determine whether the head state has changed; if it has not, capture the stereoscopic image of the scene objects with a stereo camera pair whose axial separation equals the distance between the user's two eyes; if it has, proceed to the next step. Step 2: adjust the distance from the zero-parallax projection plane of the camera pair to the midpoint of the pair, so that the distance from the camera midpoint to the projection plane equals the distance from the physical display screen to the center of the user's two eyes. Step 3: determine a comfortable depth perception region from the screen size and the viewpoint position, scale the scene objects on both sides of the projection plane along the depth direction and squeeze them toward the projection plane, so that the scene depth range lies within the determined comfortable depth perception region. Step 4: apply a shear transformation to the scene objects according to the perspective projection geometry, and capture the stereoscopic image of the scene objects with the camera pair.

Description

Depth perception mapping method applied to a head-tracking virtual reality system
Technical field
The present invention relates to the field of computer multimedia, and in particular to a depth perception mapping method applied to a head-tracking virtual reality system.
Background technology
Stereoscopic display simulates the image parallax of human binocular vision so that the viewer forms a visual perception of depth when watching the image. In a virtual reality system, stereoscopic display effectively enhances realism and immersion. However, stereoscopic display can also cause visual fatigue because of the vergence-accommodation conflict of the human eye. The most effective software remedy for this fatigue is to weaken the conflict by reducing the disparity of the binocular images. Reducing disparity, however, also reduces perceived depth, so a viewer watching a stereoscopic image has a comfortable range of perceived depth. The method of compressing perceived depth into this comfortable range is called depth mapping.
In virtual reality systems, the traditional depth-compression method achieves depth mapping by reducing the axial separation of the stereo camera pair. Depth mapping, however, distorts the perceived object in two ways: first, the object is compressed along the viewing direction and appears flattened; second, the perceived object undergoes a shear distortion whose reference line is the center line of the stereo camera pair. In a head-tracking system, the stereo camera pair that shoots the stereoscopic image moves with the head, so the reference line of the shear distortion moves with the head as well. The perceived image therefore appears to move with head motion, a movement referred to as "image drift".
Summary of the invention
To solve the image-drift problem that the traditional depth mapping method produces in head-tracking systems, the invention provides a depth perception mapping method applied to head-tracking virtual reality systems. The method shoots the images with a camera axial separation identical to the actual interocular distance, and applies the depth-compression transformation to the virtual scene with a fixed shear reference line. The method not only eliminates image drift in head-tracking virtual reality systems but also flexibly realizes both linear and nonlinear depth mapping.
To achieve these goals, the invention adopts the following technical solution:
A depth perception mapping method applied to a head-tracking virtual reality system, comprising the following steps (a code sketch of the whole pipeline follows step 4):
Step 1: obtain a stereoscopic image of the scene objects, and determine whether the head state has changed; if it has not changed, capture the stereoscopic image of the scene objects with a stereo camera pair whose axial separation equals the distance between the user's two eyes; if it has changed, proceed to the next step;
Step 2: adjust the distance from the zero-parallax projection plane of the camera pair to the midpoint of the pair, so that the distance from the camera midpoint to the projection plane equals the distance from the physical display screen to the center of the user's two eyes;
Step 3: determine a comfortable depth perception region from the screen size and the viewpoint position, scale the scene objects on both sides of the projection plane along the depth direction and squeeze them toward the projection plane, so that the scene depth range lies within the determined comfortable depth perception region;
Step 4: apply a shear transformation to the scene objects according to the perspective projection geometry, and capture the stereoscopic image of the scene objects with the camera pair.
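The four steps can be condensed into the following minimal Python sketch. It is not the patent's implementation: `tracker`, `render_stereo` and `depth_map_scene` are hypothetical stand-ins for the head tracker, the stereo renderer and the depth-mapping transforms detailed below.

```python
def stereo_frame(scene_vertices, eye_separation, screen_distance,
                 tracker, render_stereo, depth_map_scene):
    """Render one stereo frame following steps 1-4 (helpers are injected)."""
    # Step 1: if the head state has not changed, shoot directly with a
    # camera pair whose axial separation equals the user's interocular
    # distance (eye_separation).
    if not tracker.head_state_changed():
        return render_stereo(scene_vertices, eye_separation)

    # Step 2: place the zero-parallax projection plane so that the distance
    # F from the camera-pair midpoint to the plane equals the physical
    # screen-to-eye distance.
    F = screen_distance

    # Steps 3-4: compress the scene depth toward the projection plane and
    # shear it according to the perspective projection geometry, then shoot.
    mapped_vertices = depth_map_scene(scene_vertices, F)
    return render_stereo(mapped_vertices, eye_separation)
```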
In step 4, the projection of the camera-pair midpoint onto the axis that passes through the center of the projection plane and is perpendicular to it is used as the perspective reference point for computing the shear transformation.
In step 3, the scale transformation of the scene objects on both sides of the projection plane along the depth direction comprises a linear scale mapping and a nonlinear scale mapping.
A coordinate system is set up with the perspective reference point as the origin, the perpendicular bisector of the projection plane as the Z axis, the horizontal direction as the X axis and the vertical direction as the Y axis; a model vertex is converted from (x, y, z) to (x', y', z') by the scaling and shear transformations.
The model is mapped by a linear scale transformation along the depth direction in this coordinate space; the scale transformation in the depth direction is:
z' = \frac{F - f'}{F - f}\,(z + F) - F
where f is the far clipping distance of the camera, f' is the distance from the camera to the far boundary of the comfortable depth perception region, and F is the distance from the camera-pair midpoint to the projection plane.
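A quick sanity check, added here for clarity (it follows directly from the formula): substituting the projection-plane depth and the far clipping depth gives

z = -F \;\Rightarrow\; z' = \frac{F - f'}{F - f}\cdot 0 - F = -F, \qquad z = -f \;\Rightarrow\; z' = \frac{F - f'}{F - f}\,(F - f) - F = -f'

so the projection plane is a fixed point of the mapping while the far clipping plane is pulled onto the far boundary of the comfortable region.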
The shear transformation of the scene objects on both sides of the projection plane in the horizontal and vertical directions is:
x' = \frac{x\,z'}{z}, \qquad y' = \frac{y\,z'}{z}.
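A minimal NumPy sketch of the linear mapping and the shear, assuming the coordinate convention above with the camera-pair midpoint at z = 0 and the projection plane at z = -F (so vertices in front of the camera have negative z); the function name and array layout are illustrative, not from the patent.

```python
import numpy as np

def linear_depth_map(vertices, F, f, f_prime):
    """Linear depth scaling plus shear, per the two formulas above.

    vertices -- (N, 3) array of (x, y, z); z < 0 in front of the camera
    F        -- distance from the camera-pair midpoint to the projection plane
    f        -- far clipping distance of the camera
    f_prime  -- far boundary of the comfortable depth perception region
    """
    x, y, z = vertices[:, 0], vertices[:, 1], vertices[:, 2]
    # z' = (F - f')/(F - f) * (z + F) - F: the plane z = -F is a fixed point
    # and the far clipping plane z = -f maps onto z = -f'.
    z_new = (F - f_prime) / (F - f) * (z + F) - F
    # Shear toward the fixed perspective reference point:
    # x' = x z'/z, y' = y z'/z (vertices at z = 0 are assumed excluded).
    x_new, y_new = x * z_new / z, y * z_new / z
    return np.stack([x_new, y_new, z_new], axis=1)
```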
When the scene objects on both sides of the projection plane are mapped by a nonlinear scale transformation along the depth direction, different compression ratios are adopted at different depths.
In this nonlinear mapping, the compression ratio decreases as the distance between the scene objects and the projection plane decreases.
Again, a coordinate system is set up with the perspective reference point as the origin, the perpendicular bisector of the projection plane as the Z axis, the horizontal direction as the X axis and the vertical direction as the Y axis; a model vertex is converted from (x, y, z) to (x', y', z') by the scaling and shear transformations.
The model is mapped by a nonlinear scale transformation along the depth direction in this coordinate space; the scale transformation in the depth direction is:
z' = \frac{(F - f')(F - n')(2F - f - n)\,(z + F)}{\big[(F - f')(F - n) - (F - n')(F - f)\big]\,|z + F| + (2F - f' - n')(F - f)(F - n)} - F
where f is the far clipping distance of the camera, n is the near clipping distance, f' and n' are respectively the distances from the camera to the far and near boundaries of the comfortable depth perception region, and F is the distance from the camera-pair midpoint to the projection plane.
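The same boundary check applies, added here for clarity: at z = -F the numerator vanishes, so z' = -F; at z = -f (where |z + F| = f - F) the fraction reduces to F - f', giving z' = -f'; and at z = -n (where |z + F| = F - n) it reduces to F - n', giving z' = -n'. The clipping planes are thus mapped exactly onto the boundaries of the comfortable region while the projection plane stays fixed.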
The shear transformation of the scene objects on both sides of the projection plane in the horizontal and vertical directions is:
x' = \frac{x\,z'}{z}, \qquad y' = \frac{y\,z'}{z}.
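Under the same conventions, a sketch of the nonlinear variant; again the name and layout are assumptions, not the patent's code. The |z + F| term makes the compression ratio depth-dependent, with the plane z = -F as a fixed point, the far clip z = -f mapping to z = -f', and the near clip z = -n mapping to z = -n'.

```python
import numpy as np

def nonlinear_depth_map(vertices, F, f, n, f_prime, n_prime):
    """Nonlinear depth scaling plus shear, per the formula above.

    f, n             -- far / near clipping distances of the camera
    f_prime, n_prime -- far / near boundaries of the comfortable depth region
    F                -- camera-pair midpoint to projection plane distance
    """
    x, y, z = vertices[:, 0], vertices[:, 1], vertices[:, 2]
    num = (F - f_prime) * (F - n_prime) * (2*F - f - n) * (z + F)
    den = (((F - f_prime) * (F - n) - (F - n_prime) * (F - f)) * np.abs(z + F)
           + (2*F - f_prime - n_prime) * (F - f) * (F - n))
    z_new = num / den - F
    # Same shear as in the linear case.
    x_new, y_new = x * z_new / z, y * z_new / z
    return np.stack([x_new, y_new, z_new], axis=1)
```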
Beneficial effects of the invention:
(1) When the actual viewer's interocular distance equals the axial separation of the cameras shooting the stereoscopic image, the perceived object is not distorted. In a head-tracking system, therefore, the invention's use of a camera axial separation identical to the actual user's interocular distance eliminates the image-drift phenomenon.
(2) To realize depth compression, the invention scales and shears the virtual scene to reduce the perceived depth, while the stereo image is shot with a camera axial separation identical to the user's interocular distance. The perceived depth can thus be reduced, and visual fatigue relieved, while image drift is eliminated.
Brief description of the drawings
Fig. 1 is a schematic flow diagram of the depth perception mapping method applied to a head-tracking virtual reality system of the invention;
Fig. 2 is a schematic diagram of the model's scaling in the depth direction and shear transformation in the horizontal direction.
Embodiment
The invention is further described below with reference to the accompanying drawings and an embodiment:
As shown in Fig. 1, the depth perception mapping method applied to a head-tracking virtual reality system of the invention comprises:
Step 1: obtain a stereoscopic image of the scene objects, and determine whether the head state has changed; if it has not changed, capture the stereoscopic image of the scene objects with a stereo camera pair whose axial separation equals the distance between the user's two eyes; if it has changed, proceed to the next step;
Step 2: adjust the distance from the zero-parallax projection plane of the camera pair to the midpoint of the pair, so that the distance from the camera midpoint to the projection plane equals the distance from the physical display screen to the center of the user's two eyes;
Step 3: determine a comfortable depth perception region from the screen size and the viewpoint position, scale the scene objects on both sides of the projection plane along the depth direction and squeeze them toward the projection plane, so that the scene depth range lies within the determined comfortable depth perception region;
Step 4: apply a shear transformation to the scene objects according to the perspective projection geometry, and capture the stereoscopic image of the scene objects with the camera pair.
Further, in step 3, the scale transformation of the scene objects on both sides of the projection plane along the depth direction comprises a linear scale mapping and a nonlinear scale mapping.
Take the animal model shown in Fig. 2 as the photographed object:
The left-eye and right-eye images of the animal shown in Fig. 2 are captured with a camera pair whose axial separation equals the user's interocular distance, and the animal is displayed stereoscopically from these images. The perceived object, i.e., the photographed animal model of Fig. 2, is then not distorted, and the image-drift phenomenon is eliminated. In this embodiment, the detailed procedures of the linear and nonlinear scale mappings of the animal model are as follows:
First, a coordinate system is set up with the perspective reference point as the origin, the perpendicular bisector of the projection plane as the Z axis, the horizontal direction as the X axis and the vertical direction as the Y axis;
Then each vertex coordinate of the animal model is converted from (x, y, z) to (x', y', z') by the scaling and shear transformations. The perspective reference point is the projection of the camera-pair midpoint onto the axis that passes through the center of the projection plane and is perpendicular to it, and the shear transformation of the object is computed with respect to this reference point.
The animal model of Fig. 2 is mapped by a linear scale transformation along the depth direction in this coordinate space; the scale transformation in the depth direction is:
z' = \frac{F - f'}{F - f}\,(z + F) - F
The shear transformation of the scene objects on both sides of the projection plane in the horizontal and vertical directions is:
x' = \frac{x\,z'}{z}, \qquad y' = \frac{y\,z'}{z}
where f is the far clipping distance of the camera, f' is the distance from the camera to the far boundary of the comfortable depth perception region, and F is the distance from the camera-pair midpoint to the projection plane;
When the scene objects on both sides of the projection plane are mapped by a nonlinear scale transformation along the depth direction, different compression ratios are adopted at different depths, and the compression ratio decreases as the distance between the scene objects and the projection plane decreases.
With the same coordinate system (perspective reference point as the origin, perpendicular bisector of the projection plane as the Z axis, horizontal direction as the X axis, vertical direction as the Y axis), each model vertex is converted from (x, y, z) to (x', y', z') by the scaling and shear transformations;
The model is mapped by a nonlinear scale transformation along the depth direction in this coordinate space; the scale transformation in the depth direction is:
z' = \frac{(F - f')(F - n')(2F - f - n)\,(z + F)}{\big[(F - f')(F - n) - (F - n')(F - f)\big]\,|z + F| + (2F - f' - n')(F - f)(F - n)} - F
The shear transformation of the scene objects on both sides of the projection plane in the horizontal and vertical directions is:
x' = \frac{x\,z'}{z}, \qquad y' = \frac{y\,z'}{z}
where f is the far clipping distance of the camera, n is the near clipping distance, f' and n' are respectively the distances from the camera to the far and near boundaries of the comfortable depth perception region, and F is the distance from the camera-pair midpoint to the projection plane.
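A hypothetical usage on the animal model of Fig. 2, reusing the `nonlinear_depth_map` sketch above; all numbers are illustrative only (scene units are arbitrary), not values from the patent.

```python
import numpy as np

# Three sample vertices of the model (x, y, z); camera-pair midpoint at z = 0,
# projection plane at z = -F, scene in front of the camera (z < 0).
model = np.array([[ 0.1,  0.2, -1.5],
                  [-0.3,  0.1, -2.8],
                  [ 0.2, -0.1, -0.9]])

F = 2.0              # midpoint-to-projection-plane distance (= screen-to-eye)
f, n = 6.0, 0.5      # far / near clipping distances
f_p, n_p = 3.0, 1.2  # comfortable depth perception region boundaries

compressed = nonlinear_depth_map(model, F, f, n, f_p, n_p)
print(compressed.round(3))
# Each vertex inside the clipping range now lands within the comfortable
# region z in [-f_p, -n_p] around the plane z = -F, ready for step 4's shot.
```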
Depth compression is realized in this embodiment by scaling and shearing the virtual scene to reduce the perceived depth while image drift is eliminated, finally achieving the goal of reducing perceived depth and relieving visual fatigue.
Although the specific embodiments of the invention have been described above with reference to the accompanying drawings, they do not limit the protection scope of the invention. Those skilled in the art should understand that various modifications or variations that can be made without creative effort on the basis of the technical solution of the invention still fall within the protection scope of the invention.

Claims (9)

1. A depth perception mapping method applied to a head-tracking virtual reality system, characterized by comprising:
Step 1: obtaining a stereoscopic image of the scene objects, and determining whether the head state has changed; if it has not changed, capturing the stereoscopic image of the scene objects with a stereo camera pair whose axial separation equals the distance between the user's two eyes; if it has changed, proceeding to the next step;
Step 2: adjusting the distance from the zero-parallax projection plane of the camera pair to the midpoint of the pair, so that the distance from the camera midpoint to the projection plane equals the distance from the physical display screen to the center of the user's two eyes;
Step 3: determining a comfortable depth perception region from the screen size and the viewpoint position, scaling the scene objects on both sides of the projection plane along the depth direction and squeezing them toward the projection plane, so that the scene depth range lies within the determined comfortable depth perception region;
Step 4: applying a shear transformation to the scene objects according to the perspective projection geometry, and capturing the stereoscopic image of the scene objects with the camera pair.
2. The depth perception mapping method applied to a head-tracking virtual reality system according to claim 1, characterized in that, in step 4, the projection of the camera-pair midpoint onto the axis that passes through the center of the projection plane and is perpendicular to it is used as the perspective reference point for computing the shear transformation.
3. The depth perception mapping method applied to a head-tracking virtual reality system according to claim 1, characterized in that, in step 3, the scale transformation of the scene objects on both sides of the projection plane along the depth direction comprises a linear scale mapping and a nonlinear scale mapping.
4. The depth perception mapping method applied to a head-tracking virtual reality system according to claim 3, characterized in that a coordinate system is set up with the perspective reference point as the origin, the perpendicular bisector of the projection plane as the Z axis, the horizontal direction as the X axis and the vertical direction as the Y axis, and a model vertex is converted from (x, y, z) to (x', y', z') by the scaling and shear transformations;
the model is mapped by a linear scale transformation along the depth direction in this coordinate space, the scale transformation in the depth direction being:
z' = \frac{F - f'}{F - f}\,(z + F) - F
wherein f is the far clipping distance of the camera, f' is the distance from the camera to the far boundary of the comfortable depth perception region, and F is the distance from the camera-pair midpoint to the projection plane.
5. The depth perception mapping method applied to a head-tracking virtual reality system according to claim 4, characterized in that the shear transformation of the scene objects on both sides of the projection plane in the horizontal and vertical directions is:
x' = \frac{x\,z'}{z}, \qquad y' = \frac{y\,z'}{z}.
6. The depth perception mapping method applied to a head-tracking virtual reality system according to claim 3, characterized in that, when the scene objects on both sides of the projection plane are mapped by a nonlinear scale transformation along the depth direction, different compression ratios are adopted at different depths.
7. The depth perception mapping method applied to a head-tracking virtual reality system according to claim 6, characterized in that, when the scene objects on both sides of the projection plane are mapped by a nonlinear scale transformation along the depth direction, the compression ratio decreases as the distance between the scene objects and the projection plane decreases.
8. The depth perception mapping method applied to a head-tracking virtual reality system according to claim 6, characterized in that a coordinate system is set up with the perspective reference point as the origin, the perpendicular bisector of the projection plane as the Z axis, the horizontal direction as the X axis and the vertical direction as the Y axis, and a model vertex is converted from (x, y, z) to (x', y', z') by the scaling and shear transformations;
the model is mapped by a nonlinear scale transformation along the depth direction in this coordinate space, the scale transformation in the depth direction being:
z' = \frac{(F - f')(F - n')(2F - f - n)\,(z + F)}{\big[(F - f')(F - n) - (F - n')(F - f)\big]\,|z + F| + (2F - f' - n')(F - f)(F - n)} - F
wherein f is the far clipping distance of the camera, n is the near clipping distance, f' and n' are respectively the distances from the camera to the far and near boundaries of the comfortable depth perception region, and F is the distance from the camera-pair midpoint to the projection plane.
9. The depth perception mapping method applied to a head-tracking virtual reality system according to claim 8, characterized in that the shear transformation of the scene objects on both sides of the projection plane in the horizontal and vertical directions is:
x' = \frac{x\,z'}{z}, \qquad y' = \frac{y\,z'}{z}.
CN201510519808.5A 2015-08-21 2015-08-21 Depth perception mapping method applied to head track virtual reality system Expired - Fee Related CN105072433B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510519808.5A CN105072433B (en) 2015-08-21 2015-08-21 Depth perception mapping method applied to head track virtual reality system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510519808.5A CN105072433B (en) 2015-08-21 2015-08-21 Depth perception mapping method applied to head track virtual reality system

Publications (2)

Publication Number Publication Date
CN105072433A 2015-11-18
CN105072433B CN105072433B (en) 2017-03-22

Family

ID=54501701

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510519808.5A Expired - Fee Related CN105072433B (en) 2015-08-21 2015-08-21 Depth perception mapping method applied to head track virtual reality system

Country Status (1)

Country Link
CN (1) CN105072433B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106228530A (en) * 2016-06-12 2016-12-14 深圳超多维光电子有限公司 Stereoscopic photography method, device and stereoscopic photography equipment
CN106251403A (en) * 2016-06-12 2016-12-21 深圳超多维光电子有限公司 Method, device and system for realizing a virtual three-dimensional scene
WO2018133312A1 (en) * 2017-01-19 2018-07-26 华为技术有限公司 Processing method and device
GB2565140A (en) * 2017-08-04 2019-02-06 Nokia Technologies Oy Virtual reality video processing

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1897715A (en) * 2006-05-31 2007-01-17 北京航空航天大学 Three-dimensional vision semi-matter simulating system and method
EP2615580A1 (en) * 2012-01-13 2013-07-17 Softkinetic Software Automatic scene calibration
CN103260015A (en) * 2013-06-03 2013-08-21 程志全 Three-dimensional visual monitoring system based on RGB-Depth camera

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1897715A (en) * 2006-05-31 2007-01-17 北京航空航天大学 Three-dimensional vision semi-matter simulating system and method
EP2615580A1 (en) * 2012-01-13 2013-07-17 Softkinetic Software Automatic scene calibration
US20150181198A1 (en) * 2012-01-13 2015-06-25 Softkinetic Software Automatic Scene Calibration
CN103260015A (en) * 2013-06-03 2013-08-21 程志全 Three-dimensional visual monitoring system based on RGB-Depth camera

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106228530A (en) * 2016-06-12 2016-12-14 深圳超多维光电子有限公司 Stereoscopic photography method, device and stereoscopic photography equipment
CN106251403A (en) * 2016-06-12 2016-12-21 深圳超多维光电子有限公司 Method, device and system for realizing a virtual three-dimensional scene
CN106251403B (en) * 2016-06-12 2018-02-16 深圳超多维光电子有限公司 Method, device and system for realizing a virtual three-dimensional scene
WO2018133312A1 (en) * 2017-01-19 2018-07-26 华为技术有限公司 Processing method and device
GB2565140A (en) * 2017-08-04 2019-02-06 Nokia Technologies Oy Virtual reality video processing
US10681276B2 (en) 2017-08-04 2020-06-09 Nokia Technologies Oy Virtual reality video processing to compensate for movement of a camera during capture

Also Published As

Publication number Publication date
CN105072433B (en) 2017-03-22

Similar Documents

Publication Publication Date Title
US7983477B2 (en) Method and apparatus for generating a stereoscopic image
JP4918689B2 (en) Stereo image generation method and stereo image generation apparatus for generating a stereo image from a two-dimensional image using a mesh map
US20160267720A1 (en) Pleasant and Realistic Virtual/Augmented/Mixed Reality Experience
US20150339844A1 (en) Method and apparatus for achieving transformation of a virtual view into a three-dimensional view
US11190756B2 (en) Head-mountable display system
AU2018249563B2 (en) System, method and software for producing virtual three dimensional images that appear to project forward of or above an electronic display
EP3287837A1 (en) Head-mountable display system
KR102564479B1 (en) Method and apparatus of 3d rendering user' eyes
CN101180653A (en) Method and device for three-dimensional rendering
CN105072433A (en) Depth perception mapping method applied to head track virtual reality system
CN114022565A (en) Alignment method and alignment device for display equipment and vehicle-mounted display system
EP4068768A1 (en) 3d display apparatus and 3d image display method
CN111275801A (en) Three-dimensional picture rendering method and device
CN104599308B (en) A kind of dynamic chart pasting method based on projection
US20220067966A1 (en) Localization and mapping using images from multiple devices
CN104134235A (en) Real space and virtual space fusion method and real space and virtual space fusion system
CN103747236A (en) 3D (three-dimensional) video processing system and method by combining human eye tracking
WO2020048461A1 (en) Three-dimensional stereoscopic display method, terminal device and storage medium
CN102510503B (en) Stereoscopic display method and stereoscopic display equipment
US10296098B2 (en) Input/output device, input/output program, and input/output method
GB2566276A (en) A method of modifying an image on a computational device
KR102113285B1 (en) Image processing method and apparatus of parallel axis typed stereo camera system for 3d-vision of near objects
JP6168597B2 (en) Information terminal equipment
US10110876B1 (en) System and method for displaying images in 3-D stereo
CN106484850B (en) Panoramic table display methods and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170322

Termination date: 20180821