CN109542209A - A method of adapting to human eye convergence - Google Patents
A method of adapting to human eye convergence
- Publication number
- CN109542209A (application CN201710662219.1A)
- Authority
- CN
- China
- Prior art keywords
- distance
- stereo scene
- human eye
- shooting
- scene
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
- G06T7/593—Depth or shape recovery from multiple images from stereo images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
- G06T2207/10021—Stereoscopic video; Stereoscopic image sequence
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Human Computer Interaction (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- Software Systems (AREA)
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
- Stereoscopic And Panoramic Photography (AREA)
Abstract
The present invention provides a method of adapting to human eye convergence, comprising the following steps. Step 1: determine the convergence angle of a person's two eyes when observing objects at different distances, and obtain the correspondence between the convergence angle and the distance. Step 2: shoot a stereo scene. Step 3: present the stereo scene; determine the distance between the object at the center of the viewer's visual image and the shooting point while the viewer watches the stereo scene; determine the real-time convergence angle of the eyes according to the correspondence; and rotate the left and right view spheres of the image in the VR viewing device to the real-time convergence angle, thereby adapting the display to the natural motion of the viewer's eyes and achieving a visual experience identical to viewing a natural scene. The method applies not only to viewing live-action VR stereo scenes but also to viewing VR stereo video. This scheme effectively reduces the dizziness and discomfort caused by watching live-action VR content.
Description
Technical field
The present invention relates to the technical field of stereoscopic vision, and in particular to a method of adapting to human eye convergence.
Background technique
VR (Virtual Reality) was first proposed in the 1980s by Jaron Lanier, founder of the US company VPL. Its core idea is the comprehensive use of computer graphics systems together with various display and control interface devices to provide an immersive experience in an interactive three-dimensional environment generated on a computer. Such a computer-generated, interactive three-dimensional environment is called a virtual environment (Virtual Environment, VE). Live-action VR uses captured panoramic video and images to build a stereoscopic virtual environment in a computer, with which people interact through external sensors.
At present, viewers may feel dizzy when watching live-action VR content. Apart from factors related to shooting quality, such as sharpness, chromatic aberration, and stability, and factors related to the VR headset, such as latency, screen refresh rate, resolution, and ill-fitting interpupillary distance, the dizziness is also related to the convergence behavior of the human eyes. When observing the real world, the accommodation of the crystalline lens generally corresponds to the movement of the eyeballs: when gazing at a distant object, the eyes look straight ahead and the lens relaxes; when gazing at a nearby object, the eye muscles compress the lens and change its shape so that a clear image falls correctly on the retina, and at the same time the two eyes converge toward the center. When wearing VR glasses to watch live-action VR content, violating this innate visual principle of the human eye causes dizziness and discomfort. In addition, when viewing a natural scene, people are generally accustomed to turning the head so that the visual center is aligned with the observed object; the probability of turning the head is much greater than that of moving only the eyes. When wearing VR glasses to watch live-action VR content, the position of this visual center in the depth map can be inferred from the sensor information in the VR glasses, thereby obtaining the corresponding distance information. This is the basic principle on which the present invention relies.
Summary of the invention
In view of this, the technical problem to be solved by the present invention is to provide a method of adapting to human eye convergence that can effectively reduce the dizziness and discomfort caused by watching VR content.
The technical scheme of the present invention is realized as follows:
A method of adapting to human eye convergence, comprising the following steps:
Step 1: determine the convergence angle of a person's two eyes when observing objects at different distances, and obtain the correspondence between the convergence angle and the distance;
Step 2: shoot a stereo scene;
Step 3: present the stereo scene; determine the distance between the object at the center of the viewer's visual image and the shooting point while the viewer watches the stereo scene; determine the real-time convergence angle of the eyes according to the correspondence; and rotate the left and right view spheres of the image in the VR viewing device to the real-time convergence angle.
Preferably, before step 3, the method further includes:
obtaining a depth map of the stereo scene;
and determining the distance between the object at the center of the viewer's visual image and the shooting point while the viewer watches the stereo scene includes: determining that distance according to the depth map.
Preferably, shooting the stereo scene includes:
constructing the stereo scene by fixed-point ring shooting with a binocular camera and/or a combination of two cameras at an appropriate spacing, specifically:
the binocular camera and/or the two-camera combination is placed on a panoramic electric pan-tilt head; each time the head rotates by a certain angle, the camera(s) are triggered to simultaneously shoot the scene picture at that viewing angle; shooting is repeated until the head completes a full revolution, yielding left and right image sequences. Alternatively, the binocular camera and/or the two-camera combination on the pan-tilt head synchronously records one full circle of video, which is then decompressed into left and right image sequences.
A binocular camera alone may also synchronously record one circle of video on the pan-tilt head, which is then decompressed into left and right image sequences.
As a special case, when the stereo scene is shot with a stereoscopic panoramic camera composed of multiple twin-lens groups, the images shot by the left lenses of the twin-lens groups form the left image sequence, and the images shot by the right lenses form the right image sequence.
As a special case, when a stereoscopic panoramic video is shot with a stereoscopic panoramic camera composed of multiple twin-lens groups, the left-eye and right-eye image sequences of each frame can be obtained after decompression.
Preferably, obtaining the depth map of the stereo scene includes:
obtaining the depth map of the stereo scene according to a binocular stereo matching algorithm.
With the method of adapting to human eye convergence proposed by the present invention, the relationship between convergence angle and distance is computed in advance. Once the distance between the object at the center of the viewer's visual image and the shooting point is determined while the viewer watches the stereo scene, the convergence angle can be calculated, and rotating the left and right view spheres of the image in the VR viewing device to that angle adapts the display to the natural convergence motion of the eyes. This satisfies the principles of human vision and effectively reduces the dizziness and discomfort caused by watching live-action VR content.
Detailed description of the invention
Fig. 1 is a flowchart of the method of adapting to human eye convergence proposed by an embodiment of the present invention;
Fig. 2 is a flowchart of the method of adapting to human eye convergence proposed by a further embodiment of the present invention;
Fig. 3 is a flowchart of the method of adapting to human eye convergence proposed by another embodiment of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
As shown in Fig. 1, an embodiment of the present invention proposes a method of adapting to human eye convergence, comprising the following steps:
S101: determine the convergence angle of a person's two eyes when observing objects at different distances, and obtain the correspondence between convergence angle and distance;
S102: shoot a stereo scene;
S103: present the stereo scene; determine the distance between the object at the center of the viewer's visual image and the shooting point while the viewer watches the stereo scene; determine the real-time convergence angle of the eyes according to the correspondence; and rotate the left and right view spheres of the image in the VR viewing device to the real-time convergence angle.
It can be seen that the method of adapting to human eye convergence proposed by the present invention precomputes the relationship between convergence angle and distance. After the distance between the object at the center of the viewer's visual image and the shooting point is determined while the viewer watches the stereo scene, the convergence angle is calculated, and rotating the left and right view spheres of the VR image to that angle adapts the display to the natural convergence motion of the eyes, satisfying the principles of human vision and effectively reducing the dizziness and discomfort caused by watching VR content.
As shown in Fig. 2, a further embodiment of the present invention proposes a method of adapting to human eye convergence, comprising the following steps:
S201: human eye convergence calibration. Determine the convergence angle of a person's two eyes when observing objects at different distances, and obtain the correspondence between convergence angle and distance.
In this embodiment, a binocular camera with a known interpupillary-distance baseline can be used to shoot objects at different distances, for example several photographs of objects at certain intervals within a range of 0.5 to 10 meters. The left and right images shot by the binocular camera are then mapped onto the left and right view spheres; the objects at each distance are viewed, and the rotation angle of the left and right view spheres is adjusted until the object appears most natural and the stereoscopic effect is best at that distance, and the angle is recorded. In this way, the convergence angle corresponding to each distance within the 0.5 to 10 meter range is obtained in turn, and these data are fitted to obtain the correspondence T.
In theory, the convergence angle of the two eyes is large when observing nearby objects and small when observing distant objects. By rotating the binocular view spheres in a way that simulates human eye convergence, the rotation angles (α₁, α₂, ..., αₙ) at which the eyes converge naturally on objects at different distances (d₁, d₂, ..., dₙ) are recorded, and according to data-fitting theory the correspondence between the two can be found, providing the basis for subsequent convergence adjustment.
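As a sketch of this fitting step, the (dᵢ, αᵢ) samples can be generated from the thin geometric model α = 2·arctan(IPD / 2d) and interpolated piecewise-linearly. Both the 65 mm interpupillary distance and the geometric model are assumptions here; the patent obtains the angles by calibration instead.

```python
import math

IPD_M = 0.065  # assumed interpupillary distance in meters (not from the patent)

def convergence_angle_deg(distance_m: float) -> float:
    """Theoretical convergence angle (degrees) for an object at distance_m."""
    return math.degrees(2.0 * math.atan(IPD_M / (2.0 * distance_m)))

# Sample (distance, angle) pairs over 0.5-10 m, as in the calibration step.
distances = [0.5 + 0.5 * i for i in range(20)]          # 0.5, 1.0, ..., 10.0
angles = [convergence_angle_deg(d) for d in distances]

def T(distance_m: float) -> float:
    """Piecewise-linear correspondence T: distance -> convergence angle,
    clamped to the calibrated 0.5-10 m range."""
    if distance_m <= distances[0]:
        return angles[0]
    if distance_m >= distances[-1]:
        return angles[-1]
    for i in range(1, len(distances)):
        if distance_m <= distances[i]:
            d0, d1 = distances[i - 1], distances[i]
            a0, a1 = angles[i - 1], angles[i]
            t = (distance_m - d0) / (d1 - d0)
            return a0 + t * (a1 - a0)
```

As expected, T decreases monotonically with distance: near objects demand a larger convergence angle than far ones.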
S202: shoot a stereo scene. The stereo scene is constructed by fixed-point ring shooting with a binocular camera, specifically: the binocular camera is placed on a panoramic electric pan-tilt head; each time the head rotates by a certain angle, the binocular camera is triggered to shoot the scene picture at that viewing angle; shooting is repeated until the head completes a full revolution, yielding left and right image sequences. Alternatively, the binocular camera on the panoramic electric pan-tilt head synchronously records one full circle of video, which is then decompressed into left and right image sequences.
The purpose is to construct a true 3D visual environment for presentation by shooting the real scene, including fixed-point stereo scene construction, for example fixed-point ring shooting with a binocular fisheye camera to obtain the stereo information of every viewpoint at the current position. One may further choose to stitch the left and right image sequences into left-eye and right-eye panoramic images that constitute panoramic stereo vision. Similarly, the stereo environment can be shot with a stereoscopic panoramic camera composed of multiple twin-lens groups, directly yielding the left-eye and right-eye image sequences; naturally, the image sequences of the left lenses in the twin-lens groups can also be stitched into the left-eye panoramic image of the stereo vision, and those of the right lenses into the right-eye panoramic image.
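The fixed-point ring-shooting procedure above can be sketched as a simple loop. The `rotate_head` and `capture_stereo` callables are hypothetical hardware hooks (the patent names no API); they are passed in so the loop stays testable.

```python
from typing import Callable, List, Tuple

def ring_shoot(step_deg: float,
               rotate_head: Callable[[float], None],
               capture_stereo: Callable[[], Tuple[object, object]]
               ) -> Tuple[List[object], List[object]]:
    """Rotate through a full circle in step_deg increments, grabbing a
    left/right frame pair at each stop; returns the two image sequences."""
    left_seq: List[object] = []
    right_seq: List[object] = []
    angle = 0.0
    while angle < 360.0:
        left, right = capture_stereo()   # trigger both lenses simultaneously
        left_seq.append(left)
        right_seq.append(right)
        rotate_head(step_deg)            # advance the panoramic pan-tilt head
        angle += step_deg
    return left_seq, right_seq
```

With a 30° step the loop stops 12 times, yielding 12 left/right frame pairs, one per viewing angle.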
S203: obtain a depth map. The depth map of the stereo scene is obtained according to a binocular stereo matching algorithm.
Obtaining the distance of each point in the scene relative to the camera is one of the vital tasks of a computer vision system. The distance of each point in the scene relative to the camera can be represented with a depth map (Depth Map), i.e., each pixel value in the depth map represents the distance between a certain point in the scene and the camera.
In this embodiment, an existing binocular stereo matching algorithm can be used to compute the depth map at each viewing angle and stitch the results into a 360-degree panoramic depth map. Alternatively, a laser rangefinder can perform a 360-degree scan at the position to obtain the panoramic depth map, or a scene point cloud established based on VSLAM (vision-based simultaneous localization and mapping) can be projected to obtain the depth map; manual drawing of the depth map is also not excluded.
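The disparity-to-depth relation underlying binocular stereo matching can be sketched as follows. The focal length, baseline, and disparity values used for illustration are assumptions, not values from the patent.

```python
def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Rectified pinhole-stereo relation: Z = f * B / d, where f is the
    focal length in pixels, B the camera baseline in meters, and d the
    horizontal disparity in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

def depth_map(focal_px: float, baseline_m: float, disparity_map):
    """Convert a 2D disparity map (pixels) into a depth map (meters)."""
    return [[depth_from_disparity(focal_px, baseline_m, d) for d in row]
            for row in disparity_map]
```

For instance, with an assumed 700 px focal length and 65 mm baseline, a 22.75 px disparity corresponds to a depth of about 2 m.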
S204: rotate the left and right view spheres to simulate eye convergence. The stereo scene is presented; the distance between the object at the center of the viewer's visual image and the shooting point is determined from the depth map while the viewer watches the stereo scene; the real-time convergence angle of the eyes is determined according to the correspondence; and the left and right view spheres of the image in the VR viewing device are rotated to the real-time convergence angle.
In this embodiment, the captured left and right image sequences can be mapped onto the left and right view spheres. When the viewer watches the live-action VR scene, the viewer's current head state is available from the gyroscope of the VR helmet, from which the exact center of the current view image is inferred. The corresponding position is looked up in the pre-stitched 360° panoramic depth map to obtain the depth information, giving the actual shooting distance between the object at the center of the viewer's visual image and the shooting point. The left and right views are then adjusted automatically using the distance-convergence correspondence obtained by the "human eye convergence calibration". For example, if the distance at the current position is 4 m and the previous distance was 2 m, T is used to compute the inward rotation angles of the left and right view spheres at 2 m and at 4 m; if the difference is 1°, the left and right view spheres are immediately rotated by 1°, achieving the convergence adjustment. Of course, after the distance is determined, the convergence angle can also be computed directly from T, and the left and right view spheres are then directly rotated to the corresponding angle. For example, after the distance is determined to be 6 m from the depth-map information, the convergence angle A is computed from T, and the left and right view spheres of the image in the VR viewing device are directly rotated to angle A.
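The convergence-adjustment arithmetic of S204 can be sketched as below. Here T is again assumed to be the geometric model 2·arctan(IPD / 2d) with a 65 mm IPD; the patent instead uses the calibrated correspondence. Under this assumption, the 2 m → 4 m example gives a rotation difference of about 0.93°, consistent with the roughly 1° quoted above.

```python
import math

IPD_M = 0.065  # assumed interpupillary distance (not specified in the patent)

def T(distance_m: float) -> float:
    """Assumed correspondence: convergence angle (degrees) at distance_m."""
    return math.degrees(2.0 * math.atan(IPD_M / (2.0 * distance_m)))

def rotation_delta(prev_distance_m: float, new_distance_m: float) -> float:
    """Incremental rotation (degrees) to apply to the view spheres when
    the gazed object's distance changes, as in the 2 m -> 4 m example."""
    return T(new_distance_m) - T(prev_distance_m)

def target_angle(distance_m: float) -> float:
    """Absolute convergence angle to rotate the view spheres to directly,
    as in the 6 m example."""
    return T(distance_m)
```

`rotation_delta(2.0, 4.0)` is negative: the view spheres rotate slightly outward as the gazed object recedes.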
In this embodiment, the VR viewing device may be a device such as a VR helmet or VR glasses.
It can be seen that the method of adapting to human eye convergence proposed by this embodiment of the present invention precomputes the relationship between convergence angle and distance. After the distance between the object at the center of the viewer's visual image and the shooting point is determined while the viewer watches the stereo scene, the convergence angle is calculated, and the left and right view spheres of the image in the VR viewing device are rotated to that angle, adapting the display to the natural convergence motion of the eyes, satisfying the principles of human vision, and effectively reducing the dizziness and discomfort caused by watching VR content.
As shown in Fig. 3, another embodiment of the present invention proposes a method of adapting to human eye convergence, comprising the following steps:
S301: human eye convergence calibration. Determine the convergence angle of a person's two eyes when observing objects at different distances, and obtain the correspondence between convergence angle and distance;
S302: shoot a stereoscopic panoramic video with a stereoscopic panoramic camera composed of multiple twin-lens groups;
S303: decompress to obtain the left-eye and right-eye image sequences, or the left-eye and right-eye panoramas, of each frame in the stereoscopic panoramic video;
S304: obtain the depth map of the stereo scene according to a binocular stereo matching algorithm;
S305: present the stereo scene; determine the distance between the object at the center of the viewer's visual image and the shooting point from the depth map while the viewer watches the stereo scene; determine the real-time convergence angle of the eyes according to the correspondence; and rotate the left and right view spheres at every frame of the stereoscopic panoramic video to adapt to the natural convergence motion of the eyes.
In conclusion, the embodiments of the present invention can achieve at least the following effects:
In the embodiments of the present invention, the relationship between the convergence angle and the distance of the observed object is precomputed. After the distance between the object at the center of the viewer's visual image in each frame of the stereoscopic panoramic video and the shooting point is determined, the convergence angle is calculated, and the left and right view spheres of the VR device image are rotated to that angle, adapting the display to the natural convergence motion of the eyes, satisfying the principles of human vision, and effectively reducing the dizziness and discomfort caused by watching live-action VR content.
Finally, it should be noted that the foregoing are merely preferred embodiments of the present invention, intended only to illustrate the technical solution of the present invention and not to limit its protection scope. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.
Claims (5)
1. A method of adapting to human eye convergence, characterized by comprising the following steps:
Step 1: determining the convergence angle of a person's two eyes when observing objects at different distances, and obtaining a correspondence between the convergence angle and the distance;
Step 2: shooting a stereo scene;
Step 3: presenting the stereo scene; determining the distance between an object at the center of a viewer's visual image and a shooting point while the viewer watches the stereo scene; determining a real-time convergence angle of the eyes according to the correspondence; and rotating left and right view spheres of the image in a VR viewing device to the real-time convergence angle.
2. The method of adapting to human eye convergence according to claim 1, characterized in that, before step 3, the method further comprises:
obtaining a depth map of the stereo scene;
and wherein determining the distance between the object at the center of the viewer's visual image and the shooting point while the viewer watches the stereo scene comprises: determining said distance according to the depth map.
3. The method of adapting to human eye convergence according to claim 1, characterized in that shooting the stereo scene comprises:
constructing the stereo scene by fixed-point ring shooting with a binocular camera and/or a combination of two cameras at an appropriate spacing, specifically comprising:
placing the binocular camera and/or the two-camera combination on a panoramic electric pan-tilt head; each time the pan-tilt head rotates by a certain angle, triggering the binocular camera and/or the two-camera combination to simultaneously shoot the scene picture at that viewing angle, and repeating the shooting until the pan-tilt head completes a full revolution, thereby obtaining left and right image sequences; or synchronously recording one full circle of video with the binocular camera and/or the two-camera combination on the pan-tilt head, and then decompressing the video into left and right image sequences.
4. The method of adapting to human eye convergence according to claim 1, characterized in that shooting the stereo scene comprises:
shooting the stereo scene with a stereoscopic panoramic camera composed of multiple twin-lens groups, specifically comprising:
shooting a stereoscopic panoramic photograph to obtain left-eye and right-eye image sequences; or shooting a stereoscopic video and obtaining the left-eye and right-eye image sequences of each frame after decompression.
5. The method of adapting to human eye convergence according to claim 2, characterized in that obtaining the depth map of the stereo scene comprises:
obtaining the depth map of the stereo scene according to a binocular stereo matching algorithm.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710662219.1A CN109542209A (en) | 2017-08-04 | 2017-08-04 | A method of adapting to human eye convergence |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710662219.1A CN109542209A (en) | 2017-08-04 | 2017-08-04 | A method of adapting to human eye convergence |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109542209A true CN109542209A (en) | 2019-03-29 |
Family
ID=65823550
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710662219.1A Pending CN109542209A (en) | 2017-08-04 | 2017-08-04 | A method of adapting to human eye convergence |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109542209A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112270766A (en) * | 2020-10-14 | 2021-01-26 | 浙江吉利控股集团有限公司 | Control method, system, equipment and storage medium of virtual reality system |
CN113516683A (en) * | 2021-04-12 | 2021-10-19 | 广东视明科技发展有限公司 | Modeling method for evaluating influence of moving speed on stereoscopic vision |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0882765A (en) * | 1994-09-13 | 1996-03-26 | Nippon Telegr & Teleph Corp <Ntt> | Stereoscopic display device |
CN102340678A (en) * | 2010-07-21 | 2012-02-01 | 深圳Tcl新技术有限公司 | Stereoscopic display device with adjustable field depth and field depth adjusting method |
CN103096106A (en) * | 2011-11-01 | 2013-05-08 | 三星电子株式会社 | Image processing apparatus and method |
CN103404155A (en) * | 2010-12-08 | 2013-11-20 | 汤姆逊许可公司 | Method and system for 3d display with adaptive disparity |
CN104506836A (en) * | 2014-11-28 | 2015-04-08 | 深圳市亿思达科技集团有限公司 | Personal holographic three-dimensional display method and device based on eyeball tracking |
CN104661012A (en) * | 2014-11-28 | 2015-05-27 | 深圳市亿思达科技集团有限公司 | Individual holographic three-dimensional display method and equipment |
CN105075254A (en) * | 2013-03-28 | 2015-11-18 | 索尼公司 | Image processing device and method, and program |
CN105872518A (en) * | 2015-12-28 | 2016-08-17 | 乐视致新电子科技(天津)有限公司 | Method and device for adjusting parallax through virtual reality |
CN106231292A (en) * | 2016-09-07 | 2016-12-14 | 深圳超多维科技有限公司 | Stereoscopic virtual reality live broadcast method, device and equipment |
CN106228613A (en) * | 2016-06-12 | 2016-12-14 | 深圳超多维光电子有限公司 | Construction method and device for a virtual three-dimensional scene, and stereoscopic display device |
CN106309089A (en) * | 2016-08-29 | 2017-01-11 | 深圳市爱思拓信息存储技术有限公司 | VR (Virtual Reality) eyesight correction method and device |
CN106507086A (en) * | 2016-10-28 | 2017-03-15 | 北京灵境世界科技有限公司 | A 3D rendering method for roaming live-action VR |
- 2017-08-04: application CN201710662219.1A filed; published as CN109542209A (en), status Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP7094266B2 (en) | Single-depth tracking-accommodation-binocular accommodation solution | |
US10750154B2 (en) | Immersive stereoscopic video acquisition, encoding and virtual reality playback methods and apparatus | |
CN106464854B (en) | Image encoding and display | |
CN106413829B (en) | Image coding and display | |
US20230141039A1 (en) | Immersive displays | |
CN108600733B (en) | Naked eye 3D display method based on human eye tracking | |
US20150358539A1 (en) | Mobile Virtual Reality Camera, Method, And System | |
EP4012482A1 (en) | Display | |
CN104777620B (en) | The depth of field identification Optical devices and its imaging method of virtual reality 3D scenes | |
CN105939481A (en) | Interactive three-dimensional virtual reality video program recorded broadcast and live broadcast method | |
US9118894B2 (en) | Image processing apparatus and image processing method for shifting parallax images | |
JP2008524673A (en) | Stereo camera image distortion correction apparatus and method | |
JP6384940B2 (en) | 3D image display method and head mounted device | |
WO2018176927A1 (en) | Binocular rendering method and system for virtual active parallax computation compensation | |
CN107545537A (en) | A method of generating 3D panoramic images from a dense point cloud | |
CN105141941A (en) | Digital panoramic 3D film production method and system | |
CN108614636A (en) | A 3D live-action VR production method | |
CN109542209A (en) | A method of adapting to human eye convergence | |
CN106507086A (en) | A 3D rendering method for roaming live-action VR | |
JP6207640B2 (en) | 2D image stereoscopic display device | |
CN204496115U (en) | The depth of field identification optical devices of virtual reality 3D scene | |
CN108513122B (en) | Model adjusting method and model generating device based on 3D imaging technology | |
WO2023056803A1 (en) | Holographic presentation method and apparatus | |
CN113382222B (en) | Display method based on holographic sand table in user moving process | |
CN113382229B (en) | Dynamic auxiliary camera adjusting method and device based on holographic sand table |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | |
Application publication date: 20190329 |