CN110850977B - Stereoscopic image interaction method based on 6DOF head-mounted display - Google Patents

Stereoscopic image interaction method based on 6DOF head-mounted display

Info

Publication number
CN110850977B
CN110850977B (application CN201911077191.0A)
Authority
CN
China
Prior art keywords
shaking
virtual
user
dimensional scene
wearer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911077191.0A
Other languages
Chinese (zh)
Other versions
CN110850977A (en)
Inventor
吕云
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Weiai New Economic And Technological Research Institute Co ltd
Original Assignee
Chengdu Weiai New Economic And Technological Research Institute Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Weiai New Economic And Technological Research Institute Co ltd filed Critical Chengdu Weiai New Economic And Technological Research Institute Co ltd
Priority to CN201911077191.0A priority Critical patent/CN110850977B/en
Publication of CN110850977A publication Critical patent/CN110850977A/en
Application granted granted Critical
Publication of CN110850977B publication Critical patent/CN110850977B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality

Abstract

The application discloses a stereoscopic image interaction method based on a 6DOF head-mounted display, comprising: S1, collecting spatial position information and posture information of the wearer's viewing angle; S2, adjusting the posture information and proportion parameters of virtual objects according to the relative relation between the spatial position information and the real scene, constructing and rendering a virtual three-dimensional scene, and establishing a dynamic motion model for each dynamic object; S3, displaying the three-dimensional scene and the dynamic motion model on a virtual display interface; S4, adjusting rendering-effect parameters of the three-dimensional scene and the dynamic motion model according to the shaking level of the displayed virtual picture; S5, acquiring the operation form of the user's two hands, the action parameters of the two hands, and any input voice instruction; S6, performing the virtual response of the three-dimensional scene based on the operation form, the action parameters, and the input voice instruction.

Description

Stereoscopic image interaction method based on 6DOF head-mounted display
Technical Field
The application belongs to the technical field of AR, and particularly relates to a stereoscopic image interaction method based on a 6DOF head-mounted display.
Background
AR (Augmented Reality) technology fuses virtual information with the real world. It draws on technical means such as multimedia, three-dimensional modeling, real-time tracking and registration, intelligent interaction, and sensing, and applies computer-generated virtual information (text, images, three-dimensional models, music, video) to the real world after simulation, so that the two kinds of information complement each other and the real world is thereby enhanced.
The interaction between the user and the simulation environment, and between the user and the various virtual objects within it, is an important component of AR technology. Shaking of the virtual picture and the limited modes of interaction with the virtual environment, however, leave the user with a poor experience in the virtual environment.
Disclosure of Invention
The application aims to provide a stereoscopic image interaction method based on a 6DOF head-mounted display to solve the problem of the poor experience offered by existing virtual interaction.
In order to achieve the above purpose, the application adopts the following technical scheme:
a stereoscopic image interaction method based on a 6DOF head-mounted display, comprising:
s1, acquiring spatial position information and posture information of a visual angle of a wearer;
s2, adjusting the posture information and the proportion parameters of the virtual object according to the relative relation between the space position information and the real scene, constructing and rendering a virtual three-dimensional scene, and simultaneously establishing a dynamic motion model for the dynamic object;
s3, displaying the three-dimensional scene and the dynamic motion model on a virtual display interface;
s4, according to the shaking grade of the displayed virtual picture, adjusting rendering effect parameters of the three-dimensional scene and the dynamic motion model;
s5, acquiring the operation form of the two hands of the user, the action parameters of the two hands of the user and the input voice instruction;
s6, performing virtual response of the three-dimensional scene based on the operation form, the action parameters and the input voice command.
Preferably, the spatial position information is perspective spatial image information captured by at least two sets of cameras located above the 6DOF head mounted display.
Preferably, the posture information includes the pitch angle, yaw angle, and roll angle of the 6DOF head-mounted display device.
Preferably, the shaking level is the degree of shaking the wearer perceives in the virtual picture, classified into primary shaking, medium shaking, and overload shaking.
Preferably, the method for adjusting the rendering effect parameters of the three-dimensional scene and the dynamic motion model is as follows: the wearer selects a specific shaking level from the shaking-level options displayed on the 6DOF head-mounted display, and the rendering effect of the virtual interactive picture is then adjusted so as to reduce the degree of shaking perceived by the wearer.
Preferably, the rendering effect parameters are picture delay, light intensity, luminous map subdivision, light buffering and picture dithering.
Preferably, the voice instructions of the wearer are collected and received through a voice input device.
Preferably, a speech model is generated and checked based on a deep-learning algorithm to obtain the instruction intention of the wearer's voice instruction, and a corresponding action instruction is generated according to that intention.
Preferably, the operation form and the action parameters of both hands of the user are obtained through a camera at the top of the 6DOF head-mounted display, and each operation form corresponds to a unique operation instruction.
Preferably, according to the operation instruction corresponding to the operation mode of the two hands of the wearer, corresponding operation is performed on the virtual object in the virtual screen.
The stereoscopic image interaction method based on the 6DOF head-mounted display provided by the application has the following beneficial effects:
according to the application, through collecting the spatial position information and the posture information of the visual angle of a wearer, a virtual three-dimensional scene with high fidelity is constructed and rendered, and a dynamic motion model is built for a dynamic object; the virtual scene is controlled by adopting voice instruction input and a double-hand operation mode, so that the selectivity of user interaction experience is improved; meanwhile, the shaking grade is adopted, and the virtual scene rendering parameters are adjusted according to the shaking grade selected by the user, so that the user requirements are met, and the user experience comfort level is increased.
Drawings
Fig. 1 is a flow chart of a stereoscopic image interaction method based on a 6DOF head mounted display.
Detailed Description
The following description of the embodiments of the present application is provided to help those skilled in the art understand the application. The application is not, however, limited to the scope of these embodiments: to those skilled in the art, all applications that make use of the inventive concept fall within the spirit and scope of the application as defined by the appended claims.
According to an embodiment of the present application, referring to fig. 1, the stereoscopic image interaction method based on the 6DOF head-mounted display of the present solution includes:
s1, acquiring spatial position information and posture information of a visual angle of a wearer;
s2, adjusting the posture information and the proportion parameters of the virtual object according to the relative relation between the space position information and the real scene, constructing and rendering a virtual three-dimensional scene, and simultaneously establishing a dynamic motion model for the dynamic object;
s3, displaying the three-dimensional scene and the dynamic motion model on a virtual display interface;
s4, according to the shaking grade of the displayed virtual picture, adjusting rendering effect parameters of the three-dimensional scene and the dynamic motion model;
s5, acquiring the operation form of the two hands of the user, the action parameters of the two hands of the user and the input voice instruction;
s6, performing virtual response of the three-dimensional scene based on the operation form, the action parameters and the input voice command.
The above steps are described in detail below.
Step S1, acquiring spatial position information and posture information of a visual angle of a wearer;
at least two groups of cameras above the 6DOF head-mounted display, wherein the cameras can rotate and deflect, and the shooting of the spatial information of the user visual angle is performed based on the two rotatable cameras.
The posture information includes the pitch angle, yaw angle, and roll angle of the 6DOF head-mounted display device as worn by the user, and reflects the user's posture in the virtual environment.
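Headset orientation sensors commonly report a quaternion; a conventional conversion to the pitch, yaw, and roll angles named above might look as follows. The quaternion source and function name are assumptions for illustration, not taken from the patent.

```python
import math

def quaternion_to_euler(w, x, y, z):
    """Convert a unit orientation quaternion into (pitch, yaw, roll)."""
    # roll: rotation about the forward axis
    roll = math.atan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y))
    # pitch: rotation about the lateral axis, clamped to asin's domain
    s = max(-1.0, min(1.0, 2 * (w * y - z * x)))
    pitch = math.asin(s)
    # yaw: rotation about the vertical axis
    yaw = math.atan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z))
    return pitch, yaw, roll

# Identity quaternion: a level, forward-facing headset.
pose = quaternion_to_euler(1, 0, 0, 0)  # (0.0, 0.0, 0.0)
```

Angle conventions (axis order, sign) differ between devices, so a real implementation must match the headset's documented frame.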
S2, adjusting the posture information and the proportion parameters of the virtual object according to the relative relation between the space position information and the real scene, constructing and rendering a virtual three-dimensional scene, and simultaneously establishing a dynamic motion model for the dynamic object;
and (3) acquiring the gesture information and the spatial position information of the user in the step (S1), and further adjusting the gesture information and the proportion parameters of the virtual object according to the relative relation between the spatial position information and the real scene, so that the gesture of the user in the virtual environment is closer to a true value and accords with the virtual environment.
The coordinate system in which the cameras shoot is a spatial coordinate system, while interaction between the virtual environment and reality depends on the coordinate system of the virtual display interface; the two coordinate systems are independent of each other. The spatial position information and posture information acquired by the at least two groups of cameras must therefore be converted into virtual position information under the virtual-reality display interface.
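This conversion can be sketched as a translate-then-rotate mapping of a point from the cameras' spatial frame into the display frame. The axis convention (y up, yaw about y) and the function name are illustrative assumptions; a full implementation would use 4x4 homogeneous matrices covering all three rotation axes.

```python
import math

def world_to_display(point, headset_position, yaw):
    """Map a 3D point from the spatial (camera) frame into the virtual
    display frame: translate by the headset position, then rotate by the
    headset yaw about the vertical axis."""
    tx = point[0] - headset_position[0]
    ty = point[1] - headset_position[1]
    tz = point[2] - headset_position[2]
    c, s = math.cos(yaw), math.sin(yaw)
    return (c * tx - s * tz, ty, s * tx + c * tz)

# A point at the headset's own position maps to the display-frame origin.
origin = world_to_display((1.0, 2.0, 3.0), (1.0, 2.0, 3.0), 0.7)
```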
A virtual three-dimensional scene for man-machine interaction is then constructed, and the scene is rendered so as to reproduce the real environment with high fidelity.
A dynamic motion model is built for each dynamic object; its posture information and spatial position information are converted into virtual position information under the virtual-reality display interface and simultaneously superimposed, frame by frame, into the virtual three-dimensional scene.
S3, displaying the three-dimensional scene and the dynamic motion model on a virtual display interface;
and (3) displaying the virtual scene constructed in the step (S2) on a 6DOF head-mounted display, wherein a user can interact with the virtual scene in real time through the display when wearing the 6DOF head-mounted device.
S4, according to the shaking grade of the displayed virtual picture, adjusting rendering effect parameters of the three-dimensional scene and the dynamic motion model;
the shaking level is the shaking degree felt by the wearer on the virtual picture, and different users can feel different even facing the same virtual scene; therefore, the scheme divides the shaking grades, can divide the shaking grades in multiple stages, and only divides the shaking grades into three grades, namely primary shaking, medium shaking and overload shaking.
The primary shake is shake or shake range that the user can bear.
Medium-level shaking is still within an affordable range, but users are already able to feel the shaking brought by the current virtual environment significantly.
Overload shaking is beyond the bearing range of users, and phenomena such as dizzy and vomiting are seriously caused.
According to actual demand, the user can bring up the shaking-level options of the virtual scene within the three-dimensional scene through a two-hand operation form and/or an input voice instruction, and select a specific shaking level according to his or her current experience.
According to the specific shaking level the user selects for the three-dimensional scene, the rendering-effect parameters of the three-dimensional scene and the dynamic motion model are adjusted so as to reduce the degree of shaking the user perceives. The rendering-effect parameters include picture delay, light intensity, luminous-map subdivision, light buffering, and picture jitter.
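The adjustment can be sketched as a lookup of per-level rendering presets. The patent names the tunable parameters but gives no values, so every number in the table below is an illustrative assumption.

```python
# Hypothetical presets keyed by the three shaking levels from the text.
RENDER_PRESETS = {
    "primary":  {"frame_delay_ms": 20, "light_intensity": 1.0,
                 "lightmap_subdiv": 4, "jitter_damping": 0.0},
    "medium":   {"frame_delay_ms": 15, "light_intensity": 0.8,
                 "lightmap_subdiv": 2, "jitter_damping": 0.5},
    "overload": {"frame_delay_ms": 10, "light_intensity": 0.6,
                 "lightmap_subdiv": 1, "jitter_damping": 0.9},
}

def adjust_rendering(shaking_level):
    """Return the rendering-effect parameters for the level the wearer
    selected from the on-display shaking-level options."""
    if shaking_level not in RENDER_PRESETS:
        raise ValueError(f"unknown shaking level: {shaking_level}")
    return RENDER_PRESETS[shaking_level]

params = adjust_rendering("overload")  # strongest mitigation
```

The stronger the perceived shaking, the lower the frame delay and the heavier the jitter damping in this sketch, which matches the stated aim of easing the wearer's discomfort.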
S5, acquiring the operation form of the hands of the user, the action parameters of the hands and the input voice instructions;
the input instructions comprise a user double-hand morphological instruction and a user voice instruction, and information interaction of the virtual scene is realized through the double-hand morphological instruction and the voice instruction.
The voice instruction may be acquired through a voice input device, for example a microphone (MIC); the device is not limited here.
The collected voice instructions are processed by a speech model generated and checked with a deep-learning algorithm to obtain the instruction intention of the wearer's voice instruction; a corresponding action instruction is generated from that intention and applied to the virtual scene.
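The patent specifies a deep-learning speech model; the keyword lookup below is only a stand-in illustrating the utterance-to-intent-to-action flow. All phrases and action names are assumptions.

```python
# Placeholder intent table -- a trained model would replace this lookup.
INTENTS = {
    "rotate": "ROTATE_OBJECT",
    "zoom in": "SCALE_UP",
    "zoom out": "SCALE_DOWN",
    "shaking level": "SHOW_SHAKING_OPTIONS",
}

def infer_intent(utterance):
    """Map a recognised utterance to an action instruction for the scene."""
    text = utterance.lower()
    for phrase, action in INTENTS.items():
        if phrase in text:
            return action
    return "NO_OP"  # unrecognised speech leaves the scene unchanged

action = infer_intent("Please zoom in on the model")  # "SCALE_UP"
```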
The operation forms and action parameters of the user's two hands are obtained through the camera at the top of the 6DOF head-mounted display; each operation form corresponds to a unique operation instruction, and each operation instruction corresponds to a unique virtual-scene instruction.
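The one-to-one mapping from two-hand operation forms to operation instructions can be sketched as a table lookup. The gesture names and instruction names below are assumptions; the patent only states that the mapping is unique.

```python
# Illustrative (left hand, right hand) form -> instruction table.
GESTURE_TABLE = {
    ("pinch", "pinch"): "SCALE",
    ("fist", "open"):   "GRAB",
    ("open", "open"):   "RELEASE",
    ("point", "idle"):  "SELECT",
}

def gesture_to_instruction(left_hand, right_hand):
    """Resolve the operation form of both hands to a scene instruction."""
    return GESTURE_TABLE.get((left_hand, right_hand), "NONE")

instruction = gesture_to_instruction("fist", "open")  # "GRAB"
```

Because the table is a dictionary keyed on the pair of forms, uniqueness of the form-to-instruction mapping is guaranteed by construction.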
And S6, performing virtual response of the three-dimensional scene based on the operation form, the action parameters thereof and the input voice command.
The voice instructions and operation-form instructions acquired in step S5 are converted into the corresponding operation instructions in the virtual scene, thereby realising the response of the virtual scene.
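The step-S6 dispatch of both input channels into a scene response can be sketched as follows. The scene class, the voice-before-gesture ordering, and the "no-op" sentinel values are assumptions; the patent does not specify channel priority.

```python
class VirtualScene:
    """Minimal stand-in for the rendered three-dimensional scene."""
    def __init__(self):
        self.log = []  # record of applied operations, oldest first

    def apply(self, instruction):
        self.log.append(instruction)

def respond(scene, voice_action, gesture_action):
    """Forward both input channels to the scene, skipping no-op inputs."""
    for action in (voice_action, gesture_action):
        if action not in ("NO_OP", "NONE"):
            scene.apply(action)
    return scene.log

scene = VirtualScene()
respond(scene, "SCALE_UP", "NONE")  # only the voice channel fires
```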
By collecting the spatial position information and posture information of the wearer's viewing angle, the application constructs and renders a high-fidelity virtual three-dimensional scene and builds a dynamic motion model for each dynamic object. The virtual scene is controlled through voice-instruction input and two-hand operation, broadening the user's choice of interaction modes. At the same time, shaking levels are introduced, and the rendering parameters of the virtual scene are adjusted according to the shaking level the user selects, meeting the user's needs and increasing the comfort of the user experience.
Although specific embodiments of the application have been described in detail with reference to the accompanying drawings, this should not be construed as limiting the scope of protection of the patent. Modifications and variations that those skilled in the art can make without creative effort remain within the scope of the patent as described in the claims.

Claims (1)

1. A stereoscopic image interaction method based on a 6DOF head-mounted display, comprising:
s1, acquiring spatial position information and posture information of a visual angle of a wearer;
s2, adjusting the posture information and the proportion parameters of the virtual object according to the relative relation between the space position information and the real scene, constructing and rendering a virtual three-dimensional scene, and simultaneously establishing a dynamic motion model for the dynamic object;
s3, displaying the three-dimensional scene and the dynamic motion model on a virtual display interface;
s4, according to the shaking grade of the displayed virtual picture, adjusting rendering effect parameters of the three-dimensional scene and the dynamic motion model;
s5, acquiring the operation form of the two hands of the user, the action parameters of the two hands of the user and the input voice instruction;
s6, performing virtual response of the three-dimensional scene based on the operation form, the action parameters thereof and the input voice command;
the spatial position information is visual angle spatial image information shot by at least two groups of cameras above the 6DOF head-mounted display;
the posture information comprises a pitch angle, a yaw angle and a roll angle of the 6DOF head-mounted display device;
the shaking grade is the degree of shaking the wearer perceives in the virtual picture, and is divided into primary shaking, medium shaking and overload shaking;
the method for adjusting the rendering effect parameters of the three-dimensional scene and the dynamic motion model comprises the following steps: the wearer selects a specific shaking grade according to the shaking grade option displayed on the 6DOF head-mounted display, and further adjusts the rendering effect of the virtual interactive picture so as to slow down the shaking degree of the wearer;
the rendering effect parameters comprise picture delay, light intensity, luminous map subdivision, lamplight buffering and picture dithering;
collecting and receiving voice instructions of a wearer through voice input equipment;
generating and checking a voice model based on a deep learning algorithm to obtain the instruction intention of a voice instruction of a wearer, and generating a corresponding action instruction according to the instruction intention;
acquiring operation forms and action parameters of both hands of a user through a camera at the top of the 6DOF head-mounted display, wherein each operation form corresponds to a unique operation instruction;
according to the operation instruction corresponding to the operation form of the hands of the wearer, corresponding operation is implemented on the virtual object in the virtual picture;
the shaking level is the degree of shaking the wearer perceives in the virtual picture, and different users may perceive it differently even when facing the same virtual scene; the shaking level is therefore divided into grades, in a multi-stage division comprising primary shaking, medium shaking and overload shaking;
the primary shaking is shaking, or a range of shaking, that the user can bear;
the medium shaking is within the bearable range, but the user can already clearly feel the shaking brought by the current virtual environment;
the overload shaking is beyond the user's bearable range and can cause dizziness and vomiting;
according to actual demand, the shaking-level options of the virtual scene are displayed in the three-dimensional scene through the operation form of the two hands and/or the input voice instruction, and a specific shaking level is selected according to the user's current experience;
according to the specific shaking level of the three-dimensional scene selected by the user, the rendering-effect parameters of the three-dimensional scene and the dynamic motion model are adjusted so as to reduce the degree of shaking perceived by the user; the rendering-effect parameters include picture delay, light intensity, luminous-map subdivision, light buffering and picture jitter.
CN201911077191.0A 2019-11-06 2019-11-06 Stereoscopic image interaction method based on 6DOF head-mounted display Active CN110850977B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911077191.0A CN110850977B (en) 2019-11-06 2019-11-06 Stereoscopic image interaction method based on 6DOF head-mounted display

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911077191.0A CN110850977B (en) 2019-11-06 2019-11-06 Stereoscopic image interaction method based on 6DOF head-mounted display

Publications (2)

Publication Number Publication Date
CN110850977A CN110850977A (en) 2020-02-28
CN110850977B (en) 2023-10-31

Family

ID=69599692

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911077191.0A Active CN110850977B (en) 2019-11-06 2019-11-06 Stereoscopic image interaction method based on 6DOF head-mounted display

Country Status (1)

Country Link
CN (1) CN110850977B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111352510B (en) * 2020-03-30 2023-08-04 歌尔股份有限公司 Virtual model creation method, system and device and head-mounted equipment
CN111862348A (en) * 2020-07-30 2020-10-30 腾讯科技(深圳)有限公司 Video display method, video generation method, video display device, video generation device, video display equipment and storage medium
CN112286355B (en) * 2020-10-28 2022-07-26 杭州天雷动漫有限公司 Interactive method and system for immersive content
CN113709543A (en) * 2021-02-26 2021-11-26 腾讯科技(深圳)有限公司 Video processing method and device based on virtual reality, electronic equipment and medium
CN113515193B (en) * 2021-05-17 2023-10-27 聚好看科技股份有限公司 Model data transmission method and device
CN113823044B (en) * 2021-10-08 2022-09-13 刘智矫 Human body three-dimensional data acquisition room and charging method thereof
CN114866757B (en) * 2022-04-22 2024-03-05 深圳市华星光电半导体显示技术有限公司 Stereoscopic display system and method

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012068547A (en) * 2010-09-27 2012-04-05 Brother Ind Ltd Display device
CN102945564A (en) * 2012-10-16 2013-02-27 上海大学 True 3D modeling system and method based on video perspective type augmented reality
CN103019569A (en) * 2012-12-28 2013-04-03 西安Tcl软件开发有限公司 Interactive device and interactive method thereof
CN104603865A (en) * 2012-05-16 2015-05-06 丹尼尔·格瑞贝格 A system worn by a moving user for fully augmenting reality by anchoring virtual objects
CN104820497A (en) * 2015-05-08 2015-08-05 东华大学 A 3D interaction display system based on augmented reality
CN105446481A (en) * 2015-11-11 2016-03-30 周谆 Gesture based virtual reality human-machine interaction method and system
CN106507094A (en) * 2016-10-31 2017-03-15 北京疯景科技有限公司 The method and device of correction panoramic video display view angle
CN106710002A (en) * 2016-12-29 2017-05-24 深圳迪乐普数码科技有限公司 AR implementation method and system based on positioning of visual angle of observer
CN107204044A (en) * 2016-03-17 2017-09-26 深圳多哚新技术有限责任公司 A kind of picture display process and relevant device based on virtual reality
CN107622524A (en) * 2017-09-29 2018-01-23 百度在线网络技术(北京)有限公司 Display methods and display device for mobile terminal
CN108287607A (en) * 2017-01-09 2018-07-17 成都虚拟世界科技有限公司 A kind of method at control HMD visual angles and wear display equipment
CN109375764A (en) * 2018-08-28 2019-02-22 北京凌宇智控科技有限公司 A kind of head-mounted display, cloud server, VR system and data processing method
CN109478340A (en) * 2016-07-13 2019-03-15 株式会社万代南梦宫娱乐 Simulation system, processing method and information storage medium
CN109801379A (en) * 2019-01-21 2019-05-24 视辰信息科技(上海)有限公司 General augmented reality glasses and its scaling method
CN109884976A (en) * 2018-10-24 2019-06-14 黄杏兰 A kind of AR equipment and its method of operation
CN109923462A (en) * 2016-09-13 2019-06-21 奇跃公司 Sensing spectacles
CN110035328A (en) * 2017-11-28 2019-07-19 辉达公司 Dynamic dithering and delay-tolerant rendering

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130278631A1 (en) * 2010-02-28 2013-10-24 Osterhout Group, Inc. 3d positioning of augmented reality information


Also Published As

Publication number Publication date
CN110850977A (en) 2020-02-28

Similar Documents

Publication Publication Date Title
CN110850977B (en) Stereoscopic image interaction method based on 6DOF head-mounted display
CN106157359B (en) Design method of virtual scene experience system
EP3035681B1 (en) Image processing method and apparatus
JP2019079552A (en) Improvements in and relating to image making
KR20150090183A (en) System and method for generating 3-d plenoptic video images
WO2017128887A1 (en) Method and system for corrected 3d display of panoramic image and device
CN105959666A (en) Method and device for sharing 3d image in virtual reality system
US11694352B1 (en) Scene camera retargeting
JP2019510991A (en) Arrangement for rendering virtual reality with an adaptive focal plane
CN116778368A (en) Planar detection using semantic segmentation
CN111294665A (en) Video generation method and device, electronic equipment and readable storage medium
CN103177467A (en) Method for creating naked eye 3D (three-dimensional) subtitles by using Direct 3D technology
CN112292657A (en) Moving around a computer simulated real scene
CN102262705A (en) Virtual reality method of actual scene
CN110174950B (en) Scene switching method based on transmission gate
EP3057316B1 (en) Generation of three-dimensional imagery to supplement existing content
US20240046507A1 (en) Low bandwidth transmission of event data
CN105979239A (en) Virtual reality terminal, display method of video of virtual reality terminal and device
GB2566276A (en) A method of modifying an image on a computational device
US11120612B2 (en) Method and device for tailoring a synthesized reality experience to a physical setting
GB2585078A (en) Content generation system and method
US11182978B1 (en) Rendering virtual content with coherent visual properties
WO2021041428A1 (en) Method and device for sketch-based placement of virtual objects
CN206757286U (en) A kind of virtual reality multi-media stage
CN117555426B (en) Virtual reality interaction system based on digital twin technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant