CN110850977A - Stereoscopic image interaction method based on 6DOF head-mounted display - Google Patents

Stereoscopic image interaction method based on 6DOF head-mounted display

Info

Publication number
CN110850977A
CN110850977A
Authority
CN
China
Prior art keywords
virtual
mounted display
shaking
stereoscopic image
6dof
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911077191.0A
Other languages
Chinese (zh)
Other versions
CN110850977B (en)
Inventor
吕云
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Weiai New Economic And Technological Research Institute Co Ltd
Original Assignee
Chengdu Weiai New Economic And Technological Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Weiai New Economic And Technological Research Institute Co Ltd filed Critical Chengdu Weiai New Economic And Technological Research Institute Co Ltd
Priority to CN201911077191.0A priority Critical patent/CN110850977B/en
Publication of CN110850977A publication Critical patent/CN110850977A/en
Application granted granted Critical
Publication of CN110850977B publication Critical patent/CN110850977B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/033 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F 3/0346 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality

Abstract

The invention discloses a stereoscopic image interaction method based on a 6DOF head-mounted display, which comprises the following steps: S1, collecting spatial position information and posture information of the wearer's visual angle; S2, adjusting the posture information and proportion parameters of the virtual object according to the relative relationship between the spatial position information and the real scene, constructing and rendering a virtual three-dimensional scene, and establishing a dynamic motion model for each dynamic object; S3, displaying the three-dimensional scene and the dynamic motion model on a virtual display interface; S4, adjusting rendering effect parameters of the three-dimensional scene and the dynamic motion model according to the shaking grade of the displayed virtual picture; S5, acquiring the operation forms of the user's two hands, the corresponding action parameters, and the input voice instructions; and S6, performing a virtual response of the three-dimensional scene based on the operation forms, the action parameters, and the input voice instructions.

Description

Stereoscopic image interaction method based on 6DOF head-mounted display
Technical Field
The invention belongs to the technical field of AR, and particularly relates to a stereoscopic image interaction method based on a 6DOF head-mounted display.
Background
Augmented Reality (AR) is a technology that fuses virtual information with the real world. Computer-generated virtual information such as text, images, three-dimensional models, music, and video is simulated and overlaid onto the real world by means of multimedia, three-dimensional modeling, real-time tracking and registration, intelligent interaction, sensing, and other techniques; the two kinds of information complement each other, thereby enhancing the real world.
The interaction between the user, the simulated environment, and the various virtual objects within it is an important component of AR technology. However, shaking of the virtual picture and the limited means of interacting with the virtual environment result in a poor user experience.
Disclosure of Invention
In view of the above shortcomings of the prior art, the present invention aims to provide a stereoscopic image interaction method based on a 6DOF head-mounted display that solves the problem of the poor effect of existing virtual interaction.
In order to achieve the purpose, the invention adopts the technical scheme that:
a method of stereoscopic image interaction based on a 6DOF head mounted display, comprising:
s1, collecting the spatial position information and the posture information of the visual angle of the wearer;
s2, adjusting the posture information and the proportion parameter of the virtual object according to the relative relation between the space position information and the real scene, constructing and rendering a virtual three-dimensional scene, and establishing a dynamic motion model for the dynamic object;
s3, displaying the three-dimensional scene and the dynamic motion model on a virtual display interface;
s4, adjusting rendering effect parameters of the three-dimensional scene and the dynamic motion model according to the shaking grade of the displayed virtual picture;
s5, acquiring the operation forms of the two hands of the user, the action parameters of the two hands of the user and the input voice instructions;
and S6, performing a virtual response of the three-dimensional scene based on the operation forms, the action parameters, and the input voice instructions.
Preferably, the spatial position information is perspective spatial image information captured by at least two sets of cameras positioned above the 6DOF head mounted display.
Preferably, the attitude information includes a pitch angle, a yaw angle, and a roll angle of the 6DOF head mounted display device.
Preferably, the shaking grade is the degree of shaking the wearer perceives in the virtual picture, and it is divided into primary shaking, intermediate shaking, and overload shaking.
Preferably, the method for adjusting the rendering effect parameters of the three-dimensional scene and the dynamic motion model is as follows: the wearer selects a specific shaking grade from the shaking grade options displayed on the 6DOF head-mounted display, and the rendering effect of the virtual interactive picture is then adjusted to reduce the shaking perceived by the wearer.
Preferably, the rendering effect parameters are picture delay, light intensity, luminous map subdivision, light buffering and picture dithering.
Preferably, the voice command of the wearer is collected and received through the voice input device.
Preferably, a voice model is generated and verified based on a deep learning algorithm to obtain the instruction intention of the wearer's voice instruction, and a corresponding action instruction is generated according to that intention.
Preferably, the operation forms and the motion parameters of the two hands of the user are acquired through a camera on the top of the 6DOF head-mounted display, and each operation form corresponds to a unique operation instruction.
Preferably, the corresponding operation is performed on the virtual object in the virtual screen in accordance with an operation command corresponding to the two-hand operation form of the wearer.
The stereoscopic image interaction method based on the 6DOF head-mounted display has the following beneficial effects:
the method comprises the steps of constructing and rendering a virtual three-dimensional scene with high reality degree by collecting spatial position information and posture information of a wearer visual angle, and establishing a dynamic motion model for a dynamic object; the virtual scene is controlled by adopting voice instruction input and a two-hand operation form, so that the selectivity of user interaction experience is increased; meanwhile, the shaking rating is adopted, and the virtual scene rendering parameters are adjusted according to the shaking grade selected by the user, so that the user requirements are met, and the experience comfort level of the user is improved.
Drawings
Fig. 1 is a flowchart of a stereoscopic image interaction method based on a 6DOF head mounted display.
Detailed Description
The following description of the embodiments of the present invention is provided to help those skilled in the art understand the invention. It should be understood, however, that the invention is not limited to the scope of these embodiments: various changes that are apparent to those skilled in the art and that do not depart from the spirit and scope of the invention as defined in the appended claims remain within its protection, as does all subject matter produced using the inventive concept.
According to an embodiment of the present application, referring to fig. 1, the stereoscopic image interaction method based on a 6DOF head-mounted display of the present solution includes:
s1, collecting the spatial position information and the posture information of the visual angle of the wearer;
s2, adjusting the posture information and the proportion parameter of the virtual object according to the relative relation between the space position information and the real scene, constructing and rendering a virtual three-dimensional scene, and establishing a dynamic motion model for the dynamic object;
s3, displaying the three-dimensional scene and the dynamic motion model on a virtual display interface;
s4, adjusting rendering effect parameters of the three-dimensional scene and the dynamic motion model according to the shaking grade of the displayed virtual picture;
s5, acquiring the operation forms of the two hands of the user, the action parameters of the two hands of the user and the input voice instructions;
and S6, performing a virtual response of the three-dimensional scene based on the operation forms, the action parameters, and the input voice instructions.
The above steps are described in detail below.
S1, collecting spatial position information and posture information of the visual angle of the wearer;
At least two sets of cameras above the 6DOF head-mounted display can rotate and deflect, and the spatial information of the user's visual angle is captured by these rotatable cameras.
The attitude information includes the pitch angle, yaw angle, and roll angle of the 6DOF head-mounted display device as worn by the user, and reflects the user's attitude in the virtual environment.
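For illustration only (this is not part of the disclosed method), a minimal Python sketch of turning such an attitude into a rotation for the virtual viewpoint; the axis assignments and rotation order below are assumptions, since the disclosure does not specify a convention:

```python
import numpy as np

def attitude_to_rotation(pitch_deg: float, yaw_deg: float, roll_deg: float) -> np.ndarray:
    """Convert the HMD attitude (pitch, yaw, roll, in degrees) into a 3x3 rotation matrix,
    assuming pitch about x, yaw about y and roll about z."""
    p, y, r = np.radians([pitch_deg, yaw_deg, roll_deg])
    rx = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(p), -np.sin(p)],
                   [0.0, np.sin(p),  np.cos(p)]])   # pitch (about x)
    ry = np.array([[ np.cos(y), 0.0, np.sin(y)],
                   [0.0, 1.0, 0.0],
                   [-np.sin(y), 0.0, np.cos(y)]])   # yaw (about y)
    rz = np.array([[np.cos(r), -np.sin(r), 0.0],
                   [np.sin(r),  np.cos(r), 0.0],
                   [0.0, 0.0, 1.0]])                # roll (about z)
    return ry @ rx @ rz  # R = R_yaw @ R_pitch @ R_roll
```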
Step S2, adjusting the posture information and the proportion parameter of the virtual object according to the relative relation between the space position information and the real scene, constructing and rendering a virtual three-dimensional scene, and establishing a dynamic motion model for the dynamic object;
and S1, acquiring the user posture information and the spatial position information, and further adjusting the posture information and the proportional parameter of the virtual object according to the relative relationship between the spatial position information and the real scene, so that the posture of the user in the virtual environment is closer to the real value and better conforms to the virtual environment.
Because the cameras capture images in their own spatial coordinate system, whereas interaction between the virtual environment and reality relies on the coordinate system of the virtual display interface, and the two coordinate systems are independent of each other, the spatial position information and posture information acquired by the at least two groups of cameras must be converted into virtual position information in the virtual reality display interface.
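A minimal sketch of this coordinate conversion, assuming the relationship between the camera's spatial coordinate system and the virtual display interface coordinate system is already available as a 4x4 homogeneous transform (how that calibration is obtained is not detailed in the disclosure):

```python
import numpy as np

def camera_to_virtual(point_cam: np.ndarray, cam_to_virtual: np.ndarray) -> np.ndarray:
    """Map a 3D point from the camera's spatial coordinate system into the
    virtual display interface coordinate system via a 4x4 homogeneous transform."""
    p = np.append(point_cam, 1.0)   # homogeneous coordinates
    q = cam_to_virtual @ p
    return q[:3] / q[3]

# Example: with an identity calibration the point keeps its coordinates.
# camera_to_virtual(np.array([0.1, 0.0, 1.5]), np.eye(4)) -> array([0.1, 0. , 1.5])
```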
A virtual three-dimensional scene for human-computer interaction is then constructed, and the scene is rendered to reproduce the real environment with a high degree of realism.
A dynamic motion model is established for each dynamic object: its attitude information and spatial position information are converted into virtual position information in the virtual reality display interface, and the dynamic object is superimposed into the virtual three-dimensional scene.
S3, displaying the three-dimensional scene and the dynamic motion model on a virtual display interface;
the virtual scene constructed in step S2 is displayed on a 6DOF head-mounted display, and a user can interact with the virtual scene in real time through the display while wearing the 6DOF head-mounted device.
S4, adjusting rendering effect parameters of the three-dimensional scene and the dynamic motion model according to the shaking grade of the displayed virtual picture;
The shaking grade describes the degree of shaking the wearer perceives in the virtual picture, and different users may feel different degrees of shaking even when facing the same virtual scene. The shaking grade is therefore divided into levels; although finer divisions are possible, this scheme uses three: primary shaking, intermediate shaking, and overload shaking.
Primary shaking is shaking, or a range of shaking, that the user can tolerate.
Intermediate shaking is still within acceptable limits, but the user can already clearly perceive the shaking caused by the current virtual environment.
Overload shaking exceeds the user's tolerance and, in serious cases, can cause dizziness, vomiting, and similar symptoms.
According to actual requirements, the user can bring up the shaking grade options of the virtual scene within the three-dimensional scene by means of two-hand operation forms and/or input voice commands, and then select a specific shaking grade according to how the shaking currently feels.
The rendering effect parameters of the three-dimensional scene and the dynamic motion model are then adjusted according to the specific shaking grade selected by the user, so as to reduce the shaking the user perceives. The rendering effect parameters are picture delay, light intensity, luminous map subdivision, light buffering, and picture dithering.
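As an illustrative sketch only, the selected shaking grade can be mapped to a preset of these rendering effect parameters; the concrete values below are hypothetical, since the disclosure names the parameters but not their magnitudes:

```python
from dataclasses import dataclass
from enum import Enum

class ShakeLevel(Enum):
    PRIMARY = 1       # tolerable shaking
    INTERMEDIATE = 2  # clearly perceptible but acceptable
    OVERLOAD = 3      # exceeds the wearer's tolerance

@dataclass
class RenderParams:
    picture_delay_ms: float
    light_intensity: float
    lightmap_subdivision: int
    light_buffering: bool
    picture_dithering: bool

# Hypothetical presets: a higher reported shake level trades visual richness for stability.
PRESETS = {
    ShakeLevel.PRIMARY:      RenderParams(20.0, 1.0, 4, True,  True),
    ShakeLevel.INTERMEDIATE: RenderParams(15.0, 0.8, 2, True,  False),
    ShakeLevel.OVERLOAD:     RenderParams(10.0, 0.6, 1, False, False),
}

def adjust_rendering(level: ShakeLevel) -> RenderParams:
    """Return the rendering effect parameters for the shake level selected by the wearer."""
    return PRESETS[level]
```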
Step S5, obtaining the operation forms of the two hands of the user, the action parameters and the input voice instructions;
the input instructions comprise two-hand shape instructions of the user and voice instructions of the user, and information interaction of the virtual scene is achieved through the two-hand shape instructions and the voice instructions.
Voice instructions are collected through a voice input device, which may be a microphone (MIC); the device is not limited herein.
A voice model is generated and verified for the collected voice instruction based on a deep learning algorithm to obtain the instruction intention of the wearer's voice instruction; a corresponding action instruction is then generated according to that intention and applied to the virtual scene.
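A simplified sketch of the final stage of that pipeline, assuming a pre-trained intent classifier is available (the deep learning model itself is not detailed in the disclosure); the intent labels and action names are purely illustrative:

```python
# Hypothetical mapping from recognized intent labels to virtual-scene action instructions.
INTENT_TO_ACTION = {
    "select_object":   "SELECT",
    "rotate_object":   "ROTATE",
    "show_shake_menu": "SHOW_SHAKE_LEVELS",
}

def voice_to_action(audio_frames, intent_model) -> str:
    """Classify the captured audio into an intent (using the assumed pre-trained
    model) and translate that intent into an action instruction for the scene."""
    intent = intent_model.predict(audio_frames)   # e.g. "select_object"
    return INTENT_TO_ACTION.get(intent, "NO_OP")
```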
The operation forms and the action parameters of the two hands of the user are acquired through a camera at the top of the 6DOF head-mounted display, each operation form corresponds to a unique operation instruction, and each operation instruction corresponds to a unique virtual scene instruction.
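For illustration, such a one-to-one lookup might look as follows; the operation forms and instruction names are hypothetical, since the disclosure only requires that each form map to a unique instruction:

```python
# Hypothetical one-to-one mapping from recognized two-hand operation forms
# to operation instructions.
FORM_TO_INSTRUCTION = {
    "pinch":      "GRAB_OBJECT",
    "open_palms": "RELEASE_OBJECT",
    "swipe_left": "ROTATE_SCENE_LEFT",
    "fists":      "OPEN_SHAKE_LEVEL_MENU",
}

def form_to_instruction(operation_form: str) -> str:
    """Return the unique operation instruction for a recognized two-hand operation form."""
    return FORM_TO_INSTRUCTION.get(operation_form, "NO_OP")
```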
Step S6, performing a virtual response of the three-dimensional scene based on the operation forms, the action parameters, and the input voice instructions.
The voice instructions and operation form instructions acquired in step S5 are converted into the corresponding operation instructions in the virtual scene, thereby implementing the response of the virtual scene.
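A minimal sketch of that dispatch, using a hypothetical scene object whose methods stand in for the actual virtual-scene operations:

```python
from typing import Callable, Dict

class VirtualScene:
    """Hypothetical stand-in for the rendered three-dimensional scene."""
    def grab(self, object_id: str) -> None:
        print(f"grab {object_id}")
    def release(self, object_id: str) -> None:
        print(f"release {object_id}")
    def rotate(self, object_id: str, degrees: float) -> None:
        print(f"rotate {object_id} by {degrees} degrees")

def respond(scene: VirtualScene, instruction: str, *params) -> None:
    """Apply an operation instruction (from a gesture or a voice command) to the scene."""
    handlers: Dict[str, Callable[..., None]] = {
        "GRAB_OBJECT":       scene.grab,
        "RELEASE_OBJECT":    scene.release,
        "ROTATE_SCENE_LEFT": lambda obj: scene.rotate(obj, -15.0),
    }
    handler = handlers.get(instruction)
    if handler is not None:
        handler(*params)

# Example: respond(VirtualScene(), "GRAB_OBJECT", "cube_01")
```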
By collecting spatial position information and posture information of the wearer's visual angle, the method constructs and renders a highly realistic virtual three-dimensional scene and establishes a dynamic motion model for each dynamic object; the virtual scene is controlled through voice instruction input and two-hand operation forms, which broadens the user's choice of interaction modes; meanwhile, shaking rating is adopted and the virtual scene rendering parameters are adjusted according to the shaking grade selected by the user, which meets user requirements and improves the comfort of the user experience.
While the embodiments of the invention have been described in detail in connection with the accompanying drawings, it is not intended to limit the scope of the invention. Various modifications and changes may be made by those skilled in the art without inventive step within the scope of the appended claims.

Claims (10)

1. A method for stereoscopic image interaction based on a 6DOF head mounted display, comprising:
s1, collecting the spatial position information and the posture information of the visual angle of the wearer;
s2, adjusting the posture information and the proportion parameter of the virtual object according to the relative relation between the space position information and the real scene, constructing and rendering a virtual three-dimensional scene, and establishing a dynamic motion model for the dynamic object;
s3, displaying the three-dimensional scene and the dynamic motion model on a virtual display interface;
s4, adjusting rendering effect parameters of the three-dimensional scene and the dynamic motion model according to the shaking grade of the displayed virtual picture;
s5, acquiring the operation forms of the two hands of the user, the action parameters of the two hands of the user and the input voice instructions;
and S6, performing a virtual response of the three-dimensional scene based on the operation forms, the action parameters, and the input voice instructions.
2. The method for 6DOF head mounted display based stereoscopic image interaction according to claim 1, wherein: the spatial position information is visual angle spatial image information shot by at least two groups of cameras positioned above the 6DOF head-mounted display.
3. The method for 6DOF head mounted display based stereoscopic image interaction according to claim 1, wherein: the attitude information includes a pitch angle, a yaw angle, and a roll angle of the 6DOF head mounted display device.
4. The method for 6DOF head mounted display based stereoscopic image interaction according to claim 1, wherein: the shaking grade is a shaking degree that a wearer feels about the virtual picture, and the shaking grade is divided into primary shaking, intermediate shaking and overload shaking.
5. The method for 6DOF head mounted display based stereoscopic image interaction according to claim 1, wherein the method of adjusting rendering effect parameters of the three-dimensional scene and the dynamic motion model is as follows: the wearer selects a specific shaking grade from the shaking grade options displayed on the 6DOF head-mounted display, and the rendering effect of the virtual interactive picture is then adjusted to reduce the shaking perceived by the wearer.
6. The method of 6DOF head mounted display based stereoscopic image interaction according to claim 5, wherein the rendering effect parameters are picture delay, light intensity, luminous map subdivision, light buffering, and picture dithering.
7. The method for 6DOF head mounted display based stereoscopic image interaction according to claim 1, wherein the voice command of the wearer is collected and received through a voice input device.
8. The method for 6DOF head-mounted display-based stereoscopic image interaction according to claim 7, wherein the generation and verification of the voice model are performed based on a deep learning algorithm, an instruction intention of the voice instruction of the wearer is obtained, and a corresponding action instruction is generated according to the instruction intention.
9. The method for stereoscopic image interaction based on a 6DOF head mounted display according to claim 1, wherein the operation forms and the motion parameters of both hands of the user are acquired through a camera at the top of the 6DOF head mounted display, and each operation form corresponds to a unique operation instruction.
10. The method for stereoscopic image interaction based on a 6DOF head mounted display according to claim 9, wherein the corresponding operation is performed on the virtual object in the virtual screen in accordance with an operation instruction corresponding to a two-handed operation form of the wearer.
CN201911077191.0A 2019-11-06 2019-11-06 Stereoscopic image interaction method based on 6DOF head-mounted display Active CN110850977B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911077191.0A CN110850977B (en) 2019-11-06 2019-11-06 Stereoscopic image interaction method based on 6DOF head-mounted display

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911077191.0A CN110850977B (en) 2019-11-06 2019-11-06 Stereoscopic image interaction method based on 6DOF head-mounted display

Publications (2)

Publication Number Publication Date
CN110850977A true CN110850977A (en) 2020-02-28
CN110850977B CN110850977B (en) 2023-10-31

Family

ID=69599692

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911077191.0A Active CN110850977B (en) 2019-11-06 2019-11-06 Stereoscopic image interaction method based on 6DOF head-mounted display

Country Status (1)

Country Link
CN (1) CN110850977B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111352510A (en) * 2020-03-30 2020-06-30 歌尔股份有限公司 Virtual model creating method, system and device and head-mounted equipment
CN111862348A (en) * 2020-07-30 2020-10-30 腾讯科技(深圳)有限公司 Video display method, video generation method, video display device, video generation device, video display equipment and storage medium
CN112286355A (en) * 2020-10-28 2021-01-29 杭州如雷科技有限公司 Interactive method and system for immersive content
CN113515193A (en) * 2021-05-17 2021-10-19 聚好看科技股份有限公司 Model data transmission method and device
CN113709543A (en) * 2021-02-26 2021-11-26 腾讯科技(深圳)有限公司 Video processing method and device based on virtual reality, electronic equipment and medium
CN113823044A (en) * 2021-10-08 2021-12-21 刘智矫 Human body three-dimensional data acquisition room and charging method thereof
CN114866757A (en) * 2022-04-22 2022-08-05 深圳市华星光电半导体显示技术有限公司 Stereoscopic display system and method
CN111862348B (en) * 2020-07-30 2024-04-30 深圳市腾讯计算机系统有限公司 Video display method, video generation method, device, equipment and storage medium

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012068547A (en) * 2010-09-27 2012-04-05 Brother Ind Ltd Display device
CN102945564A (en) * 2012-10-16 2013-02-27 上海大学 True 3D modeling system and method based on video perspective type augmented reality
CN103019569A (en) * 2012-12-28 2013-04-03 西安Tcl软件开发有限公司 Interactive device and interactive method thereof
US20130278631A1 (en) * 2010-02-28 2013-10-24 Osterhout Group, Inc. 3d positioning of augmented reality information
CN104603865A (en) * 2012-05-16 2015-05-06 丹尼尔·格瑞贝格 A system worn by a moving user for fully augmenting reality by anchoring virtual objects
CN104820497A (en) * 2015-05-08 2015-08-05 东华大学 A 3D interaction display system based on augmented reality
CN105446481A (en) * 2015-11-11 2016-03-30 周谆 Gesture based virtual reality human-machine interaction method and system
CN106507094A (en) * 2016-10-31 2017-03-15 北京疯景科技有限公司 The method and device of correction panoramic video display view angle
CN106710002A (en) * 2016-12-29 2017-05-24 深圳迪乐普数码科技有限公司 AR implementation method and system based on positioning of visual angle of observer
CN107204044A (en) * 2016-03-17 2017-09-26 深圳多哚新技术有限责任公司 A kind of picture display process and relevant device based on virtual reality
CN107622524A (en) * 2017-09-29 2018-01-23 百度在线网络技术(北京)有限公司 Display methods and display device for mobile terminal
CN108287607A (en) * 2017-01-09 2018-07-17 成都虚拟世界科技有限公司 A kind of method at control HMD visual angles and wear display equipment
CN109375764A (en) * 2018-08-28 2019-02-22 北京凌宇智控科技有限公司 A kind of head-mounted display, cloud server, VR system and data processing method
CN109478340A (en) * 2016-07-13 2019-03-15 株式会社万代南梦宫娱乐 Simulation system, processing method and information storage medium
CN109801379A (en) * 2019-01-21 2019-05-24 视辰信息科技(上海)有限公司 General augmented reality glasses and its scaling method
CN109884976A (en) * 2018-10-24 2019-06-14 黄杏兰 A kind of AR equipment and its method of operation
CN109923462A (en) * 2016-09-13 2019-06-21 奇跃公司 Sensing spectacles
CN110035328A (en) * 2017-11-28 2019-07-19 辉达公司 Dynamic dithering and delay-tolerant rendering

Patent Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130278631A1 (en) * 2010-02-28 2013-10-24 Osterhout Group, Inc. 3d positioning of augmented reality information
JP2012068547A (en) * 2010-09-27 2012-04-05 Brother Ind Ltd Display device
CN104603865A (en) * 2012-05-16 2015-05-06 丹尼尔·格瑞贝格 A system worn by a moving user for fully augmenting reality by anchoring virtual objects
CN102945564A (en) * 2012-10-16 2013-02-27 上海大学 True 3D modeling system and method based on video perspective type augmented reality
CN103019569A (en) * 2012-12-28 2013-04-03 西安Tcl软件开发有限公司 Interactive device and interactive method thereof
CN104820497A (en) * 2015-05-08 2015-08-05 东华大学 A 3D interaction display system based on augmented reality
CN105446481A (en) * 2015-11-11 2016-03-30 周谆 Gesture based virtual reality human-machine interaction method and system
CN107204044A (en) * 2016-03-17 2017-09-26 深圳多哚新技术有限责任公司 A kind of picture display process and relevant device based on virtual reality
CN109478340A (en) * 2016-07-13 2019-03-15 株式会社万代南梦宫娱乐 Simulation system, processing method and information storage medium
CN109923462A (en) * 2016-09-13 2019-06-21 奇跃公司 Sensing spectacles
CN106507094A (en) * 2016-10-31 2017-03-15 北京疯景科技有限公司 The method and device of correction panoramic video display view angle
CN106710002A (en) * 2016-12-29 2017-05-24 深圳迪乐普数码科技有限公司 AR implementation method and system based on positioning of visual angle of observer
CN108287607A (en) * 2017-01-09 2018-07-17 成都虚拟世界科技有限公司 A kind of method at control HMD visual angles and wear display equipment
CN107622524A (en) * 2017-09-29 2018-01-23 百度在线网络技术(北京)有限公司 Display methods and display device for mobile terminal
CN110035328A (en) * 2017-11-28 2019-07-19 辉达公司 Dynamic dithering and delay-tolerant rendering
CN109375764A (en) * 2018-08-28 2019-02-22 北京凌宇智控科技有限公司 A kind of head-mounted display, cloud server, VR system and data processing method
CN109884976A (en) * 2018-10-24 2019-06-14 黄杏兰 A kind of AR equipment and its method of operation
CN109801379A (en) * 2019-01-21 2019-05-24 视辰信息科技(上海)有限公司 General augmented reality glasses and its scaling method

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111352510A (en) * 2020-03-30 2020-06-30 歌尔股份有限公司 Virtual model creating method, system and device and head-mounted equipment
CN111862348A (en) * 2020-07-30 2020-10-30 腾讯科技(深圳)有限公司 Video display method, video generation method, video display device, video generation device, video display equipment and storage medium
CN111862348B (en) * 2020-07-30 2024-04-30 深圳市腾讯计算机系统有限公司 Video display method, video generation method, device, equipment and storage medium
CN112286355A (en) * 2020-10-28 2021-01-29 杭州如雷科技有限公司 Interactive method and system for immersive content
CN113709543A (en) * 2021-02-26 2021-11-26 腾讯科技(深圳)有限公司 Video processing method and device based on virtual reality, electronic equipment and medium
CN113515193A (en) * 2021-05-17 2021-10-19 聚好看科技股份有限公司 Model data transmission method and device
CN113515193B (en) * 2021-05-17 2023-10-27 聚好看科技股份有限公司 Model data transmission method and device
CN113823044A (en) * 2021-10-08 2021-12-21 刘智矫 Human body three-dimensional data acquisition room and charging method thereof
CN114866757A (en) * 2022-04-22 2022-08-05 深圳市华星光电半导体显示技术有限公司 Stereoscopic display system and method
CN114866757B (en) * 2022-04-22 2024-03-05 深圳市华星光电半导体显示技术有限公司 Stereoscopic display system and method

Also Published As

Publication number Publication date
CN110850977B (en) 2023-10-31

Similar Documents

Publication Publication Date Title
CN110850977A (en) Stereoscopic image interaction method based on 6DOF head-mounted display
CN106157359B (en) Design method of virtual scene experience system
EP3035681B1 (en) Image processing method and apparatus
CN104536579B (en) Interactive three-dimensional outdoor scene and digital picture high speed fusion processing system and processing method
GB2534580A (en) Image processing
CN109696961A (en) Historical relic machine & equipment based on VR technology leads reward and realizes system and method, medium
JPH10334275A (en) Method and system for virtual reality and storage medium
JP6384940B2 (en) 3D image display method and head mounted device
WO2019063976A1 (en) Head-mountable display system
CN111880654A (en) Image display method and device, wearable device and storage medium
EP3788464B1 (en) Moving about a computer simulated reality setting
CN107071388A (en) A kind of three-dimensional augmented reality display methods and device
CN103177467A (en) Method for creating naked eye 3D (three-dimensional) subtitles by using Direct 3D technology
CN102262705A (en) Virtual reality method of actual scene
CN106843473B (en) AR-based children painting system and method
TWI755636B (en) Method and program for playing virtual reality image
CN105979239A (en) Virtual reality terminal, display method of video of virtual reality terminal and device
US11328488B2 (en) Content generation system and method
US11127208B2 (en) Method for providing virtual reality image and program using same
CN110174950B (en) Scene switching method based on transmission gate
CN106125927B (en) Image processing system and method
CN105933690A (en) Adaptive method and device for adjusting 3D image content size
US11727645B2 (en) Device and method for sharing an immersion in a virtual environment
CN115908755A (en) AR projection method, system and AR projector
CN113552947A (en) Virtual scene display method and device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant