CN108805985B - Virtual space method and device - Google Patents


Info

Publication number
CN108805985B
Authority
CN
China
Prior art keywords
virtual space
virtual
entrance
collision
rendering
Prior art date
Legal status
Active
Application number
CN201810244761.XA
Other languages
Chinese (zh)
Other versions
CN108805985A
Inventor
黄明炜
林进浔
郑福
林进津
王巧华
Current Assignee
Fujian Shuboxun Information Technology Co., Ltd.
Original Assignee
Fujian Shuboxun Information Technology Co., Ltd.
Priority date: 2018-03-23
Filing date: 2018-03-23
Publication date: 2022-02-15
Application filed by Fujian Shuboxun Information Technology Co., Ltd.
Priority to CN201810244761.XA
Publication of CN108805985A
Application granted
Publication of CN108805985B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/10 Geometric effects
    • G06T 15/20 Perspective computation
    • G06T 15/205 Image-based rendering
    • G06T 15/40 Hidden part removal
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Abstract

The inventors disclose a virtual space interaction method: mount the models of a virtual scene to the origin of a virtual space; arrange a surrounding object along the periphery of the virtual space; arrange a first collision object at the virtual space entrance, with the same size as the entrance; arrange a second collision object on the virtual camera; set the initial state value of the virtual space entrance to false, and change it to true when the virtual camera enters the virtual space through the entrance. When the virtual camera enters the virtual space while the entrance state value is false, a collision event is triggered and all models in the virtual space except the entrance are hidden; when the virtual camera leaves the virtual space, a leave event is triggered, all hidden models are shown again, and the entrance state value is reset to false. This realizes a natural back-and-forth transition between the virtual space and the real space.

Description

Virtual space method and device
Technical Field
The invention relates to the field of computer software, and in particular to a virtual space interaction method and device.
Background
Augmented Reality (AR) is a technology that calculates the position and orientation of a camera image in real time and overlays corresponding images; its goal is to fit a virtual world over the real world on screen and allow the two to interact.
Augmented reality seamlessly integrates real-world information with virtual-world information. Entity information that would otherwise be difficult to experience within a certain range of time and space in the real world (visual information, sound, taste, touch and the like) is simulated by computer and related technologies, superimposed onto the real world, and perceived by the human senses, producing a sensory experience that goes beyond reality. The real environment and virtual objects are superimposed onto the same picture or space in real time and exist simultaneously. The technology thus presents real-world information and virtual information at the same time, the two complementing and overlaying each other. In visual augmented reality, a user wearing a head-mounted display sees the real world overlaid and combined with computer graphics.
Augmented reality draws on technologies such as multimedia, three-dimensional modeling, real-time video display and control, multi-sensor fusion, real-time tracking and registration, and scene fusion. It provides information beyond what humans could otherwise perceive. The technique was first proposed in 1990, and as the computing power of portable electronic products has grown, its applications have become ever wider.
Most current AR interaction schemes merely superimpose virtual objects onto real space, and most conventional AR experiences amount to displaying an object and interacting with it.
Disclosure of Invention
For this reason, it is necessary to provide a new AR interaction approach that realizes a natural back-and-forth transition between a virtual space and the real space.
To achieve the above object, the inventors provide a virtual space method, comprising the steps of:
mounting a model in a virtual scene to the origin of a virtual space;
arranging a surrounding object along the periphery of the virtual space, wherein the surrounding object does not overlap the virtual space and encloses the area around it except for the virtual space entrance; rendering the virtual space entrance before rendering the surrounding object;
arranging a first collision object at the virtual space entrance, the first collision object having the same size as the virtual space entrance; arranging a second collision object on the virtual camera;
setting the initial state value of the virtual space entrance to false; when the virtual camera enters the virtual space through the virtual space entrance, changing the state value to true;
when the virtual camera enters the virtual space while the state value of the virtual space entrance is false, triggering a collision event and hiding all models in the virtual space except the entrance; when the virtual camera leaves the virtual space, triggering a leave event, showing all hidden models again, and resetting the state value of the virtual space entrance to false.
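By way of illustration only, the enter/leave logic just described can be sketched in TypeScript with the three.js library. Everything below (the class name PortalState, the handler names onEnter and onLeave, and the use of three.js itself) is an assumption made for illustration; the patent does not disclose a concrete engine or API:

```typescript
import * as THREE from 'three';

// A minimal sketch of the entrance state value and the two events.
// Assumes the entrance is a single object (no children) parented,
// like the other models, under the virtual space group.
class PortalState {
  entered = false; // initial state value of the virtual space entrance: false

  constructor(
    private virtualSpace: THREE.Group,
    private entrance: THREE.Object3D,
  ) {}

  // Collision event: the camera has entered while the state value is false.
  onEnter(): void {
    if (this.entered) return;
    this.entered = true;                  // state value changed to true
    this.setVisibleExceptEntrance(false); // hide all models except the entrance
  }

  // Leave event: the camera has left the virtual space.
  onLeave(): void {
    if (!this.entered) return;
    this.setVisibleExceptEntrance(true);  // show all hidden models again
    this.entered = false;                 // reset the state value to false
  }

  private setVisibleExceptEntrance(visible: boolean): void {
    this.virtualSpace.traverse((obj) => {
      if (obj !== this.virtualSpace && obj !== this.entrance) {
        obj.visible = visible;
      }
    });
  }
}
```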
Further, in the virtual space method, before the step of "mounting the model in the virtual scene to the origin of the virtual space", the method further comprises: setting the bottom center point of the three-dimensional model of the virtual space entrance as the virtual space origin, and using the virtual space origin as the parent node of the virtual scene.
Further, in the virtual space method, the surrounding object is processed as follows:
its color write mask is set to 0, so that nothing is written to the color channels;
a fragment is rendered only when its depth is less than or equal to the stored depth value;
the surrounding object is rendered before all opaque objects other than the virtual space entrance.
Further, in the virtual space method, the second collision object is preset to a size just large enough to trigger a collision response when it collides with another collision object.
The inventors also provide a virtual space interaction device, comprising a model mounting unit, a surrounding object setting unit, a rendering unit, a collision object setting unit, a state value setting unit and a collision processing unit;
the model mounting unit is used for mounting a model in a virtual scene to the origin of a virtual space;
the surrounding object setting unit is used for arranging a surrounding object along the periphery of the virtual space, wherein the surrounding object does not overlap the virtual space and encloses the area around it except for the virtual space entrance; the rendering unit is used for rendering preset objects and renders the virtual space entrance before the surrounding object;
the collision object setting unit is used for arranging a first collision object at the virtual space entrance, the first collision object having the same size as the virtual space entrance; the collision object setting unit is also used for arranging a second collision object on the virtual camera;
the state value setting unit is used for setting the initial state value of the virtual space entrance to false; when the virtual camera enters the virtual space through the virtual space entrance, the state value setting unit changes the state value to true;
the collision processing unit is used for triggering a collision event when the virtual camera enters the virtual space while the state value of the virtual space entrance is false, hiding all models in the virtual space except the entrance; the collision processing unit is also used for triggering a leave event when the virtual camera leaves the virtual space and showing all hidden models, whereupon the state value setting unit resets the state value of the virtual space entrance to false.
Further, in the virtual space device, before mounting the model in the virtual scene to the virtual space origin, the model mounting unit sets the bottom center point of the three-dimensional model of the virtual space entrance as the virtual space origin and uses the virtual space origin as the parent node of the virtual scene.
Further, the virtual space device further comprises a surrounding object processing unit, configured to process the surrounding object as follows:
its color write mask is set to 0, so that nothing is written to the color channels;
a fragment is rendered only when its depth is less than or equal to the stored depth value;
the surrounding object is rendered before all opaque objects other than the virtual space entrance.
Further, in the virtual space device, the second collision object is preset to a size just large enough to trigger a collision response when it collides with another collision object.
Unlike the prior art, this technical solution superimposes a virtual space onto real space so that the user can shuttle back and forth between the virtual space and the real space with nothing more than a mobile device, with a natural transition effect. The AR interaction of this solution lets the user interact with the virtual space itself and roam within it using AR spatial positioning. From inside the virtual space, the real space can be seen only through the transfer door; from the real space, the virtual space can likewise be seen only through the transfer door. This guarantees that the illusion is never broken by see-through rendering artifacts.
Drawings
Fig. 1 is a flowchart of a virtual space method according to an embodiment of the invention;
Fig. 2 is a schematic structural diagram of a virtual space device according to an embodiment of the present invention.
Description of reference numerals:
1 - model mounting unit
2 - surrounding object setting unit
3 - rendering unit
4 - collision object setting unit
5 - state value setting unit
6 - collision processing unit
7 - surrounding object processing unit
Detailed Description
To explain in detail the technical content, structural features, objects and effects of the technical solution, a detailed description is given below with reference to the accompanying drawings in conjunction with specific embodiments.
Please refer to Fig. 1, which is a flowchart of the method according to an embodiment of the present invention; the method comprises the following steps:
and S1, setting the bottom central point of the three-dimensional model of the virtual space entrance as a virtual space origin, and taking the virtual space origin as a virtual scene father node.
S2. Mount the models of the virtual scene to the virtual space origin.
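As an illustrative sketch of steps S1 and S2 (again in TypeScript with three.js; the function name buildVirtualSpace and the assumption that the entrance model's bounding box yields its bottom center point are ours, not the patent's):

```typescript
import * as THREE from 'three';

// S1: take the bottom center point of the entrance model's bounding box as
// the virtual space origin; S2: mount the scene models under that origin.
// The models are assumed to be authored relative to the origin.
function buildVirtualSpace(
  entranceModel: THREE.Object3D,
  sceneModels: THREE.Object3D[],
): THREE.Group {
  const bounds = new THREE.Box3().setFromObject(entranceModel);
  const bottomCenter = new THREE.Vector3(
    (bounds.min.x + bounds.max.x) / 2,
    bounds.min.y, // bottom face of the entrance model
    (bounds.min.z + bounds.max.z) / 2,
  );

  const origin = new THREE.Group(); // the parent node of the virtual scene
  origin.position.copy(bottomCenter);
  for (const model of sceneModels) origin.add(model);
  return origin;
}
```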
S3. Arrange a surrounding object along the periphery of the virtual space; the surrounding object does not overlap the virtual space and encloses the area around it except for the virtual space entrance. The virtual space entrance is rendered before the surrounding object. In this step, the surrounding object is processed as follows (a rendering sketch follows this list):
its color write mask is set to 0, so that nothing is written to the color channels;
a fragment is rendered only when its depth is less than or equal to the stored depth value;
the surrounding object is rendered before all opaque objects other than the virtual space entrance.
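In three.js terms, the three rules above map onto a depth-only material plus an explicit render order; the sketch below is one possible realization (the box geometry standing in for the surrounding object, and the names entranceMesh and sceneModels, are illustrative assumptions):

```typescript
import * as THREE from 'three';

declare const entranceMesh: THREE.Mesh;       // hypothetical: the entrance mesh
declare const sceneModels: THREE.Object3D[];  // hypothetical: the other models

// Depth-mask material: writes no color but still writes depth, so anything
// drawn later and lying behind the surrounding object fails the depth test.
const surroundMaterial = new THREE.MeshBasicMaterial();
surroundMaterial.colorWrite = false;               // color write mask = 0
surroundMaterial.depthWrite = true;                // depth is still recorded
surroundMaterial.depthFunc = THREE.LessEqualDepth; // pass only when depth <= stored

const surround = new THREE.Mesh(new THREE.BoxGeometry(10, 10, 10), surroundMaterial);

// Render order: entrance first, surrounding object second, all other opaque
// objects afterwards (three.js draws lower renderOrder values earlier).
entranceMesh.renderOrder = 0;
surround.renderOrder = 1;
for (const model of sceneModels) model.renderOrder = 2;
```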
S4. Arrange a first collision object at the virtual space entrance, the first collision object having the same size as the entrance; arrange a second collision object on the virtual camera, preset to a size just large enough to trigger a collision response when it collides with another collision object.
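One simple way to realize the two collision objects is with axis-aligned bounding boxes, as sketched below (the 0.2-unit camera collider size is an illustrative assumption; the patent only requires a size large enough to trigger a collision response):

```typescript
import * as THREE from 'three';

// First collision object: same size and position as the virtual space entrance.
function makeEntranceCollider(entranceMesh: THREE.Object3D): THREE.Box3 {
  return new THREE.Box3().setFromObject(entranceMesh);
}

// Second collision object: a small box carried by the virtual camera.
function makeCameraCollider(camera: THREE.Camera, size = 0.2): THREE.Box3 {
  const extent = new THREE.Vector3(size, size, size);
  return new THREE.Box3().setFromCenterAndSize(camera.position, extent);
}
```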
S5. Set the initial state value of the virtual space entrance to false; when the virtual camera enters the virtual space through the entrance, change the state value to true.
S6. When the virtual camera enters the virtual space while the state value of the virtual space entrance is false, trigger a collision event and hide all models in the virtual space except the entrance; when the virtual camera leaves the virtual space, trigger a leave event, show all hidden models again, and reset the state value of the virtual space entrance to false.
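Tying S4 to S6 together, a per-frame check might look as follows. This reuses the hypothetical PortalState class and makeCameraCollider helper from the earlier sketches, and it assumes that "inside the virtual space" can be decided by testing the camera position against the space's bounding volume; none of this detail comes from the patent itself:

```typescript
import * as THREE from 'three';

// Per-frame update: detect enter/leave transitions of the virtual camera.
function update(
  camera: THREE.Camera,
  entranceCollider: THREE.Box3,
  spaceBounds: THREE.Box3, // bounding volume of the virtual space (assumed)
  portal: PortalState,     // from the sketch in the disclosure above
): void {
  const cameraCollider = makeCameraCollider(camera);

  const touchingEntrance = cameraCollider.intersectsBox(entranceCollider);
  const insideSpace = spaceBounds.containsPoint(camera.position);

  // S6: entering while the entrance state value is false fires the collision
  // event (PortalState checks the flag); leaving fires the leave event.
  if (insideSpace && touchingEntrance) portal.onEnter();
  else if (!insideSpace) portal.onLeave();
}
```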
Unlike the prior art, the virtual space method of this technical solution superimposes a virtual space onto real space so that the user can shuttle back and forth between the virtual space and the real space with nothing more than a mobile device, with a natural transition effect. The AR interaction of this solution lets the user interact with the virtual space itself and roam within it using AR spatial positioning. From inside the virtual space, the real space can be seen only through the transfer door; from the real space, the virtual space can likewise be seen only through the transfer door. This guarantees that the illusion is never broken by see-through rendering artifacts.
Please refer to Fig. 2, which is a schematic structural diagram of a virtual space device according to another embodiment proposed by the inventors; the device comprises a model mounting unit 1, a surrounding object setting unit 2, a rendering unit 3, a collision object setting unit 4, a state value setting unit 5, a collision processing unit 6 and a surrounding object processing unit 7.
the model mounting unit 1 is used for mounting a model in a virtual scene to an origin of a virtual space; in addition, before the model mounting unit mounts the model in the virtual scene to the virtual space origin, the model mounting unit also sets the bottom center point of the three-dimensional model at the virtual space entrance as the virtual space origin, and takes the virtual space origin as the virtual scene parent node.
The surrounding object setting unit 2 is used for arranging a surrounding object along the periphery of the virtual space, wherein the surrounding object does not overlap the virtual space and encloses the area around it except for the virtual space entrance; the rendering unit 3 is used for rendering preset objects and renders the virtual space entrance before the surrounding object.
The collision object setting unit 4 is used for arranging a first collision object at the virtual space entrance, the first collision object having the same size as the entrance; it is also used for arranging a second collision object on the virtual camera, preset to a size just large enough to trigger a collision response when it collides with another collision object.
The state value setting unit 5 is used for setting the initial state value of the virtual space entrance to false; when the virtual camera enters the virtual space through the entrance, the state value setting unit 5 changes the state value to true.
The collision processing unit 6 is used for triggering a collision event when the virtual camera enters the virtual space while the state value of the virtual space entrance is false, hiding all models in the virtual space except the entrance; it is also used for triggering a leave event when the virtual camera leaves the virtual space and showing all hidden models, whereupon the state value setting unit 5 resets the state value of the virtual space entrance to false.
Further, the virtual space device comprises a surrounding object processing unit 7, configured to process the surrounding object as follows:
its color write mask is set to 0, so that nothing is written to the color channels;
a fragment is rendered only when its depth is less than or equal to the stored depth value;
the surrounding object is rendered before all opaque objects other than the virtual space entrance.
The flow by which the virtual space device implements the virtual space method is roughly as follows:
s1, the model mounting unit 1 sets the bottom center point of the three-dimensional model of the virtual space entrance as the virtual space origin, and takes the virtual space origin as the father node of the virtual scene.
S2, the model mounting unit 1 mounts the model in the virtual scene to the origin of the virtual space.
S3. The surrounding object setting unit 2 arranges a surrounding object along the periphery of the virtual space; the surrounding object does not overlap the virtual space and encloses the area around it except for the virtual space entrance. The rendering unit 3 renders the virtual space entrance before the surrounding object. In this step, the surrounding object is processed as follows:
its color write mask is set to 0, so that nothing is written to the color channels;
a fragment is rendered only when its depth is less than or equal to the stored depth value;
the surrounding object is rendered before all opaque objects other than the virtual space entrance.
S4. The collision object setting unit 4 arranges a first collision object at the virtual space entrance, the first collision object having the same size as the entrance; it also arranges a second collision object on the virtual camera, preset to a size just large enough to trigger a collision response when it collides with another collision object.
S5. The state value setting unit 5 sets the initial state value of the virtual space entrance to false; when the virtual camera enters the virtual space through the entrance, the state value setting unit 5 changes the state value to true.
S6. When the virtual camera enters the virtual space while the state value of the virtual space entrance is false, a collision event is triggered and the collision processing unit 6 hides all models in the virtual space except the entrance; when the virtual camera leaves the virtual space, a leave event is triggered, the collision processing unit 6 shows all hidden models, and the state value setting unit 5 resets the state value of the virtual space entrance to false.
Unlike the prior art, the virtual space device of this technical solution superimposes a virtual space onto real space so that the user can shuttle back and forth between the virtual space and the real space with nothing more than a mobile device, with a natural transition effect. The AR interaction of this solution lets the user interact with the virtual space itself and roam within it using AR spatial positioning. From inside the virtual space, the real space can be seen only through the transfer door; from the real space, the virtual space can likewise be seen only through the transfer door. This guarantees that the illusion is never broken by see-through rendering artifacts.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between them. Also, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to it. Without further limitation, an element introduced by "comprises a" does not exclude the presence of additional identical elements in the process, method, article, or terminal that comprises it. Further, herein, "greater than", "less than", "more than" and the like are understood to exclude the stated number itself, while "above", "below", "within" and the like are understood to include it.
As will be appreciated by one skilled in the art, the above-described embodiments may be provided as a method, apparatus, or computer program product. These embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. All or part of the steps in the methods according to the embodiments may be implemented by a program instructing associated hardware, where the program may be stored in a storage medium readable by a computer device and used to execute all or part of the steps in the methods according to the embodiments. The computer devices, including but not limited to: personal computers, servers, general-purpose computers, special-purpose computers, network devices, embedded devices, programmable devices, intelligent mobile terminals, intelligent home devices, wearable intelligent devices, vehicle-mounted intelligent devices, and the like; the storage medium includes but is not limited to: RAM, ROM, magnetic disk, magnetic tape, optical disk, flash memory, U disk, removable hard disk, memory card, memory stick, network server storage, network cloud storage, etc.
The various embodiments described above are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a computer apparatus to produce a machine, such that the instructions, which execute via the processor of the computer apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer device to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer apparatus to cause a series of operational steps to be performed on the computer apparatus to produce a computer implemented process such that the instructions which execute on the computer apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Although the embodiments have been described, those skilled in the art can, once the basic inventive concept is grasped, make other variations and modifications to them. The above embodiments are therefore only examples and do not limit the scope of the invention: all equivalent structures or equivalent processes based on the contents of this specification and the drawings, applied directly or indirectly in any related technical field, fall within the scope of the invention.

Claims (8)

1. A virtual space interaction method is characterized by comprising the following steps:
mounting a model in a virtual scene to the origin of a virtual space;
arranging a surrounding object along the periphery of the virtual space, wherein the surrounding object does not overlap the virtual space and encloses the area around it except for the virtual space entrance; rendering the virtual space entrance before rendering the surrounding object;
arranging a first collision object at the virtual space entrance, the first collision object having the same size as the virtual space entrance; arranging a second collision object on the virtual camera;
setting the initial state value of the virtual space entrance to false; when the virtual camera enters the virtual space through the virtual space entrance, changing the state value to true;
when the virtual camera enters the virtual space while the state value of the virtual space entrance is false, triggering a collision event and hiding all models in the virtual space except the entrance; when the virtual camera leaves the virtual space, triggering a leave event, showing all hidden models, and resetting the state value of the virtual space entrance to false.
2. The virtual space interaction method of claim 1, further comprising, before the step of "mounting the model in the virtual scene to the origin of the virtual space": setting the bottom center point of the three-dimensional model of the virtual space entrance as the virtual space origin, and using the virtual space origin as the parent node of the virtual scene.
3. The virtual space interaction method of claim 1 or 2, wherein the surrounding object is processed as follows:
its color write mask is set to 0, so that nothing is written to the color channels;
a fragment is rendered only when its depth is less than or equal to the stored depth value;
the surrounding object is rendered before all opaque objects other than the virtual space entrance.
4. The virtual space interaction method of claim 1 or 2, wherein the second collision object is preset to a size just large enough to trigger a collision response when it collides with another collision object.
5. A virtual space interaction device, characterized by comprising a model mounting unit, a surrounding object setting unit, a rendering unit, a collision object setting unit, a state value setting unit and a collision processing unit;
the model mounting unit is used for mounting a model in a virtual scene to the origin of a virtual space;
the surrounding object setting unit is used for arranging a surrounding object along the periphery of the virtual space, wherein the surrounding object does not overlap the virtual space and encloses the area around it except for the virtual space entrance; the rendering unit is used for rendering preset objects and renders the virtual space entrance before the surrounding object;
the collision object setting unit is used for arranging a first collision object at the virtual space entrance, the first collision object having the same size as the virtual space entrance; the collision object setting unit is also used for arranging a second collision object on the virtual camera;
the state value setting unit is used for setting the initial state value of the virtual space entrance to false; when the virtual camera enters the virtual space through the virtual space entrance, the state value setting unit changes the state value to true;
the collision processing unit is used for triggering a collision event when the virtual camera enters the virtual space while the state value of the virtual space entrance is false, hiding all models in the virtual space except the entrance; the collision processing unit is also used for triggering a leave event when the virtual camera leaves the virtual space and showing all hidden models, whereupon the state value setting unit resets the state value of the virtual space entrance to false.
6. The virtual space interaction device of claim 5, wherein, before mounting the model in the virtual scene to the virtual space origin, the model mounting unit sets the bottom center point of the three-dimensional model of the virtual space entrance as the virtual space origin and uses the virtual space origin as the parent node of the virtual scene.
7. The virtual space interaction device of claim 5 or 6, further comprising a surrounding object processing unit for processing the surrounding object as follows:
its color write mask is set to 0, so that nothing is written to the color channels;
a fragment is rendered only when its depth is less than or equal to the stored depth value;
the surrounding object is rendered before all opaque objects other than the virtual space entrance.
8. The virtual space interaction device of claim 5 or 6, wherein the second collision object is preset to a size just large enough to trigger a collision response when it collides with another collision object.
CN201810244761.XA, filed 2018-03-23 (priority date 2018-03-23): Virtual space method and device. Granted as CN108805985B; status: Active.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810244761.XA 2018-03-23 2018-03-23 Virtual space method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810244761.XA 2018-03-23 2018-03-23 Virtual space method and device

Publications (2)

Publication Number Publication Date
CN108805985A CN108805985A (en) 2018-11-13
CN108805985B true CN108805985B (en) 2022-02-15

Family

ID=64095320

Family Applications (1)

Application Number Title Status
CN201810244761.XA Virtual space method and device Active

Country Status (1)

Country Link
CN CN108805985B


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8681179B2 (en) * 2011-12-20 2014-03-25 Xerox Corporation Method and system for coordinating collisions between augmented reality and real reality

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8405680B1 (en) * 2010-04-19 2013-03-26 YDreams S.A., A Public Limited Liability Company Various methods and apparatuses for achieving augmented reality
CN103489214A (en) * 2013-09-10 2014-01-01 北京邮电大学 Virtual reality occlusion handling method, based on virtual model pretreatment, in augmented reality system
CN106157359A (en) * 2015-04-23 2016-11-23 中国科学院宁波材料技术与工程研究所 A kind of method for designing of virtual scene experiencing system
CN105389848A (en) * 2015-11-06 2016-03-09 网易(杭州)网络有限公司 Drawing system and method of 3D scene, and terminal
CN106056663A (en) * 2016-05-19 2016-10-26 京东方科技集团股份有限公司 Rendering method for enhancing reality scene, processing module and reality enhancement glasses
CN106548519A (en) * 2016-11-04 2017-03-29 上海玄彩美科网络科技有限公司 Augmented reality method based on ORB SLAM and the sense of reality of depth camera
CN106598229A (en) * 2016-11-11 2017-04-26 歌尔科技有限公司 Virtual reality scene generation method and equipment, and virtual reality system
CN107368188A (en) * 2017-07-13 2017-11-21 河北中科恒运软件科技股份有限公司 The prospect abstracting method and system based on spatial multiplex positioning in mediation reality

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"基于增强现实的虚拟实景空间漫游机制研究与实现";高超等;《计算机工程与设计》;20071231;第5994-5997页 *
"基于增强现实的虚拟实景空间的研究与实现";高宇等;《小型微型计算机系统》;20060131;第146-150页 *

Also Published As

Publication number Publication date
CN108805985A 2018-11-13


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant