CN112245910B - Modeling and limit movement method and system based on Quest head display - Google Patents

Modeling and extreme motion method and system based on Quest head display

Info

Publication number
CN112245910B
CN112245910B (application CN202011167301.5A)
Authority
CN
China
Prior art keywords
quest
experience
model
game scene
game
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011167301.5A
Other languages
Chinese (zh)
Other versions
CN112245910A (en)
Inventor
张衡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ningbo Qiusuo Media Co ltd
Suzhou Huantiao Sports Culture Technology Co ltd
Original Assignee
Ningbo Qiusuo Media Co ltd
Suzhou Huantiao Sports Culture Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo Qiusuo Media Co ltd, Suzhou Huantiao Sports Culture Technology Co ltd filed Critical Ningbo Qiusuo Media Co ltd
Priority to CN202011167301.5A
Publication of CN112245910A
Application granted
Publication of CN112245910B
Legal status: Active

Classifications

    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 - Controlling the output signals based on the game progress
    • A63F13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/80 - Special adaptations for executing a specific game genre or game mode
    • A63F13/816 - Athletics, e.g. track-and-field sports
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 - Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/80 - Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
    • A63F2300/8005 - Athletics
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 - Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/80 - Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
    • A63F2300/8082 - Virtual reality

Abstract

The application discloses a modeling and experience method and system based on a Quest head display. The method comprises the following steps: S1, collecting a game scene picture; S2, establishing a spatial modeling position relationship corresponding to the game scene picture based on the Quest head display spatial positioning method; S3, establishing an experience model according to the established spatial modeling position relationship. The technique uses the Quest head display's spatial positioning to establish a one-to-one scene correspondence: the spatial relationship between the experience-preparation position in the game scene and that in the real environment is kept consistent, and each real model corresponds one-to-one with a model in the game scene. Game scene model data are used to build the corresponding physical model and experience area in reality, so that the real scene restores the game scene one to one: a stake the player touches really is a stake, and a cliff in the game really is a drop, bringing the player a genuinely real game experience.

Description

Modeling and extreme motion method and system based on Quest head display
Technical Field
The application relates to the technical field of sports, and in particular to a modeling and experience method and system based on a Quest head display.
Background
Indoor amusement-park experience schemes currently on the market are basically limited to 3DoF positioning, producing an "immersive" experience merely by playing 360-degree video.
Existing setups are simple: the user wears a headset and plays indoors, but the sensations within the game cannot be felt physically; the head-mounted display alone provides only hollow entertainment.
Thus, the prior art has the following defects:
the site space cannot be restored one to one, there is no sense of space, and the player cannot interact with the surroundings during movement;
existing indoor game entertainment has no model that simulates the game scene and cannot provide the player with a real experience area or a physical model matching the game scene model, so the player's emotions cannot be engaged;
the player cannot be given real physical feedback matching the game, so the player's experience is poor.
Disclosure of Invention
The application mainly aims to provide a modeling and experience method and system based on a Quest head display, so as to solve the problems described above.
In order to achieve the above object, the present application provides the following technical solutions:
the first aspect of the application provides a modeling method based on a Quest head-up display, which comprises the following steps:
s1, collecting a game scene picture;
s2, establishing a spatial modeling position relation corresponding to the game scene picture based on a Quest head display spatial positioning method;
s3, establishing an experience model according to the established spatial modeling position relationship.
Preferably, the collecting of the game scene picture includes:
acquiring the experience-preparation position data, model contour data, and model direction of the game space in the game scene picture.
Preferably, establishing the spatial modeling position relationship corresponding to the game scene picture based on the Quest head display spatial positioning method includes:
determining the size, starting point, and positive direction of the space that the Quest head display needs to identify, according to the acquired experience-preparation position data, model contour data, and model direction of the game space in the game scene picture; and
establishing the spatial modeling position relationship corresponding to the game scene picture according to the determined space size, starting point, and positive direction that the Quest head display needs to identify.
Preferably, establishing the experience model according to the established spatial modeling position relationship includes:
establishing a preparation area model according to the established spatial modeling position relationship; and
establishing a drop area model according to the established spatial modeling position relationship.
Preferably, establishing the preparation area model according to the established spatial modeling position relationship includes:
establishing a positioning space matrix in the experience preparation area based on the Quest head display spatial positioning method; and
establishing, within the positioning space matrix, an experience preparation area corresponding to the game scene based on the spatial modeling position relationship of the game scene picture.
Preferably, establishing the drop area model according to the established spatial modeling position relationship includes:
establishing, in the drop experience area, a drop area model corresponding to the game scene based on the spatial modeling position relationship of the game scene picture;
scaling the obtained drop area model; and
establishing, in the drop experience area, an experience drop area corresponding to the game scene according to the drop area model.
Preferably, establishing, in the drop experience area, an experience drop area corresponding to the game scene according to the scaled drop area model includes:
acquiring the drop model contour data in the game scene picture;
setting a scaling ratio and scaling the drop model contour data according to that ratio; and
establishing, in the drop experience area, an experience drop area corresponding to the game scene according to the scaled drop model contour data.
The second aspect of the application provides an extreme motion experience method based on a Quest head display, comprising the following steps:
first, starting the Quest head display;
second, detecting, with the Quest head display, whether the positioning space matrix is normal; if so, entering the experience preparation area; and
third, running the game and entering the experience drop area based on the Quest head display.
Preferably, in the second step, detecting whether the positioning space matrix is normal with the Quest head display further includes:
if not, performing spatial calibration with the Quest head display; and
entering the experience preparation area after calibration is finished.
The third aspect of the present application provides an extreme motion experience system based on a Quest head display, comprising:
a game terminal: for running a game and feeding back the game scene to the Quest head display;
a Quest head display: for establishing and detecting a positioning space matrix in the experience preparation area based on the Quest head display spatial positioning method;
an experience preparation area: for providing the user with an extreme motion preparation area based on the Quest head display and corresponding to the game scene; and
an experience drop area: for providing the user with an extreme motion drop area based on the Quest head display and corresponding to the game scene.
Compared with the prior art, the application can bring the following technical effects:
the technology establishes a one-to-one scene relation by using a Quest head display space positioning method (Inside-out tracking positioning) according to the consistent spatial relation between a preparation experience position in a game scene and a preparation experience position in an actual environment, and an actual model corresponds to a model in the game scene one by one; the game scene model data is adopted to establish a corresponding entertainment model and experience area in reality, the corresponding game scene is restored by a real one-to-one scene, the stake touched by the player is the stake, the touched cliff is the empty cliff, and real game experience is brought to the player.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application, are incorporated in and constitute a part of this specification. The drawings and their description are illustrative of the application and are not to be construed as unduly limiting the application. In the drawings:
FIG. 1 is a flow chart of the modeling method based on a Quest head display of the present application;
FIG. 2 is a schematic diagram of the real-world model (left) and the corresponding model in the game scene (right), established by the modeling method based on a Quest head display;
FIG. 3 is a flow chart of the extreme motion experience method based on a Quest head display.
Detailed Description
In order that those skilled in the art may better understand the present application, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some embodiments of the present application, not all of them. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without inventive effort shall fall within the scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate in order to describe the embodiments of the application herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In the present application, the terms "upper", "lower", "left", "right", "front", "rear", "top", "bottom", "inner", "outer", "middle", "vertical", "horizontal", "lateral", "longitudinal" and the like indicate an azimuth or a positional relationship based on that shown in the drawings. These terms are only used to better describe the present application and its embodiments and are not intended to limit the scope of the indicated devices, elements or components to the particular orientations or to configure and operate in the particular orientations.
Also, some of the terms described above may be used to indicate other meanings in addition to orientation or positional relationships, for example, the term "upper" may also be used to indicate some sort of attachment or connection in some cases. The specific meaning of these terms in the present application will be understood by those of ordinary skill in the art according to the specific circumstances.
In addition, the term "plurality" shall mean two or more.
It should be noted that, without conflict, the embodiments of the present application and features of the embodiments may be combined with each other. The application will be described in detail below with reference to the drawings in connection with embodiments.
Example 1
This technical solution provides a deep integration of indoor extreme motion with the Quest all-in-one headset, combining Quest spatial positioning with an indoor venue and adapting and developing indoor game scenes, gameplay, and operation flows for the Quest head display.
As shown in fig. 1, the first aspect of the present application provides a modeling method based on a Quest head display, comprising the following steps:
s1, collecting a game scene picture;
the technology establishes a one-to-one scene relation by using a Quest head display space positioning method (Inside-out tracking positioning) according to the consistent spatial relation between the preparation experience position in the game scene and the preparation experience position in the actual environment, and the actual model corresponds to the model in the game scene one by one.
The game scene picture of the game terminal device is transmitted to the Quest head display through the data port; the model data of the game scene picture must be acquired, and actual modeling is performed using that model data, such as model position and size parameters.
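The model data pulled from the game scene picture (positions, contours, directions) can be held in a simple structure. The following is a minimal Python sketch; the class and field names are hypothetical illustrations, not part of the patent or the Quest SDK.

```python
import math
from dataclasses import dataclass


@dataclass
class SceneModelData:
    """Hypothetical container for the model data taken from a game scene
    picture; the field names are illustrative, not from the patent."""
    ready_position: tuple   # experience-preparation position (x, y, z)
    contour: list           # model contour as (x, y, z) points
    direction: tuple        # model facing direction (possibly unnormalized)

    def unit_direction(self):
        """Facing direction normalized to unit length."""
        dx, dy, dz = self.direction
        n = math.sqrt(dx * dx + dy * dy + dz * dz)
        return (dx / n, dy / n, dz / n)
```

Such a record would carry everything S2 needs: the preparation position, the contour for sizing, and the direction for orienting the calibrated space.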
S2, establishing a spatial modeling position relation corresponding to the game scene picture based on a Quest head display spatial positioning method;
and establishing a spatial modeling position relation corresponding to the game scene picture according to the model data of the game scene picture by using a space positioning method of the Quest head display.
A scaling ratio for modeling on the actual site is set according to the model data of the game scene picture, and the spatial modeling position relationship corresponding to the game scene picture is established according to that ratio;
the real site is then modeled according to the game scene model.
In the actual space, the preparation area is modeled and spatial calibration is performed along its edge, determining the size, starting point, and positive direction of the space that the Quest head display needs to identify.
In the game, the origin and positive direction of the modeled scene are kept consistent with the starting point and positive direction of the actually calibrated space.
The size and direction of the actual space are consistent with those of the game space, and every obstacle in the actual space has a one-to-one corresponding model and spatial relationship in the game space.
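The alignment described above (a calibrated starting point plus a positive direction defining the game coordinate frame) can be sketched as a transform on the floor plane. This is an illustrative Python sketch assuming a flat floor and yaw-only alignment; the function name and interface are assumptions, not the Quest API.

```python
import math


def make_real_to_game_transform(origin, forward):
    """Build a mapping from real-world floor coordinates (x, z) to game
    coordinates, given the calibrated starting point `origin` and the
    calibrated positive direction `forward`. A minimal sketch of the 1:1
    alignment; the Quest SDK exposes this differently in practice."""
    fx, fz = forward
    n = math.hypot(fx, fz)
    fx, fz = fx / n, fz / n          # unit "positive direction"
    rx, rz = fz, -fx                 # perpendicular "right" direction

    def to_game(point):
        dx, dz = point[0] - origin[0], point[1] - origin[1]
        # express the offset in the calibrated (right, forward) basis
        return (dx * rx + dz * rz, dx * fx + dz * fz)

    return to_game
```

With the origin at the calibrated starting point and `forward` along the calibrated positive direction, a point two metres ahead of the player in reality lands two metres ahead in the game space, which is the one-to-one correspondence the method relies on.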
S3, establishing an experience model according to the established spatial modeling position relationship.
As shown in fig. 2, the left schematic diagram is a one-to-one model of an actual scene, and the right side is a game scene picture based on the actual scene model.
Modeling is performed on the real site according to the model data of the game scene picture, using the Quest head display's spatial positioning method.
In the actual space, the preparation area (the player's standing area in the left diagram) is modeled and spatial calibration is performed along its edge, determining the size, starting point, and positive direction of the space that the Quest head display needs to identify.
Therefore, game scene model data are used to build the corresponding physical model and experience area in reality, and the real scene restores the game scene one to one: a stake the player touches really is a stake, and a cliff in the game really is a drop, bringing the player a genuinely real game experience.
Preferably, the collecting of the game scene picture includes:
acquiring the experience-preparation position data, model contour data, and model direction of the game space in the game scene picture.
Preferably, establishing the spatial modeling position relationship corresponding to the game scene picture based on the Quest head display spatial positioning method includes:
determining the size, starting point, and positive direction of the space that the Quest head display needs to identify, according to the acquired experience-preparation position data, model contour data, and model direction of the game space in the game scene picture.
In the actual space, the preparation area is modeled and spatial calibration is performed along its edge, determining the size, starting point, and positive direction of the space that the Quest head display needs to identify.
In the game, the origin and positive direction of the modeled scene are kept consistent with the starting point and positive direction of the actually calibrated space.
The size and direction of the actual space are consistent with those of the game space, and every obstacle in the actual space has a one-to-one corresponding model and spatial relationship in the game space.
The spatial modeling position relationship corresponding to the game scene picture is then established according to the determined space size, starting point, and positive direction that the Quest head display needs to identify.
Preferably, establishing the experience model according to the established spatial modeling position relationship includes:
establishing a preparation area model according to the established spatial modeling position relationship; and
establishing a drop area model according to the established spatial modeling position relationship.
The models established by this technique comprise a preparation area model and a drop area model: the user is provided with an extreme motion preparation area based on the Quest head display and corresponding to the game scene, and with an extreme motion drop area based on the Quest head display and corresponding to the game scene.
The experience preparation area provides the user with an extreme motion preparation area based on the Quest head display and corresponding to the game scene;
the experience drop area provides the user with an extreme motion drop area based on the Quest head display and corresponding to the game scene.
As shown in FIG. 2, the player interacts with the one-to-one model of the actual scene on the left, while in the VR world presented in the Quest head display the scene appears as the game scene on the right.
During modeling, the indoor lighting must reach a certain brightness and reference objects must be arranged around the venue; the four cameras and the sensors of the Quest head display read the surrounding environment information to obtain spatial positioning data.
In this embodiment, the Quest head display's four cameras capture images simultaneously, and pose information is obtained by comparing the differences between the images captured by different cameras at the same moment. When the player touches different obstacles in the game, corresponding feedback (physical feedback, acoustic feedback, etc.) is given.
In fig. 2, the area where the human figure stands is the experience preparation area, where the player puts on and checks the Quest head display; the area marked by the dotted line is the experience drop area. The drop area model is scaled by a certain ratio so that the drop is steeper and deeper, and there the actual model and the model in the game scene are deliberately not in one-to-one correspondence.
The drop area makes full use of the Quest head display's tracking: under good tracking conditions, the tracking data are mapped to the player's visual camera in the game scene, so that the fall in the game scene and the fall in reality differ by a certain scaling ratio, making the slide down more thrilling for the player.
In the experience preparation area the player puts on the Quest head display, enters the virtual world, interacts with it, and becomes emotionally engaged; once the player is ready on the specific apparatus, the operator pushes them into the drop area, where they experience the rush of accelerated sliding in the virtual world.
In the extreme motion stage, movement that would not be thrilling in the real environment is scaled up in the VR game: the tracked movement path is amplified, the visual sense of motion is enhanced, and sound and special effects assist in delivering a completely different VR experience.
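The scaled mapping of tracked motion to the in-game camera can be sketched as amplifying the player's displacement about a reference point. A hedged Python illustration (the function interface and the uniform scale factor are assumptions; the patent does not specify the exact mapping):

```python
def map_tracked_to_virtual(tracked_pos, drop_origin, scale):
    """Map the player's tracked position to the in-game camera position by
    amplifying displacement from the drop-area origin. With scale > 1 the
    in-game fall is steeper and deeper than the physical slide.
    Illustrative sketch only, not the Quest SDK's camera API."""
    return tuple(o + scale * (p - o) for p, o in zip(tracked_pos, drop_origin))
```

For example, a one-metre physical descent with `scale=3.0` would be rendered as a three-metre in-game fall, which matches the "steeper and deeper" effect described above.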
The following is a method for establishing a preparation area model and a drop area model:
preferably, the establishing the preparation area model according to the established spatial modeling position relationship includes:
establishing a positioning space matrix based on a Quest head display space positioning method in the experience preparation area;
and establishing an experience preparation area corresponding to the game scene in the positioning space matrix based on the spatial modeling position relation of the game scene picture.
As shown in fig. 2, the area where the human figure stands in the left diagram is the experience preparation area. A positioning space matrix is established in the experience preparation area based on the Quest head display spatial positioning method, according to the spatial modeling position relationship of the game scene picture, the game model data, and so on;
on the actual site, an experience preparation area corresponding to the game scene is then established within the positioning space matrix, yielding the standing-area model shown in fig. 2.
After the one-to-one model of the actual scene on the left is established, VR entertainment can be experienced with the Quest head display. In the one-to-one restored real space, to give the player a good sense of space, the latest Oculus Quest gesture recognition technology is applied, and every scene model the player touches has corresponding physical feedback in reality.
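The "positioning space matrix" of the preparation area can be pictured as a regular grid of reference points covering the calibrated floor. The sketch below is purely illustrative: the real system derives its anchors from the Quest's inside-out tracking, and the function name is a hypothetical stand-in.

```python
def positioning_space_matrix(width, depth, step):
    """Illustrative 'positioning space matrix': a regular grid of reference
    points covering the preparation-area floor (width x depth, in metres).
    Sketches only the spatial bookkeeping, not the Quest's actual anchors."""
    nx = int(round(width / step)) + 1
    nz = int(round(depth / step)) + 1
    return [(i * step, j * step) for i in range(nx) for j in range(nz)]
```

Checking that such a grid is "normal" would then amount to verifying that each reference point is still tracked within tolerance before the player enters the preparation area.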
Preferably, establishing the drop area model according to the established spatial modeling position relationship includes:
establishing, in the drop experience area, a drop area model corresponding to the game scene based on the spatial modeling position relationship of the game scene picture;
scaling the obtained drop area model; and
establishing, in the drop experience area, an experience drop area corresponding to the game scene according to the drop area model.
As shown in fig. 2, the area marked by the dashed curve in the left diagram is the experience drop area; the drop area model is scaled by a certain ratio so that the drop is steeper and deeper, and the actual model and the model in the game scene are not in one-to-one correspondence.
When the player wears the Quest head display in the dashed-curve area, the in-game scene appears as the spiral curve shown in the game scene on the right of fig. 2; because the drop area model is scaled by a certain ratio, the drop is steeper and deeper, and the actual drop area model and the model in the game scene are not in one-to-one correspondence.
The drop area makes full use of the Quest head display's tracking: under good tracking conditions, the tracking data are mapped to the player's visual camera in the game scene, so that the fall in the game scene and the fall in reality differ by a certain scaling ratio, making the slide down more thrilling for the player.
Preferably, establishing, in the drop experience area, an experience drop area corresponding to the game scene according to the scaled drop area model includes:
acquiring the drop model contour data in the game scene picture;
setting a scaling ratio and scaling the drop model contour data according to that ratio; and
establishing, in the drop experience area, an experience drop area corresponding to the game scene according to the scaled drop model contour data.
To establish the experience drop area, the drop model contour data acquired from the game scene picture are scaled, and the experience drop area corresponding to the game scene is established in the drop experience area according to the scaled contour data. The drop becomes steeper and deeper, and the actual model and the model in the game scene are not in one-to-one correspondence.
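The contour-scaling step above can be sketched in a few lines. This is a hedged simplification: uniform scaling about the origin, with (x, y, z) point tuples, is an assumption; the patent does not state the exact scaling scheme.

```python
def scale_drop_contour(contour, ratio):
    """Scale drop-model contour points (taken from the game scene picture)
    by a fixed ratio, so the experienced drop reads steeper and deeper than
    the physical model. Uniform scaling about the origin is an illustrative
    simplification of the patent's scaling step."""
    return [(x * ratio, y * ratio, z * ratio) for (x, y, z) in contour]
```

The scaled contour is then what the experience drop area is laid out against, while the physical slide itself stays at its original, safer dimensions.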
Example 2
As depicted in FIG. 3, this embodiment describes the steps of a game experience using the Quest head display within the left-hand region model of FIG. 2.
The second aspect of the application provides an extreme motion experience method based on a Quest head display, comprising the following steps:
first, starting the Quest head display;
second, detecting, with the Quest head display, whether the positioning space matrix is normal; if so, entering the experience preparation area; and
third, running the game and entering the experience drop area based on the Quest head display.
Preferably, in the second step, detecting whether the positioning space matrix is normal with the Quest head display further includes:
if not, performing spatial calibration with the Quest head display; and
entering the experience preparation area after calibration is finished.
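The start-up flow above (check the positioning space matrix, calibrate if abnormal, then enter the preparation area) can be sketched as a simple control routine. The callables `matrix_ok` and `calibrate` are hypothetical stand-ins for Quest head display operations, not real SDK functions.

```python
def start_experience(matrix_ok, calibrate):
    """Sketch of the experience start-up flow: probe whether the positioning
    space matrix is normal; if not, perform spatial calibration first, then
    enter the experience preparation area. `matrix_ok` and `calibrate` are
    hypothetical stand-ins for head-display operations."""
    if not matrix_ok():
        calibrate()  # spatial calibration along the preparation-area edge
    return "entered experience preparation area"
```

In either branch the player ends up in the experience preparation area; calibration is only inserted when the matrix check fails.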
The Quest head display, and the process of experiencing within the left-hand region model of fig. 2 using the Quest head display, operate on the principles described in Example 1.
The player has the real, physical experience in the region model on the left of fig. 2;
the virtual experience of the game world takes place in the region model on the right of fig. 2.
Example 3
Building on the Quest head display-based modeling method and the Quest head display-based extreme motion experience method, and combining the Quest head display with the experience preparation area and experience drop area established in Example 1, this embodiment provides an extreme motion experience system based on a Quest head display; its concrete operation is as described in Examples 1 and 2.
The third aspect of the present application provides an extreme motion experience system based on a Quest head display, comprising:
a game terminal: for running a game and feeding back the game scene to the Quest head display;
a Quest head display: for establishing and detecting a positioning space matrix in the experience preparation area based on the Quest head display spatial positioning method;
an experience preparation area: for providing the user with an extreme motion preparation area based on the Quest head display and corresponding to the game scene; and
an experience drop area: for providing the user with an extreme motion drop area based on the Quest head display and corresponding to the game scene.
The above description covers only preferred embodiments of the present application and is not intended to limit it; those skilled in the art may make various modifications and variations. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application shall be included in its protection scope.

Claims (6)

1. A modeling method based on a Quest head display is characterized by comprising the following steps:
s1, acquiring a game scene picture transmitted by a game terminal device, and acquiring model data of the game scene picture;
s2, setting a scaling ratio to be modeled in an actual field according to model data of a game scene picture based on a quick head display space positioning method, and establishing a space modeling position relation corresponding to the game scene picture; modeling is carried out on the real field according to the game scene model, and the real model of the real field corresponds to the model of the game scene one by one;
s3, establishing an experience model according to the established spatial modeling position relation;
the establishing an experience model according to the established spatial modeling position relationship comprises the following steps:
establishing a preparation area model according to the established spatial modeling position relation;
establishing a drop area model according to the established spatial modeling position relation;
the step of establishing a drop area model according to the established spatial modeling position relation comprises:
establishing, based on the spatial modeling position relation of the game scene picture, a drop area model corresponding to the game scene in the drop experience area;
scaling the obtained drop area model;
establishing, in the drop experience area, an experience drop area corresponding to the game scene according to the drop area model;
wherein, gather game scene picture includes:
acquiring the experience preparation position data, model contour data and model direction of a game space in the game scene picture;
the step of establishing, based on the Quest head display space positioning method, a spatial modeling position relation corresponding to the game scene picture comprises:
determining the size of the space to be identified by the Quest head display, the starting point and the positive direction according to the acquired experience preparation position data, model contour data and model direction of the game space in the game scene picture;
and establishing a spatial modeling position relation corresponding to the game scene picture according to the determined spatial size, starting point and positive direction which need to be identified by the Quest head display.
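The steps of claim 1 can be sketched as follows. This is a minimal illustration under stated assumptions: the data class fields, the bounding-box derivation of the space size, and the scale-about-origin mapping are all illustrative choices, not the patented algorithm:

```python
from dataclasses import dataclass

@dataclass
class SceneModelData:
    # Data assumed to be captured from the game scene picture:
    # experience preparation position, model contour, and model direction.
    prep_position: tuple   # (x, y, z) in game-scene units
    contour: list          # [(x, y, z), ...] outline of the game-space model
    direction: tuple       # unit vector giving the model's positive direction

def spatial_modeling_relation(data: SceneModelData, scale: float):
    """Derive the space size, starting point, and positive direction to be
    identified by the head display, and return a one-to-one mapping from
    game-scene coordinates to real-field coordinates."""
    xs = [p[0] for p in data.contour]
    ys = [p[1] for p in data.contour]
    zs = [p[2] for p in data.contour]
    # Space size: bounding box of the model contour, scaled to the field.
    size = ((max(xs) - min(xs)) * scale,
            (max(ys) - min(ys)) * scale,
            (max(zs) - min(zs)) * scale)
    origin = (min(xs), min(ys), min(zs))   # starting point
    forward = data.direction               # positive direction

    def to_real(p):
        # One-to-one correspondence: scale each scene point about the origin.
        return tuple((p[i] - origin[i]) * scale for i in range(3))

    return size, origin, forward, to_real
```

The returned `to_real` mapping is what places each game-scene model at its corresponding position in the actual field.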
2. The modeling method based on a Quest head display according to claim 1, wherein the establishing a preparation area model according to the established spatial modeling position relation comprises:
establishing a positioning space matrix based on a Quest head display space positioning method in the experience preparation area;
and establishing an experience preparation area corresponding to the game scene in the positioning space matrix based on the spatial modeling position relation of the game scene picture.
3. The modeling method based on a Quest head display according to claim 1, wherein the scaling the obtained drop area model and establishing, in the drop experience area, an experience drop area corresponding to the game scene according to the drop area model comprises:
acquiring drop model contour data in the game scene picture;
setting a scaling ratio, and scaling the drop model contour data according to the scaling ratio;
and establishing, in the drop experience area, an experience drop area corresponding to the game scene according to the scaled drop model contour data.
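The contour-scaling step of claim 3 can be illustrated with a short sketch. Scaling about the contour's centroid is an assumption made here so the scaled drop area stays centered; the patent only specifies that the contour data is scaled by the chosen ratio:

```python
def scale_drop_contour(contour, ratio):
    """Scale 2D drop model contour data by a scaling ratio, relative to
    the contour's centroid (an illustrative choice of fixed point)."""
    n = len(contour)
    cx = sum(p[0] for p in contour) / n
    cy = sum(p[1] for p in contour) / n
    return [((p[0] - cx) * ratio + cx, (p[1] - cy) * ratio + cy)
            for p in contour]
```

The scaled contour then delimits the experience drop area laid out in the drop experience area.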
4. An extreme motion experience method based on a Quest head display, characterized in that the method is a method for a game experience played in a model established by the modeling method based on a Quest head display according to any one of claims 1-3, the method comprising the following steps:
firstly, starting a Quest head display;
secondly, detecting whether the positioning space matrix is normal by using the Quest head display; if so, entering the experience preparation area;
and thirdly, running a game, and entering the experience drop area based on a Quest head display.
5. The extreme motion experience method based on a Quest head display according to claim 4, wherein in the second step, the detecting whether the positioning space matrix is normal by using the Quest head display further comprises:
if not, performing space calibration by using the Quest head display;
and after the calibration is finished, entering the experience preparation area.
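The experience flow of claims 4 and 5 can be sketched as a short control sequence. The `headset` object and its method names are hypothetical stand-ins for whatever driver actually controls the Quest head display; they are not an API from the patent or from the Quest SDK:

```python
def run_experience(headset):
    """Claims 4-5 flow: start the head display, check the positioning
    space matrix, calibrate if abnormal, then proceed through the
    preparation and drop areas."""
    headset.start()                        # step one: start the head display
    if not headset.matrix_is_normal():     # step two: detect the matrix
        headset.calibrate_space()          # claim 5: spatial calibration
    headset.enter_preparation_area()       # enter the experience preparation area
    headset.run_game()                     # step three: run the game
    headset.enter_drop_area()              # enter the experience drop area

class QuestHeadsetStub:
    """Minimal stand-in used only to exercise the flow above."""
    def __init__(self, matrix_ok):
        self.matrix_ok = matrix_ok
        self.log = []
    def start(self): self.log.append("start")
    def matrix_is_normal(self): self.log.append("check"); return self.matrix_ok
    def calibrate_space(self): self.log.append("calibrate")
    def enter_preparation_area(self): self.log.append("prep")
    def run_game(self): self.log.append("game")
    def enter_drop_area(self): self.log.append("drop")
```

With an abnormal matrix, the flow inserts the calibration step before entering the preparation area, matching the branch described in claim 5.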
6. An extreme motion experience system based on a Quest head display, characterized in that the operation rules of the system comprise the modeling method based on a Quest head display according to any one of claims 1-3 and the extreme motion experience method based on a Quest head display according to any one of claims 4-5, the system comprising:
a game terminal: used for running a game and feeding back the game scene of the game to the Quest head display;
a Quest head display: used for establishing and detecting a positioning space matrix in the experience preparation area based on the Quest head display space positioning method;
an experience preparation area: used for providing the user with an extreme motion preparation area which is based on the Quest head display and corresponds to the game scene;
an experience drop area: used for providing the user with an extreme motion drop area which is based on the Quest head display and corresponds to the game scene.
CN202011167301.5A 2020-10-27 2020-10-27 Modeling and limit movement method and system based on Quest head display Active CN112245910B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011167301.5A CN112245910B (en) 2020-10-27 2020-10-27 Modeling and limit movement method and system based on Quest head display

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011167301.5A CN112245910B (en) 2020-10-27 2020-10-27 Modeling and limit movement method and system based on Quest head display

Publications (2)

Publication Number Publication Date
CN112245910A CN112245910A (en) 2021-01-22
CN112245910B true CN112245910B (en) 2023-08-11

Family

ID=74262806

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011167301.5A Active CN112245910B (en) 2020-10-27 2020-10-27 Modeling and limit movement method and system based on Quest head display

Country Status (1)

Country Link
CN (1) CN112245910B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102681661A (en) * 2011-01-31 2012-09-19 微软公司 Using a three-dimensional environment model in gameplay
CN106249896A (en) * 2016-08-12 2016-12-21 浙江拓客网络科技有限公司 Based on sterically defined virtual reality interactive system
CN106390454A (en) * 2016-08-31 2017-02-15 广州麦驰网络科技有限公司 Reality scene virtual game system
CN106445176A (en) * 2016-12-06 2017-02-22 腾讯科技(深圳)有限公司 Man-machine interaction system and interaction method based on virtual reality technique
CN107185245A (en) * 2017-05-31 2017-09-22 武汉秀宝软件有限公司 A kind of actual situation synchronous display method and system based on SLAM technologies
CN208493206U (en) * 2018-07-12 2019-02-15 云奥信息科技(广州)有限公司 A kind of immersion exemplary motion stage device
CN110770664A (en) * 2018-06-25 2020-02-07 深圳市大疆创新科技有限公司 Navigation path tracking control method, equipment, mobile robot and system
CN111175972A (en) * 2019-12-31 2020-05-19 Oppo广东移动通信有限公司 Head-mounted display, scene display method thereof and storage medium
JP2020110352A (en) * 2019-01-11 2020-07-27 株式会社コロプラ Game program, game method, and information processor

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6790141B2 (en) * 2001-09-28 2004-09-14 Igt Sequential gaming

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102681661A (en) * 2011-01-31 2012-09-19 微软公司 Using a three-dimensional environment model in gameplay
CN106249896A (en) * 2016-08-12 2016-12-21 浙江拓客网络科技有限公司 Based on sterically defined virtual reality interactive system
CN106390454A (en) * 2016-08-31 2017-02-15 广州麦驰网络科技有限公司 Reality scene virtual game system
CN106445176A (en) * 2016-12-06 2017-02-22 腾讯科技(深圳)有限公司 Man-machine interaction system and interaction method based on virtual reality technique
CN107185245A (en) * 2017-05-31 2017-09-22 武汉秀宝软件有限公司 A kind of actual situation synchronous display method and system based on SLAM technologies
CN110770664A (en) * 2018-06-25 2020-02-07 深圳市大疆创新科技有限公司 Navigation path tracking control method, equipment, mobile robot and system
CN208493206U (en) * 2018-07-12 2019-02-15 云奥信息科技(广州)有限公司 A kind of immersion exemplary motion stage device
JP2020110352A (en) * 2019-01-11 2020-07-27 株式会社コロプラ Game program, game method, and information processor
CN111175972A (en) * 2019-12-31 2020-05-19 Oppo广东移动通信有限公司 Head-mounted display, scene display method thereof and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Application and Prospect of VR Technology in the Field of Sports; Zhu Yonghao et al.; Contemporary Sports Technology; 2022-02-25; p. 246, right column, paragraph 1 *

Also Published As

Publication number Publication date
CN112245910A (en) 2021-01-22

Similar Documents

Publication Publication Date Title
US20230302359A1 (en) Reconfiguring reality using a reality overlay device
US10821347B2 (en) Virtual reality sports training systems and methods
TWI786701B (en) Method and system for eye tracking with prediction and late update to gpu for fast foveated rendering in an hmd environment and non-transitory computer-readable medium
US9236032B2 (en) Apparatus and method for providing content experience service
Miles et al. A review of virtual environments for training in ball sports
US9728011B2 (en) System and method for implementing augmented reality via three-dimensional painting
US11826628B2 (en) Virtual reality sports training systems and methods
CN102622774B (en) Living room film creates
US20030227453A1 (en) Method, system and computer program product for automatically creating an animated 3-D scenario from human position and path data
KR20130098770A (en) Expanded 3d space based virtual sports simulation system
JP2002247602A (en) Image generator and control method therefor, and its computer program
CN106582005A (en) Data synchronous interaction method and device in virtual games
US20140342344A1 (en) Apparatus and method for sensory-type learning
US20180261120A1 (en) Video generating device, method of controlling video generating device, display system, video generation control program, and computer-readable storage medium
CN107551554A (en) Indoor sport scene simulation system and method are realized based on virtual reality
KR101915780B1 (en) Vr-robot synchronize system and method for providing feedback using robot
KR20180013892A (en) Reactive animation for virtual reality
US20130176302A1 (en) Virtual space moving apparatus and method
CN106390454A (en) Reality scene virtual game system
CN113470190A (en) Scene display method and device, equipment, vehicle and computer readable storage medium
CN103218826B (en) Projectile based on Kinect detection, three-dimensional localization and trajectory predictions method
CN112245910B (en) Modeling and limit movement method and system based on Quest head display
JP2017151917A (en) Program and eye wear
CN103830904A (en) Device for realizing 3D (three-dimensional) simulation game
CN109200575A (en) The method and system for reinforcing the movement experience of user scene of view-based access control model identification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant