CN115482363A - Method and system for detecting interaction of moving block and digital object in virtual scene - Google Patents
- Publication number
- CN115482363A (application number CN202211117349.4A)
- Authority
- CN
- China
- Prior art keywords
- digital object
- interaction
- virtual scene
- user
- handheld
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/20—Indexing scheme for editing of 3D models
- G06T2219/2016—Rotation, translation, scaling
Abstract
The invention discloses a method and system for detecting interaction between a moving block and a digital object in a virtual scene. The method comprises the following steps: when a user is detected entering a pre-constructed virtual scene, calculating the virtual coordinates of the user in the virtual scene; displaying a digital object controlled by the user's handheld VR device in the virtual scene according to the virtual coordinates and the handheld VR device; presenting a moving block in the virtual scene according to a predetermined algorithm and controlling it to move; controlling the digital object to perform an interactive action with the moving block according to an interaction instruction from the handheld VR device; and detecting the result of the interaction between the digital object and the moving block and feeding that result back in a predetermined form through the handheld VR device. The technical solution of the invention addresses the problems in the prior art that, when a camera is used to detect the moving block, interaction information and interaction effects are difficult to feed back to the user, giving a poor user experience.
Description
Technical Field
The invention relates to the technical field of virtual reality, and in particular to a method and system for detecting interaction between a moving block and a digital object in a virtual scene.
Background
Virtual reality (VR) technology is an advanced human-computer interaction technology that comprehensively applies computer graphics, human-machine interface technology, sensor technology, artificial intelligence and the like to create a vivid artificial simulation environment that can effectively simulate the various perceptions a human has in a natural environment.
A virtual reality scene generally requires the user to carry VR equipment, such as a VR helmet, VR head-mounted display or VR handle, to display the virtual scene, interact with it, and simulate the user's various perceptions. A virtual scene often contains a large number of virtual moving block structures. Through interactive operations such as contact, collision or clicking, performed by relative movement between a digital object operated by the user and these moving block structures, the user can perceive changes in the surrounding environment of the virtual scene and can sense and operate surrounding objects, thereby simulating a real scene and enhancing the user experience.
In the prior art, the user usually wears a VR head-mounted display with a built-in camera, through which the moving distance and relative position of a moving block in the virtual scene are visually detected. For example, the view frustum of the current camera of the VR head-mounted display in the virtual scene is obtained, and the visibility of the moving block and the digital object in the scene is calculated from the frustum. When both are determined to be visible, the distances from the moving block and from the digital object to the current camera are calculated, and the display size of the moving block is set from its distance to the camera and presented to the user, making it easier for the user to operate the moving block with the digital object. However, this method can only determine the distance to the moving block and cannot perceive the interaction between the moving block and the digital object. When the user interacts with the moving block through the digital object, the information and effects generated by the interaction are therefore difficult to feed back to the user, making a realistic user experience hard to obtain.
Disclosure of Invention
The invention provides a scheme for detecting interaction between a moving block and a digital object in a virtual scene, aiming to solve the prior-art problems that, when a camera is used to detect the moving block, interaction information and interaction effects are difficult to feed back to the user and the user experience is poor.
To achieve the above object, according to a first aspect of the present invention, there is provided a method for detecting interaction between a moving block and a digital object in a virtual scene, including:
when detecting that the user enters a pre-constructed virtual scene, calculating virtual coordinates of the user in the virtual scene;
displaying a digital object controlled by the handheld VR device in a virtual scene according to the virtual coordinates of the user and the handheld VR device of the user;
presenting a motion block in the virtual scene according to a preset algorithm and controlling the motion block to move;
controlling the digital object to perform an interactive action with the motion block according to an interaction instruction from the handheld VR device;
and detecting the interaction result of the digital object and the motion block, and feeding back the interaction result in a predetermined form through the handheld VR device.
Preferably, the step of calculating the virtual coordinate of the user in the virtual scene includes:
constructing a three-dimensional virtual scene, and establishing a virtual coordinate system in the virtual scene;
when the user enters the virtual scene, detecting the user's head-mounted VR device in the virtual scene, and fixedly matching the origin of the virtual coordinate system to the user's standing point;
and setting the vertical coordinate of the user in the virtual scene according to the height of the head-mounted VR device.
Preferably, the step of displaying the digital object controlled by the handheld VR device in the virtual scene includes:
reading a holding point of the handheld VR device when the handheld VR device is detected to be started;
and generating a holding point coordinate of the handheld VR device in the virtual scene, and generating and displaying the digital object in the virtual scene according to the holding point coordinate.
Preferably, the step of generating and displaying the digital object in the virtual scene according to the coordinates of the holding point includes:
constructing a Box digital model in advance as an adaptive bridge between the handheld VR device and the digital object;
establishing, in the Box digital model, a position point coinciding with the holding-point coordinates of the handheld VR device;
and inserting the digital object into the Box digital model and merging the holding point of the digital object with the holding point of the handheld VR device.
Preferably, in the above interaction detection method, the step of presenting the moving block in the virtual scene according to a predetermined algorithm and controlling the moving block to move includes:
marking the motion types of different motion blocks by using an abstract interface, and setting a plurality of behavior instruction controllers respectively matched with the motion types;
when a motion block appears in the virtual scene, the behavior instruction controller is used for controlling the motion block to move towards the user according to the motion type.
Preferably, in the above interaction detection method, the step of presenting the moving block in the virtual scene according to a predetermined algorithm and controlling the moving block to move includes:
calculating and setting the moving speed of the motion block according to the virtual coordinates of the user in the virtual scene and the preset path of the motion block;
setting an optimal interaction decision surface between the digital object and the motion block according to the moving speed of the motion block and the virtual coordinates of the user;
and when the motion block moves to the optimal interaction decision surface, using the handheld VR device to prompt the user to perform an interactive action on the motion block with the digital object.
Preferably, in the above interaction detecting method, the step of controlling the digital object to perform the interaction with the moving block includes:
acquiring the force and speed information collected by the handheld VR device;
calculating the moving direction and speed information of the digital object according to the force and speed information;
and controlling the digital object to interact with the motion block according to the moving direction and speed information.
Preferably, in the above interaction detection method, the step of detecting the interaction result of the digital object and the motion block and feeding back the interaction result in a predetermined form through the handheld VR device includes:
setting a physical object identical to the digital object model, and tracking the position of the digital object according to the relative position of the physical object and the digital object in each image frame;
when the digital object interacts with the motion block, selecting the interaction position and time information of the physical object and the motion block as the interaction result;
and calculating and setting the output force of the handheld VR device using the interaction position and time information.
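As an illustrative sketch only (not part of the claimed method), the output force could be derived from the interaction position and time information as a normalized vibration strength. The sweet-spot width, contact-time scale and linear fall-off below are all assumptions not taken from the disclosure:

```python
def haptic_output(contact_offset_m: float, contact_duration_s: float) -> float:
    """Map interaction position and timing to a handle vibration strength in [0, 1].

    A hit near the object's assumed sweet spot (small offset) and a crisp,
    short contact give the strongest feedback; both terms fall off linearly.
    """
    position_term = max(0.0, 1.0 - abs(contact_offset_m) / 0.10)  # fades over 10 cm (assumed)
    time_term = max(0.0, 1.0 - contact_duration_s / 0.25)         # fades over 250 ms (assumed)
    return round(position_term * time_term, 3)
```

The product form means either a badly placed hit or a dragged-out contact alone is enough to suppress the feedback, which matches the idea of selecting both position and time information as the interaction result.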
According to a second aspect of the present invention, the present invention further provides a system for detecting interaction between a moving block and a digital object in a virtual scene, comprising:
the virtual coordinate calculation module is used for calculating the virtual coordinate of the user in the virtual scene when the user enters the pre-constructed virtual scene;
the digital object display module is used for displaying a digital object controlled by the handheld VR equipment in a virtual scene according to the virtual coordinates of the user and the handheld VR equipment of the user;
the motion block control module is used for presenting a motion block in the virtual scene according to a preset algorithm and controlling the motion block to move;
the interactive action execution module is used for controlling the digital object to execute the interactive action with the movement square according to the interactive instruction of the handheld VR equipment;
and the interactive result detection module is used for detecting the interactive result of the digital object and the motion block and feeding back the interactive result through the handheld VR equipment according to a preset form.
Preferably, in the interaction detection system, the interaction result detection module includes:
the digital object tracking sub-module is used for setting a physical object which is the same as the digital object model and tracking the position of the digital object according to the relative position of the physical object and the digital object in each frame of image;
the interaction result selection submodule is used for selecting the interaction position and time information of the physical object and the motion block as the interaction result when the digital object interacts with the motion block;
and the output force calculation submodule is used for calculating and setting the output force of the handheld VR equipment by using the interactive position and time information.
In summary, according to the interaction detection scheme for a moving block and a digital object in a virtual scene provided by the present invention, when the user is detected entering the virtual scene, the user's virtual coordinates in the scene are calculated; the digital object controlled by the handheld VR device is then displayed according to the handheld VR device and those coordinates; and the moving block is presented in the virtual scene and controlled to move. Thus, when an interaction instruction is received from the handheld VR device, the moving block can be sensed and the digital object controlled through the instruction to perform an interactive action with it, for example cutting, striking or touching. The interaction result with the moving block (such as whether the digital object hits a preset position) is monitored in real time, and according to that result the handheld VR device feeds the interaction result back in a predetermined form. This solves the prior-art problems that the information and effects generated by the interaction are difficult to feed back to the user through the digital object and that a vivid user experience is hard to obtain.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in describing the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from the structures shown without creative effort.
Fig. 1 is a schematic flowchart of a method for detecting interaction between a moving block and a digital object in a virtual scene according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a method for calculating virtual coordinates according to the embodiment shown in FIG. 1;
FIG. 3 is a flowchart illustrating a method for displaying a digital object according to the embodiment shown in FIG. 1;
FIG. 4 is a flow chart illustrating a method for generating a digital object according to the embodiment shown in FIG. 3;
FIG. 5 is a flow chart illustrating a first method for presenting and controlling movement of a motion block according to the embodiment shown in FIG. 1;
FIG. 6 is a flow chart illustrating a second method for presenting and controlling movement of a moving block according to the embodiment shown in FIG. 1;
FIG. 7 is a flow chart illustrating a method for performing an interaction provided by the embodiment shown in FIG. 1;
FIG. 8 is a flow chart illustrating a method for feeding back interaction results according to the embodiment shown in FIG. 1;
FIG. 9 is a schematic structural diagram of an interactive optimal decision plane according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of an interaction detection system for a moving block and a digital object in a virtual scene according to an embodiment of the present invention;
fig. 11 is a schematic structural diagram of an interaction result detection module provided in the embodiment shown in fig. 10.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The embodiments of the invention mainly solve the technical problem, described in the Background above, that camera-based detection of a moving block can only determine the distance to the block and cannot perceive its interaction with the digital object, so that when the user interacts with the moving block through the digital object, the information and effects generated by the interaction are difficult to feed back to the user and a realistic user experience is hard to obtain.
To solve the above problems, the following embodiments of the present invention provide a scheme for detecting interaction between a moving block and a digital object in a virtual scene: a digital object controlled by a handheld VR device is generated in the virtual scene; the digital object executes interaction instructions to perform interactive actions with the moving block; and when an interaction result between the digital object and the moving block is detected, the result is fed back by the handheld VR device in a predetermined form, so that the information and effects generated by the interaction are effectively fed back to the user and the user obtains a realistic experience.
To achieve the above object, please refer to fig. 1, where fig. 1 is a flowchart illustrating a method for detecting interaction between a moving block and a digital object in a virtual scene according to an embodiment of the present invention. As shown in fig. 1, the method for detecting interaction between a moving block and a digital object in a virtual scene includes:
S110: when the user is detected entering the pre-constructed virtual scene, calculate the virtual coordinates of the user in the virtual scene. In the embodiment of the application, a panoramic 3D virtual scene is constructed in advance. When the virtual scene is pre-loaded, its coordinate origin is calculated and obtained and the user's position is locked at that origin; when the user wears and starts the VR device, the computer system sets the user's vertical coordinate according to the position of the VR device, thereby obtaining the user's height information in the virtual scene. Specifically, as a preferred embodiment, as shown in fig. 2, step S110 of the above interaction detection method, calculating the virtual coordinates of the user in the virtual scene, includes:
S111: construct a three-dimensional virtual scene and establish a virtual coordinate system in the virtual scene. In the embodiment of the application, the virtual scene can be displayed in the user's VR device.
S112: when the user enters the virtual scene, detect the user's head-mounted VR device in the virtual scene, and fix the origin of the virtual coordinate system as the user's standing point. In the embodiment of the application, the origin of the virtual coordinate system is preset in the virtual scene and the user's standing point is matched to it, so that the user's position can be accurately located in real time as the user moves, and the coordinates of other objects in the virtual scene are adjusted according to the user's position.
S113: set the vertical coordinate of the user in the virtual scene according to the height of the head-mounted VR device.
In the technical scheme provided by the embodiment of the application, a three-dimensional panoramic virtual scene is constructed in advance. When the virtual scene is pre-loaded, the computer system obtains the preset coordinate origin (0, 0) of the virtual scene and presents and locks the user's position at the origin on the X axis. When the user wears and starts the VR device, the computer system reads the center position between the head-display lenses of the user's VR device, and from it obtains and presents the user's Y-axis coordinate relative to the (0, 0) origin, which is the user's height in the virtual space.
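A minimal sketch of this coordinate assignment, under the assumption that the headset API exposes the height of the mid-point between its lenses (the type and function names here are hypothetical):

```python
from dataclasses import dataclass


@dataclass
class HeadsetPose:
    """Pose reported by the head-mounted VR device (assumed API)."""
    lens_center_height_m: float  # height of the mid-point between the head-display lenses


def user_virtual_coordinates(pose: HeadsetPose) -> tuple[float, float, float]:
    """Lock the user's standing point at the virtual origin on the horizontal
    plane and take the vertical (Y) coordinate from the headset height."""
    x, z = 0.0, 0.0                 # standing point fixed at the coordinate origin
    y = pose.lens_center_height_m   # the user's height in the virtual space
    return (x, y, z)
```

For example, a headset whose lens center sits 1.65 m above the floor yields the user coordinate (0.0, 1.65, 0.0).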
The method for detecting interaction between a moving block and a digital object in a virtual scene provided by the embodiment shown in fig. 1 further comprises:
S120: display the digital object controlled by the handheld VR device in the virtual scene according to the virtual coordinates of the user and the user's handheld VR device. After the user's virtual coordinates are generated, the coordinate position of the handheld VR device, such as the coordinates of its holding point, can be calculated, and a corresponding digital object, such as a glove, a fan or a weapon, is generated at that position. The digital object is controlled by the user according to the position and orientation of the user's hand (specifically, in this embodiment, the position and orientation of the handheld VR device), so that the user can move the digital object by controlling the handheld VR device.
As a preferred embodiment, as shown in fig. 3, the step of displaying the digital object controlled by the handheld VR device in the virtual scene includes:
S121: when the handheld VR device is detected to have started, read the holding point of the handheld VR device.
S122: generate the holding-point coordinates of the handheld VR device in the virtual scene, and generate and display the digital object in the virtual scene according to those coordinates.
Specifically, when the user wears and starts the handheld VR device, the computer system reads the palm positions of the VR handles held in the user's two hands: the computing system reads the VR application system to obtain the position information of the VR handles, then obtains and presents the positions of the user's two hands in the virtual space according to those palm positions, namely the holding-point coordinates at which the one or more digital objects controlled by the VR handles are presented. The digital object is controlled by the VR handle, and objects in the virtual scene can be processed with the digital object according to user operations, for example interactive actions such as cutting, carrying or striking.
Specifically, as a preferred embodiment, as shown in fig. 4, the step of generating and displaying the digital object in the virtual scene according to the coordinates of the holding point includes:
S1221: construct a Box digital model in advance to serve as an adaptive bridge between the handheld VR device and the digital object.
S1222: establish, in the Box digital model, a position point coinciding with the holding-point coordinates of the handheld VR device.
S1223: insert the digital object into the Box digital model, and merge the holding point of the digital object with the holding point of the handheld VR device.
According to the technical scheme provided by the embodiment of the application, a BOX digital model, a transparent four-sided box used to generate the digital object, is constructed in advance. In this application example, the transparent BOX model is not presented directly to the user but serves merely as an adapter bridge between the VR handle and the digital object controlled by the user. A position point is bound in the BOX digital model as the point (point A) coinciding with the palm position of the VR handle in the user's virtual space; a holding point (point B) is designed in the model of the digital object held by the user; point A and point B are made to coincide, and the model of the digital object is inserted into the BOX model. The advantage of this method is that it is not limited to a specific model form of the digital object: the model style can be changed flexibly, and by adjusting the point positions of the digital object model and the BOX digital model, the holding point of the digital object is always presented at the palm positions of the user's two hands.
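As a sketch only (the function name and the list-of-vertices model representation are assumptions, not from the disclosure), making point B coincide with point A reduces to translating the whole object model by the offset between the two points:

```python
def attach_to_box(handle_grip_world, object_grip_local, object_vertices_local):
    """Translate a digital object model so that its designed grip point
    (point B, in the object's local frame) lands exactly on point A, the
    palm position of the VR handle bound inside the BOX adapter model."""
    offset = [a - b for a, b in zip(handle_grip_world, object_grip_local)]
    # apply the same translation to every vertex of the object model
    return [[v + o for v, o in zip(vertex, offset)] for vertex in object_vertices_local]
```

Because only a rigid translation of the model is involved, swapping in a different object model (glove, fan, weapon) requires no change other than supplying its own point-B coordinates, which is the flexibility the paragraph above claims for the BOX adapter.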
After the digital object is presented, the method for detecting the interaction between the moving block and the digital object provided by the embodiment shown in fig. 1 further includes:
S130: present the motion block in the virtual scene according to a predetermined algorithm and control it to move. The motion block is a block-shaped moving object in the virtual scene, such as a car, a box or a panel. By presenting the motion block according to a predetermined algorithm and controlling it to move, in particular toward the digital object, it is made available for interaction with the digital object.
Specifically, as a preferred embodiment, as shown in fig. 5, the step of presenting the moving block in the virtual scene according to the predetermined algorithm and controlling the moving block to move includes:
S131: mark the motion types of different motion blocks using an abstract interface, and set a plurality of behavior instruction controllers respectively matched to the motion types.
S132: when a motion block appears in the virtual scene, the behavior instruction controller is used for controlling the motion block to move towards the user according to the motion type.
According to the technical scheme provided by the embodiment of the application, the generation and movement of the motion blocks adopt a data-oriented programming approach rather than object-oriented thinking and programming. Specifically, when implementing a plurality of digital targets (namely the motion blocks), the motion blocks are split into categories and marked through an abstract interface, and the corresponding control commands are executed by a plurality of controllers. The advantage of this method is greater extensibility, since behaviors can be extended independently, and therefore greater usability; when part of the processing hits a performance bottleneck, it can also be moved to multi-threaded operation. The motion types marked on the plurality of digital objects through the abstract interface include: movable, rotatable, cuttable and additionally movable. Correspondingly, the plurality of behavior instruction controllers include: a moving-object controller, a rotating-object controller, a cutting-object controller and an additional-moving-object controller.
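A minimal sketch of this pattern, assuming Python-style classes and a controller registry (the class names follow two of the four types listed above; the per-frame update formulas are illustrative only):

```python
from abc import ABC


class MotionBlock(ABC):
    """Abstract interface used to mark the motion type of a block."""
    motion_type: str = ""


class MovableBlock(MotionBlock):
    motion_type = "movable"

    def __init__(self, position):
        self.position = list(position)


class RotatableBlock(MotionBlock):
    motion_type = "rotatable"

    def __init__(self):
        self.angle = 0.0


class MovingObjectController:
    """Drives 'movable' blocks one step along the line toward the user."""
    def step(self, block, user_pos, dt):
        block.position = [p + (u - p) * dt for p, u in zip(block.position, user_pos)]


class RotatingObjectController:
    """Drives 'rotatable' blocks; the 90 deg/s rate is illustrative."""
    def step(self, block, user_pos, dt):
        block.angle += 90.0 * dt


# registry of behavior instruction controllers keyed by the marked motion type
CONTROLLERS = {
    "movable": MovingObjectController(),
    "rotatable": RotatingObjectController(),
}


def drive(block, user_pos, dt):
    """Dispatch the block to the controller matched to its motion type."""
    CONTROLLERS[block.motion_type].step(block, user_pos, dt)
```

Adding a "cuttable" or "additionally movable" type is then a matter of registering one more controller, without touching existing ones, which is the extensibility the paragraph above points to.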
As another preferred embodiment, as shown in fig. 6, the step of presenting the moving block in the virtual scene according to a predetermined algorithm and controlling the moving of the moving block includes:
S133: calculate and set the moving speed of the motion block according to the virtual coordinates of the user in the virtual scene and the preset path of the motion block. The coordinates of the motion block's generation point are known, so once the user's virtual coordinates are obtained, the straight-line distance between the generation point and the user can be calculated directly. Combining that distance with the motion mode corresponding to the preset path, for example a parabolic or straight-line trajectory, the track and coordinates of the preset path can be calculated, and the moving speed of the motion block calculated and controlled accordingly. For example, when the motion block reaches 3 meters in front of the user, a first moving speed is selected; when it reaches 1 meter in front of the user, a second moving speed is selected, the first moving speed being greater than the second.
S134: set the optimal interaction decision surface between the digital object and the motion block according to the moving speed of the motion block and the virtual coordinates of the user.
Specifically, as shown in fig. 9, in the embodiment of the application the distance between the motion block and the user can be calculated from the coordinates of the motion block's generation point and the user's virtual coordinates, and corresponding speed-change, decision and destruction surfaces are then set. The speed-change surface is the position where the motion block changes speed; the decision surface is the position where the digital object interacts with the motion block, for example by cutting or striking; and a destruction surface may be provided behind the user, the motion block being destroyed when it moves to that surface.
S135: when the motion block moves to the optimal interaction decision surface, the handheld VR device is used to prompt the user to perform an interactive action on the motion block with the digital object. The optimal interaction decision surface is set by combining the moving speed of the motion block with the user's virtual coordinates, so that an optimal response time is reserved for the user to interact with the motion block using the digital object. Specifically, when the motion block reaches the optimal interaction decision surface, the handheld VR device prompts the user to act on it with the digital object.
According to the technical scheme provided by the embodiment of the application, the moving speed of the motion block is calculated and set according to the virtual coordinates of the user in the virtual scene and the preset path of the motion block, and the interaction optimal decision surface between the digital object and the motion block is then set according to that moving speed and the virtual coordinates of the user. Thus, when the motion block moves to the interaction optimal decision surface, the VR device can prompt the user to perform an interactive action on the motion block with the digital object, obtaining the best interaction effect.
After the steps of presenting the motion block and controlling the motion block to move, the method for detecting the interaction between the motion block and the digital object provided by the embodiment shown in fig. 1 further includes the following steps:
S140: controlling the digital object to perform an interactive action with the motion block according to the interactive instruction of the handheld VR device. Through the interactive instruction, for example an instruction to strike the motion block at a preset speed along its flight direction, the digital object can strike the motion block at that preset speed and complete the interactive action. In this way, the interaction success rate between the digital object and the motion block can be improved.
As a preferred embodiment, as shown in fig. 7, the step of controlling the digital object to perform the interaction with the motion block includes:
S141: acquiring the force and speed information collected by the handheld VR device.
S142: calculating the moving direction and speed information of the digital object according to the force and speed information.
S143: controlling the digital object to interact with the motion block according to the moving direction and speed information.
According to the above technical scheme, when the user interacts with the motion block, the handheld VR device is moved with a certain force, so the device collects the corresponding force and speed information. The moving direction and moving speed of the digital object are calculated from this information, and the digital object is then controlled accordingly, for example to strike the motion block along the calculated direction at the calculated speed.
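One plausible mapping from the controller's sampled force and velocity to the digital object's moving direction and speed; the linear force gain is a hypothetical parameter for illustration, not part of the embodiment:

```python
import math

def object_motion(ctrl_velocity, force, force_gain=0.5):
    """Derive the digital object's motion from handheld-controller samples:
    direction is the unit vector of the controller velocity, and speed is
    the velocity magnitude boosted linearly by the measured force."""
    mag = math.sqrt(sum(v * v for v in ctrl_velocity))
    if mag == 0.0:
        return (0.0, 0.0, 0.0), 0.0  # controller at rest: no motion
    direction = tuple(v / mag for v in ctrl_velocity)
    speed = mag + force_gain * force
    return direction, speed
```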
After controlling the digital object to perform the interactive action with the motion block, the technical solution provided by the embodiment shown in fig. 1 further includes:
S150: detecting the interaction result of the digital object and the motion block, and feeding back the interaction result in a predetermined form through the handheld VR device. When the digital object interacts with the motion block (for example, the digital object cuts the motion block, or the digital object strikes the motion block), the motion block in the virtual scene exerts a certain reaction force on the digital object. The handheld VR device can therefore generate corresponding feedback in a predetermined form according to this reaction force, such as a particular vibration frequency and vibration amplitude. By feeding back the interaction result in this predetermined form, the handheld VR device can simulate real interaction information without adding auxiliary equipment to simulate interaction results of a real scene.
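The reaction-force-to-haptics mapping can be as simple as a linear interpolation; the frequency and amplitude ranges below are assumptions chosen for illustration:

```python
def haptic_feedback(reaction_force, max_force=50.0,
                    freq_range=(60.0, 320.0), amp_range=(0.1, 1.0)):
    """Map the reaction force of an interaction (e.g. striking a motion
    block) to a vibration frequency in Hz and a normalised amplitude for
    the handheld VR device's haptic motor."""
    t = max(0.0, min(1.0, reaction_force / max_force))  # clamp to [0, 1]
    freq = freq_range[0] + t * (freq_range[1] - freq_range[0])
    amp = amp_range[0] + t * (amp_range[1] - amp_range[0])
    return freq, amp
```

A harder strike thus produces both a higher-pitched and a stronger vibration, which is the "predetermined form" of feedback described above.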
Specifically, as a preferred embodiment, as shown in fig. 8, the step of detecting an interaction result of the digital object with the moving block and feeding back the interaction result in a predetermined form through the handheld VR device includes:
s151: setting the same physical object as the digital object model, and tracking the position of the digital object according to the relative position of the physical object and the digital object in each frame of image.
S152: when the digital object interacts with the moving square, the interaction position and time information of the physical object and the moving square are selected as an interaction result.
S153: using the interaction location and time information, an output force of the handheld VR device is calculated and set.
In the technical scheme provided by the embodiment of the application, by arranging a physical object identical to the digital object model, the embodiment discloses a high-speed object trigger detection method based on physics-engine frame supplementation, and a method of tracking the rendered digital object with the physical object. The predetermined physical object is a copy of the digital object that is hidden in the virtual space (not presented to the user); it is used to supplement the frame rate and to detect and track the rendered digital object transmitted by the VR system at a fixed frame rate, where the rendered digital object refers to the digital object as presented in the scene. Specifically, the time and position information of the rendered digital object is captured to track its position; when the digital object interacts with the motion block, the interaction position and time information of the physical object and the motion block are selected as the interaction result, the output force of the handheld VR device is then calculated and set, and the interaction result can thereby be fed back.
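The frame-supplementation idea can be illustrated with linear interpolation between two consecutive rendered poses: the hidden physical object steps through intermediate positions so a fast-moving digital object cannot pass through a motion block between fixed-rate frames. A sketch under the assumption of linear motion and four substeps:

```python
def supplement_frames(pose_a, pose_b, substeps=4):
    """Generate substeps interpolated poses between two consecutive rendered
    poses of the digital object. Trigger tests run on each intermediate
    pose of the hidden physical object, not only on the poses delivered at
    the VR system's fixed frame rate."""
    poses = []
    for i in range(1, substeps + 1):
        t = i / substeps  # interpolation parameter in (0, 1]
        poses.append(tuple(a + t * (b - a) for a, b in zip(pose_a, pose_b)))
    return poses
```

Each intermediate pose would be fed to the physics engine's overlap test; the final pose equals `pose_b`, so the supplemented sequence stays consistent with the rendered state.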
In summary, according to the method for detecting interaction between a motion block and a digital object in a virtual scene provided in the embodiment of the present invention, when a user is detected entering the virtual scene, the virtual coordinates of the user in the virtual scene are calculated. A digital object controlled by the handheld VR device is then displayed according to the handheld VR device and the virtual coordinates, and a motion block is presented in the virtual scene and controlled to move. When an interaction instruction of the handheld VR device is received, the digital object can be controlled through that instruction to perform an interactive action with the motion block, such as cutting, striking or touching. The interaction result of the digital object and the motion block is monitored in real time, and the handheld VR device feeds back the interaction result in a predetermined form. This solves the problem in the prior art that the information and effect produced by interacting with a digital object are difficult to feed back to the user, making a vivid user experience hard to obtain.
Based on the same concept as the above method embodiment, an embodiment of the present invention further provides a system for detecting interaction between a motion block and a digital object in a virtual scene, which is used to implement the above method of the present invention.
Referring to fig. 10, fig. 10 is a schematic structural diagram of an interaction detection system for a moving block and a digital object in a virtual scene according to an embodiment of the present invention. As shown in fig. 10, the system for detecting interaction between a moving block and a digital object includes:
a virtual coordinate calculation module 110, configured to calculate a virtual coordinate of a user in a virtual scene when the user enters a pre-constructed virtual scene;
a digital object display module 120, configured to display a digital object controlled by a handheld VR device in a virtual scene according to the virtual coordinates of the user and the handheld VR device of the user;
a motion block control module 130 for presenting motion blocks in the virtual scene according to a predetermined algorithm and controlling the motion blocks to move;
the interactive action execution module 140 is used for controlling the digital object to execute the interactive action with the motion block according to the interactive instruction of the handheld VR device;
and the interaction result detection module 150 is configured to detect an interaction result between the digital object and the motion block, and feed back the interaction result in a predetermined form through the handheld VR device.
In summary, in the system for detecting interaction between a motion block and a digital object in a virtual scene provided in the embodiment of the present invention, when a user is detected entering the virtual scene, the virtual coordinates of the user in the virtual scene are calculated. The digital object controlled by the handheld VR device is then displayed according to the handheld VR device and the virtual coordinates, and the motion block is presented in the virtual scene and controlled to move. When an interaction instruction of the handheld VR device is received, the digital object can be controlled through that instruction to perform an interactive action with the motion block, such as cutting, striking or touching. The interaction result of the digital object and the motion block is monitored in real time, and the handheld VR device feeds back the interaction result in a predetermined form. This solves the problem in the prior art that the information and effect produced by interacting with a digital object are difficult to feed back to the user, making a vivid user experience hard to obtain.
As a preferred embodiment, as shown in fig. 11, the interaction result detecting module 150 includes:
a digital object tracking sub-module 151 for setting a physical object identical to the digital object model, and tracking the position of the digital object according to the relative position of the physical object and the digital object in each frame of image;
an interaction result selection submodule 152, configured to select, when the digital object interacts with the moving block, interaction position and time information of the physical object and the moving block as an interaction result;
and the output force calculation submodule 153 is used for calculating and setting the output force of the handheld VR device by using the interaction position and time information.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention has been described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should be noted that in the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second, third, etcetera does not indicate any ordering; these words may be interpreted as names.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.
Claims (10)
1. A method for detecting interaction between a moving block and a digital object in a virtual scene, comprising:
when detecting that a user enters a pre-constructed virtual scene, calculating virtual coordinates of the user in the virtual scene;
displaying a digital object controlled by a handheld VR device in the virtual scene according to the virtual coordinates of the user and the handheld VR device of the user;
presenting a motion block in the virtual scene according to a preset algorithm and controlling the motion block to move;
controlling the digital object to perform interactive action with the motion block according to an interactive instruction of the handheld VR device;
and detecting an interaction result of the digital object and the moving block, and feeding back the interaction result through the handheld VR equipment according to a preset form.
2. The interaction detection method according to claim 1, wherein the step of calculating the virtual coordinates of the user in the virtual scene comprises:
constructing a three-dimensional virtual scene, and establishing a virtual coordinate system in the virtual scene;
when the user enters a virtual scene, detecting the head-mounted VR equipment of the user in the virtual scene, and fixedly matching the origin of coordinates of the virtual coordinate system as a standing point of the user;
and setting the ordinate of the user in the virtual scene according to the height of the head-mounted VR device.
3. The interaction detection method of claim 1, wherein the step of displaying the digital object controlled by the handheld VR device in the virtual scene comprises:
reading a holding point of the handheld VR device when the handheld VR device is detected to be started;
and generating a holding point coordinate of the handheld VR device in the virtual scene, and generating and displaying the digital object in the virtual scene according to the holding point coordinate.
4. The interaction detection method according to claim 3, wherein the step of generating and displaying the digital object in the virtual scene according to the holding point coordinates comprises:
constructing a Box digital model in advance to serve as an adaptive bridge between the handheld VR equipment and a digital object;
establishing a position point which is coincident with the holding point coordinate of the handheld VR equipment in the Box digital model;
inserting the digital object into the Box digital model, and merging a holding point of the digital object with a holding point of the handheld VR device.
5. The interaction detection method according to claim 1, wherein the step of presenting the moving blocks in the virtual scene according to a predetermined algorithm and controlling the moving blocks to move comprises:
marking the motion types of different motion blocks by using an abstract interface, and setting a plurality of behavior instruction controllers respectively matched with the motion types;
when a motion block appears in the virtual scene, the behavior instruction controller is used for controlling the motion block to move to the user according to the motion type.
6. The interaction detection method according to claim 1, wherein the step of presenting a moving block in the virtual scene according to a predetermined algorithm and controlling the moving of the moving block comprises:
calculating and setting the moving speed of the motion block according to the virtual coordinates of the user in the virtual scene and the preset path of the motion block;
setting an interaction optimal decision surface between the digital object and the motion block according to the moving speed of the motion block and the virtual coordinates of the user;
when the motion block moves to the interaction optimal decision surface, prompting, by the handheld VR device, the user to perform an interactive action on the motion block with the digital object.
7. The interaction detection method according to claim 1, wherein the step of controlling the digital object to perform the interaction with the motion block comprises:
acquiring strength and speed information acquired by the handheld VR equipment;
calculating the moving direction and speed information of the digital object according to the force and speed information;
and controlling the digital object to interact with the motion block according to the moving direction and speed information.
8. The interaction detection method according to claim 1, wherein the step of detecting the interaction result of the digital object with the moving block and feeding back the interaction result in a predetermined form through the handheld VR device comprises:
setting a physical object which is the same as the digital object model, and tracking the position of the digital object according to the relative position of the physical object and the digital object in each frame of image;
when the digital object interacts with the moving block, selecting the interaction position and time information of the physical object and the moving block as the interaction result;
calculating and setting an output force of the handheld VR device using the interaction location and time information.
9. A system for detecting interaction between a moving block and a digital object in a virtual scene, comprising:
the virtual coordinate calculation module is used for calculating the virtual coordinate of the user in a virtual scene when the user enters the virtual scene which is constructed in advance;
a digital object display module, configured to display a digital object controlled by a handheld VR device in the virtual scene according to the virtual coordinates of the user and the handheld VR device of the user;
the motion block control module is used for presenting motion blocks in the virtual scene according to a preset algorithm and controlling the motion blocks to move;
the interaction action execution module is used for controlling the digital object to execute the interaction action with the motion block according to the interaction instruction of the handheld VR equipment;
and the interactive result detection module is used for detecting the interactive result of the digital object and the motion block and feeding back the interactive result through the handheld VR equipment according to a preset form.
10. The interaction detection system of claim 9, wherein the interaction result detection module comprises:
the digital object tracking sub-module is used for setting a physical object which is the same as the digital object model and tracking the position of the digital object according to the relative position of the physical object and the digital object in each frame of image;
the interaction result selection submodule is used for selecting the interaction position and time information of the physical object and the motion block as the interaction result when the digital object interacts with the motion block;
and the output force calculation submodule is used for calculating and setting the output force of the handheld VR equipment by using the interaction position and the time information.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211117349.4A CN115482363A (en) | 2022-09-14 | 2022-09-14 | Method and system for detecting interaction of moving block and digital object in virtual scene |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211117349.4A CN115482363A (en) | 2022-09-14 | 2022-09-14 | Method and system for detecting interaction of moving block and digital object in virtual scene |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115482363A true CN115482363A (en) | 2022-12-16 |
Family
ID=84423619
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211117349.4A Pending CN115482363A (en) | 2022-09-14 | 2022-09-14 | Method and system for detecting interaction of moving block and digital object in virtual scene |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115482363A (en) |
-
2022
- 2022-09-14 CN CN202211117349.4A patent/CN115482363A/en active Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR102110811B1 (en) | System and method for human computer interaction | |
US9927869B2 (en) | Apparatus for outputting virtual keyboard and method of controlling the same | |
US6573896B1 (en) | Three-dimensional arrow | |
Ha et al. | WeARHand: Head-worn, RGB-D camera-based, bare-hand user interface with visually enhanced depth perception | |
US9483119B2 (en) | Stereo interactive method, display device, operating stick and system | |
US11701590B2 (en) | Player-tracking video game | |
EP2371434B1 (en) | Image generation system, image generation method, and information storage medium | |
CN102884492A (en) | Pointing device of augmented reality | |
US20110109628A1 (en) | Method for producing an effect on virtual objects | |
CN112105486B (en) | Augmented reality for industrial robots | |
TWI528224B (en) | 3d gesture manipulation method and apparatus | |
WO2017021902A1 (en) | System and method for gesture based measurement of virtual reality space | |
Yang et al. | An augmented reality-based training system with a natural user interface for manual milling operations | |
CN109710077B (en) | Virtual object collision judgment method and device based on VR and locomotive practical training system | |
CN107145222A (en) | The automatic binding system of instrument and method based on Unity d engines and VR equipment | |
EP3811186B1 (en) | Input scaling to keep controller inside field of view | |
WO2017000917A1 (en) | Positioning method and apparatus for motion-stimulation button | |
JPH09138637A (en) | Pseudo visibility device | |
US20230267667A1 (en) | Immersive analysis environment for human motion data | |
CN113238705A (en) | Virtual keyboard interaction method and system | |
CN115482363A (en) | Method and system for detecting interaction of moving block and digital object in virtual scene | |
Caputo et al. | Single-Handed vs. Two Handed Manipulation in Virtual Reality: A Novel Metaphor and Experimental Comparisons. | |
US20210232289A1 (en) | Virtual user detection | |
Hagedorn et al. | Sketch-based navigation in 3d virtual environments | |
Figueiredo et al. | Bare hand natural interaction with augmented objects |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||