CN115482362A - Method and system for detecting relative positions of multiple digital targets and user in virtual scene - Google Patents


Info

Publication number
CN115482362A
CN115482362A (application number CN202211117336.7A)
Authority
CN
China
Prior art keywords: digital, target, user, targets, interaction
Prior art date
Legal status: Pending (the status is an assumption, not a legal conclusion)
Application number
CN202211117336.7A
Other languages
Chinese (zh)
Inventor
孙峰
王熙然
张国栋
丁皓明
孙诗瑶
Current Assignee
Beijing Shiyin Technology And Culture Co ltd
Original Assignee
Beijing Shiyin Technology And Culture Co ltd
Application filed by Beijing Shiyin Technology And Culture Co ltd filed Critical Beijing Shiyin Technology And Culture Co ltd
Priority to CN202211117336.7A priority Critical patent/CN115482362A/en
Publication of CN115482362A publication Critical patent/CN115482362A/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20 Indexing scheme for editing of 3D models
    • G06T2219/2016 Rotation, translation, scaling

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Architecture (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a method and a system for detecting the relative positions of multiple digital targets and a user in a virtual scene. The relative-position detection method comprises: when it is detected that the user enters the virtual scene through a VR device, calculating the virtual coordinates of the VR device in the virtual scene; generating a digital object at the corresponding position in the virtual scene according to the virtual coordinates of the VR device; generating a plurality of digital targets in the virtual scene, and setting a moving path for the digital targets according to the distance between the digital targets and the user; controlling the plurality of digital targets to move toward the user along the moving paths; monitoring the distance between the digital targets and the user in real time, and adjusting the positional relationship between the digital targets and the digital object according to that distance; and when a digital target moves to the interaction judgment surface, controlling the digital object to perform an interactive action on the digital target according to an interaction instruction from the VR device. The technical scheme of the invention solves the problem that accurate interaction between a digital object and digital targets is difficult to achieve in the prior art.

Description

Method and system for detecting relative positions of multiple digital targets and user in virtual scene
Technical Field
The invention relates to the technical field of virtual reality, in particular to a method and a system for detecting relative positions of multiple digital targets and a user in a virtual scene.
Background
VR (Virtual Reality) is an advanced human-computer interaction technology that combines computer graphics, human-machine interface technology, sensor technology, artificial intelligence, and related fields to create a realistic artificial simulated environment and effectively reproduce the various senses a person experiences in a natural environment.
Presenting a VR scene usually requires the user to wear or hold VR equipment, such as a VR helmet, a VR head-mounted display, or a VR handle: the virtual-reality simulation device displays the virtual scene and lets the user interact with it, thereby simulating the user's various perceptions. A virtual scene often contains a large number of virtual digital targets, such as virtual cars, cabins, buildings, or blocks. The user can use the VR device to generate a corresponding digital object in the virtual scene and interact with the digital targets, for example by avoiding, contacting, or striking them. To realize interaction between the digital object and a digital target, a head-mounted VR device typically captures a picture of the digital target in the virtual scene; the user visually observes the movement of the digital target and then interacts with it using the digital object.
However, in the prior art, methods that control interaction between a digital object and a digital target through a VR device usually rely only on the user visually estimating the position of the digital target in the virtual scene. The position of the digital target therefore cannot be determined accurately, and especially when multiple digital targets exist, accurate interaction between the digital object and the digital targets is difficult to achieve.
Disclosure of Invention
The invention provides a scheme for detecting the relative positions of multiple digital targets and a user in a virtual scene, and aims to solve the problem that accurate interaction between a digital object and a digital target is difficult to achieve in the prior art.
To achieve the above object, according to a first aspect of the present invention, the present invention provides a method for detecting the relative positions of multiple digital targets and a user in a virtual scene, including:
when detecting that a user enters a virtual scene through VR equipment, calculating virtual coordinates of the VR equipment in the virtual scene;
generating a digital object corresponding to the position in a virtual scene according to the virtual coordinate of the VR equipment;
generating a plurality of digital targets in a virtual scene, and setting a moving path of the digital targets according to the distance between the digital targets and a user;
controlling a plurality of digital targets to move towards a user along a moving path;
monitoring the distance between the digital target and the user in real time, and adjusting the position relation between the digital target and the digital object according to the distance between the digital target and the user;
and when a digital target moves to the interaction judgment surface, controlling the digital object to perform an interactive action on the digital target according to an interaction instruction from the VR device.
Preferably, in the above relative-position detection method, the step of generating a plurality of digital targets in the virtual scene and setting a moving path for the digital targets according to the distance between the digital targets and the user includes:
setting a plurality of target birth points which are distributed in parallel at a preset distance right in front of VR equipment;
controlling the digital target to be generated from the position of the corresponding target birth point;
and controlling the plurality of digital targets to sequentially move towards the user according to a first moving speed according to a preset behavior instruction.
Preferably, in the above relative-position detection method, the step of adjusting the positional relationship between the digital targets and the digital object according to the distance between the digital targets and the user includes:
setting a target variable speed surface of a plurality of digital targets, and controlling the digital targets to move towards a user at a second moving speed when the digital targets reach the target variable speed surface;
setting an interaction judgment surface for the plurality of digital targets, and controlling the digital object to interact with a digital target when that digital target reaches the interaction judgment surface;
and setting a target destruction surface for the plurality of digital targets, and destroying a digital target when it reaches the target destruction surface.
Preferably, the above relative-position detection method further comprises, after the step of adjusting the positional relationship between the digital targets and the digital object according to the distance between the digital targets and the user:
acquiring a virtual coordinate of a digital target in a virtual scene in real time;
judging whether the digital target reaches the interaction judging surface or not according to the virtual coordinate of the digital target and the virtual coordinate of the interaction judging surface;
and if the digital target is judged to reach the interaction judgment surface, sending proximity prompt information to the user through VR equipment to obtain an interaction instruction of the user.
Preferably, in the above method for detecting a relative position, the step of determining whether the digital target reaches the interaction determination surface based on the virtual coordinates of the digital target and the virtual coordinates of the interaction determination surface includes:
calculating a second moving speed of the digital target according to the speed configuration parameter of the digital target and the distance between the target speed change surface and the interaction judgment surface;
calculating the virtual coordinate of the digital target in real time according to the second moving speed of the digital target;
and when the virtual coordinate of the digital target is coincident with the virtual coordinate of the interaction judgment surface, judging that the digital target reaches the interaction judgment surface, and controlling the VR equipment to send approach prompt information to the user.
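The arrival check described in the steps above can be sketched as follows. This is a hedged illustration only: the function names, the assumption that targets travel toward the user along a decreasing Z axis, and the interpretation of the speed configuration parameter as a configured travel time are all mine, not the patent's.

```python
# Illustrative sketch: derive the second moving speed from the gap between
# the speed-change surface and the interaction judgment surface, then flag
# arrival when the integrated position coincides with that surface.
def second_speed(shift_z: float, interact_z: float,
                 travel_time_s: float) -> float:
    """Speed (m/s) needed to cover the surface gap in the configured time.
    Treating the speed configuration parameter as a travel time is an
    assumption for this sketch."""
    return (shift_z - interact_z) / travel_time_s

def reaches_interaction_surface(shift_z: float, interact_z: float,
                                speed2: float, t: float,
                                eps: float = 1e-3) -> bool:
    """Integrate the target's position for t seconds along -Z and test
    coincidence with the interaction judgment surface (with tolerance)."""
    z = shift_z - speed2 * t
    return z <= interact_z + eps
```

In practice the coincidence test needs a tolerance, since a per-frame position update will rarely land exactly on the surface coordinate.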
Preferably, the step of controlling the digital object to perform an interactive action on the digital target according to the interaction instruction of the VR device includes:
calculating the virtual coordinate of the digital object according to the virtual coordinate of the VR equipment and the connection relation between the VR equipment and the digital object;
calculating a distance between the digital object and the digital target using the virtual coordinates of the digital object and the virtual coordinates of the digital target;
calculating the direction and position of the interaction of the digital object and the digital target by using the distance between the digital object and the digital target;
and prompting the user to make the digital object interact with the digital target using the calculated direction and position of the interaction.
Preferably, the relative-position detection method further comprises, after the step of controlling the digital object to perform the interactive action on the digital target according to the interaction instruction of the VR device:
detecting interaction state information of a digital object and a digital target;
and controlling the VR equipment to generate and feed back a physical response signal according to the interaction state information and the connection relation between the digital object and the VR equipment.
Preferably, in the method for detecting the relative position, the step of controlling the VR device to generate and feed back the physical response signal according to the interaction state information and the connection relationship between the digital object and the VR device includes:
setting a physical object corresponding to the digital object, and tracking the position of the digital object according to the relative position of the physical object and the digital object in each frame of image;
when the digital object carries out interaction action with the digital target, calculating interaction state information of the physical object and the digital target, wherein the interaction state information comprises interaction position and interaction time;
and calculating and controlling the magnitude and direction of the output force of the VR device by using the interaction position and the interaction time and the connection relation between the digital object and the VR device.
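The force-feedback step above can be sketched as follows. The impulse-style model, the gain constant, and all names are assumptions for illustration; the patent specifies only that the magnitude and direction of the VR device's output force are computed from the interaction position, interaction time, and the object-device connection.

```python
import math

# Hypothetical sketch: direction points from the digital object's bound
# point toward the contact position; magnitude decays for longer (softer)
# contacts via an assumed inverse-time law.
def feedback_force(contact_point, object_point, contact_time_s, k=2.0):
    """Return (magnitude, unit direction) for the device's output force."""
    dx, dy, dz = (contact_point[i] - object_point[i] for i in range(3))
    norm = math.sqrt(dx * dx + dy * dy + dz * dz) or 1.0
    direction = (dx / norm, dy / norm, dz / norm)
    magnitude = k / max(contact_time_s, 1e-3)  # clamp to avoid divide-by-zero
    return magnitude, direction
```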
Preferably, the step of tracking the position of the digital object according to the relative position between the physical object and the digital object in each frame of image includes:
calculating linear and angular velocities of the digital object;
calculating a tracking linear velocity of the physical object using a distance difference between the physical object and the digital object and a linear velocity of the digital object; and the number of the first and second groups,
angular velocity of the physical object is simulated using the angular velocity of the digital object.
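The three tracking steps above can be sketched as follows: the physical proxy's linear velocity combines the position error with the digital object's own linear velocity, while its angular velocity is mirrored directly. The proportional gain and all names are assumptions.

```python
# Hedged sketch of per-frame tracking: feed-forward the digital object's
# linear velocity, correct with the position difference, copy the angular
# velocity unchanged.
def tracking_velocities(phys_pos, digi_pos, digi_lin_vel, digi_ang_vel,
                        gain=8.0):
    """Return (linear, angular) velocities to apply to the physical object."""
    lin = tuple(digi_lin_vel[i] + gain * (digi_pos[i] - phys_pos[i])
                for i in range(3))
    ang = digi_ang_vel  # angular velocity is simulated (mirrored) directly
    return lin, ang
```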
According to a second aspect of the present invention, the present invention further provides a system for detecting the relative positions of multiple digital targets and a user in a virtual scene, including:
the system comprises a memory, a processor and a relative position detection program which is stored on the memory and can run on the processor, wherein the relative position detection program realizes the steps of the relative position detection method according to any one technical scheme when being executed by the processor.
In summary, according to the relative-position detection scheme provided by the technical solution of the present invention, when it is detected that the user enters the virtual scene through the VR device, the virtual coordinates of the VR device in the virtual scene are calculated, so that the user's position in the virtual scene can be determined. A digital object connected with the VR device is then generated in the virtual scene according to those virtual coordinates; because the digital object corresponds to the position of the VR device, the digital object moves whenever the user moves the VR device. A plurality of digital targets are generated in the virtual scene, their moving paths are set, and the digital targets are controlled to move toward the user along those paths, giving the user enough time to observe the positions and distances of the digital targets and to control the VR device so that the digital object interacts with them.
In addition, the distance between the digital targets and the user (that is, between the digital targets and the VR device) is monitored in real time, and the positional relationship between the digital targets and the digital object is adjusted according to that distance. For example, several sensing surfaces are arranged between the digital targets and the user, and a corresponding adjustment is performed when a digital target moves to each surface. Specifically, when a digital target moves to the interaction judgment surface, the user is prompted to interact with it through the VR device, and the VR device obtains the user's interaction instruction, so that the digital object can be controlled to perform an interactive action on the digital target according to that instruction. Because the positional relationship between the digital targets and the digital object is adjusted and an interaction judgment surface is set, the user can be prompted to interact at exactly the moment a digital target reaches that surface; the digital object is then controlled to perform the interactive action precisely, so the position of the digital target can be determined accurately and the interaction performed accurately.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the structures shown in the drawings without creative efforts.
Fig. 1 is a schematic flowchart of a first method for detecting interaction between a digital object and a digital target in a virtual scene according to an embodiment of the present invention;
FIG. 2 is a flow chart illustrating a moving path setting method provided by the embodiment shown in FIG. 1;
FIG. 3 is a schematic flow chart illustrating a method for adjusting a positional relationship according to the embodiment shown in FIG. 1;
fig. 4 is a schematic flowchart of a second method for detecting interaction between a digital object and a digital target in a virtual scene according to an embodiment of the present invention;
FIG. 5 is a flowchart illustrating a method for determining a position of a digital target according to the embodiment shown in FIG. 4;
FIG. 6 is a flow chart illustrating a method for performing an interaction according to the embodiment shown in FIG. 1;
fig. 7 is a flowchart illustrating a third method for detecting interaction between a digital object and a digital target in a virtual scene according to an embodiment of the present invention;
FIG. 8 is a flow chart illustrating a method for generating a physical response signal according to the embodiment shown in FIG. 7;
FIG. 9 is a flowchart illustrating a method for tracking a location of a digital object according to the embodiment shown in FIG. 8;
fig. 10 is a schematic structural diagram of a target birth point according to an embodiment of the present invention;
fig. 11 is a schematic structural diagram of a virtual scene according to an embodiment of the present invention;
FIG. 12 is a schematic structural diagram of a moving path of a digital target according to an embodiment of the present invention;
fig. 13 is a schematic structural diagram of a system for detecting interaction between a digital object and a digital target in a virtual scene according to an embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The embodiment of the invention mainly solves the technical problems that:
in order to realize the interaction between a digital object and a digital target in the prior art, a head-mounted VR device is generally used to acquire a picture of the digital target in a virtual scene, and then the digital object is used to interact with the digital target. However, whether the digital object successfully interacts with the digital target, such as whether the digital object successfully hits the digital target, whether the digital object is avoided, and the like, cannot be sensed. That is to say, the real sense of touch of human body is difficult to simulate to present virtual equipment, leads to user experience effect not good.
To solve the above problem, the following embodiments of the present invention provide an interaction detection scheme for a digital object and a digital target in a virtual scene. An interaction instruction forwarded by the VR device is obtained, and the digital object is controlled to interact with the digital target according to that instruction; the interaction state information of the digital object and the digital target is then detected, and information such as the magnitude of the force the VR device should receive is determined from the interaction state information and the connection relationship between the digital object and the VR device. The VR device can thereby be controlled to generate and feed back a physical response signal, simulating the real touch between the digital target and the human body and providing the user with a better experience.
To achieve the above object, please refer to fig. 1, where fig. 1 is a schematic flow chart of a method for detecting interaction between a digital object and a digital target in a virtual scene according to an embodiment of the present invention. As shown in fig. 1, the method for detecting interaction between a digital object and a digital target in a virtual scene includes:
s110: and when detecting that the user enters the virtual scene through the VR device, calculating the virtual coordinates of the VR device in the virtual scene.
In this embodiment of the application, a panoramic three-dimensional virtual scene is constructed in advance, and the user observes it with a VR device, for example a VR head-mounted display, so the virtual coordinates of the VR device can be calculated as soon as the user is detected entering the virtual scene. When the virtual scene is preloaded, the computer system takes the (0,0) origin of the virtual scene, presents and locks the user position at the origin's X-axis position, and reads the height of the VR head-mounted display, for example the midpoint between its two lenses, as the Y-axis position of the VR device in the coordinate system.
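The coordinate initialization of S110 can be sketched as follows. This is a minimal illustration under assumptions: the class and function names, the meter units, and the choice of Z as the depth axis toward the targets are mine, not the patent's.

```python
from dataclasses import dataclass

@dataclass
class VirtualCoordinate:
    x: float  # locked to the scene origin's X position
    y: float  # headset height (midpoint between the two lenses)
    z: float  # assumed depth axis toward the digital targets

def locate_vr_device(headset_height_m: float) -> VirtualCoordinate:
    """Lock the user at the origin's X position and read the headset
    height as the Y coordinate, as described for scene preloading."""
    origin_x, origin_z = 0.0, 0.0
    return VirtualCoordinate(x=origin_x, y=headset_height_m, z=origin_z)
```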
S120: generating a digital object at the corresponding position in the virtual scene according to the virtual coordinates of the VR device. The digital object is a virtual object connected to the VR device in the virtual scene. For example, the VR device can be handheld, so a digital object bound to it can be generated in the virtual scene from the device's virtual coordinates combined with information such as the shape and length of the digital object. Specifically, the digital object bound to the handheld VR device, such as a VR handle, can be presented at the palm position of the handheld device, showing the positions of the user's hands in the virtual space.
In the technical scheme provided by this embodiment of the application, a Box digital model is constructed as a transparent four-sided cube. In an application instance, the transparent Box model is never presented to the user; it serves only as an adaptation bridge between the handheld VR device and the digital object the user controls. A position point is bound inside the Box model: point A, which coincides in virtual space with the palm position of the user's hands (specifically, of the handheld VR device). A holding point of the digital object held by the user, point B, is also designed; the model of the digital object is inserted into the Box model so that points A and B coincide. In this way the model style of the digital object can be changed flexibly, without being limited to any specific model form, and by adjusting the point positions of the digital object and the Box model, the holding point of the digital object is displayed at the palms of the user's hands.
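The Box-adapter binding just described can be sketched as follows: translate the object's model so its grip point B lands on the Box's bound point A. The vector type and function names are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float
    z: float
    def __sub__(self, o): return Vec3(self.x - o.x, self.y - o.y, self.z - o.z)
    def __add__(self, o): return Vec3(self.x + o.x, self.y + o.y, self.z + o.z)

def attach_object_to_box(box_point_a: Vec3, object_grip_b: Vec3,
                         object_origin: Vec3) -> Vec3:
    """Return the object model's new origin after translating it so that
    its grip point B coincides with the Box's bound point A. Because only
    a translation offset is involved, the object model can be swapped
    freely without changing the attachment logic."""
    offset = box_point_a - object_grip_b
    return object_origin + offset
```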
S130: generating a plurality of digital targets in the virtual scene, and setting moving paths for the digital targets according to the distance between the digital targets and the user. The digital targets are generated from target birth points at different locations in the virtual scene. Specifically, target birth points are preset at several different positions; digital targets generated from different birth points appear and approach the user's position, and the user must control one or more digital objects to collide with or avoid the digital targets approaching from those positions. It is therefore necessary to determine the distance from each target's birth point to the user and then set the target's moving path according to that distance; the moving path of most digital targets is a straight line.
S140: controlling the plurality of digital targets to move toward the user along the moving paths. While approaching the user's position, the digital targets execute movement commands according to preset paths, positions, and times; the environments of the different moving paths in the virtual scene are configured through the digital-target parameters stored by the computer system.
S150: monitoring the distance between the digital targets and the user in real time, and adjusting the positional relationship between the digital targets and the digital object according to that distance. Specifically, a target speed-change surface, an interaction judgment surface, and a target destruction surface can be established according to the distance between the targets' birth points and the user. A digital target changes speed at the speed-change surface: the time or speed at which it reaches that surface is calculated and the corresponding acceleration or deceleration is performed. The interaction judgment surface is the position where the one or more digital objects make collision contact with the digital targets, and the digital objects are controlled through the VR device to collide with targets there. The target destruction surface is located behind the user and destroys the digital targets: for example, a target that never collides with any user-controlled digital object is destroyed when it moves to the destruction surface.
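The three sensing surfaces of S150 can be sketched as a simple depth classifier. The sketch assumes targets travel along -Z toward a user at z = 0, with the destruction plane behind the user; the plane positions and return labels are illustrative values, not the patent's.

```python
# Assumed plane depths along the approach axis (meters): targets cross the
# speed-change surface first, then the interaction judgment surface, and
# are destroyed once they pass behind the user.
SHIFT_Z, INTERACT_Z, DESTROY_Z = 6.0, 1.5, -1.0

def surface_event(target_z: float) -> str:
    """Report which adjustment applies at the target's current depth."""
    if target_z <= DESTROY_Z:
        return "destroy"       # passed behind the user: remove the target
    if target_z <= INTERACT_Z:
        return "interact"      # within the interaction judgment surface
    if target_z <= SHIFT_Z:
        return "shift_speed"   # crossed the speed-change surface
    return "approach"          # still moving at the first speed
```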
S160: when a digital target moves to the interaction judgment surface, controlling the digital object to perform an interactive action on the digital target according to the interaction instruction of the VR device. When the digital target reaches the interaction judgment surface, if the user initiates an interactive behavior such as a chop toward it through one or more controlled digital objects and touches it, the digital target is immediately destroyed or split in two; if the user initiates no chop or does not touch the target, the target continues executing its movement command until it destroys itself at the destruction surface. The user can thus observe the interaction effect in the virtual scene through the VR device.
In summary, in the method for detecting interaction between a digital object and a digital target in a virtual scene provided by the embodiments of the present invention, when it is detected that the user enters the virtual scene through the VR device, the virtual coordinates of the VR device in the virtual scene are calculated, so that the user's position in the virtual scene can be determined. A digital object connected with the VR device is then generated in the virtual scene according to those virtual coordinates; because the digital object corresponds to the position of the VR device, the digital object moves whenever the user moves the VR device. A plurality of digital targets are generated in the virtual scene, their moving paths are set, and the digital targets are controlled to move toward the user along those paths, giving the user enough time to observe the positions and distances of the digital targets and to control the VR device so that the digital object interacts with them.
In addition, the distance between the digital targets and the user (that is, between the digital targets and the VR device) is monitored in real time, and the positional relationship between the digital targets and the digital object is adjusted according to that distance. For example, several sensing surfaces are arranged between the digital targets and the user, and a corresponding adjustment is performed when a digital target moves to each surface. Specifically, when a digital target moves to the interaction judgment surface, the user is prompted to interact with it through the VR device, and the VR device obtains the user's interaction instruction, so that the digital object can be controlled to perform an interactive action on the digital target according to that instruction. Because the positional relationship between the digital targets and the digital object is adjusted and an interaction judgment surface is set, the user can be prompted to interact at exactly the moment a digital target reaches that surface; the digital object is then controlled to perform the interactive action precisely, so the position of the digital target can be determined accurately and the interaction performed accurately.
As a preferred embodiment, as shown in fig. 2, the step S130: generating a plurality of digital targets in a virtual scene, and setting a moving path of the digital targets according to the distance between the digital targets and a user, wherein the moving path comprises the following steps:
S131: a plurality of target birth points distributed in parallel are disposed at a predetermined distance directly in front of the VR device.
S132: each digital target is controlled to be generated at the position of its corresponding target birth point.
S133: the plurality of digital targets are controlled, according to a preset behavior instruction, to move towards the user in sequence at a first moving speed.
In the technical solution provided by this embodiment of the application, the target birth points of a plurality of digital targets are arranged directly in front of the user position in the virtual scene, and the computer system reads and locks the distance (unit: meter) between each birth point and the user position. Referring specifically to fig. 10, the virtual scene includes, but is not limited to, 9 digital target birth points located directly in front of the user position at a fixed distance. The birth points are distributed in parallel over a plurality of preset scene coordinate points at a preset distance from the user position; each preset coordinate point is defined by its width and height offsets on the (X, Y) axes relative to the user position, and the transverse and longitudinal spacing between the birth points is preset. Through an action command initiated by the motion controller, the digital targets approach the user position along a linear path from their preset generation positions. Each target birth point supports the rendering and movement of only one digital target at a time. By editing the preset digital targets and the positions of the target birth points, a plurality of digital targets can be presented continuously in the virtual space and brought towards the user position.
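By way of a hedged sketch, the parallel birth-point layout of steps S131 to S133 could be generated as follows. This is an illustration only; the 3 x 3 grid shape, the spacing values and the function name are assumptions rather than details disclosed above.

```python
def make_birth_points(user_pos, forward_dist, h_spacing=1.0, v_spacing=1.0,
                      cols=3, rows=3):
    """Lay out cols x rows target birth points at a fixed distance directly
    in front of the user, offset on the (X, Y) axes by preset transverse
    and longitudinal spacing (all values illustrative)."""
    ux, uy, uz = user_pos
    points = []
    for r in range(rows):
        for c in range(cols):
            x = ux + (c - (cols - 1) / 2) * h_spacing  # transverse offset
            y = uy + (r - (rows - 1) / 2) * v_spacing  # height offset
            z = uz + forward_dist                      # directly in front
            points.append((x, y, z))
    return points

# 9 birth points, as in the fig. 10 example
points = make_birth_points((0.0, 1.5, 0.0), forward_dist=100.0)
```

Each returned coordinate would host at most one rendered digital target at a time, matching the one-target-per-birth-point constraint above.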
In addition, as a preferred embodiment, as shown in fig. 3, the step S150 in the embodiment shown in fig. 1 of adjusting the positional relationship between the digital target and the digital object according to the distance between the digital target and the user specifically comprises:
S151: a target variable speed surface is set for the plurality of digital targets, and when a digital target reaches the target variable speed surface, it is controlled to move towards the user at a second moving speed.
As shown in fig. 11, the computer system establishes a target variable speed surface 1 for the digital targets according to the positions of the digital targets and the user. When a digital target passes through the target variable speed surface 1 from its target birth point, the second moving speed (or the time) with which it will reach the interaction judgment surface 2 is calculated from the target's speed configuration data (a moving speed coefficient and a global control coefficient) and the distance between the target variable speed surface 1 and the interaction judgment surface 2, and an acceleration or deceleration behavior is executed accordingly. This process requires presetting the position of the target variable speed surface 1 and the distance (unit: meter) between the target variable speed surface 1 and the interaction judgment surface 2.
S152: an interaction judgment surface is set for the plurality of digital targets, and when a digital target reaches the interaction judgment surface, the digital object is controlled to interact with it.
As shown in fig. 11, the computer system establishes an interaction judgment surface 2. The interaction judgment surface 2 is the optimal decision surface at which the user controls one or more digital objects to collide with or contact the plurality of digital targets. Its position is set according to the sum of the physical length of the user's arm and that of the one or more digital objects, so that the digital objects can collide with a digital target at an appropriate position; on this basis the distance (unit: meter) between the interaction judgment surface 2 and the user position is preset.
S153: a target destruction surface is set for the plurality of digital targets, and a digital target is destroyed when it reaches the target destruction surface.
As shown in fig. 11, the computer system establishes a target destruction surface 3. When a digital target has not collided with the one or more digital objects controlled by the user, it moves on to the target destruction surface 3 and a destruction action is executed. The target destruction surface 3 is located behind the user, and its distance from the user position is measured in meters.
As shown in fig. 12 in particular, each digital target performs a linear movement from its target birth point towards the target destruction surface. After moving 86 m in 0.3 s to reach the target variable speed surface, it travels a further 13 m to reach the interaction judgment surface. When the digital target reaches the interaction judgment surface, if the user, 1 m away, initiates a chopping action on the digital target through one or more digital objects and touches it, the digital target is destroyed immediately; if the user does not initiate the chopping action or does not touch the digital target, the digital target continues to execute its movement command and is destroyed automatically after moving a further 2 m to the target destruction surface.
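The surface sequence of the fig. 12 example can be sketched as a simple distance-travelled check. The distances (86 m, 13 m, 2 m) come from the example above; the function name and the region labels are illustrative assumptions.

```python
def surface_region(travelled, d_shift=86.0, d_interact=13.0, d_destroy=2.0):
    """Classify a target by distance travelled from its birth point:
    before the target variable speed surface, between it and the
    interaction judgment surface, between that and the target destruction
    surface, or past the destruction surface."""
    if travelled < d_shift:
        return "before_variable_speed_surface"
    if travelled < d_shift + d_interact:
        return "towards_interaction_surface"
    if travelled < d_shift + d_interact + d_destroy:
        return "towards_destruction_surface"
    return "destroyed"
```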
In addition, as a preferred embodiment, as shown in fig. 4, in the relative position detection method provided by this embodiment of the application, the step S150 of adjusting the positional relationship between the digital target and the digital object according to the distance between the digital target and the user further comprises:
S210: the virtual coordinates of the digital target in the virtual scene are acquired in real time.
S220: whether the digital target reaches the interaction judgment surface is judged according to the virtual coordinates of the digital target and of the interaction judgment surface.
S230: if the digital target is judged to have reached the interaction judgment surface, proximity prompt information is sent to the user through the VR device so as to obtain the user's interaction instruction.
According to this technical solution, the virtual coordinates of the digital target in the virtual scene are acquired in real time, and the real-time distance between the digital target and the interaction judgment surface can then be calculated from the virtual coordinates of the two. When the digital target's coordinates coincide with those of the interaction judgment surface, the digital target is judged to have reached the surface; at that moment, proximity prompt information is sent to the user through the VR device, informing the user to operate the VR device and control the digital object to perform interactive operations on the digital target, such as collision, cutting or beating. The interaction judgment surface is the optimal decision surface at which the user controls one or more digital objects to make collision contact with the plurality of digital targets; its position is selected according to the sum of the physical length of the user's arm and that of the controlled digital objects, so that the digital objects collide with the digital target at an appropriate position.
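Because positions are sampled per frame, an exact coordinate coincidence is usually implemented in practice as a crossing check between two consecutive samples. The sketch below assumes, purely for illustration, that targets travel towards the user along decreasing Z; the function names are not from the disclosure.

```python
def reached_interaction_surface(prev_z, curr_z, surface_z):
    """True when the target's Z coordinate coincides with, or passes,
    the interaction judgment surface between two frames (movement along
    decreasing Z is an assumption of this sketch)."""
    return prev_z >= surface_z >= curr_z

def frame_update(prev_z, curr_z, surface_z):
    """Return the proximity prompt to send through the VR device, if any."""
    if reached_interaction_surface(prev_z, curr_z, surface_z):
        return "proximity_prompt"
    return None
```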
As a preferred embodiment, as shown in fig. 5, the step S220 of judging whether the digital target reaches the interaction judgment surface according to the virtual coordinates of the digital target and of the interaction judgment surface specifically comprises:
S221: the second moving speed of the digital target is calculated according to the speed configuration parameters of the digital target and the distance between the target variable speed surface and the interaction judgment surface.
S222: the virtual coordinates of the digital target are calculated in real time according to its second moving speed.
S223: when the virtual coordinates of the digital target coincide with those of the interaction judgment surface, the digital target is judged to have reached the interaction judgment surface, and the VR device is controlled to send proximity prompt information to the user.
According to the technical solution provided by this embodiment of the application, when a digital target passes through the target variable speed surface from its target birth point, the second moving speed (or the time) with which it will reach the interaction judgment surface is calculated from the target's speed configuration parameters (the moving speed coefficient and the global control coefficient) and the distance between the target variable speed surface and the interaction judgment surface, and an acceleration or deceleration behavior is then executed at that speed. When the virtual coordinates of the digital target coincide with those of the interaction judgment surface, the digital target is judged to have reached the surface; the VR device can then be controlled to send proximity prompt information to the user, so that the user has enough time to operate the VR device and interact with the digital target.
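The description names a moving speed coefficient and a global control coefficient but does not disclose how they combine; the simple product below is therefore only an assumed formula for illustration, not the patented calculation.

```python
def second_moving_speed(base_speed, speed_coeff, global_coeff):
    """Assumed combination of the speed configuration parameters: a plain
    product of base speed, moving speed coefficient and global control
    coefficient."""
    return base_speed * speed_coeff * global_coeff

def time_to_interaction_surface(distance, speed):
    """Time for the target to cover the preset distance between the target
    variable speed surface and the interaction judgment surface."""
    return distance / speed
```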
As a preferred embodiment, as shown in fig. 6, in the above relative position detection method, the step S160 of controlling the digital object to perform an interactive action on the digital target according to the interaction instruction of the VR device comprises:
S161: the virtual coordinates of the digital object are calculated according to the virtual coordinates of the VR device and the connection relationship between the VR device and the digital object.
S162: the distance between the digital object and the digital target is calculated using the virtual coordinates of the digital object and the virtual coordinates of the digital target.
S163: the direction and position of the interaction between the digital object and the digital target are calculated using the distance between them.
S164: the user is prompted to interact with the digital target using the calculated interaction direction and position.
In the technical solution provided by this embodiment of the application, because the digital object is directly connected with the VR device and the holding point of the VR device coincides with that of the digital object, the virtual coordinates of the digital object can be calculated from the shape of the digital object. The distance between the digital object and the digital target can be calculated by directly subtracting their virtual coordinates, which makes it convenient for the user to position the digital object relative to the digital target. From this distance, the interaction direction and position of the digital object and the digital target can then be calculated by a preset algorithm, so that the user is quickly prompted to interact with the digital target in the correct direction and at the correct position.
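Steps S161 to S163 reduce to plain vector arithmetic, sketched below. The grip offset value is an illustrative assumption standing in for the coordinate derived from the digital object's shape.

```python
import math

def object_coords(vr_coords, grip_offset):
    """The digital object's coordinate is the VR device coordinate plus a
    fixed offset, since the two share the same holding point."""
    return tuple(v + o for v, o in zip(vr_coords, grip_offset))

def distance_and_direction(obj, target):
    """Distance by direct subtraction of virtual coordinates, and the unit
    direction from the digital object to the digital target."""
    diff = tuple(t - o for o, t in zip(obj, target))
    dist = math.sqrt(sum(d * d for d in diff))
    direction = tuple(d / dist for d in diff) if dist else (0.0, 0.0, 0.0)
    return dist, direction
```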
In addition, as a preferred embodiment, as shown in fig. 7, in the relative position detection method provided by this embodiment of the application, after the step S160 of controlling the digital object to perform an interactive action on the digital target according to the interaction instruction of the VR device, the method further comprises:
S310: interaction state information of the digital object and the digital target is detected.
S320: the VR device is controlled to generate and feed back a physical response signal according to the interaction state information and the connection relationship between the digital object and the VR device.
According to the technical solution provided by this embodiment of the application, the interaction state information of the digital object and the digital target, such as the interaction position and the interaction time, is detected so that, combined with the connection relationship between the digital object and the VR device, the VR device can be controlled to generate and feed back a physical response signal, such as the magnitude and direction of an output force, producing sensory feedback such as vibration on the VR device.
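A hedged sketch of how interaction state might map to a haptic output follows. The proportional model and all names below are assumptions, since the description specifies only that magnitude and direction are derived from the interaction position, the interaction time and the connection relationship.

```python
def physical_response(impact_speed, lever_arm, direction):
    """Assumed model: output-force magnitude grows with impact speed and
    with the lever arm from the grip point to the interaction position;
    the direction is passed through to the VR device unchanged."""
    magnitude = impact_speed * (1.0 + lever_arm)
    return magnitude, direction
```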
As a preferred embodiment, as shown in fig. 8, in the relative position detection method provided by this embodiment of the application, the step S320 of controlling the VR device to generate and feed back a physical response signal according to the interaction state information and the connection relationship between the digital object and the VR device comprises:
S321: a physical object corresponding to the digital object is set, and the position of the digital object is tracked according to the relative position of the physical object and the digital object in each frame of image.
S322: when the digital object performs an interactive action with the digital target, the interaction state information of the physical object and the digital target is calculated, including the interaction position and the interaction time.
S323: the magnitude and direction of the VR device's output force are calculated and controlled using the interaction position, the interaction time and the connection relationship between the digital object and the VR device.
According to this technical solution, the position output of the VR system is matched to the physical system through an adaptation bridge. The purpose is to convert the digital object information output by the VR system into a physical force and input it into the physical system. Specifically: every frame, the adaptation bridge acquires the digital object parameters in the VR system (including the time and position of the rendered digital object and of the physical digital object); the adaptation bridge then outputs a force to the physical system, forming a force calculation condition from that force (a linear velocity and an angular velocity); finally, the physical system executes the calculation according to the received force calculation condition, and the computer system initiates a tracking command to the physical digital object according to the digital object parameters output by the VR system and the result of the force calculation.
In the technical solution provided by this embodiment of the application, the computer system captures the time and position information of the rendered digital object output by the VR application system, so the motion time and position changes of the rendered digital object are known. A physical object of the same model as the rendered digital object is then preset, a fixed frame rate (e.g. 65 frames per second) is set for it in the computer system, and the computer system detects the position of the physical object relative to the rendered digital object every frame. Both the physical object and the rendered digital object are controlled by the user through the VR handle; the rendered digital object executes a motion behavior first, and the physical object executes a tracking command towards the rendered digital object after the computer system detects their relative position each frame.
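The adaptation bridge described above can be sketched as a per-frame loop that turns the rendered object's pose into a force calculation condition (a linear velocity and an angular velocity). The class below is a one-dimensional illustration; the 65 fps default comes from the example above, and everything else is an assumption.

```python
class AdaptationBridge:
    """Per frame: capture the rendered digital object's position and
    rotation from the VR system, and output a (linear velocity, angular
    velocity) force condition to the physical system."""
    def __init__(self, frame_rate=65):
        self.dt = 1.0 / frame_rate
        self.prev_pos = None
        self.prev_rot = None

    def capture(self, pos, rot_deg):
        if self.prev_pos is None:  # first frame: nothing to track yet
            self.prev_pos, self.prev_rot = pos, rot_deg
            return 0.0, 0.0
        linear = (pos - self.prev_pos) / self.dt
        angular = (rot_deg - self.prev_rot) / self.dt
        self.prev_pos, self.prev_rot = pos, rot_deg
        return linear, angular
```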
Referring to fig. 9, the step S321 of tracking the position of the digital object according to the relative position of the physical object and the digital object in each frame of image includes:
S3211: the linear and angular velocities of the digital object are calculated.
S3212: the tracking linear velocity of the physical object is calculated using the distance difference between the physical object and the digital object and the linear velocity of the digital object.
S3213: the angular velocity of the physical object is simulated using the angular velocity of the digital object.
The tracking method for the physical object is as follows. In a typical application, the one or more digital objects controlled by the user usually have only two motion behaviors, linear movement and rotation, so the computer system executes the tracking command using the linear velocity and angular velocity between the physical object and the rendered digital object. Each frame, the computer system detects the distance difference between the current position of the physical digital object and the position of the rendered digital object currently transmitted by the VR application system, together with the sampling time, to obtain the linear velocity required for tracking; and the rotation angle of the currently transmitted rendered digital object is divided by its rotation time, obtained through the VR system, to derive the angular velocity of the rendered digital object, which forms the simulated angular velocity of the physical object.
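Steps S3212 and S3213 reduce to two short formulas, sketched below; the function names and the per-frame sampling interval are illustrative assumptions.

```python
def tracking_linear_velocity(physical_pos, rendered_pos, sample_dt):
    """Linear velocity that lets the physical object close the per-frame
    distance difference to the rendered digital object within one
    sampling interval."""
    return (rendered_pos - physical_pos) / sample_dt

def simulated_angular_velocity(rotation_angle_deg, rotation_time):
    """Angular velocity of the rendered object (rotation angle divided by
    rotation time); the physical object simply copies it."""
    return rotation_angle_deg / rotation_time
```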
In addition, referring to fig. 13, fig. 13 is a schematic structural diagram of a system for detecting relative positions of multiple digital targets and a user in a virtual scene according to an embodiment of the present invention. As shown in fig. 13, the relative position detecting system includes:
a communication line 1002, a communication module 1003, a memory 1004, a processor 1001 and a relative position detection program stored on the memory 1004 and operable on the processor 1001, the relative position detection program when executed by the processor 1001 implementing the steps of the relative position detection method according to any one of the embodiments described above.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention has been described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should be noted that in the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second and third, etcetera do not indicate any ordering. These words may be interpreted as names.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. A method for detecting relative positions of multiple digital targets and a user in a virtual scene, characterized by comprising the following steps:
when detecting that a user enters a virtual scene through VR equipment, calculating virtual coordinates of the VR equipment in the virtual scene;
generating a digital object corresponding to the position in the virtual scene according to the virtual coordinate of the VR equipment;
generating a plurality of digital targets in the virtual scene, and setting the moving path of the digital targets according to the distance between the digital targets and a user;
controlling the plurality of digital targets to move towards the user along the movement path;
monitoring the distance between the digital target and a user in real time, and adjusting the position relation between the digital target and the digital object according to the distance between the digital target and the user;
and when the digital target moves to an interaction judgment surface, controlling the digital object to execute an interactive action on the digital target according to the interactive instruction of the VR equipment.
2. The relative position detecting method according to claim 1, wherein the step of generating a plurality of digital targets in the virtual scene, and setting a moving path of the digital targets according to a distance between the digital targets and a user includes:
setting a plurality of target birth points in parallel distribution at a predetermined distance directly in front of the VR device;
controlling the digital target to be generated from the position of the corresponding target birth point;
and controlling the plurality of digital targets to sequentially move towards the user according to a first moving speed according to a preset behavior instruction.
3. The relative position detecting method according to claim 1 or 2, wherein the step of adjusting the positional relationship of the digital target and the digital object according to the distance between the digital target and the user includes:
setting target variable speed surfaces of the plurality of digital targets, and controlling the digital targets to move towards a user at a second moving speed when the digital targets reach the target variable speed surfaces;
setting interaction judgment surfaces of the plurality of digital targets, and controlling the digital object to interact with a digital target when the digital target reaches the interaction judgment surface;
and setting target destruction surfaces of the plurality of digital targets, and destroying the digital targets when the digital targets reach the target destruction surfaces.
4. The relative position detecting method according to claim 3, wherein after the step of adjusting the positional relationship of the digital target and the digital object according to the distance between the digital target and the user, the method further comprises:
acquiring virtual coordinates of the digital target in the virtual scene in real time;
judging whether the digital target reaches the interaction judgment surface or not according to the virtual coordinate of the digital target and the virtual coordinate of the interaction judgment surface;
and if the digital target is judged to reach the interaction judgment surface, sending proximity prompt information to a user through the VR equipment so as to obtain an interaction instruction of the user.
5. The method according to claim 4, wherein the step of determining whether the digital object reaches the interaction determination surface based on the virtual coordinates of the digital object and the virtual coordinates of the interaction determination surface includes:
calculating a second moving speed of the digital target according to the speed configuration parameter of the digital target and the distance between the target speed change surface and the interaction judgment surface;
calculating the virtual coordinate of the digital target in real time according to the second moving speed of the digital target;
and when the virtual coordinate of the digital target is coincident with the virtual coordinate of the interaction judgment surface, judging that the digital target reaches the interaction judgment surface, and controlling the VR equipment to send the approach prompt information to a user.
6. The relative position detecting method according to claim 1, wherein the step of controlling the digital object to perform an interactive action on the digital target according to the interactive instruction of the VR device comprises:
calculating the virtual coordinate of the digital object according to the virtual coordinate of the VR equipment and the connection relation between the VR equipment and the digital object;
calculating a distance of the digital object from the digital target using the virtual coordinates of the digital object and the virtual coordinates of the digital target;
calculating a direction and a position of the digital object interacting with the digital target using the distance of the digital object from the digital target;
and prompting a user to interact with the digital target by using the direction and the position of the interaction between the digital object and the digital target.
7. The relative position detection method of claim 1, wherein after the step of controlling the digital object to perform an interactive action on the digital target according to the interactive instruction of the VR device, the method further comprises:
detecting interaction state information of the digital object and the digital target;
and controlling the VR equipment to generate and feed back a physical response signal according to the interaction state information and the connection relation between the digital object and the VR equipment.
8. The method of claim 7, wherein the step of controlling the VR device to generate and feed back the physical response signal according to the interaction status information and the connection relationship between the digital object and the VR device comprises:
setting a physical object corresponding to the digital object, and tracking the position of the digital object according to the relative position of the physical object and the digital object in each frame of image;
when the digital object carries out an interactive action with the digital target, calculating interactive state information of the physical object and the digital target, wherein the interactive state information comprises an interactive position and interactive time;
and calculating and controlling the magnitude and direction of the output force of the VR equipment by using the interaction position and the interaction time and the connection relation of the digital object and the VR equipment.
9. The method according to claim 8, wherein the step of tracking the position of the digital object according to the relative position of the physical object and the digital object in each frame of image comprises:
calculating linear and angular velocities of the digital object;
calculating a tracking linear velocity of the physical object using a distance difference between the physical object and the digital object and a linear velocity of the digital object; and the number of the first and second groups,
simulating the angular velocity of the physical object using the angular velocity of the digital object.
10. A system for detecting relative positions of a plurality of digital objects and a user in a virtual scene, comprising:
memory, a processor and a relative position detection program stored on the memory and executable on the processor, the relative position detection program when executed by the processor implementing the steps of the relative position detection method according to any one of claims 1 to 9.
CN202211117336.7A 2022-09-14 2022-09-14 Method and system for detecting relative positions of multiple digital targets and user in virtual scene Pending CN115482362A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211117336.7A CN115482362A (en) 2022-09-14 2022-09-14 Method and system for detecting relative positions of multiple digital targets and user in virtual scene


Publications (1)

Publication Number Publication Date
CN115482362A true CN115482362A (en) 2022-12-16

Family

ID=84392603

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211117336.7A Pending CN115482362A (en) 2022-09-14 2022-09-14 Method and system for detecting relative positions of multiple digital targets and user in virtual scene

Country Status (1)

Country Link
CN (1) CN115482362A (en)

Similar Documents

Publication Publication Date Title
US8405656B2 (en) Method and system for three dimensional interaction of a subject
US8166421B2 (en) Three-dimensional user interface
CN110476142A (en) Virtual objects user interface is shown
US10453235B2 (en) Image processing apparatus displaying image of virtual object and method of displaying the same
CN109313821B (en) Three-dimensional object scan feedback
Beattie et al. Taking the LEAP with the Oculus HMD and CAD-Plucking at thin Air?
WO2007091008A1 (en) Controlling the motion of virtual objects in a virtual space
CN113710432A (en) Method for determining a trajectory of a robot
JP2023171435A (en) Device and method for generating dynamic virtual content in mixed reality
US11164377B2 (en) Motion-controlled portals in virtual reality
KR100936090B1 (en) The semi-immersive multi computerized numuerical control machine tools simulation system
CN109710077B (en) Virtual object collision judgment method and device based on VR and locomotive practical training system
CN110809751B (en) Methods, apparatuses, systems, computer programs for implementing mediated real virtual content consumption
US6798416B2 (en) Generating animation data using multiple interpolation procedures
CN115482362A (en) Method and system for detecting relative positions of multiple digital targets and user in virtual scene
US20210232289A1 (en) Virtual user detection
US20220111290A1 (en) Haptic engine for spatial computing
CN115482361A (en) Method and system for detecting digital obstacles and digital object contact in virtual scene
Higgins et al. Head Pose as a Proxy for Gaze in Virtual Reality
EP3534241A1 (en) Method, apparatus, systems, computer programs for enabling mediated reality
CN115482363A (en) Method and system for detecting interaction of moving block and digital object in virtual scene
US11398047B2 (en) Virtual reality simulations using surface tracking
WO2022255206A1 (en) Information processing apparatus, information processing method, and computer program
KR20170082028A (en) Rim motion apparatus
CN114637394A (en) Interactive operation system and method for bare hand and simulated touch screen interface in VR environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination