CN114470768A - Virtual item control method and device, electronic equipment and readable storage medium - Google Patents

Virtual item control method and device, electronic equipment and readable storage medium

Info

Publication number
CN114470768A
Authority
CN
China
Prior art keywords
virtual
motion data
real
data
real motion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210139179.3A
Other languages
Chinese (zh)
Other versions
CN114470768B (en)
Inventor
顾佳祺
陈都
王骁玮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd
Priority to CN202210139179.3A
Publication of CN114470768A
Application granted
Publication of CN114470768B
Legal status: Active

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55 Controlling game characters or game objects based on the game progress
    • A63F13/56 Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55 Controlling game characters or game objects based on the game progress
    • A63F13/57 Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game
    • A63F13/573 Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game using trajectories of game objects, e.g. of a golf ball according to the point of impact
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/80 Special adaptations for executing a specific game genre or game mode
    • A63F13/812 Ball games, e.g. soccer or baseball
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/30 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device
    • A63F2300/308 Details of the user interface
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/80 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
    • A63F2300/8011 Ball
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present disclosure provides a virtual item control method, apparatus, electronic device, and storage medium. The virtual item control method includes: acquiring real motion data of a real prop; determining partial real motion data from the real motion data to obtain target real motion data; determining virtual motion data based on the target real motion data; and driving a virtual prop to move in a 3D scene based on the target real motion data and the virtual motion data. The 3D scene is generated by rendering 3D scene information in a 3D rendering environment, the 3D rendering environment runs in an electronic device, the 3D scene information includes virtual prop information, and the virtual prop information is used for generating the virtual prop after rendering. According to the embodiments of the present application, the motion trajectory of the virtual prop can meet the expected requirement, and the display effect of the motion trajectory can be improved.

Description

Virtual item control method and device, electronic equipment and readable storage medium
Technical Field
The present disclosure relates to the field of live broadcast technologies, and in particular, to a virtual item control method, a virtual item control apparatus, an electronic device, and a computer-readable storage medium.
Background
In the related art, the main method of virtual live broadcasting is as follows: control signals representing the motion and expression data of an actor (the real person behind the avatar) are acquired through a motion capture device and used to drive the motions of the avatar.
The live broadcast of an avatar may involve interaction with virtual props. Driving a virtual prop is similar to driving the avatar: motion data of the real prop is obtained through the motion capture device and used to drive the virtual prop to move.
However, due to uncontrollable factors in the actor's handling of real objects, the movement of a virtual prop driven by a real prop often fails to achieve the expected effect. Taking a basketball-shooting scene as an example, driving the virtual basketball with the captured motion data of the real basketball may harm the live broadcast effect: an actor may attempt many shots without scoring, and if the motion data of the real basketball still drives the virtual basketball, the viewing experience of the user suffers.
Disclosure of Invention
The embodiments of the disclosure provide at least a virtual item control method and apparatus, an electronic device, and a storage medium.
The embodiment of the disclosure provides a virtual item control method, which includes:
acquiring real motion data of a real prop;
determining partial real motion data from the real motion data to obtain target real motion data;
determining virtual motion data based on the target real motion data;
driving a virtual prop to move in a 3D scene based on the target real motion data and the virtual motion data; the 3D scene is generated by rendering 3D scene information in a 3D rendering environment, the 3D rendering environment runs in electronic equipment, the 3D scene information comprises virtual prop information, and the virtual prop information is used for generating the virtual prop after rendering.
In the embodiment of the disclosure, after the real motion data of the real prop is obtained, partial real motion data is determined from the real motion data to obtain the target real motion data; virtual motion data is then determined based on the target real motion data, and the virtual prop is driven to move in the 3D scene based on the target real motion data and the virtual motion data. Because the virtual motion data can be configured to achieve the expected effect, the motion trajectory of the virtual prop can meet the expected requirement. In addition, because the virtual motion data is obtained based on part of the real motion data, the matching degree between the virtual motion data and the real motion data is improved, which further improves the display effect of the motion trajectory of the virtual prop.
In a possible implementation, the determining partial real motion data from the real motion data to obtain target real motion data includes:
determining initial real motion data from the real motion data to obtain target real motion data;
the determining virtual motion data based on the target real motion data comprises:
determining the virtual motion data based on an initial velocity in the initial real motion data, a timestamp of the virtual motion data being later than a timestamp of the initial real motion data.
In the embodiment of the present disclosure, the virtual motion data is determined based on the initial velocity in the initial real motion data, and the timestamp of the virtual motion data is later than the timestamp of the initial real motion data, that is, the subsequent motion data is determined according to the initial velocity of the real prop, so that not only is the verisimilitude of the motion trajectory of the virtual prop ensured, but also the motion trajectory of the virtual prop can conform to the expected effect.
In a possible embodiment, the virtual motion data includes a first part of virtual data and a preset second part of virtual data, and a timestamp of the first part of virtual data is earlier than a timestamp of the preset second part of virtual data, and the determining the virtual motion data based on an initial velocity in the initial real motion data includes:
determining the first portion of virtual data based on an initial velocity in the initial real motion data;
the driving the virtual prop to move in the 3D scene based on the target real motion data and the virtual motion data comprises:
and driving the virtual prop to move in the 3D scene based on the target real motion data, the first part of virtual data and the preset second part of virtual data in sequence.
In the embodiment of the disclosure, the virtual motion data includes a first part of virtual data and a preset second part of virtual data, and the first part of virtual data is determined based on the initial velocity in the initial real motion data. The preset second part of virtual data ensures that the final result of the virtual prop meets the expected requirement, while the first part of virtual data smoothly links the real motion trajectory and the preset virtual motion trajectory without jumping, thereby improving the trajectory display effect of the virtual prop.
In a possible implementation, the determining partial real motion data from the real motion data to obtain target real motion data includes:
determining ending real motion data from the real motion data to obtain target real motion data;
the determining virtual motion data based on the target real motion data comprises:
determining the virtual motion data based on the ending real motion data, a timestamp of the virtual motion data being earlier than a timestamp of the ending real motion data.
In the embodiment of the disclosure, in the process of the movement of the virtual prop, the virtual movement data is used first, and then the real movement data is used, so that the movement result of the virtual prop is matched with the movement result of the real prop, and meanwhile, the track display effect of the virtual prop in the movement process is improved.
In a possible embodiment, the virtual motion data includes a first part of virtual data and a preset second part of virtual data, and the driving of the virtual prop in the 3D scene based on the target real motion data and the virtual motion data includes:
driving the virtual prop to move in the 3D scene based on the preset second part of virtual data;
determining transitional real motion data from the real motion data under the condition that the state of the virtual prop meets a first preset condition;
generating the first part of virtual data in real time based on the transitional real motion data and the preset second part of virtual data;
driving the virtual prop to continue moving in the 3D scene based on the first portion of virtual data;
and driving the virtual prop to continue moving in the 3D scene based on the target real motion data under the condition that the state of the virtual prop meets a second preset condition.
In the embodiment of the disclosure, because the first part of virtual data is generated in real time based on the transitional real motion data and the preset second part of virtual data, the motion trajectory of the virtual prop driven by the preset second part of virtual data and the motion trajectory driven by the target real motion data transition smoothly into each other without jumping, further improving the display effect of the motion trajectory of the virtual prop.
In a possible implementation manner, in a case that the virtual prop moves to a first preset position range in the 3D scene, determining that the state of the virtual prop meets the first preset condition; and/or,
and under the condition that the virtual prop moves to a second preset position range in the 3D scene, determining that the state of the virtual prop meets the second preset condition.
In a possible implementation, the real motion data includes real trajectory data, and the determining partial real motion data from the real motion data to obtain target real motion data includes:
determining the real trajectory data from the real motion data to obtain the target real motion data;
the determining virtual motion data based on the target real motion data comprises:
determining virtual rotation data based on the real trajectory data;
the driving the virtual prop to move in the 3D scene based on the target real motion data and the virtual motion data comprises:
and driving the virtual prop to move in the 3D scene based on the real trajectory data and the virtual rotation data.
In the embodiment of the disclosure, combining the real trajectory data with the virtual rotation data can improve the display effect of the virtual prop during its movement.
In one possible implementation, the determining partial real motion data from the real motion data includes:
determining, from the real motion data, each segment of real motion data generated while the real prop receives an external force, to obtain the target real motion data;
the determining virtual motion data based on the target real motion data comprises:
determining each segment of virtual motion data matched with each segment of real motion data, based on that segment of real motion data;
the driving the virtual prop to move in the 3D scene based on the target real motion data and the virtual motion data comprises:
and driving the virtual prop to move in the 3D scene based on the real motion data and the virtual motion data.
In a possible implementation, the determining partial real motion data from the real motion data to obtain target real motion data includes:
and under the condition that the real motion data indicate that the real motion result of the real prop meets a preset condition, determining partial real motion data from the real motion data to obtain target real motion data.
In a possible implementation manner, in a case that an error between the real motion result and a preset motion result is within a preset range, it is determined that the real motion result meets the preset condition.
In one possible implementation, the 3D scene information further contains virtual object information, the virtual object information being used to generate a virtual object after rendering, the virtual object being driven by control information captured by a motion capture device; the method further comprises the following steps:
acquiring control information of the virtual object;
and under the condition that the control information indicates that a real object touches the real prop, controlling the virtual prop to move and touch the virtual object.
In the embodiment of the disclosure, when the control information indicates that the real object touches the real prop, the virtual prop is controlled to move into contact with the virtual object. This avoids an apparent gap or excessive contact between the virtual object and the virtual prop while the real object is in contact with the real prop, further improving the viewing experience of the user.
The embodiment of the present disclosure provides a virtual prop control device, including:
the real data acquisition module is used for acquiring real motion data of the real prop;
the real data determining module is used for determining partial real motion data from the real motion data to obtain target real motion data;
the virtual data determining module is used for determining virtual motion data based on the target real motion data;
the virtual prop driving module is used for driving a virtual prop to move in a 3D scene based on the target real motion data and the virtual motion data; the 3D scene is generated by rendering 3D scene information in a 3D rendering environment, the 3D rendering environment runs in electronic equipment, the 3D scene information comprises virtual prop information, and the virtual prop information is used for generating the virtual prop after rendering.
In a possible implementation, the real data determining module is specifically configured to:
determining initial real motion data from the real motion data to obtain target real motion data;
the virtual data determining module is specifically configured to:
determining the virtual motion data based on an initial velocity in the initial real motion data, a timestamp of the virtual motion data being later than a timestamp of the initial real motion data.
In a possible implementation manner, the virtual motion data includes a first part of virtual data and a preset second part of virtual data, a timestamp of the first part of virtual data being earlier than a timestamp of the preset second part of virtual data, and the virtual data determining module is specifically configured to:
determining the first portion of virtual data based on an initial velocity in the initial real motion data;
the virtual prop driving module is specifically configured to:
and driving the virtual prop to move in the 3D scene based on the target real motion data, the first part of virtual data and the preset second part of virtual data in sequence.
In a possible implementation, the real data determining module is specifically configured to:
determining ending real motion data from the real motion data to obtain target real motion data;
the virtual data determining module is specifically configured to:
determining the virtual motion data based on the ending real motion data, a timestamp of the virtual motion data being earlier than a timestamp of the ending real motion data.
In a possible implementation manner, the virtual motion data includes a first part of virtual data and a preset second part of virtual data, and the virtual prop driving module is specifically configured to:
driving the virtual prop to move in the 3D scene based on the preset second part of virtual data;
determining transitional real motion data from the real motion data under the condition that the state of the virtual prop meets a first preset condition;
generating the first part of virtual data in real time based on the transitional real motion data and the preset second part of virtual data;
driving the virtual prop to continue moving in the 3D scene based on the first portion of virtual data;
and driving the virtual prop to continue moving in the 3D scene based on the target real motion data under the condition that the state of the virtual prop meets a second preset condition.
In a possible implementation manner, in a case that the virtual prop moves to a first preset position range in the 3D scene, determining that the state of the virtual prop meets the first preset condition; and/or,
and under the condition that the virtual prop moves to a second preset position range in the 3D scene, determining that the state of the virtual prop meets the second preset condition.
In a possible implementation, the real motion data includes real trajectory data, and the real data determining module is specifically configured to:
determining the real trajectory data from the real motion data to obtain the target real motion data;
the virtual data determining module is specifically configured to:
determining virtual rotation data based on the real trajectory data;
the virtual prop driving module is specifically configured to:
and driving the virtual prop to move in the 3D scene based on the real trajectory data and the virtual rotation data.
In a possible implementation, the real data determining module is specifically configured to:
determining, from the real motion data, each segment of real motion data generated while the real prop receives an external force, to obtain the target real motion data;
the virtual data determining module is specifically configured to:
determining each segment of virtual motion data matched with each segment of real motion data, based on that segment of real motion data;
the virtual prop driving module is specifically configured to:
and driving the virtual prop to move in the 3D scene based on the real motion data and the virtual motion data.
In a possible implementation, the real data determining module is specifically configured to:
and under the condition that the real motion data indicate that the real motion result of the real prop meets a preset condition, determining partial real motion data from the real motion data to obtain target real motion data.
In a possible implementation manner, in a case that an error between the real motion result and a preset motion result is within a preset range, it is determined that the real motion result meets the preset condition.
In one possible implementation, the 3D scene information further contains virtual object information, the virtual object information being used to generate a virtual object after rendering, the virtual object being driven by control information captured by a motion capture device; the apparatus further comprises a virtual object driver module to:
acquiring control information of the virtual object;
and under the condition that the control information indicates that a real object touches the real prop, controlling the virtual prop to move and contact with the virtual object.
An embodiment of the present disclosure provides an electronic device, comprising: a processor, a memory, and a bus. The memory stores machine-readable instructions executable by the processor; when the electronic device runs, the processor and the memory communicate through the bus, and the machine-readable instructions, when executed by the processor, perform the virtual prop control method described above.
An embodiment of the disclosure provides a computer-readable storage medium storing a computer program which, when executed by a processor, performs the virtual prop control method described above.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required in the embodiments are briefly described below. The drawings, which are incorporated in and form a part of the specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the present disclosure. It should be appreciated that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those skilled in the art can derive other related drawings from them without inventive effort.
Fig. 1 shows a flowchart of a virtual item control method provided in an embodiment of the present disclosure;
fig. 2 is a schematic diagram illustrating a relationship between first target real motion data and virtual motion data provided by an embodiment of the present disclosure;
fig. 3 is a schematic diagram illustrating a relationship between second target real motion data and virtual motion data provided by an embodiment of the present disclosure;
FIG. 4 is a diagram illustrating a relationship between a third target real motion data and a virtual motion data provided by an embodiment of the disclosure;
fig. 5 shows a flowchart of a method for driving a virtual prop to move in a 3D scene according to an embodiment of the present disclosure;
FIG. 6 is a diagram illustrating a relationship between fourth target real motion data and virtual motion data provided by an embodiment of the disclosure;
fig. 7 shows a flowchart of another virtual item control method provided in the embodiment of the present disclosure;
FIG. 8 is a schematic diagram illustrating a relationship between real trajectory data and virtual rotation data provided by an embodiment of the present disclosure;
fig. 9 is a schematic structural diagram of a virtual item control device provided in an embodiment of the present disclosure;
fig. 10 is a schematic structural diagram of another virtual prop control device provided in the embodiment of the present disclosure;
fig. 11 shows a schematic diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. The components of the embodiments of the present disclosure, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure, presented in the figures, is not intended to limit the scope of the claimed disclosure, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making any creative effort, shall fall within the protection scope of the disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The term "and/or" herein merely describes an associative relationship, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
More and more users choose to watch live video through live platforms, such as live games, live news, and the like. In order to improve the live broadcast effect, a mode of performing video live broadcast by replacing a real anchor with an avatar appears.
One form of avatar live broadcasting captures control signals from an actor (the real person behind the avatar), drives the avatar to move in a game engine, simultaneously acquires the actor's voice, and fuses the voice with the avatar picture to generate video data. The live broadcast of an avatar may involve interaction with virtual props. Driving a virtual prop is similar to driving the avatar: motion data of the real prop is obtained through the motion capture device and used to drive the virtual prop to move.
However, due to uncontrollable factors in the actor's handling of real objects, the movement of a virtual prop driven by a real prop often fails to achieve the expected effect. Taking a basketball-shooting scene as an example, driving the virtual basketball with the captured motion data of the real basketball may harm the live broadcast effect: an actor may attempt many shots without scoring, and if the motion data of the real basketball still drives the virtual basketball, the viewing experience of the user suffers.
The present disclosure provides a virtual item control method, including: acquiring real motion data of a real prop; determining partial real motion data from the real motion data to obtain target real motion data; determining virtual motion data based on the target real motion data; and driving a virtual prop to move in the 3D scene based on the target real motion data and the virtual motion data; the 3D scene is generated by rendering 3D scene information in a 3D rendering environment, the 3D rendering environment runs in electronic equipment, the 3D scene information comprises virtual prop information, and the virtual prop information is used for generating the virtual prop after rendering.
In the embodiment of the disclosure, after the real motion data of the real prop is obtained, partial real motion data is determined from the real motion data to obtain the target real motion data; virtual motion data is then determined based on the target real motion data, and the virtual prop is driven to move in the 3D scene based on the target real motion data and the virtual motion data. Because the virtual motion data can be configured to achieve the expected effect, the motion trajectory of the virtual prop can meet the expected requirement. In addition, because the virtual motion data is obtained based on part of the real motion data, the matching degree between the virtual motion data and the real motion data is improved, which further improves the display effect of the motion trajectory of the virtual prop.
An execution subject of the virtual item control method provided by the embodiment of the present disclosure is generally an electronic device with certain computing capability, and the electronic device includes: a terminal device, which may be a User Equipment (UE), a mobile device, a User terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a vehicle-mounted device, a wearable device, or a server or other processing device. The server can be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, and can also be a cloud server for providing basic cloud computing services such as cloud service, a cloud database, cloud computing, cloud storage, big data, an artificial intelligence platform and the like. In addition, the virtual prop control method can be implemented by the processor calling the computer readable instructions stored in the memory.
Referring to fig. 1, which is a flowchart of a first virtual item control method provided in the embodiment of the present disclosure, the virtual item control method includes the following steps S101 to S104:
and S101, acquiring real motion data of the real prop.
The real prop refers to an object used by a real object (an actor) during a performance. The real prop can interact with the real object to achieve an expected performance effect. In addition, different performance scenes and contents call for different real props.
Illustratively, the real prop includes, but is not limited to, various objects such as a basketball, a soccer ball, a curling stone, a fan, a cup, and the like.
In some embodiments, a plurality of optical marker points (for example, made of a light-reflecting material) may be set on the real prop in advance, and the position of each marker point may then be captured by an optical capture device to obtain the real motion data of the real prop. The optical capture device includes at least one of an infrared camera, an RGB camera, and a depth camera; the embodiments of the disclosure do not limit the type of the optical capture device.
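Purely as an illustration (not part of the original disclosure), the following Python sketch shows one plausible way to reduce captured marker positions to per-frame real motion data: the prop's centroid position plus a finite-difference velocity. The sample layout and function names are assumptions made for this example.

```python
import numpy as np

def markers_to_motion_data(frames, timestamps):
    """Reduce optical-marker captures to per-frame real motion data.

    frames: array of shape (T, M, 3), the M marker positions captured per frame.
    timestamps: array of shape (T,), the capture time of each frame in seconds.
    Returns one {"t", "pos", "vel"} sample per frame.
    """
    centroids = np.asarray(frames).mean(axis=1)   # (T, 3) prop center per frame
    velocities = np.gradient(centroids, np.asarray(timestamps), axis=0)  # finite differences
    return [{"t": float(t), "pos": p, "vel": v}
            for t, p, v in zip(timestamps, centroids, velocities)]
```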
S102, determining partial real motion data from the real motion data to obtain target real motion data.
It can be understood that, in order to make the motion trajectory of the virtual prop in the 3D scene achieve the expected effect, in some cases, the motion of the virtual prop may be driven by only a part of real motion data, that is, by combining a part of real motion data and a part of virtual motion data. Wherein the relevant description about the 3D scene will be elaborated later.
For example, when the real motion data indicates that the real motion result of the real prop meets a preset condition, partial real motion data may be determined from the real motion data to obtain the target real motion data. Taking a shooting scene as an example, if the real motion data of a real basketball indicates that the basketball collided with the real basket but did not enter it, the actor intended to score and missed only because of objective factors. In this case, part of the real motion data can be determined from the real motion data as the target real motion data and combined with virtual motion data so that the virtual basketball is shot into the virtual basket, achieving the expected effect.
In some possible embodiments, in a case where the error between the real motion result and a preset motion result is within a preset range, it is determined that the real motion result meets the preset condition. Again taking the shooting scene as an example, if the real motion data indicates that the distance between the real basketball and the real basket is within a preset range (e.g., 10 cm), it is determined that the real motion result meets the preset condition.
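A minimal sketch of this check, assuming the real motion result is summarized as a final position and that the error is measured as Euclidean distance (both are assumptions for illustration; the patent does not prescribe a metric):

```python
import numpy as np

def meets_preset_condition(real_result_pos, preset_result_pos, tolerance=0.10):
    """True if the real motion result lands within `tolerance` meters of the preset
    result, e.g. a real basketball ending within 10 cm of the real basket."""
    error = np.linalg.norm(np.asarray(real_result_pos) - np.asarray(preset_result_pos))
    return error <= tolerance
```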
In other scenes similar to the shooting scene, such as an archery scene, if the real motion data of the real arrow indicates that the arrow landed on the target within a preset range of the bullseye, virtual motion data can be combined to drive the virtual arrow to land on the bullseye.
It should be noted that, the above determining, according to the preset condition, part of the real motion data from the real motion data to obtain the target real motion data is merely an example, and in other embodiments, the determination may not be based on any preset condition.
How to determine partial real motion data from the real motion data to obtain target real motion data is described in detail below:
In a first mode, initial real motion data is determined from the real motion data to obtain the target real motion data. That is, the real motion data at different moments are sorted by timestamp, and the real motion data with the earliest timestamps is taken as the target real motion data. The timestamp characterizes the time at which the data was generated. In this embodiment, this manner of acquiring partial real motion data may be combined with the aforementioned preset condition.
For example, taking a shooting scene as an example, the initial real motion data refers to the motion data of the real basketball at the moment it leaves the real object's hand; similarly, in a football scene, it refers to the motion data of the real football at the moment it leaves the real object's foot. Also exemplarily, in an archery scene, the initial real motion data may refer to the motion data of the arrow at the moment it leaves the bow.
In a second mode, ending real motion data is determined from the real motion data to obtain the target real motion data. That is, the real motion data at different moments are sorted by timestamp, and the real motion data with the latest timestamps is taken as the target real motion data.
Illustratively, taking a ball-catching scene as an example, the ending real motion data refers to the motion data of the real ball contacting (being caught by) the real object; taking a shooting scene as an example, it refers to the motion data of the real basketball falling into the real basket.
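Both modes reduce to sorting the captured samples by timestamp and keeping one end of the sequence. A sketch under the sample layout assumed earlier (a dict with a "t" key); the `count` parameter is an illustrative assumption:

```python
def select_target_real_motion_data(samples, mode, count):
    """Keep the earliest ('initial' mode) or latest ('ending' mode) `count`
    samples of the real motion data as the target real motion data."""
    ordered = sorted(samples, key=lambda s: s["t"])
    if mode == "initial":
        return ordered[:count]
    if mode == "ending":
        return ordered[-count:]
    raise ValueError(f"unknown mode: {mode}")
```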
S103, determining virtual motion data based on the target real motion data.
For example, the virtual motion data may be determined based on the magnitude and/or direction of the target real motion data, which improves the matching degree between the virtual motion data and the target real motion data. The manner of determining virtual motion data for different kinds of target real motion data is described in detail below.
In the first case, in the case that the target real motion data is the initial real motion data (i.e. for the first mode described above), the virtual motion data is determined based on the initial velocity in the initial real motion data, and the timestamp of the virtual motion data is later than the timestamp of the initial real motion data. Wherein the initial velocity includes the magnitude and direction of the initial velocity.
Exemplarily, please refer to fig. 2, which is a schematic diagram illustrating the relationship between the first target real motion data and the virtual motion data provided in an embodiment of the present disclosure. As can be seen from fig. 2, along the time axis t, the initial real motion data comes first and the determined virtual motion data follows; together they constitute the driving data of the virtual prop.
In the embodiment of the present disclosure, the virtual motion data is determined based on the initial velocity in the initial real motion data, and the timestamp of the virtual motion data is later than the timestamp of the initial real motion data, that is, the subsequent motion data is determined according to the initial velocity of the real prop, so that not only is the verisimilitude of the motion trajectory of the virtual prop ensured, but also the motion trajectory of the virtual prop can conform to the expected effect.
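For a thrown prop such as a basketball, one plausible realization (a sketch under stated assumptions, not the patent's mandated formula) is to extrapolate a ballistic trajectory from the captured release sample, stamping every generated sample later than the initial real motion data:

```python
import numpy as np

GRAVITY = np.array([0.0, -9.81, 0.0])  # assumed Y-up world coordinates

def extrapolate_virtual_motion(release_sample, duration=1.5, dt=1 / 60):
    """Generate virtual motion samples that continue the initial real motion data.

    release_sample: {"t", "pos", "vel"} taken from the initial real motion data.
    Every returned timestamp is strictly later than the release timestamp.
    """
    p0 = np.asarray(release_sample["pos"])
    v0 = np.asarray(release_sample["vel"])
    t0 = release_sample["t"]
    samples = []
    for i in range(1, int(duration / dt) + 1):
        t = i * dt
        samples.append({
            "t": t0 + t,
            "pos": p0 + v0 * t + 0.5 * GRAVITY * t * t,  # constant-acceleration kinematics
            "vel": v0 + GRAVITY * t,
        })
    return samples
```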
In a second case, in a case where the target real motion data is the ending real motion data (that is, for the second mode described above), the virtual motion data is determined based on the ending real motion data, and the timestamp of the virtual motion data is earlier than the timestamp of the ending real motion data.
For example, please refer to fig. 3, which is a schematic diagram illustrating the relationship between the second target real motion data and the virtual motion data according to an embodiment of the present disclosure. As can be seen from fig. 3, along the time axis t, the determined virtual motion data comes first and the ending real motion data follows; together they form the driving data of the virtual prop.
In the embodiment of the disclosure, in the process of the movement of the virtual prop, the virtual prop is driven to move by using the virtual movement data first, and then the virtual prop is driven to move by using the real movement data, so that the trajectory display effect of the virtual prop in the movement process can be improved while the movement result of the virtual prop is matched with the movement result of the real prop.
S104, driving a virtual prop to move in a 3D scene based on the target real motion data and the virtual motion data; the 3D scene is generated by rendering 3D scene information in a 3D rendering environment, the 3D rendering environment runs in electronic equipment, the 3D scene information comprises virtual prop information, and the virtual prop information is used for generating the virtual prop after rendering.
Illustratively, since the target real motion data and the virtual motion data constitute complete driving data of the virtual prop, the virtual prop may be driven to move in the 3D scene based on the target real motion data and the virtual motion data.
The 3D scene information may reside in a computer CPU (Central Processing Unit), GPU (Graphics Processing Unit), and memory, and includes gridded model information and map texture information. Accordingly, virtual prop information includes, by way of example and not limitation, gridded model data, voxel data, and map texture data, or a combination thereof. The mesh includes, but is not limited to, a triangular mesh, a quadrilateral mesh, other polygonal meshes, or a combination thereof. In the embodiments of the present disclosure, the mesh is a triangular mesh.
The 3D scene information is rendered in a 3D rendering environment, which may generate a 3D scene. The 3D rendering environment may be a 3D engine running in the electronic device capable of generating imagery information based on one or more perspectives based on the data to be rendered. The virtual prop information is a prop model existing in the 3D engine, and can generate a corresponding virtual prop after rendering. In the disclosed embodiment, the virtual prop may be various virtual objects, such as a virtual basketball, a virtual arrow, a virtual football, a virtual curling stone, and the like.
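Purely to make the kinds of structures concrete (the patent prescribes no particular layout; every field below is an assumption), the virtual prop information and 3D scene information might be organized as:

```python
from dataclasses import dataclass, field

@dataclass
class VirtualPropInfo:
    """Illustrative prop model held by the 3D engine; rendered into a virtual prop."""
    mesh_vertices: list      # triangular-mesh vertex positions
    mesh_triangles: list     # vertex-index triples
    texture_path: str        # map texture applied at render time

@dataclass
class SceneInfo:
    """Illustrative 3D scene information rendered by the 3D rendering environment."""
    props: dict = field(default_factory=dict)    # name -> VirtualPropInfo
    objects: dict = field(default_factory=dict)  # name -> virtual object (avatar) info
```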
For example, referring to fig. 4, in some embodiments, the virtual motion data includes a first part of virtual data and a preset second part of virtual data, and a timestamp of the first part of virtual data is earlier than a timestamp of the preset second part of virtual data. Therefore, the first part of virtual data may be determined based on the initial velocity in the initial real motion data, and then the virtual prop may be driven to move in the 3D scene based on the target real motion data, the first part of virtual data, and the preset second part of virtual data in sequence.
The preset second part of virtual data may be set according to actual requirements, for example, may be determined according to the type and the display effect of the virtual item, which is not limited herein.
In the embodiment of the disclosure, the virtual motion data includes a first part of virtual data and a preset second part of virtual data, and the first part of virtual data is determined based on the initial velocity in the initial real motion data. The preset second part of virtual data ensures that the final result of the virtual prop meets the expected requirement, while the first part of virtual data smoothly links the real motion trajectory and the preset virtual motion trajectory without jumping, thereby improving the trajectory display effect of the virtual prop.
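A sketch of this sequencing, reusing `extrapolate_virtual_motion` from the earlier fragment. The easing used to generate the first part of virtual data is an assumption; the patent only requires a transition without jumps:

```python
import numpy as np

def build_drive_data(target_real, preset_second, blend_steps=20):
    """Concatenate target real data, a generated first part, and the preset second part."""
    release = target_real[-1]                       # last sample of the initial real data
    ballistic = extrapolate_virtual_motion(release, duration=blend_steps / 60, dt=1 / 60)
    anchor = np.asarray(preset_second[0]["pos"])    # start of the preset trajectory
    first_part = []
    for i, s in enumerate(ballistic):
        w = (i + 1) / len(ballistic)                # easing weight ramps from 0 to 1
        pos = (1 - w) * np.asarray(s["pos"]) + w * anchor
        first_part.append({"t": s["t"], "pos": pos, "vel": s["vel"]})
    # assumes the preset second part is stamped after the generated first part
    return list(target_real) + first_part + list(preset_second)
```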
In other embodiments, the timestamp of at least part of the preset second part of virtual data is earlier than the timestamp of the first part of virtual data, and the timestamp of at least another part of the preset second part of virtual data coincides with the timestamp of the first part of virtual data. Specifically, referring to fig. 5, in this embodiment, for step S104, driving the virtual item to move in the 3D scene based on the target real motion data and the virtual motion data may include the following steps S1041 to S1045:
s1041, driving the virtual prop to move in the 3D scene based on the preset second part of virtual data.
For example, referring to fig. 6, the preset second part of virtual data may be divided into two parts by timestamp: the preset second part of virtual data a (also referred to as the first sub virtual data) and the preset second part of virtual data b (also referred to as the second sub virtual data) in fig. 6.
The timestamp of the preset second part of virtual data a is earlier than that of the first part of virtual data, while the timestamp of the preset second part of virtual data b coincides with that of the first part of virtual data. Therefore, in the embodiment of the present disclosure, the virtual prop may first be driven to move in the 3D scene based on the preset second part of virtual data a.
And S1042, determining transitional real motion data from the real motion data under the condition that the state of the virtual prop meets a first preset condition.
It can be understood that, during the process of driving the virtual prop to move based on the preset second part of virtual data, the state of the virtual prop may change, where the state of the virtual prop includes, but is not limited to, a position state of the virtual prop in the 3D scene, a deformation state of the virtual prop, or a relationship state with respect to other objects.
For example, it may be determined that the state of the virtual item meets the first preset condition when the virtual item moves to a first preset position range in the 3D scene. In a shooting scene, the virtual basketball may be driven by the preset second part of virtual data a while it is still far from the virtual basket; when the virtual basketball moves within the first preset position range of the virtual basket, the state of the virtual basketball is determined to meet the first preset condition, and transitional real motion data is then determined from the real motion data. The case of a virtual football is similar to that of the virtual basketball and is not repeated here.
Of course, in other embodiments, the first preset condition may instead concern the duration of the virtual prop's movement; for example, when the virtual basketball has left the virtual object and moved through the air for a predetermined time (for example, 2 seconds), it is determined that the first preset condition is met.
And S1043, generating the first part of virtual data in real time based on the transitional real motion data and the preset second part of virtual data.
Referring to fig. 6 again, after the transitional real motion data is determined, the preset second part of virtual data b may be fused with the transitional real motion data to generate the first part of virtual data in real time, so that the matching degree between the first part of virtual data and the transitional real motion data may be higher.
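One straightforward fusion (an assumption; any smooth blend satisfying the description would do) is a weighted interpolation whose weight shifts from the preset data b toward the transitional real motion data across the transition window:

```python
import numpy as np

def fuse_transition(preset_b_samples, transitional_samples):
    """Blend preset second-part virtual data b with transitional real motion data
    to produce the first part of virtual data in real time."""
    n = min(len(preset_b_samples), len(transitional_samples))
    fused = []
    for i in range(n):
        w = (i + 1) / n                              # weight ramps toward the real data
        pos = ((1 - w) * np.asarray(preset_b_samples[i]["pos"])
               + w * np.asarray(transitional_samples[i]["pos"]))
        fused.append({"t": transitional_samples[i]["t"], "pos": pos})
    return fused
```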
S1044, driving the virtual prop to continue to move in the 3D scene based on the first part of virtual data.
Exemplarily, after the first part of virtual data is generated in real time, the virtual prop can be driven to continue to move in the 3D scene based on the first part of virtual data, so that the motion trajectory of the virtual prop can retain the special effect of the virtual data, and can approach the motion state of the real prop, thereby improving the display effect of the motion trajectory of the virtual prop.
And S1045, when the state of the virtual prop meets a second preset condition, driving the virtual prop to continue to move in the 3D scene based on the target real motion data.
Exemplarily, when the virtual prop moves to a second preset position range in the 3D scene, it is determined that the state of the virtual prop meets the second preset condition. Taking the shooting scene as an example again, when the virtual basketball is within the second preset position range of the virtual basket, it is determined that the state of the virtual basketball meets the second preset condition; at this point the virtual prop is driven to continue moving in the 3D scene based on the target real motion data, which ensures the realism of the virtual prop's final movement. The second preset position range is closer to the virtual basket than the first preset position range. Similarly, in this embodiment, the second preset condition may instead concern the duration of the virtual prop's movement; for example, when the virtual basketball has left the virtual object and moved through the air for a predetermined time (for example, 5 seconds), it is determined that the second preset condition is met.
In the embodiment of the disclosure, because the first part of virtual data is generated in real time based on the transitional real motion data and the preset second part of virtual data, the motion trajectory of the virtual prop driven by the preset second part of virtual data and the motion trajectory driven by the target real motion data transition smoothly into each other without jumping, further improving the display effect of the motion trajectory of the virtual prop.
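Taken together, steps S1041 to S1045 behave like a small state machine keyed on the prop's state. The sketch below reuses `fuse_transition` from the previous fragment and stands in for the two preset position ranges with distance thresholds; all constants and the distance test are illustrative assumptions:

```python
import numpy as np

def drive_prop(preset_a, preset_b, real_samples, target_real, goal_pos,
               first_range=2.0, second_range=0.5):
    """Yield drive samples per S1041-S1045: preset data a, then the first part of
    virtual data fused in real time, then the target real motion data."""
    goal = np.asarray(goal_pos)
    for sample in preset_a:                          # S1041: drive with preset data a
        yield sample
        if np.linalg.norm(np.asarray(sample["pos"]) - goal) <= first_range:
            break                                    # first preset condition met
    transitional = real_samples[: len(preset_b)]     # S1042: transitional real data
    for sample in fuse_transition(preset_b, transitional):   # S1043
        yield sample                                 # S1044: first part of virtual data
        if np.linalg.norm(np.asarray(sample["pos"]) - goal) <= second_range:
            break                                    # second preset condition met
    yield from target_real                           # S1045: finish on target real data
```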
In addition, the scene in which a virtual object (such as a virtual character) receives (catches) another virtual prop (such as a virtual ball or a virtual fan) is similar to the shooting scene: the trajectory display effect of the virtual prop during its aerial movement can be improved while ensuring that the virtual object actually receives the virtual prop.
The 3D scene information further contains virtual object information, which is used to generate a virtual object after rendering; the virtual object is driven by control information captured by the motion capture device. For example, the skeletal movements and facial expression movements of a real object may be captured by an optical capture device. Additionally, the motion capture device may include clothing with reflective markers worn by the real object, gloves worn on the hands, and the like.
In some embodiments, in order to improve the contact effect between the virtual object and the virtual prop and avoid an apparent gap or excessive contact between them, the acquired control information of the virtual object is examined, and when the control information indicates that the real object touches the real prop, the virtual prop is controlled to move into contact with the virtual object. That is, to avoid artifacts caused by mismatched proportions between the real object and the virtual object, the virtual object and the virtual prop are redirected, which improves the viewing experience of the user.
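A minimal sketch of such a contact correction, assuming the control information exposes a touch flag and a hand position (hypothetical fields invented for this example):

```python
import numpy as np

def apply_contact_correction(control_info, prop_pos, prop_radius):
    """When the real object touches the real prop, snap the virtual prop so its
    surface meets the virtual object's hand: no visible gap, no interpenetration."""
    if not control_info.get("touching_prop", False):
        return prop_pos
    hand = np.asarray(control_info["hand_pos"])
    offset = np.asarray(prop_pos) - hand
    distance = np.linalg.norm(offset)
    if distance < 1e-6:
        return prop_pos
    return hand + offset / distance * prop_radius    # place prop surface at the hand
```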
The driving of virtual props in some special scenes is described in detail below. For example, in a scene where the virtual prop is a virtual curling stone or a virtual table tennis ball, the prop must receive external forces repeatedly during its movement. To improve the realism of the virtual prop's movement, when partial real motion data is determined from the real motion data, each segment of real motion data generated while the real prop receives an external force is determined from the real motion data to obtain the target real motion data; each segment of virtual motion data matched with each segment of real motion data is then determined based on that segment; and the virtual prop is driven to move in the 3D scene based on the real motion data and the virtual motion data. That is, in this scenario, multiple segments of real motion data are acquired, corresponding virtual motion data is determined for each segment, and during the virtual prop's movement the real motion data and the virtual motion data are mixed to drive its motion.
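A sketch of the segmentation, assuming an external force shows up as an acceleration spike between consecutive samples (a detection heuristic chosen for illustration):

```python
import numpy as np

def split_on_external_force(samples, accel_threshold=30.0):
    """Split real motion data into segments, each beginning at an external-force
    event detected as an acceleration spike (m/s^2) between consecutive samples."""
    segments, current = [], [samples[0]]
    for prev, cur in zip(samples, samples[1:]):
        dv = np.linalg.norm(np.asarray(cur["vel"]) - np.asarray(prev["vel"]))
        dt = max(cur["t"] - prev["t"], 1e-6)
        if dv / dt > accel_threshold:                # spike -> a new segment starts here
            segments.append(current)
            current = []
        current.append(cur)
    segments.append(current)
    return segments
```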
The real motion data and the virtual motion data in the above embodiments are discussed from a time dimension, and a relationship between the real motion data and the virtual motion data is described below based on a data dimension.
In some embodiments, the real motion data includes real trajectory data, as shown in fig. 7, which is a flowchart of another virtual prop control method provided in an embodiment of the present disclosure, and as shown in fig. 7, the method includes the following steps S201 to S204:
s201, acquiring real motion data of the real prop.
This step is similar to the step S101, and is not described herein again.
S202, determining the real trajectory data from the real motion data.
Illustratively, the real motion data may include real trajectory data as well as real rotation data, where the real trajectory data is composed of real position data at different times. In the embodiment of the disclosure, the real trajectory data is retained and the real rotation data is removed; in other embodiments, the real rotation data may be retained and the real trajectory data removed.
S203, determining virtual rotation data based on the real track data.
For example, referring to fig. 8, after the real trajectory data is determined, the virtual rotation data may be determined according to it. Specifically, different virtual rotation data may be determined for different position data in the real trajectory data; for example, the rotation speed of the virtual rotation data may be positively correlated with the height of the position in the real trajectory data.
Taking shooting as an example, when the real trajectory data indicates that the real basketball moves to the highest point, first virtual rotation data is determined, and second virtual rotation data is determined when the real basketball descends to a preset position, wherein the rotation speed of the first virtual rotation data is greater than that of the second virtual rotation data. In this way, in the process of driving the virtual basketball, the virtual basketball can show a faster rotating speed in the air when moving to the highest point, and can show a slower rotating speed when falling from the highest point to the preset position.
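As one way to realize the positive correlation (the exact mapping is an assumption; the patent only requires that rotation speed track position), the rotation speed can be made an affine function of the prop's normalized height along the real trajectory, peaking at the apex:

```python
import numpy as np

def virtual_rotation_from_trajectory(trajectory, base_speed=2.0, gain=1.5):
    """Map each real trajectory sample to a virtual rotation speed (rad/s) that is
    positively correlated with height: fastest at the apex, slowest at the bottom."""
    heights = np.array([s["pos"][1] for s in trajectory])   # assumed Y-up coordinates
    lo, hi = heights.min(), heights.max()
    normalized = (heights - lo) / max(hi - lo, 1e-6)        # 0 at lowest, 1 at apex
    return [{"t": s["t"], "spin": base_speed + gain * h}
            for s, h in zip(trajectory, normalized)]
```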
And S204, driving the virtual prop to move in the 3D scene based on the real trajectory data and the virtual rotation data.
Referring to fig. 8 again, the real trajectory data and the virtual rotation data constitute the prop driving data with which the virtual prop is driven to move in the 3D scene. In the embodiment of the disclosure, driving the virtual prop based on the real trajectory data and the virtual rotation data ensures the realism of the prop's trajectory while displaying a corresponding virtual special effect (such as continuous rotation in the air) during movement, further improving the viewing experience of the user.
It will be understood by those skilled in the art that, in the methods of the present disclosure, the order in which the steps are written implies neither a strict order of execution nor any limitation on the implementation; the specific order of execution of the steps should be determined by their functions and possible inherent logic.
Based on the same technical concept, an embodiment of the present disclosure further provides a virtual item control device corresponding to the virtual item control method. Because the principle by which the device solves the problem is similar to that of the virtual item control method in the embodiments of the present disclosure, the implementation of the device may refer to the implementation of the method, and repeated details are not repeated.
Referring to fig. 9, which is a schematic view of a virtual prop control apparatus 500 provided in an embodiment of the present disclosure, the apparatus includes:
a real data obtaining module 501, configured to obtain real motion data of a real prop;
a real data determining module 502, configured to determine partial real motion data from the real motion data to obtain target real motion data;
a virtual data determining module 503, configured to determine virtual motion data based on the target real motion data;
a virtual prop driving module 504, configured to drive a virtual prop to move in a 3D scene based on the target real motion data and the virtual motion data; the 3D scene is generated by rendering 3D scene information in a 3D rendering environment, the 3D rendering environment runs in electronic equipment, the 3D scene information comprises virtual prop information, and the virtual prop information is used for generating the virtual prop after rendering.
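For orientation, the module decomposition of fig. 9 can be sketched as a Python skeleton; this is structure only, the class and method names are ours, and the module bodies are placeholders to be filled in per the implementations below.

```python
class VirtualPropControlApparatus:
    """Structural skeleton of apparatus 500; numbers refer to fig. 9."""

    def acquire_real_motion_data(self, capture_source):       # module 501
        return list(capture_source)

    def determine_target_real_data(self, real_motion_data):   # module 502
        # Placeholder: the selection strategy (initial segment, ending
        # segment, trajectory only, force segments) varies per embodiment.
        return real_motion_data

    def determine_virtual_data(self, target_real_data):       # module 503
        return []  # placeholder; see the per-embodiment sketches below

    def drive_prop_in_scene(self, target_real_data, virtual_data):  # module 504
        for t, pos in target_real_data + virtual_data:
            print(f"t={t:.2f}s -> virtual prop at {pos}")  # stands in for rendering
```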
In a possible implementation, the real data determining module 502 is specifically configured to:
determining initial real motion data from the real motion data to obtain target real motion data;
the virtual data determining module 503 is specifically configured to:
determining the virtual motion data based on an initial velocity in the initial real motion data, a timestamp of the virtual motion data being later than a timestamp of the initial real motion data.
In a possible implementation manner, the virtual motion data includes a first part of virtual data and a preset second part of virtual data, and a timestamp of the first part of virtual data is earlier than a timestamp of the preset second part of virtual data; the virtual data determining module 503 is specifically configured to:
determining the first portion of virtual data based on an initial velocity in the initial real motion data;
the virtual prop driving module 504 is specifically configured to:
and driving the virtual prop to move in the 3D scene based on the target real motion data, the first part of virtual data and the preset second part of virtual data in sequence.
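A hedged sketch of this initial-segment implementation follows; the projectile model under gravity, the frame rate, and the assumption that the initial segment contains at least two (t, position) samples are ours, not the disclosed algorithm.

```python
G = -9.8  # m/s^2 along the assumed-up y axis

def extrapolate_first_part(initial_real, steps=30, dt=1.0 / 30.0):
    """First part of the virtual data, extrapolated from the initial
    velocity of the initial real segment via simple projectile motion.
    `initial_real` is a list of (t, (x, y, z)) samples with len >= 2."""
    (t0, p0), (t1, p1) = initial_real[-2], initial_real[-1]
    v = [(b - a) / (t1 - t0) for a, b in zip(p0, p1)]  # initial velocity
    frames, (x, y, z) = [], p1
    for i in range(1, steps + 1):
        v[1] += G * dt                           # gravity on the height axis
        x, y, z = x + v[0] * dt, y + v[1] * dt, z + v[2] * dt
        frames.append((t1 + i * dt, (x, y, z)))
    return frames

def driving_sequence(initial_real, preset_second_part):
    """Drive the prop with the parts in timestamp order: the target real
    data, then the extrapolated first virtual part, then the preset part."""
    return initial_real + extrapolate_first_part(initial_real) + preset_second_part
```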
In a possible implementation, the real data determining module 502 is specifically configured to:
determining ending real motion data from the real motion data to obtain target real motion data;
the virtual data determining module 503 is specifically configured to:
determining the virtual motion data based on the trailing real motion data, the virtual motion data having a timestamp earlier than a timestamp of the trailing real motion data.
In a possible implementation manner, the virtual motion data includes a first part of virtual data and a preset second part of virtual data, and the virtual prop driving module 504 is specifically configured to:
driving the virtual prop to move in the 3D scene based on the preset second part of virtual data;
determining transitional real motion data from the real motion data under the condition that the state of the virtual prop meets a first preset condition;
generating the first part of virtual data in real time based on the transitional real motion data and the preset second part of virtual data;
driving the virtual prop to continue moving in the 3D scene based on the first portion of virtual data;
and driving the virtual prop to continue moving in the 3D scene based on the target real motion data under the condition that the state of the virtual prop meets a second preset condition.
In a possible implementation manner, in a case that the virtual prop moves to a first preset position range in the 3D scene, determining that the state of the virtual prop meets the first preset condition; and/or determining that the state of the virtual prop meets a second preset condition under the condition that the virtual prop moves to a second preset position range in the 3D scene.
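A minimal sketch of this handover, under stated assumptions: the preset position ranges are given as axis-aligned (lo, hi) boxes, the first part of the virtual data is generated in real time with a fixed 50/50 blend (an illustrative choice), and frames are (t, position) tuples.

```python
def in_range(pos, lo, hi):
    """Axis-aligned test for the preset position ranges."""
    return all(l <= c <= h for c, l, h in zip(pos, lo, hi))

def drive_with_transition(preset_frames, real_frames, first_range, second_range):
    """Drive with the preset second part first; once the prop enters the
    first range, blend transitional real data in to generate the first
    virtual part in real time; once it enters the second range, hand
    over to the target real motion data."""
    driven, blending, ri = [], False, 0
    for t, pos in preset_frames:
        if not blending and in_range(pos, *first_range):
            blending = True                        # first preset condition met
        if blending and ri < len(real_frames):
            _, rpos = real_frames[ri]
            ri += 1
            pos = tuple(0.5 * a + 0.5 * b for a, b in zip(pos, rpos))  # 50/50 blend
        driven.append((t, pos))
        if in_range(pos, *second_range):           # second preset condition met
            driven.extend(real_frames[ri:])        # continue with real data
            break
    return driven
```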
In a possible implementation, the real motion data includes real trajectory data, and the real data determining module 502 is specifically configured to:
determining the real trajectory data from the real motion data to obtain the target real motion data;
the virtual data determining module 503 is specifically configured to:
determining virtual rotation data based on the real trajectory data;
the virtual prop driving module 504 is specifically configured to:
and driving the virtual prop to move in the 3D scene based on the real trajectory data and the virtual rotation data.
In a possible implementation, the real data determining module 502 is specifically configured to:
determining, from the real motion data, each segment of real motion data during which the real prop receives an external force, so as to obtain the target real motion data;
the virtual data determining module 503 is specifically configured to:
determining each piece of virtual motion data matched with each piece of real motion data based on each piece of real motion data;
the virtual prop driving module 504 is specifically configured to:
and driving the virtual prop to move in the 3D scene based on the real motion data and the virtual motion data.
In a possible implementation, the real data determining module 502 is specifically configured to:
and under the condition that the real motion data indicate that the real motion result of the real prop meets a preset condition, determining partial real motion data from the real motion data to obtain target real motion data.
In a possible implementation manner, in a case that an error between the real motion result and a preset motion result is within a preset range, it is determined that the real motion result meets the preset condition.
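A small sketch of this test; the Euclidean-distance error metric and the preset range value are illustrative assumptions.

```python
def result_meets_condition(real_result, preset_result, preset_range=0.2):
    """True when the error between the real motion result (e.g. a landing
    point) and the preset motion result is within the preset range."""
    error = sum((a - b) ** 2 for a, b in zip(real_result, preset_result)) ** 0.5
    return error <= preset_range
```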
In one possible implementation, the 3D scene information further contains virtual object information, the virtual object information being used to generate a virtual object after rendering, the virtual object being driven by control information captured by a motion capture device; referring to fig. 10, the apparatus further includes a virtual object driver module 505, where the virtual object driver module 505 is configured to:
acquiring control information of the virtual object;
and under the condition that the control information indicates that a real object touches the real prop, controlling the virtual prop to move and touch the virtual object.
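A hedged sketch of this interaction; the control_info dictionary layout and both scene calls are hypothetical stand-ins, not the API of any real motion-capture pipeline or engine.

```python
class InteractionSceneStub:
    """Hypothetical scene handle for the virtual object and virtual prop."""
    def update_virtual_object(self, pose):
        print(f"virtual object pose: {pose}")
    def start_prop_motion(self, driving_data):
        print(f"prop driven through {len(driving_data)} frames")

def on_control_info(scene, control_info, prop_driving_data):
    """Drive the motion-captured virtual object each tick; when the control
    information reports that the real object touched the real prop, start
    the prop's driven motion so that it visually touches the virtual object."""
    scene.update_virtual_object(control_info["object_pose"])
    if control_info.get("touched_real_prop"):
        scene.start_prop_motion(prop_driving_data)
```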
The description of the processing flow of each module in the device and the interaction flow between the modules may refer to the related description in the above method embodiments, and will not be described in detail here.
Based on the same technical concept, an embodiment of the present disclosure further provides an electronic device. Referring to fig. 11, which is a schematic structural diagram of an electronic device 700 provided in an embodiment of the present disclosure, the electronic device 700 includes a processor 701, a memory 702, and a bus 703. The memory 702 is used for storing execution instructions and includes an internal memory 7021 and an external memory 7022; the internal memory 7021 temporarily stores operation data in the processor 701 and data exchanged with the external memory 7022, such as a hard disk, and the processor 701 exchanges data with the external memory 7022 via the internal memory 7021.
In this embodiment, the memory 702 is specifically configured to store the application program code for executing the scheme of the present application, and its execution is controlled by the processor 701. That is, when the electronic device 700 is running, the processor 701 and the memory 702 communicate through the bus 703, so that the processor 701 executes the application program code stored in the memory 702 and thereby performs the method described in any of the foregoing embodiments.
The memory 702 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like.
The processor 701 may be an integrated circuit chip having signal processing capabilities. The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The various methods, steps and logic blocks disclosed in the embodiments of the present disclosure may be implemented or performed by such a processor. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
It is to be understood that the illustrated structure of the embodiment of the present application does not specifically limit the electronic device 700. In other embodiments of the present application, the electronic device 700 may include more or fewer components than shown, or combine certain components, or split certain components, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The embodiment of the present disclosure further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the virtual item control method in the foregoing method embodiment are executed. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The embodiments of the present disclosure also provide a computer program product, where the computer program product carries a program code, and instructions included in the program code may be used to execute steps of the virtual item control method in the foregoing method embodiments, which may be referred to specifically in the foregoing method embodiments, and are not described herein again.
The computer program product may be implemented by hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium; in another alternative embodiment, the computer program product is embodied as a software product, such as a Software Development Kit (SDK).
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present disclosure. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, or an optical disk.
Finally, it should be noted that the above-mentioned embodiments are merely specific embodiments of the present disclosure, used to illustrate its technical solutions rather than to limit them, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person familiar with the art may, within the technical scope of the present disclosure, still modify the technical solutions described in the foregoing embodiments, readily conceive of changes to them, or make equivalent substitutions for some of their technical features; such modifications, changes or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the embodiments of the present disclosure, and shall all be covered within the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (14)

1. A virtual item control method is characterized by comprising the following steps:
acquiring real motion data of a real prop;
determining partial real motion data from the real motion data to obtain target real motion data;
determining virtual motion data based on the target real motion data;
driving a virtual prop to move in a 3D scene based on the target real motion data and the virtual motion data; the 3D scene is generated by rendering 3D scene information in a 3D rendering environment, the 3D rendering environment runs in electronic equipment, the 3D scene information comprises virtual prop information, and the virtual prop information is used for generating the virtual prop after rendering.
2. The method of claim 1, wherein determining partial real motion data from the real motion data to obtain target real motion data comprises:
determining initial real motion data from the real motion data to obtain target real motion data;
the determining virtual motion data based on the target real motion data comprises:
determining the virtual motion data based on an initial velocity in the initial real motion data, a timestamp of the virtual motion data being later than a timestamp of the initial real motion data.
3. The method according to claim 2, wherein the virtual motion data comprises a first part of virtual data and a preset second part of virtual data, and a timestamp of the first part of virtual data is earlier than a timestamp of the preset second part of virtual data;
said determining said virtual motion data based on an initial velocity in said initial real motion data, comprising:
determining the first portion of virtual data based on an initial velocity in the initial real motion data;
the driving the virtual prop to move in the 3D scene based on the target real motion data and the virtual motion data comprises:
and driving the virtual prop to move in the 3D scene based on the target real motion data, the first part of virtual data and the preset second part of virtual data in sequence.
4. The method of claim 1, wherein the determining partial real motion data from the real motion data to obtain target real motion data comprises:
determining ending real motion data from the real motion data to obtain target real motion data;
the determining virtual motion data based on the target real motion data comprises:
determining the virtual motion data based on the trailing real motion data, the virtual motion data having a timestamp earlier than a timestamp of the trailing real motion data.
5. The method of claim 4, wherein the virtual motion data comprises a first portion of virtual data and a preset second portion of virtual data, and the driving the virtual prop to move in the 3D scene based on the target real motion data and the virtual motion data comprises:
driving the virtual prop to move in the 3D scene based on the preset second part of virtual data;
determining transitional real motion data from the real motion data under the condition that the state of the virtual prop meets a first preset condition;
generating the first part of virtual data in real time based on the transitional real motion data and the preset second part of virtual data;
driving the virtual prop to continue moving in the 3D scene based on the first portion of virtual data;
and driving the virtual prop to continue moving in the 3D scene based on the target real motion data under the condition that the state of the virtual prop meets a second preset condition.
6. The method according to claim 5, wherein in case the virtual item moves to a first preset position range in the 3D scene, determining that the state of the virtual item meets the first preset condition; and/or the presence of a gas in the gas,
and under the condition that the virtual prop moves to a second preset position range in the 3D scene, determining that the state of the virtual prop meets the second preset condition.
7. The method of claim 1, wherein the real motion data comprises real trajectory data, and wherein determining partial real motion data from the real motion data to obtain target real motion data comprises:
determining the real trajectory data from the real motion data to obtain the target real motion data;
the determining virtual motion data based on the target real motion data comprises:
determining virtual rotation data based on the real trajectory data;
the driving the virtual prop to move in the 3D scene based on the target real motion data and the virtual motion data comprises:
and driving the virtual prop to move in the 3D scene based on the real trajectory data and the virtual rotation data.
8. The method of claim 1, wherein the determining partial real motion data from the real motion data comprises:
determining, from the real motion data, each segment of real motion data during which the real prop receives an external force, so as to obtain the target real motion data;
the determining virtual motion data based on the target real motion data comprises:
determining each piece of virtual motion data matched with each piece of real motion data based on each piece of real motion data;
the driving the virtual prop to move in the 3D scene based on the target real motion data and the virtual motion data comprises:
and driving the virtual prop to move in the 3D scene based on the real motion data and the virtual motion data.
9. The method of claim 1, wherein determining partial real motion data from the real motion data to obtain target real motion data comprises:
and under the condition that the real motion data indicate that the real motion result of the real prop meets a preset condition, determining partial real motion data from the real motion data to obtain target real motion data.
10. The method according to claim 9, wherein the real motion result is determined to meet the preset condition if an error between the real motion result and a preset motion result is within a preset range.
11. The method of claim 1, wherein the 3D scene information further comprises virtual object information, the virtual object information being used for generating a virtual object after rendering; the method further comprises the following steps:
acquiring control information of the virtual object;
and under the condition that the control information indicates that a real object touches the real prop, controlling the virtual prop to move and touch the virtual object.
12. A virtual prop control apparatus, comprising:
the real data acquisition module is used for acquiring real motion data of the real prop;
the real data determining module is used for determining partial real motion data from the real motion data to obtain target real motion data;
the virtual data determining module is used for determining virtual motion data based on the target real motion data;
the virtual prop driving module is used for driving a virtual prop to move in a 3D scene based on the target real motion data and the virtual motion data; the 3D scene is generated by rendering 3D scene information in a 3D rendering environment, the 3D rendering environment runs in electronic equipment, the 3D scene information comprises virtual prop information, and the virtual prop information is used for generating the virtual prop after rendering.
13. An electronic device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the electronic device is running, the machine-readable instructions being executable by the processor to perform the virtual item control method according to any one of claims 1 to 11.
14. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, and when being executed by a processor, the computer program performs the virtual item control method according to any one of claims 1 to 11.
CN202210139179.3A 2022-02-15 2022-02-15 Virtual prop control method and device, electronic equipment and readable storage medium Active CN114470768B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210139179.3A CN114470768B (en) 2022-02-15 2022-02-15 Virtual prop control method and device, electronic equipment and readable storage medium


Publications (2)

Publication Number Publication Date
CN114470768A true CN114470768A (en) 2022-05-13
CN114470768B CN114470768B (en) 2023-07-25

Family

ID=81480898

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210139179.3A Active CN114470768B (en) 2022-02-15 2022-02-15 Virtual prop control method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN114470768B (en)


Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20090060692A (en) * 2007-12-10 2009-06-15 한국전자통신연구원 Apparatus and method for changing physical data of virtual object
US20120249429A1 (en) * 2011-03-29 2012-10-04 Anderson Glen J Continued virtual links between gestures and user interface elements
CN103258338A (en) * 2012-02-16 2013-08-21 克利特股份有限公司 Method and system for driving simulated virtual environments with real data
US20130218542A1 (en) * 2012-02-16 2013-08-22 Crytek Gmbh Method and system for driving simulated virtual environments with real data
US20190134487A1 (en) * 2017-03-07 2019-05-09 vSports, LLC Mixed-Reality Sports Tracking and Simulation
US20200043210A1 (en) * 2018-08-06 2020-02-06 Baidu Online Network Technology (Beijing) Co., Ltd. Mixed reality interaction method, apparatus, device and storage medium
CN109453517A (en) * 2018-10-16 2019-03-12 Oppo广东移动通信有限公司 Virtual role control method and device, storage medium, mobile terminal
US20210268382A1 (en) * 2019-06-05 2021-09-02 Tencent Technology (Shenzhen) Company Limited Information display method and apparatus, electronic device, and computer storage medium
CN113473159A (en) * 2020-03-11 2021-10-01 广州虎牙科技有限公司 Digital human live broadcast method and device, live broadcast management equipment and readable storage medium
CN111803913A (en) * 2020-06-15 2020-10-23 北京首钢建设投资有限公司 Curling motion experience and training system and device based on virtual reality
CN112732081A (en) * 2020-12-31 2021-04-30 珠海金山网络游戏科技有限公司 Virtual object moving method and device
CN113713393A (en) * 2021-08-27 2021-11-30 腾讯科技(深圳)有限公司 Control method and device of virtual prop, storage medium and electronic equipment
CN113905251A (en) * 2021-10-26 2022-01-07 北京字跳网络技术有限公司 Virtual object control method and device, electronic equipment and readable storage medium

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117289791A (en) * 2023-08-22 2023-12-26 杭州空介视觉科技有限公司 Meta universe artificial intelligence virtual equipment data generation method

Also Published As

Publication number Publication date
CN114470768B (en) 2023-07-25


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant