CN114125552A - Video data generation method and device, storage medium and electronic device - Google Patents


Info

Publication number
CN114125552A
Authority
CN
China
Prior art keywords: video, virtual, model, data, character
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111443627.0A
Other languages
Chinese (zh)
Inventor
姜喜文
陈军英
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Perfect World Beijing Software Technology Development Co Ltd
Original Assignee
Perfect World Beijing Software Technology Development Co Ltd
Application filed by Perfect World Beijing Software Technology Development Co Ltd filed Critical Perfect World Beijing Software Technology Development Co Ltd
Priority to CN202111443627.0A
Publication of CN114125552A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44012 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2621 Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a video data generation method and device, a storage medium and an electronic device. The method includes: loading a first video at a first client and detecting an editing request for the first video; in response to the editing request, acquiring source video template data of the first video, where the source video template data includes a virtual character model that is allowed to be edited; acquiring character feature data of the virtual character model; and rendering a second video in a virtual scene model template corresponding to the source video template data according to the character feature data. The invention solves the technical problem in the related art that a video can only be edited with image special effects, and achieves deep editing of the video.

Description

Video data generation method and device, storage medium and electronic device
Technical Field
The invention relates to the technical field of computers, in particular to a method and a device for generating video data, a storage medium and an electronic device.
Background
In the related art, videos are either shot by users themselves or edited from existing videos. If a user wants to play a character in a particular scene, on the one hand the user can only rebuild the set and reshoot, which is costly and rarely achieves the expected effect; on the other hand, existing video editing only adds image special effects in post-production, which changes the visual appearance but cannot achieve deep editing.
In view of the above problems in the related art, no effective solution has been found at present.
Disclosure of Invention
The embodiment of the invention provides a method and a device for generating video data, a storage medium and an electronic device.
According to an embodiment of the present invention, there is provided a video data generation method, including: loading a first video at a first client, and detecting an editing request for the first video; in response to the editing request, acquiring source video template data of the first video, where the source video template data includes a virtual character model that is allowed to be edited; acquiring character feature data of the virtual character model; and rendering a second video in a virtual scene model template corresponding to the source video template data according to the character feature data.
According to another embodiment of the present invention, there is provided a video data generation apparatus, including: a detection module, configured to load a first video at a first client and detect an editing request for the first video; a first obtaining module, configured to obtain, in response to the editing request, source video template data of the first video, where the source video template data includes a virtual character model that is allowed to be edited; a second obtaining module, configured to obtain character feature data of the virtual character model; and a generating module, configured to render and generate a second video in a virtual scene model template corresponding to the source video template data according to the character feature data.
According to a further embodiment of the present invention, there is also provided a storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the above method embodiments when executed.
According to yet another embodiment of the present invention, there is also provided an electronic device, including a memory in which a computer program is stored and a processor configured to execute the computer program to perform the steps in any of the above method embodiments.
According to the above embodiments, a first video is loaded at a first client and an editing request for the first video is detected; in response to the editing request, source video template data of the first video is acquired, where the source video template data includes a virtual character model that is allowed to be edited; character feature data of the virtual character model is acquired; and a second video is rendered in a virtual scene model template corresponding to the source video template data according to the character feature data. By obtaining the source video template data of the first video and the character feature data of the virtual character model, the character feature data can be used to configure or control the virtual character model in the virtual scene model template, so that the first video is re-edited. This solves the technical problem in the related art that a video can only be edited with image special effects, and achieves deep editing of the video.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
fig. 1 is a block diagram of a hardware configuration of a video data generation server according to an embodiment of the present invention;
fig. 2 is a flowchart illustrating a method for generating video data according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a hierarchical tree in accordance with an embodiment of the present invention;
FIG. 4 is a schematic diagram of video editing according to an embodiment of the present invention;
fig. 5 is a block diagram of a video data generation apparatus according to an embodiment of the present invention;
fig. 6 is a block diagram of an electronic device according to an embodiment of the invention.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
The method provided by the first embodiment of the present application may be executed on a game server, a cloud server, a computer, or a similar electronic terminal. Taking execution on a server as an example, fig. 1 is a block diagram of the hardware structure of a video data generation server according to an embodiment of the present invention. As shown in fig. 1, the server may include one or more processors 102 (only one is shown in fig. 1; the processor 102 may include, but is not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA)) and a memory 104 for storing data, and may optionally also include a transmission device 106 for communication functions and an input-output device 108. It will be understood by those skilled in the art that the structure shown in fig. 1 is only an illustration and does not limit the structure of the server. For example, the server may include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1.
The memory 104 may be used to store server programs, for example software programs and modules of application software, such as the server program corresponding to the video data generation method in the embodiment of the present invention. The processor 102 executes various functional applications and data processing by running the server program stored in the memory 104, thereby implementing the method described above. The memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the server over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof. In the present embodiment, the processor 102 is configured to control the target virtual character to perform a specified operation to complete a game task in response to a human-machine interaction instruction and a game policy. The memory 104 is used to store program scripts, configuration information, attribute information of virtual characters, and the like.
The transmission device 106 is used to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the server. In one example, the transmission device 106 includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
Optionally, the input/output device 108 further includes a human-computer interaction screen, which is used to acquire human-computer interaction instructions through a human-computer interaction interface and to present streaming media pictures.
in this embodiment, a method for generating video data is provided, and fig. 2 is a schematic flowchart of a method for generating video data according to an embodiment of the present invention, as shown in fig. 2, the flowchart includes the following steps:
step S202, loading a first video at a first client, and detecting an editing request of the first video;
step S204, responding to the editing request, and acquiring source video template data of the first video, wherein the source video template data comprises a virtual role model allowing editing;
the video server is configured with a plurality of virtual videos such as movie bridge segments, game videos and the like in advance, the first client selects the first video and loads and plays the first video, the parameters of specific role elements in the first video are set to be replaceable, other roles are not replaceable (such as scenes, props, non-key characters and the like), and when a user edits and generates a new second video, the user selects at least one favorite role in the first video as a replacing or processing object. Optionally, the source video template data further includes a virtual character model or a scene model (such as a building, a mountain, an NPC, etc.) that is not allowed to be edited, and configuration data of such a model is fixed.
Step S206, acquiring character feature data of the virtual character model;
optionally, the character feature data may include face three-dimensional feature data, motion feature data, body feature data, and sound feature data.
Step S208, rendering a second video in a virtual scene model template corresponding to the source video template data according to the character feature data;
optionally, the character feature data is used to configure or control the virtual character model in the virtual scene model template, so as to implement deep rendering.
Optionally, the first video is loaded on a video playing terminal (such as a live broadcast terminal, a television, a video conference terminal, a game terminal, and an instant messaging terminal), and a virtual picture is presented in real time, where the virtual picture includes a virtual character rendered based on a virtual character model.
Through the above steps, a first video is loaded at a first client and an editing request for the first video is detected; in response to the editing request, source video template data of the first video is acquired, where the source video template data includes a virtual character model that is allowed to be edited; character feature data of the virtual character model is acquired; and a second video is rendered in a virtual scene model template corresponding to the source video template data according to the character feature data. By obtaining the source video template data of the first video and the character feature data of the virtual character model, the character feature data can be used to configure or control the virtual character model in the virtual scene model template, so that the first video is re-edited. This solves the technical problem in the related art that a video can only be edited with image special effects, and achieves deep editing of the video.
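To make the overall flow of steps S202 to S208 concrete, the following Python sketch shows one possible organization of the data and calls involved. It is an illustrative assumption only: the class and function names (SourceVideoTemplate, handle_edit_request, and so on) do not come from this application, and a real implementation would run on the video/game server described above.

```python
# Illustrative sketch only: the application does not prescribe an API; all names are assumed.
from dataclasses import dataclass, field


@dataclass
class CharacterModel:
    name: str
    editable: bool                      # only editable models may be replaced or re-driven
    config: dict = field(default_factory=dict)


@dataclass
class SourceVideoTemplate:
    scene_template_id: str                    # the virtual scene model template
    character_models: list[CharacterModel]    # editable and non-editable character models


def fetch_source_template(video_id: str) -> SourceVideoTemplate:
    # Placeholder: in practice the template would be served by the video/game server.
    return SourceVideoTemplate(
        scene_template_id=f"scene-of-{video_id}",
        character_models=[CharacterModel("lead", True), CharacterModel("npc", False)],
    )


def render_in_scene_template(template: SourceVideoTemplate) -> str:
    # Placeholder for the rendering-engine call that actually produces the second video.
    return f"second-video-rendered-from-{template.scene_template_id}"


def handle_edit_request(first_video_id: str, character_features: dict) -> str:
    """Steps S204-S208: fetch the template, apply feature data, render the second video."""
    template = fetch_source_template(first_video_id)          # S204
    for model in template.character_models:
        if model.editable:
            model.config.update(character_features)           # S206: configure the editable model
    return render_in_scene_template(template)                 # S208


# Example: replace the editable character with the user's captured face and voice features.
second_video = handle_edit_request("first-video-001", {"face_3d": "face-scan", "voice": "voice-sample"})
```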
In an implementation manner of this embodiment, loading the first video at the first client includes: in the process of loading a virtual game at the first client, generating a game clip video from picture frames of the virtual game; and loading the game clip video.
In this embodiment, the character feature data may be acquired from a plurality of data sources, and acquiring the character feature data of the virtual character model includes: capturing, from live-action data collected by the first client, at least one of the following items of the virtual character model: face three-dimensional feature data, motion feature data, body feature data and sound feature data; and/or requesting, from a feature material library, at least one of the following items of the virtual character model: face three-dimensional feature data, motion feature data, body feature data and sound feature data; and/or selecting a game character in a virtual game, and reading the character feature data of the virtual character model from the character attributes or skill attributes of the game character.
Optionally, the face three-dimensional feature data may be three-dimensional data of the facial features or of specific facial organs, the motion feature data may be continuous dynamic motions of a real user, such as martial arts, dancing or drama motions, the body feature data may be data such as height, weight and body proportions, and the sound feature data may be feature data such as timbre and voiceprint.
In one example, when character feature data is collected, feature information of a part of a user, such as a face, is collected in real time by using a depth camera of a mobile phone of the user, and the collected face feature information is used for controlling a virtual character model to present a corresponding expression, so that a second video capable of reflecting the real-time expression of the user in the process of shooting the video is generated. On the other hand, character feature data may be obtained from an existing live-action image, and during the acquisition process, the face or body of the user needs to be scanned in a panoramic manner to generate a skeleton model of the virtual character model.
The character feature data can be collected in real time by the client, or requested from a feature material library. The feature material library stores multiple sets of character feature data in advance, and each set can be adapted to different types of virtual character models. The character attributes of a game character store feature data such as the character's image and voice, so the character feature data can be read from the character attributes; the skill attributes store the special-effect animations or motion features triggered by the corresponding skills, so the character feature data can also be read from the skill attributes. When character feature data is requested from the feature material library, it may be, for example, motion data of an action star or a well-known blogger, or voice data of a singer, which can be acquired by purchasing it with a certain amount of virtual currency.
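As a minimal sketch of the multi-source acquisition described above, the following Python fragment assumes a simple container for the four kinds of feature data and treats the live capture, the feature material library and the game character attributes as interchangeable dictionary-like sources; all names are illustrative and not part of this application.

```python
# Sketch under assumed names: the application only specifies the four feature categories
# and the three possible sources (live capture, feature material library, game character).
from dataclasses import dataclass
from typing import Optional


@dataclass
class CharacterFeatures:
    face_3d: Optional[bytes] = None   # face three-dimensional feature data
    motion: Optional[bytes] = None    # motion/action feature data
    body: Optional[bytes] = None      # body feature data (height, proportions, ...)
    voice: Optional[bytes] = None     # sound feature data (timbre, voiceprint, ...)


def collect_features(live_capture=None, material_library=None, game_character=None) -> CharacterFeatures:
    """Merge feature data from any of the three sources; later sources override earlier ones."""
    features = CharacterFeatures()
    for source in (live_capture, material_library, game_character):
        if not source:
            continue
        for name in ("face_3d", "motion", "body", "voice"):
            value = source.get(name)
            if value is not None:
                setattr(features, name, value)
    return features


# Example: face captured live with the phone's depth camera, voice licensed from the material library.
features = collect_features(
    live_capture={"face_3d": b"depth-camera-frames"},
    material_library={"voice": b"licensed-voice-sample"},
)
```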
Optionally, the virtual character model may be a Player-Controlled Character (PCC) model or a Non-Player Character (NPC, Non-Player-Controlled Character) model, and obtaining the character feature data of the virtual character model may include, but is not limited to: obtaining first character feature data of a PCC model from the first client; obtaining second character feature data of an NPC model from the first client; or obtaining third character feature data of a first virtual character model from the first client and obtaining fourth character feature data of a second virtual character model from a second client.
In this embodiment, when the character feature data of the virtual character model is obtained, character feature data of the same virtual character model may be obtained from multiple clients, for example one client captures the voice features of user A and another captures the motion features of user B, achieving the visual effect of a composite character. Alternatively, character feature data may be obtained separately for different virtual character models, for example the third character feature data of the first virtual character model is obtained from the first client and the fourth character feature data of the second virtual character model is obtained from the second client, so that the first client and the second client are linked through different virtual character models and each controls its own virtual character model in the video; the linkage may be synchronous or asynchronous.
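The "composite" and "linked" cases just described could be organized as in the sketch below; the client identifiers, the merge policy and the function names are assumptions made only for illustration.

```python
# Illustrative only: how per-client feature data might be routed to character models.
def assemble_composite(features_by_client: dict) -> dict:
    """Composite case: one virtual character model driven by features from several clients,
    e.g. voice features from client A and motion features from client B."""
    merged = {}
    for features in features_by_client.values():
        merged.update(features)   # last writer wins; a real system would resolve conflicts
    return merged


def assign_per_model(features_by_client: dict, model_by_client: dict) -> dict:
    """Linked case: each client controls its own virtual character model (sync or async)."""
    return {model_by_client[client]: features for client, features in features_by_client.items()}


# Client A supplies the voice and client B the motion, both driving the same model:
composite = assemble_composite({"client_a": {"voice": "a.wav"}, "client_b": {"motion": "b.bvh"}})
# Clients A and B each drive their own model in the same video:
per_model = assign_per_model(
    {"client_a": {"motion": "a.bvh"}, "client_b": {"motion": "b.bvh"}},
    {"client_a": "first_character_model", "client_b": "second_character_model"},
)
```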
In one implementation of this embodiment, the physics engine of the game server is invoked to generate the second video:
in one implementation scenario, the role feature data includes third role feature data of the first virtual role model and fourth role feature data of the second virtual role model, and rendering and generating the second video in the virtual scene model template corresponding to the source video template data according to the role feature data includes:
s11, calling a game scene sub-model, a world motion sub-model, a first virtual role sub-model, a second virtual role sub-model and a game interaction sub-model of the virtual scene model template from the game rendering engine;
the game scene submodel comprises models of a dynamic scene and a static scene, the world motion submodel is a motion logic model of an interactive object, and external calling can be realized after a game server develops an interface according to the type of a game scenario. In the virtual scene of the metasma, the game development company develops the editing interface and renders and outputs the second video in the cloud server of the metasma community in a unified manner.
S12, editing the first virtual character sub-model in the game scene sub-model and the world motion sub-model with the third character feature data, and editing the second virtual character sub-model in the game scene sub-model and the world motion sub-model with the fourth character feature data, to generate an intermediate video;
and S13, adding the interaction information between the first virtual character submodel and the second virtual character submodel in the intermediate video by adopting the game interaction submodel to generate a second video.
In one example, when the game interaction sub-model is used to add interaction information between the first virtual character sub-model and the second virtual character sub-model in the intermediate video, interaction features such as body movement and spoken social interaction (for example, the real-time conversation of two players shooting together and acting out a video scenario), i.e. social friend interaction between the first virtual character sub-model and the second virtual character sub-model, are extracted from the third character feature data of the first virtual character sub-model and the fourth character feature data of the second virtual character sub-model.
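A compact sketch of steps S11 to S13 is given below. The engine object and its methods are hypothetical stand-ins for whatever interface the game rendering engine actually exposes, since this application does not prescribe one.

```python
# Hypothetical wrapper around a game rendering engine; the real interface is engine-specific.
def render_two_character_video(engine, third_features: dict, fourth_features: dict):
    # S11: call the five sub-models of the virtual scene model template from the engine
    scene = engine.get_submodel("game_scene")
    world = engine.get_submodel("world_motion")
    first_character = engine.get_submodel("first_virtual_character")
    second_character = engine.get_submodel("second_virtual_character")
    interaction = engine.get_submodel("game_interaction")

    # S12: drive each character sub-model with its client's character feature data
    first_character.apply(third_features)
    second_character.apply(fourth_features)
    intermediate_video = engine.render(scene, world, first_character, second_character)

    # S13: add interaction information (body movement, real-time conversation, ...) on top
    interaction.bind(first_character, second_character)
    return engine.composite(intermediate_video, interaction)
```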
In some scenarios of this embodiment, the first video may be obtained from video social software and edited on that basis; the video social software and the game server are interconnected, with the game server providing the underlying technical support, making full use of the distribution capability and popularity of the video social software.
Based on the above scenario, the first client is a video social client, and loading the first video at the first client and detecting the editing request of the first video includes: loading the first video at the video social client, and detecting the editing request of the first video in a playing interface of the video social client that plays the first video, where the editing request is carried in the video frame data of the first video.
The code of the editing request carries an interface script of the game server; triggering it calls the editing resources of the game server, and after editing is completed and the second video is generated, the second video can further be published on the video social software.
In an implementation manner of this embodiment, after the first client loads the first video, the method further includes:
s21, extracting an editor tag from the first video, wherein the editor tag is used for identifying an editing account related to the first video;
s22, calculating the flow data of the first video generated on the first client;
and S23, saving the flow data to a charging account of the editing account.
Optionally, the step of storing the traffic data in the charging account of the editing account includes: extracting a parent account and a child account of the first video from the hierarchical tree, wherein the parent account is an editing account of a source video of the first video, and the child account is an editing account of the first video; calculating the editing proportion of the parent-level account and the child-level account to the first video respectively; dividing the flow data according to the editing proportion, and respectively storing the divided flow sub-data to the parent-level account and the child-level account.
Fig. 3 is a schematic diagram of a hierarchical tree according to an embodiment of the present invention, which illustrates three levels: the first level includes account 1, the second level includes account 2 and account 3, and the third level includes account 4, account 5, account 6 and account 7. Taking account 5 as the child account that edited the first video as an example, its parent accounts are account 2 and account 1: account 1 edited video A on the basis of the original video, account 2 edited video A to generate video B, and account 5 edited the first video on the basis of video B. If the editing proportion of account 1 with respect to the first video is 10%, that of account 2 is 30%, and that of account 5 is 60%, then when the traffic data of the first video is divided, account 1, account 2 and account 5 receive 10%, 30% and 60% respectively.
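The proportional division described for fig. 3 reduces to a simple weighted split. The sketch below reproduces the 10%/30%/60% example; the function name and account identifiers are assumed for illustration.

```python
def split_traffic(traffic_data: float, edit_proportions: dict) -> dict:
    """Divide the first video's traffic data among the editing accounts on the
    hierarchical-tree path, in proportion to each account's editing share."""
    total = sum(edit_proportions.values())
    return {account: traffic_data * share / total for account, share in edit_proportions.items()}


# Example from fig. 3: accounts 1, 2 and 5 edited 10%, 30% and 60% of the first video.
shares = split_traffic(1000.0, {"account_1": 0.10, "account_2": 0.30, "account_5": 0.60})
# -> {'account_1': 100.0, 'account_2': 300.0, 'account_5': 600.0}
```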
In this embodiment, the link of the editing request is carried in the video frame data of the first video and is displayed on the playing interface of the first video, so that other viewers can easily find the link and initiate their own video editing requests; and since the platform records each video edit and computes the traffic division, editing, distribution and re-creation of videos on the platform are encouraged.
In another implementation of this embodiment, the image processing engine of the game server is invoked to generate the second video:
in an implementation scenario of this embodiment, generating a second video by rendering in a virtual scene model template corresponding to source video template data according to the character feature data includes:
s31, deleting the source virtual role corresponding to the virtual role model in the virtual scene model template to obtain an intermediate virtual scene model template;
s32, playing the intermediate virtual scene model template from the initial position, and adding a virtual character model in the playing picture of the intermediate virtual scene model template;
in this embodiment, the source virtual role and the intermediate virtual scene model template respectively correspond to a part of source video template data, and after the data of the source virtual role is deleted from the source video template data, the intermediate virtual scene model template is loaded and played, and the virtual role model is added again, where the virtual role model is a configurable original model.
S33, controlling the virtual character model to move in the playing picture to form an action track based on the character feature data, and recording the action track;
in some examples, controlling the virtual character model to move in the play screen to form the motion track based on the character feature data includes: generating a control instruction aiming at the virtual character model in a playing picture based on the character characteristic data, and correspondingly adding a plurality of virtual cameras at a plurality of visual angles of the playing picture, wherein the control instruction is used for indicating the virtual character model to interact with scene elements in the playing picture; and responding to the control instruction, controlling the virtual character model to move in the playing picture to form a first motion track, and controlling a plurality of virtual cameras to shoot a plurality of groups of dynamic pictures of the virtual character model in the moving process along the first motion track.
In some examples, controlling the virtual character model to move in the play screen to form the motion track based on the character feature data includes: sequentially extracting facial features and posture features from the character feature data according to a storage time sequence, wherein the character feature data stores a plurality of groups of facial features and a plurality of groups of posture features in a queue based on the storage time sequence; and generating a face action instruction and a body action instruction which respectively correspond to the face characteristic and the body posture characteristic, and adopting the face action instruction and the body action instruction to synchronously control the target virtual image to move in the playing picture so as to form a second action track.
And S34, re-rendering the virtual character model in the intermediate virtual scene model template along the action track, and generating a second video containing the target virtual character.
In some examples, re-rendering the virtual character model in the intermediate virtual scene model template along the action track to generate the second video containing the target virtual character includes: acquiring the multiple groups of dynamic pictures from the multiple virtual cameras; and for each frame of the playing picture of the intermediate virtual scene model template, selecting one group of video pictures from the multiple groups of dynamic pictures, and clipping the playing pictures and the video pictures frame by frame to generate the second video containing the target virtual character.
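Steps S31 to S34, combined with the multi-camera variant above, could be organized as in the following sketch; the template, camera and frame objects and their methods are illustrative assumptions rather than an actual engine interface.

```python
# Illustrative sketch of S31-S34 with several virtual cameras; not a real engine API.
def generate_second_video(template, character_model, features, camera_views, pick_best_view):
    template.remove_source_character()        # S31: delete the source character -> intermediate template
    template.add_character(character_model)   # S32: re-add a configurable character model

    motion_track, output_frames = [], []
    for frame in template.play_from_start():  # S33: play and drive the model frame by frame
        command = character_model.command_from(features, frame)   # control instruction
        motion_track.append(character_model.apply(command))       # record the action track
        shots = {view: view.capture(frame, character_model) for view in camera_views}
        best = pick_best_view(frame, shots)   # S34: keep one camera's picture for this frame
        output_frames.append(frame.composite(shots[best]))

    return output_frames                      # frames of the second video with the target character
```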
A user can enter the virtual scene model template corresponding to the first video through a virtual character selected by the user. While controlling the virtual character, the user performs the scenario corresponding to the first video together with virtual characters or NPCs controlled by other users, or interacts automatically with the other virtual characters and NPCs; the multiple virtual cameras in the virtual scene shoot this performance, and the pictures shot by the virtual cameras can later be edited to obtain the video clips of the second video.
Fig. 4 is a schematic diagram of video editing according to an embodiment of the present invention. The first video screenshot and the second video screenshot are screenshots of the same position in the first video and the second video respectively; character 1 and character 2 are configured and rendered based on the same virtual character model, the virtual character model is reconfigured with the character feature data, and the pictures in the other areas remain unchanged, thereby implementing the video editing.
In an implementation manner of this embodiment, obtaining the character feature data of the virtual character model includes: collecting a first target audio, extracting timbre features from the first target audio, and extracting the speech-line information of the virtual character model in the first video; and generating character dubbing data of the virtual character model based on the timbre features and the speech-line information.
With this embodiment, the virtual character model in the first video can be dubbed while the speech-line information remains unchanged; for example, the timbre of a certain character in the first video can be edited into the timbre of the user while the video pictures remain unchanged.
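As a sketch of this dubbing-only edit (the user's timbre, unchanged lines), one possible pipeline is shown below; extract_timbre and synthesize_speech are placeholders for whatever voice-feature-extraction and speech-synthesis components are actually used, which this application does not name.

```python
# Hypothetical dubbing pipeline: the application specifies the inputs and outputs, not the algorithms.
def generate_character_dubbing(first_target_audio: bytes, line_texts: list,
                               extract_timbre, synthesize_speech) -> list:
    """Keep the character's lines from the first video, re-voice them with the user's timbre."""
    timbre = extract_timbre(first_target_audio)             # timbre features of the collected audio
    return [synthesize_speech(text, timbre) for text in line_texts]
```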
In another implementation manner of this embodiment, obtaining the character feature data of the virtual character model includes: collecting a second target audio and parsing the text information in the second target audio; configuring the second target audio and the text information as character dubbing data and character speech-line data of the virtual character model respectively; and recognizing mouth-shape features and body action features in the second target audio, and configuring them as mouth-shape action data and body action data of the virtual character model respectively.
With this embodiment, dubbing, lines, actions, and the like of the virtual character model in the first video can be edited.
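The audio-driven variant above could be organized as in the following sketch; the speech-to-text and audio-driven animation components are placeholders assumed for illustration, since this application only specifies the inputs and the four outputs.

```python
# Placeholder components; the application only requires that these four outputs be produced.
def configure_model_from_audio(second_target_audio: bytes, model,
                               speech_to_text, mouth_shape_from_audio, body_motion_from_audio):
    text = speech_to_text(second_target_audio)               # parsed text information
    model.dubbing_data = second_target_audio                 # character dubbing data
    model.line_data = text                                   # character speech-line data
    model.mouth_motion = mouth_shape_from_audio(second_target_audio)   # mouth-shape action data
    model.body_motion = body_motion_from_audio(second_target_audio)    # body action data
    return model
```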
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
Example 2
In this embodiment, a video data generation apparatus is further provided, which is used to implement the foregoing embodiments and preferred implementations; what has already been described is not repeated. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the apparatus described in the embodiments below is preferably implemented in software, an implementation in hardware, or in a combination of software and hardware, is also possible and contemplated.
Fig. 5 is a block diagram of a video data generation apparatus according to an embodiment of the present invention, and as shown in fig. 5, the apparatus includes: a detection module 50, a first acquisition module 52, a second acquisition module 54, a generation module 56, wherein,
a detection module 50, configured to load a first video at a first client, and detect an editing request of the first video;
a first obtaining module 52, configured to obtain, in response to the editing request, source video template data of the first video, where the source video template data includes a virtual character model that allows editing;
a second obtaining module 54, configured to obtain character feature data of the virtual character model;
and a generating module 56, configured to render and generate a second video in a virtual scene model template corresponding to the source video template data according to the character feature data.
Optionally, the detection module includes: a generating unit, configured to generate a game clip video from picture frames of a virtual game in the process of the first client loading the virtual game; and a loading unit, configured to load the game clip video.
Optionally, the second obtaining module includes at least one of: a first obtaining unit, configured to obtain first character feature data of a player-controlled character (PCC) model from the first client; a second obtaining unit, configured to obtain second character feature data of a non-player character (NPC) model from the first client; and a third obtaining unit, configured to obtain third character feature data of a first virtual character model from the first client and fourth character feature data of a second virtual character model from a second client.
Optionally, the character feature data includes third character feature data of the first virtual character model and fourth character feature data of the second virtual character model, and the generating module includes: a calling unit, configured to call the game scene sub-model, world motion sub-model, first virtual character sub-model, second virtual character sub-model and game interaction sub-model of the virtual scene model template from a game rendering engine; an editing unit, configured to edit the first virtual character sub-model in the game scene sub-model and the world motion sub-model with the third character feature data, and edit the second virtual character sub-model in the game scene sub-model and the world motion sub-model with the fourth character feature data, to generate an intermediate video; and a first generation unit, configured to use the game interaction sub-model to add interaction information between the first virtual character sub-model and the second virtual character sub-model in the intermediate video, to generate the second video.
Optionally, the first client is a video social client, and the detection module includes: the detection unit is used for loading a first video at a video social client and detecting an editing request of the first video in a playing interface of the video social client for playing the first video, wherein the editing request is carried in video frame data of the first video.
Optionally, the apparatus further includes: an extraction module, configured to extract an editor tag from the first video after the detection module loads the first video at the first client, where the editor tag is used to identify the editing accounts associated with the first video; a calculation module, configured to calculate the traffic data generated by the first video on the first client; and a saving module, configured to save the traffic data to a charging account of the editing account.
Optionally, the saving module includes: an extraction unit, configured to extract a parent account and a child account of the first video from a hierarchical tree, where the parent account is an editing account of a source video of the first video and the child account is an editing account of the first video; a calculation unit, configured to calculate the editing proportion of the parent account and the child account with respect to the first video; and a saving unit, configured to divide the traffic data according to the editing proportions and save the divided traffic sub-data to the parent account and the child account respectively.
Optionally, the second obtaining module includes: a fourth obtaining unit, configured to capture, from the live-action data collected by the first client, at least one of the following items of the virtual character model: face three-dimensional feature data, motion feature data, body feature data and sound feature data; and/or a fifth obtaining unit, configured to request, from a feature material library, at least one of the following items of the virtual character model: face three-dimensional feature data, motion feature data, body feature data and sound feature data; and a sixth obtaining unit, configured to select a game character in a virtual game and read the character feature data of the virtual character model from the character attributes or skill attributes of the game character.
Optionally, the generating module includes: a deleting unit, configured to delete a source virtual character corresponding to the virtual character model from the virtual scene model template, to obtain an intermediate virtual scene model template; an adding unit, configured to play the intermediate virtual scene model template from an initial position, and add the virtual character model to a play picture of the intermediate virtual scene model template; the control unit is used for controlling the virtual character model to move in the playing picture to form an action track based on the character characteristic data and recording the action track; and the second generation unit is used for re-rendering the virtual character model in the intermediate virtual scene model template along the action track and generating a second video containing the target virtual character.
Optionally, the control unit includes: a first control subunit, configured to generate, in the playing picture based on the character feature data, a control instruction for the virtual character model, and add a plurality of virtual cameras to a plurality of viewing angles of the playing picture, where the control instruction is used to instruct the virtual character model to interact with a scene element in the playing picture; and the second control subunit is used for responding to the control instruction, controlling the virtual character model to move in the playing picture to form a first motion track, and controlling the plurality of virtual cameras to shoot a plurality of groups of dynamic pictures of the virtual character model in the moving process along the first motion track.
Optionally, the second generating unit includes: an acquiring subunit, configured to acquire multiple sets of dynamic pictures of the virtual cameras; and the generating subunit is used for selecting a group of video pictures in the multiple groups of dynamic pictures according to each frame of playing pictures of the intermediate virtual scene model template, and clipping the playing pictures and the video pictures frame by frame to generate a second video containing the target virtual character.
Optionally, the control unit includes: an extraction subunit, configured to sequentially extract the facial features and posture features from the character feature data according to a storage time sequence, where the character feature data stores multiple groups of facial features and posture features in a queue based on the storage time sequence; and a generating subunit, configured to generate facial action instructions and body action instructions corresponding respectively to the facial features and the posture features, and to use the facial action instructions and the body action instructions to synchronously control the target avatar to move in the playing picture to form a second motion track.
Optionally, the second obtaining module includes: a first processing unit, configured to collect a first target audio, extract timbre features from the first target audio, and extract the speech-line information of the virtual character model in the first video; and a generating unit, configured to generate character dubbing data of the virtual character model based on the timbre features and the speech-line information.
Optionally, the second obtaining module includes: a second processing unit, configured to collect a second target audio and parse the text information in the second target audio; a first configuration unit, configured to configure the second target audio and the text information as character dubbing data and character speech-line data of the virtual character model respectively; and a second configuration unit, configured to recognize the mouth-shape features and body action features in the second target audio and configure them as mouth-shape action data and body action data of the virtual character model respectively.
It should be noted that, the above modules may be implemented by software or hardware, and for the latter, the following may be implemented, but not limited to: the modules are all positioned in the same processor; alternatively, the modules are respectively located in different processors in any combination.
Example 3
Fig. 6 is a structural diagram of an electronic device according to an embodiment of the present invention, and as shown in fig. 6, the electronic device includes a processor 61, a communication interface 62, a memory 63, and a communication bus 64, where the processor 61, the communication interface 62, and the memory 63 complete mutual communication through the communication bus 64, and the memory 63 is used for storing a computer program;
the processor 61 is configured to implement the following steps when executing the program stored in the memory 63: loading a first video at a first client, and detecting an editing request of the first video; responding to the editing request, and acquiring source video template data of the first video, wherein the source video template data comprises a virtual role model allowing editing; acquiring character characteristic data of the virtual character model; and rendering in a virtual scene model template corresponding to the source video template data according to the role characteristic data to generate a second video.
Optionally, loading the first video at the first client includes: in the process of loading a virtual game at the first client, generating a game clip video from picture frames of the virtual game; and loading the game clip video.
Optionally, obtaining the character feature data of the virtual character model includes at least one of the following: obtaining first character feature data of a player-controlled character (PCC) model from the first client; obtaining second character feature data of a non-player character (NPC) model from the first client; and obtaining third character feature data of a first virtual character model from the first client and fourth character feature data of a second virtual character model from a second client.
Optionally, the character feature data includes third character feature data of the first virtual character model and fourth character feature data of the second virtual character model, and rendering the second video in the virtual scene model template corresponding to the source video template data according to the character feature data includes: calling the game scene sub-model, world motion sub-model, first virtual character sub-model, second virtual character sub-model and game interaction sub-model of the virtual scene model template from a game rendering engine; editing the first virtual character sub-model in the game scene sub-model and the world motion sub-model with the third character feature data, and editing the second virtual character sub-model in the game scene sub-model and the world motion sub-model with the fourth character feature data, to generate an intermediate video; and using the game interaction sub-model to add interaction information between the first virtual character sub-model and the second virtual character sub-model in the intermediate video, to generate the second video.
Optionally, the first client is a video social client, and loading the first video at the first client and detecting the editing request of the first video includes: loading the first video at the video social client, and detecting the editing request of the first video in a playing interface of the video social client that plays the first video, where the editing request is carried in the video frame data of the first video.
Optionally, after the first client loads the first video, the method further includes: extracting an editor tag from the first video, where the editor tag is used to identify the editing accounts associated with the first video; calculating the traffic data generated by the first video on the first client; and saving the traffic data to a charging account of the editing account.
Optionally, saving the traffic data to the charging account of the editing account includes: extracting a parent account and a child account of the first video from a hierarchical tree, where the parent account is an editing account of a source video of the first video and the child account is an editing account of the first video; calculating the editing proportion of the parent account and the child account with respect to the first video; and dividing the traffic data according to the editing proportions and saving the divided traffic sub-data to the parent account and the child account respectively.
Optionally, obtaining the character feature data of the virtual character model includes: capturing, from the live-action data collected by the first client, at least one of the following items of the virtual character model: face three-dimensional feature data, motion feature data, body feature data and sound feature data; and/or requesting, from a feature material library, at least one of the following items of the virtual character model: face three-dimensional feature data, motion feature data, body feature data and sound feature data; and/or selecting a game character in a virtual game and reading the character feature data of the virtual character model from the character attributes or skill attributes of the game character.
Optionally, rendering the second video in the virtual scene model template corresponding to the source video template data according to the character feature data includes: deleting the source virtual character corresponding to the virtual character model from the virtual scene model template to obtain an intermediate virtual scene model template; playing the intermediate virtual scene model template from the initial position, and adding the virtual character model to the playing picture of the intermediate virtual scene model template; controlling the virtual character model to move in the playing picture to form an action track based on the character feature data, and recording the action track; and re-rendering the virtual character model in the intermediate virtual scene model template along the action track to generate the second video containing the target virtual character.
Optionally, the controlling, based on the character feature data, the virtual character model to move in the play picture to form an action track includes: generating a control instruction aiming at the virtual character model in the playing picture based on the character characteristic data, and correspondingly adding a plurality of virtual cameras at a plurality of visual angles of the playing picture, wherein the control instruction is used for indicating the virtual character model to interact with scene elements in the playing picture; and responding to the control instruction, controlling the virtual character model to move in the playing picture to form a first motion track, and controlling a plurality of virtual cameras to shoot a plurality of groups of dynamic pictures of the virtual character model in the moving process along the first motion track.
Optionally, re-rendering the virtual character model in the intermediate virtual scene model template along the motion trajectory, and generating a second video including a target virtual character includes: acquiring multiple groups of dynamic pictures of the virtual cameras; and selecting a group of video pictures from the multiple groups of dynamic pictures according to each frame of playing pictures of the intermediate virtual scene model template, and clipping the playing pictures and the video pictures frame by frame to generate a second video containing a target virtual role.
Optionally, controlling the virtual character model to move in the playing picture to form an action track based on the character feature data includes: sequentially extracting facial features and posture features from the character feature data according to a storage time sequence, where the character feature data stores multiple groups of facial features and posture features in a queue based on the storage time sequence; and generating facial action instructions and body action instructions corresponding respectively to the facial features and the posture features, and using the facial action instructions and the body action instructions to synchronously control the target avatar to move in the playing picture to form a second motion track.
Optionally, obtaining the character feature data of the virtual character model includes: collecting a first target audio, extracting timbre features from the first target audio, and extracting the speech-line information of the virtual character model in the first video; and generating character dubbing data of the virtual character model based on the timbre features and the speech-line information.
Optionally, obtaining the character feature data of the virtual character model includes: collecting a second target audio and parsing the text information in the second target audio; configuring the second target audio and the text information as character dubbing data and character speech-line data of the virtual character model respectively; and recognizing mouth-shape features and body action features in the second target audio, and configuring the mouth-shape features and the body action features as mouth-shape action data and body action data of the virtual character model respectively.
The communication bus mentioned in the above terminal may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the terminal and other equipment.
The Memory may include a Random Access Memory (RAM) or a non-volatile Memory (non-volatile Memory), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The Processor may be a general-purpose Processor, and includes a Central Processing Unit (CPU), a Network Processor (NP), and the like; the Integrated Circuit may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, a discrete Gate or transistor logic device, or a discrete hardware component.
In yet another embodiment provided by the present application, a computer-readable storage medium is further provided, which stores instructions that, when executed on a computer, cause the computer to execute the video data generation method described in any of the above embodiments.
In a further embodiment provided by the present application, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the method of generating video data as described in any of the above embodiments.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another, for example from one website, computer, server, or data center to another website, computer, server, or data center over a wired link (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or a wireless link (e.g., infrared or microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid-state disk (SSD)), among others.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present application, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence or in the part that contributes to the prior art, or in whole or in part, may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
The foregoing is only a preferred embodiment of the present application, and it should be noted that those skilled in the art can make several improvements and modifications without departing from the principle of the present application, and these improvements and modifications should also be regarded as falling within the protection scope of the present application.

Claims (17)

1. A method for generating video data, comprising:
loading a first video at a first client, and detecting an editing request of the first video;
responding to the editing request, and acquiring source video template data of the first video, wherein the source video template data comprises a virtual role model allowing editing;
acquiring character feature data of the virtual character model;
and rendering in a virtual scene model template corresponding to the source video template data according to the character feature data to generate a second video.
2. The method of claim 1, wherein loading the first video at the first client comprises:
in the process of loading a virtual game at the first client, generating a game clip video from picture frames of the virtual game;
the game clip video is loaded.
3. The method of claim 1, wherein obtaining character characteristic data of the virtual character model comprises at least one of:
obtaining first character feature data of a player-controlled character (PCC) model from the first client;
obtaining second character feature data of a non-player character (NPC) model from the first client;
and obtaining third character feature data of the first virtual character model from the first client, and obtaining fourth character feature data of the second virtual character model from the second client.
4. The method of claim 1, wherein the character feature data comprises third character feature data of the first virtual character model and fourth character feature data of the second virtual character model, and rendering a second video in the virtual scene model template corresponding to the source video template data according to the character feature data comprises:
calling a game scene sub-model, a world motion sub-model, a first virtual character sub-model, a second virtual character sub-model and a game interaction sub-model of the virtual scene model template from a game rendering engine;
editing the first virtual character sub-model in the game scene sub-model and the world motion sub-model by using the third character feature data, and editing the second virtual character sub-model in the game scene sub-model and the world motion sub-model by using the fourth character feature data, to generate an intermediate video;
and adding, by using the game interaction sub-model, interaction information between the first virtual character sub-model and the second virtual character sub-model to the intermediate video to generate the second video.
5. The method of claim 1, wherein the first client is a video social client, and wherein loading the first video at the first client and detecting the edit request of the first video comprises:
the method comprises the steps of loading a first video at a video social client side, and detecting an editing request of the first video in a playing interface of the video social client side for playing the first video, wherein the editing request is carried in video frame data of the first video.
6. The method of claim 1, wherein after the first client loads the first video, the method further comprises:
extracting an editor tag from the first video, wherein the editor tag is used for identifying an editing account related to the first video;
calculating traffic data generated by the first video on the first client;
and saving the traffic data to a billing account of the editing account.
7. The method of claim 6, wherein saving the traffic data to the billing account of the editing account comprises:
extracting a parent-level account and a child-level account of the first video from a hierarchical tree, wherein the parent-level account is the editing account of the source video of the first video, and the child-level account is the editing account of the first video;
calculating the respective editing proportions of the parent-level account and the child-level account with respect to the first video;
and dividing the traffic data according to the editing proportions, and saving the divided traffic sub-data to the parent-level account and the child-level account respectively.
8. The method of claim 1, wherein obtaining character feature data for the virtual character model comprises:
capturing at least one of the following items of the virtual character model from the live-action data collected by the first client: facial three-dimensional feature data, action feature data, body feature data and sound feature data; and/or,
requesting at least one of the following items of the virtual character model from a feature material library: facial three-dimensional feature data, action feature data, body feature data and sound feature data;
selecting a game character in a virtual game, and reading the character feature data of the virtual character model from a character attribute or a skill attribute of the game character.
9. The method of claim 1, wherein rendering a second video in a virtual scene model template corresponding to the source video template data according to the character feature data comprises:
deleting a source virtual character corresponding to the virtual character model from the virtual scene model template to obtain an intermediate virtual scene model template;
playing the intermediate virtual scene model template from an initial position, and adding the virtual character model into a playing picture of the intermediate virtual scene model template;
controlling, based on the character feature data, the virtual character model to move in the playing picture to form an action track, and recording the action track;
and re-rendering the virtual character model in the intermediate virtual scene model template along the action track to generate a second video containing the target virtual character.
10. The method of claim 9, wherein controlling the virtual character model to move in the play screen to form a motion track based on the character feature data comprises:
generating, based on the character feature data, a control instruction for the virtual character model in the playing picture, and correspondingly adding a plurality of virtual cameras at a plurality of viewing angles of the playing picture, wherein the control instruction is used for instructing the virtual character model to interact with scene elements in the playing picture;
and responding to the control instruction, controlling the virtual character model to move in the playing picture to form a first motion track, and controlling a plurality of virtual cameras to shoot a plurality of groups of dynamic pictures of the virtual character model in the moving process along the first motion track.
11. The method of claim 10, wherein re-rendering the virtual character model in the intermediate virtual scene model template along the motion trajectory, generating a second video containing a target virtual character comprises:
acquiring multiple groups of dynamic pictures of the virtual cameras;
and selecting a group of video pictures from the multiple groups of dynamic pictures according to each frame of the playing pictures of the intermediate virtual scene model template, and clipping the playing pictures and the selected video pictures frame by frame to generate the second video containing the target virtual character.
12. The method of claim 9, wherein controlling the virtual character model to move in the play screen to form a motion track based on the character feature data comprises:
sequentially extracting facial features and posture features in the character feature data according to a storage time sequence, wherein the character feature data stores a plurality of groups of facial features and a plurality of groups of posture features in a queue based on the storage time sequence;
and generating face action instructions and body action instructions corresponding respectively to the facial features and the posture features, and using the face action instructions and the body action instructions to synchronously control the target virtual image to move in the playing picture so as to form a second action track.
13. The method of claim 1, wherein obtaining character feature data for the virtual character model comprises:
acquiring a first target audio, extracting tone features from the first target audio, and extracting the line information of the virtual character model in the first video;
and generating character dubbing data for the virtual character model based on the tone features and the line information.
14. The method of claim 1, wherein obtaining character feature data for the virtual character model comprises:
collecting a second target audio, and parsing the text information in the second target audio;
configuring the second target audio and the text information as character dubbing data and character line data of the virtual character model respectively;
and recognizing mouth-shape features and body-motion features in the second target audio, and configuring the mouth-shape features and the body-motion features as mouth-shape action data and body action data of the virtual character model respectively.
15. An apparatus for generating video data, comprising:
a detection module, configured to load a first video at a first client and detect an editing request of the first video;
a first obtaining module, configured to obtain, in response to the editing request, source video template data of the first video, where the source video template data includes a virtual character model that allows editing;
a second obtaining module, configured to obtain character feature data of the virtual character model;
and a generating module, configured to render, according to the character feature data, in the virtual scene model template corresponding to the source video template data to generate a second video.
16. A storage medium, in which a computer program is stored, wherein the computer program is arranged to perform the method of any of claims 1 to 14 when executed.
17. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and wherein the processor is arranged to execute the computer program to perform the method of any of claims 1 to 14.
CN202111443627.0A 2021-11-30 2021-11-30 Video data generation method and device, storage medium and electronic device Pending CN114125552A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111443627.0A CN114125552A (en) 2021-11-30 2021-11-30 Video data generation method and device, storage medium and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111443627.0A CN114125552A (en) 2021-11-30 2021-11-30 Video data generation method and device, storage medium and electronic device

Publications (1)

Publication Number Publication Date
CN114125552A true CN114125552A (en) 2022-03-01

Family

ID=80368619

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111443627.0A Pending CN114125552A (en) 2021-11-30 2021-11-30 Video data generation method and device, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN114125552A (en)


Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160093078A1 (en) * 2014-09-29 2016-03-31 Amazon Technologies, Inc. Virtual world generation engine
US20180085672A1 (en) * 2016-09-28 2018-03-29 Nintendo Co., Ltd. Display control apparatus, display control system, display control method and storage medium
US20200306640A1 (en) * 2019-03-27 2020-10-01 Electronic Arts Inc. Virtual character generation from image or video data
CN112738623A (en) * 2019-10-14 2021-04-30 北京字节跳动网络技术有限公司 Video file generation method, device, terminal and storage medium
CN111558221A (en) * 2020-05-13 2020-08-21 腾讯科技(深圳)有限公司 Virtual scene display method and device, storage medium and electronic equipment
CN111768479A (en) * 2020-07-29 2020-10-13 腾讯科技(深圳)有限公司 Image processing method, image processing apparatus, computer device, and storage medium
CN112162628A (en) * 2020-09-01 2021-01-01 魔珐(上海)信息科技有限公司 Multi-mode interaction method, device and system based on virtual role, storage medium and terminal
CN112822556A (en) * 2020-12-31 2021-05-18 上海米哈游天命科技有限公司 Game picture shooting method, device, equipment and storage medium
CN112862935A (en) * 2021-03-16 2021-05-28 天津亚克互动科技有限公司 Game character motion processing method and device, storage medium and computer equipment
CN113240782A (en) * 2021-05-26 2021-08-10 完美世界(北京)软件科技发展有限公司 Streaming media generation method and device based on virtual role

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Simon Alexanderson, Gustav Eje Henter, Taras Kucherenko, Jonas Beskow: "Style-Controllable Speech-Driven Gesture Synthesis Using Normalising Flows", Computer Graphics Forum, vol. 39, no. 2, pages 487-496 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116261009A (en) * 2022-12-26 2023-06-13 北京奇树有鱼文化传媒有限公司 Video detection method, device, equipment and medium for intelligently converting video audience
CN116261009B (en) * 2022-12-26 2023-09-08 北京奇树有鱼文化传媒有限公司 Video detection method, device, equipment and medium for intelligently converting video audience

Similar Documents

Publication Publication Date Title
US9381429B2 (en) Compositing multiple scene shots into a video game clip
US20170161931A1 (en) Adapting content to augmented reality virtual objects
CN111080759B (en) Method and device for realizing split mirror effect and related product
CN112199016B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN109743584B (en) Panoramic video synthesis method, server, terminal device and storage medium
KR20160098949A (en) Apparatus and method for generating a video, and computer program for executing the method
US20230368461A1 (en) Method and apparatus for processing action of virtual object, and storage medium
CN109361954B (en) Video resource recording method and device, storage medium and electronic device
CN112422844A (en) Method, device and equipment for adding special effect in video and readable storage medium
CN111667557A (en) Animation production method and device, storage medium and terminal
WO2023185809A1 (en) Video data generation method and apparatus, and electronic device and storage medium
CN112827172A (en) Shooting method, shooting device, electronic equipment and storage medium
CN114125552A (en) Video data generation method and device, storage medium and electronic device
CN113965773A (en) Live broadcast display method and device, storage medium and electronic equipment
US20230053308A1 (en) Simulation of likenesses and mannerisms in extended reality environments
CN116017082A (en) Information processing method and electronic equipment
KR20200028830A (en) Real-time computer graphics video broadcasting service system
WO2022105097A1 (en) Video stream processing method and apparatus, and electronic device, storage medium and computer program
KR101221540B1 (en) Interactive media mapping system and method thereof
WO2018233533A1 (en) Editing device and system for on-line integrated augmented reality
CN115442658A (en) Live broadcast method and device, storage medium, electronic equipment and product
TWI652600B (en) Online integration of augmented reality editing devices and systems
CN112822555A (en) Shooting method, shooting device, electronic equipment and storage medium
CN112791401A (en) Shooting method, shooting device, electronic equipment and storage medium
CN111666793A (en) Video processing method, video processing device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination