CN111638791B - Virtual character generation method and device, electronic equipment and storage medium - Google Patents

Virtual character generation method and device, electronic equipment and storage medium

Info

Publication number
CN111638791B
Authority
CN
China
Prior art keywords
joint
image frame
character
current image
motion state
Prior art date
Legal status
Active
Application number
CN202010494281.6A
Other languages
Chinese (zh)
Other versions
CN111638791A (en)
Inventor
王光伟 (Wang Guangwei)
Current Assignee
Beijing Volcano Engine Technology Co Ltd
Original Assignee
Beijing Volcano Engine Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Volcano Engine Technology Co Ltd
Priority to CN202010494281.6A
Publication of CN111638791A
Application granted
Publication of CN111638791B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20: Movements or behaviour, e.g. gesture recognition
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00: Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01: Indexing scheme relating to G06F3/01
    • G06F2203/012: Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)

Abstract

In the virtual character generation method, apparatus, electronic device, and storage medium provided by the embodiments, a motion state recognition model recognizes the motion state of each joint from the character joint point information, so that the virtual character is effectively simulated based on the joint motion states.

Description

Virtual character generation method and device, electronic equipment and storage medium
Technical Field
The embodiment of the disclosure relates to the field of image processing, and in particular, to a method and an apparatus for generating a virtual character, an electronic device, and a storage medium.
Background
With the development of technology, it has become possible to display virtual characters synchronized with user actions on a display interface.
In the prior art, to simulate user actions synchronously, the positions of the user's human body joint points must be collected, and the joint point positions of the virtual character are then simulated from the collected positions, so that a virtual character consistent with the user's behavior is obtained.
However, during collection of the human body joint point positions, the acquisition range of the collection device is limited, so joint position information is easily lost. As a result, the virtual character becomes inconsistent with the user's behavior when simulating it, and the degree of personification is reduced.
Disclosure of Invention
In order to solve the above problems, the present disclosure provides a method and an apparatus for generating a virtual character, an electronic device, and a storage medium.
In a first aspect, an embodiment of the present disclosure provides a method for generating a virtual character, including:
acquiring information of character joint points in a current image frame;
processing the character joint point information by using a preset motion state recognition model to obtain the character posture of a character in the current image frame, and determining the motion state of each joint of the character in the current image frame;
and generating a virtual character according to the motion state of each joint of the person in the current image frame.
In a second aspect, an embodiment of the present disclosure provides an apparatus for generating a virtual character, including:
the acquisition module is used for acquiring the information of the character joint points in the current image frame;
the processing module is used for processing the character joint point information by using a preset motion state recognition model to obtain the character posture of a character in the current image frame and determining the motion state of each joint of the character in the current image frame;
and the generating module is used for generating the virtual character according to the motion state of each joint of the person in the current image frame.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: at least one processor and memory;
the memory stores computer-executable instructions;
the at least one processor executing the computer-executable instructions stored by the memory causes the at least one processor to perform the method for generating a virtual character as set forth in the first aspect and various possible designs of the first aspect.
In a fourth aspect, the present disclosure provides a computer-readable storage medium, where computer-executable instructions are stored, and when a processor executes the computer-executable instructions, the method for generating a virtual character according to the first aspect and various possible designs of the first aspect is implemented.
In the virtual character generation method, apparatus, electronic device, and storage medium provided by the embodiments, a motion state recognition model recognizes the motion state of each joint from the character joint point information, so that the virtual character is effectively simulated based on the joint motion states.
Drawings
To illustrate the embodiments of the present disclosure or prior-art technical solutions more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the following drawings show some embodiments of the present disclosure, and those skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of a network architecture upon which the present disclosure is based;
fig. 2 is a schematic flowchart of a virtual character generation method provided by an embodiment of the present disclosure;
fig. 3 is an interface schematic diagram of a virtual character generation method provided by an embodiment of the present disclosure;
fig. 4 is a schematic flowchart of another virtual character generation method provided by an embodiment of the present disclosure;
fig. 5 is a block diagram illustrating a structure of an apparatus for generating a virtual character according to an embodiment of the present disclosure;
fig. 6 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions of the embodiments are described clearly and completely below with reference to the drawings; obviously, the described embodiments are some, but not all, of the embodiments of the present disclosure. All other embodiments derived by a person skilled in the art from the disclosed embodiments without creative effort fall within the protection scope of the present disclosure.
With the development of science and technology, virtual characters have gradually entered people's entertainment lives, and with the development of motion migration technology it has become possible to display a virtual character synchronized with a user's motion on a display interface.
In the prior art, to simulate user actions synchronously, the positions of all joint points of the user (namely, the human subject to be simulated) must first be acquired, generally by using image acquisition and image joint recognition techniques. Then, the positions of the user's joint points are mapped to the positions of the joint points of the virtual character, so that the two sets of positions coincide, and a virtual character consistent with the user's behavior or action posture is obtained.
However, while the human body joint point positions are being collected the user keeps moving, and because the acquisition range of the collection device is limited, one or more of the user's joint points can easily fall outside that range. The acquired joint point information is then incomplete, so those joints cannot be simulated well when the virtual character imitates the user's behavior, which makes the character inconsistent with the user and reduces the degree of personification.
To solve these problems, the present disclosure replaces the existing one-to-one mapping between user joint point positions and virtual character joint point positions with a two-step scheme: first the motion state of each joint is recognized from the collected joint point information, and then each joint of the virtual character is simulated according to the motion change of the corresponding joint. In other words, because a motion state recognition model is used, the current motion state of each of the user's joints, such as its position and rotation state, can still be inferred even when joint point information is missing, and the virtual character can be generated and animated directly from the inferred motion states, which further improves the simulation effect.
Referring to fig. 1, fig. 1 is a schematic diagram of the network architecture on which the present disclosure is based; the architecture shown in fig. 1 may specifically include a virtual character generation apparatus 2 and a terminal 1.
The terminal 1 may be a hardware device capable of collecting and displaying images, such as a user's mobile phone, desktop computer, smart home device, or tablet computer. The virtual character generation apparatus 2 is hardware or software that can interact with the terminal 1 through a network; it executes the virtual character generation method described in the examples below, processing images captured by the terminal 1's acquisition device so that the generated virtual character is output on the terminal 1's display device. The acquisition device and/or the display device of the terminal 1 may be integrated in the same hardware, or distributed across multiple hardware units that exchange data over wired or wireless connections.
In the network architecture shown in fig. 1, when the virtual character generation apparatus 2 is hardware, it may include a cloud server with computing capability; when it is software, it can be installed in an electronic device with computing capability, including but not limited to laptop computers, desktop computers, and the terminal 1.
That is, the virtual character generation method of the present disclosure may be based on the architecture shown in fig. 1 and is applicable to various application scenarios, including but not limited to: rapid modeling of virtual characters; online social/interactive scenarios with multiple virtual characters; human-computer interaction scenarios based on virtual characters; and the like.
In rapid modeling, the method can quickly generate special-effect character models in the game or film field: action migration between the character to be modeled and the virtual character is achieved by the virtual character generation method, making the model convenient to build.
The online social/interactive scenario with multiple virtual characters generally applies to multi-user online interaction, for example game interaction among multiple online users in a game scene. In such a scenario the virtual character of each user is displayed on the display interface at the same time; when an online user changes or strikes a pose, the corresponding virtual character changes or strikes the corresponding pose, so that other users can react accordingly, realizing the interaction between users.
The human-computer interaction scenario based on a virtual character generally applies to interaction between a user and a smart device, for example controlling the smart device through human body gestures.
In a first aspect, referring to fig. 2, fig. 2 is a schematic flowchart of a virtual character generation method provided by an embodiment of the present disclosure. The method includes the following steps:
step 101, acquiring information of human joint points in a current image frame.
It should be noted that the execution subject of the generation method provided by this example is the aforementioned virtual character generation apparatus, which can interact with the terminal to obtain the images captured by the terminal. These images are preprocessed into image frame data available for processing; the preprocessing includes, but is not limited to, framing, denoising, and matrixing.
Specifically, the data the terminal inputs to the generation apparatus may be an image frame or a video stream; for a video stream, the generation apparatus must first split it into frames. A video stream typically contains 30 image frames per second, and each frame can be separated out with a video stream framing technique. To keep the virtual character synchronized with the user's behavior, the generation apparatus processes every frame.
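As an illustration only, the framing step described above might look like the following Python sketch; OpenCV and the function name split_video_stream are assumptions of this sketch, and the disclosure does not prescribe any particular library.

import cv2  # an assumed choice; the disclosure names no library

def split_video_stream(path: str):
    """Yield successive image frames from a video stream."""
    capture = cv2.VideoCapture(path)
    try:
        while True:
            ok, frame = capture.read()
            if not ok:  # end of stream
                break
            # Every frame is yielded so the virtual character stays
            # synchronized with the user's behavior.
            yield frame
    finally:
        capture.release()

Each yielded frame can then be preprocessed (denoising, matrixing) before joint point acquisition.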
Further, acquiring the character joint point information in the current image frame may specifically include: identifying the person in the current image frame to obtain the person's image position; and performing joint point identification on the image at that position to obtain the character joint point information, for example by applying a preset joint recognition model to obtain the joint position and joint type of each joint in the current image frame. Both the person's image position and the joint point information can be obtained with an existing pixel recognition model: by classifying the object type of each pixel in the image frame, the objects in the frame and their positions, and hence the character joint point information, are identified.
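The two-stage acquisition just described (person identification followed by joint point identification) can be sketched as follows; detect_person_box and estimate_joints are hypothetical stand-ins for the pixel recognition and joint recognition models, which the disclosure does not specify.

from dataclasses import dataclass

import numpy as np

@dataclass
class Joint:
    joint_type: str        # e.g. "left_elbow"; types come from the joint recognition model
    position: np.ndarray   # (x, y) coordinates in the full image frame

def acquire_joint_info(frame, detect_person_box, estimate_joints):
    """Locate the person, then identify joints within the cropped image.

    detect_person_box and estimate_joints are hypothetical callables standing
    in for the pixel recognition and joint recognition models.
    """
    x, y, w, h = detect_person_box(frame)        # image position of the person
    crop = frame[y:y + h, x:x + w]
    joints = estimate_joints(crop)               # [(joint_type, (cx, cy)), ...]
    # Map crop-local coordinates back to full-frame coordinates.
    return [Joint(t, np.array([cx + x, cy + y])) for t, (cx, cy) in joints]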
And 102, processing the information of the character joint points by using a preset motion state identification model to obtain the character posture of the character in the current image frame, and determining the motion state of each joint of the character in the current image frame.
Step 103, generating a virtual character according to the motion state of each joint of the person in the current image frame.
To better generate the virtual character, the embodiments of the disclosure process the character joint point information with a neural network model to obtain the character posture in the current image frame, and then determine the motion state, such as the position and rotation state, of each of the person's joints from that posture.
Specifically, the motion state recognition model includes a convolution processing layer that performs feature extraction on the human body joint point information to obtain a latent variable feature of the character posture; this latent feature is a high-dimensional feature vector that represents the posture. The model then performs restoration, i.e. dimension reduction, on the latent feature of the character posture to obtain the joint position and rotation state of each of the person's joints.
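One plausible reading of this extract-then-restore structure is sketched below in PyTorch; the layer sizes, the 17-joint count, and the quaternion encoding of the rotation state are illustrative assumptions rather than details given in the disclosure.

import torch
import torch.nn as nn

class MotionStateModel(nn.Module):
    """Sketch of the motion state recognition model: a convolutional encoder
    maps joint point input to a high-dimensional latent posture feature, and
    a decoder restores (dimension-reduces) it to per-joint positions and
    rotation states."""

    def __init__(self, num_joints: int = 17, latent_dim: int = 256):
        super().__init__()
        self.num_joints = num_joints
        self.encoder = nn.Sequential(            # convolution processing layer
            nn.Conv1d(2, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(128 * num_joints, latent_dim),
        )
        self.decoder = nn.Sequential(            # restoration / dimension reduction
            nn.Linear(latent_dim, 512),
            nn.ReLU(),
            nn.Linear(512, num_joints * 7),      # 3D position + 4D quaternion
        )

    def forward(self, joints_2d: torch.Tensor):
        # joints_2d: (batch, 2, num_joints) image coordinates; missing joint
        # points can simply be zeroed out.
        latent = self.encoder(joints_2d)         # latent posture feature
        out = self.decoder(latent).view(-1, self.num_joints, 7)
        positions, rotations = out[..., :3], out[..., 3:]
        rotations = rotations / rotations.norm(dim=-1, keepdim=True).clamp_min(1e-8)
        return latent, positions, rotations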
Finally, the joint positions and rotation states of the person's joints in the current image frame are transferred to the joints of the virtual character by a motion transfer technique, yielding a virtual character whose motion is consistent with the person in the current image frame.
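A minimal sketch of such a motion transfer step is shown below; it assumes the rotation states are world-space unit quaternions and that the virtual character's rig is stored as parent indices plus rest-pose bone offsets, all of which are representation choices of this sketch rather than of the disclosure.

import numpy as np

def quat_rotate(q: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Rotate vector v by the unit quaternion q = (w, x, y, z)."""
    w, xyz = q[0], q[1:]
    return v + 2.0 * np.cross(xyz, np.cross(xyz, v) + w * v)

def retarget(rotations, parents, offsets, root_position):
    """Apply the person's per-joint rotation states to the avatar's skeleton.

    parents[i] is the parent joint index (-1 for the root, parents listed
    before their children) and offsets[i] is the avatar's rest-pose bone
    vector; both are assumptions about how the avatar rig is stored.
    """
    n = len(parents)
    world_pos = [None] * n
    for i in range(n):
        if parents[i] < 0:
            world_pos[i] = np.asarray(root_position, dtype=float)
        else:
            # Place each joint by rotating its rest-pose bone with the
            # parent's (world-space) rotation state.
            world_pos[i] = world_pos[parents[i]] + quat_rotate(
                np.asarray(rotations[parents[i]], dtype=float),
                np.asarray(offsets[i], dtype=float))
    return world_pos

Because the avatar is driven by rotation states rather than by copying raw joint positions, its limb proportions need not match the person's.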
It should be noted that, in the embodiments of the present disclosure, because a motion state recognition model is used, character joint point information with missing joint points can still be processed to obtain the motion state of each joint and generate the virtual character. Compared with the prior-art scheme of directly mapping the acquired joint information to the virtual character's joints, this yields a higher degree of personification and synchronization.
In an alternative embodiment, a training process for the motion state recognition model may also be included. Specifically, the method comprises the following steps:
constructing a motion state recognition model to be trained, and collecting training samples; the training sample comprises a plurality of pieces of character joint point sample information, and joint positions and rotation states of joints corresponding to the pieces of character joint point sample information; and training the motion state recognition model to be trained by using the training sample to obtain a trained motion state recognition model.
To improve the noise resistance of the motion state recognition model and the robustness of the recognized joint motion states, the training samples also include noise sample information, together with the joint positions and rotation states of the partial joints corresponding to each piece of noise sample information. Specifically, noise sample information refers to sample information in which the joint point information is partially missing or partially abnormal; by learning from such samples, the trained motion state recognition model retains a degree of accuracy when processing noisy human joint point information, thereby meeting the personification requirement.
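A single training step with this kind of noise augmentation might look like the following sketch, which reuses the MotionStateModel sketched earlier; the joint drop probability and the unweighted loss sum are illustrative assumptions.

import torch
import torch.nn.functional as F

def training_step(model, optimizer, joints_2d, target_pos, target_rot,
                  drop_prob: float = 0.15):
    """One optimization step on a batch of joint point sample information.

    Randomly zeroing joints emulates the noise samples (partially missing
    or abnormal joint points) described above.
    """
    mask = (torch.rand(joints_2d.shape[0], 1, joints_2d.shape[2])
            > drop_prob).float()
    noisy = joints_2d * mask                     # simulate acquisition loss

    _, pred_pos, pred_rot = model(noisy)
    loss = (F.mse_loss(pred_pos, target_pos)     # joint position supervision
            + F.mse_loss(pred_rot, target_rot))  # rotation state supervision
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()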
The outline of the virtual character generated by the generation apparatus may or may not match the outline of the person in the image frame. Fig. 3 is an interface schematic diagram of the virtual character generation method provided by an embodiment of the present disclosure. As shown in fig. 3, the generation apparatus sends the generated virtual character to the terminal's display device, which shows it on the display interface: the captured image is of a woman, while the generated virtual character is male; the male character's actions are identical to the woman's, but his appearance is completely different. Of course, the appearance of the virtual character can also be set by the user and fused into the virtual character provided by the present disclosure with existing techniques before being output to the display device.
In the virtual character generation method provided by this embodiment, the motion state of each joint is recognized from the character joint point information by the motion state recognition model, so that the virtual character is effectively simulated based on the joint motion states.
On the basis of the foregoing embodiment, fig. 4 is a schematic flowchart of another virtual character generation method provided by an embodiment of the present disclosure. As shown in fig. 4, the method includes:
step 201, acquiring information of a person joint point in a current image frame;
step 202, performing feature extraction processing on the human body joint point information to obtain hidden variable features of the human posture;
step 203, restoring the latent variable characteristics of the human posture in the previous image frame and the latent variable characteristics of the human posture in the current image frame to obtain joint positions and rotation states of joints of the human in the previous image frame and joint positions and rotation states of the joints in the current image frame; the hidden variable characteristic of the character posture in the previous image frame is obtained by processing character joint point information in the previous image frame by using a preset motion state identification model;
and step 204, generating a virtual character according to the joint positions and the rotation states of the joints of the person in the previous image frame and the current image frame.
Similar to the foregoing embodiment, the execution subject of the generation method provided by this example is the aforementioned virtual character generation apparatus, which can interact with the terminal to obtain the images captured by the terminal. These images are preprocessed into image frame data available for processing; the preprocessing includes, but is not limited to, framing, denoising, and matrixing. For the specific implementation, reference may be made to the scheme corresponding to the foregoing step 101, which is not repeated here.
Unlike the foregoing embodiment, in the present embodiment, the human joint point information is processed by using a preset motion state recognition model to obtain the human pose of the human in the current image frame, and determine the motion state of each joint of the human in the current image frame, specifically, the following steps may be adopted:
firstly, carrying out feature extraction processing on human body joint point information in a current image frame to obtain the hidden variable feature of the human posture. The motion state recognition model comprises a convolution processing layer to carry out feature extraction processing on human body joint point information to obtain the hidden variable feature of the figure posture, wherein the hidden variable feature is a high-dimensional feature vector and can be used for representing the figure posture.
Then, the latent variable feature of the character posture in the previous image frame and that in the current image frame are restored to obtain the joint positions and rotation states of the person's joints in both frames. It should be noted that the latent variable feature for the previous image frame was obtained by processing the character joint point information of that frame with the preset motion state recognition model when that information was received. That is, the dimension reduction processing yields, for each joint, the joint positions and rotation states in the previous and current image frames.
And finally, generating the virtual character according to the joint positions and the rotation states of the joints of the person in the previous image frame and the current image frame.
Specifically, the motion trajectory of each joint may first be obtained from the joint positions and rotation states of the person's joints in the previous and current image frames. The motion trajectory is the joint's motion state ordered in time: analysing the joint positions across frames yields the joint's displacement trajectory, and analysing the rotation states across frames yields its rotation trajectory. The motion trajectory formed by the displacement and rotation trajectories effectively determines how each joint transforms between the previous image frame and the current one.
And then, according to the motion track of each joint, performing track simulation on each joint of the virtual character of the previous image frame on the basis of the motion state of each joint of the virtual character, and generating the virtual character of the current image frame.
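One simple way to realize the displacement and rotation trajectories between two frames is linear interpolation of joint positions plus spherical interpolation (slerp) of quaternion rotation states, as in the sketch below; the disclosure does not prescribe an interpolation scheme, so this is an assumption.

import numpy as np

def slerp(q0: np.ndarray, q1: np.ndarray, t: float) -> np.ndarray:
    """Spherical interpolation between unit quaternions q0 and q1."""
    dot = float(np.dot(q0, q1))
    if dot < 0.0:                    # take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:                 # nearly parallel: fall back to lerp
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(dot)
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

def joint_trajectory(pos_prev, rot_prev, pos_curr, rot_curr, steps: int = 4):
    """Displacement + rotation trajectory of one joint between two frames."""
    return [(pos_prev + t * (pos_curr - pos_prev), slerp(rot_prev, rot_curr, t))
            for t in np.linspace(0.0, 1.0, steps)]

Intermediate samples of this trajectory can drive the avatar's joints between frames, which is what makes the resulting motion smoother than frame-by-frame position copying.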
In addition, to further improve the realism of the generated virtual character's motion, the motion state recognition model also includes a judgment processing layer that assesses the reality of the motion state of each of the person's joints in the current image frame; the virtual character is then generated from the motion states that pass this judgment.
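Under the assumptions of the earlier sketches, the judgment processing layer could be a small binary classifier over the per-joint states, as below; its architecture is an illustrative assumption.

import torch
import torch.nn as nn

class RealityJudge(nn.Module):
    """Sketch of the judgment processing layer: scores how realistic a set
    of per-joint positions and rotation states is (closer to 1 = more
    plausible human motion)."""

    def __init__(self, num_joints: int = 17):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_joints * 7, 128),
            nn.ReLU(),
            nn.Linear(128, 1),
            nn.Sigmoid(),
        )

    def forward(self, positions: torch.Tensor, rotations: torch.Tensor):
        # positions: (batch, num_joints, 3); rotations: (batch, num_joints, 4)
        state = torch.cat([positions, rotations], dim=-1).flatten(1)
        return self.net(state)   # only states judged realistic drive the avatar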
In the virtual character generation method provided by this embodiment, building on the foregoing embodiment, the motion states of each joint in adjacent image frames are used to determine the joint's motion trajectory, so the virtual character can be generated from those trajectories; the resulting motion is smoother and the anthropomorphic effect more realistic.
Fig. 5 is a block diagram of a virtual character generating apparatus according to an embodiment of the present disclosure, which corresponds to the virtual character generating method according to the foregoing embodiment. For ease of illustration, only portions that are relevant to embodiments of the present disclosure are shown. Referring to fig. 5, the virtual character generation apparatus includes: the device comprises an acquisition module 10, a processing module 20 and a generation module 30.
The acquisition module 10 is configured to acquire information of a person joint point in a current image frame;
the processing module 20 is configured to process the information of the joints of the person by using a preset motion state recognition model to obtain a person posture of the person in the current image frame, and determine a motion state of each joint of the person in the current image frame;
and a generating module 30, configured to generate a virtual character according to the motion state of each joint of the person in the current image frame.
In an alternative embodiment, the processing module 20 is specifically configured to: perform feature extraction processing on the human body joint point information to obtain the hidden variable features of the character posture; and restore the hidden variable features of the character posture to obtain the joint positions and rotation states of the person's joints.
In an alternative embodiment, the processing module 20 is further configured to:
processing the character joint point information in the previous image frame by using a preset motion state identification model to obtain the hidden variable characteristics of the character posture in the previous image frame;
restoring the latent variable characteristics of the human posture in the previous image frame and the latent variable characteristics of the human posture in the current image frame to obtain joint positions and rotation states of joints of the human in the previous image frame and joint positions and rotation states of the joints in the current image frame;
correspondingly, the generating module 30 is specifically configured to: and generating a virtual character according to the joint positions and the rotation states of the joints of the person in the previous image frame and the current image frame.
In an alternative embodiment, the processing module 20 is specifically configured to: obtaining the motion trail of each joint according to the joint position and the rotation state of each joint of the person in the previous image frame and the current image frame; and according to the motion track of each joint, performing track simulation on each joint of the virtual character of the previous image frame on the basis of the motion state of each joint of the virtual character, and generating the virtual character of the current image frame.
In an alternative embodiment, the generating module 30 is further configured to: the reality of the motion state of each joint of the person in the current image frame is determined by using the motion state recognition model, and the virtual character is generated based on the motion state of each joint of the person in the current image frame passing through the determination.
In an alternative embodiment, the method further comprises: a training module;
the training module is configured to: constructing a motion state recognition model to be trained, and collecting training samples; the training sample comprises a plurality of pieces of character joint point sample information, and joint positions and rotation states of joints corresponding to the pieces of character joint point sample information; and training the motion state recognition model to be trained by using the training sample to obtain a trained motion state recognition model.
In an optional embodiment, the training samples further include noise sample information, and joint positions and rotation states of partial joints corresponding to each piece of noise sample information.
In an optional embodiment, the obtaining module 10 is specifically configured to perform person identification in the current image frame, and obtain an image position of a person in the current image frame; and carrying out joint point identification processing on the image corresponding to the image position to obtain character joint point information.
In an optional embodiment, the obtaining module 10 is specifically configured to perform recognition processing on an image corresponding to the image position according to a preset joint recognition model, so as to obtain a joint position and a joint type of each joint in the current image frame.
The virtual character generation apparatus provided by this embodiment recognizes the motion state of each joint from the character joint point information with the motion state recognition model, so the virtual character is effectively simulated based on the joint motion states. Compared with prior-art virtual characters driven directly by collected human joint point positions, recognizing motion states with the model gives better behavior simulation when part of the joint information is missing, and generating the virtual character from the joint motion states yields a higher degree of personification.
The electronic device provided in this embodiment may be used to implement the technical solutions of the above method embodiments, and the implementation principles and technical effects are similar, which are not described herein again.
Referring to fig. 6, a schematic structural diagram of an electronic device 900 suitable for implementing an embodiment of the present disclosure is shown; the electronic device 900 may be a terminal device or a server. The terminal device may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, personal digital assistants (PDAs), tablet computers (PADs), portable multimedia players (PMPs), and vehicle-mounted terminals (e.g., vehicle navigation terminals), and fixed terminals such as digital TVs and desktop computers. The electronic device shown in fig. 6 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in fig. 6, the electronic device 900 may include a virtual character generating device (e.g., a central processing unit, a graphic processor, etc.) 901, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 902 or a program loaded from a storage device 908 into a Random Access Memory (RAM) 903. In the RAM 903, various programs and data necessary for the operation of the electronic apparatus 900 are also stored. The virtual character generating apparatus 901, ROM902, and RAM 903 are connected to each other via a bus 904. An input/output (I/O) interface 905 is also connected to bus 904.
Generally, the following devices may be connected to the I/O interface 905: input devices 906 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 907 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 908 including, for example, magnetic tape, hard disk, etc.; and a communication device 909. The communication device 909 may allow the electronic apparatus 900 to perform wireless or wired communication with other apparatuses to exchange data. While fig. 6 illustrates an electronic device 900 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication device 909, or installed from the storage device 908, or installed from the ROM 902. The computer program performs the above-described functions defined in the method of the embodiment of the present disclosure when executed by the virtual character generation apparatus 901.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to perform the methods shown in the above embodiments.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages or combinations thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of a unit does not in some cases constitute a limitation of the unit itself, for example, the first retrieving unit may also be described as a "unit for retrieving at least two internet protocol addresses".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The following are some embodiments of the disclosure.
In a first aspect, according to one or more embodiments of the present disclosure, a method for generating a virtual character includes:
acquiring information of character joint points in a current image frame;
processing the character joint point information by using a preset motion state recognition model to obtain the character posture of a character in the current image frame, and determining the motion state of each joint of the character in the current image frame;
and generating a virtual character according to the motion state of each joint of the person in the current image frame.
In an optional embodiment provided by the present disclosure, the processing the information of the joint points of the person by using a preset motion state recognition model to obtain a pose of the person in the current image frame, and determining motion states of joints of the person includes:
carrying out feature extraction processing on the human body joint point information to obtain hidden variable features of the figure posture;
and restoring the hidden variable characteristics of the figure posture to obtain the joint positions and the rotation states of all joints of the figure.
In an optional embodiment provided by the present disclosure, further comprising: processing the character joint point information in the previous image frame by using a preset motion state identification model to obtain the hidden variable characteristics of the character posture in the previous image frame;
correspondingly, the restoring the hidden variable characteristics of the figure posture to obtain the joint positions and the rotation states of the joints of the figure comprises the following steps:
restoring the latent variable characteristics of the human posture in the previous image frame and the latent variable characteristics of the human posture in the current image frame to obtain joint positions and rotation states of joints of the human in the previous image frame and joint positions and rotation states of the joints in the current image frame;
correspondingly, the generating of the virtual character according to the motion state of each joint of the person in the current image frame comprises:
and generating a virtual character according to the joint positions and the rotation states of the joints of the person in the previous image frame and the current image frame.
In an optional embodiment provided by the present disclosure, the generating a virtual character according to joint positions and rotation states of joints of a person in a previous image frame and a current image frame includes:
obtaining the motion trail of each joint according to the joint position and the rotation state of each joint of the person in the previous image frame and the current image frame;
and according to the motion track of each joint, performing track simulation on each joint of the virtual character of the previous image frame on the basis of the motion state of each joint of the virtual character, and generating the virtual character of the current image frame.
In an optional embodiment provided by the present disclosure, the generating a virtual character according to the motion states of the joints of the person in the current image frame further includes:
the reality of the motion state of each joint of the person in the current image frame is determined by using the motion state recognition model, and the virtual character is generated based on the motion state of each joint of the person in the current image frame passing through the determination.
In an optional embodiment provided by the present disclosure, further comprising:
constructing a motion state recognition model to be trained, and collecting training samples; the training sample comprises a plurality of pieces of character joint point sample information, and joint positions and rotation states of joints corresponding to the pieces of character joint point sample information;
and training the motion state recognition model to be trained by using the training sample to obtain a trained motion state recognition model.
In an optional embodiment provided by the present disclosure, the training samples further include noise sample information, and joint positions and rotation states of partial joints corresponding to each piece of noise sample information.
In an optional embodiment provided by the present disclosure, the acquiring information of the human joint point in the current image frame includes:
identifying a person in the current image frame to obtain the image position of the person in the current image frame;
and carrying out joint point identification processing on the image corresponding to the image position to obtain character joint point information.
In an optional embodiment provided by the present disclosure, the performing joint identification processing on the image corresponding to the image position to obtain the person joint information includes:
and according to a preset joint identification model, carrying out identification processing on the image corresponding to the image position to obtain the joint position and the joint type of each joint in the current image frame.
In a second aspect, according to one or more embodiments of the present disclosure, an apparatus for generating a virtual character includes:
the acquisition module is used for acquiring the information of the character joint points in the current image frame;
the processing module is used for processing the character joint point information by using a preset motion state recognition model to obtain the character posture of a character in the current image frame and determining the motion state of each joint of the character in the current image frame;
and the generating module is used for generating the virtual character according to the motion state of each joint of the person in the current image frame.
In an optional embodiment provided by the present disclosure, the processing module is specifically configured to: carry out feature extraction processing on the human body joint point information to obtain hidden variable features of the figure posture; and restore the hidden variable characteristics of the figure posture to obtain the joint positions and the rotation states of all joints of the figure.
In an optional embodiment provided by the present disclosure, the processing module is further configured to:
processing the character joint point information in the previous image frame by using a preset motion state identification model to obtain the hidden variable characteristics of the character posture in the previous image frame;
restoring the latent variable characteristics of the human posture in the previous image frame and the latent variable characteristics of the human posture in the current image frame to obtain joint positions and rotation states of joints of the human in the previous image frame and joint positions and rotation states of the joints in the current image frame;
correspondingly, the generating module is specifically configured to: and generating a virtual character according to the joint positions and the rotation states of the joints of the person in the previous image frame and the current image frame.
In an optional embodiment provided by the present disclosure, the processing module is specifically configured to: obtaining the motion trail of each joint according to the joint position and the rotation state of each joint of the person in the previous image frame and the current image frame; and according to the motion track of each joint, performing track simulation on each joint of the virtual character of the previous image frame on the basis of the motion state of each joint of the virtual character, and generating the virtual character of the current image frame.
In an optional embodiment provided by the present disclosure, the generating module is further configured to: the reality of the motion state of each joint of the person in the current image frame is determined by using the motion state recognition model, and the virtual character is generated based on the motion state of each joint of the person in the current image frame passing through the determination.
In an optional embodiment provided by the present disclosure, further comprising: a training module;
the training module is configured to: constructing a motion state recognition model to be trained, and collecting training samples; the training sample comprises a plurality of pieces of character joint point sample information, and joint positions and rotation states of joints corresponding to the pieces of character joint point sample information; and training the motion state recognition model to be trained by using the training sample to obtain a trained motion state recognition model.
In an optional embodiment provided by the present disclosure, the training samples further include noise sample information, and joint positions and rotation states of partial joints corresponding to each piece of noise sample information.
In an optional embodiment provided by the present disclosure, the obtaining module is specifically configured to perform person identification in a current image frame, and obtain an image position of a person in the current image frame; and carrying out joint point identification processing on the image corresponding to the image position to obtain character joint point information.
In an optional embodiment provided by the present disclosure, the obtaining module is specifically configured to perform recognition processing on an image corresponding to the image position according to a preset joint recognition model, so as to obtain a joint position and a joint type of each joint in a current image frame.
In a third aspect, in accordance with one or more embodiments of the present disclosure, an electronic device comprises: at least one processor and memory;
the memory stores computer-executable instructions;
the at least one processor executing the computer-executable instructions stored by the memory causes the at least one processor to perform the method for virtual character generation as in any one of the preceding claims.
In a fourth aspect, according to one or more embodiments of the present disclosure, a computer-readable storage medium has stored therein computer-executable instructions, which when executed by a processor, implement the virtual character generation method according to any one of the preceding claims.
The foregoing description covers only preferred embodiments of the present disclosure and illustrates the technical principles employed. Those skilled in the art will appreciate that the scope of the disclosure is not limited to technical solutions formed by the particular combination of features described above, but also covers other technical solutions formed by any combination of those features or their equivalents without departing from the concept of the disclosure, for example, solutions in which the above features are replaced with (but not limited to) features with similar functions disclosed herein.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (16)

1. A method for generating a virtual character, comprising:
acquiring character joint point information in a current image frame;
processing the character joint point information by using a preset motion state recognition model to obtain a character posture of a character in the current image frame and determine a motion state of each joint of the character in the current image frame;
generating a virtual character according to the motion state of each joint of the character in the current image frame; wherein the motion state of each joint comprises a joint position and a rotation state;
wherein the processing the character joint point information by using the preset motion state recognition model comprises:
performing feature extraction processing on the character joint point information to obtain latent variable features of the character posture; and restoring the latent variable features of the character posture to obtain the joint positions and rotation states of the joints of the character;
wherein the method further comprises: processing the character joint point information in a previous image frame by using the preset motion state recognition model to obtain latent variable features of the character posture in the previous image frame;
correspondingly, the restoring the latent variable features of the character posture to obtain the joint positions and rotation states of the joints of the character comprises:
restoring the latent variable features of the character posture in the previous image frame and the latent variable features of the character posture in the current image frame to obtain the joint positions and rotation states of the joints of the character in the previous image frame and the joint positions and rotation states of the joints in the current image frame;
correspondingly, the generating a virtual character according to the motion state of each joint of the character in the current image frame comprises:
generating the virtual character according to the joint positions and rotation states of the joints of the character in the previous image frame and the current image frame.
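As a purely illustrative reading of the two-frame restoration step in claim 1 (the decoder shape, latent size, and joint count below are assumptions, not the patent's disclosure), the latent variable features of the previous and current frames might be decoded jointly, as in this PyTorch sketch:

    import torch
    import torch.nn as nn

    NUM_JOINTS, LATENT_DIM = 17, 64  # hypothetical sizes

    # assumed decoder over the concatenated two-frame latents;
    # output: 2 frames x NUM_JOINTS joints x (xyz position + quaternion rotation)
    decoder = nn.Linear(2 * LATENT_DIM, 2 * NUM_JOINTS * 7)

    def restore_two_frames(z_prev: torch.Tensor, z_curr: torch.Tensor):
        z = torch.cat([z_prev, z_curr], dim=-1)  # couple the two frames
        out = decoder(z).view(-1, 2, NUM_JOINTS, 7)
        positions, rotations = out[..., :3], out[..., 3:]
        # (prev_pos, prev_rot), (curr_pos, curr_rot)
        return (positions[:, 0], rotations[:, 0]), (positions[:, 1], rotations[:, 1])

Decoding both frames from one concatenated latent lets the network keep the two poses temporally consistent, which is one plausible motivation for restoring them together rather than frame by frame.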
2. The method for generating a virtual character according to claim 1, wherein the generating the virtual character according to the joint positions and rotation states of the joints of the character in the previous image frame and the current image frame comprises:
obtaining a motion track of each joint according to the joint positions and rotation states of the joints of the character in the previous image frame and the current image frame;
performing track simulation on each joint of the virtual character according to the motion track of each joint, on the basis of the motion state of each joint of the virtual character in the previous image frame, to generate the virtual character of the current image frame.
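Illustration only: the claim does not prescribe how a joint's motion track is computed or simulated. The sketch below assumes one conventional choice, linear interpolation for joint positions and spherical linear interpolation (slerp) for rotation states, using SciPy's Rotation and Slerp:

    import numpy as np
    from scipy.spatial.transform import Rotation, Slerp

    def joint_track(pos_prev, pos_curr, quat_prev, quat_curr, steps: int = 4):
        # interpolate one joint between two frames:
        # linear for position, slerp for the quaternion rotation state
        ts = np.linspace(0.0, 1.0, steps)
        positions = [(1.0 - t) * np.asarray(pos_prev) + t * np.asarray(pos_curr)
                     for t in ts]
        slerp = Slerp([0.0, 1.0], Rotation.from_quat([quat_prev, quat_curr]))
        rotations = slerp(ts).as_quat()
        return positions, rotations

Driving the avatar through several interpolated steps per frame pair is what lets the virtual character move smoothly even when poses are only recovered once per image frame.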
3. The method for generating a virtual character according to claim 1, wherein the generating a virtual character according to the motion state of each joint of the character in the current image frame further comprises:
checking the authenticity of the motion state of each joint of the character in the current image frame by using the motion state recognition model, and generating the virtual character according to the motion states of the joints that pass the check.
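The claim performs this authenticity check with the recognition model itself; the snippet below is only a crude heuristic stand-in, not the claimed mechanism, gating generation on per-joint displacement with an arbitrary assumed threshold:

    import numpy as np

    MAX_JOINT_DISPLACEMENT = 0.5  # arbitrary per-frame threshold, not from the disclosure

    def motion_state_plausible(pos_prev: np.ndarray, pos_curr: np.ndarray) -> bool:
        # reject a pose whose joints would have to jump implausibly far
        # relative to the previous image frame
        displacement = np.linalg.norm(pos_curr - pos_prev, axis=-1)
        return bool(np.all(displacement < MAX_JOINT_DISPLACEMENT))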
4. The method for generating a virtual character according to claim 1, further comprising:
constructing a motion state recognition model to be trained, and collecting training samples, wherein the training samples comprise a plurality of pieces of character joint point sample information and the joint positions and rotation states of the joints corresponding to each piece of character joint point sample information;
training the motion state recognition model to be trained by using the training samples to obtain a trained motion state recognition model.
5. The method for generating a virtual character according to claim 4, wherein the training samples further include noise sample information, together with the joint positions and rotation states of only some of the joints for each piece of noise sample information.
6. The method for generating a virtual character according to any one of claims 1-5, wherein the acquiring character joint point information in a current image frame comprises:
identifying a person in the current image frame to obtain an image position of the person in the current image frame;
performing joint point identification processing on the image corresponding to the image position to obtain the character joint point information.
7. The method according to claim 6, wherein the performing joint point identification processing on the image corresponding to the image position to obtain the character joint point information comprises:
performing identification processing on the image corresponding to the image position according to a preset joint recognition model to obtain the joint position and joint type of each joint in the current image frame.
8. An apparatus for generating a virtual character, comprising:
an acquisition module, configured to acquire character joint point information in a current image frame;
a processing module, configured to process the character joint point information by using a preset motion state recognition model to obtain a character posture of a character in the current image frame and determine a motion state of each joint of the character in the current image frame; and
a generating module, configured to generate a virtual character according to the motion state of each joint of the character in the current image frame, wherein the motion state of each joint comprises a joint position and a rotation state;
wherein the processing module is specifically configured to: perform feature extraction processing on the character joint point information to obtain latent variable features of the character posture; and restore the latent variable features of the character posture to obtain the joint positions and rotation states of the joints of the character;
the processing module is further configured to: process the character joint point information in a previous image frame by using the preset motion state recognition model to obtain latent variable features of the character posture in the previous image frame; and restore the latent variable features of the character posture in the previous image frame and the latent variable features of the character posture in the current image frame to obtain the joint positions and rotation states of the joints of the character in the previous image frame and the joint positions and rotation states of the joints in the current image frame; correspondingly, the generating module is specifically configured to generate the virtual character according to the joint positions and rotation states of the joints of the character in the previous image frame and the current image frame.
9. The apparatus for generating a virtual character according to claim 8, wherein the processing module is specifically configured to: obtain a motion track of each joint according to the joint positions and rotation states of the joints of the character in the previous image frame and the current image frame; and perform track simulation on each joint of the virtual character according to the motion track of each joint, on the basis of the motion state of each joint of the virtual character in the previous image frame, to generate the virtual character of the current image frame.
10. The apparatus for generating a virtual character according to claim 8, wherein the generating module is further configured to: check the authenticity of the motion state of each joint of the character in the current image frame by using the motion state recognition model, and generate the virtual character according to the motion states of the joints that pass the check.
11. The apparatus for generating a virtual character according to claim 8, further comprising a training module;
wherein the training module is configured to: construct a motion state recognition model to be trained and collect training samples, the training samples comprising a plurality of pieces of character joint point sample information and the joint positions and rotation states of the joints corresponding to each piece of character joint point sample information; and train the motion state recognition model to be trained by using the training samples to obtain a trained motion state recognition model.
12. The apparatus for generating a virtual character according to claim 11, wherein the training samples further include noise sample information, together with the joint positions and rotation states of only some of the joints for each piece of noise sample information.
13. The apparatus for generating a virtual character according to any one of claims 8-12, wherein the acquisition module is specifically configured to: identify a person in the current image frame to obtain an image position of the person in the current image frame; and perform joint point identification processing on the image corresponding to the image position to obtain the character joint point information.
14. The apparatus for generating a virtual character according to claim 13, wherein the acquisition module is specifically configured to perform identification processing on the image corresponding to the image position according to a preset joint recognition model, so as to obtain the joint position and joint type of each joint in the current image frame.
15. An electronic device, comprising: at least one processor and memory;
the memory stores computer-executable instructions;
the at least one processor executes the computer-executable instructions stored in the memory, causing the at least one processor to perform the method for generating a virtual character according to any one of claims 1-7.
16. A computer-readable storage medium having stored therein computer-executable instructions that, when executed by a processor, implement the method for generating a virtual character according to any one of claims 1 to 7.
CN202010494281.6A 2020-06-03 2020-06-03 Virtual character generation method and device, electronic equipment and storage medium Active CN111638791B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010494281.6A CN111638791B (en) 2020-06-03 2020-06-03 Virtual character generation method and device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN111638791A CN111638791A (en) 2020-09-08
CN111638791B true CN111638791B (en) 2021-11-09

Family

ID=72329714

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010494281.6A Active CN111638791B (en) 2020-06-03 2020-06-03 Virtual character generation method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111638791B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112256128A (en) * 2020-10-22 2021-01-22 武汉科领软件科技有限公司 Interactive effect development platform
CN114699770A (en) * 2022-04-19 2022-07-05 北京字跳网络技术有限公司 Method and device for controlling motion of virtual object
CN114967937B (en) * 2022-08-03 2022-09-30 环球数科集团有限公司 Virtual human motion generation method and system
CN116681809A (en) * 2023-06-28 2023-09-01 北京百度网讯科技有限公司 Method and device for driving virtual image, electronic equipment and medium


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11474593B2 (en) * 2018-05-07 2022-10-18 Finch Technologies Ltd. Tracking user movements to control a skeleton model in a computer system

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107423721A (en) * 2017-08-08 2017-12-01 珠海习悦信息技术有限公司 Interactive action detection method, device, storage medium and processor
US10535174B1 (en) * 2017-09-14 2020-01-14 Electronic Arts Inc. Particle-based inverse kinematic rendering system
CN108227931A (en) * 2018-01-23 2018-06-29 北京市商汤科技开发有限公司 For controlling the method for virtual portrait, equipment, system, program and storage medium
CN108345869A (en) * 2018-03-09 2018-07-31 南京理工大学 Driver's gesture recognition method based on depth image and virtual data
CN109145788A (en) * 2018-08-08 2019-01-04 北京云舶在线科技有限公司 Attitude data method for catching and system based on video
CN109871800A (en) * 2019-02-13 2019-06-11 北京健康有益科技有限公司 A kind of estimation method of human posture, device and storage medium
CN110139115A (en) * 2019-04-30 2019-08-16 广州虎牙信息科技有限公司 Virtual image attitude control method, device and electronic equipment based on key point
CN111208783A (en) * 2019-12-30 2020-05-29 深圳市优必选科技股份有限公司 Action simulation method, device, terminal and computer storage medium

Also Published As

Publication number Publication date
CN111638791A (en) 2020-09-08

Similar Documents

Publication Publication Date Title
CN111638791B (en) Virtual character generation method and device, electronic equipment and storage medium
CN111556278B (en) Video processing method, video display device and storage medium
US20230386137A1 (en) Elastic object rendering method and apparatus, device, and storage medium
CN109754464B (en) Method and apparatus for generating information
CN111368668B (en) Three-dimensional hand recognition method and device, electronic equipment and storage medium
CN110473293A (en) Virtual objects processing method and processing device, storage medium and electronic equipment
WO2020253716A1 (en) Image generation method and device
WO2023030381A1 (en) Three-dimensional human head reconstruction method and apparatus, and device and medium
WO2023125365A1 (en) Image processing method and apparatus, electronic device, and storage medium
CN111246196B (en) Video processing method and device, electronic equipment and computer readable storage medium
JP2023039426A (en) Computer implementation method, information processing system, computer program (spatio-temporal relation based mr content arrangement)
CN110287816B (en) Vehicle door motion detection method, device and computer readable storage medium
CN111652675A (en) Display method and device and electronic equipment
CN113163135B (en) Animation adding method, device, equipment and medium for video
CN113610034B (en) Method and device for identifying character entities in video, storage medium and electronic equipment
CN115775310A (en) Data processing method and device, electronic equipment and storage medium
CN111447379B (en) Method and device for generating information
CN110197459B (en) Image stylization generation method and device and electronic equipment
CN113111684B (en) Training method and device for neural network model and image processing system
CN109410121B (en) Human image beard generation method and device
CN111627106B (en) Face model reconstruction method, device, medium and equipment
CN114299615A (en) Key point-based multi-feature fusion action identification method, device, medium and equipment
WO2023005725A1 (en) Pose estimation method and apparatus, and device and medium
US11836437B2 (en) Character display method and apparatus, electronic device, and storage medium
CN113573153B (en) Image processing method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20201229

Address after: 100190 1309, 13th floor, building 4, Zijin Digital Park, Haidian District, Beijing

Applicant after: Beijing volcano Engine Technology Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant before: BEIJING BYTEDANCE NETWORK TECHNOLOGY Co.,Ltd.

GR01 Patent grant